dotster
Aug 28, 2013

Here is a very basic slide deck that covers ZFS and some of the adding and expanding options, at least for FreeNAS.

http://forums.freenas.org/threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Here is a link to the page it is from if you want to read more:

http://doc.freenas.org/index.php/Hardware_Recommendations#ZFS_Overview

dotster
Aug 28, 2013

modeski posted:

Does anyone have a NAS with 8 drives? I'm speccing out my new server and need a PSU that will support up to eight HDs (I'm starting off with 5). I'd like something efficient with enough connections for eight drives, but have no clue what else I need to be considering.

I have a server with 8 bays; it has a SAS/SATA backplane that takes two 4-pin Molex connectors. Drives don't pull that much power, roughly 5-10 W per drive once they're spinning. I would also recommend RAID 6 over RAID 5 (assuming you meant 5 disks for RAID 5).
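
If you want to rough out the PSU math, a quick sketch like the one below is close enough. The per-drive wattages are just typical 3.5" figures I'm assuming, not from any particular datasheet, and the spin-up draw on the 12V rail is the number that actually matters if the drives all start at once:

```python
# Rough PSU budget for a pile of 3.5" drives. The per-drive wattages are
# assumed "typical" figures for illustration -- check your drives' datasheets.

def drive_power_budget(num_drives, active_w=9, spinup_w=25):
    """Return (steady-state watts, worst-case spin-up watts) for the drives.
    Spin-up only peaks this high if every drive starts at the same time."""
    return num_drives * active_w, num_drives * spinup_w

active, spinup = drive_power_budget(8)
print(f"8 drives active:  ~{active} W")   # ~72 W while seeking
print(f"8 drives spin-up: ~{spinup} W")   # ~200 W peak, mostly on the 12V rail
```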

dotster
Aug 28, 2013

Only reason I can think of

tarepanda posted:

Two questions:

1. What's the easiest way to transition from RAID-Z to regular RAID? Is it necessarily going to be a two-step process where I copy everything off the RAID-Z array?

2. Is there a way to share a BR drive on my NAS (FreeNAS) so that I can watch discs on my HTPCs?

#1, what Moey said.

#2, you are better off ripping it and then playing the file from the NAS; a Blu-ray will peak at around 40 Mbps, and streaming that to an HTPC without glitching is hard. I have not tried it through FreeNAS, but I am guessing you would run into HDCP issues anyway.

dotster
Aug 28, 2013

Moey posted:

1. You will have to copy it to a temp location, then rebuild the array.

No advice on 2.

Why do you want off ZFS?

Unless the system is low on memory and he is switching to UFS, there is no reason to switch off of ZFS.

dotster
Aug 28, 2013

SopWATh posted:

I've got an Atom D525 box, a SuperMicro X7SPA-HF-D525 with the maximum 4GB of RAM, and 3x 2TB drives in software RAID5.

I originally ran WHS on it, but I had some driver issues and the backup client conflicted with some other stuff on my laptop. I dumped WHS and installed Windows Server 2008 r2 to a 4th 500GB drive. It runs slowly, which is fine for now, but I'm getting really tired of the server crashing. (Which interferes with backups and obviously if it keeps running consistency checks on the array it's down for days at a time)

I need something reliable, always on, easy to manage, and the ability to assign access rights depending on user. These seem like pretty basic requirements and I was thinking about getting a SATA Disk On Module and installing FreeNAS, but all the features listed on the FreeNAS page make it look like zfs is a requirement and with only 4GB of ram that looks like it will be a problem.

I've basically got 3 questions:
1) If I've got 3 users, will the recommended 8GB minimum amount of ram be a non-factor for FreeNAS (despite not being able to cache disk operations and such)

2) If having only 4GB of ram will be an issue, do FreeNAS features work with ufs? I'm not too worried about snapshots, but I need the access rights to work as my wife's work stuff can't be shared even with me.

3) Should I skip FreeNAS and use something else or get a Synology/Drobo/blah device and just use the Atom board as a pfsense firewall?

You could try FreeNAS; plenty of people run it on less than 8GB, but several report performance and stability issues on low-memory systems. I have never run it with less than 16GB, so I don't have personal experience with that kind of setup.

If you do run it you won't get much caching; make sure you do not enable L2ARC, and follow the other tuning for low-memory systems in the FreeNAS docs. If your onboard video uses system RAM, you might want to disable that and use a cheap video card just for setup. I would not expect great performance.

You could also look at NAS4Free, since it lists lower system requirements for a ZFS NAS than FreeNAS does. I have not used it, but it reviews pretty well.

The last comment I have on your setup is that I would recommend a fourth 2TB drive and RAIDZ2 instead of RAIDZ. Rebuild time on larger drives leaves plenty of time for a second disk failure (I have experienced this personally), and your chances of a successful rebuild go up greatly with RAIDZ2. If you need better write performance than RAIDZ2, look at striped mirrored vdevs (RAID 10, basically). I would also keep a spare drive on the shelf for when you have a disk failure.
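
To put some rough numbers on why single parity gets scary as drives get bigger, here is a back-of-the-envelope sketch. The URE spec, drive sizes, and counts in it are assumptions for illustration, not anything I measured:

```python
# Back-of-the-envelope odds of hitting an unrecoverable read error (URE)
# during a rebuild, when every surviving drive must be read end-to-end.
# The 1-per-1e14-bits URE spec and the drive sizes are assumptions.
import math

def rebuild_read_error_odds(surviving_drives, drive_tb, ure_per_bits=1e14):
    """Poisson-style estimate of the chance of at least one URE while
    reading all surviving drives in full during a resilver."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - math.exp(-bits_read / ure_per_bits)

# Single parity: a 4x 2TB RAIDZ loses a disk, so the 3 survivors must be
# read in full with no redundancy left to repair any bad sector they hit.
print(f"{rebuild_read_error_odds(3, 2.0):.0%}")   # roughly 38%
```

With RAIDZ2 a bad read during the rebuild can still be repaired from the second parity, which is the whole point.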

dotster
Aug 28, 2013

modeski posted:

It's only anecdotal, but I've had my WHSv1 server for three years now, with 2Tb WD Caviar Green drives, and had to return/replace three of them over that time. I'll be going with Reds for my next server.

So, just to make it more difficult, I have two QNAPs with six WD Greens and have not had a problem with any of them. They don't get used much, but they have been running for over a year.

dotster
Aug 28, 2013

Combat Pretzel posted:

RE drives are essentially Black editions with different firmware and ostensibly extended burn-in testing. They don't suddenly attain SSD-like speeds out of the blue.

RE hard drives are disks with TLER for hardware RAID. They may use better components to get the longer MTBF that is listed, or they may just cover it with the warranty and charge the price premium as insurance. RE4s and Blacks perform at the same level. It could be that, like a lot of employees at a lot of companies, he believed the marketing.

dotster
Aug 28, 2013

Agrikk posted:

For more anecdotal evidence, I accidentally purchased sixteen 1TB WD green drives for a project and we ended up keeping them for something else. Of those 16 drives (in RAID-0 and RAID-10 configurations), five have failed in the last two years. Their five warranty replacements have not failed.

So, out of 21 WD Greens in non-Parity hardware- and software-RAID configurations, five have failed.

Take note: I did have to do some tweaking of the hardware RAID adapters to keep the drives from powering down while a member of a RAID set. The drives would power down on their own and fall out of the set and break the RAID volume until I realized what was happening and how to fix it.


Make of this data what you will.

I had 12 WD RE4s in a RAID 60 on one of our servers; there were no SMART warnings from the RAID controller, everything was fine, and the system ran for almost two years. We powered the server off for maintenance, and of the 12 drives only 4 came back online. Since then I think we have replaced two more that failed a bit more gracefully.

Also, I have a friend who ran IT storage for his last company, with a consumption rate of about a petabyte per quarter. Their average yearly failure rate on enterprise-class disks was 12%.

dotster
Aug 28, 2013

AlternateAccount posted:

This sounds absurdly high.

Those were the numbers he gave me. It was a large HPC environment for the most part, with a bunch of engineers accessing large data sets as well.

I wasn't too surprised, really. We have pretty regular disk failures on our EMC and NetApp arrays.

dotster
Aug 28, 2013

Combat Pretzel posted:

Thanks for making me crap my pants. I've opted for the RE4 in my system after good experiences with the RE2 (almost 40000 power-on hours on each of them), after some WD Greens and some Seagate Barracudas taking a poo poo in rapid succession.

Well, like everyone says over and over, RAID is not backup. I would just make sure the drives are not running too hot and that you have a spare on hand.

dotster
Aug 28, 2013

DNova posted:

Google's huge dataset showed that cooler drives failed at significantly higher rates :v:

It does show higher failure rates in lower-temperature drives, especially for young disks (3-6 months old).

The report showed, "...a mostly flat failure rate at mid-range temperatures and a modest increase at the low end of the temperature distribution."

It went on, though, to say, "What stands out are the 3 and 4 year old drives, where the trend for higher failures with higher temperature is much more constant and also more pronounced."

They go on to say, "Overall our experiments can confirm previously reported temperature effects only for the high end of our temperature range and especially for older drives."

And the important final notes on temperature say, "We can conclude that at moderate temperature ranges it is likely that there are other effects which affect failure rates much more strongly than temperatures do."

I think what is being seen in the very young disks are failures that are part of the infant-mortality pool and not necessarily related solely to temperature. There is a definite spike in failures at the other end of the temperature range for drives 3+ years old, where temperature (especially over 40C) probably plays a more significant part, but it is hard to tell, since the general AFR spikes again at that age and they do not show a drive utilization vs. temperature graph that would let us draw better conclusions.

It does show failure numbers in line with the anecdotal info I have from friends who run large enterprise storage deployments: for high-utilization disks the failure rate is ~15% in the first year.

For fellow insomniacs, Failure Trends in a Large Disk Drive Population.

dotster
Aug 28, 2013

D. Ebdrup posted:

This is bugging me a bit, so just to clarify: those numbers aren't indicators of the amount of parity disks - rather, it's an indicator of how many disks you can lose and still be able to rebuild/resilver because of parity stored on each drive in the array/pool.

EDIT: here is a look into how zpools do parity across disks from my bookmarks.

The same is true for parity with RAID 5/6 as well; nothing uses a dedicated disk for parity (unless you are somehow doing RAID 4). RAID 5 is roughly equivalent to RAIDZ in that both can tolerate the loss of a single disk, and RAID 6 and RAIDZ2 can both tolerate two disk failures in a pool. In all of those cases the parity information is striped across all the disks in the pool. Many people say RAID 5/RAIDZ has one parity disk and RAID 6/RAIDZ2 has two parity disks because that is the amount of space you lose to parity; they were taught it wrong, or at least taught a simplified version of what is really going on.
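
If it helps to picture it, here is a toy sketch of single parity rotating across the disks, RAID 5 style. It is not how ZFS actually lays out RAIDZ stripes on disk (those are variable width); it just shows that parity ends up on every disk rather than on one dedicated drive:

```python
# Toy illustration of rotating single parity across disks, RAID 5 style.
# Not how ZFS lays data out on disk; it just shows that parity lives on
# every disk instead of one dedicated "parity drive".
from functools import reduce

def write_stripes(stripes, num_disks):
    """Place each stripe's data chunks plus one XOR parity chunk across
    num_disks, rotating which disk holds parity from stripe to stripe."""
    disks = [[] for _ in range(num_disks)]
    for stripe_no, data_chunks in enumerate(stripes):
        assert len(data_chunks) == num_disks - 1
        parity = reduce(lambda a, b: a ^ b, data_chunks)
        parity_disk = stripe_no % num_disks          # rotate the parity slot
        chunks = iter(data_chunks)
        for disk_no in range(num_disks):
            if disk_no == parity_disk:
                disks[disk_no].append(("P", parity))
            else:
                disks[disk_no].append(("D", next(chunks)))
    return disks

# Three stripes of three data chunks each on a 4-disk array: the parity
# chunk lands on a different disk for every stripe.
for i, disk in enumerate(write_stripes([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 4)):
    print(f"disk{i}: {disk}")
```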

dotster
Aug 28, 2013

Jesse Iceberg posted:

Regarding ZFS chat, I manage three 32TB storage zpools for my work, holding some scientific databases. The pools are almost constantly being written to with new data, but only rarely read from.

Each 32TB pool is comprised of 4 RAIDZ2 vdevs of 11 1TB disks. Our problem has become that scrubbing the three pools takes in excess of 30-35 hours (each). Scrubbing a pool while it's active results in slowing down IO such that the pool falls behind on its allocated workload.

I wasn't involved in the initial construction of the pools, but maybe I can pick your brains about this; Would I be right in thinking that, where regular scrubbing of a 96TB total storage capacity is concerned, many more smaller pools could have been a better approach to the three monolithic ones we have?

Each pool corresponds to its physical housing (one pool on a Sun X4540, the other two on a Sun J4500 each), but there wasn't any actual need to do that, as far as I can see. With smaller pools the scrubs would in general finish faster, and writes that were destined for a pool that's taken out of rotation to be scrubbed could instead be sent to one of the others until it rejoins the pool of pools.

Am I crazy?

Have you done anything to speed up scrubbing? I'm not sure if it would help, but if you haven't, you could give more resources to the scrub and maybe get it over with in a few hours instead of taking the hit for 1-2 days. Look at this: http://broken.net/uncategorized/zfs-performance-tuning-for-scrubs-and-resilvers/. If the systems are busy 24x7 then this might not help, but if there are a few hours a day when they are not used then it might work.

dotster
Aug 28, 2013

NFX posted:

It seems like the HP ProLiant N54L (previous generation) is pretty cheap, and I've been thinking about getting a NAS for a while.

So here's what I'm thinking about buying:
A HP ProLiant N54L. 2.2 GHz AMD Turion II processor, no disks, 2-4 GB memory.
8 GB ECC RAM in order to make ZFS happy.

I don't have any specific needs, but I'd like to not lose everything if a disk dies, and I think I want at least 4 TB of usable storage. It seems like 3 x 3 TB disks is at a pretty good price point, and then they can be configured for RAID-Z. Would I be more happy with two 4 TB disks in raid 1?

Is there anything I should be aware of with this setup? Will the CPU be hilariously underpowered?

If it is just for general storage, this is probably fine. I would look at 4x 3TB drives in RAIDZ2; that will give you a bit over 5TB of usable space. If you need more performance you can run striped mirrors (RAID 10, basically), which is also safer than RAIDZ/RAID 5.

I would spring for 16GB of RAM; it looks like the box will support it, contrary to what HP lists. I found someone on the FreeNAS forum who is running one with 16GB using this part number: Kingston 16GB (2 x 8GB) 240-Pin DDR3 1333 ECC Unbuffered Server Memory, Model KVR1333D3E9SK2/16G.
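
If you want to sanity-check the usable space math, here is a quick sketch. It ignores ZFS metadata, padding, and the reserved slop space, so real-world numbers come out a little lower:

```python
# Quick back-of-the-envelope usable space for a single RAIDZ/RAIDZ2 vdev.
# Ignores ZFS metadata, padding, and reserved slop space, so real-world
# numbers come out a bit lower than this.

def raidz_usable_tib(num_disks, disk_tb, parity=2):
    """Rough usable space in TiB for one RAIDZ vdev with the given parity
    level (1 = RAIDZ, 2 = RAIDZ2). disk_tb is the marketing terabytes."""
    data_disks = num_disks - parity
    return data_disks * disk_tb * 1e12 / 2**40

print(f"4x 3TB RAIDZ2: {raidz_usable_tib(4, 3, parity=2):.2f} TiB")  # ~5.46
print(f"3x 3TB RAIDZ : {raidz_usable_tib(3, 3, parity=1):.2f} TiB")  # ~5.46
```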
