|
Here is a very basic slide deck that covers ZFS and some of the adding and expanding options, at least for FreeNAS: http://forums.freenas.org/threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/ And here is the page it is from if you want to read more: http://doc.freenas.org/index.php/Hardware_Recommendations#ZFS_Overview
|
# ¿ Sep 22, 2013 00:21 |
|
modeski posted:Does anyone have a NAS with 8 drives? I'm speccing out my new server and need a PSU that will support up to eight HDs (I'm starting off with 5). I'd like something efficient with enough connections for eight drives, but have no clue what else I need to be considering. I have a server with 8 bays; it has a SAS/SATA backplane that takes two 4-pin Molex connectors. Drives don't pull that much power, 5-10W per drive. I would also recommend RAID 6 over RAID 5 (assuming you meant 5 disks for RAID 5).
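To put rough numbers on it, here is a back-of-envelope power budget. The per-drive wattages are typical 3.5" HDD figures, not from any particular datasheet, so check your own drives' specs:

```python
# Rough 8-drive power budget. All wattages are typical 3.5" HDD
# ballpark figures (assumptions, not datasheet values).
IDLE_W = 5.0     # per-drive idle/low-activity draw
ACTIVE_W = 10.0  # per-drive worst-case steady draw under load
SPINUP_W = 25.0  # per-drive 12V surge during spin-up

drives = 8
print(f"steady-state worst case: {drives * ACTIVE_W:.0f} W")
print(f"all drives spinning up at once: {drives * SPINUP_W:.0f} W")
```

The spin-up surge is what actually sizes the 12V rail; a backplane that supports staggered spin-up makes that mostly a non-issue.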
|
# ¿ Sep 22, 2013 01:18 |
|
Only reason I can think of tarepanda posted:Two questions: #1, what Moey said. #2, you are better off ripping it and then playing it from the NAS; a Blu-ray will peak at 40 Mbps, and streaming that to an HTPC without glitching is hard. I have not done it through FreeNAS, but I am guessing you would run into HDCP issues.
|
# ¿ Sep 22, 2013 21:43 |
|
Moey posted:1. You will have to copy it to a temp location, then rebuild the array. Unless the system is low on memory and he is switching to UFS, there is no reason to switch off of ZFS.
|
# ¿ Sep 22, 2013 21:44 |
|
SopWATh posted:I've got an Atom D525 box, a SuperMicro X7SPA-HF-D525 with the maximum 4GB of RAM, and 3x 2TB drives in software RAID5. You could try FreeNAS; there are plenty of people running it on less than 8GB, but several report performance and stability issues on low-memory systems. I have never run it with less than 16GB, so I don't have personal experience with this setup. If you do run it, you won't get much caching, you need to make sure you do not enable L2ARC, and there is some other tuning for low-memory systems you will want to follow from the FreeNAS docs. If your onboard video uses system RAM you might want to disable that and use a cheap video card just for setup. I would not expect great performance. You could also look at NAS4Free, as it lists lower system requirements for a ZFS NAS than FreeNAS does. I have not used it, but it reviews pretty well. My last comment on your setup: I would recommend a fourth 2TB drive and RAIDZ2 instead of RAIDZ. Rebuild time on larger drives leaves plenty of opportunity for a second disk failure (I have experienced this personally), and your chances of a successful rebuild go up greatly with RAIDZ2. If you need better write performance than RAIDZ2 gives, look at striped mirrored vdevs (RAID 10, basically). I would also keep a spare drive on the shelf for when you have a disk failure.
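The rebuild-risk point can be sketched with the usual unrecoverable-read-error (URE) math. The 1-error-per-1e14-bits spec is a typical consumer-drive figure, and the model assumes independent errors, so treat this as illustrative only:

```python
# Back-of-envelope odds that a single-parity rebuild hits an
# unrecoverable read error (URE). The 1e14-bits spec is a typical
# consumer-drive figure and the independence assumption is rough.
def p_ure_during_rebuild(data_read_tb, ure_per_bits=1e14):
    bits = data_read_tb * 1e12 * 8
    return 1 - (1 - 1 / ure_per_bits) ** bits

# Rebuilding a 3-disk RAIDZ of 2TB drives re-reads the two
# surviving disks, about 4TB:
print(f"{p_ure_during_rebuild(4):.0%}")
```

With this model a 4TB re-read gives roughly a one-in-four chance of hitting a URE, which is why a second parity disk (RAIDZ2) changes the picture so much: one URE during rebuild is then still recoverable.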
|
# ¿ Sep 23, 2013 01:35 |
|
modeski posted:It's only anecdotal, but I've had my WHSv1 server for three years now, with 2TB WD Caviar Green drives, and had to return/replace three of them over that time. I'll be going with Reds for my next server. So, just to make it more difficult: I have two QNAPs with 6 WD Greens and have not had a problem with any of them. They don't get used a bunch, but they have been running for over a year.
|
# ¿ Sep 24, 2013 04:57 |
|
Combat Pretzel posted:RE drives are essentially Black editions with different firmware and ostensibly extended burn-in testing. They don't suddenly attain SSD-like speeds out of the blue. RE hard drives are disks with TLER for hardware RAID. They may use better components to get the longer MTBF that is listed, or they may just take care of it with warranty and charge the price premium as insurance. RE4s and Blacks perform at the same level. It could be that, like a lot of employees at a lot of companies, he believed the marketing.
|
# ¿ Sep 26, 2013 16:54 |
|
Agrikk posted:For more anecdotal evidence, I accidentally purchased sixteen 1TB WD green drives for a project and we ended up keeping them for something else. Of those 16 drives (in RAID-0 and RAID-10 configurations), five have failed in the last two years. Their five warranty replacements have not failed. I had 12 WD RE4s in a RAID 60 on one of our servers: no SMART warnings from the RAID controller, everything was fine, and the system ran for almost two years. Then we powered the server off for maintenance, and of the 12 only 4 came back online. Since then I think we have replaced two more that failed a bit more gracefully. Also, I have a friend who ran IT storage for his last company, with a consumption rate of about a petabyte per quarter. Their average yearly failure rate on enterprise-class disks was 12%.
|
# ¿ Sep 26, 2013 22:39 |
|
AlternateAccount posted:This sounds absurdly high. Those were the numbers he gave me. It was a large HPC environment for the most part, with a bunch of engineers accessing large data sets as well. I wasn't too surprised, really; we have pretty regular disk failures on our EMC and NetApp arrays.
|
# ¿ Sep 28, 2013 02:15 |
|
Combat Pretzel posted:Thanks for making me crap my pants. I've opted for the RE4 in my system after good experiences with the RE2 (almost 40000 power-on hours on each of them), after some WD Greens and some Seagate Barracudas taking a poo poo in rapid succession. Well, like everyone says over and over, RAID is not backup. I would just make sure they are not too hot and that you have a spare on hand.
|
# ¿ Sep 28, 2013 02:18 |
|
DNova posted:Google's huge dataset showed that cooler drives failed at significantly higher rates It does show higher failure rates in lower-temperature drives, especially for young disks (3-6 months old). The report showed "...a mostly flat failure rate at mid-range temperatures and a modest increase at the low end of the temperature distribution." It went on, though, to say, "What stands out are the 3 and 4 year old drives, where the trend for higher failures with higher temperature is much more constant and also more pronounced." They continue, "Overall our experiments can confirm previously reported temperature effects only for the high end of our temperature range and especially for older drives." And the important final note on temperature says, "We can conclude that at moderate temperature ranges it is likely that there are other effects which affect failure rates much more strongly than temperatures do." I think what is being seen in the very young disks are failures that are part of the infant-mortality pool and not necessarily related solely to temperature. There is a definite spike in failures at the other end of the temperature range for drives 3+ years old, where temperature probably plays a more significant part (especially over 40C), but it is hard to tell, since general AFR spikes again and they do not show a drive utilization vs. temperature graph that would let us draw better conclusions. It does show failure numbers in line with the anecdotal info I have from friends who run large enterprise storage deployments: for high-utilization disks the failure rate is ~15% in the first year. For fellow insomniacs: Failure Trends in a Large Disk Drive Population.
|
# ¿ Oct 4, 2013 14:02 |
|
D. Ebdrup posted:This is bugging me a bit, so just to clarify: those numbers aren't indicators of the amount of parity disks - rather, it's an indicator of how many disks you can lose and still be able to rebuild/resilver because of parity stored on each drive in the array/pool. The same is true for parity with RAID5/6 as well; nothing uses a dedicated disk for parity (unless you are somehow doing RAID 4). RAID 5 is roughly equivalent to RAIDZ in that both can tolerate a single disk loss, and RAID 6 and RAIDZ2 can tolerate two disk failures in a pool. The parity information is striped across all the disks in the pool in all the cases listed. Many people say RAID5/RAIDZ has one parity disk and RAID6/RAIDZ2 has two because that is the amount of space you lose to parity; they were taught wrong, or at least a simplified version of what is really going on.
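The rotating-parity idea can be shown with a toy sketch (grossly simplified, three "disks", RAID5-left-symmetric-style rotation): the parity chunk moves to a different disk each stripe, and any single lost disk can be rebuilt by XOR-ing the survivors:

```python
# Toy rotating-parity layout: no disk is "the parity disk" --
# parity lands on a different disk each stripe, RAID5-style.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

stripes = [(b"AAAA", b"BBBB"), (b"CCCC", b"DDDD")]  # (data0, data1) per stripe
disks = [[], [], []]
for i, (d0, d1) in enumerate(stripes):
    chunks = [d0, d1, xor(d0, d1)]      # two data chunks + one parity chunk
    for j in range(3):                  # rotate placement by stripe index
        disks[(j + i) % 3].append(chunks[j])

# "Fail" disk 0 and rebuild its chunk of stripe 0 from the survivors:
rebuilt = xor(disks[1][0], disks[2][0])
print(rebuilt)  # b'AAAA'
```

Same space cost as a dedicated parity disk, but reads and writes (and rebuild load) spread over all members, which is why "one parity disk" is a convenient fiction rather than the actual layout.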
|
# ¿ Dec 7, 2013 18:14 |
|
Jesse Iceberg posted:Regarding ZFS chat, I manage three 32TB storage zpools for my work, holding some scientific databases. The pools are almost constantly being written to with new data, but only rarely read from. Have you done anything to speed up scrubbing? Not sure if it would help, but if you haven't, you could give more I/O time to the scrub and get it over with in a few hours instead of taking the hit for 1-2 days. Look at this: http://broken.net/uncategorized/zfs-performance-tuning-for-scrubs-and-resilvers/. If the systems are 24x7 then this might not help, but if there are a few hours a day when they are not used then it might work.
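Rough math on why the tuning matters: a scrub has to read every allocated block, so wall-clock time is basically used space divided by sustained scrub rate. The rates below are made-up examples for illustration; measure your own from `zpool status` during a scrub:

```python
# Scrub-time estimate: used space / sustained scrub rate.
# Both example rates are assumptions, not measurements.
def scrub_hours(used_tb, rate_mb_s):
    return used_tb * 1e6 / rate_mb_s / 3600  # 1 TB = 1e6 MB

print(f"{scrub_hours(20, 100):.0f} h at a throttled 100 MB/s")
print(f"{scrub_hours(20, 500):.0f} h if tuning gets you to 500 MB/s")
```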
|
# ¿ Dec 11, 2013 01:11 |
|
|
NFX posted:It seems like the HP ProLiant N54L (previous generation) is pretty cheap, and I've been thinking about getting a NAS for a while. If it is just for general storage this is probably fine. I would look at 4x 3TB drives in RAIDZ2, which will give you a bit over 5TB of usable space. If you need more performance you can run striped mirrors (RAID 10, basically); that is safer than RAIDZ/RAID5. I would spring for 16GB of RAM; it looks like it will support that, contrary to what HP lists. I found someone in the FreeNAS forum who is running one with 16GB using this part: Kingston 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 1333 ECC Unbuffered Server Memory, model KVR1333D3E9SK2/16G.
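For the curious, the "bit over 5TB" figure falls out of simple arithmetic (ZFS metadata and reserved space will shave a little more off):

```python
# 4x 3TB RAIDZ2: two drives' worth of space goes to parity, and
# marketing terabytes (1e12 bytes) shrink when shown as TiB (2**40).
drives, size_tb, parity = 4, 3, 2
usable_tib = (drives - parity) * size_tb * 1e12 / 2**40
print(f"{usable_tib:.2f} TiB usable before filesystem overhead")
```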
|
# ¿ Aug 31, 2014 22:28 |