|
Combat Pretzel posted:Ars Technica's guide to building a NAS: With Storage Spaces even. That was a poo poo article.
|
# ? Feb 11, 2016 21:07 |
|
|
Well, I asked here a couple of months ago about the best way to upgrade a full NAS, expanding my Synology DS412+ from 4x3TB drives to 4x5TB drives. Single array, configured as SHR. No backup, as Crashplan just can't handle that much data even though the DS has extra memory. In the end, I just bit the bullet and hot-swapped the drives, one by one.
* First scrubbed the array to check for faulty drives. (A day or so)
* Checked each drive's SMART info, no obvious problems
* Removed the first drive; the unit beeps continuously until the warning beep is disabled through the web interface
* Inserted the replacement drive, tagged the array for repair using the new drive in the web interface. Wait 6 hours.
* Rinse and repeat for each subsequent drive. Time to rebuild goes up by a few hours for each drive, until the last one took over 24 hours
* Array gradually increases in space after insertion of the second replacement drive, and keeps growing each time.
* No other drive failures in the meantime. Breathed a huge sigh of relief after the last rebuild.
* Device continued working well throughout, no interruptions.
Nerve-wracking, but a good outcome, and the unit performed exactly as it should have.
|
# ? Feb 12, 2016 11:44 |
|
The Gunslinger posted:With Storage Spaces even. That was a poo poo article. That's what I love about Linux et al: they're not black boxes, and you can actually see what hoops they're jumping through to maximize system performance (for instance, I find transparent hugepages a pretty cool idea). With Windows, you're left in the dark as to whether they're actually trying to use the processor features available to the maximum extent possible.
|
# ? Feb 12, 2016 15:40 |
|
This is a much better article http://blog.brianmoses.net/2016/02/diy-nas-2016-edition.html
|
# ? Feb 12, 2016 16:21 |
|
SamDabbers posted:Same here, and agreed. The TS440 definitely gets my recommendation. So this SAS expander states it's supposed to be used in conjunction with a RAID controller; however, if I'm going for a FreeNAS setup, is there anything stopping me from slapping that expander in by itself?
|
# ? Feb 12, 2016 18:43 |
|
reL posted:So this SAS expander states it's supposed to be used in conjunction with a RAID Controller, however if I'm going for a FreeNas setup, is there anything stopping me from slapping that expander in by itself? The expander card is basically the SAS equivalent of an Ethernet switch. It'll turn two SAS ports from your controller card into four, sharing the aggregate bandwidth between the downstream disks, which isn't actually noticeable unless you're building all-SSD arrays. Although it has a PCIe x8 connector, it only uses it for power and doesn't show up as a PCIe-speaking device, so your server can't use it to talk to drives directly. You'll still need an actual controller card, such as an IBM M1015 or another rebadged SAS2008-based card which can be found inexpensively on eBay. It's probably cheaper to just get two controller cards if you have sufficient PCIe slots in your server, rather than a controller card and the expander. The motherboard in my TS440 has one x16 slot used by the M1015, one x1 slot (unpopulated), and one x4 slot which I'm using for an Intel i340-T4 NIC. I picked up one of these cheesy riser boards intended for GPU bitcoin mining in order to power the expander card. I've left the x1 board and USB 3.0 cable out since the expander card doesn't communicate via PCIe anyway. Other options would be to dremel out the back end of the x1 slot on the motherboard, or chop off the PCIe data pins on the expander card, but for $6 it's probably not worth the hassle or risk. SamDabbers fucked around with this message at 19:24 on Feb 12, 2016 |
# ? Feb 12, 2016 18:59 |
|
reL posted:So this SAS expander states it's supposed to be used in conjunction with a RAID Controller, however if I'm going for a FreeNas setup, is there anything stopping me from slapping that expander in by itself? Nope, you can even get a cheap rear end controller card if you want because the RAID magic is going to be happening via FreeNAS, not the hardware.
|
# ? Feb 12, 2016 19:30 |
|
I'm looking for CPU/motherboard recommendations for a home NAS. I'm currently running a 5-year-old ProLiant with an AMD Turion processor on FreeBSD with RAIDZ. For file serving/backup purposes it's been amazing. No downtime except when upgrading between FreeBSD versions. However, I recently expanded to Plex for media serving for my parents. It's way too slow to transcode even a single 1080p file, so I actually run the Plex server on a desktop. I just saw my utility bill after a month of that experiment and my wife ain't happy. The good news is that I have an excuse to build a new system. I'm looking for recommendations for a low-TDP combo that can transcode probably 3-4 streams at the same time. The plan is to keep using FreeBSD, so GPU-accelerated transcoding is probably out. The Ars Technica article was pretty ridiculed, but it seems the CPU choice wasn't that bad. Maybe I can wait for the Pentium G4500T. I don't mind going to Haswell to save a few bucks either. To sell it to the wife, I would have to justify power savings. Future-proofing isn't that high a priority because I don't plan on upgrading for another 5 years. Also, it might be cool to run some VMs too, but that's not a priority either.
|
# ? Feb 12, 2016 22:23 |
|
How about another ProLiant? There's a $10 off code EMCEGFG38 from the weekly email and a $20 rebate which brings it down to $220. Alternatively, the Lenovo TS140 is another solid pick in the same price range.
|
# ? Feb 12, 2016 23:24 |
|
lostleaf posted:I'm looking for cpu/mb recommendations for a home nas. The hardware choices were the least of the problems with that article... The real question you should consider is how much storage you need now, and if you'll need to expand upon that later.
|
# ? Feb 12, 2016 23:41 |
|
I've been looking at the TS140 for a bit. They sometimes show up on Slickdeals with a Xeon for $280. Is the Xeon premium worth it over an i3? The plan is just transcoding 3-4 1080p streams concurrently, along with maybe some VM stuff. The VM is just for screwing around; I don't actually have any utility for it. Also, my current storage is going to be adequate for the foreseeable future. The current plan is just to transplant the existing disks into the new NAS.
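For ballpark sizing, the commonly cited rule of thumb for Plex is roughly 2000 PassMark points per simultaneous 1080p software transcode; a quick sketch of the arithmetic (the per-stream figure and the headroom percentage are rough heuristics, not Plex guarantees):

```python
# Ballpark CPU sizing for Plex software transcoding, using the
# commonly cited rule of thumb of ~2000 PassMark points per
# simultaneous 1080p transcode. The per-stream figure and the
# headroom percentage are rough heuristics, not Plex guarantees.
PASSMARK_PER_1080P_STREAM = 2000

def required_passmark(streams, headroom_pct=20):
    # Integer math keeps the estimate exact; the headroom covers
    # OS and NAS duties running alongside Plex.
    return PASSMARK_PER_1080P_STREAM * streams * (100 + headroom_pct) // 100

for streams in (1, 3, 4):
    print(streams, required_passmark(streams))
```

By that yardstick, 3-4 concurrent streams wants very roughly 7000-10000 PassMark, which is quad-core Xeon E3 territory; a dual-core i3 of that generation sits closer to the low end of that range.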
|
# ? Feb 13, 2016 09:00 |
|
My HTPC RAID 5 array (6x 2TB drives) just went down with two simultaneous disk failures. I'm trying to rebuild it now and I'm afraid that the strain of rebuilding is going to bring more failures and the loss of lots of data. I haven't been following the discussions about RAID starting to be a bad thing with huge disks, but now that I'm experiencing it firsthand I'm ready to make a change. What is the new suggested software solution (Linux based) for me to combine multiple large drives for NAS/media storage purposes? E: The Linux subreddit seems to be saying I'm a baby for being concerned and to just stick it out with RAID5. To be clear, I'm not putting heavy use on the disks, and while they'll be running 24/7 there will only be read/write access for maybe 4-6 hours/day. However, I absolutely don't want to risk data loss. Richard M Nixon fucked around with this message at 06:59 on Feb 14, 2016 |
# ? Feb 14, 2016 06:55 |
|
Richard M Nixon posted:My HTPC RAID 5 array (6x 2TB drives) just went down with two simultaneous disk failures. I'm trying to rebuild it now and I'm afraid that the strain of rebuilding is going to bring more failures and the loss of lots of data. There really isn't anything "better" than RAID of one form or another. Even the big distributed systems used by Google/Amazon/et al use the same principles as what you can get locally, just on a much larger scale. First off, you should have a good backup strategy. Secondly, what people are saying about "RAID" being a bad thing with larger disks is much more specific to RAID 5, which has gotten a lot more unreliable as disks grow in size. The main reason is that disk size has been growing a lot faster than throughput, so rebuilds take longer, and the chance of a second failure during a rebuild keeps getting higher and higher. If you have to replace a single disk in RAID 5 you have no safety margin. There's another reason: typical consumer drive specs allow for one unrecoverable read error in about 12TB of data even with disks in good condition, though you can argue about how much meaning that actually has. So with one disk failure in a 7x2TB drive array you can expect to hit one uncorrectable error while rebuilding, giving you potentially silent data corruption. The way around this is a checksummed file system like ZFS, BTRFS, ReFS, or whatever, so that if nothing else it can detect those errors.
|
# ? Feb 14, 2016 07:15 |
|
Richard M Nixon posted:I haven't been following the discussions about RAID starting to be a bad thing with huge disks, but now that I'm experiencing it firsthand I'm ready to make a change. What is the new suggested software solution (Linux based) for me to combine multiple large drives for NAS/media storage purposes? Upgrade to RAID-6. RAID-5 is obsolete with these storage sizes.
|
# ? Feb 14, 2016 10:29 |
|
This might be the wrong thread, but the short hardware questions thread didn't reply. Can anyone confirm that under Windows 10 -- if I create a "mirror space" (essentially RAID1) with two drives formatted with ReFS -- read-patrolling works and bit-rot protection is active? From a technical standpoint I believe you only need 2 drives for bit-rot protection, since every block is checksummed, and the (poor) documentation leads me to believe it should be enabled with a two-drive mirror pool -- but I swore I read a long time ago that Storage Spaces requires 3 drives for those features even when using mirror spaces, because they use a voting algorithm -- and I haven't been able to track down that technical reference since. Chuu fucked around with this message at 16:30 on Feb 14, 2016 |
# ? Feb 14, 2016 15:09 |
|
Richard M Nixon posted:My HTPC RAID 5 array (6x 2TB drives) just went down with two simultaneous disk failures. I'm trying to rebuild it now and I'm afraid that the strain of rebuilding is going to bring more failures and the loss of lots of data. Wait, how are you rebuilding from 2 simultaneous disk failures? RAID5 only allows recovery from 1. Unless by 'rebuild' you mean 'creating a new array with the remaining disks'. Either way, you might consider something like Unraid or SnapRAID. One of the nice features they have is that they don't stripe data, so in the case where you suffer more failures than you have redundancy protection for, you still have all the data on the disks that did not fail.
|
# ? Feb 14, 2016 15:31 |
|
Thanks for the information. I'll look at Unraid and SnapRAID to see if either would be better than switching to RAID 6. Two drives failed at once, so it was suggested that I try to force the array back online to see if it would recover. It did not. I am trying to avoid total data loss, though I know I may well be facing that. Someone on Stack Exchange suggested that if I want to try to save the data, I could do a sector-level low-level copy to the replacement drives I ordered and then rebuild using them, which may leave me with only minor data corruption instead of total loss. Is this a viable option to try? This was brought up when I mentioned that my array starts to resync when I force the failed drives back into md0 and then fails at about 14%. I don't know enough about the inner workings of RAID to know if there's some hardware-level failure I can overcome or if the actual RAID structure on the failed drives is bad. If it helps, this is what I see when I examine the two failed drives (sdd1 and sda1): code:
code:
|
# ? Feb 14, 2016 18:35 |
|
This still makes no sense because a two drive failure on a RAID5 should mean instant, unrecoverable, and total loss of the array. What RAID solution are you using? Mdraid? Hardware of some sort? What does it say about the health of the whole thing?
|
# ? Feb 14, 2016 18:38 |
|
IOwnCalculus posted:This still makes no sense because a two drive failure on a RAID5 should mean instant, unrecoverable, and total loss of the array. This is not true. You can always force a disk in an array back online and try to get data off of it. It's not a good position to be in and will likely involve lots of corrupt files at best. If you want to recover your data, your best bet is to stop trying to do anything with the array and the drives. Then use something like DriveImage XML to take raw images of each drive (including empty space), then run something like Runtime's RAID Reconstructor against those images. You will need a ton of free space for those images. It will take a long time. But this is the best you can do, as long as the drives themselves still power on and mount.
|
# ? Feb 14, 2016 18:49 |
|
IOwnCalculus posted:This still makes no sense because a two drive failure on a RAID5 should mean instant, unrecoverable, and total loss of the array. It's mdraid. This is what I saw immediately after I got an I/O error and before I stopped the array (one spare, two failed, two active drives): code:
Internet Explorer posted:This is not true. You can always force online a disk in an array and try to get data off of it. It's not a good position to be in and will likely involve lots of corrupt files at best. I imagine this is substantially different than the other suggestion I received to do a sector-level copy of the failed drives and then try and rebuild from the copies? The drives are powered and mounted just fine and I can still read the superblock data on them (see previous post) so I can give it a shot.
|
# ? Feb 14, 2016 18:55 |
|
Every person that I personally know that use(d) raid-5 has lost their entire array. It's literally the dumbest thing someone can do without an entire set of backups.
|
# ? Feb 14, 2016 19:13 |
|
Richard M Nixon posted:It's mdraid. This is what I saw immediately after I got an I/O error and before I stopped the array (one spare, two failed, two active drives): It's basically the same thing as far as sector level copy goes, but the rebuild process of RAID Reconstructor does data recovery type analysis while it is looking at the array. I'd assume it supports the file system you're using. Like I said, you'll just need a lot of scratch space. Those sector level images will be the size of your drives, and you'll need space to restore to. Like everyone said, little late but RAID 5 is rarely the right choice these days. Although with 2 TB drives I'd say you're on the cusp.
|
# ? Feb 14, 2016 20:15 |
|
Internet Explorer posted:This is not true. You can always force online a disk in an array and try to get data off of it. It's not a good position to be in and will likely involve lots of corrupt files at best. While this is possible, we need to be realistic. He is unlikely to get any significant portion of his data back this way if the disks themselves have failed. Richard M Nixon posted:Thanks for the information. I'll look at unraid and snapraid to see if it would be better than switching to raid 6. One of the reasons I switched to SnapRAID from Unraid is that it also allows for multiple parity disks, so I have two-disk redundancy on top of the mitigated failure mode. One thing to keep in mind, though, is that these array strategies are not meant for anywhere near high performance. You will get worse write speed than a native RAID5/6 array, and your read speed will only be as good as the individual disk you are reading from. This is usually perfectly fine for things like media libraries or backup devices, but not great for things that are constantly read from and written to.
|
# ? Feb 14, 2016 20:59 |
|
Skandranon posted:While this is possible, we need to be realistic. He is unlikely to get any significant portion of his data back in this way if the disks themselves have failed. I said he was not in a good situation and at best he'd be able to recover files and have a lot of corrupt files. With the way RAID5 tends to fail during rebuild, it does not necessarily mean the second failed disk is completely failed. I know a bit about what I am talking about and have dealt with data loss situations like this in the past.
|
# ? Feb 14, 2016 21:03 |
|
Internet Explorer posted:I said he was not in a good situation and at best he'd be able to recover files and have a lot of corrupt files. With the way RAID5 tends to fail during rebuild, it does not necessarily mean the second failed disk is completely failed. I know a bit about what I am talking about and have dealt with data loss situations like this in the past. I'm not suggesting you know nothing, I just don't want there to be too much false hope in the air.
|
# ? Feb 14, 2016 21:16 |
|
Skandranon posted:I'm not suggesting you know nothing, I just don't want there to be too much false hope in the air. Hope is in short supply. It's just a few years worth of media so it's not irreplaceable, just a huge pain in the rear end I'd like to avoid. Still, it's not worth it to build a fully redundant array to back everything up, so if it's a loss then it's something I knew was a possibility. It's still really loving weird that two disks failed at the exact same time. I'll get replacement drives on Tuesday and try the clone and recover idea and post results. I'm trying to learn about how something like unraid will work in comparison to switching to raid6.
|
# ? Feb 14, 2016 22:51 |
|
Richard M Nixon posted:Hope is in short supply. It's just a few years worth of media so it's not irreplaceable, just a huge pain in the rear end I'd like to avoid. Still, it's not worth it to build a fully redundant array to back everything up, so if it's a loss then it's something I knew was a possibility. It's still really loving weird that two disks failed at the exact same time. I can supply a quick summary. RAID5/6 stripe all files at the block level across every disk in the array. This is why recovery is impossible if more disks fail than you have redundancy for, since every disk effectively holds 1/(x-1) of each file. RAID4 stores all the parity on a dedicated disk, whereas RAID5/6 distribute the parity blocks evenly across all disks. On the other hand, Unraid/SnapRAID have a dedicated parity drive (similar to RAID4), but instead of the data being striped at the block level, all data disks effectively function as normal drives that could even be removed from the array and read individually. The parity is then computed vertically (basically, there is a parity bit which corresponds to every bit 0 from every disk, then bit 1, and so forth) and stored on the parity disk. So, if any data disk fails, its data can be recomputed by reading from all the other disks and using the parity. If more disks than that fail, then you are basically back to having a number of disks with data on them.
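The vertical parity described above is just a big XOR across disks; a toy sketch (the four-byte "disks" are obviously hypothetical stand-ins for real drives):

```python
# Toy demonstration of "vertical" XOR parity as used by RAID 4/5
# and by Unraid/SnapRAID's dedicated parity disk: the parity byte
# at each offset is the XOR of the bytes at that offset on every
# data disk, so any single missing disk can be recomputed from
# the survivors plus the parity.
from functools import reduce

def xor_parity(disks):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*disks))

disks = [b"AAAA", b"BBBB", b"CCCC"]   # three hypothetical data disks
parity = xor_parity(disks)

# "Lose" disk 1 and rebuild it from the survivors plus parity.
rebuilt = xor_parity([disks[0], disks[2], parity])
assert rebuilt == disks[1]
```

Because XOR is its own inverse, reconstruction is the same operation as parity generation, just with the parity disk standing in for the missing data disk.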
|
# ? Feb 14, 2016 23:05 |
|
Skandranon posted:I can supply a quick summary. Do they have checksumming?
|
# ? Feb 14, 2016 23:49 |
|
PerrineClostermann posted:Do they have check summing? Unraid does not, but snapraid does. Snapraid and unraid are basically RAID4 but on the file level instead of the block level, as I understand them. Snapraid is a lot more prone to partial data loss than ZFS, mostly because snapraid does parity calculation by snapshots, so new data (often what you'd care about the most in the near future) isn't protected. For a media server the downsides of snapraid don't really matter too much, assuming the media is replaceable. You could even argue that it has benefits because even if three disks in a double parity array fail then you still have between N-3 and N-5 (depends on whether the failures were parity disks or not, parity data becomes useless if the array becomes too damaged) disks worth of data and you won't have to restore as much. If data is important or irreplaceable it should have offsite backups - for consumers that's usually just a cloud backup - and nothing is going to change that.
|
# ? Feb 15, 2016 01:04 |
|
Desuwa posted:Unraid does not, but snapraid does. Snapraid and unraid are basically RAID4 but on the file level instead of the block level, as I understand them. There is an upside to SnapRAID's batched parity (rather than realtime like Unraid): it is a lot easier to move files around between drives, as you can do your organization, THEN initiate the parity rebuild, instead of constantly slamming the parity disk. It also gives you a limited undelete function, restoring files back to the last time parity was calculated. It's not quite like RAID4, which still stripes data across the data disks. SnapRAID/Unraid don't do any data striping; files are stored on regularly formatted partitions (ReiserFS for Unraid, multiple available filesystems for SnapRAID), with a parity disk alongside them. The disks can be removed and placed in other computers and are instantly readable, which to me is the main feature, as it has a much better recovery model than RAID5/6. It also allows the use of varying disk sizes (parity disks must be as large as the largest data drive).
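For the curious, a SnapRAID layout like the one described boils down to a small config file; a minimal sketch, with hypothetical mount points:

```
# Hypothetical snapraid.conf for a 3-data / 1-parity layout;
# all paths here are examples, not defaults.
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
exclude *.tmp
```

`snapraid sync` then performs the batched parity update, and `snapraid scrub` verifies the stored checksums against the data.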
|
# ? Feb 15, 2016 01:28 |
|
I don't really get a lot of benefit out of ZFS, though it doesn't really cost me anything to use it either. This recent talk about Unraid and the like has gotten me idly pondering them, because when I run out of space in ZFS, I really miss the flexibility of adding and removing storage that I had back when I was running Windows Home Server a few years ago. Is there any benefit to having an SSD as the parity drive with these systems?
|
# ? Feb 15, 2016 01:32 |
|
Thermopyle posted:I don't really get a lot of benefit out of ZFS. Not unless you are made of money; as I said, the parity MUST be as large as the largest data drive. It would also only increase write speed up to the speed of the data drive, assuming you are not bottlenecked on the parity calculation. However, with Unraid (and I think SnapRAID, though it is less helpful there), you CAN have a cache drive, which is where your writes actually go before being added to the array in a batched process. In Unraid, the cache drive can also be set up to function as a hot spare in case redundancy fails.
|
# ? Feb 15, 2016 01:34 |
|
Skandranon posted:Not unless you are made of money, as I said, the parity MUST be as large as the largest data drive. I wasn't talking about a price/performance analysis, I'm just wondering what kind of operations the parity drive bottlenecks.
|
# ? Feb 15, 2016 01:35 |
|
Thermopyle posted:I wasn't talking about a price/performance analysis, I'm just wondering what kind of operations the parity drive bottlenecks. Edited my original reply elaborating, before I saw this. Specifically, it only bottlenecks writes to the array. Read operations pull only from the drive the file is on. I guess it would also allow you to rebuild parity from scratch at max speed, which would be the slowest read speed of any data drive.
|
# ? Feb 15, 2016 01:36 |
|
Skandranon posted:Edited my original reply elaborating, before I saw this. gotcha, thanks.
|
# ? Feb 15, 2016 01:38 |
|
So it looks like we've got 3 2TB HDDs ordered. They'll supplement my old 2TBs and give us five drives to play with. I'm thinking ZFS with 6TB storage, in RAID6. I figure an old e6750 C2D would be enough to service just old data, but the system would only have 6GB RAM. Being DDR2, it's not like I can exactly go out and buy more. How insane an idea is this?
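A quick sanity check on that 6TB figure, assuming RAIDZ2's fixed two-drives-of-parity overhead (this is raw capacity; ZFS metadata and allocation padding will shave a bit more off in practice):

```python
# Sanity check on usable space: RAIDZ2 (ZFS's double-parity layout,
# comparable to RAID 6) spends two drives' worth of space on parity
# regardless of array width. Raw figure only; ZFS metadata and
# allocation padding reduce it somewhat in practice.
def raidz2_usable_tb(drive_count, drive_tb):
    if drive_count < 4:  # commonly cited minimum for raidz2
        raise ValueError("RAIDZ2 wants at least 4 drives")
    return (drive_count - 2) * drive_tb

print(raidz2_usable_tb(5, 2))  # five 2TB drives, as proposed
```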
|
# ? Feb 16, 2016 04:40 |
|
PerrineClostermann posted:So it looks like we've got 3 2TB HDDs ordered. They'll supplement my old 2TBs and give us five drives to play with. I'm thinking ZFS with 6TB storage, in RAID6. I figure an old e6750 C2D would be enough to service just old data, but the system would only have 6GB RAM. Being DDR2, it's not like I can exactly go out and buy more. How insane an idea is this? You should still be able to get DDR2 without too much trouble... either way, sounds pretty normal to me, should be fine.
|
# ? Feb 16, 2016 04:54 |
|
I'm looking for backup software for Windows that will do realtime backups from selected folders from the local PC to the NAS on my network. Would like something fast, lightweight and free if it exists. I am using crashplan to backup to the cloud and I know it can do local LAN backups but it doesn't do 1:1 copies, converts all the data into some other file type.
|
# ? Feb 16, 2016 07:36 |
|
lostleaf posted:I've been looking at the ts140 for a bit. They sometimes show up on slickdeals with a xeon for 280. Is the xeon premium worth it over an i3? The plan is just transcoding 3-4 1080 streams concurrently along with maybe some VM stuff. The VM is just for screwing around. I don't actually have any utility for it. I would just spring for the Xeon. Imagine if you got the i3 and it didn't perform well enough; yeah, you could probably return it, but how much of a pain would that be?
|
# ? Feb 16, 2016 17:35 |
|
How is rutorrent able to update RSS feeds for auto download when I don't have the webpage open?
|
|
# ? Feb 18, 2016 06:53 |