The Gunslinger
Jul 24, 2004

Do not forget the face of your father.
Fun Shoe

With Storage Spaces even. That was a poo poo article.

Trammel
Dec 31, 2007
.
Well, I asked here a couple of months ago about the best way to upgrade a full NAS, expanding my Synology DS412+ from 4x3TB drives to 4x5TB drives. Single array, configured as SHR. No backup, as Crashplan just can't handle that much data even though the DS has extra memory.

In the end, I just bit the bullet and hot-swapped the drives, one by one.

* First scrubbed the array to check for faulty drives. (A day or so)
* Checked each drive's SMART info, no obvious problems
* Removed first drive, unit beeps continuously until the warning beep is disabled through the web interface
* Inserted replacement drive, tagged the array for repair using the new drive in the web interface. Wait 6 hours.
* Rinse and repeat for each subsequent drive. Time to rebuild goes up by a few hours for each drive, until the last one took over 24 hours
* Array gradually increases in space after insertion of second replacement drive, and keeps growing each time.
* No other drive failures in the meantime. Breathed a huge sigh of relief after the last rebuild.
* Device continues working well throughout, no interruptions.

Nerve-wracking, but a good outcome, and the unit performed exactly as it should have.
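
For the curious, this is roughly the dance DSM is doing under the hood. SHR is mdraid plus LVM underneath, so a sketch of the bare-mdadm equivalent looks something like this (hypothetical device names; on a real Synology you'd let the web UI drive it):
code:
# One-at-a-time drive replacement on a generic mdraid array.
mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the old drive failed
mdadm --manage /dev/md0 --remove /dev/sdb1   # pull it from the array
# (physically swap the disk, partition the new one to match)
mdadm --manage /dev/md0 --add /dev/sdb1      # kick off the rebuild
cat /proc/mdstat                             # watch rebuild progress
# After the last drive is swapped, grow into the new capacity:
mdadm --grow /dev/md0 --size=max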

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

The Gunslinger posted:

With Storage Spaces even. That was a poo poo article.
Eh, if there were more documentation about its inner workings and those of ReFS (plus the ability to properly rebalance the storage pool, not the simpleton version currently available), maybe it would be an option worth considering.

That's what I love about Linux et al.: they're not black boxes, and you can actually see what hoops they're jumping through to maximize system performance (for instance, I find transparent hugepages a pretty cool idea). With Windows, you're left in the dark as to whether they're actually using the available processor features to the fullest extent possible.

Tevruden
Aug 12, 2004

This is a much better article: http://blog.brianmoses.net/2016/02/diy-nas-2016-edition.html

reL
May 20, 2007

SamDabbers posted:

Same here, and agreed. The TS440 definitely gets my recommendation.

I bit on a SAS expander card and the miscellaneous cables and adapters to add more drives using an old ATX case as a cheap JBOD chassis. Trip report to follow once I've got it all set up.

So this SAS expander states it's supposed to be used in conjunction with a RAID controller; however, if I'm going for a FreeNAS setup, is there anything stopping me from slapping that expander in by itself?

SamDabbers
May 26, 2003



reL posted:

So this SAS expander states it's supposed to be used in conjunction with a RAID controller; however, if I'm going for a FreeNAS setup, is there anything stopping me from slapping that expander in by itself?

The expander card is basically the SAS equivalent of an Ethernet switch. It'll turn two SAS ports from your controller card into four, sharing the aggregate bandwidth between the downstream disks, which isn't actually noticeable unless you're building all-SSD arrays. Although it has a PCIe x8 connector, it only uses it for power and doesn't show up as a PCIe-speaking device, so your server can't use it to talk to drives directly. You'll still need an actual controller card, such as an IBM M1015 or another rebadged SAS2008-based card which can be found inexpensively on eBay.

It's probably cheaper to just get two controller cards if you have sufficient PCIe slots in your server, rather than a controller card and the expander. The motherboard in my TS440 has one x16 slot used by the M1015, one x1 slot (unpopulated), and one x4 slot which I'm using for an Intel i340-T4 NIC. I picked up one of these cheesy riser boards intended for GPU bitcoin mining in order to power the expander card. I've left the x1 board and USB 3.0 cable out since the expander card doesn't communicate via PCIe anyway. Other options would be to dremel out the back end of the x1 slot on the motherboard, or chop off the PCIe data pins on the expander card, but for $6 it's probably not worth the hassle or risk.
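
If you want to sanity-check the topology once it's cabled up, something like the following should show the enclosure and every disk behind the expander (this assumes an LSI SAS2008-family HBA and that the sas2ircu utility is installed; both are assumptions, not part of the original setup):
code:
sas2ircu LIST        # enumerate LSI controllers (index 0, 1, ...)
sas2ircu 0 DISPLAY   # list enclosures and the drives behind them
lsscsi               # disks behind the expander show up as ordinary /dev/sdX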

SamDabbers fucked around with this message at 19:24 on Feb 12, 2016

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

reL posted:

So this SAS expander states it's supposed to be used in conjunction with a RAID controller; however, if I'm going for a FreeNAS setup, is there anything stopping me from slapping that expander in by itself?

Nope, you can even get a cheap rear end controller card if you want because the RAID magic is going to be happening via FreeNAS, not the hardware.

lostleaf
Jul 12, 2009
I'm looking for cpu/mb recommendations for a home nas.

I'm currently running a 5-year-old ProLiant with an AMD Turion processor on FreeBSD with RAIDZ. For file serving/backup purposes it's been amazing. No downtime except when upgrading between FreeBSD versions. However, I recently expanded to Plex for media serving for my parents. It's way too slow to transcode even a single 1080p file, so I actually run the Plex server on a desktop. I just saw my utility bill after a month of this experiment and my wife ain't happy. The good news is that I have an excuse to build a new system.


I'm looking for recommendations for a low-TDP combo that can transcode probably 3-4 streams at the same time. The plan is to keep using FreeBSD, so GPU-accelerated transcoding is probably out. The Ars Technica article was pretty widely ridiculed, but it seems the CPU choice wasn't that bad. Maybe I can wait for the Pentium G4500T. I don't mind going to Haswell to save a few bucks either. To sell it to the wife, I would have to justify power savings. Future-proofing isn't that high a priority because I don't plan on upgrading for another 5 years. It might also be cool to run some VMs, but that's not a priority either.

SamDabbers
May 26, 2003



How about another ProLiant? There's a $10-off code, EMCEGFG38, from the weekly email and a $20 rebate, which brings it down to $220. Alternatively, the Lenovo TS140 is another solid pick in the same price range.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

lostleaf posted:

I'm looking for cpu/mb recommendations for a home nas.

I'm currently running a 5-year-old ProLiant with an AMD Turion processor on FreeBSD with RAIDZ. For file serving/backup purposes it's been amazing. No downtime except when upgrading between FreeBSD versions. However, I recently expanded to Plex for media serving for my parents. It's way too slow to transcode even a single 1080p file, so I actually run the Plex server on a desktop. I just saw my utility bill after a month of this experiment and my wife ain't happy. The good news is that I have an excuse to build a new system.


I'm looking for recommendations for a low-TDP combo that can transcode probably 3-4 streams at the same time. The plan is to keep using FreeBSD, so GPU-accelerated transcoding is probably out. The Ars Technica article was pretty widely ridiculed, but it seems the CPU choice wasn't that bad. Maybe I can wait for the Pentium G4500T. I don't mind going to Haswell to save a few bucks either. To sell it to the wife, I would have to justify power savings. Future-proofing isn't that high a priority because I don't plan on upgrading for another 5 years. It might also be cool to run some VMs, but that's not a priority either.

The hardware choices were the least of the problems with that article...

The real question you should consider is how much storage you need now, and if you'll need to expand upon that later.

lostleaf
Jul 12, 2009
I've been looking at the TS140 for a bit. They sometimes show up on Slickdeals with a Xeon for $280. Is the Xeon premium worth it over an i3? The plan is just transcoding 3-4 1080p streams concurrently along with maybe some VM stuff. The VM is just for screwing around; I don't actually have any real use for it.

Also my current storage is going to be adequate for the foreseeable future. The current plan is just to transplant the existing disks into the new nas.

Richard M Nixon
Apr 26, 2009

"The greatest honor history can bestow is the title of peacemaker."
My HTPC RAID 5 array (6x 2TB drives) just went down with two simultaneous disk failures. I'm trying to rebuild it now and I'm afraid that the strain of rebuilding is going to bring more failures and the loss of lots of data.

I haven't been following the discussions about RAID starting to be a bad thing with huge disks, but now that I'm experiencing it firsthand I'm ready to make a change. What is the new suggested software solution (Linux based) for me to combine multiple large drives for NAS/media storage purposes?

E: The Linux subreddit seems to be saying I'm a baby for being concerned and to just stick it out with RAID5. To be clear, I'm not putting heavy use on the disks, and while they'll be running 24/7 there will only be read/write access for maybe 4-6 hours/day. However, I absolutely don't want to risk data loss.

Richard M Nixon fucked around with this message at 06:59 on Feb 14, 2016

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

Richard M Nixon posted:

My HTPC RAID 5 array (6x 2TB drives) just went down with two simultaneous disk failures. I'm trying to rebuild it now and I'm afraid that the strain of rebuilding is going to bring more failures and the loss of lots of data.

I haven't been following the discussions about RAID starting to be a bad thing with huge disks, but now that I'm experiencing it firsthand I'm ready to make a change. What is the new suggested software solution (Linux based) for me to combine multiple large drives for NAS/media storage purposes?

E: The Linux subreddit seems to be saying I'm a baby for being concerned and to just stick it out with RAID5. To be clear, I'm not putting heavy use on the disks, and while they'll be running 24/7 there will only be read/write access for maybe 4-6 hours/day. However, I absolutely don't want to risk data loss.

There really isn't anything "better" than RAID of one form or another. Even the big distributed systems used by Google/Amazon/et al. use the same principles as what you can get locally, just on a much larger scale.

First off, you should have a good backup strategy.

Secondly, what people are saying about "RAID" being a bad thing with larger disks is much more specific to RAID 5, which has gotten a lot more unreliable as disks grow in size. The main reason is that disk size has been growing a lot faster than throughput, so rebuilds take longer, and the chance of a second failure during a rebuild keeps getting higher. If you have to replace a single disk in RAID 5, you have no safety margin during the rebuild.

There's another reason: the SATA spec allows for one unrecoverable read error in about 12TB of data even with disks in good condition, though you can argue about how much meaning that actually has. So with one disk failure in a 7x2TB array, you can expect to hit an uncorrectable error while rebuilding, giving you potentially silent data corruption. The way around this is a checksummed filesystem like ZFS, BTRFS, ReFS, or whatever, so that if nothing else it can detect those errors.
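
To put a rough number on that, here's a back-of-the-envelope calculation that takes the one-error-per-1e14-bits spec figure at face value (real drives often do better, which is part of why the figure is debatable):
code:
awk 'BEGIN {
  per_bit = 1e-14              # spec sheet unrecoverable read error rate, per bit
  tb_read = 12                 # data read during the rebuild, in TB
  bits    = tb_read * 1e12 * 8
  p_clean = exp(bits * log(1 - per_bit))
  printf "P(at least one URE over %d TB read) = %.0f%%\n", tb_read, (1 - p_clean) * 100
}'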

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Richard M Nixon posted:

I haven't been following the discussions about RAID starting to be a bad thing with huge disks, but now that I'm experiencing it firsthand I'm ready to make a change. What is the new suggested software solution (Linux based) for me to combine multiple large drives for NAS/media storage purposes?

Upgrade to RAID-6. RAID-5 is obsolete with these storage sizes.
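
With mdraid that migration can even be done in place, along these lines (a sketch with hypothetical device names; it reshapes every stripe, takes many hours, and is no substitute for a backup):
code:
mdadm --manage /dev/md0 --add /dev/sdg1   # disk that will hold the second parity
mdadm --grow /dev/md0 --level=6 --raid-devices=7 \
      --backup-file=/root/md0-reshape.backup
cat /proc/mdstat                          # watch the reshape progress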

Chuu
Sep 11, 2004

Grimey Drawer
This might be the wrong thread, but nobody in the short hardware questions thread replied.

Can anyone confirm that under Windows 10 -- if I create a "mirror space" (essentially RAID1) with two drives formatted with ReFS -- read-patrolling and bit-rot protection actually work?

From a technical standpoint I believe you only need 2 drives for bit-rot protection, since every block is checksummed, and the (poor) documentation leads me to believe it should be enabled with a two-drive mirror pool -- but I swear I read a long time ago that Storage Spaces requires 3 drives for those features, even when using mirror spaces, because they use a voting algorithm, and I haven't been able to track down that technical reference since.

Chuu fucked around with this message at 16:30 on Feb 14, 2016

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Richard M Nixon posted:

My HTPC RAID 5 array (6x 2TB drives) just went down with two simultaneous disk failures. I'm trying to rebuild it now and I'm afraid that the strain of rebuilding is going to bring more failures and the loss of lots of data.

I haven't been following the discussions about RAID starting to be a bad thing with huge disks, but now that I'm experiencing it firsthand I'm ready to make a change. What is the new suggested software solution (Linux based) for me to combine multiple large drives for NAS/media storage purposes?

E: The Linux subreddit seems to be saying I'm a baby for being concerned and to just stick it out with RAID5. To be clear, I'm not putting heavy use on the disks, and while they'll be running 24/7 there will only be read/write access for maybe 4-6 hours/day. However, I absolutely don't want to risk data loss.

Wait, how are you rebuilding from 2 simultaneous disk failures? RAID5 only allows recovery from 1. Unless by 'rebuild', you mean 'creating a new array with the remaining disks'.

Either way, something to consider might be Unraid or SnapRaid. One of their nice features is that they don't stripe data, so if you suffer more failures than you have redundancy for, you still have all the data on the disks that didn't fail.

Richard M Nixon
Apr 26, 2009

"The greatest honor history can bestow is the title of peacemaker."
Thanks for the information. I'll look at Unraid and SnapRaid to see if they would be better than switching to RAID 6.

Two drives failed at once, so it was suggested that I try to force the array back online to see if it would recover. It did not. I am trying to avoid total data loss, though I know I may well be facing that. Someone on Stack Exchange suggested that if I want to try to save the data, I could do a sector-level copy to the replacement drives I ordered and then rebuild using them, which might leave me with only minor data corruption instead of total loss. Is this a viable option to try?
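
For what it's worth, the usual tool for that kind of sector-level copy off dying disks is GNU ddrescue. A sketch, with hypothetical device names (triple-check which is source and which is destination before running anything like this):
code:
ddrescue -f -n /dev/sda /dev/sdg /root/sda.map    # fast first pass, skip bad areas
ddrescue -f -r3 /dev/sda /dev/sdg /root/sda.map   # go back and retry bad areas 3 times
# The map file lets the copy stop and resume; once both failed members are
# cloned, assemble the array from the clones rather than the originals.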

This was brought up when I mentioned that my array starts to resync when I force the failed drives back into md0 and then it fails at about 14%. I don't know enough about the inner workings of RAID to know if there's a possibility of some hardware level failure that I can overcome or if the actual RAID structure on the failed drives is bad.
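
(For reference, the forced reassembly people suggested looks something like this with mdadm; ideally you'd only ever run it against clones of the failed drives:)
code:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdc /dev/sdd1 /dev/sde
# --force tells mdadm to accept members with mismatched event counters
# (362411 vs 362440 in the dumps below); anything written between those
# two event counts is liable to come back corrupt.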

If it helps, this is what I see when I examine the two failed drives (sdd1 and sda1):
code:
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b2fee98b:a1be2e00:ebbc39a9:9050312e
           Name : htpcbox:0  (local to host htpcbox)
  Creation Time : Fri Jul 22 17:13:28 2011
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3907027056 (1863.02 GiB 2000.40 GB)
     Array Size : 5860537344 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 4c88c114:31a0fd19:bb38b7d7:dd29e61f

    Update Time : Sat Feb 13 21:48:30 2016
       Checksum : 19085c8 - correct
         Events : 362411

         Layout : left-symmetric
     Chunk Size : 1024K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing)
code:
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b2fee98b:a1be2e00:ebbc39a9:9050312e
           Name : htpcbox:0  (local to host htpcbox)
  Creation Time : Fri Jul 22 17:13:28 2011
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3907027056 (1863.02 GiB 2000.40 GB)
     Array Size : 5860537344 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 55f816ec:5eb9e8b4:ad1c29be:b9de11cf

    Update Time : Sat Feb 13 23:01:15 2016
       Checksum : 96ae8ad - correct
         Events : 362440

         Layout : left-symmetric
     Chunk Size : 1024K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing)

IOwnCalculus
Apr 2, 2003





This still makes no sense because a two drive failure on a RAID5 should mean instant, unrecoverable, and total loss of the array.

What RAID solution are you using? Mdraid? Hardware of some sort? What does it say about the health of the whole thing?

Internet Explorer
Jun 1, 2005





IOwnCalculus posted:

This still makes no sense because a two drive failure on a RAID5 should mean instant, unrecoverable, and total loss of the array.

What RAID solution are you using? Mdraid? Hardware of some sort? What does it say about the health of the whole thing?

This is not true. You can always force online a disk in an array and try to get data off of it. It's not a good position to be in and will likely involve lots of corrupt files at best.

If you want to recover your data, your best bet is to stop trying to do anything with the array and the drives. Then use something like DriveImage XML to take raw images (including empty space) of each drive, then use something like Runtime's RAID Reconstructor against those images. You will need a ton of free space for those images. It will take a long time. But this is the best you can do, as long as the drives themselves still power on and mount.

Richard M Nixon
Apr 26, 2009

"The greatest honor history can bestow is the title of peacemaker."

IOwnCalculus posted:

This still makes no sense because a two drive failure on a RAID5 should mean instant, unrecoverable, and total loss of the array.

What RAID solution are you using? Mdraid? Hardware of some sort? What does it say about the health of the whole thing?

It's mdraid. This is what I saw immediately after I got an I/O error and before I stopped the array (one spare, two failed, two active drives):
code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdf1[6](S) sda1[1](F) sdd1[3](F) sdc[5] sde[4]
      5860537344 blocks super 1.2 level 5, 1024k chunk, algorithm 2 [4/2] [U__U]

unused devices: <none>


sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jul 22 17:13:28 2011
     Raid Level : raid5
     Array Size : 5860537344 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 1953512448 (1863.01 GiB 2000.40 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Sat Feb 13 23:09:52 2016
          State : clean, FAILED
 Active Devices : 2
Working Devices : 3
 Failed Devices : 2
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 1024K

           Name : htpcbox:0  (local to host htpcbox)
           UUID : b2fee98b:a1be2e00:ebbc39a9:9050312e
         Events : 362446

    Number   Major   Minor   RaidDevice State
       4       8       64        0      active sync   /dev/sde
       1       0        0        1      removed
       2       0        0        2      removed
       5       8       32        3      active sync   /dev/sdc

       1       8        1        -      faulty spare   /dev/sda1
       3       8       49        -      faulty spare   /dev/sdd1
       6       8       81        -      spare   /dev/sdf1
The failure happened right after extracting a ~100GB archive, so it was suggested that I may be dealing with a block failure and could possibly save some data. I'm not terribly familiar with RAID recovery so I'm just parroting what the Internet has told me so far.

Internet Explorer posted:

This is not true. You can always force online a disk in an array and try to get data off of it. It's not a good position to be in and will likely involve lots of corrupt files at best.

If you want to recover your data, your best bet is to stop trying to do anything with the array and the drives. Then use something like DriveImage XML to take raw images (including empty space) of each drive, then use something like Runtime's RAID Reconstructor against those images. You will need a ton of free space for those images. It will take a long time. But this is the best you can do, as long as the drives themselves still power on and mount.

I imagine this is substantially different from the other suggestion I received, to do a sector-level copy of the failed drives and then try to rebuild from the copies? The drives are powered and mounted just fine and I can still read the superblock data on them (see previous post), so I can give it a shot.

redeyes
Sep 14, 2002

by Fluffdaddy
Every person that I personally know who use(d) RAID 5 has lost their entire array. It's literally the dumbest thing someone can do without a complete set of backups.

Internet Explorer
Jun 1, 2005





Richard M Nixon posted:

It's mdraid. This is what I saw immediately after I got an I/O error and before I stopped the array (one spare, two failed, two active drives):
code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdf1[6](S) sda1[1](F) sdd1[3](F) sdc[5] sde[4]
      5860537344 blocks super 1.2 level 5, 1024k chunk, algorithm 2 [4/2] [U__U]

unused devices: <none>


sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jul 22 17:13:28 2011
     Raid Level : raid5
     Array Size : 5860537344 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 1953512448 (1863.01 GiB 2000.40 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Sat Feb 13 23:09:52 2016
          State : clean, FAILED
 Active Devices : 2
Working Devices : 3
 Failed Devices : 2
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 1024K

           Name : htpcbox:0  (local to host htpcbox)
           UUID : b2fee98b:a1be2e00:ebbc39a9:9050312e
         Events : 362446

    Number   Major   Minor   RaidDevice State
       4       8       64        0      active sync   /dev/sde
       1       0        0        1      removed
       2       0        0        2      removed
       5       8       32        3      active sync   /dev/sdc

       1       8        1        -      faulty spare   /dev/sda1
       3       8       49        -      faulty spare   /dev/sdd1
       6       8       81        -      spare   /dev/sdf1
The failure happened right after extracting a ~100GB archive, so it was suggested that I may be dealing with a block failure and could possibly save some data. I'm not terribly familiar with RAID recovery so I'm just parroting what the Internet has told me so far.


I imagine this is substantially different from the other suggestion I received, to do a sector-level copy of the failed drives and then try to rebuild from the copies? The drives are powered and mounted just fine and I can still read the superblock data on them (see previous post), so I can give it a shot.

It's basically the same thing as far as the sector-level copy goes, but the rebuild process of RAID Reconstructor does data-recovery-type analysis while it is looking at the array. I'd assume it supports the file system you're using. Like I said, you'll just need a lot of scratch space: the sector-level images will be the size of your drives, and you'll need space to restore to.

Like everyone said, a little late, but RAID 5 is rarely the right choice these days. Although with 2TB drives I'd say you're on the cusp.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Internet Explorer posted:

This is not true. You can always force online a disk in an array and try to get data off of it. It's not a good position to be in and will likely involve lots of corrupt files at best.

If you want to recover your data, your best bet is to stop trying to do anything with the array and the drives. Then use something like DriveImage XML to take raw images (including empty space) of each drive, then use something like Runtime's RAID Reconstructor against those images. You will need a ton of free space for those images. It will take a long time. But this is the best you can do, as long as the drives themselves still power on and mount.

While this is possible, we need to be realistic. He is unlikely to get any significant portion of his data back in this way if the disks themselves have failed.


Richard M Nixon posted:

Thanks for the information. I'll look at Unraid and SnapRaid to see if they would be better than switching to RAID 6.

One of the reasons I switched to SnapRaid from Unraid is that it allows for multiple parity disks, so I have two-disk redundancy on top of the non-striped layout's failure mitigation. One thing to keep in mind, though, is that these array strategies are not meant for anything near high performance. You will get worse write speed than a native RAID5/6 array, and your read speed will only be as good as the individual disk you are reading from. That is usually perfectly fine for things like media libraries or backup devices, but not great for things that are constantly read from and written to.
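
For reference, the dual-parity setup is only a few lines in snapraid.conf, something like this (paths and disk names are hypothetical examples):
code:
parity    /mnt/parity1/snapraid.parity
2-parity  /mnt/parity2/snapraid.2-parity
content   /var/snapraid/snapraid.content
content   /mnt/disk1/snapraid.content
data d1   /mnt/disk1/
data d2   /mnt/disk2/
data d3   /mnt/disk3/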

Internet Explorer
Jun 1, 2005





Skandranon posted:

While this is possible, we need to be realistic. He is unlikely to get any significant portion of his data back in this way if the disks themselves have failed.


One of the reasons I switched to SnapRaid from Unraid is that it allows for multiple parity disks, so I have two-disk redundancy on top of the non-striped layout's failure mitigation. One thing to keep in mind, though, is that these array strategies are not meant for anything near high performance. You will get worse write speed than a native RAID5/6 array, and your read speed will only be as good as the individual disk you are reading from. That is usually perfectly fine for things like media libraries or backup devices, but not great for things that are constantly read from and written to.

I said he was not in a good situation and at best he'd be able to recover files and have a lot of corrupt files. With the way RAID5 tends to fail during rebuild, it does not necessarily mean the second failed disk is completely failed. I know a bit about what I am talking about and have dealt with data loss situations like this in the past.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Internet Explorer posted:

I said he was not in a good situation and at best he'd be able to recover files and have a lot of corrupt files. With the way RAID5 tends to fail during rebuild, it does not necessarily mean the second failed disk is completely failed. I know a bit about what I am talking about and have dealt with data loss situations like this in the past.

I'm not suggesting you know nothing, I just don't want there to be too much false hope in the air.

Richard M Nixon
Apr 26, 2009

"The greatest honor history can bestow is the title of peacemaker."

Skandranon posted:

I'm not suggesting you know nothing, I just don't want there to be too much false hope in the air.

Hope is in short supply. It's just a few years' worth of media, so it's not irreplaceable, just a huge pain in the rear end I'd like to avoid. Still, it's not worth building a fully redundant array to back everything up, so if it's a loss, it's something I knew was a possibility. It's still really loving weird that two disks failed at the exact same time.

I'll get replacement drives on Tuesday and try the clone and recover idea and post results. I'm trying to learn about how something like unraid will work in comparison to switching to raid6.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Richard M Nixon posted:

Hope is in short supply. It's just a few years' worth of media, so it's not irreplaceable, just a huge pain in the rear end I'd like to avoid. Still, it's not worth building a fully redundant array to back everything up, so if it's a loss, it's something I knew was a possibility. It's still really loving weird that two disks failed at the exact same time.

I'll get replacement drives on Tuesday and try the clone and recover idea and post results. I'm trying to learn about how something like unraid will work in comparison to switching to raid6.

I can supply a quick summary.

RAID5/6 stripe every file at the block level across all disks in the array. This is why recovery is impossible if you lose more disks than you have redundancy for: every disk effectively holds only 1/(N-1) of each file. Each stripe also gets a parity block; RAID4 stores all parity on a dedicated disk, whereas RAID5/6 distribute the parity blocks evenly across all disks.

On the other hand, Unraid/SnapRaid have a dedicated parity drive (similar to RAID4), but instead of the data being striped at the block level, all data disks effectively function as normal drives that could even be removed from the array and read individually. The parity is then computed vertically (basically, there is a parity bit corresponding to bit 0 of every disk, then bit 1, and so forth) and stored on the parity disk. So if any data disk fails, its data can be recomputed by reading all the other disks and the parity. If more disks fail than that, you are basically back to having a number of standalone disks with data on them.
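
A toy illustration of that vertical parity, with single bytes standing in for whole disks:
code:
# XOR parity over three "disks" of one byte each (bash arithmetic).
d1=$(( 0x5A )); d2=$(( 0xC3 )); d3=$(( 0x0F ))
parity=$(( d1 ^ d2 ^ d3 ))            # what the parity disk stores
rebuilt=$(( d1 ^ d3 ^ parity ))       # "lose" d2, rebuild it from the rest
printf 'd2 was 0x%02X, rebuilt as 0x%02X\n' "$d2" "$rebuilt"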

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

Skandranon posted:

I can supply a quick summary.

RAID5/6 stripe every file at the block level across all disks in the array. This is why recovery is impossible if you lose more disks than you have redundancy for: every disk effectively holds only 1/(N-1) of each file. Each stripe also gets a parity block; RAID4 stores all parity on a dedicated disk, whereas RAID5/6 distribute the parity blocks evenly across all disks.

On the other hand, Unraid/SnapRaid have a dedicated parity drive (similar to RAID4), but instead of the data being striped at the block level, all data disks effectively function as normal drives that could even be removed from the array and read individually. The parity is then computed vertically (basically, there is a parity bit corresponding to bit 0 of every disk, then bit 1, and so forth) and stored on the parity disk. So if any data disk fails, its data can be recomputed by reading all the other disks and the parity. If more disks fail than that, you are basically back to having a number of standalone disks with data on them.

Do they have checksumming?

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

PerrineClostermann posted:

Do they have checksumming?

Unraid does not, but snapraid does. Snapraid and unraid are basically RAID4 but on the file level instead of the block level, as I understand them.

Snapraid is a lot more prone to partial data loss than ZFS, mostly because snapraid does parity calculation by snapshots, so new data (often what you'd care about the most in the near future) isn't protected.
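
In practice that means parity is only as fresh as your last sync, e.g. with a scheduled run (a hypothetical cron setup; anything written since the previous sync is unprotected until the next one):
code:
0 3 * * *  /usr/bin/snapraid sync         # recompute parity nightly
0 5 * * 0  /usr/bin/snapraid scrub -p 10  # weekly: verify 10% of the array against checksums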

For a media server the downsides of snapraid don't really matter too much, assuming the media is replaceable. You could even argue that it has benefits: even if three disks in a double-parity array fail, you still have between N-3 and N-5 disks' worth of data (depending on whether the failed disks were parity disks or not; parity data becomes useless if the array is too damaged), and you won't have to restore as much.

If data is important or irreplaceable it should have offsite backups - for consumers that's usually just a cloud backup - and nothing is going to change that.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Desuwa posted:

Unraid does not, but snapraid does. Snapraid and unraid are basically RAID4 but on the file level instead of the block level, as I understand them.

Snapraid is a lot more prone to partial data loss than ZFS, mostly because snapraid does parity calculation by snapshots, so new data (often what you'd care about the most in the near future) isn't protected.

For a media server the downsides of snapraid don't really matter too much, assuming the media is replaceable. You could even argue that it has benefits: even if three disks in a double-parity array fail, you still have between N-3 and N-5 disks' worth of data (depending on whether the failed disks were parity disks or not; parity data becomes useless if the array is too damaged), and you won't have to restore as much.

If data is important or irreplaceable it should have offsite backups - for consumers that's usually just a cloud backup - and nothing is going to change that.

There is an upside to SnapRaid's batched parity (rather than realtime like Unraid): it is a lot easier to move files around between drives, as you can do your organization and THEN initiate the parity rebuild, instead of constantly slamming the parity disk. It also gives you a limited undelete function, restoring files back to the last time parity was calculated.
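
That undelete is just the fix command pointed at a path, something like this (hypothetical file name):
code:
snapraid fix -f movies/some_film.mkv   # restore a deleted file to its state at the last sync
snapraid check                         # read-only verify of the array against parity/checksums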

It's not quite like RAID4, which still stripes the data disks. SnapRaid/Unraid don't do any data striping: files are stored on regularly formatted partitions (ReiserFS for Unraid, multiple filesystems available for SnapRaid), with a parity disk alongside them. The disks can be removed, placed in other computers, and read immediately, which to me is the main feature, as it's a much better recovery model than RAID5/6. It also allows the use of varying disk sizes (the parity disk must be as large as the largest data drive).

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I don't really get a lot of benefit out of ZFS, though it doesn't really cost me anything to use it either.

This recent talk about Unraid and the like has gotten me idly pondering them, because when I run out of space in ZFS, I really miss the flexibility of adding and removing storage that I had back when I was running Windows Home Server a few years ago.

Is there any benefit to having an SSD as the parity drive with these systems?

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Thermopyle posted:

I don't really get a lot of benefit out of ZFS, though it doesn't really cost me anything to use it either.

This recent talk about Unraid and the like has gotten me idly pondering them, because when I run out of space in ZFS, I really miss the flexibility of adding and removing storage that I had back when I was running Windows Home Server a few years ago.

Is there any benefit to having an SSD as the parity drive with these systems?

Not unless you are made of money; as I said, the parity drive MUST be as large as the largest data drive. It would also only increase write speed up to the speed of the data drive being written, assuming you are not bottlenecked on the parity calculation.

However, with Unraid (and I think SnapRaid, though it is less helpful there), you CAN have a cache drive, which is where your writes actually go before being added to the array in a batched process. In Unraid, the cache drive can also be set up to function as a hot spare in case redundancy fails.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Skandranon posted:

Not unless you are made of money; as I said, the parity MUST be as large as the largest data drive.

However, with Unraid (and I think Snapraid, though it is less helpful), you CAN have a cache drive, which is where your writes will actually go, and then in a batched process, be added to the array.

I wasn't talking about a price/performance analysis; I'm just wondering what kind of operations the parity drive bottlenecks.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Thermopyle posted:

I wasn't talking about a price/performance analysis; I'm just wondering what kind of operations the parity drive bottlenecks.

Edited my original reply to elaborate, before I saw this. Specifically, the parity drive only bottlenecks writes to the array; read operations pull only from the drive the file is on. I guess it would also let you rebuild parity from scratch at max speed, which would be limited by the slowest read speed of any data drive.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Skandranon posted:

Edited my original reply to elaborate, before I saw this.

gotcha, thanks.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
So it looks like we've got three 2TB HDDs ordered. They'll supplement my old 2TBs and give us five drives to play with. I'm thinking ZFS with 6TB of storage, in RAID6 (RAIDZ2). I figure an old E6750 Core 2 Duo would be enough to serve mostly old data, but the system would only have 6GB of RAM, and being DDR2, it's not like I can exactly go out and buy more. How insane an idea is this?
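
For what it's worth, the pool creation itself is a one-liner: five 2TB disks in RAIDZ2 survive any two failures and leave ~6TB usable. A sketch with hypothetical device names (real setups should use stable /dev/disk/by-id paths):
code:
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status tank   # verify the vdev layout and health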

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

PerrineClostermann posted:

So it looks like we've got three 2TB HDDs ordered. They'll supplement my old 2TBs and give us five drives to play with. I'm thinking ZFS with 6TB of storage, in RAID6 (RAIDZ2). I figure an old E6750 Core 2 Duo would be enough to serve mostly old data, but the system would only have 6GB of RAM, and being DDR2, it's not like I can exactly go out and buy more. How insane an idea is this?

You should still be able to get DDR2 without too much trouble... either way, sounds pretty normal to me, should be fine.

Vidaeus
Jan 27, 2007

Cats are gonna cat.
I'm looking for backup software for Windows that will do real-time backups of selected folders from the local PC to the NAS on my network. I'd like something fast, lightweight, and free if it exists. I'm using CrashPlan to back up to the cloud, and I know it can do local LAN backups, but it doesn't do 1:1 copies; it converts all the data into some other file format.

kri kri
Jul 18, 2007

lostleaf posted:

I've been looking at the TS140 for a bit. They sometimes show up on Slickdeals with a Xeon for $280. Is the Xeon premium worth it over an i3? The plan is just transcoding 3-4 1080p streams concurrently along with maybe some VM stuff. The VM is just for screwing around; I don't actually have any real use for it.

Also my current storage is going to be adequate for the foreseeable future. The current plan is just to transplant the existing disks into the new nas.

I would just spring for the Xeon. Imagine if you got the i3 and it didn't perform well enough; yeah, you could probably return it, but how much of a pain would that be?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
How is rutorrent able to update RSS feeds for auto download when I don't have the webpage open?
