Generic Monk
Oct 31, 2011

Thermopyle posted:

Well yeah, but I don't see any point to it for the users we're talking about. Home usage type of people want to use RAID-ish solutions because of the redundancy without having to give up half of their storage like you would in a mirroring situation.

i don't think most home users who need more storage even know what a fileserver/NAS is, and even among those who do they're most likely going to be buying a pre-populated synology/equivalent or just plugging a usb hard drive into their router, not worrying about different kinds of RAID levels

if you're techy enough to be using unraid/other NAS-like software you're techy enough to be using freenas/zfs; you've just got to be informed about what you're getting yourself into. i'm a pretty light user with no real need for the zfs data integrity/performance features, but i like the purity of focus, snapshots, the stability that just lets me leave the thing alone for the most part etc. plus i have no real need to expand the capacity any time soon. could prob get most of the functionality except the snapshotting elsewhere but bleh

that being said i wouldn't really recommend freenas to home users just getting into the nas game, partly because of the ui which is a total unintuitive mess (though it's at least faster than it used to be, and the beta ui looks to be slightly less jumbled) and partly because trying to get an answer for a simple question that falls outside the purview of the (decent, tbf) documentation is a total crapshoot. the freenas forums remind me of the linux community-at-large in that everyone posting there seems to either use the thing for work or has built an identity around a certain methodology/technology/'best practice' to the point of dogma, neither of which are particularly useful when you just want a simple question answered and end up with a ton of pontificating. at least you don't have to use a CLI to use ZFS in freenas though, which is nice

e: this is a terrible post to start a new page with! just a relative layman's perspective don't kill me

e2: while i'm here: is there any way to set an expiry for the snapshots freenas makes of the boot volume? earlier this year the usb drive i was running it off died, probably because it was keeping snapshots of every update since i first built the server which filled it up and then thrashed the poor drive to death. i didn't know it did this so now i occasionally clean out the old snapshots after a few months have gone by. any way to automate this at all?
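e3: for future me, something like this cron-able sketch is roughly what i'm picturing - totally untested, it assumes the boot pool is still called freenas-boot, and the per-update boot environments themselves are datasets you'd still clean up via the gui (or beadm if your version ships it) rather than zfs destroy:

code:
#!/bin/sh
# rough sketch: find boot-pool snapshots older than ~90 days
# keep the echo until you trust what it picks
CUTOFF=$(date -v-90d +%s)                        # FreeBSD date syntax
zfs list -Hp -t snapshot -o name,creation -r freenas-boot | \
while read name creation; do
        if [ "$creation" -lt "$CUTOFF" ]; then
                echo "would destroy: $name"      # swap the echo for: zfs destroy "$name"
        fi
done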

Generic Monk fucked around with this message at 16:19 on Dec 19, 2017


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Paul MaudDib posted:

But I suppose you're fixated on the case where someone has a 4-8 drive RAID5/RAID6 pool but also is morally opposed to adding more than one drive at once and also demands redundancy on the single extra drive they're adding. So yes, ZFS does not do that use-case well (yet).

Well, yes, this does describe the typical home user NAS...the very people this conversation started about. It's not a moral opposition, it's just objecting to the cost of adding more storage.

Generic Monk posted:

i don't think most home users who need more storage even know what a fileserver/NAS is, and even among those who do they're most likely going to be buying a pre-populated synology/equivalent or just plugging a usb hard drive into their router, not worrying about different kinds of RAID levels

But this conversation is about using ZFS for your average home user who might choose ZFS or unraid or snapraid....we've already presupposed the level of knowledge here.

Due to hyperbolic discounting, people likely don't weight the inconvenience of storage expansion at some nebulous future date highly enough. Whether that inconvenience is actually big enough to justify unraid/snapraid over ZFS depends on your needs.

I completely agree with you on how great ZFS is and it is exactly why I'm very conflicted about whether to stick with it or not. My major pain point with ZFS for over half a decade has been that storage expansion whilst keeping redundancy costs me over a grand.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Thermopyle posted:

I completely agree with you on how great ZFS is and it is exactly why I'm very conflicted about whether to stick with it or not. My major pain point with ZFS for over half a decade has been that storage expansion whilst keeping redundancy costs me over a grand.

My solution to this has always been to wait for silly HDD deals (like the recent 8TB Easystore) that inevitably come up once or twice a year. Buy enough to double my pool. Sell the old drives on eBay for ~50% of what I paid for the new drives. Keep on hoarding for another 3-4 years until I fill up the new space. Repeat.

Honestly, it ends up being cheaper than doing an ad-hoc system, simply because it pushes me to wait for the sales. Being able to throw an extra drive in there in a pinch would be nice, absolutely, but once you're talking about 4-8 drive systems I figure you should be able to see your data expansion needs coming far enough off to be able to wait for a sale to pop.

SamDabbers
May 26, 2003



DrDork posted:

My solution to this has always been to wait for silly HDD deals (like the recent 8TB Easystore) that inevitably come up once or twice a year. Buy enough to double my pool. Sell the old drives on eBay for ~50% of what I paid for the new drives. Keep on hoarding for another 3-4 years until I fill up the new space. Repeat.

Honestly, it ends up being cheaper than doing an ad-hoc system, simply because it pushes me to wait for the sales. Being able to throw an extra drive in there in a pinch would be nice, absolutely, but once you're talking about 4-8 drive systems I figure you should be able to see your data expansion needs coming far enough off to be able to wait for a sale to pop.

This has been my strategy as well. I've been running single-vdev raidz2 pools with 6-8 drives for years, and am considering switching to multiple mirrored vdevs with the 8TB Easystores I recently picked up for the ease of expansion and fast resilvers without having to drop $1000 every handful of years. The increased IOPS would be nice, but not necessary.

Anybody else gone this route? Has the space/convenience/performance tradeoff been worth it to you?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I was very happy with the Easystores because they let me cut down from an 8x2TB Z2 to a 4x8TB Z1 array--half the drives, double the usable space, similar redundancy. And since I push everything over simple GigE and have a small SSD for cache/scratch space, it's not like IOPS matters much.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

DrDork posted:

My solution to this has always been to wait for silly HDD deals (like the recent 8TB Easystore) that inevitably come up once or twice a year. Buy enough to double my pool. Sell the old drives on eBay for ~50% of what I paid for the new drives. Keep on hoarding for another 3-4 years until I fill up the new space. Repeat.

Honestly, it ends up being cheaper than doing an ad-hoc system, simply because it pushes me to wait for the sales. Being able to throw an extra drive in there in a pinch would be nice, absolutely, but once you're talking about 4-8 drive systems I figure you should be able to see your data expansion needs coming far enough off to be able to wait for a sale to pop.

Yeah, this is what I've been doing for the last couple of years. I'm not confident it's cheaper than if I had just waited until I needed the extra storage, because hard drive prices just drop naturally as tech progresses, but it's certainly possible.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



SamDabbers posted:

This has been my strategy as well. I've been running single-vdev raidz2 pools with 6-8 drives for years, and am considering switching to multiple mirrored vdevs with the 8TB Easystores I recently picked up for the ease of expansion and fast resilvers without having to drop $1000 every handful of years. The increased IOPS would be nice, but not necessary.

Anybody else gone this route? Has the space/convenience/performance tradeoff been worth it to you?

Do you have a link to a howto for this? My RAIDZ setups (all two) have been very simplistic and I've only expanded one of them once by buying new disks, but i'm having trouble picturing this mirrored vdev structure you guys are on about.

IOwnCalculus
Apr 2, 2003





Mirrored vdevs are similar to a RAID10. The example right at the top of this page shows what that would look like. You would basically create a new pool with one mirrored vdev, then keep adding new mirrored vdevs until you get all of the drives in.

This works with any vdev type. I have my main server set up as two RAIDZ1 vdevs so I can add capacity by swapping out four drives at a time.
code:
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 15h34m with 0 errors on Sun Dec 17 17:34:46 2017
config:

        NAME                              STATE     READ WRITE CKSUM
        tank                              ONLINE       0     0     0
          raidz1-0                        ONLINE       0     0     0
            ata-TOSHIBA_HDWE150_          ONLINE       0     0     0
            ata-TOSHIBA_HDWE150_          ONLINE       0     0     0
            ata-TOSHIBA_HDWE150_          ONLINE       0     0     0
            ata-TOSHIBA_HDWE150_          ONLINE       0     0     0
          raidz1-1                        ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-  ONLINE       0     0     0
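If you were starting fresh with mirrors instead, the commands are roughly this - device names are placeholders, use your own by-id/ada names:

code:
zpool create tank mirror ata-DRIVE_A ata-DRIVE_B     # first mirrored pair
zpool add tank mirror ata-DRIVE_C ata-DRIVE_D        # add another pair whenever you buy drives
zpool status tank                                    # shows one mirror-N vdev per pair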

Mr Shiny Pants
Nov 12, 2012

Munkeymon posted:

Do you have a link to a howto for this? My RAIDZ setups (all two) have been very simplistic and I've only expanded one of them once by buying new disks, but i'm having trouble picturing this mirrored vdev structure you guys are on about.

ZFS stripes across the vdevs. The catch with a pool of two mirrored vdevs (4 drives) compared to a 4-drive RAIDZ2 is that if you lose the wrong two drives - both belonging to the same vdev - you lose the whole pool.

BlankSystemDaemon
Mar 13, 2009



IOwnCalculus posted:

Mirrored vdevs are similar to a RAID10. The example right at the top of this page shows what that would look like. You would basically create a new pool with one mirrored vdev, then keep adding new mirrored vdevs until you get all of the drives in.
How do you deal with the downsides of what's essentially raid50 with 5 disks, given that ZFS can't rebalance if you add additional disks, that ZFS can't wear-balance future writes, and that you'll lose the whole pool if you lose any two disks in the same vdev (which becomes increasingly likely as the older drives in the pool keep getting older)? Or do you just not worry about it, because you have verifiable backups and/or the data is easily replaceable? Just wondering - if you can live with those downsides, it's one of the better ways to get around the caveats of expanding ZFS, until raidz expansion gets added.

IOwnCalculus
Apr 2, 2003





The server in question is a rackmount I have stashed at work and only has eight 3.5" drive bays, so I have no room to add more drives - raidz expansion won't help me there. I back up everything on it to Crashplan (though I'll need to figure out a new solution in... two years or so) and I also back up the poo poo I can't replace to my old fileserver at home. I'm actually considering reconfiguring that to one of the more flexible storage solutions being discussed here since that box has a lot more drive bays and a lot more variety in drive sizes.

Because I like to live dangerously, in that old server at home, I have one pool set up with two 3-drive raidz vdevs... and one with five drives with zero redundancy, including two 500GB and one 200GB :getin:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

D. Ebdrup posted:

How do you deal with the downsides of what's essentially raid50 with 5 disks, given that ZFS can't rebalance if you add additional disks, that ZFS can't wear-balance future writes, and that you'll lose the whole pool if you lose any two disks in the same vdev (which becomes increasingly likely as the older drives in the pool keep getting older)? Or do you just not worry about it, because you have verifiable backups and/or the data is easily replaceable? Just wondering - if you can live with those downsides, it's one of the better ways to get around the caveats of expanding ZFS, until raidz expansion gets added.

It's not actually RAID-10/RAID-50 in the sense that there is a RAID-0 with literal stripes going over the two vdevs, think of it as a LVM spanned volume with two physical volumes underneath that happen to be RAID-5 arrays. ZFS will naturally attempt to spread blocks over each vdev in a balanced fashion for performance reasons. Yes, once you add a vdev it's now unbalanced. For volumes that consist of mostly static data (MY ANIMES! :bahgawd:) this basically means that your worst-case scenario is the IOPS of a single 4-disk RAIDZ1 vdev (in this example), but if you naturally churn data to a reasonable extent then ZFS will start rebalancing and your performance will eventually approach the IOPS of a pair of 4-disk vdevs (or whatever natural IOPS your pool provides in total). And even if you don't, that level of performance is probably not a big deal for your animes.

Yes, if you lose a vdev you lose the whole pool. If you want to keep your pool while hot-swapping replacements, you can replace one disk at a time from a vdev (assuming RAIDZ1) and then resilver it; repeat N times for a vdev containing N drives.

The other thing to note is that ZFS scrubbing is really good at flushing out failing disks before they actually hard crash, particularly if you are using copies or RAIDZ1/2/3. Obviously it's always possible for a drive to just have a hemorrhage and keel over instantly dead, but usually you do get warning via SMART and data errors if you have the tools in place to watch for it, and ZFS is good at watching for it. That's one reason I say that it's worth using ZFS even if you don't use any of its other fancy capabilities. Just scrubbing once a month is a fantastic defense and ZFS is designed to do that while online/available, unlike tools like fsck.
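For anyone not on FreeNAS (which schedules scrubs from the GUI), the manual equivalent is just a cron job - rough sketch, the pool name is a placeholder:

code:
# kick off a scrub by hand and watch the per-device READ/WRITE/CKSUM counters
zpool scrub tank
zpool status -v tank

# or schedule it monthly from root's crontab, e.g. 3am on the 1st:
# 0 3 1 * * /sbin/zpool scrub tank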

Paul MaudDib fucked around with this message at 00:30 on Dec 20, 2017

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

It's not actually RAID-10/RAID-50 in the sense that there is a RAID-0 with literal stripes going over the two vdevs, think of it as a LVM spanned volume with two physical volumes underneath that happen to be RAID-5 arrays. ZFS will naturally attempt to spread blocks over each vdev in a balanced fashion for performance reasons. Yes, once you add a vdev it's now unbalanced. For volumes that consist of mostly static data (MY ANIMES! :bahgawd:) this basically means that your worst-case scenario is the IOPS of a single 4-disk RAIDZ1 vdev (in this example), but if you naturally churn data to a reasonable extent then ZFS will start rebalancing and your performance will eventually approach the IOPS of a pair of 4-disk vdevs (or whatever natural IOPS your pool provides in total). And even if you don't, that level of performance is probably not a big deal for your animes.

Yes, if you lose a vdev you lose the whole pool. If you want to keep your pool while hot-swapping replacements, you can replace one disk at a time from a vdev (assuming RAIDZ1) and then resilver it; repeat N times for a vdev containing N drives.

The other thing to note is that ZFS scrubbing is really good at flushing out failing disks before they actually hard crash, particularly if you are using copies or RAIDZ1/2/3. Obviously it's always possible for a drive to just have a hemorrhage and keel over instantly dead, but usually you do get warning via SMART and data errors if you have the tools in place to watch for it, and ZFS is good at watching for it. That's one reason I say that it's worth using ZFS even if you don't use any of its other fancy capabilities. Just scrubbing once a month is a fantastic defense and ZFS is designed to do that while online/available, unlike tools like fsck.
Nothing you wrote is news to me, so I'm going to assume you're writing it for the sake of anyone else. That said, ZFS isn't magic and 'naturally attempting' isn't something that it does. If a pool has multiple vdevs, and none of them are completely full, ZFS will stripe new writes across vdevs. My point was that there is no wear-leveling to account for individual drive or vdev age.
I'm not sure how ZFS scrubbing "flushes out failing disks" - Can you describe the mechanism by which this happens, and how using ditto blocks or stripe+parity blocks over mirrored blocks make a difference? Or show me the documentation for it, because I'm not finding that either.

Additionally, UFS on FreeBSD has background fsck, a feature introduced in 2001 by Kirk McKusick, although it requires soft-updates. UFS2 over the years added journaling on top of soft-updates (which it does by default), and snapshots.

It's also interesting to note that UFS snapshots weren't quite as instantaneous as those in ZFS.

BlankSystemDaemon fucked around with this message at 12:38 on Dec 20, 2017

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord

garfield hentai posted:

Are devices like the Seagate personal cloud any good?

I have had horrible experiences with the lower-end Seagate Business NAS devices. Like, the unit failed, the RMA replacement came in, and that one failed a few weeks later. I had great luck with a Drobo 5N before I took the FreeNAS plunge.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

D. Ebdrup posted:

Nothing you wrote is news to me, so I'm going to assume you're writing it for the sake of anyone else

Again, you're the guy who thinks scrub and checksum don't work on single disks, so yeah, I'm not going to give you a ton of benefit of the doubt when you say something that is subtly wrong. Most likely that subtly-wrong part is actually just being subtle about deeper misunderstandings of underlying concepts.

D. Ebdrup posted:

I'm not sure how ZFS scrubbing "flushes out failing disks" - Can you describe the mechanism by which this happens, and how using ditto blocks or stripe+parity blocks over mirrored blocks make a difference? Or show me the documentation for it, because I'm not finding that either.

For example, questions like this, which show a lack of basic understanding of how disk failures work. But OK, I can google it for you.

General background: https://www.backblaze.com/blog/what-smart-stats-indicate-hard-drive-failures/

ZFS-specific: https://docs.joyent.com/private-cloud/troubleshooting/disk-replacement

(I can't believe a purported ZFS expert would actually be asking for a source on "near-failing drives have higher-than-normal levels of bit-rot and SMART error counts" but here we are!)

D. Ebdrup posted:

Additionally, UFS on FreeBSD has background fsck, a feature introduced in 2001 by Kirk McKusick, although it requires soft-updates. UFS2 over the years added journaling on top of soft-updates (which it does by default), and snapshots

Does fsck even check the data itself? Pretty sure it's purely a metadata thing, right?

Paul MaudDib fucked around with this message at 18:42 on Dec 20, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

H2SO4 posted:

I have had horrible experiences with the lower-end Seagate Business NAS devices. Like, the unit failed, the RMA replacement came in, and that one failed a few weeks later. I had great luck with a Drobo 5N before I took the FreeNAS plunge.

Seagate products are garbage and when you eliminate them from your home/business you will have a blatantly obvious reduction in failure rates. Somehow this is still a controversial concept for some people in 2017 even after like the tenth series of Seagate products that fail at 10-100x the rate of their competitors.

Like, if you really love Seagate then go ahead and kiss them right on the mouth but I'll tell you right now you're going to get herpes.

redeyes
Sep 14, 2002

by Fluffdaddy
Totally agree. In my small shop like 70% of failed drives are Seagate.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
The Define R6 holds 11 drives plus two 2.5" trays on the back.

https://www.youtube.com/watch?v=LkyfpSXb6W0

Yowzah, that's a good option for a home NAS.

redeyes
Sep 14, 2002

by Fluffdaddy
It's really good looking for sure. I have this problem building NASes without hotswap stuff. Just doesn't work for my brain.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Weird thing is, there's no way to buy extra drive cages for it yet. So that's a bummer, also it's expensive.

Hopefully a cheaper, non-glass sided, full drive bay option will come out.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Just shucked 8 easystores and 7 of them were actual red labels...the 8th was a white label but it's got TLER and the like so I'm fine with it.
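If anyone wants to verify that on their own shucked drives, smartctl will read the error recovery timeouts - rough sketch, the device path is a placeholder and not every white label reports it:

code:
smartctl -l scterc /dev/da3            # read the SCT ERC (TLER) timeouts
smartctl -l scterc,70,70 /dev/da3      # set them to 7.0 seconds if the drive supports it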



Matt Zerella posted:

The Define R6 holds 11 drives plus two 2.5" trays on the back.

https://www.youtube.com/watch?v=LkyfpSXb6W0

Yowzah, that's a good option for a home NAS.

That's really nice looking.

I remember back in the day when 11 drive bays were enough for me. I've now got 23 drives in my case! There's not really much available to hold that many drives that looks nice and isn't a bajillion dollars...which is one of the reasons my server is stuck in the back of my closet now.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

Again, you're the guy who thinks scrub and checksum don't work on single disks, so yeah, I'm not going to give you a ton of benefit of the doubt when you say something that is subtly wrong. Most likely that subtly-wrong part is actually just being subtle about deeper misunderstandings of underlying concepts.


For example, questions like this, which show a lack of basic understanding of how disk failures work. But OK, I can google it for you.

General background: https://www.backblaze.com/blog/what-smart-stats-indicate-hard-drive-failures/

ZFS-specific: https://docs.joyent.com/private-cloud/troubleshooting/disk-replacement

(I can't believe a purported ZFS expert would actually be asking for a source on "near-failing drives have higher-than-normal levels of bit-rot and SMART error counts" but here we are!)


Does fsck even check the data itself? Pretty sure it's purely a metadata thing, right?
Show me where I made the claim that scrubs and checksums don't work on a single disk. I made a specific claim that ZFS self-healing (which is what ZFS uses checksumming and scrubbing of not just metadata but data for) won't work without copies=N where N ≥ 2, but you apparently don't read what people write.

Did you actually read the Backblaze link, and notice how none of it has anything to do with ZFS specifically? Any filesystem which does read/write operations, i.e. all of them, would give the same indications to anyone who's actually charting those SMART attributes and doing periodic disk checking using any number of tools such as long SMART tests, zfs scrub, fsck, or just using the data that's on there. There's that thing about reading comprehension again.

Nothing about Joyent's documentation describes how ZFS is special in this regard, either. I can't see why you link it, so it seems like you didn't read it, since it just shows how to replace a disk on Illumos.

A brief overview of what fsck does, on everything from FFS (as well as derivatives like UFS, EXT and many others), can be found in the man page for fsck_ffs, with the caveat that journaling and soft-updates, which most of its derivatives have added, provide additional ways of avoiding a fsck.

ZFS is the revolutionary filesystem because of the way it does things differently than FFS and derivatives, and it did it in 2004-2005 (technically 2003, since that's around when Sun started using it internally). The only thing that comes close, which predates it, is the Log-structured File System from Berkeley, which was in BSD and is still supported in NetBSD, where it's barely being kept from bitrotting.
To expand a bit on the idea of something being revolutionary in technology, very few of the things we do in IT are. For storage specifically, Hammer, BTRFS (disregarding all the other issues with that), bcachefs and all these other filesystems came after ZFS. Expanded further, ideas like virtualization, containers, capabilities, and all sorts of things that we've been doing in the past 10-20 years all originate back in the 70s, and some even go back to the 60s.

(I never claimed to be an expert on ZFS, but at this point I'm not sure you're going to read this, so I'm just gonna shrug and move on.)

BlankSystemDaemon fucked around with this message at 11:00 on Dec 21, 2017

Furism
Feb 21, 2006

Live long and headbang
My father just bought a two-bay Synology; that'll be fine for him and I'll set it up for him, but I wanted to double check whether the disks of choice are still the WD Red series, or if something else is recommended now. He's mostly going to use it for streaming totally legal movies to Kodi (over SMB) and to back up important files (I'll also set that up, using Synology's own software even if it's not the best - that's simple enough for a 60-year-old). No need for high performance, just reliability.

Thanks Ants
May 21, 2004

#essereFerrari


WD Reds are fine. Depending on your Kodi box you might want to use NFS rather than SMB.
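On the Synology side that's just enabling NFS under File Services and adding an NFS rule to the shared folder; in Kodi the source then ends up looking something like this (IP and share name are made up):

code:
# Kodi: Videos > Files > Add videos... > Browse > Add network location... > NFS
# the resulting source path looks like:
nfs://192.168.1.50/volume1/video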

Greatest Living Man
Jul 22, 2005

ask President Obama
I keep getting weird write errors and critical errors on one of my USB boot drives, and then inevitably the system crumbles even though they're mirrored. Naturally, this time it happened while I was on vacation and had decided to upgrade from 11.0 to 11.1 remotely :downs:


This has happened a few too many times with different drives, so I'm switching over to this setup (mirrored) for my bootdisk:


I don't think there are any enterprise (LSI) mSATA HBAs, but I've been using this Vantec one for a mirrored jail disk for a while and it seems to work.

Greatest Living Man fucked around with this message at 21:00 on Dec 21, 2017

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

D. Ebdrup posted:

You forgot to mention that the only way to make checksums, scrubbing and more importantly self-healing have any effect on a single-disk ZFS setup is to set copies to 2 or higher.

You should go back and read your own posts when someone is making claims about what you posted that you don't agree with. Sometimes it's just a simple typo, and it can be corrected instead of escalating to name-calling. I did last time someone was making baseless claims about what I was posting.

Desuwa fucked around with this message at 23:50 on Dec 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I commented the specific bit at the time, but I'll do it again just to save some trouble.

D. Ebdrup posted:

You forgot to mention that the only way to make checksums, scrubbing and more importantly self-healing have any effect on a single-disk ZFS setup is to set copies to 2 or higher.
Ditto blocks are what make it possible, at the cost of effectively doubling (or tripling, quadrupling, et cetera, depending on what you set copies to) diskspace requirements.

You absolutely do not need to have disk-level redundancy or copies=2 to make checksumming or scrubbing work. You've since retconned this to "you need those for self-healing", which is correct, but that's not what you said.

And, these are still good things even if you don't have redundancy to recover from them. Would you rather lose one block of your animes, or not know about it and have the whole drive fail?
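To make the distinction concrete, here's a rough single-disk sketch (pool, dataset, and device names are placeholders):

code:
zpool create single /dev/ada1          # one-disk pool: checksums + scrub still detect corruption
zfs set copies=2 single/important      # ditto blocks: now ZFS can also repair it, at 2x the space
zpool scrub single
zpool status -v single                 # the CKSUM column shows anything the scrub tripped over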

D. Ebdrup posted:

Did you actually read the Backblaze link, and notice how none of it has anything to do with ZFS specifically? Any filesystem which does read/write operations, i.e. all of them, would give the same indications to anyone who's actually charting those SMART attributes and doing periodic disk checking using any number of tools such as long SMART tests, zfs scrub, fsck, or just using the data that's on there. There's that thing about reading comprehension again.

Like I said this is background stuff because you're asking for citations for basics like "SMART errors and individual failing blocks are often precursors to total drive failure".

D. Ebdrup posted:

Nothing about Joyent's documentation describes how ZFS is special in this regard, either. I can't see why you link it, so it seems like you didn't read it, since it just shows how to replace a disk on Illumos.

Specifically:

quote:

Checksum errors can occur transiently on individual disks or across multiple disks. The most likely culprits are bit rot or transient storage subsystem errors - oddities like signal loss due to solar flares and so on.

With ZFS, they are not of much concern, but some degree of preventative maintenance is necessary to prevent a failure from accumulation.

From time to time you may see zpool status output similar to this:


        NAME        STATE     READ WRITE CKSUM
        zones       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0    23
            c1t1d0  ONLINE       0     0     0


Note the "23" in the CKSUM column.

If this number is significantly large or growing rapidly, the drive is likely in a "pre-failure" state and will fail soon, and is otherwise (in this case) potentially compromising the redundancy of the VDEV.

One thing to make note of is that checksum errors on individual drives, from time to time, is normal and expected behavior (if not optimal). So are many errors on single drives which are about to fail. Many checksum failures across multiple drives can be indicative of a significant storage subsystem problem: a damaged cable, a faulty HBA, or even power problems. If this is noticed, consider contacting Support for assistance with identification.

My point isn't that ZFS is magic but it's a hell of a lot more proactive about data integrity than most filesystems. If your alternative choice of filesystem has a similar mechanism for scrubbing all of the data blocks (not just metadata) then go hog wild. That is the part that tends to flush out drive failures early, because failing drives often tend to have some bitrot before they really go, and ZFS is good at catching it systematically instead of ad-hoc. Based on my experiences with fsck, chkdsk, and similar tools, they usually complete in a matter of minutes, so I'm going to doubt they're actually hitting all the data blocks, though.

Whatever though, it's a dumb derail. ZFS is good but it certainly doesn't rule out other filesystems also being good, by any means. I do think ZFS is pretty much the default choice though, and you'd want to have a good reason to use something else.

It's kind of laughable to mention HAMMER in the same breath as ZFS. I don't care how good Matt Dillon is; he's a one-man show, and HAMMER charitably might see one billionth of the usage that ZFS sees - it's literally only available on DragonFly BSD, and that's a hyper-niche choice even among the BSDs. btrfs is getting there but still has some use cases where its stability is nowhere close to ZFS, i.e. anything to do with its RAID is a dumpster fire. It'll get there in time. Really the actual serious alternative would be XFS, but it's more in the category of "run fast and data integrity can get hosed".

Paul MaudDib fucked around with this message at 04:39 on Dec 22, 2017

Kreez
Oct 18, 2003

The second onboard NIC on my Unraid file server just died (first one died 2 years ago), and I'm going to use this as a good reason to stop just using random old terribly inefficient hardware in my fileserver. I've been researching proper server-grade (if only the value lines) hardware for the first time, and wondering how much power I need.

I need to serve files (6 storage drives and a cache drive), and run Deluge for both downloading and seeding several hundred torrents.

My current setup is fine as a file server, but bogs right down if Deluge is doing much of anything. I generally can't even access the configuration page if Deluge is downloading at above 10MB/s, let alone watch any video stored on the server.

I like the idea of the SoC Atom (Aterton? Denverton?) boards from Supermicro or ASRock, though I'm not sure if they are powerful enough for my needs as all the reviews and benchmarks seem to focus on ability to run VMs.

Or should I move up to an entry level Supermicro X9/X10/X11 board and throw an i3 in there?

I just want to be able to pull super high bitrate video off the server, potentially while maxing out my 150mbit internet connection downloading/uploading stuff, without worrying about stuff stuttering and timing out.

Thoughts?

Kreez fucked around with this message at 01:40 on Dec 22, 2017

redeyes
Sep 14, 2002

by Fluffdaddy
What kind of NIC was it, out of curiosity?

Kreez
Oct 18, 2003

Whatever the onboard controller is for an Asus P5B Deluxe motherboard from 2006. Pretty much been running 24/7 for 11 years. At least I assume it's the NIC, I've got a cheapo PCI card coming tonight from Amazon to get me back up and running temporarily, hopefully it's not something else...

redeyes
Sep 14, 2002

by Fluffdaddy

Kreez posted:

Whatever the onboard controller is for an Asus P5B Deluxe motherboard from 2006. Pretty much been running 24/7 for 11 years. At least I assume it's the NIC, I've got a cheapo PCI card coming tonight from Amazon to get me back up and running temporarily, hopefully it's not something else...

Ah, yeah, Marvell or whatever. 11 years isn't too bad for a NIC.

Ziploc
Sep 19, 2006
MX-5
Ugh. Why doesn't unraid support mirrored boot drives? And if you lose a boot drive, you have to wait for support to let you activate another one?

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Use any other OS/hypervisor. Jesus.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Ziploc posted:

Ugh. Why doesn't unraid support mirrored boot drives? And if you lose a boot drive, you have to wait for support to let you activate another one?

It boots off a USB drive. Get a good one, and maybe a spare? If the drive dies they'll send you a new key pretty quickly, but only once a year.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

A lot of right words about ZFS being really good.
On the advice of Desuwa I went back and carefully reread what I'd written (though I'd skimmed before), and as it turns out there is something that is so ambiguously worded that I didn't even catch it myself when initially posting it or re-reading it. I did mean it with reference to self-healing, but that wasn't evident - that's merely an explanation, not an apology. I'm sorry for not being more explicit.

I'd also completely forgotten about the checksum column in the output you mentioned, despite both charting and setting up email alerts for it (I just re-checked). There's a lesson in this, both about hubris and about not thinking you know more than you actually do. I'll try and learn both.

Since this derail has gone on long enough, can I ask whether you're on IRC somewhere? I do have things I want to bounce off you regarding Hammer, BTRFS and XFS, but they don't belong here.

Kreez posted:

Thoughts?
The new Denverton boards, especially the high-core-count versions, beat out at least part of the Xeon D-1500 series in multithreaded benchmarks. Torrenting isn't, for the most part, CPU intensive but is more about disk I/O and/or caching depending on configuration, and the Denverton SoCs have SATA 6Gbps or SAS2/3 connectors that deliver more bandwidth than any platter drive is currently capable of handling, so it shouldn't be a problem.
Watching videos (not transcoding) is a question of the bitrate of the streaming data, but since that's measured in Mbps, it's not really a worry for a platform that can easily saturate a 1Gbps NIC (as a benchmark for 1Gbps saturation of Samba traffic, my dual-core AMD N36L at 1.3GHz can do it - so anything newer with higher IPC, more cores, and higher frequency can do it too).
Modern NAS hard drives typically also have their firmware optimized somewhat for streaming workloads, so generally you should be set there too if going with a Denverton.
It's also worth noting that the QuickAssist-enabled Denverton SoCs, which at least Supermicro make, can offload SHA256 and AES-GCM/XTS (optionally used for ZFS checksums and encryption) once the driver has been added to the OS of your choice and the offload is available to the kernel crypto framework (Intel commits QuickAssist drivers to both Linux and FreeBSD - unfortunately there's no word on Illumos yet).

Matt Zerella posted:

It boots off a USB drive. Get a good one, and maybe a spare? If the drive dies they'll send you a new key pretty quickly, but only once a year.
SLC flash storage on a USB stick is an option, but it's terribly expensive and not easy to source.
Why do they insist on treating it like an appliance OS? Who benefits from this, unless they're planning on doing a support tier above that where you pay for mirrored SSDs?

BlankSystemDaemon fucked around with this message at 11:05 on Dec 22, 2017

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
No idea why they do it, but it loads into RAM and after initial config you rarely need access to it. Docker runs off your drives.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Do any of these eSATA enclosures work with ZFS? I just have no clue if they can just present the raw drives to the OS or how they work.

redeyes
Sep 14, 2002

by Fluffdaddy

Ziploc posted:

Ugh. Why doesn't unraid support mirrored boot drives? And if you lose a boot drive, you have to wait for support to let you activate another one?

I was thinking about this. USB drives are not even close to reliable enough to run as a NAS boot drive. Any other options?

redeyes fucked around with this message at 16:15 on Dec 22, 2017

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

redeyes posted:

I was thinking about this. USB drives are not even close to reliable enough to run as a NAS boot drive. Any other options?

They do OK so long as there are no writes and limited reads. I've had two drives go tits up from FreeNAS. It's one of the reasons I haven't switched. With FreeNAS, I just have a backup script that writes the config file to one of the shares. If it dies, I just replace the drive, copy the FreeNAS install, and update the config. It's an hour of downtime, max.
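The script itself is nothing fancy - roughly this, where the db path is the usual FreeNAS 9/11 location and the destination share is obviously made up:

code:
#!/bin/sh
# nightly copy of the FreeNAS config database out to a pool share
cp /data/freenas-v1.db "/mnt/tank/backups/freenas-config-$(date +%Y%m%d).db"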


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

D. Ebdrup posted:

On the advice of Desuwa I went back and carefully reread what I'd written (though I'd skimmed before), and as it turns out there is something that is so ambiguously worded that I didn't even catch it myself when initially posting it or re-reading it. I did mean it with reference to self-healing, but that wasn't evident - that's merely an explanation, not an apology. I'm sorry for not being more explicit.

I'd also completely forgotten about the checksum column in the output you mentioned, despite both charting and setting up email alerts for it (I just re-checked). There's a lesson in this, both about hubris and about not thinking you know more than you actually do. I'll try and learn both.

Since this derail has gone on long enough, can I ask whether you're on IRC somewhere? I do have things I want to bounce off you regarding Hammer, BTRFS and XFS, but they don't belong here.

Takes two to fight, and I should have been less combative about it too. I'm sorry as well.

I'm not really on IRC but you can PM me. And that kind of stuff is generally fine here, I think.

Thermopyle posted:

Do any of these eSATA enclosures work with ZFS? I just have no clue if they can just present the raw drives to the OS or how they work.

Usually you can run them in "JBOD mode", which just presents the raw drives to the OS. The one I have just has a set of DIP switches on the back that toggle which mode it uses; others may vary.

Paul MaudDib fucked around with this message at 17:06 on Dec 22, 2017
