|
Thermopyle posted:Well yeah, but I don't see any point to it for the users we're talking about. Home usage type of people want to use RAID-ish solutions because of the redundancy without having to give up half of their storage like you would in a mirroring situation.

i don't think most home users who need more storage even know what a fileserver/NAS is, and even among those who do they're most likely going to be buying a pre-populated synology/equivalent or just plugging a usb hard drive into their router, not worrying about different kinds of RAID levels. if you're techy enough to be using unraid/other NAS-like software you're techy enough to be using freenas/zfs; you've just got to be informed about what you're getting yourself into. i'm a pretty light user with no real need for the zfs data integrity/performance features, but i like the purity of focus, snapshots, the stability that just lets me leave the thing alone for the most part etc. plus i have no real need to expand the capacity any time soon. could prob get most of the functionality except the snapshotting elsewhere but bleh. that being said i wouldn't really recommend freenas to home users just getting into the nas game, partly because of the ui which is a total unintuitive mess (though it's at least faster than it used to be, and the beta ui looks to be slightly less jumbled) and partly because trying to get an answer for a simple question that falls outside the purview of the (decent, tbf) documentation is a total crapshoot. the freenas forums remind me of the linux community-at-large in that everyone posting there seems to either use the thing for work or has built an identity around a certain methodology/technology/'best practice' to the point of dogma, neither of which are particularly useful when you just want a simple question answered and end up with a ton of pontificating. 
at least you don't have to use a CLI to use ZFS in freenas though, which is nice e: this is a terrible post to start a new page with! just a relative layman's perspective don't kill me e2: while i'm here: is there any way to set an expiry for the snapshots freenas makes of the boot volume? earlier this year the usb drive i was running it off died, probably because it was keeping snapshots of every update since i first built the server which filled it up and then thrashed the poor drive to death. i didn't know it did this so now i occasionally clean out the old snapshots after a few months have gone by. any way to automate this at all? Generic Monk fucked around with this message at 16:19 on Dec 19, 2017 |
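One way to automate that cleanup is a small pruning script run from cron. This is only a sketch: the `freenas-boot` pool name and the 90-day retention window are assumptions, and the helper just filters a snapshot list by age so you can review before destroying anything.

```shell
# prune_old reads "name<TAB>creation-epoch" pairs on stdin and prints the
# names whose creation time is older than the given cutoff (a Unix timestamp).
prune_old() {
  cutoff=$1
  while read -r name created; do
    [ "$created" -lt "$cutoff" ] && echo "$name"
  done
}

# On the server you would feed it real snapshots and destroy what it prints,
# e.g. from a cron job (pool/dataset name is an assumption):
#   zfs list -Hp -t snapshot -o name,creation -r freenas-boot \
#     | prune_old $(( $(date +%s) - 90*86400 )) \
#     | xargs -n1 zfs destroy
```

The `-Hp` flags make `zfs list` emit tab-separated, script-friendly output with the creation time as an epoch number, which is what the helper expects.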
# ? Dec 19, 2017 16:13 |
|
Paul MaudDib posted:But I suppose you're fixated on the case where someone has a 4-8 drive RAID5/RAID6 pool but also is morally opposed to adding more than one drive at once and also demands redundancy on the single extra drive they're adding. So yes, ZFS does not do that use-case well (yet).

Well, yes, this does describe the typical home user NAS...the very people this conversation started about. It's not a moral opposition, it's just objecting to the cost of adding more storage.

Generic Monk posted:i don't think most home users who need more storage even know what a fileserver/NAS is, and even among those who do they're most likely going to be buying a pre-populated synology/equivalent or just plugging a usb hard drive into their router, not worrying about different kinds of RAID levels

But this conversation is about using ZFS for your average home user who might choose ZFS or unraid or snapraid....we've already presupposed the level of knowledge here. Due to hyperbolic discounting people likely don't weigh the inconvenience of storage expansion sometime in the nebulous future heavily enough. Whether that actual inconvenience is high enough to go with unraid/snapraid instead of ZFS depends on your actual needs. I completely agree with you on how great ZFS is and it is exactly why I'm very conflicted whether to stick with it or not. My major pain point with ZFS for over half a decade has been that storage expansion whilst keeping redundancy costs me over a grand.
|
# ? Dec 19, 2017 16:32 |
|
Thermopyle posted:I completely agree with you on how great ZFS is and it is exactly why I'm very conflicted whether to stick with it or not. My major pain point with ZFS for over half a decade has been storage expansion whilst keeping redundancy costs me over a grand. My solution to this has always been to wait for silly HDD deals (like the recent 8TB Easystore) that inevitably come up once or twice a year. Buy enough to double my pool. Sell the old drives on eBay for ~50% of what I paid for the new drives. Keep on hoarding for another 3-4 years until I fill up the new space. Repeat. Honestly, it ends up being cheaper than doing an ad-hoc system, simply because it pushes me to wait for the sales. Being able to throw an extra drive in there in a pinch would be nice, absolutely, but once you're talking about 4-8 drive systems I figure you should be able to see your data expansion needs coming far enough off to be able to wait for a sale to pop.
|
# ? Dec 19, 2017 16:59 |
|
DrDork posted:My solution to this has always been to wait for silly HDD deals (like the recent 8TB Easystore) that inevitably come up once or twice a year. Buy enough to double my pool. Sell the old drives on eBay for ~50% of what I paid for the new drives. Keep on hoarding for another 3-4 years until I fill up the new space. Repeat. This has been my strategy as well. I've been running single-vdev raidz2 pools with 6-8 drives for years, and am considering switching to multiple mirrored vdevs with the 8TB Easystores I recently picked up for the ease of expansion and fast resilvers without having to drop $1000 every handful of years. The increased IOPS would be nice, but not necessary. Anybody else gone this route? Has the space/convenience/performance tradeoff been worth it to you?
|
# ? Dec 19, 2017 17:16 |
|
I was very happy with the Easystores because they let me cut down from an 8x2TB Z2 to a 4x8TB Z1 array--half the drives, double the usable space, similar redundancy. And since I push everything over simple GigE and have a small SSD for cache/scratch space, it's not like IOPS matters much.
|
# ? Dec 19, 2017 17:23 |
|
DrDork posted:My solution to this has always been to wait for silly HDD deals (like the recent 8TB Easystore) that inevitably come up once or twice a year. Buy enough to double my pool. Sell the old drives on eBay for ~50% of what I paid for the new drives. Keep on hoarding for another 3-4 years until I fill up the new space. Repeat.

Yeah, this is what I've been doing for the last couple of years. I'm not confident it's cheaper than if I had just waited until I needed the extra storage, because hard drive prices just drop naturally as tech progresses, but it's certainly possible.
|
# ? Dec 19, 2017 17:55 |
|
SamDabbers posted:This has been my strategy as well. I've been running single-vdev raidz2 pools with 6-8 drives for years, and am considering switching to multiple mirrored vdevs with the 8TB Easystores I recently picked up for the ease of expansion and fast resilvers without having to drop $1000 every handful of years. The increased IOPS would be nice, but not necessary. Do you have a link to a howto for this? My RAIDZ setups (all two) have been very simplistic and I've only expanded one of them once by buying new disks, but i'm having trouble picturing this mirrored vdev structure you guys are on about.
|
# ? Dec 19, 2017 18:40 |
|
Mirrored vdevs are similar to a RAID10. The example right at the top of this page shows what that would look like. You would basically create a new pool with one mirrored vdev, then keep adding new mirrored vdevs until you get all of the drives in. This works with any vdev type. I have my main server set up as two RAIDZ1 vdevs so I can add capacity by swapping out four drives at a time. code:
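As a command-level sketch of that layout (pool and device names here are made up, not from the original post):

```sh
# Create a pool with a single two-way mirror vdev...
zpool create tank mirror da0 da1

# ...then grow it later by adding another mirror vdev.
# ZFS stripes writes across all vdevs in the pool.
zpool add tank mirror da2 da3

# Sanity-check the layout before and after:
zpool status tank
```

The same `zpool add` pattern works for raidz vdevs, which is how a pool of two RAIDZ1 vdevs like the one described gets built up four drives at a time.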
|
# ? Dec 19, 2017 18:57 |
|
Munkeymon posted:Do you have a link to a howto for this? My RAIDZ setups (all two) have been very simplistic and I've only expanded one of them once by buying new disks, but i'm having trouble picturing this mirrored vdev structure you guys are on about.

It stripes the vdevs. The problem with striped mirrors across 4 drives compared to a RAIDZ2 of 4 drives is that if you lose the wrong two drives, both belonging to one vdev, you lose the whole pool.
|
# ? Dec 19, 2017 20:26 |
IOwnCalculus posted:Mirrored vdevs are similar to a RAID10. The example right at the top of this page shows what that would look like. You would basically create a new pool with one mirrored vdev, then keep adding new mirrored vdevs until you get all of the drives in.
|
|
# ? Dec 19, 2017 22:28 |
|
The server in question is a rackmount I have stashed at work and only has eight 3.5" drive bays, so I have no room to add more drives - raidz expansion won't help me there. I back up everything on it to Crashplan (though I'll need to figure out a new solution in... two years or so) and I also back up the poo poo I can't replace to my old fileserver at home. I'm actually considering reconfiguring that to one of the more flexible storage solutions being discussed here since that box has a lot more drive bays and a lot more variety in drive sizes. Because I like to live dangerously, in that old server at home, I have one pool set up with two 3-drive raidz vdevs... and one with five drives with zero redundancy, including two 500GB and one 200GB
|
# ? Dec 19, 2017 22:54 |
|
D. Ebdrup posted:How do you deal with the downsides of what's essentially raid50 with 5 disks, given that ZFS can't rebalance if you add additional disks, that ZFS can't wear-balance future writes, and that you'll lose the whole pool if you lose any two disks in the same vdev (which becomes increasingly likely as the older drives in the pool keep getting older)? Or do you just not worry about it, because you have verifiable backups and/or the data is easily replaceable? Just wondering - if you can live with those downsides, it's one of the better ways to get around the caveats of expanding ZFS, until raidz expansion gets added.

It's not actually RAID-10/RAID-50 in the sense that there is a RAID-0 with literal stripes going over the two vdevs; think of it as an LVM spanned volume with two physical volumes underneath that happen to be RAID-5 arrays. ZFS will naturally attempt to spread blocks over each vdev in a balanced fashion for performance reasons. Yes, once you add a vdev it's now unbalanced. For volumes that consist of mostly static data (MY ANIMES! ) this basically means that your worst-case scenario is the IOPS of a single 4-disk RAIDZ1 vdev (in this example), but if you naturally churn data to a reasonable extent then ZFS will start rebalancing and your performance will eventually approach the IOPS of a pair of 4-disk vdevs (or whatever natural IOPS your pool provides in total). And even if you don't, that level of performance is probably not a big deal for your animes. Yes, if you lose a vdev you lose the whole pool. If you want to keep your pool while hot-swapping replacements, you can replace one disk at a time from a vdev (assuming RAIDZ1) and then resilver it, repeat N times for a vdev containing N drives. The other thing to note is that ZFS scrubbing is really good at flushing out failing disks before they actually hard crash, particularly if you are using copies or RAIDZ1/2/3. 
Obviously it's always possible for a drive to just have a hemorrhage and keel over instantly dead, but usually you do get warning via SMART and data errors if you have the tools in place to watch for it, and ZFS is good at watching for it. That's one reason I say that it's worth using ZFS even if you don't use any of its other fancy capabilities. Just scrubbing once a month is a fantastic defense and ZFS is designed to do that while online/available, unlike tools like fsck. Paul MaudDib fucked around with this message at 00:30 on Dec 20, 2017 |
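The replace-one-disk-at-a-time expansion described above looks roughly like this (pool and device names are made up; with `autoexpand` on, the vdev grows once the last small disk has been replaced):

```sh
zpool set autoexpand=on tank

# For each disk in the raidz1 vdev, one at a time:
zpool replace tank da0 da4   # old disk -> bigger disk
zpool status tank            # wait for the resilver to complete before the next swap

# ...repeat for the remaining disks; the extra capacity
# appears after the final resilver finishes.
```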
# ? Dec 19, 2017 23:58 |
Paul MaudDib posted:It's not actually RAID-10/RAID-50 in the sense that there is a RAID-0 with literal stripes going over the two vdevs, think of it as a LVM spanned volume with two physical volumes underneath that happen to be RAID-5 arrays. ZFS will naturally attempt to spread blocks over each vdev in a balanced fashion for performance reasons. Yes, once you add a vdev it's now unbalanced. For volumes that consist of mostly static data (MY ANIMES! ) this basically means that your worst-case scenario is the IOPS of a single 4-disk RAIDZ1 vdev (in this example), but if you naturally churn data to a reasonable extent then ZFS will start rebalancing and your performance will eventually approach the IOPS of a pair of 4-disk vdevs (or whatever natural IOPS your pool provides in total). And even if you don't, that level of performance is probably not a big deal for your animes.

I'm not sure how ZFS scrubbing "flushes out failing disks" - Can you describe the mechanism by which this happens, and how using ditto blocks or stripe+parity blocks over mirrored blocks makes a difference? Or show me the documentation for it, because I'm not finding that either. Additionally, UFS on FreeBSD has background fsck, a feature introduced in 2001 by Kirk McKusick, although it requires soft-updates. UFS2 over the years added journaling on top of soft-updates (which it does by default), and snapshots. It's also interesting to note that UFS snapshots weren't quite as instantaneous as those in ZFS. BlankSystemDaemon fucked around with this message at 12:38 on Dec 20, 2017 |
|
# ? Dec 20, 2017 12:16 |
|
garfield hentai posted:Are devices like the Seagate personal cloud any good? I have had horrible experiences with the lower end Seagate Business NAS devices. Like it failed, RMA comes in, and RMA fails a few weeks later. I had great luck with a Drobo 5N before I took the FreeNAS plunge.
|
# ? Dec 20, 2017 14:49 |
|
D. Ebdrup posted:Nothing you wrote is news to me, so I'm going to assume you're writing it for the sake of anyone else Again, you're the guy who thinks scrub and checksum don't work on single disks, so yeah, I'm not going to give you a ton of benefit of the doubt when you say something that is subtly wrong. Most likely that subtly-wrong part is actually just being subtle about deeper misunderstandings of underlying concepts. D. Ebdrup posted:I'm not sure how ZFS scrubbing "flushes out failing disks" - Can you describe the mechanism by which this happens, and how using ditto blocks or stripe+parity blocks over mirrored blocks make a difference? Or show me the documentation for it, because I'm not finding that either. For example, questions like this, which show a lack of basic understanding of how disk failures work. But OK, I can google it for you. General background: https://www.backblaze.com/blog/what-smart-stats-indicate-hard-drive-failures/ ZFS-specific: https://docs.joyent.com/private-cloud/troubleshooting/disk-replacement (I can't believe a purported ZFS expert would actually be asking for a source on "near-failing drives have higher-than-normal levels of bit-rot and SMART error counts" but here we are!) D. Ebdrup posted:Additionally, UFS on FreeBSD has background fsck, a feature introduced in 2001 by Kirk McKusick, although it requires soft-updates. UFS2 over the years added journaling on top of soft-updates (which it does by default), and snapshots Does fsck even check the data itself? Pretty sure it's purely a metadata thing, right? Paul MaudDib fucked around with this message at 18:42 on Dec 20, 2017 |
# ? Dec 20, 2017 16:19 |
|
H2SO4 posted:I have had horrible experiences with the lower end Seagate Business NAS devices. Like it failed, RMA comes in, and RMA fails a few weeks later. I had great luck with a Drobo 5N before I took the FreeNAS plunge. Seagate products are garbage and when you eliminate them from your home/business you will have a blatantly obvious reduction in failure rates. Somehow this is still a controversial concept for some people in 2017 even after like the tenth series of Seagate products that fail at 10-100x the rate of their competitors. Like, if you really love Seagate then go ahead and kiss them right on the mouth but I'll tell you right now you're going to get herpes.
|
# ? Dec 20, 2017 16:31 |
|
Totally agree. In my small shop like 70% of failed drives are Seagate.
|
# ? Dec 20, 2017 16:36 |
|
The Define R6 holds 11 drives + 2 2.5 trays on the back. https://www.youtube.com/watch?v=LkyfpSXb6W0 Yowzah, that's a good option for a home NAS.
|
# ? Dec 20, 2017 16:47 |
|
It's really good looking for sure. I have this problem building NASes without hotswap stuff. Just doesn't work for my brain.
|
# ? Dec 20, 2017 17:18 |
|
Weird thing is, there's no way to buy extra drive cages for it yet. So that's a bummer, also it's expensive. Hopefully a cheaper, non-glass sided, full drive bay option will come out.
|
# ? Dec 20, 2017 17:22 |
|
Just shucked 8 easystores and 7 of them were actual red labels...the 8th was a white label but it's got TLER and the like so I'm fine with it.

Matt Zerella posted:The Define R6 holds 11 drives + 2 2.5 trays on the back.

That's really nice looking. I remember back in the day when 11 drive bays were enough for me. I've now got 23 drives in my case! There's not really much available to hold that many drives that looks nice and isn't a bajillion dollars...which is one of the reasons my server is stuck in the back of my closet now.
|
# ? Dec 20, 2017 17:53 |
Paul MaudDib posted:Again, you're the guy who thinks scrub and checksum don't work on single disks, so yeah, I'm not going to give you a ton of benefit of the doubt when you say something that is subtly wrong. Most likely that subtly-wrong part is actually just being subtle about deeper misunderstandings of underlying concepts.

Did you actually read the backblaze link, and notice how none of it has anything to do with ZFS specifically, and any filesystem which does read/write operations, ie. all of them, would give the same indications to anyone who's actually charting those smart attributes and doing periodic disk checking using any number of tools such as long smart tests, zfs scrub, fsck or just using the data that's on there. There's that thing about reading comprehension again. Nothing about Joyent's documentation describes how ZFS is special in this regard, either. I can't see why you link it, so it seems like you didn't read it, since it just shows how to replace a disk on Illumos. A brief overview of what fsck does, on everything from FFS (as well as the derivatives like UFS, EXT and many others), can be found in the man-page for fsck_ffs, with the caveat that journaling and soft-updates, which most of its derivatives have added, provide additional ways of avoiding a fsck. ZFS is the revolutionary filesystem because of the way it does things differently than FFS and derivatives, and it did it in 2004-2005 (technically 2003, since that's around when Sun started using it internally). The only thing that comes close, which predates it, is the Log-structured File System from Berkeley, that was in BSD and is still supported in NetBSD where it's being barely kept from bitrotting. To expand a bit on the idea of something being revolutionary in technology, very few of the things we do in IT are. For storage specifically, Hammer, BTRFS (disregarding all the other issues with that), bcachefs and all these other filesystems came after ZFS. 
Expanded further, ideas like virtualization, containers, capabilities, and all sorts of things that that we've been doing in the past 10-20 years all originate back in the 70s, and some even go back to the 60s. (I never claimed to be an expert on ZFS, but at this point I'm not sure you're going to read this, so I'm just gonna shrug and move on.) BlankSystemDaemon fucked around with this message at 11:00 on Dec 21, 2017 |
|
# ? Dec 21, 2017 10:40 |
|
My father just bought a 2-slot Synology; that'll be fine for him and I'll set it up for him, but I wanted to double check if the disks of choice are still the WD Red series, or if something else is recommended now. He's mostly going to use it for streaming totally legal movies to Kodi (over SMB) and to back up important files (I'll also set that up, using Synology's own software even if it's not the best - that's simply enough for a 60-year-old). No need for high performance, just reliability.
|
# ? Dec 21, 2017 11:51 |
|
WD Reds are fine. Depending on your Kodi box you might want to use NFS rather than SMB.
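For what it's worth, switching a Kodi source from SMB to NFS is just a matter of how the path is entered once the share is exported (the IP and dataset path below are made up, purely illustrative):

```sh
# In FreeNAS: Sharing -> Unix (NFS) Shares -> share the media dataset.
# Then in Kodi, add the video source as an NFS path instead of SMB:
#   nfs://192.168.1.20/mnt/tank/media
# rather than:
#   smb://192.168.1.20/media
```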
|
# ? Dec 21, 2017 12:08 |
|
I keep getting weird write errors and critical errors on one of my USB boot drives, and then inevitably the system crumbles even though they're mirrored. Naturally, this time it happened while I'm on vacation and decided to upgrade from 11.0 to 11.1 remotely This has happened a few too many times with different drives, so I'm switching over to this setup (mirrored) for my bootdisk: I don't think there are any enterprise (LSI) mSATA HBAs, but I've been using this Vantec one for a mirrored jail disk for a while and it seems to work. Greatest Living Man fucked around with this message at 21:00 on Dec 21, 2017 |
# ? Dec 21, 2017 20:56 |
|
D. Ebdrup posted:You forgot to mention that the only way to make checksums, scrubbing and more importantly self-healing have any effect on a single-disk ZFS setup is to set copies to 2 or higher. You should go back and read your own posts when someone is making claims about what you posted that you don't agree with. Sometimes it's just a simple typo, and it can be corrected instead of escalating to name-calling. I did last time someone was making baseless claims about what I was posting. Desuwa fucked around with this message at 23:50 on Dec 21, 2017 |
# ? Dec 21, 2017 23:43 |
|
I commented the specific bit at the time, but I'll do it again just to save some trouble.D. Ebdrup posted:You forgot to mention that the only way to make checksums, scrubbing and more importantly self-healing have any effect on a single-disk ZFS setup is to set copies to 2 or higher. You absolutely do not need to have disk-level redundancy or copies=2 to make checksumming or scrubbing work. You've since retconned this to "you need those for self-healing", which is correct, but that's not what you said. And, these are still good things even if you don't have redundancy to recover from them. Would you rather lose one block of your animes, or not know about it and have the whole drive fail? D. Ebdrup posted:Did you actually read the backblaze link, and notice how none of it has anything to do with ZFS specifically, and any filesystem which does read/write operation, ie. all of them, would give the same indications to anyone who's actually charting those smart attributes and doing periodic disk checking using any number of tools such as long smart tests, zfs scrub, fsck or just using the data that's on there. There's that thing about reading comprehension again. Like I said this is background stuff because you're asking for citations for basics like "SMART errors and individual failing blocks are often precursors to total drive failure". D. Ebdrup posted:Nothing about Joyents documentation describes how ZFS is special in this regard, either. I can't see why you link it, so it seems like you didn't read it, since it just shows how to replace a disk on Illumos. Specifically: quote:Checksum errors can occur transiently on individual disks or across multiple disks. The most likely culprits are bit rot or transient storage subsystem errors - oddities like signal loss due to solar flares and so on. My point isn't that ZFS is magic but it's a hell of a lot more proactive about data integrity than most filesystems. 
If your alternative choice of filesystem has a similar mechanism for scrubbing all of the data blocks (not just metadata) then go hog wild. That is the part that tends to flush out drive failures early, because failing drives often tend to have some bitrot before they really go, and ZFS is good at catching it systematically instead of ad-hoc. Based on my experiences with fsck, chkdsk, and similar tools, they usually complete in a matter of minutes, so I'm going to doubt they're actually hitting all the data blocks, though. Whatever though, it's a dumb derail. ZFS is good but it certainly doesn't rule out other filesystems also being good, by any means. I do think ZFS is pretty much the default choice though, and you'd want to have a good reason to use something else. It's kind of laughable to mention HAMMER in the same breath as ZFS, I don't care how good Matt Dillon is, he's a one-man show and charitably might see one billionth of the usage that ZFS sees, it's literally only available on DragonflyBSD and that's a hyper-niche choice even among the BSDs. btrfs is getting there but still has some use-cases where its stability is nowhere close to ZFS, i.e. anything to do with its RAID is a dumpster fire. It'll get there in time. Really the actual serious alternative would be XFS, but it's more in the category of "run fast and data integrity can get hosed" Paul MaudDib fucked around with this message at 04:39 on Dec 22, 2017 |
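For anyone following along, the scrub-once-a-month routine amounts to something like this (pool name is made up; FreeNAS can schedule scrubs from the UI, and a cron entry works elsewhere):

```sh
# Walk every data block and verify checksums while the pool stays online:
zpool scrub tank

# Afterwards, look for CKSUM errors and degraded devices:
zpool status -v tank

# Pair it with SMART monitoring to catch pre-failure indicators:
smartctl -a /dev/da0 | grep -iE 'reallocated|pending|uncorrectable'
```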
# ? Dec 21, 2017 23:58 |
|
The second onboard NIC on my Unraid file server just died (first one died 2 years ago), and I'm going to use this as a good reason to stop just using random old terribly inefficient hardware in my fileserver. I've been researching proper server-grade (if only the value lines) hardware for the first time, and wondering how much power I need. I need to serve files (6 storage drives and a cache drive), and run Deluge for both downloading and seeding several hundred torrents. My current setup is fine as a file server, but bogs right down if Deluge is doing much of anything. I generally can't even access the configuration page if Deluge is downloading at above 10MB/s, let alone watch any video stored on the server. I like the idea of the SoC Atom (Denverton) boards from Supermicro or ASRock, though I'm not sure if they are powerful enough for my needs as all the reviews and benchmarks seem to focus on ability to run VMs. Or should I move up to an entry level Supermicro X9/X10/X11 board and throw an i3 in there? I just want to be able to pull super high bitrate video off the server, potentially while maxing out my 150mbit internet connection downloading/uploading stuff, without worrying about stuff stuttering and timing out. Thoughts? Kreez fucked around with this message at 01:40 on Dec 22, 2017 |
# ? Dec 22, 2017 01:14 |
|
What kind of NIC was it out of curiosity.
|
# ? Dec 22, 2017 01:35 |
|
Whatever the onboard controller is for an Asus P5B Deluxe motherboard from 2006. Pretty much been running 24/7 for 11 years. At least I assume it's the NIC, I've got a cheapo PCI card coming tonight from Amazon to get me back up and running temporarily, hopefully it's not something else...
|
# ? Dec 22, 2017 01:38 |
|
Kreez posted:Whatever the onboard controller is for an Asus P5B Deluxe motherboard from 2006. Pretty much been running 24/7 for 11 years. At least I assume it's the NIC, I've got a cheapo PCI card coming tonight from Amazon to get me back up and running temporarily, hopefully it's not something else... Ah, yeah Marvell or whatever. 11 years isn't too bad for a nic.
|
# ? Dec 22, 2017 01:51 |
|
Ugh. Why doesn't unraid support mirrored boot drives? And if you lose a boot drive, you have to wait for support to let you activate another one?
|
# ? Dec 22, 2017 05:59 |
|
Use any other OS/hypervisor. Jesus.
|
# ? Dec 22, 2017 06:01 |
|
Ziploc posted:Ugh. Why doesn't unraid support mirrored boot drives? And if you loose a boot drive, you have to wait for support to let you activate another one? It boots off a USB drive. Get a good one, and maybe a spare? If the drive dies they'll send you a replacement key pretty quickly, though only once a year.
|
# ? Dec 22, 2017 06:07 |
Paul MaudDib posted:A lot of right words about ZFS being really good.

I'd also completely forgotten about the checksum column in the output you mentioned, despite both charting and setting up email alerts for it (I just re-checked). There's a lesson in this, both about hubris and about not thinking you know more than you actually do. I'll try and learn both. Since this derail has gone on long enough, can I ask whether you're on IRC somewhere? I do have things I want to bounce off you regarding Hammer, BTRFS and XFS, but they don't belong here.

Kreez posted:Thoughts?

Watching videos (not transcoding) is a question of the bitrate of the streaming data, but since that's measured in Mbps, it's not really a worry for a platform that can easily saturate a 1Gbps NIC (as a benchmark for 1Gbps saturation of samba traffic, my dual-core AMD N36L at 1.3GHz can do it - so anything newer with higher IPC, more cores, and higher frequency can do it too). Modern NAS harddrives typically also have their firmware optimized somewhat for streaming data workloads, so generally you should be set there too if going with a Denverton. It's also worth noting that the QuickAssist-enabled Denverton SoCs, that at least SuperMicro make, can offload SHA256 and AES-GCM/XTS (optionally used for ZFS checksums and encryption) once the driver has been added to the OS of your choice and the offload is available to the kernel crypto framework (Intel commits QuickAssist drivers to both Linux and FreeBSD - unfortunately there's no word on Illumos yet).

Matt Zerella posted:It boots off a USB drive. Get a good one, and maybe a spare? If the drive dies they will send you a key pretty quickly once a year.

Why do they insist on treating it like an appliance OS? Who benefits from this, unless they're planning on doing a support tier above that where you pay for mirrored SSDs? BlankSystemDaemon fucked around with this message at 11:05 on Dec 22, 2017 |
|
# ? Dec 22, 2017 10:33 |
|
No idea why they do it but it loads into ram and after initial config you rarely need access to it. Docker runs off your drives.
|
# ? Dec 22, 2017 13:43 |
|
Do any of these esata enclosures work with ZFS? I just have no clue if they can just present the raw drives to the OS or how they work.
|
# ? Dec 22, 2017 15:51 |
|
Ziploc posted:Ugh. Why doesn't unraid support mirrored boot drives? And if you loose a boot drive, you have to wait for support to let you activate another one? I was thinking about this. USB drives are not even close to reliable enough to run as a NAS boot drive. Any other options? redeyes fucked around with this message at 16:15 on Dec 22, 2017 |
# ? Dec 22, 2017 16:13 |
|
redeyes posted:I was thinking about this. USB drives are not even close to reliable enough to run as a NAS boot drive. Any other options? They do ok so long as there's no writes and limited reads. I've had two drives go tits up from freenas. It's one of the reasons I haven't switched. With freenas, I just have a backup script that writes the config file to one of the shares. If it dies, I just replace the drive, copy the freenas install, and update the config. It's an hour of downtime, max.
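That backup script can be as small as this sketch. The FreeNAS config is an SQLite DB; the `/data/freenas-v1.db` source path and the `/mnt/tank/backups` destination are assumptions you'd adjust for your own system.

```shell
# backup_config copies a config DB to a dated file in a destination directory.
backup_config() {
  src=$1
  destdir=$2
  cp "$src" "$destdir/freenas-$(date +%Y%m%d).db"
}

# Run daily from cron, e.g.:
#   backup_config /data/freenas-v1.db /mnt/tank/backups
```

With a dated copy on a share, recovering from a dead boot drive is just reinstall, then upload the most recent `.db` through the UI.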
|
# ? Dec 22, 2017 16:39 |
|
D. Ebdrup posted:On the advice of Desuwa I went back and carefully reread what I'd written (though I'd skimmed before), and as it turns out there is something that is so ambiguously worded that I didn't even catch it myself when initially posting it or re-reading it. I did mean it with reference to self-healing, but that wasn't evident - that's merely an explanation, not an apology. I'm sorry for not being more explicit. Takes two to fight, and I should have been less combative about it too. I'm sorry as well. I'm not really on IRC but you can PM me. And that kind of stuff is generally fine here, I think. Thermopyle posted:Do any of these esata enclosures work with ZFS? I just have no clue if they can just present the raw drives to the OS or how they work. Usually you can run them in "JBOD mode" which just presents the raw drives to the OS. The one I have just has a set of DIP switches on back that toggle which mode it uses, others may vary. Paul MaudDib fucked around with this message at 17:06 on Dec 22, 2017 |
# ? Dec 22, 2017 17:00 |