JnnyThndrs
May 29, 2001

HERE ARE THE FUCKING TOWELS

EVIL Gibson posted:

It no longer exists, but Windows Home Server was great. It let you do exactly that. I'm still finding 500-750GB drives from that server; you can just put one into an HDD dock and find remnants of the 2006 internet.

I’m still using that as my NAS, a 500GB WD Black (that’s probably old enough to drive) and fourteen 2TB ex-datacenter drives hooked to a Core 2 Quad via eSATA. I’m sure the electricity use is horrible, but It_just_keeps_working.

I really need to build something else.


Inept
Jul 8, 2003

JnnyThndrs posted:

I’m still using that as my NAS

Windows Home Server hasn't gotten security updates in over 5 years, you might want to move to something newer.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe

brains posted:

the default bridge in docker has some quirks so it is recommended to just define your own:

define a bridge network first and verify it exists:
code:
docker network create [name]
docker network ls
in docker compose, define the networks:
YAML code:
networks:
  [name]:
    external: true
and in each container you want to attach to the new bridge, just add it in:
YAML code:
services:
  app:
    networks:
      - [name]

Thanks as well, that doesn’t seem to have fixed it. I’ll have to try some other stuff

JnnyThndrs
May 29, 2001

HERE ARE THE FUCKING TOWELS

Inept posted:

Windows Home Server hasn't gotten security updates in over 5 years, you might want to move to something newer.

I know, I’m being ‘that guy’ who runs outdated poo poo that doesn’t get security updates.

I’m thinking of just going to Win10 Pro with Stablebit Drivepool (which I already own) on an old Ivy Bridge mobo I have lying around; that way I can get rid of these small drives one by one over time without buying anything else.

/\/\/ I’ll check that out, that’s exactly what I’m after\/\/\/

JnnyThndrs fucked around with this message at 16:39 on Nov 27, 2021

Enos Cabell
Nov 3, 2004


If being able to mix and match drive sizes, while being able to pull any drive out and mount it in another system to access files, is what you like about WHS, then UnRaid is probably what you want. I've added half a dozen drives since I first set mine up and it's been much more stable and hands-off than WHS or FreeNAS ever were for me.

kliras
Mar 27, 2021
How much abuse can NAS HDDs handle in general? I just ordered an Ironwolf, and there was not really any padding to keep the HDD from sliding around. I don't think the delivery people played basketball with it, but are HDDs resilient enough that I should just plug it in and assume it's fine?

I assume this is the typical shipping experience, so for those of you who've ordered a bunch, is there anything I should be mindful of here?

BlankSystemDaemon
Mar 13, 2009



kliras posted:

How much abuse can NAS HDDs handle in general? I just ordered an Ironwolf, and there was not really any padding to keep the HDD from sliding around. I don't think the delivery people played basketball with it, but are HDDs resilient enough that I should just plug it in and assume it's fine?

I assume this is the typical shipping experience, so for those of you who've ordered a bunch, is there anything I should be mindful of here?
Do Ironwolf drives have the S.M.A.R.T. conveyance test?
The specifications for the drives should say how many Gs of shock can be tolerated when operating and when not operating.

IOwnCalculus
Apr 2, 2003





I had exceptionally poor results with the Ironwolf drives I tried but either I was completely cursed or there was some incredibly strange incompatibility with my system, because every Ironwolf I got my hands on failed during the initial stress test. I don't care who made the drive or how carefully it was shipped, step one after hooking it up is run it through multiple cycles on DBAN / nwipe.

BlankSystemDaemon
Mar 13, 2009



Stress-testing is absolutely a must, no matter the vendor or provenance, when setting up storage or adding new storage to an existing setup.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

IOwnCalculus posted:

I had exceptionally poor results with the Ironwolf drives I tried but either I was completely cursed or there was some incredibly strange incompatibility with my system, because every Ironwolf I got my hands on failed during the initial stress test. I don't care who made the drive or how carefully it was shipped, step one after hooking it up is run it through multiple cycles on DBAN / nwipe.

I've also come to the same conclusion about Ironwolfs. I stopped using them because they would just go bad fast in any of the NAS systems I have (ZFS and Synology).

My brother got one for his daughter for normal PC use and it went bad within three months.

Former Human
Oct 15, 2001

Seagate has had the worst reputation in the hard drive business for many, many years for a reason. Some of the larger Seagate drives have a failure rate higher than 6% at an average age of 9.8 months according to Backblaze, which is atrocious.

I think older Toshiba drives are just as bad but I don't know anyone who owns one.

H110Hawk
Dec 28, 2006
Seemingly the only thing which didn't survive the upgrade is minimserver, the DLNA music thing. I should have installed a DSM 6 version instead of "latest". The new 2.0 version of minimserver costs an annual fee and I don't see an obvious reason to upgrade for my uses. Anyone have a suggestion? I just need my music to make it to my receiver or Roku. MP3s and FLAC files.

Wizard of the Deep
Sep 25, 2005

Another productive workday

Former Human posted:

Seagate has had the worst reputation in the hard drive business for many, many years for a reason. Some of the larger Seagate drives have a failure rate higher than 6% at an average age of 9.8 months according to Backblaze, which is atrocious.

I think older Toshiba drives are just as bad but I don't know anyone who owns one.

In the realm of anecdata, I had a 150% failure rate with some 1TB Seagates when they were new. I've avoided Seagates since then.

El Mero Mero
Oct 13, 2001

BlankSystemDaemon posted:

You have to replace all of the 3TB drives, not just some of them.

Only "RAID" I know of that lets you mix-and-match drive capacities with distributed RAID and get use out of the full diskspace is Drobo, and welp.

Doesn't synology's SHR do this?

Nulldevice
Jun 17, 2006
Toilet Rascal

El Mero Mero posted:

Doesn't synology's SHR do this?

Yes, it creates volumes based on disk size. So if you had, say, two 4TB disks and two 14TB disks, I believe depending on your RAID level you could get a mix of volumes, with the smaller disks being spread over all four while the extra space on the two larger disks would go into a mirror volume. I think. I'd have to mess with their RAID calculator to be sure. Pretty sure you get two volumes though.
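To put rough numbers on that layout (my reading of how SHR carves things up, so treat it as a sketch): every drive contributes a 4TB chunk to a RAID 5-style group, giving 3 x 4TB = 12TB usable, and the leftover 10TB on each 14TB drive gets mirrored for another 10TB, so roughly 22TB total - the same as the raw 36TB minus the largest drive.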

YerDa Zabam
Aug 13, 2016



More anecdata about Seagate. I have one from 2009 and one from 2011 that have been working 18 hours per day with no problems.

TBH, I'm trying to put my mind at rest as I have acquired a third. My friend gave me her external drive to shuck and it contained an IronWolf (not Pro). I'll use it as parity in my new setup, so I hope it's going to last. I'll post if (when?) it dies.

I lucked out on an eBay listing for 3x 4TB SAS drives of mixed brands. They were on at £35 each, which is pretty good anyway, and I offered £75 for the lot and got it. I'm very pleased, and 12TB is plenty for me. Had to do some messing with them as they were weird sector(?) sizes.
If anyone is wanting to expand, doesn't mind the relatively small size, and is on a budget, then 4TB SAS is the cheapest. Either lowballing Buy It Now stuff or sniping auctions (Gixen). I guess the market is saturated with them after DCs upgraded years back. When I was checking a model number on one of mine, a wholesale reseller appeared in the Google results and they had multiple thousands of that one alone. £30ish for 4TB is pretty great. Just do the homework as some can't be used - I think IBM System X or maybe Netapp?
If I was going above 4 drives or had anything vital on it I'd use another parity disk for sure though. (and backup of course)
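About the weird sector sizes, for anyone else buying ex-DC SAS drives: my guess is they were the usual 520-byte-sector jobs, in which case sg_format from sg3_utils is the standard way to reformat them to 512 bytes (slow, and it wipes the drive). Roughly:
code:
# check the reported logical block size first (device name is just an example)
sg_readcap /dev/sg3
# reformat to 512-byte sectors - destroys all data, can take hours per drive
sg_format --format --size=512 /dev/sg3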

They're going into the Supermicro server/workstation I posted about before, but I'm not convinced I can afford to run it 24/7. It idles at 85W. 110W was the max with the 4 drives being pre-cleared etc and 3 small SSDs too. 120W is about £5 per week which doesn't sound much but money is tight as gently caress.
Time will tell, but at least I managed to mod the PSU to be quiet. Previously I could hear it throughout my (admittedly small) flat.

Hoping to get Plex set up later and try to talk my Dad through the setup at his end, and the same with the friend who gave me the drive to shuck. They are both in their 70s and not very computer literate, so wish me luck.

YerDa Zabam fucked around with this message at 08:49 on Nov 28, 2021

Rescue Toaster
Mar 13, 2003
I have a NAS-adjacent question. If I'm setting up a couple drives for doing rotating backups with rsync, any guesses on which file system is the most 'robust' for a single drive? It's true obviously you never want to be dependent on a single drive but generally I want something that's recoverable if something goes wrong in the worst-case scenario.

Mainly things like:
1) If one bad sector happens, it limits the damage.
2) If power is lost or USB disconnects randomly, it generally repairs itself or can be recovered well with free/open-source tools.
3) Generally non-awful in terms of files/folders becoming unreadable/undeletable. I've seen some weird poo poo (folders replaced with empty files) in the past with supposedly robust journaling filesystems.

My assumption was just go with something older and simpler like ext2.
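For context, the rotation itself is nothing fancy - basically this against whichever disk happens to be plugged in (paths and label are made up):
code:
# mount whichever backup disk is currently connected
mount /dev/disk/by-label/backup-a /mnt/backup
# mirror the source, deleting files that no longer exist on the source side
rsync -aHAX --delete /srv/data/ /mnt/backup/data/
umount /mnt/backup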

ChineseBuffet
Mar 7, 2003
Can you do ZFS?

IOwnCalculus
Apr 2, 2003





Yeah, go ZFS here. Even on single drives the checksumming is valuable, along with knowing exactly which files you've lost in the event the drive develops some bad sectors but is otherwise still usable.
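For a lone backup disk it's only a couple of commands, roughly like this (pool/device names are examples; copies=2 is optional but gives ZFS a second copy of each block to self-heal single bad sectors from):
code:
# single-disk pool on the backup drive
zpool create -o ashift=12 backup /dev/sdb
# optionally store two copies of every block so isolated bad sectors can be repaired
zfs set copies=2 backup
# later: verify every checksum on the disk and list any damaged files
zpool scrub backup
zpool status -v backup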

Rescue Toaster
Mar 13, 2003
I'm not sure. After some googling there definitely seem to be some 'ZFS won't mount or work after a surprise power outage' stories. Obviously my NAS is running RAIDZ2 and has a high-end UPS. But I'm not 100% convinced it's as robust to sudden power off/disconnect as it claims.

Sort of a theory vs practice thing here. Like I said I've seen supposedly fancy totally power-loss-tolerant journaling filesystems completely freak the gently caress out after a power loss. Mainly in the embedded world like jffs2/ubifs/yaffs, there's lots of promises made about these filesystems but in practice I think I've seen them all get trashed in one way or another by sudden resets.

https://www.klennet.com/notes/2021-04-26-zfs-and-power-failures.aspx

Rescue Toaster fucked around with this message at 17:15 on Nov 28, 2021

Zorak of Michigan
Jun 10, 2006

My philosophy is a bit different than yours seems to be. I usually run into these problems at work, and any time fsck reports problems, I immediately suggest abandoning the data and restoring from backup. Unless the file system has checksums like zfs, how can I trust it when I know it had errors? ZFS will either fault in a very brittle way, work but tell me exactly what files are no good anymore, or work. If this was my only copy of critical data, I might resent how difficult it is to open up that invalid zpool and try to dig up the blocks that make up the file I care about and get something back. In your rotating disk scenario, you should have a different disk to fall back on, right? Why would you rather take your chances on recovery tools rather than just trying a different disk?

Rescue Toaster
Mar 13, 2003
This is for a house-already-burned-down scenario. (It's likely I'd only have one off-site copy if I wasn't at home to grab the newest one on the way out the door.) I'd rather have my backup work but maybe have a bad file or two than the 'oops, all mount errors' that seems to happen surprisingly often the fancier a filesystem gets. Call me an old boomer curmudgeon or whatever, but when it's 'this is the last drive in existence that has the files', my gut instinct is that I want something simple, where recovery/scraping tools are common if the main inode tables or whatever have gotten screwed up by an unclean shutdown the last time I swapped the backup drive.

Rescue Toaster fucked around with this message at 18:58 on Nov 28, 2021

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
I have had ZFS systems come back even when they weren't unmounted cleanly. Every drive stores information about every disk in the "pool", so when you do a ZFS import on one disk it will say that it found the other disks and automatically include them.
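Moving a pool to a new box is usually just this (pool name is an example):
code:
# scan attached disks for importable pools and show what was found
zpool import
# import by name; -f is needed if the pool wasn't cleanly exported from the old system
zpool import -f tank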

Way more than I can say for hardware RAID, where it was critical that the drives be plugged into the exact same data connectors they were in. You'd better have all of them there too, even the failed ones you already sent back to the company, before you even try to replace a drive.

This was around 2008 or something and the raid cards were like 1k each or something crazy.

EVIL Gibson fucked around with this message at 19:21 on Nov 28, 2021

Mr. Crow
May 22, 2008

Snap City mayor for life
Is ZFS any closer to incremental disk upgrades yet? I remember someone was working on it what seems like a few years ago, has there been much progress?

BlankSystemDaemon
Mar 13, 2009



Rescue Toaster posted:

I'm not sure. After some googling there definitely seem to be some 'ZFS won't mount or work after a surprise power outage' stories. Obviously my NAS is running RAIDZ2 and has a high-end UPS. But I'm not 100% convinced it's as robust to sudden power off/disconnect as it claims.

Sort of a theory vs practice thing here. Like I said I've seen supposedly fancy totally power-loss-tolerant journaling filesystems completely freak the gently caress out after a power loss. Mainly in the embedded world like jffs2/ubifs/yaffs, there's lots of promises made about these filesystems but in practice I think I've seen them all get trashed in one way or another by sudden resets.

https://www.klennet.com/notes/2021-04-26-zfs-and-power-failures.aspx
If zpool import -Fn fails, crashes, or hangs due to a faulty space_map, that's hardly the end of debugging, contrary to what the post strongly implies:
  • First step is to install the debugging symbols that were probably stripped from the kernel and userland, as this will make it possible to backtrace the core in a debugger
  • Second step is to use zdb -eL tank which disables space maps and leak detection
  • If the second step fails/hangs, use zdb -eLmmmmm tank to get an entire listing of the spacemap, which can be examined for inconsistencies (this is very verbose, so output it to a file).
  • Third step is to run zdb -e tank which will likely crash as it's operating on the broken space map, with enabled leak detection
  • Fourth step is to backtrace any core files produced in a debugger

If those don't get you anywhere, it's probably not directly related to the space_map, which means it's time to pull out zdb -Ce (which shows the per-vdev config of the pool, and can be useful for finding inconsistencies) and/or zdb -eu (which displays uberblock information), et cetera.
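Put together as a shell session it looks roughly like this (using the same pool name, tank, as above):
code:
zpool import -Fn tank        # dry-run rollback import; see what fails
zdb -eL tank                 # walk the pool with space maps and leak detection disabled
zdb -eLmmmmm tank > sm.txt   # if that hangs, dump the full spacemap listing for inspection
zdb -e tank                  # leak detection enabled; a crash here points at the space map
zdb -Ce tank                 # per-vdev config
zdb -eu tank                 # uberblock information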

Then you put all of it in the issue that you're hopefully filing on the OpenZFS Github, instead of just trying to sell a piece of software that's conveniently the only way to solve (ie. mask over) the issue.

Mr. Crow posted:

Is ZFS any closer to incremental disk upgrades yet? I remember someone was working on it what seems like a few years ago, has there been much progress?
I mentioned earlier in the thread, I think, that it was covered at the recent online FreeBSD Developer Summit - but the long and the short of it is that it's done and is ready for review.
Here's the video from day 2 of the FreeBSD Developer Summit with a timestamp:
https://www.youtube.com/watch?v=FHtfFWLeQEA&t=11951s

BlankSystemDaemon fucked around with this message at 19:59 on Nov 28, 2021

CopperHound
Feb 14, 2012

Mr. Crow posted:

Is ZFS any closer to incremental disk upgrades yet? I remember someone was working on it what seems like a few years ago, has there been much progress?
Work is being done on vdev expansion:
https://github.com/openzfs/zfs/pull/12225

That doesn't address having to replace all drives to see an increase. Maybe zfs expansion could be used to do something like SHR, but something makes me think one drive serving multiple vdevs might be a bad idea.

Zorak of Michigan
Jun 10, 2006

CopperHound posted:

Work is being done on vdev expansion:
https://github.com/openzfs/zfs/pull/12225

That doesn't address having to replace all drives to see an increase. Maybe zfs expansion could be used to do something like SHR, but something makes me think one drive serving multiple vdevs might be a bad idea.

It can work by giving ZFS partitions rather than whole disks, but it's neither optimal nor recommended.
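As a rough sketch of what that looks like (device names and layout are made up; this is exactly the not-recommended part, since two of the disks end up serving two vdevs):
code:
# example: one 4TB disk (sda) and two 8TB disks (sdb, sdc)
parted -s /dev/sda mklabel gpt mkpart zfs 1MiB 100%
parted -s /dev/sdb mklabel gpt mkpart zfs-a 1MiB 50% mkpart zfs-b 50% 100%
parted -s /dev/sdc mklabel gpt mkpart zfs-a 1MiB 50% mkpart zfs-b 50% 100%
# raidz1 across the ~4TB pieces, plus a mirror across the leftover halves
# (-f because zpool complains about mixing vdev types in one pool)
zpool create -f tank raidz1 /dev/sda1 /dev/sdb1 /dev/sdc1 mirror /dev/sdb2 /dev/sdc2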

Mephistopheles
Sep 24, 2003

How can I help you?
Heads up for anyone looking to get unRaid: Pro versions and upgrades to Pro are 20% off for Cyber Monday only. I just got myself a new Pro license.

Tornhelm
Jul 26, 2008

Rescue Toaster posted:

This is for a house-already-burned-down scenario. (It's likely I'd only have one off-site copy if I wasn't at home to grab the newest one on the way out the door.) I'd rather have my backup work but maybe have a bad file or two than the 'oops, all mount errors' that seems to happen surprisingly often the fancier a filesystem gets. Call me an old boomer curmudgeon or whatever, but when it's 'this is the last drive in existence that has the files', my gut instinct is that I want something simple, where recovery/scraping tools are common if the main inode tables or whatever have gotten screwed up by an unclean shutdown the last time I swapped the backup drive.

Anecdotally, your NAS might be surprisingly fire resistant. My old Netgear ReadyNAS 104 was still usable after my house burnt down, once I replaced the power cord. I only used it to get my stuff off of it, but the drives themselves are still chugging along fine in a new NAS a couple of years later.

TransatlanticFoe
Mar 1, 2003

Hell Gem
For the last few weeks, after adding a new 14TB drive to my Synology, I've been having an issue with it seemingly deadlocking. Access via SMB seems fine and I can usually still SSH in, but a handful of commands will either not run, or run and then hang. Some of my Docker containers will show as running but not be web accessible, while others work fine. My Synology packages also show as running, but stuff like Plex and the Photos app are seemingly sleeping. Nothing in top or the Resource Monitor seems to be taking up a huge amount of memory or CPU. Trying to shut down or restart just makes it wait indefinitely until I force shutdown with the power button. Has anyone come across this before or know how to figure out what's causing it? I'm assuming it's one of my Docker containers; it's been running fine for the last week and a half with a subset of them running, and I was planning on bringing up the other ones a week at a time to try to narrow it down.
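Next time it hangs I'm planning to grab this over SSH before forcing the shutdown, on the theory that something is stuck in uninterruptible sleep on the new disk (generic Linux commands, nothing Synology-specific, so no promises they behave identically on DSM):
code:
# processes stuck in uninterruptible (D) state usually point at a hung disk or mount
ps axo pid,stat,cmd | awk '$2 ~ /^D/'
# recent kernel messages - hung-task warnings and I/O errors show up here
dmesg | tail -n 100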

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.
More anecdotes, but I've got I think four Ironwolfs (4TB) in my NAS now, ranging in age from 1-3 years old, and they've all been solid so far.

Jeff Wiiver
Jul 13, 2007
First-time NAS builder here. I've been digitizing my blu-ray and DVD collection, and I now need more space and some redundancy to protect the files. I'd be using the NAS to run a Plex server for myself, and potentially a few friends/family members. Very small-scale.

Currently looking at getting two IronWolf 4TB drives and using them in a RAID 1 set-up. I'm pretty much done digitizing my physical media, and after encoding the collection is sitting around 450 GB. I figure 4TB is more than enough space for me to grow into, but I can't decide for the life of me on a NAS enclosure. The cheap ones sound like they're slow as poo poo, and the expensive ones seem like overkill for my purposes. Anyone have any experience with ASUSTOR products? Best combo of price and performance I've found so far.

CopperHound
Feb 14, 2012

I'm thinking about trying zfs with truenas in an 8 bay enclosure. Unfortunately I already have 3 drives full of stuff and I can't figure out how to best lay out the vdev.

Ideally I think I would want raidz2 over 8 drives, but would I need to buy 8 blank drives to do that and waste the 3 I already have?

Should I just suck it up and do two raidz1 vdevs consisting of 4 drives each?

E: or wait and see if zfs expansion happens in the next year or ten?

CopperHound fucked around with this message at 20:19 on Nov 30, 2021

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
If anyone has tips on cheap 4-8 bay enclosures I would also appreciate it! Waiting for my 1515+ to die and want to have a replacement plan.

VelociBacon
Dec 8, 2009

Jeff Wiiver posted:

First-time NAS builder here. I've been digitizing my blu-ray and DVD collection, and I now need more space and some redundancy to protect the files. I'd be using the NAS to run a Plex server for myself, and potentially a few friends/family members. Very small-scale.

Currently looking at getting two IronWolf 4TB drives and using them in a RAID 1 set-up. I'm pretty much done digitizing my physical media, and after encoding the collection is sitting around 450 GB. I figure 4TB is more than enough space for me to grow into, but I can't decide for the life of me on a NAS enclosure. The cheap ones sound like they're slow as poo poo, and the expensive ones seem like overkill for my purposes. Anyone have any experience with ASUSTOR products? Best combo of price and performance I've found so far.

Not sure where you live, but I'm relatively certain that if you actually own the content because you bought it at some point, you are legally in the clear for just maintaining a library of torrent files that point to those products? So long as you don't seed it, I guess. Not trying to discuss :filez: but it's a lot cheaper than buying more storage to store the stuff already on a compact storage medium.

Jeff Wiiver
Jul 13, 2007
I'm in the US. I have to buy more storage regardless of what I do, my desktop 1TB SSD is pretty much full.

BlankSystemDaemon
Mar 13, 2009



A new feature named vdev properties just landed in OpenZFS.
Some people might remember me talking about it, because it makes it possible to correlate SES paths with disks used in ZFS, so that when you type zpool status it gives you the location information in a SAS enclosure without having to use GPT information or GEOM labels (or similar meta-data).

However, I also just learned today that another thing it's paving the way for is a feature called removal queueing - basically being able to remove one drive after another, simply by turning off allocations to a device temporarily.
What was also mentioned in that conversation is that that feature can be used as a way of more efficiently rebalancing pools with multiple vdevs, since the ability to turn off allocations is simply a property that can be toggled at runtime. So that's kinda neat.

CopperHound posted:

I'm thinking about trying zfs with truenas in an 8 bay enclosure. Unfortunately I already have 3 drives full of stuff and I can't figure out how to best lay out the vdev.

Ideally I think I would want raidz2 over 8 drives, but would I need to buy 8 blank drives to do that and waste the 3 I already have?

Should I just suck it up and do two raidz1 vdevs consisting of 4 drives each?

E: or wait and see if zfs expansion happens in the next year or ten?
Well, raidz expansion should land within the next year or two, if we assume that people are going to put in effort to review it.
Unfortunately there's never enough domain experts to do review for any opensource project, so there are no guarantees that can be made.

What you can do sort-of depends on how much allocated disk-space you need.
Let's say you've got three 3TB disks, which adds up to roughly 8TB of allocated space. If you buy 4 8TB disks (the minimum size if you want to avoid SMR when shucking WDs) and put them into a RAIDz2, that should give you approximately double the amount of diskspace you have now, and still leave room to use 8 disks in RAIDz2 when you've eventually expanded your array.
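(Rough arithmetic: RAIDz2 usable space is about (N - 2) x drive size, so 4 x 8TB gives (4 - 2) x 8TB = 16TB - roughly double the ~8TB you have now - and the eventual 8 x 8TB array would be (8 - 2) x 8TB = 48TB, before filesystem overhead.)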

Smashing Link posted:

If anyone has tips on cheap 4-8 bay enclosures I would also appreciate it! Waiting for my 1515+ to die and want to have a replacement plan.
I imagine you'd want to keep an eye out on ebay/craigslist/whatever's convenient and equivalent for you - the recent sales have probably made a lot of people upgrade, so that's likely the best bet.

CopperHound
Feb 14, 2012

I forgot I have a third option that gives me a high chance of putting my backups to the test: Creating a degraded pool! :v:


https://www.truenas.com/community/resources/creating-a-degraded-pool.100/ posted:

IF YOU DON'T KNOW EXACTLY WHAT YOU'RE DOING, DO NOT FOLLOW THESE INSTRUCTIONS.

IN FACT, IF YOU NEED THESE INSTRUCTIONS, YOU PROBABLY SHOULDN'T FOLLOW THEM.

CopperHound fucked around with this message at 00:11 on Dec 1, 2021

Corin Tucker's Stalker
May 27, 2001


One bullet. One gun. Six Chambers. These are my friends.
It's been a few days since I got the QNAP TS-230 as my first NAS. Here are a few thoughts from a total newcomer and partial idiot.

Setup was easy. Being farted in the face with a million notices when I first launched QTS was unpleasant. I got a little lost in the weeds looking at apps I didn't need, launching things I didn't understand that created folders I didn't want, and went to bed slightly regretting the purchase.

Then the next day I thought... what specific uses did I buy this for? So I focused on those things. Three of the four uses were simply SMB shares, so I focused on setting those up. Easy. And on the first attempt I was able to share games to both my PS2 and MiSTer FPGA. Now this thing feels like magic.

So far I only have one hiccup, which is that both the PS2 and VLC on Xbox (I think) require SMB 1. I know how to enable it. It's just odd having to do so, as it's apparently a security risk. I did set a limit on login attempts and only allow connections from my local network, though, so hopefully that helps.


Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

A new feature named vdev properties just landed in OpenZFS.
Some people might remember me talking about it, because it makes it possible to correlate SES paths with disks used in ZFS, so that when you type zpool status it gives you the location information in a SAS enclosure without having to use GPT information or GEOM labels (or similar meta-data).

However, I also just learned today that another thing it's paving the way for is a feature called removal queueing - basically being able to remove one drive after another, simply by turning off allocations to a device temporarily.
What was also mentioned in that conversation is that that feature can be used as a way of more efficiently rebalancing pools with multiple vdevs, since the ability to turn off allocations is simply a property that can be toggled at runtime. So that's kinda neat.

Well, raidz expansion should land within the next year or two, if we assume that people are going to put in effort to review it.
Unfortunately there's never enough domain experts to do review for any opensource project, so there are no guarantees that can be made.

What you can do sort-of depends on how much allocated disk-space you need.
Let's say you've got three 3TB disks, which adds up to roughly 8TB of allocated space. If you buy 4 8TB disks (the minimum size if you want to avoid SMR when shucking WDs) and put them into a RAIDz2, that should give you approximately double the amount of diskspace you have now, and still leave room to use 8 disks in RAIDz2 when you've eventually expanded your array.

I imagine you'd want to keep an eye out on ebay/craigslist/whatever's convenient and equivalent for you - the recent sales have probably made a lot of people upgrade, so that's likely the best bet.

Could you in theory write a script to rebalance a zfs pool by disabling allocations on the fuller vdevs and then cp-ing a bunch of files around until the vdevs are mostly balanced?
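I'm picturing something like this, assuming the allocation toggle ends up exposed as a vdev property - the property name below is a guess at an unreleased knob, not something that exists today, and the cp/mv dance is just the usual trick for forcing blocks to be rewritten:
code:
# HYPOTHETICAL: 'allocating' is a guessed property name, not a current OpenZFS feature
zpool set allocating=off tank raidz2-0   # stop new writes landing on the full vdev
# rewrite files in place so their blocks re-allocate onto the emptier vdevs
# (snapshots will still pin the old blocks)
find /tank/media -type f -print0 | while IFS= read -r -d '' f; do
  cp -p "$f" "$f.rebalance" && mv "$f.rebalance" "$f"
done
zpool set allocating=on tank raidz2-0
zpool list -v tank                       # check per-vdev utilization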

  • Reply