Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

G-Prime posted:

It depends on whether the drives were added by ID or by slot. I think you can run 'zpool status' and see which it is. If it's by ID, ordering shouldn't matter. If it's by slot, putting them in the wrong order should be a problem, and I honestly don't know what the impact of doing so would be.
ZFS creates and writes UUIDs into the uberblock of each drive during pool creation. If you do a force import, it'll scan all disks and check whether each one belongs to the pool you're trying to import. That's how it's supposed to work, at least with whole disks. No idea what happens on ZoL. No idea either on FreeNAS, since it wraps poo poo in GELI.
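Roughly, that scan boils down to this on the command line (the pool name "tank" is a placeholder):

```shell
# With no pool name, zpool import just scans device labels and lists
# every importable pool it discovered, without importing anything.
zpool import
# Force-import a specific pool after the scan; each disk's label is
# matched against the pool's GUID, so ordering doesn't matter.
zpool import -f tank
```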


EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Paul MaudDib posted:

I was wondering the same thing. A proper shutdown should have everything flushed; is it necessary to explicitly export, or is a shutdown enough synchronization?

edit: maybe just a shutdown wouldn't flush ZIL/SLOG?

Export always flushes.

Also, it looks like it's possible to convert drives referenced by device assignment (/dev/sdX) to drive ID when the zpool is re-imported.

Here's a thread on it, but make sure you read the whole thing, because the asker messes up along the way before figuring out exactly what you need to do.

https://ubuntuforums.org/archive/index.php/t-2087726.html
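The procedure in that thread boils down to an export followed by a re-import against /dev/disk/by-id ("tank" is a placeholder pool name; read the whole thread before trying it on a live pool):

```shell
# Export the pool, then re-import it while telling ZFS to look up
# devices under /dev/disk/by-id instead of the bare /dev/sdX nodes.
zpool export tank
zpool import -d /dev/disk/by-id tank
# The vdevs should now show stable ata-*/wwn-* style identifiers:
zpool status tank
```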

Ziploc
Sep 19, 2006
MX-5

Ziploc posted:

Hrm. Problems again. It would appear that one of the drives is acting up. These are all sector-checked 1TB drives out of the e-waste, so I'm not THAT surprised.



However, the thing that confuses me is that (after a restart) the volume doesn't come back. When I try to import it, it stalls out, and I notice this on the IPMI.





Why does the state go to Unknown? And why won't it import? Shouldn't the Volume be able to survive this and maintain access to my datas?

I have a fresh 1tb drive on the way. But I'm not sure what the procedure is in this case.

If anyone was wondering: something happened with ZFS/zpool during a copy of four 60GB files. A hard drive may have died in the middle of it, I don't know. But after hours of troubleshooting, I ended up mounting the zpool in read-only mode (with the failed HDD removed), which allowed me to copy off everything BUT those 60GB files.

Once I'm back up and running I'm going to perform the same copy again to see if it can handle that properly.
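For posterity, the read-only recovery comes down to something like this from the shell ("tank" is a placeholder pool name; done with the failed disk pulled):

```shell
# Import the pool read-only so ZFS never writes to the damaged state;
# -f is needed because the pool wasn't cleanly exported.
zpool import -o readonly=on -f tank
# Then copy everything off before rebuilding with the new drive.
```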

Mr Shiny Pants
Nov 12, 2012
Luckily it was on the GarbageDisks......

eames
May 9, 2009

Not sure if this is the right thread for the question but are spun down HDDs less susceptible to file system corruption from an unexpected shutdown (i.e. blackout) than spun up drives?

Twobirds
Oct 17, 2000

The only talking mouse in all of Britannia.
Thanks for the comments on my issue, I appreciate the honesty. I'll push back and see how far I get.

BlankSystemDaemon
Mar 13, 2009



Corruption from improper shutdown happens when data is written to a filesystem and that write is interrupted before all the data reaches the disk₁, in a manner that doesn't go from consistent state to consistent state (that's what sets ZFS and other atomic filesystems apart). Since a drive has to be spun up before anything can be written to it, a spun-down drive can't suffer write corruption.

₁: as opposed to being kept in a buffer in memory or in the disk's internal buffer, and assuming it doesn't have a capacitor big enough to ensure that the buffer can always get flushed, something that most enterprise SSDs have.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer
Not sure if there's a better thread to ask this:

Anyone have recommendations for data recovery services? My sister has an external HD that gives the click of death when she plugs it in, has some financial data and sentimental photos she really wants to recover.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Takes No Damage posted:

Not sure if there's a better thread to ask this:

Anyone have recommendations for data recovery services? My sister has an external HD that gives the click of death when she plugs it in, has some financial data and sentimental photos she really wants to recover.

The amount of money you'd probably be willing to spend will get you a company doing stuff you can already do yourself, like maybe freezing the drive and hoping the heads can start reading before it thaws.

Now if we're talking taking it apart platter by platter and scanning each one in, we're easily hitting thousands and thousands.

Steakandchips
Apr 30, 2009

Takes No Damage posted:

Not sure if there's a better thread to ask this:

Anyone have recommendations for data recovery services? My sister has an external HD that gives the click of death when she plugs it in, has some financial data and sentimental photos she really wants to recover.

It's dead, Jim.

(Unless you want to pay thousands and thousands)

Internet Explorer
Jun 1, 2005





It likely won't cost thousands and thousands. It has gotten a bit cheaper over the years. I'd say ballpark 1k, but it depends on how your hard drive is broken, the model of hard drive, etc.

Try these guys - https://www.securedatarecovery.com/request-help

May also be an important lesson for your sister. Back up important poo poo. There are way too many cheap and easy services to back up your photos these days. I have this conversation with family members constantly, so it is a bit of a pet peeve of mine.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer
That's unfortunate :( Pretty sure we've tried freezing it and there was no change, was hoping there was a service that could read the platters for hundreds of dollars, thousands is pushing it.

Internet Explorer
Jun 1, 2005





Takes No Damage posted:

That's unfortunate :( Pretty sure we've tried freezing it and there was no change, was hoping there was a service that could read the platters for hundreds of dollars, thousands is pushing it.

I'd reach out. Don't really have anything to lose. It's not going to be $200-300, but it could potentially be $600-$800.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

Internet Explorer posted:

I'd reach out. Don't really have anything to lose. It's not going to be $200-300, but it could potentially be $600-$800.

Thanks for the link. Google is turning up a lot by them and Gillware; I'm seeing price ranges in the $700-$1,200 area. I'm guessing our best bet is to contact the top 3 or 4 companies, see if we can get an estimate, and go from there. And yeah, I'm definitely going to push for CrashPlan or some other off-site backup solution so this doesn't happen again.

vvv I thought they were just moving from a 15/month family plan with unlimited PCs to 10bux per PC, so if you're only backing up 1 or 2 different machines the price isn't changing that much.

Takes No Damage fucked around with this message at 18:49 on Sep 6, 2017

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

Takes No Damage posted:

Thanks for the link. Google is turning up a lot by them and Gillware; I'm seeing price ranges in the $700-$1,200 area. I'm guessing our best bet is to contact the top 3 or 4 companies, see if we can get an estimate, and go from there. And yeah, I'm definitely going to push for CrashPlan or some other off-site backup solution so this doesn't happen again.

Crashplan is no longer offering consumer-level backup services any more, FYI.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Is RAIDZ1 enough redundancy with an 8x8 TB array or would RAIDZ2 be better? Probably wouldn't care all that much if the array poo poo itself, I'm maintaining backups of the stuff I care about, but it would be inconvenient.

Any idea what resilver times would look like with an array like that?

8-bit Miniboss
May 24, 2005

CORPO COPS CAME FOR MY :filez:

Paul MaudDib posted:

Is RAIDZ1 enough redundancy with an 8x8 TB array or would RAIDZ2 be better? Probably wouldn't care all that much if the array poo poo itself, I'm maintaining backups of the stuff I care about, but it would be inconvenient.

Any idea what resilver times would look like with an array like that?

I'd go RAIDZ2. If the array was split into 2 vdevs I would consider RAIDZ1 but probably wouldn't commit on account of the size of the drives.

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues
I'm looking at upgrading my old NAS (8x 1TB drives in RAID-6, ext3) to something modern and bigger using FreeNAS and ZFS.

Is RAID-Z2 reasonable for 8x 8TB drives? Or should I be looking at RAID-Z3 for large-capacity drives?

When I built my last NAS, all the talk was about how RAID-5 was deprecated and RAID-6 was on its way out too, given the likelihood of failure during rebuilds. Have those concerns held up?

All my super crucial stuff will be backed up elsewhere, but I'm willing to shell out extra $ to prevent headaches, re-ripping Blu-Rays, etc (though I don't want to waste money either!).

Zorak of Michigan
Jun 10, 2006

I asked the very same question a ways back and while I haven't come up with the cash to build it yet, consensus was that RAID-Z2 would almost certainly be sufficient. Someone posted the math and made a pretty good case for the notion that I should be more worried about my house burning down than seeing a failure case RAID-Z2 wouldn't handle. Based on my experience with Solaris ZFS at work, I'd also urge you to make sure you're doing routine scrubs, just to reduce your chance of an unpleasant surprise.
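Scheduling those scrubs is a one-liner if your platform doesn't already have a GUI scheduler for it ("tank" is a placeholder pool name):

```shell
# crontab entry: scrub "tank" at 03:00 on the first of each month.
# m h dom mon dow  command
0 3 1 * * /sbin/zpool scrub tank
# Check progress and results afterwards with:
# zpool status tank
```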

eames
May 9, 2009

Are RAID-Z1 and RAID-Z2 any different from RAID-5 and RAID-6 in that regard?
http://wintelguy.com/raidmttdl.pl has a calculator for this type of stuff, perhaps not perfectly accurate but it should get you in the ballpark.

Using the data of 8 x 8TB WD80EFZX (URE rate of <1 in 1e14 bits) and the Backblaze WD60EFRX cumulative drive stats (4.87% annual failure rate), the probabilities for a complete data loss over 10 years are:

RAID 5 (single parity) 85.4%
RAID 6 (dual parity) 0.16%

If you take Backblaze's average drive failure rate (1.97%) and shorten the lifespan to 5 years, you're still looking at 54.1% versus 0.027%.

(other parameters: 1 group, 8 x 8000GB, hot spare, drive throughput 200 MB/s, 50% rebuild rate)

Last time I checked, people said the ratio of drive size to error rate is becoming a problem for single-parity rebuilds with these huge drives. If this data is true, I wouldn't even consider single parity with 8TB drives rated at <1 URE in 1e14 bits.
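The URE side of that math is easy to sanity-check yourself. A back-of-the-envelope sketch, using the spec-sheet rate (which real drives usually beat) and a simple Poisson model, so treat the numbers as illustrative rather than gospel:

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded 8 x 8TB single-parity array, where the 7
# surviving drives must be read end to end.
URE_PER_BIT = 1e-14                 # spec-sheet rate: <1 error per 1e14 bits
BITS_TO_READ = 7 * 8e12 * 8         # 7 drives * 8 TB each * 8 bits/byte

expected_ures = BITS_TO_READ * URE_PER_BIT
p_at_least_one = 1 - math.exp(-expected_ures)  # Poisson approximation

print(f"expected UREs during rebuild: {expected_ures:.2f}")
print(f"P(>=1 URE during rebuild):    {p_at_least_one:.1%}")
```

At the spec rate you'd expect about 4.5 UREs over a full single-parity rebuild, i.e. you're nearly guaranteed to hit at least one, which is where the scary single-parity numbers come from.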

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

eames posted:

...the probabilities for a complete data loss over 10 years are...

One thing to remember is that the vast majority of sites talking about "data loss" with traditional RAID are really talking about the probability of experiencing one or more Unrecoverable Read Errors--which, indeed, means that some data has been irrecoverably lost. With modern systems like RAIDZ, however, experiencing a URE does not mean complete data loss, as only that particular sector is lost, and the remainder of your data keeps on keepin' on. In many cases, the missing kB (or however much a single sector equates out to) might not even be a big deal--think a single missing frame from some random porn you happen to have saved. And, of course, if your array is only half full, then you're only doing half the reads compared to calculators like that, which assume 100% (or near enough) capacity use.

That's not to say that RAIDZ-2 for an 8-drive array is a bad idea (it's not--you should probably use -2), but more a reminder that bumping into a URE isn't as catastrophic as it used to be.

IOwnCalculus
Apr 2, 2003





DrDork posted:

That's not to say that RAIDZ-2 for an 8-drive array is a bad idea (it's not--you should probably use -2), but more a reminder that bumping into a URE isn't as catastrophic as it used to be.

Yep, this. I have encountered a URE during a raidz rebuild, and ZFS is smart enough to not poo poo the entire array down the drain. It flags the files it can't recover (which shouldn't be too many if we're talking about a single URE and not losing a whole drive) and you restore those from backup.
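That flagging shows up in zpool status with the -v flag ("tank" and the file path below are placeholders):

```shell
# List the files ZFS could not reconstruct after a scrub or resilver:
zpool status -v tank
# The relevant part of the output looks like:
#   errors: Permanent errors have been detected in the following files:
#           /tank/media/example.mkv
# Restore just those files from backup, then clear and re-scrub.
```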

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues
That's great info.

To be clear - for my 8x 8TB thing, I'm talking about RAID-Z3 vs RAID-Z2 (not even considering RAID-Z1!). IIRC URE catastrophe was the main thing behind folks saying "RAID-6 will soon be deprecated!" back in the days of 1TB/2TB drives being the biggest out there.

If I'm really looking at something like a 0.16% chance of failure in the long run with -Z2, I think I'm comfortable with -Z2 instead of -Z3 :)

admiraldennis fucked around with this message at 23:35 on Sep 6, 2017

Greatest Living Man
Jul 22, 2005

ask President Obama
All my media data is already in ZFS datasets. Let's say I build a server with refurbished 2x 8c/16t Sandy Bridge processors and 64 GB RAM. Is it possible to use some of those cores and RAM to run an instance of FreeNAS/some sort of ZFS filesystem, and use the other cores for something like an Ubuntu server that runs Plex etc.? Or is there a simpler solution? I'm running into some limitations with FreeNAS/FreeBSD.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
Sure. Virtualize FreeNAS with ESXi. You'll need a motherboard and processor capable of passthrough (VT-d). You can connect your ZFS drives to a PCI-e card that ESXi and FreeNAS support, like most LSI models. You can even just passthrough your onboard SATA on some motherboards.

Then once your FreeNAS pool is up, use ESXi to install whatever other OSes you want.

Alternatively, FreeNAS can virtualize stuff too, but not as efficiently as a bare-metal hypervisor like ESXi.

IOwnCalculus
Apr 2, 2003





Or skip virtualization, install Ubuntu on an SSD, mount your tank using ZFS on Linux, and use docker for Plex / plexpy / deluge / sonarr / whatever else you want to run.
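As a sketch of what that looks like once ZFS on Linux has the tank mounted (the image is the linuxserver.io community Plex image; paths and IDs are examples, adjust to taste):

```shell
# Run Plex in a container with the ZFS dataset bind-mounted in.
docker run -d --name=plex \
  --network=host \
  -e PUID=1000 -e PGID=1000 \
  -v /opt/appdata/plex:/config \
  -v /tank/media:/data \
  linuxserver/plex
```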

kloa
Feb 14, 2007


So my next PC build will most likely be some kind of SFF case where I'll only be able to fit a couple drives into it, which means relocating my desktop RAID setup somewhere else. I've been saturating my 1gig LAN ports on 7200 RPM HDDs, so I've been trying to think of an ideal setup for multiple people accessing the data and nobody competing against each other for bandwidth.

Would buying a used low-power Xeon (e5-2650l or something) and slapping a 10gig network card into it be a good route to go? I plan on getting 10gig on my next motherboard, so it'd be cool to actually get high speed transfer rates to it too.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Going faster than gigabit is still fairly expensive. 10 GbE switches are expensive as hell (several hundred dollars for 2 10 GbE ports last time I checked), and the adapters aren't too cheap either (~$150 each). Or you can go with something else like InfiniBand, where switches and adapters are reasonable (36-port switch for $125, adapters $40 each), but you have a practical 7m max cable length. You can go longer, but you need fiber with optical transceivers (which gets expensive).

Having 10GbE between the NAS and your switch might help relieve the bottleneck, in theory you should be able to have like a half dozen computers cranking at full gigabit speeds before the 10GbE segment is saturated.

Paul MaudDib fucked around with this message at 08:20 on Sep 7, 2017

BlankSystemDaemon
Mar 13, 2009



eames posted:

Are RAID-Z1 and RAID-Z2 any different from RAID-5 and RAID-6 in that regard?
The stripe+parity division is the same, except that ZFS does dynamic stripe width which ensures that every write is a full width write.
I thoroughly recommend reading ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ by Matt Ahrens to get an understanding of how it actually works. There's also a spreadsheet that covers parity+padding cost of the various ZFS parity levels.

bobfather posted:

Alternatively, FreeNAS can virtualize stuff too, but not as efficiently as a bare-metal hypervisor like ESXi.
ESXi is an appliance for virtualization, FreeNAS is an appliance for file sharing - you can't really compare them fairly.
I encourage you to take FreeBSD's GENERIC kernel, strip it of everything you won't use for driver initialization, bhyve, and zfs, and then install it with zfs-on-root on as many mirrored disks as you can afford; you can use it to host VMs that use zvols.
I think you'll be surprised at just how efficient a hypervisor it can be. The best part is that you won't need a separate storage solution for a datastore in addition to what your hypervisor is loaded onto.
On top of that, you can even do jails, CloudABI, and actual-Docker virtualization, which saves a bunch of resources.

admiraldennis posted:

To be clear - for my 8x 8TB thing, I'm talking about RAID-Z3 vs RAID-Z2 (not even considering RAID-Z1!). IIRC URE catastrophe was the main thing behind folks saying "RAID-6 will soon be deprecated!" back in the days of 1TB/2TB drives being the biggest out there.
The reason RAID5 and RAID6 were already being deprecated back in 2009 was that despite harddisk sizes growing by orders of magnitude, harddisk bandwidth simply hadn't kept up (and still hasn't) - so because traditional RAID arrays can't do what ZFS can, people were worried about losing data, since rebuild times would keep getting longer as disks got bigger.
However, unlike other RAID solutions, ZFS can actually tell you which files got corrupted if it can't self-heal them, even during a scrub or resilver, which lets you delete them and restore them from the backup that you have.
You know, the backup that you should test regularly and programmatically, because otherwise it's useless? Yeah, that one. :)

Paul MaudDib posted:

Having 10GbE between the NAS and your switch might help relieve the bottleneck, in theory you should be able to have like a half dozen computers cranking at full gigabit speeds before the 10GbE segment is saturated.
I've been pretty vehement about getting 1Gbps at home for all my machines and a server that can saturate it for at least two clients (with LACP), but I'm not convinced 10GbE is worth it. If I ever need that kind of speed, I'd much rather invest in 10G SFP+ with fiber which takes less power and has an order of magnitude lower latency.

BlankSystemDaemon fucked around with this message at 15:57 on Sep 7, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Going beyond 1GbE certainly helps latency in certain workloads. Hell, I even see a difference between 10GbE SFP+ and 40GbE QSFP. The relevant measurements I've been looking at are random 4K reads with a queue depth of 1. Going from my Intel 10GbE SFP+ adapter to that Mellanox 40GbE QSFP one, it more than doubled (14MB/s -> 34MB/s).

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

The consumer NAS thread. Most misleading thread title?

kloa
Feb 14, 2007


Thermopyle posted:

The consumer NAS thread. Most misleading thread title?

Updated OP would be cool and good :kiddo:

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Thermopyle posted:

The consumer NAS thread. Most misleading thread title?

Well, we're not producing any of the networking or NAS hardware, so by definition we're consumers...

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

IOwnCalculus posted:

Or skip virtualization, install Ubuntu on an SSD, mount your tank using ZFS on Linux, and use docker for Plex / plexpy / deluge / sonarr / whatever else you want to run.

Quoting this, I'm going the full docker route and couldn't be happier with everything. I did a CentOS base with ZFS installed through a goofy (but working) method of nuking the partition the xfs/ext4 install was on and rsync -> rpool -> rebuild kernel -> grub.

I found this compose file and got going within a few hours.

Currently in the process of syncing all of my media from the old NAS to the new one... days later... ugh... gigabit... why is 10gigE so expensive still...

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues
Shucking report!

Bought 9x WD easystore 8TB external drives at Best Buy for $169.99/ea + tax during this week's sale. They've been as cheap as $159.99.



With some careful (but pretty quick and easy overall) disassembly...



treasure inside!



Keeping stuff safe for the next two years in case of RMA. But after that I have 9x drive enclosures + chipsets, 12V 1.5A power supplies, and USB 3.0 cables - not bad for extras.



Notes:

- These have a 2-year warranty instead of a 3-year warranty
- The enclosure serial is the same as the bare drive serial. The serial shows up as an easystore drive on WD's warranty checker. You'd have to send in the drive intact in the enclosure to receive warranty service, I'm sure.
- Disassembly and reassembly can be done non-destructively, though there are easy things to break if you aren't careful.
- From research, there seem to be at least three drives currently possible:

1) WD80EFAX - WD Red 8TB label, 256MB cache, made in Thailand
2) WD80EFZX - WD Red 8TB label, 128MB cache, made in China
3) WD80EMAZ - White label, 256MB cache, made in Thailand

Supposedly #3 is exactly the same as #1: same firmware, TLER enabled, and the exact specs of the WD Red 8TB, just with a white label to indicate it wasn't sold bare.

It seems like, at least for the Thailand drives, the White Labels are replacing the Red Labels.

- Before buying, you can tell if it is Made in Thailand or Made in China as it's printed on the bottom of the box. I only saw one Made in China in the two Best Buys I went to for these.
- So far I've only shucked one of the drives, though I've plugged them all in to check their drive model # via SMART over USB. All of my drives are WD80EFAX (Red Label, 256) except one which is WD80EMAZ (White Label, 256).
- You can check the "warranty end date" on the serial on the box before buying - might be a clue to Red vs White label?


(Only the 09/01 one was a white label.)

- Also, FWIW, these drives are helium drives. Good/bad/neutral? It seems like this is (or is going to be) the new normal for high-capacity drives. I'll admit it scares me a bit from a 'new ways for things to die!' perspective (what if the helium leaks in 5 years?). But apparently HGST has been doing it for a while, and drive manufacturers claim it's more reliable. I could see that angle, if the sealing is effective long-term and if the imperfect air filter (or some such air-hole-related thing) is a point of failure on air drives. There's a SMART attribute for helium level (22), which on my drives reads 100, with a pre-fail threshold of 25.
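If you want to keep an eye on that attribute, it's readable over SMART (/dev/sdX is a placeholder; older smartctl builds may list attribute 22 as Unknown_Attribute rather than Helium_Level):

```shell
# Print SMART attribute 22 (helium level) for the drive.
smartctl -A /dev/sdX | awk '$1 == 22'
```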

admiraldennis fucked around with this message at 04:45 on Sep 8, 2017

Furism
Feb 21, 2006

Live long and headbang
I'm still baffled by the level of technology we reached. "Yeah, let's throw helium inside because gently caress friction."

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

admiraldennis posted:

Keeping stuff safe for the next two years in case of RMA. But after that I have 9x drive enclosures + chipsets, 12V 1.5A power supplies, and USB 3.0 cables - not bad for extras.



Uggggggggggggggh

that is the opposite of keeping that board safe. Food storage bags are nowhere near ESD-safe.

Any RMA department which knew you had done that to something you were returning would be well within their rights to refuse the RMA. Not that I think you have to worry about that possibility, just making the point.

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues

BobHoward posted:

Uggggggggggggggh

that is the opposite of keeping that board safe. Food storage bags are nowhere near ESD-safe.

Any RMA department which knew you had done that to something you were returning would be well within their rights to refuse the RMA. Not that I think you have to worry about that possibility, just making the point.

You're not wrong, though these little chipsets have to be worth about $1 each max :D (they also appear interchangeable - e.g. if one happens to fry I could just use a different one). I could look around to see if I have ESD bags... but they'd have to be the right shape to keep the delicate little plastic mounting bracket thing intact for reassembly. Maybe I'll try to buy some, since I've only shucked 1/9 drives so far. The possibility of refused RMA is definitely factored into my bet hedging here though.

admiraldennis fucked around with this message at 14:03 on Sep 8, 2017

Steakandchips
Apr 30, 2009

You don't need ESD bags. Those are perfectly fine.


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

1 out of 1000 times you might have a problem without ESD bags.
