PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I have a similar question. I'm planning a machine to use as a FreeNAS box, running a 5-drive RAIDZ array as a start. Is there any issue with using WD Green drives like this? I know running them in a "real" RAID setup can/does cause issues, but how do they work in a ZFS pool?

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I've been thinking about trying an OpenSolaris install with ZFS to consolidate all my media into one pool of data, rather than have it spread out over a bunch of 640GB-1.5TB drives. I'm much more familiar with Linux than Solaris, though, so is there any issue with running mdadm/LVM under a Linux distro instead? I'd like to build an initial RAID5 array, and be able to expand the existing mount location by adding new arrays as my storage needs grow.

I'd most likely be starting with an array of 4x1.5TB drives in RAID5, and adding another array 6-8 months down the line when the need arises. As far as I can tell this is possible, but what shortfalls am I missing? Absolute performance isn't that important, as this will mostly be streaming video/audio over a 100Mbit home LAN to a few HTPCs and 3-4 desktops.
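For the curious, here's roughly what I'm picturing, purely as a sketch — the device names and the volume group/LV names are placeholders, and I'd double-check each step against the mdadm and LVM man pages before touching real disks:

code:
# build the initial 4-disk RAID5 array (device names are examples)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# layer LVM on top so the mount point can grow later
sudo pvcreate /dev/md0
sudo vgcreate media /dev/md0
sudo lvcreate -l 100%FREE -n storage media
sudo mkfs.ext4 /dev/media/storage
# ...later, when a second 4-drive array gets added:
sudo mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
sudo pvcreate /dev/md1
sudo vgextend media /dev/md1
sudo lvextend -l +100%FREE /dev/media/storage
sudo resize2fs /dev/media/storage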

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I snagged a WD Green 2TB (WD20EARS) back before I knew better :(. It's been solid so far, but I'm just using it as a dump drive on my desktop. I'll have to keep the Hitachis in mind when I finally splurge on an array; do they play nice with mdadm RAID5 or a RAIDZ1 array?

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
How is the ZFS-on-Linux port? I see there's the FUSE project, as well as the native kernel port. Wondering if there are benefits or drawbacks to either in particular. My old box-of-random-drives finally kicked the bucket, so I'll be rebuilding things properly next week when the new hardware shows up.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

Megaman posted:

Semi-silly question. I'm buying an Antec case and 8 drives to make a FreeNAS RAIDZ3 array. Is it smarter to use two 5-in-3 ICY Dock bays to fit these, or just put them in the case without the ICY Dock bays? I know people have mentioned the backplanes won't fail, but if they do, they can wipe out 5 drives with them. Do people recommend docks over no docks? I've alluded to this concern before; the bays have gotten not-so-great reviews on Newegg.

I'm using something similar in mine, but it's just a 4-in-3 adapter thing with no backplane. I did it just to keep the two 4-disk vdevs physically together, so swapping a drive after a failure is easier. I wouldn't use anything with a backplane for this situation; try to find one that's just a cage with no electronics in it. You won't have hot-swap capability, but I don't think that's really worth the money in this instance.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I had problems with ZFS and 12.10 on the system I just built. 11.10 on my test VM didn't have any issues, so after some Google searches I discovered I had to grab the linux-headers-3.5.0-21-generic package before the ZFS package would install and load correctly on 12.10. I'd check and make sure you've got the headers package for whatever kernel version you're on.
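If it helps, this is roughly the sequence I ran; treat it as a sketch, since the exact header package name depends on your kernel and the ZFS package name depends on which PPA you're using:

code:
# install headers matching the running kernel
sudo apt-get install linux-headers-$(uname -r)
# reinstall the ZFS packages so the module rebuilds against those headers
sudo apt-get install --reinstall ubuntu-zfs
# load the module and confirm the pool comes up
sudo modprobe zfs
sudo zpool status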

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Hmm. That's odd. Mine's been chugging along pretty well using the Stable PPA and 12.10, so I don't know. What was the error it gave trying to mount?

On another note, I built my first pool and started copying to it. It's been working pretty well thus far. I haven't configured automounting yet, but I've got 6TB (4x2TB WD Reds) of RAIDZ1 set up now, and I'll add a second 4x2TB RAIDZ1 once I clear off the two drives that still have stuff on them. 12TB should carry me through at least another year or two.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
That's the exact error I was getting. Installing the headers and removing/reinstalling the ubuntu-zfs package fixed it for me. I'm on my phone right now, but Googling that error takes you to a bunch of discussions in the Google Groups for ZoL with some possible remedies.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Create another vdev, and add it to the same pool. It should automatically utilize all the new available space without issue. Remember you won't be able to remove anything, so if you're planning on adding bigger drives and moving your existing files to the new drives, you'd have to create a new pool.
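Adding the second vdev is basically a one-liner; something like the below, where the pool name and disk paths are just placeholders for your own:

code:
# add a second 4-disk raidz1 vdev to an existing pool (names are examples only)
sudo zpool add tank raidz1 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
# the extra space shows up in the same pool immediately
zpool list tank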

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
FWIW, I've been running ZFS-on-Linux for the last month or so without issue: eight 2TB drives, in a pair of 4-disk RAIDZ1 vdevs in a single pool. I decided against FreeNAS/NAS4Free because I wanted to be able to test VM setups and use the machine for other tasks instead of as a headless server. If you're considering ZFS, it's definitely a solid option. The only glitch I've noticed is that kernel updates in Ubuntu will break ZFS, because it doesn't pull in the headers and source when it updates. As long as you update those and re-add the ZFS modules, it works beautifully.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I think that depends. Do you want the box to handle the RAID management, or one of the VMs? I'm running ZFS, and my case is out of internal bays. I've looked at a few of the Sans Digital boxes to just pass raw disks through via an eSATA port multiplier so I can use ZFS to manage the disks.

Currently I'm trying to figure out whether it's more cost-effective to buy one of the big Norco cases, or to go with one or two Sans Digital boxes.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
FWIW, when I bought my Reds on Amazon, they came shipped in individual boxes, packed in plastic containers/shells inside the boxes, and the individual boxes were in a big box filled tight with packing paper and bubble wrap. Newegg seems to just toss their OEM drives in a box with paper, packaged only in a static bag. Maybe that accounts for the difference? I've had mine for 6 months now with no issues at all, 5x2TB in a raidz1.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

LastCaress posted:

I was thinking of buying an external HD enclosure, something like this:

http://www.amazon.es/Sharkoon-8-BAY-RAID-BOX-4044951012916/dp/B008ENU0XM/ref=sr_1_1?ie=UTF8&qid=1369133924&sr=8-1&keywords=sharkoon+8-bay
http://www.amazon.co.uk/Sharkoon-St...=sharkoon+5-bay

It's basically a NAS without the network part (I already have 3 NASes). I don't know if there are many alternatives to this. I also have some HD docks, but this would let me connect an extra 5 or 8 HDs via USB 3.0. I don't really plan to do RAID right now, so alternatives that don't do RAID are OK as well. Anyone have any experience with these devices? Anywhere I can find it cheaper (EU)? Thanks!

Something like this? They're cheaper here in the US, but I'm not sure about over there.

http://www.amazon.com/Sans-Digital-...=sans+digital+8

I've been considering one of these, and consolidating my NAS machine into a smaller case and using the Sans Digital towers as expansion chassis. They've got a 5-bay USB/eSATA version, as well as an 8-bay SAS version. My other choice is a Norco case, but it depends on how I can set up the network closet at our new house. Currently I'm limping along with 8 HDDs crammed in a midtower chassis with a pair of SATA controller cards, but airflow and cooling are lacking.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

dotster posted:

So just to make it more difficult: I have two QNAPs with 6 WD Greens and haven't had a problem with any of them. They don't get used a bunch, but they've been running for over a year.

I have four 2TB Greens paired with four 2TB Reds in a Linux ZFS NAS, with powered-on times ranging from about 18 to 30 months. They lived as just a bunch of independent drives in a Windows machine up until this past January, when I added the Reds as a vdev on the new machine, moved the contents of the Greens over, then added the Greens as a second vdev in the pool. No complaints so far, though I do need to add another vdev, or swap the older drives out for some new 3TB drives to increase capacity.

I will note that all the 2TB drives came from Amazon; I've only ordered older, lower-capacity drives from Newegg. Amazon does tend to pack their drives better: all my Reds came in an OEM-style box with the plastic drive holders, which was then wrapped in heavy packing paper and put in a larger box. Newegg always shipped my drives in static bags, wrapped in bubble wrap, then packed in foam peanuts.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Is the M1015 still the go-to SAS card? I've currently got a Highpoint 2720SGL, and while it's plenty fast (440MB/s running a scrub right now), it seems finicky about taking some of my older Green drives. I've got those on the motherboard controller and my Reds on the Highpoint, but I'd like to run everything off a dedicated controller and free up the motherboard ports for SSDs, optical, and a couple of scratch HDDs.

I spent an hour today plugging and unplugging drives before I finally gave up on the Greens and the Highpoint card. I've got enough slots to run a couple of the M1015s, and realistically 16 drives would be as many as this chassis will handle anyway.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Issue that popped up today after I upgraded the Ubuntu distro on my desktop that hosts my zpool:

code:
  pool: media-pool
 state: UNAVAIL
status: One or more devices could not be used because the label is missing 
	or invalid.  There are insufficient replicas for the pool to continue
	functioning.
action: Destroy and re-create the pool from
	a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

	NAME                                           STATE     READ WRITE CKSUM
	media-pool                                     UNAVAIL      0     0     0  insufficient replicas
	  raidz1-0                                     ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300564617   ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300253507   ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300618570   ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300576341   ONLINE       0     0     0
	  raidz1-1                                     UNAVAIL      0     0     0  insufficient replicas
	    scsi-SATA_WDC_WD20EZRX-00_WD-WMC300641345  UNAVAIL      0     0     0
	    scsi-SATA_WDC_WD20EARX-00_WD-WCAZA8308495  UNAVAIL      0     0     0
	    ata-WDC_WD20EFRX-68EUZN0_WD-WMC4M1489130   ONLINE       0     0     0
	    scsi-SATA_WDC_WD20EARS-00_WD-WMAZA4986032  UNAVAIL      0     0     0
The disks are online, recognized by the Disks utility in Ubuntu, but for whatever reason my zpool isn't seeing them. How on earth do I correct this without destroying the pool? I don't much care for the idea of restoring from the many, many scattered DVDs of data, which is only about half of what's there regardless.

For what it's worth, the 5 disks that are found are attached to a Highpoint HBA card, and the 3 not found are attached to the motherboard SATA controller. Those labels (scsi-SATA-xxx) do NOT show up in my /dev/disk/by-id directory anymore; the drives only appear as ata-WDC_xxxx now. Any quick fixes before I start digging into it?

edit: Found the fix. Apparently 13.10 removed the scsi-xxx /dev/disk/by-id names, and doing export/import -f allowed the pool to self-correct with the new by-id names. Whew!
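For anyone who runs into the same thing, the fix boiled down to this (your pool name and by-id paths will obviously differ):

code:
# export the pool, then re-import by scanning /dev/disk/by-id so ZFS
# picks up the new ata-* names; -f forces the import
sudo zpool export media-pool
sudo zpool import -d /dev/disk/by-id -f media-pool
sudo zpool status media-pool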

PitViper fucked around with this message at 05:23 on Mar 20, 2014

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

D. Ebdrup posted:

That's the second issue I've heard of with Linux and ZFS where it can suddenly stop working because device IDs aren't actually persistent. The other issue I mentioned is that in rare cases Linux won't identify the same disks by the same internal label, which it uses to assign entries in /dev/, resulting in device IDs changing across a reboot.

Is it just me, or is it completely irresponsible of whoever's in charge to change the device IDs? I thought you weren't supposed to change existing kernel behaviour, because you can't, without a complete code audit, know what impact it'll have.

What, really, is the difference between the scsi-xxx and ata-xxx /dev/disk/by-id naming schemes? 12.10 had each disk listed under both labels, and I never bothered to figure out exactly why it's done that way. I've seen the scsi-xxx scheme referred to as a virtual SCSI interface, so perhaps I should have been using the ata-xxx labels all along. I believe when I set the pool up, I initially referred to each disk by its /dev/sdX name, then did an export/import using the by-id labels, and ZFS just picked the scsi-xxx references by default. The /dev/sdX order changing is a known issue, and can be caused by spin-up delays, reordering drives on the controller, etc. The by-id names were supposed to be persistent, which is why I re-imported the pool using that method of referencing drives.
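If you want to see what's actually exposed, comparing the two schemes is a quick sanity check (paths below are just examples):

code:
# list the persistent names and see which /dev/sdX each one points at
ls -l /dev/disk/by-id/ | grep -E 'ata-|scsi-'
# cross-check against the names the pool is currently using
zpool status -v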

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

Thermopyle posted:

Yes, which you should be doing.

I don't think there's anything wrong with using by-id; it certainly makes physically identifying which disk is which easier. This is certainly something that might happen if you upgrade a machine without first exporting your pool, but since the solution is a simple export/import operation anyway, it's probably not that big of a deal. I agree UUID is a more permanent and consistent way to assign the disks to a vdev/pool, but by-id should in theory be just as consistent, plus it lets you easily identify a failed disk by putting the disk type and serial number right there in the pool information.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Any recommendations for a "normal" looking tower case that has space for 12 3.5" disks, and preferably a DVD drive as well? I'm planning to eventually add 4 more disks in another vdev (on top of the 8 current disks in 2 vdevs), and my current tower has space for 6, plus 4 in a 4-in-3 hotswap adapter. I use my ZFS machine as an office PC as well, so I'm looking for something in a regular tower format that fits a normal ATX motherboard. System disks are all SSDs, but mounting spots for those would be a bonus rather than a requirement. Currently they're just shoved in between the drive cages.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
The Define XL looks like it would fit my needs, especially if I keep my 4-in-3 cage and swap that over. Anybody have first-hand experience with it? It says the HDD cages can be repositioned, so if I could somehow jam another 4-drive cage in there, it'd be perfect. Then I could even put one of those 4x2.5" swap enclosures in for the SSDs...

edit: Looks like I can, according to the photos on their website. The bottom cage moves back far enough to accommodate two bottom cages, plus the top cage. Now I just have to figure out where to buy an extra drive cage.

PitViper fucked around with this message at 23:00 on Jul 2, 2014

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
The Define XL R2 has repositionable cages, at least according to the product page. I think the original Define XL is listed as discontinued now.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

eightysixed posted:

How does one power so many HDDs?

Careful power-supply shopping, mostly. Plus the WD Reds are pretty gentle on power consumption, and spinup is staggered between the onboard ports and the HBA.

I'm really more future-proofing my space than anything, plus my current tower is getting a bit old. Nothing wrong with it, but I'd like to get something with a nicer layout and a more subtle design. I did just replace everything except the case last fall, and the case itself has been in use for pretty close to 10 years now.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Anything with enough slots to support an HBA or two. The IBM M1015 is popular; I'm running a Highpoint Rocket 2720 myself. You'll need SAS breakout cables, but each card adds another 8 SATA ports. I have slots for 2 HBA cards, plus 6 SATA ports on the motherboard. Eventually I'll move all the zpool drives to a pair of the Highpoints, and use the onboard SATA for system drives and the DVD writer.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
My three oldest 2TB drives (one in particular, about 19 months old) have been giving me issues in my zpool. One dropped out of the pool last night, and I had to reboot and export/import a couple of times for it to pick the drive up. Just dropped in an order for four 4TB Reds, which will replace a single older Red and the three Greens in that particular raidz. Once they're all swapped out, I guess I'll find out if autoexpand works like it should!

This should bump my usable space from ~12TB to ~18TB. I was originally planning to swap with 3TB drives, since they're a little cheaper per GB, but I'm already at 99% pool utilization, so the added 3TB can't hurt. It'll let me put off adding drives or upgrading the other 2TB drives for quite a while.

edit: Hopefully the faulted drive makes it until the replacement drives are here. 13,223 powered-on hours, a 515,975 load cycle count, and it was throwing the last 5 errors about a minute after power-on last night. The other two Greens haven't pitched any errors yet, with 23k and 24k power-on hours, though they're both over 580k load cycles.

PitViper fucked around with this message at 15:44 on Jul 12, 2014

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

Thermopyle posted:

Yeah, one of my pools is the same. I've got no more space in the case, and this was one of the first pools I ever set up and I dumbly created it with 10 drives, so it's going to cost a small fortune to upsize all those drives.

The problem is that it takes half a month to scrub when utilization is that high.

Sometimes I miss WHS and its ability to expand with any size of drive, one drive at a time.

^^ZFS on Linux is fine.

Yeah, I've got 8 in mine, but in two raidz1 sets of 4 each. I figured I could easily upgrade 4 at a time, plus it gives a little extra redundancy. And adding another vdev of 4 is fairly easy without going crazy with a huge case and all that. If I ever need more than 12x4TB, then I think there's a problem with digital hoarding.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Anybody noticed head-parking behaviour changes with any of their Red drives? I have one with ~3500 hours on it that I replaced a failed drive with, and it's showing almost 11k load cycles. My four older Reds with 12.3k hours have load cycle counts under 30. Checking the drives with WDIDLE, the older ones came with the idle3 timer disabled, and the newer one was set to 80. My Greens mostly have load cycle counts in the 550k+ range, but it's interesting that WD seems to have changed the behaviour on the Reds sometime in the last year or so. These are all 2TB drives; I'll have to check what the new 4TB drives I ordered come set to from the factory.
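If you're on Linux and don't feel like booting into WDIDLE3, something like this should show the same information — idle3ctl comes from the idle3-tools package, and the device path is a placeholder:

code:
# read the load cycle count from SMART
sudo smartctl -A /dev/sdb | grep Load_Cycle_Count
# read the drive's idle3 (head parking) timer
sudo idle3ctl -g /dev/sdb
# disable the timer entirely (takes effect after the drive is power cycled)
sudo idle3ctl -d /dev/sdb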

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

infinite99 posted:

So I'm completely new to ZFS, but I'm planning on building a larger file server. Would it be a bad idea to grab a few drives now and create a pool, and then later on down the line buy some more drives and create a separate pool? Right now I'm using Amahi Server, which implements Greyhole and has all the drives pooled into one giant volume. It works like the Windows Home Server drive pooling and it's worked out for me, but I think I need something better to handle multiple tasks, with the ability to expand if I want to. Would ZFS be the way to go, or should I just use something like FreeNAS?

I think the recommendation is to build vdevs out of smaller groups of drives, and expand your zpool either by adding vdevs or by replacing disks with larger ones. I've got 8 disks in a 4/4 setup, so a pair of raidz vdevs in a single zpool. I'm swapping out the 4 oldest drives for larger drives this weekend, which will give me another 6TB of space with the same number of disks. Next would be either adding another set of 4 disks as a third raidz vdev to the existing pool (making it 12 disks in a 4/4/4 setup) or swapping out the remaining 4 disks for larger ones.

ZFS is pretty flexible; the main things you can't do are shrink a vdev (a disk can only be replaced by one the same size or larger) or remove a vdev from a pool. So if I did expand to 12 disks, I'd be stuck with a 12-disk setup until I could build a pool of similar volume with fewer disks, copy all the data off, then destroy the old pool. Less flexible than something like Greyhole or WHS pooling, but it tolerates losing a disk while staying accessible and rebuildable.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

GokieKS posted:

Do note, however, that replacing an existing vdev is not a simple process. For example, in PitViper's situation, when he wants to replace one entire vdev of four 2TB drives with larger ones, he would actually have to replace them one at a time and rebuild (resilver) the vdev after each one, four times in total.

This is correct, and it's one of the drawbacks. Also note that in my situation, if I were to add another vdev, almost all writes from now on would land on the new vdev, since my current pool is nearly 100% full (80GB free out of ~11TB usable). That could lead to uneven read/write performance, if that matters to you. Mine is all just bulk storage and not really speed-sensitive. I can still sustain writes (~4.2GB test file) at ~220MB/sec, and reads are around 240MB/sec, and that's with 5 disks on a PCIe SAS card and 3 disks on the onboard SATA ports.
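For reference, the one-at-a-time swap is just a loop of zpool replace commands, roughly like this — device names are placeholders, and autoexpand needs to be on if you want the pool to grow once the last disk is in:

code:
# let the pool grow automatically once every disk in the vdev is bigger
sudo zpool set autoexpand=on media-pool
# swap one old disk for a new one, then wait for the resilver to finish
sudo zpool replace media-pool ata-OLD_DISK ata-NEW_DISK
zpool status media-pool   # watch the resilver progress
# repeat the replace/resilver step for each remaining disk in the vdev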

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Finally finished the disk swap that I started Monday evening. It's been too long since I ran a scrub, I think. Lots of checksum errors were repaired, and because of some system stability issues during the swap I ended up pulling a bunch of data off the pool because ZFS thought it was corrupted. All of it checked out fine (images, videos, nothing irreplaceable or not backed up), but I restored what was backed up and just copied back what wasn't.

Autoexpand worked without an issue and bumped the pool up to a total capacity of 18TB from 12TB. Luckily the original pool was already set up for Advanced Format (4K-sector) drives, but even if it hadn't been, I could have created a new pool with the new drives, copied all or almost all of the original data over, destroyed the old pool, and re-added the drives I was keeping to the new pool.

Next is physically rearranging the drives in the system and pulling the unused disks. I'll keep some of them for external drive enclosures, or for anything I don't mind losing. Luckily the extra 6TB should be more than enough space for the foreseeable future, until I replace the other 4 drives due to old age.

code:
  pool: media-pool
 state: ONLINE
  scan: scrub repaired 2.15M in 10h57m with 0 errors on Thu Jul 24 09:27:31 2014
config:

	NAME                                          STATE     READ WRITE CKSUM
	media-pool                                    ONLINE       0     0     0
	  raidz1-0                                    ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300564617  ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300253507  ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300618570  ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300576341  ONLINE       0     0     0
	  raidz1-1                                    ONLINE       0     0     0
	    ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E1855293  ONLINE       0     0     0
	    ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E1834262  ONLINE       0     0     0
	    ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E1855357  ONLINE       0     0     0
	    ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E1842966  ONLINE       0     0     0
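And a quick way to confirm the expansion actually took, just using the standard property checks:

code:
# confirm autoexpand is on and check the new pool size
zpool get autoexpand,size,free media-pool
zpool list media-pool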

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I have mine set up in two Z1 sets of 4 drives each. A full scrub takes about 12 hours, with about 75% of the pool used. That's only 18TB capacity, though, since I've got 2TB and 4TB drives. I think there's a technical reason you want an odd number of drives in a Z1, but I forget exactly what it is.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I have 8GB of RAM in a system running a 24TB raw zpool (two Z1 sets of 4 disks each) plus a virtualized Windows desktop for work (2GB of RAM dedicated). It's probably not ideal for performance, but it certainly meets the needs of a home file server. 1GB of RAM per TB is the recommendation for enterprise workloads. Adding an SSD as a cache disk will help offset the lower RAM, but it's obviously not as cost-effective.
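If you do go the SSD cache route, adding an L2ARC device is a one-liner; something like this, with the device path as a placeholder:

code:
# add an SSD as an L2ARC (read cache) device for the pool
sudo zpool add media-pool cache /dev/disk/by-id/ata-SSD_MODEL_SERIAL
zpool status media-pool   # the SSD shows up under a "cache" section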

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

Thermopyle posted:

I've just about maxed out my current tower case with 22 drives in it. I'm considering building something, but I'm lazy so I might just buy something.

What are some options for something that can hold more than 22 drives? I'd consider new if the price is right, but I'm picturing some server hardware or something that I could snag for cheap off eBay...

http://www.supermicro.com/products/chassis/4U/418/SC418E16-R1K62B2.cfm ?

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I just noticed the one I posted is for 2.5" drives. That's what I get for skimming the product page before I post!

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Might be better suited to the Haus, but I've had some issues with a few disks. I've been flipping them around on the SAS cables, and the errors seem to follow a couple of ends on one SAS cable, but they're definitely all on one SAS port on the card.

Occasionally one disk will just drop from my zpool. It's always one of the 4TB Reds, which all share one SAS port on my Highpoint 2720SGL. The 2TB drives on the other port are 100% reliable; no drops or errors reported there. The drop is always accompanied by the following SMART error:

code:
Error 23144 occurred at disk power-on lifetime: 2168 hours (90 days + 8 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 61 00 00 00 00 a0

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  e0 00 00 00 00 00 a0 00      00:48:49.554  STANDBY IMMEDIATE
  ef 10 02 00 00 00 a0 00      00:48:49.554  SET FEATURES [Enable SATA feature]
  ec 00 00 00 00 00 a0 00      00:48:49.553  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      00:48:49.553  SET FEATURES [Set transfer mode]
  ef 10 02 00 00 00 a0 00      00:48:49.553  SET FEATURES [Enable SATA feature]
Since these errors only occur on the one SAS port, and have persisted even when swapping drives between connectors and swapping the whole SAS cable, can I safely guess that the controller card is likely the cause? After moving the 4TB drives to the mainboard controller, the problem seems to have disappeared. All the drives that have errored out of the zpool have also passed an extended SMART test when connected to the mainboard controller.
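For anyone wanting to rule out the disks the same way, the extended self-test is straightforward — the device path is a placeholder:

code:
# kick off the long (extended) self-test; it runs in the drive's background
sudo smartctl -t long /dev/sdb
# once it finishes, read the result from the self-test log
sudo smartctl -l selftest /dev/sdb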

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Nothing like running a scrub, seeing CKSUM errors adding up, and then at the end it goes "fixed a few MB of errors, no biggie" to give me faith in my storage choice.

Now I just need to get off my rear end and RMA a couple of my 4TB Reds. When they get warm (north of 104°F, so typically during a scrub or heavy IO), they like to freak out and reset their SATA connection. They're also in the upper drive bays, in a case with apparently inadequate airflow, so it's a problem. Right now I have a box fan shoving air into the side of the case, and all the drives stay in the mid-70s °F. I need to replace the case with something that holds 12 3.5" disks and gives good airflow past the disk cages; then hopefully I won't have to worry about it anymore.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Yeah, I had to change the idle timer on a couple of 4TB Reds from last year. They were going nuts parking the heads, but the WD utility fixed it and I haven't had an issue since. You'll know right away if they've got the bad firmware: I've got a drive with 18k hours and a load cycle count of 51, and one with 5800 hours and a load cycle count of 1933. The first is a 2TB Red and the second is a 4TB Red; I didn't catch the parking issue on the second one for a few hundred hours.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Might be better in the Linux thread, but does anybody using ZFS on Linux have issues where a scrub will "hang" after a while? Last night my desktop was scrubbing, and after 1.67TB it just kinda... stopped. "zpool status" said it was still scrubbing, but at around 12MB/sec, and it hadn't made any more progress after sitting all day. Reboot, and it carried on where it left off at ~350MB/sec. It happens about every second or third scrub, and the system is still usable while it's hung. This is 4x4TB and 4x2TB Reds on a Highpoint RocketRAID 2720SGL, with the card just passing the drives through to the OS (which meant formatting the drives while attached to a USB enclosure or the system SATA ports before connecting them to the Highpoint; should have bought the regular 2720 non-RAID, or an M1015. Lesson learned!)
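For anyone poking at the same thing, these are the commands I use to watch and restart a stuck scrub — nothing exotic:

code:
# check scrub progress and throughput
zpool status -v media-pool
# cancel the stuck scrub and kick off a fresh one
sudo zpool scrub -s media-pool
sudo zpool scrub media-pool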

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

G-Prime posted:

Makes total sense now, no worries.

Things like your current situation make me really think hard about how I'm going to deal with that problem. Odds are pretty good that I'm just going to resize by buying replacement drives over time, staggered, and let the pool resilver as they're replaced. Then lather, rinse, repeat on the next size up, until the end of time or until storage tech improves dramatically.

That's how I've been managing my pool. The only regret I have is that it's two vdevs of 4x2TB and 4x4TB drives, both raidz1. It does make expanding easier, since I'll probably swap the 2TB drives for 4TB or 6TB ones sometime next year, but I worry about losing two disks from the same vdev while resilvering, since they're all disks of the same age.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

G-Prime posted:

That's why I want to do it staggered. Every time you replace a disk (as I understand it), it'll resilver, and use the same amount of space as the old disk. Once you replace them all, it should expand to fill that space neatly. Yeah, resilvers are stressful on the array, but because ZFS knows what data is missing, it should only read what's needed to fill the new drive, rather than the complete contents of the array. Also, I've reached the conclusion that there's never a reason to do z1. I know that it's too late for you to change your mind on that, obviously, but I'm going to be looking toward the future, and may just go z3 for as much of a safety net as possible.

Yeah, my oldest disks in the pool are now at around 3 years of power-on time. All WD Reds, sitting on a dedicated HBA. I don't know that I'd do z2 on a 4-disk vdev, and I only did 4 disks initially because it fit tidily on my 4-port controller and I only needed what 4x2TB gave me at the time. Ideally I'd jump to a single 8-disk vdev and use z2 or z3, but I'd need at least 4TB disks just to hold my existing data, to say nothing of expanding capacity.

This way, I can refresh half the disks every two years or thereabouts, disk capacity is outpacing my storage growth needs, and even with some disasters relating to bad cables and a dead controller, I haven't lost any data since I built the pool. Next hardware refresh I'll probably even use ECC RAM!

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
A case with 12 3.5" drive bays, hotswap or not, would be my perfect case right now. I can slap my disc drive in an external enclosure, and stick SSD drives wherever. But 12 3.5" and two 5.25" bays would fill my needs perfectly for the next several years. I've got 9 3.5" drives, two SSDs, and a CD/DVD recorder in my current case, but airflow sucks and cable management is nonexistent.

Something like the Fractal Design Define R5 would be great if it had just 4 more 3.5" bays.
