fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
In the process of setting up my new NAS, so far so good:
* SMART Short Tests
* SMART Long Tests
* badblocks -b 4096 -ws /dev/adaX ran successfully with zero errors on all drives (took awhile!)

Any other burn in tests I should do?
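For reference, here's roughly the sequence I ran as a script (the drive list and log filenames are just placeholders for however your system enumerates things):

```shell
#!/bin/sh
# Rough burn-in sketch; adjust the device list to your system.
DRIVES="ada0 ada1 ada2 ada3"

# Quick electrical/mechanical check first.
for d in $DRIVES; do
    smartctl -t short /dev/$d
done
sleep 300   # short tests only take a couple of minutes

# Full surface scan via SMART (runs in the background on the drive itself).
for d in $DRIVES; do
    smartctl -t long /dev/$d
done

# Destructive four-pass write+read pattern test, one drive at a time.
# WARNING: -w wipes the drive completely.
for d in $DRIVES; do
    badblocks -b 4096 -ws -o $d.badblocks.log /dev/$d
done

# Afterwards, check the attributes that actually matter.
for d in $DRIVES; do
    smartctl -a /dev/$d | grep -E 'Reallocated|Pending|Uncorrect'
done
```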

When FreeNAS creates a pool does it use the UUIDs of the disks? Just wanted to make sure to avoid an issue like https://askubuntu.com/questions/801511/zfs-disk-label-changed

Do I need to do the zpool export / zpool import after creating the pool in FreeNAS to ensure that?

When it comes time to transfer over data from my old NAS, should I go the zfs send / receive route? Or rsync?


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Enos Cabell posted:

Is there a thread recommended brand for these? My drive count is at the point where I'm going to need to start using them soon, but I worry about blowing up my drives with poorly made splitters.

I was thinking of the Startech ones personally.

BlankSystemDaemon
Mar 13, 2009



fletcher posted:

In the process of setting up my new NAS, so far so good:
* SMART Short Tests
* SMART Long Tests
* badblocks -b 4096 -ws /dev/adaX ran successfully with zero errors on all drives (took awhile!)

Any other burn in tests I should do?

When FreeNAS creates a pool does it use the UUIDs of the disks? Just wanted to make sure to avoid an issue like https://askubuntu.com/questions/801511/zfs-disk-label-changed

Do I need to do the zpool export / zpool import after creating the pool in FreeNAS to ensure that?

When it comes time to transfer over data from my old NAS, should I go the zfs send / receive route? Or rsync?
The problem of disk labels changing is unique to Linux, because of the strange way device enumeration is implemented to stay compatible with the BIOS when a floppy is connected.
FreeBSD doesn't have this problem, and can easily distinguish between disks by-id, by GPT label, by glabel (if you label them manually), or by the (a)da devices via CAM.
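Manual labelling is basically a one-liner per disk, if you want it. A sketch, with made-up label names:

```shell
# GPT labels: set at partition creation (or retroactively with gpart modify).
gpart create -s gpt da0
gpart add -t freebsd-zfs -l tank-slot0 da0   # appears as /dev/gpt/tank-slot0

# glabel: a generic label written to the provider's last sector.
glabel label tank-slot1 da1                  # appears as /dev/label/tank-slot1

# Then build the pool on the labels instead of the raw device nodes:
zpool create tank mirror gpt/tank-slot0 label/tank-slot1
```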


In other news, I've finally got some of the disks that I bought used, and here are some useless numbers. I'm satisfied, since the sequential write speed is more than the 200MB/s that the Fibre Channel HBAs I have can achieve.
One disk stood out:
pre:
enc@n5006048005f1ebbe/type@0/slot@6/da9
	512         	# sectorsize
	2000398934016	# mediasize in bytes (1.8T)
	3907029168  	# mediasize in sectors
	0           	# stripesize
	0           	# stripeoffset
	243201      	# Cylinders according to firmware.
	255         	# Heads according to firmware.
	63          	# Sectors according to firmware.
	SEAGATE ST32000444SS	# Disk descr.
	9WM2TYLC0000C119ATWR	# Disk ident.
	id1,enc@n5006048005f1ebbe/type@0/slot@6	# Physical path
	No          	# TRIM/UNMAP support
	7200        	# Rotation rate in RPM
	Not_Zoned   	# Zone Mode

Asynchronous random reads:
	sectorsize:       131 ops in   72.047167 sec =        2 IOPS
	4 kbytes: <this is where the test failed, and the disk started throwing massive amounts of errors>
You'll have to excuse the weird name, that's how FreeBSD handles SAS enclosure information published via SES.


Also, I have some potentially big news for the people who like low-power servers as much as I do.
The new x86/64 architecture optimization manual has been published, and the PDF mentions both SHA-NI and a new set of Galois field instructions.
What's important about that, you ask? Well, SHA-NI will let anyone do SHA512 checksums for ZFS with far less CPU overhead. More importantly, the new Galois field instructions are described elsewhere in the PDF as including ways to accelerate Reed-Solomon erasure codes, which are what RAID6 and ZFS' RAIDz2 and RAIDz3 use to achieve P+Q and P+Q+R distributed parity with striping!


Something else I've discovered is that mbuffer's network I/O mode uses TCP and can't be forced to use UDP, so if you want to avoid the ~8% overhead at an MTU of ~1500, you need to use netcat.
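For anyone wanting to try it, the mbuffer pipeline looks something like this (host, port, buffer sizes, and dataset names are all made up):

```shell
# Receiver: listen with mbuffer (TCP) and feed zfs receive.
mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tank/backup

# Sender: stream a recursive snapshot into mbuffer pointed at the receiver.
zfs send -R tank@migrate | mbuffer -O receiver-host:9090 -s 128k -m 1G
```

The big buffer on each end is the whole point: it keeps the disks streaming even when the network hiccups.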

BlankSystemDaemon fucked around with this message at 21:01 on May 22, 2020

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

D. Ebdrup posted:

The problem of disk labels changing is unique to Linux, because of the strange way device enumeration is implemented to stay compatible with the BIOS when a floppy is connected.
FreeBSD doesn't have this problem, and can easily distinguish between disks by-id, by GPT label, by glabel (if you label them manually), or by the (a)da devices via CAM.


In other news, I've finally got some of the disks that I bought used, and here are some useless numbers. I'm satisfied since the sequential write speed is more than the 200MBps that the FiberChannel HBAs I have can achieve.
One disk stood out:
pre:
enc@n5006048005f1ebbe/type@0/slot@6/da9
	512         	# sectorsize
	2000398934016	# mediasize in bytes (1.8T)
	3907029168  	# mediasize in sectors
	0           	# stripesize
	0           	# stripeoffset
	243201      	# Cylinders according to firmware.
	255         	# Heads according to firmware.
	63          	# Sectors according to firmware.
	SEAGATE ST32000444SS	# Disk descr.
	9WM2TYLC0000C119ATWR	# Disk ident.
	id1,enc@n5006048005f1ebbe/type@0/slot@6	# Physical path
	No          	# TRIM/UNMAP support
	7200        	# Rotation rate in RPM
	Not_Zoned   	# Zone Mode

Asynchronous random reads:
	sectorsize:       131 ops in   72.047167 sec =        2 IOPS
	4 kbytes: <this is where the test failed, and the disk started throwing massive amounts of errors>
You'll have to excuse the weird name, that's how FreeBSD handles SAS enclosure information published via SES.


Also, I have some potentially big news for the people who like low-power servers as much as I do.
The new x86/64 architecture optimization manual has been published, and the PDF mentions both SHA-Ni and a new set of Galois fields instructions.
What's important about that, you ask? Well, SHA-Ni will let anyone do SHA512 checksums for ZFS without CPU overhead. More importantly, the new Galois fields are described elsewhere in the PDF as including ways to accelerate Reed-Solomon erasure codes, which are what RAID6 and ZFS' RAIDz2 and RAIDz3 use to achieve P+Q and P+Q+R distributed parity with striping!


Something else I've discovered is that mbuffers network I/O mode uses TCP and can't be forced to use UDP, so if you want to avoid the ~8% overhead at ~1500MTU, you need to use netcat.

I understood some of those words.

BlankSystemDaemon
Mar 13, 2009



Smashing Link posted:

I understood some of those words.
Did you understand that 2 IOPS is perhaps a bit too few IOPS? :v:

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal
https://www.amazon.com/Synology-bay-DiskStation-DS218-Diskless/dp/B075N1BYWX

DS218+ is $273 on Amazon right now.

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof

IOwnCalculus posted:

If you aren't limited on drive bays / physical space, and you can hit the drive redundancy / capacity you need, go for it. They'll be somewhat slower than higher capacity drives but not enough to notice for your purposes.

Eh I mean I only NEED 2TB but I'm aiming for 4. I have 8 sata ports on some old AMD board I don't even remember the model of.
I have 24 good drives with low hours on them. I'm okay with just getting a few archival file transfers. I'm not even trying to stream from it.
I just don't want to wait a week to transfer 2TB. At that point I'll loving throw some hardware on a credit card and pay it off in a month or two, rather than babysit some bullshit.

I've also got a dozen 250GB SATA SSD's that I planned on using for server drives but that's not going to happen any time soon so maybe I should just try and sell it all and buy a consumer product to float me.


This is not a bad deal but $20 off isn't good enough for me to pull the trigger on it, especially if I'd have to buy disks. That money would be better spent on a couple of 4TB NAS drives.

GnarlyCharlie4u fucked around with this message at 02:35 on May 23, 2020

IOwnCalculus
Apr 2, 2003





GnarlyCharlie4u posted:

Eh I mean I only NEED 2TB but I'm aiming for 4. I have 8 sata ports on some old AMD board I don't even remember the model of.
I have 24 good drives with low hours on them. I'm okay with just getting a few archival file transfers. I'm not even trying to stream from it.
I just don't want to wait a week to transfer 2TB.

Nah you'll be fine. At worst I'd expect them to be about half the performance of a modern 8TB+ drive, and you're going to have at least some striping benefits for any RAID above 1.

Echophonic
Sep 16, 2005

ha;lp
Gun Saliva
First, a story. I was tinkering with my N40L, trying to dampen some of the noise it makes and managed to drop a drive with 8 years of uptime. Dead square on the corner, cracked a screw mount on the caddy, and it landed flat on its top. I know changing orientation's not really a thing with modern drives, but yeah. Anyway, it loving works and has zero complaints about the whole experience so far. Good job HGST.

It and its equally old friend are getting upgraded for unrelated reasons. I know the N40L only has SATA2, but I don't see why it wouldn't take 12TB IronWolves.

I'm considering upgrading my Proliant N40L after a long run. Is there a solid recommendation for a new spot to go? I was eyeing the Gen10, but apparently people aren't super pleased with the Opteron in there for Plex or, well, much of anything. The Gen10 Plus has some heft with that Xeon, but it's still a PITA to get barebones and doesn't seem like booting from an SSD is really in the cards for that one, given the lack of SATA ports. I'm currently running DrivePool, so something I can chuck Windows Server onto is preferred for migration ease.

Echophonic fucked around with this message at 05:23 on May 23, 2020

BlankSystemDaemon
Mar 13, 2009



Echophonic posted:

First, a story. I was tinkering with my N40L, trying to dampen some of the noise it makes and managed to drop a drive with 8 years of uptime. Dead square on the corner, cracked a screw mount on the caddy, and it landed flat on its top. I know changing orientation's not really a thing with modern drives, but yeah. Anyway, it loving works and has zero complaints about the whole experience so far. Good job HGST.
That's more luck than skill, right there. :D

LordOfThePants
Sep 25, 2002

I put a couple shucked Easystores into my server and am attempting to run initial testing.

When I try to run a SMART conveyance test, it says it's not supported. I've run the short test without issue and the long test is running now.

They're plugged into my R720's backplane and I'm using a Perc H710 flashed to IT Mode so it just passes through the disks.

Anything to be concerned with about that?

BlankSystemDaemon
Mar 13, 2009



Not all disks support S.M.A.R.T. conveyance testing, so you probably shouldn't be concerned unless you know for sure they're supposed to have it.
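You can check what a given disk actually claims to support before worrying (da0 here is just an example device node):

```shell
# List the drive's self-test capabilities;
# look for "Conveyance Self-test supported" in the output.
smartctl -c /dev/da0

# If it is supported, it's a short test meant to catch shipping damage:
smartctl -t conveyance /dev/da0

# Poll the self-test log once it's done:
smartctl -l selftest /dev/da0
```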

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

D. Ebdrup posted:

The problem of disk labels changing is unique to Linux, because of the strange way device enumeration is implemented to stay compatible with the BIOS when a floppy is connected.
FreeBSD doesn't have this problem, and can easily distinguish between disks by-id, by GPT label, by glabel (if you label them manually), or by the (a)da devices via CAM.


Ubuntu creates symbolic links from the WWN to the dev device. The WWN never changes, so if you use it in your pool, that device will always be there as long as it's connected (no Linus Tech "Tips" moment where you find out the hard way that 'sdb' is not always going to be 'sdb').

pre:
       NAME                          STATE     READ WRITE CKSUM
        neriak                        ONLINE       0     0     0
          raidz2-0                    ONLINE       0     0     0
            wwn-0x5000cca267ee4500    ONLINE       0     0     0
            spare-1                   ONLINE       0     0     0
              wwn-0x5000cca24dc0b963  ONLINE       0     0     0
              wwn-0x5000cca273efb9ce  ONLINE       0     0     0
            wwn-0x5000cca26ae68644    ONLINE       0     0     0
            wwn-0x50014ee210e54a18    ONLINE       0     0     0
            wwn-0x50014ee211437a70    ONLINE       0     0     0
        logs
          sdh64                       ONLINE       0     0     0
        cache
          sdh1                        ONLINE       0     0     0
        spares
          wwn-0x5000cca273efb9ce      INUSE     currently in use

errors: No known data errors
edit: sure, I use sdh partitions, but those are log and cache, and ZFS can always run without them.

BlankSystemDaemon
Mar 13, 2009



Running a log that isn't mirrored across two devices is one of the few ways to make ZFS throw away your data even without a catastrophic hardware failure, but I assume you know that and just like living dangerously? :P

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
I'm having second thoughts on the SilverStone CS381; the cooling leaves a lot to be desired. With my N40L the drive temps were in the low 30s with low activity. With 8 drives, the CS381 is in the mid 40s. Any sort of activity and it's 50+. This is with the 4x 120mm fans at full speed (2 intake on the side, 2 exhaust on the rear). It's just not designed to get airflow across the drives. I think with some modding I could add a few more fans and get temperatures to an acceptable level, but that shouldn't be required on a $300+ case.

Seems like there aren't many 8 bay SAS cases like this on the market, any others worth considering? I'd be willing to sacrifice the hot swap capability for better cooling.

BlankSystemDaemon
Mar 13, 2009



fletcher posted:

I'm having second thoughts on the SilverStone CS381, the cooling leaves a lot to be desired. With my N40L the drive temps were in the low 30s with low activity. With 8 drives in the CS381 is in the mid 40s. Any sort of activity and it's 50+. This is with the 4x 120mm fans at full speed (2 intake on side, 2 exhaust on the rear). It's just not designed well to get airflow across the drives. I think with some modding I could add a few more fans and get temperatures to an acceptable level, shouldn't be required on a $300+ case though.

Seems like there aren't many 8 bay SAS cases like this on the market, any others worth considering? I'd be willing to sacrifice the hot swap capability for better cooling.
To get a solution with fans behind the disks that pull air over the drives, with the backplane and connectors placed so they don't obstruct things and through-holes in the PCB for air to flow through, you're kinda looking at getting a 6x 5.25" case of some description and using some of these.
Most of these cases are MicroATX, not MiniITX.

EDIT: To add a bit more to this, consumer cases that claim to be NAS cases aren't actually tested for airflow, because ANY understanding of fluid dynamics (which airflow falls under, though I probably wouldn't recommend pouring water in a case) makes it quite plain that air takes the path of least resistance, which is never across the drives unless the case is designed that way.

BlankSystemDaemon fucked around with this message at 19:57 on May 23, 2020

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I don't understand why those 4/8 bay SAS expansion enclosures are so expensive. It's just a backplane, a SAS port, some power, and fans.

I was forced due to space constraints to go with a shallow depth network cabinet at home (utility room). I'm halfway tempted to diy fabricate something with a rack mount shelf.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

Moey posted:

I don't understand why those 4/8 bay SAS expansion enclosures are so expensive. It's just a backplane, a SAS port some power and fans.

I was forced due to space constraints to go with a shallow depth network cabinet at home (utility room). I'm halfway tempted to diy fabricate something with a rack mount shelf.

The 8 bay I picked up from PC Pitstop or whatever was $200. That felt fair.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

TraderStav posted:

The 8 bay I picked up from PC Pitstop or whatever was $200. That felt fair.

Whoa, I was thinking they were all in the 500+ range.

Edit:

Is there something cheaper?

https://www.pc-pitstop.com/8-bay-expander-tower-trayless

Moey fucked around with this message at 20:41 on May 23, 2020

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

Moey posted:

Whoa, I was thinking they were all in the 500+ range.

Edit:

Is there something cheaper?

https://www.pc-pitstop.com/8-bay-expander-tower-trayless


https://www.pc-pitstop.com/scsat84xb

This is the one that I got, sorry if it doesn't meet the requirements you were intending, I'm pretty ignorant about what is all out there. I paired this with an LSI card in my UnRaid server and it worked like a charm for SATA drives. Pretty sure it works just the same with SAS.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

EVIL Gibson posted:

Ubuntu creates symbolic links from the WWN to the dev device. The WWN never changes, so if you use it in your pool, that device will always be there as long as it's connected (no Linus Tech "Tips" moment where you find out the hard way that 'sdb' is not always going to be 'sdb').

pre:
       NAME                          STATE     READ WRITE CKSUM
        neriak                        ONLINE       0     0     0
          raidz2-0                    ONLINE       0     0     0
            wwn-0x5000cca267ee4500    ONLINE       0     0     0
            spare-1                   ONLINE       0     0     0
              wwn-0x5000cca24dc0b963  ONLINE       0     0     0
              wwn-0x5000cca273efb9ce  ONLINE       0     0     0
            wwn-0x5000cca26ae68644    ONLINE       0     0     0
            wwn-0x50014ee210e54a18    ONLINE       0     0     0
            wwn-0x50014ee211437a70    ONLINE       0     0     0
        logs
          sdh64                       ONLINE       0     0     0
        cache
          sdh1                        ONLINE       0     0     0
        spares
          wwn-0x5000cca273efb9ce      INUSE     currently in use

errors: No known data errors
edit: sure, I use sdh but those are logs and cache and zfs can always run without them.

I just use by-id. I like it because you can tell which drive is which since the id is composed of model and serial number.

pre:
therms@ehud:~⟫ sudo zpool status tank1
  pool: tank1
 state: ONLINE
  scan: resilvered 2.58T in 6h9m with 0 errors on Thu May 21 19:55:26 2020
config:

        NAME                                    STATE     READ WRITE CKSUM
        tank1                                   ONLINE       0     0     0
          raidz1-0                              ONLINE       0     0     0
            ata-WDC_WD80EZZX-11CSGA0_VKJ8KSXX   ONLINE       0     0     0
            ata-WDC_WD80EMAZ-00WJTA0_1EHGU3AZ   ONLINE       0     0     0
            ata-WDC_WD100EMAZ-00WJTA0_JEKBAWEZ  ONLINE       0     0     0

errors: No known data errors
That reminds me that this pool started out built on 1TB drives, then I expanded it with 3TB, then 5TB, then 8TB, and now, as you can see I've started moving it to 10TB drives.
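If you're starting from scratch, you can hand zpool the by-id paths directly, and an existing pool built on sdX names can be re-imported by id. Something like this (the pool name and device ids are made up):

```shell
# Create a mirror using stable by-id names; /dev/sdX never enters the pool config.
zpool create tank1 mirror \
    /dev/disk/by-id/ata-WDC_WD80EZZX-11CSGA0_EXAMPLE1 \
    /dev/disk/by-id/ata-WDC_WD80EZZX-11CSGA0_EXAMPLE2

# For a pool that was originally created against sdX names,
# export it and re-import it with by-id device paths:
zpool export tank1
zpool import -d /dev/disk/by-id tank1
```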

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

D. Ebdrup posted:

To get a solution with fans behind the disk to pull air over the drives, with backplane and connectors placed so they don't interrupt things and through-holes in the PCB for air to flow through, you're kinda looking at getting a 6x 5.25" case of some description and using some of these.
Most of these cases are MicroATX, not MiniITX.

EDIT: To add a bit more to this, consumer cases that report to be NAS cases aren't actually tested for airflow, because ANY understanding of fluid dynamics (which airflow fall under, even I probably wouldn't recommend pouring water in a case) will make it quite plain that air takes the path of least resistance which is never across the drives unless it's designed that way.

Thanks for the suggestions! That seems to open up a lot more possibilities, though it seems to mostly be rack mount stuff. Are there non-rackmount chassis that fit those Icy Dock backplanes, before I consider going down the rackmount route? Also, why am I having a hard time finding that thing for sale in the US?

edit: This looks like it would fit the bill and supports MiniITX: https://www.monoprice.com/product?p_id=10842

fletcher fucked around with this message at 22:02 on May 23, 2020

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

D. Ebdrup posted:

Running a log that isn't mirrored across two devices is one of the few ways to make ZFS throw away your data even without a catastrophic hardware failure, but I assume you know that and just like living dangerously? :P

Reading more about it, the SLOG is mainly for VMs, making sure they don't get corrupted; not really for static fileserver files.

I'll just remove it then.
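Removal should just be (using the device names from my status output, if I've got them right):

```shell
# Log and cache vdevs can be removed from a live pool without downtime.
zpool remove neriak sdh64   # the log device
zpool remove neriak sdh1    # the cache device, if I decide that should go too
```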

BlankSystemDaemon
Mar 13, 2009



TraderStav posted:

https://www.pc-pitstop.com/scsat84xb

This is the one that I got, sorry if it doesn’t meet the requirements you were intending, I’m pretty ignorant about what is all out there. I paired this with an LSI card in my UnRaid server and it worked like a charm for Sata drives. Pretty sure it works just the same with SAS
I wouldn't mind seeing how one of these is wired internally before buying one, though. They look like they might be sanely designed, but who really knows?

fletcher posted:

Thanks for the suggestions! That seems to open up a lot more possibilities, seems to mostly be rack mount stuff though. Are there non-rackmount chassis' that fit those Icy Dock backplanes before I consider going down the rackmount route? Also why am I having a hard time finding that thing for sale in the US?

edit: This looks like it would fit the bill and supports MiniITX: https://www.monoprice.com/product?p_id=10842
RaidSonic is roughly equivalent to IcyDock, but they're not quite the same.
For example, IcyDock has one with 5 vertical disks, so you get up to 10 disks in that cabinet if you're fine with not having a mITX form factor.
It's hard to tell whether the backplane has through-holes in it for air, though.

EVIL Gibson posted:

reading more about it, the slog is mainly for vm making sure they don't get corrupted; not really for static fileserver files

I'll just remove it then.
A separate intent log is merely for ensuring that writes issued synchronously (random or not) get flushed to ANY disk as fast as possible, where they can then be rearranged by ZFS to be written sequentially. That's also why they never need to be as big as people think they need to be, and why write endurance is much more important than size or anything else.
VMs are only some of the things that do synchronous writes. Databases do it too (if they're good, at any rate - looking at you, not-postgres), as does NFS depending on the user-space application using the share (so, for example, databases on an NFS share). Samba on Unix-likes can be synchronous, but not by default (you need to flip 'strict sync' to yes in the config). iSCSI via ctld, irrespective of whether it's a file extent or a device extent backed by a zvol, can and often is also synchronous.
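If anyone does want a SLOG done right, adding a mirrored one and tuning sync behaviour per dataset looks roughly like this (device and dataset names are hypothetical):

```shell
# Attach a mirrored SLOG; small, high-endurance SSDs are ideal.
zpool add tank log mirror ada4 ada5

# Sync behaviour can be inspected and tuned per dataset:
zfs get sync tank/vms
zfs set sync=always tank/vms        # force every write through the ZIL
zfs set sync=disabled tank/scratch  # dangerous: lies to applications about flushes
```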

Moey
Oct 22, 2010

I LIKE TO MOVE IT

TraderStav posted:

https://www.pc-pitstop.com/scsat84xb

This is the one that I got, sorry if it doesn’t meet the requirements you were intending, I’m pretty ignorant about what is all out there. I paired this with an LSI card in my UnRaid server and it worked like a charm for Sata drives. Pretty sure it works just the same with SAS

Is the only difference that you mount the drives internally instead of hot-swapping them into a backplane?

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

D. Ebdrup posted:

I wouldn't mind seeing how one of these are wired internally, before buying them though. They look like they might be sanely designed, but who really knows?

It's pretty cut and dried: two ports for the SFF connections externally. Internally it's 2x4 SATA connections and power for them. That's about it. I don't plan to take my server down any time soon, or else I'd take pictures for you. What specifically are you looking for? Perhaps I can help recall.

Moey posted:

Is the only difference that you mount the drives internally instead of hot-swapping them into a backplane?

Yes, no hotswapping. It basically is just as I described above, PSU and SATA connections.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Ugh.

Bought 8 Easystores.

6/8 perfect.

2/8 clearly repacked returns. Both obviously shucked. One with a 300GB drive inside. The other has a 12TB but wasn't even put back together.

Jesus Christ, Best Buy.

BlankSystemDaemon
Mar 13, 2009



TraderStav posted:

It's pretty cut and dry, two ports for the SDF connections externally. Internally it's 2x4 SATA connections and power for them. That's about it. I don't plan to take my server down any time soon or else I'd take pictures for you. What specifically are you looking for, perhaps I can help recall.
Mostly I'm wondering if the PCB has through-holes for airflow so the air moves across the disks and not out to the side of the case.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

D. Ebdrup posted:

Mostly I'm wondering if the PCB has through-holes for airflow so the air moves across the disks and not out to the side of the case.

Not sure if I have the answer. The next time I take it down I'll take a picture, I know that won't help you right now but it's a bit of a PITA to get to.

It's pretty damned open in there; it's basically the PSUs and the backplane card (which I believe is perpendicular to the back, so the fans are blowing pretty much straight on the disks). There are no side vents or anything.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Good news. A few hours of phone calls and driving later, and I've got 8 legit drives.



We doin, boyz!

Duck and Cover
Apr 6, 2007

Toshimo posted:

Good news. A few hours of phone calls and driving later, and I've got 8 legit drives.



We doin, boyz!

More drives of higher capacity! gently caress, 5 8TB drives just isn't enough.

How detrimental is using shucked WD drives in a 12 bay set up? The reds are rated for 8 bays.

Duck and Cover fucked around with this message at 01:34 on May 25, 2020

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

The drives don't know how many other drives are in there with them. The only thing that matters is how hot they get and how much vibration they see. I've always thought that rated-by-number-of-bays thing was dumb.

I've had like 20 WD drives in one case for years and I've only had one failed drive. Of course, that's just one guy in one specific circumstance, but it does tell you that you're probably not automatically ruining poo poo by putting 9 Reds in an enclosure.

Hughlander
May 11, 2005

Buff Hardback posted:

Honestly it's a bit sad that the Rosewill cases got discontinued (or forced out by corona), because for the price they were probably the most drives per dollar.

I mailed them on Friday. They said they're out of stock, not discontinued, and they'll get more stock at the end of June.

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast

Thermopyle posted:

The drives don't know how many other drives are in there with them. The only thing that matters is how hot they get and how much vibration they see. I've always thought that rated-by-number-of-bays things was dumb.

I've had like 20 WD drives in one case for years and I've only had one failed drive. Of course, that's just one guy in one specific circumstance, but it does tell you that you're probably not automatically ruining poo poo by putting 9 Reds in an enclosure.

i want to say it's more of a use case barrier

a home user, in a reused pc chassis or a synology or qnap or what have you, is very unlikely to ever have more than 8 disks in a single box; it's just not typically relevant to home use

however, if you have a couple of 2U dell storage rackmount chassis with 12 disks in them each, that's clearly coming from a completely different functional use case, and in a rack at bigbiz, inc. probably sees 10-100x the workload of the 8 bay synology in a home

so yeah, feel free to augment to as many disks as you like in array size, just don't jump array utilization to meet the typical pattern of the actual DB backend or whatever the actual business scope of use is

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Sniep posted:

i want to say its more of a use case barrier

a home user is very unlikely, in a reused pc chassis or a synology or qnap or whathave you, ever have more than 8 disks in a single box, it's just not relevant to home use

now if you have a couple 2U dell storage rackmount with 12 disks in them each, that's clearly a completely different use case, and in a rack at bigbiz, inc. probably sees 10-100x the workload of the 8 bay synology in a home

so yeah feel free to augment to as many disks as you like in array size, just dont jump array utilization to meet the typical pattern for such scope of installs

Yeah, I think that might be right.

Duck and Cover
Apr 6, 2007

Thermopyle posted:

The drives don't know how many other drives are in there with them. The only thing that matters is how hot they get and how much vibration they see. I've always thought that rated-by-number-of-bays things was dumb.

I've had like 20 WD drives in one case for years and I've only had one failed drive. Of course, that's just one guy in one specific circumstance, but it does tell you that you're probably not automatically ruining poo poo by putting 9 Reds in an enclosure.

Yeah I know. Given that you'd be spending like $120 more per drive (this may be wrong, I'm too lazy to check) to get something rated for larger enclosures, I was curious if it would actually be worth it. While anecdotal information isn't perfect, I don't expect to find anything better. I suspect it wouldn't be worth it at normal prices.

Sniep posted:

i want to say its more of a use case barrier

a home user is very unlikely, in a reused pc chassis or a synology or qnap or whathave you, ever have more than 8 disks in a single box, it's just not typically relevant to home use

however, now, if you have a couple 2U dell storage rackmount chassis with 12 disks in them each, that's clearly coming from a completely different functional use case, and in a rack at bigbiz, inc. probably sees 10-100x the workload of the 8 bay synology in a home

so yeah feel free to augment to as many disks as you like in array size, just dont jump array utilization to meet the typical pattern for the actual DB backend or whatever actual business scope of use

Synology and QNAP both have large desktop systems (12-bay, and 12-bay + 4 SSD). I could certainly see a company rating their drives in such a manner, since measuring workload is harder.

Duck and Cover fucked around with this message at 06:59 on May 25, 2020

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal

Charles posted:

B&H has the WD reds on sale today in the deal zone. Remember, don't get 6tb or below

This deal is back again already. Remember, the 2-6TB models are SMR.

BlankSystemDaemon
Mar 13, 2009



Any case that uses rubber fixtures to hold the disks in place is also going to massively reduce the vibrations that the drives experience.

Here's a home-grown example:

The blue pieces of rubber hold the disk in place when the caddy is held closed by the dark blue piece of plastic that snaps in place.

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS

D. Ebdrup posted:

Any case that uses rubber fixtures to hold the disks in place is also going to massively reduce the vibrations that the drives experience.

Here's a home-grown example:

The blue pieces of rubber hold the disk in place when the caddy is held closed by the dark blue piece of plastic that snaps in place.

i put all my storage servers on a lab grade vibration plate to sort the weak from the strong


HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

D. Ebdrup posted:

Any case that uses rubber fixtures to hold the disks in place is also going to massively reduce the vibrations that the drives experience.

Here's a home-grown example:

The blue pieces of rubber hold the disk in place when the caddy is held closed by the dark blue piece of plastic that snaps in place.

I have the drives in my PC suspended fully with elastic, old school style. It works like nothing else, and made an enormous difference over the standard grommet/long screw combo. So quiet.
