BlankSystemDaemon
Mar 13, 2009



SwissArmyDruid posted:

If you're going with unreliable storage, you could do worse than an RPi + one of those USB 3 to SATA doodads? The cheapo ones can power SSDs, but you'll need an external power supply for anything that spins, because I don't think any flavor of RPi has USB-PD yet.

edit: Oh right, I came into this thread for a reason. Looking for a stopgap measure until 10 Gbit comes down in price more, what's a good cheapo option that I haven't thought of, given that the above is what I'm using now?
You can get 8Gbps Fibre Channel adapters for way under $50 on eBay.

lordfrikk
Mar 11, 2010

Oh, say it ain't fuckin' so,
you stupid fuck!
I have two WD Red 8TB in RAID1, and when doing rsync -avz of several hundred GB of data, over time it just slows down with every file until it literally stops for minutes at a time. Even after stopping rsync, the disks see activity for minutes afterwards. What could be the reason? Is that some buffer thing?
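
For reference, one quick way to see whether this is just the Linux page cache filling up and flushing (a sketch, assuming a Linux-based NAS; the sysctls are the stock writeback knobs):

code:
# Watch how much dirty data is queued for writeback while rsync runs
watch -n1 'grep -E "Dirty|Writeback" /proc/meminfo'

# Current writeback thresholds, as a percentage of RAM
sysctl vm.dirty_background_ratio vm.dirty_ratio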

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

SwissArmyDruid posted:

edit: Oh right, I came into this thread for a reason. Looking for a stopgap measure until 10 Gbit comes down in price more, what's a good cheapo option that I haven't thought of, given that the above is what I'm using now?

How cheap are you looking? You can get 10Gb fiber cards for <$30 any day on eBay, SFP+'s for $5, and a 4-port switch for $130. And LC-LC cables are dirt cheap, too.

lordfrikk
Mar 11, 2010

Oh, say it ain't fuckin' so,
you stupid fuck!
Really getting worried about those WD Red 8TB drives. Is slowing down over time during copying something SMR can be causing?

Incidentally, I installed OMV to a thumb drive and it seems like it actually imploded because it was one of the overheating SanDisk ones, so I am replacing it today with a regular SSD. I wonder if these two issues are related.

SwissArmyDruid
Feb 14, 2014

by sebmojo

DrDork posted:

How cheap are you looking? You can get 10Gb fiber cards for <$30 any day on eBay, SFP+'s for $5, and a 4-port switch for $130. And LC-LC cables are dirt cheap, too.

"comes standard on the next motherboard I buy when I build a new system" cheap.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

SwissArmyDruid posted:

"comes standard on the next motherboard I buy when I build a new system" cheap.

Supermicro does have a whole line of X11 motherboards (admittedly mostly for embedded stuff) with SFP+ slots on them, and used SFP+ modules are, as said, all of $5 on eBay.

But if you mean something more pedestrian / consumer grade, then the best you can hope for is 2.5GbE that some higher-end boards are starting to slap on there. Problem there is that there aren't any cheap 2.5GbE switches--QNAP has the cheapest I've seen (QSW-1105-5T) at an estimated $150, but it's not out until the end of the year. Past that you're stuck with either getting a switch with only 1 or 2 10GbE links for like $200, or getting 8+ links for $500 ish.

In either event, going fiber is frankly not much more expensive at all than 2.5GbE for two links, and considerably cheaper for more than 2 links. And you get 4x the speed.

BlankSystemDaemon
Mar 13, 2009



10GbE isn't gonna come standard unless you go for server boards, which might have SFP+ cages that you can plug modules into, but even that's risky considering the only benefit is the much lower latency, so it's only useful for boards intended to be used for NVMe over Fabrics.

EDIT: orz

BlankSystemDaemon fucked around with this message at 14:36 on Jun 29, 2020

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Not sure what exactly is "risky"? Like "risky" in the sense of "I wouldn't bet on your next 'board having built in >1GbE networking because the only ones that do tend to be $400+ server boards and you're probably not willing to pay that much if you're asking how to avoid paying for a $30 NIC" in which case, yeah, I agree with you 100%.

From a system side, fiber vs copper doesn't really matter all that much in normal use. Though getting drivers for some of the older fiber cards under Windows can be a fun challenge at times, since they're from like 2005. But they're super cheap!

BlankSystemDaemon
Mar 13, 2009



What I meant was, there's very little point in aiming for SFP+ unless you know you're going to be block-, file-, or object-sharing stuff stored on NVMe, or if your dataset fits in ARC and on NVMe SLOG.
In that sense it's risky, since most people in this thread aren't going to be doing that - however much I wish I could afford it, even I'm not doing it.
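
For anyone wondering what that looks like in practice, here's a rough ZFS sketch (the pool name and device are made up):

code:
# Add an NVMe partition as a separate intent log (SLOG) for sync writes;
# the pool name "tank" and the device are hypothetical
zpool add tank log /dev/nvme0n1p1

# Confirm the log vdev shows up and watch per-vdev activity
zpool iostat -v tank 5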

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

D. Ebdrup posted:

What I meant was, there's very little point in aiming for SFP+ unless you know you're going to be block-, file-, or object-sharing stuff stored on NVMe, or if your dataset fits in ARC and on NVMe SLOG.
In that sense it's risky, since most people in this thread aren't going to be doing that - however much I wish I could afford it, even I'm not doing it.

I don't really understand your mentality here. If you're looking for >1Gb networking, 10Gb fiber is far and away the cheapest option for home use thanks to the plethora of cheap used gear and the existence of a cheap 4-port switch. You can get a switch, cables, 4 NICs, and 4 SFP+s for ~$250 if you hunt around eBay, ~$300-$350 if you just buy whatever happens to be available when you're searching.

$350 won't even buy you a switch for 10GbE copper networking unless you're ok with only 2 ports (yet, at least--hopefully the 4-port QNAP/similar will be out by the end of the year at the $150-$200 price point), and 10GbE NICs tend to be considerably more expensive, too (~$75 right now), so a 4 system setup would be well over $600.

I'll agree with you that, in terms of actual performance, fiber's low latency does enable you to do some fancy stuff with NVMe drives that's hard to do with copper, but most people aren't likely to be doing that, so it's a wash. It's certainly no worse than copper, unless you have a home that's already pre-wired for Cat5 and running fiber lines would be a lot more effort than just plugging poo poo into existing wall ports. In that case I'd probably just eat the cost difference to save myself the hassle.

Rockker
Nov 17, 2010

I started putting together an unraid box and I've got an old drive that showed a ton of errors like this during a preclear:

print_req_error: I/O error, dev sdf, sector 36152
Buffer I/O error on dev sdf, logical block 4519, async page read

so I started a badblocks and the terminal just started scrolling with the blocks that were reported bad

But smartctl shows the health status as OK? Is it because the self-test didn't finish or something? Currently trying a long test. I checked the drive on different cables (that I had known-good drives already running on) with the same results to rule out a cabling issue.

code:
=== START OF INFORMATION SECTION ===
Vendor:               HGST
Product:              HUS726060AL5210
Revision:             A519
Compliance:           SPC-4
User Capacity:        6,001,175,126,016 bytes [6.00 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Logical Unit id:      0x5000cca24236fc9c
Serial number:        xxxxxxxx
Device type:          disk
Transport protocol:   SAS (SPL-3)
Local Time is:        Mon Jun 29 09:49:47 2020 EDT
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Current Drive Temperature:     31 C
Drive Trip Temperature:        85 C

Manufactured in week 18 of year 2015
Specified cycle count over device lifetime:  50000
Accumulated start-stop cycles:  15
Specified load-unload count over device lifetime:  600000
Accumulated load-unload cycles:  1396
Elements in grown defect list: 887

Vendor (Seagate Cache) information
  Blocks sent to initiator = 96048286531584

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:          0       18         0        18     203660        600.900           0
write:         0      717         0       717      71454       6002.081         252
verify:        0        0         0         0      56115          0.000           0

Non-medium error count:     1141

SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background short  Failed in segment -->       6    1946         609656464 [0x3 0x5d 0x1]
# 2  Background short  Failed in segment -->       6    1945         688655448 [0x3 0x5d 0x1]
# 3  Background short  Failed in segment -->       6    1945         230788896 [0x3 0x5d 0x1]

Long (extended) Self-test duration: 49115 seconds [818.6 minutes]
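
For reference, the one-line health status generally only trips on predicted-failure thresholds, so the grown defect list and the failed background self-tests above are the more telling numbers. The follow-up commands would be something like this (same /dev/sdf as in the kernel messages above; run as root):

code:
# Start the long self-test (the ~13.6 hour one quoted above)
smartctl -t long /dev/sdf

# Check the self-test log afterwards; failed segments and first-error LBAs show up here
smartctl -l selftest /dev/sdf

# -x dumps everything for SAS drives, including the grown defect list count
smartctl -x /dev/sdf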

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
For people like me wanting to build out decently performing VM and container infrastructure that has professional-ish levels of performance, 10Gbps is kind of expected for storage, because you’ll be moving at least 2 GBps just casually writing big messages out via Kafka and an RDBMS, and also because you want to vaguely match network storage latency in real environments. But because 10GbE switches of varying quality still run into the hundreds, and you can get InfiniBand adapters and switches for less than half that, it’s mostly a question of software compatibility (if your K8S distribution of choice would support it). Even the rather basic standalone Kafka tests I have are supposed to hit 2 GBps per worker, because it’s a stress test meant to find weak performance spots. It’s very similar to giving game developers big, powerful GPUs so they can make sure that GPUs are the bottleneck rather than CPU or memory.

Fiber vs copper does certainly matter when it comes to power consumption and heat if you’re running several ports full tilt in a closet.
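
For context, the kind of standalone stress test being described is typically something like Kafka's bundled producer perf tool; a sketch with made-up broker, topic, and sizing:

code:
# Hypothetical numbers: ~50M records of 1 KiB with no throttle, so the run
# is limited by the network and disks rather than the tool itself
kafka-producer-perf-test.sh \
  --topic perf-test \
  --num-records 50000000 \
  --record-size 1024 \
  --throughput -1 \
  --producer-props bootstrap.servers=broker1:9092 acks=1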

Less Fat Luke
May 23, 2003

Exciting Lemon

Rockker posted:

I started putting together an unraid box and I've got an old drive that showed a ton of errors like this during a preclear:

print_req_error: I/O error, dev sdf, sector 36152
Buffer I/O error on dev sdf, logical block 4519, async page read

so I started a badblocks and the terminal just started scrolling with the blocks that were reported bad

But smartctl shows the health status as OK? Is it because the self-test didn't finish or something? Currently trying a long test. I checked the drive on different cables (that I had known-good drives already running on) with the same results to rule out a cabling issue.
Your smartctl shows a poo poo-ton of uncorrectable write errors; I would say that the drive is bad.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

necrobobsledder posted:

But because 10GbE switches of varying quality still run into the hundreds, and you can get InfiniBand adapters and switches for less than half that, it’s mostly a question of software compatibility (if your K8S distribution of choice would support it).

That's part of why I went with simple SFP+ fiber instead of Infiniband--most SFP+ NICs already have drivers rolled in to your common distros, so at most you have to set an option switch or two to force checksum offloading if the OS doesn't pick it up automatically. Windows drivers were a bit of an annoyance to actually find, being old and all, but worked fine without any fiddling once I found them.

necrobobsledder posted:

Fiber vs copper does certainly matter when it comes to power consumption and heat if you’re running several ports full tilt in a closet.

That's true, but it shouldn't really be that much of an issue for home use: a 10GbE copper port can pull ~5W/port while an SFP+ is usually <1W/port. For even an 8-port switch that's only an additional 40W, which shouldn't be a huge problem to cool. Especially since most 8-port >1Gb switches of any variety are already cooking along well enough that you wouldn't want to shove them somewhere without any ventilation.

Totally different story when you're talking enterprise and banks of 48x10GbE ports. An extra 240W/switch adds up.

H110Hawk
Dec 28, 2006

DrDork posted:

at most you have to set an option switch or two to force checksum offloading if the OS doesn't pick it up automatically.

Our intrepid home lab poster may wish to test which is faster for checksumming. At the far end of the spectrum, near line rate, you want to let the OS/CPU do it. The NICs will run out of juice and you will stall. It's frustrating to test and fix, but if you are having mysterious problems above around 50% NIC throughput, try disabling it. You might see a sudden surge in throughput, especially in small frames.
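
For reference, toggling that on Linux is an ethtool one-liner (the interface name here is hypothetical):

code:
# See what the driver currently offloads
ethtool -k enp3s0 | grep -i checksum

# Hand checksumming back to the CPU and re-test throughput
ethtool -K enp3s0 rx off tx off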

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Yeah, 10Gb is still kinda pricey. I was lucky with my M1000e as it came with 40Gb Force10 switches and 40Gb QSFPs; I picked up a 10Gb fiber card for my NAS and hooked it directly to the Force10s.

Infiniband is super niche still.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

H110Hawk posted:

Our intrepid home lab poster may wish to test which is faster for checksumming. At the far end of the spectrum, near line rate, you want to let the OS/CPU do it. The NICs will run out of juice and you will stall. It's frustrating to test and fix, but if you are having mysterious problems above around 50% NIC throughput, try disabling it. You might see a sudden surge in throughput, especially in small frames.

Yeah, it depends on what you're pairing the NIC up with and what your loads are. I was mostly saying that SFP+'s tend to "just work" with most distros these days. Tuning them for optimal performance, like with any NIC, is of course a bit more involved.

H110Hawk
Dec 28, 2006

DrDork posted:

Yeah, it depends on what you're pairing the NIC up with and what your loads are. I was mostly saying that SFP+'s tend to "just work" with most distros these days. Tuning them for optimal performance, like with any NIC, is of course a bit more involved.

Yup! I agree. If you really want 10gig on the cheap, eBay stuff is the way to go. Buy spare optics. Multimode is the way to go. If you are doing singlemode and keep blowing optics, buy some 5dB pads to take the edge off your light and look at your cooling.

BlankSystemDaemon
Mar 13, 2009



Speaking of 10G, an article on the cheapest 10GbE just appeared in my RSS feed.

wolrah
May 8, 2006
what?

D. Ebdrup posted:

Speaking of 10G, an article on the cheapest 10GbE just appeared in my RSS feed.
I looked into this recently and ended up jumping right to 40G because the cards and optics/DACs really weren't that much more expensive on the used market compared to 10G. Basically all 40G cards can support adapters to connect 10G SFP+ modules, and some of them can even support a breakout to 4x10G too, so there didn't really seem to be any significant downside other than requiring an x8 PCIe slot for full performance.

I got 2x Mellanox ConnectX-3 40/56G InfiniBand cards off eBay and bought a brand new QSFP+ DAC from FiberStore. Less than $100 total with shipping and I can hit 35 gigabits per second of file transfer between RAM disks. My server's hard drives are the real world limiting factor at this point and it's glorious.

The catch of course is that unless you're looking at some EoL gear like old Brocade ICXes the switch costs get crazy. For now I'm avoiding that problem by just having my desktop and server plugged directly together, the rest of the LAN can share the gigabit link.
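
For anyone replicating this, the usual way to sanity-check the raw link (separately from file transfers) is an iperf3 run between the two directly connected boxes; the address here is made up:

code:
# On the server
iperf3 -s

# On the desktop: several parallel streams for 30 seconds, since a single
# TCP stream often won't fill a 40G link
iperf3 -c 10.0.0.2 -P 4 -t 30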

Hughlander
May 11, 2005

https://www.zdnet.com/article/a-hacker-gang-is-wiping-lenovo-nas-devices-and-asking-for-ransoms/

Less Fat Luke
May 23, 2003

Exciting Lemon
Well that's one way to let people know that "LenovoEMC" sells NAS devices!

text editor
Jan 8, 2007

SwissArmyDruid posted:

"comes standard on the next motherboard I buy when I build a new system" cheap.

Buying second-hand 10G cards is generally cheaper than buying a pricier board with it built in. Like another poster said, it's a $15-$30 card.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

lordfrikk posted:

Really getting worried about those WD Red 8TB drives. Is slowing down over time during copying something SMR can be causing?

Incidentally, I installed OMV to a thumb drive and it seems like it actually imploded because it was one of the overheating SanDisk ones, so I am replacing it today with a regular SSD. I wonder if these two issues are related.

The device-managed SMR Red drives are only certain 4TB and 6TB models. The 8TB ones are CMR.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Yeah there's this stupid chart now (I mean it's good, only stupid because they got caught sneaking SMR in and sowing mistrust):
https://blog.westerndigital.com/wd-red-nas-drives/
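
To compare your own drives against that chart, the exact model string is what you need; a quick way to pull it on a Linux box (the device name is hypothetical):

code:
# Model strings for every disk in the box
lsblk -d -o NAME,MODEL,SIZE

# Or per drive, which also shows firmware and capacity details
smartctl -i /dev/sda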

lordfrikk
Mar 11, 2010

Oh, say it ain't fuckin' so,
you stupid fuck!
Yeah, I bought the 8TB because I read about the 2-6TB range being SMR. I have no experience with large drives in general and this is my first time using a NAS so I was worrying they could somehow still be SMR despite nobody discovering it yet? I guess not.

But I did some more research into my rsync issue and it seems like most people experience some sort of slowdown or stalling in between files.

Yesterday when I was copying files it still showed some slowdown between files and the drives were noisy all the time. I left the NAS turned on overnight and today when I am copying files it's awfully quiet and there's barely any stalling in between :iiam:

Could it be that after a certain amount of GB the mirror is balancing files between the two drives, and that's why the increase in activity and slowdown?
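
For what it's worth, a RAID1 mirror writes every block to both disks rather than balancing files between them, but an initial resync after the array was created would look exactly like this: heavy background disk activity that eventually finishes and goes quiet. Assuming OMV set the mirror up with mdadm (an assumption; the md device name below is hypothetical), you can check with:

code:
# Shows any resync/recovery in progress and its percentage
cat /proc/mdstat

# More detail on a specific array
mdadm --detail /dev/md0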

BlankSystemDaemon
Mar 13, 2009



Devian666 posted:

The device-managed SMR Red drives are only certain 4TB and 6TB models. The 8TB ones are CMR.
Up until it stops being only 4 and 6TB models, sure.
SMR exists to permit higher density drives with the same number of platters, so eventually all the biggest disks will be SMR.

Hughlander
May 11, 2005

How far out from SSDs being effective in a NAS do people feel we are? I checked Newegg the other day, and I think a 4TB SSD is going for 2x what a 4TB rotational went for when I started buying in 2014. A 2x premium for lower power, noise, and heat sounds pretty good to me. If 8TB SSDs were 2x what an 8TB rotational is, I could see dropping those 4s now.

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

lordfrikk posted:

Yeah, I bought the 8TB because I read about the 2-6TB range being SMR. I have no experience with large drives in general and this is my first time using a NAS so I was worrying they could somehow still be SMR despite nobody discovering it yet? I guess not.

But I did some more research into my rsync issue and it seems like most people experience some sort of slowdown or stalling in between files.

Yesterday when I was copying files it still showed some slowdown between files and the drives were noisy all the time. I left the NAS turned on overnight and today when I am copying files it's awfully quiet and there's barely any stalling in between :iiam:

Could it be that after a certain amount of GB the mirror is balancing files between the two drives, and that's why the increase in activity and slowdown?

I mean, there’s also a question of networking hardware. There’s more pieces in play here than the NAS and its hard drives.

IOwnCalculus
Apr 2, 2003





Hughlander posted:

How far out from SSDs being effective in a NAS do people feel we are? I checked Newegg the other day, and I think a 4TB SSD is going for 2x what a 4TB rotational went for when I started buying in 2014. A 2x premium for lower power, noise, and heat sounds pretty good to me. If 8TB SSDs were 2x what an 8TB rotational is, I could see dropping those 4s now.

There's nothing stopping you from doing it now if it meets your goals.

Personally, I'm still looking to minimize $/TB as much as possible. I have enough spindles already that there's no real difference between disk and SSD for my workload and internet connection.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Hughlander posted:

How far out from SSDs being effective in a NAS do people feel we are? I checked Newegg the other day, and I think a 4TB SSD is going for 2x what a 4TB rotational went for when I started buying in 2014. A 2x premium for lower power, noise, and heat sounds pretty good to me. If 8TB SSDs were 2x what an 8TB rotational is, I could see dropping those 4s now.

If by "effective" you mean reaching nominal price-parity, it's gonna be a while. SSDs don't carry quite the premium that they used to, but they're not going to be the "cheap" option for quite some time.

For smaller systems like the one you're considering, though, they're not that much more expensive. But unless you're doing 10Gig networking or better, are working with a ton of small files, are running with tiny amounts of RAM, are trying to do something like mounting it via iSCSI and running Steam games off it or something, or just hate waiting a second or so for a folder to populate, you're not likely to see a whole lot of benefit from it: spinning rust is still pretty decent at serving up movies and ISOs and whatever. You can also use a small SSD as a cache in a lot of setups and get a lot of the advantages, but obviously at a lower price.

The noise/power/heat is true, but WD Reds are pretty quiet already, lower power consumption will never pay off the premium, and the heat shouldn't be an issue in any reasonably ventilated case. Though you could of course get a much smaller case if you're just doing SSDs, so that's something.
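
As one example of the SSD-as-cache route mentioned above (ZFS here; Unraid and others have their own cache-pool equivalents), adding a read cache device is a one-liner, sketched with hypothetical pool and device names:

code:
# Add an SSD as an L2ARC read cache
zpool add tank cache /dev/sdb

# It should now appear under a "cache" section in the pool layout
zpool status tank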

DrDork fucked around with this message at 17:12 on Jun 30, 2020

Hughlander
May 11, 2005

DrDork posted:

If by "effective" you mean reaching nominal price-parity, it's gonna be a while. SSDs don't carry quite the premium that they used to, but they're not going to be the "cheap" option for quite some time.

For smaller systems like the one you're considering, though, they're not that much more expensive. But unless you're doing 10Gig networking or better, are working with a ton of small files, are running with tiny amounts of RAM, are trying to do something like mounting it via iSCSI and running Steam games off it or something, or just hate waiting a second or so for a folder to populate, you're not likely to see a whole lot of benefit from it: spinning rust is still pretty decent at serving up movies and ISOs and whatever. You can also use a small SSD as a cache in a lot of setups and get a lot of the advantages, but obviously at a lower price.

The noise/power/heat is true, but WD Reds are pretty quiet already, lower power consumption will never pay off the premium, and the heat shouldn't be an issue in any reasonably ventilated case. Though you could of course get a much smaller case if you're just doing SSDs, so that's something.

I think I'd take 2x premium on the same density as a trade-off personally. My home array is in the Den that I WFH out of and 16-18 Reds add up to a lot of heat in the room + noise.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Hughlander posted:

I think I'd take 2x premium on the same density as a trade-off personally. My home array is in the Den that I WFH out of and 16-18 Reds add up to a lot of heat in the room + noise.

16-18? Yeah, that'll add up! When you mentioned 4TB drives I was thinking you were in the 2-4 drive range. Sadly, we're nowhere near a mere 2x the price once you start getting up in drive sizes:

18x8TB shucked drives cost about 18 x $140 = $2520, while even 18x4TB SSDs would be 18 x $500 = $9,000. The just-announced 8TB Samsung 870s are apparently going to retail for about $900 (which is a decent $/GB price if you're ok with QLC).

BlankSystemDaemon
Mar 13, 2009



Hughlander posted:

I think I'd take 2x premium on the same density as a trade-off personally. My home array is in the Den that I WFH out of and 16-18 Reds add up to a lot of heat in the room + noise.
Out of curiosity, what do you do on this array that would benefit from the increased IOPS?

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

D. Ebdrup posted:

Out of curiosity, what do you do on this array that would benefit from the increased IOPS?

Ya this.

If it’s media, spinning rust is good. Reading big files in order is what it’s good at.

Buncha small files is where SSD will win.

Hughlander
May 11, 2005

D. Ebdrup posted:

Out of curiosity, what do you do on this array that would benefit from the increased IOPS?

Nothing per se; again, my concern is dB and wattage.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
If you don't need the array to be that wide for performance reasons, you could collapse it to something like 6x14TB drives for ~$1700; that'd still get you 84TB raw space. Buy a good $200 or so case with sound-absorbing panels, and you'll be well on your way to a much quieter system than you have now. It also won't cost $9000.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
For the amount of money spent on drives for mostly cosmetic or logistical purposes I’d put the server elsewhere, run longer cables, and let things get louder, bigger, and hotter.

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
SSDs are all about the IOPS and constant replacement; most of the higher-end builds I've seen just use them with caching included so that they can run tons of iSCSI VMs off them, and one researcher was pulling terabytes of data into a supercomputer constantly.

Lower end, they are great for travel. If you just want to lug a ton of terabytes on a plane, using SSDs ensures you don't kill your hard drives after a few big trips.

TenementFunster
Feb 20, 2003
Probation
Can't post for 6 days!
with the 2020 Synology models out now, what's the best 4-drive NAS for file backup and Plex usage? are the two extra cores on the DS920+ over the DS420+ gonna make a huge difference on HEVC playback or subtitle transcoding?
