SwissArmyDruid posted:If you're going with unreliable storage, you could do worse than an RPi + one of those USB 3 to SATA doodads? The cheapo ones can power SSDs, you'll need an external power supply for anything that spins, because I don't think any flavor of RPi has USB-PD yet.
|
|
# ? Jun 28, 2020 14:00 |
|
|
# ? Apr 19, 2024 13:29 |
|
I have two WD Red 8TB in RAID1, and when doing rsync -avz of several hundred GB of data it slows down with every file until it literally stops for minutes at a time. Even after stopping rsync, the disks see activity for minutes afterwards. What could be the reason? Is that some buffer thing?
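The activity continuing after rsync stops sounds like the kernel's write-back cache draining to disk; a quick Linux sketch to check (the sysctl byte values below are arbitrary examples, not tuned recommendations):

```shell
# How much dirty (not-yet-written) data is the kernel buffering?
# Watching this drain after rsync exits explains the trailing disk activity.
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Optionally cap the write-back buffer so stalls become shorter and more
# frequent instead of minutes-long (example values only):
sudo sysctl vm.dirty_background_bytes=67108864   # 64 MiB
sudo sysctl vm.dirty_bytes=268435456             # 256 MiB
```

If the Dirty figure balloons to gigabytes during the copy, the "stops for minutes" behavior is the flusher catching up, not a drive fault.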
|
# ? Jun 28, 2020 15:43 |
|
SwissArmyDruid posted:edit: Oh right, I came into this thread for a reason. Looking for a stopgap measure until 10 Gbit comes down in price more, what's a good cheapo option that I haven't thought of because the above is what I'm using now. How cheap are you looking? You can get 10Gb fiber cards for <$30 any day on eBay, SFP+'s for $5, and a 4-port switch for $130. And LC-LC cables are dirt cheap, too.
|
# ? Jun 28, 2020 16:16 |
|
Really getting worried about those WD Red 8TB drives. Could SMR be causing a slowdown over time during copying? Incidentally, I installed OMV to a thumb drive and it seems like it actually imploded, because it was one of the overheating SanDisk ones, so I'm replacing it today with a regular SSD. I wonder if these two issues are related.
|
# ? Jun 29, 2020 09:30 |
|
DrDork posted:How cheap are you looking? You can get 10Gb fiber cards for <$30 any day on eBay, SFP+'s for $5, and a 4-port switch for $130. And LC-LC cables are dirt cheap, too. "comes standard on the next motherboard I buy when I build a new system" cheap.
|
# ? Jun 29, 2020 12:47 |
|
SwissArmyDruid posted:"comes standard on the next motherboard I buy when I build a new system" cheap. Supermicro does have a whole line of X11 motherboards (admittedly mostly for embedded stuff) with SFP+ slots on them, and used SFP+ modules are, as said, all of $5 on eBay. But if you mean something more pedestrian / consumer grade, then the best you can hope for is the 2.5GbE that some higher-end boards are starting to slap on there. Problem there is that there aren't any cheap 2.5GbE switches--QNAP has the cheapest I've seen (QSW-1105-5T) at an estimated $150, but it's not out until the end of the year. Past that you're stuck with either getting a switch with only 1 or 2 10GbE links for like $200, or getting 8+ links for $500-ish. In either event, going fiber is frankly not much more expensive than 2.5GbE for two links, and considerably cheaper for more than 2 links. And you get 4x the speed.
|
# ? Jun 29, 2020 14:26 |
10GbE isn't gonna come standard unless you go for server boards, which might have SFP+ cages that you can plug modules into, but even that's risky considering the only benefit is the much lower latency, so it's only useful for boards intended for putting NVMe over fabric. EDIT: orz BlankSystemDaemon fucked around with this message at 14:36 on Jun 29, 2020 |
|
# ? Jun 29, 2020 14:31 |
|
Not sure what exactly is "risky"? Like "risky" in the sense of "I wouldn't bet on your next 'board having built in >1GbE networking because the only ones that do tend to be $400+ server boards and you're probably not willing to pay that much if you're asking how to avoid paying for a $30 NIC" in which case, yeah, I agree with you 100%. From a system side, fiber vs copper doesn't really matter all that much in normal use. Though getting drivers for some of the older fiber cards under Windows can be a fun challenge at times, since they're from like 2005. But they're super cheap!
|
# ? Jun 29, 2020 14:43 |
What I meant was, there's very little point in aiming for SFP+ unless you know you're going to be block-, file-, or object-sharing stuff stored on NVMe, or if your dataset fits in ARC and on NVMe SLOG. In that sense it's risky, since most people in this thread aren't going to be doing that - however much I wish I could afford it, even I'm not doing it.
|
|
# ? Jun 29, 2020 14:51 |
|
D. Ebdrup posted:What I meant was, there's very little point in aiming for SFP+ unless you know you're going to be block-, file-, or object-sharing stuff stored on NVMe, or if your dataset fits in ARC and on NVMe SLOG. I don't really understand your mentality here. If you're looking for >1Gb networking, 10Gb fiber is far and away the cheapest option for home use thanks to the plethora of cheap used gear and the existence of a cheap 4-port switch. You can get a switch, cables, 4 NICs, and 4 SFP+s for ~$250 if you hunt around eBay, ~$300-$350 if you just buy whatever happens to be available when you're searching. $350 won't even buy you a switch for 10GbE copper networking unless you're ok with only 2 ports (yet, at least--hopefully the 4-port QNAP/similar will be out by the end of the year at the $150-$200 price point), and 10GbE NICs tend to be considerably more expensive, too (~$75 right now), so a 4 system setup would be well over $600. I'll agree with you that, in terms of actual performance, fiber's low latency does enable you to do some fancy stuff with NVMe drives that's hard to do with copper, but most people aren't likely to be doing that, so it's a wash. It's certainly no worse than copper, unless you have a home that's already pre-wired for Cat5 and running fiber lines would be a lot more effort than just plugging poo poo into existing wall ports. In that case I'd probably just eat the cost difference to save myself the hassle.
|
# ? Jun 29, 2020 15:13 |
|
I started putting together an unRAID box and I've got an old drive that showed a ton of errors like this during a preclear:

print_req_error: I/O error, dev sdf, sector 36152
Buffer I/O error on dev sdf, logical block 4519, async page read

so I started a badblocks run and the terminal just started scrolling with blocks being reported bad. But smartctl shows the health status as OK? Is it because the self-test didn't finish or something? Currently trying a long test. I checked the drive on different cables (that I had known-good drives already running on) with the same results, to rule out a cabling issue.
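Worth noting that SMART's overall "PASSED" only means no vendor failure threshold has tripped; the raw attributes are more telling. A sketch, using the device name from the post:

```shell
# Kick off the extended self-test (takes hours on large drives):
sudo smartctl -t long /dev/sdf

# Then inspect the attributes that actually track media failure; nonzero
# reallocated/pending sectors on top of I/O errors usually mean replace it.
sudo smartctl -A /dev/sdf | grep -Ei 'Reallocat|Pending|Uncorrect'
```

A drive can report "PASSED" right up until those counters blow past the threshold, so the badblocks scroll is the more honest signal here.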
|
# ? Jun 29, 2020 15:14 |
|
For people like me wanting to build out decently performing VM and container infrastructure with professional-ish levels of performance, 10Gbps is kind of expected for storage, because you'll be moving at least 2 GBps just casually writing big messages out via Kafka and an RDBMS, and also because you want to vaguely match network storage latency in real environments. But because 10GbE switches still run into the hundreds of dollars at varying quality, and you can get InfiniBand adapters and switches for less than half that, it's mostly a question of software compatibility (whether your K8s distribution of choice supports it). Even the rather basic standalone Kafka tests I have are supposed to hit 2 GBps / worker, because it's a stress test meant to find weak performance spots. It's very similar to giving game developers big, powerful GPUs so they can make sure that GPUs are the bottleneck rather than CPU or memory. Fiber vs copper does certainly matter when it comes to power consumption and heat if you're running several ports full tilt in a closet.
|
# ? Jun 29, 2020 15:21 |
|
Rockker posted:I started putting together an unraid box and I've got a old drive that showed a ton of errors like this during a preclear:
|
# ? Jun 29, 2020 15:24 |
|
necrobobsledder posted:But because 10GbE switches are still into the hundreds of varying quality and you can use Infiniband adapters and switches for less than half it's mostly a question about software compatibility (if your K8S distribution of choice would support it). That's part of why I went with simple SFP+ fiber instead of InfiniBand--most SFP+ NICs already have drivers rolled into your common distros, so at most you have to set an option switch or two to force checksum offloading if the OS doesn't pick it up automatically. Windows drivers were a bit of an annoyance to actually find, being old and all, but worked fine without any fiddling once I found them. necrobobsledder posted:Fiber vs copper does certainly matter when it comes to power consumption and heat if you're running several ports full tilt in a closet. That's true, but shouldn't really be that much of an issue for home use: a 10GbE port can pull about 5W/port while an SFP+ is usually <1W/port. For even an 8-port switch that's only an additional 40W, which shouldn't be a huge problem to cool. Especially since most 8-port >1Gb switches of any variety are already cooking along well enough that you wouldn't want to shove them somewhere without any ventilation. Totally different story when you're talking enterprise and banks of 48x10GbE ports. An extra 240W/switch adds up.
|
# ? Jun 29, 2020 16:14 |
|
DrDork posted:at most you have to set an option switch or two to force checksum offloading if the OS doesn't pick it up automatically. Our intrepid home lab poster may wish to test which is faster for checksumming. At the far end of the spectrum, near line rate, you want to let the OS/CPU do it. The NICs will run out of juice and you will stall. It's frustrating to test and fix, but if you are having mysterious problems above around 50% NIC throughput, try disabling it. You might see a sudden surge in throughput, especially in small frames.
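On Linux that toggle is an ethtool one-liner (the interface name here is a placeholder):

```shell
# See which offloads the NIC currently advertises:
ethtool -k eth0 | grep -i checksum

# Push checksumming onto the CPU if the NIC stalls near line rate:
sudo ethtool -K eth0 tx off rx off
```

Re-run your throughput test after each toggle; the win (or loss) shows up immediately, so it's cheap to check both ways.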
|
# ? Jun 29, 2020 16:21 |
|
Yeah, 10Gb is still kinda pricey. I was lucky with my M1000e as it came with 40Gb Force10 switches and 40Gb QSFPs; I picked up a 10Gb fiber card for my NAS and hooked it directly to the Force10s. InfiniBand is super niche still.
|
# ? Jun 29, 2020 16:23 |
|
H110Hawk posted:Our intrepid home lab poster may wish to test which is faster for checksumming. At the far end of the spectrum near line rate you want to let the os cpu do it. The nic's will run out of juice and you will stall. It's frustrating to test and fix, but if you are having mysterious problems above around 50% nic throughput try disabling it. You might see a sudden surge in throughput especially in small frames. Yeah, it depends on what you're pairing the NIC up with and what your loads are. I was mostly saying that SFP+'s tend to "just work" with most distros these days. Tuning them for optimal performance, like with any NIC, is of course a bit more involved.
|
# ? Jun 29, 2020 16:31 |
|
DrDork posted:Yeah, it depends on what you're pairing the NIC up with and what your loads are. I was mostly saying that SFP+'s tend to "just work" with most distros these days. Tuning them for optimal performance, like with any NIC, is of course a bit more involved. Yup! I agree. If you really want 10gig on the cheap, eBay stuff is the way to go. Buy spare optics. Multimode is the way to go. If you are doing singlemode and keep blowing optics, buy some 5 dB pads to take the edge off your light, and look at your cooling.
|
# ? Jun 29, 2020 16:58 |
Speaking of 10G, an article on the cheapest 10GbE just appeared in my RSS feed.
|
|
# ? Jun 29, 2020 21:43 |
|
D. Ebdrup posted:Speaking of 10G, an article on the cheapest 10GbE just appeared in my RSS feed. I got 2x Mellanox ConnectX-3 40/56G InfiniBand cards off eBay and bought a brand new QSFP+ DAC from FiberStore. Less than $100 total with shipping and I can hit 35 gigabits per second of file transfer between RAM disks. My server's hard drives are the real world limiting factor at this point and it's glorious. The catch of course is that unless you're looking at some EoL gear like old Brocade ICXes the switch costs get crazy. For now I'm avoiding that problem by just having my desktop and server plugged directly together, the rest of the LAN can share the gigabit link.
|
# ? Jun 30, 2020 00:26 |
|
https://www.zdnet.com/article/a-hacker-gang-is-wiping-lenovo-nas-devices-and-asking-for-ransoms/
|
# ? Jun 30, 2020 01:22 |
|
Well that's one way to let people know that "LenovoEMC" sells NAS devices!
|
# ? Jun 30, 2020 03:12 |
|
SwissArmyDruid posted:"comes standard on the next motherboard I buy when I build a new system" cheap. Buying second-hand 10G cards is generally cheaper than buying a pricier board with it built in. Like another poster said, it's a $15-$30 card.
|
# ? Jun 30, 2020 04:00 |
|
lordfrikk posted:Really getting worried about those WD Red 8TB drives. Is slowing down over time during copying something SMR can be causing? The device managed SMR red drives are only certain 4TB and 6TB models. The 8TB ones are CMR.
|
# ? Jun 30, 2020 05:06 |
|
Yeah there's this stupid chart now (I mean it's good, only stupid because they got caught sneaking SMR in and sowing mistrust): https://blog.westerndigital.com/wd-red-nas-drives/
|
# ? Jun 30, 2020 06:09 |
|
Yeah, I bought the 8TB because I read about the 2-6TB range being SMR. I have no experience with large drives in general, and this is my first time using a NAS, so I was worrying they could somehow still be SMR despite nobody discovering it yet? I guess not. But I did some more research into my rsync issue, and it seems like most people experience some sort of slowdown or stalling in between files. Yesterday when I was copying files it still showed some slowdown between files and the drives were noisy all the time. I left the NAS turned on overnight, and today when I am copying files it's awfully quiet and there's barely any stalling in between. Could it be that after a certain amount of GB the mirror is balancing files between the two drives, and that's why the increase in activity and slowdown?
|
# ? Jun 30, 2020 09:18 |
Devian666 posted:The device managed SMR red drives are only certain 4TB and 6TB models. The 8TB ones are CMR. SMR exists to permit higher density drives with the same number of platters, so eventually all the biggest disks will be SMR.
|
|
# ? Jun 30, 2020 10:15 |
|
How far out from SSDs being effective in a NAS do people feel we are? I checked Newegg the other day, and I think a 4TB SSD is going for 2x what a 4TB rotational went for when I started buying in 2014. A 2x premium for lower power, noise, and heat sounds pretty good to me. If 8TB SSDs were 2x what an 8TB rotational is, I could see dropping those 4s now.
|
# ? Jun 30, 2020 14:32 |
|
lordfrikk posted:Yeah, I bought the 8TB because I read about the 2-6TB range being SMR. I have no experience with large drives in general and this is my first time using a NAS so I was worrying they could somehow still be SMR despite nobody discovering it yet? I guess not. I mean, there’s also a question of networking hardware. There’s more pieces in play here than the NAS and its hard drives.
|
# ? Jun 30, 2020 14:49 |
|
Hughlander posted:How far out from SSDs being effective in a NAS do people feel we are? I checked newegg the other day, and I think a 4TB SSD is going for 2x what a 4TB Rotational went when I started buying in 2014. 2x premium for lower power, noise, and heat sounds pretty good to me. If 8TB SSDs were 2x what an 8TB rotational is, I could see dropping those 4s now. There's nothing stopping you from doing it now if it meets your goals. Personally, I'm still looking to minimize $/TB as much as possible. I have enough spindles already that there's no real difference between disk and SSD for my workload and internet connection.
|
# ? Jun 30, 2020 15:38 |
|
Hughlander posted:How far out from SSDs being effective in a NAS do people feel we are? I checked newegg the other day, and I think a 4TB SSD is going for 2x what a 4TB Rotational went when I started buying in 2014. 2x premium for lower power, noise, and heat sounds pretty good to me. If 8TB SSDs were 2x what an 8TB rotational is, I could see dropping those 4s now. If by "effective" you mean reaching nominal price-parity, it's gonna be a while. SSDs don't carry quite the premium that they used to, but they're not going to be the "cheap" option for quite some time. For smaller systems like the one you're considering, though, they're not that much more expensive. But unless you're doing 10Gig networking or better, are working with a ton of small files, are running with tiny amounts of RAM, are trying to do something like mounting it via iSCSI and running Steam games off it or something, or just hate waiting a second or so for a folder to populate, you're not likely to see a whole lot of benefit from it: spinning rust is still pretty decent at serving up movies and ISOs and whatever. You can also use a small SSD as a cache in a lot of setups and get a lot of the advantages, but obviously at a lower price. The noise/power/heat is true, but WD Reds are pretty quiet already, lower power consumption will never pay off the premium, and the heat shouldn't be an issue in any reasonably ventilated case. Though you could of course get a much smaller case if you're just doing SSDs, so that's something. DrDork fucked around with this message at 17:12 on Jun 30, 2020 |
# ? Jun 30, 2020 17:09 |
|
DrDork posted:If by "effective" you mean reaching nominal price-parity, it's gonna be a while. SSDs don't carry quite the premium that they used to, but they're not going to be the "cheap" option for quite some time. I think I'd take 2x premium on the same density as a trade-off personally. My home array is in the Den that I WFH out of and 16-18 Reds add up to a lot of heat in the room + noise.
|
# ? Jun 30, 2020 19:09 |
|
Hughlander posted:I think I'd take 2x premium on the same density as a trade-off personally. My home array is in the Den that I WFH out of and 16-18 Reds add up to a lot of heat in the room + noise. 16-18? Yeah, that'll add up! When you mentioned 4TB drives I was thinking you were in the 2-4 drive range. Sadly, we're nowhere near a mere 2x the price once you start getting up in drive sizes: 18x8TB shucked drives cost about 18 x $140 = $2520, while even 18x4TB SSD's would be 18 x $500 = $9,000. The just-announced 8TB Samsung 870's are apparently going to retail for about $900 (which is a decent $/GB price if you're ok with QLC).
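The arithmetic above, checked in shell (prices are the post's figures, not current market rates):

```shell
# 18 drives at the post's shucked-HDD vs 4TB-SSD prices
hdd=$((18 * 140))
ssd=$((18 * 500))
echo "HDD array: \$$hdd, SSD array: \$$ssd"   # prints HDD array: $2520, SSD array: $9000
```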
|
# ? Jun 30, 2020 19:24 |
Hughlander posted:I think I'd take 2x premium on the same density as a trade-off personally. My home array is in the Den that I WFH out of and 16-18 Reds add up to a lot of heat in the room + noise.
|
|
# ? Jun 30, 2020 19:31 |
|
D. Ebdrup posted:Out of curiosity, what do you do on this array that would benefit from the increased IOPS? Ya this. If it’s media, spinning rust is good. Reading big files in order is what it’s good at. Buncha small files is where SSD will win.
|
# ? Jun 30, 2020 19:46 |
|
D. Ebdrup posted:Out of curiosity, what do you do on this array that would benefit from the increased IOPS? Nothing per se; again, my concern is dB and wattage.
|
# ? Jun 30, 2020 20:02 |
|
If you don't need the array to be that wide for performance reasons, you could collapse it to something like 6x14TB drives for ~$1700, that'd still get you 84TB raw space. Buy a good $200 or so case with sound absorbing panels, and you'll be well on your way to a much quieter system than you have now. It also won't cost $9000.
|
# ? Jun 30, 2020 20:24 |
|
For the amount of money spent on drives for mostly cosmetic or logistical purposes I’d put the server elsewhere, run longer cables, and let things get louder, bigger, and hotter.
|
# ? Jun 30, 2020 21:37 |
|
SSDs are all about the IOPS and constant replacement; most of the higher-end builds I've seen just swap them in with caching included so that they can run tons of iSCSI VMs off them, plus one researcher who was pulling terabytes of data into a supercomputer constantly. Lower end, they are great for travel. If you just want to lug a ton of terabytes on a plane, using SSDs ensures you don't kill your hard drives after a few big trips.
|
# ? Jun 30, 2020 21:54 |
|
|
|
with the 2020 Synology models out now, what's the best 4-drive NAS for file backup and Plex usage? are the two extra cores on the DS920+ over the DS420+ gonna make a huge difference on HEVC playback or subtitle transcoding?
|
# ? Jun 30, 2020 22:54 |