|
It should be an ideal workload for SMR disks. Assuming they smartly designed the "plots" which we already know they didn't. Streaming/sequential read only workloads ought to just go right along.
|
# ? May 13, 2021 16:02 |
|
|
# ? Apr 24, 2024 10:44 |
|
H110Hawk posted:It should be an ideal workload for SMR disks. Assuming they smartly designed the "plots" which we already know they didn't. Streaming/sequential read only workloads ought to just go right along. If they were just SMR disks. WD designed them with a CMR portion that is meant to be filled first and then flushed to an SMR area. On the 6TB drives I have, my estimate is that the CMR portion is around 300GB. When the CMR portion runs out, performance drops heavily as the drive is forced to flush data from CMR into SMR. This poor performance has caused larger RAID arrays to mark the drives as defective. The comedy from the tears looks promising.
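That cliff — full speed while writes land in the CMR staging area, then a collapse once the drive has to destage into SMR zones — can be sketched with a toy model. All sizes and speeds below are illustrative guesses, not WD specifications:

```python
# Toy model of a drive-managed SMR disk with a CMR staging cache.
# The constants are made-up round numbers for illustration only.

CMR_CACHE_GB = 300.0     # assumed size of the CMR staging area
CMR_SPEED_GBPS = 0.18    # ~180 MB/s while writes land in CMR
SMR_SPEED_GBPS = 0.03    # ~30 MB/s once data must be rewritten into SMR zones

def write_time_seconds(total_gb: float) -> float:
    """Seconds to absorb a sustained sequential write of total_gb."""
    fast = min(total_gb, CMR_CACHE_GB)          # absorbed at CMR speed
    slow = max(0.0, total_gb - CMR_CACHE_GB)    # forced through SMR destaging
    return fast / CMR_SPEED_GBPS + slow / SMR_SPEED_GBPS

# A 100GB burst fits in the staging area; a 600GB one hits the cliff.
print(write_time_seconds(100))   # fast throughout
print(write_time_seconds(600))   # averages well under half the speed
```

With these made-up numbers, a 600GB sustained write averages under a third of the throughput of one that fits inside the staging area — the kind of slowdown that makes a RAID controller decide the drive has failed.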
|
# ? May 13, 2021 20:31 |
|
SMR is fine if you're writing huge chunks of data to an empty disk (not a ZFS rebuild mind you, that's not sequential enough) so they're probably just fine for Chia if you're blasting your dumb loving sudokus onto an SSD then moving them.
|
# ? May 13, 2021 20:42 |
|
The 14TB external I ordered from Adorama has been backordered for 3 weeks
|
# ? May 14, 2021 03:32 |
|
15K RPM SAS drives plot fine and last forever. It's the secret that they don't want anyone to know.
|
# ? May 14, 2021 20:39 |
|
I don't want to live in the same apartment with 15k RPM SAS drives. Noise was not a consideration when they designed those.
|
# ? May 16, 2021 09:29 |
|
What?
|
# ? May 16, 2021 14:54 |
|
They mean the drives are very loud, as they're designed for datacenter or server room usage.
|
# ? May 16, 2021 15:31 |
|
WHAT?
|
# ? May 16, 2021 15:32 |
|
gently caress
|
# ? May 16, 2021 15:45 |
|
I love this thread.
|
# ? May 16, 2021 16:21 |
|
HO KAAAYYYY
|
# ? May 16, 2021 16:41 |
I almost laughed myself out of my chair because of that. Also, in other news: corrective zfs receive, which makes it possible to heal corrupted data on pools, is only missing one test (which currently panics on FreeBSD) before it's ready for its final review. This will finally mean that all the ZFS send streams I've been writing to SMR disks as backups (which is the only acceptable use of SMR) can be restored from directly, without loading the whole chain of snapshots onto a new pool.
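For reference, a rough sketch of the workflow being described, assuming the -c flag from the in-review OpenZFS change and placeholder pool/dataset/file names:

```
# Hypothetical names throughout; -c is the corrective-receive flag
# proposed in the OpenZFS work discussed above.

# Save a full send stream to a file on an SMR-backed backup disk:
zfs send tank/data@snap > /backup/tank-data-snap.zstream

# Later, if a scrub reports corruption in tank/data, heal it in place
# from the stored stream instead of replaying the whole snapshot chain:
zfs receive -c tank/data@snap < /backup/tank-data-snap.zstream
```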
|
|
# ? May 16, 2021 16:45 |
|
BlankSystemDaemon posted:I almost laughed myself out of my chair because of that. I just happened to read a blog post about why ZFS streams aren't a good backup. This does seem to address the biggest shortcoming it brought up, but reading the patch comment it still feels like a bit of a hack. Storing ZFS send streams is not a good backup method
|
# ? May 16, 2021 22:20 |
Saukkis posted:I just happened to read a blog post about why ZFS streams aren't a good backup. This does seem to address the biggest shortcoming it brought up, but reading the patch comment it still feels like a bit of a hack. This is just (a small) part of that full strategy. I respect Chris a lot, but sometimes I get the feeling he isn't really following OpenZFS too closely, despite having moved from OmniOS to Linux on the storage cluster he's got running at work - because that code has been in development for almost 3 years.
|
|
# ? May 16, 2021 23:29 |
|
YEAH! YEEEEAAAAAAAHHHHH!!!!!
|
# ? May 17, 2021 00:38 |
|
Has anyone here set up Wireguard on their Synology via Docker? A friend mentioned I should look into that as opposed to opening holes in my router to access SAB/Sonarr, and I'm googling around now to see what's involved.
|
# ? May 18, 2021 16:46 |
EC posted:Has anyone here setup Wireguard on their Synology via Docker? A friend mentioned I should look into that as opposed to opening holes in my router to access SAB/Sonarr, and I'm googling around now to see what's involved. I have WireGuard running on a raspberry pi and unless you’re fairly comfortable with docker networking I’d recommend just using Synology’s VPN server* instead. WireGuard is nice and lightweight but it’s not like OpenVPN is bad at all. * unless the client is an iPhone. As of last year there didn’t seem to be any way to get the config and certificate files from the synology into the ovpn app on iOS.
|
|
# ? May 18, 2021 20:28 |
|
I run Wireguard directly on the router, and that has been the best solution I've found by far. Many routers are compatible with custom firmware that makes it possible even if it's not a feature on the original firmware. Not trying to manage my VPN on a device by device basis has felt like a huge upgrade.
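For anyone going the router route, the config is pleasantly small. A minimal sketch — every key and address below is a placeholder you'd replace with output from `wg genkey` and your own subnet:

```
# /etc/wireguard/wg0.conf on the router -- placeholder values only.
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <router-private-key>

[Peer]
# A phone or laptop client
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Each additional device is just another [Peer] block, which is why managing it in one place on the router beats per-device setups.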
|
# ? May 18, 2021 21:28 |
|
Gyshall posted:YEAH! YEEEEAAAAAAAHHHHH!!!!! Why did I read this in Randy Marsh's voice? Also any recommendations for hardware for setting up a 20-40TB NAS for homelab? Used server recommendations maybe?
|
# ? May 19, 2021 15:25 |
|
Ffycchi posted:
A few pages back I found some good supermicro 4u servers with plenty of bays and dual xeons on eBay
|
# ? May 19, 2021 16:05 |
|
tuyop posted:I have WireGuard running on a raspberry pi and unless you’re fairly comfortable with docker networking I’d recommend just using Synology’s VPN server* instead. WireGuard is nice and lightweight but it’s not like OpenVPN is bad at all. Unfortunately it's an iPhone
|
# ? May 20, 2021 16:58 |
EC posted:Unfortunately it's an iPhone Well drat, this guide at least seems like a pretty good starting point for getting WG running on a synology. Unless you feel like grabbing a raspberry pi (a zero w would probably work fine) for this purpose lol. https://github.com/runfalk/synology-wireguard
|
|
# ? May 20, 2021 17:34 |
|
In the process of setting up this huge rear end 12 bay QNAP. Trying to sort out some workflow poo poo. If I have a bunch of 3d models I'm slicing from a desktop, does it make sense to just use qsync or some other software to mirror all those files to the PC so that load/render times are really quick? I haven't hosed with SSD caching at all, and I'm not sure I can plug in an external or PCIe SSD, which would leave me with the full 12 bays for raid 6. Also, I have my old 4 bay QNAP, no way to attach that directly to the rack through a spare ethernet port, eh? My wiring situation is a bit all over the place, and if it saves me going out to buy another switch that would be great. If I do end up grabbing a switch, any reason to go managed vs unmanaged for a NAS serving a couple desktops? w00tmonger fucked around with this message at 01:10 on May 22, 2021 |
# ? May 22, 2021 01:07 |
|
I think the Chia run on hard drives, or just general covid delays, has extended to accessories. All the 5 in 3 enclosures I had bookmarked are sold out or backordered, and eBay has dried up
|
# ? May 22, 2021 03:05 |
|
Ffycchi posted:Why did I read this in Randy Marsh's voice? https://en.m.wikipedia.org/wiki/Lil_Jon
|
# ? May 22, 2021 12:34 |
|
This might be a dumb question that I should ask in another thread but, hey, there are no dumb questions, right? So, I have a Synology DS220j that I’m very happy with. For work, I access a big personal folder on a shared FTP server that can be very slow and laggy. I’d much prefer to do the work locally, but my work MacBook’s HD is too small. What’s the best way to mirror a folder, I don’t know, OneDrive-style, between my local NAS and the work server? Synology has a lot of backup options but I’d like something more live, so that I can freely upload and not worry about it.
|
# ? May 24, 2021 15:26 |
Mimir posted:This might be a dumb question that I should ask in another thread but, hey, there are no dumb questions, right? Synology’s cloud share package works with FTP and will do exactly what you want. I would double check that you’re not breaking some big legal policy based on the type of data you’d be copying over, but otherwise it’s an ideal solution. FTP syncing done on the 220j, folder accessed using SMB or whatever on the work computer(s)
|
|
# ? May 24, 2021 19:53 |
|
The Milkman posted:I think the chia run on hard drives or just general covid delays have extended to accessories. All the 5 in 3 enclosures I had bookmarked are sold out or backordered, and ebay has dried up Well I managed to find a couple SuperMicro CSE-M35TQB on some custom pc builder shop for.. more than I originally wanted to spend but much less than they're going for elsewhere. Anyone know if the fan it comes with is the typical enterprise 'we absolutely do not care about acoustics'? Not sure if I should just preemptively buy some noctuas. edit: I'm also curious if this will obviate the need to tape/block the pin on shucked drives, that'd be a nice bonus because i'm sure the tape is gonna come off on half my disks when I remove the old connections Chilled Milk fucked around with this message at 23:59 on May 24, 2021 |
# ? May 24, 2021 21:33 |
The Milkman posted:Well I managed to find a couple SuperMicro CSE-M35TQB on some custom pc builder shop for.. more than I originally wanted to spend but much less than they're going for elsewhere. Anyone know if the fan it comes with is the typical enterprise 'we absolutely do not care about acoustics'? Not sure if I should just preemptively buy some noctuas. The 3V pin on SATA connectors is a left-over from a time when SATA was meant to supply power to more than just disks, which a newer SATA standard has repurposed because someone thought it makes for a good way to power-cycle the drive if the signal is briefly set high and then back low. Unfortunately a lot of ATX power supplies keep the 3V signal high, per the old standard, which forces the drive to stay off. If you find a backplane which carries 3V at all, it'll be one that's meant for the newer specification where it's just a brief pulse. BlankSystemDaemon fucked around with this message at 12:07 on May 25, 2021 |
|
# ? May 25, 2021 12:05 |
|
H110Hawk posted:WHAT? Came here to bitch about mybook prices due to crypto - now feeling cheered up!
|
# ? May 28, 2021 21:49 |
|
Ika posted:Came here to bitch about mybook prices due to crypto - now feeling cheered up! Glad to amuse
|
# ? May 29, 2021 00:16 |
Welp, that sucks. Supermicro has discontinued the only good 90-bay JBOD chassis that was ever made with front-and-back loading, and replaced it with a top-loading variant. The whole point of the front-and-back loading design was that you're only ever moving a few disks on individual sleds (a maximum of 2 per sled, if memory serves) if you're going to be servicing any of them, rather than moving 90 disks at a time. Anyone who's ever played around with gyroscopes knows what happens when you apply a force to a spinning object, and it doesn't take an engineering degree to figure out that when servicing any one of 90 disks, the number of times the entire server has to be moved out on its rails is enough for gyroscopic precession to risk decreasing the service lifetime of the disks in question.
|
|
# ? May 29, 2021 16:05 |
|
We did it to laptops for decades. Don't load it with 15k disks.
|
# ? May 29, 2021 16:13 |
H110Hawk posted:We did it to laptops for decades. don't load it with 15k disks. When a drive is floating 300-900 nanometers (which has been the standard for a few decades now) above the disk surface, it really doesn't take much force in ANY direction to have it hit.
|
|
# ? May 29, 2021 17:14 |
|
BlankSystemDaemon posted:Anyone who's ever played around with gyroscopes knows what happens when you apply a force to a spinning object, and it doesn't take an engineering degree to figure that when servicing any one of 90 disks, the number of times the entire server has to be moved out on its rails is enough for gyroscopic precession to risk decreasing service lifetime of the disks in question. Yeah, but gyroscopic precession only works when you're applying a force perpendicular to the plane of rotation. With the way the disks are loaded in that new chassis, the force of sliding the bay in/out of the rack should be almost entirely parallel with the plane of rotation, so the force simply gets applied directly against the spindle, which should be more than capable of handling it. And the spinning disk should act as a stabilizing factor, anyhow. But, yes, the end result is you're now moving 90 drives instead of 2, and that's not great regardless of orientation. Seems weird to change it since something that dense would be almost certainly only seen in datacenters where you'd generally have good access to both the front and back of every rack. Upside: instead of two tiny-rear end little fans for the 2x PSUs, which seems...questionable...for that setup, you now get 5x big rear end fans for the 4x PSUs, which seems a lot more reasonable, and presumably much better internal thermals as a consequence.
|
# ? May 30, 2021 03:27 |
|
I had a bunch of top-loading Sun X4500s at a previous job like 12 years ago and they're still running, we didn't see any higher failure rate on the drives in those systems versus horizontal loading systems. I'm sure it's fine. Edit: Now I miss the Sun that made cool poo poo.
|
# ? May 30, 2021 03:51 |
|
Less Fat Luke posted:I had a bunch of top-loading Sun X4500s at a previous job like 12 years ago and they're still running, we didn't see any higher failure rate on the drives in those systems versus horizontal loading systems. I'm sure it's fine. I still have thumper ptsd. We ran dozens of them.
|
# ? May 30, 2021 04:01 |
|
Less Fat Luke posted:I had a bunch of top-loading Sun X4500s at a previous job like 12 years ago and they're still running, we didn't see any higher failure rate on the drives in those systems versus horizontal loading systems. I'm sure it's fine. a bunch of the sun guys (including Bryan Cantrill) got back together and founded a company doing the same kind of integrated-stack hardware+software stuff, it was on HackerNews a couple days ago https://oxide.computer/ obviously it kinda takes a certain customer to be willing to dive back into proprietary hardware at this stage of the game, but it looks like a pretty cohesive turnkey experience if that's your thing. Basically oriented around a "private cloud" sort of model. Paul MaudDib fucked around with this message at 04:04 on May 30, 2021 |
# ? May 30, 2021 04:02 |
|
Well I'm on the wait list now. I like their spin of "run our OS or bust" as "attestable chain of trust".
|
# ? May 30, 2021 04:27 |