|
Kingnothing posted:I thought it was 4 drives per SFF8087 not 3? All the breakout cables seem to have 4 SATA connectors. On a "dumb" breakout cable, yes. If you get into expanders everything changes. I have ~20 drives hanging on a single 8087 right now.
|
# ? Sep 11, 2020 07:56 |
|
Armed with my new info, and knowing I can plug an H310 into an open-ended x4 slot, I was looking at clearance for my GPU. It seems as though the H310 would sit against the 3080 FE I plan on getting and block a shitload of intake, plus the H310s supposedly get crazy hot. So I had a dumb idea, because earlier today I saw GPU flex cables for vertical GPU mounting. What if I ran one of those and put the H310 at the bottom of the case or vertical? It seems like they make x1-to-x8 or x4-to-x8/x16 versions for mining rigs, just in case anyone is scrolling through who doesn't have an open-ended x4 slot but wants an x8 card. Maybe it's still a dumb idea and I'm wasting my time researching, but hey, who knows?
|
# ? Sep 11, 2020 08:42 |
|
H110Hawk posted:Ssh is the gold standard on the internet. Yep. Not using SSH is liable to get you fired for gross incompetence in a lot of tech shops. The only other method to look at might be something like gcloud's OAuth stuff, but unless you're buying a turnkey solution from a 3rd-party vendor, implementing it in software is not a lot of fun. If OAuth is used internally at Google with no breaches, it's probably good enough for you.
|
# ? Sep 11, 2020 08:45 |
|
This LSI 9211-4i is 4x in case it still matters to you. https://docs.broadcom.com/doc/12352061 Looks like they are going for <$30 on eBay right now.
|
# ? Sep 11, 2020 14:10 |
|
Kingnothing posted:Armed with my new info, and knowing I can plug an h310 into an open 4x, I was looking at clearance for my GPU. It seems as though the h310 would sit against the 3080 FE I plan on getting and block a shitload of intake, plus the h310s supposedly get crazy hot. Riser cables like that are, indeed, A Thing, and are a core component of a lot of SFF cases because it's the only way they can jam a video card into the space. They normally work pretty well, but you can occasionally get ones that were made so cheaply that the signal degradation is enough to negatively impact things: sometimes devices simply won't show up, other times it'll force a fallback to PCIe 2.0 or whatever. So if you get weird behavior it's probably a lovely riser card, not because the idea isn't reasonable.
|
# ? Sep 11, 2020 14:12 |
|
cr0y posted:I am still in the pondering phase, anything else I might want to consider other than SSH? It just seemed like the least hassle, spin up a VM, give it storage from my array and isolate it as much as I can from anything else on my network since I don't want to make my family members end points on my VPN. SSH file transfers have the major advantage of being well supported and well documented on basically every platform that matters though, where SyncThing is easy on the major desktop platforms and then can be run on a lot of NASes but may be quirky or unofficial in those cases. If you go with the SSH route though, something I'd add to what's already been said is to only open your SSH port to as little of the world as you can practically get away with. Ideally your firewall supports looking up DNS names for whitelisting and you could set up Dynamic DNS clients at the remote sites so you could have it be literally only exposed to the relevant addresses. If that's not practical for one reason or another, then try to whitelist the smallest range(s) you can that will still cover the dynamic range you expect to see them connecting from. Sometimes that might be a single /24 (common with smaller cable and fixed wireless providers), sometimes you might need to whitelist an entire ISP (common with national DSL or cellular providers). Either way, you severely limit the potential sources of attack to only those loosely network-adjacent to your legitimate users. If there's no legitimate reason someone should be connecting from an ISP on the other side of the planet then they shouldn't be able to connect even if they had all the other information.
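To make the whitelisting part concrete, here's a rough sketch of what it might look like with nftables on a Linux box in front of the SSH host; the 203.0.113.0/24 range is just a documentation-range placeholder for whatever block your users' ISP actually hands out.

```
# Illustrative only: default-drop inbound, then allow SSH solely from one
# whitelisted /24 (placeholder range), so the rest of the internet never
# even gets to talk to sshd. Run as root.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; policy drop ; }'
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
nft add rule inet filter input ip saddr 203.0.113.0/24 tcp dport 22 accept
```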
|
# ? Sep 11, 2020 16:31 |
IOwnCalculus posted:On a "dumb" breakout cable, yes. If you get into expanders everything changes.
|
|
# ? Sep 11, 2020 17:37 |
|
Those PCIe riser cables are fine as long as they aren't getting too crazy long (10-24" is the usual range) and are generally good enough for Gen3. I've used ones that are 8" long and only officially rated for Gen2, and they were fine at Gen3.
|
# ? Sep 11, 2020 18:16 |
|
I love UnRAID but my god I loving hate how it handles VMs. No snapshots, no cloning.
|
# ? Sep 13, 2020 16:17 |
|
Matt Zerella posted:I love UnRAID but my god I loving hate how it handles VMs. No snapshots, no cloning. Agree.
|
# ? Sep 13, 2020 18:59 |
|
Couldn't you just handle those functions from the CLI using virsh? Assuming they use KVM for virtual machines. And if you could, would those cloned machines show up in the inventory in the web interface?
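If they really are plain libvirt domains, something along these lines ought to work from the shell; the domain name is a made-up example, and whether the result shows up cleanly in the web UI is exactly the open question.

```
# Sketch only -- assumes the Unraid VMs are ordinary libvirt/KVM domains and
# that "myvm" is a placeholder name. Internal snapshots generally need qcow2
# disk images.
virsh snapshot-create-as myvm pre-upgrade --description "before tinkering"
virsh snapshot-list myvm
virsh snapshot-revert myvm pre-upgrade

# Cloning wants the source domain shut off first.
virsh shutdown myvm
virt-clone --original myvm --name myvm-clone --auto-clone
```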
|
# ? Sep 13, 2020 19:59 |
|
Nulldevice posted:Couldn't you just handle those functions from the CLI using virsh? Assuming they use KVM for virtual machines. And if you could, would those cloned machines show up in the inventory in the web interface? I want it in the GUI. Proxmox does it. It's pretty ridiculous that paid software like UnRAID can't, but then again their CEO just said bit rot doesn't exist, so who knows where this platform is headed. Once I can easily expand ZFS I might be jumping ship.
|
# ? Sep 13, 2020 20:07 |
Matt Zerella posted:Once I can easily expand ZFS I might be jumping ship.
|
|
# ? Sep 13, 2020 20:37 |
|
Yeah, I love my Synology, but I am building a Linux server to teach myself some devops stuff on the side, and ZFS expansion by adding drives does seem to be very complicated compared to mdadm. If they can fix that, I would love to have my new server using it and expand it in the future. Right now everyone uses Ubuntu though, so no BSD OS on this one (in my area at least).
|
# ? Sep 13, 2020 20:50 |
|
I basically wrote an Ansible pull bootstrap and a pull repo so I can at least get a baseline up on the new VM. It's still incredibly irritating. UnRAID has some extremely stupid shortcomings that I never used to care about, but now that I'm WFH they're becoming more apparent when I need scratch VMs. No API, so I can't use Packer, plus the problems I pointed out above. C'mon guys, there are free solutions out there with more features.
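For anyone curious, the bootstrap amounts to roughly this; the repo URL and playbook name below are placeholders, not my actual repo.

```
# Hypothetical bootstrap on a fresh scratch VM: install ansible, then pull and
# apply a baseline playbook straight from a git repo (no controller needed).
sudo apt-get update && sudo apt-get install -y ansible git
sudo ansible-pull --url https://example.com/homelab/baseline.git --checkout main local.yml

# Optionally keep the VM converged on a schedule:
echo '*/30 * * * * root ansible-pull -U https://example.com/homelab/baseline.git local.yml' | sudo tee /etc/cron.d/ansible-pull
```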
|
# ? Sep 13, 2020 21:22 |
|
Axe-man posted:Yeah, i love my synology, but I am building a linux server to teach myself some devops stuff on the side, and ZFS expansion by adding does seem to be very complicated compared to MDADM. ZFS is so much more mature; mdadm has some cool features, but I got really fed up with it fast.
|
# ? Sep 13, 2020 23:59 |
|
unraid/btrfs/mdadm/SHR etc. all suck, but until the end of time we're going to have people coming in here being all "well I don't want to have to buy 4 drives at once!" and then going and losing their data. Like, if you care so little about data loss / if it's all easily replaceable, you can just add drives one at a time with ZFS as well, and then you get protection against corruption on top of it. And the Linus Torvalds thing is just lol; he's actively out of touch these days, see also: his AVX rant
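For illustration, that "one drive at a time" version of ZFS is just a pool of single-disk vdevs; a rough sketch (pool and device names are placeholders) looks like this, with the same any-disk-dies-pool-dies tradeoff as a spanned volume.

```
# Illustrative only: grow a pool one disk at a time with single-drive vdevs.
# No redundancy, but you still get checksumming, so corruption is at least
# detected on scrub/read even if it can't be repaired.
zpool create tank /dev/disk/by-id/ata-EXAMPLE-DISK-1
zpool add tank /dev/disk/by-id/ata-EXAMPLE-DISK-2   # later, when the next drive arrives
zpool status tank
```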
|
# ? Sep 14, 2020 00:19 |
|
Has anyone in this thread ever actually lost data due to corruption that ZFS could actually recover from? The protections, while nice, seem to cover problems a lot less likely than losing your array to a fire, theft, a ransomware attack, software (looking at you, Emby) deleting libraries, or user error.
|
# ? Sep 14, 2020 01:44 |
|
Personally I'm just sick of how hacky UnRAID is. It's slow as poo poo, and they've made some weird choices. I'm not worried about data integrity on zfs, I'm more interested in snapshots, compression, mountpoints, etc. Now that I'm a PROFESSIONAL LINUX TOUCHER I'm much more comfortable setting up a Linux machine with docker and compose and getting essentially the same features as UnRAID without dumb poo poo like only being able to log into ssh as root (which is incredibly awful). I still maintain if you don't care about VMs and want to roll your own NAS/media server then nothing beats it but yeah once you need more advanced features and/or speed without relying on a weird cache setup, it falls short.
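The docker + compose side of that really is about this small; here's a loose sketch, with the image, port, and paths as arbitrary examples rather than a recommendation.

```
# Hypothetical single-service media stack; swap in whatever containers you
# actually run. Paths like /tank/media are placeholders.
mkdir -p ~/media && cd ~/media
cat > docker-compose.yml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - ./config:/config
      - /tank/media:/media:ro
EOF
docker compose up -d    # or docker-compose up -d on older installs
```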
|
# ? Sep 14, 2020 01:53 |
|
Paul MaudDib posted:and the Linus torvalds thing is just lol, he’s actively out of touch these days, see also: his AVX rant Huh. Thanks for that rabbit hole. This was interesting: https://blog.cloudflare.com/on-the-dangers-of-intels-frequency-scaling/
|
# ? Sep 14, 2020 02:01 |
|
H110Hawk posted:Huh. Thanks for that rabbit hole. This was interesting: https://blog.cloudflare.com/on-the-dangers-of-intels-frequency-scaling/ The long and short of it is that (a) this is primarily due to Intel putting two AVX-512 units on every core; there's nothing inherently wrong with the instruction set (and in fact a lot is right, it fixes a lot of problems with AVX2 and provides useful extensions, thus Torvalds is completely wrong). The same problem would exist if you put like four 256-bit AVX2 units on a single core too. You could conceivably do like AMD did with AVX2 on Zen 1 and Zen+ and run AVX-512 at half rate on a 256-bit unit over 2 cycles. And (b) this primarily affects Intel server (Skylake-SP) only. Xeon-W downclocks substantially less, and on the consumer Skylake-X you can control how much it downclocks completely. In fact on Ice Lake it only downclocks 100 MHz from peak in one specific scenario, so it mostly won't affect future processors at all. It's not something that is particularly useful inside the kernel, so there's no direct reason he would care, but Torvalds is just getting old and out of touch as far as the broader computing world outside his project.
|
# ? Sep 14, 2020 02:18 |
|
THF13 posted:Has anyone in this thread ever actually lost data due to corruption that ZFS could actually recover from? Yes, a long loving time ago: IOwnCalculus posted:Somewhat unrelated, for reasons I still don't know, my trusty old Linux mdraid RAID5 array decided to dump all of the metadata on all but one drive, and md refused to rebuild the array in any meaningful manner. Since RAID is not a backup I have all of the data backed up elsewhere, so I took this as a sign to spin up a Nexenta Community Edition VM and rebuild the array as a RAIDZ2 (I can stand to pare some data off). Currently restoring my data over the internet at about 20Mbps. I've been fully on ZFS since and at least twice it has saved me from full array restorations when I had two drives chuck errors in the same raidz vdev.
|
# ? Sep 14, 2020 03:05 |
|
Matt Zerella posted:I love UnRAID but my god I loving hate how it handles VMs. No snapshots, no cloning. This is my main gripe with it. I would really like to get down to a single server in my house for everything, but I still have to keep my ESX NUC humming along because of how well it manages VMs, plus Veeam etc. I have most of my stuff running in docker containers but I still need some VMs for this and that. I even tried to nest ESX in an unraid VM but no matter what I could not get ESX to detect the virtual hard drive I gave to it ☹️
|
# ? Sep 14, 2020 04:30 |
|
THF13 posted:Has anyone in this thread ever actually lost data due to corruption that ZFS could actually recover from? ZFS CoW and snapshots should save you from Emby, shouldn't they?
|
# ? Sep 14, 2020 05:17 |
THF13 posted:Has anyone in this thread ever actually lost data due to corruption that ZFS could actually recover from? Professionally I've seen UREs during traditional RAID rebuilds that resulted in needing to restore the entire array (and which resulted in the companies moving to ZFS when I brought up that ZFS was designed to deal with this exact scenario), and also professionally I've had to use ZFS' ability to go back to known-good transaction groups when a txg doesn't get finished properly before an improper shutdown and ZFS couldn't recover without a bit of help (something that traditional RAID can't do at all). You're not the first, nor the last, to make this mistake, but you have to realize that ZFS, and all RAID for that matter, isn't about reliability in the mainframe RAS sense - it's about the other part of the reliability-availability-serviceability tripod: availability; i.e. the idea that the data remains available even if a certain subset of the system fails. Real reliability for storage layers comes from having two controllers connected to one disk via two cables (multipathing, a feature of SAS) and having multiple machines configured in active-active failover mode. ZFS also has features like snapshots (which Hughlander also brought up) which, while not unique to ZFS (they exist in UFS too, for example), are designed to be atomic and so lightweight that they can be performed at essentially zero cost, thereby making it trivial to recover from cryptolockers and accidental deletions - something other RAID systems struggle with unless you implement something on top of them like Shadow Copies on Windows (which ZFS also integrates with). EDIT: An example of this is that companies I've worked for have had ZFS snapshot frequencies as low as every 15 minutes - which results in over 42k snapshots a month. BlankSystemDaemon fucked around with this message at 12:57 on Sep 14, 2020 |
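That snapshot cadence is cheap enough that you can just cron it; a rough sketch is below (dataset name and timestamps are made up, and in practice you'd use something like zfs-auto-snapshot, sanoid, or zfsnap rather than hand-rolling it).

```
# Purely illustrative: take a named snapshot every 15 minutes and recover
# when something (a cryptolocker, Emby, fat fingers) eats data.
zfs snapshot tank/media@auto-$(date +%Y%m%d-%H%M)
zfs list -t snapshot -r tank/media

# Individual files can be copied back out of the hidden .zfs/snapshot dir;
# zfs rollback returns the whole dataset to its most recent snapshot
# (rolling back further needs -r, which destroys the snapshots after it).
zfs rollback tank/media@auto-20200914-1215
```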
|
# ? Sep 14, 2020 12:49 |
|
I can't remember: did ZFS ever get around to improving resilvering performance? I seem to recall that was another sticking point for a while.
|
# ? Sep 15, 2020 01:00 |
|
ZFS never changed the default, which is to resilver slowly and leave a large amount of performance available for on-line transactions, since that's a sensible default for its primary role in storage appliances. But you can tune it to use the whole disk and it will.
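On Linux/OpenZFS the knob for that looks roughly like this; the parameter name is real in 0.8+, but the value is just an example, so check your version's docs before copying.

```
# Let each txg spend more time issuing resilver I/O (default is 3000 ms);
# higher values favor the rebuild over foreground traffic. Run as root.
echo 6000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms
```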
|
# ? Sep 15, 2020 04:02 |
|
There was a change in 0.8 that improved things significantly; I can't recall what the actual name of it is. vvv Yep, that. It's still a super long process on my array but it's faster than it used to be. IOwnCalculus fucked around with this message at 04:25 on Sep 15, 2020 |
# ? Sep 15, 2020 04:06 |
|
Sequential scrub and resilver
|
# ? Sep 15, 2020 04:18 |
|
Paul MaudDib posted:unraid/btrfs/mdadm/SHR etc all suck but until the end of time we're going to have people coming in here and being all "well I don't want to have to buy 4 drives at once!" and then going and losing their data I think Linus is an interesting fella, but I never really saw him as the scion of the computer world that some do. For me, I need to stop thinking of hardware expansion without wiping out the RAID and restoring from backups as the rule rather than the exception. I have to admit it is easy sysadmin work to throw a few HDDs into an existing array and call it good. edit: tbf I have multiple backups of everything in multiple locations and forms, and I use RAID 6, which has a lower rebuild failure rate from bit flips. Axe-man fucked around with this message at 04:42 on Sep 15, 2020 |
# ? Sep 15, 2020 04:39 |
Farmer Crack-rear end posted:I can't remember, did ZFS get around to improving performance on resilvering? I seem to recall that was another sticking point for awhile. On FreeBSD, which defaults to a kernel tickrate of 1000 (despite being essentially borderline soft-realtime/tickless, even down to interrupt handling), both scrubs and resilvers are much faster than they are on any other untuned system, for the simple reason that scrubs and resilvers are tied to the tickrate (in FreeBSD, this is controlled by a pair of sysctl values in the vfs.zfs OID). I believe every other Unix-like that ZFS is implemented on also has fluid tickrate support, so that leaves at least two values to tweak, in case it's something you want to play with - although I can't remember how it's done on Illumos-derivatives, and don't know how it's done on Linux, if you use that. ZFS resilvering speed easily exceeds traditional hardware RAID resilvering - I've seen some of the latter struggle to get above 10 MB/s on production systems, and never seen anything resilver at the speeds ZFS is capable of even on fairly meager hardware with spinning rust, let alone NVMe SSDs on a fast machine.
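If anyone wants to poke at those knobs, this is roughly what they look like on a FreeBSD 12-era box with the legacy ZFS code; the values are examples only, and the names may differ on newer OpenZFS releases.

```
# Read the tickrate the delays are measured against, then reduce the per-I/O
# delay so scrub/resilver I/O gets issued more aggressively. Run as root.
sysctl kern.hz
sysctl vfs.zfs.scrub_delay=0
sysctl vfs.zfs.resilver_delay=0
```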
|
|
# ? Sep 15, 2020 08:04 |
|
Also, can I just add, on the subject of the Unraid guy saying to just believe the disks in regards to UREs: how can a software developer trust that drive firmware is bug-free? The firmware is getting more and more complex as the industry adds features like SMR, and there's no way that the controlling software is formally validated (if that's even possible at this scale). Like, the drive industry tried to implement drive encryption and the efforts were so poor that BitLocker now defaults to software encryption over any drive's firmware encryption feature. Anyways, thanks for listening to my TED talk.
|
# ? Sep 15, 2020 15:49 |
|
Less Fat Luke posted:Also, can I just add on the subject of the Unraid guy saying just believe the disks in regards to URE errors - how can a software developer have trust that drive firmwares are bug-free? The firmware is getting more and more complex as the industry adds features like SMR and there's no way that the controlling software is formally validated (if that's even possible at this scale). Like the drive industry tried to implement drive encryption and the efforts were so poor that Bitlocker now defaults to software encryption over any existing drive firmwares encryption feature. I agree disk firmware is not bug-free; I also believe in the futility of arguing with the firmware when it throws a URE. Or did I miss the point of your TED talk?
|
# ? Sep 15, 2020 16:17 |
|
H110Hawk posted:I agree disk firmware is not bug free, I also believe in the futility of arguing with the firmware when it throws URE. Or did I miss the point of your TED talk? I'm saying the disk can give you faulty data that it doesn't consider an error, which filesystem checksumming protects against (and which is what this guy says is a waste of time). I should have been clearer, so it was more of a TEDx I guess.
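To put that in concrete terms, this is roughly what catching those "successful but wrong" reads looks like in practice; the pool name is a placeholder.

```
# A scrub re-reads every block and verifies it against its checksum, so data
# the disk happily returned but that doesn't match gets flagged (and repaired
# from redundancy, if any exists) instead of being silently trusted.
zpool scrub tank
zpool status -v tank   # the CKSUM column counts exactly these silent errors
```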
|
# ? Sep 15, 2020 16:27 |
|
Can anyone recommend software to pool a couple drives under a single letter? I have a bunch of smaller drives I intend to use as basically a scratch disk for my server and I wanna lump 'em together for ease of use. Really would like something free if possible. I don't need any fancy features such as redundancy or file backup/duplication, literally just to pool the space into one larger lump.
|
# ? Sep 15, 2020 17:32 |
|
Kingnothing posted:Can anyone recommend a software to pool a couple drives under a single letter? I have a bunch of smaller drives I intent to use as basically a scratch disk for my server and I wanna lump em into together for ease of use. Go to Administrative Tools in the Windows control panel > Computer Management > Disk Management. Delete the partitions on the disks, right-click "Disk X" on one of the disks (doesn't matter which), pick Create Spanned Volume (or simple volume?), and check the drives that you want to use. Note that losing any disk will trash the whole spanned volume though, so back up anything you care about. Paul MaudDib fucked around with this message at 18:14 on Sep 15, 2020 |
# ? Sep 15, 2020 18:11 |
|
Paul MaudDib posted:go to Administrative Tools in the windows control panel > Computer Management > Disk Management Is there something that'll do it without taking down the whole thing if one drive kicks the bucket? I've heard of DrivePool but I wasn't sure if it was lovely.
|
# ? Sep 15, 2020 18:33 |
|
As in, take a bunch of internal hard drives and combine them into one? Windows has a tool called Storage Spaces built-in that'll allow you to do that.
|
# ? Sep 15, 2020 18:40 |
|
Kingnothing posted:Is there something that'll do it without taking down the whole thing if one drive kicks? I've heard of drivepool but I wasn't sure if it was lovely. DrivePool is the best way to do what everyone in this thread does (have lots of storage), but instead they're obsessed with free software like FreeNAS or other paid options like Unraid. The only downside to DrivePool is that you'll want the whole suite, because all of StableBit's software is excellent. Please don't use Storage Spaces, because Microsoft cannot be trusted to keep it stable.
|
# ? Sep 15, 2020 18:48 |
|
StableBit DrivePool does exactly what you want, but unfortunately isn't free. It'll pool the drives, with or without redundancy, into one logical view, and if one drive dies it'll only take the files on it with it. e;f,b
|
# ? Sep 15, 2020 18:48 |