|
I played for a while with iSCSI between my desktop and my NAS. iSCSI over multiple parallel NICs can use all of the bandwidth for a single data stream; in my case I had two dedicated dual-port Intel NICs simply crossed over to one another with cat6, and I could consistently push very close to 2Gbps of sustained throughput on a single file transfer between cache and SSD with very little overhead. It was not without downsides, however:
In the end I learned that while the iSCSI extent can be allowed to grow dynamically, it cannot be shrunk again. After bloating it up to 2TB shuffling temporary stuff around, I was faced with having to delete the whole disk and start over to get the space back, so I just said gently caress it and have put up with a measly 1Gbps ever since. If you're OK with those constraints, though, on Amazon you can grab Intel dual-port NICs for ~$40 and quad-port for ~$80, and share the full love of a saturated SATA3 bus and then some with one special computer for under $200 poverty goat fucked around with this message at 03:56 on Sep 13, 2014 |
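For context, the grow-but-never-shrink behavior falls out of how a thin-provisioned ZFS extent allocates space. A rough sketch of what's happening under the hood, assuming shell access to the NAS and a pool named `tank` (the pool and dataset names here are made up, not from the post):

```shell
# Create a sparse (thin-provisioned) zvol to back an iSCSI extent.
# -s means blocks are only allocated as the initiator actually writes them.
zfs create -s -V 2T tank/iscsi-extent

# "volsize" is what the initiator sees; "used" is real pool space consumed.
zfs get volsize,used,compressratio tank/iscsi-extent

# "used" only ever grows: deleting files on the initiator side doesn't tell
# ZFS those blocks are free. Without TRIM/UNMAP working end to end, the only
# way to reclaim the space is to destroy the zvol and start over.
zfs destroy tank/iscsi-extent
```

These commands need a live ZFS pool, so treat them as an illustration of the allocation model rather than a recipe.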
# ¿ Sep 13, 2014 03:38 |
|
|
Combat Pretzel posted:It's a custom NAS based on FreeNAS. Since they'll be ZVOLs, there's caching and compression via ZFS. I think they also support TRIM, so space could be reclaimed, but I haven't tested that yet.

I don't know anything about giving iSCSI a full raw disk; in my case it was just a file in a raidz2 array with an SSD cache, shared with my media. When I set it up I could specify a maximum size, and I could change the maximum size arbitrarily down the road without preallocating any of the space, but once it grew to a certain size there was no way (at least, not through FreeNAS's web UI) to shrink the 2TB file back down once it no longer contained 2TB of data. I wouldn't mind playing with it again, but now I've gone and replaced FreeNAS with Ubuntu since it has decent ZFS support now and I got sick of jails/ports and not being able to use VirtualBox/VMware in BSD.
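If TRIM/UNMAP really is passed through the whole chain (zvol, iSCSI target, initiator), the space reclamation described above can work from the initiator side. A hypothetical Linux-initiator sketch, assuming the LUN shows up as `/dev/sdb` (device and mount point are placeholders):

```shell
# Either mount with continuous discard...
mount -o discard /dev/sdb1 /mnt/iscsi

# ...or do a one-shot reclaim periodically. fstrim tells the target which
# blocks the filesystem no longer uses, so the backing zvol's "used"
# figure can actually shrink again.
fstrim -v /mnt/iscsi
```

Whether this works depends on the target advertising UNMAP for the extent; if it doesn't, the commands succeed locally but nothing is reclaimed on the NAS.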
|
# ¿ Sep 13, 2014 22:36 |
|
If it uses the Linux compatibility layer stuff it's going to be limited to 32-bit and probably still be a broken, buggy mess. The last straw for me was when I spent a few hours in the newly included Debian/Ubuntu jails trying and failing to do basic stuff like installing Sun Java for a Minecraft server. ZFS performance isn't quite as great because it's sharing a measly 8GB of memory with more stuff, but otherwise I couldn't be happier with my Ubuntu setup. I've got my VMs running headless as services now and everything. The stuff I offloaded to the iSCSI drive for a while is just on a local 500GB single-platter WD Black with an SSD cache via Intel RST now. poverty goat fucked around with this message at 23:02 on Sep 13, 2014 |
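Running VirtualBox VMs headless as services on Ubuntu can be done with a systemd unit along these lines (the VM name, user, and paths here are invented for illustration, not taken from the post):

```ini
# /etc/systemd/system/vbox-minecraft.service -- hypothetical example
[Unit]
Description=Headless VirtualBox VM: minecraft
After=network.target

[Service]
Type=forking
User=vbox
ExecStart=/usr/bin/VBoxManage startvm minecraft --type headless
ExecStop=/usr/bin/VBoxManage controlvm minecraft acpipowerbutton
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

`Type=forking` matches `--type headless`, which backgrounds the VM process after starting it; `acpipowerbutton` gives the guest a chance to shut down cleanly instead of being killed.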
# ¿ Sep 13, 2014 22:57 |
|
Combat Pretzel posted:VirtualBox had a FreeBSD version since forever. I doubt it'd be some implementation on the Linux compatibility layer.

It's been a few months, so I don't remember if I just couldn't get it working or couldn't find it or what. I'd have taken ESXi for a spin, but I was under the impression that I couldn't give a VM raw access to the existing RAIDZ array without a newer CPU with AMD-Vi/VT-d, but maybe that's not the case. I definitely don't want to entangle it in virtual disks e: vv poverty goat fucked around with this message at 05:41 on Sep 14, 2014 |
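For what it's worth, raw device/PCI passthrough needs the IOMMU extensions (Intel VT-d / AMD-Vi), which are separate from the basic VT-x/AMD-V that nearly every CPU of that era already had. A quick way to check on a Linux box (a diagnostic sketch, not from the post; output depends on your hardware and boot flags):

```shell
# Basic virtualization extensions (VT-x shows as "vmx", AMD-V as "svm") --
# these are needed just to run hardware-accelerated VMs:
grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u

# IOMMU (VT-d/AMD-Vi) -- this is the part passthrough actually requires,
# and it usually also has to be enabled in the BIOS/UEFI:
dmesg | grep -iE 'DMAR|IOMMU|AMD-Vi'
```

An empty result from the second command often just means the IOMMU is disabled in firmware rather than absent from the CPU.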
# ¿ Sep 14, 2014 04:42 |