  • Locked thread
poverty goat
Feb 15, 2004



I played for a while with iSCSI between my desktop and my NAS. iSCSI with multipath I/O across multiple parallel NICs can aggregate all of their bandwidth for a single data stream; in my case I had 2 dedicated dual-port Intel NICs simply crossed over to one another with cat6, and I could consistently push very close to 2gbps sustained throughput on a single file transfer between cache and SSD with very little overhead. It was not without downsides, however:

  • it's block-level storage, meaning that the NAS is only aware of a big raw data file and oblivious to its contents. Filesystem compression, deduplication and whatnot aren't going to be of any use here and :iiam: as to whether your NAS is going to cache it intelligently.
  • You can't access the stuff you're already sharing from your NAS's file system over iSCSI, nor can you share anything in the iSCSI extent (that's what the raw disk file thing is called, i guess) with the rest of the network from the NAS (though you can from the machine that mounts it). When I did this I found some murmurings about mounting the disk on the NAS at the same time and sharing its contents that way, but as I understand it iSCSI isn't designed for simultaneous shared access like this and it might cause the end times or something.
  • it effectively requires its own network, and may or may not perform well with an unmanaged consumer switch in between. It might not strictly require a dedicated network, but it will eat the whole pipe and you don't want it sharing one. In my case I was next to the NAS so I just crossed over the two NICs to one another.
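For anyone who wants to try this on a Linux target rather than FreeNAS, the general shape of backing an iSCSI extent with a ZFS zvol via targetcli/LIO looks roughly like this. This is just a sketch: the pool name, target IQN, and portal IP are all made up, and you should check your distro's LIO docs before trusting any of it.

```shell
# Create a sparse (thin-provisioned) zvol to back the extent
# ("tank" and the size are placeholders)
zfs create -s -V 500G tank/iscsi-extent

# Expose it over iSCSI with targetcli: block backstore -> target -> LUN -> portal
targetcli /backstores/block create name=extent0 dev=/dev/zvol/tank/iscsi-extent
targetcli /iscsi create iqn.2014-09.local.nas:extent0
targetcli /iscsi/iqn.2014-09.local.nas:extent0/tpg1/luns create /backstores/block/extent0
targetcli /iscsi/iqn.2014-09.local.nas:extent0/tpg1/portals create 10.0.0.1 3260
targetcli saveconfig
```

The initiator side then logs in to that portal and sees the zvol as a raw local disk, which is exactly why the NAS-side filesystem can't see inside it.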

In the end I learned that while the iSCSI extent can be allowed to grow dynamically, it cannot be shrunk again, so after bloating it up to 2TB shuffling temporary stuff around I was faced with having to delete the whole disk and start over to get the space back. I just said gently caress it and have put up with a measly 1gbps ever since.

If you're ok with those constraints though, on Amazon you can grab Intel dual-port NICs for ~$40 and quad for ~$80, and share the full love of a saturated SATA3 bus and then some with one special computer for under $200.

poverty goat fucked around with this message at 03:56 on Sep 13, 2014


poverty goat
Feb 15, 2004



Combat Pretzel posted:

It's a custom NAS based on FreeNAS. Since they'll be ZVOLs, there's caching and compression via ZFS. I think they also support TRIM, so space could be reclaimed, but I haven't tested that yet.

I don't know anything about giving iSCSI a full raw disk; in my case it was just a file in a raidz2 array with an SSD cache, shared alongside my media. When I set it up I could specify a maximum size, and I could change the maximum size arbitrarily down the road without preallocating any of the space, but once it grew to a certain size there was no way (at least, not through FreeNAS's web UI) to shrink the 2TB file back down once it no longer contained that much data.
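On ZFS specifically, that behavior maps onto sparse zvols: the logical size is set at creation and can be grown later, but space freed inside the volume only returns to the pool if the initiator actually issues TRIM/UNMAP down the chain. A rough sketch, with a hypothetical pool name:

```shell
# Sparse zvol: logical size 2T, but no space preallocated from the pool
zfs create -s -V 2T tank/extent

# Growing the logical size later is easy
zfs set volsize=3T tank/extent

# Shrinking volsize is unsafe/unsupported through most front-ends;
# "used" only drops when freed blocks get TRIMmed by the initiator
zfs get used,referenced,volsize tank/extent
```

Without TRIM passthrough, deleted files inside the extent still look like allocated blocks to ZFS, which is exactly the "can't shrink it back down" problem.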

I wouldn't mind playing with it again, but now I've gone and replaced FreeNAS with Ubuntu since it has decent ZFS support now and I got sick of jails/ports and not being able to use VirtualBox/VMware in BSD.

poverty goat
Feb 15, 2004



If it uses the Linux compatibility layer stuff it's going to be limited to 32-bit and probably still be a broken, buggy mess. The last straw for me was when I spent a few hours in the newly included Debian/Ubuntu jails trying and failing to do basic stuff like install Sun Java for a Minecraft server.

ZFS performance isn't quite as great because it's sharing a measly 8GB of memory with more stuff, but otherwise I couldn't be happier with my Ubuntu setup. I've got my VMs running headless as services now and everything. The stuff I offloaded to the iSCSI drive for a while is just on a local 500GB single-platter WD Black with an SSD cache via Intel RST now.
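For the curious, "running VMs headless as services" with VirtualBox usually boils down to a few VBoxManage commands wrapped in an init script or service unit. A minimal sketch, where the VM name is a placeholder:

```shell
# Boot a VM with no GUI attached
VBoxManage startvm "media-server" --type headless

# Graceful shutdown via virtual ACPI button, or snapshot-and-stop
VBoxManage controlvm "media-server" acpipowerbutton
VBoxManage controlvm "media-server" savestate

# VirtualBox 4.2+ can also autostart VMs at host boot
# (needs /etc/default/virtualbox and vboxautostart-service configured)
VBoxManage modifyvm "media-server" --autostart-enabled on
```

Wrapping the start/savestate pair in a service definition is what makes the VMs come up and down cleanly with the host.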

poverty goat fucked around with this message at 23:02 on Sep 13, 2014

poverty goat
Feb 15, 2004



Combat Pretzel posted:

VirtualBox has had a FreeBSD version since forever. I doubt it'd be some implementation on the Linux compatibility layer.

That said, if you should ever reconsider FreeNAS, you'd be better off putting it on ESXi and running a separate VM for Ubuntu, despite the FreeNAS folks getting their panties in a twist about virtualization.

It's been a few months so I don't remember if I just couldn't get it working or couldn't find it or what. I'd have taken ESXi for a spin, but I was under the impression that I couldn't give a VM raw access to the existing RAIDZ array without a newer CPU with AMD-Vi/VT-d passthrough support, but maybe that's not the case :iiam:. I definitely don't want to entangle it in virtual disks.

e: vv :ms:

poverty goat fucked around with this message at 05:41 on Sep 14, 2014
