|
Inspector_666 posted:Via a complex series of mirrors and lenses I use IR transmission to handle all of my LAN communication. Pictures please. This sounds really cool. Also this: http://ronja.twibright.com/
|
# ? Sep 9, 2014 17:27 |
|
Inspector_666 posted:Via a complex series of mirrors and lenses I use IR transmission to handle all of my LAN communication.
|
# ? Sep 10, 2014 00:22 |
|
I was thinking more about those 2-foot range bullshit systems they put in laptops for a bit several years back. And I was kidding.
|
# ? Sep 10, 2014 05:39 |
|
Bokito posted:Maybe you can go with Thunderbolt 2 networking?
|
# ? Sep 13, 2014 01:54 |
|
Combat Pretzel posted:Can you do Thunderbolt networking with any adapter? And at what distances does that work? I'm looking for a "cheap" direct high speed link between my NAS and my main computer, so switching isn't necessary. Just use InfiniBand. Or for point-to-point, get a couple of 10GbE cards and just connect them. But your NAS is almost certainly not fast enough that you'll need more than GbE, and if it is, buying SSDs for your computer will be faster and cheaper. If you want a "network", use InfiniBand. If you want a fast connection, put disks in your chassis and use SATA
|
# ? Sep 13, 2014 02:03 |
|
I played for a while with iSCSI between my desktop and my NAS. iSCSI over multiple parallel NICs can use all of the bandwidth for a single data stream; in my case I had 2 dedicated dual-port Intel NICs simply crossed over to one another with Cat6, and I could consistently push very close to 2Gbps sustained throughput on a single file transfer between cache and SSD, with very little overhead. It was not without downsides, however:
In the end I learned that while the iSCSI extent can be allowed to grow dynamically, it cannot be shrunk again, so after bloating it up to 2TB shuffling temporary stuff around, I was faced with having to delete the whole disk and start over to get the space back. So I just said gently caress it and have put up with a measly 1Gbps ever since. If you're OK with those constraints, though, on Amazon you can grab Intel dual-port NICs for ~$40 and quad-port ones for ~$80, and share the full love of a saturated SATA3 bus and then some with one special computer for under $200 poverty goat fucked around with this message at 03:56 on Sep 13, 2014 |
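If anyone wants to sanity-check a multi-link setup like this before layering iSCSI on top, here's a rough sketch using iperf3 with one stream pinned per crossover link (the addresses and ports are made up for illustration):

```shell
# Assumes each crossover link sits on its own subnet so traffic stays on
# the intended NIC pair (10.0.0.x and 10.0.1.x are placeholder examples).
# On the NAS end, one listener per link:
iperf3 -s -B 10.0.0.1 -p 5201 &
iperf3 -s -B 10.0.1.1 -p 5202 &

# On the desktop, one client per link, run in parallel for 30 seconds;
# the two reported bitrates should sum to roughly 2 Gbit/s:
iperf3 -c 10.0.0.1 -p 5201 -t 30 &
iperf3 -c 10.0.1.1 -p 5202 -t 30 &
wait
```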
# ? Sep 13, 2014 03:38 |
|
It's a custom NAS based on FreeNAS. Since they'll be ZVOLs, there's caching and compression via ZFS. I think they also support TRIM, so space could be reclaimed, but I haven't tested that yet. As to why iSCSI: I'd like to store disk-performance-independent and/or casually played games on the NAS. Since they're not going to be played on the laptop, they don't need to be shared. I've looked it up and others have tried it; with some games it works, others take offense at being on a network drive. To "fix" this, I have to make the drive pretend to be a physical one (to the apps, anyway). I'm only going to use iSCSI where it makes sense. Documents and media content are still going to be stored in a ZFS filesystem and shared via CIFS. As for Thunderbolt, it seems like a cheap cop-out compared to multiport NICs or 10GbE ones. Sadly, there aren't really any expansion cards right now (even fewer that work under FreeNAS), except the Asus one for "only" 60€, which requires a TB header on the mainboard for some reason, despite already needing PCIe 4x. The mainboard for the NAS has two Gigabit ports. I'm going to run a direct line to my mainboard either way, and another to the Wifi router/switch for my laptop and tablet. The plan is to restrict iSCSI to the direct hookup and the rest to the router. If cable length is going to be an issue with Thunderbolt, it'll be useless to me anyway. One point of the NAS was to get the disk vibrations out of my living room. Combat Pretzel fucked around with this message at 12:32 on Sep 13, 2014 |
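For reference, a sparse ZVOL with ZFS compression, the kind of thing described above, is created roughly like this from the shell (pool and dataset names are placeholders; the FreeNAS web UI does the equivalent):

```shell
# Sparse (-s) 500G ZVOL with lz4 compression: blocks are only allocated
# as the initiator actually writes, and TRIM/UNMAP from the client can
# free them again if the iSCSI target supports it.
zfs create -s -V 500G -o compression=lz4 -o volblocksize=16K tank/iscsi/games

# Compare advertised size against space actually consumed:
zfs get volsize,used,refreservation tank/iscsi/games
```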
# ? Sep 13, 2014 12:28 |
|
Have you actually tested with just 1GbE? Our entire VDI pool used no more than 700Mbps (5-minute average) within the last 24 hours. That's providing all disk for about 300 desktops, maybe even more; I don't know how many people actually work on Friday around here.
|
# ? Sep 13, 2014 14:32 |
|
Combat Pretzel posted:It's a custom NAS based on FreeNAS. Since they'll be ZVOLs, there's caching and compression via ZFS. I think they also support TRIM, so space could be reclaimed, but I haven't that tested yet. PCIe 4x isn't actually fast enough (8x for dual 10gb links, probably 16x for the newer 20gb lightpeak stuff), but I'd guess the header is actually for power over the bus, since lightpeak sends more DC than it'll get from the PCI bus
|
# ? Sep 13, 2014 16:44 |
|
evol262 posted:PCIe 4x isn't actually fast enough (8x for dual 10gb links, probably 16x for the newer 20gb lightpeak stuff), but I'd guess the header is actually for power over the bus, since lightpeak sends more DC than it'll get from the PCI bus
|
# ? Sep 13, 2014 17:05 |
|
adorai posted:Have you actually tested with just 1GBe? On the FreeNAS test VM, which runs on a single vCPU, an emulated NIC and just 2GB of RAM, I can shove over 150 megabytes a second over the virtual link. The actual box is going to be a quad-core Xeon with 16GB; it should be able to saturate a single 1GbE link. That said, it's merely an option I'm looking into for the iSCSI usage scenario. evol262 posted:PCIe 4x isn't actually fast enough (8x for dual 10gb links, probably 16x for the newer 20gb lightpeak stuff), but I'd guess the header is actually for power over the bus, since lightpeak sends more DC than it'll get from the PCI bus
|
# ? Sep 13, 2014 21:36 |
|
Combat Pretzel posted:It's a custom NAS based on FreeNAS. Since they'll be ZVOLs, there's caching and compression via ZFS. I think they also support TRIM, so space could be reclaimed, but I haven't tested that yet. I don't know anything about giving iSCSI a full raw disk; in my case it was just a file in a raidz2 array with an SSD cache, shared with my media. When I set it up I could specify a maximum size, and I could change the maximum size arbitrarily down the road without preallocating any of the space, but once it grew to a certain size there was no way (at least, not through FreeNAS's web UI) to shrink the 2TB file back down once it no longer contained 2TB of data. I wouldn't mind playing with it again, but now I've gone and replaced FreeNAS with Ubuntu since it has decent ZFS support now and I got sick of jails/ports and not being able to use VirtualBox/VMware in BSD.
|
# ? Sep 13, 2014 22:36 |
|
There's a VirtualBox jail in the latest release now. Whatever that's worth. I'd see what it does, but a) I don't think Hyper-V does nested virtualization, and b) installing plugins and jails takes trillions of hours for whatever reason.
|
# ? Sep 13, 2014 22:39 |
|
If it uses the Linux compatibility layer stuff it's going to be limited to 32-bit and probably still be a broken, buggy mess. The last straw for me was when I spent a few hours in the newly included Debian/Ubuntu jails trying and failing to do basic stuff like install Sun Java for a Minecraft server. ZFS performance isn't quite as great because it's sharing a measly 8GB of memory with more stuff, but otherwise I couldn't be happier with my Ubuntu setup. I've got my VMs running headless as services now and everything. The stuff I offloaded to the iSCSI drive for a while is just on a local 500GB single-platter WD Black with an SSD cache via Intel RST now. poverty goat fucked around with this message at 23:02 on Sep 13, 2014 |
# ? Sep 13, 2014 22:57 |
|
Alereon posted:Thunderbolt and USB 3.1 are both carrying PCIe 3.0 lanes, but chipsets only provide PCIe 2.0 lanes. You can either connect devices to the PCIe 3.0 lanes coming from Intel CPUs, or use more lanes coming from the chipset to talk to devices, up to the maximum of 8. Since Thunderbolt carries PCIe 3.0 there is no overhead issue with 1-1 lane mapping, that probably isn't the case with 10GbE for example, I suspect you can probably push more data down 10GbE than you can over 1x PCIe 3.0 counting protocol overhead. The more you know. Combat Pretzel posted:There's a VirtualBox jail in the latest release now. Whatever that's worth. I'd see what it does, but a) I don't think Hyper-V does nested virtualization, and b) installing plugs and jails takes trillions of hours for whatever reason. Hyper-V doesn't do nested virt (which might be OK since vbox can do binary translation and doesn't need accel), but I probably wouldn't bother either. PCIe is 75w for 16x. I think 4x is 25w. Which is still a lot higher than the ~10w going out on lightpeak, but I'd assumed that the chipset was inefficient as hell. Alereon's explanation makes a lot more sense
|
# ? Sep 13, 2014 23:08 |
|
gggiiimmmppp posted:If it uses the linux compatibility layer stuff it's going to be limited to 32bit and probably still be a broken, buggy mess. The last straw for me was when I spent a few hours in the newly included debian/ubuntu jails trying and failing to do basic stuff like install sun java for a minecraft server. That said, I'd say, if you should ever reconsider FreeNAS, you'd be better off putting it on ESXi and running a separate VM for Ubuntu. Despite the FreeNAS folks getting their panties in a twist about virtualization.
|
# ? Sep 13, 2014 23:53 |
|
Combat Pretzel posted:VirtualBox had a FreeBSD version since forever. I doubt it'd be some implementation on the Linux compatibility layer. It's been a few months, so I don't remember if I just couldn't get it working or couldn't find it or what. I'd have taken ESXi for a spin, but I was under the impression that I couldn't give a VM raw access to the existing RAIDZ array without a newer CPU with AMD-Vi/VT-d, but maybe that's not the case. I definitely don't want to entangle it in virtual disks e: vv poverty goat fucked around with this message at 05:41 on Sep 14, 2014 |
# ? Sep 14, 2014 04:42 |
|
No, RDM sucks. You want controller passthrough with VT-d or sriov
|
# ? Sep 14, 2014 05:10 |
|
I think Linux's bonding driver in balance-rr mode would work to scale traffic above 1g on the transmit side.
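For the curious, a minimal balance-rr bond with iproute2 would look something like this (interface names and the address are placeholders):

```shell
# Round-robin bond: frames are striped across the slaves, so a single
# TCP stream can exceed one link's bandwidth on the transmit side, at
# the cost of out-of-order delivery on the receiving end.
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set bond0 up
ip addr add 10.0.0.2/24 dev bond0

# Confirm both slaves joined and are up:
cat /proc/net/bonding/bond0
```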
|
# ? Sep 14, 2014 05:41 |
|
Ninja Rope posted:I think Linux's bonding driver in balance-rr mode would work to scale traffic above 1g on the transmit side.
|
# ? Sep 14, 2014 05:43 |
|
adorai posted:I don't know anything about this feature. What I do know is that if you enable a feature like this you run the risk of seeing a lot of out of order packets. Sure, but OSs should be able to handle that situation reasonably well. Someone should try it and report back.
|
# ? Sep 14, 2014 05:54 |
|
evol262 posted:No, RDM sucks. You want controller passthrough with VT-d or sriov
|
# ? Sep 14, 2014 13:19 |
|
Coincidentally, Anandtech posted a news story about the future of Thunderbolt. One interesting bit of information explains the lack of Thunderbolt on consumer boards: Intel does not allow non-Apple motherboards to ship with devices connected to the PCIe lanes from the CPU, primarily due to poor driver support under Windows on systems without the Intel chipset drivers installed. Since you'd need four PCIe 2.0 lanes to feed a Thunderbolt 2 controller, that only leaves four lanes for everything else, not enough. This also clarifies that I was wrong about something above: while Thunderbolt carries PCIe 3.0 signals, the actual controllers are still limited to PCIe 2.0 when talking to the system. This is because no chipsets support PCIe 3.0, and Apple would have been Intel's only consumer of CPU-connected PCIe 3.0 chips. The article also mentions that drivers have been updated to support Thunderbolt networking, for ~800MB/sec of throughput, or ~1600MB/sec over Thunderbolt 2, which aggregates both lanes.
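The lane arithmetic above is easy to sanity-check. A quick back-of-the-envelope calculation, counting only line-encoding overhead (protocol overhead comes on top of this, which is why Thunderbolt 2 networking lands around 1600MB/sec rather than 2000):

```python
# Per-lane usable bandwidth after line encoding:
# PCIe 2.0 runs at 5 GT/s with 8b/10b encoding (20% overhead).
pcie2_lane_gbit = 5.0 * 8 / 10            # 4.0 Gbit/s per lane
# PCIe 3.0 runs at 8 GT/s with 128b/130b encoding (~1.5% overhead).
pcie3_lane_gbit = 8.0 * 128 / 130         # ~7.88 Gbit/s per lane

# Four PCIe 2.0 lanes feeding a Thunderbolt 2 controller:
backhaul_mbyte = 4 * pcie2_lane_gbit * 1000 / 8
print(backhaul_mbyte)                      # 2000.0 MB/s before protocol overhead

# The 20 Gbit/s Thunderbolt 2 link itself:
tb2_mbyte = 20 * 1000 / 8
print(tb2_mbyte)                           # 2500.0 MB/s raw
```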
|
# ? Sep 14, 2014 14:42 |
|
Ninja Rope posted:Sure, but OSs should be able to handle that situation reasonably well. Someone should try it and report back.
|
# ? Sep 14, 2014 14:45 |
|
Ninja Rope posted:Sure, but OSs should be able to handle that situation reasonably well. Someone should try it and report back.
|
# ? Sep 15, 2014 05:03 |
|
I set up a 10GbE network between my storage box, Hyper-V host and primary workstation. I inherited an HP switch with a set of 10GbE SFP+ line cards in the back, got a set of re-branded Intel X520 dual-port SFP+ network cards from eBay, and hooked the whole thing up with name-brand non-generic direct attach cables. It works super sweet: single-threaded transfers over iSCSI to cache or the SSDs average 3Gb/sec throughput, and multithreaded reads or writes to cache will saturate the entire 10Gb link. Very much worth the effort getting it set up. That said: gently caress HP in the rear end with the largest, rustiest piece of scrap steel you could reasonably be expected to move using a forklift. Those three loving cables doubled the cost of the entire project. Makes me want to buy an EEPROM reader and just duplicate the gently caress out of the name-brand cables. Literally all it does is read the manufacturer and SN off the fake transceiver and poo poo on it and lock it out if it doesn't match some regex the switch firmware runs. And this is of course for my protection, as I wouldn't get the vicious rear end reaming if I bought the generic $20 cables from Cables4Less or something. gently caress, even Cisco has a CLI command to disable the Genuine_Cisco checks, and they practically bathe in your money after selling you a networking setup.
|
# ? Sep 15, 2014 10:22 |
|
Do you have to use branded Direct Attach cables? How does it identify the cable brand, I thought the entire point was that there weren't any electronics in the cable?
|
# ? Sep 15, 2014 13:18 |
|
Alereon posted:Do you have to use branded Direct Attach cables? How does it identify the cable brand, I thought the entire point was that there weren't any electronics in the cable?
|
# ? Sep 15, 2014 14:26 |
|
I do actually have a FreeBSD/ZFS box exporting a 1TB iSCSI volume to my Windows machine, and I have a fair bit of my Steam library on it. I guess I can take a look at the real-world bandwidth usage and post it here, just to add some actual data. (I, too, am looking at buying some cheap 4Gbit or 10Gbit gear from eBay just for the fun of it.)
|
# ? Sep 15, 2014 15:03 |
|
What I'm curious about is whether going 10GbE would lower latencies when there are large read requests. I'd figure it'll make a difference in open-world games that constantly stream geometry and textures. At least once most of it is in the ARC.
|
# ? Sep 15, 2014 16:09 |
|
I wouldn't expect it to; latency on 1Gbit over a few meters of cable and a single switch should be negligible anyway. The reads will finish faster, of course; in the ideal case (saturated network connections, same bandwidth overhead, reading from ARC) it should take a tenth of the time. I guess you could call the entire time from request to all data delivered the latency, in which case ... "yes"? Edit: Or do you mean when there are multiple readers? Computer viking fucked around with this message at 19:02 on Sep 15, 2014 |
# ? Sep 15, 2014 18:56 |
|
Alereon posted:Do you have to use branded Direct Attach cables? How does it identify the cable brand, I thought the entire point was that there weren't any electronics in the cable? Yes, there is a little EEPROM inside the connector that tells the SFP+ port what kinds of things it understands (you need that for the port to initialize properly), and HP, because a 500% markup on cables and ink is how they're still in the black each year, locks those down so only things on the whitelist will work. Which is why one of these days I'm going to pull apart my cheapo cable and see what size/type of EEPROM it uses and if I can't find a way to counterfeit the poo poo out of them. Making a cable that's HP genuine at one end and Intel Genuine at the other would finally allow me to get Jumbo frames working without having to spend $500+ on branded SR optics for the switch and network card. Combat Pretzel posted:What I'm curious about is whether going 10GBe would lower latencies when there's large read requests. I'd figure it'll make a difference in open world games that constantly stream geometry and textures. At least from the point on where most of it is in the ARC. Not really, no. In instances where multiple things are fighting over the already saturated stream, it can make a difference, but in the real world, you rarely see much latency improvement for large transfers. Small transfers do see some significant improvement, just because the packet switching rate is so much faster, the per packet latency is like 1/8th what 1GbE has. Which makes ARC reads a little nicer for iSCSI shares. My setup is kinda retarded right now. I have an OmniOS server that shares out ZFS volumes over the COMSTAR sub-system, which publishes the volume as a raw disk iSCSI LUN, which is mapped to a Windows 2012 R2 server, which shares the LUN out as a set of windows folders. The windows side volumes are also bitlockered. The entire setup is rube goldberg as gently caress. 
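For anyone wanting to replicate the COMSTAR part of that chain, the illumos-side steps look roughly like this (the names and the GUID are placeholders, not the actual setup):

```shell
# Create the backing ZVOL and register it as a SCSI logical unit:
zfs create -V 1T tank/luns/windows
sbdadm create-lu /dev/zvol/rdsk/tank/luns/windows

# Create an iSCSI target and expose the LU to initiators.
# create-lu prints the LU's GUID; stmfadm list-lu shows it again.
itadm create-target
stmfadm add-view 600144f0...    # placeholder GUID from the create-lu output
```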
Methylethylaldehyde fucked around with this message at 21:55 on Sep 15, 2014 |
# ? Sep 15, 2014 21:51 |
|
Oh gently caress me sideways, I just noticed I did something horrible on the fileserver: There's a huge difference between "zpool create poolname <list of disks>" and "zpool create poolname raidz <list of disks>". At least there's nothing important on there, and there's little enough data used to make backup/rebuild/restore trivial. Still, though.
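For anyone else double-checking their own pool, the difference really is that one word (disk names are placeholders):

```shell
# Striped pool: no redundancy, losing any one disk kills the pool.
zpool create tank da0 da1 da2 da3

# Single-parity raidz vdev: survives one disk failure.
zpool create tank raidz da0 da1 da2 da3

# Easy way to spot the mistake after the fact: a raidz pool shows a
# "raidz1-0" vdev grouping the disks in zpool status; a striped pool
# just lists them flat at the top level.
zpool status tank
```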
|
# ? Sep 16, 2014 01:03 |
|
Computer viking posted:Oh gently caress me sideways, I just noticed I did something horrible on the fileserver: There's a huge difference between "zpool create poolname <list of disks>" and "zpool create poolname raidz <list of disks>". At least there's nothing important on there, and there's little enough data used to make backup/rebuild/restore trivial. Still, though. The early zfs docs specifically warned against this, and used an example of adding another vdev to a pool with the wrong (or no) raidlevel.
|
# ? Sep 16, 2014 03:46 |
|
adorai posted:The early zfs docs specifically warned against this, and used an example of adding another vdev to a pool with the wrong (or no) raidlevel. It's a common enough mistake. Still, it's kind of embarrassing that I didn't notice before I tried to offline a disk and got the warning about not having enough copies. Good thing I caught it now (while trying to remove/4k-align/replace a disk) instead of when a disk actually died. I wonder if the online realignment thing would have worked ... but now that I'm recreating it anyway, I'll just do it right in the first place. Ah well, probably for the best.
|
# ? Sep 16, 2014 14:24 |
|
I thought ashift could only be set when you create a pool? I don't think you can set ashift per-disk when swapping. Made me glad I started my pool with 4k disks, so I never had to backup/wipe/re-create.
|
# ? Sep 17, 2014 00:12 |
|
Methylethylaldehyde posted:Which is why one of these days I'm going to pull apart my cheapo cable and see what size/type of EEPROM it uses and if I can't find a way to counterfeit the poo poo out of them. Making a cable that's HP genuine at one end and Intel Genuine at the other would finally allow me to get Jumbo frames working without having to spend $500+ on branded SR optics for the switch and network card. IIRC it's flash memory, writable over the I2C bus available on the SFP interface. At least it was when I dealt with this stuff; we would "qualify" our SFPs in the lab by burning the necessary values into the registers. Our production software had I2C write disabled (and I wouldn't be surprised if those traces were cut on production boards), but our lab hardware could write this. Some SFPs you actually do have to calibrate and qualify, but most of the time, as you say, it's just a markup. One provider even offers hardware so you can qualify your own off-brand SFPs, where each SFP qualification costs X credits that you purchase online.
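On Linux you can peek at that same EEPROM yourself without any lab gear, assuming the NIC driver supports it (interface name is a placeholder):

```shell
# Dump the SFP/SFP+ module's EEPROM contents (SFF-8472 data: vendor
# name, part number, serial, and optionally live diagnostics like
# temperature and TX/RX power), read over the module's I2C interface:
ethtool -m eth2

# Same data as a raw hex dump, handy for comparing cables:
ethtool -m eth2 hex on
```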
|
# ? Sep 17, 2014 00:48 |
|
luminalflux posted:IIRC it's a flash memory writeable over the I2C bus available on the SFP interface. At least it was when I dealt with this stuff - we would "qualify" our SFPs in the lab by burning in the necessary values in the registers. Our production software had I2C write disabled (and I wouldn't be surprised if those traces were cut on production boards), but our lab hardware could write this. Nothing says Russian business model quite like turning 'Do you like being hosed in the rear end by vendors? There has to be a better way!' into a software and hardware product designed to circumvent it. Behold! http://sfptotal.com/en/ Send money to glorious Crimea, Ukraine, and awesome soviet engineers help you work around vendor lock-ins! If I had more than one or two connectors to buy, I'd seriously consider purchasing one just to see how well it works.
|
# ? Sep 17, 2014 01:25 |
|
PitViper posted:I thought ashift could only be set when you create a pool? I don't think you can set ashift per-disk when swapping. Made me glad I started my pool with 4k disks, so I never had to backup/wipe/re-create. If you replace one disk at a time with 4k-block, 4k-aligned disks, the whole vdev will flip to 4k when you swap in the last disk ... I think. Based on that, I planned to offline a disk, make a 4k-aligned gnop device, and then replace the disk with the new device (that just happens to be the same physical drive with a tiny bit of misdirection). I might have to try it on a spare machine.
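The gnop dance sketched above would look roughly like this on FreeBSD (device and pool names are placeholders, and this is untested, hence the spare machine):

```shell
# Take the disk out of the vdev, then wrap it in a transparent gnop
# device that reports 4K sectors:
zpool offline tank ada3
gnop create -S 4096 /dev/ada3

# Replace the disk "with itself", via the 4K-sector wrapper:
zpool replace tank ada3 ada3.nop

# After the resilver completes, the gnop layer can go away; ZFS finds
# the real device again on import:
zpool export tank
gnop destroy ada3.nop
zpool import tank
```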
|
# ? Sep 17, 2014 02:39 |
|
Inspector_666 posted:Via a complex series of mirrors and lenses I use IR transmission to handle all of my LAN communication. Very plausible, actually
|
# ? Sep 17, 2014 04:10 |