SamDabbers
May 26, 2003



Inspector_666 posted:

Via a complex series of mirrors and lenses I use IR transmission to handle all of my LAN communication.

Pictures please. This sounds really cool.

Also this: http://ronja.twibright.com/

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Inspector_666 posted:

Via a complex series of mirrors and lenses I use IR transmission to handle all of my LAN communication.
Fiber optics?

Inspector_666
Oct 7, 2003

benny with the good hair
I was thinking more about those 2-foot range bullshit systems they put in laptops for a bit several years back.

And I was kidding.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Bokito posted:

Maybe you can go with Thunderbolt 2 networking?

http://www.engadget.com/2014/04/07/thunderbolt-2-networking/
Can you do Thunderbolt networking with any adapter? And at what distances does that work? I'm looking for a "cheap" direct high speed link between my NAS and my main computer, so switching isn't necessary.

evol262
Nov 30, 2010
#!/usr/bin/perl

Combat Pretzel posted:

Can you do Thunderbolt networking with any adapter? And at what distances does that work? I'm looking for a "cheap" direct high speed link between my NAS and my main computer, so switching isn't necessary.

Just use InfiniBand.

Or, for point-to-point, get a couple of 10GbE cards and just connect them. But your NAS is almost certainly not fast enough that you'll need more than GbE, and if it is, buying SSDs for your computer will be faster and cheaper.

If you want a "network", use InfiniBand. If you want a fast connection, put disks in your chassis and use SATA.

poverty goat
Feb 15, 2004



I played for a while with iSCSI between my desktop and my NAS. iSCSI over multiple parallel NICs can use all of the bandwidth for a single data stream; in my case I had two dedicated dual-port Intel NICs simply crossed over to one another with Cat6, and I could consistently push very close to 2Gbps sustained throughput on a single file transfer between cache and SSD with very little overhead. It was not without downsides, however:

  • it's block-level storage, meaning that the NAS is only aware of a big raw data file and oblivious of its contents. Filesystem compression, deduplication and whatnot aren't going to be of any use here, and :iiam: as to whether your NAS is going to cache it intelligently.
  • You can't access the stuff you're already sharing from your NAS's file system over iSCSI, nor can you share anything in the iSCSI extent (that's what the raw file disk thing is called, I guess) with the rest of the network from the NAS (though you can from the machine that mounts it). When I did this I found some murmurings about the possibility of mounting the disk simultaneously on the NAS and sharing its contents that way, but as I understand it, iSCSI isn't designed for simultaneous shared access like that and it might cause the end times or something.
  • It requires its own network and may or may not perform well with an unmanaged consumer switch in between. It might not strictly require a dedicated network, but it will eat the whole pipe and you don't want to do that. In my case I was next to the NAS, so I just crossed over the two NICs to one another.

In the end I learned that while the iSCSI extent can be allowed to grow dynamically, it cannot be shrunk again, so after bloating it up to 2TB shuffling temporary stuff around I was faced with having to delete the whole disk and start over to get the space back. I just said gently caress it and have put up with a measly 1Gbps ever since.

If you're ok with those constraints though, on amazon you can grab intel dual-port NICs for ~$40 and quad for ~$80, and share the full love of a saturated SATA3 bus and then some with one special computer for under $200

poverty goat fucked around with this message at 03:56 on Sep 13, 2014
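
For anyone wanting to reproduce a parallel-NIC iSCSI setup like the one above from a Linux initiator (the post above presumably used the Windows initiator with MPIO), a rough sketch with open-iscsi and multipath-tools; the portal addresses and target IQN are invented for illustration:

    # Discover the target on each dedicated link (two crossed-over NICs):
    iscsiadm -m discovery -t sendtargets -p 10.0.1.1
    iscsiadm -m discovery -t sendtargets -p 10.0.2.1

    # Log in over both portals so two sessions exist to the same LUN:
    iscsiadm -m node -T iqn.2014-09.lan.nas:extent0 -p 10.0.1.1 --login
    iscsiadm -m node -T iqn.2014-09.lan.nas:extent0 -p 10.0.2.1 --login

    # Let multipathd aggregate the two paths into one block device:
    multipath -ll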

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
It's a custom NAS based on FreeNAS. Since they'll be ZVOLs, there's caching and compression via ZFS. I think they also support TRIM, so space could be reclaimed, but I haven't tested that yet.

As to why iSCSI: I'd like to store disk-performance-independent and/or casually played games on the NAS. Since they're not going to be played on the laptop, they don't need to be shared. I've looked it up and others have tried it; with some games it works, others take offense at being on a network drive. To "fix" this, I have to make the drive pretend to be a physical one (to the apps, anyway). I'm only going to use iSCSI where it makes sense. Documents and media content are still going to be stored in a ZFS filesystem and shared via CIFS.

As for Thunderbolt, it seems like a cheap cop-out compared to multiport NICs or 10GbE ones. Sadly, there aren't really any expansion cards right now (even fewer that work under FreeNAS), except the Asus one for "only" 60€, which requires a TB header on the mainboard for some reason, despite already needing PCIe x4.

The mainboard for the NAS has two Gigabit ports. I'm going to run a direct line to my mainboard either way, and another to the Wifi router/switch for my laptop and tablet. Plan is to restrict iSCSI to the direct hook up and the rest to the router. If cable length is going to be an issue with Thunderbolt, it'll be useless to me, anyway. One point of the NAS was to get the disk vibrations out of my living room.

Combat Pretzel fucked around with this message at 12:32 on Sep 13, 2014
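
For reference, creating the kind of ZVOL described above, with ZFS-level lz4 compression, looks roughly like this from a FreeNAS shell; the pool and dataset names are placeholders, and the web UI wraps the same steps:

    # Sparse 200G ZVOL to back the iSCSI games disk, with lz4 compression:
    zfs create -s -V 200G -o compression=lz4 -o volblocksize=16K tank/games-zvol

    # Check compression and space accounting on the volume:
    zfs get compression,compressratio,used,volsize tank/games-zvol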

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Have you actually tested with just 1GbE? Our entire VDI pool used no more than 700Mbps (5-minute average) within the last 24 hours. That's providing all disk for about 300 desktops, maybe even more; I don't know how many people actually work on Friday around here.

evol262
Nov 30, 2010
#!/usr/bin/perl

Combat Pretzel posted:

It's a custom NAS based on FreeNAS. Since they'll be ZVOLs, there's caching and compression via ZFS. I think they also support TRIM, so space could be reclaimed, but I haven't tested that yet.

As to why iSCSI: I'd like to store disk-performance-independent and/or casually played games on the NAS. Since they're not going to be played on the laptop, they don't need to be shared. I've looked it up and others have tried it; with some games it works, others take offense at being on a network drive. To "fix" this, I have to make the drive pretend to be a physical one (to the apps, anyway). I'm only going to use iSCSI where it makes sense. Documents and media content are still going to be stored in a ZFS filesystem and shared via CIFS.

As for Thunderbolt, it seems like a cheap cop-out compared to multiport NICs or 10GbE ones. Sadly, there aren't really any expansion cards right now (even fewer that work under FreeNAS), except the Asus one for "only" 60€, which requires a TB header on the mainboard for some reason, despite already needing PCIe x4.

The mainboard for the NAS has two Gigabit ports. I'm going to run a direct line to my mainboard either way, and another to the Wifi router/switch for my laptop and tablet. Plan is to restrict iSCSI to the direct hook up and the rest to the router. If cable length is going to be an issue with Thunderbolt, it'll be useless to me, anyway. One point of the NAS was to get the disk vibrations out of my living room.

PCIe 4x isn't actually fast enough (8x for dual 10gb links, probably 16x for the newer 20gb lightpeak stuff), but I'd guess the header is actually for power over the bus, since lightpeak sends more DC than it'll get from the PCI bus

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

evol262 posted:

PCIe 4x isn't actually fast enough (8x for dual 10gb links, probably 16x for the newer 20gb lightpeak stuff), but I'd guess the header is actually for power over the bus, since lightpeak sends more DC than it'll get from the PCI bus
Thunderbolt and USB 3.1 are both carrying PCIe 3.0 lanes, but chipsets only provide PCIe 2.0 lanes. You can either connect devices to the PCIe 3.0 lanes coming from Intel CPUs, or use more lanes coming from the chipset to talk to devices, up to the maximum of 8. Since Thunderbolt carries PCIe 3.0 there is no overhead issue with 1:1 lane mapping; that probably isn't the case with 10GbE, for example. I suspect you can push more data down 10GbE than you can over 1x PCIe 3.0, counting protocol overhead.
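
Rough numbers behind that last comparison, as a back-of-the-envelope check (theoretical link rates, not measurements):

    # 10GbE line rate in MB/s, before Ethernet/IP/TCP framing:
    echo $((10000 / 8))                  # 1250 MB/s
    # One PCIe 3.0 lane: 8 GT/s with 128b/130b encoding:
    echo "8000 * 128 / 130 / 8" | bc     # ~984 MB/s
    # One PCIe 2.0 lane: 5 GT/s with 8b/10b encoding:
    echo $((5000 * 8 / 10 / 8))          # 500 MB/s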

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

adorai posted:

Have you actually tested with just 1GbE?
Still waiting for parts; this is theorycrafting.

On the FreeNAS test VM, which runs on a single vCPU, an emulated NIC and just 2GB of RAM, I can shove over 150 megabytes a second over the virtual link. The actual box is going to be a quad-core Xeon with 16GB; it should be able to saturate a single 1GbE link. That said, it's merely an option I'm looking into for the iSCSI usage scenario.

evol262 posted:

PCIe 4x isn't actually fast enough (8x for dual 10gb links, probably 16x for the newer 20gb lightpeak stuff), but I'd guess the header is actually for power over the bus, since lightpeak sends more DC than it'll get from the PCI bus
Really? I thought the PCIe bus interface was supposed to be able to deliver 75W? Either way, there are no proper expansion cards, despite Intel supposedly having said it would release some to drive adoption.
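
On the question above of whether 1GbE is even the bottleneck: once the parts arrive, a quick synthetic test separates network limits from disk limits before any iSCSI tuning. A sketch using iperf3, with a placeholder hostname:

    # On the NAS:
    iperf3 -s

    # On the desktop, push TCP over the direct link for 30 seconds:
    iperf3 -c nas.lan -t 30

    # Reverse mode, so the NAS is the sender:
    iperf3 -c nas.lan -t 30 -R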

poverty goat
Feb 15, 2004



Combat Pretzel posted:

It's a custom NAS based on FreeNAS. Since they'll be ZVOLs, there's caching and compression via ZFS. I think they also support TRIM, so space could be reclaimed, but I haven't tested that yet.

I don't know anything about giving iSCSI a full raw disk; in my case it was just a file in a raidz2 array with an SSD cache, shared with my media. When I set it up I could specify a maximum size, and I could change the maximum size arbitrarily down the road without preallocating any of the space, but once it grew to a certain size there was no way (at least, not through FreeNAS's web UI) to shrink the 2TB file back down once it no longer contained 2TB of data.

I wouldn't mind playing with it again, but now I've gone and replaced FreeNAS with Ubuntu, since it has decent ZFS support now and I got sick of jails/ports and not being able to use VirtualBox/VMware on BSD.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
There's a VirtualBox jail in the latest release now. Whatever that's worth. I'd see what it does, but a) I don't think Hyper-V does nested virtualization, and b) installing plugins and jails takes trillions of hours for whatever reason.

poverty goat
Feb 15, 2004



If it uses the Linux compatibility layer stuff it's going to be limited to 32-bit and probably still be a broken, buggy mess. The last straw for me was when I spent a few hours in the newly included Debian/Ubuntu jails trying and failing to do basic stuff like install Sun Java for a Minecraft server.

ZFS performance isn't quite as great because it's sharing a measly 8GB of memory with more stuff, but otherwise I couldn't be happier with my Ubuntu setup. I've got my VMs running headless as services now and everything. The stuff I offloaded to the iSCSI drive for a while is just on a local 500GB single-platter WD Black with an SSD cache via Intel RST now.

poverty goat fucked around with this message at 23:02 on Sep 13, 2014

evol262
Nov 30, 2010
#!/usr/bin/perl

Alereon posted:

Thunderbolt and USB 3.1 are both carrying PCIe 3.0 lanes, but chipsets only provide PCIe 2.0 lanes. You can either connect devices to the PCIe 3.0 lanes coming from Intel CPUs, or use more lanes coming from the chipset to talk to devices, up to the maximum of 8. Since Thunderbolt carries PCIe 3.0 there is no overhead issue with 1:1 lane mapping; that probably isn't the case with 10GbE, for example. I suspect you can push more data down 10GbE than you can over 1x PCIe 3.0, counting protocol overhead.

The more you know.

Combat Pretzel posted:

There's a VirtualBox jail in the latest release now. Whatever that's worth. I'd see what it does, but a) I don't think Hyper-V does nested virtualization, and b) installing plugins and jails takes trillions of hours for whatever reason.

Hyper-V doesn't do nested virt (which might be OK since vbox can do binary translation and doesn't need accel), but I probably wouldn't bother either.

PCIe is 75W for x16. I think x4 is 25W. That's still a lot higher than the ~10W going out on Lightpeak, but I'd assumed that the chipset was inefficient as hell. Alereon's explanation makes a lot more sense.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

gggiiimmmppp posted:

If it uses the Linux compatibility layer stuff it's going to be limited to 32-bit and probably still be a broken, buggy mess. The last straw for me was when I spent a few hours in the newly included Debian/Ubuntu jails trying and failing to do basic stuff like install Sun Java for a Minecraft server.
VirtualBox has had a FreeBSD version since forever. I doubt it'd be some implementation on top of the Linux compatibility layer.

That said, if you should ever reconsider FreeNAS, you'd be better off putting it on ESXi and running a separate VM for Ubuntu, despite the FreeNAS folks getting their panties in a twist about virtualization.

poverty goat
Feb 15, 2004



Combat Pretzel posted:

VirtualBox has had a FreeBSD version since forever. I doubt it'd be some implementation on top of the Linux compatibility layer.

That said, if you should ever reconsider FreeNAS, you'd be better off putting it on ESXi and running a separate VM for Ubuntu, despite the FreeNAS folks getting their panties in a twist about virtualization.

It's been a few months, so I don't remember if I just couldn't get it working or couldn't find it or what. I'd have taken ESXi for a spin, but I was under the impression that I couldn't give a VM raw access to the existing RAIDZ array without a newer CPU with AMD-Vi/VT-d, but maybe that's not the case :iiam:. I definitely don't want to entangle it in virtual disks.

e: vv :ms:

poverty goat fucked around with this message at 05:41 on Sep 14, 2014

evol262
Nov 30, 2010
#!/usr/bin/perl
No, RDM sucks. You want controller passthrough with VT-d or SR-IOV.
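
Before planning around controller passthrough, it's worth confirming the CPU and board actually expose an IOMMU. A quick sanity check on a Linux host (ESXi has its own DirectPath I/O configuration for the same thing); this assumes an Intel platform:

    # The kernel must be booted with the IOMMU enabled (kernel command line):
    #   intel_iommu=on        (amd_iommu=on for AMD-Vi)

    # After a reboot, confirm DMAR/IOMMU initialization:
    dmesg | grep -i -e dmar -e iommu

    # List IOMMU groups to see what can be passed through as a unit:
    find /sys/kernel/iommu_groups/ -type l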

Ninja Rope
Oct 22, 2005

Wee.
I think Linux's bonding driver in balance-rr mode would work to scale traffic above 1g on the transmit side.
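
A minimal sketch of that setup with iproute2 on a reasonably recent Linux box; the interface names and address are placeholders. balance-rr is the one bonding mode that stripes a single TCP stream across slaves, which is also exactly why it reorders packets:

    # Create a round-robin bond and enslave two GbE NICs:
    ip link add bond0 type bond mode balance-rr
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0

    ip addr add 10.0.3.1/24 dev bond0
    ip link set bond0 up

    # Confirm the mode and slave state:
    cat /proc/net/bonding/bond0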

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Ninja Rope posted:

I think Linux's bonding driver in balance-rr mode would work to scale traffic above 1g on the transmit side.
I don't know anything about this feature. What I do know is that if you enable a feature like this you run the risk of seeing a lot of out of order packets.

Ninja Rope
Oct 22, 2005

Wee.

adorai posted:

I don't know anything about this feature. What I do know is that if you enable a feature like this you run the risk of seeing a lot of out of order packets.

Sure, but OSs should be able to handle that situation reasonably well. Someone should try it and report back.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

evol262 posted:

No, RDM sucks. You want controller passthrough with VT-d or SR-IOV.
That's the reason I didn't go with the 6-core Xeon desktop comedy option, where there'd be a VM running FreeNAS on two of the cores. Of course, at the end of September the Windows 9 tech preview comes out, and I wouldn't be surprised if Hyper-V suddenly supports VT-d passthrough.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Coincidentally, Anandtech posted a news story about the future of Thunderbolt. One interesting bit of information explains the lack of Thunderbolt on consumer boards: Intel does not allow non-Apple motherboards to ship with devices connected to the PCIe lanes from the CPU, primarily due to poor driver support under Windows on systems without the Intel chipset drivers installed. Since you'd need four PCIe 2.0 lanes to feed a Thunderbolt 2 controller, that only leaves four lanes for everything else, not enough.

This also clarifies that I was wrong about something above: while Thunderbolt carries PCIe 3.0 signals, the actual controllers are still limited to PCIe 2.0 when talking to the system. This is because no chipsets support PCIe 3.0, and Apple would have been Intel's only consumer of CPU-connected PCIe 3.0 chips.

The article also mentions that drivers have been updated to support Thunderbolt networking, for ~800MB/sec of throughput, or ~1600MB/sec over Thunderbolt 2, which aggregates both lanes.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Ninja Rope posted:

Sure, but OSs should be able to handle that situation reasonably well. Someone should try it and report back.
It might help throughput, but it wouldn't be good for latency.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Ninja Rope posted:

Sure, but OSs should be able to handle that situation reasonably well. Someone should try it and report back.
Yes. They generally handle that situation by asking for TCP retransmissions every time they receive more than a handful of out-of-order packets.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
I set up a 10GbE network between my storage box, Hyper-V host and primary workstation. I inherited an HP switch with a set of 10GbE SFP+ line cards in the back, got a set of re-branded Intel X520 dual-port SFP+ network cards from eBay, and hooked the whole thing up with name-brand, non-generic direct attach cables. It works super sweet: single-threaded transfers over iSCSI to cache or the SSDs average around 3Gb/sec, and multithreaded reads or writes to cache will saturate the entire 10Gb. Very much worth the effort getting it set up.

That said: gently caress HP in the rear end with the largest, rustiest piece of scrap steel you could reasonably be expected to move using a forklift. Those three loving cables doubled the cost of the entire project. Makes me want to buy an EEPROM reader and just duplicate the gently caress out of the name-brand cables. Literally all it does is read the manufacturer and SN off the fake transceiver, poo poo on it, and lock it out if it doesn't match some regex the switch firmware runs. And this is of course for my protection, as I wouldn't get the vicious rear end reaming if I bought the generic $20 cables from Cables4Less or something. gently caress, even Cisco has a CLI command to disable the Genuine_Cisco checks, and they practically bathe in your money after selling you a networking setup.
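
For the curious, you can usually see what the switch reads out of that EEPROM without any soldering; on Linux, ethtool can dump an SFP+ module's ID data over the same I2C interface (the interface name is a placeholder, and not every NIC driver supports it):

    # Dump the module EEPROM: vendor name, part number, cable type, etc.
    ethtool -m eth2

    # Hex view of the same data, handy for diffing a "genuine" cable
    # against a generic one:
    ethtool -m eth2 hex on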

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Do you have to use branded Direct Attach cables? How does it identify the cable brand? I thought the entire point was that there weren't any electronics in the cable.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Alereon posted:

Do you have to use branded Direct Attach cables? How does it identify the cable brand? I thought the entire point was that there weren't any electronics in the cable.
Minimal electronics in a twinaxial cable. We have found that some brands of cable don't work with certain NIC manufacturers. In particular, our Belkin twinax cables did not work with our Oracle ZFS Appliance and we had to use Cisco.

Computer viking
May 30, 2011
Now with less breakage.

I do actually have a FreeBSD/ZFS box exporting a 1TB iSCSI volume to my Windows machine, and I have a fair bit of my Steam library on it. I guess I can take a look at the real-world bandwidth usage and post it here, just to add some actual data. :)

(I, too, am looking at buying some cheap 4Gbit or 10Gbit gear from eBay, just for the fun of it.)

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
What I'm curious about is whether going 10GbE would lower latencies when there are large read requests. I'd figure it'd make a difference in open-world games that constantly stream geometry and textures, at least from the point where most of it is in the ARC.

Computer viking
May 30, 2011
Now with less breakage.

I wouldn't expect it to - latency on 1Gbit over a few meters of cable and a single switch should be negligible anyway.

The reads will finish faster, ofc - in the ideal case (saturated network connections, same bandwidth overhead, reading from ARC) it should take a tenth of the time. I guess you could call the entire time from request to all data delivered the latency, in which case ... "yes"?

Edit: Or do you mean when there's multiple readers?

Computer viking fucked around with this message at 19:02 on Sep 15, 2014

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Alereon posted:

Do you have to use branded Direct Attach cables? How does it identify the cable brand? I thought the entire point was that there weren't any electronics in the cable.

Yes, there is a little EEPROM inside the connector that tells the SFP+ port what kinds of things it understands (you need that for the port to initialize properly), and HP, because a 500% markup on cables and ink is how they're still in the black each year, locks those down so only things on the whitelist will work.

Which is why one of these days I'm going to pull apart my cheapo cable and see what size/type of EEPROM it uses and if I can't find a way to counterfeit the poo poo out of them. Making a cable that's HP genuine at one end and Intel Genuine at the other would finally allow me to get Jumbo frames working without having to spend $500+ on branded SR optics for the switch and network card.

Combat Pretzel posted:

What I'm curious about is whether going 10GbE would lower latencies when there are large read requests. I'd figure it'd make a difference in open-world games that constantly stream geometry and textures, at least from the point where most of it is in the ARC.

Not really, no. In instances where multiple things are fighting over the already saturated stream, it can make a difference, but in the real world, you rarely see much latency improvement for large transfers.

Small transfers do see some significant improvement, just because the packet switching rate is so much faster; the per-packet latency is like 1/8th of what 1GbE has, which makes ARC reads a little nicer for iSCSI shares.



My setup is kinda retarded right now. I have an OmniOS server that shares out ZFS volumes over the COMSTAR subsystem, which publishes the volume as a raw-disk iSCSI LUN, which is mapped to a Windows 2012 R2 server, which shares the LUN out as a set of Windows folders. The Windows-side volumes are also BitLockered. The entire setup is Rube Goldberg as gently caress.

Methylethylaldehyde fucked around with this message at 21:55 on Sep 15, 2014
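
For anyone who wants to replicate the OmniOS half of that chain, the COMSTAR side is only a handful of commands. A rough sketch; the pool/volume names are placeholders, and the view here is the lazy expose-to-everyone variant (use host groups for real ACLs):

    # ZFS volume to export:
    zfs create -V 1T tank/luns/games

    # Register it with STMF as a SCSI logical unit:
    sbdadm create-lu /dev/zvol/rdsk/tank/luns/games

    # Map the LU to all initiators (sbdadm prints the GUID to use here):
    stmfadm add-view <GUID>

    # Bring up the iSCSI target service and create a target:
    svcadm enable -r svc:/network/iscsi/target:default
    itadm create-target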

Computer viking
May 30, 2011
Now with less breakage.

Oh gently caress me sideways, I just noticed I did something horrible on the fileserver: There's a huge difference between "zpool create poolname <list of disks>" and "zpool create poolname raidz <list of disks>". At least there's nothing important on there, and there's little enough data used to make backup/rebuild/restore trivial. Still, though.
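
For anyone skimming past: the two commands really do look that similar, and only the second gives you any redundancy. Device names below are placeholders:

    # Plain stripe across three disks -- any single disk failure loses the pool:
    zpool create tank ada1 ada2 ada3

    # RAID-Z1 across the same disks -- survives one disk failure:
    zpool create tank raidz ada1 ada2 ada3

    # "zpool status tank" makes the difference obvious: the second pool shows a
    # raidz1-0 vdev, while the first just lists the bare disks at the top level.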

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Computer viking posted:

Oh gently caress me sideways, I just noticed I did something horrible on the fileserver: There's a huge difference between "zpool create poolname <list of disks>" and "zpool create poolname raidz <list of disks>". At least there's nothing important on there, and there's little enough data used to make backup/rebuild/restore trivial. Still, though.
The early ZFS docs specifically warned against this, and used an example of adding another vdev to a pool with the wrong (or no) RAID level. It's a common enough mistake.

Computer viking
May 30, 2011
Now with less breakage.

adorai posted:

The early ZFS docs specifically warned against this, and used an example of adding another vdev to a pool with the wrong (or no) RAID level. It's a common enough mistake.

Still, it's kind of embarrassing that I didn't notice before I tried to offline a disk and got the warning about not having enough copies. Good thing I caught it now (while trying to remove/4K-align/replace one disk at a time) instead of when a disk actually died.

I wonder if the online realignment thing would have worked ... but now that I'm recreating it anyway I'll just do it right in the first place. Ah well, probably for the best.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I thought ashift could only be set when you create a pool? I don't think you can set ashift per-disk when swapping. Made me glad I started my pool with 4k disks, so I never had to backup/wipe/re-create.
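
Right, ashift is baked into each vdev when the vdev is created. If you're not sure what an existing pool ended up with, zdb will show it (the pool name is a placeholder):

    # Show the ashift each vdev was created with (9 = 512-byte, 12 = 4K sectors):
    zdb -C tank | grep ashift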

luminalflux
May 27, 2005



Methylethylaldehyde posted:

Which is why one of these days I'm going to pull apart my cheapo cable and see what size/type of EEPROM it uses and if I can't find a way to counterfeit the poo poo out of them. Making a cable that's HP genuine at one end and Intel Genuine at the other would finally allow me to get Jumbo frames working without having to spend $500+ on branded SR optics for the switch and network card.

IIRC it's a flash memory writeable over the I2C bus available on the SFP interface. At least it was when I dealt with this stuff - we would "qualify" our SFPs in the lab by burning in the necessary values in the registers. Our production software had I2C write disabled (and I wouldn't be surprised if those traces were cut on production boards), but our lab hardware could write this.

Some SFPs you actually do have to calibrate and qualify, but most of the time as you say it's just a markup. One provider even offers hardware so you can qualify your own off-brand SFPs, where each SFP qualification costs X credits that you purchase online.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

luminalflux posted:

IIRC it's a flash memory writeable over the I2C bus available on the SFP interface. At least it was when I dealt with this stuff - we would "qualify" our SFPs in the lab by burning in the necessary values in the registers. Our production software had I2C write disabled (and I wouldn't be surprised if those traces were cut on production boards), but our lab hardware could write this.

Some SFPs you actually do have to calibrate and qualify, but most of the time as you say it's just a markup. One provider even offers hardware so you can qualify your own off-brand SFPs, where each SFP qualification costs X credits that you purchase online.

Nothing says Russian business model quite like turning 'Do you like being hosed in the rear end by vendors? There has to be a better way!' into a software and hardware product designed to circumvent it. Behold! http://sfptotal.com/en/ Send money to glorious Crimea, Ukraine, and awesome soviet engineers help you work around vendor lock-ins!

If I had more than one or two connectors to buy, I'd seriously consider purchasing one just to see how well it works.

Computer viking
May 30, 2011
Now with less breakage.

PitViper posted:

I thought ashift could only be set when you create a pool? I don't think you can set ashift per-disk when swapping. Made me glad I started my pool with 4k disks, so I never had to backup/wipe/re-create.

If you replace one disk at a time with 4K-block, 4K-aligned disks, the whole vdev will flip to 4K when you swap in the last disk ... I think.

Based on that, I planned to offline a disk, make a 4k-aligned gnop device, and then replace the disk with the new device (that just happens to be the same physical drive with a tiny bit of misdirection). I might have to try it on a spare machine. :science:
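
That gnop dance looks roughly like this on FreeBSD; pool and disk names are placeholders, and whether the vdev's ashift actually changes this way is exactly the thing worth verifying on a spare machine first:

    # Take the disk out of service:
    zpool offline tank ada1

    # Wrap the same disk in a gnop device that reports 4K sectors:
    gnop create -S 4096 /dev/ada1

    # Resilver onto the wrapped device (same physical drive, new provider name):
    zpool replace tank ada1 /dev/ada1.nop

    # Once resilvered, the wrapper can be dropped and the pool re-imported
    # against the bare disk:
    #   zpool export tank; gnop destroy /dev/ada1.nop; zpool import tank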


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Inspector_666 posted:

Via a complex series of mirrors and lenses I use IR transmission to handle all of my LAN communication.

Very plausible, actually
