evol262
Nov 30, 2010
#!/usr/bin/perl
The Steam Link will work great as long as you're just playing games. Alternatively, you can just stream to a Steam client on Linux and change your monitor's input (this is nice anyway if you're playing an FPS or something where input latency really matters).

As long as your system does VT-d and the IOMMU groups check out, you're good to go.
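
If you want to sanity-check a box you already have, the groups are visible in sysfs. A minimal sketch, assuming a Linux host with the IOMMU enabled (remember that everything in a group has to be passed through together):

code:
# List IOMMU groups and the PCI devices in each, by walking sysfs.
# An empty /sys/kernel/iommu_groups means the IOMMU is off or unsupported
# (check VT-d/AMD-Vi in firmware and intel_iommu=on/amd_iommu=on on the
# kernel command line).
import os

GROUPS = "/sys/kernel/iommu_groups"

for group in sorted(os.listdir(GROUPS), key=int):
    devices = os.listdir(os.path.join(GROUPS, group, "devices"))
    print(f"IOMMU group {group}: {', '.join(devices)}")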


DevNull
Apr 4, 2007

And sometimes is seen a strange spot in the sky
A human being that was given to fly

Combat Pretzel posted:

So I've been watching this video: https://www.youtube.com/watch?v=LXOaCkbt4lI

Some guy built a 2S system with seven graphics cards to run seven Windows VMs for gaming. Apparently he runs games like Crysis 3 in a VM at 1440p and decent framerates. Is the state of PEG virtualization that good these days on Linux?

This isn't really that impressive, since it's just passthrough of the graphics cards. Actual virtualization of the GPU is nowhere near that good: it works for older games and basic desktop stuff, but not high-performance stuff.

evol262 posted:

The short answer is "yes". I can't be hosed watching some video, and scrolling through it doesn't give any indication of whether that's GPU virtualization or just passthrough. Passthrough works fine in Linux, including with GRID cards/etc.

There's a lot of work going into virtio-vga/virtio-gpu/virgil, but actual virtualization of GPUs isn't as good as VMware's. That said, judging from the hardware used, I don't think they used it in that video, either.

No. Somebody already quoted an earlier post, but passthrough doesn't work that way. A stub PCI driver (pci-stub or vfio-pci) is used to make sure the kernel doesn't grab and initialize the device.

In theory, PCIe hotplugging means that you can dynamically unload the stub module and let it be claimed, but you'd need to write another kernel module to do it, which would never, ever get upstreamed/mainlined.
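
Either way, you can confirm which driver currently owns the card through sysfs. A quick sketch; the device address is a placeholder, so substitute your GPU's (lspci -D shows it):

code:
# Print the kernel driver bound to a PCI device, via the sysfs symlink.
# Expect pci-stub or vfio-pci on a device prepared for passthrough.
import os

BDF = "0000:01:00.0"  # hypothetical slot; use your card's address

driver_link = f"/sys/bus/pci/devices/{BDF}/driver"
if os.path.islink(driver_link):
    print(BDF, "is bound to", os.path.basename(os.readlink(driver_link)))
else:
    print(BDF, "has no driver bound")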

If you just want a Windows VM for gaming/accelerated stuff, and you're passing through one (or multiple) GPU(s), this works great, and you can just use the onboard Intel graphics for your host. Otherwise, use multiple GPUs (like they did in this video), an SR-IOV GPU, or create a VM with the device passed through which dual boots. You can also pass the device through to multiple VMs and only have one running at a time.


Cards with a UEFI-capable VBIOS (basically all of them these days) are easier, because UEFI is much nicer at the firmware level. But your OS needs to boot in UEFI mode (with the requisite UEFI bootloader) to make it feasible. You can do it without, but stubbing/remapping VGA memory is a huge pain in the rear end, and best avoided.
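
Checking which mode the host actually booted in is easy on Linux, since the kernel exposes it in sysfs:

code:
# /sys/firmware/efi only exists when the kernel was started in UEFI mode.
import os

if os.path.isdir("/sys/firmware/efi"):
    print("Booted in UEFI mode")
else:
    print("Booted in legacy BIOS/CSM mode")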

Your other concern with switching inputs on the monitor is that you still need input devices for the guest. This is fine with Steam streaming, but otherwise expect to need a second keyboard/mouse, or use a KVM switch.

Feel free to ask away.


This is GPU SR-IOV. GRID cards support it, among others. Some FireGLs also do. It's not really "partitioning", but it presents multiple PCIe device IDs so you can map the same card into multiple VMs, in much the same way as NIC SR-IOV.
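
On Linux you can see whether a card exposes virtual functions through sysfs, same as with NICs. A sketch, with the device address as a placeholder:

code:
# Report SR-IOV capability for a PCI device. The sriov_totalvfs and
# sriov_numvfs files only exist on SR-IOV-capable devices.
BDF = "0000:01:00.0"  # hypothetical; substitute your card's address
base = f"/sys/bus/pci/devices/{BDF}"

try:
    with open(f"{base}/sriov_totalvfs") as f:
        total = int(f.read())
    with open(f"{base}/sriov_numvfs") as f:
        enabled = int(f.read())
    print(f"{BDF}: {enabled} of {total} virtual functions enabled")
except FileNotFoundError:
    print(f"{BDF} is not SR-IOV capable")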

The problem in graphics virtualization is that it is mostly being driven by NVIDIA and their stuff is honestly crap. I don't have hopes for good virtualized graphics until Intel decides it is worth the money.

evol262
Nov 30, 2010
#!/usr/bin/perl

DevNull posted:

The problem in graphics virtualization is that it is mostly being driven by NVIDIA and their stuff is honestly crap. I don't have hopes for good virtualized graphics until Intel decides it is worth the money.
Just CUDA/OpenCL redux.

I haven't touched AMD's stuff yet, but at least it's not all done in the driver like nvidia's, so I'm hoping it's better...

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

DevNull posted:

I don't have hopes for good virtualized graphics until Intel decides it is worth the money.
Intel is doing something in regards to graphics "virtualization", which seems to amount to more or less hooking the guest driver up to the host one over the VM bus, if I understood it correctly. But it doesn't appear to be vendor-neutral stuff, and as we know, NVIDIA's not going to follow along anyhow. Intel GVT or whatever it was.

evol262
Nov 30, 2010
#!/usr/bin/perl

Combat Pretzel posted:

Intel is doing something in regards to graphics "virtualization", which seems to amount to more or less hooking the guest driver up to the host one over the VM bus, if I understood it correctly. But it doesn't appear to be vendor-neutral stuff, and as we know, NVIDIA's not going to follow along anyhow. Intel GVT or whatever it was.

It's basically the same approach nvidia has taken. It works on Iris Pro with Xen and KVM. It could probably work with Hyper-V, too, if Intel adds it to the Windows driver (and if Microsoft lets them add bits to Hyper-V). AMD's approach is by far the cleanest, but it's not possible to map multiple guests to one physical non-IOV card without driver support.

Intel will hopefully follow AMD in the future (5th-gen GPUs). nvidia will grudgingly give it lackluster support and throw money into making it look bad compared to their proprietary crap, which they'll make sure only works on Quadros, ridiculous gamer cards, and Tesla/GRID cards (AMD and Intel are both also gating this on "enterprise/premium" hardware now, but are much better about trickling features down once the cost of research has been paid off).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I hope that AMD adds/doesn't block the feature on consumer cards.

I think when upgrading I'll take the plunge on a whole VM system for shits and giggles, tossing the card between Windows and Linux guests. Hopefully someone figures out how to move devices to and from the host in the medium term. There may be some interesting things to try when there's a permanent VM host underneath, too. Seems like a cheaper way to run everything from the NAS, at least cheaper than any host bus adapters.

kiwid
Sep 30, 2013

We're about to order our brand new infrastructure and I was wondering if you guys can take a look at it real quick and see if this all looks right (from a networking perspective).

Assume all SFP+ links are 10G.

edit: the vmware teaming links are going to be in failover mode rather than teaming.

kiwid fucked around with this message at 16:10 on Jan 7, 2016

Pile Of Garbage
May 28, 2007



kiwid posted:

We're about to order our brand new infrastructure and I was wondering if you guys can take a look at it real quick and see if this all looks right (from a networking perspective).

Assume all SFP+ links are 10G.

edit: the vmware teaming links are going to be in failover mode rather than teaming.



Are the two switches on the core side in the same stack or standalone? If they're standalone then you're going to have to provide a lot more detail about the Layer 2/3 configuration if you want anyone to validate it from a networking perspective.

Also, I'd be wary about using the Cisco SMB switches, as they aren't exactly scalable (they've also got weird specs; apparently they can only do jumbo frames on 10/100/1000 ports).

kiwid
Sep 30, 2013

cheese-cube posted:

Are the two switches on the core side in the same stack or standalone? If they're standalone then you're going to have to provide a lot more detail about the Layer 2/3 configuration if you want anyone to validate it from a networking perspective.

Also, I'd be wary about using the Cisco SMB switches, as they aren't exactly scalable (they've also got weird specs; apparently they can only do jumbo frames on 10/100/1000 ports).

They are standalone and we'd be doing round-robin MPIO with them. Do you recommend something else? We didn't want to spend a gently caress load of money on the switches if we didn't have to. We aren't too worried about scalability since for the foreseeable future we're just going to be running VMware essentials plus which limits us to 3 hosts anyway. However, I didn't know about the jumbo frame limitation so I'll have to look into that.

KillHour
Oct 28, 2007


Strongly considering picking this up for my new virtualization testbed.

http://www.ebay.com/itm/HP-Fusion-I...wkAAOSwCQNWewfy

Specs: E5-2620 x2, 32GB DDR3 ECC, 1TB 7200RPM HDD.

Datasheet:
http://www.google.com/url?sa=t&rct=...bpab64iZh7oVtNw

This is mostly for learning/studying, but I'll probably end up hosting some game servers on it for general goon enjoyment. I'll be running vSphere 5 with a mix of Windows and Linux guests. Any thoughts?

Edit: There's this one too, but it's more than I want to pay if I don't need the extra space.
http://www.ebay.com/itm/640GB-Fusio...5cAAOSwFnFWFWtK

KillHour fucked around with this message at 17:00 on Jan 7, 2016

Pile Of Garbage
May 28, 2007



kiwid posted:

They are standalone and we'd be doing round-robin MPIO with them. Do you recommend something else? We didn't want to spend a gently caress load of money on the switches if we didn't have to. We aren't too worried about scalability since for the foreseeable future we're just going to be running VMware essentials plus which limits us to 3 hosts anyway. However, I didn't know about the jumbo frame limitation so I'll have to look into that.

How is the storage going to be presented? iSCSI or NFS?

kiwid
Sep 30, 2013

cheese-cube posted:

How is the storage going to be presented? iSCSI or NFS?

iSCSI both with the nimble and the qnap.

edit: Do you have any sources for the jumbo frame limitation I can look at?

kiwid fucked around with this message at 16:58 on Jan 7, 2016

Pile Of Garbage
May 28, 2007



kiwid posted:

iSCSI both with the nimble and the qnap.

edit: Do you have any sources for the jumbo frame limitation I can look at?

Hmm it should work I guess with iSCSI provided that each target/initiator IP address is tied to a single interface. Are you purchasing via a VAR? It would be a good idea to get one of their sales engineers or whatnot to eyeball the design.

Re jumbo frames, I just noticed it when skimming this datasheet which has the following line: http://www.cisco.com/c/en/us/products/collateral/switches/small-business-500-series-stackable-managed-switches/c78-695646_data_sheet.html

quote:

Frame sizes up to 9K (9216) bytes. Supported on 10/100 and Gigabit Ethernet interfaces. The default MTU is 2K.

Just thought it was weird how they specifically mention only 10/100/1000. Maybe the hardware can't switch 9K frames at 10G or something. Maybe ask over in the Cisco thread.

kiwid
Sep 30, 2013

cheese-cube posted:

Hmm it should work I guess with iSCSI provided that each target/initiator IP address is tied to a single interface. Are you purchasing via a VAR? It would be a good idea to get one of their sales engineers or whatnot to eyeball the design.

Re jumbo frames, I just noticed it when skimming this datasheet which has the following line: http://www.cisco.com/c/en/us/products/collateral/switches/small-business-500-series-stackable-managed-switches/c78-695646_data_sheet.html


Just thought it was weird how they specifically mention only 10/100/1000. Maybe the hardware can't switch 9K frames at 10G or something. Maybe ask over in the Cisco thread.

Great, thanks for your help. Yes it's through a VAR and they say it's solid as they have other SMBs running a similar setup.

edit: I started raising these questions with our VAR, so they're setting us up with calls to Cisco professionals; we'll see where it goes.

kiwid fucked around with this message at 17:30 on Jan 7, 2016

KS
Jun 10, 2003
Outrageous Lumpwad

kiwid posted:

We're about to order our brand new infrastructure and I was wondering if you guys can take a look at it real quick and see if this all looks right (from a networking perspective).

Assume all SFP+ links are 10G.

edit: the vmware teaming links are going to be in failover mode rather than teaming.



I bought one of those SG500 10G switches for a temporary situation and it was an unmitigated disaster. I wouldn't wish them on my worst enemy.

If you're looking for a cheaper solution than a pair of 4500-Xs you can do Nexus 3k. You'll be way better off.

penus penus penus
Nov 9, 2014

by piss__donald
I just installed my first VM. I used an off-the-shelf machine at work and installed VirtualBox with the Windows 10 evaluation. Once I had that installed and patched, I put the VM into a TrueCrypt 7.1a container. In the VM, I installed my PIA VPN and Steam. I connected to an unsecured guest network from a neighboring office with my VPN on, went to efukt.com, and sent a friend a video of a woman with an extremely large vagina through Steam chat. I re-encrypted the container, and the machine is now imaging a fresh install of Windows.

All in all, an easy process. I like VMs now!

KillHour
Oct 28, 2007


THE DOG HOUSE posted:

I just installed my first VM. I used an off-the-shelf machine at work and installed VirtualBox with the Windows 10 evaluation. Once I had that installed and patched, I put the VM into a TrueCrypt 7.1a container. In the VM, I installed my PIA VPN and Steam. I connected to an unsecured guest network from a neighboring office with my VPN on, went to efukt.com, and sent a friend a video of a woman with an extremely large vagina through Steam chat. I re-encrypted the container, and the machine is now imaging a fresh install of Windows.

All in all, an easy process. I like VMs now!

My office. NOW!

KillHour
Oct 28, 2007


KillHour posted:

Strongly considering picking this up for my new virtualization testbed.

http://www.ebay.com/itm/HP-Fusion-I...wkAAOSwCQNWewfy

Specs: E5-2620 x2, 32GB DDR3 ECC, 1TB 7200RPM HDD.

Datasheet:
http://www.google.com/url?sa=t&rct=...bpab64iZh7oVtNw

This is mostly for learning/studying, but I'll probably end up hosting some game servers on it for general goon enjoyment. I'll be running vSphere 5 with a mix of Windows and Linux guests. Any thoughts?

Ended up pulling the trigger on this. I'm sure it will be more than fine for a home lab. Time to drag my rack out of the garage and into the basement!

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

kiwid posted:

edit: Do you have any sources for the jumbo frame limitation I can look at?
For what it's worth, we had our Nimble technical sales rep and someone from Nimble support tell us that they recommend using standard frames. While they acknowledged some benefit from jumbo frames in a properly configured network, they have had many support cases from mismatched MTUs and felt that the benefits did not outweigh the risks. We just left it at 1500 and have great performance.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Jumbo frames don't really improve performance in any direct way on a low-latency network, which your storage network should be. The throughput increase is negligible. The main benefit of jumbo frames is less overhead processing network communication at the source and destination, as well as along the data path. Fewer frames means fewer sets of headers to unpack and fewer decisions to make for things like load balancing, MAC address table lookups, filtering rules, etc.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


So,

Are jumbo frames just for convenience's sake, as in something extra, or is there ever a circumstance where you'd really need them enabled?

Ex: Buying an i7 over an i5.

Gucci Loafers fucked around with this message at 04:13 on Jan 8, 2016

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Tab8715 posted:

Are jumbo frames just for convenience's sake, as in something extra, or is there ever a circumstance where you'd really need them enabled?
When properly configured, they will reduce congestion on a congested network. You are probably already hosed when they really matter.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Tab8715 posted:

Are jumbo frames just for convenience's sake, as in something extra, or is there ever a circumstance where you'd really need them enabled?
The size of an IPv4 header is 20 bytes, and the size of an IPv6 header is 40 bytes. Assuming a 1500-byte MTU, IPv4 headers take up ~1.33% of your packet, and IPv6 headers take up ~2.66% of your packet. That means the highest performance you can hope to achieve is ~98.7% of line rate on IPv4, or ~97.3% of line rate on IPv6. With a 9000-byte MTU, you increase your packet size 6x, so proportionally, your headers use up only 1/6 as much traffic. This brings you up to ~99.6% of line rate on IPv6 or ~99.8% of line rate on IPv4, minus other overheads.

When dealing with very high-throughput data transmissions, the other advantage is that your switches are only looking at headers for 1/6 as many packets (e.g. to determine 802.1Q destination VLANs, or when using IGMP snooping), which can drop CPU usage significantly in some circumstances. The same thing obviously applies to host-side transforms like iptables, etc.
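
For concreteness, the same arithmetic in a few lines of Python (L3 headers only; Ethernet framing and TCP add their own overhead on top):

code:
# Best-case goodput as a percentage of line rate, counting only the
# IP header against the MTU.
def line_rate_pct(mtu: int, header: int) -> float:
    return 100 * (1 - header / mtu)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: IPv4 ~{line_rate_pct(mtu, 20):.2f}%, "
          f"IPv6 ~{line_rate_pct(mtu, 40):.2f}%")
# MTU 1500: IPv4 ~98.67%, IPv6 ~97.33%
# MTU 9000: IPv4 ~99.78%, IPv6 ~99.56%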

Like adorai pointed out, this is only useful on systems where throughput is paramount, like storage networks used for very high-throughput batch processing. When you saturate any network, you will see interactivity suffer.

The disadvantage is that since your packets are larger, they take longer to deliver, which can mess with interactivity for UDP streams that don't wait for large chunks before acting on streamed data. This is especially noticeable in VoIP applications, but you can see it in other low-latency environments like certain kinds of games, etc.

Docjowles
Apr 9, 2009

The other disadvantage, as mentioned, is manageability. It's really easy to gently caress up MTU settings somewhere along the line once you decide to change the default. And MTU mismatches can manifest in really bizarre ways.

The right answer is usually to benchmark things with standard and jumbo, and unless your workload actually benefits from them, leave it alone.

Sometimes your hand is forced, though. We were having perf issues recently on an EqualLogic device, and Dell wouldn't give me the time of day until we turned on jumbo frames, because it's their only supported config.
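
One cheap way to catch a mismatch is to send a max-size datagram with the don't-fragment bit set and see whether the stack accepts it. A rough Linux-only sketch; the constants come from linux/in.h (Python's socket module doesn't always export them), and the target address is a placeholder:

code:
# Probe whether a jumbo-sized, don't-fragment UDP datagram can be sent.
# This only exercises the MTU the local stack knows about; for a true
# end-to-end check, probe across every hop (e.g. ping with DF set).
import socket

IP_MTU_DISCOVER = 10  # from linux/in.h
IP_PMTUDISC_DO = 2    # always set DF, never fragment

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
sock.connect(("192.0.2.10", 9))  # hypothetical target, discard port

payload = 8972  # 9000-byte MTU minus 28 bytes of IP+UDP headers
try:
    sock.send(b"\x00" * payload)
    print(f"{payload}-byte DF datagram sent OK")
except OSError as e:
    print(f"send failed ({e.strerror}): MTU too small somewhere")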

Potato Salad
Oct 23, 2014

nobody cares


Q, have you considered a converged architecture solution? vSAN managing a bay of SSDs attached to a set of hypervisors by a common storage midplane has enabled positively stupid IOPS in my HPC cluster without the cost of storage-vendor SSD markup.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Docjowles posted:

The other disadvantage, as mentioned, is manageability. It's really easy to gently caress up MTU settings somewhere along the line once you decide to change the default. And MTU mismatches can manifest in really bizarre ways.

The right answer is usually to benchmark things with standard and jumbo, and unless your workload actually benefits from them, leave it alone.

Sometimes your hand is forced, though. We were having perf issues recently on an Equallogic device and Dell wouldn't give me the time of day until we turned on jumbo frames because it's their only supported config.
Adding to this, some hardware still doesn't support jumbo frames in the way you expect. The older switches on HP C7000 chassis, for example, claim to support jumbo frames but poo poo themselves over 8192 MTU. Obviously not a problem in greenfield, but something to consider when you're integrating legacy platforms.

evol262
Nov 30, 2010
#!/usr/bin/perl

Potato Salad posted:

Q, have you considered a converged architecture solution? vSAN managing a bay of SSDs attached to a set of hypervisors by a common storage midplane has enabled positively stupid IOPS in my HPC cluster without the cost of storage-vendor SSD markup.

MTU matters outside of storage in memory- or compute-bound situations, or where you have an in-memory cluster (redis, riak, or whatever) serving a shitload of data which never touches backend storage, etc. Less of a traditional virt situation, but still relevant.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Potato Salad posted:

Q, have you considered a converged architecture solution? vSAN managing a bay of SSDs attached to a set of hypervisors by a common storage midplane has enabled positively stupid IOPS in my HPC cluster without the cost of storage-vendor SSD markup.

If you're talking about VMware vSAN, the tradeoff is that the licensing is as expensive as, or more expensive than, buying dedicated hardware from a vendor. Also, the markup on drives covers significant QA and firmware development, and the drives are generally dual-ported to allow for redundant connections to each drive. They aren't off-the-shelf SSDs.

HPL
Aug 28, 2002

Worst case scenario.
Just a word of warning for folks thinking of trying out Windows Server 2016 - Hyper-V in 2016 requires Second Level Address Translation (SLAT). SLAT is only available on the i-series (i3, i5, i7) and newer Xeons. It is not supported on the Core 2 series of CPUs. You can't install the Hyper-V role on a Core 2 Duo/Quad computer. I am really burned up about this because I didn't know about this until I put together a Core 2 Duo box and tried installing Server 2016 TP4.
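
If you want to check a box before wiping it, systeminfo on Windows prints a "Hyper-V Requirements" section. A sketch that parses it (assumes an English-locale install):

code:
# Check for SLAT support by parsing `systeminfo` output on Windows.
import subprocess

out = subprocess.run(["systeminfo"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "Second Level Address Translation" in line:
        print("SLAT supported:", line.strip().endswith("Yes"))
        break
else:
    # When Hyper-V is already running, systeminfo prints a notice
    # instead of the requirements block.
    print("No SLAT line found; is Hyper-V already installed?")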

At this point I am exploring the possibility of running Windows containers as opposed to Hyper-V containers. This should be awful/interesting.

HPL fucked around with this message at 00:11 on Jan 10, 2016

wyoak
Feb 14, 2005

a glass case of emotion

Fallen Rib

HPL posted:

Just a word of warning for folks thinking of trying out Windows Server 2016 - Hyper-V in 2016 requires Second Level Address Translation (SLAT). SLAT is only available on the i-series (i3, i5, i7) and newer Xeons. It is not supported on the Core 2 series of CPUs. You can't install the Hyper-V role on a Core 2 Duo/Quad computer. I am really burned up about this because I didn't know about this until I put together a Core 2 Duo box and tried installing Server 2016 TP4.

At this point I am exploring the possibility of running Windows containers as opposed to Hyper-V containers. This should be awful/interesting.
Good to know, but why are you building boxes with 8-year-old processors?

HPL
Aug 28, 2002

Worst case scenario.

wyoak posted:

Good to know, but why are you building boxes with 8-year-old processors?

Because I already have an ESXi box, I'm a poor student right now, and I don't want to go out and spend a lot of money on something that I'm going to end up burning to the ground when the final Server 2016 comes out. Besides, a Core 2 Duo E8400 with 12GB of RAM should in theory have enough oomph to run Server 2016 and some VMs, especially since all I want to do is mess around with Nano Servers and containers.

Incidentally, in my very short time with Server 2016 thus far, it boots up damned quick. On my "antiquated" hardware, it does two loops of the dotted circle and then it's ready to go. Non-Hyper-V containers only take a few seconds to fire up. It uses the Windows 10-style interface and there isn't all the fluffy non-essential crap on it like I heard was in earlier previews. The start menu has nice sensible admin-related programs pinned to it. All in all, if you're already familiar with 2012 R2, 2016 is going to feel pretty familiar.

EDIT: Oh god, working on things that are not properly documented is sucking horribly.

HPL fucked around with this message at 02:29 on Jan 10, 2016

HPL
Aug 28, 2002

Worst case scenario.
Well, trying to build a test network on Server 2016 without Hyper-V was about as fun as whacking my hand with a sledgehammer. Time to add XenServer to the list of hypervisors I've tried.

I'll be back though, I'll be back. Server 2016 hasn't seen the last of me.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
With KVM, any best practices in regard to resource allocation? If I run a single VM on top full-time, can I allocate all the cores, or do I need to leave one for the host so it doesn't bog down handling I/O?

evol262
Nov 30, 2010
#!/usr/bin/perl
You can assign everything. The host won't bog down on I/O, though interactive applications on the host (in a desktop setting) may not perform well if all the cores are pegged.
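
If you do want to keep a core free for the host anyway, you can pin vCPUs with the libvirt Python bindings. A minimal sketch; the domain name is a placeholder, and it assumes the guest has fewer vCPUs than the host has cores:

code:
# Pin guest vCPU n to host core n+1, keeping core 0 free for the host.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("win10")  # hypothetical domain name

host_cpus = conn.getInfo()[2]  # host CPU count
for vcpu in range(dom.maxVcpus()):
    cpumap = tuple(cpu == vcpu + 1 for cpu in range(host_cpus))
    dom.pinVcpu(vcpu, cpumap)

conn.close()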

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
OK, cool. Once everything's set up the way I want, the Linux KVM host will just be there to deal with the bullshit I/O arrangement between it and the NAS.

HPL
Aug 28, 2002

Worst case scenario.
Well, I tried XenServer and it was okay, not really ringing my bells though. There's nothing compelling enough to make one switch over from ESXi. If anything, it's much fiddlier than ESXi, especially with how it deals with storage. I gave Server 2016 another throw and tried running VirtualBox on it. Kitematic worked well for running containers, but it gave containers IP addresses in the 192.168.99.0 subnet, which was kind of dumb. I've never been a fan of VirtualBox's networking. Anyways, burned that to the ground and gave Proxmox a go. I've only been on it for an hour or two, but I'm already warming up to it. Containers are super easy, since dealing with them is an integral part of the hypervisor as opposed to a duct-taped kludge in a VM. The console in Proxmox works well. Nice and snappy, good display quality.

Still looking forward to the final version of Server 2016 and getting some decent (SLAT/Hyper-V-capable) hardware. Server 2016 seems like it might actually make a decent daily-driver OS. Nano Servers are kind of a waste at this point, since there's not much functionality happening with them (only certain roles can be used with Nano Server) and they're a pain in the butt to deal with, since you have to use djoin to get them joined to your domain. Windows containers work much better than Nano Servers, since there's much less loving around to get them going; they start up fast and have less overhead. It'll be interesting to see how things develop on that front. If Microsoft can come up with some nifty tools to make managing and networking containers easy, they'll cover any ground lost from being late to the game in no time.

EDIT: Oh cool, I got SolydK to install on Proxmox. I couldn't even do that in Hyper-V or on bare metal.

HPL fucked around with this message at 09:43 on Jan 11, 2016

friendbot2000
May 1, 2011

The company I work for is developing a series of integration drivers for the government, and we are having difficulty testing the one for SolarWinds NPM because of the lack of a dedicated test environment that we can fiddle with at will. Surprisingly, people frown at the thought of unplugging various network devices and bringing other people's work to a halt in the name of TESTING! Does anyone know if it's possible to deploy a virtual solution to our particular issue? Or can they point me to some instructions on how to configure such a solution? The thwack community is being decidedly unhelpful, as they are convinced I just want to create a node in SolarWinds, and trying to convince them otherwise is like banging my head against a brick wall.

ragzilla
Sep 9, 2005
don't ask me, i only work here


friendbot2000 posted:

The company I work for is developing a series of integration drivers for the government, and we are having difficulty testing the one for SolarWinds NPM because of the lack of a dedicated test environment that we can fiddle with at will. Surprisingly, people frown at the thought of unplugging various network devices and bringing other people's work to a halt in the name of TESTING! Does anyone know if it's possible to deploy a virtual solution to our particular issue? Or can they point me to some instructions on how to configure such a solution? The thwack community is being decidedly unhelpful, as they are convinced I just want to create a node in SolarWinds, and trying to convince them otherwise is like banging my head against a brick wall.

SNMPSim?
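
snmpsim replays recorded SNMP walks as simulated agents, so NPM can poll fake devices you're free to break. A quick smoke test with pysnmp against a locally running simulator; the endpoint and community string are assumptions based on snmpsim's documented unprivileged example:

code:
# Poll sysDescr.0 from a simulated agent, e.g. one started with:
#   snmpsimd.py --agent-udpv4-endpoint=127.0.0.1:1024
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public"),
    UdpTransportTarget(("127.0.0.1", 1024)),
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr.0
))

if error_indication or error_status:
    print("poll failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())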

HPL
Aug 28, 2002

Worst case scenario.
After spending more time with Proxmox, I have really come to appreciate it. It is very easy to use, works well with almost any operating system, and you can install a desktop on the host machine and manage your VMs via the web GUI right on the host itself. It's not a hypervisor I would use for learning virtualization, as it is not one of the big boys, but if you have a spare computer sitting around and want to run VMs while still being able to use the computer itself, it's fantastic. The only thing to watch out for is that if you make a bootable USB drive from the ISO, it may not install, since it may hang while looking for a CD-ROM drive (of all things). If that happens, check the Proxmox wiki; they list a couple of programs to try for creating your USB drive.


some kinda jackal
Feb 25, 2003

 
 
Is there an issue with VMware's KB site? Every KB I pull up takes me to an error page.

I swear, I only need to consult the VMware site like every other month, but every single time I do I run into some problem. I think I'm just the unluckiest guy.
