some kinda jackal
Feb 25, 2003

 
 
Really need VMware to update the open-vm-tools-deploypkg packages on their yum repo. Seems to fail dependency check against whatever is in EPEL for CentOS/RH 7 nowadays. I think EPEL updated it when they released RH/CentOS 7.2 and now I'm having to deal with dependency failures on all my updates.

Is there an email address that I can suggest stuff like this to? I'm not even sure where I'd go about reporting a bug with their yum repo.


Wibla
Feb 16, 2011

HPL posted:

Two returns to the local computer store later, I gave up on trying current RAM and ended up eBaying slower RAM which thankfully worked. Originally, I was like: "Sweet, 4 DIMM slots, I'll fill 'er with 4 8GB DIMMs". Nope. Wouldn't POST. Dug around some, found out the old motherboard chipset wouldn't support 8GB DIMMs, but could possibly support 4GB DIMMs. Okay, back to the store, eat the restocking fee and I go home and try the 4GB-1666 DIMMs. Nope, still wouldn't POST. Returned that RAM, ate another restocking fee and then ordered some actual 1333 from a Chinese eBay seller. Just arrived yesterday and thank god it works. Gigantic pain in the kiester.

Should have just bought a TS140 in the first place.

And to think that I've bitched about getting the RAM slots right on a G7... I won't complain about that again, that's for sure; that's a walk in the park compared to this poo poo, heh.

Viktor
Nov 12, 2005

Martytoof posted:

Really need VMware to update the open-vm-tools-deploypkg packages on their yum repo. Seems to fail dependency check against whatever is in EPEL for CentOS/RH 7 nowadays. I think EPEL updated it when they released RH/CentOS 7.2 and now I'm having to deal with dependency failures on all my updates.

Is there an email address that I can suggest stuff like this to? I'm not even sure where I'd go about reporting a bug with their yum repo.

Look at RHBA-2015:2246-2: deploypkg was brought into the 9.10.2 tools, so in theory you can remove the VMware packages entirely*

* untested; this is on our todo list along with SRM and templates in the new year

some kinda jackal
Feb 25, 2003

 
 

Viktor posted:

Look at RHBA-2015:2246-2: deploypkg was brought into the 9.10.2 tools, so in theory you can remove the VMware packages entirely*

* untested; this is on our todo list along with SRM and templates in the new year

Oh neat, that would be amazing. Not that I don't appreciate you guys making the repo available, but the fewer repos I have to deal with, the better off I am.

:cool:

Viktor
Nov 12, 2005

Martytoof posted:

Oh neat, that would be amazing. Not that I don't appreciate you guys making the repo available, but the fewer repos I have to deal with, the better off I am.

:cool:

Oh, sorry for the confusion! I'm an end user too, on a big team, and I'll be testing that the features still work for our environment (SRM/vRealize).

Agree, one less repo on Satellite makes me a happy camper!

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Viktor posted:

Oh, sorry for the confusion! I'm an end user too, on a big team, and I'll be testing that the features still work for our environment (SRM/vRealize).

Agree, one less repo on Satellite makes me a happy camper!

Oh whoa, an actual Viktor post

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
I've never gotten around to posting in SH/SC until recently, but it's nice to see we have a VM thread. I'm an avid Xen and Hyper-V user, but professionally I use VMware ESXi.

HPL
Aug 28, 2002

Worst case scenario.
I'm working on an ESXi shell script to manage VMs since the embedded web client won't power VMs on or off and vSphere Client won't run in Linux. So far it's working except I'm trying to come up with a way to take user input and convert it to upper case. Normally in bash I'd use "tr [a-z] [A-Z]", but ESXi doesn't have the "tr" command. Any ideas? Basically I want to take user input in either upper case or lower case without using a bunch of OR statements.

HPL fucked around with this message at 07:53 on Dec 25, 2015

evol262
Nov 30, 2010
#!/usr/bin/perl
Awk's toupper(). I think busybox's sed is very limited and won't have the GNU extensions, but you could always work through the whole range of characters you want, I guess. toupper() is nicer.
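A minimal sketch of that approach (assuming busybox's awk, which does ship toupper()):

```shell
# ESXi's busybox shell has no `tr`, but awk's toupper() covers the
# same case-folding job on user input.
input="poweron"
upper=$(printf '%s' "$input" | awk '{ print toupper($0) }')
echo "$upper"   # POWERON
```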

psydude
Apr 1, 2008

Heartache is powerful, but democracy is *subtle*.
Anyone messed around with the Hyper-V Server 2016 technical preview? I got ahold of a copy for free through my master's program and am debating whether or not to replace my 2012 Hyper-V setup running my lab environment.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I'm considering an upgrade to El Capitan over my holiday break. Does VirtualBox still have any major issues outstanding on this OSX version?

DeaconBlues
Nov 9, 2011
What's the best (cheapest) way of virtualising a legit copy of Windows?

If I install with an OEM key and I need to scrap the VM and reinstall it then Windows will think my hardware has changed, yes? Then it won't let me use the OEM key a second time.

evol262
Nov 30, 2010
#!/usr/bin/perl

DeaconBlues posted:

What's the best (cheapest) way of virtualising a legit copy of Windows?

If I install with an OEM key and I need to scrap the VM and reinstall it then Windows will think my hardware has changed, yes? Then it won't let me use the OEM key a second time.

The virtual hardware should be similar enough that Windows will pass. You only need to call (and you can call to get a bypass) if 3 major components have changed, but it'll probably just be the DMI UUID. Really, you can just snapshot the VM when you install it and move the disk files around (if you want it on a different machine) or roll back to the snapshot (if you break it).

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

Vulture Culture posted:

I'm considering an upgrade to El Capitan over my holiday break. Does VirtualBox still have any major issues outstanding on this OSX version?

VirtualBox 5.x has been fine for me since I upgraded.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Is there a good way to monitor host memory utilization while factoring in dedupe/compression/resource priority and all that? I'm about to split our environment into two active/active identical clusters at different sites instead of doing an active/standby failover. VMs are divided into high/normal/low priority resource pools, and we accept that anything in low (dev and test) will be degraded under memory pressure, so I have to factor those in. It wasn't too hard with active/standby because there were enough VMs to take the hosts to at least 80% utilization, which kicked in all the memory tricks and helped keep the consolidation ratio high, but with the new config I need to target more like 50-60% because each cluster needs enough capacity to fail over onto itself. At that point VM memory is going to be allocated in 8 MB pages, so dedupe/compression is effectively off until the host comes under memory pressure. The quick and easy but maybe-not-good solution would be to disable large page allocations across the entire environment so dedupe is always working, like 3.5 used to behave, and then I know roughly how big my VMs will be. Active memory is an okay-ish stat once I subtract out the dev environment, but it still leaves me in the dark on how much overhead I'd buy myself when memory pressure is up.

DeaconBlues
Nov 9, 2011

evol262 posted:

The virtual hardware should be similar enough that Windows will pass. You only need to call (and you can call to get a bypass) if 3 major components have changed, but it'll probably just be the DMI UUID. Really, you can just snapshot the VM when you install it and move the disk files around (if you want it on a different machine) or roll back to the snapshot (if you break it).

Thanks. Snapshots make a lot of sense, too. Cheers!

Maneki Neko
Oct 27, 2000

DeaconBlues posted:

What's the best (cheapest) way of virtualising a legit copy of Windows?

If I install with an OEM key and I need to scrap the VM and reinstall it then Windows will think my hardware has changed, yes? Then it won't let me use the OEM key a second time.

Buy a retail copy.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

BangersInMyKnickers posted:

VMs are divided into high/normal/low priority resource pools

Can you go into more detail about how your resource pools are configured? Because I think I know how you have this set up, and if so, it isn't working the way you think it is.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Vulture Culture posted:

Can you go into more detail about how your resource pools are configured? Because I think I know how you have this set up, and if so, it isn't working the way you think it is.

Each resource pool is under the cluster root, with resource shares allocated to the default high/normal/low for CPU and memory, and VMs placed under them. This was all set up in 3.5 and upgraded through 6, and I think it's working. When I take a host down for maintenance I definitely see the VMs in the low-priority resource pool come under balloon and vswap pressure before everything else. We don't really have CPU contention at this point, so nothing to really see there. No limits on the pools.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

BangersInMyKnickers posted:

Each resource pool is under the cluster root, with resource shares allocated to the default high/normal/low for CPU and memory, and VMs placed under them. This was all set up in 3.5 and upgraded through 6, and I think it's working. When I take a host down for maintenance I definitely see the VMs in the low-priority resource pool come under balloon and vswap pressure before everything else. We don't really have CPU contention at this point, so nothing to really see there. No limits on the pools.

Yeah, that's fine for any hard-limited resource like memory -- this is pools behaving precisely as intended. What's not working the way you want is CPU shares.

Duncan Epping had a diagram a few years ago:
http://www.yellow-bricks.com/2009/11/13/resource-pools-and-shares/

What you need to remember about CPU shares in VMware resource pools is that each pool is granted a total number of shares, and this is not a template for the number of shares on VMs within the pool. This is a really important distinction.

High/medium/low shares are allocated as 4:2:1 weights; that is, a VM with High priority has double the shares of a VM with Normal priority. Likewise, a VM with Normal priority has double the shares of a VM with Low priority. More technically, it breaks down as: High 2000 shares per vCPU, Normal 1000 shares per vCPU, Low 500 shares per vCPU.

So let's say you're running in an environment where you're using High to reserve CPU for production VMs, Normal for production batch systems, and Low for development systems. You're doing this without using resource pools: you're setting the share priority on each VM. So, you've got 20 VMs under High, 8 under Normal and 3 under Low, because you obviously need more scale to support your production workloads than you do your development ones. When you assign share priority to each of these, it breaks down as follows:

  • High: 20 VMs sharing (20 * 2000 = 40000) shares, or ~81% of total CPU under contention
  • Normal: 8 VMs sharing (8 * 1000 = 8000) shares, or ~16% of total CPU under contention
  • Low: 3 VMs sharing (3 * 500 = 1500) shares, or ~3% of total CPU under contention

Now let's sweep these under pools instead. You have a pool that has High priority, a pool that has Normal priority, and a pool that has Low priority.

  • High: 20 VMs sharing 2000 shares, or ~57% of total CPU under contention (each VM gets 100 shares)
  • Normal: 8 VMs sharing 1000 shares, or ~29% of total CPU under contention (each VM gets 125 shares)
  • Low: 3 VMs sharing 500 shares, or ~14% of total CPU under contention (each VM gets ~167 shares)

What you've done here is the opposite of what you intended: under heavy CPU contention, your dev VMs are getting higher priority than your production systems, because the pool is less loaded.

Now let's gently caress it up more: we'll create a VM and forget to assign it to a resource pool, so it has Normal shares (1000). Your new VM gets as many CPU shares as the entire Normal resource pool put together.

Of course, if you are rarely oversubscribing CPU, or if you have generally very bursty workloads, this is probably a moot point. But my general recommendation is to never, ever set CPU shares on a resource pool unless you're trying to solve a very specific problem and know exactly what you're doing.
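The arithmetic above fits in a few lines. This is a hedged illustration using the default 2000/1000/500 shares-per-vCPU weights and the VM counts from the example, assuming single-vCPU VMs:

```python
# Sketch of the share math above: default per-priority weights
# (High/Normal/Low = 2000/1000/500 shares per vCPU, 1 vCPU per VM
# assumed) and the VM counts from the example.
PER_PRIORITY_SHARES = {"High": 2000, "Normal": 1000, "Low": 500}
vm_counts = {"High": 20, "Normal": 8, "Low": 3}

# Case 1: shares set per VM. Each VM carries its own weight, so a
# priority level's slice grows with the number of VMs in it.
flat = {p: vm_counts[p] * PER_PRIORITY_SHARES[p] for p in vm_counts}
flat_total = sum(flat.values())
flat_pct = {p: round(100 * s / flat_total) for p, s in flat.items()}

# Case 2: shares set on the pool. The *pool* holds the fixed share
# count and the VMs inside split it, so busier pools dilute their VMs.
per_vm_in_pool = {p: PER_PRIORITY_SHARES[p] / vm_counts[p] for p in vm_counts}

print(flat_pct)        # {'High': 81, 'Normal': 16, 'Low': 3}
print(per_vm_in_pool)  # High VMs get 100 shares each, Low VMs ~167
```

Running it reproduces the inversion: with pool-level shares, each Low VM ends up with more CPU weight than each High VM.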

Vulture Culture fucked around with this message at 00:18 on Dec 28, 2015

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Vulture Culture posted:

But my general recommendation is to never, ever set CPU shares on a resource pool unless you're trying to solve a very specific problem and know exactly what you're doing.

For what it's worth, I agree 100%. There are probably situations where you would use resource pools, but the vast majority of end users will never need or truly want them. Even for a dev environment, I can do some back-of-the-napkin math to justify buying more hardware rather than slowing their VMs to a crawl.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

I see so many customers with hosed up resource pool setups because they don't understand how the CPU shares are allocated. The worst are the ones where there's a resource pool for high performance VMs and then just random VMs sitting at the root of the cluster not in any resource pool.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Last note:

Especially with respect to shares, it's helpful to think of resource pools as vApps, because they work the exact same way.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Vulture Culture posted:

Of course, if you are rarely oversubscribing CPU, or if you have generally very bursty workloads, this is probably a moot point. But my general recommendation is to never, ever set CPU shares on a resource pool unless you're trying to solve a very specific problem and know exactly what you're doing.

Good to know, thanks. I just assumed that the share value for CPU would scale proportionally to the number of VMs in the pool like memory does. I've set the CPU shares back to Normal for all the pools, but I try to keep CPU load below 70% because ready time starts getting bonkers at that point anyhow. I think if I were ever under CPU contention, how I had it set up would have worked out alright: there are only 8 servers in the High resource pool, so there was no way they could ever saturate anywhere near their total share proportion, and everything else would have been left to Normal/Low to duke it out, and those have roughly 50 VMs each. Maybe I'll keep the lowered share value on the Low pool but keep the other two on Normal.

stubblyhead
Sep 13, 2007

That is treason, Johnny!

Fun Shoe

BangersInMyKnickers posted:

I try to keep CPU load below 70% because ready time starts getting bonkers at that point anyhow.

Just out of curiosity, how often are you using multi-vCPU vms in your environment, and what's your metric for deciding it's warranted?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

stubblyhead posted:

Just out of curiosity, how often are you using multi-vCPU vms in your environment, and what's your metric for deciding it's warranted?

About 75% are single vCPU, 15% are 2 vCPU, and 10% are 3-4. We're currently on 6-core processors on the hosts, so I'm really iffy about going over 3 vCPU per VM if I can avoid it, as tying up more than half of a socket on a single VM seems like it would wreak scheduling havoc. Generally I start everything at a single vCPU and then scale up if a bottleneck is causing real-world impact, rather than reacting to a CPU alarm that might not be negatively impacting anything. And I only add that second vCPU after I've confirmed that the application with the CPU bottleneck is properly threaded and can actually use it. My 4 vCPU servers are ArcGIS servers driven by user load, so they justify it.

Ready time/readiness is something you should be watching, with 1-5% deemed acceptable and 5%+ the point where you should start worrying. I've managed to keep practically all servers down at around 0.4% readiness across the board with 110 Windows VMs on 3 dual-socket, 5-year-old hosts, so I think I've managed it pretty well so far.
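For anyone wondering where numbers like 0.4% and 5% come from: vSphere exposes CPU ready as a summed millisecond counter per sample interval (20 seconds on the real-time charts), and the quoted percentage is derived from it. A rough sketch, assuming a single vCPU:

```python
# CPU ready in vSphere is reported as milliseconds of ready time summed
# over the sample interval (20 s for real-time charts). The percentage
# people quote is derived from it, per vCPU.
def ready_pct(ready_ms, interval_s=20, vcpus=1):
    return 100.0 * ready_ms / (interval_s * 1000 * vcpus)

print(ready_pct(80))     # 0.4 -- around the figure quoted above
print(ready_pct(1000))   # 5.0 -- the "start worrying" threshold
```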

some kinda jackal
Feb 25, 2003

 
 
My really bad metric is just checking the loadavg: if it's floating above one consistently I give it two, if it's floating at two I give it four, and so on.

Unless it's super obvious. Like I'm running Hercules which is emulating a system with four CPUs so I gave that VM four vCPUs, I gave Splunk eight vCPUs, etc.

If I'm doing a lot of compiling I'll give that machine more than one as well so I can efficiently run "make -j 2" for speed.
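That loadavg rule of thumb is easy to sketch (a toy illustration only; `suggest_vcpus` is a made-up helper, not any real tool's API):

```python
# Toy version of the heuristic above: if the 1-minute load average
# consistently sits above the current vCPU count, double the count.
def suggest_vcpus(loadavg_1m, current_vcpus):
    return current_vcpus * 2 if loadavg_1m > current_vcpus else current_vcpus

print(suggest_vcpus(1.3, 1))   # 2 -- floating above one, give it two
print(suggest_vcpus(0.4, 2))   # 2 -- fine where it is
```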

some kinda jackal
Feb 25, 2003

 
 

Viktor posted:

Look at RHBA-2015:2246-2: deploypkg was brought into the 9.10.2 tools, so in theory you can remove the VMware packages entirely*

* untested; this is on our todo list along with SRM and templates in the new year

In my entirely unscientific testing (converted my template to a VM, removed the VMware repo and open-vm-tools-deploypkg, converted back to a template, and deployed a new VM), this works exactly as expected, and the VMware repo is now unnecessary for anyone who's installing open-vm-tools 9.10.2+.

Thanks! :)

Viktor
Nov 12, 2005

Martytoof posted:

In my entirely unscientific testing (converted my template to a VM, removed the VMware repo and open-vm-tools-deploypkg, converted back to a template, and deployed a new VM), this works exactly as expected, and the VMware repo is now unnecessary for anyone who's installing open-vm-tools 9.10.2+.

Thanks! :)

Awesome, thanks for the heads up!

Methanar
Sep 26, 2013

by the sex ghost
I can't remediate hosts with VUM with the web client, instead I need to use the discontinued fat client. Are you serious VMware?

Tev
Aug 13, 2008

Methanar posted:

I can't remediate hosts with VUM with the web client, instead I need to use the discontinued fat client. Are you serious VMware?

https://blogs.vmware.com/vsphere/2015/09/vsphere-update-manager-fully-integrated-interface-with-the-vsphere-web-client.html

I believe you can, just need to be on 6.0 U1.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
So I've been watching this video: https://www.youtube.com/watch?v=LXOaCkbt4lI

Some guy built a 2S system with seven graphics cards to run seven Windows VMs for gaming. Apparently he runs games like Crysis 3 in a VM at 1440p and decent framerates. Is the state of PEG virtualization that good these days on Linux?

Mr Shiny Pants
Nov 12, 2012
Wow, this might be something I will try. Windows 10 is not for me, but I do have some games I want to play. If I could run a VM with decent framerates and use a Steam Link to display it on my TV.....

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
There has been some discussion about this in other threads. It's doable, but not as simple as you would hope.

PC Building/Upgrading/Parts-picking Megathread

Police Automaton posted:

I did some research into this as I'm building exactly this kind of computer, and nowadays it works a lot better than it did just a few years ago; if you do it correctly, you'll basically get native performance in Windows for all practical gaming purposes. Your CPU has to be IOMMU-capable; you'll probably go with Intel, so that means a VT-d-capable CPU. You need to really look into whether the mainboard does it correctly, as the implementation on some mainboards is apparently borked even if it's theoretically supported; there are lists of tested mainboards around if you search long enough. Also, you'd probably want to avoid Nvidia cards, as Nvidia seems to not like people using their non-Quadro cards for such purposes (even though there are workarounds, they apparently come at a performance loss, but take that with a grain of salt, I'm not sure how current that information is). I went with a 5820K because the six cores make everything easier and a bit more future-proof; a recent i5 is probably enough too, seeing as you can overcommit CPUs and plenty of games don't really tax modern CPUs in any way whatsoever (an i7 does have a slight advantage here though). Be aware that you need plenty of RAM; my latest knowledge on the topic is that you can't do lazy allocation if you want to do passthrough.

Many people will not recommend AMD cards for Linux gaming, and they're probably right about that, but if you're like me, already ready to build a system such as this, you probably also know that gaming on Linux is basically a fool's errand anyway and will only want to use the AMD card for Windows gaming. You either need two dedicated graphics cards (one for the VM, one for the host) or a CPU with an integrated GPU plus a dedicated card (you'd use the integrated GPU for the Linux host then). Be aware that it is currently apparently impossible to wrestle control of the passed-through card back to Linux without a reboot, so dynamically switching cards is sadly not that easy. It is theoretically also possible to have the host without a graphics card, but I don't think that's very practical for most usage scenarios.

The Linux Questions Thread, SCO non-infringing version

evol262 posted:

Basically, passthrough works like this:

If your chipset and CPU support IOMMU (VT-d or AMD-Vi), your PCI devices are grouped off into an arbitrary number of logical groups, which hopefully have some relation to the actual slot layout, but don't always.

You can pass through IOMMU groups to guests, along with all the associated PCI IDs. On one of my test systems:
code:
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/4/devices/0000:00:14.0
/sys/kernel/iommu_groups/5/devices/0000:00:16.0
/sys/kernel/iommu_groups/6/devices/0000:00:1a.0
/sys/kernel/iommu_groups/7/devices/0000:00:1b.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.5
/sys/kernel/iommu_groups/8/devices/0000:00:1c.6
/sys/kernel/iommu_groups/8/devices/0000:02:00.0
/sys/kernel/iommu_groups/8/devices/0000:03:00.0
/sys/kernel/iommu_groups/8/devices/0000:04:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1d.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.2
/sys/kernel/iommu_groups/10/devices/0000:00:1f.3
If you snip off the "0000:" bits (some systems can have more than one IOMMU controller, but I've never seen one), these match directly to PCI IDs:
code:
00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller (rev a1)
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 9 Series Chipset Family USB xHCI Controller
00:16.0 Communication controller: Intel Corporation 9 Series Chipset Family ME Interface #1
00:1a.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2
00:1b.0 Audio device: Intel Corporation 9 Series Chipset Family HD Audio Controller
00:1c.0 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 (rev d0)
00:1c.5 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 6 (rev d0)
00:1c.6 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d0)
02:00.0 Network controller: Qualcomm Atheros AR93xx Wireless Network Adapter (rev 01)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
04:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
00:1d.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1
00:1f.0 ISA bridge: Intel Corporation 9 Series Chipset Family H97 Controller
00:1f.2 SATA controller: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode]
00:1f.3 SMBus: Intel Corporation 9 Series Chipset Family SMBus Controller
Paired up, these give me:
code:
 IOMMU Group |  PCI ID  | Device 
           0 | 00:00.0  | Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
           1 | 00:01.0  | PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
           1 | 01:00.0  | VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
           1 | 01:00.1  | Audio device: NVIDIA Corporation GM204 High Definition Audio Controller (rev a1)
           2 | 00:02.0  | VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
           3 | 00:03.0  | Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
           4 | 00:14.0  | USB controller: Intel Corporation 9 Series Chipset Family USB xHCI Controller
           5 | 00:16.0  | Communication controller: Intel Corporation 9 Series Chipset Family ME Interface #1
           6 | 00:1a.0  | USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2
           7 | 00:1b.0  | Audio device: Intel Corporation 9 Series Chipset Family HD Audio Controller
           8 | 00:1c.0  | PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 (rev d0)
           8 | 00:1c.5  | PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 6 (rev d0)
           8 | 00:1c.6  | PCI bridge: Intel Corporation 82801 PCI Bridge (rev d0)
           8 | 02:00.0  | Network controller: Qualcomm Atheros AR93xx Wireless Network Adapter (rev 01)
           8 | 03:00.0  | Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
           8 | 04:00.0  | PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
           9 | 00:1d.0  | USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1
          10 | 00:1f.0  | ISA bridge: Intel Corporation 9 Series Chipset Family H97 Controller
          10 | 00:1f.2  | SATA controller: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode]
          10 | 00:1f.3  | SMBus: Intel Corporation 9 Series Chipset Family SMBus Controller
Group 8 would be a mess to pass through, for example, because you can't just pass this device or that device easily. The guest would get them all if the GPU were in that bus because someone did something dumb when designing the groups. Group 1 is just the GPU, which is extremely convenient.
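The group-to-PCI-ID pairing above is mechanical; here's a sketch of how you might build that table from the sysfs paths (a few of the listed paths are hardcoded as sample data rather than read from a live /sys):

```python
# Pair IOMMU group numbers with PCI IDs, the way the table above was
# built. Sample sysfs paths hardcoded instead of scanning a real /sys.
paths = [
    "/sys/kernel/iommu_groups/1/devices/0000:00:01.0",
    "/sys/kernel/iommu_groups/1/devices/0000:01:00.0",
    "/sys/kernel/iommu_groups/8/devices/0000:02:00.0",
]
groups = {}
for p in paths:
    parts = p.split("/")
    group = int(parts[4])               # .../iommu_groups/<N>/...
    pci_id = parts[6].split(":", 1)[1]  # snip off the "0000:" domain
    groups.setdefault(group, []).append(pci_id)

print(groups)   # {1: ['00:01.0', '01:00.0'], 8: ['02:00.0']}
```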

So you pass through a raw device. The "hardware support" is 100% up to being able to get drivers in the guest which support that device ID, and it's like running it native.

The problem is that nVidia eats a huge bag of dicks, and their driver checks, and says: "is this machine using any hardware that we know only exists on VMs? How about virtual CPUs? Are VMware or Hyper-V guest additions installed? Or...?"

If any of those things are true, their driver just says "nope, not gonna do it, you should buy a quadro or tesla", because gently caress people with consumer cards.


If you're running ZFS, your options are basically KVM or Xen. Or hoping your hardware is on the ESXi HCL, installing it, and having IOMMU support (see above) to pass a card through to a VM which handles ZFS or doing raw disk mapping, which is terrible in all sorts of ways.

The usual considerations for small scale virt are:
  • "What do we use at work or what do I want to get familiar with professionally?" VMware is a good pick here
  • "Do I want my hypervisor to run Windows so I don't need to learn anything really different?" Hyper-V is a good choice here (or if you use it at work)
  • "Do I have multiple systems and I want them to be highly available?" Hyper-V and VMware are bad choices here unless you're going to :files: them or you're rich. KVM and Xen do this for free pretty well.
  • "Do I care about nested virt and passthrough?" Hyper-V is a bad choice here

KVM with virt-manager is probably a very easy, very capable solution for what you want. But if you have more considerations than just "run some VMs and silo stuff", it doesn't hurt to bring it up. Note that you can almost certainly enable nested virt and virtualize VMware if you want to play with it.

Mr Shiny Pants
Nov 12, 2012
My TS440 has a PCIe x16 slot and I have a spare GTX 660. Might try something in a couple of days.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I'm currently trying to find out whether it's possible to hand around the GPU device between VM and host. Say if I want to play games, I kill Xorg and run the VM, and once the VM shuts down, Xorg fires up again. Scripts doing stuff where needed.

Someone else suggested to me to prep a Linux host for VM business only and run another Linux instance on top when needed. A Linux host would maybe allow for some interesting clownery involving my FreeNAS box and iSCSI.

--edit:
Crap, Police Automaton's post is rather recent. So much for the card juggling. Although I've come across suggestions that cards with a UEFI BIOS are supposedly less bitchy.

Combat Pretzel fucked around with this message at 18:42 on Jan 3, 2016

Mr Shiny Pants
Nov 12, 2012
I don't think that's possible, maybe with some scripts. But if it takes a reboot, you might as well dual boot. The nice thing about this is that I can make my main machine a Linux or OS X box and still play my PC games without dual booting.

Luckily my TS440 has onboard video so I can just use that for the host OS and run the Windows VM with a dedicated GPU attached. It would be awesome if you could partition your vid card, didn't some Nvidia cards allow this? One third for the main OS and maybe two thirds for the VMs.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I have to evaluate things. I'd really like to start ditching Windows. I depend on Photoshop, Solidworks and games. I'd figure with the upcoming upgrade and having built-in graphics on the CPU, it might be an option to relegate the NVIDIA card to hosting Windows games only. Displays have multiple inputs, Photoshop can run in a regular unaccelerated VM, Solidworks I don't know, and for games I can switch the monitor input. Also depends on whether I can get over the lack of L4 cache on the 6700K.

evol262
Nov 30, 2010
#!/usr/bin/perl

Combat Pretzel posted:

So I've been watching this video: https://www.youtube.com/watch?v=LXOaCkbt4lI

Some guy built a 2S system with seven graphics cards to run seven Windows VMs for gaming. Apparently he runs games like Crysis 3 in a VM at 1440p and decent framerates. Is the state of PEG virtualization that good these days on Linux?

The short answer is "yes". I can't be hosed watching some video, and scrolling through it doesn't give any indication of whether that's GPU virtualization or just passthrough. Passthrough works fine in Linux, including with GRID cards/etc.

There's a lot of work going into virtio-vga/virtio-gpu/virgil, but actual virtualization of GPUs isn't as good as VMware. That said, judging from the hardware used, I don't think they did in that video, either.

Combat Pretzel posted:

I'm currently trying to find out whether it's possible to hand around the GPU device between VM and host. Say if I want to play games, I kill Xorg and run the VM, and once the VM shuts down, Xorg fires up again. Scripts doing stuff where needed.
No. Somebody already quoted an earlier post, but passthrough doesn't work that way. A stub PCI device is used to make sure that the kernel doesn't grab and initialize the device.

In theory, PCIe hotplugging means that you can dynamically unload the stub module and let it be claimed, but you'd need to write another kernel module to do it, which would never, ever get upstreamed/mainlined.

If you just want a windows VM for gaming/accelerated stuff, and you're passing through one (or multiple) GPU(s), this works great, and you can just use the onboard Intel stuff for your host. Otherwise, use multiple GPUs (like they did in this video), a SR-IOV GPU, or create a VM with the device passed through which dual boots. You can also pass the device through to multiple VMs and only have one running at a time.

Combat Pretzel posted:

--edit:
Crap, Police Automaton's post is rather recent. So much for the card juggling. Altho I've come across suggestions that cards with UEFI BIOS are supposedly less bitchy.

Cards with a UEFI BIOS (all of them these days, basically) are easier, because UEFI is much nicer from a firmware level. But your OS needs to be booting in UEFI mode (with requisite UEFI bootloader) to make it feasible. You can do it without, but stubbing/remapping VGA memory is a huge pain in the rear end, and best avoided.

Your other concern with switching inputs on the monitor is that you need actual inputs. This is fine with Steam streaming, but otherwise, expect to need a 2nd keyboard/mouse, or use a KVM.

Feel free to ask away.

Mr Shiny Pants posted:

It would be awesome if you could partition your vid card, didn't some Nvidia cards allow this? One third for the main OS and maybe two thirds for the VMs.

This is GPU SR-IOV. GRID cards support it, among others. Some FireGLs also do. It's not really "partitioning", but it presents multiple PCIe device IDs so you can map the same card into multiple VMs, in much the same way as NIC SR-IOV.


Mr Shiny Pants
Nov 12, 2012
So if I flash my GTX 660 to UEFI firmware, use OVMF in my VM, and have my server boot in UEFI mode, this should work? Cool, I can try that.

The Steam Link is just a way to avoid having to have a physical keyboard and monitor attached.
