Vaz
Feb 15, 2002
Vurt Refugee
Any VIOS goons here?

Oh yes, do I get v11 of VMware Workstation for free if I upgrade my v9 to v10 between now and December? The VMware website isn't clear on this one.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

Need some help; we're having weird latency issues with our iSCSI SAN. We've had both Dell and VMware look at it, and they haven't found anything so far, but we haven't really pressed them yet. So far they're just pointing the finger at each other.

Storage: Dell MD3220i, full of 7.2k 1TB disks, grouped into 5-disk RAID 5 arrays (3 arrays total).
Switches: HP ProCurve 2510s originally, now HP ProCurve 3800s. Jumbo frames tried both enabled and disabled.
ESXi Hosts: Dell R620 and R710s, Dell-customized ESXi 5.5 image with updated NIC drivers.

Across all datastores we're seeing spikes of over 500ms read/write latency on the active storage paths. So far we've checked/tried the PSP (now set to Round Robin); the problem started with jumbo frames enabled, and they're now disabled for troubleshooting at Dell's request. The MD3220i is running the latest firmware, we've swapped out the ProCurve 2510s for 3800s, updated the firmware on both, updated the firmware on the Dell hosts, etc. Each path has its own VLAN on the switches. Everything's been rebooted, and we can't find any correlation between specific VMs / datastores / paths.
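For reference, this is roughly how we've been checking/setting the PSP per LUN from the hosts (the naa ID below is just a placeholder):

# list every device with its current Path Selection Policy
esxcli storage nmp device list
# example of flipping a single LUN to Round Robin (placeholder device ID)
esxcli storage nmp device set --device naa.60026b9xxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR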

some kinda jackal
Feb 25, 2003

 
 
I don't have any specific suggestions for your problem, but for the love of god please set up a conference bridge and get Dell and VMware on it together so they can fight it out themselves. You will get nowhere fast acting as the go-between, because it will literally be an endless finger-pointing circle, as you're figuring out.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

Martytoof posted:

I don't have any specific suggestions for your problem, but for the love of god please set up a conference bridge and get Dell and VMware on it together so they can fight it out themselves. You will get nowhere fast acting as the go-between, because it will literally be an endless finger-pointing circle, as you're figuring out.

I agree completely, going to try to do that this week.

Cidrick
Jun 10, 2001

Praise the siamese

mattisacomputer posted:

Across all datastores we're seeing spikes of over 500ms read/write latency on the active storage paths. So far we've checked/tried the PSP (now set to Round Robin); the problem started with jumbo frames enabled, and they're now disabled for troubleshooting at Dell's request. The MD3220i is running the latest firmware, we've swapped out the ProCurve 2510s for 3800s, updated the firmware on both, updated the firmware on the Dell hosts, etc. Each path has its own VLAN on the switches. Everything's been rebooted, and we can't find any correlation between specific VMs / datastores / paths.

Try to catch it while running esxtop on one of your ESX hosts (press 'd' for the disk adapter view, or 'u' for per-device stats) and find out which LUNs and/or devices are generating the high DAVG/KAVG during periods of high latency. If the latency shows up as DAVG (which is what it most likely is) you can prooobably rule out VMware as the culprit, and then you'd have to look at the individual pieces of your physical path from there. Since this is iSCSI, you can also set up a port mirror on your switches for one of the problematic hosts and run tcpdump/Wireshark against the mirrored port to watch the conversation and find out where in a connection the delay is coming from. Wireshark supports a native iSCSI filter, so it should be pretty easy to get only the traffic you care about.
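Rough sketch of the capture side, assuming the mirrored port lands on a Linux box as eth1 (the interface and file names are placeholders):

# capture only iSCSI traffic (TCP 3260) off the mirrored port
tcpdump -i eth1 -s 0 -w iscsi-latency.pcap tcp port 3260
# open the pcap in Wireshark with the display filter "iscsi";
# Statistics > Service Response Time > SCSI (if I'm remembering the menu right) will show which commands are slow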

Syano
Jul 13, 2005
That's not a lot of spindles at all for an MD3220i devoted to any sort of workload, and they're slow spindles to boot. Are all the RAID groups owned by the same SP? Do you even have multiple SPs in the chassis? What sort of workload are you putting it under? And are the extreme latencies seen all the time, or only under certain conditions? My gut tells me you should have just configured the thing as one huge RAID group with a couple of hot spares; those small groups are probably where your latency is coming from.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

Cidrick posted:

Try to catch it while running esxtop on one of your ESX hosts (press 'd' for the disk adapter view, or 'u' for per-device stats) and find out which LUNs and/or devices are generating the high DAVG/KAVG during periods of high latency. If the latency shows up as DAVG (which is what it most likely is) you can prooobably rule out VMware as the culprit, and then you'd have to look at the individual pieces of your physical path from there. Since this is iSCSI, you can also set up a port mirror on your switches for one of the problematic hosts and run tcpdump/Wireshark against the mirrored port to watch the conversation and find out where in a connection the delay is coming from. Wireshark supports a native iSCSI filter, so it should be pretty easy to get only the traffic you care about.

I didn't know Wireshark had a native iSCSI filter. I've been meaning to shark the traffic next time I'm onsite, but this just proves I should have done it much sooner. Thanks!

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

Syano posted:

That's not a lot of spindles at all for an MD3220i devoted to any sort of workload, and they're slow spindles to boot. Are all the RAID groups owned by the same SP? Do you even have multiple SPs in the chassis? What sort of workload are you putting it under? And are the extreme latencies seen all the time, or only under certain conditions? My gut tells me you should have just configured the thing as one huge RAID group with a couple of hot spares; those small groups are probably where your latency is coming from.

Agreed, here is more info based on your questions and others.

The MD3220i has both controllers online, and the LUNs are split evenly between the two. Correcting my earlier post: there are 24 total drives in the chassis, arranged into 3 RAID 5 groups of 7 drives each, with a segment size of 128KB. There are 15 virtual disks, spread roughly evenly among the 3 RAID groups (5 VDs on array 1, 6 on array 2, 4 on array 3).

The workload is a general ESXi cluster, mostly file storage and general application usage. The only DB server is a SQL 2008 R2 box hosting vCenter/viewEvents/VUM.

The latency is there all the time, and it's even worse under heavy load - e.g. snapshot backups by Veeam, or just taking a regular snapshot in vSphere. Also, the latency only started 4-5 weeks ago. Previously backups were saving at over 50MB/s; now they're apparently struggling to top 10MB/s. The latency is also the same across all 4 hosts and 15 datastores - all of them are reporting the same spikes in read/write latency.

Cronus
Mar 9, 2003

Hello beautiful.
This...is gonna get gross.

mattisacomputer posted:

backups were saving at over 50MB/s; now they're apparently struggling to top 10MB/s. The latency is also the same across all 4 hosts and 15 datastores - all of them are reporting the same spikes in read/write latency.

This really sounds like a switchport issue to me. Is everything negotiating at GigE, with no CRC errors on any ports? How many switches are involved, and do they all have the storage VLANs configured on the trunk uplinks? Is vmkping working with low latency?
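Something like this from an ESXi shell is a quick sanity check (vmk1 and the target IP are placeholders for your storage vmkernel port and an array portal):

# plain latency check over the storage vmkernel interface
vmkping -I vmk1 192.168.130.101
# and if/when jumbo frames come back, verify the full MTU path end to end (9000 minus 28 bytes of headers)
vmkping -I vmk1 -d -s 8972 192.168.130.101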

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Cronus posted:

This really sounds like a switchport issue to me. Is everything negotiating at GigE, with no CRC errors on any ports? How many switches are involved, and do they all have the storage VLANs configured on the trunk uplinks? Is vmkping working with low latency?
Sounds more like a disk going into heroic recovery to me. I'm not sure what kind of individual disk stats you can get out of a Dell MD32xx array, but if there is any way to see disk read or checksum errors, that's the direction I'd be looking in personally.

stevewm
May 10, 2005
We have a handful of servers we are looking at virtualizing.

Having used Hyper-V previously, I had planned on continuing with it. I just need some advice on sizing the host hardware.

Machines to be virtualized:

2012 R2 running DC/DNS/DHCP/WSUS replica/DFS/File Server (not our only DC, there are 7 others company wide)
2012 R2 running a relatively low-load intranet website / PostgreSQL database
Ubuntu 12.04 running UniFi WiFi controller
Ubuntu 12.04 running Zabbix (monitoring software)
Ubuntu 12.04 running Graylog2 (log collection/search, can be heavy on disk at times)

My current plan was a single host with 24/32GB RAM, 2 disks in RAID 1 for host Hyper-V OS, 6 15k SAS disks in RAID 10 for VM storage.

While it is putting all the eggs in one basket so to speak, none of these servers are absolutely business critical. We don't have the budget for shared storage nor a secondary host just yet. Should there be a host issue, a little down time is OK.

In the near future I plan on putting a similar host at another location and utilizing Hyper-V replication, when I get the budget for it.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

adorai posted:

Sounds more like a disk going into heroic recovery to me. I'm not sure what kind of individual disk stats you can get out of a Dell MD32xx array, but if there is any way to see disk read or checksum errors, that's the direction I'd be looking in personally.

Doesn't look like I can find any individual disk stats, but I could be missing something.

We've tried two different pairs of switches. Originally we had ProCurve 2510 gig switches, which had worked fine since the environment was stood up over 2 years ago at this point. There are four storage VLANs, creating four different paths from each host to the storage. We've since replaced both 2510s with 3800-series ProCurves and set them up the same way: four VLANs, each port tagged on only one VLAN, and no trunks. Everything negotiated at gig fine on both pairs.

Jadus
Sep 11, 2003

stevewm posted:

We have a handful of servers we are looking at virtualizing.

Having used Hyper-V previously, I had planned on continuing with it. I just need some advice on sizing the host hardware.

Machines to be virtualized:

2012 R2 running DC/DNS/DHCP/WSUS replica/DFS/File Server (not our only DC, there are 7 others company wide)
2012 R2 running a relatively low-load intranet website / PostgreSQL database
Ubuntu 12.04 running UniFi WiFi controller
Ubuntu 12.04 running Zabbix (monitoring software)
Ubuntu 12.04 running Graylog2 (log collection/search, can be heavy on disk at times)

My current plan was a single host with 24/32GB RAM, 2 disks in RAID 1 for host Hyper-V OS, 6 15k SAS disks in RAID 10 for VM storage.

While it is putting all the eggs in one basket so to speak, none of these servers are absolutely business critical. We don't have the budget for shared storage nor a secondary host just yet. Should there be a host issue, a little down time is OK.

In the near future I plan on putting a similar host at another location and utilizing Hyper-V replication, when I get the budget for it.

RAM is cheap, so you should be looking at 64GB at least, if not triple digits. One of the primary results you'll see from virtualizing is incremental server sprawl, and having the extra RAM gives you lots of flexibility.

The other thing to consider is Windows licensing: if your host is already licensed with Server 2012 Standard, I believe that covers 2 Windows VMs. You can add additional Standard licenses, but if you're eventually going over 6-8 Windows VMs you should consider Datacenter licensing, as it grants unlimited VMs.
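Rough worked example, assuming a single two-socket host: Standard covers 2 Windows VMs per license, so 8 Windows VMs would mean stacking 4 Standard licenses, which is about the point where one Datacenter license (unlimited VMs on that host) starts to pencil out.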

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

stevewm posted:

My current plan was a single host with 24/32GB RAM, 2 disks in RAID 1 for host Hyper-V OS, 6 15k SAS disks in RAID 10 for VM storage.
If you can, separate out those roles from your DC onto separate VMs. As for sizing, I would buy 2x the RAM you think you'll need, and I would do an 8-disk RAID 10 instead of a RAID 1 + 6-disk RAID 10. In my mind, you would have two disks' worth of extra IOPS sitting idle in your proposed config.
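Rough numbers, assuming something like 175 IOPS per 15k spindle: an 8-disk RAID 10 gets you roughly 1400 read / 700 write IOPS, versus about 1050 / 525 from the 6-disk set while the mirrored boot pair sits mostly idle.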

jre
Sep 2, 2011

To the cloud ?



adorai posted:

If you can, separate out those roles from your DC onto separate VMs. As for sizing, I would buy 2x the RAM you think you'll need, and I would do an 8-disk RAID 10 instead of a RAID 1 + 6-disk RAID 10. In my mind, you would have two disks' worth of extra IOPS sitting idle in your proposed config.

Seconding the "buy twice as much RAM as you think you'll need" sentiment. Buy slightly slower processors if necessary to fit it into the budget.

stevewm
May 10, 2005

adorai posted:

...... and I would do an 8-disk RAID 10 instead of a RAID 1 + 6-disk RAID 10. In my mind, you would have two disks' worth of extra IOPS sitting idle in your proposed config.

I have gotten some conflicting information on this part... Some say to keep the host OS boot volume/system files off the same spindles as the VMs, and others say otherwise.

Though I can't imagine the host OS hits the disk enough to really matter either way.

Ahz
Jun 17, 2001
PUT MY CART BACK? I'M BETTER THAN THAT AND YOU! WHERE IS MY BUTLER?!
Does anyone know why resizing my display window no longer automatically resizes the guest's monitor / pixel layout in Fusion 7?

It only seems to zoom the window when I resize it now.

some kinda jackal
Feb 25, 2003

 
 
Am I losing out on too much by specifying HW7 when creating new VMs on 5.5? I want to be able to manage them using the thick client, even though I'm kind of coming around to the Web UI for most stuff (Mac user). If I'm really losing out on some awesome performance options then I'll go to whatever new HW version I need, but if it's just fluff then I dunno.

Mierdaan
Sep 14, 2004

Pillbug

Martytoof posted:

Am I losing out on too much by specifying HW7 when creating new VMs on 5.5? I want to be able to manage them using the thick client, even though I'm kind of coming around to the Web UI for most stuff (Mac user). If I'm really losing out on some awesome performance options then I'll go to whatever new HW version I need, but if it's just fluff then I dunno.

Are you just dorking around in a lab, or is this for a production environment? Comparison here.

some kinda jackal
Feb 25, 2003

 
 

Mierdaan posted:

Are you just dorking around in a lab, or is this for a production environment? Comparison here.

Well, the question stemmed from working in my lab environment, but I also do the same at work, since I prefer to use the thick client to manage our VCSA when I'm in a Windows environment.

Thanks for the link. Judging by the docs, it looks like the main things I'd be interested in are maybe E1000e (though I use VMXNET3 everywhere I can these days) and USB 3 (for my lab machines). Looks like I can probably stay at 7. Was it HW8 that introduced stuff that isn't available in the C# client, or is it 8 and up? I honestly can't remember.





edit: I have to say though, once I told myself "okay, you're not going to have a thick client, so just man up and learn to love the Web Client," I found that I actually kind of stopped HATING it. I'm not in LOVE with it, but I think I'm making do, and once I figured out where everything is (and quite frankly stopped pouting over the lack of a thick client for OS X) I think I'll keep using it. It's still slow and annoying in places, but I guess attitude is a huge part of why I absolutely loathed it.

That being said, the console still has the feel of a disasterpiece (god dammit, let me launch the console while the VM is powered off, VMware!) compared to the thick client, but I'll live :)

some kinda jackal fucked around with this message at 03:18 on Oct 24, 2014

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

The VI client in 5.5 U2 allows you to edit the hardware version 8 (and below) features of version 10 VMs.

http://wahlnetwork.com/2014/09/15/restricted-edit/

Gothmog1065
May 14, 2009
Hey guys. I'm playing around with the low end of virtualization. I'm running Windows 10 preview, Linux, and Server 2012 (not at the same time, normally). One of my questions is this: is there a way to get more video capability, or install drivers to use my GPU more? I'd like to run the VM full screen at 1920x1080 and maybe even play some games on low settings through it. It may not be possible with what I'm currently using.

Host OS: Win 8.1 Pro
VM: VirtualBox 4.3.16
CPU: I7 4600k OC'd to 4.6 ghz
RAM: 16GB DDR3
GPU: AMD Radeon 7870 (2 GB GDDR5)

The VMs are all on separate HDDs that aren't on my SSD.

When I go to the video settings, it will only allow me to allocate 128MB to the VM. Is there a way to increase this? I'd also like to make the resolutions within the window more widescreen, but there are no options for it.

Nebulis01
Dec 30, 2003
Technical Support Ninny

Gothmog1065 posted:

Hey guys. I'm playing around with the low end of virtualization. I'm running Windows 10 preview, Linux, and Server 2012 (not at the same time, normally). One of my questions is this: is there a way to get more video capability, or install drivers to use my GPU more? I'd like to run the VM full screen at 1920x1080 and maybe even play some games on low settings through it. It may not be possible with what I'm currently using.

Host OS: Win 8.1 Pro
VM: VirtualBox 4.3.16
CPU: I7 4600k OC'd to 4.6 ghz
RAM: 16GB DDR3
GPU: AMD Radeon 7870 (2 GB GDDR5)

The VMs are all on separate HDDs that aren't on my SSD.

When I go to the video settings, it will only allow me to allocate 128MB to the VM. Is there a way to increase this? I'd also like to make the resolutions within the window more widescreen, but there are no options for it.

On Hyper-V you're pretty much going to be looking at RemoteFX (http://blogs.msdn.com/b/rds/archive/2014/06/06/understanding-and-evaluating-remotefx-vgpu-on-windows-server-2012-r2.aspx) and then using Remote Desktop Services to access the VM, rather than the direct Hyper-V 'connect to' option. I'm not entirely sure you'll get it working on a consumer card.
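If you end up staying on VirtualBox instead: at least in the versions I've used, the GUI caps video memory lower than VBoxManage does, and something along these lines (VM name is a placeholder, and the VM has to be powered off) takes it up to the 256MB maximum:

VBoxManage modifyvm "Win10Preview" --vram 256
VBoxManage modifyvm "Win10Preview" --accelerate3d on   # 3D needs the Guest Additions installed in the guest

That only raises the guest video memory ceiling and enables VirtualBox's 3D pass-through, though; it's not the same as dedicating the Radeon to the VM.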

Da Mott Man
Aug 3, 2012


Nebulis01 posted:

On Hyper-V you're pretty much going to be looking at RemoteFX (http://blogs.msdn.com/b/rds/archive/2014/06/06/understanding-and-evaluating-remotefx-vgpu-on-windows-server-2012-r2.aspx) and then using Remote Desktop Services to access the VM, rather than the direct Hyper-V 'connect to' option. I'm not entirely sure you'll get it working on a consumer card.

For Remote Desktop Services you'll need SQL for the licensing database and AD for accounts at the very least; for fancier stuff you'll need to set up IIS as well. We've been using consumer GPU hardware in our GPU-enabled RDS farm for about 10 months now with minimal problems, mostly driver crashes (and that has only happened twice).

A limitation is that you cannot install RDS on an AD controller - we found that out on a small test box. The role installer should block you from doing so, but in our case during testing it did not, and it caused the machine to constantly blue screen until we tracked down the problem.

The real gotcha is VRAM. You have no way of controlling the amount of VRAM dedicated to each instance until Windows Server 10; in Windows Server 2012 R2 and below it is determined by resolution and the number of monitors.

The other thing is that you need to do RDP over UDP (for latency), and it takes a lot of bandwidth.

Da Mott Man fucked around with this message at 01:21 on Oct 28, 2014

lurksion
Mar 21, 2013
Very little experience with virtualization here, so a kind of basic conceptual question (that I won't even be doing anything with for a while).

Let's say I have 2 guests on a hypervisor (A and B), both connected to the same physical VLAN (no NAT intermediary from the host). If I transfer/read from A directly to B (say I set up A as an internal FTP server and B uses wget), does that data flow all the way out to the physical network and back again, or does the host step in and go "hey, this just needs to happen internally"?

OK, now let's say A has raw hard drives passed through to it via VT-d and it is serving them out to the rest of the network as shared drives via Samba/whatever. Now B reads/writes via the sharing protocol - I presume whatever the answer is still holds?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
The data does not hairpin out to the switch, if that is your question.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Virtual switches behave (mostly) like physical switches, so if both guests are on the same host the vSwitch will check its MAC address table to figure out where to forward the packet, and if it determines that the destination is attached to one of its virtual ports it will forward it out that port. Traffic only crosses the uplink if it needs to leave the vSwitch to get where it's going.

Your second question is basically the same as your first. The guest-to-guest communication never leaves the host, and the storage traffic would be local to the host as well, since passthrough is used.
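Easy way to convince yourself, if you want (addresses are placeholders): push some traffic between the two guests and watch the uplink from the host.

# on guest B
iperf -s
# on guest A
iperf -c 192.168.10.22 -t 30
# on the ESXi host: run esxtop and press 'n' for the network view;
# the vmnic uplink counters should stay basically flat (aside from whatever else the host is doing)
# while the two VM ports light up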

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


If for some odd reason the guests are on separate vSwitches, then it'll flow out onto the physical network. Inter-vSwitch communication does cross the physical uplinks because vSwitches are essentially isolated.

Otherwise, on the same vSwitch, no. It's all done virtually.

lurksion
Mar 21, 2013
Thanks guys, good to know.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Our cheap, lovely DR solution revolves around using NetApp SnapMirror to get our VM NFS volumes over to a DR site with standalone ESXi hosts ready to go. Mount up the volumes as datastores, import the VMX files into the inventory, start the VMs, and you're ready to go. The stumbling block we hit is that we have separate vols for high-performance dedupe (OS vol), high-performance non-dedupe, and low-performance SATA. The OS vols get mounted with the import, but any additional VMDKs are disassociated and their link to the datastore gets broken, even though both the datastore name and NFS path are identical. It's a pain in the butt to go through each VM and re-add all the additional disks that were disassociated. Is there a way to trick my hosts into using identical mount points in both environments? I assume it has something to do with the /vmfs/volumes/[hex]-[hex] volumes that get mounted not matching up on both ends, but I'm not sure what method they're using to generate that ID. Is it a combination of IP/hostname and NFS volume path, or something randomized?
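For reference, we're mounting the replicated volumes on the DR hosts with something along these lines (the filer name, export path, and label are placeholders):

# mount the SnapMirror destination volume as an NFS datastore
esxcfg-nas -a -o drfiler01 -s /vol/vm_os vm_os
# list what's mounted and confirm the label/path match production
esxcfg-nas -l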

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Anyone here rolled out VXLAN and NSX? My work is wanting me to do so. It seems fairly straightforward; I'm just wondering about the hiccups. I think I've calmed the pushback from the IT team and kept them from working against us; they're starting to like it after I explained it.

Pantology
Jan 16, 2006

Dinosaur Gum

Dilbert As gently caress posted:

Anyone here rolled out VXLAN and NSX? My work is wanting me to do so. It seems fairly straightforward; I'm just wondering about the hiccups. I think I've calmed the pushback from the IT team and kept them from working against us; they're starting to like it after I explained it.

Only in a lab setting. There are only, what, 250 paying customers at this point? Your odds of running into one here are pretty slim. That said, I work with a couple guys that went through the NSX Ninja program, so if you have any specific questions, I'd be happy to pass them along. The only obvious potential hiccup that springs to mind is to make sure your physical infrastructure is configured for a >= 1600 byte MTU.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Pantology posted:

Only in a lab setting. There are only, what, 250 paying customers at this point? Your odds of running into one here are pretty slim. That said, I work with a couple guys that went through the NSX Ninja program, so if you have any specific questions, I'd be happy to pass them along. The only obvious potential hiccup that springs to mind is to make sure your physical infrastructure is configured for a >= 1600 byte MTU.

Yeah, one of the teachers/coworkers I worked with moved out to Kansas (or Oklahoma) and is doing a case study with VMware on his implementation. I'm going to be setting up a PoC soon for our company; since we want to simplify our 40+ sites, SDN looks like the way to go.

Pile Of Garbage
May 28, 2007



Dilbert As gently caress posted:

Yeah, one of the teachers/coworkers I worked with moved out to Kansas (or Oklahoma) and is doing a case study with VMware on his implementation. I'm going to be setting up a PoC soon for our company; since we want to simplify our 40+ sites, SDN looks like the way to go.

How are you handling compute/storage? Just interested, as from what I've seen the popular option is FlexPods all the way.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Has anyone worked with GPU passthrough in either KVM or Xen? I need to load a server with 6 or 8 low-cost GPUs and dedicate them to VMs. Cost is really important here, so not having to worry about VMware licensing would be great.

Hadlock
Nov 9, 2004

Are you using the GPU for video or OpenCL/CUDA computing?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

Are you using the GPU for video or OpenCL/CUDA computing?
Both.

evol262
Nov 30, 2010
#!/usr/bin/perl

KVM and Xen are gonna be pretty similar from a passthrough/pcistub performance perspective. There's not a compelling reason to pick one over the other for this. I would use vfio-vga, though.
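The moving parts are basically these (the vendor/device IDs and PCI address below are placeholders for whatever cards you end up buying):

# host kernel command line: intel_iommu=on (or amd_iommu=on), then reboot
# bind the GPU to vfio-pci instead of the host driver
modprobe vfio-pci
echo "10de 13c2" > /sys/bus/pci/drivers/vfio-pci/new_id
# then hand it to the guest, e.g. with straight qemu:
#   -device vfio-pci,host=01:00.0,x-vga=on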

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

KVM and Xen are gonna be pretty similar from a passthrough/pcistub performance perspective. There's not a compelling reason to pick one over the other for this. I would use vfio-vga, though.
In terms of stability, are there any known issues or gotchas that I should be aware of before I start picking out hardware or testing?

e: one feather in Xen's cap is that it seems to support nVidia Grid, while KVM doesn't do vGPU yet.

e2: Seems nVidia Grid isn't supported in Linux guests yet, so that's a moot point.

Vulture Culture fucked around with this message at 22:26 on Nov 5, 2014

evol262
Nov 30, 2010
#!/usr/bin/perl

Misogynist posted:

In terms of stability, are there any known issues or gotchas that I should be aware of before I start picking out hardware or testing?

e: one feather in Xen's cap is that it seems to support nVidia Grid, while KVM doesn't do vGPU yet.

e2: Seems nVidia Grid isn't supported in Linux guests yet, so that's a moot point.

The only real gotcha is that 1:N is currently under development but not working yet. You're aware of that one, though.

Broadly, I'd say to use AMD for OpenCL and nVidia for everything else. But 1:N GPU support is extremely likely to hit nVidia first as well, and the K1/K2 are excellent CUDA cards anyway.

Usual safe Linux advice: nVidia will get graphics features before AMD, AMD will get KVM features marginally before Intel. Buy nVidia/Intel anyway.
