Volguus
Mar 3, 2009


Boris Galerkin posted:

How do I do that? Wikipedia says Hyper-V is a Windows thing and I absolutely need a Linux host.

e: After reading about it I'm not sure if I absolutely need a Linux host. I do a lot of numerical/computational stuff, so it's pretty much been Linux all the way down because I don't want to recompile libraries/tools for different architectures, and also because all the computers I use are also Linux so it just makes everything easier.

But if I can run a Windows host with headless Fedora/Linux with zero performance hits while letting me develop MPI applications when I run them in the Linux VM then I guess that might work. But this also seems like a lot more work than asking if there were obvious settings that I could flip on/off to gain free performance for Excel, which I don't even use for number crunching. I just want a lightweight Excel app, and LibreOffice is poo poo and Google Sheets and Office Excel Online are too slow.

You don't; Hyper-V is a Windows thing. What could help you quite a bit would be a virtualization server hosting a Windows guest. That server can be Linux with QEMU+KVM or Windows with Hyper-V, doesn't matter; the performance will most likely be better than running the machine locally.

Given the nature of your work, running the primary development machine in a VM shouldn't even be a consideration.


karoshi
Nov 4, 2008


Boris Galerkin posted:

I'm also not averse to switching from VirtualBox to something else if the performance gain is trivially easy. Or just switching from Windows 10 to 8.1 if that works out better too.

VirtualBox was a piece of poo poo. I was looking for AVX support and finally found a post on their forums where a dev said there wasn't going to be AVX support unless a customer demanded and paid for it. gently caress Oracle. I switched to VMware and it was so much better.

e: not familiar with the KVM user experience, but I bet it's better than gently caress Oracle's virtualbox.

karoshi fucked around with this message at Feb 13, 2018 around 14:45

evol262
Nov 30, 2010
#!/usr/bin/perl

Boris Galerkin posted:

How do I do that? Wikipedia says Hyper-V is a Windows thing and I absolutely need a Linux host.

e: After reading about it I'm not sure if I absolutely need a Linux host. I do a lot of numerical/computational stuff, so it's pretty much been Linux all the way down because I don't want to recompile libraries/tools for different architectures, and also because all the computers I use are also Linux so it just makes everything easier.

But if I can run a Windows host with headless Fedora/Linux with zero performance hits while letting me develop MPI applications when I run them in the Linux VM then I guess that might work. But this also seems like a lot more work than asking if there were obvious settings that I could flip on/off to gain free performance for Excel, which I don't even use for number crunching. I just want a lightweight Excel app, and LibreOffice is poo poo and Google Sheets and Office Excel Online are too slow.

Of course you can do this, though you should note that Hyper-V 2 is a little picky about UEFI on the guest, and legacy booting is not supported.

However, I'd probably just run the windows guest on KVM and use RDP to access it.

Note that the performance difference between Virtualbox (with VT-x/SVM enabled) and Hyper-V/KVM is close to zero for many workloads, other than storage being marginally faster. Nothing is going to make using a desktop Windows guest more pleasant than vbox guest extensions, RDP (with or without remotefx), VMware workstation, etc.

If your workflow works for you, it's not gonna be that much faster by moving to KVM/Hyper-V.

If your Linux workload isn't actually dependent on the underlying hardware, though, I'd consider just using WSL.

Saukkis
May 16, 2003



In VMware vSphere, is there a way to schedule a virtual machine to start automatically after it has been powered off from the guest OS? My coworker's understanding is that a full power-off is necessary for the VM to get access to the new CPU flags and for the Spectre mitigations to become effective. It would be most convenient to schedule the startup and then, during the normal update cycle, power off the VMs instead of rebooting them, without having to go start them up manually.

SlowBloke
Aug 14, 2017


Saukkis posted:

In VMware vSphere, is there a way to schedule a virtual machine to start automatically after it has been powered off from the guest OS? My coworker's understanding is that a full power-off is necessary for the VM to get access to the new CPU flags and for the Spectre mitigations to become effective. It would be most convenient to schedule the startup and then, during the normal update cycle, power off the VMs instead of rebooting them, without having to go start them up manually.

You could do some gymnastics with PowerCLI to shut down a list of VMs, wait for completion and then power the same list of VMs back on.
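
A rough sketch of that gymnastics in PowerCLI; the VM names are placeholders, and it assumes VMware Tools is installed in each guest so `Shutdown-VMGuest` can do a clean shutdown:

```powershell
# Hypothetical VM list -- substitute your own names or a folder query
$names = "app01", "app02", "db01"

# Ask each guest OS to shut down cleanly (requires VMware Tools)
Get-VM -Name $names | Shutdown-VMGuest -Confirm:$false

# Poll until every VM in the list reports PoweredOff
while (Get-VM -Name $names | Where-Object { $_.PowerState -ne "PoweredOff" }) {
    Start-Sleep -Seconds 15
}

# Cold boot the same list so the new CPU flags are picked up
Get-VM -Name $names | Start-VM
```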

Moey
Oct 22, 2010

I LIKE TO MOVE IT


SlowBloke posted:

You could do some gymnastics with PowerCLI to shut down a list of VMs, wait for completion and then power the same list of VMs back on.

This is what I was thinking as well. I know of nothing in vCenter that will power a VM back on. Horizon View can do it.

stevewm
May 10, 2005


I've been working on figuring out a solution for our infrastructure at work. A small smattering of servers... 8 Windows VMs and 4 Linux VMs. Our computing power needs are not likely to change soon, but our storage usage is constantly growing. The existing servers are all Dell, 6+ years old.. Support/warranty on them expiring soon.

So I'm looking at setting up a 2 or 3 node Hyper-V cluster to consolidate everything, though I think 2 nodes would be enough. At the very least to get some redundancy, as we basically have none right now.

Did a DPACK/Live Optics run for 24 hours with all our servers and the one Hyper-V host we had included; results were 1196 IOPS at the 95th percentile, 4756 at peak (caused by an end-of-day process that runs at 11PM).
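
For anyone unfamiliar with how Live Optics arrives at a "95th percentile" figure like that, here's a rough sketch using nearest-rank percentiles; the sample values are made up, not from the actual report:

```python
import math

# Made-up per-interval IOPS samples (a real Live Optics run collects
# these over 24 hours); the one big spike is the 11PM end-of-day job.
samples = [810, 840, 870, 900, 920, 950, 980, 1000, 1030, 1060,
           1080, 1100, 1120, 1140, 1150, 1160, 1170, 1180, 1196, 4756]

def nearest_rank_percentile(values, pct):
    """Smallest value that is >= pct% of all samples (nearest-rank)."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

print(nearest_rank_percentile(samples, 95))  # the level you size for
print(max(samples))                          # the brief batch-job peak
```

The point of sizing to the 95th percentile rather than the peak is exactly this: one nightly batch job shouldn't force you to buy 4x the storage performance you need the other 23 hours of the day.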

Have gotten a few solutions quoted by a few vendors and have narrowed it down to a few choices.

2 node Starwind VSAN appliance - most expensive option by far
2x Lenovo SR630 servers paired with DS4200 iSCSI SAN - cheapest option
2 node Dell VRTX - just barely more expensive than the Lenovo option.
3 node Scale Computing HC cluster - in-between Lenovo and Dell price wise.

Starwind was out due to sheer pricing alone. Way more than anything else.

The Lenovo looks promising... Concerned about their support though.

While I was impressed by the Scale Computing cluster, it is sold as a fixed appliance and not easy to upgrade. Need more storage? You can't just slap more drives in, you have to add another entire node. They use SuperMicro hardware running a highly customized KVM hypervisor. They also need 3 nodes minimum, which increases the MS licensing.

I have been leaning towards the Dell VRTX, which is basically a chassis with blade servers and shared storage in a single box with redundant everything (except for the networking backplane, which is easy to work around). Easy to upgrade with either additional blades or storage. Single management point, easier configuration... from a company whose support I am familiar and satisfied with. Am I crazy for going this route?

SlowBloke
Aug 14, 2017


Moey posted:

This is what I was thinking as well. I know of nothing in vCenter that will power a VM back on. Horizon View can do it.

If you hate yourself VERY strongly you could create a vSphere Orchestrator workflow that shuts down a VM, waits for the VM status to change, and turns it back on, applied to a cluster or VM folder(s), but it would take an awful lot of time to set up compared to a handful of ps1 scripts.

Methanar
Sep 26, 2013

It always was

stevewm posted:

I've been working on figuring out a solution for our infrastructure at work. A small smattering of servers... 8 Windows VMs and 4 Linux VMs. Our computing power needs are not likely to change soon, but our storage usage is constantly growing. The existing servers are all Dell, 6+ years old.. Support/warranty on them expiring soon.

So I'm looking at setting up a 2 or 3 node Hyper-V cluster to consolidate everything, though I think 2 nodes would be enough. At the very least to get some redundancy, as we basically have none right now.

Did a DPACK/Live Optics run for 24 hours with all our servers and the one Hyper-V host we had included; results were 1196 IOPS at the 95th percentile, 4756 at peak (caused by an end-of-day process that runs at 11PM).

Have gotten a few solutions quoted by a few vendors and have narrowed it down to a few choices.

2 node Starwind VSAN appliance - most expensive option by far
2x Lenovo SR630 servers paired with DS4200 iSCSI SAN - cheapest option
2 node Dell VRTX - just barely more expensive than the Lenovo option.
3 node Scale Computing HC cluster - in-between Lenovo and Dell price wise.

Starwind was out due to sheer pricing alone. Way more than anything else.

The Lenovo looks promising... Concerned about their support though.

While I was impressed by the Scale Computing cluster, it is sold as a fixed appliance and not easy to upgrade. Need more storage? You can't just slap more drives in, you have to add another entire node. They use SuperMicro hardware running a highly customized KVM hypervisor. They also need 3 nodes minimum, which increases the MS licensing.

I have been leaning towards the Dell VRTX, which is basically a chassis with blade servers and shared storage in a single box with redundant everything (except for the networking backplane, which is easy to work around). Easy to upgrade with either additional blades or storage. Single management point, easier configuration... from a company whose support I am familiar and satisfied with. Am I crazy for going this route?

I've used a Dell VRTX before and I agree, it was pretty nifty as an all in one box. If I was somehow in a situation where I needed an in-office server presence again, I'd definitely consider it.

Multiply whatever concerns you have about Lenovo support by a factor of 10. I will never be complicit in purchasing Lenovo hardware again.

I don't know anything about your storage requirements, but have you considered an external ZFS NAS that you mount rather than direct attach disks?

Methanar fucked around with this message at Feb 14, 2018 around 21:16

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles



VRTX has a single network plane; if you want network redundancy you need external switches. An FX2 with two compute blades and a storage blade will get you embedded network redundancy at 1 or 10GbE without needing external boxes.

stevewm
May 10, 2005


BangersInMyKnickers posted:

VRTX has a single network plane; if you want network redundancy you need external switches. An FX2 with two compute blades and a storage blade will get you embedded network redundancy at 1 or 10GbE without needing external boxes.

Yeah I was aware of the single network plane... One of Dell's config docs actually recommends slapping in 2x PCIe NICs and assigning one to each blade and setting up NIC teaming. In the event the network backplane went out, the PCIe NICs would continue operating.

I was already going to have 2x external switches for connecting 2x routers/firewalls (in active/passive).

Thanks Ants
May 21, 2004

Bless you, ants. Blants.




Fun Shoe

Have a look at some of the Hyperconverged Storage Spaces Direct qualified designs, e.g.

https://lenovopress.com/lp0064.pdf

Boris Galerkin
Dec 17, 2011


Volguus posted:

You don't; Hyper-V is a Windows thing. What could help you quite a bit would be a virtualization server hosting a Windows guest. That server can be Linux with QEMU+KVM or Windows with Hyper-V, doesn't matter; the performance will most likely be better than running the machine locally.

Given the nature of your work, running the primary development machine in a VM shouldn't even be a consideration.


karoshi posted:

VirtualBox was a piece of poo poo. I was looking for AVX support and finally found a post on their forums where a dev said there wasn't going to be AVX support unless a customer demanded and paid for it. gently caress Oracle. I switched to VMware and it was so much better.

e: not familiar with the KVM user experience, but I bet it's better than gently caress Oracle's virtualbox.


evol262 posted:

Of course you can do this, though you should note that Hyper-V 2 is a little picky about UEFI on the guest, and legacy booting is not supported.

However, I'd probably just run the windows guest on KVM and use RDP to access it.

Note that the performance difference between Virtualbox (with VT-x/SVM enabled) and Hyper-V/KVM is close to zero for many workloads, other than storage being marginally faster. Nothing is going to make using a desktop Windows guest more pleasant than vbox guest extensions, RDP (with or without remotefx), VMware workstation, etc.

If your workflow works for you, it's not gonna be that much faster by moving to KVM/Hyper-V.

If your Linux workload isn't actually dependent on the underlying hardware, though, I'd consider just using WSL.

Thanks guys. Looks like I'm sticking with a Linux host and Windows guest, but might look into VMware Workstation. Does the free version work better out of the box? Again, all I need Windows for is the Office suite. Right now I have Windows 10 in windowed mode in VirtualBox, with the Windows taskbar hidden and Excel/PowerPoint running as full-screen apps inside the VirtualBox window, so it's kinda got a native app feel going for it.

In a similar vein, I don't need to run Windows 10. Would Windows 8.1 or even 7 be better if all I need it for is just Excel and PowerPoint?

SamDabbers
May 26, 2003

Oh my god...

Lipstick Apathy

Boris Galerkin posted:

Thanks guys. Looks like I'm sticking with a Linux host and Windows guest, but might look into VMware Workstation. Does the free version work better out of the box? Again, all I need Windows for is the Office suite. Right now I have Windows 10 in windowed mode in VirtualBox, with the Windows taskbar hidden and Excel/PowerPoint running as full-screen apps inside the VirtualBox window, so it's kinda got a native app feel going for it.

In a similar vein, I don't need to run Windows 10. Would Windows 8.1 or even 7 be better if all I need it for is just Excel and PowerPoint?

I use KVM with virt-manager as the UI and find it suits my needs for desktop virtualization. It gives a VirtualBox/VMware-like experience, works well, and is extremely feature-rich.

There are accelerated graphics drivers for Windows 7 on QEMU/KVM so that's probably the way to go on that stack if you don't want to use RDP. Microsoft removed the display driver model the accelerated drivers were using in Windows 8 and up, so RDP ends up being a better experience with those versions as the guest.

Semi-serious edit: Depending on which version of MS Office you're using, WINE might be an actual option for you. It's one of the better supported apps.

SamDabbers fucked around with this message at Feb 15, 2018 around 14:30

stevewm
May 10, 2005


Anyone dealt at all with Scale Computing? (https://www.scalecomputing.com/)

I finally got my official quote from them; I was pleasantly surprised. Much lower than I expected it to be. In fact they are the cheapest option now, even when factoring in additional MS licensing costs because of 3 servers.

Their hypervisor platform is KVM with custom vSAN-like storage, but also pretty nifty.. 3x 1U nodes, 10.44TB usable space, 1.4TB of which is an SSD tier. I was initially concerned about storage space/price, but 10.44TB will do us for a long time even accounting for growth.

DR is also easy... get a cheap single node (~$2) , add it to the cluster and tick a box to make your most important VMs replicate to it.

Dead simple management interface (all HTML5), barely any configuration required (networking, and install your VMs), auto failover, etc...

Their target market is small businesses like ours.. Their HQ is only 50 miles from us. Their support is also all local. They handle any hardware issues on a NBD basis.

Really, I am not seeing any downsides here.

Thanks Ants
May 21, 2004

Bless you, ants. Blants.




Fun Shoe

The only thing I'd watch out for is whether you run any virtual appliance type services that are only supported on VMware/Hyper-V

stevewm
May 10, 2005


Thanks Ants posted:

The only thing I'd watch out for is whether you run any virtual appliance type services that are only supported on VMware/Hyper-V

Nope.. nothing but a handful of basic Windows VMs (SQL, RDS farm, AD, WSUS) and some Ubuntu server VMs (Unifi, webserver, etc..)

Hmm.. Decisions decisions...

Thanks Ants
May 21, 2004

Bless you, ants. Blants.




Fun Shoe

If their HQ is only 50 miles away then it sounds like you'll get treated well by them. How do they handle situations where you just want to add storage - or do you have to add compute at the same time? How well does it integrate with your backup provider etc.

stevewm
May 10, 2005


Thanks Ants posted:

If their HQ is only 50 miles away then it sounds like you'll get treated well by them. How do they handle situations where you just want to add storage - or do you have to add compute at the same time? How well does it integrate with your backup provider etc.

Looks like you have to add a node, though they have what they call Storage Nodes that only add additional storage.


But 10TB is way more than we will need for some time. Even factoring in several years of growth at current rates.

YOLOsubmarine
Oct 19, 2004

Breaux, Breaux, you seen a defense around here anywhere!?


stevewm posted:

Anyone dealt at all with Scale Computing? (https://www.scalecomputing.com/)

I finally got my official quote from them; I was pleasantly surprised. Much lower than I expected it to be. In fact they are the cheapest option now, even when factoring in additional MS licensing costs because of 3 servers.

Their hypervisor platform is KVM with custom vSAN-like storage, but also pretty nifty.. 3x 1U nodes, 10.44TB usable space, 1.4TB of which is an SSD tier. I was initially concerned about storage space/price, but 10.44TB will do us for a long time even accounting for growth.

DR is also easy... get a cheap single node (~$2) , add it to the cluster and tick a box to make your most important VMs replicate to it.

Dead simple management interface (all HTML5), barely any configuration required (networking, and install your VMs), auto failover, etc...

Their target market is small businesses like ours.. Their HQ is only 50 miles from us. Their support is also all local. They handle any hardware issues on a NBD basis.

Really, I am not seeing any downsides here.

This is basically Nutanix, and I wouldn't be shocked to learn that they have some of the same performance issues Nutanix has, particularly with monolithic workloads.

stevewm
May 10, 2005


YOLOsubmarine posted:

This is basically Nutanix, and I wouldn't be shocked to learn that they have some of the same performance issues Nutanix has, particularly with monolithic workloads.

Care to elaborate a bit?

What kind of issues....

YOLOsubmarine
Oct 19, 2004

Breaux, Breaux, you seen a defense around here anywhere!?


stevewm posted:

Care to elaborate a bit?

What kind of issues....

Nutanix suffers from some issues related to their storage VM architecture that can limit single-workload performance pretty heavily. They also have some issues with metadata management that have bitten a couple of our customers.

Scale does storage in-kernel, though you're still going to be limited by the resources allocated to the kernel for storage processing; that's certainly much cleaner than the Nutanix method.

The spinning media footprint on each node is generally very small as well (just a few spindles), so if you're not doing a great job of populating the cache, performance can really tank.

And node rebuilds are really painful.

stevewm
May 10, 2005


YOLOsubmarine posted:

Nutanix suffers from some issues related to their storage VM architecture that can limit single-workload performance pretty heavily. They also have some issues with metadata management that have bitten a couple of our customers.

Scale does storage in-kernel, though you're still going to be limited by the resources allocated to the kernel for storage processing; that's certainly much cleaner than the Nutanix method.

The spinning media footprint on each node is generally very small as well (just a few spindles), so if you're not doing a great job of populating the cache, performance can really tank.

And node rebuilds are really painful.

Good things to think about...

Our workload is nearly all read... nearly 90% if the LiveOptics report is to be believed, though it makes sense given our primary application. (point of sale app). With our existing SQL instance and physical server (which has 24GB of RAM dedicated to it), the disk gets hit very little under normal operation.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles



Backups, full table scans, or otherwise lovely programming that causes huge amounts of disk churn are generally the things that will make SSD cache schemes fall over and performance collapse. Buyer beware, especially now that all-flash is coming in below the $/GB of 10k SAS and edging into NL-SATA.

YOLOsubmarine
Oct 19, 2004

Breaux, Breaux, you seen a defense around here anywhere!?


stevewm posted:

Good things to think about...

Our workload is nearly all read... nearly 90% if the LiveOptics report is to be believed, though it makes sense given our primary application. (point of sale app). With our existing SQL instance and physical server (which has 24GB of RAM dedicated to it), the disk gets hit very little under normal operation.

You might also want to consider backup. Availability can be handled through replication, but data recovery can be tricky: most popular virtual backup solutions leverage APIs like VADP for VMware or VSS for Microsoft, and such a thing may not be available here. So you're stuck with agent-based backup and recovery, or something leveraging native snapshots. Do they have a workflow for doing file-level restores from snapshots and replica targets?

Pixelboy
Sep 13, 2005

Now, I know what you're thinking...


Boris Galerkin posted:

e: After reading about it I'm not sure if I absolutely need a Linux host. I do a lot of numerical/computational stuff, so it's pretty much been Linux all the way down because I don't want to recompile libraries/tools for different architectures, and also because all the computers I use are also Linux so it just makes everything easier.

Wouldn't the Windows Subsystem for Linux cover you here?

https://docs.microsoft.com/en-us/windows/wsl/about

Boris Galerkin
Dec 17, 2011


SamDabbers posted:

I use KVM with virt-manager as the UI and find it suits my needs for desktop virtualization. It gives a VirtualBox/VMware-like experience, works well, and is extremely feature-rich.

There are accelerated graphics drivers for Windows 7 on QEMU/KVM so that's probably the way to go on that stack if you don't want to use RDP. Microsoft removed the display driver model the accelerated drivers were using in Windows 8 and up, so RDP ends up being a better experience with those versions as the guest.

Semi-serious edit: Depending on which version of MS Office you're using, WINE might be an actual option for you. It's one of the better supported apps.

KVM/virt-manager

So I just looked these things up and I don't get it. Do I not need VirtualBox/VMWare? I only downloaded VirtualBox because that's just the first name that comes up when I think of VMs.

I'm not sure what version of Office I'm using. I have an Office 365(?) subscription and I think part of that deal is I always get to use the newest version of their software. I think it's 2016 though. I don't have any standalone copies of Office.

Fake edit: Is it possible to boot up Windows 8.1/10 in something like "headless" mode and then start up Windows apps and run them with a GUI, kinda like what you could do with X11 on Linux?

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Pillbug

KVM is an alternative to VirtualBox as a platform to host virtual machines on Linux, the main differences being that KVM is open-source and Linux exclusive whereas VirtualBox is closed-source and has versions available on multiple platforms.

Virt-manager is just a frontend for KVM which is readily available, well supported and straightforward to use in a normal desktop environment. There are other frontends like the CLI or Wok/Kimchi (a web interface) that you could use instead, just like you can use either VMWare's Windows client or the vCenter web client to access an ESX host.

If you're just looking to run a single Windows install to have access to MS Office, it probably doesn't matter much which you use.

Mr Shiny Pants
Nov 12, 2012


Boris Galerkin posted:

Fake edit: Is it possible to boot up Windows 8.1/10 in something like "headless" mode and then start up Windows apps and run them with a GUI, kinda like what you could do with X11 on Linux?

Just use RDP.
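
For example, with FreeRDP on the Linux side; hostname and username here are placeholders for your own guest:

```shell
# RDP session to the Windows guest: /dynamic-resolution resizes the
# desktop with the window, +clipboard shares the clipboard both ways.
xfreerdp /v:win10-guest /u:boris /dynamic-resolution +clipboard
```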

Spring Heeled Jack
Feb 25, 2007


Posting in this thread because it's such a weird situation, so I'm just hoping someone can confirm or deny that this is the worst idea ever.

We have a handful of overseas developers using Visual Studio and a few other dev programs inside of a Windows 10 Horizon View pool I had setup at the request of a previous boss. Those devs are going to need to start using Docker for Windows as well, so I am testing a deployment in another pool to see how it will run (hint: it's bad).

We have a pretty basic setup with each user having a single writable app volume for all of the data/software activation/whatever they would happen to change locally. Everything else is baked into the master VM.

Has anyone dealt with this or anything similar? At this point I'm looking into a manual pool with assigned virtual or even physical desktops, because the performance is so bad and the use of writable volumes has introduced a ton of small quirks with logging on/off.

Potato Salad
Oct 23, 2014




Tortured By Flan

What kind of storage is backing app volumes, and where is docker actually running?

Linked clones? Jmp?

I have a tendency to be immediately suspicious of storage when there are vdi complaints. Got protocol fulfillment latency data on the datastore backing the clones and the apps?

Spring Heeled Jack
Feb 25, 2007


Potato Salad posted:

What kind of storage is backing app volumes, and where is docker actually running?

Linked clones? Jmp?

I have a tendency to be immediately suspicious of storage when there are vdi complaints. Got protocol fulfillment latency data on the datastore backing the clones and the apps?

Everything is on vSAN. And Docker is running via nested virt in the linked clones themselves. The vSAN storage is limited to Horizon workloads. I'm not saying it's a storage problem, I'm just curious if there are any options aside from throwing a ton of memory and vCPU at each clone.

Potato Salad
Oct 23, 2014




Tortured By Flan

Look at your vsan 24h performance before throwing more resources at it

Flash vCenter -> cluster of interest -> Monitor -> Performance -> vSAN

Congestions (self throttling)? Latency?


What about guest resources. When a dev is logged in, do you actually see significant cpu wait time or consumption on the guests?

Potato Salad fucked around with this message at Feb 20, 2018 around 00:39

evol262
Nov 30, 2010
#!/usr/bin/perl

Eletriarnation posted:

KVM is an alternative to VirtualBox as a platform to host virtual machines on Linux, the main differences being that KVM is open-source and Linux exclusive whereas VirtualBox is closed-source and has versions available on multiple platforms.

Well, Gnome Boxes is an alternative to VBox using KVM. But KVM itself is more of an alternative to vmkernel/hyper-v.

Basically, KVM is a relatively small kernel module which allows for virt. qemu provides the emulated hardware (which vmware/hyper-v/vbox/etc all do also). libvirt provides the "glue" between raw kvm and user interfaces.

I mean, for a single guest this doesn't matter at all, but KVM is absolutely a "bare metal" hypervisor which doesn't require a user interface, and it's behind a number of very large cloud deployments.
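
You can actually see those layers directly on a typical Linux virt host; a quick sanity check, assuming the usual distro packages are installed:

```shell
lsmod | grep '^kvm'            # kernel layer: kvm plus kvm_intel/kvm_amd
ls -l /dev/kvm                 # the device node the kvm module exposes
qemu-system-x86_64 --version   # userspace emulator that talks to /dev/kvm
virsh --version                # libvirt, the management glue on top
```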

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell


I've always wondered about the history of why KVM and qemu are two separate things.

It kinda seems like both of them are not too useful on their own.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


Thermopyle posted:

I've always wondered about the history of why KVM and qemu are two separate things.

It kinda seems like both of them are not too useful on their own.

QEMU works great on its own; it's just that without KVM, it relies on binary translation like VMware did in the days before VT-x and AMD-V. You still use it in this mode if you want to do something like emulate ARMv7 on an x86 host.
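
A sketch of that pure-emulation mode; the kernel image and console arguments are placeholders for whatever ARM guest you're booting:

```shell
# No -enable-kvm flag, so QEMU uses TCG binary translation to run
# ARM code on an x86 host (slow, but works anywhere).
qemu-system-arm \
    -M virt -cpu cortex-a7 -m 512 \
    -kernel zImage -append "console=ttyAMA0" \
    -nographic
```

Add -enable-kvm on a host whose CPU matches the guest architecture and the translation layer is swapped for hardware virtualization.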

evol262
Nov 30, 2010
#!/usr/bin/perl

Thermopyle posted:

I've always wondered about the history of why KVM and qemu are two separate things.

It kinda seems like both of them are not too useful on their own.

Well, qemu used to have an "accelerated" kmod back in 2002 or something, and KVM was designed to slot into that.

Basic kernel stuff. KVM should be small and abstracted, providing only device nodes. Anything which wants to use those device nodes (KVM, LVM, DRI, whatever) needs to include all of its own baggage.

qemu was a reasonably good starting point for "we can emulate disk controllers, BIOS, interrupts, etc", which would never make it into the kernel anyway. In theory, qemu-kvm is a forked version of qemu which stubs out kqemu with kvm, but that's nitpicky.

More to the point, as noted, qemu is useful on its own for emulating other architectures. KVM gets leveraged without qemu by lkvm/kvmtool/whatever to provide additional abstraction/security for Clear/Kata containers.


wolrah
May 8, 2006
what?


qemu significantly predates even widespread availability of hardware virtualization. qemu's first public release (0.1) was in March 2003, while Intel's VT-x didn't come along until November 2005 and AMD-V followed in May 2006. Before that, hardware virtualization was pretty much a "big iron" feature only seen in datacenters. It wasn't until 2009 that mainline qemu actually gained KVM support; up until then it was a purely software solution.

It still kinda blows my mind that these days ARM chips are beginning to have hardware virtualization support.

wolrah fucked around with this message at Feb 20, 2018 around 19:25
