|
Boris Galerkin posted:How do I do that? Wikipedia says Hyper-V is a Windows thing and I absolutely need a Linux host. You don't; Hyper-V is a Windows thing. What would help you quite a bit is a virtualization server hosting a Windows guest. That server can be Linux with QEMU+KVM or Windows with Hyper-V; it doesn't matter, and the performance will most likely be better than running the machine locally. Given the nature of your work, though, I wouldn't even consider running the primary development machine in a VM.
|
# ? Feb 13, 2018 14:53 |
|
Boris Galerkin posted:I’m also not averse to switching from VirtualBox to something else if the performance gain is trivially easy. Or also just switching from Windows 10 to 8.1 if that works out better too. VirtualBox was a piece of poo poo. I was looking for AVX support and finally found a post on their forums where a dev was saying there wasn't gonna be AVX support unless a customer demanded and paid for it. gently caress Oracle. I switched to VMware and it was so much better. e: not familiar with the KVM user experience, but I bet it's better than gently caress Oracle's VirtualBox. karoshi fucked around with this message at 15:45 on Feb 13, 2018 |
# ? Feb 13, 2018 14:58 |
|
Boris Galerkin posted:How do I do that? Wikipedia says Hyper-V is a Windows thing and I absolutely need a Linux host. Of course you can do this, though you should note that Hyper-V generation 2 VMs are a little picky about UEFI on the guest, and legacy booting is not supported. However, I'd probably just run the Windows guest on KVM and use RDP to access it. Note that the performance difference between VirtualBox (with VT-x/SVM enabled) and Hyper-V/KVM is close to zero for many workloads, other than storage being marginally faster. Nothing is going to make using a desktop Windows guest more pleasant than the vbox guest extensions, RDP (with or without RemoteFX), VMware Workstation, etc. If your workflow works for you, it's not gonna get that much faster by moving to KVM/Hyper-V. If your Linux workload isn't actually dependent on the underlying hardware, though, I'd consider just using WSL.
|
# ? Feb 13, 2018 15:08 |
|
In VMware vSphere, is there a way to schedule a virtual machine to start automatically after it has been powered off from the guest OS? My coworker's understanding is that a full power-off is necessary for the VM to pick up the new CPU flags and for the Spectre mitigations to become effective. It would be most convenient to schedule the startup, power off the VMs instead of rebooting them during the normal update cycle, and not have to go start them up manually.
|
# ? Feb 14, 2018 20:42 |
|
Saukkis posted:In VMware vSphere, is there a way to schedule a virtual machine to start automatically after it has been powered off from the guest OS? My coworker's understanding is that a full power-off is necessary for the VM to pick up the new CPU flags and for the Spectre mitigations to become effective. It would be most convenient to schedule the startup, power off the VMs instead of rebooting them during the normal update cycle, and not have to go start them up manually. You could do some gymnastics with PowerCLI to shut down a list of VMs, wait for completion, and then power the same list back on.
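Something like this, as a minimal sketch (assuming you're already connected with Connect-VIServer, the guests are running VMware Tools, and the VM names here are placeholders):

code:
# VMs to cold-boot so the guests pick up the new CPU flags
$vmNames = 'app01', 'app02', 'db01'

# Ask each guest OS for a clean shutdown (requires VMware Tools)
Get-VM -Name $vmNames | Stop-VMGuest -Confirm:$false

# Poll until every VM reports PoweredOff
while (Get-VM -Name $vmNames | Where-Object { $_.PowerState -ne 'PoweredOff' }) {
    Start-Sleep -Seconds 10
}

# Power the same list back on
Get-VM -Name $vmNames | Start-VM

Schedule that to run after the update window and nobody has to start anything by hand.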
|
# ? Feb 14, 2018 21:36 |
|
SlowBloke posted:You could do some gymnastics with PowerCLI to shut down a list of VMs, wait for completion, and then power the same list back on. This is what I was thinking as well. I know of nothing in vCenter that will power a VM back on. Horizon View can do it.
|
# ? Feb 14, 2018 21:57 |
|
I've been working on figuring out a solution for our infrastructure at work. A small smattering of servers... 8 Windows VMs and 4 Linux VMs. Our computing power needs are not likely to change soon, but our storage usage is constantly growing. The existing servers are all Dell, 6+ years old, with support/warranty on them expiring soon. So I'm looking at setting up a 2 or 3 node Hyper-V cluster to consolidate everything, though I think 2 nodes would be enough. At the very least to get some redundancy, as we basically have none right now. Did a DPACK/Live Optics run for 24 hours with all our servers included and the one Hyper-V host we had; results were 1,196 IOPS at the 95th percentile, 4,756 peak (caused by an end-of-day process that runs at 11PM). Have gotten a few solutions quoted by a few vendors and have narrowed it down to a few choices:

- 2 node Starwind VSAN appliance - most expensive option by far
- 2x Lenovo SR630 servers paired with a DS4200 iSCSI SAN - cheapest option
- 2 node Dell VRTX - just barely more expensive than the Lenovo option
- 3 node Scale Computing HC cluster - in between Lenovo and Dell price-wise

Starwind was out due to sheer pricing alone; way more than anything else. The Lenovo looks promising, though I'm concerned about their support. While I was impressed by the Scale Computing cluster, it is sold as a fixed appliance and is not easy to upgrade. Need more storage? You can't just slap more drives in; you have to add another entire node. They use SuperMicro hardware running a highly customized KVM hypervisor. They also need 3 nodes minimum, which increases the MS licensing. I have been leaning towards the Dell VRTX, which is basically a chassis with blade servers and shared storage in a single box with redundant everything (except for the networking backplane, which is easy to work around). Easy to upgrade with either additional blades or storage. Single management point, easier configuration... from a company whose support I am familiar and satisfied with. Am I crazy for going this route?
|
# ? Feb 14, 2018 22:06 |
|
Moey posted:This is what I was thinking as well. I know of nothing in vCenter that will power a VM back on. Horizon View can do it. If you hate yourself VERY strongly you could create a vSphere Orchestrator workflow to shut down a VM, wait for the VM status to change, and turn it back on, applied to a cluster or VM folder(s), but it would take an awful lot of time to set up compared to a handful of ps1 scripts.
|
# ? Feb 14, 2018 22:09 |
|
stevewm posted:I've been working on figuring out a solution for our infrastructure at work. A small smattering of servers... 8 Windows VMs and 4 Linux VMs. Our computing power needs are not likely to change soon, but our storage usage is constantly growing. The existing servers are all Dell, 6+ years old, with support/warranty on them expiring soon. I've used a Dell VRTX before and I agree, it was pretty nifty as an all-in-one box. If I were somehow in a situation where I needed an in-office server presence again, I'd definitely consider it. Multiply whatever concerns you have about Lenovo support by a factor of 10. I will never be complicit in purchasing Lenovo hardware again. I don't know anything about your storage requirements, but have you considered an external ZFS NAS that you mount rather than direct-attached disks? Methanar fucked around with this message at 22:16 on Feb 14, 2018 |
# ? Feb 14, 2018 22:12 |
|
The VRTX has a single network plane; if you want network redundancy, you need external switches. An FX2 with two compute blades and a storage blade will get you embedded network redundancy at 1 or 10GbE without needing external boxes.
|
# ? Feb 14, 2018 22:17 |
|
BangersInMyKnickers posted:The VRTX has a single network plane; if you want network redundancy, you need external switches. An FX2 with two compute blades and a storage blade will get you embedded network redundancy at 1 or 10GbE without needing external boxes. Yeah, I was aware of the single network plane... One of Dell's config docs actually recommends slapping in 2x PCIe NICs, assigning one to each blade, and setting up NIC teaming. In the event the network backplane went out, the PCIe NICs would continue operating. I was already going to have 2x external switches for connecting 2x routers/firewalls (in active/passive).
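The teaming side is simple enough on Windows; something like this per blade, as a sketch (adapter and team names made up):

code:
# Team the backplane-facing port with the PCIe NIC; if the backplane
# dies, traffic keeps flowing over the PCIe adapter
New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'Backplane1', 'PCIe1' -TeamingMode SwitchIndependent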
|
# ? Feb 14, 2018 22:24 |
|
Have a look at some of the Hyperconverged Storage Spaces Direct qualified designs, e.g. https://lenovopress.com/lp0064.pdf
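To give a flavor of the setup, a minimal two-node S2D sketch (hypothetical node/cluster names; Windows Server 2016 Datacenter assumed, and a two-node cluster also needs a witness, which is omitted here):

code:
# Validate the nodes for S2D, then build the cluster without shared storage
Test-Cluster -Node 'hv01', 'hv02' -Include 'Storage Spaces Direct', 'Inventory', 'Network', 'System Configuration'
New-Cluster -Name 'hvclu01' -Node 'hv01', 'hv02' -NoStorage

# Pool the local drives across both nodes and carve out a CSV for the VMs
Enable-ClusterStorageSpacesDirect
New-Volume -FriendlyName 'VMStore' -FileSystem CSVFS_ReFS -StoragePoolFriendlyName 'S2D*' -Size 2TB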
|
# ? Feb 14, 2018 22:25 |
|
Volguus posted:You don't; Hyper-V is a Windows thing. What would help you quite a bit is a virtualization server hosting a Windows guest. That server can be Linux with QEMU+KVM or Windows with Hyper-V; it doesn't matter, and the performance will most likely be better than running the machine locally. karoshi posted:VirtualBox was a piece of poo poo. I was looking for AVX support and finally found a post on their forums where a dev was saying there wasn't gonna be AVX support unless a customer demanded and paid for it. gently caress Oracle. I switched to VMware and it was so much better. evol262 posted:Of course you can do this, though you should note that Hyper-V generation 2 VMs are a little picky about UEFI on the guest, and legacy booting is not supported. Thanks guys. Looks like I'm sticking to a Linux host and Windows guest, but might look into VMware Workstation. Does the free version work better out of the box? Again, all I need Windows for is the Office suite. Right now I have Windows 10 in windowed mode in VirtualBox, with the Windows task bar hidden and Excel/PowerPoint running as full-screen apps inside the VirtualBox window, so it's kinda got a native-app feel going for it. In a similar vein, I don't need to run Windows 10. Would Windows 8.1 or even 7 be better if all I need it for is just Excel and PowerPoint?
|
# ? Feb 15, 2018 11:37 |
|
Boris Galerkin posted:Thanks guys. Looks like I'm sticking to a Linux host and Windows guest, but might look into VMware Workstation. Does the free version work better out of the box? Again, all I need Windows for is the Office suite. Right now I have Windows 10 in windowed mode in VirtualBox, with the Windows task bar hidden and Excel/PowerPoint running as full-screen apps inside the VirtualBox window, so it's kinda got a native-app feel going for it. I use KVM with virt-manager as the UI and find it well suited for desktop virtualization. It gives a VirtualBox/VMware-like experience, works well, and is extremely feature-rich. There are accelerated graphics drivers for Windows 7 on QEMU/KVM, so that's probably the way to go on that stack if you don't want to use RDP. Microsoft removed the display driver model the accelerated drivers were using in Windows 8 and up, so RDP ends up being a better experience with those versions as the guest. Semi-serious edit: Depending on which version of MS Office you're using, WINE might be an actual option for you. It's one of the better-supported apps. SamDabbers fucked around with this message at 15:30 on Feb 15, 2018 |
# ? Feb 15, 2018 15:21 |
|
Anyone dealt at all with Scale Computing? (https://www.scalecomputing.com/) I finally got my official quote from them and I was pleasantly surprised. Much lower than I expected it to be. In fact they are the cheapest option now, even when factoring in the additional MS licensing costs because of 3 servers. Their hypervisor platform is KVM with custom vSAN-like storage, and it's also pretty nifty: 3x 1U nodes, 10.44TB usable space, 1.4TB of which is an SSD tier. I was initially concerned about storage space/price, but 10.44TB will do us for a long time, even accounting for growth. DR is also easy: get a cheap single node (~$2), add it to the cluster, and tick a box to make your most important VMs replicate to it. Dead simple management interface (all HTML5), barely any configuration required (networking, and install your VMs), auto failover, etc... Their target market is small businesses like ours. Their HQ is only 50 miles from us, and their support is all local. They handle any hardware issues on an NBD basis. Really, I am not seeing any downsides here.
|
# ? Feb 15, 2018 20:13 |
|
The only thing I'd watch out for is whether you run any virtual appliance type services that are only supported on VMware/Hyper-V
|
# ? Feb 15, 2018 20:33 |
|
Thanks Ants posted:The only thing I'd watch out for is whether you run any virtual appliance type services that are only supported on VMware/Hyper-V Nope... nothing but a handful of basic Windows VMs (SQL, RDS farm, AD, WSUS) and some Ubuntu server VMs (UniFi, webserver, etc.). Hmm... decisions, decisions...
|
# ? Feb 15, 2018 20:38 |
|
If their HQ is only 50 miles away then it sounds like you'll get treated well by them. How do they handle situations where you just want to add storage - or do you have to add compute at the same time? How well does it integrate with your backup provider etc.
|
# ? Feb 15, 2018 20:50 |
|
Thanks Ants posted:If their HQ is only 50 miles away then it sounds like you'll get treated well by them. How do they handle situations where you just want to add storage - or do you have to add compute at the same time? How well does it integrate with your backup provider etc. Looks like you have to add a node, though they have what they call Storage Nodes that only add additional storage. But 10TB is way more than we will need for some time. Even factoring in several years of growth at current rates.
|
# ? Feb 15, 2018 20:59 |
|
stevewm posted:Anyone dealt at all with Scale Computing? (https://www.scalecomputing.com/) This is basically Nutanix, and I wouldn’t be shocked to learn that they have some of the same performance issues Nutanix has, particularly with monolithic workloads.
|
# ? Feb 15, 2018 21:07 |
|
YOLOsubmarine posted:This is basically Nutanix, and I wouldn’t be shocked to learn that they have some of the same performance issues Nutanix has, particularly with monolithic workloads. Care to elaborate a bit? What kind of issues....
|
# ? Feb 15, 2018 21:08 |
|
stevewm posted:Care to elaborate a bit? Nutanix suffers from some issues related to their storage VM architecture that can limit single-workload performance pretty heavily. They also have some issues with metadata management that have bitten a couple of our customers. Scale does storage in kernel, though you're still going to be limited by the resources allocated to the kernel for storage processing, but that's certainly much cleaner than the Nutanix method. The spinning media footprint on each node is generally very small as well (just a few spindles), so if you're not doing a great job of populating cache, performance can really tank. And node rebuilds are really painful.
|
# ? Feb 15, 2018 21:34 |
|
YOLOsubmarine posted:Nutanix suffers from some issues related to their storage VM architecture that can limit single-workload performance pretty heavily. They also have some issues with metadata management that have bitten a couple of our customers. Good things to think about... Our workload is nearly all read... nearly 90% if the LiveOptics report is to be believed, though that makes sense given our primary application (a point-of-sale app). With our existing SQL instance and physical server (which has 24GB of RAM dedicated to it), the disk gets hit very little under normal operation.
|
# ? Feb 15, 2018 21:56 |
|
Backups, full table scans, or otherwise lovely programming that causes huge amounts of disk churn are generally the things that will cause SSD cache schemes to fall over and performance to collapse. Buyer beware, especially now that all-flash is coming in below the $/GB of 10K SAS and edging into NL-SATA.
|
# ? Feb 15, 2018 22:01 |
|
stevewm posted:Good things to think about... You might also want to consider backup. Availability can be handled through replication, but data recovery can be tricky: most popular virtual backup solutions leverage APIs like VADP for VMware or VSS for Microsoft, and such a thing may not be available here. So you're stuck with agent-based backup and recovery or something leveraging native snapshots. Do they have a workflow for doing file-level restores from snapshots and replica targets?
|
# ? Feb 15, 2018 23:57 |
|
Boris Galerkin posted:e: After reading about it I’m not sure if I absolutely need a Linux host. I do a lot of numerical/computational stuff, so it’s pretty much been Linux all the way down because I don’t want to recompile libraries/tools for different architectures, and also because all the computers I use are also Linux so it just makes everything easier. Wouldn't the Windows Subsystem for Linux cover you here? https://docs.microsoft.com/en-us/windows/wsl/about
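If you want to try it, turning the feature on (on current Windows 10 builds as of this writing) is a one-liner from an elevated PowerShell prompt; then you reboot and grab a distro like Ubuntu from the Microsoft Store:

code:
# Enable the Windows Subsystem for Linux optional feature
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux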
|
# ? Feb 16, 2018 02:15 |
|
SamDabbers posted:I use KVM with virt-manager as the UI and find it well suited for desktop virtualization. It gives a VirtualBox/VMware-like experience, works well, and is extremely feature-rich. KVM/virt-manager: so I just looked these things up and I don't get it. Do I not need VirtualBox/VMware? I only downloaded VirtualBox because that's just the first name that comes up when I think of VMs. I'm not sure what version of Office I'm using. I have an Office 365(?) subscription and I think part of that deal is I always get to use the newest version of their software. I think it's 2016 though. I don't have any standalone copies of Office. Fake edit: Is it possible to boot up Windows 8.1/10 in something like "headless" mode and then start up Windows apps and run them with a GUI, kinda like what you could do with X11 on Linux?
|
# ? Feb 16, 2018 18:45 |
|
KVM is an alternative to VirtualBox as a platform for hosting virtual machines on Linux, the main differences being that KVM is Linux-exclusive and fully open-source, whereas VirtualBox has versions available on multiple platforms (with some closed-source bits, like the extension pack). Virt-manager is just a frontend for KVM that is readily available, well supported, and straightforward to use in a normal desktop environment. There are other frontends like the CLI or Wok/Kimchi (a web interface) that you could use instead, just like you can use either VMware's Windows client or the vCenter web client to access an ESXi host. If you're just looking to run a single Windows install to have access to MS Office, it probably doesn't matter much which you use.
|
# ? Feb 16, 2018 23:48 |
|
Boris Galerkin posted:Fake edit: Is it possible to boot up Windows 8.1/10 in something like "headless" mode and then start up Windows apps and run them with a GUI, kinda like what you could do with X11 on Linux? Just use RDP.
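Turning that on inside the Windows guest is quick; a minimal sketch from an elevated PowerShell prompt (the firewall rule group name assumes an English-language install), after which you connect from the Linux host with something like xfreerdp or Remmina:

code:
# Allow incoming RDP connections to the guest
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name 'fDenyTSConnections' -Value 0

# Open the Windows firewall for RDP
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'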
|
# ? Feb 17, 2018 09:02 |
|
Posting in this thread because it's such a weird situation, so I'm just hoping someone can confirm or deny that this is the worst idea ever. We have a handful of overseas developers using Visual Studio and a few other dev programs inside of a Windows 10 Horizon View pool I had set up at the request of a previous boss. Those devs are going to need to start using Docker for Windows as well, so I am testing a deployment in another pool to see how it will run (hint: it's bad). We have a pretty basic setup with each user having a single writable App Volume for all of the data/software activation/whatever they would happen to change locally. Everything else is baked into the master VM. Has anyone dealt with this or anything similar? At this point I'm looking into a manual pool with assigned virtual or even physical desktops because the performance is so bad and the use of writable volumes has introduced a ton of small quirks with logging on/off.
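For context on the nested part: Docker for Windows runs its Linux VM under Hyper-V, so each clone has to have hardware-assisted virtualization exposed to it. Roughly this per VM in PowerCLI, as a sketch (made-up VM name, VM powered off first):

code:
# Expose hardware virt to the guest so Hyper-V can run inside it
$vm = Get-VM -Name 'win10-docker-test'
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)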
|
# ? Feb 19, 2018 20:39 |
|
What kind of storage is backing App Volumes, and where is Docker actually running? Linked clones? JMP? I have a tendency to be immediately suspicious of storage when there are VDI complaints. Got protocol fulfillment latency data on the datastore backing the clones and the apps?
|
# ? Feb 19, 2018 22:45 |
|
Potato Salad posted:What kind of storage is backing App Volumes, and where is Docker actually running? Everything is on vSAN, and Docker is running via nested virt in the linked clones themselves. The vSAN storage is limited to Horizon workloads. I'm not saying it's a storage problem, I'm just curious if there are any options aside from throwing a ton of memory and vCPU at each clone.
|
# ? Feb 20, 2018 01:28 |
|
Look at your vSAN 24h performance before throwing more resources at it: in the Flash vCenter client, go to the cluster of interest > Monitor > Performance > vSAN. Congestion (self-throttling)? Latency? What about guest resources? When a dev is logged in, do you actually see significant CPU wait time or consumption on the guests? Potato Salad fucked around with this message at 01:39 on Feb 20, 2018 |
# ? Feb 20, 2018 01:34 |
|
Eletriarnation posted:KVM is an alternative to VirtualBox as a platform for hosting virtual machines on Linux, the main differences being that KVM is Linux-exclusive and fully open-source, whereas VirtualBox has versions available on multiple platforms (with some closed-source bits, like the extension pack). Well, GNOME Boxes is an alternative to VBox using KVM. But KVM itself is more of an alternative to vmkernel/Hyper-V. Basically, KVM is a relatively small kernel module which allows for virt. qemu provides the emulated hardware (which VMware/Hyper-V/VBox/etc. all do also). libvirt provides the "glue" between raw KVM and user interfaces. I mean, for a single guest, this doesn't matter at all, but KVM is absolutely a 'bare metal' hypervisor which doesn't require a user interface, and it backs a number of very large cloud deployments.
|
# ? Feb 20, 2018 11:50 |
|
I've always wondered about the history of why KVM and qemu are two separate things. It kinda seems like neither of them is too useful on its own.
|
# ? Feb 20, 2018 15:57 |
|
Thermopyle posted:I've always wondered about the history of why KVM and qemu are two separate things.
|
# ? Feb 20, 2018 17:14 |
|
Thermopyle posted:I've always wondered about the history of why KVM and qemu are two separate things. Well, qemu used to have an "accelerated" kmod back in 2002 or something, and KVM was designed to slot into that. Basic kernel stuff. KVM should be small and abstracted, providing only device nodes. Anything which wants to use those device nodes (KVM, LVM, DRI, whatever) needs to include all of its own baggage. qemu was a reasonably good starting point for "we can emulate disk controllers, BIOS, interrupts, etc", which would never make it into the kernel anyway. In theory, qemu-kvm is a forked version of qemu which stubs out kqemu with kvm, but that's nitpicky. More to the point, as noted, qemu is useful on its own for emulating other architectures. KVM gets leveraged without qemu by lkvm/kvmtool/whatever to provide additional abstraction/security for Clear/Kata containers.
|
# ? Feb 20, 2018 17:21 |
|
qemu significantly predates even widespread availability of hardware virtualization. qemu's first public release (0.1) was in March of 2003, whereas Intel's VT-x didn't come along until November of 2005 and AMD-V followed in May of 2006. Before that, hardware virtualization was pretty much a "big iron" feature only seen in datacenters. It wasn't until 2009 that qemu actually gained KVM support; up until then it was a purely software solution. It still kinda blows my mind that these days ARM chips are beginning to have hardware virtualization support. wolrah fucked around with this message at 20:25 on Feb 20, 2018 |
# ? Feb 20, 2018 20:23 |
|
wolrah posted:qemu significantly predates even widespread availability of hardware virtualization. qemu's first public release (0.1) was in March of 2003, whereas Intel's VT-x didn't come along until November of 2005 and AMD-V followed in May of 2006. Before that, hardware virtualization was pretty much a "big iron" feature only seen in datacenters. It wasn't until 2009 that qemu actually gained KVM support; up until then it was a purely software solution. https://play.google.com/store/apps/details?id=fr.energycube.android.app.com.limbo.emu.main.armv7
|
# ? Feb 22, 2018 07:31 |
|
96 Port Hub posted:https://play.google.com/store/apps/details?id=fr.energycube.android.app.com.limbo.emu.main.armv7 That's binary translation. ARM hardware virt is essentially the same as it is on PPC or x86 -- it allows hypercalls. It is not an implementation of the x86 ISA. See here if you want an in-depth read.
|
# ? Feb 22, 2018 13:40 |