|
Really need VMware to update the open-vm-tools-deploypkg packages on their yum repo. It seems to fail the dependency check against whatever is in EPEL for CentOS/RHEL 7 nowadays. I think EPEL updated it when they released RHEL/CentOS 7.2, and now I'm having to deal with dependency failures on all my updates. Is there an email address I can send suggestions like this to? I'm not even sure where I'd go about reporting a bug with their yum repo.
|
# ? Dec 24, 2015 02:03 |
|
HPL posted:Two returns to the local computer store later, I gave up on trying current RAM and ended up eBaying slower RAM which thankfully worked. Originally, I was like: "Sweet, 4 DIMM slots, I'll fill 'er with 4 8GB DIMMs". Nope. Wouldn't POST. Dug around some, found out the old motherboard chipset wouldn't support 8GB DIMMs, but could possibly support 4GB DIMMs. Okay, back to the store, eat the restocking fee, and I go home and try the 4GB-1600 DIMMs. Nope, still wouldn't POST. Returned that RAM, ate another restocking fee, and then ordered some actual 1333 from a Chinese eBay seller. Just arrived yesterday and thank god it works. Gigantic pain in the keister. And to think that I have bitched about getting the RAM slots right on a G7... I won't complain about that again, that's for sure, because that's a walk in the park compared to this poo poo, heh.
|
# ? Dec 24, 2015 02:35 |
|
Martytoof posted:Really need VMware to update the open-vm-tools-deploypkg packages on their yum repo. Seems to fail dependency check against whatever is in EPEL for CentOS/RH 7 nowadays. I think EPEL updated it when they released RH/CentOS 7.2 and now I'm having to deal with dependency failures on all my updates. Look at RHBA-2015:2246-2: deploypkg was brought into the 9.10.2 tools, so in theory you can remove the VMware packages totally.* (* untested; this is on our todo list with SRM and templates in the new year)
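For anyone wanting to verify before ripping the package out: a quick sketch of a version check against the 9.10.2 threshold Viktor mentions. The `version_ok` helper name is made up for illustration; the comparison leans on coreutils' `sort -V`.

```shell
# Succeeds when the given open-vm-tools version is at least 9.10.2,
# the release that absorbed the deploypkg functionality.
version_ok() {
    [ "$(printf '%s\n' "9.10.2" "$1" | sort -V | head -n 1)" = "9.10.2" ]
}

version_ok "10.0.0" && echo "deploypkg built in; VMware repo not needed"
# prints: deploypkg built in; VMware repo not needed
```

Feed it the output of `rpm -q --qf '%{VERSION}' open-vm-tools` on your template to check a real box.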
|
# ? Dec 24, 2015 17:47 |
|
Viktor posted:Look at Rhba-2015:2246-2 deploypkg was brought into 9.10.2 tools so in theory you can remove the VMware packages totally* Oh neat, that would be amazing. Not that I don't appreciate you guys making the repo available, but the fewer repos I have to deal with, the better off I am.
|
# ? Dec 24, 2015 20:24 |
|
Martytoof posted:Oh neat, that would be amazing. Not that I don't appreciate you guys making the repo available, but the less repos I have to deal with the better off I am. Oh, sorry for the confusion! I'm an end user too, on a big team, and will be testing that the features still work for our environment (SRM/vRealize). Agreed, one less repo on Satellite makes me a happy camper!
|
# ? Dec 25, 2015 04:15 |
|
Viktor posted:Oh sorry for the miss confusion! I'm an end user too on a big team and will be testing that the features still work for our environment(srm/vrealize).
|
# ? Dec 25, 2015 05:08 |
|
I never got around to posting in SH/SC until recently, but it's nice to see we have a VM thread. I'm an avid Xen and Hyper-V user, but professionally I use VMware ESXi.
|
# ? Dec 25, 2015 05:11 |
|
I'm working on an ESXi shell script to manage VMs, since the embedded web client won't power VMs on or off and the vSphere Client won't run in Linux. So far it's working, except I'm trying to come up with a way to take user input and convert it to upper case. Normally in bash I'd use tr '[a-z]' '[A-Z]', but ESXi doesn't have the tr command. Any ideas? Basically I want to accept user input in either upper or lower case without using a bunch of OR statements.
HPL fucked around with this message at 07:53 on Dec 25, 2015 |
# ? Dec 25, 2015 07:49 |
|
awk's toupper(). I think busybox's sed is very limited and won't have the GNU extensions, but you could always go through the whole range of characters you want, I guess. toupper() is nicer.
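Since ESXi's busybox environment does ship awk, a minimal sketch of the normalization (the `to_upper` function name is just illustrative):

```shell
# busybox on ESXi lacks tr, but its awk has toupper(), so pipe the
# user's input through awk to normalize case before the case statement.
to_upper() {
    printf '%s' "$1" | awk '{ print toupper($0) }'
}

action=$(to_upper "poweron")
echo "$action"   # prints: POWERON
```

A `case "$action" in POWERON) ... ;; esac` then only needs one pattern per command instead of OR'd upper/lower variants.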
|
# ? Dec 25, 2015 12:12 |
|
Anyone messed around with the Hyper-V Server 2016 technical preview? I got ahold of a copy for free through my master's program and am debating whether or not to replace my 2012 Hyper-V setup running my lab environment.
|
# ? Dec 26, 2015 19:34 |
|
I'm considering an upgrade to El Capitan over my holiday break. Does VirtualBox still have any major issues outstanding on this OSX version?
|
# ? Dec 27, 2015 06:03 |
|
What's the best (cheapest) way of virtualising a legit copy of Windows? If I install with an OEM key and I need to scrap the VM and reinstall it then Windows will think my hardware has changed, yes? Then it won't let me use the OEM key a second time.
|
# ? Dec 27, 2015 10:47 |
|
DeaconBlues posted:What's the best (cheapest) way of virtualising a legit copy of Windows? The virtual hardware should be similar enough that Windows will pass. You only need to call (and you can call to get a bypass) if 3 major components have changed, but it'll probably just be the DMI UUID. Really, you can just snapshot the VM when you install it and move the disk files around (if you want it on a different machine) or roll back to the snapshot (if you break it).
|
# ? Dec 27, 2015 13:38 |
|
Vulture Culture posted:I'm considering an upgrade to El Capitan over my holiday break. Does VirtualBox still have any major issues outstanding on this OSX version? VirtualBox 5.x has been fine for me since I upgraded.
|
# ? Dec 27, 2015 16:25 |
|
Is there a good way to monitor host memory utilization while factoring in dedupe/compression/resource priority and all that? I'm about to split our environment into two active/active identical clusters at different sites instead of doing an active/standby failover. VMs are divided into high/normal/low priority resource pools, and we accept that anything in low (dev and test) will be degraded due to memory pressure, so I have to factor those in. It wasn't too hard with active/standby because there were enough VMs to take the hosts to at least 80% utilization, which kicked in all the memory tricks and helped keep the consolidation ratio high, but with the new config I need to target more like 50-60% because each cluster needs enough capacity to fail over onto itself. At that point VM memory is going to be allocated in 2MB large pages, so dedupe/compression is effectively off until the host comes under memory pressure. The quick and easy but maybe not good solution would be to disable large page allocations on the entire environment so dedupe is always working, like 3.5 used to behave, and then I know roughly how big my VMs will be. Active memory is an okay-ish stat once I subtract out the dev environment, but it still leaves me in the dark on how much overhead I will buy myself when memory pressure is up.
|
# ? Dec 27, 2015 16:55 |
|
evol262 posted:The virtual hardware should be similar enough that Windows will pass. You only need to call (and you can call to get a bypass) if 3 major components have changed, but it'll probably just be the dmi uuid. Really, you can just snapshot them VM when you install it and move the disk files around (if you want it on a different machine) or roll back to the snapshot (if you break it) Thanks. Snapshots make a lot of sense, too. Cheers!
|
# ? Dec 27, 2015 17:12 |
|
DeaconBlues posted:What's the best (cheapest) way of virtualising a legit copy of Windows? Buy a retail copy.
|
# ? Dec 27, 2015 17:50 |
|
BangersInMyKnickers posted:VMs are divided in to high/normal/low priority resource pools Can you go into more detail about how your resource pools are configured? Because I think I know how you have this set up, and if so, it isn't working the way you think it is.
|
# ? Dec 27, 2015 18:00 |
|
Vulture Culture posted:Can you go into more detail about how your resource pools are configured? Because I think I know how you have this set up, and if so, it isn't working the way you think it is. Each resource pool is under the cluster root, with resource shares allocated to the default high/normal/low for CPU and memory, and VMs placed under them. This was all set up in 3.5, upgraded through 6, and I think it's working. When I take a host down for maintenance I definitely see the VMs in the low priority resource pool come under balloon and vswap pressure before everything else. We don't really have CPU contention at this point, so nothing to really see there. No limits on the pools.
|
# ? Dec 27, 2015 18:28 |
|
BangersInMyKnickers posted:Each resource pool is under the cluster root, with resource shares allocated to the default high/normal/low for CPU and memory and VMs placed under them. This was all set up in 3.5 and upgraded through 6 and I think its working. When I take a host down for maintenance I definitely see the VMs in the low priority resource pool come under balloon and vswap pressure before everything else. We don't really have CPU contention at this point so nothing to really see there. No limits on the pools. Duncan Epping had a diagram a few years ago: http://www.yellow-bricks.com/2009/11/13/resource-pools-and-shares/ What you need to remember about CPU shares in VMware resource pools is that each pool is granted a total number of shares, and this is not a template for the number of shares on VMs within the pool. This is a really important distinction. High/Normal/Low shares are allocated as 4:2:1 weights; that is, a VM with High priority has double the shares of a VM with Normal priority. Likewise, a VM with Normal priority has double the shares of a VM with Low priority. More technically, it breaks down as: High 2000 shares per vCPU, Normal 1000 shares per vCPU, Low 500 shares per vCPU. So let's say you're running in an environment where you're using High to reserve CPU for production VMs, Normal for production batch systems, and Low for development systems. You're doing this without using resource pools: you're setting the share priority on each VM. So, you've got 20 VMs under High, 8 under Normal and 3 under Low, because you obviously need more scale to support your production workloads than you do your development ones. When you assign share priority to each of these, each High VM gets 2000 shares, each Normal VM gets 1000, and each Low VM gets 500 (assuming one vCPU apiece), so every production VM outweighs every development VM.
Now let's sweep these under pools instead. You have a pool that has High priority, a pool that has Normal priority, and a pool that has Low priority. Each pool gets a fixed total number of shares, and that total is divided among however many VMs the pool contains.
What you've done here is the opposite of what you intended: under heavy CPU contention, your dev VMs are getting higher priority than your production systems, because the pool is less loaded. Now let's gently caress it up more: we'll create a VM and forget to assign it to a resource pool, so it has Normal shares (1000). Your new VM gets as many CPU shares as the entire Normal resource pool put together. Of course, if you are rarely oversubscribing CPU, or if you have generally very bursty workloads, this is probably a moot point. But my general recommendation is to never, ever set CPU shares on a resource pool unless you're trying to solve a very specific problem and know exactly what you're doing. Vulture Culture fucked around with this message at 00:18 on Dec 28, 2015 |
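To make the inversion concrete, here's a toy calculation. The per-pool totals below assume default resource pool share values of High=8000 / Normal=4000 / Low=2000 CPU shares; verify those against your vSphere version's documentation before relying on the exact numbers, since only the 4:2:1 ratio is stated above.

```shell
# Per-VM effective shares = pool's total shares / number of VMs in the pool.
# Pool totals (8000/4000/2000) are assumed defaults, not from the thread.
awk 'BEGIN {
    split("High Normal Low", name)
    pool[1] = 8000; count[1] = 20   # 20 production VMs
    pool[2] = 4000; count[2] = 8    # 8 batch VMs
    pool[3] = 2000; count[3] = 3    # 3 dev VMs
    for (i = 1; i <= 3; i++)
        printf "%-6s: %4d shares / %2d VMs = %3d shares per VM\n",
               name[i], pool[i], count[i], pool[i] / count[i]
}'
```

Under these assumptions each dev VM ends up with roughly 666 shares against 400 for each production VM, which is exactly the inversion described above.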
# ? Dec 28, 2015 00:15 |
|
Vulture Culture posted:But my general recommendation is to never, ever set CPU shares on a resource pool unless you're trying to solve a very specific problem and know exactly what you're doing.
|
# ? Dec 28, 2015 05:35 |
|
I see so many customers with hosed up resource pool setups because they don't understand how the CPU shares are allocated. The worst are the ones where there's a resource pool for high performance VMs and then just random VMs sitting at the root of the cluster not in any resource pool.
|
# ? Dec 28, 2015 06:51 |
|
Last note: Especially with respect to shares, it's helpful to think of resource pools as vApps, because they work the exact same way.
|
# ? Dec 28, 2015 07:15 |
|
Vulture Culture posted:Of course, if you are rarely oversubscribing CPU, or if you have generally very bursty workloads, this is probably a moot point. But my general recommendation is to never, ever set CPU shares on a resource pool unless you're trying to solve a very specific problem and know exactly what you're doing. Good to know, thanks. I just assumed that the share value for CPU would scale proportionally to the number of VMs in the pool like memory does. I threw the CPU shares back to normal for all the pools, though I try to keep CPU load below 70% because ready time starts getting bonkers past that point anyhow. I think if I was ever under CPU contention, how I had it set up would have worked out alright, since there are only 8 servers in the high resource pool and there was no way they could ever saturate anywhere near their total share proportion, so everything else would have been left to normal/low to duke it out, and those have roughly 50 VMs in them each. Maybe I'll keep the lowered share value on the low pool but keep the other two on normal.
|
# ? Dec 28, 2015 16:34 |
|
BangersInMyKnickers posted:I try to keep CPU load below 70% because ready time starts getting bonkers at that point anyhow. Just out of curiosity, how often are you using multi-vCPU vms in your environment, and what's your metric for deciding it's warranted?
|
# ? Dec 28, 2015 18:55 |
|
stubblyhead posted:Just out of curiosity, how often are you using multi-vCPU vms in your environment, and what's your metric for deciding it's warranted? About 75% are single vCPU, 15% are 2 vCPU, and 10% are 3-4. We're currently on 6-core processors on the hosts, so I'm really iffy about going over 3 vCPU per VM if I can avoid it, as tying up more than half of a socket for a single VM seems like it would wreak scheduling havoc. Generally I start everything at a single vCPU and then scale up if any bottlenecks are causing real-world impact, rather than reacting to a CPU alarm that might not be negatively impacting anything. And I only add that second vCPU after I've confirmed that the application with the CPU bottleneck is properly threaded to actually use it. My 4 vCPU servers are ArcGIS servers driven by user load, so they justify it. Ready time/readiness is something you should be watching, with 1-5% deemed acceptable; at 5%+ you should start worrying. I've managed to keep practically all servers down at around 0.4% readiness across the board with 110 Windows VMs on 3 dual-socket 5-year-old hosts, so I think I've managed it pretty well so far.
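For anyone turning vCenter's raw cpu.ready.summation counter into the percentages quoted above: the realtime charts report ready time in milliseconds per sample, and a sketch of the conversion looks like this. The 20-second interval is the realtime chart's default; other stat rollups use different intervals, and the `ready_percent` name is made up for illustration.

```shell
# readiness % = ready ms / (sample interval in ms * vCPU count) * 100
ready_percent() {
    # $1 = cpu.ready.summation in ms, $2 = vCPU count (default 1)
    awk -v ms="$1" -v vcpus="${2:-1}" 'BEGIN { print ms / (20000 * vcpus) * 100 }'
}

ready_percent 400       # 400 ms over a 20 s sample on 1 vCPU -> prints 2
ready_percent 4000 4    # 4000 ms spread across 4 vCPUs -> prints 5
```

By that math, a 1-vCPU VM crossing 1000 ms of ready time per sample is at the 5% worry threshold mentioned above.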
|
# ? Dec 28, 2015 19:31 |
|
My really bad metric is just checking the loadavg: if it's floating above one consistently then I give it two, if it's floating at two I give it four, etc. Unless it's super obvious. Like, I'm running Hercules, which is emulating a system with four CPUs, so I gave that VM four vCPUs; I gave Splunk eight vCPUs; etc. If I'm doing a lot of compiling I'll give that machine more than one as well so I can efficiently run "make -j 2" for speed.
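That heuristic is easy to script on the guest side; a Linux-only sketch, where the "double it" rule is just the poster's rule of thumb, not a vendor recommendation:

```shell
# Compare the 1-minute load average against the current vCPU count and
# suggest a new size following the "if it floats above N, double it" rule.
load=$(awk '{ print $1 }' /proc/loadavg)
vcpus=$(nproc)
awk -v l="$load" -v c="$vcpus" 'BEGIN {
    if (l > c) printf "loadavg %.2f > %d vCPUs: consider doubling to %d\n", l, c, 2 * c
    else       printf "loadavg %.2f fits within %d vCPUs\n", l, c
}'
```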
|
# ? Dec 28, 2015 20:09 |
|
Viktor posted:Look at Rhba-2015:2246-2 deploypkg was brought into 9.10.2 tools so in theory you can remove the VMware packages totally* In my entirely unscientific testing (converted my template to a VM, removed the vmware repo and open-vm-tools-deploypkg, converted back to template, and deployed to a new VM), this works exactly as expected, and the vmware repo is now unnecessary for anyone who's installing open-vm-tools 9.10.2+. Thanks!
|
# ? Dec 29, 2015 20:49 |
|
Martytoof posted:My entirely unscientific testing (converted my template to a VM, removed the vmware repo and open-vm-tools-debloypkg, converted back to template and deployed to new VM) this works exactly as expected, and the vmware repo is now unnecessary for anyone who's installing open-vm-tools 9.10.2+ Awesome thanks for the heads up!
|
# ? Dec 30, 2015 00:04 |
|
I can't remediate hosts with VUM from the web client; instead I need to use the discontinued fat client. Are you serious, VMware?
|
# ? Dec 30, 2015 06:50 |
|
Methanar posted:I can't remediate hosts with VUM with the web client, instead I need to use the discontinued fat client. Are you serious VMware? https://blogs.vmware.com/vsphere/2015/09/vsphere-update-manager-fully-integrated-interface-with-the-vsphere-web-client.html I believe you can, just need to be on 6.0 U1.
|
# ? Dec 30, 2015 14:04 |
|
So I've been watching this video: https://www.youtube.com/watch?v=LXOaCkbt4lI Some guy built a 2S system with seven graphics cards to run seven Windows VMs for gaming. Apparently he runs games like Crysis 3 in a VM at 1440p and decent framerates. Is the state of PEG virtualization that good these days on Linux?
|
# ? Jan 3, 2016 15:02 |
|
Wow, this might be something I will try. Windows 10 is not for me, but I do have some games I want to play. If I could run a VM with decent framerates and use a Steam Link to display it on my TV.....
|
# ? Jan 3, 2016 16:19 |
|
There has been some discussion about this in other threads. It's doable, but not as simple as you would hope. PC Building/Upgrading/Parts-picking Megathread Police Automaton posted:I did some research into this as I'm building exactly such a computer, and nowadays this works a lot better than it did just a few years ago; if you do it correctly, you'll basically get native performance in Windows for all practical gaming purposes. Your CPU has to be IOMMU capable; you'll probably go with Intel, so that means a VT-d capable CPU. You need to really look into whether the mainboard is doing it correctly; the implementation on some mainboards is apparently borked even if it's theoretically supported, and there are some lists of tested mainboards around if you search long enough. Also, you'd probably want to avoid Nvidia cards, as Nvidia seems to not like people using their non-Quadro cards for such purposes (even though there are workarounds, they apparently come at a performance loss, but take that with a grain of salt; I'm not sure how current that information is). I went with a 5820K because the six cores make everything easier and a bit more future-proof; a recent i5 is probably enough too though, seeing as you can overcommit CPUs and plenty of games don't really tax modern CPUs in any way whatsoever (an i7 does have a slight advantage here though). Be aware that you need plenty of RAM; my latest knowledge on the topic is that you can't do lazy allocation if you want to do passthrough. The Linux Questions Thread, SCO non-infringing version evol262 posted:Basically, passthrough works like this:
|
# ? Jan 3, 2016 16:57 |
|
My TS440 has a PCIe x16 slot and I have a spare GTX 660. Might try something in a couple of days.
|
# ? Jan 3, 2016 17:14 |
|
I'm currently trying to find out whether it's possible to hand the GPU device back and forth between VM and host. Say I want to play games: I kill Xorg and run the VM, and once the VM shuts down, Xorg fires up again, with scripts doing stuff where needed. Someone else suggested to me to prep a Linux host for VM business only and run another Linux instance on top when needed. A Linux host would maybe allow for some interesting clownery involving my FreeNAS box and iSCSI. --edit: Crap, Police Automaton's post is rather recent. So much for the card juggling. Although I've come across suggestions that cards with a UEFI BIOS are supposedly less bitchy. Combat Pretzel fucked around with this message at 18:42 on Jan 3, 2016 |
# ? Jan 3, 2016 18:38 |
|
I don't think that is possible, maybe with some scripts. But if it takes a reboot then you might as well dual boot. The nice thing about this is that I can make my main machine a Linux or OS X box and still play my PC games without dual booting. Luckily my TS440 has onboard video, so I can just use that for the host OS and run the Windows VM with a dedicated GPU attached. It would be awesome if you could partition your vid card; didn't some Nvidia cards allow this? One third for the main OS and maybe two thirds for the VMs.
|
# ? Jan 3, 2016 19:06 |
|
I have to evaluate things. I'd really like to start ditching Windows. I depend on Photoshop, Solidworks and games. I'd figure with the upcoming upgrade and having built-in graphics on the CPU, it might be an option to relegate the NVIDIA card to hosting Windows games only. Displays have multiple inputs, Photoshop can run in a regular unaccelerated VM, Solidworks I don't know, and for games I can switch the monitor input. Also depends on whether I can get over the lack of L4 cache on the 6700K.
|
# ? Jan 3, 2016 19:10 |
|
Combat Pretzel posted:So I've been watching this video: https://www.youtube.com/watch?v=LXOaCkbt4lI The short answer is "yes". I can't be hosed watching some video, and scrolling through it doesn't give any indication of whether that's GPU virtualization or just passthrough. Passthrough works fine in Linux, including with GRID cards/etc. There's a lot of work going into virtio-vga/virtio-gpu/virgil, but actual virtualization of GPUs isn't as good as VMware's. That said, judging from the hardware used, I don't think they did that in the video, either. Combat Pretzel posted:I'm currently trying to find out whether it's possible to hand around the GPU device between VM and host. Say if I want to play games, I kill Xorg and run the VM, and once the VM shuts down, Xorg fires up again. Scripts doing stuff where needed. In theory, PCIe hotplugging means that you can dynamically unload the stub module and let the device be claimed, but you'd need to write another kernel module to do it, which would never, ever get upstreamed/mainlined. If you just want a Windows VM for gaming/accelerated stuff, and you're passing through one (or multiple) GPU(s), this works great, and you can just use the onboard Intel graphics for your host. Otherwise, use multiple GPUs (like they did in this video), an SR-IOV GPU, or create a VM with the device passed through which dual boots. You can also pass the device through to multiple VMs and only have one running at a time. Combat Pretzel posted:--edit: Altho I've come across suggestions that cards with UEFI BIOS are supposedly less bitchy. Cards with a UEFI BIOS (all of them these days, basically) are easier, because UEFI is much nicer at the firmware level. But your OS needs to be booting in UEFI mode (with the requisite UEFI bootloader) to make it feasible. You can do it without, but stubbing/remapping VGA memory is a huge pain in the rear end, and best avoided. Your other concern with switching inputs on the monitor is that you need actual inputs. This is fine with Steam streaming, but otherwise, expect to need a second keyboard/mouse, or use a KVM.
Feel free to ask away. Mr Shiny Pants posted:It would be awesome if you could partition your vid card, didn't some Nvidia cards allow this? One third for the main OS and maybe two thirds for the VMs. This is GPU SR-IOV. GRID cards support it, among others. Some FireGLs also do. It's not really "partitioning", but it presents multiple PCIe device IDs so you can map the same card into multiple VMs, in much the same way as NIC SR-IOV.
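For reference, the host-side setup evol262 describes usually starts with a kernel command line like the following. The IOMMU flag is the standard one; the PCI vendor:device IDs here are examples only and must be replaced with your own card's IDs from lspci -nn.

```shell
# /etc/default/grub on an Intel host (use amd_iommu=on for AMD):
GRUB_CMDLINE_LINUX="... intel_iommu=on vfio-pci.ids=10de:11c0,10de:0e0b"
# Regenerate grub.cfg and reboot so the vfio-pci stub claims the GPU and
# its HDMI audio function at boot, before the host driver can grab them.
```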
|
# ? Jan 3, 2016 19:36 |
So if I flash my GTX 660 to UEFI firmware, use the OVMF BIOS in my VM, and have my server boot in UEFI mode, this should work? Cool, I can try that. The Steam Link is just a way to avoid needing a physical keyboard and monitor attached.
|
# ? Jan 3, 2016 21:32 |