|
BnT posted:Oh yeah, the reason I'm here; does anyone know of a way to quickly find a list of VMs that have snapshots from vCenter? vSphere 4.1u2 if it matters.
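The PowerCLI snippet that originally followed appears to have been lost; a sketch of the usual approach with Get-Snapshot (the vCenter hostname is a placeholder):

```powershell
# Connect to vCenter (hostname is hypothetical) and list every VM
# that has at least one snapshot, with size and creation date.
Connect-VIServer -Server vcenter.example.com

Get-VM | Get-Snapshot |
    Select-Object VM, Name, SizeMB, Created |
    Sort-Object Created
```

This needs an active Connect-VIServer session, so it only runs against a live vCenter or host.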
|
# ? Feb 23, 2012 18:11 |
|
Rob de Veij wrote a nice thing called RVTools that does a nice job of picking out all the things like old snapshots, out-of-date VMware Tools installs, etc etc, as well.
|
# ? Feb 23, 2012 18:47 |
|
Mierdaan posted:Rob de Veij wrote a nice thing called RVTools that does a nice job of picking out all the things like old snapshots, out-of-date VMware Tools installs, etc etc, as well. This. Is. Awesome.
|
# ? Feb 23, 2012 22:55 |
|
Mierdaan posted:Rob de Veij wrote a nice thing called RVTools that does a nice job of picking out all the things like old snapshots, out-of-date VMware Tools installs, etc etc, as well.
|
# ? Feb 23, 2012 23:03 |
|
Any suggestions for pasting info from the clipboard into a VM console window in vSphere? I've been playing around with AutoHotkey, but I haven't been able to get anything working.
|
# ? Feb 23, 2012 23:20 |
|
Mierdaan posted:Rob de Veij wrote a nice thing called RVTools that does a nice job of picking out all the things like old snapshots, out-of-date VMware Tools installs, etc etc, as well. Wow, thanks! Zombie VMDKs are listed on the vHealth tab too, this thing rocks.
|
# ? Feb 24, 2012 00:31 |
|
stubblyhead posted:Any suggestions for pasting info from the clipboard into a VM console window in vSphere? I've been playing around with AutoHotkey, but I haven't been able to get anything working.
|
# ? Feb 24, 2012 12:52 |
|
Anyone a VMUG Advantage member? http://www.vmug.com/p/cm/ld/fid=10 It's easy to see the value in the discounts on classes, tests, VMworld, and software if you plan on using them. Unfortunately for us, the budget isn't there this year for most of that. I'm curious how useful the eLearning courses are and whether they're worth the cost on their own.
|
# ? Feb 24, 2012 16:40 |
|
Fancy_Lad posted:Anyone a VMUG Advantage member? Holy poo poo. $200 would save me $700 on the VCP course alone. I'd drop that for myself in a heartbeat.
|
# ? Feb 24, 2012 16:46 |
|
stubblyhead posted:Any suggestions for pasting info from the clipboard into a VM console window in vSphere? I've been playing around with AutoHotkey, but I haven't been able to get anything working. In 4.1 and higher it is disabled by default, for security. http://kb.vmware.com/kb/1026437
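Per that KB, re-enabling copy/paste is a per-VM .vmx change (these two settings are what KB 1026437 documents; the VM needs to be powered off when you edit them):

```
isolation.tools.copy.disable = "FALSE"
isolation.tools.paste.disable = "FALSE"
```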
|
# ? Feb 24, 2012 18:14 |
|
So I've gotten myself into a bit of a sticky situation. Here's what I've got right now: a standalone ESX 4.0 server with a local VM store on the machine and an iSCSI RAID. When we first set it up we didn't have the RAID, so machines went on the local store. I'm now trying to move a machine from the local store to the external RAID. The machine is shut down, because we don't have fancy anything. In the Datastore Browser I right-clicked on the machine's folder, chose Move, and told it to move to the new datastore. It's painfully slow, which I guess is fine, but there's a problem. This machine has a 500 GB iSCSI device mapped to it as an RDM, so there's also a 500 GB vmdk file (actually there are three for some reason, not sure about that) that isn't really a vmdk. Now that I'm moving the folder, it's trying to move those 1.5 TB "files." Two problems. First, I'm not sure what it's going to do with the data on the iSCSI device when it does the "delete" part of the move. Second, the datastore I'm moving the machine to is only 1 TB, so it can't hold 1.5 TB of fake VMDKs. And I can't cancel the operation, because... I don't know why. In the progress window the Cancel button is greyed out (btw, this is going to take about 3000 minutes), and if I right-click on the Move operation in the vSphere client, cancel is greyed out there too. So I'm on a clock before everything shits the bed, I guess.
|
# ? Feb 24, 2012 19:39 |
|
Are there other VMs on the array? If not, can you just yank a cable and wait for a timeout?
|
# ? Feb 24, 2012 19:49 |
|
Erwin posted:Are there other VMs on the array? If not, can you just yank a cable and wait for a timeout? Basically everything is on that array. I might just have to reboot the VM server tonight after hours.
|
# ? Feb 24, 2012 19:51 |
|
There are a lot of ways around this, but the simplest, since the VM is already shut down, would be to remove the RDM from the VM during the move. Choosing to remove and delete from disk doesn't delete data on an RDM, but it will remove that VMDK pointer from the folder. You can re-add it after the move. As for the operation in progress, I believe you can cancel that from the command line.
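If PowerCLI is available, something like this may do it (a sketch; the name filter is an assumption, and vCenter refuses to cancel some task types):

```powershell
# See what's running first:
Get-Task | Where-Object { $_.State -eq 'Running' } |
    Format-Table Name, PercentComplete, StartTime

# Assuming the file move shows up with 'Move' in its task name,
# attempt to cancel it:
Get-Task | Where-Object { $_.State -eq 'Running' -and $_.Name -match 'Move' } |
    Stop-Task -Confirm:$false
```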
|
# ? Feb 24, 2012 20:07 |
|
KS posted:In terms of the operation in progress, I believe you can cancel that from the command line. Any idea how to do that?
|
# ? Feb 24, 2012 20:14 |
|
Mierdaan posted:Rob de Veij wrote a nice thing called RVTools that does a nice job of picking out all the things like old snapshots, out-of-date VMware Tools installs, etc etc, as well. Trying this Monday.
|
# ? Feb 25, 2012 00:56 |
|
1000101 posted:Generally something like video transcoding or rendering or anything of that sort will use 100% of the CPU resources you throw at it. For some reason I thought GPU passthrough worked well now, but I have no idea where I got that impression. I guess that means I'll have to do all decoding in software, which is fine- but I'm really not sure a single 4-core VM would be able to decode and re-encode two or three 1080p movies at once. If I went with the 6-core, I was actually planning to assign the transcoding VM six vCPUs and one vCPU to each of the rest of the VMs, my reasoning being that the other VMs weren't really going to be that CPU-intensive. The ZFS file server VM is going to be running a 12-disk, 3-vdev pool, and it doesn't see much action. The most that happens on it is streaming media to an HTPC and a few computers here and there. Nothing super intensive, so I'm not going to devote that many resources to it in the first place (except for RAM, of course, since ZFS loves RAM). I'm probably going to host the VMs off two leftover 750GB drives in a RAID1, if possible. I know multiple VMs absolutely kill platter drives with random access, but the VMs won't be doing much disk activity anyway. I might spring for 2x 120GB SSDs instead and just try to keep the OS sizes down, but I'd rather not spend the money. BnT posted:I'd make sure that whatever you're getting supports VT-d or AMD's thingy, especially if you're planning on running a guest with ZFS direct access. For this budget you could easily make a pretty beefy system centered around a Sandy Bridge Xeon E3 and the Intel C20x series chipsets, but that would require ECC (which I would recommend anyway); also, this would only get you four cores. Thanks for the info- I knew the free version only supported up to 32GB of RAM, but didn't realize it only supported a single socket, too.
That does change my plans- I was starting to think it would be best to wait for a dual-socket 2011 board and start off with one quad core for now. I've made sure to factor VT-d and IOMMU into the boards and CPUs I'm looking at. I guess I'll just need to spend some time this weekend running tests with a VM on my 2500K-powered desktop to see what video transcoding in a VM is like with four cores. mpeg4v3 fucked around with this message at 01:27 on Feb 25, 2012 |
# ? Feb 25, 2012 01:24 |
|
mpeg4v3 posted:For some reason I thought GPU passthrough worked well now, but I have no idea where I got that impression from. I guess that means I'll have to do all decoding in software, which is fine- but I'm really not sure if a single 4-core VM would be able to decode and reencode to 1080p with two or three movies at once. If I went with the 6-core, I was actually planning to assign the VM all six cores, and assign one to each of the rest of the VMs, with my reasoning being that the other VMs were not really going to be that CPU-intensive.
|
# ? Feb 25, 2012 01:25 |
|
How do you all do hardware monitoring of your HP and Dell (and others) vSphere hosts? The hardware status in vCenter/VI Client will show you basic status, but one thing I always liked about HP servers was that they had a Windows agent that would simply email you if you lost a PSU or hard drive or something. What's the recommended option for VMware without having to stand up a dedicated monitoring server with the HP software and the Dell software, etc.?
|
# ? Feb 25, 2012 10:59 |
|
Bitch Stewie posted:How do you all do hardware monitoring of your HP and Dell (and others) vSphere hosts?
|
# ? Feb 25, 2012 11:04 |
|
Bitch Stewie posted:What's the recommended option for VMware without having to stand up a dedicated monitoring server with the HP software and the Dell software etc.
|
# ? Feb 25, 2012 15:20 |
|
Can I get some guidance on how I should configure HA with a two-host cluster? A host had a hardware issue and I saw "insufficient resources to satisfy HA failover" in vCenter, where I'd have expected the other host to take over. I'm struggling with the admission control settings in the cluster's properties. The two hosts are identical, and either one has sufficient RAM/CPU to run all of our VMs; each does so during maintenance/patch windows. We use vSphere Standard, so we don't have DRS, nor do we use reservations on our VMs.
|
# ? Feb 26, 2012 13:51 |
|
With a two-host cluster that is your only cluster, you should probably just have Admission Control disabled. Its purpose is to keep you from powering on more VMs than the cluster could restart after a host failure, the idea being that you'd power them up on another cluster instead.
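In PowerCLI that's one cmdlet (a sketch; the cluster name is hypothetical):

```powershell
# Turn off HA admission control so failover-capacity checks
# no longer block power-ons or HA restarts on this cluster.
Get-Cluster -Name 'Prod-Cluster' |
    Set-Cluster -HAAdmissionControlEnabled:$false -Confirm:$false
```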
|
# ? Feb 26, 2012 14:34 |
|
I'm trying to automate creating a VM as much as humanly possible, but for the kickstart to work I need to get the MAC address of one of the NICs. Through human interaction I can check it via vSphere > Edit Settings > select the NIC, and tada. But I have yet to find a way to script it, preferably through vmware-cmd (which, let me say, is lacking in documentation). Does anyone have a good way of doing this? I've found a sample Perl script in the PowerCLI tools, but the host has to be on.
|
# ? Feb 26, 2012 15:00 |
|
Erwin posted:With a two-host cluster that is your only cluster, you should probably just have Admission Control disabled. Its purpose is to prevent you from powering up too many VMs on a cluster, but the idea is that you would just power it up on another cluster. Which I assume simply means "fail the lot over and to hell with the consequences" so we just need to keep an eye on resources (which having just two hosts we obviously do anyway)?
|
# ? Feb 26, 2012 15:14 |
|
RevKrule posted:I'm trying to automate creating a VM as much as humanly possible but for the kickstart to work, I need to get the MAC address of one of the NICs. You can force the VM to have a certain MAC by modifying the VMX file, rather than looking it up. With the default setup, the MAC address isn't assigned until the VM is first powered on, which is probably why that Perl script doesn't work.
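Both directions are doable in PowerCLI, for what it's worth (a sketch; the VM name and MAC are placeholders, and VMware only accepts manually assigned MACs in the 00:50:56:00:xx:xx through 00:50:56:3F:xx:xx range):

```powershell
# Read the auto-generated MAC (only populated after first power-on):
(Get-VM -Name 'newvm' | Get-NetworkAdapter)[0].MacAddress

# Or assign a static MAC up front, so the kickstart config can be
# generated before the VM ever boots:
Get-VM -Name 'newvm' | Get-NetworkAdapter | Select-Object -First 1 |
    Set-NetworkAdapter -MacAddress '00:50:56:01:23:45' -Confirm:$false
```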
|
# ? Feb 26, 2012 15:29 |
|
Bitch Stewie posted:Which I assume simply means "fail the lot over and to hell with the consequences" so we just need to keep an eye on resources (which having just two hosts we obviously do anyway)?
|
# ? Feb 26, 2012 17:58 |
|
And don't forget to set the restart priorities sanely for all your VMs if that wasn't a concern before.
|
# ? Feb 26, 2012 19:26 |
|
Wait they removed VMware teams in 8? The gently caress?
|
# ? Feb 27, 2012 00:20 |
|
Corvettefisher posted:Wait they removed VMware teams in 8? http://blogs.vmware.com/workstation/2011/10/what-happened-to-teams.html
|
# ? Feb 27, 2012 03:30 |
|
RE: DirectPath I/O for video cards: I haven't followed development in this area much, but it gets attention (at least the workstation/server-class GPU hardware does). Last I checked, which was ages ago, you could get Fermi-based nVidia cards working fine, and pretty much any Radeon HD. I haven't bothered sticking an expensive Fermi card in my own system for testing yet, but I can tell you that a cheap Radeon HD card has worked fine. The problems seem to come up after allocating too much RAM to the VM (something over 2.8GB of memory): you get BSODs during boot, that sort of nastiness in the guest. It seems to have something to do with how the video device BIOS sets up a range of DMAs, which later conflict with what the virtual BIOS and VMware VMX arrange (or rearrange after power-on). Again, I'd have to look into it more... Anyway, it works, but it's not without quirks. Every time I rebooted the guest OS with a Radeon, it would just stop working and required me to fiddle with removing the device and re-adding it (in Device Manager), hoping it would work on next boot. You had to disable the built-in virtual video device for it to work properly, and on next boot it was using the built-in virtual video device again. My use case was DLNA servers and transcoding media in real time. I've since built a media PC box and stopped media streaming (couldn't fast-forward in all codecs/containers, etc., so it was spotty). I also imagined it would be kind of neat to create a gaming VM that had its own allocation of discrete hardware: a pass-through display adapter and sound card, direct USB input for a gaming mouse/keyboard, etc. Then I could consolidate quite a bit of my home's hardware into one beefy workstation. HDMI (over Cat-5 or even IP) already makes remote computing between multiple floors quite doable. Multiple gamers and VMs? Yep, you can.
Ultimately, though, the way VMware would naturally need to do this is to set up discrete video hardware as VM shares/resources, so you can check off that 3D flag and have multiple VMs sip from real 3D hardware. It's a little ways out, though.
|
# ? Feb 27, 2012 13:32 |
|
Question about VMware View pricing. Is it literally only $190/desktop for existing vSphere customers as claimed on this page?: http://www.vmware.com/products/view/howtobuy.html I figured I'd start looking at View towards the end of the year when I have time because I figured it had an initial buy-in of several thousand dollars at least. Is it really just linearly priced on a per-desktop basis?
|
# ? Feb 27, 2012 16:45 |
|
Yeah, there's a little price hump at the beginning to get the starter kit so you have Composer and such, but after that the cost scales linearly. I don't think you're factoring in support on the View licenses though, which from the last quote I got was about $160/3yrs per View instance.
|
# ? Feb 27, 2012 19:29 |
|
Erwin posted:Question about VMware View pricing. Is it literally only $190/desktop for existing vSphere customers as claimed on this page?: http://www.vmware.com/products/view/howtobuy.html That is the cost of licensing, but it doesn't cover the backend storage or servers to host the instances. You'll want a certain number of IOPS per instance to have an environment that truly replaces the desktop, and that's where things get quite costly. For our environment, we were looking at around $150-180k for enough backend spindle count to support around 20-30 IOPS per instance. fatjoint fucked around with this message at 00:49 on Feb 28, 2012 |
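The arithmetic behind those spindle counts is worth making explicit; a back-of-envelope sketch (all numbers illustrative, the per-spindle figure especially):

```powershell
$desktops   = 100
$iopsPerVM  = 25    # middle of the 20-30 IOPS per instance above
$totalIops  = $desktops * $iopsPerVM

# Rough steady-state figure for one 15K SAS spindle, ignoring
# RAID write penalty and controller cache, which change this a lot:
$iopsPerSpindle = 175
$spindles = [math]::Ceiling($totalIops / $iopsPerSpindle)

"$desktops desktops -> $totalIops IOPS -> ~$spindles spindles"
```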
# ? Feb 28, 2012 00:15 |
|
Erwin posted:Question about VMware View pricing. Is it literally only $190/desktop for existing vSphere customers as claimed on this page?: http://www.vmware.com/products/view/howtobuy.html The "existing customers" is if you want the add-on licenses- e.g. you have free capacity on your existing regular vSphere clusters and want to run desktops on them.
|
# ? Feb 28, 2012 01:04 |
|
Well, we only have about 25 users. Usually at that size things are inefficiently priced, but this makes View very tempting. 500-750 IOPS is not a problem, and a bump in host capacity will cost less than replacing half of those desktops. I'm not going to switch everyone tomorrow, but it's something to look at before we refresh any hardware.
|
# ? Feb 28, 2012 03:46 |
|
Our VM "expert" recently left for a new job. He had zero VM experience, and the only reason we let him run with it was that everyone else was too busy. After he left, another coworker and I decided to take a look behind the curtain. We began by logging into the only guest VM this guy stood up that made it into production. Everything looked pretty regular until we started wondering why exactly it was taking up so much space on the SAN and had 3 vdisks even though it only showed 2 drive letters under My Computer. Let's load up the settings page and see what's going on! Well, that's odd, what is that little 40GB doing there? But I only see 70-ish GB drives in Explorer so... wait, no, that can't be right. No one in their right mind would set up software RAID on a guest OS; I mean, it's sitting on a SAN, it would just be wasting resources. Oh, it is also fault tolerant! As we progressed we saw a ton of guests with specs all over the place: a Windows XP (32-bit) guest with 10GB of RAM reserved and 4 CPUs, with nothing at all installed, just sitting there powered on and eating available resources. We haven't hired a replacement yet, and the letters "VMWARE" weren't anywhere in the PD they put out for him, so it looks like I have to brush up on ESXi and all the vWords. Literally Human fucked around with this message at 04:11 on Feb 28, 2012 |
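For anyone inheriting a mess like this, a quick PowerCLI audit (a sketch) makes the overprovisioned guests jump out:

```powershell
# Sort every guest by allocated RAM to spot 10GB XP boxes and
# other oddballs, along with their vCPU counts and power state.
Get-VM |
    Sort-Object MemoryMB -Descending |
    Select-Object Name, NumCpu, MemoryMB, PowerState |
    Format-Table -AutoSize
```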
# ? Feb 28, 2012 04:03 |
|
Fistfull of Jizz posted:Oh My boss does that with our DC and some other VMs: 14GB RAM, 4 cores, 250GB zero-thick disk for a domain controller... other than Windows caching I haven't seen it use above 10GB in the logs
|
# ? Feb 28, 2012 04:22 |
|
Corvettefisher posted:My boss does that with our DC and some other VM's I remember the guy who told us his boss made him give his DC for a small company some ridiculous amount of RAM, only to find out it was something completely unrelated to the guest causing the performance problem. Our biggest VM RAM-wise is our Exchange datastore server: 8GB of RAM for 600+ mailboxes. We have a few 2-core SQL boxes. Our DCs are a maximum of 1 core, 2GB RAM, 40GB disk.
|
# ? Feb 28, 2012 05:18 |
|
Ok, so this old chestnut: virtual, physical, or appliance for vCenter? We're moving to vCenter 5 from 4.1, and we had it on a physical box (since that makes things easier for upgrades etc.), but 'best practice' is apparently to virtualise it. What do y'all do? Our environment has 2 ESXi servers with around 20 VMs on them.
|
# ? Feb 28, 2012 16:00 |