|
Indeed, a vast majority of the VMworld 2011 labs were run on NFS. See http://virtualgeek.typepad.com/virtual_geek/2011/09/vmworld-2011-hands-on-lab-10-billion-ios-served.html I agree with Markus that NFS should not be considered lower performing than iSCSI. For Martytoof, from the last thread: hit F4 in the DCUI to switch to a high-contrast scheme. See if that fixes your iLO color problems.
|
# ¿ Feb 20, 2012 20:05 |
|
stubblyhead posted:Any suggestions for pasting info from the clipboard into a VM console window in vSphere? I've been playing around with AutoHotkey, but I haven't been able to get anything working. In 4.1 and higher it is disabled by default, for security. http://kb.vmware.com/kb/1026437
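For reference, the settings that KB article describes adding to the VM's .vmx file (with the VM powered off) to re-enable clipboard copy/paste look like this; a sketch, double-check against the KB for your version:

```
isolation.tools.copy.disable = "FALSE"
isolation.tools.paste.disable = "FALSE"
```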
|
# ¿ Feb 24, 2012 19:14 |
|
I use http://vijava.sf.net and JRuby for enterprise integration.
|
# ¿ Mar 12, 2012 18:16 |
|
Use vmkfstools to examine any locks on the volume, and break the lock if necessary.
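A rough sketch of the lock check from the ESXi shell (datastore and file names here are placeholders):

```shell
# Dump metadata, including lock info, for a suspect file
vmkfstools -D /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk
# The "owner" field in the output ends with the MAC address of the
# host holding the lock; all zeros means no host currently holds it.
```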
|
# ¿ Mar 13, 2012 21:20 |
|
I like having an SSD in the ESX host. Allowing ESX to use the SSD for swap is a nice safety net in the case of accidental or temporary intentional memory overcommit. See http://www.vmware.com/files/pdf/mem_mgmt_perf_vsphere5.pdf on page 26 for a nice graph showing that 60GB of VMs can run in 30GB of real RAM with less than a 20% impact compared to normalized throughput.
|
# ¿ Mar 27, 2012 15:46 |
|
FISHMANPET posted:So as I read through Mastering vSphere 5 and VMware vSphere Design, I'm mentally planning my department's virtualization build-out (and my boss is listening to me on this, so I can't gently caress it up) and I decided to look for 10Gb switches. These are not in Dell's Force10 line (http://www.dell.com/us/enterprise/p/force10-networking) so I have to assume they are rebranded from some other company. I could be wrong, but based on Dell's history and the design, it looks to be a rebranded Brocade/Foundry.
|
# ¿ Apr 6, 2012 14:00 |
|
adorai posted:He asked us to prove that each VM can run on each node. I don't think he believes that each node is configured identically and is definitely capable of running each VM. Without maintenance-moding all but one node and showing that all VMs can run on that one node (not possible concurrently), I do not know how to satisfy his request. The way to answer this is with Host Profiles. You know they're configured identically. He needs vSphere to tell him they're configured identically.
|
# ¿ Apr 12, 2012 05:16 |
|
The rollout of My VMware seems to have borked a lot of links. Tried to guide someone through signing up for evaluation download of vSphere and it was rather difficult.
|
# ¿ Apr 17, 2012 14:29 |
|
In the same port group / VLAN?
|
# ¿ Apr 20, 2012 12:11 |
|
Martytoof posted:Thanks for the clarification. Sounds like it was code from 2004-ish ESX judging by what I read, so the impact should be minimal. Group responsible says more code will be released on May 5th.
|
# ¿ Apr 30, 2012 17:06 |
|
Nothing to add but I like your naming schemes.
|
# ¿ May 16, 2012 18:22 |
|
Mierdaan posted:How is the command history handled in esxi5's shell? There's no history command, but you can up-arrow through the history so it's being stored somewhere... ESXi uses busybox, and thus utilizes the ash shell. There is obviously a history kept, but only in memory. You can search it in vi-mode. Busybox decided to not include the history built-in with their version of ash. See http://communities.vmware.com/message/1601787
|
# ¿ May 22, 2012 14:02 |
|
I can say I've hit that bug myself. Again, the steps to reproduce are so simple and so common that I just assumed it had already been reported. I was using ESX 4 and 4.1 with both vCenter 4 and 5.
|
# ¿ May 23, 2012 03:30 |
|
There are a bunch more than that. Simplifying provisioning and existing familiarity with NFS security/performance are two.
|
# ¿ Jul 9, 2012 19:46 |
|
Sylink posted:Has anybody used the vCenter Server Appliance here? Just deployed the latest version (build 759855) in my lab yesterday and it is working fine for me. Since the database is on the same disk, just make sure you give it plenty of IOPS. I've also deployed it in Workstation. If you do that, make sure to allocate less RAM to the VM to make absolutely sure you do not start heavy swapping; I gave it 2.5GB on my 4GB laptop and it ran surprisingly well.
|
# ¿ Aug 1, 2012 14:49 |
|
Correct, no VUM included. You could still deploy Update Manager to a separate Windows machine. The embedded database changed from DB2 to Postgres. The only external database option is Oracle.
|
# ¿ Aug 1, 2012 19:05 |
|
September 11th.
|
# ¿ Aug 27, 2012 21:14 |
|
Misogynist posted:If it does end up being great then I'm a little pissy that we made a huge initial investment in PHD Virtual this year, but something that's able to back up 1.2 TB of VMs using less than 8,000,000 inodes on its NFS share would sure be handy from an administration perspective. I'm sure they'll be playing feature-catchup for awhile yet, though. Heard a sad rumor that PHD Virtual was running low on cash.
|
# ¿ Aug 28, 2012 15:27 |
|
madsushi posted:Dell and HP usually trail by 3-4 weeks, although they had 5.0U1 out just a week after it was released. That time, the HP versions were released at the same time as the official VMware release. Now, usually you can go to the easy-to-remember URL http://www.hp.com/go/esxidownload and you'll end up in the right spot. As of right now the links are still pointing to 5.0.0. So for now go to https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/5_1#drivers_tools and look under OEM.
|
# ¿ Sep 12, 2012 01:59 |
|
Try removing and reinstalling management and HA agents on the hosts. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003714#5
|
# ¿ Sep 30, 2012 14:43 |
|
OK, try it again, but this time after running VMware-fdm-uninstall.sh, delete or rename /etc/vmware/vpxa/vpxa.cfg. Then try to reconnect.
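Something like this from the ESXi shell, roughly (the uninstaller path is the one documented for ESXi 5.x in that KB; verify it exists on your build, and renaming rather than deleting vpxa.cfg keeps a fallback):

```shell
# Remove the HA (FDM) agent, then force vpxa to regenerate its config
/opt/vmware/uninstallers/VMware-fdm-uninstall.sh
mv /etc/vmware/vpxa/vpxa.cfg /etc/vmware/vpxa/vpxa.cfg.bak
# Then reconnect the host from vCenter, which pushes fresh agents
```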
|
# ¿ Sep 30, 2012 17:31 |
|
In all likelihood the files gnarlix is dealing with came from VMware Workstation. If that's the case, he should run them through VMware Converter first. http://kb.vmware.com/kb/1012258
|
# ¿ Oct 3, 2012 03:02 |
|
Change the PSP for the VMW_SATP_SVC SATP from VMW_PSP_FIXED to VMW_PSP_RR. See http://kb.vmware.com/kb/1017760 This will fix it for all future datastores.
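On ESXi 5.x the change from that KB looks like this (4.x uses the older `esxcli nmp satp setdefaultpsp` syntax; the `naa.` device ID below is a placeholder):

```shell
# Make Round Robin the default PSP for the SVC SATP (affects future datastores)
esxcli storage nmp satp set --satp VMW_SATP_SVC --default-psp VMW_PSP_RR
# Existing devices keep their current PSP; change them individually, e.g.:
esxcli storage nmp device set --device naa.600507680000000000000000 --psp VMW_PSP_RR
```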
|
# ¿ Oct 22, 2012 04:42 |
|
I like Terminals. Supports RDP, VNC, SSH, Telnet, Citrix.
|
# ¿ Nov 1, 2012 15:20 |
|
Congratulations! First tip: Work on capitalizing VMware properly. I think Apps is a cool area to be working in right now.
|
# ¿ Nov 20, 2012 20:15 |
|
1,700 page faults/second is not that high.
|
# ¿ Oct 24, 2013 14:51 |
|
Martytoof posted:Is there a good "Hyper-V for VMware dudes" primer? Most of the stuff is pretty straightforward but I'd love to read something that approaches teaching the material from that angle. http://blogs.technet.com/b/in_the_cloud/archive/2013/10/29/microsoft-wants-to-help-vmware-experts-future-proof-their-career.aspx
|
# ¿ Oct 30, 2013 03:29 |
|
That's some sweet graphing. How do you pull those stats?
|
# ¿ Mar 1, 2014 03:55 |
|
KennyG posted:I have 6 hosts in a cluster and 2 of the 6 display the message. The other 4 don't. I didn't do anything fancy to configure them. Can anyone explain what is going on here or how to fix it? I don't think the Google diagnosis is what I should do, as it's a rather involved SSH/CLI solution that sends me down a rabbit hole I didn't have to go down for the other 4. Check that your local drives have a diagnostic partition. They likely don't. If they are local-disk builds, make sure you installed to the right disk and that the installer didn't get confused by an onboard SD card or something else. If you are using Auto Deploy, ensure that the local disks either already have a diagnostic partition or are writable so one can be created. Use partedUtil to explore/wipe if necessary.
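A quick way to check from the ESXi shell, as a sketch (the `naa.` ID is a placeholder for your local disk; look for a vmkDiagnostic partition in the output):

```shell
# List all disk devices to find the local one
ls /vmfs/devices/disks/
# Show the partition table for a given disk
partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c000000000000
```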
|
# ¿ Mar 9, 2014 02:29 |
|
You can do that with Set-VMHostNetworkAdapter. From the PowerCLI docs, something like code:
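The code block from this post did not survive the archive. As a placeholder sketch of the kind of call that cmdlet supports (requires VMware PowerCLI; host name `esx01` and adapter name `vmk1` are hypothetical, and the exact parameters depend on what you're changing):

```powershell
# Enable vMotion on a specific VMkernel adapter, for example
Get-VMHostNetworkAdapter -VMHost esx01 -Name vmk1 |
  Set-VMHostNetworkAdapter -VMotionEnabled $true -Confirm:$false
```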
|
# ¿ Apr 6, 2014 12:02 |
|
You don't RAID with VSAN. VSAN handles redundancy and protection itself. If you have a RAID controller that does not allow "pass through", then create single disk RAID 0s for each disk.
|
# ¿ May 7, 2014 12:29 |
|
CtrlMagicDel posted:When you put an ESXi host into maintenance mode and reboot the host via Auto Deploy, is it supposed to come back into Virtual Center in maintenance mode or not in maintenance mode? Our hosts always came back in maintenance mode previously, and after upgrading to 5.1U2 they seem to be booting back up active and immediately having VMs move onto them via DRS. I've had different VMware support people who have told me that one or the other is the expected behavior, including one support guy who both linked me to and quoted some documentation basically verbatim, except for a portion which stated that it was supposed to come up in maintenance mode, except he had literally changed two words in his quote to indicate it was NOT supposed to come up in maintenance mode. It is almost funny except for how infuriating it is. Shows how well understood Auto Deploy & Host Profiles are, even inside VMware. Good luck getting proper support for these features. (We've tried.) The host comes out of maintenance mode if and only if the host profile is successfully and completely applied. Watch the console (F11) while booting to see what the host profile is doing, e.g. whether it is just spinning on enumerating every disk. If that takes longer than the timeout, your host profile is considered "failed". The number of disks it takes for this to happen is surprisingly low.
|
# ¿ Jun 8, 2014 19:26 |