adorai

Corvettefisher posted:

My boss does that with our DC and some other VMs:
14GB RAM
4 cores
250GB zeroed thick disk

For a domain controller... other than Windows caching I haven't seen it use above 10GB in the logs.
I always laugh my rear end at this. It should be a requirement for anyone doing anything with VMware professionally to understand disk alignment, co-scheduling, and thin vs. thick provisioning and the (lack of) performance impact of each.

I remember the guy who told us his boss made him give the DC for a small company some ridiculous amount of RAM, only to find out that something completely unrelated to the guest was causing the performance problem. Our biggest VM, RAM-wise, is our Exchange datastore server: 8GB of RAM for 600+ mailboxes. We have a few 2-core SQL boxes. Our DCs are at most 1 core, 2GB RAM, 40GB disk.


adorai

luminalflux posted:

Anyone have any hard numbers on performance gains/losses using jumbo frames on iSCSI VMware storage and vMotion over GbE?

I found some good numbers (a single-digit percentage increase in performance), but they were invalidated by the guy having a dodgy switch. The documentation for my HP P4000 says "likely not needed," but I'm curious how slight the gains are.
There's probably not a lot of speed difference; your biggest change will be in CPU utilization. Given the power of most VMware implementations and the likelihood that you have a TOE card on your SAN, you won't see much of a change as a percentage of CPU utilization.
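
Some rough arithmetic on why the win shows up in CPU rather than throughput (toy numbers; the real per-frame cost depends on NIC, driver, and offloads): at line rate, jumbo frames mean roughly one-sixth as many frames for the host to process.

```python
# Toy frames-per-second math for GbE; ignores preamble/IFG and headers.
LINK_BPS = 1_000_000_000  # 1GbE

def frames_per_sec(mtu_bytes):
    return LINK_BPS / (mtu_bytes * 8)

std, jumbo = frames_per_sec(1500), frames_per_sec(9000)
print(f"1500 MTU: {std:,.0f} frames/s at line rate")
print(f"9000 MTU: {jumbo:,.0f} frames/s at line rate")
print(f"~{std / jumbo:.0f}x less per-frame work with jumbo frames")
```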

adorai
Anyone running a VMware cluster on AMD? We are going to eval a few HP DL165C 1U servers to possibly use in our next refresh this spring. We figure 16 "cores"/128GB/4x 1GbE is about right density-wise. Just curious whether I should expect disappointment from our eval.

adorai
I have a hard time taking some of these reviews seriously, specifically because I don't think the reviewers really understand datacenter needs. For instance, having 16 integer cores vs. 8 hyperthreaded cores has tangible benefits in a typical high-density VMware environment, where you see thousands of idling VMs, as opposed to simply maxing out the processors and reporting which ones complete the workload faster. CPU contention is a real concern, and whether you solve it by throwing more cores at the problem or by throwing raw compute at it to burn through the workload faster can result in a very different experience. I'm really interested in anecdotal experience, even at the risk of calling in the fanboys.

AnandTech in particular has great reviews of raw performance, but I am not convinced that carries over to real-life datacenter needs. I don't even know how you would measure it, beyond throwing a load on VM after VM until you hit a certain CPU ready threshold.

adorai

Erwin posted:

Why do people ever touch the browser on a server?

Webex :(

adorai

zacd posted:

I usually share RDP/putty/etc from my PC. Am I doing it wrong?
Depends. A lot of times we have vendors in who need to "do research," and I am not a fan of giving up my PC for 3 hours. Share the console session in VMware and you can continue working. In my experience, if you RDP to the server and then share a Webex session inside of RDP, they lose mouse control if you minimize the window.

adorai

Erwin posted:

I have a laptop that I usually use for this, but why not make a Windows 7 VM, then use the browser in that for Webex, then RDP to the server that the vendor needs?
Seems like more work than just logging into the vSphere client.

adorai

evil_bunnY posted:

On another subject: I'm trying to put together some hosts and hitting an information wall. Can I run an 8GB and a 4GB DIMM per channel (for 96GB total on a 2-socket box), or an 8GB and a 16GB DIMM per channel (for 192GB total), and still run them at 1.6GHz? The Dell website says yes, but my rep checked with an Intel presales guy who said 1.3GHz max?
On previous generations of Intel processors (not sure about Sandy Bridge or whatever is newest), exceeding 48 or 64GB per populated socket caused a memory slowdown. You could mix DIMM sizes, but there are guidelines to follow when doing so with regard to ranks; I'm not sure of the specifics.
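
For what it's worth, the capacities in that question line up with two DIMMs per channel on a four-channel, two-socket box. A quick sanity check (the channel count is an assumption on my part; it varies by CPU generation):

```python
# Two DIMMs per channel, four channels per socket, two sockets (assumed).
def total_gb(dimm_a_gb, dimm_b_gb, channels=4, sockets=2):
    return (dimm_a_gb + dimm_b_gb) * channels * sockets

print(total_gb(8, 4))   # 96  -> the 8GB + 4GB config
print(total_gb(16, 8))  # 192 -> the 16GB + 8GB config
```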

adorai
If you guys have HA VMware clusters and are having trouble backing up large fileservers in a timely fashion, it may be worth your while to spin up a VM that serves CIFS via Samba on a snapshottable filesystem (such as ZFS). You could skip the Veeam backup and just use the built-in functionality of the filesystem to perform your file backups.
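
A minimal sketch of what the snapshot rotation could look like inside the guest, assuming the stock zfs CLI and a hypothetical dataset named tank/fileshare (the retention count is made up too); run it from cron, and use zfs send/receive to another box for the offsite half:

```python
import subprocess
from datetime import datetime

DATASET = "tank/fileshare"  # hypothetical dataset name
KEEP = 14                   # daily snapshots to retain (arbitrary)

def zfs(*args):
    return subprocess.check_output(("zfs",) + args, text=True)

# Take today's snapshot, named by date.
zfs("snapshot", f"{DATASET}@daily-{datetime.now():%Y%m%d}")

# List this dataset's daily snapshots oldest-first and prune the excess.
names = zfs("list", "-t", "snapshot", "-H", "-o", "name", "-s", "creation")
dailies = [n for n in names.splitlines() if n.startswith(f"{DATASET}@daily-")]
for old in dailies[:-KEEP]:
    zfs("destroy", old)
```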

adorai

FISHMANPET posted:

Is there some way to dd an RDM into a VMDK? I've got two SCCM servers with RDM, and at some point it sounds like I need to move these to VMDK.
VMware Converter.

adorai

FISHMANPET posted:

Though from my naive view it doesn't matter much, since VMware can't use all that memory license-wise anyway.
Keep in mind VMware licensing entitlements pool across your environment, so if you have a DR site that isn't used as heavily, you can leverage your leftover vRAM entitlements at your primary site.
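
Toy arithmetic to show the pooling effect (license counts, allocations, and the per-license figure are all hypothetical; check your own edition's entitlement):

```python
# vRAM entitlements pool across all licensed CPUs under one vCenter.
VRAM_PER_LICENSE_GB = 96                      # assumed per-socket entitlement
licenses = {"primary": 12, "dr": 12}          # CPU licenses per site
allocated_gb = {"primary": 1800, "dr": 300}   # powered-on vRAM per site

pool = sum(licenses.values()) * VRAM_PER_LICENSE_GB
used = sum(allocated_gb.values())
print(f"pooled: {pool}GB, allocated: {used}GB, headroom: {pool - used}GB")
```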

adorai
Only partially related to virtualization, but can anyone recommend an FM1 motherboard that supports IOMMU? I want to build a new combo VMware/NAS box, passing my disk controller through to the NAS VM via IOMMU.

adorai
Would I see a significant performance difference on Bulldozer processors running 5.0 rather than 4.1u2?

adorai

Kachunkachunk posted:

I haven't heard anything about Bulldozers gaining or losing performance between versions, really. Was there anything in particular that made you wonder (such as existing performance hits on 4.x for any reason at all)?
We got two demo servers, and they seem to be performing well except for a few applications that need single-threaded performance. The Bulldozer procs are 2.1GHz, compared to our Nehalem procs at 2.9GHz, and one application in particular is taking 3x as long to run. I could stomach 50% longer, not 300%.

adorai

evil_bunnY posted:

Bulldozer single thread performance is horrible, but I wasn't expecting it to be this bad.
I agree. We were expecting a bit of trouble, but nothing like this. In aggregate it's great, and total CPU usage (as a percentage) is lower; it's just these single-threaded applications that peg the CPU for an hour or two at a stretch that are suffering.

adorai

Misogynist posted:

The main concern when using iSCSI is that most people opt to use software HBAs. Even with TOE and other hardware acceleration turned on, the CPU usage of iSCSI is significantly higher than with Fibre Channel. In 99% of production use cases, you'll never notice this CPU usage. However, one impact is that if you are pegging the CPU on your system at 100% utilization, or otherwise jamming up the scheduling queue (e.g. by scheduling a VM that uses every core on the box), you run a much higher risk of introducing random I/O timeouts and errors into your stack than if you used Fibre Channel.
If you are running your host CPU that high, you have other things to worry about. The outrageous levels of CPU power available today, compared to, say, 2008 levels, make iSCSI CPU overhead a complete non-issue in well over 99% of real-world production installs. The CPU requirements of iSCSI shouldn't even enter the mind of someone doing a deployment these days. When a high-end VMware server was 2x 2-core Xeons with 8GB of RAM, sure, but no longer.

adorai

Maggot Monster posted:

This is probably the laziest thing I'll ever post, but does anyone have a decent "business case" they've used that successfully laid out the pricing for machines, storage, cabling, 10G infrastructure, the full works?
I just went through this process for my company.

I started with a number in mind that I figured was palatable to spend, and framed the discussion around our DR site, whose hardware was lacking if we were to have a real disaster. What I did was price out a full refresh of our production site; we will move the current production hardware to the DR site. We got 2 years out of it in production, and will now get another 3 years out of it at DR.

Realistically, I know my business environment, and it's very easy to justify purchases when the FDIC suggests that you do so.*

Knowing nothing about your company, I would probably just make one of the sites the red-headed stepchild and get on a 2-year refresh cycle: move gear from site A to site C now, refresh site B in two years and move its gear to site C, and repeat the process every 4 years. So site C runs 4-year-old gear, for 2 years at a time.

*On this note, does anyone have suggestions on how to explain VMware HA and clustering to an IT examiner who clearly doesn't understand it? Last year one of the examiners told us that when we did DR testing, we had to bring each VM up on each cluster member to prove that the member could run it. We decided to ignore his request, but if he comes back I need a reasonable response, and I really don't know where to begin with this guy.

adorai

fatjoint posted:

It's not completely unreasonable; just put the members of your HA cluster into maintenance mode in a rolling fashion to show the VMs migrating and running on different nodes.
He asked us to prove that each VM can run on each node. I don't think he believes that each node is configured identically and is definitely capable of running each VM. Short of putting all but one node into maintenance mode and showing that every VM can run on that one node (not possible concurrently), I don't know how to satisfy his request.

adorai

Wonder_Bread posted:

I don't believe this is the case at all. It doesn't even make sense to me. The E1000 is an emulated 1Gb Intel NIC; I don't see how it could go faster.
It does, sorry to ruin your day.

adorai

Cidrick posted:

VMware would make a mint by adding a paid extension that completely hides all things VMware from dmidecode, lspci, and the like, just for admins who want to prevent braindead developers from blaming their performance issues on the hypervisor.

"No way man, you're totally running on your own dedicated 12-core 48GB box, man. It's your code."
The problem typically isn't virtualization, it's people with poo poo VMware environments that are terribly sized. By requiring bare metal, vendors can at least avoid that problem. We have many vendors that do not support running virtualized, and we just went ahead and did it anyway; we purchased 5 licenses of PlateSpin just in case.

adorai

bob arctor posted:

What's the easiest, cheapest way to do backups of VMs under Essentials (4.1)? I have client Backup Exec on the main servers, but ideally I'd also like to back up the whole VMs to a NAS on a daily or weekly basis.

I haven't used it, but SRM 5.0 supposedly has a replication piece and is licensed pretty inexpensively for just a few VMs.

adorai

Bitch Stewie posted:

I guess I'm curious under what circumstances you'd use separate vSwitches?
We use a separate vSwitch for management traffic. It's more or less a nice safety net in case someone fucks up and makes a bad change on the primary dvSwitch.

adorai

Bitch Stewie posted:

I'm asking purely about VM traffic destined for different VLANs.
In our environment we just tag every VLAN going out of the switch into our VMware hosts and assign each one to a port group on the dvSwitch. I think we currently have 6x 1GbE uplinks per host, soon to be 2x 10GbE uplinks.

adorai
I have a QoS question regarding UCS. We have 6x UCS rack servers, each with 2x 10GbE NICs connected to separate Nexus 5548 switches. We plan to carve these links up into vNICs and apply QoS to them. My plan is to carve each port into 1x management, 1x VMkernel for NFS traffic, 1x VM traffic, and 1x vMotion, with highest priority to NFS, second to VM traffic, third to vMotion, and last to management. Does this seem reasonable?

adorai

ValhallaSmith posted:

So is it still bad news to run databases in virtual machines? I was going to use Red Hat's KVM virtualization tools to do the VMs.
On numerous occasions we have kicked around the idea of consolidating all of our DB servers into one, licensing SQL Enterprise, and running a physical/virtual hybrid cluster. We think we can get MUCH more performance (per dollar spent on licensing) from a physical box than from a virtual one. On the other hand, in ESXi 5.0 you can present multiple cores on the same socket; I'm not sure how that works out with Microsoft licensing.

Either way, to answer your question: we run every DB server we have in VMware today. None of them are nearly that size, but to echo evil_bunnY, IO will likely be a problem you need to solve long before CPU and memory.

adorai
We have names like exchmail, lar1, lar2, dc5, callmgr1, etc.

Creativity at its finest.

adorai
We run everything except management off of a dvSwitch.

adorai

Swink posted:

We have two vSphere hosts, with a vCenter server running on one of them. If the server that is hosting vCenter goes down, I lose HA and the ability to vMotion our VMs onto the other host, correct?
HA runs independently of vCenter. If you lose a host, the other host(s) should start powering up its VMs after the heartbeat fails.

adorai

HalloKitty posted:

No, I meant as a minimum generally, I didn't mean all VMs get a fixed number or something
Don't do this. Our rule is a maximum of one vCPU unless the VM is proven to be CPU-bound. Even our primary Exchange datastore server has only a single vCPU for ~600 mailboxes. I bet that of our ~160 VMs, fewer than 10 have more than a single vCPU.

I would strongly suggest you read up on co-scheduling. Allocating two vCPUs when they are not needed will typically reduce performance, not just overall but on that specific VM as well.

adorai

HalloKitty posted:

Does it not matter if you have cores to spare? I guess I should read up before I make another stupid statement, thanks all
Define "cores to spare"? If you have fewer vCPUs allocated than you have physical cores, you will see no performance penalty. If, like the rest of the world, you are stacked up at 5+ vCPUs per core, you will see one. A VM cannot run until all of the cores allocated to it are available: if a 2-vCPU VM needs to run some work and only one core is free, it waits for a second core to become available, holding the idle core in the meantime. VMware's relaxed co-scheduling can fudge this a bit, but for every clock cycle that one core runs, a second core must run as well.

When you read that you should allocate the minimum number of cores necessary, it's not just a suggestion. We size based on MHz: if a single-core VM runs at 50% CPU or higher for extended periods and the application is multithreaded, we add a core. Otherwise it stays at 1 core, no matter what. We regularly tell application vendors to gently caress off about their requirements.
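
If it helps to see the effect, here's a toy model (nothing like the real ESX scheduler, which uses relaxed co-scheduling) of why a 2-vCPU VM racks up more ready time: under strict co-scheduling it can only run when two physical cores are free at the same instant.

```python
import random

random.seed(1)
CORES, TICKS = 4, 100_000
ready_1vcpu = ready_2vcpu = 0

for _ in range(TICKS):
    free = CORES - random.randint(0, CORES)  # cores other VMs leave free
    if free < 1:
        ready_1vcpu += 1  # 1-vCPU VM waits only when zero cores are free
    if free < 2:
        ready_2vcpu += 1  # 2-vCPU VM also waits when just one core is free

print(f"1-vCPU VM waiting: {ready_1vcpu / TICKS:.0%} of ticks")
print(f"2-vCPU VM waiting: {ready_2vcpu / TICKS:.0%} of ticks")
```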

adorai

MrChupon posted:

My question is: What sort of hardware am I looking at if I want to run that many OSes at once and have them be generally responsive? I know you don't need to dedicate CPU cores to a VM, but if I spread 5 VMs plus a host OS over a quad-core processor, is it going to be horrible usability wise? Do I need to dedicate RAM to each VM or can I "over-provision" that too? (RAM seems cheap, though).
On desktop virtualization software you (typically) cannot overprovision RAM, but you can overprovision CPU cores. Depending on load, you can have significantly more vCPUs provisioned than you have physical cores; we are at roughly 4:1 in our environment, which, to be fair, is very much an enterprise shop.

My guess is that for your testing you will have only one or two VMs actually doing things at any given time; the rest will be idle. Buy as much RAM as you can and a CPU that supports virtualization extensions, and have fun.

adorai
Is it ESXi 4.x or 5.0? The process is different between the two, but broadly similar.

http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html

adorai
I would rather put together 3x Newegg boxes for virtualization at that price.

adorai

GrandMaster posted:

We are in the process of migrating to a new ESXi 5 cluster, and it looks like any guests with v7 hardware are all running in older EVC modes (Westmere/Nehalem etc.), but v8 machines are on Sandy Bridge.
If you shut down a v7 guest and power it back on, it will likely pick up the Sandy Bridge features. My guess is that you see the difference with v8 machines because you had to power them off to do the hardware upgrade.

adorai
In my opinion, you are often better off building two boxes: one lower-powered box just for storage, and one for VMware. You can buy a Zacate motherboard, case, and power supply for pretty drat cheap, and then you don't have to worry about IOMMU or a VMware-supported storage controller.

adorai

Corvettefisher posted:

Does anyone here actively use SRM? Any gripes about it? So far it seems pretty amazing.
Define "actively." We have it set up and have tested it, but we don't regularly fail over between sites.

adorai

theperminator posted:

Both machines are on the same subnet, with a switch for each path
Path 1 is on 192.168.0.0/24 and Path 2 is on 10.1.1.0/24


Not 100% sure on that as I used the vSphere client to set it up, but it is showing that they're both bound to iSCSI.
If you did it in the GUI, it's not set up correctly.

adorai
For anyone curious, we are running our entire environment (a 600-employee bank/holding company/title company) off of 6 single-proc 8-core 2.8GHz Intel servers with 96GB of RAM each. We average around 50% CPU and 80% RAM utilization at peak hours. We can probably bump them to 128GB each before we hit CPU contention. We have a similar setup at our DR site but run only about 10 VMs there.

adorai

three posted:

Oracle like a real man.
Can I have your mailing address? I want to send you a bullet to save you the trouble of buying it yourself.


adorai

thebigcow posted:

Was Oracle ever the best, or at least a good database? I don't know anything about it but from reading this thread I don't understand how they are a business.

Oracle is straight-up awesome, but only if you have a team dedicated to it. For most shops, where you have some sysadmins who know how to google up some SQL poo poo, Oracle is not a fit.
