Bitch Stewie
Dec 17, 2011
How do you all do hardware monitoring of your HP and Dell (and others) vSphere hosts?

The hardware status in vCenter/VI Client will show you basic status but one thing I always liked about HP servers was that they had a Windows agent that would simply email you if you lost a PSU or hard drive or something.

What's the recommended option for VMware without having to stand up a dedicated monitoring server with the HP software, the Dell software, etc.?
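For what it's worth, here's a rough sketch of the sort of thing I mean, using pyVmomi (VMware's Python SDK) to poll each host's hardware health sensors - the vCenter hostname and credentials are obviously placeholders, and you'd bolt your own email alerting onto the output:

```python
# Rough sketch, not production code: poll host hardware health sensors via
# pyVmomi and print anything that isn't green. Hostname/credentials are
# placeholders - wire the output into email/your monitoring of choice.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def report_unhealthy_sensors(si):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    try:
        for host in view.view:
            health = host.runtime.healthSystemRuntime
            if not health or not health.systemHealthInfo:
                continue
            for sensor in health.systemHealthInfo.numericSensorInfo:
                state = sensor.healthState.key  # "green", "yellow", "red", "unknown"
                if state not in ("green", "unknown"):
                    print(f"{host.name}: {sensor.name} is {state}")
    finally:
        view.DestroyView()

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab only; use proper certs in production
    si = SmartConnect(host="vcenter.example.local", user="monitor@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        report_unhealthy_sensors(si)
    finally:
        Disconnect(si)
```

Run that from cron or Task Scheduler and it at least approximates the "email me when a PSU dies" behaviour the HP agents give you, without standing up a dedicated monitoring server.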

Bitch Stewie
Dec 17, 2011
Can I get some guidance on how I should configure HA with a two-host cluster?

A host had a hardware issue and I saw "insufficient resources to satisfy HA failover" in vCenter, where I'd have expected the other host to try and take over.

I'm struggling with the admission control settings on the properties of the cluster.

Both hosts are identical and each has sufficient RAM/CPU to run all of our VMs; each does so during maintenance/patch windows.

We use vSphere Standard and so don't have DRS, nor do we use reservations with our VMs.

Bitch Stewie
Dec 17, 2011

Erwin posted:

With a two-host cluster that is your only cluster, you should probably just have Admission Control disabled. Its purpose is to prevent you from powering up too many VMs on a cluster, but the idea is that you would just power it up on another cluster.

Which I assume simply means "fail the lot over and to hell with the consequences", so we just need to keep an eye on resources (which, having just two hosts, we obviously do anyway)?
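In case it helps anyone else with a two-host cluster, this is roughly what "just turn Admission Control off" looks like through the API - a pyVmomi sketch where the cluster name and connection details are placeholders:

```python
# Rough sketch: disable HA Admission Control on a named cluster via pyVmomi.
# Cluster name and connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def disable_admission_control(si, cluster_name):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == cluster_name)
    view.DestroyView()
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(admissionControlEnabled=False))
    # modify=True merges this change into the existing cluster config
    return cluster.ReconfigureComputeResource_Task(spec, True)

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        task = disable_admission_control(si, "Cluster01")
        print("Reconfigure task submitted:", task.info.key)
    finally:
        Disconnect(si)
```

Same end result as unticking the box in the cluster properties, just scriptable.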

Bitch Stewie
Dec 17, 2011

cheese-cube posted:

You can get vCenter 5.0 as a virtual appliance now, however there are some limitations to it (off the top of my head: it can only use its embedded database, doesn't support Linked Mode configuration, doesn't support IPv6).

From what I saw the biggest ball ache is that it doesn't include VUM.

Bitch Stewie
Dec 17, 2011

Alctel posted:

Ok, so backing up a vSphere 5.0 environment, 2 ESXi servers, ~20 machines.

Having to choose between Veeam, Commvault, vRanger (and Symantec ahahaaha)

I am leaning heavily towards Veeam but my manager has heard great things about Commvault and wants to go with them

halp

PHD, Veeam, and possibly Unitrends are also worth a look.

Commvault is good poo poo, but as others have said it may be (almost certainly is) overkill for just a pair of hosts, unless you're getting a bundle or something that makes it a good deal financially.

Of course where it excels is that it supports pretty much anything/everything else whilst PHD and Veeam are almost entirely focussed on VMware.

Unitrends is an interesting product IMO. Seems to get good feedback and a nice licensing model.

Bitch Stewie
Dec 17, 2011
So with just a pair of vSphere Standard hosts running 4.1 Update 2, if/when we want to go to 5.0, is it worth upgrading or is it just as simple to start over, since other than the network and storage config there isn't that much to configure on each box?

I know it's ridiculous to compare vSphere to Windows but as I'm predominantly a Windows admin I still have this "Upgrades = Bad" mentality which is difficult to shake.

Also the boxes are HP and right now run the regular vSphere build from the VMware ISOs - if I do rebuild I'd use the HP ISOs available from the VMware site.

Bitch Stewie
Dec 17, 2011
Thanks, and yeah I know the process is simple, but so is putting a Windows 7 disc in a Vista machine and hitting upgrade - what you end up with isn't ideal though, IYSWIM, and I didn't know if there's any parallel with vSphere to have to think about?

Bitch Stewie
Dec 17, 2011

Misogynist posted:

That's why you give them a budget up front. I used to do this with recruiters all the time and it saved me a lot of heartache.

I have to ask, why would you give any vendor your budget upfront?

My personal opinion is that if you have $20k to spend on widgets, and you tell a vendor that you have $20k to spend on widgets, they'll come up with a solution that costs $20k.

Personally I prefer to work on the solution with the obvious caveat that you both need to know that the solution is going to come back in the ballpark.

I had to laugh when we had our SAN. List is/was around $58k. Special bid came back at $43k. They sent us an evaluation unit that was supposed to be from an eval pool but they had none so sent a brand new sealed unit.

After three weeks we were told we could have it at $23k "to save the hassle of collecting it".

I wanted to tell them to gently caress off out of general principle, but we smiled sweetly, said yes please and bought it.

Bitch Stewie
Dec 17, 2011
Could someone help me out with a little clarification on how vNICs and vSwitches interact with one another please?

Say I have a vSwitch and I have a pair of VMs each with a vNIC connected to the vSwitch.

Ignoring any OS limitations, what is the maximum throughput I could get between the two servers via their vNICs since the traffic is all within the same vSwitch please?

I'm assuming the type/model of vNIC plays a part but I've never been 100% clear on the differences - AIUI Flexible and E1000 are 1gbps and VMXNET is 10gbps?

Bitch Stewie
Dec 17, 2011

Misogynist posted:

Ignore the speed reported to the OS, it's not relevant to anything. Any of the emulated vNICs (E1000, etc.) will perform as quickly as the server can emulate them, but the server will run them through the vSwitch. The paravirtualized vNICs communicate directly with the hypervisor, and in the case of VMXNET3, can ferry data between VMs at the speed of shared memory.

Thanks for the reply.

This is what's not so clear to me so far. So if I use, say, an E1000, you're saying the OS might report a 1gbps NIC but it can actually transfer data in/out of the vSwitch at greater than 1gbps?

I'm assuming that by the time I get to using vSphere for the application I have in mind, the issue I read about with the VMXNET driver and that application will have been ironed out, and I'll have 10Gbps pNIC connectivity in/out of the vSwitch. I just wasn't clear how the intra-vSwitch traffic works.
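For my own notes, here's a small pyVmomi sketch that lists which adapter type each of a VM's vNICs actually is (the vm object would come from the usual container-view lookup, as in the earlier sketches) - the point being that the "link speed" the guest reports is cosmetic and intra-vSwitch traffic isn't capped by it:

```python
# Rough sketch: list the adapter type of each vNIC on a VM via pyVmomi.
# vm is a vim.VirtualMachine found via the usual container-view lookup;
# the descriptions are just how the types tend to present to the guest.
from pyVmomi import vim

NIC_TYPES = {
    vim.vm.device.VirtualE1000:   "E1000 (emulated, guest sees 1 Gbps)",
    vim.vm.device.VirtualE1000e:  "E1000e (emulated, guest sees 1 Gbps)",
    vim.vm.device.VirtualPCNet32: "Flexible/PCNet32 (emulated)",
    vim.vm.device.VirtualVmxnet2: "VMXNET2 (paravirtualized)",
    vim.vm.device.VirtualVmxnet3: "VMXNET3 (paravirtualized, guest sees 10 Gbps)",
}

def list_vm_nics(vm):
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualEthernetCard):
            label = NIC_TYPES.get(type(dev), type(dev).__name__)
            print(f"{vm.name}: {dev.deviceInfo.label} -> {label}")
```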

Bitch Stewie
Dec 17, 2011
Anyone ever had any experience with Stratus Avance?

Bitch Stewie
Dec 17, 2011
I don't really get that. Why would you pay for vSphere Standard but not buy vCenter?

Bitch Stewie
Dec 17, 2011
Whatever you do, be sure to check your current licenses to confirm you don't have vCenter licenses.

Essentials is a fantastic value bundle but you don't get vMotion.

Essentials Plus does come with vMotion but it's probably more expensive than adding vCenter Foundation to what you already have.

Bitch Stewie
Dec 17, 2011
Does it work properly now?

Data Recovery had a reputation for being totally broken when it came out, didn't it?

Bitch Stewie
Dec 17, 2011
The words "Mission Critical" and cheap NAS or SAN don't sit too well with me.

What I would look at is a pair of solid boxes with RAID10 DAS and, depending on budget, either run a VSA on them (HP P4000 or VMware VSA) or use the DAS with Veeam doing replication from one box to the other - you don't get HA, but if the box running the critical VMs shits itself you just fire up the replica on the second box.

I'd be wary of dropping in a cheap NAS box as IMO you're combining the worst of all worlds in that you'll probably have it hooked up to two cheap switches, and most of the cheap NAS vendors don't do proper on-site support like HP or Dell would.

Bitch Stewie
Dec 17, 2011
My comments were more at the idea of using a Thecus/Synology/Netgear level of NAS for your mission critical VMs.

If they offer 4-hour onsite swap-out where you are then fair enough; I don't think they do, though.

I'm not sure I'd consider a $1500 box running Windows Storage Server to be more solid than a pair of boxes running a synchronous SAN/NAS either - actually I am sure, I wouldn't, it's a huge SPOF.

Bob needs to evaluate his entire environment. I've seen so many people rush in and stick a SAN in because hey, it's a SAN, it's redundant everything, right? Then the broom closet goes up in flames and you've lost everything you had, because without dropping a lot of money you only have a single SAN.

That's where the VSAs come in handy for offering high availability and redundancy, assuming you have the physical infrastructure to make use of them.

Really my main point is don't stick your mission critical VMs on a prosumer NAS box.

Bitch Stewie
Dec 17, 2011
VMware's VSA isn't the only option if you want high availability.

The P4000 VSA will do all the things the physical product does.

$8k gets you two nodes in a cluster.

Drop it on some DAS and you can lose a node and the thing will keep ticking.

Split the nodes and stretch the cluster and you can lose a room, maybe even a site.

The VSAs are perfect where you don't need a true hardware SAN but do want the feature set, or when you're concerned that with a single SAN you still have a SPOF in the SAN itself (you're probably more likely to lose it to human error than to the SAN failing).

Bitch Stewie fucked around with this message at 18:32 on Apr 30, 2012

Bitch Stewie
Dec 17, 2011
@bull3964 - A question for you - why do you need a SAN?

Let's say you just run that VM on DAS and use Veeam to replicate the VM every hour.

If your live server dies, could you just fire up the VM on the replica server?

It's cheap, and at worst you're back one hour in time.

Even if you go and spunk $40k on a single SAN tomorrow, you still need to cater for how you're going to back up and recover your VMs if the SAN fucks up.

I guess I see too many people equate high availability with owning a single mystical expensive SAN :)

Bitch Stewie
Dec 17, 2011
How do you handle segregating traffic that all ends up on different VLANs in the same physical switch?

The obvious solution is a single trunk of 3 or 4 NICs with port group VLAN tagging.

I guess I'm curious under what circumstances you'd use separate vSwitches?

The reason I ask is that we've just started overhauling our network and we are now VLANing, so right now I have a couple of vSwitches where the pNICs go into the same physical switch, and I'm trying to work out if there are any good reasons to leave it that way.
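For reference, this is roughly what collapsing things down to one trunked vSwitch looks like through the API - adding VLAN-tagged port groups to an existing standard vSwitch on a host via pyVmomi. The vSwitch name, port group names and VLAN IDs below are made-up placeholders:

```python
# Rough sketch: add VLAN-tagged port groups to an existing standard vSwitch
# on one host via pyVmomi. vSwitch name, port group names and VLAN IDs are
# made-up placeholders.
from pyVmomi import vim

def add_tagged_portgroups(host, vswitch_name, portgroups):
    """host: vim.HostSystem; portgroups: {portgroup_name: vlan_id}"""
    net_sys = host.configManager.networkSystem
    for pg_name, vlan_id in portgroups.items():
        spec = vim.host.PortGroup.Specification(
            name=pg_name,
            vlanId=vlan_id,                  # 0 = untagged, 1-4094 = VLAN tag, 4095 = guest tagging
            vswitchName=vswitch_name,
            policy=vim.host.NetworkPolicy())
        net_sys.AddPortGroup(portgrp=spec)

# e.g. add_tagged_portgroups(host, "vSwitch0", {"VM-VLAN10": 10, "VM-VLAN20": 20})
```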

Bitch Stewie fucked around with this message at 22:11 on May 5, 2012

Bitch Stewie
Dec 17, 2011

adorai posted:

We use a separate vSwitch for management traffic. More or less a nice safety net in case someone fucks up and makes a bad change on the primary dvSwitch.

Oh yeah sorry, I already have that on dumb access uplinks and that will not change as the "someone" who fucks up would be me :)

I'm asking purely about VM traffic destined for different VLANs.

Bitch Stewie
Dec 17, 2011

skipdogg posted:

So we're going to be replacing some older infrastructure stuff, and we're trying to virtualize everything. I'm really only planning on virtualizing about 4 physical servers, none with any kind of heavy IO load at all. Think basic Windows stuff: file/print share, DC, low-usage IIS and WSUS. We're an HP shop, so am I off base in thinking 2x DL360s with a LeftHand iSCSI SAN is a good starting point? We can tweak the actual specifications of the hardware such as cores and RAM.

We do have some larger EMC SANs so EMC isn't out of the question, but I'm not familiar with their smaller offerings.

Are you looking at physical LeftHand or the VSA? The VSA may be worth a look.

Bitch Stewie
Dec 17, 2011

liveify posted:

What would be an example of an entry level SAN for a couple servers?

Dell MD series or HP P2000 or various Drobo/ReadyNAS level devices depending on whether you want SAN or NAS and the version of vSphere.

Keep in mind the VSA (VMware or P4000) can also give you clustering/redundancy over and above what an entry level SAN will give you, even one with dual controllers.

Bitch Stewie
Dec 17, 2011

Internet Explorer posted:

I haven't heard of anyone using a Drobo in production

I have, and believe me it wasn't a suggestion :)

Bitch Stewie
Dec 17, 2011

DrOgdenWernstrom posted:

If we got an MD3200i dual controller, 2x Dell PowerConnect 6248s (I know, not the greatest, but we can get them cheap and we don't have that much traffic or layer 3), and the Essentials Plus kit, we could be pretty good with redundancy I think.

Just work out what you want to make yourself redundant against. Respectfully, a lot of people go and buy a dual-controller SAN thinking it's the answer to any and all redundancy problems - it still leaves you with all your eggs in one basket and open to the biggest cause of failure, which is human error.

Also don't forget that if you want to replicate you need to buy another one - with the VSAs you can "simply" stretch the storage between rooms or floors or sites, depending on your bandwidth and latency.

Bitch Stewie
Dec 17, 2011

Syano posted:

The proper way to do risk management with your gear purchases is to calculate likelihood of occurrence and impact from occurrence for potential risks then mitigate the ones that make sense.

Agreed.

The problem is that I see many SMBs (I did it myself with our first SAN) who do none of this and simply assume that a SAN is the solution to a pretty poorly defined problem.

It's very easy to buy something cheap and then, a few months later when the need changes, find that either you point blank can't do it, or you can but you're going to get bent over (cheap EMCs and NetApps leap to mind).

Bitch Stewie
Dec 17, 2011

Because despite its flaws, and there are many, there's not much else out there that will give you vendor supported clustered storage for $5k.

I'm curious how many of those flaws are hard-and-fast limits and how many are defaults put in place to stop SMBs doing retarded things, but which could be overridden.

Bitch Stewie
Dec 17, 2011

Docjowles posted:

I'm gearing up for this... my boss feels vehemently that shared storage introduces a single point of failure, and would prefer to buy boxes jammed full of 15k SAS drives mirroring each other with DRBD. Nevermind that we already pay for a DR site that we can fail over to at a moment's notice in the unlikely event that both controllers in the SAN go unreachable (across diverse paths) at the exact same instant.

I don't think he's so set against it that I can't make a good case, it's just not an argument I expected to have in 2012.

I'd prefer clustering/scale-out given the physical capability to do so (which we have, so we do).

Plus you can do some very funky stuff with DAS and either bare metal or VSA type clustering, and usually for much less money than a traditional hardware SAN + controllers.

Bitch Stewie
Dec 17, 2011

complex posted:

Just deployed the latest version (build 759855) in my lab yesterday and it is working fine for me. Since the database is on the same disk, just make sure you give it plenty of IOPS. I've also deployed it in Workstation. If you do that, make sure to allocate less RAM to the VM to make absolutely sure you do not start heavy swapping; I gave it 2.5GB on my 4GB laptop and it ran surprisingly well.

I'm assuming it still doesn't include update manager?

Bitch Stewie
Dec 17, 2011

vty posted:

Can anyone assist me with understanding the best use cases for the VSA? I'm trying to get past the marketing mumbo jumbo on VMware.com. Specifically, I have a few customers that utilize on-host storage between several (3, 5, 20) ESXi hosts and I think that this might be prime real estate to utilize the VSA.

Do the nodes have to be completely wiped with VMFS configured on the "disks"? Or can currently running ESXi servers be moved into the VSA cluster?

Also, does this come with any form of Essentials licensing?

edit: best/past

It comes with Essentials Plus now. If you're looking at VSAs, also look at the HP P4000, which is iSCSI (so SAN) whilst VMware's is NFS (so NAS).

Personally I like the concept a lot since, if you do it right, there is no single point of failure, and if your environment supports it you can stretch a synchronous cluster, which is something you can't do with any entry-level hardware SAN/NAS that I know of.

Bitch Stewie
Dec 17, 2011

Moey posted:

There is some logic to the idea, but it has too many holes to really ever be relied on. If the building is on fire, do I really want to lug out a 10 bay NAS with me?

Seriously, this is your boss's idea of a loving backup plan, "Grab the NAS Moey, grab the NAS"? :D

Bitch Stewie
Dec 17, 2011

stevewm posted:

What about the virtual storage appliance? Looking at the docs on VMware's site, that appears to take care of the shared storage issue... I'm just wondering if anyone here is using it.

I would really like to have at least 2 hosts, but the expense of a NAS might just be too much, at least for me to get approved anyways.

Might be worth looking at the HP StoreVirtual as well. Slightly more expensive but more features and flexibility IMO.

Bitch Stewie
Dec 17, 2011
Is anyone using Nutanix or SimpliVity or any other converged vendors for their vSphere infrastructure?

Bitch Stewie
Dec 17, 2011

TwoDeer posted:

Happy to answer any questions you might have regarding Nutanix.

Well I suppose the basic question is what's your experience?

(and is this as an impartial end-user?)

Ditto if anyone's used SimpliVity, as that is looking like a better potential fit right now.

Bitch Stewie
Dec 17, 2011

TwoDeer posted:

Edited my original response to indicate that I do work for Nutanix. If you want to PM me and let me know your region/vertical I'd be happy to put you in touch with reference-able users. I was in one of the very first partner training classes they had about two years ago and have been watching them ever since. Anyway, not interested in merely shilling for them but if there are specific questions I'll gladly take a stab at it. Curious as to why Simplivity appears to be a better fit, if you're able to elaborate.

Disclosure appreciated :) We've only just started speaking to a reseller, so depending on what comes back we'll be talking to Nutanix.

I think what I'm trying to understand, and where I think SimpliVity appears a better fit for us, is what you do if you essentially want a very basic 2-node cluster but want to stretch it out a bit?

With Nutanix I think you're looking at a minimum of 4 nodes (a minimum of 2 "bricks" at 2 nodes a brick, so 8 processors?), which is a pretty big deal at SMB/SME size simply in vSphere/Windows licensing terms, let alone hardware.

SimpliVity appear to need a minimum of 2 "cubes", and you'd think that if they're both on the same 1ms-latency L2 subnet that shouldn't be an issue - again, I need to see what comes back to decide whether it's a waste of our time and theirs to talk more.

The reason for this is a total environment refresh: I'm thinking that if we're replacing servers (only 2 hosts right now) and SAN storage (approx. 20TB usable capacity), and whatever we buy has to last 3 years, it's the right time to at least investigate converged.

Regardless of vendor there is an appeal in simply buying a couple of "bricks" that do everything, but equally there's caution about having to buy more compute capacity if all we need is a little more storage.

Bitch Stewie
Dec 17, 2011

TwoDeer posted:

How does a 2-node cluster handle quorum? To that end, the smallest cluster currently supported from Nutanix consists of 3 nodes which would leave an expansion slot for most of our "blocks" (that's the term for the 2U appliance, a block - a block will have 1 or more "nodes [converged compute/storage]" in it). So the node becomes the unit at which you scale the size of your infrastructure. On physical footprint alone the minimum configuration for Nutanix is 2U with 3 nodes while Simplivity appears to require 4U for only two nodes. We support vSphere, KVM and Hyper-V for the hypervisor layer while I believe Simplivity currently only integrates with vSphere.

However you decide to go, I think you're making the right move to look at converged infrastructure offerings.

Still waiting for a *lot* more info on both tbh.

I know 2-node systems have to handle split-brain, and if the system doesn't do it natively you'd hope there's a way to run a lightweight quorum as a VM on a MicroServer or something, so technically it shouldn't be an issue - but watch this space.

I think the fact we'd need a minimum of 6 Nutanix nodes to do any kind of DR between rooms/locations has just ruled it out though, simply on $$$ grounds, based on real-world pricing I saw on a couple of 6020s.

Bitch Stewie
Dec 17, 2011
Anyone running their vSphere stuff from HDS HUS storage?

Bitch Stewie
Dec 17, 2011

Cidrick posted:

Where's the best place to start troubleshooting high DAVG latency against a datastore? We have identical hosts with identical FC connectivity to a Hitachi-based SAN, one living on Hitachi VSP and another on Hitachi HUS, and while we get fantastic <1ms guest disk service times against VSP, we get 5-10ms guest service times that spikes into 100+ms latency under very little IO load. esxtop shows the KAVG as nearly 0 and it's the DAVG that spikes into the hundred-millisecond range at random, which VMware says is on the driver side and on down, potentially being a misconfig with your HBAs or at the SAN layer.

Is it worth trying to tune on the VMware side (LUN queue depth, etc) first, or do I need to blackmail my storage guy to open a case with Hitachi? I don't have visibility into the SAN side of things, so I want to have my ducks in a row before I try to blame everything on Hitachi.

I don't know enough to give a very useful suggestion, but we're looking at HDS HUS right now and they do a product called "Tuning Manager" which seems to give reports on absolutely anything/everything imaginable, to help ascertain what's happening at the array level.
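That said, I'd want the numbers in hand before pointing at the array - something like this pyVmomi sketch pulls the recent device/kernel disk latency counters (which map roughly to esxtop's DAVG and KAVG) for a host from vCenter's performance manager. The host object and connection come from the usual container-view lookup; everything named here is a placeholder:

```python
# Rough sketch: pull recent device/kernel disk latency samples for one host
# from vCenter's performance manager via pyVmomi. disk.deviceLatency and
# disk.kernelLatency correspond roughly to esxtop's DAVG and KAVG.
from pyVmomi import vim

WANTED = ("disk.deviceLatency.average", "disk.kernelLatency.average")

def recent_disk_latency(si, host):
    perf = si.RetrieveContent().perfManager
    # Build a "group.counter.rollup" -> counterId lookup
    by_name = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
               for c in perf.perfCounter}
    id_to_name = {v: k for k, v in by_name.items()}
    metric_ids = [vim.PerformanceManager.MetricId(counterId=by_name[name], instance="*")
                  for name in WANTED if name in by_name]
    spec = vim.PerformanceManager.QuerySpec(entity=host, metricId=metric_ids,
                                            intervalId=20,   # 20-second real-time samples
                                            maxSample=15)
    for result in perf.QueryPerf(querySpec=[spec]):
        for series in result.value:
            samples = list(series.value) or [0]
            # These latency counters are reported in milliseconds
            print(f"{host.name} {id_to_name.get(series.id.counterId, series.id.counterId)} "
                  f"[{series.id.instance or 'aggregate'}]: peak {max(samples)} ms "
                  f"over last {len(samples)} samples")
```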

Bitch Stewie
Dec 17, 2011

NullPtr4Lunch posted:

If you want that, make sure you get the license for it. Seems like everything Hitachi sells is separately licensed...

True, but it's around a grand, so if someone has a VSP but doesn't have it I'd be... surprised :)

Bitch Stewie
Dec 17, 2011

mattisacomputer posted:

For actual production use, vSAN is definitely demanding. I think it was in this thread, just a few weeks/months ago, that someone had a catastrophic vSAN failure: a SAS controller couldn't keep up with a basic amount of vSAN I/O, even though the controller was specifically on the vSAN HCL. Home lab though? Yeah, probably any ol' 6Gbps SAS controller will do.

They had a Dell PERC H310 IIRC, and afterwards VMware removed it from the HCL and basically said "We screwed up and put some stuff on there that's not good enough".

Bitch Stewie
Dec 17, 2011

Dilbert As gently caress posted:

Is anyone here actually considering EVO:RAIL for branch offices?

I'm indifferent about them after probing their architecture.

Given they start from around $150K what sort of branch offices do you have? :)
