|
How do you all do hardware monitoring of your HP and Dell (and others) vSphere hosts? The hardware status in vCenter/VI Client will show you basic status, but one thing I always liked about HP servers was that they had a Windows agent that would simply email you if you lost a PSU or hard drive or something. What's the recommended option for VMware without having to stand up a dedicated monitoring server with the HP software and the Dell software etc.?
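Roughly, this is the logic I'm after - a minimal Python sketch of the poll-and-alert idea. The sensor list, host name and email addresses are made-up placeholders; a real script would pull the statuses from the host's health/CIM provider (e.g. via pyVmomi) rather than the hard-coded list here:

```python
import smtplib
from email.message import EmailMessage

def find_failed_sensors(sensors):
    """Return sensors whose status is anything other than 'Green'.

    `sensors` is a list of (name, status) tuples, e.g. pulled from the
    host's hardware health provider.
    """
    return [(name, status) for name, status in sensors if status != "Green"]

def build_alert(host, failed):
    """Compose (but don't send) an alert email for the failed sensors."""
    msg = EmailMessage()
    msg["Subject"] = f"[{host}] {len(failed)} hardware sensor(s) not Green"
    msg["From"] = "vsphere-alerts@example.com"   # placeholder address
    msg["To"] = "admins@example.com"             # placeholder address
    msg.set_content("\n".join(f"{name}: {status}" for name, status in failed))
    return msg

# Simulated sensor readings -- a real script would query the host here.
sensors = [("Power Supply 1", "Green"), ("Power Supply 2", "Red"),
           ("Disk Bay 3", "Yellow")]
failed = find_failed_sensors(sensors)
if failed:
    alert = build_alert("esx01", failed)
    # smtplib.SMTP("mailhost.example.com").send_message(alert)  # uncomment to send
    print(alert["Subject"])
```

Run it from cron on a management box and you get the "email me when a PSU dies" behaviour without a dedicated monitoring server.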
|
# ¿ Feb 25, 2012 11:59 |
|
Can I get some guidance on how I should configure HA with a two-host cluster? A host had a hardware issue and I saw "insufficient resources to satisfy HA failover" in vCenter where I'd have expected the other host to try and take over. I'm struggling with the admission control settings on the properties of the cluster. The hosts are identical and each has sufficient RAM/CPU to run all of our VMs, and each does so during maintenance/patch windows. We use vSphere Standard and so don't have DRS, nor do we use reservations with our VMs.
|
# ¿ Feb 26, 2012 14:51 |
|
Erwin posted:With a two-host cluster that is your only cluster, you should probably just have Admission Control disabled. Its purpose is to prevent you from powering up too many VMs on a cluster, but the idea is that you would just power it up on another cluster. Which I assume simply means "fail the lot over and to hell with the consequences", so we just need to keep an eye on resources (which, having just two hosts, we obviously do anyway)?
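For anyone else puzzling over the error above, here's a crude model of percentage-based admission control. This is a simplification, not VMware's actual slot-size algorithm (which also derives slot sizes from reservations), but it shows why a power-on can be refused even when a host has free RAM:

```python
def can_power_on(total_capacity_mb, reserved_pct, used_mb, new_vm_mb):
    """Simplified percentage-based admission control check.

    Reject a power-on if it would eat into the capacity reserved
    for HA failover. A toy model of the real behaviour.
    """
    usable_mb = total_capacity_mb * (1 - reserved_pct / 100)
    return used_mb + new_vm_mb <= usable_mb

# Two identical 64 GB hosts, reserving 50% so one whole host can fail:
total_mb = 2 * 64 * 1024
print(can_power_on(total_mb, 50, used_mb=60 * 1024, new_vm_mb=4 * 1024))  # True
print(can_power_on(total_mb, 50, used_mb=62 * 1024, new_vm_mb=4 * 1024))  # False
```

Disabling admission control removes that check entirely, which is why you then have to watch resource usage yourself.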
|
# ¿ Feb 26, 2012 16:14 |
|
cheese-cube posted:You can get vCenter 5.0 as a virtual appliance now, however there are some limitations to it (off the top of my head: can only use its embedded database, doesn't support Linked Mode configuration, doesn't support IPv6). From what I saw the biggest ball ache is that it doesn't include VUM.
|
# ¿ Feb 29, 2012 19:29 |
|
Alctel posted:Ok, so backing up a vSphere 5.0 environment, 2 ESXi servers, ~20 machines. PHD, Veeam, and possibly Unitrends are worth a look. Commvault is good poo poo, but as others have said it may be (almost certainly is) overkill for just a pair of hosts unless you're getting a bundle or something that makes it a good deal financially. Of course where it excels is that it supports pretty much anything/everything else, whilst PHD and Veeam are almost entirely focused on VMware. Unitrends is an interesting product IMO. Seems to get good feedback and has a nice licensing model.
|
# ¿ Mar 13, 2012 19:19 |
|
So with just a pair of vSphere Standard hosts running 4.1 U2, if/when we want to go to 5.0, is it worth upgrading or is it just as simple to start over, since other than the network and storage config there isn't that much to configure on each box? I know it's ridiculous to compare vSphere to Windows, but as I'm predominantly a Windows admin I still have this "Upgrades = Bad" mentality which is difficult to shake. Also the boxes are HP and right now run the regular vSphere build from the VMware ISOs - if I do rebuild I'd use the HP ISOs available from the VMware site.
|
# ¿ Mar 22, 2012 19:38 |
|
Thanks, and yeah I know what the process is, but so is putting a Windows 7 disc in a Vista machine and hitting upgrade - what you end up with isn't ideal though IYSWIM, and I didn't know if there's any parallel with vSphere to have to think about?
|
# ¿ Mar 22, 2012 19:46 |
|
Misogynist posted:That's why you give them a budget up front. I used to do this with recruiters all the time and it saved me a lot of heartache. I have to ask, why would you give any vendor your budget up front? My personal opinion is that if you have $20k to spend on widgets, and you tell a vendor that you have $20k to spend on widgets, they'll come up with a solution that costs $20k. Personally I prefer to work on the solution, with the obvious caveat that you both need to know the solution is going to come back in the ballpark. I had to laugh when we bought our SAN. List is/was around $58k. The special bid came back at $43k. They sent us an evaluation unit that was supposed to come from an eval pool, but they had none so sent a brand new sealed unit. After three weeks we were told we could have it at $23k "to save the hassle of collecting it". I wanted to tell them to gently caress off out of general principle, but we smiled sweetly, said yes please and bought it.
|
# ¿ Apr 12, 2012 18:14 |
|
Could someone help me out with a little clarification on how vNICs and vSwitches interact with one another please? Say I have a vSwitch and a pair of VMs, each with a vNIC connected to the vSwitch. Ignoring any OS limitations, what is the maximum throughput I could get between the two servers via their vNICs, since the traffic is all within the same vSwitch? I'm assuming the type/model of vNIC plays a part, but I've never been 100% clear on the differences - AIUI Flexible and E1000 are 1Gbps and VMXNET is 10Gbps?
|
# ¿ Apr 17, 2012 21:33 |
|
Misogynist posted:Ignore the speed reported to the OS, it's not relevant to anything. Any of the emulated vNICs (E1000, etc.) will perform as quickly as the server can emulate them, but the server will run them through the vSwitch. The paravirtualized vNICs communicate directly with the hypervisor, and in the case of VMXNET3, can ferry data between VMs at the speed of shared memory. Thanks for the reply. This is what's not so clear to me so far. So if I use, say, an E1000, you're saying the OS might report a 1Gbps NIC but it can actually transfer data in/out of the vSwitch at greater than 1Gbps? I'm assuming that by the time I get to using vSphere for the application I have in mind, the issue I read of with the VMXNET driver and that application will have been ironed out, and that I'll have 10Gbps pNIC connectivity in/out of the vSwitch - I just wasn't clear how the intra-vSwitch traffic works.
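One way to settle this sort of question is to measure rather than trust the reported link speed. In a real setup you'd run something like iperf between the two VMs; this self-contained sketch just demonstrates the principle over loopback (both ends on one machine), where the "link" also has no meaningful advertised speed:

```python
import socket
import threading
import time

PAYLOAD = b"x" * (1 << 20)   # 1 MiB chunk
CHUNKS = 16                  # 16 MiB total -- small, just to illustrate

def run_server(sock, result):
    """Accept one connection and count every byte received."""
    conn, _ = sock.accept()
    received = 0
    while True:
        data = conn.recv(1 << 16)
        if not data:
            break
        received += len(data)
    conn.close()
    result["bytes"] = received

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
result = {}
t = threading.Thread(target=run_server, args=(server, result))
t.start()

client = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
for _ in range(CHUNKS):
    client.sendall(PAYLOAD)
client.close()
t.join()
elapsed = time.perf_counter() - start
server.close()

mbps = result["bytes"] * 8 / elapsed / 1e6
print(f"transferred {result['bytes']} bytes at {mbps:.0f} Mbit/s")
```

The measured figure is whatever the CPU and memory can push, which is exactly the point being made about emulated vNICs and intra-vSwitch traffic.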
|
# ¿ Apr 17, 2012 21:44 |
|
Anyone ever had any experience with Stratus Avance?
|
# ¿ Apr 19, 2012 18:35 |
|
I don't really get that. Why would you pay for vSphere Standard but not buy vCenter?
|
# ¿ Apr 27, 2012 18:59 |
|
Whatever you do, be sure to check your current licenses to confirm you don't have vCenter licenses. Essentials is a fantastic value bundle but you don't get vMotion. Essentials Plus does come with vMotion but it's probably more expensive than adding vCenter Foundation to what you already have.
|
# ¿ Apr 27, 2012 19:51 |
|
Does it work properly now? Data Recovery had a reputation for being totally broken when it came out, didn't it?
|
# ¿ Apr 28, 2012 23:11 |
|
The words "mission critical" and "cheap NAS or SAN" don't sit too well together with me. What I would look at is a pair of solid boxes with RAID10 DAS and, depending on budget, either run a VSA on them (HP P4000 or VMware VSA) or use the DAS with Veeam doing replication from one box to the other - you don't get HA, but if the box running the critical VMs shits itself you just fire up the replica on the second box. I'd be wary of dropping in a cheap NAS box as IMO you're combining the worst of all worlds, in that you'll probably have it hooked up to two cheap switches, and most of the cheap NAS vendors don't do proper on-site support like HP or Dell would.
|
# ¿ Apr 29, 2012 09:56 |
|
My comments were aimed more at the idea of using a Thecus/Synology/Netgear level of NAS for your mission-critical VMs. If they offer 4-hour onsite swap-out where you are then fair enough; I don't think they do though. I'm not sure I'd consider a $1500 box running Windows Storage Server to be more solid than a pair of boxes running a synchronous SAN/NAS either - actually I am sure: I wouldn't, it's a huge SPOF. Bob needs to evaluate his entire environment. I've seen so many people rush in and stick a SAN in because hey, it's a SAN, it's redundant everything, right? Then the broom closet goes up in flames and you've lost everything you had, because without dropping a lot of money you only have a single SAN. That's where the VSAs come in handy for offering high availability and redundancy, assuming you have the physical infrastructure to make use of them. Really my main point is don't stick your mission-critical VMs on a pro-sumer NAS box.
|
# ¿ Apr 30, 2012 06:25 |
|
VMware's VSA isn't the only option if you want high availability. The P4000 VSA will do all the things the physical product does. $8k gets you two nodes in a cluster. Drop it on some DAS and you can lose a node and the thing will keep ticking. Split the nodes and stretch the cluster and you can lose a room, maybe even a site. The VSAs are perfect where you don't need a true hardware SAN but do want the feature set, or when you're concerned that with a single SAN you still have a SPOF in the SAN itself (you're probably more likely to lose it to human error than to the SAN failing). Bitch Stewie fucked around with this message at 18:32 on Apr 30, 2012 |
# ¿ Apr 30, 2012 18:21 |
|
@bull3964 - A question for you - why do you need a SAN? Let's say you just run that VM on DAS and use Veeam to replicate the VM every hour. If your live server dies, could you just fire up the VM on the replica server? It's cheap, and you're back with at most an hour of data lost. Even if you go and spunk $40k on a single SAN tomorrow, you still need to cater for how you're going to back up and recover your VMs if the SAN fucks up. I guess I see too many people equate high availability with owning a single mystical expensive SAN.
|
# ¿ Apr 30, 2012 19:59 |
|
How do you handle segregating traffic that all ends up on different VLANs in the same physical switch? The obvious solution is a single trunk of 3 or 4 NICs using port group VLAN tagging. I guess I'm curious under what circumstances you'd use separate vSwitches? The reason I ask is that we've just started overhauling our network and we are now VLANing, so right now I have a couple of vSwitches where the pNICs go into the same physical switch, and I'm trying to work out if there are any good reasons to leave it that way. Bitch Stewie fucked around with this message at 22:11 on May 5, 2012 |
# ¿ May 5, 2012 22:09 |
|
adorai posted:We use a separate vswitch for management traffic. More or less a nice safety net in case someone fucks up and makes a bad change on the primary dvswitch. Oh yeah, sorry - I already have that on dumb access uplinks and that won't change, as the "someone" who fucks up would be me. I'm asking purely about VM traffic destined for different VLANs.
|
# ¿ May 5, 2012 22:19 |
|
skipdogg posted:So we're going to be replacing some older infrastructure stuff, and we're trying to virtualize everything. I'm really only planning on virtualizing about 4 physical servers, none with any kind of heavy IO load at all. Think basic Windows stuff: file/print share, DC, low usage IIS and WSUS. We're a HP shop, so am I off base in thinking 2x DL360's with a LeftHand iSCSI SAN is a good starting point? We can tweak the actual specifications of the hardware such as cores and RAM. Are you looking at LeftHand physical or the VSA? The VSA may be worth a look.
|
# ¿ May 9, 2012 18:44 |
|
liveify posted:What would be an example of an entry level SAN for a couple servers? Dell MD series or HP P2000 or various Drobo/ReadyNAS level devices depending on whether you want SAN or NAS and the version of vSphere. Keep in mind the VSA (VMware or P4000) can also give you clustering/redundancy over and above what an entry level SAN will give you, even one with dual controllers.
|
# ¿ Jun 4, 2012 16:57 |
|
Internet Explorer posted:I haven't heard of anyone using a Drobo in production. I have, and believe me, it wasn't a suggestion
|
# ¿ Jun 4, 2012 17:38 |
|
DrOgdenWernstrom posted:If we got a MD3200i Dual controller, 2x Dell PowerConnect 6248's (I know, not the greatest but we can get them cheap and we don't have that much traffic or layer3), and the essentials plus kit, we could be pretty good with redundancy I think. Just work out what you want to make yourself redundant against. Respectfully, a lot of people go and buy a dual controller SAN thinking it's the answer to any and all redundancy problems - it still leaves you with all your eggs in one basket, and open to the biggest cause of failure, which is human error. Also don't forget that if you want to replicate you need to buy another one - with the VSAs you can "simply" stretch the storage between rooms or floors or sites, depending on your bandwidth and latency.
|
# ¿ Jun 5, 2012 08:54 |
|
Syano posted:The proper way to do risk management with your gear purchases is to calculate likelihood of occurrence and impact from occurrence for potential risks then mitigate the ones that make sense. Agreed. The problem is that I see many SMBs (I did it myself with our first SAN) who do none of this and simply assume that a SAN is the solution to a pretty poorly defined problem. It's very easy to buy something cheap and then a few months later, when the need changes, find that either you point blank can't do it, or you can but you're going to get bent over (cheap EMCs and NetApps leap to mind).
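For what it's worth, the likelihood-times-impact exercise Syano describes can be as simple as this - the risks and 1-5 scores below are made-up examples, not a recommendation:

```python
def rank_risks(risks):
    """Score each risk as likelihood * impact and sort highest first.

    `risks` is a list of (name, likelihood, impact) with scores of 1-5.
    """
    scored = [(name, likelihood * impact) for name, likelihood, impact in risks]
    return sorted(scored, key=lambda r: r[1], reverse=True)

# Illustrative scores only -- plug in your own environment's numbers.
risks = [
    ("Human error deletes a LUN",     4, 5),
    ("Single SAN chassis failure",    1, 5),
    ("Switch failure (stacked pair)", 2, 2),
    ("Single disk failure (RAID10)",  3, 1),
]
for name, score in rank_risks(risks):
    print(f"{score:>2}  {name}")
```

With even made-up numbers like these, human error outranks the hardware failures the shiny dual-controller SAN protects against - which is the whole point of doing the exercise before buying.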
|
# ¿ Jun 5, 2012 16:36 |
|
Corvettefisher posted:http://www.windowsitpro.com/article/server-virtualization/limitations-vmware-vsphere-storage-appliance-141252 Because despite its flaws, and there are many, there's not much else out there that will give you vendor-supported clustered storage for $5k. I'm curious how many of those flaws are hard-and-fast limits and how many are defaults put in place to stop SMBs doing retarded things, but that could be overridden.
|
# ¿ Jul 19, 2012 20:50 |
|
Docjowles posted:I'm gearing up for this... my boss feels vehemently that shared storage introduces a single point of failure, and would prefer to buy boxes jammed full of 15k SAS drives mirroring each other with DRBD. Nevermind that we already pay for a DR site that we can fail over to at a moment's notice in the unlikely event that both controllers in the SAN go unreachable (across diverse paths) at the exact same instant. I'd prefer clustering/scale-out where I have the physical capability to do so (which we have, so we do). Plus you can do some very funky stuff with DAS and either bare-metal or VSA-type clustering, and usually for much less money than a traditional hardware SAN plus controllers.
|
# ¿ Jul 28, 2012 08:09 |
|
complex posted:Just deployed the latest version (build 759855) in my lab yesterday and it is working fine for me. Since the database is on the same disk, just make sure you give it plenty of IOPS. I've also deployed it in Workstation. If you do that make sure to allocate less RAM to the VM to make absolutely sure you do not start heavy swapping; I gave it 2.5GB on my 4GB laptop and it ran surprisingly well. I'm assuming it still doesn't include Update Manager?
|
# ¿ Aug 1, 2012 18:35 |
|
vty posted:Can anyone assist me with understanding the best use cases for the VSA? I'm trying to get past the marketing mumbo jumbo on Vmware.com. Specifically I have a few customers that utilize on-host storage between several (3, 5, 20) ESXi hosts and I think that this might be prime real estate to utilize VSA. It comes with Essentials Plus now. If you're looking at VSAs, also look at the HP P4000, which is iSCSI (SAN) whilst VMware's is NFS (NAS). Personally I like the concept a lot, since if you do it right there is no single point of failure, and if your environment supports it you can stretch a synchronous cluster, which is something you can't do with any entry-level hardware SAN/NAS that I know of.
|
# ¿ Oct 6, 2012 14:47 |
|
Moey posted:There is some logic to the idea, but it has too many holes to really ever be relied on. If the building is on fire, do I really want to lug out a 10 bay NAS with me? Seriously, this is your boss's idea of a loving backup plan: "Grab the NAS Moey, grab the NAS"?
|
# ¿ Dec 13, 2012 11:12 |
|
stevewm posted:What about the virtual storage appliance? Looking at the docs on VMware's site, that appears to take care of the shared storage issue... I'm just wondering if anyone here is using it. Might be worth looking at the HP StoreVirtual as well. Slightly more expensive, but more features and flexibility IMO.
|
# ¿ Jan 6, 2013 12:38 |
|
Is anyone using Nutanix or Simplivity or any of the other converged vendors for their vSphere infrastructure?
|
# ¿ Feb 22, 2014 13:11 |
|
TwoDeer posted:Happy to answer any questions you might have regarding Nutanix. Well I suppose the basic question is: what's your experience? (And is this as an impartial end-user?) Ditto if anyone's used Simplivity, as that is looking a better potential fit right now.
|
# ¿ Feb 23, 2014 12:01 |
|
TwoDeer posted:Edited my original response to indicate that I do work for Nutanix. If you want to PM me and let me know your region/vertical I'd be happy to put you in touch with reference-able users. I was in one of the very first partner training classes they had about two years ago and have been watching them ever since. Anyway, not interested in merely shilling for them but if there are specific questions I'll gladly take a stab at it. Curious as to why Simplivity appears to be a better fit, if you're able to elaborate. Disclosure appreciated. We've only just started speaking to a reseller, so depending what comes back we'll be talking to Nutanix. I think what I'm trying to understand, and where I think Simplivity appears a better fit for us, is what you do if you essentially want a very basic 2-node cluster but want to stretch it out a bit. With Nutanix I think you're looking at a minimum of 4 nodes (2 "bricks" minimum, at 2 nodes a brick, so 8 processors?), which is a pretty big deal at SMB/SME size simply in vSphere/Windows licensing terms, let alone hardware. Simplivity appear to need a minimum of 2 "cubes", and you'd think that if they're both on the same 1ms-latency L2 subnet that shouldn't be an issue - again, need to see what comes back to decide whether it's a waste of our time and theirs talking more. The reason for this is a total environment refresh, where I'm thinking that if we're replacing servers (only 2 hosts right now) and SAN storage (approx 20TB usable capacity), and whatever we buy has to last 3 years, it's the right time to at least investigate converged. Regardless of vendor there is an appeal in simply buying a couple of "bricks" that do everything, but equally there's caution about having to buy more compute capacity if all we need is a little more storage.
|
# ¿ Feb 23, 2014 18:21 |
|
TwoDeer posted:How does a 2-node cluster handle quorum? To that end, the smallest cluster currently supported from Nutanix consists of 3 nodes, which would leave an expansion slot for most of our "blocks" (that's the term for the 2U appliance, a block - a block will have 1 or more "nodes" [converged compute/storage] in it). So the node becomes the unit at which you scale the size of your infrastructure. On physical footprint alone the minimum configuration for Nutanix is 2U with 3 nodes while Simplivity appears to require 4U for only two nodes. We support vSphere, KVM and Hyper-V for the hypervisor layer while I believe Simplivity currently only integrates with vSphere. Still waiting for a *lot* more info on both tbh. I know 2-node systems can handle split-brain, and if a given system doesn't you'd hope there's a way to run a lightweight quorum node as a VM on a MicroServer or something, so technically it shouldn't be an issue - but watch this space. I think the fact we'd need a minimum of 6 Nutanix nodes to do any kind of DR between rooms/locations has just ruled it out though, simply on $$$ grounds based on real-world pricing I saw on a couple of 6020's.
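The split-brain worry boils down to vote counting: with exactly two nodes, a network partition leaves each side seeing half the votes, so neither can claim a majority - which is why a third lightweight witness/quorum vote fixes it. A toy model of the rule:

```python
def has_quorum(reachable_votes, total_votes):
    """A partition may keep serving only with a strict majority of votes."""
    return reachable_votes * 2 > total_votes

# Pure two-node cluster: a split leaves each side with 1 of 2 votes.
print(has_quorum(1, 2))  # False - neither side wins, so both must stop

# Two data nodes plus a lightweight witness VM: 3 votes total.
print(has_quorum(2, 3))  # True - the node that can see the witness carries on
print(has_quorum(1, 3))  # False - the isolated node fences itself off
```

Which is exactly the "quorum VM on a MicroServer" idea: the witness holds no data, it just breaks the tie.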
|
# ¿ Feb 24, 2014 13:53 |
|
Anyone running their vSphere stuff from HDS HUS storage?
|
# ¿ Apr 7, 2014 17:44 |
|
Cidrick posted:Where's the best place to start troubleshooting high DAVG latency against a datastore? We have identical hosts with identical FC connectivity to a Hitachi-based SAN, one living on Hitachi VSP and another on Hitachi HUS, and while we get fantastic <1ms guest disk service times against the VSP, we get 5-10ms guest service times on the HUS that spike into 100+ms latency under very little IO load. esxtop shows the KAVG as nearly 0 and it's the DAVG that spikes into the hundred-millisecond range at random, which VMware says is the driver side and below, potentially a misconfig with your HBAs or at the SAN layer. I don't know enough to give too useful a suggestion, but we're looking at HDS HUS right now and they do a product called "Tuning Manager" which seems to report on absolutely anything/everything imaginable to help ascertain what's happening at the array level.
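As a starting point for the troubleshooting itself: the esxtop counters relate roughly as GAVG ≈ DAVG + KAVG (device-side latency plus kernel queuing), so logging samples and flagging where DAVG alone blows out confirms the problem sits below the vmkernel, as VMware says. A sketch with made-up numbers:

```python
def classify_latency(samples, davg_threshold_ms=20.0):
    """For each esxtop-style sample, derive GAVG (~= DAVG + KAVG) and
    flag samples where device-side latency (DAVG) exceeds the threshold.
    """
    flagged = []
    for t, davg, kavg in samples:
        gavg = davg + kavg
        if davg > davg_threshold_ms:
            flagged.append((t, davg, gavg))
    return flagged

# (time, DAVG ms, KAVG ms) -- illustrative numbers only, not real data.
samples = [
    (0, 0.8, 0.1), (1, 6.2, 0.1), (2, 112.4, 0.2), (3, 7.9, 0.1),
]
for t, davg, gavg in classify_latency(samples):
    print(f"t={t}: DAVG={davg}ms (guest sees ~{gavg}ms) -- device-side spike")
```

If KAVG were the one spiking instead, you'd look at queue depths and the host side rather than the fabric/array.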
|
# ¿ Apr 23, 2014 18:03 |
|
NullPtr4Lunch posted:If you want that, make sure you get the license for it. Seems like everything Hitachi sells is separately licensed... True, but it's around a grand, so if someone has a VSP but doesn't have it I'd be... surprised
|
# ¿ Apr 24, 2014 17:40 |
|
mattisacomputer posted:For actual production use, vSAN is definitely demanding. I think it was in this thread but just a few weeks/months ago where someone had a catastrophic vSAN failure where a SAS controller couldn't keep up with a basic amount of vSAN i/o, even though the SAS controller was specifically on the vSAN HCL anyway. Home lab though? Yeah probably any ol' 6gbps SAS controller will do. They had a Dell PERC H310 IIRC, and afterwards VMware removed it from the HCL and basically said "We screwed up and put some stuff on there that's not good enough".
|
# ¿ Aug 19, 2014 19:35 |
|
|
Dilbert As gently caress posted:Is anyone here actually considering EVO:RAIL for branch offices? Given they start from around $150k, what sort of branch offices do you have?
|
# ¿ Nov 7, 2014 14:45 |