KS
Jun 10, 2003
Outrageous Lumpwad
There are a lot of ways around this, but the simplest, since it's already shut down, would be to just remove the RDM from the VM during the move. Choosing to remove and delete from disk doesn't delete data on an RDM, but it will remove that VMDK from the folder. You can re-add it after the move.

In terms of the operation in progress, I believe you can cancel that from the command line.
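
If you end up needing to, something like this from the ESXi shell should do it -- I haven't had to do it in a while, so treat the task ID below as a placeholder and this as a sketch:

code:
vim-cmd vimsvc/task_list                # find the stuck task's ID
vim-cmd vimsvc/task_cancel <task-id>    # only works on tasks that are actually cancelable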


KS
Jun 10, 2003
Outrageous Lumpwad
I think virtual is the way to go as long as you have some form of outside monitoring that can alert you if the whole cluster drops due to a storage or power issue.

If you're running a larger cluster, pin it to the first two hosts using DRS rules. That's pretty much what they were built for. As for RAM, I think 8GB for vCenter is the least of my problems.


KS
Jun 10, 2003
Outrageous Lumpwad
You probably just have a firewall blocking it or something. Bridged mode is what you should be using.

KS
Jun 10, 2003
Outrageous Lumpwad
Anyone with experience using vCloud Director? I have some architectural questions.

We're aiming for two datacenters. Datacenter A will have the production environment. It will be SAN replicated to Datacenter B for DR and we will use SRM for failover.

Separately, we spawn dev environments off of prod; it's currently a manual process. I know vCloud Director can snapshot production and stand up copies for development in Datacenter A. My question is: can it do the same from the replicated data in Datacenter B?

We think it'd be cool to have the DR cluster do double duty as the dev cluster, but we can't get a straight answer on whether vCloud Director can automatically stand up environments based on replicated data rather than the live production VMs.

KS
Jun 10, 2003
Outrageous Lumpwad
Never really expected to see anyone praise an AMS. We ditched an AMS2300 for Compellent; the UI was unresponsive and horrible, and I/O performance was really terrible. Have they improved SNM2? It was taking me ~60 seconds from clicking each link to the page loading.


Erwin posted:

So I'm starting to juggle data around so I can upgrade my datastores to VMFS5. Is there any reason I wouldn't want to combine, for instance, two 2TB LUNs/VMFSs/datastores that are on one RAID set into one 4TB LUN/VMFS/datastore?

If your storage does not have VAAI hardware-assisted locking, you want to be careful about the number of VMs on each datastore. If you do have VAAI, it's not a big deal.
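
If you're not sure whether your array actually does the hardware-assisted locking, you can check per device from the ESXi shell on 5.x -- something like this (the naa ID is obviously a placeholder); look for "ATS Status: supported":

code:
esxcli storage core device vaai status get -d naa.60012345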

KS
Jun 10, 2003
Outrageous Lumpwad
Twinax cable requirement is also listed on that page.

quote:

What are the SFP+ direct attach cable requirements for the Intel® Ethernet Server Adapter Series?

  • Any SFP+ passive or active limiting direct attach copper cable that complies with the SFF-8431 v4.1 and SFF-8472 v10.4 specifications
  • SFF-8472 identifier must have value 03h (SFP or SFP+); this value can be verified by the cable manufacturer
  • Maximum cable length for passive cables is 7 meters
  • Support for active cables requires Intel Network Connections software version 15.3 or later

Note that last bit, as we got burned by it on the X520s. To use a 10m active cable, you have to have the Windows-based Intel software installed or you get some very weird symptoms. We tried it on ESX and had to back off to 5m cables. The 10m active cables work fine with QLE8242s.

KS
Jun 10, 2003
Outrageous Lumpwad

Corvettefisher posted:

Domain controllers are not recommended to be virtualized. I can't say I have had problems with 2008 and later, but 2k3 has problems.

I think this one gets so much traction years on because they're not recommended to be P2V'd. Build a fresh virtual machine and dcpromo it rather than P2Ving it.


Misogynist posted:

Veeam vs. PHD

We were in the midst of a Veeam trial and based on your feedback here we gave PHD a try. You're right, it's absolutely better -- and it worked out to be about 60% of the cost, which was a nice bonus. We ended up buying enough to cover our production clusters entirely. Their sales guy should send you a gift basket.

We are backing up 110 VMs covering about 9 TB and the incrementals take <1 hour with a single 1-proc VBA. The initial full took about a day once we scaled the VBA up to 6 CPU/24GB.

KS fucked around with this message at 15:03 on Jun 18, 2012

KS
Jun 10, 2003
Outrageous Lumpwad
There should not be a performance difference between identically specced v7 and v8 VMs. What you gain is the ability to take advantage of new features, such as USB 3, hardware-accelerated Aero, and scaling VMs beyond 8 cores and 256GB RAM.

You just need to shut down and power on to take advantage of the new CPU mode -- you do not need to upgrade the hardware version.

KS
Jun 10, 2003
Outrageous Lumpwad

FISHMANPET posted:

So is there a reason to choose NFS datastores over something block based? It can be as good as block as far as I can tell, but it seems like it just started as an afterthought and snowballed from there, and when starting from scratch there's no reason to choose NFS if you have iSCSI or FC.

This seems really backwards -- up until VAAI leveled the playing field NFS was flat-out superior at scaling out, and the five biggest ESX deployments I saw while doing 3.5 and 4.0-era consulting all used NFS storage.

Performance is probably equal on 10g fabrics, but NFS just wins from a management perspective in my book. It's historically been easier to run NFS over a converged infrastructure. In some cases you save money by using 10 gig NICs instead of CNAs, and you don't have to worry about the three driver installs it takes to get a QLE8242 working in ESXi 5.

Even now I'd put NetApp running NFS up against anything I can think of for ease of management. Before 5.0 I was juggling ~45 2TB datastores. Not fun.
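
Part of why I say that: mounting a new NFS datastore is a one-liner per host -- no zoning, no rescan dance. A rough sketch of the 5.x commands (filer name and export path are made up):

code:
esxcli storage nfs add --host=filer01 --share=/vol/nfs_ds1 --volume-name=ds1
esxcli storage nfs list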

KS fucked around with this message at 16:22 on Jul 10, 2012

KS
Jun 10, 2003
Outrageous Lumpwad
In the UCS world I now live in, whether I choose FC, 10g iSCSI, or NFS, it gets plugged into an interface on each of a pair of Nexus switches in vPC mode. Since we no longer have completely independent fabrics, maybe it's less of a difference.

Doing your redundancy at layer 2 also means not having to manage MPIO on the physical servers, which is probably another point in favor of NFS for manageability.

KS
Jun 10, 2003
Outrageous Lumpwad
You can buy 4GB SD cards at Staples for $7. Dell's customized ESX image is downloadable as well, so the preinstall only saves you a few minutes, and Dell's SD cards are ridiculously expensive.

Not sure what swap space you're talking about. If you mean swap to host cache, you're better off with a single SSD, and you can forgo it entirely if you have headroom on the hosts. It's a fairly new feature and only kicks in for a situation you want to avoid anyway.

If you're talking about virtual machine swap files, and you store them separately from the VM for some reason, you want that location to be redundant and fast. However, you probably don't want to be doing this either, unless you're replicating the VMs and want to avoid replicating the swap file. Even then, you're better off putting swap on shared storage for vMotion purposes.



edit: That diagram represents 10g links? Is your storage iscsi-based? Putting storage, networking, and vmotion over the same wire requires some careful planning. Be sure to do your homework and get the right network adapters for your environment.

KS fucked around with this message at 20:09 on Jul 10, 2012

KS
Jun 10, 2003
Outrageous Lumpwad

FISHMANPET posted:

I've read that you can't join a server to an HA cluster unless it has local hypervisor swap

I think that's really outdated -- you had to turn it on back in 2.5. Nowadays, even on an SD card, it sets up a 1GB swap space on the card and joins fine, but god help you if it ever gets into a swapping situation. I have never run into that error from 4.0 until now.

e: vvv It'll probably never even be a problem.

KS fucked around with this message at 20:24 on Jul 10, 2012

KS
Jun 10, 2003
Outrageous Lumpwad
SIOC and NIOC become really important when running a converged network. You need to be able to throttle that vMotion traffic somehow or bad things happen. If you're considering plain-jane 10g NICs with this design and using iSCSI, you're setting yourself up for some pain.

You should strongly consider Enterprise Plus. The 4-host Enterprise Plus + vCenter bundle is ballpark $25k plus $6k/year for support.

I will be ripping the SD card out of a diskless running host in an HA cluster tomorrow to see what happens, so stay tuned.

KS
Jun 10, 2003
Outrageous Lumpwad
70 MB/sec isn't bad for a single gig link, but you should get something out of a second one.

With a software iSCSI initiator in ESXi you need to do some special setup to make it actually use multiple links to get to one datastore. You should have a vmk interface for each of your physical NICs, and then you have to bind them to the vmhba using "esxcli swiscsi nic add -n vmkX -d vmhba33". Do you remember setting this up when you were trying two paths?
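
That's the 4.x syntax -- on ESXi 5 the same thing lives under the iscsi namespace. Roughly (the vmk and vmhba names are whatever yours actually are):

code:
esxcli iscsi networkportal add --nic=vmk1 --adapter=vmhba33
esxcli iscsi networkportal add --nic=vmk2 --adapter=vmhba33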

KS
Jun 10, 2003
Outrageous Lumpwad

KS posted:

I will be ripping the SD card out of a diskless running host in an HA cluster tomorrow to see what happens, so stay tuned.

Better late than never, right? I just did this and nothing happened -- the host is up, VMs are up, no failover triggered. There aren't even any warnings generated, which is a bit unfortunate.

I don't have enough load to force swapping and see what happens, but it seems that under normal circumstances, losing an SD card is no big deal.

KS
Jun 10, 2003
Outrageous Lumpwad

Erwin posted:

It seems to imply that, even if I have an existing vSphere infrastructure with a vCenter server, I need a separate vCenter server for the View VMs. Is that true?

It is even more vague about needing a separate set of ESXi hosts. I hope that's not the case, because that's dumb. I only want to virtualize 20 desktops, max.

You do not need a separate vCenter, unless you're currently running Foundation and are going above 3 hosts.

I would be careful running desktops on the same hosts as servers. Desktops have very different IOPS requirements and you can get yourself in trouble pretty quickly.

That said, from a licensing perspective it's fine. Your desktops will run on any licensed ESX host. There is a vSphere Desktop SKU that is more cost-effective for large VDI deployments since it has no vRAM limits, so if you scale out you can consider buying that instead.

KS
Jun 10, 2003
Outrageous Lumpwad

Docjowles posted:

That reminds me, I have a "why won't you assholes take my money" question, too. Emailed this to VMware presales and just never heard a peep.

The kit is a one-time purchase -- if you kept it under support, you're entitled to upgrade those 6 CPUs to 5.x as part of your support agreement. You will not get additional licenses and will have to buy those a la carte.

KS
Jun 10, 2003
Outrageous Lumpwad

Corvettefisher posted:

My company is a reseller(or in the process) I would be happy to give you some priority.

Between this and the enterprise storage thread, you've posted at least four times trying to solicit business for whatever lovely company you work for. I don't even know if it's against the rules, but it's loving annoying to see your sales pitches everywhere.

KS
Jun 10, 2003
Outrageous Lumpwad

Erwin posted:

From what I've seen in my little bit of looking, Teradici has some that will do 2x 1920x1200, soon to be 4x, and 10zig has something that will do 4x 1920x1200, but there's some weird thing you have to do, like two linked sessions or something. I think I'll be asking for some demo units soon, and we definitely need a minimum of 4x 1920x1200 here to make VDI viable.

I have tested the Teradici I9440 and it worked pretty well, but I wouldn't recommend it over any sort of WAN link. We ultimately went with a Wyse model that has 2x single-link DVI ports, so 1920x1200 max. They work fine. Wyse's packaging and attention to detail suck: each zero client came in a PC-desktop-sized box, and it came with a PS/2 mouse (there are no PS/2 ports on the zero client).

KS
Jun 10, 2003
Outrageous Lumpwad
It is odd that you have six paths. How are the ports on the Compellent set up? Are they in a single fault domain, given that they're plugged into the same switch? Are you in virtual ports mode?

I would think you should have two paths in legacy mode or four in virtual ports mode on the Compellent, as long as you have dual controllers. Six suggests there's a misconfiguration somewhere.
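
Worth dumping the path list from the ESXi shell to see exactly what it thinks it's talking to -- something like this (the device ID is a placeholder):

code:
esxcli storage core path list -d naa.6000d310xxxxxxxx
esxcli storage nmp device list -d naa.6000d310xxxxxxxx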

edit: I am pretty sure the vmknics should be on a separate vSwitch. Example of what a correct config looks like:



This is from a branch office we have with a single-controller Compellent plugged into a single switch with Broadcom hardware-assisted iSCSI. It looks similar to your setup.

KS fucked around with this message at 01:11 on Sep 4, 2012

KS
Jun 10, 2003
Outrageous Lumpwad
Not sure on that, you may be right. I know this is one of those setups where, if you do it in the GUI, it's automatically incorrect. You have to bind each vmk to the corresponding hardware adapter (or to the software adapter if you prefer), so you may want to verify that it's set up correctly.

edit: actually, thinking this through, the above means that the two-NICs-on-one-vSwitch setup you posted is definitely incorrect. They can't be set up to fail over between physical NICs if they're correctly bound to one physical NIC each. I'd guess this is the source of your problem.
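
If you want to sanity-check what's actually bound, the 5.x esxcli namespace can list it -- I think the syntax is roughly:

code:
esxcli iscsi adapter list
esxcli iscsi networkportal list --adapter=vmhba33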

Link

KS fucked around with this message at 01:37 on Sep 4, 2012

KS
Jun 10, 2003
Outrageous Lumpwad
Pretty sure that's one of the things VAAI has already addressed and eliminated. It does the locking at the block level (I believe via the ATS primitive) and far quicker, using a new SCSI command.

KS
Jun 10, 2003
Outrageous Lumpwad

KennyG posted:

I have read the past 35 or so pages trying to get up to speed, but I have one specific question. Does VMware support clustering at the guest level?

It looks like what you're asking is if VMware can combine the resources of multiple hosts to power one VM. The answer is no.

KS
Jun 10, 2003
Outrageous Lumpwad
The 4-socket tax is indeed nasty -- not only in cost, but rack-space-wise. With HP's Intel servers, I can get double the socket density per rack using 2-socket servers over 4-socket. I believe only Dell sells an Intel 2U 4-socket server that would level that playing field.

You can definitely buy an 8-socket host, though, and if HP's discounts are still the same, the DL980 and DL580 are surprisingly close in price. However, given the 32-vCPU limit in VMware 5.1, a 64-CPU box isn't going to help much unless you go bare metal.

KS
Jun 10, 2003
Outrageous Lumpwad

sanchez posted:

Does ESXi need its OS/boot storage once it's started up, or does it load completely into RAM? We have a few servers that boot from SD card; I've always wondered what would happen if the card failed.

I think I posted about testing this earlier in the thread -- I pulled the SD card out of a running 5.0 server for a week with no ill effects.

KS fucked around with this message at 20:34 on Dec 18, 2012

KS
Jun 10, 2003
Outrageous Lumpwad
That's a typical discount for Cisco network gear, but UCS discounts are actually quite a bit higher. We're not really supposed to share specific percentages like that, are we? The whole IT purchasing thing (deal registrations especially) seems incredibly dishonest sometimes.

UCS blades were price-competitive with DL360s for us in quantities above 10, and quite a bit cheaper than HP c-Class blades with their 20 different management products.

KS
Jun 10, 2003
Outrageous Lumpwad

demonachizer posted:

I am trying to flesh out a test environment for a VMware project that will be 3 hosts connecting to a SAN. I am considering having the network guy set up VLANs as follows and just want to know if it makes sense.

I assume these are gigabit ports? If they're 10 gbit, you're overthinking it entirely.

Are you using FT for anything right now? It has severe restrictions and you should steer away from it if possible. That said, if you're actually going to use it (seriously, don't), it needs a dedicated connection. I do think a separate vMotion network is worth it on 1 gbit, but you wouldn't need to double that up if you went to redundant switches. You can also use a dedicated switch here, if your DC layout allows it, to save ports on whatever switch you're using for guest traffic.

Management can be carried on the same NICs as your guest networks. It's low overhead, and it has the added benefit of making management traffic fault tolerant as well.

If you add in redundant switching, you'd want your guest/management traffic carried by a trunk with two members, one to each switch in a vPC pair. You can choose whether to do iSCSI redundancy at layer 2 or layer 3, but layer 3 seems better on 1 gbit networks -- you'd want a connection from each host to each of two switches, and separate iSCSI subnets on each.
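
For what it's worth, once the VLANs exist on the switch side, carving this up on the host is quick -- a rough sketch of the 5.x commands (vSwitch, vmnic, port group names, and VLAN IDs are all placeholders):

code:
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20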

KS
Jun 10, 2003
Outrageous Lumpwad
^^^ Not so sure about that... and what the hell does it have to do with HA?

Moey posted:

Per the extremely professional diagram below, if I have Switch A and Switch B patched, will VMware report my storage paths as "partial/no redundancy" even though I do have multiple unique paths?

If you're not doing something special on those switches (like a vPC), then that's not really a good idea for iSCSI traffic. Even if you are doing vPC and port-channels, IMO that's great for NFS, but you should be relying on MPIO for iSCSI. If you're using round-robin MPIO, you'd expect fully half your traffic to traverse that link between the switches, and that's a recipe for disaster.

Proper config would be:
Two distinct networks, one for each switch
Two adapters on the host, one in each network, connected to the appropriate switch
Two interfaces on the array, one in each network, connected to the appropriate switch

Even if you're giving the host two distinct IP addresses in the same class C in that diagram, that's not good enough for redundancy.

KS fucked around with this message at 06:48 on Feb 1, 2013

KS
Jun 10, 2003
Outrageous Lumpwad
Yeah, I thought about it a bit more and edited out the STP stuff, but it's still wrong. Without doing some sort of fixed-path or weighted MPIO (and it'd be manual, because nothing in the VMware plugin would detect this), you'd send half of your traffic from initiators on switch A to targets on switch B. You'd see link saturation on the cross-link and probably some out-of-order packet fun as well. Sure, it'll work, but it's far from optimal.

If you were doing it right to move to NFS, those switches would be in a vPC, iSCSI would STILL be in independent VLANs on both switches, and you'd have a VLAN for NFS (or re-use ONE of the iSCSI VLANs). iSCSI hosts would then get a link to each switch, and NFS hosts would get a port-channel to both switches. I can post NX-OS configs for this scenario.
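
Short version of what those configs look like -- heavily trimmed, and the VLANs, IPs, and interface numbers below are placeholders, not our actual config:

code:
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel1
  description vPC peer-link
  switchport mode trunk
  vpc peer-link

interface port-channel20
  description NFS host - port-channel spanning both switches
  switchport mode trunk
  switchport trunk allowed vlan 200
  vpc 20

interface Ethernet1/20
  description NFS host uplink
  switchport mode trunk
  channel-group 20 mode active

interface Ethernet1/21
  description iSCSI host uplink - VLAN local to this switch
  switchport access vlan 101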



BangersInMyKnickers posted:

The switch interconnect gives a bit more protection in a double-failure scenario, in the event that a NIC on the host and the NAS both failed at the same time.

Not really, because you'd be sending 75% of your traffic (half of the normal and all of the failed) over the ISL in that scenario, and you'd kill it under any kind of load. The correct fix for this is multiple NICs in the array.

edit: the good news is, if you're really running this way, you could gain a fair bit of performance when you fix it. I'd suggest monitoring traffic volume on your ISL. You CAN do a layer-2 redundant setup on one subnet, but it involves vPC-capable switches and port-channels to the hosts. I don't see any indication that that's going on in that diagram.

KS fucked around with this message at 07:17 on Feb 1, 2013

KS
Jun 10, 2003
Outrageous Lumpwad
e: claimed

KS fucked around with this message at 20:10 on Jul 24, 2013

KS
Jun 10, 2003
Outrageous Lumpwad
Alarm bells are ringing. RAID-0 is a stripe, not a mirror, and the faster performance makes sense there too. That doesn't afford you any kind of data redundancy, which you had stated was the point.

KS
Jun 10, 2003
Outrageous Lumpwad
I feel like I'm missing something completely obvious here. Is there no way to set a VLAN tag for an independent hardware iscsi adapter through the GUI? Even though you can set IP address and everything else?

KS
Jun 10, 2003
Outrageous Lumpwad
Feeling your pain, because I'm about 15 minutes from a busted SLA on a severity 2 issue. What should have been a minor upgrade to vCenter appears to have completely broken HA.

They were so good once. Maybe it was just because I had access to Federal support.

KS fucked around with this message at 22:40 on Aug 5, 2013

KS
Jun 10, 2003
Outrageous Lumpwad

KS posted:

I feel like I'm missing something completely obvious here. Is there no way to set a VLAN tag for an independent hardware iscsi adapter through the GUI? Even though you can set IP address and everything else?

So the verdict for a QLogic CNA is that you apparently have to do it through their QConvergeConsole, which requires a VIB install. You can't do it through the base driver, and surprisingly, if there's a way to do it through the BIOS, I couldn't find it.
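
For anyone else stuck on this, the VIB install itself is the standard dance -- the bundle filename below is a placeholder for whatever QLogic's download is actually called:

code:
esxcli software vib install -d /vmfs/volumes/datastore1/QConvergeConsole-offline-bundle.zip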

Also, since we have such a wide variety of people who read these threads, if anyone works at Qlogic and knows the guy who programmed this:

with the increment/decrement buttons and no ability to type a VLAN number, give them a punch in the junk from me. There are 4096! What the gently caress.

KS
Jun 10, 2003
Outrageous Lumpwad

Moey posted:

Thirding this. I like things to look pretty. :)

My current setup is 2x10gb running iscsi/vm network/management then 2x1gb for vMotion. Still have two more on board NICs if I ever need them and room for another PCIe card or two.

If you do a lot of vMotion and have big hosts, get it onto 10 gig! It's a whole lot more pleasant -- 256GB hosts empty out in under 30 seconds. If you're worried about link saturation, hopefully you're on Ent+ and can use NIOC.

I see so many incorrect vMotion configs -- stuff like vMotion traffic going over the vPC/ISL because VLANs and port groups aren't configured optimally. Just make sure you don't join the club.

Dilbert As gently caress posted:

Also gotta give a shout out to KS for the vCloud class

Very glad to hear it worked out.

KS
Jun 10, 2003
Outrageous Lumpwad

Tequila25 posted:

I made a diagram for how I have my connections laid out on my two switches, but now that HOST1 is up and running, I realize that it uses controller 0 on the SAN for just about everything and the SAN keeps complaining that my second LUN is not on the preferred path, because all the I/O is going through controller 0. This should fix itself when HOST 2 is up and running through SAN controller 1, but is that a bad idea? Any suggestions on if I need to change my connections and how to change them?

Edit: Or can I change the ESX config on HOST1 to change paths to use both controllers?



This doesn't sound right. I'm having trouble following your spreadsheet, but your I/O should definitely be on the preferred path on every host. In a multipathed setup, each host would be able to talk to each controller through each switch, if that makes sense.

e: I looked back and saw you're running an MD3200i. I don't know much about it, but it sounds like you're getting a warning saying you're sending I/O through the controller that doesn't own the LUN. That's a misconfiguration in any situation except a path failure.
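
If you want to check from the host side which path it's actually using, and force the preferred path back if you're on the Fixed PSP, I believe the 5.x commands look roughly like this (device and path IDs are placeholders):

code:
esxcli storage nmp device list -d naa.xxxxxxxx
esxcli storage nmp psp fixed deviceconfig set -d naa.xxxxxxxx -p vmhba33:C0:T0:L1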

KS fucked around with this message at 18:58 on Sep 26, 2013

KS
Jun 10, 2003
Outrageous Lumpwad
Don't really need a management cluster, just a DRS rule. Mine's virtualized and pinned to the first two hosts in the cluster.

That said, I've had two complete power outages and had cause to regret virtualizing vCenter both times: once due to HA powering up the entire environment on the first host that booted, and the other due to a really lovely HA bug in 5.0 that broke HA for any machine that had been storage-vMotioned on a VDS. That bug really shows this stuff is far from bulletproof.

Good IT guys are risk-averse, and from my experience over the last few years, virtualizing vCenter can easily add more risk than it removes. After all, big operations tend to have mature strategies in place for protecting physical servers.

KS
Jun 10, 2003
Outrageous Lumpwad
My SSO 5.5 installation rolls back, and the fixes here don't make a difference. The 5.1 SSO install it's upgrading is entirely vanilla: self-signed certs, default everything. Blog comments on posts about this issue suggest the KB article isn't helping a lot of people. Support is more interested in closing the ticket than actually fixing the problem, and they want me to reinstall vCenter to fix it. Just wondering if anyone has had better luck with them?

It seems they simply cannot do an SSO release without loving up.

KS
Jun 10, 2003
Outrageous Lumpwad

Mierdaan posted:

Is there any reason why I shouldn't rely on vmkping as a troubleshooting tool?

We've just moved over from 1gig to 10gig backend fabric, including our vmotion traffic, and I'm seeing some strange timeouts when certain hosts try to vmotion to other hosts - no pattern that I've found, but I'm trying to test with a vmkping from the vmotion vmk interface on host A to the vmotion vmk interface on host B and VMware is telling me that vmkping isn't always reliable. So far it seems to match up reasonably well with the reality of what I can and can't vmotion, so I'm not sure what VMware's talking about.

Are you by chance using multiple vMotion interfaces? I've seen this behavior in an environment that was set up a bit wrong: vmotion_A interfaces on all hosts in one subnet/VLAN and vmotion_B interfaces in a second VLAN. It would work on all hosts where vmotion_A was vmk1 and vmotion_B was vmk2, but fail on the hosts where they were reversed.

To be clear, that's not a supported setup.
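
When I'm chasing stuff like this, I also force the source interface so I know exactly which vmk is being tested -- on 5.1 and later vmkping takes -I, and the jumbo-frame flags are handy too (the address below is a placeholder):

code:
vmkping -I vmk2 -d -s 8972 10.10.50.12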


KS
Jun 10, 2003
Outrageous Lumpwad

geera posted:

Is this a fair assessment of the two products? I'm not married to VMware by any means, but it's a platform we at least have some experience running now, so I want to make sure I don't take us from a system that's been extremely stable to one that's maybe not going to be as reliable.

I think it's very fair. If you're running predominantly MS VMs and are buying Datacenter licenses anyway, it's hard to go wrong with 2012 R2 -- it's improved a lot. I'm in the same situation workload-wise, and my ELA with VMware runs out in about 18 months. Unless something changes, I can't see renewing it.

Mierdaan posted:

The vmkping issue was caused by a misconfigured port-channel between our two 10gig switches. We have a vMotion-enabled vmkernel interface on each host with two active vmnics -- if the vmnics in use by the vmkernel interfaces on two hosts were connected to the same switch, it'd work. If they were on different switches, it wouldn't. Simple enough once we realized what was going on, but assigning active/active vmnics to a vmk just let us assume it would work if at least one path was valid, when in fact it picks a vmnic and sticks with it (on boot?).

If you're doing LACP to the hosts, the easy fix is to create two port groups using the same VLAN and alter the bindings on each port group so that one uplink is active and the other is unused. Then just make sure dvUplink1 is always the NIC connected to switch A. That's also how you do iSCSI MPIO with LACP.
