Rhymenoserous
May 23, 2008


three posted:

vSphere is a significantly more well-rounded and feature-rich product than its competition, in my opinion. But, you know how many people actually USE all of those features? Not many. There are still people in 2013 that don't have DRS enabled.

The war to overtake the hypervisor isn't about being better than ESXi; it's about being "good enough." It won't be long.

DRS was brand new to me when I installed VMware 5.0 for the first time, and upon reading up on it I was like "Oh hell yeah" and turned that poo poo on. I won't lie and say I make use of every feature set that VMware has, but I'm making a good attempt at it.


Cidrick
Jun 10, 2001

Praise the siamese


whaam posted:

So I've got 4 HP DL360 G8s with the HP ESXi 5.1 ISO installed, and their 4-port onboard NICs seem to randomly drop from 1000 full to 10 full, with auto-negotiation turned on for all of them. I thought it might be related to the network runs being too close to the power, so I moved them. Has anyone else seen behaviour like this in ESXi 5.1? It's happening randomly across the ports, and each goes to a different switch, so it's not that.

What physical switch are you using? In my experience, autoneg tomfoolery is fixed with an update at the switch level. We have boatloads of Gen8 HPs with no issues like you're describing, although I haven't upgraded us to 5.1 yet (we're still on 5.0).

Dilbert As FUCK
Sep 8, 2007

by Cowcaster


Pillbug

Does anyone have experience with setting up a VMUG, or attending one? We used to have a group around the area, but salesmen started filtering in and the main organizer moved out of state, so it ended.

My new place is more focused on engineering and IT services than on selling the latest Cisco/Dell/HP hardware, so we don't have to worry about sales pitches. I think it would be a worthwhile experience to try and start one up; I've spoken to about 10ish people I know who would love to get one going again in my area. The good thing about my new place is they are always looking to sponsor community events like this, so I am fairly sure they would say yes.

However, I wasn't sure if anyone had some words of wisdom before I bring up the "Hey why don't we do it?" to some people.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


I've got what I suspect are disk IO problems, but I'm not sure how to confirm it well enough to go to management and try to get new storage.

When I have to reboot a simple VM (a domain controller for instance) it can take over a half hour from when the BIOS is done loading (which only takes a few seconds) to when I get a login screen on the console of the VM. From what I can tell we're fine on both memory and CPU, so I think it's disk. We've got a pretty lovely RAID that can basically only deliver about 150-200 IOPS, and I think it's way overloaded. I also see a lot of short disconnects of some of the LUNs in the VMware console. Is there a single place I can go in VMware to see total demand on the disk from a single host? (No vCenter, so per host is the least granular I can get).

Cidrick
Jun 10, 2001

Praise the siamese


FISHMANPET posted:

Is there a single place I can go in VMware to see total demand on the disk from a single host? (No vCenter, so per host is the least granular I can get).

Hosts and Clusters -> Select your host -> Performance Tab -> Advanced Button -> Select various options in the dropdown. "Disk", "Datastore", and "Storage Adapter" should all show some insight.

My guess, there's some massive read latency talking to your VM datastore(s). These graphs should show it happening in a nice shiny labelled format for you to present to whomever pays for your storage backend.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster


Pillbug

Also, when SSH'd into a host you can nab some stats via esxtop; press D for the disk adapter view.

Have any paths changed for that host=>datastore? It isn't trying to use a downed path, is it?
If you are seeing disconnects from storage, see if other VMs on that storage device have the same problem. Also, what Path Selection Policy are you using?
Do you have anything in the CD-ROM drive that it's trying to read?
For a Domain Controller, 200 IOPS isn't all that bad, especially if AD/DNS is all the VM is running.

Dilbert As FUCK fucked around with this message at Apr 2, 2013 around 20:22

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


200 IOPS runs 4 domain controllers, 2 file servers (the actual files are on different LUNs, but the base OS is run off this LUN), and a host of other things, coming out to about 20 machines.

E: which is to say, at least according to my intuition, that 200 is not nearly enough for all that.
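For what it's worth, that intuition checks out on paper. Here's a rough Python sketch of the demand side; the per-VM IOPS figures are illustrative assumptions, not measurements from this environment:

```python
# Back-of-envelope check: does ~200 IOPS cover ~20 VMs?
# Per-role steady-state figures below are assumptions for illustration.
def estimated_demand(vm_counts):
    """Sum assumed steady-state IOPS across VM roles."""
    assumed_iops = {
        "domain_controller": 15,  # AD/DNS at light load
        "file_server_os": 10,     # boot VMDK only; data lives on other LUNs
        "general": 20,            # miscellaneous app/infra VMs
    }
    return sum(assumed_iops[role] * n for role, n in vm_counts.items())

demand = estimated_demand({
    "domain_controller": 4,
    "file_server_os": 2,
    "general": 14,  # remainder of the ~20 machines
})
print(demand)  # 360
```

Even with conservative per-VM numbers, estimated demand comfortably exceeds a ~200 IOPS ceiling.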

Erwin
Feb 17, 2006



SSH to the host, run esxtop, and hit V to go to the VM disk page. High numbers in the LAT/rd and LAT/wr should be enough to show a problem. Also, theoretically the CMDS/s column should fairly consistently add up to 200 if you're pegged, I think? If you want to be more thorough, there's a KB article on exactly this.
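If you'd rather not eyeball the interactive screen, esxtop also has a batch mode (`esxtop -b -d 5 -n 12 > stats.csv`) that dumps counters to CSV, which you can total up offline. A small Python sketch for summing the command-rate columns from such an export; note the exact counter header strings vary by ESXi build, so the header names in the synthetic sample below are assumptions:

```python
import csv
import io

def sum_matching_counters(csv_text, needle):
    """Sum the last sample of every column whose header contains `needle`."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, last = rows[0], rows[-1]
    return sum(float(v) for h, v in zip(header, last) if needle in h)

# Synthetic two-sample export; real headers look roughly like
# "\\host\Virtual Disk(vm1)\Commands/sec" but vary by build.
sample = (
    '"Time","\\\\host\\Virtual Disk(vm1)\\Commands/sec",'
    '"\\\\host\\Virtual Disk(vm2)\\Commands/sec"\n'
    '"t0","120.0","70.0"\n'
    '"t1","130.0","75.0"\n'
)
print(sum_matching_counters(sample, "Commands/sec"))  # 205.0
```

If that total sits pinned near your array's ceiling while latency climbs, that's the graph to show management.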

Dilbert As FUCK
Sep 8, 2007

by Cowcaster


Pillbug

Really it depends; if all they are hosting is just the Windows OS boot VMDKs, then depending on the file servers' needs, it very well could be enough. esxtop should give you an idea whether you are queueing disk writes/reads. Do other VMs experience the same issue when rebooted on that LUN/datastore? If so, I would assume one of two things: you have a path-to-storage issue, or you do not have enough IOPS to handle requests.

http://kb.vmware.com/kb/1008205
here is a good article on it.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


Yeah, I've seen this on other VMs too. Also a lot of them are just "slow" when I try and use them on the console, but from what I can tell it's not a CPU or memory problem anywhere, which leads me back to disk.

The RAID is kind of a piece of crap, so I'm willing to blame the disconnects on the RAID being awful. And the topology is also awful: each host has a gigabit Ethernet cable that plugs into a dumb pocket switch, and that pocket switch has a gigabit cable that goes into the iSCSI RAID.

Less Fat Luke
May 23, 2003

Just the tip!


Exciting Lemon

FISHMANPET posted:

The RAID is kind of a piece of crap, so I'm willing to blame the disconnects on the RAID being awful. And the topology is also awful: each host has a gigabit Ethernet cable that plugs into a dumb pocket switch, and that pocket switch has a gigabit cable that goes into the iSCSI RAID.
What in the gently caress.

Moey
Oct 22, 2010

I LIKE TO MOVE IT


Less Fat Luke posted:

What in the gently caress.

What is your back end storage? Also what is a pocket switch?

sanchez
Feb 26, 2003


One of those tiny Netgear-style switches you can buy from Newegg for $30, I'd assume.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


Yup. The backend is some awful Sun StorageTek RAID, running with 3x2TB Western Digital Black drives in a RAID 5.
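For anyone curious why that array tops out around 150-200 IOPS: RAID 5 pays a four-IO penalty on every random write. A rough model in Python; the ~80 random IOPS per 7200RPM SATA disk figure is a typical assumption, not a spec for these exact drives:

```python
# Rough RAID 5 capability estimate; per-disk IOPS is an assumption
# (~75-90 random IOPS is typical for a 7200 RPM SATA drive).
def raid5_effective_iops(disks, per_disk_iops, read_fraction):
    """Effective random IOPS given RAID 5's 4-IO write penalty."""
    raw = disks * per_disk_iops
    write_fraction = 1 - read_fraction
    # Each logical write costs 4 backend IOs (read data, read parity,
    # write data, write parity); each read costs 1.
    return raw / (read_fraction + 4 * write_fraction)

print(round(raid5_effective_iops(3, 80, 0.7)))  # 126
```

At a 70/30 read/write mix, three SATA disks in RAID 5 land in the low hundreds of IOPS, which lines up with the observed ceiling.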

Moey
Oct 22, 2010

I LIKE TO MOVE IT


sanchez posted:

One of those tiny netgear style switches you can buy from newegg for $30 I'd assume.

At my previous job, my old boss tried to get away with unmanaged Netgear switches for the iSCSI network. After replacing them with managed switches that were full wire speed, all of our random iSCSI dropping issues went away.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


Moey posted:

Also what is a pocket switch?

gently caress, this horrible place is rubbing off on me. I never even realized that this wasn't real IT lingo, just poo poo we make up for the gently caress of it.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster


Pillbug

FISHMANPET posted:

Yup. The backend is some awful Sun StorageTek RAID, running with 3x2TB Western Digital Black drives in a RAID 5.

Oh boy, SATA DRIVES TOO? The only way it could have been better is if those were the Green series.

I'd love to hear whose idea it was to implement that.
"BUT IT'S 4000GB OF STORAGE! THAT'S GOTTA BE FAST BECAUSE LARGE NUMBERS!"

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


In our meager defence, this was our first VMware deployment, and never in the history of our organization has storage performance actually been a bottleneck.

Less Fat Luke
May 23, 2003

Just the tip!


Exciting Lemon

Moey posted:

What is your back end storage? Also what is a pocket switch?
lovely but not that lovely. Test is 10GbE to an NFS cluster and prod is a few Dell MD3220s. It's the single link plus pocket switches that kinda threw me; that's a bottleneck behind a bottleneck.

It wasn't a criticism, more a question on my mind about the cost of the VMware licenses versus even a single actual switch.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

FISHMANPET posted:

In our meager defence, this was our first VMware deployment, and never in the history of our organization has storage performance actually been a bottleneck.

Doing new things isn't an excuse for doing them poorly. It's not like virtualization is a tiny niche without documented best practices.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


We're colossally stupid, basically. Expertise is shunned in favor of outdated groupthink. But then it turns into poo poo That Pisses Me Off, so I'll take the links and try some stuff on our servers tomorrow.

Fun fact:
We have two servers licensed for ESX 4.x Standard, but we didn't spring for vCenter because it was $5k (on top of a $200k hardware buy) and what was the point?

The reason we have only 3 disks in the RAID is that we didn't want to buy disks from Sun, so we bought empty trays and filled them with disks from Newegg. We thought it would be as easy as buying Dell trays, but nobody uses this product, so we could only find 3 trays in the country. So we bought 6 disks, and we just have a pile of cold spares.

evil_bunnY
Apr 2, 2003



Get out before you turn into one of them (it's started already). I am not joking. Get the gently caress out.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

three posted:

Doing new things isn't an excuse for doing them poorly. It's not like virtualization is a tiny niche without documented best practices.
If I were to believe a local IT whackjob who's trying to sell a third-party contract to my local town hall, it takes months and months of intensive study to get a single non-redundant VM host up and running. Pointing out how hypervisors (including free versions of the big commercial products) are available left and right to tinker with made him fly off the handle, calling me a liar, a cretin, et al. VV

Dilbert As FUCK
Sep 8, 2007

by Cowcaster


Pillbug

three posted:

Doing new things isn't an excuse for doing them poorly. It's not like virtualization is a tiny niche without documented best practices.

You would be surprised how many "VCPs" I have met that produce designs similar to Fish's setup... Essentials kits tagged on for "High Availability" on a single host running software RAID for a multisite deployment, Drobo NAS appliances for View deployments, Production_Supercritical_data RAID arrays put in JBOD, designs that only use DAS, people putting iSCSI storage on boxes without proper port bindings, resource limits causing host swapping when plenty of resources were available... I don't even want to start on the interviews...


My favorite is still when people come to me and boast about how many gijjabytes they have in the back room (almost 10 whole TB of storage) and it's still slow: GOTTA GET ANOTHER 10!

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

Corvettefisher posted:

My favorite is still when people come to me and boast about how many gijjabytes they have in the back room (almost 10 whole TB of storage) and it's still slow: GOTTA GET ANOTHER 10!


My boss has been doing exactly the same thing, going on and on about how ReadyNAS boxes are crap (hint: they are, but not as bad as one would think) and how they can't handle any load. We had been fighting over iSCSI MPIO for a while. He didn't believe me, and in fact argued against me almost to the point of calling me stupid, when I said I could literally double the performance of a ReadyNAS by getting rid of the ether channel setup and breaking it into two separate 1Gb links on different subnets (he wouldn't let me use VLANs).

I finally got my hands on a spare ReadyNAS 4200 loaded with 16x 7200RPM SATA drives and was given permission to test my "theory." After blowing away the configuration on the 4200, setting up a proper RAID 6 array (none of that X-RAID2 poo poo), and upgrading the firmware, I put my theory to the test. My boss's face when I presented the benchmark results: I believe the next words out of his mouth were "Draw up a plan to get this in prod. NOW."

I've now got our entire development environment running off of one ReadyNAS with a load greater than what the previous ReadyNAS had on it and it's not even sweating.
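The MPIO-vs-etherchannel result makes sense: LACP hashes each flow to a single member link, so one iSCSI session can never exceed one link's speed, while MPIO round-robin issues I/O down every path. A toy Python model of the difference; the 1Gb link speed and two-link setup are illustrative, not a model of the actual ReadyNAS:

```python
# Toy model: LACP hashes each flow (src/dst pair) to exactly one member
# link, so a lone iSCSI session caps at one link's speed; MPIO round-robin
# spreads I/O across every path regardless of session count.
LINK_GBPS = 1.0

def lacp_throughput(sessions):
    """Each flow lands on one member link; used links saturate at 1 Gb."""
    per_link = {}
    for s in sessions:
        link = hash(s) % 2          # stand-in for the real L2/L3 hash
        per_link[link] = LINK_GBPS  # a used link tops out at link speed
    return sum(per_link.values())

def mpio_round_robin_throughput(n_paths):
    """MPIO round-robin drives every path even for a single session."""
    return n_paths * LINK_GBPS

print(lacp_throughput(["host1->nas"]))    # single flow, one link used
print(mpio_round_robin_throughput(2))     # both paths in use
```

That's why two plain links on separate subnets beat a bonded pair for a single-initiator iSCSI workload.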

Martytoof
Feb 25, 2003

It's called a hassle, sweetheart..



Corvettefisher posted:

My favorite is still when people come to me and boast about how many gijjabytes they have in the back room (almost 10 whole TB of storage) and it's still slow: GOTTA GET ANOTHER 10!

... but 20tb isn't an impressive amount of storage these days

I mean, it's a lot, but it's not like "holy cow I've gotta tell someone about THIS, they'll never believe it!" amazing.

I guess maybe if it was 20tb of SSD storage then we can talk


e: Oh never mind, I totally misread what you were getting at. Disregard.

talaena
Aug 30, 2003

Danger Mouse! Power House!

Hi, I'm the dork who tried to P2V his laptop and got stumped by the RSA SecurID soft token not loading in the VM. I gave up on that adventure; in fact, I totally forgot I asked the question. I asked it right as I was ramping up on my support of vCenter Configuration Manager, and poo poo got so busy so fast that I forgot about all the tertiary crap. I went from "I'm going to be so smooth and work from home through my P2V VPN" to "gently caress my life, how the hell do I handle all of the poo poo they're throwing at me? I'm not doing anything beyond sleeping when I get home."

Now that I have a bit of a handle on the product, I'm looking to expand. I just bought 2 Dell PowerEdge 2950s for home so I can play with vCloud Director; 24GB of RAM in each host should be enough to stage a small environment for testing. I'm still learning stuff from vCenter down. I came into this job with no virtualization experience at all, and I barely have any to this day. My job is a lot of SQL and UI troubleshooting; I never go down into vSphere/ESXi.

I hope this home lab will give me a good platform from which to learn. I chose vCD as a focus because the product looks useful, AppDirector sits on top of it, and I want/need to learn more about that. And I suppose Data Director too, but I'm not sure I have an urge to play with database deployment right now. Data Director confuses me almost as much as DVS networking. :P

A DVS is a requirement for vCD, and despite going through an ICM class, I know gently caress all about all this fancy networking. I have one vCD cell up and running in my lab at work, but heck if I know what ANY of it does, because all I did was follow the docs like a good little boy and it installed. I understand the base concept of Cloud Director, but trying to solidify all of its concepts in my head is daunting. Hell, I understand AppDirector better than Cloud Director, and that just seems wrong on its face.

I don't have anything to add or questions to ask, but I'm going to have a poo poo-ton of asinine questions starting next weekend when the servers finally arrive.

evil_bunnY
Apr 2, 2003



Is it ok to admit I love vDS'es mostly because I'm lazy?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster


Pillbug

evil_bunnY posted:

Is it ok to admit I love vDS'es mostly because I'm lazy?

Just say you love it for NIOC.

ragzilla
Sep 9, 2005
don't ask me, i only work here




evil_bunnY posted:

Is it ok to admit I love vDS'es mostly because I'm lazy?

Anyone that doesn't admit to that is a liar. IT as a whole is driven by laziness.

Shumagorath
Jun 5, 2001


I'm trying to set up Hyper-V on Windows 8 Pro and I want to do the following things:

-Run Windows 7 or at least XP along with a host of malware analysis labs
-Run Windows Update natively on the VM (unless there's a better way to do it without direct internet access)
-Run phone-home software in such a way that it can't establish a connection without my saying so

Can I somehow get away with an internal virtual switch for the VM? I only have my motherboard's onboard NIC in this box and don't want to virtualize Windows 8's access to it. Is the best solution to just walk over to the store and buy another physical NIC so I can use an external switch on it?

Docjowles
Apr 9, 2009



ragzilla posted:

Anyone that doesn't admit to that is a liar. IT as a whole is driven by laziness.

That and hate-driven development. I'd be lying if I said a lot of my priorities weren't set by the "what is annoying the poo poo out of me lately?" method.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


Docjowles posted:

That and hate-driven development. I'd be lying if I said a lot of my priorities weren't set by the "what is annoying the poo poo out of me lately?" method.

I wish my boss would understand those two things.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster


Pillbug

If anyone is wondering what cool poo poo the UCS platform offers, there is now an emulator out for it. Video in the link.
http://wahlnetwork.com/2013/04/01/c...kthrough-video/

Wasn't sure if I should post this here or in the Cisco thread; might just crosspost in both.

Dilbert As FUCK fucked around with this message at Apr 4, 2013 around 12:16

parid
Mar 18, 2004


What am I missing with UCS? It seems 10-15% more expensive, and the only tangible benefits for my ESXi hosts seem to be faster OS setup and simpler cabling, both of which only matter on day 1. How are other people finding value to justify the expense? Judging purely by their customers, I think there is just something I'm missing.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster


Pillbug

parid posted:

What am I missing with UCS? It seems 10-15% more expensive, and the only tangible benefits for my ESXi hosts seem to be faster OS setup and simpler cabling, both of which only matter on day 1. How are other people finding value to justify the expense? Judging purely by their customers, I think there is just something I'm missing.

Now there is an emulator to find out!

In all honesty, I sell them, and from a VMware perspective I would probably be just as happy selling Dell or HP. There is really nothing special about them beyond what every other server has; other than the converged networking, I don't see much to justify the extra cost. They originally had servers with some of the highest RAM density, but now they don't have anything really to set them apart from Dell/HP/IBM, aside from the converged networking models.

They are good if you have to spend a budget so you get the same budget next year.

Dilbert As FUCK fucked around with this message at Apr 4, 2013 around 01:16

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

The reason UCS is popular is because VARs have high $$$ motivation to sell them, and Cisco has been literally giving them away to try to get into the server market.

I don't think they're THAT amazing from a technical perspective, although they do some neat stuff. A lot of the functionality they spearheaded is now being mimicked by Dell, etc. I think they make things a bit overcomplicated, as well.

ragzilla
Sep 9, 2005
don't ask me, i only work here




parid posted:

What am I missing with UCS? It seems 10-15% more expensive, and the only tangible benefits for my ESXi hosts seem to be faster OS setup and simpler cabling, both of which only matter on day 1. How are other people finding value to justify the expense? Judging purely by their customers, I think there is just something I'm missing.

Need to upgrade firmware? Spend an hour with an HP firmware DVD moving from server to server? With UCS, apply a new firmware bundle to the server and set it to apply at next reboot. Reboot hosts during a maintenance window and enjoy your bug-fixed firmware.

Unified fabric also gives you some nice approaches to hardware sparing (assuming you boot from SAN). The FC WWNs and Ethernet MACs are part of the blade 'personality'. So whereas before you may have had an extra VMware host in each of 2-3 clusters to provide N+2 availability (so you can maintain N+1 when you put a host in maintenance mode), plus another blade as a cold spare for Oracle RAC (or something else compute-heavy), with UCS you can maintain just 1 or 2 of those redundant servers and move them around as needed. This also lends itself well to quick upgrades if you have spare hardware of a higher spec in your chassis: power off the old server, reapply the personality to a higher-spec one, power on.

Of slightly less interest, assuming you're using VICs, you can add a new SAN fabric as a vSAN and present it to servers without having to add physical HBAs (just present a new vHBA to the server). A server can have HBAs in both the old and new vSANs, migrate data from the old SAN to the new, remove the old vHBAs, and decommission the old fabric (useful if you have to return ex-lease gear and don't want the two fabrics to touch at all).

Separate NICs for iSCSI/NFS, Frontend, and vMotion traffic? No problem. You can even apply QoS to them, in case you want to guarantee your vMotion traffic exactly 2.5gbps. Want to limit a specific application? Present a new pair of vNICs to your ESX host and apply a 100mbps policy to it.

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

List price might be more expensive, but they are reasonably competitive with normal discounts.


three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

ragzilla posted:

With UCS apply a new firmware bundle to the server and set it to apply at next reboot.
Never worked with HP, but Dell can do this either with OpenManage or through their vCenter plugin.

quote:

with UCS you can maintain 1 or 2 of those redundant servers and move them around as needed
I can't imagine using 1 redundant blade for multiple separate systems.

The hardware detaching is cool, but how often does anyone need to use it? It'd be easier to just use Auto Deploy for ESXi hosts, and only bad people run physical workloads nowadays. (Plus, who boots Windows from SAN? Although technically you could just swap the drives.)

quote:

Present a new pair of vNICs to your ESX host and apply a 100mbps policy to it.
It would be better to use NIOC.
