Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

TeMpLaR posted:

Anyone ever heard of a place that uses NFS for all VM guest OSes but uses software iSCSI inside the guest for all additional drives?

I've seen that done for Exchange before, but not for everything. Any idea why anyone would do this instead of converting everything that isn't a cluster over to NFS?

I'm finding lots of examples of how to stop doing it, all saying that it's antiquated. Maybe this is just a status quo kind of thing.

http://www.virtuallanger.com/2014/03/04/converting-in-guest-iscsi-volumes-to-native-vmdks/
Once upon a time, VMware didn't support SCSI-3 Persistent Group Reservations on iSCSI drives exposed through the vmkernel's initiator. So, if you wanted to run Microsoft Cluster Service on Windows 2008 or higher (or Solaris, using Veritas Cluster Server or Oracle Clusterware), and your backing storage was iSCSI, you used a guest iSCSI initiator. Many organizations standardized on this and never moved back.

Even further back, there were reasons involving custom multipathing solutions that didn't work with VMware at the time, but that hasn't been an issue since people stopped running 3.x.

Vulture Culture fucked around with this message at 01:57 on Feb 11, 2015


madsushi
Apr 19, 2009

Baller.
#essereFerrari
There's one other reason I've run into: split virtual clusters, e.g. running a SQL cluster between a VM and a physical server.

TeMpLaR
Jan 13, 2001

"Not A Crook"
Wow, there are a lot more reasons presented than I expected. Thanks! I don't feel like we're totally behind the times now, since a lot of those specifics would apply (mainly snapshots, as Nipplefloss discussed).

Madsushi - you could do that with an RDM.

Kind of interesting how polarizing this question is. Searching online turns up mostly people migrating away from this, but here a lot of people have streamlined workflows built around it.

Wicaeed
Feb 8, 2005
Does VMware offer something like KVM "backing disks" (the term my coworker uses for them)?

Basically they work as stateless disks that let multiple VMs be spun up from a single master disk, but as soon as a VM is powered off, it reverts to the original state.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Wicaeed posted:

Does VMware offer something like KVM "backing disks" (the term my coworker uses for them)?

Basically they work as stateless disks that let multiple VMs be spun up from a single master disk, but as soon as a VM is powered off, it reverts to the original state.

Linked clones, kinda.

ragzilla
Sep 9, 2005
don't ask me, i only work here


Wicaeed posted:

Does VMware offer something like KVM "backing disks" (the term my coworker uses for them)?

Basically they work as stateless disks that let multiple VMs be spun up from a single master disk, but as soon as a VM is powered off, it reverts to the original state.

Linked clones do something similar to this. They can't be provisioned through the client AFAIK, but you can script it with PowerShell: http://michlstechblog.info/blog/vmware-vsphere-create-a-linked-clone-with-powercli/
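
If you'd rather not touch PowerCLI, the same trick works from pyVmomi. Rough sketch only: the vCenter host, inventory path, and VM names are made up, and it assumes the source VM already has a snapshot to serve as the shared base disk.

code:
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()

# Look up the master VM; the linked clone forks off its current snapshot.
vm = content.searchIndex.FindByInventoryPath("MyDC/vm/master-vm")

# createNewChildDiskBacking is what makes it a linked clone: the new VM
# gets a delta disk backed by the snapshot instead of a full disk copy.
relocate = vim.vm.RelocateSpec(diskMoveType="createNewChildDiskBacking")
spec = vim.vm.CloneSpec(location=relocate,
                        snapshot=vm.snapshot.currentSnapshot,
                        powerOn=True)
vm.CloneVM_Task(folder=vm.parent, name="linked-clone-01", spec=spec)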

evol262
Nov 30, 2010
#!/usr/bin/perl

Wicaeed posted:

Basically they work as stateless disks that let multiple VMs be spun up from a single master disk, but as soon as a VM is powered off, it reverts to the original state.

These work with differencing COW qcow2 images. It's a feature of qcow2, not KVM, and any COW VMware snapshot will function the same way (linked clones, for example). And yes, you can create templates and spin up temporary VMs from those. It's not quite the same (you can get the same effect with View pools), but it's close enough.
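
The qcow2 side of it is just a backing file, for what it's worth. Minimal sketch, assuming qemu-img is installed (filenames are made up; the -F flag wants a reasonably recent qemu-img):

code:
import subprocess

# master.qcow2 is the read-only template. Each throwaway VM gets its own
# overlay that soaks up all writes; delete the overlay and the VM is
# back to the master's state.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2",
     "-b", "master.qcow2",   # backing (master) disk, never written to
     "-F", "qcow2",          # format of the backing file
     "overlay-vm01.qcow2"],
    check=True,
)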

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I'm exploring OpenStack to solve a particular problem. I have plenty of experience with both Amazon Web Services and Google Compute Engine, as well as traditional hypervisors like vSphere, but have never touched OpenStack. What's a good starting point that doesn't explain everything like I've never touched a virtual machine?

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Check out Red Hat's OpenStack docs. They're not specific to Red Hat, and they're miles better than the obtuse documentation OpenStack itself provides. To get any benefit from the docs it's very useful to understand what each component is for, so check out an overview like this one.

The OpenStack docs are okay from an overview POV, but lack any real depth, which is where the frustration begins. If you're looking to experiment with OpenStack, definitely try an all-in-one installer like packstack or dockenstack (OpenStack in Docker), because installing and configuring all those components is time-consuming and overwhelming.
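
The all-in-one route really is a single command on a throwaway CentOS box. Sketch only; don't point it at a machine you care about:

code:
import subprocess

# RDO's packstack: installs and wires up a whole single-node OpenStack
# (Nova, Glance, Cinder, Keystone, etc.) on this host. Takes a while.
subprocess.run(["packstack", "--allinone"], check=True)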

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

minato posted:

Check out Red Hat's OpenStack docs. They're not specific to Red Hat, and they're miles better than the obtuse documentation OpenStack itself provides. To get any benefit from the docs it's very useful to understand what each component is for, so check out an overview like this one.

The OpenStack docs are okay from an overview POV, but lack any real depth, which is where the frustration begins. If you're looking to experiment with OpenStack, definitely try an all-in-one installer like packstack or dockenstack (OpenStack in Docker), because installing and configuring all those components is time-consuming and overwhelming.
Would you recommend vanilla OpenStack, or one of the third-party distributions like Mirantis?

Wicaeed
Feb 8, 2005
So I'm getting hands-on with Azure for the first time today.

Apparently when mapping endpoints (firewall rules, really) to VMs, you can only use single ports (no port ranges) and only up to 150 endpoints per VM.

What the gently caress Azure :psypop:

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Misogynist posted:

Would you recommend vanilla OpenStack, or one of the third-party distributions like Mirantis?

Near as I can tell there isn't really a "vanilla OpenStack." It's basically like having a shitload of car parts and a Haynes manual dropped off, as opposed to a fully running car. Mirantis and Red Hat just package it and give you ways of deploying it without going insane.

That said, we've had pretty okay experiences with Red Hat OpenStack Platform. Not great, but given the landscape (the Mirantis people honestly underwhelmed me, though their installer worked well) I'm pretty comfortable with it. Are you looking to build VMs or do other things as well?

spoon daddy
Aug 11, 2004
Who's your daddy?
College Slice
Welp, looks like we're moving wholesale from VMware to RHEV in the next few weeks (PHB monetary decision). Would love to hear any thoughts folks have on RHEV. For what we use, the feature set seems comparable (though the lack of DRS stands out to me). How are features like storage migration? Anything I need to worry about regarding oVirt, or is it close enough to vCenter that migration will be straightforward?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

1000101 posted:

Near as I can tell there isn't really a "vanilla OpenStack." It's basically like having a shitload of car parts and a Haynes manual dropped off, as opposed to a fully running car. Mirantis and Red Hat just package it and give you ways of deploying it without going insane.

That said, we've had pretty okay experiences with Red Hat OpenStack Platform. Not great, but given the landscape (the Mirantis people honestly underwhelmed me, though their installer worked well) I'm pretty comfortable with it. Are you looking to build VMs or do other things as well?
We're looking to pilot a really bare-bones project -- Nova, Cinder, Glance -- using a shitload of disposable, short-lived instances of exactly one workload type.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
evol262 had a really good overview of OpenStack somewhere in one of these threads, but search is failing me. Installing and maintaining it is a large time investment, you need to be very familiar with a bunch of technologies, and finding free support is difficult. It's very far from plug-and-play. For that reason, if you're just evaluating at this point, go with a third-party distribution like Red Hat's. Rackspace provides hosted OpenStack (I think), so if you just want to evaluate it from a user POV, throw some money at them and trial it without having to go through install hell. Otherwise, be prepared to sink a lot of time into just getting something up and running.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

minato posted:

evol262 had a really good overview of OpenStack somewhere in one of these threads, but search is failing me. Installing and maintaining it is a large time investment, you need to be very familiar with a bunch of technologies, and finding free support is difficult. It's very far from plug-and-play. For that reason, if you're just evaluating at this point, go with a third-party distribution like Red Hat's. Rackspace provides hosted OpenStack (I think), so if you just want to evaluate it from a user POV, throw some money at them and trial it without having to go through install hell. Otherwise, be prepared to sink a lot of time into just getting something up and running.
The alternative is that we manually manage 2,500+ ephemeral virtual machines across a rag-tag hardware deployment, so if it takes a few days of my time to get a good feel for it, that's probably fine.

Thanks for the help, everyone!

evol262
Nov 30, 2010
#!/usr/bin/perl

spoon daddy posted:

Welp, looks like we're moving wholesale from VMware to RHEV in the next few weeks (PHB monetary decision). Would love to hear any thoughts folks have on RHEV. For what we use, the feature set seems comparable (though the lack of DRS stands out to me). How are features like storage migration? Anything I need to worry about regarding oVirt, or is it close enough to vCenter that migration will be straightforward?

oVirt is upstream. RHEV-M is the management engine. It's pretty comparable to vCenter, but some bits need to be configured on the hosts rather than from the engine (SNMP, etc.). Feel free to ask questions.

Misogynist posted:

We're looking to pilot a really bare-bones project -- Nova, Cinder, Glance -- using a shitload of disposable, short-lived instances of exactly one workload type.

This should be really easy to do. My post is now missing some of the incubated/incubator projects like DNSaaS and MetalaaS, but you won't use those anyway.

Mirantis definitely has the best installer. Juju looks ok, but I haven't touched it. You can do it all yourself, but using packstack will be the least painful method, probably.

Neutron and kernel network namespaces are the sticking point for most people. I'd recommend setting up Open vSwitch bridges and configuring those by hand if you'll be the only tenant.
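
For the disposable-instance workflow, the API side is tiny. A rough sketch against python-novaclient (every name, credential, and URL below is made up):

code:
from novaclient import client

nova = client.Client("2", "pilot", "secret", "pilot-project",
                     "http://keystone.example.com:5000/v2.0")

image = nova.images.find(name="worker-template")  # registered in Glance
flavor = nova.flavors.find(name="m1.small")

# Boot a disposable worker, run the job, throw the instance away.
server = nova.servers.create(name="worker-0001", image=image, flavor=flavor)
# ... poll until server.status == "ACTIVE", run the workload ...
nova.servers.delete(server)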

Cidrick
Jun 10, 2001

Praise the siamese

minato posted:

evol262 had a really good overview of OpenStack somewhere on one of these threads, but search is failing me.

Found it.

I needed to look it up, too. I started digging through the RDO docs and using packstack to play around with the OpenStack installation, but there's an overwhelming amount of OpenStack information out there that's far from concise. My shop uses CloudStack at the moment, and it's fine, but we're looking at trying out OpenStack, mostly because of vendor and community support (and future support).

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

evol262 posted:

oVirt is upstream. RHEV-M is the management engine. It's pretty comparable to vCenter, but some bits need to be configured on the hosts rather than from the engine (SNMP, etc.). Feel free to ask questions.


This should be really easy to do. My post is now missing some of the incubated/incubator projects like DNSaaS and MetalaaS, but you won't use those anyway.

Mirantis definitely has the best installer. Juju looks ok, but I haven't touched it. You can do it all yourself, but using packstack will be the least painful method, probably.

Neutron and kernel network namespaces are the sticking point for most people. I'd recommend setting up Open vSwitch bridges and configuring those by hand if you'll be the only tenant.

As a VMware guy, is there a good learning resource or cert I should look for with Red Hat Virtualization? Any recommended reading?

evol262
Nov 30, 2010
#!/usr/bin/perl

Gyshall posted:

As a VMware guy, is there a good learning resource or cert I should look for with Red Hat Virtualization? Any recommended reading?

There's an RHCA virt track, but I'd pretty much just harass people in #ovirt on OFTC or users@ovirt.org, since all of us RHEV developers are also oVirt devs and can be reliably reached there.

Try to use it like VMware. If you have questions or problems, harass people. Try to use the API (Python or REST).

Templating is easier than in VMware, and RHEV has a lot of features that VMware only offers in add-on products, but we're also missing a lot (like VAAI). Play with it for a week and come back.
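
If you go the Python route, the SDK (ovirt-engine-sdk-python) is a thin wrapper over the REST API. Quick sketch; the engine URL and credentials are made up, and insecure=True just skips CA verification (fine for a lab, not production):

code:
from ovirtsdk.api import API

api = API(url="https://rhevm.example.com/api",
          username="admin@internal",
          password="secret",
          insecure=True)

# List every VM the engine knows about, plus its current state.
for vm in api.vms.list():
    print(vm.get_name(), vm.get_status().get_state())

api.disconnect()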

spoon daddy
Aug 11, 2004
Who's your daddy?
College Slice

evol262 posted:

oVirt is upstream. RHEV-M is the management engine. It's pretty comparable to vCenter, but some bits need to be configured on the hosts, not from the engine (SNMP, etc). Feel free to ask questions

That would explain things: the guys figuring this out kept saying oVirt and the manager said RHEV, but it's definitely oVirt since we're using OEL. I'm installing it at home now to play with it; I'll ask questions as they come up. Thanks!

syg
Mar 9, 2012
OK, here's a question.

SQL-based application that has one SQL Enterprise server with 6 app servers, all Windows.

I'm reconfiguring the volumes/datastores for SRM and trying to decide between putting all 7 servers on one datastore for SRM simplicity or splitting them up into DATA and APP datastores. SRM seems to imply that the best practice is grouping app-related guests together in a datastore, but I've heard there can be contention for access to a datastore between guests when load is heavy. Any idea which way is best practice? The app servers interface with clients via the internet and pull data from the SQL server.

Pile Of Garbage
May 28, 2007



syg posted:

OK, here's a question.

SQL-based application that has one SQL Enterprise server with 6 app servers, all Windows.

I'm reconfiguring the volumes/datastores for SRM and trying to decide between putting all 7 servers on one datastore for SRM simplicity or splitting them up into DATA and APP datastores. SRM seems to imply that the best practice is grouping app-related guests together in a datastore, but I've heard there can be contention for access to a datastore between guests when load is heavy. Any idea which way is best practice? The app servers interface with clients via the internet and pull data from the SQL server.

Do you have tiered storage on the back end? That would be the major influencing factor as far as VM placement is concerned, IMO.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

syg posted:

OK, here's a question.

SQL-based application that has one SQL Enterprise server with 6 app servers, all Windows.

I'm reconfiguring the volumes/datastores for SRM and trying to decide between putting all 7 servers on one datastore for SRM simplicity or splitting them up into DATA and APP datastores. SRM seems to imply that the best practice is grouping app-related guests together in a datastore, but I've heard there can be contention for access to a datastore between guests when load is heavy. Any idea which way is best practice? The app servers interface with clients via the internet and pull data from the SQL server.

Whether or not there are problems with contention depends on your storage. From an I/O perspective, different datastores may all be containers virtualized on the same set of spindles, so splitting them won't lessen spindle contention. If you're using block storage there can also be lock contention, but that is much improved with the VAAI ATS primitive, if your storage and vSphere version support it.

So really you need to give more detail. From an SRM perspective keeping everything that is part of the application "bundle" together makes things easier.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Wicaeed posted:

So I'm getting hands-on with Azure for the first time today.

Apparently when mapping endpoints (firewall rules, really) to VMs, you can only use single ports (no port ranges) and only up to 150 endpoints per VM.

What the gently caress Azure :psypop:

Don't worry, AWS is much worse.

I'll make a new VM and storage thread, then cya space cowboy...

Dilbert As FUCK fucked around with this message at 07:13 on Feb 14, 2015

El_Matarife
Sep 28, 2002
I'm running a VCAP-DTA (the VCAP lab exam for desktop administration) study group this Thursday night, so if anyone wants in, PM me. And I'd love presenters, if anyone is interested in that.

I'm now a 3x VCAP: DCA, DCD, and DTD.

Mr Shiny Pants
Nov 12, 2012
Are there any gotchas for vSphere 5.5 Update 2 that I need to be aware of? We are updating our cluster at the end of the week because of some VSS errors and some other things relating to the VMCI driver.

El_Matarife
Sep 28, 2002
I'd go to 5.5 Patch 4 from December at least; the Changed Block Tracking backup corruption issue would keep me up nights if you ever expand disks. Honestly, unless you've got robust change control records, you'd never remember whether any disks had been expanded past the 128 GB barrier. http://kb.vmware.com/kb/2090639
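
If you're worried you've already hit it, the usual mitigation is resetting CBT on the affected VMs. A pyVmomi sketch of the toggle (assumes 'vm' is a vim.VirtualMachine you've already looked up; check the KB for the exact snapshot dance your version needs):

code:
from pyVmomi import vim

def set_cbt(vm, enabled):
    # Toggle Changed Block Tracking via a reconfigure task.
    spec = vim.vm.ConfigSpec(changeTrackingEnabled=enabled)
    return vm.ReconfigVM_Task(spec=spec)

set_cbt(vm, False)
# ... create and delete a throwaway snapshot so the change sticks ...
set_cbt(vm, True)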

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
We just went from 5.1 to 5.5 U2 a few weeks ago, with a few gotchas. First, we were using vanilla ESXi ISOs that, when upgraded to vanilla 5.5, lost drivers for some of the hardware, specifically the PCIe expansion chassis, so our new 10Gb HBAs wouldn't show up in ESXi. HP pretty much requires you to use their ESXi ISO for any kind of support, too. For reference, we are running DL380p Gen8s.

The major showstopper is that VAAI doesn't actually turn off in 5.5; it's a known bug. This tore up our storage, started locking LUNs to one host, and obviously created serious issues. We had to update to the most recent version of Nexenta, which explicitly blocks VAAI access, because VMware won't fix the bug after a year. All in all, we had to power off the entire environment (300+ VMs on 6 hosts) TWICE to get rid of the LUN locks and do the Nexenta upgrades. It was a VERY painful week.

tl;dr: make sure you have a test environment to validate the changes. We didn't, and paid a huge price for it, but at least we got some funding to build one out going forward.

TeMpLaR
Jan 13, 2001

"Not A Crook"

mayodreams posted:

We just went from 5.1 to 5.5 U2 a few weeks ago, with a few gotchas. First, we were using vanilla ESXi ISOs that, when upgraded to vanilla 5.5, lost drivers for some of the hardware, specifically the PCIe expansion chassis, so our new 10Gb HBAs wouldn't show up in ESXi. HP pretty much requires you to use their ESXi ISO for any kind of support, too. For reference, we are running DL380p Gen8s.

The major showstopper is that VAAI doesn't actually turn off in 5.5; it's a known bug. This tore up our storage, started locking LUNs to one host, and obviously created serious issues. We had to update to the most recent version of Nexenta, which explicitly blocks VAAI access, because VMware won't fix the bug after a year. All in all, we had to power off the entire environment (300+ VMs on 6 hosts) TWICE to get rid of the LUN locks and do the Nexenta upgrades. It was a VERY painful week.

tl;dr: make sure you have a test environment to validate the changes. We didn't, and paid a huge price for it, but at least we got some funding to build one out going forward.

I don't understand the VAAI comments. How would VAAI being enabled cause LUN locking?

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

TeMpLaR posted:

I don't understand the VAAI comments. How would VAAI being enabled cause LUN locking?

The version of Nexenta we had didn't fully support it, so it was recommended to turn it off. The result was LUN locking, and guests on the other hosts dropped off from storage from the host's perspective.

evol262
Nov 30, 2010
#!/usr/bin/perl

TeMpLaR posted:

I don't understand the VAAI comments. How would VAAI being enabled cause LUN locking?

Hardware-assisted locking and copy offload both sometimes misbehave.

Broadly, some unnamed vendors are really good at pushing APIs that other companies have a lot of trouble implementing, especially in the software-defined networking and storage API spaces.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evol262 posted:

Hardware-assisted locking and copy offload both sometimes misbehave.

Broadly, some unnamed vendors are really good at pushing APIs that other companies have a lot of trouble implementing, especially in the software-defined networking and storage API spaces.

I think SCSI UNMAP is really the poster child for this. It absolutely destroyed array performance when first introduced, to the point where VMware disabled the functionality and turned it into a manual, CLI-only option.
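
For anyone curious, the manual path on 5.5 is esxcli's unmap command, run per datastore on the host. Sketch; the datastore label is made up and the reclaim unit is optional:

code:
import subprocess

# Reclaim dead space on a VMFS datastore by label. This is the manual
# replacement for the old automatic UNMAP behavior.
subprocess.run(
    ["esxcli", "storage", "vmfs", "unmap",
     "--volume-label=datastore01",
     "--reclaim-unit=200"],   # VMFS blocks reclaimed per iteration
    check=True,
)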

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

TeMpLaR posted:

I don't understand the VAAI comments. How would VAAI being enabled cause LUN locking?

VAAI includes a feature called ATS (atomic test-and-set) which allows multiple hosts to make metadata updates to a VMFS volume without having to issue a full-on SCSI reservation for the device. If you're using thin provisioning or have lots of hosts in a cluster (say, more than 6), this can make a pretty big difference.
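
You can check which primitives a given LUN actually reports straight from the host. Sketch; the naa device ID below is made up:

code:
import subprocess

# Prints ATS / Clone / Zero / Delete support status for one device.
out = subprocess.run(
    ["esxcli", "storage", "core", "device", "vaai", "status", "get",
     "-d", "naa.60a98000646e2f6a346a6f79714d00ff"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)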

Peanut and the Gang
Aug 24, 2009

by exmarx
I want to nuke a drive that PGP Desktop mucked up (it won't let me decrypt it for whatever reason). Goog says the easiest solution is to use DBAN to destroy it, but I don't have any USBs/CDs available to boot from, so I want to try it from VirtualBox. Is there a way I can set VirtualBox to see my secondary drive attached to SATA? I know this is usually disabled for safety reasons; no clue how to enable it, though.

Peanut and the Gang
Aug 24, 2009

by exmarx
Got it with http://www.serverwatch.com/server-tutorials/using-a-physical-hard-drive-with-a-virtualbox-vm.html and then added the vmdk to the list in Settings -> Storage.
Sweet.
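
For anyone who hits this later, the core of that tutorial is one VBoxManage command that wraps the physical disk in a vmdk descriptor. Sketch; the drive number and path are made up, and pointing it at the wrong disk hands DBAN the wrong drive, so triple-check:

code:
import subprocess

# Create a raw-disk vmdk that passes PhysicalDrive1 through to the VM.
# Run as admin; VirtualBox needs raw access to the device.
subprocess.run(
    ["VBoxManage", "internalcommands", "createrawvmdk",
     "-filename", r"C:\vm\rawdisk.vmdk",
     "-rawdisk", r"\\.\PhysicalDrive1"],
    check=True,
)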

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

El_Matarife posted:

I'm running a VCAP-DTA (the VCAP lab exam for desktop administration) study group this Thursday night, so if anyone wants in, PM me. And I'd love presenters, if anyone is interested in that.

I'm now a 3x VCAP: DCA, DCD, and DTD.

I tried to do something similar as a GUG 'n poo poo; it failed horribly...

I found it better to just make poo poo loads of money and stuff. That's what SA told me I was wrong about; I just should have focused on that all along, I guess.

Also, VCP6 I have worries about... just eh...

Dilbert As FUCK fucked around with this message at 04:37 on Feb 21, 2015

Internet Explorer
Jun 1, 2005





So I know I asked about vShield in this thread before and got some good answers, but I was wondering if anyone has a specific AV product that works well for them. I need to get something in place and don't really have the time to do a ton of research. Anyone using something they like?

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I am using Trend Micro Deep Security for agentless AV on our virtualized servers and VDI workloads. For the most part, it works without issue. Sometimes their little Linux appliance (which needs to run on each host) will go braindead and lock up; a reset of the VM fixes it. Outside of that, it seems to do a pretty good job on the AV front. Not using any of their other modules.


Mulloy
Jan 3, 2005

I am your best friend's wife's sword student's current roommate.
Question: I'm looking at a brief network pause (3-4 seconds at most) on a virtualized server. About 30 seconds prior to the stream of communication errors, I see a VMware Tools update that includes a couple of messages about network configuration updates occurring. Is it possible for an update like this to cause a brief pause/interruption in network activity?

Edit: It did lay down some new NIC drivers; I just can't see a clear "the NIC itself restarted" event within the OS, and I don't have access to the host, just the guest.

Mulloy fucked around with this message at 23:24 on Feb 24, 2015
