Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Kachunkachunk posted:

For anyone with an interest and supported GPU, nVidia VIBs are available: http://www.nvidia.com/object/vmware-vsphere-esxi-5.1-304.76-driver.html.
I have doubts it'll work with other GPUs without modification/hackery, say for whitebox users like myself, but I do hope to have some time to try with a Fermi desktop card some time next week.

Edit: I had it working before with an older driver that wasn't locked down. I could just rely on it, but I noticed odd power consumption patterns with the card. Could be a server-side issue, though.
I would basically come back in a few days to find the room hot from the GPU, which had been running full-tilt for days. Without any VMs running off of it. :psyduck:


You had an extra . in that link http://www.nvidia.com/object/vmware-vsphere-esxi-5.1-304.76-driver.html

but good to know!


bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Microsoft clustering within VMWare, anything I should really be aware of?

We're getting our new R620/Equallogic system with VMWare Enterprise in a month or so. This is our first VMWare system and first shared storage for VMs. Currently we're just doing 2008 R2 Hyper-V on local storage for standalone hosts.

So, now that we will have the infrastructure, I'm trying to get things to a more reliable point as well as easing maintenance.

Our webservers are all load-balanced, so there's no real issue with them, but we have a few things that absolutely must be up as much as possible. One example is our MSMQ server. Now, I could do a standalone machine with Fault Tolerance, but that doesn't solve downtime for patching the machine. So the best thing to do here would be to make a cluster for MSMQ; that way the only downtime is the brief blip for failover of the cluster. Then we can patch the passive member, fail over, and patch the other member.

Now, I know there are two ways I can do this.

1) Simplest way is "cluster in a box". I can use virtual disks for the shared storage on a single host, and everything is configured within VMWare, which simplifies operations. The only downside is that, in the event of host failure, the whole cluster is down until HA restarts it on another host. Not terrible, but not ideal.

2) Carve out storage from the SAN, use iSCSI initiators in the guest OS, and go about MSCS as if these were two physical hosts. That would allow me to split the VMs between two physical hosts so that it would only be a cluster failover in the event of underlying host failure. The downside is that VMWare wouldn't really be aware of or managing the storage I present to the guest over iSCSI, and it increases the complexity of the storage network, as that traffic should really be kept separate from the VMWare traffic.

I would prefer the quicker failover in the event of host failure, but if that extra configuration is going to give me extra grief down the road, we might just take the risk of longer recovery in the event of host failure.

MC Fruit Stripe
Nov 26, 2002

around and around we go
For anyone interested in my little "creating a fixed disk on Server 2012 causes the drive to fail" issue, I put 2008 R2 on the box and it can now create disks with no problem. No idea.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

bull3964 posted:

Microsoft clustering within VMWare, anything I should really be aware of?

[...]

I would prefer the quicker failover in the event of host failure, but if that extra configuration is going to give me extra grief down the road, we might just take the risk of longer recovery in the event of host failure.

Simplest way: don't. Really, don't.

You're trading the ability to patch these VMs for the ability to patch your hypervisor. Regardless of the disk sharing configuration, you can't vMotion an MSCS cluster member. This means that every time you need to apply VMware updates for any reason, you're manually bringing down your cluster VMs one at a time. Routine patching of a vSphere cluster should take 30 seconds of your effort. The process should be completely hands-off once you click the button to apply patches in VUM. What you're doing is making it take an order of magnitude more effort, as well as dramatically increasing the chances that something fucks up really badly during one of the unnecessary MSCS failovers that you've deliberately introduced into your operating process.

I understand that reboots are a traumatic thing on physical hardware because you have to go through UEFI initialization, SAS controller initialization, and the myriad of other things that can make an ordinary reboot take ten minutes. On decent storage, your VM shouldn't take more than a minute or two to reboot for routine patches, and far less if you're running Server Core.

If you do go this route, use host affinity rules to clump your cluster VMs onto a pair of hosts so patching is at least not a complete clusterfuck.

Vulture Culture fucked around with this message at 18:13 on Mar 22, 2013

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online
If you go the SCSI bus sharing route, there is no way to take a snapshot of the VM, which means you can't back up your MSCS machines or do anything fun with them. That's about the worst thing I've run into.

We've not had terribly good luck with MSCS in a virtual environment, but that's probably due to our absurd security requirements, like our VMs not being able to talk back to storage in any way, shape, or form.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


The issue is, a 2-minute reboot of the VM is FAR more disruptive than a 10-second MSCS failover when it comes to our app platform.

I'm probably going to have to discuss this internally.

Syano
Jul 13, 2005

bull3964 posted:

Microsoft clustering within VMWare, anything I should really be aware of?

[...]

2) Carve out storage from the SAN and use iSCSI initiators on the guest OS and go about MSCS like these were two physical hosts. [...]

If it were a simple cluster, I'd say use route 2. I have a 2-node file sharing cluster using in-guest iSCSI. Everything works just peachy and there aren't any gotchas to it at all. My hosts all connect to storage via iSCSI, so just add a virtual machine network to one of your iSCSI adapters and you're off to the races. Bad news is that my Veeam backup and replication jobs are pretty much worthless for this cluster. Since this is just a simple file server, though, all we do is replicate the data via DFS and grab backups at another location. Long story short: it works... but there really wasn't any point to it, and it makes backups and replication tougher. Just build a redundant and available infrastructure to begin with and forget clustering.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

bull3964 posted:

The issue is, a 2 minute reboot of the VM is FAR more disruptive than a 10 second MSCS fail over when it comes to our app platform.

I'm probably going to have to discuss this internally.
The bigger point is that, especially if you're using a quorum disk or SMB witness instead of a simple node majority, cluster failovers don't always go as planned -- storage is temperamental and locking issues are usually to blame. You will hit some unplanned downtime on this if you keep your vSphere cluster anywhere near up-to-date. We've had great luck doing failovers of every kind with Exchange 2010, which is shared-nothing from a storage perspective, but we've hit really serious MSCS-related outages in the last couple of years between SQL Server (highly fragmented transaction log wouldn't replay) and a Windows file sharing cluster (took 10 hours to run a disk check before it would mount the volume). MSMQ might play a lot nicer given the much lower disk requirement.

If you don't patch your environment regularly (and otherwise good perimeter security may mean you don't have to), this won't be much of an issue for you, of course.

Syano posted:

Bad news is that my Veeam backup and replication jobs are pretty much worthless for this cluster.
And every other VM :laugh:

Vulture Culture fucked around with this message at 19:05 on Mar 22, 2013

Syano
Jul 13, 2005

Misogynist posted:

And every other VM :laugh:

HEYO!!

But yeah.... :(

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Syano posted:

Just build a redundant and available infrastructure to begin with and forget clustering.

Well, that's ultimately the issue.

I'm trying to engineer from the server/network perspective resiliency that our application platform doesn't have currently. We're working to change it, but re-factoring always gets thrown out the window for new client projects.

That's really why I asked the question in the first place. I know doing this will ease the burden from the app perspective, in that I can more easily get approval to patch and restart the guests if I tell them "it'll be a 10-second blip to fail over the cluster." However, if it's going to cause other issues, I may not do it and just say that we need to be able to take the few-minute hit of restarting the VM.

Syano
Jul 13, 2005
Unless your infrastructure is balls, it should really only take maybe... I dunno... 30 seconds... to restart a VM? I dunno, lemme clock a cold-boot-to-login-screen startup real fast...


Edit: Ok, it took 90 seconds... always seemed faster when not watching a clock. Anyways, still pretty fast.

Syano fucked around with this message at 20:05 on Mar 22, 2013

Studebaker Hawk
May 22, 2004

Any of you running any sort of IDPS system, or vmsafe application (Trend, Hytrust, Catbird, etc.)? Anyone doing it in a multi-tenant capacity?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

bull3964 posted:

Well, that's ultimately the issue.

I'm trying to engineer from the server/network perspective resiliency that our application platform doesn't have currently. [...]

What exactly is this application, if you don't mind me asking? It's great to engineer solutions that increase uptime and availability, but don't try to make an application do something it wasn't designed for (unless you are the programmer for this app, or it has a built-in, supported method).

What are the system requirements for this app? Have you looked into FT at all? I realize it doesn't protect against application failures, but it is something worth noting.

E: Also, what is the business requirement for availability? It's great to shoot for five 9's, but it also costs a bunch of time and money to do so. Is that something the business wants to pursue, or is ~5 minutes of service interruption of that application every now and then okay?

Dilbert As FUCK fucked around with this message at 20:22 on Mar 22, 2013

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Corvettefisher posted:

What exactly is this application if you don't mind me asking?

We're a SaaS provider. So our platform is a combination of a SQL Server backend (not virtualized, and on its own shared storage for the cluster) and custom .NET apps (both websites and supporting Windows services). Right now, we're 85% virtualized under Hyper-V with standalone servers on local storage. We are moving to almost 100% virtualized with an iSCSI SAN and VMWare Enterprise.

These are just the standard issues that arise when a company grows eightfold within 10 years. There's a lot of legacy code that wasn't really written with redundancy or resiliency in mind, so it tends to be temperamental when network resources are inaccessible. Things have changed, and the new stuff being written has much better error handling and reprocessing capabilities that can tolerate gaps in communication, but there's a whole ton of stuff that we don't have the resources right now to revisit and fix.

For example, we started using MSMQ as a way to do async processing of data submitted from the web to backend services, but the question of "what happens when the queue server isn't there" wasn't really addressed outside of throwing a hard exception (mostly due to project time constraints, and the mindset that the clients at the time didn't have strict SLAs on uptime). So, the MSMQ server goes down and some user is going to get an error when trying to submit, or a service will freak out and stop processing everything because it couldn't access the queue. The less time the queue is down, the less chance of something like that happening.

Now the new stuff buffers to local queues and has a whole error handling and reprocessing framework that makes the issue moot, but the older stuff isn't going to get revisited right away.
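The buffer-to-a-local-queue-and-reprocess pattern described above can be sketched like this (illustrative Python, not the actual framework; `remote_send` and every other name here is hypothetical, and a real implementation would persist the local buffer to disk and run the flush on a timer or background thread):

```python
import queue

class BufferedSender:
    """Wraps a remote queue client. When the remote queue is unreachable,
    messages land in a local buffer instead of raising to the caller."""

    def __init__(self, remote_send):
        self.remote_send = remote_send  # callable that raises ConnectionError on failure
        self.local = queue.Queue()      # in-memory stand-in for a local MSMQ queue

    def send(self, message):
        # Try the remote queue first; fall back to the local buffer
        # rather than surfacing a hard exception to the user.
        try:
            self.remote_send(message)
        except ConnectionError:
            self.local.put(message)

    def flush(self):
        # Replay anything that accumulated while the remote queue was down.
        while not self.local.empty():
            message = self.local.get()
            try:
                self.remote_send(message)
            except ConnectionError:
                self.local.put(message)  # still down; keep it buffered
                break
```

The point is that a short queue-server outage (a cluster failover blip or a VM reboot) becomes invisible to callers, which is what makes the "how long is the queue down" question much less critical.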

So, I'm just trying to suss out the right direction moving forward. We've had trouble establishing a good and consistent server patch policy simply because of the concerns with interrupting communication with some of these resources so I was looking at clustering as a way to help with this. But if it adds significant management overhead on its own, it won't be worth the tradeoff.

bull3964 fucked around with this message at 20:38 on Mar 22, 2013

DevNull
Apr 4, 2007

And sometimes is seen a strange spot in the sky
A human being that was given to fly

Kachunkachunk posted:

For anyone with an interest and supported GPU, nVidia VIBs are available: http://www.nvidia.com/object/vmware-vsphere-esxi-5.1-304.76-driver.html.
I have doubts it'll work with other GPUs without modification/hackery, say for whitebox users like myself [...]

I think they blacklisted all but the supported cards for the release version of the driver.

Kachunkachunk
Jun 6, 2011
That is poo. And I expect so. But there's always Directpath I/O, I guess.

DevNull
Apr 4, 2007

And sometimes is seen a strange spot in the sky
A human being that was given to fly

Kachunkachunk posted:

That is poo. And I expect so. But there's always Directpath I/O, I guess.

Nvidia wants to sell their high end cards. To be fair, those are the cards that have enough memory to run lots of VMs.

edit: It also helps stability. There are tons of hardware limitations on ESX for that reason.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I redid my lab over the weekend and was wondering if anyone else had their lab set up in a similar fashion. FreeNAS 8.3 using ZFS did a bit better than I thought it would.

This is my View 5.2 lab:
(I can include IP addressing, VLANs, and other configs if people want)


Dilbert As FUCK fucked around with this message at 23:21 on Mar 24, 2013

wolrah
May 8, 2006
what?

DevNull posted:

edit: It also helps stability. There are tons of hardware limitations on ESX for that reason.

The correct answer in these situations is always to set a flag when using unsupported but technically capable hardware (and maybe make this very blatant in the client, say a permanent red status bar) so support doesn't waste time on said boxes, but test/proof-of-concept/home learning lab environments can still be built for cheap. Blacklisting overall is just a dick move.

DevNull
Apr 4, 2007

And sometimes is seen a strange spot in the sky
A human being that was given to fly

wolrah posted:

The correct answer in these situations is always to set a flag when using unsupported but technically capable hardware (and maybe make this very blatant in the client, say a permanent red status bar) so support doesn't waste time on said boxes, but test/proof-of-concept/home learning lab environments can still be built for cheap. Blacklisting overall is just a dick move.

Talk to Nvidia. It is their driver. The closest you can do for home learning on this would be 3D on Workstation.

evil_bunnY
Apr 2, 2003

DevNull posted:

Talk to Nvidia. It is their driver. The closest you can do for home learning on this would be 3D on Workstation.
This is a pretty unhealthy way of looking at things. VMware has leverage, customers wanting whiteboxes not so much.

DevNull
Apr 4, 2007

And sometimes is seen a strange spot in the sky
A human being that was given to fly

evil_bunnY posted:

This is a pretty unhealthy way of looking at things. VMware has leverage, customers wanting whiteboxes not so much.

I don't think we have that much leverage with Nvidia. I don't deal with that side of things much, though. Virtualized graphics on the server is super new, so we'll see how things unfold over the next few years. I don't know if anyone really knows how quickly this will get adopted.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
To any vExperts here, do you put it on your resume? I've been nominated by some people in my area.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Corvettefisher posted:

To any vExperts here, do you put it on your resume? I've been nominated by some people in my area.

This is an example of a question you shouldn't ever have to ask.

Anytime you receive recognition for your professional efforts it should go somewhere on your resume and in any cover letters you may send out. You can bet anyone competing for the same job would do the same. It's also another way to show value.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online
What kind of dedup ratio should I expect out of a mixed Windows 2008 R2/RHEL6 environment where most if not all of the VMs were built using templates? vSphere Data Protection seems to be getting something like 15:1.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Wow View 5.2 is surprisingly unbuggy to get going, or maybe after 5.1's bullshit I just didn't notice it as much.

1000101 posted:

This is an example of a question you shouldn't ever have to ask.

Anytime you receive recognition for your professional efforts it should go somewhere on your resume and in any cover letters you may send out. You can bet anyone competing for the same job would do the same. It's also another way to show value.


Thanks, I'll keep that in mind. It's mostly the fact that I really haven't been working for it at all and had for the most part forgotten about it, but if it is noteworthy I might as well include it.



Goon Matchmaker posted:

What kind of dedup ratio should I expect out of a mixed Windows 2008 R2/RHEL6 environment where most if not all of the VMs were built using templates? vSphere Data Protection seems to be getting something like 15:1.


I'll check on my ratio when I get back to work. I have about 85ish VMs being backed up, Windows/CentOS 6.


E: Oh that vExpert thing needs some references to online activity? This was a terrible time to buy myself a new avatar...

Dilbert As FUCK fucked around with this message at 04:17 on Mar 26, 2013

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


So, as I've mentioned before, we're in the process of obtaining our first VMWare cluster.

Now, VMWare's site has been confusing me and seems to change on a daily basis. We licensed an Enterprise Acceleration kit. However, on VMWare's site under pricing, the only kits they have listed now are with Operations Management.

https://www.vmware.com/products/datacenter-virtualization/vsphere/pricing.html

Even their whitepapers seem to not acknowledge Enterprise Acceleration Kits without Operations Management now.

This was not the case just a few weeks ago. Did we just buy a discontinued SKU?

However, under the Compare Kits tab, none of the kits themselves mention the Operations Management piece. So, at this point in time, I'm not sure what features are included in Enterprise and which are only included in Enterprise with Operations Management.

Case in point, vSphere Data Protection Advanced is listed under the Enterprise Acceleration Kit, but is that with all of those kits or only those with Operations Management?

evil_bunnY
Apr 2, 2003

Goon Matchmaker posted:

What kind of dedup ratio should I expect out of a mixed Windows 2008 R2/RHEL6 environment where most if not all of the VMs were built using templates? vSphere Data Protection seems to be getting something like 15:1.
I don't have a Windows-only volume, but 3:1 is a reasonable expectation. Make sure your VMs are aligned.
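For intuition on why template-built VMs report such high ratios, here's a toy fixed-block dedup estimator (illustrative Python only; real appliances like VDP use variable-length segments and compression, so real ratios will differ):

```python
import hashlib

def dedup_ratio(disk_images, block_size=4096):
    """Estimate a fixed-block dedup ratio across a set of disk images
    (given here as byte strings): total blocks / unique blocks."""
    total_blocks = 0
    unique = set()
    for image in disk_images:
        for off in range(0, len(image), block_size):
            total_blocks += 1
            unique.add(hashlib.sha256(image[off:off + block_size]).digest())
    return total_blocks / len(unique)

# Two "VMs" cloned from the same zero-filled template, each with one
# unique block: nearly everything collapses to a single stored block,
# which is why template-built environments dedupe so well.
template = b"\x00" * 4096 * 100
vm1 = template + b"a" * 4096
vm2 = template + b"b" * 4096
print(round(dedup_ratio([vm1, vm2]), 1))  # -> 67.3
```

Unrelated images share almost no blocks and land near 1:1, which is why a mixed Windows/RHEL volume won't match the headline number for a fleet of template clones.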

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

bull3964 posted:

So, as I've mentioned before, we're in the process of obtaining our first VMWare cluster.

Now, VMWare's site has been confusing me and seems to change on a daily basis. We licensed an Enterprise Acceleration kit. However, on VMWare's site under pricing, the only kits they have listed now are with Operations Management. [...]

I think I know what you're asking, but this is probably something you should give your reseller a buzz about and talk out with them.

Mierdaan
Sep 14, 2004

Pillbug
VMware Loses More Than $2 Billion in Market Cap on PayPal / Ebay Rumors.

Ah, the things rumors can do...

Crackbone
May 23, 2003

Vlaada is my co-pilot.

Couple questions. I'm looking to virtualize a DC (only one on the network, so no worries about AD replication or etc), but read a lot of conflicting information/recommendations.

First, while I'm pretty sure it's safe, is there any danger to a source machine in doing a hot migration? Essentially, I want to test the conversion process without stopping services. I don't care if the resulting VM is transactionally consistent for the test, but I do want to make sure the conversion process isn't invasive enough to kill any services or bring the system to its knees.

Second, I see a lot of recommendations for doing a cold migration, but apparently VMWare killed off their cold boot discs quite a while ago (somewhere around 4.1?). Is it worth using an old migration technique, or will a "warm" migration be fine?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Hot cloning a DC is a no, and P2V of a DC is usually not on the recommended list. Cold cloning is much safer, provided you never touch the physical DC again. An FSMO transfer is the safest, supported method.

Also, when virtualizing a domain controller, make sure it isn't syncing time with the host; point it at an NTP server on a router or other device.
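To stop the guest from syncing time with the host entirely (VMware Tools can still push host time on events like vMotion, snapshots, and resume even with the periodic-sync checkbox off), the commonly cited fix is a set of .vmx options along these lines; treat this as a sketch from memory and verify the exact option names against current VMware documentation for your version:

```
tools.syncTime = "0"
time.synchronize.continue = "0"
time.synchronize.restore = "0"
time.synchronize.resume.disk = "0"
time.synchronize.shrink = "0"
time.synchronize.tools.startup = "0"
```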

Studebaker Hawk
May 22, 2004

Crackbone posted:

Couple questions. I'm looking to virtualize a DC (only one on the network, so no worries about AD replication or etc), but read a lot of conflicting information/recommendations. [...]

Why? It is almost never worth it, and far easier to just build/promote/demote, unless you are running 2012 for some reason and can take advantage of VM-Generation ID.

Crackbone
May 23, 2003

Vlaada is my co-pilot.

Should have been more specific, it's a SBS2011 server, so I don't believe I can do a standard demote/promote.

Speaking to the "hot" migration, I was talking specifically about doing it just to test out the conversion; the VM wouldn't ever be hooked up to the production environment. My concern was whether the "hot" migration might create problems on the production server.

So, like I said, my other concern was whether a cold clone off of the 4.1 converter would be problematic, given it's fairly out of date.

Crackbone fucked around with this message at 22:57 on Mar 26, 2013

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Do it from Active Directory restore mode.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
You could always install the Hyper-V role and run Hyper-V nested in the VM cluster, then move it over...

The virtual hardware and such will be out of date, but that's fixable; 5.1 is backwards compatible.

If you want to see how it works as a test, set up a Windows VM in Workstation, don't install Tools, shut it down, then use the cold clone to capture and import into the new environment.

Docjowles
Apr 9, 2009

Came across a neat script in my blog reading. It tests everything in the official vSphere Hardening Guide that can reasonably be done programmatically and spits out a report card. It looks like roughly half the checks are still manual but better than nothing.

Claims to work all the way from version 4.0 up to 5.1.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Is anyone having trouble with audio playback in View 5.2 over HTML (or Blast)? Is it just not supported, or is it supposed to work? Video plays a bunch smoother than I thought, but audio does not seem to work.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Corvettefisher posted:

Is anyone having trouble with audio playback in View 5.2 over HTML (or Blast)? Is it just not supported, or is it supposed to work? Video plays a bunch smoother than I thought, but audio does not seem to work.

There is no audio support (or USB, webcam, redirection, ThinPrint, etc.).


Studebaker Hawk
May 22, 2004

Crackbone posted:

Should have been more specific, it's a SBS2011 server, so I don't believe I can do a standard demote/promote. [...]

I have had some nightmare experiences P2V'ing SBS2008. I would do this instead: http://technet.microsoft.com/en-us/library/gg563798.aspx
