CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!
:yaycloud::smithcloud:
what's the easiest, cheapest way to do backups of VMs under Essentials (4.1)? I have client Backup Exec on the main servers, but ideally I'd also like to back up the whole VMs to a NAS on a daily or weekly basis.

I'm also curious what the overhead is writing from an ESXi host to local VMFS storage. (Current hosts are R610s with whatever the best controller Dell ships, and 10K SAS drives.)

Still wondering what the best sub-$3k NAS to pick up is for a couple of low-utilization VMs.


Docjowles
Apr 9, 2009

Is it Essentials or Essentials Plus? If you sprang for Plus, you get Data Recovery included which is perfectly fine for simpler environments.

Bitch Stewie
Dec 17, 2011
Does it work properly now?

Data Recovery had a reputation for being totally broken when it came out, didn't it?

Docjowles
Apr 9, 2009

2.0 (which came out a while ago) seems fine, I've been using it with no issues. The main problem I had was entirely my fault for backing up to a lovely NAS with few and slow spindles. The initial backup would take longer than 24 hours to complete and would therefore abort when the next backup job started, and never actually finished. I worked around this by adding one or two VM's per day to the backup job until everything was being backed up properly. Once the first full backup completes, subsequent jobs are just the incremental changes so they run very fast.
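That "add a couple of VMs per day" workaround is really just incremental batching. A minimal sketch of the idea (all names hypothetical; this is only to illustrate how the job grows so each day's new full backups fit inside the 24-hour window):

```python
# Sketch of the staggering workaround: instead of pointing the backup
# job at every VM at once (where the initial full pass blows past the
# 24-hour window and gets aborted), grow the job a couple of VMs per
# day. VMs already in the job only need fast incrementals.

def staggered_schedule(vms, per_day=2):
    """Return the job contents for each day: day N covers all VMs
    added on days 1..N, so full backups are spread out while
    incrementals continue for everything already in the job."""
    schedule = []
    in_job = []
    for i in range(0, len(vms), per_day):
        in_job = in_job + vms[i:i + per_day]
        schedule.append(list(in_job))
    return schedule

days = staggered_schedule(["dc1", "sql1", "web1", "web2", "file1"], per_day=2)
for n, batch in enumerate(days, 1):
    print(f"day {n}: {batch}")
```

Only the first appearance of each VM costs a full backup; after that it rides along as an incremental, which is why the job eventually stabilizes.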

Caveat being that I have a small and simple environment, I don't claim to speak for a situation involving dozens of hosts and hundreds of VM's.

MC Fruit Stripe
Nov 26, 2002

around and around we go

Corvettefisher posted:

Okay, well, I went back over my Newegg stuff and I guess I got the RAM, mobo, and SSD on Shell Shockers, which drove the cost down a lot, as they are higher now. Odd.

http://www.savemyserver.com/servlet/the-*Dell-%26-HP-Servers--pls-/Categories
might find some good deals here for whitebox builds
I got a bug up my butt about it, and once that happens, there's no stopping it. I now have a lab box - 32GB, Core i7, all the goods. All that's left is to outfit it.

I was loading VM after VM, but wanted to take a break and ask for some advice. If you get 15 minutes to kill this evening or tomorrow, any chance you could either diagram your network, or just provide some suggestions? Just things you did that made it more 'real' is what I am looking for - the point of all of this is to create a self contained lab environment, so I want to do it as real as possible.

So far I've installed:
4 Windows 2008 R2 servers
2 Windows 7
2 Windows XP
1 Ubuntu
1 Debian
1 CentOS (my Linux is extraordinarily weak when I get away from commands I've memorized, hence)
1 FreeNAS

To do:
- Figure out how to use FreeNAS and integrate it
- Set up an ESXi box and put a couple of VMs on that
- Integrate it with GNS3. I've got to assume that's possible

The intent is to set up some approximation of a 'real' network, with all the features and problems thereof, but on a much smaller scale.

So, I don't know - anything you've come across that might be a good thing to have or know, I'd appreciate it.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
k lemme fire up visio, my environment is pretty crude



Pretty crude. I haven't got much PowerCLI working, and no fully automated data center deploys yet; only HA/DRS/sDRS/vMotion/VM affinity and VLANs work atm. Working on fully automated environments that self-build when resources are low.


FYI my teacher has a cool website on poo poo if anyone wants to catch up on some stuff
http://www.vhersey.com/

Dilbert As FUCK fucked around with this message at 03:18 on Apr 29, 2012

CISADMIN PRIVILEGE
Aug 15, 2004


bob arctor posted:

what's the easiest, cheapest way to do backups of VMs under Essentials (4.1)? I have client Backup Exec on the main servers, but ideally I'd also like to back up the whole VMs to a NAS on a daily or weekly basis.

I'm also curious what the overhead is writing from an ESXi host to local VMFS storage. (Current hosts are R610s with whatever the best controller Dell ships, and 10K SAS drives.)

Still wondering what the best sub-$3k NAS to pick up is for a couple of low-utilization VMs.

It's just Essentials. Without a decent SAN, which looks like it would be about $20k minimum, I didn't really see a big advantage in Essentials Plus. I've read various bits and pieces about stuff like ghettoVCB and the like, and I was wondering about the real-world feasibility of these.

We have a couple more "mission critical" systems coming (a CRM and an MS NAV instance) which I haven't decided whether to host on site or not. There's a bunch of politics and bullshit as to where to host them, and while I'd rather not have them on site, it might become the infinitely more sane choice; if it does, I'll definitely need to implement some HA. Right now I can restore any of the servers in half a day, but if the CRM and financial stuff is on site, that might not be acceptable.

Docjowles
Apr 9, 2009

If they are truly "mission critical", push as hard as you can to get the paltry $4k one-time fee to upgrade to Plus. That buys you vMotion, HA, and Data Recovery, among other niceties. You do need shared storage for this, but you don't need a 5-6 figure SAN; an OK NAS like a Thecus 8900 will work for a very small environment, which I assume you have if you are running straight Essentials. That will run you less than $3k fully loaded.

This should not be a hard sell to management. If the server running your "mission critical" CRM tool takes a dump, it will be back online on another member of the VMware HA cluster before Nagios even alerts you that it was down. If some genius runs DROP DATABASE AllOfOurShit; you can use Data Recovery to restore a recent backup in minutes. If a VMware exploit comes out, you can easily patch with no downtime using vMotion. How much is never seeing "Intertrode Hacked, Customer Credit Cards Leaked" in the news worth?

Bitch Stewie
Dec 17, 2011
The words "Mission Critical" and cheap NAS or SAN don't sit too well with me.

What I would look at is a pair of solid boxes with RAID10 DAS and depending on budget either run a VSA on them (HP P4000 or VMware VSA) or use the DAS with Veeam doing replication from one box to the other - you don't get HA but if the box running critical VM's shits itself you just fire up the replica on the second box.

I'd be wary of dropping in a cheap NAS box as IMO you're combining the worst of all worlds in that you'll probably have it hooked up to two cheap switches, and most of the cheap NAS vendors don't do proper on-site support like HP or Dell would.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

bob arctor posted:

what's the easiest, cheapest way to do backups of VMs under Essentials (4.1)? I have client Backup Exec on the main servers, but ideally I'd also like to back up the whole VMs to a NAS on a daily or weekly basis.

I haven't used it, but SRM 5.0 supposedly has a replication piece and is licensed pretty inexpensively for just a few VMs.

Dilbert As FUCK
Sep 8, 2007



If anyone has any questions on these books let me know

Erwin
Feb 17, 2006

Bitch Stewie posted:

The words "Mission Critical" and cheap NAS or SAN don't sit too well with me.

What I would look at is a pair of solid boxes with RAID10 DAS and depending on budget either run a VSA

The words "mission critical" and VSA don't sit too well with me, only because it's 'new' like SRM replication, which doesn't work very well, and it's an "add-on" like VDR, which doesn't work at all.

I mean, it works for a few weeks, then your backups become corrupt and support spends a week looking at it before telling you you just have to delete all the backups and start over, and that "you should use some other software along with VDR because VDR isn't really meant for enterprise production." This was on something small, like 20 VMs at the time, with VDR 2.0. Piece. Of. poo poo. It's basically VMware's proof of concept for their backup APIs. Definitely buy 3rd-party backup software.

Dilbert As FUCK
Sep 8, 2007


Bitch Stewie posted:

The words "Mission Critical" and cheap NAS or SAN don't sit too well with me.

What I would look at is a pair of solid boxes with RAID10 DAS and depending on budget either run a VSA on them (HP P4000 or VMware VSA) or use the DAS with Veeam doing replication from one box to the other - you don't get HA but if the box running critical VM's shits itself you just fire up the replica on the second box.

I'd be wary of dropping in a cheap NAS box as IMO you're combining the worst of all worlds in that you'll probably have it hooked up to two cheap switches, and most of the cheap NAS vendors don't do proper on-site support like HP or Dell would.

You are honestly better off buying Dell EqualLogic/PowerVault servers than you are a VSA, then offloading your VMDKs to CrashPlan or a secondary NAS. With the VSA you can only use 1/4 of the storage on your servers, and the servers have to be the same size.

You are better off spending your money on some of these
http://configure.us.dell.com/dellstore/config.aspx?oc=brctzy2&c=us&l=en&s=bsd&cs=04&model_id=powervault-nx200
or these
http://www.dell.com/us/business/p/powervault-nx300/fs
than you are relying on VSA to cover your rear end.

Dilbert As FUCK fucked around with this message at 05:26 on Apr 30, 2012

CISADMIN PRIVILEGE
Aug 15, 2004


Bitch Stewie posted:

The words "Mission Critical" and cheap NAS or SAN don't sit too well with me.

What I would look at is a pair of solid boxes with RAID10 DAS and depending on budget either run a VSA on them (HP P4000 or VMware VSA) or use the DAS with Veeam doing replication from one box to the other - you don't get HA but if the box running critical VM's shits itself you just fire up the replica on the second box.

I'd be wary of dropping in a cheap NAS box as IMO you're combining the worst of all worlds in that you'll probably have it hooked up to two cheap switches, and most of the cheap NAS vendors don't do proper on-site support like HP or Dell would.

I'm just checking things out right now. I feel I need a new NAS; however, I don't want to do something half-cocked when, if we bring the CRM/financials software onto in-house servers, I'll need to take it a bit more seriously. I'm hoping we can colo with a partner org who has a bit more infrastructure. That said, hardware-wise right now I have a couple of dual-proc Dell R610s with 32 and 48 gigs of RAM; one has 4 10K SAS drives in RAID 10, the other has 6 in RAID 6. I also have a couple of older HP 350s for testing/a physical domain controller in case something in VMware shits the bed badly.

I do have a Synology DS1010+ which I use as an intermediary Backup Exec target, so as to keep the most recent couple of critical backups online if needed. Then I batch-copy the Backup Exec files to offsite storage. I've run test servers off the Synology over iSCSI, and while it'll do the job for something really low-utilization, 35MB/sec doesn't quite cut it.

Internet Explorer
Jun 1, 2005





Take a look at some of the lower end Equallogic stuff. They work great for the types of situations you are talking about.

Bitch Stewie
Dec 17, 2011
My comments were more at the idea of using a Thecus/Synology/Netgear level of NAS for your mission critical VMs.

If they offer 4 hour onsite swap-out where you are then fair enough, I don't think they do though.

I'm not sure I'd consider a $1500 box running Windows Storage Server to be more solid than a pair of boxes running a synchronous SAN/NAS either - actually I am sure, I wouldn't, it's a huge SPOF.

Bob needs to evaluate his entire environment. I've seen so many people rush in and stick a SAN in because hey, it's a SAN, it's redundant everything, right? Then the broom closet goes up in flames and you've lost everything you had, because without dropping a lot of money you only have a single SAN.

That's where the VSA's come in handy for offering high availability and redundancy, assuming you have the physical infrastructure to make use of them.

Really my main point is don't stick your mission critical VM's on a pro-sumer NAS box.

Dilbert As FUCK
Sep 8, 2007


Bitch Stewie posted:

My comments were more at the idea of using a Thecus/Synology/Netgear level of NAS for your mission critical VMs.

If they offer 4 hour onsite swap-out where you are then fair enough, I don't think they do though.

I'm not sure I'd consider a $1500 box running Windows Storage Server to be more solid than a pair of boxes running a synchronous SAN/NAS either - actually I am sure, I wouldn't, it's a huge SPOF.

Bob needs to evaluate his entire environment. I've seen so many people rush in and stick a SAN in because hey, it's a SAN, it's redundant everything, right? Then the broom closet goes up in flames and you've lost everything you had, because without dropping a lot of money you only have a single SAN.

That's where the VSA's come in handy for offering high availability and redundancy, assuming you have the physical infrastructure to make use of them.

Really my main point is don't stick your mission critical VM's on a pro-sumer NAS box.


You can replicate shares and take backups, you know, or just reformat it with CentOS and set up DRBD, or use Dell's management software. Hell, even Openfiler has decent enterprise support; dunno if you would want to run that instead of Windows. You could get almost 4 of those for the price of a VSA license, or 2 NX300s.

The point I am trying to make is that if it came down to VSA replication of DAS stores versus getting some Dell EqualLogics, I would get a Dell EqualLogic in a heartbeat. Dell isn't really that bad; they aren't an EMC or NetApp, but for smaller environments they do a decent job. Then hook up some nice offsite backups with $reputable_company_here.

Dilbert As FUCK fucked around with this message at 14:07 on Apr 30, 2012

some kinda jackal
Feb 25, 2003

 
 
So it looks like someone stole the ESX (?) source code and posted at least one header file on pastebin? Wonder what this will lead to.

Dilbert As FUCK
Sep 8, 2007


Martytoof posted:

So it looks like someone stole the ESX (?) source code and posted at least one header file on pastebin? Wonder what this will lead to.

The leak is from ESX 3 (note: not ESXi). If you are running ESXi 4.1/5 you are probably safe, as long as you keep VUM updating your poo poo.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


The SAN thing is a sticking point for us because we simply don't have the money to drop $40k into a SAN so that the handful of VMs we have that aren't redundant/load balanced have failover.

For example, we have an MSMQ server that's critical for a good majority of our apps, but it's low load and stores virtually no long term data aside from the setup of the queues themselves.

The conversation of "Hey, we need a piece of equipment that costs 3x more than the two virtual hosts combined so that we have an extra bit of redundancy on this one piddly little VM" never goes well. Management just says "we accept the risk because we can't afford otherwise."

There's a huge gap between small-business and enterprise-level hardware that we are currently struggling with. There are a lot of cases where modest specs and lower-end performance would be 100% fine, as long as it was rock-solid stable and had good support. I don't need 12TB of superfast storage with tons of bells and whistles and advanced snapshot functionality to host a 25GB VM that barely hits the IO subsystem. I just need it to not go down unless we are patching THAT machine (in which case downtime could be measured in a handful of minutes rather than hours.)

sanchez
Feb 26, 2003
An EqualLogic PS series can be had for just under $20k with some SAS drives. I think that's good value; local storage of VMs is just a pain in the rear end.

evil_bunnY
Apr 2, 2003

sanchez posted:

An EqualLogic PS series can be had for just under $20k with some SAS drives. I think that's good value; local storage of VMs is just a pain in the rear end.
This. 12 SAS 10K drives isn't going to break the bank. And you can replicate on schedule to another one.

Erwin
Feb 17, 2006

Maybe look into the used market? You probably won't get a warranty or support, but it's better than not having a SAN, and you can get something like an EMC AX4 for under $5k (or, you know, a better brand than EMC). I'd sell you an EMC AX100 for little more than the cost of shipping, but it hasn't been on the HCL since 3.5.

Also, FWIW, I have a Promise VessRAID for our test environment. It was cheap (sub $2k for an empty 16-drive-slot box) and I can buy hard drives from Newegg or Amazon, as long as they're on Promise's compatibility list (the standard WD and Seagate drives are). I wouldn't put it in production, but maybe I would if I had no budget.

some kinda jackal
Feb 25, 2003

 
 

Corvettefisher posted:

The leak is from ESX 3 (note: not ESXi). If you are running ESXi 4.1/5 you are probably safe, as long as you keep VUM updating your poo poo.

Thanks for the clarification. Sounds like it was code from 2004-ish ESX judging by what I read, so the impact should be minimal.

Sounds like this is mostly just going to be a ding in VMware's armour.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
I was under the impression that 10K SAS drives are pointless to buy. They provide 33% less performance, but are not 33% cheaper.

complex
Sep 16, 2003

Martytoof posted:

Thanks for the clarification. Sounds like it was code from 2004-ish ESX judging by what I read, so the impact should be minimal.

Sounds like this is mostly just going to be a ding in VMware's armour.

Group responsible says more code will be released on May 5th.

Dilbert As FUCK
Sep 8, 2007


bull3964 posted:

The SAN thing is a sticking point for us because we simply don't have the money to drop $40k into a SAN so that the handful of VMs we have that aren't redundant/load balanced have failover.


You should really contact a Dell rep about an EqualLogic NAS/SAN setup; you would be surprised what you can get for around $20k. Just because it isn't EMC/NetApp doesn't mean it isn't good. You would be surprised at how well some of the PowerVaults can perform over 6Gb/s SAS; you just have to mesh it right.

http://www.dell.com/us/soho/p/storage-products

There is also Supermicro. I haven't much experience with them, but I haven't heard anything bad about them either.

balakadaka
Jun 30, 2005

robot terrorists WILL kill you

three posted:

I was under the impression that 10K SAS drives are pointless to buy. They provide 33% less performance, but are not 33% cheaper.

I thought the whole point of these was pure MTBF rates; they'd be much higher than any 15K drive or SSD.

Bitch Stewie
Dec 17, 2011
VMware's VSA isn't the only option if you want high availability.

P4000 will do all the things the physical product does.

$8k gets you two nodes in a cluster.

Drop it on some DAS and you can lose a node and the thing will keep ticking.

Split the nodes and stretch the cluster and you can lose a room, maybe even a site.

The VSA's are perfect where you don't need true hardware but do want the feature set, or when you're still concerned that with a single SAN you still have a SPOF in the SAN itself (you're probably more likely to lose it due to human error than to the SAN failing).

Bitch Stewie fucked around with this message at 18:32 on Apr 30, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

three posted:

I was under the impression that 10K SAS drives are pointless to buy. They provide 33% less performance, but are not 33% cheaper.
Both of these postulates depend entirely on whether you're talking about 2.5" or 3.5" drives.
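The "33% less performance" claim can be sanity-checked with a back-of-envelope random-IOPS estimate. The seek times below are assumed typical values, not from any spec sheet, and as noted above they shift with form factor, which is the 2.5"/3.5" point:

```python
# Rough random-IOPS estimate for a single rotational drive:
# one I/O costs roughly (average seek) + (half a revolution).
# Seek times here are assumptions for illustration only.

def est_iops(rpm, avg_seek_ms):
    rot_latency_ms = (60_000 / rpm) / 2   # half a revolution, in ms
    return 1000 / (avg_seek_ms + rot_latency_ms)

iops_10k = est_iops(10_000, 4.5)  # assumed ~4.5 ms average seek
iops_15k = est_iops(15_000, 3.5)  # assumed ~3.5 ms average seek
print(round(iops_10k), round(iops_15k))
print(f"10K is ~{(1 - iops_10k / iops_15k):.0%} slower than 15K")
```

By this rough math a 10K drive is closer to ~27% slower than a 15K drive, not 33%, so whether it's "pointless" really comes down to the price delta and which drives your array accepts.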

bull3964
Nov 18, 2000



Thanks for all the feedback. Honestly, even at $20k it's going to be a hard sell. Basically, it's a hump to get over as far as aligning cost expectations. I've seen this before in places I've worked in the past. It's hard to get into the mindset of buying enterprise-level hardware when you've been getting by with small-business offerings for so long. It's not a linear plot as far as infrastructure costs go.

One day, your price range for new hardware is in the $1k-$5k range. The next day, growth pushes you to the point where hardware is in the $10k-$50k range. Upper management has a hemorrhage because technology costs are supposed to be decreasing rather than increasing, etc.

I'm going to have to just keep plugging away and making the business case for it. I'm still cleaning up the mess that my predecessor left. He was fond of using any old piece of junk machine lying around because "it works" for a specified role, which has left a clutter of 30ish mismatched machines in the 4-6 year old age range. So, it's taking a while for them to come around to the fact that yes, we should spend money on that 5-year-old app to move it to new hardware, rather than re-task some other slightly newer old server to take its place.

bull3964 fucked around with this message at 19:49 on Apr 30, 2012

Vulture Culture
Jul 14, 2003


bull3964 posted:

One day, your price range for new hardware is in the $1k-$5k range. The next day, growth pushes you to the point where hardware is in the $10k-$50k range. Upper management has a hemorrhage because technology costs are supposed to be decreasing rather than increasing, etc.
Man-hours aren't free. Keep a log of how much time you're spending dealing with issues related to your lovely environment, and use it to prepare some materials on the ROI you expect to see with your new purchases. Consider the cost of downtime to the business in this report.
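That report is easy to put numbers behind once you have the time log. A minimal sketch of the math (every figure below is hypothetical; plug in your own log data and outage history):

```python
# Minimal ROI sketch for the "man-hours aren't free" argument.
# All inputs are hypothetical placeholders for your own numbers.

hours_per_month_firefighting = 20     # from your time log
loaded_hourly_rate = 60               # salary + overhead, $/hr
downtime_hours_per_year = 12          # from outage history
downtime_cost_per_hour = 2_000        # lost business, $/hr

annual_cost = (hours_per_month_firefighting * 12 * loaded_hourly_rate
               + downtime_hours_per_year * downtime_cost_per_hour)

hardware_cost = 20_000                # e.g. the ~$20k array quote
payback_years = hardware_cost / annual_cost
print(f"current environment costs ~${annual_cost:,}/yr; "
      f"a ${hardware_cost:,} spend pays back in ~{payback_years:.1f} years")
```

Even with conservative inputs, framing the purchase as a payback period rather than a raw price tag tends to land better with management.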

Bitch Stewie
Dec 17, 2011
@bull3964 - A question for you - why do you need a SAN?

Let's say you just run that VM on DAS and use Veeam to replicate the VM every hour.

If your live server dies, could you just fire up the VM on the replica server?

It's cheap, you're back to one hour in time.

Even if you go and spunk $40k on a single SAN tomorrow you still need to cater for how you're going to back up and recover your VMs if the SAN fucks up.

I guess I see too many people equate high availability to owning a single mystical expensive SAN :)

bull3964
Nov 18, 2000



Bitch Stewie posted:

@bull3964 - A question for you - why do you need a SAN?

Let's say you just run that VM on DAS and use Veeam to replicate the VM every hour.

If your live server dies, could you just fire up the VM on the replica server?

It's cheap, you're back to one hour in time.

Even if you go and spunk $40k on a single SAN tomorrow you still need to cater for how you're going to back up and recover your VMs if the SAN fucks up.

I guess I see too many people equate high availability to owning a single mystical expensive SAN :)

Oh, that's something I'm exploring as well (really, something we're going to have in place regardless.)

The immediate concern is maintenance. We're a Hyper-V shop currently, which means no bare-metal hypervisor. Trying to keep the underlying Windows OS patched is a real PITA when it's like pulling teeth to get a downtime window. The vast majority of the machines on these hosts have a double on another Hyper-V host and are behind a load balancer, so taking them down isn't an issue. It's these single-point-of-failure VMs that are the blocker. Something like Veeam is great for recovery of a VM if the host were to poo poo the bed, but it's not too ideal with respect to using it as a mechanism to move VMs between machines frequently. Oh, you can, but it's not really the right tool for the job.

Right now I'm pushing for Veeam (or some equivalent) for snapshot/replication, in addition to two Dell R720xds with a VMware Essentials Plus kit, so we can transfer some of these single-point-of-failure VMs to a system where the underlying OS doesn't need to be patched every few weeks requiring a server reboot. That, combined with doing P2V on a lot of the misc collection of machines out in our production environment, should clean things up quite a bit and simplify our infrastructure.

After all that is done I'm going to approach the SAN angle again so we can utilize things like vMotion and High Availability for certain machines that require it.

evil_bunnY
Apr 2, 2003

three posted:

I was under the impression that 10K SAS drives are pointless to buy. They provide 33% less performance, but are not 33% cheaper.
They're only worth buying if your SAN's 7.2K drives are SATA, IMHO.

Misogynist posted:

Both of these postulates depend entirely on whether you're talking about 2.5" or 3.5" drives.
And this, and which ones your SAN can take.

CrazyLittle posted:

What's a good setup for 30-40 VMs? (simple webservers, etc)
Metrics please.

evil_bunnY fucked around with this message at 20:36 on Apr 30, 2012

CrazyLittle
Sep 11, 2001





Clapping Larry
What's a good setup for 30-40 VMs? (simple webservers, etc)

Moey
Oct 22, 2010

I LIKE TO MOVE IT

CrazyLittle posted:

What's a good setup for 30-40 VMs? (simple webservers, etc)

Care to be a little less vague?

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

CrazyLittle posted:

What's a good setup for 30-40 VMs? (simple webservers, etc)
"etc" jacks the price up to about 75-100k. Without "etc", probably about 100$

Dilbert As FUCK
Sep 8, 2007


CrazyLittle posted:

What's a good setup for 30-40 VMs? (simple webservers, etc)

What's the OS they are running, what web servers, how many SQL DBs, what is the uptime needed, how much space do they need? What is your ballpark budget?


CrazyLittle
Sep 11, 2001






Moey posted:

Care to be a little less vague?

LmaoTheKid posted:

"etc" jacks the price up to about 75-100k. Without "etc", probably about 100$

Point taken.


I currently have about 21 VMs spread over 3 hosts managed with VMware Infrastructure 3.5, and overall it's dogshit slow. Some of the users are complaining about performance on the hosts where there's CPU contention.

There's one ESXi machine that's running a webserver with enough disk access that it consumes all the activity on that host, so it's no better than hard iron beyond the ability to move the VM to another host without having to rebuild the entire OS.

The other 20 VMs are spread over two hosts: dual quad core E5420 (2.5ghz) with 16gb RAM each, both connected to a 15-spindle DAS shelf. Here's the VM layout:
  • 11 linux webserver (vCenter included)
  • 1 MSSQL DB (low impact)
  • 5 Windows RDP
  • 3 Windows web servers


I need to at least double that capacity, preferably while maintaining a similar space/power use. In retrospect the DAS shelf was a poor choice because it limited the number of host connections with redundancy, and I should have pushed for the "essentials plus" package equivalent of the time to get HA and vMotion.
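A rough capacity check for "double that capacity", treating RAM as the binding resource. The average per-VM footprint is derived from the host specs above, so it's a crude assumption, and real sizing should use measured active memory and CPU metrics (as others here keep asking for):

```python
# Rough RAM-based sizing for doubling ~20 VMs on two 16 GB hosts.
# Average per-VM footprint is inferred, not measured; adjust with
# real numbers from your environment.

current_vms = 20
host_ram_gb = 16
hosts = 2
avg_vm_ram_gb = (host_ram_gb * hosts) / current_vms   # ~1.6 GB/VM today

target_vms = 40
overcommit = 1.0          # assume no memory overcommit for the estimate
needed_ram_gb = target_vms * avg_vm_ram_gb / overcommit
print(f"~{avg_vm_ram_gb:.1f} GB/VM today; "
      f"{target_vms} VMs need ~{needed_ram_gb:.0f} GB of host RAM total")
```

If HA is in the picture, add at least one host's worth of headroom on top of that total (N+1), since the estimate assumes every host is available.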

CrazyLittle fucked around with this message at 20:53 on Apr 30, 2012
