optikalus
Apr 17, 2008

xarph posted:

How many spindles do you think would be necessary for about 50-60 lightly used VMs? Would an enclosure with 12 in raid6+hot spare cut it? I'm going to try to set it up as a science experiment anyway, but knowing whether I should be disappointed in advance would be helpful.

Per-drive IOPS is determined by the drive's characteristics (RPM, average seek time), not by how many spindles are in the array; the array's total is just the sum of its spindles. You could get the same aggregate IOPS with half the drives if you use 15k RPM SAS vs 7200 RPM SATA.

Generally, 7200 RPM SATA drives give ~80 IOPS; ~120 IOPS for 10K, and ~180 for 15K drives. Obviously different drives and busses will perform slightly differently, but those generalizations have been pretty consistent with my own tests on a bunch of different drives over the years.

You need to find out how many IOPS your VMs will require, then plan for that with the disk appropriately.
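As a rough illustration of that planning step, here is a back-of-envelope sketch. The per-drive IOPS figures echo the rules of thumb above; the RAID write penalties, the 70/30 read/write split, and the per-VM IOPS in the example are assumptions for illustration only:

```python
# Rough spindle-count estimate from the per-drive IOPS rules of thumb above.
# RAID write penalties and the 70/30 read/write split are assumptions.
import math

PER_DRIVE_IOPS = {"7200_sata": 80, "10k": 120, "15k": 180}
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def spindles_needed(frontend_iops, drive="10k", raid="raid6", read_pct=0.7):
    """Data spindles needed to service a given front-end IOPS demand."""
    write_pct = 1.0 - read_pct
    # Back-end IOPS = reads + writes amplified by the RAID write penalty.
    backend_iops = frontend_iops * (read_pct + write_pct * RAID_WRITE_PENALTY[raid])
    return math.ceil(backend_iops / PER_DRIVE_IOPS[drive])

# Example: 55 lightly used VMs at ~20 IOPS each, RAID 6.
demand = 55 * 20
print(spindles_needed(demand, drive="7200_sata"))  # 35 SATA spindles
print(spindles_needed(demand, drive="15k"))        # 16 15K spindles
```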


tronester
Aug 12, 2004
People hear what they want to hear.

Misogynist posted:

Openfiler or FreeNAS isn't nearly as much of an issue as the fact that cramming a handful of SATA disks into some enclosure is not going to give you nearly enough IOPS to support even a lightly used vSphere environment. The actual distributions themselves aren't awful, but trying to set up OpenFiler replication if you're not already a Linux wizard is an exercise in futility.

Also keep in mind that the iSCSI targets used by OpenFiler, FreeNAS and basically any NAS distribution other than Nexenta will not be able to support shared disk clustering of Windows servers using MSCS. This may not be an issue for your environment, though.

Well luckily the HP proliants have SAS controllers, and will be equipped with 6 2.5" 300GB 10k rpm SAS drives. I honestly believe that they would have enough IOPS for their relatively light workload.

They do not use shared disk clustering.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

tronester posted:

Well luckily the HP proliants have SAS controllers, and will be equipped with 6 2.5" 300GB 10k rpm SAS drives. I honestly believe that they would have enough IOPS for their relatively light workload.

They do not use shared disk clustering.

Why does anyone ever purchase 10K RPM drives? 2/3 of the performance for 9/10ths of the price.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

xarph posted:

How many spindles do you think would be necessary for about 50-60 lightly used VMs? Would an enclosure with 12 in raid6+hot spare cut it? I'm going to try to set it up as a science experiment anyway, but knowing whether I should be disappointed in advance would be helpful.
use openindiana with an SSD ZIL, 16GB of RAM, and a second SSD for additional cache. if you have a single dl380g6 lying around you can do this with 6 disks in raidz2 (raid6) plus the two SSDs for under $5k and get phenomenal performance. Additionally, you can make use of copy-on-write snapshots and snapshot-based replication to another box running openindiana (or freebsd).
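For anyone who wants to see what that layout looks like in practice, here is a minimal sketch of the commands involved, wrapped in Python for readability. The pool and dataset names, device names (c0t0d0 etc.) and "backuphost" are placeholders, not anything from the setup described above:

```python
# Sketch of the layout described above: 6-disk raidz2 pool plus an SSD ZIL and an
# SSD L2ARC, then a snapshot shipped to another box. Device names, pool/dataset
# names, and "backuphost" are all placeholders.
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

disks = " ".join(f"c0t{i}d0" for i in range(6))
run(f"zpool create tank raidz2 {disks} log c1t0d0 cache c1t1d0")  # ZIL + L2ARC SSDs
run("zfs create tank/vmstore")

# Copy-on-write snapshot and send/receive replication to a second box:
run("zfs snapshot tank/vmstore@nightly")
run("zfs send tank/vmstore@nightly | ssh backuphost zfs recv -F tank/vmstore")
```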

tronester
Aug 12, 2004
People hear what they want to hear.

three posted:

Why does anyone ever purchase 10K RPM drives? 2/3 of the performance for 9/10ths of the price.

Versus a 15K RPM drive? They don't offer any 2.5" drives at that spindle speed with at least 300GB capacity.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

three posted:

Why does anyone ever purchase 10K RPM drives? 2/3 of the performance for 9/10ths of the price.
You probably missed the part about them being 2.5". When you keep that criterion in mind, 15K drives are not 11% more expensive.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Internet Explorer posted:

So I've been playing around with our new VNX 5300 and it's replication partner, and let me just say man this has been quite an experience. Coming from an Equallogic box, and not having a storage admin, it is not nearly as kiddy-proof. I'd be more excited about it if I did not have 5 million other things going on (Setup our SAN ASAP! Fix this person's printer ASAP!)

I was running the VNX Initialization Assistant and it got to the point where it set an IP address and hostname and then bombed out. Apparently there is no way to reset the Control Station (or reinstall the OS on it) so that you can run the VIA again without them sending out a tech; it all has to be done manually.

The support has been hit or miss. Have had a hard time getting a hold of anyone in their support chat who knows anything, and they supposedly "dispatched a tech" over 48 hours ago, but we were able to get it working and are now updating the firmware.

It definitely seems like these SANs were built from the bottom up, with dozens of little tools and separate interfaces, which is very different from the Equallogic side of things. I am very excited to put it through its paces and then put it in production, though.

Hey, did you get it up and running yet?

Internet Explorer
Jun 1, 2005





Vanilla posted:

Hey, did you get it up and running yet?
Yes, I did, thank you. Although it's not in production yet. Appreciate all the help. Going to try to work on it some more today.

We purchased the Data Mover to do NFS and CIFS. It has the two blades with 4 ethernet ports each. I want to put the CIFS on one subnet and the NFS on another. One thing I noticed is that the blades are not active/active, unless I am missing something. One has to be dedicated as a standby, otherwise if one goes down everything comes down with it.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Internet Explorer posted:

Yes, I did, thank you. Although it's not in production yet. Appreciate all the help. Going to try to work on it some more today.

We purchased the Data Mover to do NFS and CIFS. It has the two blades with 4 ethernet ports each. I want to put the CIFS on one subnet and the NFS on another. One thing I noticed is that the blades are not active/active, unless I am missing something. One has to be dedicated as a standby, otherwise if one goes down everything comes down with it.

Yes, the blades are active/passive, but you can have up to eight of them depending on the model of array. One is waiting to take over in the event of a failure, which just means you're not operating on half steam - you have a full blade taking over from the failed one.

You'll find one will easily handle the workload; I think each one is rated to host up to 256TB of file data.

Wompa164
Jul 19, 2001

Don't write ghouls.
I'm not sure if this is the appropriate place to post this, but here goes.

I've got about 6TB of personal data that I would like to back up to tape, on either LTO4 or LTO5. I don't own a capable tape drive but through my office I have access to a capable controller card and a copy of Kroll OnTrack.

Does anyone have suggestions for possibly renting an LTO4 or 5 drive? I'd only need it to create a backup set of my data so purchasing it doesn't make much sense to me.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Wompa164 posted:

I'm not sure if this is the appropriate place to post this, but here goes.

I've got about 6TB of personal data that I would like to back up to tape, on either LTO4 or LTO5. I don't own a capable tape drive but through my office I have access to a capable controller card and a copy of Kroll OnTrack.

Does anyone have suggestions for possibly renting an LTO4 or 5 drive? I'd only need it to create a backup set of my data so purchasing it doesn't make much sense to me.
This seems kind of silly, given that you have no way of periodically testing your backups to see if they even still work.

Erwin
Feb 17, 2006

Wompa164 posted:

I'm not sure if this is the appropriate place to post this, but here goes.

I've got about 6TB of personal data that I would like to back up to tape, on either LTO4 or LTO5. I don't own a capable tape drive but through my office I have access to a capable controller card and a copy of Kroll OnTrack.

Does anyone have suggestions for possibly renting an LTO4 or 5 drive? I'd only need it to create a backup set of my data so purchasing it doesn't make much sense to me.

Yeah, this is a little weird. But, probably the easiest thing to do would be to buy one on eBay, then when you're done with it, sell it on eBay. It's not like it'll depreciate that fast, so if you're patient, you can probably turn a small profit.

Or just buy 12TB of hard drives or something.

Wompa164
Jul 19, 2001

Don't write ghouls.

Misogynist posted:

This seems kind of silly, given that you have no way of periodically testing your backups to see if they even still work.

I have the data backed up currently, chopped up across 5 individual 1TB disks, which is fine. However, I'd rather back up the entire volume contiguously across tapes, which are much cheaper than keeping drives anyway. It is weird, you're right, but I feel weirder keeping 1TB disks sitting in my closet.

sanchez
Feb 26, 2003
LTO tapes are not cheap, and remember your filez will probably not compress very well, which means you'll fit 800GB on a $30 LTO4 tape. Compare that to a pre-madness $80 2TB hard drive.

Wompa164
Jul 19, 2001

Don't write ghouls.

sanchez posted:

LTO tapes are not cheap, and remember your filez will probably not compress very well, which means you'll fit 800GB on a $30 LTO4 tape. Compare that to a pre-madness $80 2TB hard drive.

I hear you. I'm just a home user with an enterprise soul.

hackedaccount
Sep 28, 2009
Would using some type of logical volume manager to concatenate the disks work?

Wompa164
Jul 19, 2001

Don't write ghouls.
Yeah, that's a great suggestion as well. Do you have a particular solution in mind?

Serfer
Mar 10, 2003

The piss tape is real



Things that are pissing me off: EMC.

I don't know what happened in the last six months, but their support has been absolute poo poo recently. Never mind that the NX4s that we have keep throwing up errors for a successful sector reconstruction, but when they throw out an unrecoverable sector error, EMC requests 8 or so sets of SPCollects (log dumps, drive status, etc), accuses me of creating several of the collects before the error happened (why the hell would I do that?) and then determines that because none of the drives had faulted, everything was fine.

After about 18 or 20 of those, they finally get out to replace the drive, and replace it with a bad drive. Try again next day, finally get a working drive, yay. Flash forward a week, and something tries to read off the unreconstructable errors, causing the controller to crash, so the other controller takes over, tries to read the same thing, and crashes. Why didn't they mark these sectors as a part of the replacement? Good question. It caused nearly 18 hours of downtime for one of our offices, and two weeks later, I still can't get them to give me an explanation of why they didn't do anything.

Add that to the fact that they're continuing their "three years and your platform is dead, time to buy again brand new!" system, and the Compellent pitch we just got this week is looking mighty good. Their hardware lasts through revisions, they keep them upgraded, adding new technologies just means adding a card, mixed drive types, tiering, virtualized RAID, they seem to be hitting every sore spot that we're having with EMC. It almost seems too good to be true. Their price is more than we would like, but I'm not dumb, I know we would eventually spend as much on EMC, and then spend more three years later to replace it all.

Does anyone have any experiences that would break the magic spell Compellent has put on me?

hackedaccount
Sep 28, 2009

Wompa164 posted:

Yeah, that's a great suggestion as well. Do you have a particular solution in mind?

If you're using Windows, it seems you want to convert the disks to dynamic disks and create what they call a spanned volume. To be honest I assume it works and it's easy etc etc but I've never used it and I'm sure the guys over in the Windows thread could help you with any problems.

If you're using Linux you can use the built-in Logical Volume Manager (aka LVM aka LVM2) to concatenate them, and I'm sure the guys in the Linux thread wouldn't mind helping a bit.


Two potential gotchas:

1) If one of the drives fails, the data is likely lost, but you can replace the drive, recreate the logical volume (wiping all other drives in the process) and create a fresh backup. May or may not be a problem for you.

2) You mentioned storing drives in the closet. Some OSes are sensitive to device names and will crap out when they change. For example, say the first time you connect hard drive #1 to USB port #1, then disconnect it and put it in the closet. The second time, you connect hard drive #1 to USB port #2 and suddenly Windows thinks the logical volume is broken. I can tell you how to get around this problem in Linux but I have no idea how it works (or if it's even a problem) in Windows.
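For the Linux/LVM option mentioned above, the concatenation itself is only a handful of commands. A minimal sketch follows; the /dev/sdX paths, the volume group/LV names and the mount point are placeholders, and these commands wipe whatever is on the drives they touch:

```python
# Sketch: concatenate several drives into one linear LVM volume on Linux.
# The /dev/sdX paths, VG/LV names and mount point are placeholders; pvcreate and
# mkfs will destroy whatever is currently on those drives.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

drives = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]

for d in drives:
    run(["pvcreate", d])                     # mark each drive as an LVM physical volume
run(["vgcreate", "backupvg", *drives])       # pool them into one volume group
run(["lvcreate", "-l", "100%FREE", "-n", "backuplv", "backupvg"])  # one big linear LV
run(["mkfs.ext4", "/dev/backupvg/backuplv"]) # filesystem on top
run(["mount", "/dev/backupvg/backuplv", "/mnt/backup"])  # assumes /mnt/backup exists
```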

markus876
Aug 19, 2002

I am a comedy trap.

hackedaccount posted:

2) You mentioned storing drives in the closet. Some OSes are sensitive to device names and will crap out when they change. For example, say the first time you connect hard drive #1 to USB port #1, then disconnect it and put it in the closet. The second time, you connect hard drive #1 to USB port #2 and suddenly Windows thinks the logical volume is broken. I can tell you how to get around this problem in Linux but I have no idea how it works (or if it's even a problem) in Windows.

If you set up the drives as what Windows calls dynamic disks (or you can "upgrade" them into this state later without data loss), I'm pretty sure you won't have this type of problem. I think it works similarly to how it does in Linux, where there is a UUID or similar written to the beginning of the drive and the OS uses the UUID to identify the drive rather than the port it is plugged into.
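A quick way to see that UUID behaviour on the Linux side: the kernel keeps stable /dev/disk/by-uuid symlinks no matter which port a drive lands on. A small sketch (Linux only):

```python
# List filesystem UUIDs and the device node each one currently resolves to.
# The sdX names on the right can change between plug-ins; the UUIDs don't.
import os

by_uuid = "/dev/disk/by-uuid"
for uuid in sorted(os.listdir(by_uuid)):
    device = os.path.realpath(os.path.join(by_uuid, uuid))
    print(f"{uuid} -> {device}")

# Mounting by UUID in /etc/fstab sidesteps the renaming problem entirely, e.g.:
#   UUID=<your-uuid-here>  /mnt/backup  ext4  defaults,nofail  0  2
```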

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Serfer posted:

Things that are pissing me off: EMC.

I don't know what happened in the last six months, but their support has been absolute poo poo recently. Never mind that the NX4s that we have keep throwing up errors for a successful sector reconstruction, but when they throw out an unrecoverable sector error, EMC requests 8 or so sets of SPCollects (log dumps, drive status, etc), accuses me of creating several of the collects before the error happened (why the hell would I do that?) and then determines that because none of the drives had faulted, everything was fine.

After about 18 or 20 of those, they finally get out to replace the drive, and replace it with a bad drive. Try again next day, finally get a working drive, yay. Flash forward a week, and something tries to read off the unreconstructable errors, causing the controller to crash, so the other controller takes over, tries to read the same thing, and crashes. Why didn't they mark these sectors as a part of the replacement? Good question. It caused nearly 18 hours of downtime for one of our offices, and two weeks later, I still can't get them to give me an explanation of why they didn't do anything.

Add that to the fact that they're continuing their "three years and your platform is dead, time to buy again brand new!" system, and the Compellent pitch we just got this week is looking mighty good. Their hardware lasts through revisions, they keep them upgraded, adding new technologies just means adding a card, mixed drive types, tiering, virtualized RAID, they seem to be hitting every sore spot that we're having with EMC. It almost seems too good to be true. Their price is more than we would like, but I'm not dumb, I know we would eventually spend as much on EMC, and then spend more three years later to replace it all.

Does anyone have any experiences that would break the magic spell Compellent has put on me?

So speaking quite honestly an NX4 box is basically as low as it gets and is very old. Support is likely the same.

VNX support is much better, VMAX / Symmetrix support is a country mile beyond both of those. The VNX is a world away from the NX4 - the NX4 is from what, 2006?

Compellent is more in the VNX range, so you're talking a different EMC ball game. Dell are doing great with Compellent, but there is one teensy weensy problem:

Compellent is still a 32-bit OS, which means the maximum cache is 4GB. This is a 'welcome to 2003' roadblock for Compellent and likely offers no more cache than you use today. That's the kind of cookie that isn't going to get solved without a painful software upgrade and the usual 'buy a ton of memory' offering (assuming you can upgrade a 32-bit array to a 64-bit array at all - likely you are buying the last lemons off the truck).

Other arrays, such as the VNX and Netapp arrays can offer you far more cache on the controller and also through the use of SSD drives or PAM cards. These make a world of difference.

evil_bunnY
Apr 2, 2003

Serfer posted:

Things that are pissing me off: EMC.

Does anyone have any experiences that would break the magic spell Compellent has put on me?
I've had some ex-colleagues install quite a few of those, and they've been pretty happy AFAIK.

Vanilla posted:

VNX support is much better, VMAX / Symmetrix support is a country mile beyond both of those. The VNX is a world away from the NX4 - the NX4 is from what, 2006?
As long as you're paying support that is no loving excuse though. poo poo like refusing to replace a drive then putting a bad one in (instead of just being there 9AM NBD with a tested unit) is just unacceptable.

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

Serfer posted:

Does anyone have any experiences that would break the magic spell Compellent has put on me?
In my experience, their support on Solaris is lovely. The last time I called Copilot about a Solaris 10 iSCSI problem, they told me to call "Solaris" for help. Also, when we bought our first Compellent, the on-site engineer(s) couldn't figure out how to get iSCSI working on SUSE and resorted to Googling for the solution. I ended up figuring it out on my own. Based on my anecdotal evidence, it seems like this product works best for Windows shops.

I should also mention that we had one of their lovely Supermicro controllers die in our London office and it took them 8 days to get a replacement controller in the office. This was with 24x7 priority onsite support. That being said, I don't think the product is that bad, but it's probably not very well suited for Unix/Linux shops. We just had a bunch of unfortunate problems with it so now it is called the "Crapellent" around the office.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Bluecobra posted:

In my experience, their support on Solaris is lovely. The last time I called Copilot about a Solaris 10 iSCSI problem, they told me to call "Solaris" for help. Also, when we bought our first Compellent, the on-site engineer(s) couldn't figure out how to get iSCSI working on SUSE and resorted to Googling for the solution. I ended up figuring it out on my own. Based on my anecdotal evidence, it seems like this product works best for Windows shops.

I should also mention that we had one of their lovely Supermicro controllers die in our London office and it took them 8 days to get a replacement controller in the office. This was with 24x7 priority onsite support. That being said, I don't think the product is that bad, but it's probably not very well suited for Unix/Linux shops. We just had a bunch of unfortunate problems with it so now it is called the "Crapellent" around the office.

From my experience thus far, their support is terrible. Not to mention their install engineers seem poorly trained on Compellent now that they're all Dell engineers. The SAN is okay, but I'd still rather buy EqualLogic.

Serfer
Mar 10, 2003

The piss tape is real



Vanilla posted:

So speaking quite honestly an NX4 box is basically as low as it gets and is very old. Support is likely the same.

VNX support is much better, VMAX / Symmetrix support is a country mile beyond both of those. The VNX is a world away from the NX4 - the NX4 is from what, 2006?
The NX4 is actually from 2008, we purchased it in 2009. We're having similar support issues on our CX4 as well (which is the same level as the VNX). The real issue isn't that the units are old, the issue is that their support is just terrible. It wouldn't have made a difference if it was a VNX or a VNXe; not replacing a drive the first time it throws an unrecoverable error is just unforgivable. Causing nearly 18 hours of downtime is just the icing on the cake. Add in the other issue we've now run into three times with EMC, the replace-the-entire-system-every-three-years problem, and it's just not worth the money to stay with them.

Also, Compellent can do SSD and tiering, I'm not sure where you saw they couldn't.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Serfer posted:

The NX4 is actually from 2008, we purchased it in 2009. We're having similar support issues on our CX4 as well (which is the same level as the VNX). The real issue isn't that the units are old, the issue is that their support is just terrible. It wouldn't have made a difference if it was a VNX or a VNXe; not replacing a drive the first time it throws an unrecoverable error is just unforgivable. Causing nearly 18 hours of downtime is just the icing on the cake. Add in the other issue we've now run into three times with EMC, the replace-the-entire-system-every-three-years problem, and it's just not worth the money to stay with them.

Also, Compellent can do SSD and tiering, I'm not sure where you saw they couldn't.

The support you've experienced is truly awful and I'd make sure EMC knew it. Let the rep know, and also note it on any CSAT reports that come through.

Compellent can do SSD drives but the system cache is still stuck on 4GB. I think a medium size array is at about 24GB these days. They can't use the SSD drives as an extension of cache, only as a storage tier.

Not sure if it has improved, but last time I was speaking to an admin of Compellent he was bitching that it takes about 5 days to promote data to a higher tier and over two days to demote data to a lower tier.

Serfer
Mar 10, 2003

The piss tape is real



Vanilla posted:

The support you've experienced is truly awful and I'd make sure EMC knew it. Let the rep know, and also note it on any CSAT reports that come through.

Compellent can do SSD drives but the system cache is still stuck on 4GB. I think a medium size array is at about 24GB these days. They can't use the SSD drives as an extension of cache, only as a storage tier.

Not sure if it has improved, but last time I was speaking to an admin of Compellent he was bitching that it takes about 5 days to promote data to a higher tier and over two days to demote data to a lower tier.
Hmm, yeah, but using it as tier 0 storage seems like it would be rather effective, even if it can't be used as cache.

Also, just got an alert from the NX4 I was complaining about, MORE UNCORRECTABLE ERRORS. Fuuuuuck.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Serfer posted:

Hmm, yeah, but using it as tier 0 storage seems like it would be rather effective, even if it can't be used as cache.

Also, just got an alert from the NX4 I was complaining about, MORE UNCORRECTABLE ERRORS. Fuuuuuck.

Using SSD drives in any array is definitely going to be effective, but it's not even in the same ballpark as having a larger cache. I'm a massive fan of SSD drives as cache (EMC FAST Cache) or even PAM cards (NetApp). The more cache the better when it comes to arrays.

With the VNX I can have up to 2TB of Cache by using the SSD drives. I would much rather have a load of SSD drives dedicated to array cache than a load of SSD drives being used as data. If I had a choice it would be SSD for cache every time (or both if I can get away with it). I put SSD drives as cache into every VNX that comes under my nose, with three 200GB SSD drives I can bump the cache to over 200GB which turns even the smallest array into a different beast.

Using SSD drives with automated tiering means that the data that requires the performance over time gets to go on the SSD drives. The problem is it doesn't help me react to bursts, batch processes, boot storms, etc. EMC arrays move the data around once a day in 1GB chunks, the Compellent array in 4-12 days (?!) and other arrays (typically in the enterprise space) can move it around in seconds or minutes. So in the mid-range space you're still relying on that 4GB-36GB of cache to do a lot of the hard work and it's just not enough, given people are hanging tens or even hundreds of TBs off the controllers.

SSD drives as cache bumps my cache from, say, 24GB to 800GB. This is a massive, massive jump and because it's a cache it will benefit all of the applications on the array. It acts as a huge read and write buffer and keeps 800GB of the most commonly accessed data at hand. This means my drives are a lot less utilised and often you can opt for the 10k rather than the 15k drives because of this.

On the support side, flag this to your account manager; let them know you've had a data-down event and the EMC kit is not pleasing you.
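To put rough numbers on the "more cache is better" argument, here is the kind of back-of-envelope math involved. The spindle count, per-spindle IOPS, cache throughput, and hit rates below are illustrative assumptions, not figures for any particular array:

```python
# Back-of-envelope: effective front-end IOPS as a function of cache hit rate.
# Spindle count, per-spindle IOPS and cache throughput are illustrative only.
spindles = 24
iops_per_spindle = 140        # roughly 10K SAS territory
cache_iops = 50_000           # what the SSD/cache layer can absorb

def effective_iops(hit_rate):
    disk_capable = spindles * iops_per_spindle
    # Cache misses land on the spindles; hits are absorbed by the cache layer.
    return min(cache_iops, disk_capable / (1.0 - hit_rate))

for hr in (0.0, 0.5, 0.9, 0.95):
    print(f"hit rate {hr:.0%}: ~{effective_iops(hr):,.0f} IOPS")
# 0% -> ~3,360   50% -> ~6,720   90% -> ~33,600   95% -> ~50,000 (cache-bound)
```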

KS
Jun 10, 2003
Outrageous Lumpwad
If you're using Compellent's default recommended storage profile in a system with SSD as tier 1, it's going to act as a write cache. All writes go to the top tier of storage in RAID 0+1. They get progressed down to lower tiers and rewritten as RAID-5 in the background. It does not, however, act like a read cache, except in the sense that recently written data will be on SSD...

Having experience now working with EMC VNXe, Hitachi AMS, HP EVA, and Compellent in large ESX environments, I don't think you can go wrong with the Compellent. I now have three of them. Replay Manager 6 killed my last few complaints about the system. I am excited to see where they go with dedupe and the 64-bit series 50 controllers, but even today it is fast and reliable.

I like to say that Compellent is 3PAR for people who care about costs. It's not quite there, but it's close.

KS fucked around with this message at 01:27 on Nov 13, 2011

Vanilla
Feb 24, 2002

Hay guys what's going on in th

KS posted:


I like to say that Compellent is 3PAR for people who care about costs. It's not quite there, but it's close.

My experience with 3PAR is that it's good kit, but I hear what you say about cost. Those things are priced at the very top of mid-range - almost into Enterprise territory with the VMAX/XP/USPV.

Mierdaan
Sep 14, 2004

Pillbug

Vanilla posted:

Not sure if it has improved, but last time I was speaking to an admin of Compellent he was bitching that it takes about 5 days to promote data to a higher tier and over two days to demote data to a lower tier.

Yeah I just sat in a room with Compellent people for 4 hours the other day and they said you can schedule moves like this overnight if you don't feel like waiting for Data Progression to work its magic.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Vanilla posted:

The more cache the better when it comes to arrays.

Seconding this. Caching technology is robust and universally used, as every single storage vendor realizes how powerful and important caching is to delivering storage. If you're spending the $$$ on SSDs, you should put them where they'll see the most benefit, which is the cache.

Mausi
Apr 11, 2006

I've got EMC coming in on Friday to convince me to use their SANs for a new environment, I've also got Dell and NetApp lined up.

We're doing a split environment, Virtual Desktop and Server both on vSphere 5, but Desktop is going to be XenDesktop (because management say so) and Server is pretty boring as far as storage requirements go, beyond part of it being under vCloud control for Development. There'll be about 4000 concurrent desktop users (Win7, probably 30GB disks before dedup and thin provisioning carve them down) and about 250TB of Production server data, of which only about 25% is busy (estimated from logs).
Server will be async replicated to a DR site for VMware SRM for about 50% of the capacity, the rest will be handled by backup and datadomain replication (probably, early days in design land as yet).
I don't much care about copper or fibre, it's a new DC so I can cable and switch it up how I want.

I may have to operate server/desktop from two separate SANs due to logistics, but that's not certain yet. I want to get the VDI masters on SSD either via caching or Tier 0 LUNs (would prefer automated management) to cut back on the number of spindles needed to handle a 3-hour login window (pan-EU datacentre); also, XenDesktop tends to REALLY like NFS for MCS.
I would prefer if the server disks were self tiering as well, but otherwise not particularly fussy as long as I can tie into SRM/Commvault and do thin provisioning at a LUN level.

So questions I guess:
Is NetApp still king of NFS if it turns out that they won't use thin VMFS for XenDesktop? Who else competes now?
Where in the range am I looking here? Seems small enterprise to me, but I'm a little out of touch on storage tech lately. I don't have a budget yet, what I'm seeking is a recommendation of arrays which should support these requirements so I can bully the right vendors for pricing like dogs in a pit.
Is it possible to use a single SAN to operate the split environment intelligently?

If someone can tell me something like "X array will do it all for you but it's expensive, probably try Y or Z or a combination of Desktop on A and Server on B" then if you're ever in London I'll take you to Hawkesmoor for a steak.

If any of this thinking is out of date or stupid please hit me for it, I'm just a VMware nerd with enough knowledge to be dangerous at the moment.
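For the 3-hour login window specifically, a rough estimate of the storage demand looks something like the sketch below; the per-desktop IOPS figures and login duration are assumptions for illustration, not measurements from this environment:

```python
# Rough login-storm estimate for 4000 desktops over a 3-hour window.
# The per-desktop IOPS figures and login duration are illustrative assumptions.
users = 4000
window_hours = 3
login_duration_s = 120        # assume each login hits storage hard for ~2 minutes
login_iops = 50               # per desktop while logging in, mostly reads
steady_iops = 8               # per desktop once logged in

logins_per_second = users / (window_hours * 3600)
concurrent_logins = logins_per_second * login_duration_s
peak_iops = concurrent_logins * login_iops + users * steady_iops

print(f"~{concurrent_logins:.0f} concurrent logins, ~{peak_iops:,.0f} IOPS at peak")
# ~44 concurrent logins, ~34,222 IOPS at peak -- and mostly reads against the
# master images, which is exactly what an SSD read cache soaks up.
```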

Internet Explorer
Jun 1, 2005





I'm not experienced enough with SANs to give you advice on that, as I am kind of in your shoes, but your comment about the Win7 disks got me thinking. Have you looked at using Citrix Provisioning Services for your vDisks? It is not like VMware View (or MCS) where you need to let your SAN do all the read/writes for your disks. Your Provisioning Servers can cache the image in RAM and serve it up. More here - http://virtualfeller.com/2011/02/15/provisioning-services-or-machine-creation-services%E2%80%A6-big-picture-matters/ and http://virtualfeller.com/2011/03/02/pvs-or-mcs-%E2%80%93-we-talking-about-iops-again/

madsushi
Apr 19, 2009

Baller.
#essereFerrari
Some NetApp info:

Anything in the FAS2xxx series will be too small; they cap around 150TB. You'll want to look at the 3xxx series (you're probably too small for a 6xxx), and you NEED FlashCache for this type of VDI deployment (make sure they include it). My last XD-on-NetApp deployment sees 97-99% of read requests served by FlashCache (few hundred users).

Just as a note, MCS is relatively untested at scale and only a fraction of deployments use it. I only view it as a quick and dirty option for setting up storage for pilots/POCs.

PVS (as mentioned) is very thoroughly tested and is used in most deployments. You might read some mumbo jumbo about how NetApp's RAID-DP is bad because PVS deployments tend to see mostly writes to the SAN since the PVS server caches the disks, but the truth is that NetApp's WAFL tech makes it the best choice for writes no matter what raid level you pick.

There is also a 3rd option, using the NetApp/VMWare Rapid Deployment Utility (RDU) plugin, which I believe is considered to be NetApp/Citrix "best practices" at the moment. I have not deployed this solution because all of my XD deployments were tricked into using XenServer by Citrix. Luckily the RDU for XenServer should be out next year...

NetApp's SRM integration is very tight. VMWare running on NetApp NFS is great, and NetApp's Virtual Storage Console plugin for vCenter is by far the best VMWare/storage management tool out of the bunch.

Mausi
Apr 11, 2006

Thanks :)
There's another guy 'doing' VDI while I take care of the broad VMware infrastructure - he's the one who mentioned MCS so I'll check whether PVS is what will be going in.

I appreciate the NetApp info - the comments about caching reads for VDI mesh with what I currently know, so it's good to hear I'm not too far off current best practice.

If anyone has some choice info on the EMC or Dell side of things I'd be very glad to hear it :) From what I've read so far EMC are likely to try and pitch a vMAX and then fall back to a VNX with SSD cache, no idea what Dell will bring to the table but if it's an Equallogic I'm going to giggle.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Mausi posted:

Thanks :)
There's another guy 'doing' VDI while I take care of the broad VMware infrastructure - he's the one who mentioned MCS so I'll check whether PVS is what will be going in.

I appreciate the NetApp info - the comments about caching reads for VDI mesh with what I currently know, so it's good to hear I'm not too far off current best practice.

If anyone has some choice info on the EMC or Dell side of things I'd be very glad to hear it :) From what I've read so far EMC are likely to try and pitch a vMAX and then fall back to a VNX with SSD cache, no idea what Dell will bring to the table but if it's an Equallogic I'm going to giggle.

So depending on your size, your company and the criticality of your data and a number of other factors it may be the VMAX, VMAXe (the new baby VMAX) or the VNX. I can go into detail about each if you're looking for pointers, just ask.

If it's VMware you're primarily focused on, Wikibon did a good write-up recently on VMware integration & also a survey.

http://wikibon.org/wiki/v/EMC_and_NetApp_lead_in_VMware_Storage_Integration_Functionality

and

http://wikibon.org/wiki/v/Wikibon_User_Survey:_EMC_and_NetApp_Dominate_VMware_Storage

The best thing about the EMC kit is the Fast Cache (SSD drives as cache) on the VNX and the automated storage tiering. You'll find a ton of storage can go down to the slower 2TB/3TB drives.

The VNX arrays can see up into the VM layer (ESX server, which VMs are on it, etc) and have the usual plugins so you can see down into the storage layer from vSphere and actually just provision storage from vSphere without logging into the array.

A great tool to look into is PowerPath/VE. This does a lot more than NMP - firstly, it makes sure all VMs can use all paths rather than the two you assign. This makes performance better and management a lot easier as you don't have to do any manual selection of paths. The pool approach means it's suited towards those kinds of vCloud Director deployments where app owners can create their own VMs - it usually slows it all down if an admin has to select paths (and it's supported with things like PXE-based Auto Deploy).

After the initial EMC meeting ask to speak with one of the vSpecialists to get detailed info about what is coming up. Lots of good stuff in the works with relation to VMware.

I'll probably be in london myself in a few weeks. Happy to meet up for a beer & chat.

Dell will really only pitch Compellent... and well... all I have to say is 4GB cache (mentioned this last page).

Mausi
Apr 11, 2006

Thanks for the info, will read those now. :) PM me your details when you're going to be down here in London, I'll definitely shout you a beer.

Given that we're designing and delivering to a certain level of maturity rather than having them grow into it organically, self-managing systems like auto storage tiering and powerpath are ideal as long as we can tie them into the central alerting system, which currently looks to be M$ System Centre and VMware OpsMgr 5.

Vanilla posted:

Dell will really only pitch Compellent... and well... all I have to say is 4GB cache (mentioned this last page).
I'll make up a new mandatory requirement that no 32-bit systems can be purchased as the CTO considers it a key indicator of an outmoded technology, or some poo poo ;)

Slagwag
Oct 27, 2010

"I am not a nugget!"
I enjoy my cheap environment with Synology devices as my SAN, and my ESXi servers are running VMs off it without a problem.


Internet Explorer
Jun 1, 2005





Keldawn posted:

I enjoy my cheap environment with Synology devices as my SAN, and my ESXi servers are running VMs off it without a problem.

All well and good, but I cannot imagine running a 4000 user VDI plus supporting infrastructure environment off them.
