Nomex
Jul 17, 2002

Flame retarded.
Not really. You can bench domain services servers (file/print, DC, DNS, etc.) to get an idea of how much IO you need. For things like Exchange, SharePoint, Oracle, BES, etc. you can get baselines from the vendors, or just bench them yourself. Perfmon is your friend.
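If you go the Perfmon route, a quick way to turn a counter log into numbers you can size against is to export it to CSV (relog can convert a .blg) and pull out the average, 95th percentile, and peak Disk Transfers/sec. A minimal Python sketch - the counter path below is just a placeholder, adjust it to whatever column your export actually contains:

```python
# Rough sketch: summarize a Perfmon counter log that has been exported to CSV
# (e.g. `relog capture.blg -f csv -o capture.csv`). The counter path below is
# a placeholder; adjust it to whatever column your export actually contains.
import csv
import statistics

IOPS_COLUMN = r"\\MYSERVER\PhysicalDisk(_Total)\Disk Transfers/sec"  # hypothetical

def summarize(path, column=IOPS_COLUMN):
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            value = (row.get(column) or "").strip()
            if value:
                try:
                    samples.append(float(value))
                except ValueError:
                    pass  # Perfmon sometimes emits blank/invalid samples
    samples.sort()
    return {
        "avg_iops": statistics.mean(samples),
        "p95_iops": samples[max(0, int(len(samples) * 0.95) - 1)],
        "peak_iops": samples[-1],
    }

if __name__ == "__main__":
    print(summarize("capture.csv"))
```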

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

complex posted:

This is for VMware, so I have the burden (luxury?) of just assuming all IO is random.
This is where virtualization admins get really lazy, and it really bothers me.

Your workloads don't get magically mixed just because they happen to get virtualized. They get mixed because of lovely, sloppy planning. Sometimes it's unavoidable or even desirable, because your I/O requirements aren't individually that high and you don't need to care anyway. Often, though, that isn't the case, and it's just one of a number of things that can happen when communication breaks down between server/storage admins because the server guys aren't constantly asking for LUNs and zoning anymore.

It's true that enough interleaved sequential I/O patterns will basically aggregate into a single random I/O pattern, but that's only true for a given set of spindles. The same rules apply to virtualized environments that apply to physical ones -- if your workload has significant I/O requirements, you had better dedicate spindles to it, and make sure that the array/volume are configured properly for your workload. You wouldn't mix your high-throughput Exchange database and transaction logs on the same disks if it was a physical box. Don't do it just because it's virtual, and hey, this abstraction thing means you don't need to plan anything anymore.

If you're running workloads of any significant size, you're going to run into applications that have their own high I/O requirements. These are your bread-and-butter applications like Exchange and SQL Server. You have absolutely no excuse to ignore all existing vendor best practices on these and just dump them onto generically-configured datastore volumes. Use your head. There are plenty of tools out there to profile your workload, and if your SAN's tools suck, some of them are built right into vSphere (see vscsiStats).

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Xenomorph posted:

Any more info on this? At first I didn't think it would be a problem, but after using 2008 R2 on our Domain Controller, using regular 2008 feels a little "off" (basically going from Win7 back to Vista).

Besides the slightly improved interface, what advantages does Storage Server 2008 R2 offer? SMB 2.1? How much better is that than 2.0?

I'm not even familiar with the "Storage Server" product. I saw something to enable Single Instance Store (de-duplication) on the drive, which I'm guessing isn't in the regular Server products.
I'm tempted to just wipe Storage Server 2008 and install Server 2008 R2. We get Windows licenses cheap, and I'm trying to figure out if we'd be happier with the overall improvements in Server 2008 R2 compared to the NAS features we may not use in Storage Server 2008.

Sorry for the late reply, I was busy receiving and building my new cabinet, full of new systems etc. :)

Yes, WSS 2008 R2 is in the pipe, I know it for a fact, but it seems it won't be out before the end of Q1 - why, don't ask me, Dell is rather tight-lipped about it for some mysterious reason (i.e. it's already out, there's nothing secret about it, HP introduced their G2 X3000 line last November.)
I have a theory though: to date Dell has bought (storage companies) EqualLogic, Exanet, Ocarina and recently Compellent - how many can you point out in Dell's current portfolio? Right, only EqualLogic (the Compellent acquisition is still under way and they will need another year to fully integrate it into some unified storage lineup; they're still selling Compellent-sourced systems regardless of Dell listing them as theirs, you cannot configure them, there's no info up there, etc.)

Ocarina's deduping is coming, we know that - they told us it's going to take up to a year before it shows up (couldn't fit in EQL firmware space? controllers unable to run it?) - but they are totally silent about the Exanet IP they bought now more than a year ago... it was a scale-out, clustered NAS solution, exactly the product Dell is sorely missing (à la SONAS, X9000, VNX/Isilon, etc.) and also a product that would certainly eat into the market share of NX units running Storage Server 2008 R2 clusters.
Coincidence?
I doubt it but time will tell.

As for WSS2008R2: the new SMB 2.1 is a lot faster, SiS is there, yes, FCI is better (you might know it from Server 2008 R2), it includes iSCSI Target v3.3 and, like every previous Storage Server edition, it includes unlimited CALs right out of the box (key selling points in many cases.)
If you're like me, planning to run two in a cluster, then it's important to remember that R2 clustering is a lot easier now - and you can still get all your Server 2008 R2 features.

Licensing aside, I'd never wipe Storage Server 2008 R2 and install Server 2008 R2, for sure - just like I would not hesitate for a second to wipe Storage Server 2003 R2 and install Server 2008 R2...

...but Storage Server 2008 vs Server 2008R2? Tough call... are they both Enterprise and is SS2008 x64?

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

paperchaseguy posted:

As a very rough rule of thumb, I use 120 IOPS/10k disk, 180 IOPS/15k disk, 60 IOPS/5k SATA. But yes, any major vendor will help you size it if you can collect some iostat data or give some good projections.

Same here for rough estimates w/ 80 for SATA 7.2k or 100 for SAS 7.2k added...
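For anyone following along, the usual back-of-the-napkin math with those per-spindle numbers is: split frontend IOPS into reads and writes, multiply writes by the RAID write penalty, and divide by the per-disk figure. A rough sketch using the estimates quoted above (these are rules of thumb, not vendor specs, and cache is ignored entirely):

```python
import math

# Per-spindle IOPS estimates from the thread (rules of thumb, not specs).
DISK_IOPS = {"15k": 180, "10k": 120, "7.2k_sas": 100, "7.2k_sata": 80, "5.4k_sata": 60}

# Typical RAID write penalties: each frontend write costs this many backend IOs.
RAID_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def spindles_needed(frontend_iops, read_pct, disk_type, raid_level):
    """Estimate the spindle count for a workload, ignoring cache entirely."""
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    backend_iops = reads + writes * RAID_PENALTY[raid_level]
    return math.ceil(backend_iops / DISK_IOPS[disk_type])

# Example: 2000 frontend IOPS, 70% read, on 15k disks
print(spindles_needed(2000, 0.70, "15k", "raid10"))  # -> 15
print(spindles_needed(2000, 0.70, "15k", "raid5"))   # -> 22
```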

szlevi fucked around with this message at 22:59 on Feb 25, 2011

Nebulis01
Dec 30, 2003
Technical Support Ninny

szlevi posted:

...but Storage Server 2008 vs Server 2008R2? Tough call... are they both Enterprise and is SS2008 x64?

WSS2008 is available in x86 or x64. WSS2008R2 is available only on x64. Unless you really need the iSCSI or De-duplication features, Server 2008R2 would serve you quite well.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Nebulis01 posted:

WSS2008 is available in x86 or x64. WSS2008R2 is available only on x64.

Yes, I know that - I'm asking what they have up and running over there, WSS2008 x86 or x64...

quote:

Unless you really need the iSCSI or De-duplication features,

...or FCI for policy-based automated data tiering, or to have unlimited licensing, etc. etc...

quote:

Server 2008R2 would serve you quite well.

Right, except I'm having trouble figuring out how much advantage Server R2 gives you (sans somewhat better SMB). :)

Intrepid00
Nov 10, 2003

I'm tired of the PM’s asking if I actually poisoned kittens, instead look at these boobies.

szlevi posted:

Right, except I'm having trouble figuring out how much advantage Server R2 gives you (sans somewhat better SMB). :)

Multi-Monitor support :v: Seriously though, lots of improvements, and MS has a handy list.

http://www.microsoft.com/windowsserver2008/en/us/why-upgrade.aspx

Intrepid00 fucked around with this message at 04:28 on Feb 26, 2011

complex
Sep 16, 2003

Misogynist posted:

This is where virtualization admins get really lazy, and it really bothers me.

Suppose my array does sub-lun tiering and can do post-provisioning migration between luns?

They say laziness is a desirable trait in sysadmins.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

complex posted:

Suppose my array does sub-lun tiering and can do post-provisioning migration between luns?

They say laziness is a desirable trait in sysadmins.
If you got that feature for free, definitely take advantage of it. If you paid extra for it so you wouldn't have to think, we may have a disagreement between us. :)

conntrack
Aug 8, 2003

by angerbeet

Misogynist posted:

If you got that feature for free, definitely take advantage of it. If you paid extra for it so you wouldn't have to think, we may have a disagreement between us. :)


Sometimes the best of plans get screwed when the customers change the requirements at random intervals.

This is why my dreams are of online lun migrations.

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.
Anyone know the dedupe limits for NetApp's ONTAP 8.0.1?

My access to the NOW site is non-existent :smith:

madsushi
Apr 19, 2009

Baller.
#essereFerrari

ghostinmyshell posted:

Anyone know the dedupe limits for NetApp's ONTAP 8.0.1?

My access to the NOW site is non-existent :smith:

If the filer supports 8.0.1, then the volume size limit for dedupe is 16 TB, regardless of controller.

Boner Buffet
Feb 16, 2006
HP makes getting competitive pricing, at least with the LeftHand units but probably with all their storage, impossible. Butthurt VARs refuse to give me quotes because they're not the preferential partner on the project, and I can't make a purchase without competitive quotes. I hate VARs.

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

InferiorWang posted:

HP makes getting competitive pricing, at least with the LeftHand units but probably with all their storage, impossible. Butthurt VARs refuse to give me quotes because they're not the preferential partner on the project, and I can't make a purchase without competitive quotes. I hate VARs.
Better than the alternative, where the different VARs throw fits and want you to call HP and tell them you don't wanna work with the preferential partner. I've gotten put in the middle of 'registered opportunity' spats a few times in the recent past. While I understand that the registration process is there for a reason, to protect the VARs, it should be transparent to the customer. I also understand the VAR not wanting to spend time to provide you what will basically amount to list pricing, which pretty much guarantees they won't be selected to fill the order.

That said, if you're having problems with it, see if CDW or PC Connection or one of the ginormous resellers will give you a quote, or call your HP rep and explain the situation. I'd bet they can figure something out to get you the quotes you'll need.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

conntrack posted:

Sometimes the best of plans get screwed when the customers change the requirements at random intervals.

This is why my dreams are of online lun migrations.
I dream of a day when all SANs do nothing but dynamically tier blocks across storage and anything more complicated than "I want this much storage with this class of service" all happens silently.

Edit: Basically Isilon I guess

ragzilla
Sep 9, 2005
don't ask me, i only work here


Misogynist posted:

I dream of a day when all SANs do nothing but dynamically tier blocks across storage and anything more complicated than "I want this much storage with this class of service" all happens silently.

Edit: Basically Isilon I guess

Or 3par, or Compellent, or EMC VMAX.

Xenomorph
Jun 13, 2001

Nebulis01 posted:

WSS2008 is available in x86 or x64. WSS2008R2 is available only on x64. Unless you really need the iSCSI or De-duplication features, Server 2008R2 would serve you quite well.

We've never used iSCSI (yet), and I don't think we'd really use de-duplication. Most of the files will be office documents and a bunch of other stuff that will probably be pretty unique from user to user.

The PowerVault isn't really in production yet, so I guess I could just spend a few hours getting it ready with Server 2008 R2.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Misogynist posted:

I dream of a day when all SANs do nothing but dynamically tier blocks across storage and anything more complicated than "I want this much storage with this class of service" all happens silently.

Edit: Basically Isilon I guess

It's what a lot of vendors are trying to do, hence the whole vBlock style approach to hardware that is VERY popular right now.

One block purchase and one interface to code against lets people create a portal to allow their apps teams to get their server, storage, DB, and network 'automatically'.

Naturally some arrays are already doing the block based tiering (EMC, IBM, HDS, etc).

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades
I just found out that Uncle Larry silently killed off the StorageTek 25xx line of storage. I guess $1,000 2TB disks weren't profitable enough for them. Now I have to go muck around on eBay for disks and brackets to upgrade a 2510 array that is less than three years old at one of our remote sites. :argh:

conntrack
Aug 8, 2003

by angerbeet
I'm not 100% convinced autotiering in its current forms (VSP, VMAX) would benefit all setups.

I know if I bought 2TB of SSD space for all that "automatic right performance" joy, the nightly batch processing system that just happens to be 2TB would push out the interactive systems. That system goes balls out after hours, and during the day the other systems take over for the shorter office-hours window. Making a profile for that might be interesting.

The migration of blocks between the tiers takes some time after profiling. Just because you get some amount of SSD storage, it still has to be right sized for the working set. I'd be interested in knowing at what point the work spent plus dedicated 15k spindles costs more than autotiering and SSDs. From the white papers and sales guys it's hard to get solid info. "You started saving money just by talking to me, wink wink."

But I might just be sperging and nitpicking, I guess. When you go to the performance seminars it's all about sizing one array per application; we suckers that have one array for all applications sit and cry in the corner over our small budgets.

Did I make this post a thousand times before in this thread? I ask these questions in a lot of forums and many people just go BUT WE NEEED SSD BECAUSE ITS COOOOOOL.
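The cost question above can at least be framed as simple arithmetic: dedicated 15k spindles sized for the IOPS versus a small SSD tier sized to the hot working set plus SATA for the rest. A sketch - every price and figure below is a made-up placeholder, plug in your own quotes:

```python
# Back-of-the-napkin comparison: dedicated 15k spindles vs. SSD tier + SATA.
# Every price and figure below is a hypothetical placeholder.
import math

def dedicated_15k_cost(required_iops, iops_per_disk=180, price_per_disk=600):
    disks = math.ceil(required_iops / iops_per_disk)
    return disks * price_per_disk

def tiered_cost(working_set_gb, total_gb, ssd_gb_price=10.0, sata_gb_price=0.30,
                tiering_license=15000):
    # Size the SSD tier to the hot working set, SATA for everything else.
    return (working_set_gb * ssd_gb_price
            + (total_gb - working_set_gb) * sata_gb_price
            + tiering_license)

# Hypothetical workload: 5000 IOPS, 10 TB total, ~500 GB hot working set
print(dedicated_15k_cost(5000))   # 28 disks -> $16,800 in disks alone
print(tiered_cost(500, 10_000))   # -> $22,850 including the license
```

With made-up numbers like these the tiered option isn't automatically cheaper, which is roughly the point: the answer depends entirely on your working-set size and the license cost, not on the marketing.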

ragzilla
Sep 9, 2005
don't ask me, i only work here


conntrack posted:

The migration of blocks between the tiers takes some time after profiling. Just because you get some amount of SSD storage, it still has to be right sized for the working set. I'd be interested in knowing at what point the work spent plus dedicated 15k spindles costs more than autotiering and SSDs. From the white papers and sales guys it's hard to get solid info. "You started saving money just by talking to me, wink wink."

At least on Compellent the migration is based on snapshots (so after a snapshot, the data is 'eligible' to get pushed down to a lower tier) so you'd run a snapshot on that LUN after the batch completes, and it'd start getting pushed down once there's pressure on that tier. Realistically though if you have some app that runs overnight and you don't want it using Tier0, just put it on a LUN that only has Tier1-3 storage? Just because you can give every LUN tier 0-3 access doesn't mean you should or would in production.

ragzilla fucked around with this message at 15:13 on Mar 3, 2011

conntrack
Aug 8, 2003

by angerbeet

ragzilla posted:

At least on Compellent the migration is based on snapshots (so after a snapshot, the data is 'eligible' to get pushed down to a lower tier). Realistically though if you have some app that runs overnight and you don't want it using Tier0, just put it on a LUN that only has Tier1-3 storage? Just because you can give every LUN tier 0-3 access doesn't mean you should or would in production.

My point was that they both warranted tier 0 performance; the migration profiler would profile them both as tier 0, just at different times of the day. The migration takes days in the case of large datasets; intraday migration to shrink the needed tier 0 storage isn't there yet. At least in the VSP, from what I gather.

complex
Sep 16, 2003

First, before allowing anything to happen automatically, some systems will allow you to run in sort of a "recommendation" mode, saying "I think this change will be beneficial".

Also, if your tiering system does not take time-of-day changes (or weekly/monthly, whatever) into account, then of course it won't be able to adapt to cyclic events like you describe.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Bluecobra posted:

I just found out that Uncle Larry silently killed off the StorageTek 25xx line of storage. I guess $1,000 2TB disks weren't profitable enough for them. Now I have to go muck around eBay for disks and brackets to upgrade a 2510 array that is less than three years old at one of our remote sites. :argh:

4 or 5 months ago we got, according to our sales rep, the last 2510 brackets in the country. They cost us $200 apiece, too. Not sure why Oracle doesn't want to keep that gravy train running.

H110Hawk
Dec 28, 2006

conntrack posted:

right sized

Did I make this post a thousand times before in this thread? I ask these questions in a lot of forums and many people just go BUT WE NEEED SSD BECAUSE ITS COOOOOOL.

I didn't think "right sized" was an actual term. Color me surprised. http://www.oxforddictionaries.com/definition/rightsize?view=uk

You need to buy an SSD or three and see if they are right for your applications. We bought one and extrapolated some data, which condensed 52 spinning 7200 rpm disks into 6 SSDs. Coupled with the fact that we have one disk per server, it was an incredible savings.
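The extrapolation described above is just the spindle IOPS arithmetic run in reverse; here's a rough sketch with placeholder per-device figures (measure your own devices before trusting any of this):

```python
import math

# Placeholder figures: measure your own devices before believing any of this.
SPINNER_IOPS = 80    # rough random IOPS for one 7200 rpm disk
SSD_IOPS = 5000      # rough random IOPS for one of the SSDs under test

def ssds_to_replace(spinner_count, headroom=1.25):
    """How many SSDs cover the same random IOPS, with some headroom."""
    required = spinner_count * SPINNER_IOPS * headroom
    return math.ceil(required / SSD_IOPS)

# 52 spinners is only ~4160 IOPS; on IOPS alone a couple of SSDs cover it,
# so capacity and redundancy presumably drive the real count.
print(ssds_to_replace(52))  # -> 2
```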

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

InferiorWang posted:

HP makes getting competitive pricing, at least with the lefthand units but probably with all their storage, impossible. Butt hurt VARs refuse to give me quotes because they're not the preferential partner on the project and I can't make a purchase without competitive quotes. I hate VARs.

Well, the problem is HP. They give a fatty discount to whoever 'deal regs' the opportunity. If you don't have the deal registration you can't get primo pricing. Our VAR is straight with us, we know his markup, he tells us his raw costs, etc. But there's nothing stopping a crappy VAR from getting the deal reg and just pocketing the savings.

I bought some LH kit for our Austin, TX location, but corporate is in California. Took over a week and a bunch of emails and calls to get an 'out of territory exception' so I could buy the drat boxes. HP makes it way too complicated to buy their equipment.

Slappy Pappy
Oct 15, 2003

Mighty, mighty eagle soaring free
Defender of our homes and liberty
Bravery, humility, and honesty...
Mighty, mighty eagle, rescue me!
Dinosaur Gum
Does anyone have straight information on EMC FAST and the element size? My understanding is that data tiering within a Clariion using FAST only occurs at a 1GB element size - meaning that moving a "hot spot" really means moving a 1GB chunk of data. I'm not a storage guy, but it seems to me that this sucks for applications like Exchange and SQL that do a lot of random reads of 4k, 8k and 64k blocks, and that it would just result in unnecessarily dragging a lot of 1GB chunks around from tier to tier. Is this a conscious decision by EMC? Are they working on decreasing the element size or allowing some kind of manual configuration (ideally, per array)? Is it even worth considering enabling a FAST pool with SSD for anything other than sequential data?

My Rhythmic Crotch
Jan 13, 2011

edit: not the right thread for this question :downs:

My Rhythmic Crotch fucked around with this message at 02:41 on Mar 4, 2011

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

conntrack posted:

I'm not 100% convinced autotiering in its current forms (VSP, VMAX) would benefit all setups.
Personally I think Sun had it right -- throw a shitload of commodity spindles at the problem, and put a shitload of cache in front of it. 90+% of your reads come from cache, and 100% of writes are cached and written sequentially. Saves you the trouble of tiering, period, and you never have to worry about fast disk, just cache. Which, IIRC, an 18GB SSD for ZIL from Sun was ~$5k, and a shelf of 24 1TB SATA drives was around $15k. Too bad Oracle is already killing that poo poo.
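The arithmetic behind the big-cache approach: if most reads are absorbed by cache and the ZIL/write cache coalesces random writes into sequential streams, the SATA pool only has to serve the read misses. A quick sketch - the hit rate and per-spindle figures are assumptions for illustration, not Sun specs:

```python
# Sketch of the "big cache in front of cheap SATA" math. All figures are
# assumptions for illustration, not Sun specs or measurements.
SATA_RANDOM_IOPS = 80   # rough random IOPS per 7.2k spindle

def backend_read_iops(frontend_iops, read_fraction=0.7, cache_hit=0.90):
    """Random read IOPS that actually reach the spindles (cache misses only).
    Writes are ignored here on the assumption that the ZIL/write cache
    coalesces them into sequential streams the SATA pool handles easily."""
    return frontend_iops * read_fraction * (1 - cache_hit)

misses = backend_read_iops(10_000)       # -> 700 IOPS of random read misses
shelf_capacity = 24 * SATA_RANDOM_IOPS   # -> 1920 IOPS from one 24-drive shelf
print(misses, shelf_capacity, misses <= shelf_capacity)
```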

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Spamtron7000 posted:

Does anyone have straight information on EMC FAST and the element size? My understanding is that data tiering within a Clariion using FAST only occurs at a 1GB element size - meaning that moving a "hot spot" really means moving a 1GB chunk of data. I'm not a storage guy, but it seems to me that this sucks for applications like Exchange and SQL that do a lot of random reads of 4k, 8k and 64k blocks, and that it would just result in unnecessarily dragging a lot of 1GB chunks around from tier to tier. Is this a conscious decision by EMC? Are they working on decreasing the element size or allowing some kind of manual configuration (ideally, per array)? Is it even worth considering enabling a FAST pool with SSD for anything other than sequential data?

For Clariion / VNX it is indeed 1GB chunks.

For VMAX it's 768KB chunks, BUT it moves these in groups of 10 (so 7.5MB chunks).

I think it's down to array performance. More chunks means more metadata to store and analyse, and the array already has enough to look after - even at night, when it is expected to move the chunks, the backups are running, batch jobs are kicking off, etc.

1GB chunks are still pretty small - compared to a 2TB drive it's a fraction of a percent - and I'm seeing some BIG Clariion arrays out there. There's a lot of chunks to look after.....
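To put numbers on the metadata argument, here's the trivial chunk-count arithmetic - the chunk sizes are the ones quoted above, the pool size is a made-up example:

```python
# How many tiering chunks the array has to track at different granularities.
# Chunk sizes are the ones quoted above; the pool size is a made-up example.
GIB = 1024**3
TIB = 1024**4

def chunk_count(pool_bytes, chunk_bytes):
    return pool_bytes // chunk_bytes

pool = 100 * TIB  # hypothetical 100 TiB pool
print(chunk_count(pool, 1 * GIB))          # 1 GB chunks -> 102,400
print(chunk_count(pool, 768 * 1024))       # 768 KB extents -> ~139.8 million
print(chunk_count(pool, 10 * 768 * 1024))  # groups of 10 (~7.5 MB) -> ~14 million
```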

Hopefully with hardware enhancements this size will come down, granularity is always good as long as it can be handled.

The way I look at automated tiering is not from a performance-improvement perspective but from a dump-all-that-inactive-data-down-to-2TB-drives perspective. If you're looking for an all-round performance boost, stick FAST Cache in there (SSDs used as cache); if you're looking to reduce costs, stick FAST on.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

conntrack posted:

I'm not 100% convinced autotiering in its current forms (VSP, VMAX) would benefit all setups.

I know if I bought 2TB of SSD space for all that "automatic right performance" joy, the nightly batch processing system that just happens to be 2TB would push out the interactive systems. That system goes balls out after hours, and during the day the other systems take over for the shorter office-hours window. Making a profile for that might be interesting.

You set the window of observation. For most places this would be 9-5 and everything overnight (backups etc) is ignored.

Additionally you can choose to ignore some LUNs, lock a LUN in place if the user would prefer a set performance level, etc.

Talking about EMC, not sure about all vendors.

quote:

The migration of blocks between the tiers takes some time after profiling. Just because you get some amount of SSD storage, it still has to be right sized for the working set. I'd be interested in knowing at what point the work spent plus dedicated 15k spindles costs more than autotiering and SSDs. From the white papers and sales guys it's hard to get solid info. "You started saving money just by talking to me, wink wink."

It's the utilisation of larger 1TB and 2TB drives that generates the cost savings (in addition to reduced footprint, power, cooling, etc). This is why I see automated storage tiering as more of a money saver than a performance improver.

Little SSD, a few slivers of fast disk and a ton of SATA. I've already seen it in action 10 times because people have just been using fast FC drives for all data and it isn't needed.

Eventually the small, fast drives will go away, SSDs will be cheaper and it will be all SSD and large 2TB+ drives.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

complex posted:

First, before allowing anything to happen automatically, some systems will allow you to run in sort of a "recommendation" mode, saying "I think this change will be beneficial".

Also, if your tiering system does not take time-of-day changes (or weekly/monthly, whatever) into account, then of course it won't be able to adapt to cyclic events like you describe.

I always avoid the recommendation modes. There used to be an EMC Symmetrix tool called Symm Optimizer that would automate array performance balancing, but on a 'dumb' level compared to the sub-LUN tiering we have today. It's been around for about 7-8 years.

It would move hot spots around to balance the array, it had to be the same RAID type, it moved whole hypers, it needed some swap space, etc.

Anyway, those that used recommendation mode never actually went through with the moves, as they were either too busy to even look at the array or didn't want to make changes in an enterprise environment.

Those that left Symm Optimizer to do its thang had an awesomely balanced array without hot spots.

These arrays have got so much bigger that people just have to bite the bullet and leave them to automate these things - there's a reason it's called automated storage tiering, and people have even less time today....

Vanilla fucked around with this message at 09:12 on Mar 4, 2011

conntrack
Aug 8, 2003

by angerbeet

adorai posted:

Personally I think Sun had it right -- throw a shitload of commodity spindles at the problem, and put a shitload of cache in front of it. 90+% of your reads come from cache, and 100% of writes are cached and written sequentially. Saves you the trouble of tiering, period, and you never have to worry about fast disk, just cache. Which, IIRC, an 18GB SSD for ZIL from Sun was ~$5k, and a shelf of 24 1TB SATA drives was around $15k. Too bad Oracle is already killing that poo poo.

I had a boner for those ZFS boxes until we got the quote. I would have loved to get some, but Sun was sinking and the sales people just didn't give a drat any more.

Now I thank them for not getting me to buy it.

conntrack
Aug 8, 2003

by angerbeet

Vanilla posted:

You set the window of observation. For most places this would be 9-5 and everything overnight (backups etc) is ignored.

Additionally you can choose to ignore some LUNs, lock a LUN in place if the user would prefer a set performance level, etc.

Talking about EMC, not sure about all vendors.

You mean getting the same effect as giving the applications dedicated spindles? :)

Vanilla posted:


It's the utilisation of larger 1TB and 2TB drives that generates the cost savings (in addition to reduced footprint, power, cooling, etc). This is why I see automated storage tiering as more of a money saver than a performance improver.

Little SSD, a few slivers of fast disk and a ton of SATA. I've already seen it in action 10 times because people have just been using fast FC drives for all data and it isn't needed.

Eventually the small, fast drives will go away, SSDs will be cheaper and it will be all SSD and large 2TB+ drives.

That depends on who you talk to; I personally share your view on this matter. A lot of people see it as a way to fire all those crusty storage guys, though.

Why doesn't the VMAX virtualize external storage? Anyone?

BlankSystemDaemon
Mar 13, 2009



EDIT: ↓↓ Sorry, I'm stupid. I was directed towards the NAS thread, saw this and thought "Oh hey, that's probably it!", not realizing that enterprise is a bit outside my requirements (and price, I suspect).

BlankSystemDaemon fucked around with this message at 17:42 on Mar 4, 2011

Mierdaan
Sep 14, 2004

Pillbug
You might get an answer here, but you'd probably have better luck in the Home Storage megathread.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

conntrack posted:

You mean getting the same effect as giving the applications dedicated spindles? :)

On our VMAX we have different service tiers defined so we can get the best bang for our buck. This is a process/SLA-driven thing to make sure apps that need performance all the time can get it.

Some apps will live entirely on dedicated spindles and we charge back more to the business unit that owns the app.



quote:

That depends on who you talk to; I personally share your view on this matter. A lot of people see it as a way to fire all those crusty storage guys, though.

Why doesn't the VMAX virtualize external storage? Anyone?

It isn't really what the VMAX was designed to do. The VMAX was built as a powerhouse array that's intended to be extremely reliable and extremely fast. I think EMC support/engineering would rather focus on keeping it that way than spend resources making it talk to other arrays.

Edit:

There is always the comedy Invista option! It's just not something EMC has been very interested in doing.

1000101 fucked around with this message at 18:05 on Mar 4, 2011

conntrack
Aug 8, 2003

by angerbeet
The cost of "enterprise sata" sort of takes out the "save" part in "save money", so virtualizing midrange is looking better and better.

Edit: If the VMAX got the external capability I would definitely look more into it.

conntrack fucked around with this message at 18:38 on Mar 4, 2011

H110Hawk
Dec 28, 2006

Spamtron7000 posted:

Does anyone have straight information on EMC FAST and the element size? My understanding is that data tiering within a Clariion using FAST only occurs at a 1GB element size - meaning that moving a "hot spot" really means moving a 1GB chunk of data.

To give you an idea of the other end of the spectrum, BlueArc tried to do this with their Data Migrator, which would offload files from fast -> slow storage based on criteria you set. This happened at the file level, so if you had a bajillion files you wound up with a bajillion links to the slow storage. I'm not saying one way is better than the other, or one implementation is better than another, but there are extremes both ways with this sort of thing.

I for one would bet EMC has it less-wrong than BlueArc. Is their system designed for Exchange datastores? Is there a consideration in how you configure Exchange to deal with this?

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists
So, what's the preference: arrays where you create several different groups of spindles, set a RAID protection level on each group, then present LUNs from it, vs. the ones that do a large per-tier disk group and then virtual RAID at some stripe level on top of it?

I've dealt mostly with HP EVA, which does the latter, and we're looking at different options now to replace an EVA 4000. 3PAR and Compellent both do things similar to the EVA, while NetApp and EMC(?) do the former.

The NetApp we're looking at is a V32xx, which we could use to virtualize and manage the EVA 4400 we'll still be using for at least a few years. So cutting it down to only doing stuff in ONTAP would cut some of the management tasks.

Right now I've got budgetary quotes in hand from 3PAR and NTAP, and expect a Compellent one soon. Haven't talked to EMC yet. Anyone else I should be talking to? Am I making too big a deal out of the differences in management?
