Vanilla
Feb 24, 2002

Hay guys what's going on in th
I can answer most of your EMC questions (features, functionality, why EMC over XYZ) from a sales perspective; I'm not that technical!

I can give opinions on other vendors in general.


Vanilla
Feb 24, 2002

Hay guys what's going on in th

Alowishus posted:

How can I definitively determine the blocksize of a EMC Clariion CX3 RaidGroup?

I've got a script that is pulling a "navcli getall" report and parsing it to produce a web-based report about free space. Unfortunately, all I get from the report for each RaidGroup is this:
code:
Raw Capacity (Blocks):                     633262480
Logical Capacity (Blocks):                 506609984
Free Capacity (Blocks,non-contiguous):     296894784
By looking at a LUN on that RaidGroup:
code:
LUN Capacity(Megabytes):    102400
LUN Capacity(Blocks):       209715200
... I can infer that the blocksize is 2048, but is that something I can rely on to always be the case? If not, is there an easier way to get it out of Navisphere?

Unless I'm wrong, you can set the block size on a LUN-by-LUN basis depending on your requirements: 2k, 4k, 8k, 16k, 32k, etc. This is likely the size you are looking for.

The underlying block size is fixed at 520 bytes: 512 bytes of data plus 8 bytes of Clariion metadata to ensure integrity, with the last 8 bytes transparent to the user.

Have you tried looking at the LUN via Navisphere?
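For the original question, a quick sketch of deriving the exposed block size directly from the two LUN figures in the navicli output quoted above:

```python
# Derive the exposed block size from the navicli LUN capacity figures above.
lun_capacity_mb = 102400         # "LUN Capacity(Megabytes)"
lun_capacity_blocks = 209715200  # "LUN Capacity(Blocks)"

block_size_bytes = lun_capacity_mb * 1024 * 1024 // lun_capacity_blocks
print(block_size_bytes)  # -> 512
```

Note this works out to 512, not 2048, which is consistent with the 520-byte (512 + 8) on-disk format described above; the same division against any LUN's reported figures should give the same answer.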

Vanilla
Feb 24, 2002

Hay guys what's going on in th

optikalus posted:

I'm curious how you're calculating 3000 IOPS for the Equallogic. 7200RPM SATA drives yield about 80 IOPS per spindle, so you would need 38 drives to reach 3000. You would need to fill the box with 15k RPM drives to get anywhere near 3000 IOPS.

I can't imagine that the Equallogic is doing that fancy pilardata thing where they split up the disk and put the 'faster' data on the outside edge, but that'd be neat.

It's not just the drives; a lot of IOPS are served by the array's intelligent cache before anything even hits the spindles.

You should always be wary of IOPS benchmarks and claims; they're almost always useless, vendor benchmarks especially. Often they benchmark such a small amount of data that it never touches the back end and all comes from cache. Some monkey in a lab once saw the meter hit 300,000 IOPS, so that's what goes on all the slides and marketing output!
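A toy model of the cache effect described above (all numbers illustrative, not vendor figures): if cache hits cost essentially nothing, only the misses have to be served by the spindles.

```python
# Toy model: array IOPS when a front-end cache absorbs part of the load.
# Illustrative numbers only, not vendor specs.
def effective_iops(spindles, iops_per_spindle, cache_hit_ratio):
    """IOPS the array can serve if only cache misses reach the disks."""
    disk_iops = spindles * iops_per_spindle
    return disk_iops / (1 - cache_hit_ratio)

# 16 SATA spindles at ~80 IOPS each with a 60% cache hit rate:
print(round(effective_iops(16, 80, 0.6)))  # -> 3200
```

Which is exactly why a benchmark whose working set fits in cache can report numbers no spindle count could ever justify.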

Vanilla fucked around with this message at 18:01 on Sep 19, 2008

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Catch 22 posted:


Its a nice device, but 60K for a SATA SAN just boggles my mind.
He did show me a 2000 mailbox Exchange 2007 environment running off a SATA one though.

I would consider this to be awful. SATA drives just are not made for the random IO of Exchange.

Can they do it? Sure, if you use enough of them, but I would always recommend SAS/FC drives.

Also, you shouldn't share Exchange spindles with other applications, because everything will just grind to a halt. That means dedicating spindles to Exchange, and if you use large SATA drives you end up with a huge amount of wasted space; Exchange typically wants performance, not massive capacity. 146GB and 300GB FC spindles would be perfect.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Catch 22 posted:

FC anything is pointless for me. I have 60 users, most who dick off all day, read internet stuff, and redraft word docs. Server network load never peaks over 10Mb until I run the backup.

I do hear you about SATA though, as honestly I am scared to move to it, but now that I have been really looking at our numbers, I don't think we need anything high end. Who knows, I might shoot myself after this is all done.

But like the focus of my question, am I overpaying, as a EMC comes out cheaper, even with SAS, but could I be forgetting something? Am I losing out on a feature that I need and would out weigh the cost of a Equallogic? I mean, I am small potatos compared to all of you here, but I like the DR aspect of VMs in ESXi on a SAN.

When I say FC I don't mean an FC network, just FC drives: an FC interface on the paddle. Very common these days on many arrays.

SATA is fine as long as you know how, and when not, to use it. I'd always take some SATA alongside some FC, because you WILL end up needing it!

Dell do resell EMC kit, but they prefer selling EqualLogic, as they get paid more for it and can go lower on price. Dell is all about volume and price.

Let's start from the bottom. What are you after? How much capacity? What are you going to put on it? What do you want? What apps? Typical users? Etc.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Catch 22 posted:

A SAN to host everything, Flatfiles Data, SQL, Exchange and my Local VM store to boot from, and something that will play into my DR plan for replicating offsite at some point set for late next year.

1TB for Data/SQL/Exchange, 500Gigs for VMs, 500Gigs for Snaps, 1TB for extra growth on top of what is already allowed for in the former numbers. My enviroment uses about 550Gigs as of right now, that's EVERYTHING on my tapes and estimated OS/Applications size.

Again -everything, Flatfiles Data, SQL, Exchange and my Local VM store

See above

See above, if you need more detailed kinds of apps, they are all flatfile crap, nothing special.

30 Lawyers 30 Staff/Secretary...they pittle around in word docs, web apps, and emails between counting large piles of money. (i.e. - dick off all day as said before.)

For such a small number of people you could get away with a NAS device that is capable of iSCSI rather than a full SAN. With a SAN you're going to have to cough up for switches, cables and host bus adapters.

I see a lot of large companies these days not even pushing Fibre Channel, and you're a lot smaller than they are. You can also get away with sharing spindles between your key apps; given the number of people you have and the lethargic work style you describe, I'm sure you'll be OK.

Now, as I said above, I'm an EMC guy, so everything I propose will be *SHOCK HORROR* EMC! But there are alternatives.

EMC NX4
Dual Blades for High availability
CIFS license for windows shares
iSCSI License
12 x 300GB 15k SAS drives giving 2.3TB usable in R5
7 x 1TB drives giving 3.6TB usable in R6 (Scales to 60 drives total) - not really needed but I felt like putting them in :)
3 years HW and SW maintenance
Replication possible when you add second box
4 x Fibre Channel ports so you can add FC hosts later on - direct connect if need be.

http://www.emc.com/products/detail/hardware/celerra-nx4.htm

I don't know the kind of discount you'd get, but I'd imagine you'd be looking at about $25k-30k.
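As a sanity check on configs like the one above, here's a rough usable-capacity sketch. Real Clariion/Celerra figures come out lower (vault drives, hot spares and filesystem formatting all take a cut), so treat these as upper bounds; the RAID group splits are assumptions.

```python
# Upper-bound usable capacity for the RAID groups in the config above.
# Vault drives, hot spares and formatting overhead are ignored here.
def raid_usable_gb(drives, drive_gb, parity_drives):
    return (drives - parity_drives) * drive_gb

sas_r5 = raid_usable_gb(12, 300, 2)   # assume two 6-drive R5 groups (2 parity total)
sata_r6 = raid_usable_gb(7, 1000, 2)  # single R6 group (2 parity drives)
print(sas_r5, sata_r6)  # -> 3000 5000
```

The gap between these raw figures (3.0TB / 5.0TB) and the quoted usable numbers (2.3TB / 3.6TB) is the overhead the sketch ignores.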

Vanilla fucked around with this message at 23:21 on Sep 19, 2008

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Mierdaan posted:



Where are you getting that? I think the controllers can be active/active on the Equallogic as well, but may not be that way by default. Also each controller on the Equallogic has 2 gigE ports for iSCSI and 1 gigE for management, which is exactly the same as the AX4 as far as I know...


The AX4 is an active/active configuration with 2 front-end ports per controller. EQL runs active/passive, and although it has '3' ports per controller, one of them is used for shelf-to-shelf communication only (the AX4 does that internally and doesn't list those ports). So really it's 4 active FE ports on the AX4 to the EQL's 2.

Other AX4 benefits over the EQL box: the EQL only provides MPIO path failover, while you should get PowerPath for free with the AX4, giving load balancing, path failure detection, etc.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Catch 22 posted:

But you could use the Cloneing features of a EMC to clone to another LUN and run backups from, making your production LUNs run nicely while backups run if your a 24hour shop. Equallogic would have your backups fight (for the right to par- never mind) for bandwidth and disk access.

Snapview allows for both Clones and Snaps. Same license.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Mierdaan posted:

How does the Snapview SKU work for the AX4? We didn't have it on our quote, but were assured we had the capability to do snapshots. Does Snapview get you some additional functionality we wouldn't have, or is our reseller just including but not as a line item?

I *think* with the AX4 out of the box you can take one snap per LUN and up to 16 snapshots per array. No clones.

With the Snapview license you get clone support, and the limits above are greatly increased.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

BonoMan posted:

I guess I can ask this part of the question as well...what's a good pipeline for backing up from a SAN or NAS?

They want server A to be current work. Server B to be archived work. All of that work backed up to tape for offsite storage and then also have physical backups in house (DVDs or whatever). They would also like a synced offsite server somewhere for fast restoration only.....they really won't pay for jack. :( Or at least not the massive costs it would cost for that.

In this situation people ensure the storage is replicated between both sites.

You then back up at one site and send the tapes to the other site, or to Iron Mountain. That way you're not backing up over a pipeline.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

rage-saq posted:

EMC actually came out onsite and did some performance monitoring to determine IOPS usage patterns before you gave them any money?

I've not actually heard of them doing this, just coming up with random guesstimates for customers based off a little input from the customer. They were horribly wrong (short by about 40% in some cases) that I ended up fixing their order at the last minute before they placed their order (but sometimes after and having to fix it by buying more disks).

Moral of the story: You can't cheat by guessing at IOPS patterns, you really need to know what your usage patterns look like.
Some applications (like Exchange) have decent guidelines, but they are just that, guidelines. I've seen the 'high' Microsoft Exchange estimates be short by about 50% of actual usage, and I've also seen peoples mail systems be 20% under the 'low' guideline.
SQL is impossible to guideline, you need to do a heavy usage case scenario where you record lots of logs to determine what to expect.

This is accurate. EMC will always (read: should always) step away from performance work at the pre-sales stage unless you pay for it or the person used to be in delivery. Only you know your environment best, and while EMC will tell you about all the features, a good pre-sales guy should turn to you for spindle counts rather than work off capacity estimates.
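The spindle-count arithmetic being argued for above usually looks something like this rule-of-thumb sketch (per-drive IOPS and RAID write penalties here are common ballpark figures, not the output of any vendor sizing tool):

```python
import math

# Rule-of-thumb spindle sizing from measured host IOPS.
# Common write-penalty rules of thumb: RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6.
def spindles_needed(host_iops, read_fraction, write_penalty, iops_per_spindle):
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    backend_iops = reads + writes * write_penalty  # writes hit the disks harder
    return math.ceil(backend_iops / iops_per_spindle)

# 3000 host IOPS, 70% reads, RAID 5, 15k FC drives at ~180 IOPS each:
print(spindles_needed(3000, 0.7, 4, 180))  # -> 32
```

The point stands either way: without measured host IOPS and read/write mix, the inputs are guesses and so is the answer.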

Vanilla fucked around with this message at 09:05 on Sep 30, 2008

Vanilla
Feb 24, 2002

Hay guys what's going on in th

rage-saq posted:


EMC unfortunately is very pompous and is under the misguided opinion that block virtualization is a bad which is why they don't have it. A lot of industry experts disagree.

Gotta comment on this, they do have block virtualization :)

...had it for years now..

Invista

Using Invista you can bring in most arrays, migrate online between them, stripe a volume over three different arrays from three different manufacturers, and do all the usual stuff. Invista lets you pick which arrays you want virtualized rather than forcing you down a block-virtualization route on a per-array basis.

What EMC doesn't do is blanket virtualisation of a single array. The EVA sounds great on paper, but there are downsides; I spend a lot of my time with EVA users, old and new. Many admins come from a background of control, and having that taken away by an array that decides for itself where to put things is uncomfortable. When you have a lot on one box this is not good, especially without any form of dynamic tuning other than making new groups.

Other arrays let the admin dictate where LUNs go, down to the exact disks. With the EVA my performance is affected by other apps using the same disk group; people need to give applications predictable performance levels, and that isn't possible in this situation. The only way to guarantee performance is to put an app on its own disk group, which is expensive because you need the spares to go with the group. A lot of people are quite happy to share as long as they can pick, choose, limit, and keep control if something does need moving.

The smallest disk group is 8 disks. So if I wanted just a small amount of dedicated space for my Exchange logs (say 4 disks' worth) I'd have to buy 8 minimum. It's hard enough explaining to procurement why I need to keep Exchange on dedicated spindles, let alone buying 8 spindles just for a few GB of logs! The alternative is to let the EVA do whatever it wants and mix the logs in with other data, but that could drag down the performance of the whole array and is against MS best practice.

Then there's the fact that the EVA only supports restriping, not concatenation. Painful for worldwide 24-hour applications; someone in some timezone is going to get crud performance for a few hours.

You seem to know HP quite well rage-saq, let me know if any of my thoughts above are old or inaccurate. I always like to know. I deal with a lot of people wanting to swap their EVA's out just as HP deals with a lot of people looking to swap CX arrays out, these just seem to be some of the common concerns.

What Clariion and DMX arrays do have is Virtual Provisioning. You can make pools and add capacity with the same ease you would on an EVA, BUT you keep the ability to cordon off drives and resources and to tune the array without penalty. You are essentially picking the part of the array you wish to virtualize. Grab 5TB of ATA, put it in a thin pool, and in three clicks you can add capacity and present LUNs. This isn't exactly block virtualization, but you could argue that all arrays have been 'virtualized' since day one: you're presented with a 'logical' device that the array pieces together from what you've asked for. The EVA and its like just take that one step further and take control of the entire layout.
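The thin-pool idea above can be sketched in a few lines: LUNs are presented at full size, but pool capacity is only consumed as data actually lands. This is a toy model to illustrate the concept, not the Clariion implementation.

```python
# Toy model of virtual (thin) provisioning.
class ThinPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.consumed_gb = 0
        self.luns = {}  # name -> presented size in GB

    def present_lun(self, name, size_gb):
        # Presenting is free: no capacity is consumed until writes arrive.
        self.luns[name] = size_gb

    def write(self, name, gb):
        # Capacity is only burned as data lands; exhaustion means "add drives".
        if self.consumed_gb + gb > self.capacity_gb:
            raise RuntimeError("pool exhausted: time to add capacity")
        self.consumed_gb += gb

pool = ThinPool(5000)              # 5TB of ATA in one thin pool
pool.present_lun("exchange", 2000)
pool.present_lun("sql", 4000)      # 6TB presented against 5TB: oversubscribed
pool.write("exchange", 300)
print(pool.consumed_gb)  # -> 300
```

The oversubscription is the point, and also the risk: watching consumed versus presented capacity becomes the admin's job.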

It's mostly the accounts I work with that drive my opinion of arrays, because I hear it first hand. I have one account that is absolutely chuffed with 3PAR; they love it. They have something else for their high end, but they use 3PAR for the fast, dirty storage needs. Five minutes and they've provisioned storage. They hate the colour though... and the F5 refresh thing...

Vanilla fucked around with this message at 17:45 on Sep 30, 2008

Vanilla
Feb 24, 2002

Hay guys what's going on in th

oblomov posted:

Backing up or replicating (locally) large amount of data, how do you guys do it? So, my new project will require me to backup/replicate/copy/whatever about 100TB of data to tertiary storage.

People usually use array based remote replication tools to replicate their data to another array at another site.

They also use array based local replication to create local copies of their data for backup, test, dev, etc.

Backup is then often over SAN to VTL or straight to tape.

quote:

I will already be doing replicate to remote DR system, but will want to do a backup or replication job to local storage. I ruled out NetBackup with VTL or tapes since that is really unmanageable with this much storage, and now I am trying to figure out what is out there to use. So far, best option seems to be SAN vendor based replication of DATA to nearby cheaper storage SAN.

Most vendors have local replication tools to make 'snaps' or 'clones'.

It's not really clear what you're trying to do.

quote:

So, with NetApp, for example, I could take primary 3170 SAN cluster and then SnapMirror or SnapVault that to NearPoint SAN (basically a 3140 or something). It would be similar with say Equalogic from Dell or EMC. Other then this sort of thing, which requires bunch of overhead for SnapShots, is there any sort of say block-level streaming backup software that could be used (ala MS DPM 2007)?

I still don't follow, I'm afraid :)

quote:

I haven't kept up with EMC recently, but their Celerra stuff looks interesting. Is anyone here familiar with it?

What do you want to know?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

oblomov posted:

Yes, I plan to do that. I am going to have a set of storage with bunch of data at one site and will replicate (through SAN replication technology, SnapMirror, SRDF, whatever) to my hot DR site. In addition to that I need a third copy of data locally for higher level of protection (and at least somewhat delayed write just in case).

Ok, any decent array can do this and will have the ability to take crash consistent copies of things like Oracle and Exchange.

quote:

I need to make a backup of the data or a replica of data to localized storage. This includes a Database (SQL, 3-4TB), a few TB (say 15) of index logs (non-sql, full text search kind), and 60-70TB of flat files. Tapes won't work, there is too much to backup. I was thinking of doing SnapShot replicas but just wondered if there was a better way then doing NetApp sort of SnapMirror/SnapVault (or EMC/Equalogic/Whatever equivalent).

As above. In the EMC world they would use something called Replication Manager. This would manage all the local replication such as cloning and snapping. Just set the times and all the other details and it'll do it the same every day.

It will take consistent copies of SQL, Exchange, Oracle and others. You can then tell it to do whatever you want with that clone. Mount it flat file to a certain server, back it up, and so on.

quote:

How are the new Clarions compared to say a NetApp or Equalogic. I did not like cx3 series much since that seemed to be limited in both management and features compared to the competition, but it seems that Cx4 caught up to NetApp at least and bypassed it on some fronts (from a SAN perspective, not NAS).

Well, above you mention Celerra, which is EMC's NAS line; Clariion is EMC's mid-range SAN.

This can turn into a real bitch fight. I suggest you look at what the market is doing and who is strong where. With regards to NAS, IDC has the two neck and neck on share: EMC/Dell at 39% and NetApp/IBM at 34%, both far ahead of anyone else. So there's good competition there; next after EMC and NetApp come HP and IBM, but they're far, far away at around 5% of market share.

With regards to SAN (excluding iSCSI) it's different: EMC's range is out at 31%, NetApp at 4%. Some of that number will be Symmetrix, but Gartner has always put the Clariion in the lead in its magic quadrants. The CX4 does have some new features such as flash drives, a 64-bit OS, drive spin-down, in-the-box migration (moving data from the fast drives to the slow drives), etc.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

ddavis posted:

I inherited a Dell AX100 that's a bit long in the tooth. It hosts our Exchange database, SQL database, file shares, and some MS .vhds.

I'm looking to replace this soon and go full tilt with a few VMware servers to virtualize our entire environment.
My question is: one of the key benefits of virtualization (at least from my perspective) is that it divorces the server from the underlying hardware. Servers become pretty generic and interchangeable. Why then isn't it common to follow this same line of thinking and run a SAN on commodity hardware? You'd get all the benefits of an enterprise SAN without the proprietary software. Then down the road when you need to replace something that's no longer supported, it's trivial. Or upgrading becomes much cheaper because you can easily embrace whatever disk interface gives best performance/cost ratio. Does it mainly have to do with support contracts? Or am I missing some big picture aspect?


Vendors do have storage virtualisation: IBM has SVC, EMC has Invista, there's Incipient, etc.

Just like VMware, it's a way of divorcing your data from the underlying hardware so you can move it around with a few clicks.

A lot of the time only really big companies, such as banks, go this route. They're the ones buying 10-20 new arrays a year, so moving hundreds of TB of data around is a chore and time-consuming. With virtualisation they can retire arrays that have run out of maintenance by moving the data with a few clicks, without worrying about detailed migration plans, methodologies, etc.

For most companies with a handful of arrays, or one at each site, SAN virtualisation is a waste of time. It's the same effort to move data onto a new array as it is to virtualise it, so when they wheel in the brand new bigger, better array, why spend money on SAN virtualisation?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

Just wanted to point this out to keep terminology sane.

Crash consistent means the data is consistent with a crash and therefor it may not necessarily be what you want (i.e. could be worthless).

You really want to take consistent snapshots and replicate those.

An application owner will understand crash consistent and will know whether it's right for them. It could be a case of just rewinding the logs to the right place.

As said above, adding some software to a SAN solution means it can work with the various applications to take application-consistent copies with ease. With Exchange it will run VSS checks (ESEutil to confirm); with Oracle it will put the database into hot backup mode; etc.

I've never been a fan of replicating snapshots in the NetApp sense because, in the Exchange example, ESEutil is not run at the time each snap is taken. The copy is not verified before replication, so how do you *know* you have a good copy? That's a first rule of DR/BC: know your copy is good. The vendors who are strictest about this (MS, Oracle, SAP) are almost always business critical. If you have a problem, Microsoft will -only- support you if ESEutil has been run against the copy.

I much prefer full copies in general. Snaps on high-change-rate apps such as Exchange are often uneconomical.

As always it depends on the exact circumstances.

Vanilla fucked around with this message at 19:37 on Dec 15, 2008

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

An application owner should say "I want application consistent data" and not "I want crash consistent data."

In the NetApp world this is managed by way of the Snap Manager products to ensure data integrity at the time of the snap. It does the verification you speak of. NetApp replication is very much reliable, as are the snapshots it takes.

If EMC took crash consistent snapshots (it won't with the right licenses I presume) then none of my enterprise customers would keep their Symmetrix systems. They would replace them with something that took consistent snaps.

Lucky for EMC, RecoverPoint provides this functionality and does so quite well.

To qualify, though my last 6 months have been knee deep in NetApp shops, I have worked with EMC technology before and I've only encouraged one customer to swap to NetApp. This was because he absolutely hated his Celerra and EMC support was costing him an arm and a leg for a value he just didn't feel he was getting.

I just wanted to get straight the term "crash consistent" as we usually relate that to a bad thing. I guess if you're doing file servers then crash-consistent is okay, but hardly ideal.


It does exactly this.

The app owners also say 'I want dedicated spindles, zero data loss and 2TB by this afternoon' :)

EMC would use Replication Manager to look after the snaps, clones and apps such as Exchange, Oracle, etc. This usually comes hand in hand with RecoverPoint.

The reason for my comment above is that I've never seen anyone run ESEutil on an Exchange snap. ESEutil places a massive amount of I/O on the Exchange DB, and if you run it against a snap that points back at production, you're just passing that I/O on. Even worse is having many snaps all trying to complete ESEutil and all hammering production!

This isn't a dig at NetApp; I'm all for it, just on full, separate volumes!

Vanilla
Feb 24, 2002

Hay guys what's going on in th

quote:

Separate volumes/RAID groups is an out-dated concept that needs to find it's way out the door in about 99% of the use cases. I realize this is the EMC party line, but they are partying their way out of the door of any organization with <5000 employees. Anyone buying into EMC now ends up regretting it as they grow and replace it with a compellent or a filer or something anyway.

I agree to some extent, but the opposite is also true. If you're completely reliant on the array to place everything for you, you can't really do anything when performance starts to suck apart from buy more disk. How do you guarantee IOPS? I'm not just talking about end users; I'm talking about the Cap Geminis and EDSs of the world who have to guarantee back-end performance.

EMC, NetApp and various other vendors have virtual provisioning / pooled storage. I was working today on a box that had tier 1 as a dedicated layout on dedicated spindles, while the whole rest of the box (40TB+) was a number of huge virtual pools. Best of both worlds: if you want simple pools of storage without worrying about separate volumes and LUNs, it's there.

On a separate subject, have any of the boxes you've worked on had flash drives? If so, opinions?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

oblomov posted:

Don't forget that EMC is also basically 1.5-2x the cost of comparible NetApp

Source? That's a bit of a wild claim. Is it based on one example?

I'd argue that point heavily, given that pricing depends on many things; in many cases I've found the opposite, especially when you ask NetApp for a robust solution.

Gartner are back publishing storage pricing analysis; go and check it out. You'll find all the vendors within a few percent of each other, because hardware really is just becoming a commodity. And, most importantly, EMC ISN'T the most expensive, even at the high end.

Vanilla fucked around with this message at 10:34 on Dec 16, 2008

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:


edit: Don't think I hate all things EMC, I'm a services guy and the product is great. I'm mostly passing on complaints from paying customers and their feeling on the matter. I have a hard time disagreeing in many cases.

Likewise, I don't hate everything NetApp. We have three FAS boxes in our play lab, but as with every bit of kit there are downsides, and I spend all day hearing about the downsides of our kit and that of our competitors... but mostly our kit, because everyone loves kicking the vendor :)

Vanilla
Feb 24, 2002

Hay guys what's going on in th

oblomov posted:

That's based on last 5-6 times we have purchased storage, anything from NetApp FAS2000 series to FAS6000 series and appropriate EMC hardware. I have had yet to see time when EMC was cost effective. I could see the value in really high-end stuff to which NetApp would have to respond with their cluster OS instead of ontap.

If we look at the whole industry, Gartner have carried out research based on analysis of bids across the whole market, and they tell a very different story: per GB, NetApp costs more.

http://www.gartner.com/DisplayDocument?doc_cd=158097&ref=g_rss

For example, 2009 prices:

Clariion CX3-80, 46TB of 1TB drives - $1.95 per GB avg
NetApp FAS6070AS, 46TB of 1TB drives - $3.45 per GB avg

Clariion CX3-80, 10TB of 300GB 15k FC - $6.80 per GB avg
NetApp FAS6070AS, 10TB of 300GB 15k FC - $12.35 per GB avg
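A quick sketch putting those per-GB averages into absolute terms (using decimal TB and the Gartner list-price averages quoted above; real quotes will vary with discounting):

```python
# Turn the quoted per-GB averages into totals for the 46TB configs.
capacity_gb = 46 * 1000          # decimal TB -> GB
cx3_80 = capacity_gb * 1.95      # Clariion CX3-80 avg $/GB
fas6070 = capacity_gb * 3.45     # NetApp FAS6070AS avg $/GB
print(f"${cx3_80:,.0f} vs ${fas6070:,.0f}, ratio {fas6070 / cx3_80:.2f}x")
# -> $89,700 vs $158,700, ratio 1.77x
```

On this particular pair, the roughly 1.5-2x gap claimed earlier does show up, just in the other direction.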

There are vendors who are more expensive and vendors even cheaper than the above but it is a fallacy that EMC is always the most expensive. They all face the same criteria of dual controllers or cluster architecture, support for Unix, Linux, Windows, Vmware and no mainframe support, etc

Your experience is still valid - vendors don't always go in with their lowest price, the price you see differs depending on the size of the deal, how important you are to a vendor, your negotiating skills, etc.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

stuff

The methodology is all in the doc. Point taken on the comparison; I just went for the biggest in EMC's mid-range vs the biggest in NetApp's.

The CX3-80 is not EMC's latest array either (the CX4-960 is), but that isn't on the chart, just as NetApp's 6080 isn't.

FYI, 3040AS vs CX3-80 = $1.85 vs $1.90 respectively (1TB drives): still not a big gap.

3070 vs CX3-80 = $7.30 vs $6.80 (300GB 15k): still not seeing this hugely expensive EMC :)

Sent you a PM

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Jadus posted:



If it's to tape, does LTO4 provide enough speed to complete a backup within a reasonable window? If it's back up to disk, what are you doing for offsite backups, and how can you push so much data within the same window?


Usually people use local backup-to-disk to enable fast backup and restore. They ultimately still send it to tape.

However, this isn't always the case. Vendors these days have disk libraries built specifically for backup; they look and act like a tape library. These also have replication, so you can replicate to a disk library at a different site. It used to be only the banks who did this because (as you mentioned above) they have so much data to push that they needed tens of libraries and, more expensively, tens of 2Gb links.

So now we have deduplication of backup: you back up to disk, it deduplicates the data and then replicates. 30TB becomes 3TB and will happily go over the link.
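Rough transfer-time arithmetic shows why that 10:1 reduction matters (illustrative link speed; protocol overhead and compression are ignored):

```python
# Hours to push a backup set over a WAN link at a given line rate.
def transfer_hours(data_tb, link_gbps):
    bits = data_tb * 1e12 * 8          # decimal TB -> bits
    return bits / (link_gbps * 1e9) / 3600

print(round(transfer_hours(30, 1), 1))  # raw 30TB over 1Gb/s -> 66.7
print(round(transfer_hours(3, 1), 1))   # deduped 3TB         -> 6.7
```

The raw set blows through any overnight window; the deduped set fits comfortably.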

Vanilla
Feb 24, 2002

Hay guys what's going on in th

oblomov posted:

How well do these V-filers work? Haven't tried them yet and we were thinking of trying to front some EMC and Hitachi storage with it.

Just FYI, this would be unsupported by EMC.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
As far as I'm aware, what RecoverPoint does is use Replication Manager as the tool that makes the snaps and clones. Replication Manager has been around for years as a standalone product, but it's integrated/bundled when RecoverPoint is sold.

Replication Manager manages the snaps/clones on the array, as it has the integration brains for Exchange, SQL, VMware, etc. The array-based snap tools are Snapview (Clariion) and TimeFinder/Snap (DMX). For an application such as Exchange it uses VSS; with Oracle it puts the DB into hot backup mode; etc.

Here is a blog post that discusses the two options open to you: array-based snapshots or the VM flavour.

http://virtualgeek.typepad.com/virtual_geek/2009/02/howto---vmware-integrated-and-application-integrated-array-replicas.html

Let me know if anything isn't clear; I've dug the above link out real quick as I've a million things to do :)

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

So what its looking like is that if I want to take consistent snapshots of VMFS volumes, I need to buy Replication Manager correct?

I presume it works by telling vCenter to take vmware snapshots (kicking off VSS in the guests) and when that completes it fires off an array based snapshot? (This is how SnapManager on NetApp works).

The EMC engineer assigned to this account is pretty worthless when it comes to volunteering information and english isn't his first language which only makes things worse.

edit: Many thanks for the blog entry, I see there is a cellera VSA there which will make my life about 1000 times easier.

Correct, get the local engineer / TC to supply you with some documentation. He needs to qualify that RM will do what you want and work in your environment before you buy.

Have a play with the 90 day demo.

It's a great blog for anything EMC and VMware. I'm really getting into the VDI stuff personally.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

bmoyles posted:

Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days?

EMC Centera

http://www.emc.com/products/detail/hardware/centera.htm

Works with hundreds of apps including Symantec Enterprise Vault, DiskXtender, etc.

Also replicates to a second Centera so you don't have to back it up.

If you go with the Parity model you're looking at 97TB usable in a full rack.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Ernesto Diaz posted:

Well I just bit the bullet and purchased 2 HP DL380G6 servers and an EMC AX4 with 12 x 450GB hard drives. I was originally looking at an all HP setup with an EVA4400, but it was double the price of the AX4 and has some really strange requirements.

Can anyone who has worked with an AX4 ease my mind and tell me they're a good unit? I've spent a large chunk of my companies budget on this, and I really hope it doesnt come back to bite me on the rear end.

I'm biased, but it's a good array. The bottom end of the Clariion range.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
Raise a support call with EMC, they'll identify what they are and will advise. I thought gatekeepers were typically 2880kb.

They're assigned per management host (anything with Solutions Enabler). If you have, say, 2 hosts that can manage the Symm you would reserve 12 gatekeepers per host. Each of these should only be assigned one FA port and masked to one host HBA.



So in other news how about EMC acquiring Data Domain?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

Tragic in that datadomain was a neat product. Here's hoping EMC doesn't just shelve it into obscurity. Little surprise they paid so much for it.

It's EMC that has the track record of good integration with its acquisitions. Netapp are the ones who drive almost everything into a wall!

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

So long as it helps Clariion and Symettrix lines its going to work out. However, given that EMC seems to really like the idea of people buying shitloads of disk shelves, I doubt we'll see any inline data deduplication for production storage.

Hopefully EMC proves me wrong and makes the datadomain more than just a VTL offering bundled with Lagato.

Remember you still have to buy the deduplication nodes and technology, so whatever you don't buy in disk you buy in nodes and new technology. The savings aren't just around physical storage but the backend such as tapes. DDUP is target based deduplication, not source based - so it's all about the back end, post process. The savings on tapes alone make most dedup business cases pay off after 12 months.

It has already been announced that Data Domain will be its own full product division within EMC.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

While yes you do; depending on your application it will end up paying for itself in disks over the course of 2-3 years. I'm stalking strictly from a production storage standpoint. Of course, on the NetApp side of the house, de-duplication ends up being a zero cost option but I'd like to see other vendors with similar technology. Particularly for my very interested fibre channel customers who understand that EMC does FCP better than NetApp by a factor of like a billion or something and would rather jump in a bathtub filled with scorpions than buy a filer.

So with Celerra, the EMC NAS, there is already file based DeDuplication (at no extra charge). A combination of single instancing and compression.

This is different to Netapp who do block based dedup.

There are pros and cons to both. The EMC way is more granular and has less performance impact on production systems. The Netapp way can be used in both the NAS and SAN world and can be used on things like VMware.

EMC has a much stronger DeDuplication portfolio today (Avamar, Disk Library DeDuplication, Data Domain) but these items are cost items, unlike what you mention below about zero cost.
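For anyone wondering what the file vs block distinction actually means, here's a toy sketch - not either vendor's implementation, just hashing to show why block-level dedup can collapse partial overlap that whole-file single instancing misses:

```python
# Toy illustration: file-level single instancing keys on a hash of the whole
# file; block-level dedup keys on hashes of fixed-size chunks, so it also
# collapses partial overlap between files. Block size is tiny for the demo.

import hashlib

def file_level(files: dict) -> int:
    """Bytes stored when only whole-file duplicates collapse."""
    unique = {hashlib.sha256(data).digest(): len(data) for data in files.values()}
    return sum(unique.values())

def block_level(files: dict, block: int = 4) -> int:
    """Bytes stored when every identical block collapses."""
    seen = {}
    for data in files.values():
        for i in range(0, len(data), block):
            chunk = data[i:i + block]
            seen[hashlib.sha256(chunk).digest()] = len(chunk)
    return sum(seen.values())

files = {"a.txt": b"AAAABBBBCCCC", "b.txt": b"AAAABBBBDDDD"}
print(file_level(files))   # 24 - files differ, no whole-file saving
print(block_level(files))  # 16 - shared AAAA/BBBB blocks stored once
```

That extra granularity is also why block-level dedup costs more CPU - every chunk has to be hashed and looked up, which feeds into the performance worries further down the thread.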

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

I'd argue that depending on application, you might see zero performance impact from de-duplication. I've got a particular customer in mind who's virtualized several thousand webservers and keeps a couple hundred on a volume with ASIS. His overall storage footprint is <100GB for every 200 or so webservers. With a couple PAM modules he's actually performing better than if he wasn't using de-duplication.

Also keep in mind that while netapp is block level; it works very well on file level data. If you've got 400 copies of pain.jpg on your filer, you're only going to consume what one file does. The neat thing is if thats a frequently accessed block, then good odds its going to be sitting in cache on the box.

This is the sort of stuff thats going to start driving more units in people's datacenters. A lot of people are looking to cut costs wherever they can and if they can pay 50k in software to avoid adding a couple 30k shelves which consume space and power, then they're going to do it. This is the driving force behind every VMware engagement I've been on since the middle of last year. Spend more on software to avoid hardware costs and save datacenter space/power.

I want to see EMC break ground in this area with the CX4 line particularly; as I want an alternative in the event that NFS/iSCSI isn't going to be sufficient or they don't want to wait the duration it can sometimes take for a filer head to realize his partner poo poo himself. Since I'm not a reseller, I don't care what storage someone buys as long as they're happy with it and they aren't going broke trying to maintain it.

Keep in mind, I'm speaking in the context of online storage here, not nearline or backup devices. I think part of NetApp's plan was to try to leverage DD's de-dupe stuff on the fly and try to make it work with production storage.

However, if you deal primarily with DMX/V-MAX systems then your particular customers might not care about saving storage capacity whenever possible. My customers range from guys who think an AX4 is hot poo poo to someone who's got ~100 or so DMX4 systems so I have a much broader interest.

I've found most people too scared to turn on DeDuplication in online production on anything other than unimportant volumes, through simple fear of potentially affecting performance. That, and the fact it has to be rehydrated before being backed up, and backup durations are already growing too fast without another bottleneck.

At the moment the general attitude towards both EMC and Netapp's free deduplication is that it's a way of slowing the inevitable growth, but it's not really a revolution. They use it on their IT home directories and their own test LUNs but not on business units.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

I think "most people" should really be qualified. As I've pointed out, I can think of a number of pretty sizable customers that are actively using de-duplication on revenue generating production systems. Depending on the application it can pretty much "solve" growth issues.

De-deduplication with virtualization is pretty much a home run in a LOT of cases. I would bet money that you're using products from companies that are doing just that right now.

Even still, people will find value in only buying say 1 or 2 disk shelves a year instead of 5 or 6 if de-duplication will just slow the growth.

Not doing the big I-am, but I spend all day talking to different customers, mostly in finance, about all aspects of virtualisation and storage in a pre-sales manner, and honestly DeDup on production is still a hot potato. A lot won't deploy it on production even though it is free - a simple code upgrade and you're ready.

We have a lot of guys who spend a lot of time tuning storage, who work out application IOPS requirements and then how this can go onto an array - the RAID overhead, dedicated spindles, etc. To add in a factor that can affect performance in a way you can't predict or calculate is just frightening to them at this stage.

It'll come along slowly, but in my opinion dedup on production is making a very slow entrance to the world, and the only people I've seen use it in production are using it only on systems deemed unimportant. Usually this is to grab themselves another few weeks before they need another shelf, not to hugely reduce storage.

I've had one customer turn off Celerra DeDuplication because those above didn't like the idea of it and files were their life (law firm), and a Netapp customer turn it off because it can't be used with active snapshots; as a worldwide operation they couldn't find a time where they could allow the high CPU load, and without a method of throttling CPU use it was deemed too risky.

As the products mature they will find answers to these issues and it will become more acceptable, but both the Netapp and EMC offerings are very much seen as GEN1.

Vanilla fucked around with this message at 08:48 on Jul 30, 2009

Vanilla
Feb 24, 2002

Hay guys what's going on in th

paperchaseguy posted:

I work for EMC, there's at least one other person here who does, and several other professionals. You really won't clutter up the thread, it's not super active. Fire away with any questions.

Ahhh another EMC brother :downs: and joined three days before me! Weird.

Weird Uncle Dave - on the MS site there's a tool that will give you their recommendations for disk based on users, workload, etc. However if it's running fine on three disks and everyone's happy then three disks it is.

Leave perfmon running for longer - 9AM is going to be the key time for things like Exchange. If you can also use perfmon to get items such as the read / write ratio of your Exchange environment, that would help translate into the IOPS actually hitting the backend (writes have an overhead with R5 due to the parity calc and additional IO required).
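As a sketch of that overhead, the usual rule-of-thumb RAID write penalties (4 back-end I/Os per random write on R5: read data, read parity, write data, write parity) turn front-end perfmon numbers into back-end IOPS like this - illustrative numbers, not output from any sizing tool:

```python
# Translate host (front-end) IOPS into back-end disk IOPS using the common
# rule-of-thumb RAID write penalties. Figures are illustrative.

RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(host_iops: float, read_fraction: float, raid: str) -> float:
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * RAID_WRITE_PENALTY[raid]

# 1000 host IOPS at a 2:1 read/write ratio (two thirds reads) on RAID 5
print(round(backend_iops(1000, 2 / 3, "raid5")))  # 2000 back-end IOPS
```

Divide the back-end figure by the IOPS each spindle can sustain and you have a rough spindle count, which is why the read/write ratio matters so much for R5 sizing.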

Vanilla
Feb 24, 2002

Hay guys what's going on in th

adorai posted:

We are currently looking at replacing our aging ibm san with something new. The top two on our list are a pair of Netapp 3020s and a 2050 for our offsite or a pair of EMC Clarions. I am interested in looking at a dual head Sun Unified Storage 7310 and a lower end sun at our offsite. The numbers seem to be literally half for the Sun solution, so I feel like I have to missing something on it.

For usage, the primary purpose will be backend storage for about 100 VMs, some cifs, and some iSCSI/fibre storage for a few database servers.

Any thoughts from you guys?

Personally I say avoid SUN. Can't go wrong with Netapp or EMC, both good kit.

EMC Clariion will do as 1000101 says but EMC has a Unified array that will do the whole shebang - FC, iSCSI, CIFS, NFS.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Bluecobra posted:

Probably because he works for EMC?

Yes, but if that was the only reason why would I recommend Netapp in the same breath? Do we wub them <3 <3? :)

TobyObi posted:

What are your issues with the Sun kit?



So my issues with the SUN kit.... :crossarms:

Firstly there’s a lot missing in the SUN range that people have come to expect and that the EMC and Netapp boxes are offering.

Replication – Not too sure about Netapp, but EMC replicates based on RPO. Got a file system that’s pretty important? Set a 1 minute RPO. It’ll even tell you if it can’t maintain this. With the Sun box it’s the most basic replication ever. No throttling or any form of control – I think it’s free. It’s also asynchronous only, no sync.
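To be clear what "replicates based on RPO" means in practice, here's a minimal sketch of the check involved - the names are hypothetical, nothing to do with EMC's actual implementation:

```python
# Minimal sketch of an RPO check: compare the age of the newest replicated
# point against the RPO target and flag when it can't be maintained.
# Hypothetical function names - not any vendor's API.

from datetime import datetime, timedelta

def rpo_violated(last_replica: datetime, rpo: timedelta, now: datetime) -> bool:
    """True when the newest remote copy is older than the RPO allows."""
    return now - last_replica > rpo

now = datetime(2009, 8, 27, 12, 0, 0)
print(rpo_violated(now - timedelta(seconds=45), timedelta(minutes=1), now))  # False
print(rpo_violated(now - timedelta(minutes=3), timedelta(minutes=1), now))   # True
```

The point of RPO-driven replication is that the array does this continuously and throttles or alerts for you, instead of leaving you to schedule transfers by hand.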

Then there are a few things that the Netapp and EMC arrays do have – such as deduplication (at no extra charge on either), the ability to have a wide range of drive sizes and type (not just 1TB drives or a very limited selection), Fibre Channel, RAID 1, RAID 10, RAID 3.

I could go on forever. No kind of application snapshot integration such as with NTAP SnapManager or EMC RM. Performance monitoring that only goes from CPU to user (what about the actual backend?? The file system isn’t everything).

They’ll have a new range out eventually. Naturally they’ll have Fibre Channel by then and you’ll probably find the usual story – no upgrade path, gotta buy new.

Then onto Sun as a business. Never have I seen a company have so many fantastic ideas and fail to bring them to market. They bring these items out, persuade a few thinkers to adopt these revolutionary products and then drop them, leaving their key sponsors holding a product that is end of life. People just have no confidence in them.

Vanilla fucked around with this message at 17:40 on Aug 27, 2009

Vanilla
Feb 24, 2002

Hay guys what's going on in th

TobyObi posted:

To be honest, this sounds more like a gripe with a certain model, rather than an entire company. I have a pair of Sun arrays, that are "entry level" that support all of these things.

Indeed it is a gripe - this is supposed to be their super product to destroy EMC and Netapp but is so far off the mark and missing so many key, simple things that it's laughable!

It's the typical Sun approach - build home-grown hardware around ZFS. It's so good it'll sell itself! :suicide:


quote:

I mean, I have an EMC AX150i. I haven't written off the entirety of EMC, just the shitheel who thought that he did a good job whipping that up in about 20 minutes.

35 minutes*.



*included 7 minutes cigarette break.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

oblomov posted:

Speaking of storage, anyone have experience with fairly large systems, as in 600-800TB, with most of that being short-term archive type of storage? If so, what do you guys use? NetApp wasn't really great solution for this due to volume size limitations, which I guess one could mask with a software solution on top, but that's clunky. They just came out with 8.0, but I have 0 experience with that revision. What about EMC, say Clariion 960, anyone used that? Symmetrix would do this, but that's just stupidly expensive. Most of my experience is NetApp with Equallogic thrown in for a good measure (over last year or so).

All the time, depends exactly what you need it for - just to stream to and delete shortly after? More details?

Typically see this on Clariion with 1TB drives. When 2 TB drives come along the footprint will be a lot less.


Vanilla
Feb 24, 2002

Hay guys what's going on in th
The largest LUN the Clariion can create is 16 exabytes.

Basically, as long as your OS can address it the Clariion can create it. To expand the LUN you would add metaLUNs, but this is transparent to the host / app - you'll need metaLUNs to build your huge LUN in the first place.

If you are just making one huge LUN you have the option of using ALUA, which means the LUN can be accessed from both Storage Processors and get the performance of both (active / active). This avoids one SP owning the LUN while the other SP does nothing.
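For the block-count maths: the capacities Navisphere reports are in 512-byte host-visible blocks (the extra 8 bytes of the 520-byte sector are Clariion integrity data and never seen by the host), and 16 exabytes is just what 64-bit byte addressing tops out at. A quick sketch:

```python
# Convert Navisphere/SCSI block counts to capacity, assuming the standard
# 512-byte host-visible block.

BLOCK = 512

def blocks_to_mb(blocks: int) -> int:
    return blocks * BLOCK // (1024 * 1024)

print(blocks_to_mb(209_715_200))  # 102400 -> the 100 GB LUN from earlier
print((2**64) // (1024**6))       # 16 -> 64-bit byte addressing gives 16 EiB
```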

2TB drives are hitting the consumer market now, so not long until the usual vendors pass them through their QA.

*Edited for clarification.*

Vanilla fucked around with this message at 10:56 on Sep 19, 2009
