oblomov
Jun 20, 2002

Meh... #overrated

complex posted:

I assume you have installed the MPIO Multipathing Support for ISCSI thinger. http://technet.microsoft.com/en-us/library/cc725907.aspx

I have to say that Server 2008 iSCSI and MPIO performance is just plain better, even compared to 2.07 on Server 2003. Plus, Microsoft is finally supporting dynamic disks. Funny story: DPM 2007 requires dynamic disks (well, unless you really, really want to deal with creating all the partitions manually). Good going, Microsoft!
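In case it helps the next person: once the MPIO feature is installed, claiming iSCSI LUNs under the inbox DSM can be scripted. A sketch from memory (double-check the flags against your build before running it):

```shell
# Register the Microsoft iSCSI bus type with MPIO and claim all
# iSCSI-attached disks (may require a reboot).
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

# Afterwards, list what the DSM has claimed and check the
# load-balance policy per disk.
mpclaim -s -d
```

Then set round robin (or failover-only) per LUN from the iSCSI initiator control panel.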


oblomov
Jun 20, 2002

Meh... #overrated

Also, on VMware and NetApp: RSM is only supported on iSCSI, I believe. Fiber support should be out shortly, but NFS is not in the picture just yet.

oblomov
Jun 20, 2002

Meh... #overrated

oblomov posted:

Also, on VMware and NetApp: RSM is only supported on iSCSI, I believe. Fiber support should be out shortly, but NFS is not in the picture just yet.

Oh, and on the subject of NetApp and iSCSI: I have been running a 12K-user Exchange environment on a clustered NetApp 3020 with 10 shelves of 15K rpm disks, and it's running like a champ. Two iSCSI connections from each Exchange cluster node (2 Exchange clusters, of 4 and 3 nodes) go to a pair of Cisco 3750 switches. Each NetApp head has 6 iSCSI ports, trunked into two aggregation VIFs that are paired in failover mode. I built this setup 2.5 years ago and haven't had any major issues.
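For anyone who wants to replicate the VIF layering, it looks roughly like this in 7-mode ONTAP (interface names and the address here are invented for the example):

```shell
# Trunk three ports each into two multi-mode (aggregation) VIFs.
vif create multi aggr_vif1 e0a e0b e0c
vif create multi aggr_vif2 e0d e0e e0f

# Pair the two trunks under a single-mode VIF, so if the switch behind
# the active trunk dies, the standby trunk takes over the same IP.
vif create single iscsi_vif aggr_vif1 aggr_vif2
ifconfig iscsi_vif 10.10.10.10 netmask 255.255.255.0 up
```

Remember to mirror the vif create / ifconfig lines into /etc/rc so the config survives a reboot.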

oblomov
Jun 20, 2002

Meh... #overrated

oblomov posted:

Oh, and on the subject of NetApp and iSCSI: I have been running a 12K-user Exchange environment on a clustered NetApp 3020 with 10 shelves of 15K rpm disks, and it's running like a champ. Two iSCSI connections from each Exchange cluster node (2 Exchange clusters, of 4 and 3 nodes) go to a pair of Cisco 3750 switches. Each NetApp head has 6 iSCSI ports, trunked into two aggregation VIFs that are paired in failover mode. I built this setup 2.5 years ago and haven't had any major issues.

Although I must say I have been disappointed with NetApp lately. The ability to add shelves and additional fiber loops to clustered heads is quite lacking; you can't do much live. Also, their sales engineers and sales people over-engineer environments quite a bit. You have to call them out on that and negotiate them down.

Btw, my company does have some MD3000i arrays in remote offices and lab environments, and they are great little iSCSI SANs. You can also attach 2 more shelves of MD1000 to each MD3000i array. I am running 3 nodes (4x4 cores each) with about 100 VMs off one of these setups in a lab with no issues. Performance is obviously not the greatest, but it's a lab environment, so it's good enough.

oblomov
Jun 20, 2002

Meh... #overrated

Anyone have experience with LeftHand Networks, specifically their rebranded HP (or Dell 2950) appliances? Went to a couple of demos, and their premise seems pretty slick. Kind of like 3Par but cheaper.

I especially liked the ability to add/remove nodes almost on the fly.

oblomov
Jun 20, 2002

Meh... #overrated

So, nobody has any experience with LeftHand? Googling did not turn up much either. I guess I'll ask them for some references and will put the product in the lab for some stress testing.

oblomov
Jun 20, 2002

Meh... #overrated

Maneki Neko posted:

Shows how long it's been since I used SnapDrive.

While the share was necessary before SnapDrive 6 came out, you did not need a CIFS license. You could just go through the CLI to create the necessary share and put permissions on it; you just did not have the option in the GUI.
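For reference, the CLI route is something like this on 7-mode (share and account names are made up for the example; check the `cifs shares` syntax on your ONTAP version):

```shell
# Create the hidden share SnapDrive expects, pointed at the volume
# that holds the LUNs.
cifs shares -add sdrive$ /vol/exch_vol -comment "SnapDrive access"

# Drop the default everyone ACL and grant just the SnapDrive
# service account.
cifs access -delete sdrive$ everyone
cifs access sdrive$ DOMAIN\svc_snapdrive "Full Control"
```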

oblomov
Jun 20, 2002

Meh... #overrated

Cultural Imperial posted:

Q. Do I need a CIFS license on the storage system to run SnapDrive?
A. No. SnapDrive no longer requires a CIFS share for the host to access the storage system volumes.

And...

Minimum required version of Data ONTAP: 7.1

Table of licenses you need, depending on what you want to do.
code:
License      Requirement if you want to...
iSCSI        Use iSCSI-accessed LUNs
FCP          Use FCP-accessed LUNs
SnapRestore  Restore LUNs from Snapshot copies
SnapMirror   Use the SnapMirror option
FlexClone    Enable volume clone functionality on flexible volumes
SnapVault    Use SnapVault for archiving LUN backup sets
MultiStore   Create LUNs on vFiler units
Also, please note that a SnapDrive license is required for every SAN host/initiator.

Another little known fact: You do NOT need FCP to run NDMP over fiber.

oblomov
Jun 20, 2002

Meh... #overrated

Backing up or replicating (locally) a large amount of data: how do you guys do it? My new project will require me to back up/replicate/copy about 100TB of data to tertiary storage.

I will already be replicating to a remote DR system, but I will also want a backup or replication job to local storage. I ruled out NetBackup with VTL or tapes since that is really unmanageable with this much storage, and now I am trying to figure out what else is out there. So far, the best option seems to be SAN-vendor-based replication of the data to a nearby, cheaper storage SAN.

So, with NetApp, for example, I could take the primary 3170 SAN cluster and then SnapMirror or SnapVault that to a NearStore-class SAN (basically a 3140 or something). It would be similar with, say, Equalogic from Dell or EMC. Other than this sort of thing, which requires a bunch of overhead for snapshots, is there any sort of block-level streaming backup software that could be used (a la MS DPM 2007)?
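The NetApp side of that is pretty simple to stand up; roughly this (filer, aggregate, and volume names are invented):

```shell
# On the destination filer: create and restrict the target volume,
# then pull the initial baseline from the primary.
vol create repl_vol dst_aggr 5000g
vol restrict repl_vol
snapmirror initialize -S prod3170:data_vol nearline:repl_vol
```

Then a line in /etc/snapmirror.conf on the destination, e.g. `prod3170:data_vol nearline:repl_vol - 10 * * *`, keeps it updating at 10 past every hour. SnapVault is the same idea, but qtree-based with longer retention.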

I haven't kept up with EMC recently, but their Celerra stuff looks interesting. Is anyone here familiar with it?

oblomov
Jun 20, 2002

Meh... #overrated

Vanilla posted:

People usually use array based remote replication tools to replicate their data to another array at another site.

They also use array based local replication to create local copies of their data for backup, test, dev, etc.

Yes, I plan to do that. I am going to have a set of storage with a bunch of data at one site and will replicate (through SAN replication technology: SnapMirror, SRDF, whatever) to my hot DR site. In addition to that, I need a third copy of the data locally for a higher level of protection (and at least a somewhat delayed write, just in case).

quote:

Backup is then often over SAN to VTL or straight to tape.

Most vendors have local replication tools to make 'Snaps' or 'clones'.

It's not really that clear what you are trying to do.

I need to make a backup of the data, or a replica of the data, on localized storage. This includes a database (SQL, 3-4TB), a few TB (say 15) of index logs (non-SQL, full-text-search kind), and 60-70TB of flat files. Tapes won't work; there is too much to back up. I was thinking of doing snapshot replicas but wondered if there was a better way than the NetApp sort of SnapMirror/SnapVault (or the EMC/Equalogic/whatever equivalent).

quote:

What do you want to know?

How are the new Clariions compared to, say, a NetApp or Equalogic? I did not like the CX3 series much since it seemed limited in both management and features compared to the competition, but it seems that the CX4 caught up to NetApp at least, and bypassed it on some fronts (from a SAN perspective, not NAS).

oblomov
Jun 20, 2002

Meh... #overrated

Vanilla posted:

Ok, any decent array can do this and will have the ability to take crash consistent copies of things like Oracle and Exchange.


As above. In the EMC world they would use something called Replication Manager. This would manage all the local replication such as cloning and snapping. Just set the times and all the other details and it'll do it the same every day.

It will take consistent copies of SQL, Exchange, Oracle and others. You can then tell it to do whatever you want with that clone. Mount it flat file to a certain server, back it up, and so on.


Well above you mention Celerra which is EMC NAS. Clariion is EMC Mid-Range SAN.

This can turn into a real bitch fight. I suggest you look at what the market is doing and who is strong where. With regards to NAS IDC has EMC/Dell and Netapp neck and neck with regards to share, EMC/Dell at 39% and Netapp/IBM at 34%. Both far ahead of anyone else. So some good competition there, next to EMC & Netapp is HP and IBM but they're both far, far away on around 5% of market share.

With regards to SAN (excluding iSCSI) it's different. EMC's range is out at 31%, Netapp at 4%. Some of that number will be Symmetrix but Gartner has always put the Clariion in the lead in magic quadrants. The CX4 does have some new features such as Flash Drives, 64bit OS, drive spin down, in the box migration (move data fro the fast drives to the slow drives), etc.

Yeap, looking at various things now. I am quite familiar with NetApps, run a few of their clusters right now, and am much less familiar with EMC. Anyhow, appreciate the response, man.

oblomov
Jun 20, 2002

Meh... #overrated

Nomex posted:

I may be a little late with this. You should look into a data de-duplicating solution for the backup and tertiary storage. Check out Data Domain. They can be optioned to mount as SMB, NFS, FC or iSCSI. I've had one that I've been playing with for a little while now. My 300GB test data set deduplicated down to 101 GB on the first pass. Speed is pretty good too. 3GB/min over a single gigabit link. As it just shows up as disk space, it's supported by pretty much every backup product you can think of too.

Data Domain is quite good; we are using it for some of this on other projects. Here, the tricky part is that I am just going to replicate snapshots over, I think, and not bother with backup software since that would take forever. I don't think I can get, say, NetApp talking to Data Domain for snapshots. Unless there is some cool continuous-backup product out there that I am not aware of, tertiary storage of a different brand won't work here (I guess there are some storage virtualization products out there, but they are pricey).

Also, NetApp does have dedupe, which is pretty good, but it kind of sucks for iSCSI due to the funky way NetApp does LUNs on top of WAFL.

oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

The app owners will be pretty livid in many cases when you give them a crash consistent state when you restore a snap. That was my only jib. He's going to expect it to be consistent should he need to restore. You can't depend 100% on transaction logs or an 'fsck' to bring things back to sanity.


Well, different environments do things differently. You can't always assume that every customer will follow your best practices. That said, NetApp is pretty flexible with how it uses spindles. I can spread a LUN over, say, 40 spindles, making the hit from eseutil pretty much nil. I've seen this proven in MANY large environments (over iSCSI, even).

I just want to clarify terminology. If EMC came in to a customer and said "we'll take crash-consistent snapshots," then we'd steer that customer away, since they'd be getting about the same value out of a 3Ware RAID box assembled from Supermicro parts.

Since they don't say such crazy things, they're normally in our top-3 vendor pick. Unfortunately for EMC, they're often passed over due to lack of decent manageability. This is also the number one reason I'm seeing them ripped and replaced. Nobody cares that LUN X is only on spindles Y-Z these days, especially if performance is comparable.

Separate volumes/RAID groups is an outdated concept that needs to find its way out the door in about 99% of use cases. I realize this is the EMC party line, but they are partying their way out of the door of any organization with <5000 employees. Anyone buying into EMC now ends up regretting it as they grow, and replaces it with a Compellent or a filer or something anyway.

Don't forget that EMC is also basically 1.5-2x the cost of comparable NetApp (talking Clariions here, forget about DMX). That said, to me, NetApp is having the same issue now: it's all filer-based, so in mid-level engagements, Equalogic, Lefthand, Compellent, or, say, 3Par is offering a more future-proof network.

I've got Equalogic and Lefthand in the lab now, doing a vendor "play-off," and they are both much more manageable than NetApp can be. We have Operations Manager, DFM, and FilerView (with some CLI love going) on the NetApp side to basically match the built-in tools from the two newcomers. The only reason I don't have Compellent in there as well is that they are small, and my management would rather have support from Dell or HP.

Also, Lefthand's functionality for iSCSI with snapshot reserve on write is pretty nifty (so is replicating those thin-provisioned snaps). That said, NetApp is still very versatile and allows us to run multiple services on the same box. I just wish they got their gear in order and worked on the software front, especially integrating their "cloud OS" with ONTAP.

oblomov
Jun 20, 2002

Meh... #overrated

Vanilla posted:

Source? That's a bit of a wild claim. Is this based on one example?

I'd argue that point heavily given that pricing is dependant on many things and in many cases i've found the opposite, especially when you ask for a robust solution from Netapp.

Gartner are back publishing storage pricing analysis, go and check it out - you'll find that all the vendors are within a few % of eachother because hardware really is just becomming a commodity....and most importantly EMC ISN'T the most expensive - even in the high end.

That's based on the last 5-6 times we have purchased storage, anything from the NetApp FAS2000 series to the FAS6000 series and the comparable EMC hardware. I have yet to see a time when EMC was cost-effective. I could see the value in the really high-end stuff, to which NetApp would have to respond with their cluster OS instead of ONTAP.

oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

I can't see the report but I'd be curious to see the methodology.

Mostly because it doesn't line up to what I'm seeing in the real world. Did they completely omit thin provisioning from the NetApp side or the fact that you don't have to carve out raid groups and waste NEARLY as many disks as you do with EMC?

I'm guessing no.

People buy NetApp to save money and EMC to maximize performance. At least among my customer base (banks/FSI, media shops, ASPs).

I would also argue it's flawed in that no one with a 10TB storage need is going to buy a CX3-80/6080. That said, the comparison was made; I just need to understand the methodology that reached that result.

(qualified this)
I'd also like to point out that the 6070 doesn't exist anymore.

There are other details as well that we should cover to understand why this isn't an apples to apples comparison.

First and foremost, the 6080 is about twice the system the CX3-80 is (making the nearly-twice cost per gig slightly less surprising). Side by side, the NetApp supports almost twice the disk capacity, has nearly 4 times the RAM, and a lot more expandability. We're talking 480 spindles backed by 16GB of RAM (two SPs) vs 1100+ spindles backed by 64GB of RAM (two heads).

If you want a real apples-to-apples comparison, use the NetApp FAS3070 or its replacement, the 3170 (or the 3140, which is closer still). We're talking a HUGE difference in price here.

This also ignores the fact that NetApp gives you iSCSI, NFS, CIFS, and FCP all in one box.

I am with 1000101 here. That's pretty much what we saw. Once you throw in RAID-DP, thin provisioning, and dedupe, EMC can't come close on utilization. Plus, the older CX3 Clariions underperformed the newer NetApp boxes. Not sure about the new Clariions, but they are still more expensive than NetApp (and that was before some big discounts from NetApp). Now, if you are talking real high end, then yeah, IMO, DMX will outperform a 6080. Although I would love to see a GX-based 6080 clustered system do its thing.

oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

The awesome thing is; I can actually do that with one of these crazy virtualized storage backends as well. Regardless of whether I'm doing it EMC style or like the rest of the modern world, when performance sucks the solution is always to buy more disks. With everyone else's products though, I may not have to quite as soon.

It is trivially easy for me to create a dedicated volume on a 40+ drive aggregate servicing a single LUN on NetApp.

Ok, I am a bit confused here. Unless you do a dedicated aggregate, you are sharing disk I/O across multiple RAID groups on NetApp, at least as far as I know. Is there a way to force a volume/LUN to sit on a particular RAID group, separate from everything else in an aggregate? If so, that would be awesome.


quote:

Also keep in mind that I'm thinking about a market that spans from 50 employees on up to and including a good chunk of the 5,000 employee orgs. Anything larger than that, and people are more than happy to staff a bunch of excel experts to manage their storage. These guys are buying DMX for performance and lots of it.

I find overall capacity to be a useless metric in determining that sort of thing. Given that one of my customers is <1000 employees and maintains about 50PB of NetApp (and about 12 of EMC). They like the flexibility and ease that NetApp provides and only bought the EMC for a VMware project that ultimately ended up being housed on NetApp NFS.

Welp, my company is either 10x or 20+x the max size you quoted above (depending on a US or worldwide view), and we have yet to need a DMX. Hell, we don't really have too many 6080 clusters throughout the US either. Now, we have sold DMX to our customers (we are also an IT shop from a certain standpoint, but that's not the core business) when they are hell-bent on EMC, but other than that, unless one has very specific apps, NetApp performance can more than cover what 99% of companies need, even ones much larger than 5K employees.

quote:

None yet, most of my customers who would need that level of IO already have 1000+ spindles that they bought prior to the SSD shelves offered by EMC. It will be a while before they validate the shelves and put them in production.

I like the look of new Sun boxes that front-end SATA with SSDs. Now, that makes sense.

quote:

I'd love to see this sort of thing more often though. It would be a hell of a thing to leverage with automated tiered storage.

Other than Compellent, does any vendor have automatic tiering built in? Sun kind of does it with SSD/SATA hybrid storage on the newest stuff, but nobody else as far as I know. Now, NetApp, EMC, and others have other solutions that can do tiering, but it's not the same.

I would love to be able to buy a shelf of SSDs, add a bunch of either fiber/SAS or SATA shelves, and shove it all into a single aggregate that would take care of tiering/caching/etc. dynamically by moving blocks to faster or slower storage as required.

One thing I wish NetApp would get in gear on is thin provisioning management. It's terrible, pretty much non-existent. That's one thing that newcomers like Equalogic and LeftHand do much better. Hell, even with DFM or OpsManager, you can't get a good view of a system and figure out what's thin provisioned and what's not, or how much space remains at the volume or LUN level, etc. Meh.

quote:

edit: Don't think I hate all things EMC, I'm a services guy and the product is great. I'm mostly passing on complaints from paying customers and their feeling on the matter. I have a hard time disagreeing in many cases.

I like DMX and Centeras, and I think the CX3s were a waste of money. Now, with the CX4, the situation might have changed, but VMware jacked up the price on that.

oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

You have it, as far as I know you'd basically create say a 20 drive aggregate, throw one volume on it with one LUN and be done with it.

This doesn't make sense to do because you're burning a whole lot of space; but this is effectively what you're doing with EMC anyway, so I guess it all lines up.

I'm not sure that you can map a LUN to specific disks any other way; but it is trivially easy to create an aggregate and just put one LUN on it.

Oh, never mind then, I see where you are going. I have had to do single-purpose aggregates (still multi-LUN) for our Exchange and SQL implementations: one aggregate for DBs, one aggregate for logs (different filers), with SnapVault to a separate filer. There were enough users and DBs in multiple clusters to warrant the separation.
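For anyone following along, the "one workload per aggregate" setup is just a few commands on 7-mode (names, sizes, and the initiator IQN below are examples only):

```shell
# Dedicated 20-disk RAID-DP aggregate so nothing else touches these
# spindles.
aggr create exch_db_aggr -t raid_dp 20

# One volume, one LUN, mapped to the Exchange nodes' igroup.
vol create exch_db_vol exch_db_aggr 2000g
lun create -s 1800g -t windows /vol/exch_db_vol/db_lun
igroup create -i -t windows exch_nodes iqn.1991-05.com.microsoft:exch-n1
lun map /vol/exch_db_vol/db_lun exch_nodes
```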

oblomov
Jun 20, 2002

Meh... #overrated

I've been looking at both LeftHand and Equalogic. Got both set up in my lab. Lefthand software appears more flexible; Equalogic seems more "rack-dense." Equalogic is a bit less expensive since Dell can discount quite a bit.

Both appear to work just fine. Lefthand in particular has FlexClone-type technology, thin replication, well-done thin provisioning, multi-target replicas, and a couple of other things. The downside is that for redundancy you are looking at a somewhat large number of boxes. On the other hand, you are not wasting space on spare drives.

Equalogic offers better reliability in a single SAN, and perhaps higher throughput (3GB per shelf vs 2GB for LeftHand). I think I/O is a bit higher per Equalogic controller as well; however, considering you would normally have 2 LeftHand shelves for each Equalogic controller, it's a wash.

Power/cooling I am not sure about. Lefthand has more boxes, but it is more efficient. Equalogic does offer the 5500-series box with 48 drives in it, which is great if you just need a bunch of space (think Sun with its x4500 series).

With Equalogic, if you lose a box, you lose all volumes striped across it; hence dual controllers, dual everything, pretty much. You would basically have to fry the power supply (so it short-circuits the system) or something. The Lefthand system approach is much more flexible: they have multiple network RAID levels, and if you lose one box, you are still good to go.

One thing I am not sure about with Lefthand is how the SAN will handle itself if there is a power/network failure. With Equalogic, we had power pulled a couple of times in the lab while running VMware with Exchange 2007 CCR clusters inside, and everything came back just peachy. With LeftHand we are going to test this next week.

Also, I think Equalogic will provide you better support than Lefthand, or at least I've had better support and responsiveness with Dell than HP (both on premium support agreements: whatever the new Gold contract is called for Dell, and the HP equivalent).

Edit: One thing I forgot about Equalogic, which kind of sucks, is that they can't do multi-hop or multi-target replicas. This sucks and means you can't replicate between, say, two DR sites and then do a separate snapshot/replica to tertiary "backup" storage. That said, the boxes appear very solid; I don't buy the BS about reliability on these. We've been running these things in a lab for 60 days now and have a fairly decent VMware lab deployment on top of them. Seems fairly fast and very solid (and that's with constant switch drops and power interruptions due to it being a lab environment).

Intrepid00 posted:

Anyone have experience with Lefthand boxes? We are looking to buy our first SAN for our office. We currently have like 12 servers we are going to consolidate down to two (yeah, I've heard it too much). We are without a doubt going to use Hyper-V from MS. We use it now for development platforms and find it fast and reliable (as long as you don't put Windows XP on it; it just isn't aware enough, and the disk and network I/O suffer. Windows 2003 and Windows 6 fly on it).

LeftHand Networks NSM box

Interesting box. The more storage you add, the more performance you get. You have one storage pool that is striped across all of their boxes. I've seen reviews where with one box they were maxing out at around 1K IOPS, and after another was added and striped between the boxes, the IOPS were getting to a little more than 1.9K. More boxes, more power. The "framed" SANs, as Equalogic and Lefthand like to call the Dell AX and MD3000i's, have a relatively low IOPS ceiling compared to the potential Lefthand is showing us, and what other tech sites have confirmed. Now, we probably won't need the insane amount of IOPS they are showing us, but the future expansion there would be nice for whatever is thrown our way down the road.

One thing that concerns me, which I will get to ask the guy about tomorrow, is whether their snapshot manager will do off-host backups, so that the server being backed up with Symantec will not be moving the backup data through its LAN ports; instead, the backup server would mount the snapshot volume and delete it when done.

Equalogic

I've already seen some posts saying don't do it. Besides their hosed-up pricing chart (a storage increase from one model to another that would only add 1K to the price if we did it ourselves costs 7K+ from them), I haven't seen any good reason why not to. It does have a controller cap that is a lot lower than Lefthand's, but even then I don't think we will hit it. I also don't get their retarded active/passive controller push. I'd rather get two separate boxes with a single controller each and replicate between them.

What I'd like to know is how fault tolerance is handled with them. I know I can slap two Lefthand boxes on the network (same with DataCore), and if one fails, the other steps right up and picks up where the other left off. We plan on having this in place by the end of next year. The Equalogic guy was more vague about it while touting that they get better IOPS, which, as some people have pointed out, is debatable. I'll find out more tomorrow, because I am also getting a free lunch out of them. Depending on what they have to say, we may or may not go with them, and if it is a no, it will be because of their outlandish price climb, which looks more like a hockey stick than a slope if you graph it out, unlike the other vendors.

oblomov fucked around with this message at 05:29 on Dec 20, 2008

oblomov
Jun 20, 2002

Meh... #overrated

Chucklehead posted:

Wow you guys this is an awesome thread.

I have been working myself to death trying to research a proper SAN, and I am going to have some choice words to share with my asshat vendor partners.

We're a 1000 person organization and if I hadn't started asking questions we very quickly would have been sold an over priced FC solution based entirely on storage capacity.

I'm pretty sold on NetApp right now, feels like I just need to fill in the blanks in regards to sizing and specific features.

Any huge red flags about a NetApp filer? HP guys in town are trying to tell me NetApp is the devil.

NetApp is solid. The reason I am looking at other stuff is that a few newer solutions are more flexible from both a management and a downtime perspective.

Keep in mind, NetApp is a dual-controller (either in separate boxes or the same chassis) cluster design with a traditional fiber backbone running to the shelves (you can have multiple loops). Thus, you are facing downtime if you have to, say, remove a shelf in the middle of a loop, or for a few other reasons.

There is nothing wrong with this design, but between a few frustrating sessions with NetApp support (usually they are very good) and downtime for maintenance, I started looking around. That said, I am not entirely sure there are better solutions. NetApp is good, and especially if you need to do not just SAN but also CIFS and NFS, it's really good.

Just be prepared for some CLI love; FilerView sucks, and DFM/OpsManager are pricey and still not completely there compared to Equalogic/Lefthand.

IMO, HP is the devil :P.

oblomov
Jun 20, 2002

Meh... #overrated

Intrepid00 posted:

The Equalogic box we looked at had 4x the maintenance costs of the Lefthand. We also found Equalogic to cost a lot more, but you may be larger and Dell isn't dicking you on the price.

Well, for us Equalogic pricing is very good (quite a large company). Equalogic is actually cheaper than Lefthand for the solution I was looking at. Service cost was very reasonable (we always buy 24x7 premium support), comparable with LeftHand (after discounts) or NetApp (after heavy discounts).

quote:

It would appear to me that this is true as well. The one thing I did notice is that the Equalogic box will not do replication on the fly, with the two clusters always being exactly the same. The Equalogic box will only do scheduled replication.

Not sure what you mean. Do you mean you can't just kick off replication but need to schedule it? Haven't tried, thanks for the tip, will test it in the lab.

quote:

The Lefthand boxes tend to have fewer drives and will not max out the 2GB limit of the Equalogic box. I don't even think the PS5000 will get above 2GB unless you are throwing 15K RPM SAS at it.

Yeap, I doubt either will max out 2GB even with SAS 15K RPMs (that's what we specced).

quote:

Lefthand is owned by HP, but they are run as a separate company. We would be calling Lefthand for support, not HP. Same goes for Equalogic. Also, Dell support can be pretty lovely even with Gold, depending on the time of day you call.

If you are looking at the NSM 2120, ask about the G2. They just started shipping them, and it has almost double the performance.

Well, this is the setup for now, but you can bet that HP is working on integration. Our usual Dell calls go very well, since we have account managers we can call and bitch at if things don't go the right way, and then poo poo gets escalated.

If you don't mind answering, how big is your LeftHand setup? I am kind of leaning LeftHand over Equalogic at the moment (but NetApp may win out). I'm trying to get some references from a large customer out of Lefthand (basically, our management wants to see a large rollout, understandably), and I am not sure Lefthand has many decent-sized customers (beyond one large government customer doing archival storage on it).

oblomov
Jun 20, 2002

Meh... #overrated

Intrepid00 posted:

I refuse to believe a company that sells ink at $40 bucks a pop is the devil

Seriously though, don't get caught up on that HP owns lefthand. They are just owned by them, but like Dell they are leaving them alone.

In both of these cases, the situation will not last long, IMO. It's just not an efficient way of doing things, especially for HP (it's not like LeftHand is high-end storage for HP).

oblomov
Jun 20, 2002

Meh... #overrated

Intrepid00 posted:

On the other hand we are just a around an 50 user software company. We fit in more with the Small Business.

Ahh, yeah, it's much harder to drive Dell down in price then. We literally spent a couple million this year with them (actually, maybe even a bit more than a couple), so we have a huge negotiating position with Dell. Just need to mention the magic words "Hewlett-Packard" and the price goes down.

quote:

I believe you have to schedule it (5 minutes being the minimum), and you can't have the replication actually be a mirror if you so choose. The Lefthand box, meanwhile, has varying levels of network RAID and replication.

Makes sense, and yeah, I found Lefthand to be much more flexible software-wise. I am just basically not sure I can trust it on a large scale; that's my fear. Hence the search for references. Hopefully Lefthand can come through on that, since it would give us another good choice.

quote:

Only that monster 45-drive one that Equalogic has might, but that's more for raw storage needs than the performance you are looking for.

Yeap, that's an interesting box Equalogic has there. Kind of like the Sun 48-drive server. I would think that 3GB is quite enough there, since it's really bulk storage; they won't have enough controller performance to really drive this until next year's upgrades (whenever those come).

quote:

Unfortunately I can only tell you that you will most definitely have a much larger SAN network. You may also want to look into it, but I think Equalogic has a controller cap while the Lefthand boxes have almost none.

I appreciate the discussion; it's hard to find a lot of info on the web comparing Equalogic and Lefthand. Stability with either is what I am worried about at larger scales. Scalability-wise both will be similar, because with Lefthand you are looking at double the boxes (at Network RAID-2), and according to Lefthand, around 20-25 nodes you are basically tapping out the expandability, which by the time you are done is comparable to a group of 12 for Equalogic (which is supposed to get expanded next year). Now, you can argue that the cluster side for Equalogic is only limited to 8 boxes inside each group, but you can still copy stuff between clusters. Dunno, each has pluses and minuses. I have the feeling we'll go NetApp for the moment though and then do a much longer eval next year. That will also let us check out, say, 3Par.
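That "double the boxes at Network RAID-2 roughly matches a group half the size" claim works out as simple arithmetic. A quick sketch — the per-node capacities and the 20% in-box RAID overhead are purely illustrative assumptions, not either vendor's actual specs:

```python
# Rough usable-capacity comparison between the two scaling models.
# Per-node raw capacities and RAID overhead are illustrative assumptions,
# NOT vendor specs.

def lefthand_usable_tb(nodes, raw_per_node_tb):
    # Network RAID-2 mirrors every block across two nodes,
    # so usable capacity is roughly half of raw.
    return nodes * raw_per_node_tb / 2

def equallogic_usable_tb(members, raw_per_member_tb, raid_overhead=0.8):
    # EqualLogic keeps RAID inside each member; assume ~20% lost
    # to parity/spares (an assumption, varies by RAID level).
    return members * raw_per_member_tb * raid_overhead

# ~24 LeftHand nodes at Network RAID-2 vs a 12-member EqualLogic group
print(lefthand_usable_tb(24, 10.0))    # 120.0
print(equallogic_usable_tb(12, 12.0))  # ~115.2
```

With those made-up numbers, doubling the LeftHand box count lands in the same usable-capacity neighborhood as an EqualLogic group half the size, which is the comparison being made above.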

oblomov
Jun 20, 2002

Meh... #overrated

Well, I would be real careful connecting all that stuff to a 2050. IMO, this is either a multiple-NetApp-clusters or 3140 (and maybe higher) type of setup. Here is the thing to think about - you do NOT need FC connectivity to the filers from your servers. I am running SQL Server, Oracle, Exchange, File Services, VMware (a couple dozen hosts at one site here), and a whole bunch of other crap. The number of users ranges from a couple thousand (for file services) to about 14K for Exchange to a 30K-user Sharepoint instance running off SQL, plus multiple heavy production DBs.

Guess what, it's all iSCSI or, in the case of Oracle and one of the VMware clusters, NFS. Fiber is overrated for the vast majority of environments. For an org your size and what you are looking to do, fiber is just overkill.

If you go with a single SAN scenario (and even with multiples), you just have to make sure to lay out network ports (or fiber if you so desire) appropriately, get enough shelves in different aggregates (i.e. you don't want to put Groupwise on the same disks as, say, your financial DBs) and generally use common sense.

Replication, btw, works, but it's not a panacea. SnapMirror has a fairly sizable impact on CPU (do it at night). You have to make sure to get appropriate software to quiesce your apps. I have no clue, for example, if NetApp offers GroupWise support.

For the VMware servers, do you mean each one will have 8x1Gb connections for iSCSI, or is this 8x1Gb total amongst all of them? You can use NFS, btw, with NetApp and VMware. There are some pro/con scenarios for either iSCSI or NFS.

Chucklehead posted:

Edit: As usual I wasn't quite specific enough to start out with. Here is what I have envisioned so far:
6 ESX hosts connected via 8x1G Ethernet. A 2 host Novell Netware Groupwise cluster, connected via 1G Ethernet, potentially FC. A 2-4 host MS SQL cluster connected via 1G Ethernet, potentially FC. A 2 host Oracle cluster connected via NFS (1G Ethernet). Our DR site will be pretty similar, just fewer boxes. I say potentially FC because the performance may require it - but can't we just use ethernet teaming to deliver more iSCSI/NFS bandwidth?

We are re-doing our DC network as well so anything I need to implement from the network side to do this right can also be designed now.

oblomov
Jun 20, 2002

Meh... #overrated

rage-saq posted:

Sounds like someone is wishing they got a 6 Hour Call To Repair warranty instead.
It's mighty expensive though, nearly double 4 hour response!

Rage, can you clarify on this? Is this through HP or Lefthand support? Personally, I think I am going to go with NetApp after all. Equalogic's lack of dual-target replication may just be a deal breaker, and I do not want to deal with the sort of support issues that were talked about with the Lefthand/HP combo.

Brent78, the PS5000XV seems to be pretty good as long as you understand what you are and are not getting. I don't see a big deal in having 2 Lefthand boxes doing exactly the same thing as 2 Equalogic boxes. Lefthand wins a bit on redundancy (how often do SANs burst into flames, after all) and software flexibility, and Equalogic wins a bit on performance, rack space, power consumption and mainly on support. If I was sure of Lefthand support I would go with them, but I am not, and I haven't gotten references from them on a large customer install base.

Chucklehead, as far as different filers go, keep in mind the SAN you are getting, number of ports, number of shelves, etc... Also, there is a good chance that you will be taking the whole thing down for maintenance at some point. Realistically, as long as you have enough ports/bandwidth and SAN controller I/O throughput, disk I/O will be what matters for multiple types of work profiles.

Say you have Groupwise, SQL Server and a VMware cluster to put on a SAN. You could get a single 3170 SAN and enough shelves to form multiple aggregates to serve all of this, or you could get two different 3140 SANs instead, one for Groupwise and SQL and another for VMware. You won't have to have as many aggregates per NetApp active-active cluster, and you will have more flexibility during upgrades/maintenance and actually more connectivity options. Price-wise, I am not sure how this would come out; dual 3140s will probably be more expensive though.

Also, IMO, 10 ports per ESX host is kind of overkill. That's a lot of cost right there. What do you really need? You need 1 for control crap, vcenter, etc..., 1 for VMotion, say 2-3 for iSCSI and a couple for front-end. Shouldn't need more than 6-8 per host, IMO. This is 20GHz you said, so dual quad 2.5? I doubt you will saturate more than 2Gb from, say, iSCSI or NFS. You can go FC, but that's just overkill on cost, IMO, for your situation.
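Tallying that per-host NIC budget out (the counts are just the suggestion above, not any VMware requirement):

```python
# NIC budget per ESX host, per the breakdown above.
# Counts are the post's suggestion, not a VMware requirement.
port_budget = {
    "management/vCenter": 1,  # "control crap, vcenter, etc..."
    "VMotion": 1,
    "iSCSI": 3,               # 2-3 suggested; taking the high end
    "VM front-end": 2,        # "couple for front-end"
}

total_ports = sum(port_budget.values())
print(total_ports)  # 7 -> inside the suggested 6-8 range, well under 10
```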

oblomov
Jun 20, 2002

Meh... #overrated

rage-saq posted:

Well in a few weeks their support will be through HP.
6 hour Call To Repair is expensive because it is the highest level of support you can get. You call in an issue and HP basically promises it is resolved in 6 hours. That means they stock spare parts for you at the local warehouse, and make sure they have enough certified techs in your area to cover you should a problem arise. This is basically pulling out all the stops so your poo poo gets fixed ASAP.

Hmm... I will hit up my rep on this. The other route is frankly unacceptable. One of the big reasons for even looking at something other than NetApp is less downtime and quicker potential fixes. If it takes 2 days to repair a SAN, that's simply unacceptable. The fact that their support is going to be through HP is what concerns me most, actually. IMO, HP support is simply not up to par considering their pricing.

oblomov
Jun 20, 2002

Meh... #overrated

rage-saq posted:

HP support options like 4 hour 24/7 etc. are just as good as anyone else's these days on Standard equipment; if you really need to have insane support you buy 6 hour CTR. LeftHand should be getting shelved under Standard equipment, so the same Proliant level warranty options will apply.

If you REALLY want crazy support, you need to step up to Enterprise equipment; stuff like EVAs, ESLs and XPs have insane warranty options. This is the kind of equipment that competes directly with Netapp, as Netapp predominantly makes Enterprise level equipment.
EVAs are awesome (installing one right now as a matter of fact) and have all sorts of warranty/support/prefailure replacement + auto dialhome options that just don't appear on the Lefthand/MSA level Standard storage lines.

If you get an XP you can get a support contract with HP for 7 9's uptime. That's about 3 seconds of downtime per year. This is what runs the NASDAQ and that's the kind of warranty they have.

Yeap, we run various NetApps from the older FAS270 to the newer 6080 beasts. I know all about the Enterprise support. In the end, whatever they say, it all comes down to techs. The 7 9's uptime is certainly doable with enough SAN hardware and replication, but it's very, very costly. The reason for me looking at LeftHand or Equalogic is flexibility. Traditional SANs are very inflexible, and whatever they say about downtime, a mid-range Enterprise solution (even, say, a NetApp 6080) has some basic vulnerabilities that are inescapable. Not sure about the higher-end HPs, not familiar with them, but I've seen DMX boxes drop whatever EMC says.

oblomov
Jun 20, 2002

Meh... #overrated

Chucklehead posted:

My experience is that the sales guys are going to lie right to your face and hope you are ignorant and/or lazy and don't call them on their bullshit.

Read this thread - the information in the OP is terrific.

Know what you need, you have to have a pretty good idea of what kind of data is in your environment.

I'll do a full post after I get the thing working.

Also, what are you looking to do with the SAN, how proficient are you in SAN technology in general, and how much time will you have to manage said SAN? For example, NetApp can do NFS, CIFS, and Fiber in addition to iSCSI, so if any of that looks attractive, LeftHand simply can't do it. Price-wise, the difference is not going to be large either way. LeftHand has a better interface and more flexible design, while NetApp has proven architecture/support/stability. You can lift and replace NetApp controllers to move your data to bigger boxes; with LeftHand you can just add more units (up to say 20 or so).

I got both Equalogic and LeftHand in the lab, and both are pretty good. My company has lots and lots of NetApp storage so I deal with it on a daily basis (provisioning, monitoring, configuring, etc...). Hell, that Sun hybrid storage looks pretty nice too. A friend of mine got Compellent and that's also pretty solid (small company though, so who knows what will happen down the road).

oblomov
Jun 20, 2002

Meh... #overrated

Intrepid00 posted:

I'm not too thrilled with running NFS or any file service right off my storage. I'd like to minimize its attack surface.

Also, unless you have long sequential reads/writes (movies) you don't need fiber either. And with 10Gbit cards on the market, if you have no fiber already, starting now is probably a waste.

There is no attack surface if the VLANs are non-routable with NFS. It's much the same as iSCSI, really. Some apps behave better with it (Oracle, VMware with NetApp). For fiber, I think the transmission protocol is better, but then you can (or will be able to) do fiber over ethernet I guess. Also, there are some heavy data scientific/engineering apps where the heavier iSCSI protocol won't function as well, IMO. That said, yeah, if you don't have a fiber investment now, it's most likely an unnecessary option.

oblomov
Jun 20, 2002

Meh... #overrated

rage-saq posted:

They are called Thatchers.


The fact that there are consultants/vendors out there that allow this kind of behavior is appalling. You are dropping a lot of cash for a very advanced piece of equipment that is just supposed to work; if it doesn't, you should return it and get something that does.
I do enterprise storage design/consulting/implementation primarily around HP products and I can honestly say all of my deployments work 100% as advertised with no mysterious performance/reliability problems. That's the whole loving point of doing this. If the product couldn't deliver as advertised I would be the first one trying to get the customer a refund, as well as not recommending it in the future.

My personal opinion is that this is what you get when you go with generic server equipment and then use some kind of general purpose operating system + software package to accomplish this kind of low level stuff. A lot of Sun's entry products utilizing ZFS seem to fit this kind of bill along with other stuff I'm not a fan of like LeftHand etc. Your mileage may vary of course.

Funny enough, we just had Sun come in and do a dog and pony show on the new 7000 series storage. The hardware seems interesting, but the software seems very, very raw. I think we are going to pass on the Sun option. On Lefthand, I dunno, I would say I disagree, at least to the extent that I have been able to test in the lab. I've got a little 3 node cluster in the lab and so far it behaves as advertised. We put some VMware, Exchange, and SQL volumes on it, turned things off to test Network RAID-2/3, dropped power on the whole cluster, hit it with iometer, jetstress, etc., and so far it has been behaving fairly well. Their software is also simply awesome.

I do a lot with NetApp SANs, and while they are pretty stable and "Enterprise" grade, we still get little glitches, etc... I've seen EMC drop the ball too. I don't think anyone is immune, including HP. Hell, everything I've seen from HP concerning pre-sales and post-sales support sucks. My company recently transitioned to HP based desktops/laptops. Our reps are not responsive, support is kind of bad, etc. We have top tier support and we are a largish company with 50K people in the US alone and over 100K worldwide. This is a big and fairly important contract, I'd think, even for a company as large as HP.

At this point, the main reason my management does not want to consider Lefthand seriously is that they are now HP. Our desktop/laptop issues may not translate to SANs or servers, but it still tells you something about the company in general and their approach to the Enterprise.

oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

You won't; but at the same time if your head goes you lose access to all of your data.

Why didn't your VAR pick up on this? Are they an authorized NetApp reseller? I might question them in regards to why they didn't go through PCMall's parts list with a fine toothed comb.

That said, why the 2020 instead of the 2050?

Anyway, a second head will cost you ~7500ish if I recall. Chat with your VAR and maybe they can cut you a "pity" discount but don't count on it.

Worst comes to worst, contact NetApp directly and complain. Explain to them that you explicitly laid out your requirements and were sold something different.

In the future, don't buy storage from the internet equivalent of Best Buy. Even if you don't hire a consultant, find a VAR that's authorized by netapp and, if need be, take the quote back to netapp to make sure you're getting what you expected.

I don't think the 2020 can do clustering; you have to pony up for the 2050 for that. At least that's what I recall from when we last got a few of each for remote offices. The 2050 is basically almost 2x the size and has space for 2 internal controllers and 20 drives.

oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

It supports clustering. We had to upgrade one of our client's 2020's to support it but it does work. The only fault with the 2020 is that it's basically a dead end platform. The 2050 is generally a much better fit for people and has a lot more expandability.

What do you do then, just get another 2020 and cluster the two? The 2050 has the capability to do clustering within a single chassis.

oblomov
Jun 20, 2002

Meh... #overrated

Catch 22 posted:

What?!? Please give your definition on "Small SAN"?

$100K is smallish. I think anything up to say $150K is on the small side. To give you an example, we just paid about half a mil for a 6080 NetApp SAN with a whole bunch of FAS storage, a few hundred TB. And that's really a mid-size, not high-end, SAN, IMO, although it is heading toward the high-end.

On the LeftHand, I am still testing it in the lab and it's pretty good from everything I am seeing. Don't expect huge IO though, i.e. 1600-1800 IOPS per G2 node (SAS). So the max you can get is maybe 40K IOPS out of a cluster of 20-25 boxes.
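Those per-node figures multiply out as follows (same numbers as above; this assumes linear scaling, which is optimistic once Network RAID write amplification kicks in):

```python
# Ballpark aggregate IOPS from the quoted per-node figure.
def cluster_iops(nodes, iops_per_node):
    # Assumes IOPS scale linearly with node count - optimistic,
    # since Network RAID write amplification would reduce this.
    return nodes * iops_per_node

low = cluster_iops(20, 1600)
high = cluster_iops(25, 1800)
print(low, high)  # 32000 45000 -> "maybe 40K IOPS" sits inside this band
```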

oblomov
Jun 20, 2002

Meh... #overrated

brent78 posted:

Just wanted to post that I got a shelf of EqualLogic 5000VX setup in the lab and I'm very impressed with its performance. It's configured with 16 x 300GB 15k disks, active/backup controllers and all 6 gigE connected to a pair of 3750's. Using jumbo frames and receive flow control as well. I'm achieving 200 MB/s writes with ease and it barely sweats with mixed random reads/writes. This shelf as configured was 40k, not the cheapest thing out there but on par with 15k SAS. The equivalent NetApp or EMC solution would have been double considering all their retarded licensing costs. Ohh you want iSCSI, caa-ching.

Equalogic is not bad at all performance-wise. Management is straightforward, support is good and the hardware is pretty neat. However, I must say that I like LeftHand more, mainly for the flexibility of the software there. Also, to be fair to NetApp (less so with EMC), you will see pricing converge closer together as you "fill up" on nodes. With NetApp and EMC (and Hitachi, HP EVA, etc...) you pay a lot more up front, but in the end, once you start scaling up, pricing is going to be much closer to (if still more than) Equalogic (and LeftHand). So once you compare say a NetApp 3160 with a whole bunch of shelves and a similarly large Equalogic deployment, prices are much closer than you'd think at the start.

There are other advantages to Equalogic (Lefthand too) compared to traditional SANs though.

oblomov
Jun 20, 2002

Meh... #overrated


How well do these V-filers work? Haven't tried them yet and we were thinking of trying to front some EMC and Hitachi storage with one.

oblomov
Jun 20, 2002

Meh... #overrated

Catch 22 posted:

You would get an app consistent snap first (using SnapView to manage and set this up) then RecoverPoint replicates at the blocklevel (clones) the LUN. Flatfiles would not need the snap first.

Edit: I also just shot off a email to my EMC guys to make sure there is not another way with the new SE version.

I just saw a presentation by EMC last week replicating Oracle RAC and it was pretty drat impressive. Now, it looks like managing the various caches may be a bit of a hassle, and it's the usual EMC CX4 stuff (I still like NetApp better), but for replicating copious amounts of data, this is some very good stuff. The cool part is they track the changes and only replicate changed blocks at a certain interval (with Async), say every 20-30 min, so unlike SnapMirror, the amount of data after compression and folded writes (I think that's what the EMC engineer called the process) appears to be going 3-1 for Oracle. Not too shabby. That said, it's not cheap and requires a whole bunch of fiber connections.

oblomov
Jun 20, 2002

Meh... #overrated

TobyObi posted:

What are your issues with the Sun kit?

Does Sun storage have any VSS snapshot software for Windows now? Back in December they did not, and neither did they have any VMware-compatible snapshotting or anything else. I really liked their hardware, but their software was just way too raw for my liking.

I went with Equallogic instead for that particular project, and we've gotten a bunch of NetApp boxes for a few other projects since. However, if the Sun software stack has improved (or you don't need it and NFS/iSCSI on Linux/Solaris is good enough), I'd definitely take a look. I did hear bad things about support, and heck, even their pre-sales tech support was meh.

oblomov
Jun 20, 2002

Meh... #overrated

Yeap, it probably took us 5-6 months each of the last couple of times we were selecting a storage solution. There were usually 3-4 vendors in the running and only 1 was selected each time, so there were plenty of losers, but everyone understood that this is what it takes in the Enterprise.

oblomov
Jun 20, 2002

Meh... #overrated

Speaking of storage, anyone have experience with fairly large systems, as in 600-800TB, with most of that being short-term archive type storage? If so, what do you guys use? NetApp wasn't really a great solution for this due to volume size limitations, which I guess one could mask with a software solution on top, but that's clunky. They just came out with 8.0, but I have 0 experience with that revision. What about EMC, say a Clariion 960, anyone used that? Symmetrix would do this, but that's just stupidly expensive. Most of my experience is NetApp, with Equallogic thrown in for good measure (over the last year or so).

oblomov
Jun 20, 2002

Meh... #overrated

Vanilla posted:

All the time, depends exactly what you need it for - just to stream to and delete shortly after? More details?

Typically see this on Clariion with 1TB drives. When 2 TB drives come along the footprint will be a lot less.

2TB drives are not out yet as far as I know. It's mostly just to stream data to (sequential writes mostly) and then archive it for a few weeks. I was basically thinking Clariion or the NetApp 6000 series. The only problem being that 15/16TB is the max volume size (not sure on the Clariion, could be wrong there). However, today I found out that the size needs to be double, i.e. 1.5PB or so. My guess is that management will look at the cost and abandon the project, but hey, I got to come up with something, and it's sure to be an interesting design exercise.

Will also look at Isilon, never heard of them before.
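For a sense of why that volume size cap is the sticking point, taking the doubled 600-800TB requirement as roughly 1.5PB:

```python
# How many max-size volumes a ~1.5PB archive would need under a 16TB cap.
# 1.5PB is the doubled 600-800TB estimate from above, rounded.
import math

archive_tb = 1500       # ~1.5PB
max_volume_tb = 16      # traditional (32-bit aggregate) Ontap limit
volumes_needed = math.ceil(archive_tb / max_volume_tb)
print(volumes_needed)   # 94 -> hence "clunky" without a software layer on top
```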


oblomov
Jun 20, 2002

Meh... #overrated

Cultural Imperial posted:

Ontap 8 running in "classic" mode overcomes the 16 TB aggregate limit.

http://www.ntapgeek.com/2009/09/64-bit-aggregates-in-data-ontap-8.html

GX or cluster mode as it is known in ontap 8 has several limitations, including an inability to snapmirror.

Yeap, talked to my NetApp reps and it looks like Ontap 8 will do this. Also talked to some EMC guys, and Vanilla is correct as well; Clariion will handle the LUN size. There is no way in hell that I would be doing LVM stripes and such.
