H110Hawk
Dec 28, 2006

Wicaeed posted:

Well, that's part of the problem: I can't access the CLI. Rather, the OS won't load all the way.


I'm guessing it's a CPU issue

Oh, jeez. I misread the filer model you had. F720 is a very old filer. It looks like your PCI cards are out of order. Try moving your NVRAM card to slot 1 and putting the FastEthernet card in Slot 2 (or whatever).

Don't bother calling NetApp. Shoot me an IM; I tried sending you one and it said "refused by client."

Edit: \/ I tried sending you an IM on AIM.

H110Hawk fucked around with this message at 22:55 on Dec 11, 2008


Wicaeed
Feb 8, 2005
Probably because I don't have pm on SA :(

I'll try moving some poo poo around and see what happens, although there is only 1 free slot on a 720 :)

unknown
Nov 16, 2002
Ain't got no stinking title yet!


I'm guessing your problem is the NVRAM card - failing due to dead batteries.

(We've had that problem in the past.) Radio Shack/CC should have them - IIRC they are "CR2422" or something similar; just pop them out and replace them.

Oh yeah, rip out all the other cards first to get it booted up properly.

Wicaeed
Feb 8, 2005
heyyyy, it breathes!

quote:


Alpha Open Firmware by FirmWorks
Copyright 1995-1998 FirmWorks, Network Appliance. All Rights Reserved.
Firmware release 2.3_a2

Memory size is 256 MB
Testing SIO
Testing LCD
Probing devices
Testing 256MB
Complete
Finding image...
Loading 4 disk@0
100%
Starting .............................................................................

NetApp Release 5.3.6R1: Wed Jun 14 18:21:14 PDT 2000
Copyright (c) 1992-2000 Network Appliance, Inc.
Starting boot on Thu Dec 11 21:21:01 GMT 2008
Scanning for disk drives: ..............
Configuring disk drives: 8.1 8.14 8.2 8.10 8.3 8.8 8.13 8.5 8.11 8.12 8.4 8.6 8.9 8.0
Disk 8.2 is reserved for "hot spare"

1 disk is reserved for "hot spare".
Restoring parity from NVRAM
Loading volume vol0
Thu Dec 11 21:21:25 GMT [consumer]: Beginning parity recomputation on volume vol0, RAID group 0.
Replaying WAFL log
Thu Dec 11 21:21:29 GMT [rc]: NIS: Group Caching has been disabled
Thu Dec 11 21:21:36 GMT [rc]: Ethernet e0: Link down.
add net default: gateway 192.168.10.1
NFS server is running.
Thu Dec 11 21:21:48 GMT [rc]: Ethernet e0: Link down.
Thu Dec 11 21:21:48 GMT [rc]: relog syslog Thu Dec 11 21:17:28 GMT [power_low_monitor]: Power Supply #2 Failed

Thu Dec 11 21:21:48 GMT [rc]: relog syslog Thu Dec 11 21:17:28 GMT [power_low_monitor]: Power supply is in degraded mode!

Thu Dec 11 21:21:48 GMT [rc]: NetApp Release 5.3.6R1 boot complete. Last disk update written at Thu Dec 11 21:17:27 GMT 2008

Thu Dec 11 21:21:48 GMT [httpd_acceptor]: HTTP MIME Types file (/etc/httpd.mimetypes) is missing

Password:

Heh, only problem is that I have no idea what the password is going to be on it, and neither does the guy I bought it from.

I assume there's going to be a way to reset it to something I want, correct?

namaste friends
Sep 18, 2004

by Smythe
Unknown may be right.
code:
NetApp Release 5.3.6R1: Wed Jun 14 18:21:14 PDT 2000
Copyright (c) 1992-2000 Network Appliance, Inc.
Starting boot on Thu Dec 11 19:11:20 GMT 2008
Scanning for disk drives: Unsupported NVRAM size: 0MB
...is pretty funny. Can you open the filer up to check if the NVRAM card is in there?

I can send you the field service guide if you give me an email address. The NVRAM card is located in slot 9.

H110Hawk
Dec 28, 2006

Wicaeed posted:

I assume there's going to be a way to reset it to something I want, correct?

I know nothing at all about ONTAP 5. In 6+ you would hit Control-C sometime after starting and before it prints the OS banner, then pick the option to reset the password. In ONTAP 5.3.6R1, according to NOW:

quote:

How to reset the filer password
Reset the password if you forget it

If you forget your filer password, reset the password by using the system boot diskette. To avoid security problems, take care to limit access to the system boot diskette.
Procedure for resetting the password

Complete the following steps to reset the filer password:

1. Reboot from diskette as described in "Booting from system boot diskette."
2. When the boot menu appears, enter 3 to choose Change Password.
3. When the filer prompts you for a new password, enter it at the prompt. The system prints the following message:

   Password Changed

   Hit Return to reboot:

4. Remove the diskette from the filer's diskette drive and reboot the filer by pressing the Enter key.
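If the diskette menu works anything like the later Ctrl-C boot menus, the whole exchange looks roughly like this (a sketch from memory; the exact wording on 5.3 will differ):
code:
Please choose one of the following:
(1) Normal boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize all disks.
Selection (1-4)? 3
Please enter a new password:
Please enter it again:
Password Changed
Hit Return to reboot: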

oblomov
Jun 20, 2002

Meh... #overrated

Nomex posted:

I may be a little late with this. You should look into a data de-duplicating solution for the backup and tertiary storage. Check out Data Domain. They can be optioned to mount as SMB, NFS, FC or iSCSI. I've had one that I've been playing with for a little while now. My 300GB test data set deduplicated down to 101 GB on the first pass. Speed is pretty good too. 3GB/min over a single gigabit link. As it just shows up as disk space, it's supported by pretty much every backup product you can think of too.

Data Domain is quite good; we are using it for some of this on other projects. Here, the tricky part is that I think I am just going to replicate snapshots over, and not bother with backup software since that would take forever. I don't think I can get, say, NetApp talking to Data Domain for snapshots. Unless there is some cool continuous-backup product out there that I am not aware of, tertiary storage from a different brand won't work here (I guess there are some storage virtualization products out there, but they are pricey).

Also, NetApp does have dedupe, which is pretty good, but it kind of sucks for iSCSI due to the funky way NetApp does LUNs on top of WAFL.
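Turning it on is at least easy - ONTAP 7G style, volume name made up:
code:
filer> sis on /vol/vm_vol
filer> sis start -s /vol/vm_vol
filer> df -s /vol/vm_vol
The -s makes the first pass scan the data already in the volume, and df -s shows how much you actually saved.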

ddavis
Dec 12, 2008
I inherited a Dell AX100 that's a bit long in the tooth. It hosts our Exchange database, SQL database, file shares, and some MS .vhds.

I'm looking to replace this soon and go full tilt with a few VMware servers to virtualize our entire environment.
My question is: one of the key benefits of virtualization (at least from my perspective) is that it divorces the server from the underlying hardware. Servers become pretty generic and interchangeable. Why then isn't it common to follow this same line of thinking and run a SAN on commodity hardware? You'd get all the benefits of an enterprise SAN without the proprietary software. Then down the road when you need to replace something that's no longer supported, it's trivial. Or upgrading becomes much cheaper because you can easily embrace whatever disk interface gives best performance/cost ratio. Does it mainly have to do with support contracts? Or am I missing some big picture aspect?

Something like Openfiler, FreeNAS or even this article seem like viable solutions since iSCSI and NAS are both supported in VMware.
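From what I can tell, the DIY route under the hood is just a Linux iSCSI target config - IET-style, which I believe is what Openfiler uses, with every name below made up:
code:
# /etc/ietd.conf - export one LVM volume as an iSCSI LUN
Target iqn.2008-12.local.san:vmstore.lun0
        Lun 0 Path=/dev/vg_san/lv_vmstore,Type=blockio
        MaxConnections 1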

Nomex
Jul 17, 2002

Flame retarded.

ddavis posted:

I inherited a Dell AX100 that's a bit long in the tooth. It hosts our Exchange database, SQL database, file shares, and some MS .vhds.

I'm looking to replace this soon and go full tilt with a few VMware servers to virtualize our entire environment.
My question is: one of the key benefits of virtualization (at least from my perspective) is that it divorces the server from the underlying hardware. Servers become pretty generic and interchangeable. Why then isn't it common to follow this same line of thinking and run a SAN on commodity hardware? You'd get all the benefits of an enterprise SAN without the proprietary software. Then down the road when you need to replace something that's no longer supported, it's trivial. Or upgrading becomes much cheaper because you can easily embrace whatever disk interface gives best performance/cost ratio. Does it mainly have to do with support contracts? Or am I missing some big picture aspect?

Something like Openfiler, FreeNAS or even this article seem like viable solutions since iSCSI and NAS are both supported in VMware.

When a lot of customers buy SAN hardware, they buy for long term reliability, performance and cost, in that order. You're going to get 100x the support when something fucks up from EMC or HP, versus building it yourself. Big vendors will also guarantee their product will work with other major products at a certain SLA.

Nomex fucked around with this message at 05:35 on Dec 13, 2008

Catch 22
Dec 1, 2003
Damn it, Damn it, Damn it!

ddavis posted:

I inherited a Dell AX100 that's a bit long in the tooth. It hosts our Exchange database, SQL database, file shares, and some MS .vhds.

I'm looking to replace this soon and go full tilt with a few VMware servers to virtualize our entire environment.
My question is: one of the key benefits of virtualization (at least from my perspective) is that it divorces the server from the underlying hardware. Servers become pretty generic and interchangeable. Why then isn't it common to follow this same line of thinking and run a SAN on commodity hardware? You'd get all the benefits of an enterprise SAN without the proprietary software. Then down the road when you need to replace something that's no longer supported, it's trivial. Or upgrading becomes much cheaper because you can easily embrace whatever disk interface gives best performance/cost ratio. Does it mainly have to do with support contracts? Or am I missing some big picture aspect?

Something like Openfiler, FreeNAS or even this article seem like viable solutions since iSCSI and NAS are both supported in VMware.
There are so many more reasons, but for you and what you just asked: performance.
OpenFiler and NAS will not cut it unless you are only virtualizing 5 or so servers with little I/O (500-ish IOPS). I had a 4-disk NAS running RAID 5, with about a 100-user load, running a BlackBerry server and print server. I added another VM and nearly brought down the "network". If this was one of the other departments, poo poo would have hit the fan. If you want to do SAN/virtualizing, do it right.
As I say this I am moving something off that NAS right now, and it's tossed up an I/O error 4 times. On the 5th time, the VM went down, and I am moving it to one of my SANs.

Besides, SANs are pretty cheap nowadays depending on features and size, and if you need gobs of space your company should be able to shell out something.

edit: you know I don't want to discourage you, so I will say: it can be done and work, but I love the fact that I can toss VM after VM at my SAN and not have to worry about the load I am putting on it for a while.

Catch 22 fucked around with this message at 06:44 on Dec 13, 2008

Vanilla
Feb 24, 2002

Hay guys what's going on in th

ddavis posted:

I inherited a Dell AX100 that's a bit long in the tooth. It hosts our Exchange database, SQL database, file shares, and some MS .vhds.

I'm looking to replace this soon and go full tilt with a few VMware servers to virtualize our entire environment.
My question is: one of the key benefits of virtualization (at least from my perspective) is that it divorces the server from the underlying hardware. Servers become pretty generic and interchangeable. Why then isn't it common to follow this same line of thinking and run a SAN on commodity hardware? You'd get all the benefits of an enterprise SAN without the proprietary software. Then down the road when you need to replace something that's no longer supported, it's trivial. Or upgrading becomes much cheaper because you can easily embrace whatever disk interface gives best performance/cost ratio. Does it mainly have to do with support contracts? Or am I missing some big picture aspect?


Vendors do have storage virtualisation. IBM has SVC, EMC has Invista, there's Incipient, etc.

Just like VMWare it's a way of divorcing your data from the underlying hardware so you can move it around with a few clicks.

A lot of the time only really big companies, such as banks, go this route. They're the ones who buy 10-20 new arrays a year, so moving around hundreds of TB of data is a chore and time-consuming. With virtualisation they can get rid of arrays that have run out of maintenance by moving the data with a few clicks, and not have to worry about detailed migration plans, methodologies, etc.

For most companies with a handful of arrays, or one at each site, SAN virtualisation is a waste of time. It's the same effort to move data onto a new array as it is to virtualise it, so when they wheel in the brand new bigger better array, why spend money investing in SAN virtualisation?

Wicaeed
Feb 8, 2005

H110Hawk posted:

I know nothing at all about ONTAP 5. In 6+ you would hit Control-C sometime after starting and before it prints the OS banner, then pick the option to reset the password. In ONTAP 5.3.6R1, according to NOW:

Welp I'm at a loss here. I need that system boot diskette, but I don't have access to a NOW subscription, so I can't download the software for the boot diskette :(

Anyone know how I can get my hands on it?

ddavis
Dec 12, 2008
Ok, this all makes sense. Thanks for the responses.
I was thinking I'd replace the AX100 with an AX4-5F and then use the AX100 offsite with some of our older servers for a DR hot site.

But I'm not positive a SAN is the way to go. I've read things proclaiming that speed on a NAS is just as good if not better. And something like the PowerVault NF500 or NF600 seems really cheap. Are the pluses for SANs things like snapshots and replication?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Vanilla posted:

Ok, any decent array can do this and will have the ability to take crash consistent copies of things like Oracle and Exchange.


Just wanted to point this out to keep terminology sane.

Crash consistent means the data is consistent with a crash, and therefore it may not necessarily be what you want (i.e. it could be worthless).

You really want to take consistent snapshots and replicate those.
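On Windows, "consistent" means getting VSS involved so the application flushes and quiesces before the snap is cut. A rough sketch of the idea (a diskshadow script on Server 2008; the volume letter is made up):
code:
# app-consistent snapshot via VSS; run with: diskshadow /s snap.txt
set context persistent
add volume E:
# VSS writers (Exchange, SQL, ...) quiesce and flush before the shadow is created
create
Array vendors plug into the same framework with their own VSS hardware providers, so the snap can land on the array instead of inside Windows.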

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

Just wanted to point this out to keep terminology sane.

Crash consistent means the data is consistent with a crash, and therefore it may not necessarily be what you want (i.e. it could be worthless).

You really want to take consistent snapshots and replicate those.

An application owner will understand crash consistent and will know if it is right for them. It could be a case of just rewinding the logs to the right place.

As said above, adding some software to a SAN solution means it can work with the various applications to take application-consistent copies with ease. With Exchange it will run VSS checks - eseutil to confirm; with Oracle it will put it into hot backup mode, etc.

I've never been a fan of the replication of snapshots in the NetApp sense because, in the example of Exchange, eseutil is not run at the time each snap is taken. The copy is not verified before replication, so how do you *know* you have a good copy? This is a first rule of DR/BC: know your copy is good. The vendors who are most strict on this (MS, Oracle, SAP) are almost always business critical. If you have a problem, Microsoft will -only- support you if eseutil has been run against the copy.
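The check in question is roughly this, run against the mounted copy - path made up:
code:
E:\> eseutil /k E:\snapmount\mdbdata\priv1.edb
/k walks every page of the database and verifies its checksum, which is also why it generates so much I/O.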

I much prefer full copies in general. Often snaps on high-change-rate apps such as Exchange are uneconomical.

As always it depends on the exact circumstances.

Vanilla fucked around with this message at 19:37 on Dec 15, 2008

Maneki Neko
Oct 27, 2000

Vanilla posted:

I've never been a fan of the replication of snapshots in the NetApp sense because, in the example of Exchange, eseutil is not run at the time each snap is taken.

Doesn't SnapManager for Exchange do this? I haven't used it, but I was under the impression it could handle this.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
An application owner should say "I want application consistent data" and not "I want crash consistent data."

In the NetApp world this is managed by way of the SnapManager products, which ensure data integrity at the time of the snap. It does the verification you speak of. NetApp replication is very reliable, as are the snapshots it takes.

If EMC took only crash-consistent snapshots (it won't, with the right licenses, I presume) then none of my enterprise customers would keep their Symmetrix systems. They would replace them with something that took consistent snaps.

Lucky for EMC, RecoverPoint provides this functionality and does so quite well.

To qualify: though my last 6 months have been knee-deep in NetApp shops, I have worked with EMC technology before, and I've only encouraged one customer to swap to NetApp. That was because he absolutely hated his Celerra and EMC support was costing him an arm and a leg for a value he just didn't feel he was getting.

I just wanted to get the term "crash consistent" straight, as we usually treat that as a bad thing. I guess if you're doing file servers then crash-consistent is okay, but hardly ideal.

quote:

Doesn't SnapManager for Exchange do this? I haven't used it, but I was under the impression it could handle this.

It does exactly this.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

An application owner should say "I want application consistent data" and not "I want crash consistent data."

In the NetApp world this is managed by way of the SnapManager products, which ensure data integrity at the time of the snap. It does the verification you speak of. NetApp replication is very reliable, as are the snapshots it takes.

If EMC took only crash-consistent snapshots (it won't, with the right licenses, I presume) then none of my enterprise customers would keep their Symmetrix systems. They would replace them with something that took consistent snaps.

Lucky for EMC, RecoverPoint provides this functionality and does so quite well.

To qualify: though my last 6 months have been knee-deep in NetApp shops, I have worked with EMC technology before, and I've only encouraged one customer to swap to NetApp. That was because he absolutely hated his Celerra and EMC support was costing him an arm and a leg for a value he just didn't feel he was getting.

I just wanted to get the term "crash consistent" straight, as we usually treat that as a bad thing. I guess if you're doing file servers then crash-consistent is okay, but hardly ideal.


It does exactly this.

The app owners also say 'I want dedicated spindles, zero data loss and 2TB by this afternoon' :)

EMC would use Replication Manager to look after the snaps, clones and apps such as Exchange, Oracle, etc. This usually comes hand in hand with RecoverPoint.

The reason for my comment above is that I've never seen anyone run eseutil on an Exchange snap. This is because eseutil places a massive amount of I/O on the Exchange DB, and if you're doing this on a snap that is pointing back at production you're just passing that I/O on. Even worse is if you have many snaps all trying to complete eseutil and all hammering production!

This isn't a dig at NetApp - I'm all for it, just on full, separate volumes!

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Vanilla posted:

The app owners also say 'I want dedicated spindles, zero data loss and 2TB by this afternoon' :)

EMC would use Replication Manager to look after the snaps, clones and apps such as Exchange, Oracle, etc. This usually comes hand in hand with RecoverPoint.

The app owners will be pretty livid in many cases when you give them a crash-consistent state when you restore a snap. That was my only jab. He's going to expect it to be consistent should he need to restore. You can't depend 100% on transaction logs or an 'fsck' to bring things back to sanity.

quote:

The reason for my comment above is that I've never seen anyone run eseutil on an Exchange snap. This is because eseutil places a massive amount of I/O on the Exchange DB, and if you're doing this on a snap that is pointing back at production you're just passing that I/O on. Even worse is if you have many snaps all trying to complete eseutil and all hammering production!

This isn't a dig at NetApp - I'm all for it, just on full, separate volumes!

Well, different environments do different things. You can't always assume that every customer will follow your best practices. That said, NetApp is pretty flexible with how it uses spindles. I can spread a LUN over, say, 40 spindles, making the hit from eseutil pretty much nil. I've seen this proven in MANY large environments (over iSCSI even).

I just want to clarify terminology. If EMC came into a customer and said "we'll take crash consistent snapshots" then we'd steer that customer away, since they'd be getting about the same value out of a 3Ware RAID box assembled from Supermicro parts.

Since they don't say such crazy things, they're normally in our top 3 vendor picks. Unfortunately for EMC, they're often passed over due to lack of decent manageability. This is also the number one reason I'm seeing them ripped and replaced. Nobody cares that LUN X is only on spindles Y-Z these days, especially if performance is comparable.

Separate volumes/RAID groups is an outdated concept that needs to find its way out the door in about 99% of use cases. I realize this is the EMC party line, but they are partying their way out the door of any organization with <5000 employees. Anyone buying into EMC now ends up regretting it as they grow, and replaces it with a Compellent or a filer or something anyway.

Catch 22
Dec 1, 2003
Damn it, Damn it, Damn it!

ddavis posted:

Ok, this all makes sense. Thanks for the responses.
I was thinking I'd replace the AX100 with an AX4-5F and then use the AX100 offsite with some of our older servers for a DR hot site.

But I'm not positive a SAN is the way to go. I've read things proclaiming that speed on a NAS is just as good if not better. And something like the PowerVault NF500 or NF600 seems really cheap. Are the pluses for SANs things like snapshots and replication?

The AX4 does host-based replication, just FYI.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

quote:

Separate volumes/RAID groups is an outdated concept that needs to find its way out the door in about 99% of use cases. I realize this is the EMC party line, but they are partying their way out the door of any organization with <5000 employees. Anyone buying into EMC now ends up regretting it as they grow, and replaces it with a Compellent or a filer or something anyway.

I agree to some extent, but the opposite is also true. If you are completely reliant on an array to place everything for you, you can't really do anything when performance starts to suck apart from buying more disk. How do you guarantee IOPS? I'm not just talking about end users; I'm talking about the Cap Geminis and EDSes of the world, who have to guarantee backend performance.

EMC, NetApp and various other vendors have virtual provisioning / pooled storage. I was working today on a box that had tier 1 as a dedicated layout with dedicated spindles, and the whole rest of the box (40TB+) was a number of huge virtual pools. Best of both worlds: if you want simple pools of storage without worrying about separate volumes and LUNs, it's there.

On a separate subject, have any of the boxes you've worked on had flash drives? If so - opinions?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Vanilla posted:

I agree to some extent, but the opposite is also true. If you are completely reliant on an array to place everything for you, you can't really do anything when performance starts to suck apart from buying more disk. How do you guarantee IOPS? I'm not just talking about end users; I'm talking about the Cap Geminis and EDSes of the world, who have to guarantee backend performance.

The awesome thing is, I can actually do that with one of these crazy virtualized storage backends as well. Regardless of whether I'm doing it EMC style or like the rest of the modern world, when performance sucks the solution is always to buy more disks. With everyone else's products, though, I may not have to quite as soon.

It is trivially easy for me to create a dedicated volume on a 40+ drive aggregate servicing a single LUN on NetApp.

It's trivially easy for me to do this with 3Par chunklets, and it's trivially easy to do it with Compellent StorageCenter.

I'm sure most other vendors have this same level of functionality.

The truth of the matter is that most people don't have I/O needs so specific as to require separate RAID groups. There is a pretty big market out there for <100 spindles. If they need less than 10,000 aggregate I/Os per second (back of the envelope: at maybe 150-200 IOPS per 15k spindle, that's only 50-70 drives' worth), why do they need to worry about carving out dedicated RAID groups?

It's a lot easier for an IT guy to look at total aggregate I/O and bandwidth requirements and make a decision that way, rather than figuring it out on a per-RAID-group basis, then discovering he dedicated too many spindles to a RAID group and having to migrate LUNs around.

Also keep in mind that I'm thinking about a market that spans from 50 employees up to and including a good chunk of the 5,000-employee orgs. Anything larger than that, and people are more than happy to staff a bunch of Excel experts to manage their storage. These guys are buying DMX for performance and lots of it.

I find overall capacity to be a useless metric in determining that sort of thing, given that one of my customers is <1000 employees and maintains about 50PB of NetApp (and about 12 of EMC). They like the flexibility and ease that NetApp provides, and only bought the EMC for a VMware project that ultimately ended up being housed on NetApp NFS.

quote:

On a separate subject, have any of the boxes you've worked on had flash drives? If so - opinions?

None yet; most of my customers who would need that level of I/O already have 1000+ spindles that they bought prior to the SSD shelves offered by EMC. It will be a while before they validate the shelves and put them in production.

I'd love to see this sort of thing more often though. It would be a hell of a thing to leverage with automated tiered storage.

edit: Don't think I hate all things EMC. I'm a services guy and the product is great. I'm mostly passing on complaints from paying customers and their feelings on the matter. I have a hard time disagreeing in many cases.

oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

The app owners will be pretty livid in many cases when you give them a crash-consistent state when you restore a snap. That was my only jab. He's going to expect it to be consistent should he need to restore. You can't depend 100% on transaction logs or an 'fsck' to bring things back to sanity.


Well, different environments do different things. You can't always assume that every customer will follow your best practices. That said, NetApp is pretty flexible with how it uses spindles. I can spread a LUN over, say, 40 spindles, making the hit from eseutil pretty much nil. I've seen this proven in MANY large environments (over iSCSI even).

I just want to clarify terminology. If EMC came into a customer and said "we'll take crash consistent snapshots" then we'd steer that customer away, since they'd be getting about the same value out of a 3Ware RAID box assembled from Supermicro parts.

Since they don't say such crazy things, they're normally in our top 3 vendor picks. Unfortunately for EMC, they're often passed over due to lack of decent manageability. This is also the number one reason I'm seeing them ripped and replaced. Nobody cares that LUN X is only on spindles Y-Z these days, especially if performance is comparable.

Separate volumes/RAID groups is an outdated concept that needs to find its way out the door in about 99% of use cases. I realize this is the EMC party line, but they are partying their way out the door of any organization with <5000 employees. Anyone buying into EMC now ends up regretting it as they grow, and replaces it with a Compellent or a filer or something anyway.

Don't forget that EMC is also basically 1.5-2x the cost of comparable NetApp (talking Clariions here, forget about DMX). That said, to me, NetApp is having the same issue now. It's all filer-based, so in mid-level engagements Equalogic, LeftHand, Compellent, or say 3Par are offering better future-proofing.

I've got Equalogic and LeftHand in the lab now, doing a vendor "play-off", and they are both much more manageable than NetApp can be. We have Operations Manager, DFM and FilerView (with some CLI love going) on the NetApp side to basically match the built-in tools from the two newcomers. The only reason I don't have Compellent in there as well is that they are small and my management would rather have support from Dell or HP.

Also, LeftHand's functionality for iSCSI with snapshot reserve done right is pretty nifty (so is replicating those thin-provisioned snaps). That said, NetApp is still very versatile and allows us to have multiple services going to the same box. I just wish they got their gear in order and worked on the software front, especially integrating their "cloud OS" with ONTAP.

H110Hawk
Dec 28, 2006
Jesus christ. 14 hours later:

code:
peeler-rescue> vol online boot
Volume 'boot' is now online.
I just felt like sharing. It's been a long day.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

oblomov posted:

Don't forget that EMC is also basically 1.5-2x the cost of comparable NetApp

Source? That's a bit of a wild claim. Is this based on one example?

I'd argue that point heavily, given that pricing is dependent on many things, and in many cases I've found the opposite, especially when you ask for a robust solution from NetApp.

Gartner are back publishing storage pricing analysis; go and check it out. You'll find that all the vendors are within a few % of each other, because hardware really is just becoming a commodity... and most importantly, EMC ISN'T the most expensive - even at the high end.

Vanilla fucked around with this message at 10:34 on Dec 16, 2008

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:


edit: Don't think I hate all things EMC. I'm a services guy and the product is great. I'm mostly passing on complaints from paying customers and their feelings on the matter. I have a hard time disagreeing in many cases.

Likewise, I don't hate everything NetApp. We have three FAS boxes in our play lab, but as with every bit of kit there are downsides, and I spend all day hearing about the downsides of our kit and that of our competitors... but mostly our kit, because everyone loves kicking the vendor :)

ddavis
Dec 12, 2008

Catch 22 posted:

The AX4 does host-based replication, just FYI.

I'm still a newb when it comes to SANs, but I take it that means you can only do it via PowerPath/Navisphere loaded on attached hosts?

oblomov
Jun 20, 2002

Meh... #overrated

Vanilla posted:

Source? That's a bit of a wild claim. Is this based on one example?

I'd argue that point heavily, given that pricing is dependent on many things, and in many cases I've found the opposite, especially when you ask for a robust solution from NetApp.

Gartner are back publishing storage pricing analysis; go and check it out. You'll find that all the vendors are within a few % of each other, because hardware really is just becoming a commodity... and most importantly, EMC ISN'T the most expensive - even at the high end.

That's based on the last 5-6 times we have purchased storage, anything from the NetApp FAS2000 series to the FAS6000 series and comparable EMC hardware. I have yet to see a time when EMC was cost-effective. I could see the value in the really high-end stuff, to which NetApp would have to respond with their cluster OS instead of ONTAP.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

oblomov posted:

That's based on the last 5-6 times we have purchased storage, anything from the NetApp FAS2000 series to the FAS6000 series and comparable EMC hardware. I have yet to see a time when EMC was cost-effective. I could see the value in the really high-end stuff, to which NetApp would have to respond with their cluster OS instead of ONTAP.

If we look at the whole industry, Gartner have carried out research based on analysis of bids across the whole market, and they tell a very different story. Per GB, NetApp is more expensive:

http://www.gartner.com/DisplayDocument?doc_cd=158097&ref=g_rss

For example - 2009 prices:

Clariion CX3-80, 46TB in 1TB drives - $1.95 per GB avg
NetApp FAS6070AS, 46TB in 1TB drives - $3.45 per GB avg

Clariion CX3-80, 10TB in 300GB 15k FC - $6.80 per GB avg
NetApp FAS6070AS, 10TB in 300GB 15k FC - $12.35 per GB avg

(Back of the envelope: at those per-GB rates the full 46TB works out to roughly $90k vs $159k.)

There are vendors who are more expensive and vendors even cheaper than the above, but it is a fallacy that EMC is always the most expensive. They all meet the same criteria: dual controllers or cluster architecture; support for Unix, Linux, Windows, VMware; no mainframe support; etc.

Your experience is still valid - vendors don't always go in with their lowest price. The price you see differs depending on the size of the deal, how important you are to a vendor, your negotiating skills, etc.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
I can't see the report but I'd be curious to see the methodology.

Mostly because it doesn't line up with what I'm seeing in the real world. Did they completely omit thin provisioning from the NetApp side, or the fact that you don't have to carve out RAID groups and waste NEARLY as many disks as you do with EMC?

I'm guessing no.

People buy NetApp to save money and EMC to maximize performance. At least among my customer base (banks/FSI, media shops, ASPs).

I would also argue it's flawed in that no one with a 10TB storage need is going to buy a CX3-80/6080. That said, the comparison was made; I just need to understand the methodology that reached that result.

(qualified this)
I'd also like to point out that the 6070 doesn't exist anymore.

There are other details as well that we should cover to understand why this isn't an apples to apples comparison.

First and foremost, the 6080 is about twice the system a CX3-80 is (making the near-twice cost per gig slightly less surprising). Side by side, the NetApp supports almost twice the disk capacity, has nearly 4 times the RAM, and a lot more expandability. We're talking 480 spindles backed by 16GB of RAM (two SPs) vs 1100+ spindles backed by 64GB of RAM (two heads).

If you want a real apples to apples comparison, use the NetApp FAS3070 or its replacement the 3170 (the 3140 is closer still). We're talking a HUGE difference in price here.

This also ignores the fact that NetApp gives you iSCSI, NFS, CIFS, and FCP all in one box.

1000101 fucked around with this message at 03:21 on Dec 17, 2008

M@
Jul 10, 2004

1000101 posted:


I'd also like to point out that the 6070 doesn't exist.


Fairly sure you're wrong on this. Might be EOL but I've seen them before.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
Oh wait you're right. Replaced by the 6040 and 6080.

Doesn't invalidate the rest of my post though.

Intrepid00
Nov 10, 2003

I'm tired of the PM’s asking if I actually poisoned kittens, instead look at these boobies.
Anyone have experience with LeftHand boxes? We are looking to buy our first SAN for our office. We currently have like 12 servers that we are going to consolidate (yeah, I've heard it too much) down to two. We are without a doubt going to use Hyper-V from MS. We use it now for development platforms and find it fast and reliable (as long as you don't put Windows XP on it, :gonk: it just isn't aware enough and the disk and network I/O suffer. Windows 2003 and Windows 6 fly on it.)

We have decided on iSCSI; we have zero Fibre Channel now and don't see the purpose of investing in it, not with 10-gigabit NIC prices falling, and we won't even max out one NIC with the size of the SAN we will be throwing at our network.

These are the three solutions we are looking at (we will probably decide right around Christmas; I like to think of it as a present):

DataCore's SANmelody

This is a very nice box, and affordable. Ultimately we are shelving it though, because it's a host-based solution. Once we hit the max IOPS of the box we can't do anything but buy a new box to get more, and even then you are limited to two controllers. After that you need to go to their higher package, and there doesn't seem to be any clear upgrade path or idea of what the cost will entail.

LeftHand Networks NSM box

Interesting box. The more storage you add, the more performance you get. You have one storage pool that is striped across all of their boxes. I've seen reviews where one box was maxing out at around 1k IOPS, and after another was added and the pool striped between the boxes, IOPS got to a little more than 1.9k. More boxes, more power. The "framed" SANs, as Equalogic and LeftHand like to call the Dell AX and MD3000i's, have a relatively low IOPS ceiling compared to the potential LeftHand is showing us and what other tech sites have confirmed. Now, we probably won't need the insane amount of IOPS they are showing us, but the room for expansion would be nice for whatever is thrown our way in the future.

One thing that concerns me, which I will get to ask the guy about tomorrow, is whether their snapshot manager will do off-host backups, so that the server being backed up with Symantec will not be moving the backup data through its LAN ports; instead the backup server would mount the snapshot volume and delete it when done.

Equalogic

I've already seen some posts saying don't do it. Besides their hosed up pricing chart (a storage increase from one model to another that would only add $1k to the price if we did it ourselves costs $7k+ from them), I haven't seen any good reason why not. It does have a controller cap that is a lot lower than LeftHand's, but even then I don't think we will hit it. I also don't get their retarded active/passive controller push. I'd rather get two separate boxes with a single controller each and replicate between them.

What I'd like to know is how fault tolerance is handled with them. I know I can slap two LeftHand boxes on the network (same with DataCore) and if one fails, the other steps right up and picks up where it left off. We plan on having this in place by the end of next year. The Equalogic guy was more vague about it and kept touting that they got better IOPS, which as some people have pointed out is debatable. I'll find out more tomorrow, because I am also getting a free lunch out of them. Depending on what they have to say we may or may not go with them, and if it's a no, it will be because of their outlandish price climb, which looks more like a hockey stick than a slope if you graph it, unlike the other vendors.

M@
Jul 10, 2004

1000101 posted:

Oh wait you're right. Replaced by the 6040 and 6080.

Doesn't invalidate the rest of my post though.

Agreed. I'd like to get a copy of that report that was linked and see what kind of comparisons they're using.

(Related: I've got a pair of used 6080s sitting on the shelf if anyone wants them, relatively cheap. I really don't want them anymore)

Vanilla
Feb 24, 2002

Hay guys what's going on in th

1000101 posted:

stuff

Methodology is all in the doc. Point taken on the comparison; I just went for the biggest in EMC's midrange vs the biggest in NetApp's.

The CX3-80 is not EMC's latest array either, the CX4-960 is, but that isn't on the chart, as NetApp's 6080 isn't either.

FYI, 3040AS vs CX3-80 = $1.85 vs $1.90 respectively (1TB drives) - still not a big gap.

3070 vs CX3-80 = $7.30 vs $6.80 (300GB 15k) - still not seeing this hugely expensive EMC :)

Sent you a PM

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
What would it look like if you were to compare across multiple product lines?

Say a CX3-10 vs a FAS2050? Maybe some others? I'd be curious to see the numbers.

Doing some digging on Google and asking around, it looks like Gartner doesn't factor in the entire TCO of a given solution (just the initial solution cost), and the 10TB figure is a raw number, not actual usable storage. How well that gets utilized (which directly affects overall cost per gig) depends on a number of factors, from the type of data being stored to how knowledgeable the guy managing the storage is (or how much time he wants to spend managing said storage).

Granted it's a great metric to start with, it doesn't paint the whole picture.

It's good to talk about, and I figure at worst you're going to have some more ammo to fire when your customers are looking at competing solutions.

oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

I can't see the report but I'd be curious to see the methodology.

Mostly because it doesn't line up with what I'm seeing in the real world. Did they completely omit thin provisioning from the NetApp side, or the fact that you don't have to carve out RAID groups and waste NEARLY as many disks as you do with EMC?

I'm guessing no.

People buy NetApp to save money and EMC to maximize performance. At least among my customer base (banks/FSI, media shops, ASPs).

I would also argue it's flawed in that no one with a 10TB storage need is going to buy a CX3-80/6080. That said, the comparison was made; I just need to understand the methodology that reached that result.

(qualified this)
I'd also like to point out that the 6070 doesn't exist anymore.

There are other details as well that we should cover to understand why this isn't an apples to apples comparison.

First and foremost, the 6080 is about twice the system a CX3-80 is (making the near-twice cost per gig slightly less surprising). Side by side, the NetApp supports almost twice the disk capacity, has nearly 4 times the RAM, and a lot more expandability. We're talking 480 spindles backed by 16GB of RAM (two SPs) vs 1100+ spindles backed by 64GB of RAM (two heads).

If you want a real apples to apples comparison, use the NetApp FAS3070 or its replacement the 3170 (the 3140 is closer still). We're talking a HUGE difference in price here.

This also ignores the fact that NetApp gives you iSCSI, NFS, CIFS, and FCP all in one box.

I am with 1000101 here. That's pretty much what we saw. Once you throw in RAID-DP, thin provisioning and dedupe, EMC can't come close on utilization. Plus the older CX3 Clariions underperformed the newer NetApp boxes. Not sure about the new Clariions, but they are still more expensive than NetApp (and that was before some big discounts from NetApp). Now, if you are talking real high end, then yeah, IMO, DMX will outperform a 6080. Although I would love to see a GX-based 6080 clustered system do its thing.

oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

The awesome thing is; I can actually do that with one of these crazy virtualized storage backends as well. Regardless of whether I'm doing it EMC style or like the rest of the modern world, when performance sucks the solution is always to buy more disks. With everyone else's products though, I may not have to quite as soon.

It is trivially easy for me to create a dedicated volume on a 40+ drive aggregate servicing a single LUN on NetApp.

Ok, I am a bit confused here. Unless you do a dedicated aggregate, you are sharing disk I/O across multiple RAID groups on NetApp, at least as far as I know. Is there a way to force a volume/LUN to sit on a particular RAID group, separate from everything else in an aggregate? If so, that would be awesome.


quote:

Also keep in mind that I'm thinking about a market that spans from 50 employees up to and including a good chunk of the 5,000-employee orgs. Anything larger than that, and people are more than happy to staff a bunch of Excel experts to manage their storage. These guys are buying DMX for performance and lots of it.

I find overall capacity to be a useless metric in determining that sort of thing, given that one of my customers is <1000 employees and maintains about 50PB of NetApp (and about 12 of EMC). They like the flexibility and ease that NetApp provides, and only bought the EMC for a VMware project that ultimately ended up being housed on NetApp NFS.

Welp, my company is either 10x or 20+x the max size you quoted above (depending on US or worldwide view) and we have yet to need a DMX. Hell, we don't really have too many 6080 clusters throughout the US either. Now, we have sold DMX to our customers (we are also an IT shop from a certain standpoint, but that's not the core business) when they are hell-bent on EMC, but other than that, unless one has very specific apps, NetApp performance can more than cover what 99% of companies, even ones much larger than 5K employees, need.

quote:

None yet; most of my customers who would need that level of I/O already have 1000+ spindles that they bought prior to the SSD shelves offered by EMC. It will be a while before they validate the shelves and put them in production.

I like the look of the new Sun boxes that front-end SATA with SSDs. Now, that makes sense.

quote:

I'd love to see this sort of thing more often though. It would be a hell of a thing to leverage with automated tiered storage.

Other than Compellent, does any vendor have automatic tiering built in? Sun kind of does it with SSD/SATA hybrid storage on the newest stuff, but nobody else as far as I know. Now, NetApp, EMC and others have other solutions that can do tiering, but it's not the same.

I would love to be able to buy a shelf of SSDs, add a bunch of either fibre/SAS or SATA shelves, and shove it all into a single aggregate that would take care of tiering/caching/etc. dynamically by moving blocks to faster or slower storage as required.

One thing I wish NetApp would get in gear on is thin provisioning management. It's terrible, pretty much non-existent. That's one thing that newcomers like Equalogic and LeftHand do much better. Hell, even with DFM or OpsManager, you can't get a good view of a system and figure out what's thin provisioned and what's not, or how much space remains at the volume or LUN level, etc. Meh.
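Creating thin stuff is easy enough; it's seeing it afterwards that hurts. Today it's a per-volume, per-LUN crawl, something like this (ONTAP 7G style, names made up):
code:
filer> vol options thin_vol guarantee none
filer> lun create -s 500g -t vmware -o noreserve /vol/thin_vol/lun0
filer> lun show -v /vol/thin_vol/lun0
...and even then lun show -v only tells you "Space Reservation: disabled" one LUN at a time. Nothing rolls it up across the whole box.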

quote:

edit: Don't think I hate all things EMC. I'm a services guy and the product is great. I'm mostly passing on complaints from paying customers and their feelings on the matter. I have a hard time disagreeing in many cases.

I like DMX and Centera, and I think the CX3s were a waste of money. Now with CX4 the situation might have changed, but VMware jacked up the price on that.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

oblomov posted:

Ok, I am a bit confused here. Unless you do a dedicated aggregate, you are sharing disk I/O across multiple raid groups on NetApp. At least as far as I know. Is there a way to enforce a volume/LUN to sit on a particular raid group, separate from everything in an aggregate? If so, that would be awesome.

You have it; as far as I know you'd basically create, say, a 20-drive aggregate, throw one volume on it with one LUN, and be done with it.

This doesn't make sense to do because you're burning a whole lot of space, but this is effectively what you're doing with EMC anyway, so I guess it all lines up.

I'm not sure that you can map a LUN to specific disks any other way, but it is trivially easy to create an aggregate and just put one LUN on it.
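In ONTAP terms, something like this (sizes made up):
code:
filer> aggr create aggr_dedicated -r 20 20
filer> vol create db_vol aggr_dedicated 4t
filer> lun create -s 2t -t windows /vol/db_vol/db_lun
20 spindles, one volume, one LUN - nothing else ever lands on those disks.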


oblomov
Jun 20, 2002

Meh... #overrated

1000101 posted:

You have it; as far as I know you'd basically create, say, a 20-drive aggregate, throw one volume on it with one LUN, and be done with it.

This doesn't make sense to do because you're burning a whole lot of space, but this is effectively what you're doing with EMC anyway, so I guess it all lines up.

I'm not sure that you can map a LUN to specific disks any other way, but it is trivially easy to create an aggregate and just put one LUN on it.

Oh, never mind then, I see where you are going. I have had to do single-purpose aggregates (still multi-LUN) for our Exchange and SQL implementations. One aggregate for DBs, one aggregate for logs (different filers). SnapVault to a separate filer. There were enough users and DBs in multiple clusters to warrant the separation.
