AmericanCitizen
Nov 25, 2003

I am the ass-kickin clown that'll twist you like a balloon animal. I will beat your head against this bumper until the airbags deploy.

Intrepid00 posted:

I seriously doubt you're at risk of a 2-day repair downtime now, and if you had to stay up and something extremely bad happened to one of my nodes, I'd remove the node from the cluster if the repair went past 4 hours, so the cluster would re-stripe and become redundant again.

Anyway, we went with LeftHand and the boxes arrive Thursday-ish. Hopefully we didn't make the wrong decision, but I don't think we did.

Chucklehead posted:

Let us know how it goes.

I should be getting a FAS2020 demo unit here right away, but HP has also just proposed a LeftHand solution to us instead of the EVA.

Can both of you guys post your experiences, please?

My company is in basically the same boat: narrowed down to either LeftHand or a lower-end NetApp. I'm very interested in any feedback you have.


AmericanCitizen
Nov 25, 2003

Number19 posted:

I have one question about CIFS on NetApp: Am I able to make the filer look like more than one server to the users? Management would very much like to have a "server" for each project instead of \\server\project1, \\server\project2, etc.

This sounds like an incredibly silly requirement that should be dropped if at all possible. The whole point of the SAN is that you can centralize your storage into one highly-available, easily manageable place and any solution that I can think of to do what you're asking won't be simple or scalable as the number of "servers" increases.

If people seriously can't deal with one volume or share per project, they need to get over it. It would be more of a pain on the user side too: no one could just go to a single location like \\filer and directly navigate a tree of open projects; they'd be stuck doing that in the domain-level view, with every other PC on the network displayed alongside a dozen pretend servers.
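For what it's worth, if management won't budge, the one NetApp mechanism I know of that gets close is MultiStore vFilers (a separately licensed feature on 7-mode ONTAP), where each vFiler gets its own IP and joins the domain under its own machine name. A rough sketch only; the volume name, vFiler name, IP, and size below are all made up, and the exact syntax should be checked against the ONTAP docs for your release:

```text
filer> vol create project1 aggr0 500g
filer> vfiler create vf_project1 -i 10.0.0.21 /vol/project1
filer> vfiler run vf_project1 cifs setup
```

That's one vFiler per project, which is exactly the scaling problem: every new "server" means another IP, another domain join, and another thing to manage.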

AmericanCitizen
Nov 25, 2003

Maneki Neko posted:

Got a friend who works at a place that has a fine NetApp setup, but through some shenanigans with a different storage vendor, they now have a giant pile of SATA drives, which he was looking to just throw in a giant case and use as a dumping ground for things (likely over NFS), with the expectation of eventually spooling it off to tape.

We have a similar situation at my place. NetApp for the real stuff, but we use a ton of relatively low performance disk for backups and a variety of other needs.

We have three of these:
http://www.serversdirect.com/config.asp?config_id=SDR-A8301-T42

And really, they're perfectly suited to the task, though you could certainly spend a lot more money on something else (and of course you should if it's worth it for your scenario, but it isn't in our particular case).

We load them up with CentOS and make big xfs partitions.

vvv It definitely depends on what you're using them for, but we honestly haven't had any problems with them (much to my surprise, really.)

AmericanCitizen fucked around with this message at 03:58 on Jul 16, 2010
