Docjowles
Apr 9, 2009

I'm here to make one of those embarrassing "I don't know anything about storage" posts :ohdear: Hoping to get some advice on products or at least vendors to look at for entry-level storage.

We have a couple of storage arrays (HP MSA2012i) that do double duty as VMware datastores and storage for several terabytes of static files served by a website. IOPS requirements are very minimal. We're running out of space and I'm weighing my options. I'm new at this company and didn't set up any of the existing infrastructure. AFAIK they went with HP because that's what most of our servers are.

I really do not like the MSAs. They appear to be flaky as hell; updating firmware is a nail-biting operation that tends to take them completely offline even though it's supposed to update one controller at a time and seamlessly fail over between them. We've had numerous disk failures and at least one total controller failure due to firmware bugs. Management is awful (although I gather this isn't unique in the storage world). They don't support modern features like compression and dedupe at all. I'd like to get rid of them, or at least relegate them to a backup role. But if buying a new array doesn't make sense, I can bite the bullet and just add expansion shelves.

Some requirements:

* Do not want to roll my own. This is production primary storage.
* We're currently using about 6TB without deduplication or compression, which will obviously increase over time.
* More concerned with capacity than raw IOPS. We do have one heavy-usage MSSQL box that runs on DAS that I would consider virtualizing, but that is not urgent.
* Straightforward to manage. Not afraid to get my hands dirty and learn, but I am the only sysadmin, so fiddling with storage cannot permanently consume 90% of my time. As you can see, I don't have super demanding needs anyway.
* Hoping to spend under $25k incl. support contract for one filer

I find EMC's VNXe line and NetApp's FAS2200 somewhat appealing so far. Are those decent or terrible for any reason? Anywhere else I should be looking?

Docjowles
Apr 9, 2009

NippleFloss posted:

What protocols do you require? If you can get by with iSCSI only then you've got a lot more options like Nimble, HP LeftHand, and Equallogic. If you need FC, or CIFS, or NFS then you're more limited to stuff like NetApp, EMC, or IBM.

How much capacity do you expect to need in a year? Two years? Do you want to utilize snapshot backups? Do you want to utilize application-consistent snapshot backups for things like SQL or Exchange or VMware? Do you care about automated data tiering? SSD as cache?

1) We currently use iSCSI only. I'd kind of like to have NFS as an option but it is not a deal breaker. We have no FC infrastructure or expertise so I am not considering that.

2) Historically our storage needs (on primary storage) have only grown by about 1.5TB a year, so pretty slow.

3) Our backup situation is actually kind of awful; you hit on another reason I am looking into this. So yeah, snapshots and the requisite additional space would be nice. For several services, we have application data backed up offsite but no backup of the OS/apps/configs. This is mitigated to a degree by boxes (in theory) being easily rebuilt via tools like Puppet but it's still a little scary. Maybe I'm behind the times on the idea of throwaway hosts in the cloud era.

4) Right now data tiering and SSD caching would be overkill. That would change if I wanted to virtualize our primary MSSQL servers but I don't feel like I have support from my boss for that.

Docjowles
Apr 9, 2009

I'd at least take a look at EMC's VNXe line, it could easily be within reach of your price range depending on how you spec it out. I only got to use one in a lab environment at my last job but it was pretty slick for an entry level system. It does both NFS and iSCSI out of the box. Just keep in mind that for better or worse, it's designed for the admin that doesn't know or want to know about storage in great detail. So for a lot of settings it's just EMC's way or the highway.

Docjowles
Apr 9, 2009

szlevi posted:

I have yet to meet an EQL user who wouldn't praise his box - it's fully redundant inside

Raising my hand here as a storage idiot again. I'm still evaluating my options for buying an entry-level SAN to replace our aging HP MSAs. One roadblock I'm hitting is that my boss is really, really mistrustful of a SAN as a single point of failure. Despite having redundant controllers, each with their own Ethernet ports and dedicated switches, their own power supplies plugged into different PDUs, and the disks themselves protected with RAID, he still sees one physical box and thinks there's a decent chance the entire thing will die. For this reason we have two MSAs, and our applications' architecture is such that we can lose one entire enclosure and still limp along.

Is that an eventuality most of you plan for, losing both controllers in your SAN (or the entire RAID array)? Edit: We do have a DR site; I'm talking about within your primary site, so one array can poo poo the bed without needing to fail over to DR.

Docjowles fucked around with this message at 21:32 on Aug 16, 2012

Docjowles
Apr 9, 2009

bull3964 posted:

I would say firmware upgrades are another place where you can get nailed.

I worry less about the hardware and more about the software. The idea that a glitch could wipe out the configuration, or that a firmware update could blow up somehow but still proceed to the 2nd controller, scares the crap out of me.

I think a lot of it is this, yeah. Firmware update screwing the pooch, or one controller dying and the failover not happening correctly, stuff like that.

Docjowles
Apr 9, 2009

I knew I recognized that name. I recently read an incredibly verbose article by him on why RAID 5--even with a hot spare--is dumb in sufficiently large arrays. I mean, I agree, but the article sucked. He just restated the same point over and over and over again.
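
For anyone who hasn't seen the argument, here's the back-of-the-envelope version (numbers are made up but typical, not anything from the article):

```python
# Rough odds of hitting an unrecoverable read error (URE) while
# rebuilding a degraded RAID 5 array. Assumed numbers: consumer
# SATA drives are commonly specced at about 1 URE per 1e14 bits.
drives = 12        # drives in the array (one has just failed)
drive_tb = 2       # capacity per drive, in TB
ure_per_bit = 1e-14

# A rebuild has to read every surviving drive end to end.
bits_read = (drives - 1) * drive_tb * 1e12 * 8

# Probability of at least one URE over that many bits.
p_ure = 1 - (1 - ure_per_bit) ** bits_read
print(f"chance of a URE during rebuild: {p_ure:.0%}")
# ~83% with these numbers. One URE with single parity and a dead
# drive means the rebuild fails; RAID 6 can still recover.
```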

Docjowles
Apr 9, 2009

Xenomorph posted:

I'm looking for a big storage box.

Block level storage / NFS/CIFS NAS storage / don't care? Budget?

edit: Actually I guess if you seriously want fibre channel that answers #1

Docjowles fucked around with this message at 17:54 on Sep 6, 2012

Docjowles
Apr 9, 2009

Xenomorph posted:

OK then, what is a good & cheap, rack-mountable hardware RAID 6 box I can slap a bunch of SATA drives into and connect via iSCSI to one of my existing servers?

Not to put words in his mouth, but I think what evil_bunnY is getting at is that either reliability, features, and performance matter for this application or they don't. If they don't, buy a cheap-rear end SuperMicro enclosure with a bunch of drive bays for like a grand. If they do, spend the extra money for the entry-level product from a reputable vendor with a support contract.

Don't spend $5k on a weird prosumer NAS like that Synology that comes with retarded poo poo like an iTunes server and ~~cloud integration~~ but doesn't have redundant controllers. It's the worst of both worlds.

Docjowles fucked around with this message at 18:31 on Sep 7, 2012

Docjowles
Apr 9, 2009

It wouldn't be my first choice (we have some and I'm actively trying to get rid of them because the management UI is garbage), but you could get a dual-controller HP MSA2000 loaded with 12x 1TB drives for like $5k refurbed.

Docjowles
Apr 9, 2009

Xenomorph posted:

I guess I don't fully understand iSCSI.

I'd have to see the specific page you're talking about, but it's probably a "unified" storage system that can do both iSCSI and NFS depending on what you configure. If you configure it to share over NFS, the drives get preformatted with an ext4 (or whatever) filesystem. Think of it like a Windows shared folder: you just connect to it and the filesystem is already there.

If you configure it for iSCSI, it's similar to fibre channel in that you just get a raw block device that you then have to partition and format yourself.

e: beaten :argh:

Docjowles
Apr 9, 2009

Xenomorph posted:

Next question: how important is it to have a dual-controller RAID?

This is pretty much something you have to answer yourself based on what you're using the device for. If there's a major issue (botched firmware update, CPU/RAM/whatever totally shits the bed and takes the whole device down), is it OK for it to stay down for a week while you get a replacement? It's one of those things where you're buying insurance against a rare but potentially devastating scenario.

Docjowles
Apr 9, 2009

paperchaseguy posted:

i think you're wanted here

:iceburn: drat, dog.

Docjowles
Apr 9, 2009

Xenomorph posted:

Does anyone use ZFS with Linux? Or is that asking for trouble?

Let me tell you about how our ZFS on Linux storage pool got totally hosed up and we had to download like 5TB of data from our offsite backup at Amazon S3 over DSL :allears:

It is not production ready.

Docjowles
Apr 9, 2009

FISHMANPET posted:

So this seems Enterprisey enough to ask here.

We've got some experiment systems that write their data to a local disk. I'd like to continuously sync this data to a central server, but still keep the data on the local disk, just in case. Some of these systems are Windows and some are Unix. So I'd like some form of reverse caching like Windows does with file servers, but with the ability to directly access the files on both ends, and work with Unix as well.

Any idea?

I haven't used it in a long time, and depending on the volume of data it may not be feasible, but take a look at Unison?
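
To be concrete, something along these lines is what I had in mind. A minimal sketch only; the host, paths, and interval are made up, and Unison has to be installed on both ends:

```python
# Periodically two-way sync a local data directory with a central
# server using Unison. Host/paths/interval are hypothetical.
import subprocess
import time

LOCAL_ROOT = "/data/experiment"
REMOTE_ROOT = "ssh://storage.example.com//srv/experiment"

while True:
    # -batch: no interactive prompts; -auto: accept non-conflicting
    # changes automatically. Files stay directly usable on both ends.
    subprocess.run(
        ["unison", LOCAL_ROOT, REMOTE_ROOT, "-batch", "-auto"],
        check=False,  # don't kill the loop if one sync pass fails
    )
    time.sleep(300)
```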

Docjowles
Apr 9, 2009

VMware's site mentions non-profit pricing, although I can't find jack poo poo on what the actual discount is.

One product to look at would be EMC's VNXe. It's probably still out of your price range, but not incredibly so.

I really question the point of shared storage with only one vSphere host, too.

Docjowles fucked around with this message at 18:26 on Nov 7, 2012

Docjowles
Apr 9, 2009

Corvettefisher posted:

VMware's pricing is pretty decent from quotes I have done. Not too sure about VNX discounts. Don't forget the VNXes exist.

Whoops my bad, I meant VNXe.

Docjowles
Apr 9, 2009

evil_bunnY posted:

See: current integrated backup.

Has anyone used the new backup stuff in production? At my old job we used Data Recovery since it was free and our environment was small. Even so it would gently caress up every month or two and I'd have to nuke the backups and start from scratch :downs: Not really an acceptable "retention" policy.

I still don't really expect to get approval to renew our VMware support contracts and get up to 5.1, but backup software that isn't completely worthless would be a helpful selling point. I'm not asking for Backup Jesus here, just whether anyone's used it and seen it not consistently poo poo all over your data monthly.

[e]: guess this is a better question for the VM thread but whatever

Docjowles
Apr 9, 2009

Corvettefisher posted:

The Data Protection appliance? I have it sitting in a lab, might run a few tests if there is something particular you are looking for, and get back to you on it. I have problems managing it through anything other than the web client for some reason...

Yeah, that one. I don't have a specific question, just soliciting opinions from anyone who has used it a little in-depth as to whether it's stable and reliable. I don't mind a few quirks; I used the abortion that is Tolis' BRU for years, so anything would be an improvement!

Docjowles
Apr 9, 2009

Probably worth at least looking at Equallogic's offerings. They've changed their lineup all around since the last time I evaluated them but something like the PS6100X might be a good candidate. That or the EMC VNXe depending on how important the "point and click, idiot proof" requirement is.

Docjowles
Apr 9, 2009

bull3964 posted:

Just out of curiosity, why RAID 5 instead of RAID 6? I'm not sure I would be comfortable with single parity in a production setting, even if you did have a hot spare.

The VNXe GUI is kind of "storage for dummies" and only lets you configure certain types of drives into certain types of RAID configs. It may actually not be an option.

Disclaimer: I evaluated the VNXe at my old job like a year and a half ago, the software may have changed.

Docjowles
Apr 9, 2009

goobernoodles posted:

Also, anyone have a recommendation for a cheap cloud service for dumping the backups as a secondary off-site backup?

Amazon S3?

Docjowles
Apr 9, 2009

Or both! You can set up lifecycle policies on S3 buckets that automatically transition data into Glacier after N days.
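
The rule itself is simple. A sketch of what it looks like through the API with boto3; the bucket name, prefix, and 30-day cutoff are placeholders:

```python
# Sketch: lifecycle rule that transitions objects under backups/
# to Glacier 30 days after creation. Bucket name, prefix, and the
# day count are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "backups-to-glacier",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```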

Docjowles
Apr 9, 2009

FISHMANPET posted:

I don't get it, are you implying that important decisions about core infrastructure should be made on facts rather than gut feelings and intuition?

No, that would just be crazy.

do you work where I work?

Docjowles
Apr 9, 2009

paperchaseguy posted:

guys he's a patient, not an employee


eta: or he will be :shepicide:

I refuse to believe we are not getting trolled.

Go read the "Working in IT" and Certification threads immediately and YOTJ the gently caress out of there. With regard to your question, if the constraint really is a file server for under $500, your "best" (:emo:) option might seriously be to buy some white box parts off Newegg and install OpenFiler or some poo poo on there. And yeah, at least do what Skipdogg proposes: buy parts under your limit, spread over a few months, until you can assemble something that isn't a shambling horror.

Docjowles
Apr 9, 2009

Seems like asking about defragging servers would be an excellent sysadmin interview question. If the candidate lets you finish asking without punching you in the face, show them the door.

Docjowles
Apr 9, 2009

Rated PG-34 posted:

Ballpark budget is not very high: 2-4k.

$2k for 20TB of storage. Good luck with that.

Docjowles
Apr 9, 2009

Rated PG-34 posted:

Maybe we could buy a bunch of USB sticks and glue them together.

You joke, but... 6x USB Flash Drive Raid

quote:

Based on the feel of the system, we have chosen to use this RAID as part of a production system (we will keep Bacula running just to be safe). Day 1 of the flash-raid-as-root starts today.

:negative:

Docjowles
Apr 9, 2009

Corvettefisher posted:

Just throwing this out there but,

So what storage vendors do you all use and why?

I'll have a better answer for "why" in a few weeks, but my soon-to-be employer is into NetApp to the tune of 20 petabytes. Thankfully they have a full-time storage admin, so I'll get to learn to deal with storage on that scale without just drinking from the firehose on day 1.

Docjowles
Apr 9, 2009

Corvettefisher posted:

It makes cisco's website look good

I refuse to believe this is possible

Docjowles
Apr 9, 2009

Herv posted:

Hi Storage Folks.

I am looking for a modest yet redundant SAN for hosting up to 10 customers (small footprint per customer) using VMware ESX.

Could someone point me in the direction of getting a redundant 4TB using FC or iSCSI for small-footprint VM hosting? I would put the budget at around 20-25k.

Thanks in advance, and I apologize if the answer is somewhere in this thread; it's a monster, though.

Equallogic is another player you should probably look at as long as iSCSI is acceptable. Or hell, even Dell's MD3200i line, since your requirements are so minimal. Both of those can be specced with dual-controller, redundant-everything setups and won't break the bank.

Docjowles fucked around with this message at 17:27 on Jul 15, 2013

Docjowles
Apr 9, 2009

NippleFloss posted:

The correct response to this is always "Cool, sounds like you've got a solution to your problem and I can close this ticket out."

Usually I don't condone being a dick but this would be REALLY loving tempting in this case

Docjowles
Apr 9, 2009

Preface: Not calling out Wicaeed or anything, the topic just triggered a rant :)

What is it about storage that it always ends up being the redheaded stepchild that management is eager to skimp on? My old boss wouldn't bat an eye at paying $shitloads for the latest, fastest Intel Xeons (even for tasks that weren't CPU-limited at all :derp:). 10Gb NICs when we don't even saturate our 1Gb link? Why not! But god forbid we buy anything but lovely SATA drives and offbrand RAID controllers. Maybe he'd spring for a consumer SSD or two if I was lucky. Is it just that storage is nuanced and hard? The "$50k for 20TB?!?!? I can buy those disks from Best Buy for $500 bucks!" syndrome?

Docjowles
Apr 9, 2009

bull3964 posted:

That said, storage manufacturers can get hosed too because their margins are outrageous and you have to do this lovely dance with vendors that would make a car salesman weep to get anywhere near the real price for one of these things.

So true. The amount of margin you can knock off under the right circumstances is ludicrous. "Ok, here's our quote, $100k for your SAN." "Thanks, but did we mention that we're also taking bids from <competitor> on this project?" "Oh golly, wouldn't you know it, there was a typo on that quote. We meant $10k!"

Docjowles
Apr 9, 2009

I think you are going to have a hard time finding anything cheaper than that that isn't prosumer garbage.

Docjowles
Apr 9, 2009

NippleFloss posted:

Make sure you leverage the fact that you are talking to other vendors to get them to come down in price. You should never take the first quote they provide, as you can routinely get well below that if you're willing to haggle some; let each vendor know that you're talking to the others and that you've had very competitive quotes from them.

Something like this should be in the OP of every IT thread. It's totally nuts how flexible pricing is on hardware. The power of the "... thanks, but we have a much better quote from <direct competitor>" card cannot be overstated. I know we all start out hating Call For Pricing, but it's a two-way street and you can usually do far better than what the list price would be, even if they did publish it.

Docjowles
Apr 9, 2009

Wicaeed posted:

Take that time to make sure you have each controller's serial port plugged into a separate server, and have that server recorded :)

I am in the process of creating a new Equallogic cluster to host all of our production billing information and I made drat sure I completed that step before we even started testing :)

Or get a console server. They're pretty sweet, basically a KVM switch for serial connections.

Maneki Neko posted:

Anyone played around with the NetApp EF540s at all? We've got some requirements around shared storage, but we like fast stuff and giving NetApp buckets of cash, so figured this looks like a decent fit.

I think we're going to do an eval, but NetApp doesn't have a demo unit available for us until like December. It's not our first choice for a product but we're currently all NetApp so figured it's at least worth a look. If we do end up doing a POC I'll try to remember to post back here.

Docjowles
Apr 9, 2009

Caged posted:

I've had experience with an MSA P2000 iSCSI unit with a couple of expansion bays. It's easy enough to set up, the hardware design is nothing special but it's not terrible. Honestly I only ended up using it due to budget constraints, but the thing worked fine and the performance was as expected.

We used one of those (MSA 2000 G3) at a past job. That pretty much sums it up. Absolutely nothing special but pretty drat cheap and it worked reliably. The web UI was the worst poo poo but we were also very behind on firmware so there's an outside chance that got fixed.

I can't say I'd actively recommend it to anyone but it did the job, as long as the job was "provide some spinning disks over the network" and nothing fancier.

Docjowles
Apr 9, 2009

My company is 100% NetApp over NFS. We have literally billions of files in the KB-to-MB range stored there and replicated with SnapMirror; I'm not aware of any issues. Just... don't try to do a directory listing unless you have a few weeks to kill ;) . Not sure about dedup. Disclaimer: I am not the storage admin, though I'm hoping to learn more about our NetApp stuff over the next year.

Docjowles
Apr 9, 2009

VARs always seem to quote you for installation, whether it's necessary or not. If you're comfortable setting it up yourself, just tell them to knock that off the quote.

Docjowles
Apr 9, 2009

Is Ceph (in block storage mode) an option in this arena too? I'm asking, not offering it as a suggestion; I've read a little about it but haven't played around with it at all.
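
From the little I've read, the block side boils down to carving RBD images out of a pool. A rough sketch with the python bindings, just to make the question concrete; pool and image names are made up and I haven't run this against a real cluster:

```python
# Sketch: create a 10 GiB RBD image in an existing Ceph pool via the
# python-rados / python-rbd bindings. Pool and image names are
# assumptions; requires a reachable, already-configured cluster.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")  # pool name is an assumption
    try:
        rbd.RBD().create(ioctx, "vm-disk-01", 10 * 1024 ** 3)
        # A client would then map the image with the rbd kernel
        # module (or attach it via librbd) and use it like any
        # other block device.
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```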

Docjowles fucked around with this message at 06:47 on Jan 29, 2014
