Aquila
Jan 24, 2003

Mierdaan posted:

Sorry we don't all have 23 thumpers to brag about. Seriously dude, you're a loving dick sometimes.

Actually he has 27, but I had lost three and not bothered to rma a fourth when I worked there.

Aquila
Jan 24, 2003

At my new job my boss just asked me to get some nas quickly while we wait for power for our san*

Can anyone recommend something that can ship immediately in the $5-10k (or less maybe) range? Right now I'm thinking I'll grab a storevault s550. We've got 3-5 people working with multiple large video files (up to 50GB each) on their local systems, a mix of macs and pc's.

Also, in the long term, how does the Netapp FAS2000 series compare to the EMC AX4-5 stuff that was discussed in this thread?

*not a san at all

Aquila
Jan 24, 2003

Ray_ posted:

I'm looking around the same price range as you, actually. I think I've settled on this:
http://www.hds.com/products/storage-systems/simple-modular-storage.html
Dual active/active controllers, RAID6 with a hot spare, 12x 300GB 15k SAS (2.7TB total), 4x gig-e NICs, and prepackaged replication and snapshotting software. Final price = $11,395.

I can't really find anything else with those features that is competitively priced.

Did you ever purchase this? I'm torn between one of these (except with 750gb or 1tb sata drives) and a similar Netapp Storvault s550.

As far as I can tell I'd be paying ~$5000 more for the Netapp name (and associated very nice things).

Does anyone here have any hands on experience with either of these? Are there any forums or sites out there more dedicated to high end storage?

Aquila
Jan 24, 2003

Ray_ posted:

I actually just placed the order today after doing as much research as possible.

The massive amount of redundancy in the box along with not being able to find any negative press or reviews sold me on it.

I was looking at the StorVaults too, but the lack of redundancy ruled them out for me.

As far as I can tell the StorVaults and the SMS 100s are nearly identical in terms of redundancy, or can be ordered as such. Two things have me leaning towards the NetApp though: the main one being that the SMS 100 has no nfs or cifs (iSCSI only) as far as I can tell, and the minor one being no user-serviceable parts on the SMS 100 (including drives).

My Netapp sales guy also pointed out that the hitachi uses copy on write snapshots, while netapp uses point in time, the former causing a performance hit while running, the latter not.

The price on the sms 100 is definitely better than the s550. A Hitachi sales person should be contacting me soon so I'll try to get definite word on the nfs/cifs issue.

Aquila
Jan 24, 2003

Ray_ posted:

Woah woah woah, as far as I know, the s550's only ship with a single controller. Did my reps tell me wrong?

Oops, totally my mistake there, I've been looking at way too many different products lately.

If only I'd been wrong on the no nfs/cifs thing.

Aquila
Jan 24, 2003

Does anyone have experience with Hitachi HUS sans? I'm considering one for heavy db use.

Aquila
Jan 24, 2003

Oh god what have I done:



HUS 150 with SSD Tier is what I've done :getin:

Aquila
Jan 24, 2003

Caged posted:

Crazy design, I've heard good things though. What sort of stuff are you storing on it?

Postgresql with both lots of little stuff that needs to be very fast and lots of big stuff that also needs to be very fast. Lots of joins. I'm trying out dynamic tiering between sas and ssd, but also testing straight ssd:

/dev/mapper/360060e80101392b0058b38fb00000000 493G 70M 467G 1% /mnt/ssd
/dev/mapper/360060e80101392b0058b38fb00000009 493G 70M 467G 1% /mnt/sas
/dev/mapper/360060e80101392b0058b38fb00000005 493G 70M 467G 1% /mnt/hdt

I also kept one raid group of sas out for VM root volumes, because why not, but db's are bare metal right now. Hitachi threw in a free file module, but our san consultant didn't even want to hook it up; that's the box at the side of the picture. I am not sure what I'm going to do with that.

I am wondering though, does anyone know of an FC to IP solution for long distance replication that's either cheaper than a Brocade SW7800 or has 10GigE ports? The boss is very interested in Hitachi TrueCopy Extended Distance between colos, but right now the hardware cost is kinda nuts (~$25k on each end).

Aquila
Jan 24, 2003

NippleFloss posted:

What filesystem are you running on top of this?

Regarding FC over IP, we used McData 1620s for that but they are discontinued now. They worked fine replicating data via HUR. Any FCIP router *should* work, and HP probably has some pretty cheap ones given that their network gear is always pretty cheap. Bandwidth likely won't be a concern since you'll be limited by what's available at the LAN side, but since you're talking about synchronous replication you'll want something that doesn't introduce much latency. Speaking of which, syncrep is really tough to get right, and on something like a DB workload that's going to be highly sensitive to write latencies I can see it being very problematic.

ext4

Hitachi TCED can do sync or async replication. I think I'll do async for a near-realtime copy of data in another location, so if things go down I can bring my operation up in the other location quickly. I don't see how this is limited by LAN speed; I'll likely get a 10gig link for these purposes.

Aquila
Jan 24, 2003

NippleFloss posted:

Whoops, I meant to say WAN. 10G is overkill for replication traffic because unless you have dedicated fiber you aren't getting nearly that once it hits the WAN anyway. And yea, I've used async Hitachi replication before, and it sucked. It worked fine technically but it was an absolutely giant pain in the rear end to manage. HORCM sucked as a management product, the requirement for separate journal volumes for each consistency group ate up a lot of additional storage, and because replication happened on the device level and was completely divorced from actual hosts or applications it was common that LUNs got provisioned or reclaimed from hosts but the replication sets never got updated.

A lot of that would have been solved by better management tools, but those weren't available at the time. I haven't spent any time with the new Command Suite stuff so I have no idea if it makes replication more manageable. In general I think replicating at the app level is a better proposition in most cases than trying to do it at the array level, particularly the logical device level.

I am budgeting for 10G because I don't really know how much I'll need until we're up and running in production. Also I have an almost free 10Gig dark fiber to a separate geo, I think through my current facility. Of course the CEO wants to do it to the EU; I've told him how spendy trans-Atlantic 10G is. Command Suite is totally acceptable for now, though our vendor did pull a fast one on us and not inform us that it only runs on Windows until we were pretty late in the purchasing process (we're 100% ubuntu), so I just made them throw in a Hitachi Windows server. We're hoping there's enough command line support to completely automate volume creation, allocation, and snapshotting.

My (amazing) systems dev has already made some significant improvements to Foreman with respect to choosing multipath devices, which we're going to be submitting back upstream. Our goal is to make as much of this kind of stuff as we can available back to the open source community, just in case there are any other startups out there crazy enough to buy Hitachi SANs. Also we're slowly working on Hitachi to get them to realize there's more to Linux than RHEL and SUSE. Many steps along the way have been an interesting collision of startup vs (very) big business methodology.

As for replication, I haven't tried it yet obviously, but so far everything this vendor has told me works as stated.

cheese-cube posted:

I've used IBM SAN06B-R (Re-branded Brocade 7800) MPR devices in the past to do FCIP tunnelling and they are pretty solid. Of course as you mentioned they are drat expensive. From memory they were around $20k but there was also extra licensing on top of that.

My SAN consultant actually quoted two 7800's on each end, which seemed excessive to me unless I get redundant paths, which I suppose I probably should.

And because I'm working on it right now, here are the mount options I think I'm going to use:

/dev/mapper/360060e80101392b0058b38fb00000014 /mnt/ssd-barrier-tests ext4 defaults,data=writeback,noatime,barrier=0,journal_checksum 0 0

This is still untested. I'm also trying the noop and deadline schedulers, and turning off the dirty page ratios and swappiness (not that I have swap allocated):

echo 0 > /proc/sys/vm/swappiness
echo 0 > /proc/sys/vm/dirty_ratio
echo 0 > /proc/sys/vm/dirty_background_ratio

I'm still investigating discard, which I think should be issuing SCSI UNMAP in our case (as opposed to SATA TRIM) and is probably desirable for the san.
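For the scheduler side it's just the sysfs knob (device names below are made-up examples, not my actual paths, and with multipath you may need to set it on each underlying sd device), and discard can be tried either as a mount option or as a periodic fstrim:

echo noop > /sys/block/sdb/queue/scheduler    # or deadline; repeat for each path device
echo noop > /sys/block/sdc/queue/scheduler
fstrim -v /mnt/ssd                            # alternative to mounting with -o discard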

This is the format command I'm using:

mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/mapper/blah

The format is super fast due to san magic or something. I am investigating if I need a custom stride and stripe setting.
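If it turns out I do need them, the usual ext4 math is stride = RAID chunk size / 4k block size and stripe_width = stride x number of data disks, so for a made-up 64k chunk on a 9+2 RAID6 group it would look something like:

mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0,stride=16,stripe_width=144 /dev/mapper/blah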

I've been using iostat (v10+), ioping, and fio for testing so far, plus actually loading lots of data into postgresql.
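For anyone curious, the fio runs are mostly just variations on something like this (8k to match the postgres page size; the depth, size, and runtime are just what I started with, not recommendations):

fio --name=randread --filename=/mnt/ssd/fio-test --rw=randread --bs=8k --ioengine=libaio --direct=1 --iodepth=32 --size=10G --runtime=60 --time_based
ioping -c 20 /mnt/ssd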

Aquila
Jan 24, 2003

So one of those tunings I mentioned managed to make a 20MB local copy take 18 seconds to return on my test server. That's on the root partition (local samsung 840's in raid1), mind you. Oops.

Aquila
Jan 24, 2003

Misogynist posted:

We've got a BlueArc system with a number of end-of-life components that we need to prolong until we can formally decomm. Does anybody know a good third-party parts vendor that deals in BlueArc and LSI?

Possibly Berkcom or Zerowait. If those don't pan out I can ask my Hitachi (Intervision) guy, who was just today talking to me about people who deal in old BlueArc crap.

e: goon M@ works at (or used to work at) ECS. In the past they've been good at getting me old gear.

Aquila fucked around with this message at 23:54 on Sep 5, 2013

Aquila
Jan 24, 2003

Agrikk posted:

Unless your finance department is so retarded that they forget to forecast for planned obsolescence, so they choose to get gouged on support contracts instead of getting gouged even harder on new gear purchases.


"Pay $10k for support on eight year old kit instead of paying $40k for new gear and new warranty support? We save $30k! Aren't we awesome!"

When we moved into our new offices, I named our SSID "Sparky" because that's how I referred to the gear I kept patching together. The guest SSID was called "Smoky". Same reason.

I thought "Downtime", "Blown Capacitors" and "Performance Issues" were too obvious.

A friend I used to work with always wanted to name a Netapp "uncorrectablereaderrorer"

e: btw Hitachi snapshotting / api / command line / horcm can blow me.

Aquila
Jan 24, 2003

Oops. I let my san consultant set up my FC switches and now I don't know how to let new servers talk to the san.

Aquila
Jan 24, 2003

I'm looking for something for nearline local backups for our systems, mostly db backups. I'm thinking 3-6u, one or two boxes, 20-40tb usable, bonded GbE or 10GbE connected, with nfs and/or rsync, ftp, etc. transfer. While we have a lot of in-house expertise rolling just this kind of solution ourselves, I'm hoping for something very turnkey and reliable while not being horrendously expensive; moderately expensive is potentially ok. We already have a hitachi fc san for db's and vm's, but its file options appear to be so bad we're not even considering them (and they gave us a free file module).

Aquila
Jan 24, 2003

adorai posted:

An oracle zfssa with two heads and two shelves can get you 30TB usable easily, have great performance, fit in 6u, and cost well under $100k. It will support CIFS, NFS, RSYNC, ZFS replication, FTP, SFTP, and pretty much anything else that is unix-y. It comes with enough CPU that you can do on-the-fly compression and actually improve your performance as you reduce IO.

HA pair of controllers with 10GBe
48 disks:
2x SSD for ZIL
46x 2.5 1TB 7200 RPM disk for storage:
-2x 2.5 1TB 7200 RPM disk as spares
-4x vdevs of 9 data and 2 parity disks gives you ~32TB usable

It will be flash accelerated and lightning fast while reasonably inexpensive ($30k for the controllers, $20k ish for each shelf with disks, $10k for the write cache). You can get an identical unit and replicate offsite with the built in replication features. The biggest problem is you will like it so much you will start using it for more than you originally thought (this happened to us and we filled it up pretty quickly).

That's way overkill for this application; our san is already running an insanely expensive ssd tier. I'm more interested in a backup appliance that accepts data and holds it safely, while letting me access it relatively quickly (copy on and off, not running anything off it). I don't plan on running db's or vm's on it. That said, if we go the roll-our-own route we'd probably end up building this exact thing ourselves (we've done it before). Either way we probably want something more like 3.5" nearline enterprise sata drives, which can usually go 12-16 in a 3u chassis. I'm ballparking that we can roll our own for about $15k each with 4tb enterprise-y drives, so I'm probably going to limit a commercial solution to about twice that ($60k total).
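Rough capacity math on the roll-our-own option, assuming RAID6 plus one hot spare per box (the layout is a guess at this point):

12 bays: 12 - 2 parity - 1 spare = 9 data x 4TB = 36TB raw
16 bays: 16 - 2 parity - 1 spare = 13 data x 4TB = 52TB raw

Either lands in or above the 20-40TB usable target even after filesystem overhead.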

Aquila
Jan 24, 2003

adorai posted:

drop the HA and I bet you can do it for under $60k. You'd have to talk to a sales guy to verify.

Hmmm ok, though the guy who would be managing this would probably have an aneurysm if we buy oracle storage gear; he was part of a team that made the original thumpers work.

Aquila
Jan 24, 2003

MC Fruit Stripe posted:

Seriously, if anyone hasn't had the pleasure of an EMC quote, it is so depressing how much bullshit they try to throw on a quote - and we're talking tens of thousands of dollars worth of useless poo poo. You think buying a car is full of landmines, buy a SAN.

That was my experience with Hitachi as well.

gently caress sans.

Aquila
Jan 24, 2003

In a major change of direction my Hitachi HUS 150 did something just as advertised: online firmware upgrade. We really didn't notice at all at the server or application layer. Really quite slick.

Also we upgraded firmware on our brocade 6510's with them online. Not like failing one path, like totally online. Black magic is all we can figure on that one.

Aquila
Jan 24, 2003

Bitch Stewie posted:

We're probably about to go for a HUS 110.

Feedback so far is that they're dull boring fuckers with a slightly nasty management interface but they just work with no real fuss - any feedback would be great thanks :)

Ok, there's a bunch of things to consider here. Are you running VMware? Windows? Linux? You'll probably be a lot happier with the first two. Do you expect Thin Provisioning to work? Work well? Whatever you expect or have been sold there, get it in writing. Do you expect NetApp-like snapshotting and usability? Well, you're not getting it. Are you paying for Hitachi Command Suite? Snapshot Manager? Are you going to use a file module (lol, it's BlueArc btw)? Best practices on the file module are kinda hilarious. Are you planning on using Dynamic Tiering? Don't worry, that seems to work pretty well. Do you think you can magically turn your HUS into a HUS VM? You can't.

I'm not telling you not to get one, just that depending on a bunch of things it could be a source of pain. Well, more annoyance than pain. I will say our HUS 150 has never gone down and never lost data. Performance has been excellent too, but we have an OMG expensive SSD tier.

Aquila
Jan 24, 2003

If I need basic HA NFS, what are my options beyond Netapp? Also, what's the current basic Netapp model for this, FAS2500? I'm looking to get ~20TB usable in ~6U; 3.5" nearline sas is ok. No flash (besides nvram cache type stuff) or other fancy high performance stuff.

Aquila
Jan 24, 2003

I am now the person that thinks $26k for a new Netapp is a deal. What have I become.

Aquila
Jan 24, 2003

So on the 24-drive 2554, how many drives do I lose to this thing you don't really explain at all? (I'm guessing it's operating system use?)

Aquila
Jan 24, 2003

NippleFloss posted:

On 7-mode you don't really lose any drives to anything you wouldn't expect. 2 drives to parity for each raid group and however many hot spares you want to keep. The problem is that FAS systems are active/active so you have to divvy those drives up between two controllers and spares aren't global. So you have at least two raid groups (one per controller) and two hot spares (one per controller) on a system where you're following best practices. That's six drives gone, but it's just to expected parity and spare overhead, not any operating system use (you do lose 10% usable capacity for WAFL reserve, but that doesn't kill your spindle count at least).

On CDOT it's much worse because you need a 3 drive node root aggregate for each node and that aggregate cannot hold user data. So you lose six drives to that, plus parity drives for your user data aggregate, plus hot spares...you basically end up losing half of your 24 drives just to various overhead. 8.3 addresses this somewhat by slicing a partition off of each disk to create the node root aggregate, meaning you don't need to dedicate 3 drives to it.

Ok, that's more what I was expecting. I haven't used a Netapp since the good old FAS960/980, a number of which I installed and which are probably still cranking out NFS, and will probably continue to do so until the heat death of the universe. I'm fine losing 3 drives per 12-drive set; that puts me somewhere around 36TB with 2TB drives, pre-wafl, formatting, etc, so if I get 30TB usable out of this system I will be happy.
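Rough math on the 24-drive 2554 in 7-mode, assuming one raid group and one hot spare per controller as described above:

24 drives - (2 x 2 parity) - (2 x 1 spare) = 18 data drives
18 x 2TB = 36TB raw
minus the ~10% WAFL reserve and formatting overhead, call it low-30s TB usable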

Aquila
Jan 24, 2003

devmd01 posted:

Oh god the change control is in for 1 am Sunday. Babbys first fiber switch zoning teardown and rebuild. Here's hoping I don't take down access to the compellent!

gently caress off-hours windows. I've probably seen more outages caused by exhausted and unavailable technical staff than by anything else during maintenance windows.

Aquila
Jan 24, 2003

Dilbert As gently caress posted:

Then again I admin Netapp, VNX, VNXe, Nimble, Nutanix, vsan, and Nexenta.

All SAN models and makes will be spelled with only N's, X's, and V's in the near future.

Aquila
Jan 24, 2003

Richard Noggin posted:

This, 1000x this. Unless you have a team with vendor-level knowledge of the product and keep spares, this is gospel.

This is why I bought a Hitachi SAN. In many ways it's been a nightmare, but it keeps serving my data, keeps the company up, and helps me keep my job.

Aquila
Jan 24, 2003

Netapp FAS 2554 installed and NFS'ing in maybe 2 hours. Compared to my Hitachi this thing is so easy to use.

e: I may go to 16Gb FC with my next Hitachi SAN if the stars align. As much for latency as throughput.

Aquila
Jan 24, 2003

Seventh Arrow posted:

I work in the NAS department for a bank and we've been a NetApp shop ever since we've had NAS (we have about 30 filers) - except now we're going to be switching to Hitachi's HNAS platform. It's kind of interesting; I never even knew Hitachi had a NAS solution. We've done some of the training so far and it's kind of cool to see a different architecture, because the two are quite different. So far, though, it seems like Hitachi EVS = NetApp vfilers and Hitachi virtual volumes = NetApp qtrees. This is important, because qtrees are our bread and butter. Some of our old qtrees have grown into disorganized monstrosities, so this might be a good chance to migrate some stuff into a more orderly setup.

I'm so sorry. I only used a Hitachi FC SAN once; they threw in a free HNAS head but we declined to use it or even take it out of the box. The best practices that our Hitachi consultant described were ridiculous at our scale. Administration of HDS filers is so hilariously bad compared to a Netapp that you're probably going to want to kill yourself soon.

Aquila
Jan 24, 2003

https://sadlock.org/

Unrelated: Anyone have any horror stories for medium to high end IBM sans? Or tricks to make them less horrible?

Aquila
Jan 24, 2003

cheese-cube posted:

I miss working with SVC/V7000 kit :(

I guess it's a V9000; I don't come in contact with it at all (I'm a hadoop admin now!). The stories I hear about it make me miss working on Hitachi SANs though.

Aquila
Jan 24, 2003

Lol, last week I saw a WD Enterprise Storage ad up in an REI; it was very out of place.
