Aquila
Jan 24, 2003

I am now the person that thinks $26k for a new Netapp is a deal. What have I become.

Muslim Wookie
Jul 6, 2005
The FAS2552 is a POS. I can't believe they allowed it to exist.

Best case scenario, active/passive pair, 12 disks:

4 for Controller one.
4 for Controller two.
5 disks =
2 parity, 3 data.

3 disks worth of IO woooo yay. Also, let's ship our enterprise class storage arrays with RC level operating systems! RIDE EM COWBOY!!!! Thank god I don't ever have to deal with clients on that level.

Having said that, lol at all the talk of Nimble in here. Lots of fake storage architects doing the rounds here. And I've seen mentions of nearline SAS as if it's actually deserving of a unique name, crazy!

But anyway, who's looking forward to Insight? I can't wait, shame there won't be any interesting announcements this year though. But hey, free certs!

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Muslim Wookie posted:

The FAS2552 is a POS. I can't believe they allowed it to exist.

Best case scenario, active/passive pair, 12 disks:

4 for Controller one.
4 for Controller two.
5 disks =
2 parity, 3 data.

Best case is active/passive on 7-mode with the passive node running a RAID-4 aggregate for the root vol and nothing else, and the other 10 disks in a RAID-DP aggregate on the second node, which leaves 7 or 8 data disks depending on whether you decide to keep a hot spare or not.
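As a quick sketch of that best case (the 2-disk RAID-4 root aggregate on the passive node is an assumed minimum here; exact minimums depend on your ONTAP version and disk type):

```python
# Best-case 7-mode layout for a 12-disk active/passive pair (illustrative).
# Assumes a minimal 2-disk RAID-4 root aggregate on the passive node and
# a single RAID-DP aggregate on the active node.

TOTAL_DISKS = 12
passive_root = 2                        # RAID-4 root: 1 parity + 1 data
remaining = TOTAL_DISKS - passive_root  # 10 disks left for the active node
RAID_DP_PARITY = 2                      # RAID-DP burns 2 parity disks per group

def data_disks(disks, hot_spares):
    """Disks left for data after RAID-DP parity and hot spares."""
    return disks - RAID_DP_PARITY - hot_spares

print(data_disks(remaining, hot_spares=1))  # keep a spare -> 7 data disks
print(data_disks(remaining, hot_spares=0))  # no spare     -> 8 data disks
```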

Disk slicing on 8.3 is meant to address that problem somewhat.

RC code is fully tested, it simply hasn't been in the field long enough to qualify as GA. New platforms sometimes require ONTAP updates to support the new hardware (chipsets, CPU features) and these will ship with RC releases because they can't go GA until they have accrued a certain level of adoption in the field. That's literally what GA means, that it has reached "General Adoption".

If Nimble wasn't competitive NetApp SEs wouldn't spend so much time wringing their hands over it and trading strategies to beat them. I think Nimble talk probably came up more in competitive chat than any other single vendor.

JockstrapManthrust
Apr 30, 2013

Muslim Wookie posted:

The FAS2552 is a POS. I can't believe they allowed it to exist.

Best case scenario, active/passive pair, 12 disks:

4 for Controller one.
4 for Controller two.
5 disks =
2 parity, 3 data.

3 disks worth of IO woooo yay. Also, let's ship our enterprise class storage arrays with RC level operating systems! RIDE EM COWBOY!!!! Thank god I don't ever have to deal with clients on that level.

Having said that, lol at all the talk of Nimble in here. Lots of fake storage architects doing the rounds here. And I've seen mentions of nearline SAS as if it's actually deserving of a unique name, crazy!

But anyway, who's looking forward to Insight? I can't wait, shame there won't be any interesting announcements this year though. But hey, free certs!

The 2520 is the one with 12 drives in the head :/ Until 8.3 is out, anything with a built-in single shelf is a sad joke. I'm looking forward to getting it on our 8040 and reclaiming some disks if I can. That, and the redone dedupe/compression engine.

YOLOsubmarine
Oct 19, 2004

JockstrapManthrust posted:

The 2520 is the one with 12 drives in the head :/ Until 8.3 is out, anything with a built-in single shelf is a sad joke. I'm looking forward to getting it on our 8040 and reclaiming some disks if I can. That, and the redone dedupe/compression engine.

Disk slicing will only be supported for 2000 series and flash pool in 8.3. Support for other platforms will come at some unspecified later date.

JockstrapManthrust
Apr 30, 2013
Good to know, thanks. They are so tight-lipped with such information; hopefully they are more forthcoming at Insight in Berlin.

Aquila
Jan 24, 2003

So on the 24-drive 2554, how many drives do I lose to this thing you don't really explain at all? (But I'm guessing it's operating system use?)

YOLOsubmarine
Oct 19, 2004

Aquila posted:

So on the 24-drive 2554, how many drives do I lose to this thing you don't really explain at all? (But I'm guessing it's operating system use?)

On 7-mode you don't really lose any drives to anything you wouldn't expect. 2 drives to parity for each raid group and however many hot spares you want to keep. The problem is that FAS systems are active/active so you have to divvy those drives up between two controllers and spares aren't global. So you have at least two raid groups (one per controller) and two hot spares (one per controller) on a system where you're following best practices. That's six drives gone, but it's just to expected parity and spare overhead, not any operating system use (you do lose 10% usable capacity for WAFL reserve, but that doesn't kill your spindle count at least).

On CDOT it's much worse because you need a 3 drive node root aggregate for each node and that aggregate cannot hold user data. So you lose six drives to that, plus parity drives for your user data aggregate, plus hot spares...you basically end up losing half of your 24 drives just to various overhead. 8.3 addresses this somewhat by slicing a partition off of each disk to create the node root aggregate, meaning you don't need to dedicate 3 drives to it.
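The cDOT accounting above, as a quick sketch (counts follow the post; one hot spare per node is assumed):

```python
# Rough overhead sketch for a 24-disk 2554 on cDOT pre-8.3 (illustrative).
# Follows the per-node accounting described above: dedicated root aggregates,
# per-node spares (spares aren't global), and RAID-DP parity per node.

TOTAL = 24
NODES = 2
root_aggr = 3 * NODES   # dedicated 3-disk node root aggregate per node
spares = 1 * NODES      # one hot spare per node (assumed)
parity = 2 * NODES      # RAID-DP parity for one data aggregate per node

data = TOTAL - root_aggr - spares - parity
print(data)             # 12 -> roughly half the shelf left for data
```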

Aquila
Jan 24, 2003

NippleFloss posted:

On 7-mode you don't really lose any drives to anything you wouldn't expect. 2 drives to parity for each raid group and however many hot spares you want to keep. The problem is that FAS systems are active/active so you have to divvy those drives up between two controllers and spares aren't global. So you have at least two raid groups (one per controller) and two hot spares (one per controller) on a system where you're following best practices. That's six drives gone, but it's just to expected parity and spare overhead, not any operating system use (you do lose 10% usable capacity for WAFL reserve, but that doesn't kill your spindle count at least).

On CDOT it's much worse because you need a 3 drive node root aggregate for each node and that aggregate cannot hold user data. So you lose six drives to that, plus parity drives for your user data aggregate, plus hot spares...you basically end up losing half of your 24 drives just to various overhead. 8.3 addresses this somewhat by slicing a partition off of each disk to create the node root aggregate, meaning you don't need to dedicate 3 drives to it.

Ok, that's more what I was expecting. I haven't used a Netapp since the good old FAS960/980, a number of which I installed are probably still cranking out NFS and will probably continue to do so until the heat death of the universe. I'm fine losing 3 drives per 12-drive set; that puts me somewhere around 36TB with 2TB drives, pre-WAFL, formatting, etc, so if I get 30TB usable out of this system I will be happy.
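Roughly, the back-of-envelope math (the 10% WAFL reserve comes from the post above; the additional ~7% right-sizing/formatting overhead is an assumption for illustration):

```python
# Back-of-envelope usable capacity for a 24 x 2TB 2554 (illustrative).
# Overhead factors are assumptions; real right-sizing and reserves vary.

drives, drive_tb = 24, 2.0
overhead_per_12 = 3                       # parity + spare per 12-drive set
data_drives = drives - overhead_per_12 * (drives // 12)
raw_tb = data_drives * drive_tb
usable_tb = raw_tb * 0.90 * 0.93          # ~10% WAFL reserve, ~7% formatting

print(data_drives, raw_tb, round(usable_tb, 1))  # 18 drives, 36TB raw, ~30TB
```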

qutius
Apr 2, 2003
NO PARTIES

Muslim Wookie posted:

But anyway, who's looking forward to Insight? I can't wait, shame there won't be any interesting announcements this year though. But hey, free certs!

I won't be going, but quite a few colleagues will be there.

From my understanding, the first cert is free. After that, they are half off.

Erwin
Feb 17, 2006

Anybody have a justified opinion on cMLC vs eMLC in the flash space? Tegile's big claim against Nimble is that they use enterprise flash drives, whereas Nimble uses consumer grade flash drives, and they claim that Nimble's performance will degrade over time because of it. I don't really buy it and am viewing these competing arrays as layers of abstraction over arbitrary storage hardware. I don't care too much about the underlying hardware, only about the usable capacity and expected IOPS.

Every article on eMLC vs cMLC is from a storage company that uses one or the other, and therefore has an opinion one way or the other.

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE
Look at http://techreport.com/review/26523/the-ssd-endurance-experiment-casualties-on-the-way-to-a-petabyte
"Consumer" drives are going to last a hell of a long time; I wouldn't be concerned about it.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Erwin posted:

Every article on eMLC vs cMLC is from a storage company that uses one or the other, and therefore has an opinion one way or the other.
The SSDs are cheap, and you can buy the spares yourself. If it is a concern, just replace them every year.

CrazyLittle
Sep 11, 2001

Clapping Larry

adorai posted:

you can buy the spares yourself.

Buy spare SSDs from Nimble, or buy off the shelf drives?

adorai
Nov 2, 2002

CrazyLittle posted:

Buy spare SSDs from Nimble, or buy off the shelf drives?

Spares from nimble are cheap, no idea if you can just newegg the parts or if there is nimble specific firmware at play.

Erwin
Feb 17, 2006

adorai posted:

The SSDs are cheap, and you can buy the spares yourself. If it is a concern, just replace them every year.

Not that I would do this, but it can be a "yeah, but" to Tegile's argument. Thanks.

Mierdaan
Sep 14, 2004

Pillbug

adorai posted:

Spares from nimble are cheap, no idea if you can just newegg the parts or if there is nimble specific firmware at play.

How cheap is cheap? Like, I just quoted out a Compellent expansion shelf with 24x 200GB SSDs. $118k with software and services included :psyduck: Next time a Nimble rep calls me, I'm answering the damned phone.

devmd01
Mar 7, 2006

Elektronik
Supersonik
Oh god the change control is in for 1 am Sunday. Babbys first fiber switch zoning teardown and rebuild. Here's hoping I don't take down access to the compellent!

Aquila
Jan 24, 2003

devmd01 posted:

Oh god the change control is in for 1 am Sunday. Babbys first fiber switch zoning teardown and rebuild. Here's hoping I don't take down access to the compellent!

gently caress off hours windows. I've probably seen more outages from exhausted and unavailable technical staff than anything else during maintenance windows.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

adorai posted:

Spares from nimble are cheap, no idea if you can just newegg the parts or if there is nimble specific firmware at play.

From what our SE told me, they whitelist serial numbers so you have to go through them.

adorai
Nov 2, 2002

Mierdaan posted:

How cheap is cheap? Like, I just quoted out a Compellent expansion shelf with 24x 200GB SSDs. $118k with software and services included :psyduck: Next time a Nimble rep calls me, I'm answering the damned phone.
I can't find the price for a spare ssd, but a spare 3TB nearline 7200 rpm sas drive is under $1200, so probably about 5x the newegg price (but it includes a tray!). I think our controllers came with 4 SSDs each, I would guess the spares to replace them all each year would come in under $10k per controller. However, that's so overkill it's not even funny, just buy two spares and replace them when the controller tells you to.

bigmandan
Sep 11, 2001

lol internet
College Slice
After a few months of back and forth with a few vendors it looks like we'll be settling on two Dell Compellent SC4020 SANs. They'll be configured with two flash tiers and one platter tier for a total of ~25TB and around 17,000* sustained IOPS / 35,000 burst. Going with two as the owner wants to do replication (probably semi-sync). Since we're also getting some servers and switches we got some pretty drat good discounts.

I'm getting pretty excited. We had some disk failures not too long ago, so getting this up and running will give our team some peace of mind.

* ~1800 IOPS worst case if doing r/w from tier 3

YOLOsubmarine
Oct 19, 2004

bigmandan posted:

After a few months of back and forth with a few vendors it looks like we'll be settling on two Dell Compellent SC4020 SANs. They'll be configured with two flash tiers and one platter tier for a total of ~25TB and around 17,000* sustained IOPS / 35,000 burst. Going with two as the owner wants to do replication (probably semi-sync). Since we're also getting some servers and switches we got some pretty drat good discounts.

I'm getting pretty excited. We had some disk failures not too long ago, so getting this up and running will give our team some peace of mind.

* ~1800 IOPS worst case if doing r/w from tier 3

You can't really talk about IOPs in a vacuum like that. An IOP isn't an independent unit of measure like a liter or joule, it's wholly dependent on the workload involved. In a SAN environment the workloads are generally heavily mixed due to the IO blender effect, which makes it especially hard to discuss IOPs thresholds in any meaningful way. Even a single application workload like SQL can have very different IO profiles depending on what type of activity is being directed at it, so saying "this array will give you x number of SQL IOPS" is an over-simplification.
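For instance, a rough sketch of why a bare IOPS figure is ambiguous: the same operation count means wildly different throughput depending on block size (the 17,000 figure is just borrowed from the quote; everything here is illustrative):

```python
# The same "X IOPS" translates into very different MB/s depending on the
# block size each operation moves (and read/write mix shifts it further).

def throughput_mb_s(iops, block_kb):
    """MB/s delivered if every operation moves block_kb kilobytes."""
    return iops * block_kb / 1024

for block_kb in (4, 8, 64):
    print(f"17000 IOPS @ {block_kb}K blocks = "
          f"{throughput_mb_s(17000, block_kb):.0f} MB/s")
```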

bigmandan
Sep 11, 2001

NippleFloss posted:

You can't really talk about IOPs in a vacuum like that. An IOP isn't an independent unit of measure like a liter or joule, it's wholly dependent on the workload involved. In a SAN environment the workloads are generally heavily mixed due to the IO blender effect, which makes it especially hard to discuss IOPs thresholds in any meaningful way. Even a single application workload like SQL can have very different IO profiles depending on what type of activity is being directed at it, so saying "this array will give you x number of SQL IOPS" is an over-simplification.

Those numbers are based on our expected workloads with about 50/50 R/W ratio. Forgot to mention that.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NippleFloss posted:

You can't really talk about IOPs in a vacuum like that. An IOP isn't an independent unit of measure like a liter or joule, it's wholly dependent on the workload involved. In a SAN environment the workloads are generally heavily mixed due to the IO blender effect, which makes it especially hard to discuss IOPs thresholds in any meaningful way. Even a single application workload like SQL can have very different IO profiles depending on what type of activity is being directed at it, so saying "this array will give you x number of SQL IOPS" is an over-simplification.
While we're being anal-retentive, there's no such thing as an IOP. IOPS stands for Input/Output Operations Per Second, and "Input/Output Operations Per" doesn't make any sense. :mad:

Aside: Random IOPS in a spinning disk context made a lot more sense as a semi-standard unit of measure, because they were really just a reciprocal of the median disk seek time, which was the same whether reading or writing. With SSD, block size is way more important to these calculations than it used to be.
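As a sketch of that reciprocal relationship (the 4 ms average seek and 10k RPM figures are assumed for illustration, not from any particular drive):

```python
# Rough spinning-disk random IOPS estimate: the reciprocal of average
# access time (average seek + half a rotation). Figures are illustrative.

avg_seek_ms = 4.0                      # assumed average seek time
rpm = 10000
half_rotation_ms = (60000 / rpm) / 2   # 3 ms average rotational latency
iops = 1000 / (avg_seek_ms + half_rotation_ms)

print(round(iops))                     # ~143 random IOPS per spindle
```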

Vulture Culture fucked around with this message at 19:29 on Oct 10, 2014

YOLOsubmarine
Oct 19, 2004

Misogynist posted:

While we're being anal-retentive, there's no such thing as an IOP. IOPS stands for Input/Output Operations Per Second, and "Input/Output Operations Per" doesn't make any sense. :mad:

Aside: Random IOPS in a spinning disk context made a lot more sense as a semi-standard unit of measure, because they were really just a reciprocal of the median disk seek time, which was the same whether reading or writing. With SSD, block size is way more important to these calculations than it used to be.

This is one of those things I'm aware of intellectually, but when typing quickly, especially on a mobile device, my brain rebels against the disconnect between a singular ending in an S and wants to make it sound right.

And even in the case of rotational media, storage arrays use things like caching and readahead and write coalescing to hide the rotational latency involved in seek times. And you could still generate really high numbers by claiming "IOPS" for artificially small block sequential reads or something.

It's just sort of a pet peeve because after six years of storage engineering work I trigger when I see a statement about doing X IOPS with no context at all, because it usually meant that I was about to have a long and painful conversation about defining real requirements.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Turns out when your Compellent gets to 97% full the tiering doesn't work very well and everything slows to poo poo. Good thing we have such good monitoring in place that we were able to catch this before it became a problem :rolleyes:

adorai
Nov 2, 2002

FISHMANPET posted:

Turns out when your Compellent gets to 97% full the tiering doesn't work very well and everything slows to poo poo. Good thing we have such good monitoring in place that we were able to catch this before it became a problem :rolleyes:
most sans suck cock when they get above 90%.

Vulture Culture
Jul 14, 2003

adorai posted:

most sans suck cock when they get above 90%.
Not sure whether you're gay-shaming or slut-shaming your SAN or just implying that your SAN is spending company time on activities it ought not to

adorai
Nov 2, 2002

Misogynist posted:

Not sure whether you're gay-shaming or slut-shaming your SAN or just implying that your SAN is spending company time on activities it ought not to
when full, it's spending more company time than it should servicing data requests.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Misogynist posted:

Not sure whether you're gay-shaming or slut-shaming your SAN or just implying that your SAN is spending company time on activities it ought not to

nah just that operating near capacity brings most sans to their knees

Richard Noggin
Jun 6, 2005
Redneck By Default

PCjr sidecar posted:

nah just that operating near capacity brings most sans to their knees

They're just not meant to handle a load like that.

socialsecurity
Aug 30, 2003
Probation
Can't post for 17 hours!
I have an EMC problem that has been bugging me for weeks. I have a Celerra NX4 EMC box. I have set up iSCSI targets on it and NOTHING can connect. I have tried setting up CHAP, which didn't help; I have tried putting it on its own network; I have tried accessing it from every Windows variant and even ESXi. I set up an iSNS server and it can see the targets, but when anyone tries to connect to them they just time out. Is there some trick to these that I am missing?

Erwin
Feb 17, 2006

I have quotes from EMC, Nimble, and Tegile for 40TB usable for VMware and SQL storage. Nimble and Tegile are right in the same price range, and EMC is lower, and they've even included two fibre channel switches in their price, covered under EMC support (they've included 4 years of support). If nothing else, they know how to bargain.

EMC quoted a VNX 5200 vs. Nimble's CS300. I really like the ease of the Nimble since I'm the everything admin and don't want to do too much storage janitoring if I can help it. I also think the Nimble will outperform the VNX (much more flash, although the VNX has a 10k tier vs the Nimble's all NL-SAS). Problem is, not only is EMC's price lower, but I have to add the cost of 10 gig switches and NICs to Nimble's cost to do a proper price comparison.

Anybody have a VNX 5200? The interface isn't great, but it seems like I can mostly ignore it once it's set up. I'm sure it'll outperform our current storage ten-fold, but am I going to regret not going Nimble or Tegile when it seems I'm perfectly situated in the sweet spot for their products?

Bitch Stewie
Dec 17, 2011

Erwin posted:

I have quotes from EMC, Nimble, and Tegile for 40TB usable for VMware and SQL storage. Nimble and Tegile are right in the same price range, and EMC is lower, and they've even included two fibre channel switches in their price, covered under EMC support (they've included 4 years of support). If nothing else, they know how to bargain.

EMC quoted a VNX 5200 vs. Nimble's CS300. I really like the ease of the Nimble since I'm the everything admin and don't want to do too much storage janitoring if I can help it. I also think the Nimble will outperform the VNX (much more flash, although the VNX has a 10k tier vs the Nimble's all NL-SAS). Problem is, not only is EMC's price lower, but I have to add the cost of 10 gig switches and NICs to Nimble's cost to do a proper price comparison.

Anybody have a VNX 5200? The interface isn't great, but it seems like I can mostly ignore it once it's set up. I'm sure it'll outperform our current storage ten-fold, but am I going to regret not going Nimble or Tegile when it seems I'm perfectly situated in the sweet spot for their products?

What's your use case and environment?

Nimble is iSCSI only, Tegile and EMC are unified so that's clearly a difference in potential functionality.

How many hosts? With Nimble or anything NAS you need switches (well, most of the time) even if you only have a couple of hosts; if you go FC you could possibly direct connect and save a good few grand.

With Tegile and Nimble you can only expand a full shelf at a time IIRC so the moment you need that 1TB of additional capacity it's gonna cost a lot vs. EMC where you add a shelf and the disks you need.

YOLOsubmarine
Oct 19, 2004

Bitch Stewie posted:

With Tegile and Nimble you can only expand a full shelf at a time IIRC so the moment you need that 1TB of additional capacity it's gonna cost a lot vs. EMC where you add a shelf and the disks you need.

This is a pro and a con at the same time. The extra flexibility makes things a mite more complicated, as you have to figure out how to optimally size and grow your raid groups as well as what raid types to use. Full shelf additions mean that you don't think about that, as each shelf is sold with the optimal number of disks to expand and will be automatically configured by the system in the appropriate ways. You don't get into the bin packing problems you can get with small non-uniform additions leaving capacity and performance islands.
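A toy illustration of that bin-packing effect (group sizes are made up; the 2-parity-per-group figure is RAID-DP per the thread):

```python
# Small, non-uniform additions tend to produce uneven RAID groups, which
# burn proportionally more disks on parity than uniform full-shelf growth.
# Group sizes below are invented purely for illustration.

def parity_overhead(group_sizes, parity_per_group=2):
    """Fraction of total disks consumed by parity across RAID-DP groups."""
    return parity_per_group * len(group_sizes) / sum(group_sizes)

print(f"one uniform 24-disk group: {parity_overhead([24]):.0%} parity")
print(f"piecemeal 14+6+4 growth:   {parity_overhead([14, 6, 4]):.0%} parity")
```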

The VNX also has some limitations on which features can be used with pools versus traditional LUNs, which adds more complexity.

I'd do Nimble out of those three options. It will handle small random IO well, which is what you'll see with SQL and VMware.

Erwin
Feb 17, 2006

Bitch Stewie posted:

What's your use case and environment?

Nimble is iSCSI only, Tegile and EMC are unified so that's clearly a difference in potential functionality.

How many hosts? With Nimble or anything NAS you need switches (well, most of the time) even if you only have a couple of hosts; if you go FC you could possibly direct connect and save a good few grand.

With Tegile and Nimble you can only expand a full shelf at a time IIRC so the moment you need that 1TB of additional capacity it's gonna cost a lot vs. EMC where you add a shelf and the disks you need.

VMware, like 8-10TB of SQL data (not that heavily hit), and 20TB everything else, but usage isn't ridiculously heavy. 3 hosts, with a 4th planned right after the new storage. I only need block, and prefer iSCSI, but if EMC wants to give me FC switches because they can't do iSCSI as well, that's fine with me.


NippleFloss posted:

I'd do Nimble out of those three options. It will handle small random IO well, which is what you'll see with SQL and VMware.

Yeah, if prices were identical, it'd be Nimble for sure, but there's a big price skew at the moment. EMC sized the last quote wrong, so they're requoting and that may make it closer, but since they're including fibre channel switches, Nimble is still at something like a $10k disadvantage.

Would you do Nimble with 1gig iSCSI over the VNX 5200 on 8gig FC?

Rated PG-34
Jul 1, 2004

Is QNAP any good? Our lab with its measly budget needs to archive about 20T of data, and we're considering purchasing one of these and hooking it up to a desktop (probably via iSCSI): http://amzn.com/B00AUHZV0C

Nukelear v.2
Jun 25, 2004
My optional title text

Erwin posted:

Would you do Nimble with 1gig iSCSI over the VNX 5200 on 8gig FC?

I'd say no dice without 10GigE.

It sounds like a pretty similar build to what we have now. Was planning my next rebuild using Nimble, but now I'm really looking at going the hyper converged route and doing it all on a Nutanix. Simpler, faster, etc. Something to think about.

MrMoo
Sep 14, 2000

Rated PG-34 posted:

Is QNAP any good? Our lab with its measly budget needs to archive about 20T of data, and we're considering purchasing one of these and hooking it up to a desktop (probably via iSCSI): http://amzn.com/B00AUHZV0C

The low-end brands are Synology, QNAP, Thecus, and ReadyNAS. Apart from the last, which has pretty much jumped the shark under Netgear, they are all fairly competitive with each other, with frequent updates.
