|
I am now the person that thinks $26k for a new NetApp is a deal. What have I become.
|
# ? Oct 4, 2014 15:10 |
|
|
|
The FAS2552 is a POS. I can't believe they allowed it to exist. Best case scenario, active/passive pair, 12 disks: 4 for Controller one. 4 for Controller two. 5 disks = 2 parity, 3 data. 3 disks worth of IO woooo yay. Also, let's ship our enterprise class storage arrays with RC level operating systems! RIDE EM COWBOY!!!! Thank god I don't ever have to deal with clients on that level. Having said that, lol at all the talk of Nimble in here. Lots of fake storage architects doing the rounds here. And I've seen mentions of nearline sas as if it's actually deserving of a unique name, crazy! But anyway, who's looking forward to Insight? I can't wait, shame there won't be any interesting announcements this year though. But hey, free certs!
|
# ? Oct 4, 2014 15:55 |
|
Muslim Wookie posted:The FAS2552 is a POS. I can't believe they allowed it to exist. Best case is active passive on 7-mode with the passive node running a raid-4 aggregate for the root vol and nothing else, and the other 10 disks in a raid-dp aggregate on the second node, which leaves 7 or 8 data disks depending on whether you decide to keep a hot spare or not. Disk slicing on 8.3 is meant to address that problem somewhat. RC code is fully tested, it simply hasn't been in the field long enough to qualify as GA. New platforms sometimes require ONTAP updates to support the new hardware (chipsets, CPU features) and these will ship with RC releases because they can't go GA until they have accrued a certain level of adoption in the field. That's literally what GA means, that it has reached "General Adoption". If Nimble wasn't competitive NetApp SEs wouldn't spend so much time wringing their hands over it and trading strategies to beat them. I think Nimble talk probably came up more in competitive chat than any other single vendor.
|
# ? Oct 4, 2014 19:14 |
|
Muslim Wookie posted:The FAS2552 is a POS. I can't believe they allowed it to exist. The 2520 is the one with 12 drives in the head :/ Until 8.3 is out, anything with a built-in single shelf is a sad joke. I'm looking forward to getting it on our 8040 and reclaiming some disks if I can. That and the redone dedupe/compression engine.
|
# ? Oct 4, 2014 19:37 |
|
JockstrapManthrust posted:The 2520 is the one with 12 drives in the head :/ Until 8.3 is out, anything with a built in single shelf is a sad joke. I'm looking forward to getting it on our 8040 and reclaiming some disks if I can. That and the redone dedupe/compression engine. Disk slicing will only be supported for 2000 series and flash pool in 8.3. Support for other platforms will come at some unspecified later date.
|
# ? Oct 4, 2014 19:56 |
|
Good to know, thanks. They are so tight-lipped with such information; hopefully they are more forthcoming at Insight in Berlin.
|
# ? Oct 4, 2014 20:24 |
|
So on the 24 drive 2554 how many drives do I lose to this thing you don't really explain at all? (but I'm guessing is operating system use?)
|
# ? Oct 5, 2014 06:33 |
|
Aquila posted:So on the 24 drive 2554 how many drives do I lose to this thing you don't really explain at all? (but I'm guessing is operating system use?) On 7-mode you don't really lose any drives to anything you wouldn't expect. 2 drives to parity for each raid group and however many hot spares you want to keep. The problem is that FAS systems are active/active so you have to divvy those drives up between two controllers and spares aren't global. So you have at least two raid groups (one per controller) and two hot spares (one per controller) on a system where you're following best practices. That's six drives gone, but it's just to expected parity and spare overhead, not any operating system use (you do lose 10% usable capacity for WAFL reserve, but that doesn't kill your spindle count at least).

On CDOT it's much worse because you need a 3 drive node root aggregate for each node and that aggregate cannot hold user data. So you lose six drives to that, plus parity drives for your user data aggregate, plus hot spares... you basically end up losing half of your 24 drives just to various overhead.

8.3 addresses this somewhat by slicing a partition off of each disk to create the node root aggregate, meaning you don't need to dedicate 3 drives to it.
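As a back-of-envelope sketch of the drive accounting described above (assuming one RAID-DP group and one hot spare per controller; the numbers are illustrative, not NetApp sizing guidance):

```python
# Hypothetical drive accounting for a 24-drive FAS2554 HA pair,
# following the overheads described in the post above.

def usable_data_drives(total_drives, mode):
    """Data drives left after parity, spare, and root-aggregate overhead."""
    parity = 2 * 2    # one RAID-DP group per controller, 2 parity drives each
    spares = 2 * 1    # one hot spare per controller (spares aren't global)
    if mode == "7-mode":
        root = 0      # root volume lives on the data aggregate
    elif mode == "cdot":
        root = 2 * 3  # dedicated 3-drive root aggregate per node (pre-8.3)
    else:
        raise ValueError(mode)
    return total_drives - parity - spares - root

print(usable_data_drives(24, "7-mode"))  # 18 data drives
print(usable_data_drives(24, "cdot"))    # 12: half the shelf gone to overhead
```

With 18 data drives of 2TB each and the 10% WAFL reserve taken off the top, that lands in the low 30s of TB usable, which lines up with the rough expectations elsewhere in the thread.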
|
# ? Oct 5, 2014 07:07 |
|
NippleFloss posted:On 7-mode you don't really lose any drives to anything you wouldn't expect. 2 drives to parity for each raid group and however many hot spares you want to keep. The problem is that FAS systems are active/active so you have to divvy those drives up between two controllers and spares aren't global. So you have at least two raid groups (one per controller) and two hot spares (one per controller) on a system where you're following best practices. That's six drives gone, but it's just to expected parity and spare overhead, not any operating system use (you do lose 10% usable capacity for WAFL reserve, but that doesn't kill your spindle count at least). Ok, that's more what I was expecting. I haven't used a NetApp since the good old FAS960/980, a number of which I installed are probably still cranking out NFS and will probably continue to do so until the heat death of the universe. I'm fine losing 3 drives per 12-drive set; that puts me somewhere around 36TB with 2TB drives, pre-WAFL, formatting, etc, so if I get 30TB usable out of this system I will be happy.
|
# ? Oct 5, 2014 07:39 |
|
Muslim Wookie posted:But anyway, who's looking forward to Insight? I can't wait, shame there won't be any interesting announcements this year though. But hey, free certs! I won't be going, but quite a few colleagues will be there. From my understanding, the first cert is free. After that, they are half off.
|
# ? Oct 5, 2014 22:46 |
|
Anybody have a justified opinion on cMLC vs eMLC in the flash space? Tegile's big claim against Nimble is that they use enterprise flash drives, whereas Nimble uses consumer grade flash drives, and they claim that Nimble's performance will degrade over time because of it. I don't really buy it and am viewing these competing arrays as layers of abstraction over arbitrary storage hardware. I don't care too much about the underlying hardware, only about the usable capacity and expected IOPS. Every article on eMLC vs cMLC is from a storage company that uses one or the other, and therefore has an opinion one way or the other.
|
# ? Oct 6, 2014 17:04 |
|
Look at http://techreport.com/review/26523/the-ssd-endurance-experiment-casualties-on-the-way-to-a-petabyte "Consumer" drives are going to last a hell of a long time; I wouldn't be concerned about it.
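For rough context on why consumer flash holds up, here is the usual back-of-envelope endurance arithmetic. The P/E cycle counts and write-amplification factor are generic ballpark assumptions, not specs for any particular drive:

```python
# Rated lifetime host writes for an SSD, from capacity and P/E cycles.
# Ballpark assumptions: consumer MLC ~3,000 cycles, eMLC ~10,000,
# and a write amplification factor of 2.

def lifetime_writes_tb(capacity_gb, pe_cycles, write_amp=2.0):
    """Approximate total host writes (TB) before rated NAND wear-out."""
    return capacity_gb * pe_cycles / write_amp / 1000

cmlc = lifetime_writes_tb(400, 3_000)   # ~600 TB of host writes
emlc = lifetime_writes_tb(400, 10_000)  # ~2,000 TB of host writes
print(cmlc, emlc)
print(cmlc / 365)  # years of life at a sustained 1 TB of writes per day
```

Even under the consumer-grade assumptions, a cache tier absorbing a full 1 TB of writes per day gets over a year and a half per drive, and most caching workloads are gentler than that.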
|
# ? Oct 6, 2014 22:27 |
|
Erwin posted:Every article on eMLC vs cMLC is from a storage company that uses one or the other, and therefore has an opinion one way or the other. The SSDs are cheap, and you can buy the spares yourself. If it is a concern, just replace them every year.
|
# ? Oct 7, 2014 00:01 |
|
adorai posted:you can buy the spares yourself. Buy spare SSDs from Nimble, or buy off the shelf drives?
|
# ? Oct 7, 2014 05:20 |
|
CrazyLittle posted:Buy spare SSDs from Nimble, or buy off the shelf drives? Spares from Nimble are cheap; no idea if you can just newegg the parts or if there is Nimble-specific firmware at play.
|
# ? Oct 7, 2014 05:52 |
|
adorai posted:The SSDs are cheap, and you can buy the spares yourself. If it is a concern, just replace them every year. Not that I would do this, but it can be a "yeah, but" for Tegile's argument. Thanks.
|
# ? Oct 7, 2014 14:48 |
|
adorai posted:Spares from nimble are cheap, no idea if you can just newegg the parts or if there is nimble specific firmware at play. How cheap is cheap? Like, I just quoted out a Compellent expansion shelf with 24x 200GB SSDs. $118k with software and services included. Next time a Nimble rep calls me, I'm answering the damned phone.
|
# ? Oct 7, 2014 14:58 |
|
Oh god the change control is in for 1 am Sunday. Babbys first fiber switch zoning teardown and rebuild. Here's hoping I don't take down access to the Compellent!
|
# ? Oct 7, 2014 15:01 |
|
devmd01 posted:Oh god the change control is in for 1 am Sunday. Babbys first fiber switch zoning teardown and rebuild. Here's hoping I don't take down access to the compellent! gently caress off-hours windows. I've probably seen more outages from exhausted and unavailable technical staff than anything else during maintenance windows.
|
# ? Oct 7, 2014 16:41 |
|
adorai posted:Spares from nimble are cheap, no idea if you can just newegg the parts or if there is nimble specific firmware at play. From what our SE told me, they whitelist serial numbers so you have to go through them.
|
# ? Oct 7, 2014 17:40 |
|
Mierdaan posted:How cheap is cheap? Like, I just quoted out a Compellent expansion shelf with 24x 200GB SSDs. $118k with software and services included Next time a Nimble rep calls me, I'm answering the damned phone.
|
# ? Oct 8, 2014 00:41 |
|
After a few months of back and forth with a few vendors it looks like we'll be settling on two Dell Compellent SC4020 SANs. They'll be configured with two flash tiers and one platter tier for a total of ~25TB and around 17,000* sustained IOPS / 35,000 burst. Going with two as the owner wants to do replication (probably semi-sync). Since we're also getting some servers and switches we got some pretty drat good discounts. I'm getting pretty excited. We had some disk failures not too long ago, so getting this up and running will give our team some peace of mind. * ~1800 IOPS worst case if doing r/w from tier 3
|
# ? Oct 10, 2014 17:16 |
|
bigmandan posted:After a few months of back and forth with a few vendors it looks like we'll be settling on two Dell Compellent SC4020 SANs. They'll be configured with two flash tiers and one platter tier for a total of ~25TB and around 17,000* sustained IOPS / 35,000 burst. Going with two as the owner wants to do replication (probably semi-sync). Since we're also getting some servers and switches we got some pretty drat good discounts. You can't really talk about IOPs in a vacuum like that. An IOP isn't an independent unit of measure like a liter or joule, it's wholly dependent on the workload involved. In a SAN environment the workloads are generally heavily mixed due to the IO blender effect, which makes it especially hard to discuss IOPs thresholds in any meaningful way. Even a single application workload like SQL can have very different IO profiles depending on what type of activity is being directed at it, so saying "this array will give you x number of SQL IOPS" is an over-simplification.
|
# ? Oct 10, 2014 18:02 |
|
NippleFloss posted:You can't really talk about IOPs in a vacuum like that. An IOP isn't an independent unit of measure like a liter or joule, it's wholly dependent on the workload involved. In a SAN environment the workloads are generally heavily mixed due to the IO blender effect, which makes it especially hard to discuss IOPs thresholds in any meaningful way. Even a single application workload like SQL can have very different IO profiles depending on what type of activity is being directed at it, so saying "this array will give you x number of SQL IOPS" is an over-simplification. Those numbers are based on our expected workloads with about a 50/50 R/W ratio. Forgot to mention that.
|
# ? Oct 10, 2014 18:56 |
|
NippleFloss posted:You can't really talk about IOPs in a vacuum like that. An IOP isn't an independent unit of measure like a liter or joule, it's wholly dependent on the workload involved. In a SAN environment the workloads are generally heavily mixed due to the IO blender effect, which makes it especially hard to discuss IOPs thresholds in any meaningful way. Even a single application workload like SQL can have very different IO profiles depending on what type of activity is being directed at it, so saying "this array will give you x number of SQL IOPS" is an over-simplification. While we're being anal-retentive, there's no such thing as an IOP. IOPS stands for Input/Output Operations Per Second, and "Input/Output Operations Per" doesn't make any sense. Aside: Random IOPS in a spinning disk context made a lot more sense as a semi-standard unit of measure, because they were really just the reciprocal of the median disk seek time, which was the same whether reading or writing. With SSD, block size is way more important to these calculations than it used to be. Vulture Culture fucked around with this message at 19:29 on Oct 10, 2014 |
# ? Oct 10, 2014 19:25 |
|
Misogynist posted:While we're being anal-retentive, there's no such thing as an IOP. IOPS stands for Input/Output Operations Per Second, and "Input/Output Operations Per" doesn't make any sense. This is one of those things I'm aware of intellectually, but when typing quickly, especially on a mobile device, my brain rebels against the disconnect between a singular ending in an S and wants to make it sound right. And even in the case of rotational media, storage arrays use things like caching and readahead and write coalescing to hide the rotational latency involved in seek times. And you could still generate really high numbers by claiming "IOPS" for artificially small block sequential reads or something. It's just sort of a pet peeve because after six years of storage engineering work, I trigger when I see a statement about doing X IOPS with no context at all, because it usually meant that I was about to have a long and painful conversation about defining real requirements.
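To make the "IOPS without context" point concrete, here is a small sketch. The latency and block-size figures are generic ballpark numbers for illustration, not measurements of any particular array:

```python
# Why a bare "X IOPS" figure is ambiguous: the same number means very
# different things depending on block size, and for spinning disks the
# classic random-IOPS figure is just the reciprocal of service time.

def spindle_random_iops(seek_ms, rotational_ms):
    """Random IOPS of one disk ~ 1 / (avg seek + avg rotational latency)."""
    return 1000.0 / (seek_ms + rotational_ms)

def throughput_mb_s(iops, block_kb):
    """Data actually moved at a given operation rate and block size."""
    return iops * block_kb / 1024.0

print(round(spindle_random_iops(4.0, 3.0)))  # ~143 for a typical 10k disk

# "10,000 IOPS" at two different block sizes: a 16x difference in work done.
print(throughput_mb_s(10_000, 4))   # 39.0625 MB/s at 4 KB
print(throughput_mb_s(10_000, 64))  # 625.0 MB/s at 64 KB
```

Which is why "this array does 10,000 IOPS" is meaningless until you pin down block size, read/write mix, and random vs sequential.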
|
# ? Oct 10, 2014 21:52 |
|
Turns out when your Compellent gets to 97% full the tiering doesn't work very well and everything slows to poo poo. Good thing we have such good monitoring in place that we were able to catch this before it became a problem.
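A sketch of the kind of threshold check that catches this before the array falls off the cliff; the 80%/90% thresholds here are assumptions to tune against whatever your array's tiering actually tolerates:

```python
# Simple capacity threshold check: alert well before utilization reaches
# the point where tiering and data movement start to degrade.

def capacity_status(used_tb, total_tb, warn_at=0.80, crit_at=0.90):
    pct = used_tb / total_tb
    if pct >= crit_at:
        return f"CRITICAL: {pct:.0%} used"
    if pct >= warn_at:
        return f"WARNING: {pct:.0%} used"
    return f"OK: {pct:.0%} used"

print(capacity_status(23.3, 24.0))  # CRITICAL: 97% used
print(capacity_status(12.0, 24.0))  # OK: 50% used
```

Wire something like this to whatever can read the array's capacity counters and you get paged at 80% instead of discovering the problem at 97%.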
|
# ? Oct 11, 2014 02:35 |
|
FISHMANPET posted:Turns out when your Compellent gets to 97% full the tiering doesn't work very well and everything slows to poo poo. Good thing we have such good monitoring in place that we were able to catch this before it became a problem most sans suck cock when they get above 90%.
|
# ? Oct 11, 2014 03:21 |
|
adorai posted:most sans suck cock when they get above 90%. Not sure whether you're gay-shaming or slut-shaming your SAN or just implying that your SAN is spending company time on activities it ought not to
|
# ? Oct 11, 2014 22:29 |
|
Misogynist posted:Not sure whether you're gay-shaming or slut-shaming your SAN or just implying that your SAN is spending company time on activities it ought not to
|
# ? Oct 11, 2014 23:11 |
|
Misogynist posted:Not sure whether you're gay-shaming or slut-shaming your SAN or just implying that your SAN is spending company time on activities it ought not to nah just that operating near capacity brings most sans to their knees
|
# ? Oct 12, 2014 00:31 |
|
PCjr sidecar posted:nah just that operating near capacity brings most sans to their knees They're just not meant to handle a load like that.
|
# ? Oct 12, 2014 00:58 |
|
I have an EMC problem that has been bugging me for weeks: I have a Celerra NX4 EMC box. I have set up iSCSI targets on it and NOTHING can connect. I have tried setting up CHAP, which didn't help; I have tried putting it on its own network; I have tried accessing it from every Windows variant and even ESXi. I set up an iSNS server and it can see the targets, but when anyone tries to connect to them they just time out. Is there some trick to these that I am missing?
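Not Celerra-specific, but a quick first-pass check worth running from one of the initiators: if plain TCP to the target's iSCSI port times out, the problem is network or firewall rather than CHAP or target configuration. The address below is a placeholder for the array's iSCSI data interface:

```python
# Test basic TCP reachability of an iSCSI portal (default port 3260).
# A timeout here means packets aren't getting through at all, which
# matches the "connections just time out" symptom described above.

import socket

def can_reach_portal(host, port=3260, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refusals, and routing errors
        return False

# Placeholder address for the Celerra's iSCSI data interface.
print(can_reach_portal("192.168.1.50", timeout=1.0))
```

If that returns True but logins still hang, the next things I'd look at on a Celerra are whether the iSCSI service is actually started on the data mover and whether the target's network portal is bound to the interface you're connecting to.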
|
# ? Oct 12, 2014 23:37 |
|
I have quotes from EMC, Nimble, and Tegile for 40TB usable for VMware and SQL storage. Nimble and Tegile are right in the same price range, and EMC is lower, and they've even included two fibre channel switches in their price, covered under EMC support (they've included 4 years of support). If nothing else, they know how to bargain. EMC quoted a VNX 5200 vs. Nimble's CS300. I really like the ease of the Nimble since I'm the everything admin and don't want to do too much storage janitoring if I can help it. I also think the Nimble will outperform the VNX (much more flash, although the VNX has a 10k tier vs the Nimble's all NL-SAS). Problem is, not only is EMC's price lower, but I have to add the cost of 10 gig switches and NICs to Nimble's cost to do a proper price comparison. Anybody have a VNX 5200? The interface isn't great, but it seems like I can mostly ignore it once it's set up. I'm sure it'll outperform our current storage ten-fold, but am I going to regret not going Nimble or Tegile when it seems I'm perfectly situated in the sweet spot for their products?
|
# ? Oct 15, 2014 18:20 |
|
Erwin posted:I have quotes from EMC, Nimble, and Tegile for 40TB usable for VMware and SQL storage. Nimble and Tegile are right in the same price range, and EMC is lower, and they've even included two fibre channel switches in their price, covered under EMC support (they've included 4 years of support). If nothing else, they know how to bargain. What's your use case and environment? Nimble is iSCSI only; Tegile and EMC are unified, so that's clearly a difference in potential functionality. How many hosts? With Nimble or anything NAS you need switches (well, most times) even if you only have a couple of hosts; if you go FC you could possibly direct connect and save a good few grand. With Tegile and Nimble you can only expand a full shelf at a time IIRC, so the moment you need that 1TB of additional capacity it's gonna cost a lot vs. EMC where you add a shelf and the disks you need.
|
# ? Oct 15, 2014 20:04 |
|
Bitch Stewie posted:With Tegile and Nimble you can only expand a full shelf at a time IIRC so the moment you need that 1TB of additional capacity it's gonna cost a lot vs. EMC where you add a shelf and the disks you need. This is a pro and a con at the same time. The extra flexibility makes things a mite complicated, as you have to figure out how to optimally size and grow your raid groups as well as what raid types to use. Full shelf additions mean that you don't think about that, as each shelf is sold with the optimal number of disks to expand and will be automatically configured by the system in the appropriate ways. You don't get into the bin packing problems you can get with small non-uniform additions leaving capacity and performance islands. The VNX also has some limitations on which features can be used with pools versus traditional LUNs, which adds more complexity. I'd do Nimble out of those three options. It will handle small random IO well, which is what you'll see with SQL and VMware.
|
# ? Oct 15, 2014 20:35 |
|
Bitch Stewie posted:What's your use case and environment? VMware, like 8-10TB of SQL data (not that heavily hit), and 20TB everything else, but usage isn't ridiculously heavy. 3 hosts, with a 4th planned right after the new storage. I only need block, and prefer iSCSI, but if EMC wants to give me FC switches because they can't do iSCSI as well, that's fine with me. NippleFloss posted:I'd do Nimble out of those three options. It will handle small random IO well, which is what you'll see with SQL and VMware. Yeah, if prices were identical, it'd be Nimble for sure, but there's a big price skew at the moment. EMC sized the last quote wrong, so they're requoting and that may make it closer, but since they're including fibre channel switches, Nimble is still at something like a $10k disadvantage. Would you do Nimble with 1gig iSCSI over the VNX 5200 on 8gig FC?
|
# ? Oct 15, 2014 20:59 |
|
Is QNAP any good? Our lab with its measly budget needs to archive about 20T of data, and we're considering purchasing one of these and hooking it up to a desktop (probably via iSCSI): http://amzn.com/B00AUHZV0C
|
# ? Oct 15, 2014 21:45 |
|
Erwin posted:Would you do Nimble with 1gig iSCSI over the VNX 5200 on 8gig FC? I'd say no dice without 10GigE. It sounds like a pretty similar build to what we have now. I was planning my next rebuild using Nimble, but now I'm really looking at going the hyperconverged route and doing it all on a Nutanix. Simpler, faster, etc. Something to think about.
|
# ? Oct 15, 2014 21:58 |
|
|
Rated PG-34 posted:Is QNAP any good? Our lab with its measly budget needs to archive about 20T of data, and we're considering purchasing one of these and hooking it up to a desktop (probably via iSCSI): http://amzn.com/B00AUHZV0C The low-end brands are Synology, QNAP, Thecus, and ReadyNAS. Apart from the latter, which has pretty much jumped the shark with Netgear, they are all fairly competitive with each other, with frequent updates.
|
# ? Oct 15, 2014 22:07 |