Erwin
Feb 17, 2006

Nukelear v.2 posted:

It sounds like a pretty similar build to what we have now. I was planning my next rebuild using Nimble, but now I'm really looking at going the hyperconverged route and doing it all on Nutanix. Simpler, faster, etc. Something to think about.

Nah, I have no reason to replace compute.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Erwin posted:

I have quotes from EMC, Nimble, and Tegile for 40TB usable for VMware and SQL storage. Nimble and Tegile are right in the same price range; EMC is lower, and they've even included two fibre channel switches in their price, covered under EMC support (they've included four years of support). If nothing else, they know how to bargain.

EMC quoted a VNX 5200 vs. Nimble's CS300. I really like the ease of the Nimble since I'm the everything admin and don't want to do too much storage janitoring if I can help it. I also think the Nimble will outperform the VNX (much more flash, although the VNX has a 10k tier vs the Nimble's all NL-SAS). Problem is, not only is EMC's price lower, but I have to add the cost of 10 gig switches and NICs to Nimble's cost to do a proper price comparison.

Anybody have a VNX 5200? The interface isn't great, but it seems like I can mostly ignore it once it's set up. I'm sure it'll outperform our current storage ten-fold, but am I going to regret not going Nimble or Tegile when it seems I'm perfectly situated in the sweet spot for their products?


Who cares about the model number? It's all about what the product can support vs. what the environment needs.

Nimble is great and all, but never design your SAN upgrade around the sales pitch of compression ratios.

However, Nimble is low in admin overhead. Then again, I admin NetApp, VNX, VNXe, Nimble, Nutanix, VSAN, and Nexenta, so who knows. Find what works best for you with low admin overhead and the best performance per dollar. Just remember that caching != baseline performance.

Fucking calculate the IOPS of your SAN/disks, factor out the cache hits, and understand/justify the business needs for that function... Not hard.
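
If you've never run that math, the napkin version looks like this. Rough sketch, invented numbers; the per-disk IOPS figures are the usual rules of thumb and the spindle counts and hit rates are made up, not anyone's spec sheet:

code:
# Rough front-end IOPS estimate for a spinning-disk array with a flash read cache.
RAW_IOPS = {"7.2k_nlsas": 75, "10k_sas": 140, "15k_sas": 180}  # rule-of-thumb per disk

def effective_iops(disks, disk_type, read_pct, cache_hit_pct, raid_write_penalty):
    """Front-end IOPS the spindles can sustain. Reads that hit cache are free;
    read misses and penalized writes land on disk. Penalty: 2 for RAID10,
    4 for RAID5, 6 for RAID6."""
    spindle_iops = disks * RAW_IOPS[disk_type]
    read_frac = read_pct / 100
    miss_frac = 1 - cache_hit_pct / 100
    backend_cost = read_frac * miss_frac + (1 - read_frac) * raid_write_penalty
    return spindle_iops / backend_cost

# 24 NL-SAS spindles, 70/30 read/write, RAID6:
print(round(effective_iops(24, "7.2k_nlsas", 70, 90, 6)))  # ~960 with 90% cache hits
print(round(effective_iops(24, "7.2k_nlsas", 70, 0, 6)))   # ~720 off a cold cache

The gap between those two results is the caching != baseline point: size for what the spindles sustain on a miss, not what the cache serves on a good day.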

Rated PG-34 posted:

Is QNAP any good? Our lab with its measly budget needs to archive about 20T of data, and we're considering purchasing one of these and hooking it up to a desktop (probably via iSCSI): http://amzn.com/B00AUHZV0C

If you're an SMB and feel like relying on a single RAID controller that does ZFS, go for it, babes. If you care about the data and how it is accessed, go for an MD3220i or P3000.

Aquila
Jan 24, 2003

Dilbert As FUCK posted:

Then again I admin Netapp, VNX, VNXe, Nimble, Nutanix, vsan, and Nexenta.

All SAN makes and models will be spelled with only N's, X's, and V's in the near future.

Erwin
Feb 17, 2006

Dilbert As FUCK posted:

Who cares about the model number? It's all about what the product can support vs. what the environment needs.
Nimble performance is directly correlated to model number.

quote:

Nimble is great and all, but never design your SAN upgrade around the sales pitch of compression ratios.
I'm not. I didn't mention compression ratios. Everyone quoted 40TB usable ignoring compression.

quote:

However, Nimble is low in admin overhead. Then again, I admin NetApp, VNX, VNXe, Nimble, Nutanix, VSAN, and Nexenta, so who knows. Find what works best for you with low admin overhead and the best performance per dollar. Just remember that caching != baseline performance.
I'm trying to find out by asking for experiences with both the administration of the arrays and the performance.

quote:

Fucking calculate the IOPS of your SAN/disks, factor out the cache hits, and understand/justify the business needs for that function... Not hard.
Both arrays abstract the underlying hardware, so I don't think it's as easy as adding up the IOPS of the spindles, but maybe it is; that's why I'm asking. But please, do continue to be the incoherent dick who responds to the questions you imagine in your head, not the ones that are asked :allears:

Bitch Stewie
Dec 17, 2011
If you just want block and don't see that changing, you could keep it simple and look at something like a VNX, 3PAR 7200, or HDS HUS with direct-attached FC. I know the HUS will take four hosts because we've just purchased a couple.

No point buying switches you don't need until you need them IMO.

sanchez
Feb 26, 2003

Erwin posted:


Would you do Nimble with 1gig iSCSI over the VNX 5200 on 8gig FC?

I wouldn't let it sway you either way, because the Nimble will have at least four 1-gig ports per controller you can use, and the array or something else will bottleneck before they do. 10G is tidier.

One potential Nimble gotcha: despite appearances, you will likely have to manually tier things yourself to a certain extent. If you throw everything in cached datastores, I'd be willing to bet you will overrun the default amount of cache they give you. They recommend moving less random-read-sensitive volumes to uncached datastores, which have mediocre performance (it's RAID6, after all) but will help keep your cache hit rate up.
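
If you want to napkin-check that before buying, it's really just hot working set vs. cache size. A toy sketch with invented sizes and hot fractions (the proportional hit-rate model is a crude assumption of mine, not anything from Nimble's sizing tools):

code:
# Crude check: does the hot working set fit in the array's flash cache?
def expected_hit_rate(working_set_gb, cache_gb):
    """If the working set fits, assume ~100% hits; past that, assume hits
    fall off proportionally (a crude uniform-access model)."""
    return 1.0 if working_set_gb <= cache_gb else cache_gb / working_set_gb

cache_gb = 1200                  # hypothetical flash tier size
datastores = {                   # name: (size_gb, hot_fraction) -- invented
    "sql":  (4000, 0.20),
    "vdi":  (2000, 0.15),
    "file": (8000, 0.02),
}

hot = sum(size * frac for size, frac in datastores.values())   # ~1260 GB
print(f"everything cached: {expected_hit_rate(hot, cache_gb):.0%}")          # ~95%
hot_minus_file = hot - 8000 * 0.02                              # ~1100 GB
print(f"file shares uncached: {expected_hit_rate(hot_minus_file, cache_gb):.0%}")  # 100%

Moving the coldest volumes out is exactly the point: the file shares eat cache for almost no hit-rate benefit.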

I'd still recommend them though.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
Sounds like EMC is priced to win. Assuming you need more than 100MB/sec of throughput on a single port, I'd probably go with the EMC if the price is right.

FC isn't terribly difficult to manage. You'll pretty much touch your FC switches only when you add more storage controllers or more hosts. One rule to remember is that you'll want to create one zone per host per switch. In that zone, make sure you've got the WWNs of an ESXi host and any storage ports you want to talk to. Avoid an all-in-one zone.
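
Single-initiator zoning is mechanical enough that you can script it. A toy generator, with invented WWNs and names, whose output mimics Brocade-style CLI (an assumption about your switches; adjust for your fabric OS):

code:
# One zone per host per fabric: that host's HBA plus the storage ports it
# should see. Everything here is made up for illustration.
storage_ports = ["50:06:01:60:3e:a0:12:34", "50:06:01:68:3e:a0:12:34"]
hosts = {
    "esx01": "10:00:00:90:fa:11:11:11",
    "esx02": "10:00:00:90:fa:22:22:22",
}

zones = []
for host, hba_wwn in hosts.items():
    zone = f"z_{host}_fabA"
    zones.append(zone)
    members = "; ".join([hba_wwn] + storage_ports)
    print(f'zonecreate "{zone}", "{members}"')

print(f'cfgcreate "fabA_cfg", "{"; ".join(zones)}"')
print('cfgenable "fabA_cfg"')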

EMC Unisphere, though... Unisphere makes me irrationally angry. I only touch it about once a year, when I add some more hosts to the lab, and I always forget the process for adding hosts. It's more straightforward on probably every other vendor on the planet.

Is Nimble aware you're evaluating EMC as well? Maybe go back to Nimble/your VAR and ask for bigger discounts/free stuff to bring the configurations into balance.

Erwin
Feb 17, 2006

1000101 posted:

Is Nimble aware you're evaluating EMC as well? Maybe go back to Nimble/your VAR and ask for bigger discounts/free stuff to bring the configurations into balance.
Yup, they're aware and also aware that their price is higher at the moment.

This is probably an incredibly stupid question, but can I use 1 gig links as failover iSCSI paths for 10 gig links? Assuming I'm okay with a performance drop in the event of hardware failure.

Am I an asshole if I take that one step further and use only one 10-gig switch, with 1-gig switching to back it up?

KennyTheFish
Jan 13, 2004
Even if it technically will work, the apps people will use it as an excuse to blame storage for all their problems.

Internet Explorer
Jun 1, 2005

Yeah, don't do that.

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
Our EqualLogic SANs are making me want to quit my job.
We've had a bunch of instability and other issues ever since we started using the 7.x series firmware. Maybe it's a coincidence, but I do know we were hit by this the other night:

"While running firmware version 7.0.x, an unexpected controller failover, or restart, may occur at 248 consecutive days of uninterrupted operation. In rare circumstances, this may cause you to encounter a brief disruption to existing iSCSI connections."

Everyone should upgrade to 7.0.9 to avoid this issue. I'm going to have a very boring and nerve-wracking night of upgrading all of our members...
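
That 248-day figure smells like a classic tick-counter overflow. Assuming a signed 32-bit counter of 10 ms ticks (my guess; the advisory doesn't say what actually overflows), the math lands right on it:

code:
# Why 248 days? A signed 32-bit counter of 10 ms ticks overflows then.
ticks = 2**31          # max positive value of a signed 32-bit int
tick_s = 0.010         # assumed 10 ms tick
print(ticks * tick_s / 86_400)   # ~248.55 days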

KennyTheFish
Jan 13, 2004
I last upgraded in July, so I have a few months to go. *holds breath* Let's just hope it doesn't flag the array as degraded during the upgrade this time and force a console reboot.

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
Just one of mine did that...

Another just died in the ass for a couple of minutes and then failed over to the secondary before I'd even gotten around to updating it... really need to get away from these turds.

KennyG
Oct 22, 2002
Here to blow my own horn.
Can we talk Isilon? I'm looking at a multi-million-dollar purchase.

In that purchase are 12 X410 nodes at 102TB of 3TB SEDs each, with 2x800GB data cache drives, 128GB of RAM, and 2x10GigE per node (36 drives per chassis).

I'm not concerned with the spec sheet, as our use case is tricky and needs the IOPS/throughput. My concern is the pricing/value vs. commodity alternatives.
I've browbeaten them for a month and they've come down 65+% off retail, but given that their retail is a fantasy number that no one pays, it doesn't mean anything.

At ~$2/GB, the Isilon part is still a bit painful. I love the ease of Isilon, and the single namespace/HDFS etc. makes life easy, but by my math I can rebuild this at twice the hardware and capacity spec for 15% of the price. I know support costs money and having a number to call for help is important, but I can hire a storage engineer for four years for $800k fully loaded and then have $1.2M to spend on the hardware, for which I can pre-stage hot spare parts and which even then should only cost $500k. Not to mention it won't take 100% of the SE's time, so I can have him do other things.
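
For anyone who wants to check my math, here it is laid out (the Isilon total is derived from the ~$2/GB figure, so treat it as approximate):

code:
# Sanity check: vendor price vs. DIY-plus-engineer, using the figures above.
nodes, tb_per_node = 12, 102            # 36 x 3 TB drives per chassis
isilon_tb = nodes * tb_per_node         # ~1224 TB raw
isilon_cost = isilon_tb * 1000 * 2.0    # at ~$2/GB -> ~$2.45M

diy_hw = 500_000                        # hardware estimate incl. pre-staged spares
engineer = 800_000                      # fully loaded FTE, four years
diy_total = diy_hw + engineer

print(f"Isilon ~${isilon_cost/1e6:.2f}M vs DIY ~${diy_total/1e6:.2f}M "
      f"({diy_total/isilon_cost:.0%} of the Isilon price)")
# ~$2.45M vs ~$1.30M -> DIY at roughly half, even after paying the engineer.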


Also in this deal is 400TB of VNX, XtremIO, and VPLEX/RecoverPoint. This is a multi-million-dollar deal, and I want to make sure I haven't been duped into thinking it was a good deal just because the numbers started so high. I really don't want to go with Dell, but they are half the price. I've made EMC aware and they don't seem to care. Basically, talk me off the ledge of blowing up a massive deal with EMC.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
If you're building something that would cost multi-millions, I suspect the data is pretty important? If so, it's probably worth it to have real storage.

KennyG
Oct 22, 2002
Here to blow my own horn.
Do you work for EMC? Everything has a price, and the difference between 3 and 6 million is a pretty big deal. I'm not talking about throwing a QNAP at it and filling it with green drives, but given that Isilon is basically a GlusterFS-style system where you're locked into a single vendor, I'm wondering where the cost is coming from...

Just because EMC says it's worth $2/GB doesn't mean that the market or the economics of the business have to agree. I understand that I'm not Backblaze and I need more than those pods, but a PB at $2/GB is over $2M! I'm not saying that enterprise support and validation aren't worth something, but the difference between that and, say, Dell R730s with Red Hat Storage Server is massive!

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

A good chunk of enterprise IT is covering your ass. Yeah, you can get some commodity hardware for way less money, but then it's your ass on the line. You can't go to the C-levels and the Board when shit goes wrong and say, "We spent the money on the best system we could buy with the best support possible, and unfortunately there was a problem. We've engaged X, Y, Z resources and this is where we are at." You get to say you janked up a solution and now it's down. Covering your ass costs money.

I don't know anything about your requirements, but if you want to look at another vendor you could try Hitachi Data Systems; they're supposedly really good at big implementations like this. I can't tell anyone in good conscience to look at an HP/3PAR solution with the clusterfuck that company is/is about to become.

For optimum ass-covering, bring in consultants for the project and let them design/implement the solution. Twice as many people to blame if shit goes wrong.

Yes, I just died a little inside typing this, but unfortunately that's how it is these days.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Hitachi isn't going to be cheaper than, well, anyone. And architecture-wise they don't have anything that is optimized for high throughput in the way that Isilon is, so they would likely have to do it with a lot of hardware. NetApp E-Series would do it with discrete arrays, and probably be cheaper, but you don't get the single namespace and scale-out. If you're putting a clustered file system on top of it anyway, though, that doesn't seem too important.

If it's really this important you should probably just pay a vendor for hardware and support and not try to roll your own. Hiring a dedicated admin to run your homegrown system doesn't provide you a full complement of support staff and engineers willing to be available 24/7 to make sure you're happy and continue to spend millions of dollars with them.

Bitch Stewie
Dec 17, 2011

skipdogg posted:

You get to say you janked up a solution and now it's down. Covering your ass costs money.

That's pretty much the bottom line here IMO.

Get HDS and some of the other big players in the frame and see where you end up, but don't roll your own at that kind of size and scale unless you're basically comfortable losing your job if it goes wrong.

KS
Jun 10, 2003
Outrageous Lumpwad
You do need to get another vendor into the picture to make the price magically fall further. I thought that was IT purchasing 101.

But I can't imagine supporting anything homegrown for primary storage without Amazon-levels of scale and talent. One guy isn't going to cut it, because he needs to sleep. You need to stock spares. You need a test system to roll updates to first.

I do homegrown for a few hundred TB of D2D backup and it's a huge pain already. I would never want to be on the hook for an outage to primary storage.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Can you elaborate a little more on your technical requirements? I have never worked a project at that scale but am very interested.

Kaddish
Feb 7, 2002
I'm seriously considering consolidating our entire VMware environment (about 45TB) to a Pure FA-420. I can get 60TB usable (assuming 5:1 compression) for about 240k. Anyone have any first hand experience? It seems like a solid product and perfect for VMware.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Kaddish posted:

I'm seriously considering consolidating our entire VMware environment (about 45TB) to a Pure FA-420. I can get 60TB usable (assuming 5:1 compression) for about 240k. Anyone have any first hand experience? It seems like a solid product and perfect for VMware.

Why would you assume 5:1 compression? You're going to trust their marketing that your data will fit because compression is magic? At least make them demo it and prove it.

That aside, 240k for 60TB (even granting that you get that much) is really high. You could get a similar level of performance for half the cost with a hybrid array. You don't need an AFA for a general purpose VI environment.
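
Run the sensitivity yourself; the effective $/GB swings hard with the realized ratio. A quick sketch using the 240k and 60TB figures from the quote (the alternative ratios are hypotheticals, not anything Pure quoted):

code:
# Effective cost per usable GB as a function of the realized compression
# ratio. 12 TB raw is implied by 60 TB usable at 5:1.
price = 240_000
raw_tb = 60 / 5.0                     # 12 TB of flash behind the 5:1 claim

for ratio in (5.0, 3.0, 2.0):
    usable_tb = raw_tb * ratio
    print(f"{ratio}:1 -> {usable_tb:.0f} TB usable, "
          f"${price / (usable_tb * 1000):.2f}/GB")
# 5:1 -> 60 TB at $4.00/GB; 3:1 -> 36 TB at $6.67/GB; 2:1 -> 24 TB at $10.00/GB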

I also have some strong suspicions that Pure isn't long for this world, based on some conversations with their people at a conference last week. They're having a lot of trouble breaking into new accounts and moving out of their tier 0 niche because they are really expensive unless you believe in magical (like 12 to 1) compression ratios. If you do then they are merely kind of expensive.

What makes you think Pure is a good fit for your environment? What makes them perfect for VMware?

Kaddish
Feb 7, 2002

NippleFloss posted:

Why would you assume 5:1 compression? You're going to trust their marketing that your data will fit because compression is magic? At least make them demo it and prove it.

That aside, 240k for 60TB (even granting that you get that much) is really high. You could get a similar level of performance for half the cost with a hybrid array. You don't need an AFA for a general purpose VI environment.

I also have some strong suspicions that Pure isn't long for this world, based on some conversations with their people at a conference last week. They're having a lot of trouble breaking into new accounts and moving out of their tier 0 niche because they are really expensive unless you believe in magical (like 12 to 1) compression ratios. If you do then they are merely kind of expensive.

What makes you think Pure is a good fit for your environment? What makes them perfect for VMware?

Because if we don't get 5:1, they will give us more flash to make up the difference. It seems well suited to VMware due to the RTC, dedupe, and free replication. I'm not sold on it, but I do like the upfront licensing model. I could get three shelves of 900GB V7K plus SVC licensing for around 200k, or spend another 40k and get a sexy new flash array.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Kaddish posted:

Because if we don't get 5:1, they will give us more flash to make up the difference. It seems well suited to VMware due to the RTC, dedupe, and free replication. I'm not sold on it, but I do like the upfront licensing model. I could get three shelves of 900GB V7K plus SVC licensing for around 200k, or spend another 40k and get a sexy new flash array.

I'd make sure that agreement to provide more physical flash is spelled out very clearly in whatever contract you sign, if you choose to go with them. A few vendors provide those sorts of guarantees, but they will require that you meet some very specific criteria before you can cash in on it.

Pure isn't any more suited to VMware specifically than most other storage. It's flash with some features. If you like the up-front licensing model, then look at Nimble, which has a similar feature set but will come in much cheaper and provide the same usable capacity before inline compression. VMware tends to work well with caching because the IO is generally random but the working set is usually small; all-flash only really begins to see a large advantage on random-access workloads with a very large, or very indeterminate, working set. Whether VMware data itself is compressible is a function of the type of data residing on the VMDK, so there's nothing particular about VMware that makes it good or bad for compression or dedupe.

There's nothing particularly wrong with Pure; I just don't think it's worth the money for most customers. They've been begging us to sell more of it, but it's just not a good fit for general-purpose storage due to the cost, and the uncertainty about their longevity. From what I've heard they've had people leaving and are electing not to backfill them, which isn't generally a good sign.

Bitch Stewie
Dec 17, 2011
Never looked at IBM for our refresh, but how do three shelves of 900GB drives run out at $200K? I didn't think the V7000 was that expensive?

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


We're demoing a Pure FA-420 right now. We just got our 10G networking up and running last week, so we don't have a ton of stuff on it yet, but there's some mix of VMware and just LUNs carved out to individual machines for various things.

On the two ESX volumes I have allocated, I'm seeing a 5.1:1 and 5.2:1 compression ratio. I have a physical SQL server attached and I'm seeing a 4.9:1 compression ratio for that data.

The most recent thing we added was a MongoDB node, and that's coming out to 3.0:1 compression.

The compression ratio given does not take into account any deduplication you may be seeing (I think) and also ignores empty space. For example, that MSSQL volume I mentioned has a lot of free space in a few of the DB files. So, the 4.9:1 is only taking into account used space within a file. The empty space in the MDF files doesn't even register. If you compared the size taken on disk in the OS to the amount of storage taken on the array, it would be closer to 12:1.
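
To make that accounting difference concrete, here's a toy example with invented sizes. It just shows how one array footprint yields two very different ratios depending on whether you count the empty space inside the files:

code:
# Same data, two ratios: the array compresses only used pages, but the OS
# sees the full allocated file size. Sizes are invented for illustration.
mdf_allocated_gb = 1000                      # file size as the OS reports it
mdf_used_gb = 410                            # pages actually holding data
array_gb = mdf_used_gb / 4.9                 # array footprint at 4.9:1

print(f"array-reported: {mdf_used_gb / array_gb:.1f}:1")       # 4.9:1
print(f"vs OS-visible:  {mdf_allocated_gb / array_gb:.1f}:1")  # ~12:1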

There's about 200GB of data shared via deduplication right now.

We plan on putting more stuff on it over the next few weeks to run it through its paces. We don't have any 10G host adapters yet, so we've been limited to 2x1G iSCSI as far as performance goes.

If you can just drop it right into your environment, it's an easy thing to demo. They'll crate up and take the whole thing back after a 45 day trial if you don't want it.

Kaddish
Feb 7, 2002

Bitch Stewie posted:

Never looked at IBM for our refresh, but how do three shelves of 900GB drives run out at $200K? I didn't think the V7000 was that expensive?

SVC licensing.

bigmandan
Sep 11, 2001

lol internet
College Slice
I just got a notification that our two Compellent SC4020s (among other hardware) should be arriving tomorrow. Can't wait to get these suckers racked and running. It'll be a few days before Dell sends their rep for the install, though.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

bigmandan posted:

I just got a notification that our two Compellent SC4020s (among other hardware) should be arriving tomorrow. Can't wait to get these suckers racked and running. It'll be a few days before Dell sends their rep for the install, though.

Are you required to have them do the config, or just prefer it?

Amandyke
Nov 27, 2004

A wha?

Moey posted:

Are you required to have them do the config, or just prefer it?

Installation services were likely purchased along with the hardware, so the Dell CE will probably rack and stack, cable, and power on the arrays. They will probably run some health checks on it as well before turning over the keys, so to speak.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Trip report: we were going to purchase a new NetApp 8020 with a few shelves and the complete pack for roughly $150k, including five years of maintenance. Instead I purchased a pair of Nimble CS300 series arrays for slightly more, and the performance is astounding. Not being a unified device is a big drawback, but I have some ZFS appliances that can step in for CIFS. Seriously, if you are buying storage right now, Nimble is pretty damn awesome.

Kaddish
Feb 7, 2002

adorai posted:

Trip report: we were going to purchase a new NetApp 8020 with a few shelves and the complete pack for roughly $150k, including five years of maintenance. Instead I purchased a pair of Nimble CS300 series arrays for slightly more, and the performance is astounding. Not being a unified device is a big drawback, but I have some ZFS appliances that can step in for CIFS. Seriously, if you are buying storage right now, Nimble is pretty damn awesome.

Out of curiosity, do you have a FC infrastructure?

bull3964 posted:

We're demoing a Pure FA-420 right now. We just got our 10G networking up and running last week, so we don't have a ton of stuff on it yet, but there's some mix of VMware and just LUNs carved out to individual machines for various things.

On the two ESX volumes I have allocated, I'm seeing a 5.1:1 and 5.2:1 compression ratio. I have a physical SQL server attached and I'm seeing a 4.9:1 compression ratio for that data.

The most recent thing we added was a MongoDB node, and that's coming out to 3.0:1 compression.

The compression ratio given does not take into account any deduplication you may be seeing (I think) and also ignores empty space. For example, that MSSQL volume I mentioned has a lot of free space in a few of the DB files. So, the 4.9:1 is only taking into account used space within a file. The empty space in the MDF files doesn't even register. If you compared the size taken on disk in the OS to the amount of storage taken on the array, it would be closer to 12:1.

There's about 200GB of data shared via deduplication right now.

We plan on putting more stuff on it over the next few weeks to run it through its paces. We don't have any 10G host adapters yet, so we've been limited to 2x1G iSCSI as far as performance goes.

If you can just drop it right into your environment, it's an easy thing to demo. They'll crate up and take the whole thing back after a 45 day trial if you don't want it.

We're going to be speaking with some existing midsize health systems tomorrow about their Pure implementations. My problem is that my storage needs are becoming increasingly difficult to manage. I have an aging DS8100 that now needs more space, volume mirrors from the DS to a flash820 that needs more space, a DS4800 I need to replace soon, and a v7000 that's close to maxed out and not under SVC currently. I have a 30-host VMware environment with datastores from both the standalone v7000 and an SVC with a v7000 backend connected to a totally different fabric. It's getting really fucking hard to manage.

Not to mention three separate N series boxes, two unified v7000s, a DS4500, another DS4800, and an SVC/v7000 at our DR site. All of this is connected to a fabric of out-of-support 2005-B5K switches that aren't meshed properly, so I have ISL issues constantly, etc. etc. etc.


Ok, I'm done bitching

Shit, I forgot about the two new v5000s ready to be configured for a new Tivoli implementation.

Kaddish fucked around with this message at 04:59 on Oct 29, 2014

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Kaddish posted:

Out of curiosity, do you have a FC infrastructure?
Nah, we are all 10GbE NFS and iSCSI.

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
We've lost faith in Dell and their Equallogic line, and we're thinking of switching over to something else.

What are the small-to-medium business options from HP like?

Thanks Ants
May 21, 2004

#essereFerrari


Have you looked at the IBM V3700?

Bitch Stewie
Dec 17, 2011

theperminator posted:

We've lost faith in Dell and their Equallogic line, and we're thinking of switching over to something else.

What are the small-to-medium business options from HP like?

P4000 and 3PAR? We have P4000 now and we're migrating to HDS HUS.

The 3PAR 7200 seems to have a very good reputation, but we spent three weeks waiting on pricing when they knew they were late to the party and we were about to buy HDS, and they still came back with a quote that was twice the cost.

Nice product, shame they're part of HP.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

HP has all but abandoned the P series, which was their direct competitor to EQL, so you may want to look elsewhere.

bigmandan
Sep 11, 2001

lol internet
College Slice

Amandyke posted:

Installation services were likely purchased along with the hardware, so the Dell CE will probably rack and stack, cable, and power on the arrays. They will probably run some health checks on it as well before turning over the keys, so to speak.

We're pretty comfortable with racking and cabling the equipment, but the setup services include configuration of the storage units on the four new hosts we're getting as well. This will be our first SAN in our environment, so having the setup and configuration done for us will be good.

On another note, has anyone here had experience with Storage Center Live Volumes? I've read the documentation on it and watched the video Dell put out. It seems pretty interesting on paper, but I'd like to hear what it's like to use in a production environment.

Jadus
Sep 11, 2003

theperminator posted:

We've lost faith in Dell and their Equallogic line, and we're thinking of switching over to something else.


Would you mind expanding on this? I've just recently purchased a PS6500ES and am very happy with it, but have no experience beyond that.
