bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.




NippleFloss posted:


I'm also not sure about their longevity. Some of their SEs and AMs up here seem a little desperate right now, like they've gotta make something happen soon.

It's funny because I just had the same thing told to me by a re-seller about Nimble.

Not saying it's true one way or another, but it's interesting what you hear from various sources. They deal with Nimble, NetApp, EMC, Dell, and Pure, and they are who we bought our Pure array from. We've been really happy with our Pure and are adding another 12 TB raw shelf, but we also have data needs beyond what Pure will scale to. Currently, those are being served by EqualLogic, but scaling an EqualLogic SAN group to 300-400 TB sounds like a bad idea even if they can technically scale way beyond that (not to mention that they don't have things like compression or dedup, and some of the stuff we'll be storing will compress really well).

Right now, it's looking like NetApp is going to be the front-runner for what we need. We're likely going to need around 1/3rd of a petabyte by the end of 2015 with the ability to scale beyond that depending on demand, and it's going to be an extremely mixed workload with a lot of sequential data loading going on as well as a decent chunk of random workload that we won't necessarily want to put on the Pure due to space concerns. The ability to serve up CIFS is a real plus for us too, because there are parts of our app that extensively use file stores, and right now the dual VMs under a DFS namespace and DFSR cause occasional issues that would be simplified by punting that stuff directly to the storage.


YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."



bull3964 posted:

It's funny because I just had the same thing told to me by a re-seller about Nimble.

I'm sure this differs from geo to geo to an extent, but the big difference to me is that Nimble is publicly traded, so anyone can take a look at their financials to determine how well or how poorly things are going for them, whereas with Pure all I've got to go on is vague statements from the SEs and reps that we work with.

Pure is just very hard to position due to the cost, and a lot of customers get really interested until they see a quote. The cost is such that it really only fits tier zero apps, but they want to pitch it as general-purpose storage because they need to grow market share quickly, and there isn't enough demand for very low latency, low density, expensive storage to facilitate that growth.

If they survive long enough for flash costs to come down, they may drop into the sweet spot on the $/TB curve and be well positioned. But I think they still need to get more integrated into partner ecosystems to make it really compelling. Being fast is great, but everyone is fast now, so you need to make it easy to leverage snapshots, clones, and replication to stand out, and that means hypervisor, application, and backup integration.

EDIT: also, as you mentioned, they have problems with scaling to a decent size and no plans to add scale out to address that, which makes the general purpose storage pitch tough to swallow.

YOLOsubmarine fucked around with this message at 00:16 on Dec 31, 2014

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.




I don't know if I would really call it low density. From a raw perspective, perhaps, but I'm a pretty firm believer in their claims about compression and dedup, having seen them first-hand. We're getting a 5:1 ratio for our databases (not counting empty space within the DB files, which would push the ratio even higher). Our general VMs are around 4.7:1 and we're seeing about 3.6:1 on our MongoDB data. In all, we're getting above the 4:1 ratio they use for their usable stats. The big thing though is density of IOPS. That was one of the perks for us since we currently have limited rack space in our colo before we would have to consider moving to another one of their locations.
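For a sense of what those ratios buy, a quick sketch; the ratios are the ones quoted above, but the 12 TB raw figure is just an illustrative number, not any particular array:

```python
# Effective logical capacity implied by dedup/compression ratios.
# Ratios are the ones quoted in the post; raw capacity is hypothetical.
raw_tb = 12.0  # e.g. one 12 TB raw shelf

workloads = {
    "databases": 5.0,    # 5:1
    "general VMs": 4.7,  # 4.7:1
    "MongoDB": 3.6,      # 3.6:1
}

for name, ratio in workloads.items():
    print(f"{name}: {raw_tb} TB raw holds ~{raw_tb * ratio:.0f} TB logical")
```

Anything landing above the 4:1 ratio Pure uses for its "usable" marketing numbers means you come out ahead of the datasheet.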

Another thing is to not discount the niche of hardware covering for inefficiency. A good deal of our software systems could use more engineering to make them less resource intensive. We could probably achieve the performance we desire through software optimization and careful planning of a hybrid array (flash sizing for the working set, DB partitioning between tiered volumes, etc.). That takes time and people, and we are in short supply of both right now. So, dropping $200k on hardware becomes appealing in the short term if it basically eliminates one bottleneck completely. I know our company isn't unique in this situation, and those are the areas where Pure is going to find a good audience.

They are working with partners more, though. I don't know if you've seen the FlashStack CI solutions they launched earlier this month. These are forklift installs of Pure, VMware, and Cisco UCS and Nexus for drop-in vSphere or Horizon View environments, along with a single point of contact for support for the whole stack. VDI should be a good fit for their dedup and compression algorithms since so much of each desktop is going to be the same system to system.

In the end, it's not a fit for everyone. However, holy hell is it a nice solution where you don't have exact models for how you're scaling performance and need something that can absorb all the IO you can throw at it without dedicating 3 racks to spindles. But no, it's really not going to be general storage unless you have data that really compresses and dedups well or you have a small working set.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS



As a longtime Pure customer, here are my positives and negatives about them.

Positives
  • If you make enough noise, they will pay attention. After a particularly unfortunate incident with a huge bug in Purity (no data loss but a pretty gross performance impact), we ended up with a dedicated engineer who personally handles everything but routine support stuff, and last month they sent over their chief architect and a couple of other high-level engineers to lay out their roadmap for Purity 4 and beyond. We also had an incident recently that required an extremely rapid upgrade of our controllers plus another shelf and within about 2 days we had the new controllers and shelf on-site, installed and running.
  • Aside from the aforementioned highly regrettable bug encounter, which I will detail below, Purity has been stable and has not given me too many reasons to notice it, which is the most you can ask for from a storage system.
  • The dedupe+compression are pretty magical and we typically see ratios of 6:1 or better over the full array.

Negatives
  • They really really really want(ed) to be the Apple of enterprise storage, by which I mean their gameplan was to sell you a box of SSDs, manage it for you, and you would only ever need to interact with it to make LUNs and add WWNs or iSCSI targets. Obviously this is a pipe dream and I have done my utmost to disabuse them of the notion that this is a feasible strategy in the realm of enterprise storage. poo poo goes wrong, poo poo acts weird, and responding every time with "upgrade to the latest version of Purity and then we'll start actually diagnosing the issue regardless of whether there's anything in the newer Purity that might actually address the problem" sits poorly with an audience that prioritizes predictability and stability above all else.
  • They manage it for you and you don't get to. If you're used to putting on the shoulder-length glove and digging around in the bowels of Data ONTAP like I have to do with our NetApps occasionally, it will come as an unpleasant shock. Just about everything requires a support ticket and a support connection from the array. Maybe this doesn't bother you and there are probably customers of theirs that love this, but it irritates me still.
  • They are very late to the NAS game. They recognize that they need to provide NFS/CIFS at some point but have no idea when, and their current best recommendation is to front a LUN with Windows Server 2012 for both CIFS and NFSv4 (and here I thought Microsoft had killed Services for UNIX).
  • They are also late to the replication game. Purity 4 has some rudimentary replication that I think operates on a 15-minute fixed schedule and then you need to manually copy the snapshot LUN to form a mountable LUN, which is obviously a far cry from SnapMirror or Tegile's ZFS-based replication. They know it's not great and are working on it but I don't know that it will ever be able to fully replace what SnapMirror currently provides for me.
  • The incident with the rapid controller replacement I referenced above: if you start to exceed the capabilities of the controllers for even a little while, you can quickly enter a death spiral where the dedupe/compression processes can't keep up, the system partition where pre-deduped data is kept blows up like Mr Creosote, you exceed 100% utilization on the user space and start intruding into reserved space, the array begins aggressively latency throttling you to try to get you to remove data, and then it's just game over. Even with our upgraded controllers we still had to have their support make the dedupe/compression jobs more aggressive in order to keep up, and we do not push the array very hard. I am currently evaluating our LUN layout to see if we can make some changes there that will help the dedupe processes; our engineer recommended avoiding LUNs larger than 2TB where possible, so I'm going to try to slice up some of our larger VMFS LUNs.

The negatives list looks pretty ominous, but the main salient point to take away is that 80% is the magic number. Don't exceed 80% usage, even after they recalibrated the measurement so that the old 80% became the new 100%. 80% in old versions of Purity was the point at which you would begin occupying reserved system space and the controller would start throttling like hell. Somewhere around Purity 3.2 or 3.3.something, they recalibrated that to the new 100% and hid the remaining space from view. You can still technically intrude into it and see things like 102% utilization, but the array will not be happy with you. That is the Purity bug that bit us; we started getting throttled very aggressively at an indicated 59% of our 11TB array. Support immediately told us that we should upgrade Purity but wouldn't tell us why, and then as soon as we upgraded to a version of Purity where the utilization percentage was recalibrated, it turned out that we were actually at 102% of 8.8TB usable user space and had been right on the edge for some time. That led to a lot of angry discussions, but ultimately they've become more forthcoming and we haven't jumped ship to Tegile yet, sooooo...
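The recalibration described above is simple to express; a rough sketch, where the 20% reserve fraction is inferred from the 80-becomes-100 change rather than being a published Purity number:

```python
def recalibrated_pct(used_tb, raw_tb, reserve_frac=0.2):
    """Utilization as newer Purity reports it: the reserved slice is
    hidden from the usable figure, so the old 80% mark becomes the
    new 100%. reserve_frac is an inferred assumption, not a spec."""
    usable_tb = raw_tb * (1 - reserve_frac)
    return 100 * used_tb / usable_tb

# The 11 TB array from the post: 8.8 TB usable after the hidden
# reserve, so roughly 9 TB of data shows as ~102%.
print(round(recalibrated_pct(9.0, 11.0), 1))  # 102.3
```

Which is how an indicated 59%-ish on the old scale can turn out to be past full once the reserve is accounted for.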

Ultimate takeaway from me is basically what NippleFloss said; it's a block device SAN that's fast but doesn't really have wow features. All the same, I'm happy to trade good wows for bad wows when it comes to storage, and they're making progress towards getting rid of the bad wows.

chutwig fucked around with this message at 05:48 on Dec 31, 2014

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."



chutwig posted:

Good stuff

On the subject of death spirals, EMC beat them in a big deal here (Costco iirc) by proposing a bake-off and then gaming the benchmark with an overwrite-heavy workload that made the Pure array tip over. And as you said, when they tip over they can do so pretty spectacularly, since you NEED that inline dedupe and compression to keep things from suddenly filling up, and the first thing it does when it gets pushed too hard is switch both to post-process. The customer was scared off by the result.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.




chutwig, thanks for the info, that stuff is good to know.

unknown
Nov 16, 2002
Ain't got no stinking title yet!




I'm more of a network person, but over the holidays I got the extended-relative-business-owner query about replacing their servers and basic SAN storage. (Yes, I told him to talk to his consultant group, since they know what the hell his needs actually are.)

But it got me wondering: what are the baseline/entry-level enterprise SAN systems these days?

The OP is from 2008, so I'm guessing things have changed completely, and most recent discussion in this thread has been about $100k+ systems (or <$1k).

So what are people doing these days when you've got a ~$10-15k storage budget for "general purpose" enterprise usage?

Docjowles
Apr 9, 2009



You can probably get a low-end Equallogic or EMC VNXe in that ballpark. Anything cheaper than that gets into "prosumer" stuff like Synology and QNAP.

Internet Explorer
Jun 1, 2005


I just bought a 14 3 TB drive EqualLogic with 5 years NBD support for 13.5. It was less than the PowerVault quotes I was getting.

Maneki Neko
Oct 27, 2000



Internet Explorer posted:

I just bought a 14 3 TB drive EqualLogic with 5 years NBD support for 13.5. It was less than the PowerVault quotes I was getting.

Yeah, we're also getting EqualLogics at or below PowerVault pricing, which is bananas.

Internet Explorer
Jun 1, 2005


I think it is because PowerVaults are OEMed by NetApp (through their LSI storage division purchase). They want to make that money themselves.

Rhymenoserous
May 23, 2008


bull3964 posted:

It's funny because I just had the same thing told to me by a re-seller about Nimble.

That was true of Nimble two years ago when I bought mine; they were very desperate to get a sale. But they're much more secure these days, and when I order more units or add-ons to the units I have, it's a very laid-back purchasing experience. Back then I was the guy posting on various forums going "I've never heard of these dudes, help!"; now I can't walk into a SAN/NAS discussion thread where they aren't talked about as a proven/stable technology.

Nimble is great. I'm glad I took one for the team.

Rhymenoserous
May 23, 2008


Internet Explorer posted:

I think it is because PowerVaults are OEMed by NetApp (through their LSI storage division purchase). They want to make that money themselves.

That's exactly what it is.

unknown
Nov 16, 2002
Ain't got no stinking title yet!




Thanks, that nicely weeds out the "It's not more than $5-8k" type of comments/boasts that can be heard on occasion.

Soggy Chips
Sep 26, 2006

Fear is the mind killer


Right, posting in the correct thread this time...

Due to recent developments, the size of the environment I currently back up is due to double, hence revisiting my approach to backup storage.
I currently use Veeam to back up just over 50 TB of utilised prod storage, which I store in reverse incremental format to provide the quickest restore time with the bonus of less total disk space required, at the cost of a longer backup window due to the increased amount of IO.

I've got a few questions surrounding backup storage appliances.
In the past I've always used hand-me-downs from production or DIY ZFS/Storage Spaces boxes. That works to a certain scale, but it's got me interested in the storage appliances built for the task.

Of course, requirements of RTO and retention periods have implications on what storage you use in different directions ie speed vs capacity.

If data is deduped to kingdom come, it will be slower to restore from, which raises the two-tier approach: one faster tier for immediate restores and one deduped to buggery for longer retention. Similar to when DDT became popular 8 years ago with the introduction of VCB.
Obviously some vendors/appliances like ExaGrid are fancy and have this faster tier built in, and the appliance manages the ageing of data into the slower tier.

So I guess what I'm interested in is how people currently use hardware-dedupe storage appliances out in the real world.
Annoyingly, most vendors' whitepapers gloss over the actual specifics of how these setups are configured optimally. This review mentions doing a FULL nightly backup to get the best dedupe ratios, which I can appreciate; however, that hammers your source storage and network and takes significantly longer than just using CBT/incrementals, so I'm not sure how it's improving things.
So in that regard, what cycles and methods are people using to back up onto these appliances effectively?
I'm guessing a weekly full with nightly forward incrementals..
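As a rough way to compare cycles, here is a sketch of the on-disk footprint of a weekly full with nightly forward incrementals; the change rate, retention, and dedupe ratio below are all hypothetical placeholders, not vendor or Veeam numbers:

```python
def weekly_full_footprint(full_tb, daily_change_frac, weeks_retained,
                          dedupe_ratio):
    """Rough post-dedupe footprint for a weekly full + six nightly
    forward incrementals. Every parameter here is an illustrative
    assumption, not a measured figure."""
    logical_per_week = full_tb + 6 * full_tb * daily_change_frac
    return weeks_retained * logical_per_week / dedupe_ratio

# 50 TB protected, 3% nightly change, 4 weeks kept, 10:1 dedupe:
print(round(weekly_full_footprint(50, 0.03, 4, 10), 1))  # 23.6 TB on disk
```

The repeated fulls are exactly what dedupe appliances digest well, which is why the whitepapers push them despite the load on source storage.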

Wicaeed
Feb 8, 2005


If I recall, someone in this thread said that they were getting PowerVault pricing from Dell for their Equallogic array lineup.

Would you be willing to drop some numbers?

I need something stupid simple for our datacenter for a small VMware deployment, and EqualLogic seems to fit the bill, for now at least. I need to keep it relatively cheap, but with 10Gbit connectivity.

Wicaeed fucked around with this message at 03:07 on Jan 21, 2015

Internet Explorer
Jun 1, 2005


Wicaeed posted:

If I recall, someone in this thread said that they were getting PowerVault pricing from Dell for their Equallogic array lineup.

Would you be willing to drop some numbers?

I need something stupid simple for our datacenter for a small VMware deployment, and EqualLogic seems to fit the bill, for now at least. I need to keep it relatively cheap, but with 10Gbit connectivity.

Hey Wicaeed, that was me. It was a PS4100E with 12x 3 TB drives and 5 years of NBD warranty for 13,500 including shipping and tax.

If you decide you need more than that, I can recommend Equallogic for "stupid simple." Great little pieces of hardware for someone who does not have a "storage admin" and doesn't need to spend a lot of time managing their SAN.

Rhymenoserous
May 23, 2008


That's not bad at all.

Thanks Ants
May 21, 2004

#essereFerrari



A 6 TB PS4100E can be had for a little over 4000 if that helps at all.

bigmandan
Sep 11, 2001

lol internet

College Slice

Speaking of Dell storage, we have had our Dell Compellent arrays (SC4020) up and running for about a month now, and drat these things are fast. Management so far seems pretty easy and the replication between the two is working quite well. The only issue I really have is that Enterprise Manager is really loving picky about which version of Java should be installed (7u45).

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."



That's around what VSAN licensing will run to license 3 nodes, fwiw. That would also meet the simple requirement and likely outperform the EQL with a few 250GB SSDs.

Thanks Ants
May 21, 2004

#essereFerrari



Is there a quick and dirty capacity calculator for Vsan to work out how much disk you need to be able to lose disks on hosts and entire hosts? I've always been under the impression that it wasn't worth doing on a small scale but I honestly never attempted to base that in factual evidence.

devmd01
Mar 7, 2006

Elektronik
Supersonik


bigmandan posted:

Speaking of Dell storage, we have had our Dell Compellent arrays (SC4020) up and running for about a month now, and drat these things are fast. Management so far seems pretty easy and the replication between the two is working quite well. The only issue I really have is that Enterprise Manager is really loving picky about which version of Java should be installed (7u45).

Make sure you set up tiering properly and educate anyone that touches it on how the tiering works. Backup volumes don't need to live in the ssd tier!

bigmandan
Sep 11, 2001

lol internet

College Slice

devmd01 posted:

Make sure you set up tiering properly and educate anyone that touches it on how the tiering works. Backup volumes don't need to live in the ssd tier!

Myself and one other guy are the only ones touching it. And yea only a few things need the "flash optimized" profile. A lot of our use cases will be the "low with progression" profile (tier 3 -> 2) or just tier 3. The destination volume for replication also only gets tier 3.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."



Thanks Ants posted:

Is there a quick and dirty capacity calculator for Vsan to work out how much disk you need to be able to lose disks on hosts and entire hosts? I've always been under the impression that it wasn't worth doing on a small scale but I honestly never attempted to base that in factual evidence.

http://www.yellow-bricks.com/2014/01/17/virtual-san-datastore-calculator/

How many disks/hosts you need is determined by the failures-to-tolerate setting, which is defined in a storage policy that is applied to specific VMs. You can have different VMs use different storage policies and attach to the same VSAN datastore, so low-priority VMs could use a storage policy that tolerates 1 failure, while high-priority VMs could be set to tolerate a higher number of failures, up to 3. These VMs can be stored in the same datastore because the protection happens at the object level (VMDKs, config files, swap files, etc.) rather than using RAID, so multiple protection policies are possible on the same VSAN datastore. So the capacity required is roughly the sum of each VM's size multiplied by its failures-to-tolerate setting plus one (FTT + 1 copies of the data), plus some additional overhead for each object.

The total number of hosts required to support a failure to tolerate setting of n is 2n+1, so you need at least a 5 node cluster to support losing more than one disk or host.

Wicaeed
Feb 8, 2005


Woah holy crap, just got pricing back for a single PS4210X with the following:

24 900GB 10K SAS Drives (21TB RAW)
Dual 10Gbit Controllers
3 years NBD support

17 Grand

:stare:

Not bad at all

Wicaeed fucked around with this message at 23:12 on Jan 22, 2015

bigmandan
Sep 11, 2001

lol internet

College Slice

That seems like a pretty decent deal. What's your use case going to be?

Wicaeed
Feb 8, 2005


bigmandan posted:

That seems like a pretty decent deal. What's your use case going to be?

Small, 3 host VMware deployment that probably won't run more than 20 VMs. For what we really need it's slightly overkill, but I'm not complaining.

Internet Explorer
Jun 1, 2005


Nice, thanks for posting the quote.

This is more of a gut feeling type thing, but I don't really see the point of 10k SAS drives anymore. I feel like you want either a mix of SSD and NL-SAS if you need a performance bump, or just NL-SAS if you don't. There is a relatively small performance window where 24 10k SAS drives are worth it over either choice above.
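To put rough numbers on that window; the per-drive IOPS figures here are rules of thumb, not vendor specs, and the SSD figure is deliberately conservative:

```python
# Back-of-envelope spindle math behind the "small window" claim.
# Per-drive IOPS values are rough rules of thumb, not measured specs.
IOPS_10K_SAS = 140
IOPS_NLSAS = 75
IOPS_SSD = 5000  # one modest SATA SSD, conservative

print(24 * IOPS_10K_SAS)               # 3360 IOPS from 24x 10k SAS
print(20 * IOPS_NLSAS)                 # 1500 IOPS from a NL-SAS shelf
print(20 * IOPS_NLSAS + 4 * IOPS_SSD)  # 21500 with a few SSDs in front
```

So pure 10k SAS only wins in the narrow band between what NL-SAS covers and where a small flash tier blows past it.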

Also, the only reason my quote was NBD was because it is going to be used as backup storage. If that's going into production I would get 4-hour support. With NBD, if poo poo breaks on Friday it will be Monday before you get your part; break on Saturday or Sunday and it's Tuesday.

Internet Explorer fucked around with this message at 14:18 on Jan 23, 2015

Nebulis01
Dec 30, 2003
Technical Support Ninny

So we're looking for a new SAN; our requirements are pretty minimal. This is an internal Hyper-V cluster and a few SQL boxes.

Our existing stuff does about 4K IOPS and 250 MB/sec with ~4 TB used; the workload is 47% read, 53% write. We're looking for something that will handle 15K IOPS and give us 6-8 TB usable with 10GigE, and allow us to put a trial 5-10 users on VDI with room to expand that to 60-70 down the road. Our budget is in the $40k range for this project. I'm looking at multiple vendors and so far we've narrowed it to Tegile, Nimble and NetApp.


NetApp wants us on an FAS2552A outfitted with 4x 200GB SSD and 20x 1TB 2.5" 7.2K (not willing to promise an IOPS benchmark for the array other than to say 'fits your needs')
Nimble presented a CS300 array with 4x 160GB SSD and 20x 1TB 2.5" 7.2k (rating 30K IOPS)
Tegile is quoting an HA2100 with 3x 200GB SSD and 13x 2TB 3.5" 7.2K (rating 10K IOPS; moving to an HA2300 with 1TB 2.5" 7.2K gets us into the 30K range)

All of the quotes are in the same $40k+- range with 3 years of NBD support and a cold spare kit for HDD and SSD.

Are these prices too high (they seem a mite high to me)?

I'm also really looking for feedback on the quality of Tegile, the platform seems like a nice front end to what is essentially a commercial version of ZFS with some bells and whistles. But I'm hesitant to go with such an unknown and young company for such a mission critical piece of infrastructure.

Are there other vendors we should consider?

Docjowles
Apr 9, 2009



Wicaeed posted:

Woah holy crap, just got pricing back for a single PS4210X with the following:

24 900GB 10K SAS Drives (21TB RAW)
Dual 10Gbit Controllers
3 years NBD support

17 Grand

:stare:

Not bad at all

:drat: About 2 years ago I priced out almost exactly the same thing (for the same use case, even) at a prior job, and IIRC it was more like $25k. Would have been nice, but was just a bit too pricey for my boss. So instead they continued limping along on a barely functional HP MSA 2000 :hellyeah: Which thankfully I don't have to loving deal with anymore.

Rhymenoserous
May 23, 2008


Docjowles posted:

:drat: About 2 years ago I priced out almost exactly the same thing (for the same use case, even) at a prior job, and IIRC it was more like $25k. Would have been nice, but was just a bit too pricey for my boss. So instead they continued limping along on a barely functional HP MSA 2000 :hellyeah: Which thankfully I don't have to loving deal with anymore.

Competition in the storage arena has been picking up.

Also 2012/13 was a bad time to buy bulk storage because of... http://www.reuters.com/article/2011/10/28/us-thai-floods-drives-idUSTRE79R66220111028

Maneki Neko
Oct 27, 2000



I asked in the Virtualization thread, but possible this is a better place.

Anyone using any of the hyperconverged stuff (Simplivity, Nutanix, etc)? We're talking to Simplivity, but was just curious what people's real world experiences have been.

Simplivity seems to offer more flexibility over the "lol here's a block of crap and you'll like it" approach that some of the other hyperconverged vendors shoot for.

Wicaeed
Feb 8, 2005


Maneki Neko posted:

I asked in the Virtualization thread, but possible this is a better place.

Anyone using any of the hyperconverged stuff (Simplivity, Nutanix, etc)? We're talking to Simplivity, but was just curious what people's real world experiences have been.

Simplivity seems to offer more flexibility over the "lol here's a block of crap and you'll like it" approach that some of the other hyperconverged vendors shoot for.

I'm assuming EMC ScaleIO falls into that hyperconverged storage range. I used it (briefly) and wasn't extremely impressed. It seems that for the same price Nutanix was offering everything ScaleIO had plus dedup/compression.

Nebulis01 posted:

So we're looking for a new SAN; our requirements are pretty minimal. This is an internal Hyper-V cluster and a few SQL boxes.

Our existing stuff does about 4K IOPS and 250 MB/sec with ~4 TB used; the workload is 47% read, 53% write. We're looking for something that will handle 15K IOPS and give us 6-8 TB usable with 10GigE, and allow us to put a trial 5-10 users on VDI with room to expand that to 60-70 down the road. Our budget is in the $40k range for this project. I'm looking at multiple vendors and so far we've narrowed it to Tegile, Nimble and NetApp.


NetApp wants us on an FAS2552A outfitted with 4x 200GB SSD and 20x 1TB 2.5" 7.2K (not willing to promise an IOPS benchmark for the array other than to say 'fits your needs')
Nimble presented a CS300 array with 4x 160GB SSD and 20x 1TB 2.5" 7.2k (rating 30K IOPS)
Tegile is quoting an HA2100 with 3x 200GB SSD and 13x 2TB 3.5" 7.2K (rating 10K IOPS; moving to an HA2300 with 1TB 2.5" 7.2K gets us into the 30K range)

All of the quotes are in the same $40k+- range with 3 years of NBD support and a cold spare kit for HDD and SSD.

Are these prices too high (they seem a mite high to me)?

I'm also really looking for feedback on the quality of Tegile, the platform seems like a nice front end to what is essentially a commercial version of ZFS with some bells and whistles. But I'm hesitant to go with such an unknown and young company for such a mission critical piece of infrastructure.

Are there other vendors we should consider?

Our company purchased two Nimble CS300 arrays back around September and it fell right around the 40k per array mark, with support.

Rhymenoserous
May 23, 2008


Isn't hyper-converged stuff just fancy VSA?

Serfer
Mar 10, 2003

The piss tape is real





I have an enclosure attached to a set of Windows 2012 R2 servers acting as a clustered storage pool, and the only monitoring the box provides is via SES, and I can't seem to find a way to actually view the SES information in Windows. Does anyone have any idea? The enclosure doesn't show in device manager, so it's not quite that easy...

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."



Nebulis01 posted:

So we're looking for a new SAN; our requirements are pretty minimal. This is an internal Hyper-V cluster and a few SQL boxes.

Our existing stuff does about 4K IOPS and 250 MB/sec with ~4 TB used; the workload is 47% read, 53% write. We're looking for something that will handle 15K IOPS and give us 6-8 TB usable with 10GigE, and allow us to put a trial 5-10 users on VDI with room to expand that to 60-70 down the road. Our budget is in the $40k range for this project. I'm looking at multiple vendors and so far we've narrowed it to Tegile, Nimble and NetApp.


NetApp wants us on an FAS2552A outfitted with 4x 200GB SSD and 20x 1TB 2.5" 7.2K (not willing to promise an IOPS benchmark for the array other than to say 'fits your needs')
Nimble presented a CS300 array with 4x 160GB SSD and 20x 1TB 2.5" 7.2k (rating 30K IOPS)
Tegile is quoting an HA2100 with 3x 200GB SSD and 13x 2TB 3.5" 7.2K (rating 10K IOPS; moving to an HA2300 with 1TB 2.5" 7.2K gets us into the 30K range)

All of the quotes are in the same $40k+- range with 3 years of NBD support and a cold spare kit for HDD and SSD.

Are these prices too high (they seem a mite high to me)?

I'm also really looking for feedback on the quality of Tegile, the platform seems like a nice front end to what is essentially a commercial version of ZFS with some bells and whistles. But I'm hesitant to go with such an unknown and young company for such a mission critical piece of infrastructure.

Are there other vendors we should consider?

NetApp won't give you an IOPS number because a generic IOPS number is pretty meaningless. Tegile bases their IOPS numbers on a 60/40 R/W ratio with 4k blocks. Nimble similarly uses a generic workload specification for their IOPS numbers. That's great if your workload is 60/40 with 4k blocks, but in your case it isn't even close. You're slightly more write-heavy, and writes are more intensive operations than reads, so you will get fewer IOPS than the generic numbers quoted for a 60/40 workload. Additionally, based on your IOPS and throughput numbers, you're doing a lot of sequential work. At 4k IOPS and 250 MB/s you're around a 64k average block size, which almost always signifies sequential IO, which won't benefit much from cache anyway. I actually think you've probably got something wrong there and that throughput number was generated during a backup job and doesn't reflect a normal workload.
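The block-size arithmetic above is just throughput divided by IOPS:

```python
def avg_block_kb(throughput_mb_s, iops):
    """Back-of-envelope average IO size. Large averages (~64K+)
    usually indicate sequential IO, which benefits little from
    read cache."""
    return throughput_mb_s * 1024 / iops

# The workload from the post: 250 MB/s at 4,000 IOPS.
print(avg_block_kb(250, 4000))  # 64.0 KB
```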

Anyway, the upshot of this is that NetApp, for better or worse, doesn't provide IOPS numbers like you want, but they do have some pretty sophisticated sizing tools that can be fed data about your specific workloads and generate a hardware recommendation and a report that will provide a lot of information about the expected performance. Ask to see an SPM report on the proposed system. If you just want a generic workload to compare to the others they can run SPM in reverse to generate IOPS and latency numbers for a specified workload.

Your price numbers seem pretty normal. I've seen CS300s quoted higher than that. I don't really have any opinions on Tegile other than that it will run slightly higher latencies than the other arrays listed, and it is probably going to have some of the same issues ZFS does with performance, especially write performance, deteriorating as the filesystem fragments from snapshotting and natural overwrite activity. NetApp and Nimble both have background processes that can consolidate empty space as it fragments, but I'm unaware of anything Tegile does to address that, though storing all of the FS metadata on SSD helps with that some.

You will get the most read cache out of the Nimble array. You will get the least out of the NetApp if you follow best practices and keep a spare.


Wicaeed posted:

I'm assuming EMC ScaleIO falls into that hyperconverged storage range. I used it (briefly) and wasn't extremely impressed. It seems that for the same price Nutanix was offering everything ScaleIO had + dedup/compression.

ScaleIO sort of straddles the line between SDS and HyperConverged. It's a little of both, but the supported hardware matrix was pretty small last I checked, and it's not a terribly mature technology yet, even relative to Nutanix or Simplivity.

Rhymenoserous posted:

Isn't hyper-converged stuff just fancy VSA?

The two big hyperconverged vendors pre-date VSAN, so it's really like VSAN is a less fancy Simplivity/Nutanix, whatever. There are also some differences in that both Nutanix and Simplivity have to deploy VMs in the environment to act as controllers/storage processors while VSAN runs entirely within the hypervisor, which has some benefits. The flip side is that Nutanix and Simplivity both support things like hardware based snapshots and replication, which tend to be much better propositions than using VMware snapshots and vSphere Replication, particularly the snapshotting. Better storage efficiency features like dedupe and compression also aren't available on VSAN. VSAN also doesn't make any attempt to enforce data locality whereas the other vendors do. Whether that's a good thing or not can be debated.

Muslim Wookie
Jul 6, 2005


NippleFloss posted:

Best case is active passive on 7-mode with the passive node running a raid-4 aggregate for the root vol and nothing else, and the other 10 disks in a raid-dp aggregate on the second node, which leaves 7 or 8 data disks depending on whether you decide to keep a hot spare or not.

Disk slicing on 8.3 is meant to address that problem somewhat.

RC code is fully tested, it simply hasn't been in the field long enough to qualify as GA. New platforms sometimes require ONTAP updates to support the new hardware (chipsets, CPU features) and these will ship with RC releases because they can't go GA until they have accrued a certain level of adoption in the field. That's literally what GA means, that it has reached "General Adoption".

If Nimble wasn't competitive NetApp SEs wouldn't spend so much time wringing their hands over it and trading strategies to beat them. I think Nimble talk probably came up more in competitive chat than any other single vendor.

First of all, I know this is from months ago - I've been travelling the US since Insight and just popped back in here and noticed this... so my bad if this is too old to bring back up. But anyway, GA does *not* stand for General Adoption - it's definitely Availability. NetApp consistently claim publicly that RC is suitable for production workloads, but I can assure you a very different tune is sung to their T1 global accounts in my geo.

WRT Nimble, I can't find the document at the moment, but since they're publicly listed they have to report on these things (as you mentioned elsewhere), and I recall reading they had something like 3000 registered resellers/partners and only about 1000 arrays sold out in the field last year. I think that's fairly telling, but I guess that information is probably too old to be meaningful now.

By the way, you always put together really well thought out and actually correct answers for the people in this thread - you reflect what storage architects should be, and that's something I never see enough of. Too many people are just winging it, relying on the vendor to do the work, dumping their lovely solution on a customer and walking away.

Also, totally agree with you on all the things you say about IOPS - it's always an irritating conversation. We tackle it similarly to how I think you do, from reading what you've said around here, but with one added thing in competitive situations... IOPS at what latency? That's pretty key for us in our geo anyway when going up against EMC.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."



Muslim Wookie posted:

Also, totally agree with you on all the things you say about IOPS - it's always an irritating conversation. We tackle it similarly to how I think you do, from reading what you've said around here, but with one added thing in competitive situations... IOPS at what latency? That's pretty key for us in our geo anyway when going up against EMC.

This is true, and very important, but in the context of discussions around the hybrid and all-flash players there's generally an implied ceiling of around 2ms on most of their generic IOPS numbers, so hitting the "at what latency" point is a little less meaningful versus something like Isilon, where you may be talking about 5 to 10ms of latency when they quote some very high IOPS numbers. I dumped a lot of information into that post already, so "AND now consider the latency!" felt like a bit too much to tackle in one go.
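As a toy illustration of the "IOPS at what latency" point (every number below is invented, not from any real array):

```python
# Toy example: the honest IOPS number is the highest rate achieved while
# latency stays under a ceiling. The (iops, avg_latency_ms) samples are made up.
samples = [
    (10_000, 0.4), (20_000, 0.7), (30_000, 1.2),
    (40_000, 1.9), (50_000, 3.5), (60_000, 8.0),
]

def iops_at_latency(samples, ceiling_ms=2.0):
    under = [iops for iops, lat in samples if lat <= ceiling_ms]
    return max(under) if under else 0

print(iops_at_latency(samples))        # 40000 with a 2ms ceiling
print(iops_at_latency(samples, 10.0))  # 60000 with a 10ms ceiling -- bigger, means less
```

Same hypothetical array, and the datasheet number depends entirely on which ceiling the vendor picked.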

Besides, I think that at the scale where you're talking about a CS300 or a 2552 it's likely that the customer will get more than enough performance out of either with a sufficient amount of flash, and the conversation really needs to move away from performance, which is a relatively easy bar to get over, and towards data management and integration with other applications and tools in the environment.

What geo are you in? You're an SE, I take it?


SpaceRangerJoe
Dec 24, 2003

The little hand says it's time to rock and roll.

I've been looking at a Dell storage system for our environment here. I've been quoted on a PV3420. I only have 2 Hyper-V hosts, and we don't have the switch capacity to do storage over the network. The direct attach should work fine for my needs.

That being said, am I being taken to school on this pricing?
PV3420
3 year NBD service
6x 200GB SSD
6x 300GB 15K
$18.5k

I know the pricing on Dell's SSDs is sky high, but this does seem like a lot. I still need to add 4 hour service instead of NBD since this will be production.
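For what it's worth, a crude $/GB sanity check on that quote (raw capacity only, ignoring RAID overhead and the SSD/HDD split, so take it loosely):

```python
# Crude price-per-raw-GB on the quoted PV3420 config.
ssd_gb = 6 * 200    # 6x 200GB SSD
hdd_gb = 6 * 300    # 6x 300GB 15K
quote_usd = 18_500

raw_gb = ssd_gb + hdd_gb
print(raw_gb, round(quote_usd / raw_gb, 2))  # 3000 GB raw, ~$6.17/GB
```

Whether that's high depends on what you compare it against, but it at least gives you a number to push back on the rep with.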

  • Reply