kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice

bull3964 posted:

One of the crazier ratios I saw with our pure was a 2tb file server we had. It was mixed content, images, docs, zip files, PDFs. For a brief time I had it on the same volume as its redundant partner. So, two 2tb vmdk files, both 90% full, but the same data on each.

Actual volume size on storage was about 210gb.

So, we've got a VERY poorly architected Oracle database that nets us 9.1:1 on average. They refuse to do any form of white space reclamation or data pruning within it, but that's whatever...

90% reduction on a mixed-workload file server is pretty dope


Methanar
Sep 26, 2013

by the sex ghost

bull3964 posted:

One of the crazier ratios I saw with our pure was a 2tb file server we had. It was mixed content, images, docs, zip files, PDFs. For a brief time I had it on the same volume as its redundant partner. So, two 2tb vmdk files, both 90% full, but the same data on each.

Actual volume size on storage was about 210gb.

At these crazy compression ratios, does that negatively affect read and write speeds? Does it matter if you're accessing 'sequential' blocks or a lot of different small files?

Also: how do things like filesystem indexing work when the underlying block storage is nothing at all like the logical filesystem? And what about filesystems that regularly attempt to defrag themselves?

Methanar fucked around with this message at 20:28 on May 23, 2018

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Methanar posted:

At these crazy compression ratios, does that negatively affect read and write speeds? Does it matter if you're accessing 'sequential' blocks or a lot of different small files?

Also: how do things like filesystem indexing work when the underlying block storage is nothing at all like the logical filesystem? And what about filesystems that regularly attempt to defrag themselves?

Compression and decompression are always on, so you can’t really talk about a performance penalty; there’s no way to measure without them. But compression in particular happens after the write is acknowledged and journaled in NVRAM, so the latency of the compression and destaging to flash is disconnected from the latency the user sees.

Both operations use LZO, which is lightweight and has some chipset optimizations. I don’t know what the time spent in decompression is, but it’s certainly orders of magnitude smaller than the time spent in the SCSI read path, so you’d never notice on an SSD-based array. Until you’re doing full NVMe internally and NVMe over fabric for connectivity, anything that adds a few microseconds of latency is just noise that gets lost in the much larger protocol latency overhead.

And anything that attempts to optimize or reorganize data based on the file system layout on the host is usually going to be counterproductive. There are some cases where it might actually be beneficial, but it would mostly be pure luck.
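To put rough numbers on how cheap a lightweight codec is relative to the rest of the I/O path, here's a small sketch. It uses zlib at level 1 as a stand-in for LZO (which isn't in the Python standard library); real arrays use faster, hardware-assisted codecs, so treat the timings as illustrative, not a Pure benchmark.

```python
import time
import zlib

# One 4 KiB block of repetitive "mixed content" stand-in data.
BLOCK = (b"mixed content: images, docs, zips, pdfs... " * 100)[:4096]

start = time.perf_counter()
compressed = zlib.compress(BLOCK, level=1)
compress_us = (time.perf_counter() - start) * 1e6

start = time.perf_counter()
restored = zlib.decompress(compressed)
decompress_us = (time.perf_counter() - start) * 1e6

assert restored == BLOCK  # lossless round trip
print(f"4 KiB block -> {len(compressed)} bytes "
      f"({len(BLOCK) / len(compressed):.1f}:1)")
print(f"compress: {compress_us:.0f} us, decompress: {decompress_us:.0f} us")
# Compare against ~100+ us for a flash read plus the SCSI/FC protocol path:
# per-block codec time is a small fraction of end-to-end latency.
```

Even in interpreted Python the per-block codec time is tiny next to the protocol path; a tuned LZO in the array firmware disappears entirely into the noise.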

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Technically, compression can even speed things up, as there is less physical I/O.

YOLOsubmarine
Oct 19, 2004

adorai posted:

Technically, compression can even speed things up, as there is less physical I/O.

Also true, though less impactful as we move away from spinning disk and, eventually, SAS-attached SSDs. In addition, there is a cache amplification benefit from caching compressed and deduplicated blocks. In general the benefits are well worth the cost.
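The cache amplification point is just arithmetic: if the cache holds post-reduction blocks, each cached byte stands for several logical bytes. A back-of-envelope sketch with made-up numbers:

```python
# Back-of-envelope sketch of cache amplification from caching reduced blocks.
# All numbers are illustrative, not from any specific array.

cache_gib = 64            # physical DRAM/flash cache
compression = 2.0         # 2:1 inline compression
dedupe = 1.5              # 1.5:1 deduplication

# Caching post-reduction blocks means each cached byte represents
# compression * dedupe bytes of logical data.
effective_gib = cache_gib * compression * dedupe
print(f"{cache_gib} GiB cache behaves like {effective_gib:.0f} GiB of logical data")
# -> 64 GiB cache behaves like 192 GiB of logical data
```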

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


We did have one issue on our older FA420 array where our writes were outpacing the inline dedupe/compression engine, which caused rapid storage consumption as data was written out directly to disk.

A Purity update cleared that up, though, and we haven't had an issue since. We also never had the issue on our m20 array. They performed the update themselves remotely. We did have to step through three intermediate versions along the way, so that was six controller reboots.

Multipathing worked like it was supposed to, and it was all drama-free and done in about an hour. We had both VMware and a physical DB cluster going against that array. Everything kept on ticking during the whole process.

These things have basically been maintenance- and management-free.

Modern OSes won't try to defrag. They recognize it's either a thin-provisioned drive in the guest, or it comes through as an SSD if the drive was thick provisioned.

YOLOsubmarine
Oct 19, 2004

bull3964 posted:

We did have one issue on our older FA420 array where our writes were outpacing the inline dedupe/compression engine, which caused rapid storage consumption as data was written out directly to disk.

This was an issue when I first started working with Pure, and I know the EMC account teams had a benchmark workload that could force the array to fall over during a PoC. But like you, I haven’t heard of it being an issue in a few years now.

Maneki Neko
Oct 27, 2000

Man, we must be some crazy weirdo edge case; I work for a service provider and we've never seen any vendor get above 1.8:1 or so.

YOLOsubmarine
Oct 19, 2004

Maneki Neko posted:

Man we must be some crazy weirdo edge case, I work for a service provider and we've never seen any vendor get above 1.8:1 or so.

Which vendors have you used?

H110Hawk
Dec 28, 2006

Maneki Neko posted:

Man we must be some crazy weirdo edge case, I work for a service provider and we've never seen any vendor get above 1.8:1 or so.

Is your data encrypted before it's written to the device?

bull3964
Nov 18, 2000


YOLOsubmarine posted:

This was an issue when I first started working with Pure, and I know the EMC account teams had a benchmark workload that could force the array to fall over during a PoC. But like you, I haven’t heard of it being an issue in a few years now.

You really needed to pound it, too. Like multiple 10Gb links nearly maxed out with writes for hours on end.

To give Pure props again though, they went in and adjusted the priority of the background reclamation task to process the backlog faster until we got the firmware updated.

bull3964 fucked around with this message at 01:38 on May 24, 2018

kzersatz
Oct 13, 2012


H110Hawk posted:

Is your data encrypted before it's written to the device?

Pre-deduplicated data will net returns this poor as well.
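Both points (encryption and pre-reduced data) come down to entropy: a block-level reducer can't find redundancy that was already removed or scrambled upstream. A quick sketch, using os.urandom as a stand-in for ciphertext or already-compressed data:

```python
import os
import zlib

# Why data encrypted (or already compressed/deduplicated) before it reaches
# the array reduces poorly: high-entropy input defeats the codec, and unique
# ciphertext blocks defeat dedupe too.

plain = b"customer record 0001; status=active; region=us-east;\n" * 1000
high_entropy = os.urandom(len(plain))   # stand-in for ciphertext

ratio_plain = len(plain) / len(zlib.compress(plain, level=1))
ratio_entropy = len(high_entropy) / len(zlib.compress(high_entropy, level=1))

print(f"structured data:   {ratio_plain:.1f}:1")
print(f"encrypted-looking: {ratio_entropy:.2f}:1")  # ~1:1, sometimes worse

assert ratio_plain > 5        # repetitive records shrink dramatically
assert ratio_entropy < 1.05   # random-looking bytes do not compress
```

Array-side encryption at rest avoids this because it happens after reduction; host-side or application-side encryption happens before the array ever sees the data.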

BonoMan
Feb 20, 2002

Jade Ear Joe
Cross-posting from the Mac Hardware thread.

I work at an ad agency with full production capabilities. Our video team (of which I'm a part) has nice central storage with about 100 TB. All 8 of our production machines just tap into it via 10GbE and we work off the drive. It's great and wonderful.

We also have an 8-person interactive team on iMacs. Right now they're connected to an older NAS that isn't quite cutting it. They all tap into the general gigabit switch and the NAS is connected to that. But when they work with large files it really chokes.

I know Macs don't have 10GbE connections (although I think there's a Thunderbolt -> 10GbE adapter?) but we'd like a similar setup. Rewiring all of their ethernet drops and then getting another 10GbE switch and other equipment to replicate our video setup is probably going to be expensive. We'd be fine with a locally placed Thunderbolt NAS (as opposed to rack-mounted NASes in our machine room like it is now). However, I can't wrap my head around how you connect a lot of machines to a NAS via Thunderbolt. There doesn't seem to be a Thunderbolt "switch" ... or am I just thinking about this all wrong?

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read
One of the vendors in the running for this storage project is Dell; they're pitching a Unity 450f. Doing some reading on this unit, I found a review (http://www.storagereview.com/dell_emc_unity_450f_allflash_storage_review) stating that dedupe was 'coming in the future'? This seemed off to me, as every other all-flash array we've seen touts dedupe + compression. I've heard very mixed things about Unity in general; I'm holding out hope that the Pure quote is comparable.

The unit as configured has 18 x 1.92TB disks (23.98 TB usable, as stated by Dell).

YOLOsubmarine
Oct 19, 2004

Spring Heeled Jack posted:

One of the vendors in the running for this storage project is Dell; they're pitching a Unity 450f. Doing some reading on this unit, I found a review (http://www.storagereview.com/dell_emc_unity_450f_allflash_storage_review) stating that dedupe was 'coming in the future'? This seemed off to me, as every other all-flash array we've seen touts dedupe + compression. I've heard very mixed things about Unity in general; I'm holding out hope that the Pure quote is comparable.

The unit as configured has 18 x 1.92TB disks (23.98 TB usable, as stated by Dell).

Unity sucks. It’s just a slightly refreshed VNX architecture, which is itself like a decade old. It doesn’t have dedupe because they’ve got to figure out how to graft it onto their old rear end architecture. They’ve still got you configuring RAID levels and poo poo.

https://www.google.com/amp/s/www.theregister.co.uk/AMP/2016/05/17/emc_unity_or_vnx3_whats_in_a_name/

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Spring Heeled Jack posted:

So my company is probably going to venture into the realm of all-flash arrays soon.

Our current setup is an IBM v7000 with 17TB usable space among a couple of data stores with 10k and 15k disks.

I know a lot of the all flash arrays rely on dedupe and compression, but how reliable are their numbers in this regard? I’m getting quoted flash setups with anywhere from 10-20TB usable and then they’ll say 28-48TB ‘effective’ space.

I feel like a doofus potentially buying a new SAN with less physical disk space than our current one, though I know it really isn’t the case. Help calm my nerves?

Note dedupe and compression are available with license on the v7000 at code level 8.1.2:

https://www.ibm.com/developerworks/community/blogs/storagevirtualization/entry/Announcing_Data_Reduction_Pools_8.1.2?lang=en

If you're set on a new array, the v9000 does all of the above at sub-millisecond latency, and the v7000 Gen2+ is out, supporting SSDs up to 15TB:

https://www-01.ibm.com/support/docview.wss?uid=ssg1S1003842

Maneki Neko
Oct 27, 2000

YOLOsubmarine posted:

Which vendors have you used?

Currently we're on Tegile; I don't recall all of the random platforms in the past.

H110Hawk posted:

Is your data encrypted before it's written to the device?

It is not, just apparently terrible.

underlig
Sep 13, 2007
We need help at work.

A previous employee set up our environment with two HP DL380 G9s as scale-out file servers in front of an HP MSA 2040 SAN.

The SOFS are set up in a cluster under Windows Server 2012 R2; nothing had been updated since ~March 2016. After the initial problems we've updated BIOS, firmware and drivers from HP's SUM.

The cluster contains two CSVs and a witness, all on the SAN.

The problem is that the CSVs were all set with sofs1 as owner, and that server shut down two days ago.
When that happened, the Hyper-V cluster lost contact with the SAN.

When we tried bringing the volumes up on sofs2, they connect for a few seconds and then disconnect again, stating "the resource is in use". This goes for both volumes and the witness.

We managed to get sofs01 online again, brought the volumes up, and everything seemed to work for an hour until the server crashed again.

I finally, yesterday morning, managed to get the volumes to go online by first selecting bring online with sofs02 as owner and then setting them in maintenance mode. (????) I have absolutely no idea why this worked or exactly WHAT maintenance mode really is, since every host has full access to the volumes now; nothing seems to be draining or anything like that.
The idea of setting them in maintenance mode came when I tried bringing the volumes up through Windows Disk Manager on sofs2.

We do not understand why 02 cannot take ownership; I hope someone here has an idea of what might be misconfigured. There's no documentation, and there also seem to be spelling errors, like one volume is cSv and one is cVs.

We raised a ticket with HPE at noon on Thursday. We have a Next Business Day agreement but tried to upgrade that to 4-hour instead; that only made HP pause the ticket until the 4-hour upgrade was paid (and even then they say it might take up to seven days for it to get validated in HP's systems), so as it is now we have to wait until Monday to get any help from them. I doubt their outsourced technician will be able to help us with any software configurations.

A second question: will anything stop working right now if I take the witness out of maintenance and the cluster then loses contact with it? As there is only one working node in the cluster, the witness shouldn't be necessary, right?

Thanks Ants
May 21, 2004

#essereFerrari


It might be time to buy a single-incident support case from Microsoft rather than looking to HP for this one

https://support.microsoft.com/en-us/gp/offerprophone

NeuralSpark
Apr 16, 2004

BonoMan posted:

Cross-posting from the Mac Hardware thread.

I work at an ad agency with full production capabilities. Our video team (of which I'm a part) has nice central storage with about 100 TB. All 8 of our production machines just tap into it via 10GbE and we work off the drive. It's great and wonderful.

We also have an 8-person interactive team on iMacs. Right now they're connected to an older NAS that isn't quite cutting it. They all tap into the general gigabit switch and the NAS is connected to that. But when they work with large files it really chokes.

I know Macs don't have 10GbE connections (although I think there's a Thunderbolt -> 10GbE adapter?) but we'd like a similar setup. Rewiring all of their ethernet drops and then getting another 10GbE switch and other equipment to replicate our video setup is probably going to be expensive. We'd be fine with a locally placed Thunderbolt NAS (as opposed to rack-mounted NASes in our machine room like it is now). However, I can't wrap my head around how you connect a lot of machines to a NAS via Thunderbolt. There doesn't seem to be a Thunderbolt "switch" ... or am I just thinking about this all wrong?

You don’t connect 2+ Macs to a single NAS via Thunderbolt, as it’s just PCIe over a different connector. 10Gbit adapters and Cat6 are your easiest route to faster access. You could do Xsan, but then you’re pulling fiber instead of Cat6 and adding a poo poo ton more complexity.

Serfer
Mar 10, 2003

The piss tape is real



Potato Salad posted:

I see vsan get 2-3 in a mixed nix/win environment.

Pure is loving fantastic if your budget can do it.

S2D is working pretty drat well for a customer too. I may or may not like building my own storage infra though.

S2D is great if your server infrastructure is already Windows, your 10G switches do DCE, and you have RDMA NICs. If you don't, it's a lot of money and work to get it all.

I would highly recommend getting certified setups from a vendor (e.g. Dell) instead of trying to build it yourself.

Thanks Ants
May 21, 2004



I was trying to get a qualified build for S2D out of Dell for ages, and our lovely rep was being really evasive about it for some reason. It was only an exercise in curiosity anyway so I dropped it.

ChubbyThePhat
Dec 22, 2006

Who nico nico needs anyone else

Thanks Ants posted:

It might be time to buy a single incident support from Microsoft rather than looking to HP for this one

https://support.microsoft.com/en-us/gp/offerprophone

This would be my first thought. I can't say I've run across that exact issue before.

Wicaeed
Feb 8, 2005
Is it normal for HPE Support to be really cagey about giving bug details?

We have an HPE StoreOnce system that experiences intermittent crashes of the SMB shares it serves. A reboot fixes this; however, the Windows shares take about 30 minutes to come back after a reboot.

I'm asking HP Support for more details as to what is actually going on to cause the bug, and I'm being stonewalled...

evil_bunnY
Apr 2, 2003

Wicaeed posted:

Asking HP Support for more details as to what is going on to actually cause the bug and I'm being stonewalled...
The most likely explanation is they have no clue.

Fruit Smoothies
Mar 28, 2004

The bat with a ZING
EDIT: Probably wrong thread

Fruit Smoothies fucked around with this message at 14:08 on May 30, 2018

Spring Heeled Jack
Feb 25, 2007

Holy crap, SAN discussions are down to an HPE Nimble AF40 and a Dell Compellent SC5020. My coworker is leaning towards Compellent since it's a more 'mature' solution or something? :negative:

I know in my heart of hearts that this is a bad choice, but please give me a reason to sink Compellent. It just seems like an old as poo poo design that they slapped flash disks into.

bigmandan
Sep 11, 2001

lol internet
College Slice

Spring Heeled Jack posted:

Holy crap, SAN discussions are down to an HPE Nimble AF40 and a Dell Compellent SC5020. My coworker is leaning towards Compellent since it's a more 'mature' solution or something? :negative:

I know in my heart of hearts that this is a bad choice, but please give me a reason to sink Compellent. It just seems like an old as poo poo design that they slapped flash disks into.

Yeah, it's an older design but it's loving rock solid. We have a pair of SC4020s and they have given us very, very few issues. Performance is still pretty good for us, even when using 3 storage tiers. (We have lots of at-rest data, so it made sense at the time to go tiered.)

Potato Salad
Oct 23, 2014

nobody cares


given that your team is even entertaining an HPE storage product, I'd guess your employer's selection process is already kinda fucky

Potato Salad fucked around with this message at 17:23 on Jun 14, 2018

underlig
Sep 13, 2007
Continuing my questions about our HP MSA 2040 SAN. I know next to nothing about this kind of stuff.

It's got two controllers, both connected to both of our clustered file servers via Fibre Channel (so four cables in total).

Now I got asked whether the SAN had been rebooted recently; I checked the uptime and it's been up for 2½ years.

Q1: Should you reboot the SAN from time to time?

Q2: With two controllers, can I reboot one, keeping everything running through the second controller, and then reboot that one once the first is back up?

Q3: Is there anything else in the SAN enclosure that needs restarting, or is all the intelligence in each of the controllers? I.e., is "restart the SAN" == "restart all of the controllers"?

(Q4: Except for our recent problems, this thing has run fine for 2½ years. Should I look for newer firmware, etc., or is this another one of those things that you don't touch unless you have an issue that's resolved by a newer version?)

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

Potato Salad posted:

given that your team is even entertaining an HPE storage product, I'd guess your employer's selection process is already kinda fucky

I’ve heard almost nothing but good things about Nimble products so far. What issues do you have with them?

YOLOsubmarine
Oct 19, 2004

Spring Heeled Jack posted:

Holy crap, SAN discussions are down to HPE Nimble AF40 and a Dell Compellent SC5020. My coworker is leaning towards Comepllent since its a more 'mature' solution or something? :negative:

I know in my heart of hearts that this is a bad choice, but please give me a reason to sink Compellent. It just seems like an old as poo poo design that they slapped flash disks into.

The design was fine 10 years ago but tiering makes no sense now that flash is relatively cheap and ubiquitous. Tiering from flash to flash is stupid. Also, their snapshotting mechanism still sucks relative to the pretty elegant redirect on write semantics that Nimble (and most everyone else) use. What drives are in the Compellent?

Spring Heeled Jack posted:

I’ve heard almost nothing but good things about Nimble products so far. What issues do you have with them?

Their biggest issue is that they got bought by HP.
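For what it's worth, the redirect-on-write vs. copy-on-write distinction can be shown with a toy block map. Everything here is invented for illustration, not any vendor's actual metadata format:

```python
# Toy sketch of the two snapshot styles. Block maps are just dicts from
# logical block number to a location in a flat, append-only "media" list.

class Volume:
    def __init__(self):
        self.media = []          # append-only physical blocks
        self.map = {}            # logical block -> media index
        self.snapshots = []      # frozen copies of the map

    def write(self, lbn, data):
        # New data always lands in a fresh physical location.
        self.media.append(data)
        self.map[lbn] = len(self.media) - 1

    def snapshot_row(self):
        # Redirect-on-write: freeze the map. Later writes go to new
        # locations anyway, so a snapshot costs one map copy and no
        # extra data movement at overwrite time.
        self.snapshots.append(dict(self.map))

    def read(self, lbn, snap=None):
        m = self.snapshots[snap] if snap is not None else self.map
        return self.media[m[lbn]]

vol = Volume()
vol.write(0, b"v1")
vol.snapshot_row()
vol.write(0, b"v2")                   # redirected to a new block
assert vol.read(0) == b"v2"           # live volume sees new data
assert vol.read(0, snap=0) == b"v1"   # snapshot still sees old data
# Classic copy-on-write would instead copy b"v1" aside at overwrite time,
# adding a read+write penalty to the first overwrite of every snapped block.
```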

Spring Heeled Jack
Feb 25, 2007


YOLOsubmarine posted:

The design was fine 10 years ago but tiering makes no sense now that flash is relatively cheap and ubiquitous. Tiering from flash to flash is stupid. Also, their snapshotting mechanism still sucks relative to the pretty elegant redirect on write semantics that Nimble (and most everyone else) use. What drives are in the Compellent?

Their biggest issue is that they got bought by HP.

I could get specifics, but it’s 16x 1.92TB disks.

YOLOsubmarine
Oct 19, 2004

underlig posted:

Continuing my questions about our HP MSA 2040 SAN. I know next to nothing about this kind of stuff.

It's got two controllers, both connected to both of our clustered file servers via Fibre Channel (so four cables in total).

Now I got asked whether the SAN had been rebooted recently; I checked the uptime and it's been up for 2½ years.

Q1: Should you reboot the SAN from time to time?

Q2: With two controllers, can I reboot one, keeping everything running through the second controller, and then reboot that one once the first is back up?

Q3: Is there anything else in the SAN enclosure that needs restarting, or is all the intelligence in each of the controllers? I.e., is "restart the SAN" == "restart all of the controllers"?

(Q4: Except for our recent problems, this thing has run fine for 2½ years. Should I look for newer firmware, etc., or is this another one of those things that you don't touch unless you have an issue that's resolved by a newer version?)

Q1: No. You shouldn’t reboot it just to reboot it.

Q2: This depends on your host configuration. If all of the connected hosts have multipathing enabled for their disk devices and have at least one path to interfaces on each controller then a controller reboot will not cause an outage. You have to verify this at the host level. The SAN can’t tell you if MPIO is correctly configured on your hosts.

Q3: The controllers are the brains. There’s nothing else in the enclosure to be restarted.

Q4: If you haven’t had issues then you generally shouldn’t touch it unless it’s to fix a specific issue you’re aware of or concerned about. The MSA boxes are old workhorses so most of the kinks were worked out long ago and you aren’t going to get any new features. That said you should at least test that you CAN do single controller reboots and fail over properly because you don’t want to discover that it doesn’t work when a controller actually fails and everything is down. Set up a maintenance window and test it. Apply the most recent firmware while you’re at it.
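For Q2, the host-side MPIO check can be scripted. This sketch parses a simplified, made-up sample modeled loosely on Linux `multipath -ll` output (real output varies by distro and device handler, so treat the format as hypothetical); the point is the rule itself: every LUN needs at least one active path on each controller before you reboot the other one.

```python
# Before rebooting a controller, confirm every LUN has at least one active
# path through EACH controller. SAMPLE is an invented, simplified stand-in
# for multipath output; adapt the parsing to what your hosts actually print.

SAMPLE = """\
mpatha (3600c0ff000...) dm-0 HP,MSA 2040 SAN
  path sdb controller=A status=active
  path sdc controller=B status=active
mpathb (3600c0ff001...) dm-1 HP,MSA 2040 SAN
  path sdd controller=A status=active
  path sde controller=B status=failed
"""

def paths_per_controller(text):
    """Return {lun: {controller: active_path_count}}."""
    result, lun = {}, None
    for line in text.splitlines():
        if not line.startswith(" "):          # a new LUN header line
            lun = line.split()[0]
            result[lun] = {}
        elif "status=active" in line:         # count only live paths
            ctrl = line.split("controller=")[1].split()[0]
            result[lun][ctrl] = result[lun].get(ctrl, 0) + 1
    return result

for lun, ctrls in paths_per_controller(SAMPLE).items():
    ok = all(ctrls.get(c, 0) >= 1 for c in ("A", "B"))
    print(f"{lun}: {ctrls} -> {'safe' if ok else 'NOT safe'} to lose a controller")
# mpathb has no active path on controller B, so rebooting controller A
# would cut that LUN off entirely.
```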

YOLOsubmarine
Oct 19, 2004

Spring Heeled Jack posted:

I could get specifics, but it’s 16x 1.92TB disks.

Compellent may end up getting axed from the line card entirely fwiw. Dell/EMC has too many storage products in their combined portfolio and some of them will go away. I’d peg Compellent as a likely casualty since the tech is old as hell and not that...compelling.

The Nimble stuff is going to be easier to manage, lower touch, and consistently low latency. The information in Infosight is also great and Dell has nothing like that for Compellent. Your support experience with Nimble will also be better.

Spring Heeled Jack
Feb 25, 2007


YOLOsubmarine posted:

Compellent may end up getting axed from the line card entirely fwiw. Dell/EMC has too many storage products in their combined portfolio and some of them will go away. I’d peg Compellent as a likely casualty since the tech is old as hell and not that...compelling.

The Nimble stuff is going to be easier to manage, lower touch, and consistently low latency. The information in Infosight is also great and Dell has nothing like that for Compellent. Your support experience with Nimble will also be better.

Dell also pitched us a Unity 450f but they ended up pushing Compellent over it. Also the fact that dedupe is a very recent addition to the Unity line doesn't sit well with me.

Our current SAN is an IBM v7000 with spinning disk, so anything will be an improvement, but I don't want to be stuck with a worse 'new' product for the next 5 years because my coworker is dead set on Dell for some reason. I'm trying to get a technical reason to go HPE over Dell, but of course the HP rep our CDW guy was working with left the company in the middle of this engagement, so we're slow on getting responses to our questions.

H110Hawk
Dec 28, 2006

Spring Heeled Jack posted:

I’ve heard almost nothing but good things about Nimble products so far. What issues do you have with them?

HPE is the problem. It's like Oracle. Not to be trifled with if you can at all avoid it.

YOLOsubmarine
Oct 19, 2004

Spring Heeled Jack posted:

Dell also pitched us a Unity 450f but they ended up pushing Compellent over it. Also the fact that dedupe is a very recent addition to the Unity line doesn't sit well with me.

Our current SAN is an IBM v7000 with spinning disk so anything will be an improvement, but I don't want to be stuck with a worse 'new' product for the next 5 years because my coworker is dead set on Dell for some reason. I'm trying to get technical reason to go HPE over Dell but of course the HP rep our CDW guy was working with left the company in the middle of this engagement, so we're slow on getting responses to our answers.

I'm not sure you'll find any slam dunk technical reasons as the plain truth is that both solutions will very likely work just fine for you. The differentiators should be manageability, support experience, level of comfort with the vendor and VAR, etc. They'll both serve data fast enough and do snapshots and replication and the usual things.

I guess the crux of it is that the Compellent all-flash platform isn't really the mature one. They only introduced all-flash Compellent a couple of years ago, and they only added deduplication and compression in mid-2016. So the actual platform with the features you're looking at has existed for right around two years. Coincidentally, that's a few months after Nimble introduced their all-flash platform. Yes, the hybrid versions of both of these have been around for a while, but that's not really the same thing. So both platforms are relatively new, and if he actually wants a mature all-flash array he should look at Pure, who have had an all-flash platform with a complete feature set since 2011.

Harry Lime
Feb 27, 2008


YOLOsubmarine posted:

I'm not sure you'll find any slam dunk technical reasons as the plain truth is that both solutions will very likely work just fine for you. The differentiators should be manageability, support experience, level of comfort with the vendor and VAR, etc. They'll both serve data fast enough and do snapshots and replication and the usual things.

I guess the crux of it is that the Compellent All Flash platform isn't really the mature one. They just introduced all flash compellent a couple of years ago. They also only added deduplication and compression in mid 2016. So the actual platform with the features you're looking at has existed for right around two years. Coincidentally that's a few months after Nimble introduced their all flash platform. Yes, the hybrid based versions of both of these have been around for a while, but that's not really the same thing. So both platforms are relatively new and if he actually wants a mature all flash array he should look at Pure, who have had an all flash platform with a complete feature set since 2011.

Pure loving rules. An upside of getting laid off today is that hopefully in my next gig I'll get to sell it.


bigmandan
Sep 11, 2001


YOLOsubmarine posted:

Compellent may end up getting axed from the line card entirely fwiw. Dell/EMC has too many storage products in their combined portfolio and some of them will go away. I’d peg Compellent as a likely casualty since the tech is old as hell and not that...compelling.

The Nimble stuff is going to be easier to manage, lower touch, and consistently low latency. The information in Infosight is also great and Dell has nothing like that for Compellent. Your support experience with Nimble will also be better.

What's lacking in Dell Storage Manager compared to Infosight? I haven't used Nimble stuff before.
