evil_bunnY
Apr 2, 2003

HP is mosdef being slimy, and they will only ever do whatever they're contractually mandated to do.


Potato Salad
Oct 23, 2014

nobody cares


If that sales rep doesn't sell you support, it's EOL for ya.


I do have a NetApp account whose rep keeps trying to insist, every time we talk, that a device goes EOL a year earlier than it actually does. They keep getting reminded that we already know the truth and have previously clarified the matter. "Ah, right."


Wicaeed
Feb 8, 2005
Haven't been exposed to Nimble in a year or so, but if memory serves the CS215 was the bottom-of-the-barrel entry-level array for poors like 6 years ago.

That's not to say they aren't still in support, but I'm fairly sure you should be able to find Nimble EOS/EOL information on the HP website, as bad as it is.

I'd also check the available information from HPE InfoSight, if their arrays are registered.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Maneki Neko posted:

Anyone have a good resource for end of life info for Nimble/HPE? We picked up a client that has CS215 arrays, the OG Nimble rep said they don't go end of support until 2021 but HPE is now pulling out "lol end of support at the end of 2019".

I'm feeling like this is likely slimy HPE rep shenanigans, but I'm trying to verify. Our OG Nimble rep left the company (shocking news).

ndyer39 posted:

If you head to InfoSight -> Documentation you'll find a section called Support Policies. In there you should find the End of Availability notice for the CS200/CS400, as well as our overall Support Policy for hardware and software.

To summarise both documents, the policy is that we will allow customers to purchase another FIVE years of support from the day a system is made End of Availability. The CS220/240/260/420/440/460 was made End of Availability on Dec 11th 2014, so that means we will support them for hardware break/fix as well as current and new software features, up to Dec 11th 2019.

https://community.hpe.com/t5/Array-Setup-and-Networking/CS420-Lifecycle/m-p/6985593#M1763

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Hello SAN thread. On a recent call with a storage guy in IT, he claimed that adding more SSD storage to an existing file server would cost over $7k/TB. Even accounting for TCO, no-expenses-spared service contracts, redundancy etc, is this complete bullshit? These aren't fancy disks either, I'm pretty sure they're just SATA 3.

Thanks Ants
May 21, 2004

#essereFerrari


It's not inconceivable if your file server is something like a NetApp system, and the upgrade involves maintenance work to upgrade the software, swapping controllers, adding a shelf and then having to use proprietary SSDs.

Internet Explorer
Jun 1, 2005





Yeah, enterprise storage costs seem outrageous if you're only used to non-enterprise storage. Whether that's a racket or not is a different conversation.

H110Hawk
Dec 28, 2006

ConanTheLibrarian posted:

Hello SAN thread. On a recent call with a storage guy in IT, he claimed that adding more SSD storage to an existing file server would cost over $7k/TB. Even accounting for TCO, no-expenses-spared service contracts, redundancy etc, is this complete bullshit? These aren't fancy disks either, I'm pretty sure they're just SATA 3.

Raw TB or usable? How enterprise is this stuff, etc.? It's probably on the high side, but depending on what's actually behind the curtain it's trivial to hit that cost. What are you including in your TCO? Maintenance, IOPS, controllers, network bandwidth, rack space, backup lifecycle, etc. Sounds like you might be getting the actual TCO, not the salesdroid whitepaper TCO.

Do never SAN.
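To put toy numbers on that list of TCO line items, here's a quick sketch. Every figure and the `cost_per_usable_tb` helper are invented for illustration, not anyone's actual quote; the point is just how fast the non-drive items swamp the drive cost.

```python
# Toy TCO sketch: every figure below is made up for illustration.
def cost_per_usable_tb(items, usable_tb):
    """Total cost per usable TB once every line item is counted."""
    return sum(items.values()) / usable_tb

tco_items = {
    "drives": 40_000,              # the part people compare to Amazon prices
    "controllers_and_shelf": 25_000,
    "maintenance_5yr": 30_000,     # support contract over the term
    "network": 10_000,             # switch ports, optics, cabling
    "rack_power_cooling": 8_000,
    "backup_capacity": 20_000,     # protecting the new capacity somewhere else
}

print(cost_per_usable_tb(tco_items, usable_tb=20))  # 6650.0, i.e. ~$6.6k/TB
```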

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Cost was quoted for usable storage. It has to be extremely reliable, but I'm pretty sure it's just some commodity Dell server. Half the reason I ask is because the performance is pretty poor considering the price. Like it seems to me that using faster SSDs (and NICs, I suppose) wouldn't add much to the cost considering how high it already is.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

ConanTheLibrarian posted:

Hello SAN thread. On a recent call with a storage guy in IT, he claimed that adding more SSD storage to an existing file server would cost over $7k/TB. Even accounting for TCO, no-expenses-spared service contracts, redundancy etc, is this complete bullshit? These aren't fancy disks either, I'm pretty sure they're just SATA 3.

That’s somewhat high, but not astronomically so. The last Pure expansion we did penciled in at about $5k/TB raw. NetApp tends to be a little cheaper than Pure for raw storage; some other vendors may be more. The raw capacity numbers aren’t generally what matters, though; it’s usable capacity after data reduction that you should care about. Pure does pretty well by that metric for most workloads.

The thing to keep in mind with storage vendors and the cost of capacity is:
1) they’re mostly using enterprise-grade drives (Pure excepted, though they write their own firmware), which are more expensive due to more sophisticated controller software and hardware and dual porting
2) they have to cover the cost of firmware testing and validation
3) they have to cover parts replacement
4) most important, you’re paying for the software that runs on the array. The software is where the value is, and array manufacturers wouldn’t make much money if they charged everyone the same amount for software whether they bought 10TB of storage or 10,000TB. So instead the cost of the software is baked into the cost of raw storage so that it scales as capacity increases. Sure, you can go buy a drive, even an enterprise drive, for cheaper than what an array vendor will sell it to you for, but how cheaply can you make it fully redundant and stable, practically guarantee data integrity, provide features like inline compression and deduplication and no-impact snapshots and clones and replication, and provide 24/7/365 software support and next-day hardware replacement? You’re paying for the software, and the tight integration with the hardware.
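To put rough numbers on the raw-versus-effective point, here's a small sketch. The `effective_cost_per_tb` helper and every price and ratio in it are made up for illustration, not any vendor's list pricing.

```python
def effective_cost_per_tb(raw_price_per_tb, data_reduction, usable_fraction=0.75):
    """Turn a quoted raw $/TB into $/TB the hosts can actually use.

    usable_fraction stands in for RAID/parity and spare overhead;
    data_reduction is the dedupe + compression ratio the array really
    achieves on *your* data. All inputs here are illustrative.
    """
    return raw_price_per_tb / (usable_fraction * data_reduction)

# $5k/TB raw with a 3:1 reduction on a typical VM mix:
print(round(effective_cost_per_tb(5000, 3.0)))  # 2222 -> call it ~$2.2k per effective TB
# The same array holding unreducible data (already-compressed images, media):
print(round(effective_cost_per_tb(5000, 1.0)))  # 6667 -> closer to $6.7k per effective TB
```

The denominator is the whole game: data that won't dedupe or compress keeps it at 1, and the quote stays ugly on any all-flash array.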

H110Hawk
Dec 28, 2006

ConanTheLibrarian posted:

Cost was quoted for usable storage. It has to be extremely reliable, but I'm pretty sure it's just some commodity Dell server. Half the reason I ask is because the performance is pretty poor considering the price. Like it seems to me that using faster SSDs (and NICs, I suppose) wouldn't add much to the cost considering how high it already is.

It sounds like you haven't seen any details, or you're not passing them along here. You are firmly in the not enough information state right now.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

ConanTheLibrarian posted:

Cost was quoted for usable storage. It has to be extremely reliable, but I'm pretty sure it's just some commodity Dell server. Half the reason I ask is because the performance is pretty poor considering the price. Like it seems to me that using faster SSDs (and NICs, I suppose) wouldn't add much to the cost considering how high it already is.

What is the actual thing you bought? Poor performance is probably not due to whatever communication protocol the drives are using.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
OK, given all the proprietary stuff that might be involved, I can see how the cost could be so high.

YOLOsubmarine posted:

What is the actual thing you bought? Poor performance is probably not due to whatever communication protocol the drives are using.
I don't know the details offhand. As for the performance, I was only seeing around 600 MB/s read speeds over a 10Gbit/s NIC using dd to read a large file. The application would definitely benefit from better read speeds, so I'm curious about how much more expensive that could get.
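Not that dd is wrong, but if you want a slightly more controlled version of the same test, here's a minimal sketch (a hypothetical script, Python 3.8+). It only tells you about the storage/network path if the file is bigger than the client's RAM or the cache has been dropped first.

```python
import sys
import time

def sequential_read_mb_s(path, block_size=1 << 20):
    """Read the file front to back in 1 MiB chunks and report MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / elapsed / 1e6

if __name__ == "__main__":
    print(f"{sequential_read_mb_s(sys.argv[1]):.0f} MB/s")
```

For reference, 10GbE tops out around 1.2 GB/s of payload, so 600 MB/s is roughly half of what the wire could carry; the gap could be the disks, the server, or a single SMB/NFS stream, which is why the "what is it actually" question matters.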

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Is this a physical file server you are just shoving more disks into?

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Yep.

Thanks Ants
May 21, 2004

#essereFerrari


Server OEMs like Dell and HP always have, and always will, bend you over on storage prices.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Still, that pricing doesn't add up, unless the server is full of drives and you are tossing in a SAS card + shelf?

bigmandan
Sep 11, 2001

lol internet
College Slice

Internet Explorer posted:

Yeah, enterprise storage costs seem outrageous if you're only used to non-enterprise storage. Whether that's a racket or not is a different conversation.

I was recently quoted $25k CAD for 24 1.2 TB 10K drives from Dell to expand tier 3 on our two SC4020's ... Yeah it's pretty outrageous.

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice
$7k for that isn't necessarily unreasonable.

I've paid much more for a 1.1TB HGST SSD PCIe card before, and definitely more for a 0+1 SSD stripe in a bunch of C240s.

Potato Salad
Oct 23, 2014

nobody cares


poo poo, I pay less than $2/GB on good NVMe

S2D is the tits

Thanks Ants
May 21, 2004

#essereFerrari


Are you using Dell/HP validated designs for that or rolling your own?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I am looking at buying some new arrays. I am currently using Nimble and Oracle ZFS storage, and I want to consolidate into a single array at each site. The two contenders are Pure and Nimble. I am pretty sure I will get Nimble for less money, but I like a few things about Pure. One really cool feature is snap to NFS. Anyway, I was wondering if anyone has experience with both arrays and could tell me if it really is worth paying a price premium for Pure. Any takers?

edit: I am looking at ~120TB to 150TB effective/usable and ~100k expected IOPS. Mix of Windows and Linux servers, plus 500 seats of VDI.

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice

adorai posted:

I am looking at buying some new arrays. I am currently using Nimble and Oracle ZFS storage, and I want to consolidate into a single array at each site. The two contenders are Pure and Nimble. I am pretty sure I will get Nimble for less money, but I like a few things about Pure. One really cool feature is snap to NFS. Anyway, I was wondering if anyone has experience with both arrays and could tell me if it really is worth paying a price premium for Pure. Any takers?

edit: I am looking at ~120TB to 150TB effective/usable and ~100k expected IOPS. Mix of Windows and Linux servers, plus 500 seats of VDI.

I have both actually.

I'm a fan of both, but some workloads are not great for PURE, whereas others run like greased lightning.

We use Nimble as our performance-minded "doesn't deduplicate" block storage offering, commonly for video applications or very large VMDKs that don't need greased lightning but do have large disks.
If this says anything though, we have nine separate PURE arrays and three Nimble.

2 running just Epic/Caché DBs
2 prod VMware/VDI for failure domains
1 dedicated SQL/other DB array
1 Oracle/AIX dedicated array for EDW nonsense
--
1 DR VDI
1 BC/DR VMware with fan-in from the two above
1 BC/DR Epic/Caché DB

I'd be more than happy to share honest deduplication / reduction numbers on PURE for our M20's, M50's and M20R2's if you want.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

I prefer Pure to Nimble all flash because I think they handle deduplication better. I also think they’re more likely to exist and still be innovating in 5 years, versus another HP storage acquisition, and one that wasn’t even purchased principally for their storage product.

Oh, also ActiveCluster is the simplest metroclustering solution I’ve worked with and it’s free with Pure arrays.

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice

YOLOsubmarine posted:

I prefer Pure to Nimble all flash because I think they handle deduplication better. I also think they’re more likely to exist and still be innovating in 5 years, versus another HP storage acquisition, and one that wasn’t even purchased principally for their storage product.

Oh, also ActiveCluster is the simplest metroclustering solution I’ve worked with and it’s free with Pure arrays.

True with ActiveCluster; we're not using it yet in prod, but it's super loving simple.
Otherwise, PURE deduplication is pretty spot-on; dynamic fingerprinting really can't be beat in most scenarios.

Zorak of Michigan
Jun 10, 2006

I didn't speak up because we don't have Nimble to compare it to, but we're a Pure customer and I love the stuff. Not only do the arrays work, but their support staff have been really easy to work with when we hit them with vague "how do I do x" questions.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

kzersatz posted:

I'm a fan of both, but some workloads are not great for PURE, whereas others run like greased lightning.

We use Nimble as our performance-minded "doesn't deduplicate" block storage offering, commonly for video applications or very large VMDKs that don't need greased lightning but do have large disks.
If this says anything though, we have nine separate PURE arrays and three Nimble.
What I am inferring from your post is that data which does not dedupe does not necessarily perform well on Pure. Is that your assessment? My org is a bank with terabytes of images (loan docs, account agreements, check images, etc..) which doesn't compress or dedupe. Performance isn't an issue for these items, but they will be stored on whatever array we purchase. We will also have a lot of SQL and generic VMware.

Harry Lime
Feb 27, 2008


adorai posted:

What I am inferring from your post is that data which does not dedupe does not necessarily perform well on Pure. Is that your assessment? My org is a bank with terabytes of images (loan docs, account agreements, check images, etc..) which doesn't compress or dedupe. Performance isn't an issue for these items, but they will be stored on whatever array we purchase. We will also have a lot of SQL and generic VMware.

Due to how Pure handles metadata, it performs best with deduped/compressed workloads. It still performs well with unreducible workloads, but that isn't where it shines.

Zorak of Michigan
Jun 10, 2006

The only weakness we've found with our Pures is in workloads that need massive throughput more than low latency, like data warehouse stuff. The Pures don't suck at throughput, but they don't do much better than wide spinning arrays that cost a fraction as much.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

What I am inferring from your post is that data which does not dedupe does not necessarily perform well on Pure. Is that your assessment? My org is a bank with terabytes of images (loan docs, account agreements, check images, etc..) which doesn't compress or dedupe. Performance isn't an issue for these items, but they will be stored on whatever array we purchase. We will also have a lot of SQL and generic VMware.

We’ve got a few credit union customers running Pure without issue, but it’s going to be fairly expensive if you have large amounts of unreducible data, as will any all flash array.

kzersatz posted:

True with ActiveCluster; we're not using it yet in prod, but it's super loving simple.
Otherwise, PURE deduplication is pretty spot-on; dynamic fingerprinting really can't be beat in most scenarios.

We just sold some to a customer to replace their old VNX/VPlex setup and it was basically a two-day install versus many weeks. Very slick.

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice
I had a much better response typed up, but it boils down to this.

PURE will do well with just about any workload you toss at it: undeduplicated, deduplicated, doesn't matter. But cramming a bunch of fat kids into a Cadillac isn't the most efficient use of space you could have, which is why Nimble was a song and a dance cheaper to throw space hogs at versus Pure... but if your people want to fund it, absolutely do it.

That said, it does well for our EDW workloads because our DBAs are just shy of useless; they refuse to use any of Oracle's capabilities to compact, reclaim whitespace, clean, consolidate, you name it... they also refuse to use ASM to migrate any data whatsofuckingever, so the last time they did an export/import of the data to migrate disks, it was nearly 9TB in size... as we see attached, it's barely 800GB of unique data.

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice
oops


kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice
doublepost.shitter


devmd01
Mar 7, 2006

Elektronik
Supersonik
Oh hell yes, we just got approval from the CIO to dump our VNX5400 entirely and upgrade+expand our existing Pure storage. We'll be 100% flash SSD/NVMe across both of our datacenters soon, and with Pure on both sides we can start looking at active failover between sites.

lol internet.
Sep 4, 2007
the internet makes you stupid
Some dumb questions from a storage noob.

1. My SAN has two controllers with four 8Gb FC ports each. If I use 2 ports on each controller, would I get 32Gb of bandwidth across all 4 ports? Why do some controllers have 4 ports? Does anyone actually connect that many?

2. I have a Cisco 4Gb MDS switch connected to an HP C7000 enclosure. If I connect two 4Gb FC ports on this to a 16Gb Cisco switch, will I get 8Gb of throughput? The first page says something about MDS switches being an exception.

3. Should I really even connect the enclosure interconnect switch to a regular switch? I do this currently because I might have a pizza-box server also connecting to the storage fabric.

4. Is it uncommon to do zoning through Cisco DCNM? I'm working with FC zones for the first time, just curious if people are mainly doing CLI at other companies.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
1. Hypothetically yes, you can get 32Gb aggregated over four ports. It's likely that your bottleneck will be something other than your switch and HBA ports, though. Use multiple hosts connected to all ports, with multipathing. Controllers have multiple ports both for bandwidth and for redundancy in case of component failure.

2. Your question is unclear... is the 4Gb switch powered by the C7000? Is it a pass-thru or a fabric switch? Connected to blade servers in the C7000? First page of what? Exception to what? I think the answer is yes, but the more I read what you wrote the more confused I get.

3. It sounds like this is an MDS powered by the C7000? If so, you could connect it to an external SAN switch if you needed more ports for your fabric. I would keep it simple if you don't actually need that.

4. Whatever works for you.
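On the bandwidth math in #1 and #2, here's a rough rule-of-thumb sketch. The table and the `best_case_mb_s` helper are just illustrative; real throughput depends on the hosts, the array, and whether multipathing actually spreads IO across the ports.

```python
# Rule-of-thumb payload throughput per FC port, one direction, in MB/s.
# 1/2/4/8Gb FC use 8b/10b encoding, 16/32Gb use 64b/66b, hence the round
# numbers instead of a straight bits-to-bytes conversion.
FC_MB_PER_SEC = {1: 100, 2: 200, 4: 400, 8: 800, 16: 1600, 32: 3200}

def best_case_mb_s(speed_gb, ports):
    """Aggregate ceiling across `ports` links, assuming traffic really is
    balanced across them (i.e. multipathing is doing its job)."""
    return FC_MB_PER_SEC[speed_gb] * ports

print(best_case_mb_s(8, 4))  # 3200 -> the "32Gb" figure from question 1
print(best_case_mb_s(4, 2))  # 800  -> two 4Gb uplinks from the C7000 cap you here,
                             #         no matter how fast the switch on the far end is
```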

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice
#4
I hate DCNM with a passion, but we're running a dated version due to purchasing our MDS's through EMC (predecessor's call).
If you have the bandwidth to learn the CLI, do it; if you don't, stick to the GUI and call it a day.

CampingCarl
Apr 28, 2008




My company has a couple of the Buffalo 5010 TeraStations that I have a couple of questions about.
1. We want to expand the space on them. CDW is quoting us the Buffalo-brand drives, but is there an issue with getting some WD Golds off Amazon for cheaper?
1b. Side note: I was also asked to look at putting in a couple of SSDs to see if that would speed anything up.
2. Since you can't really expand a RAID array, am I right that the best course is to back the whole thing up, rebuild the array with more drives, and copy it all back?
3. I suspect that there is some issue with thumbnails when people are looking at/searching directories on a share. Windows just eats a bunch of CPU with the DLL handling thumbnails and doesn't give it up until killed. This may not be a question for this thread, but I'm curious if anyone has seen something similar.

qutius
Apr 2, 2003
NO PARTIES

kzersatz posted:

#4
I hate DCNM with a passion, but we're running a dated version due to purchasing our MDS's through EMC (predecessor's call).
If you have the bandwidth to learn the CLI, do it; if you don't, stick to the GUI and call it a day.

DCNM is usually pretty tolerant of whatever NX-OS version you might be running. We had licensed versions of DCNM in a previous role and generally liked it; being able to have multiple fabrics open at the same time was great, plus some nice performance/stat collection. Anyway, it's not as fast as the command line, but it helps keep newbies from making mistakes if you're running a larger storage team or trying to create standards/documentation. The 10.x versions have a slick upgrade process that's nice for pushing out code to switches too, but yeah... not a tool that everyone will like.

Here's the support matrix for v11:
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/11_0_1/comp_matrix/b_compatibility_matrix_11_0_1.html

A quick glance suggests it supports any version of NX-OS that's supported on the MDS platform. My guess is you'd be fine to upgrade. From v10 on they started to push almost all functionality to the web interface. I ended up still using both the web and client GUI depending on what I wanted to do.

CampingCarl posted:

My company has a couple of the Buffalo 5010 TeraStations that I have a couple of questions about.
1. We want to expand the space on them. CDW is quoting us the Buffalo-brand drives, but is there an issue with getting some WD Golds off Amazon for cheaper?
1b. Side note: I was also asked to look at putting in a couple of SSDs to see if that would speed anything up.
2. Since you can't really expand a RAID array, am I right that the best course is to back the whole thing up, rebuild the array with more drives, and copy it all back?
3. I suspect that there is some issue with thumbnails when people are looking at/searching directories on a share. Windows just eats a bunch of CPU with the DLL handling thumbnails and doesn't give it up until killed. This may not be a question for this thread, but I'm curious if anyone has seen something similar.

Generally, vendors will require specific firmware to be running on the installed drives, which is why CDW is quoting you the way they are. Google around (or contact Buffalo if you have support) and see what they say.



SlowBloke
Aug 14, 2017
Another MDS question: we might have to decommission our MDS 9124 and upgrade to a more modern 9148S. Am I going to need to redo the configuration/zoning from scratch, or could I import the config from the old switch to the new one?

