Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
What price range are we talking exactly? Oracle has some pretty neat storage stuff. It's ZFS on the backend, but the front end is your run-of-the-mill UI.

Also, protip: v7000s run a RH kernel on the backend.

evol262
Nov 30, 2010
#!/usr/bin/perl

Dilbert As FUCK posted:

What price range are we talking exactly? Oracle has some pretty neat storage stuff. It's ZFS on the backend, but the front end is your run-of-the-mill UI.

Also, protip: v7000s run a RH kernel on the backend.

And EMC's stuff runs on SuSE. And ONTAP is sort-of BSD. And Juniper's stuff runs on a heavily, heavily customized FreeBSD. But everything interesting in all of these actually happens in proprietary bits. The v7000s run a separate storage kernel which handles it all. The RH kernel is only used to make hardware discovery and networking less painful.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

Dilbert As FUCK posted:

What price range are we talking exactly? Oracle has some pretty neat storage stuff. It's ZFS on the backend, but the front end is your run-of-the-mill UI.

Also, protip: v7000s run a RH kernel on the backend.

Yup, I'm used to that from managing our v7ks via CLI, but to Sales Guy and his customers, it's not Linux, it's a GUI!!!!

As for price range, I would say $30,000 to $50,000 as a max, based off the VSAN idea they were proposing before.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
Is the NetApp V-series really more than $50k?

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

mattisacomputer posted:

That's what I recommended, as I know it can do the job, but Sales Guy's response was "they're not gonna spend that kind of money." Either way not my problem, but the project now has me interested in what alternatives there are to IBM SVC / v7000 that would provide similar features.

v5000 :v::v:

Zephirus
May 18, 2004

BRRRR......CHK

evol262 posted:

And EMC's stuff runs on SuSE.

VNX/CX are still Windows underneath.

sudo rm -rf
Aug 2, 2011


$ mv fullcommunism.sh
/america
$ cd /america
$ ./fullcommunism.sh


Hey there storage guys, I'm looking at getting a simple disk array for iSCSI to create some shared storage for our growing vCenter deployment (6 hosts) - the only caveat is that I really need it to be Cisco hardware. Is there something in the UCS line that could fulfill this need? Could I literally grab a C240 and put openfiler on it?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

sudo rm -rf posted:

Hey there storage guys, I'm looking at getting a simple disk array for iSCSI to create some shared storage for our growing vCenter deployment (6 hosts) - the only caveat is that I really need it to be Cisco hardware. Is there something in the UCS line that could fulfill this need?

Cisco has storage now!

quote:

Could I literally grab a C240 and put openfiler on it?

YOU JUST FOUND CISCO'S STORAGE SOLUTION!
http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-invicta-series-solid-state-system/index.html
I mean it's totally not a C240 painted black....


I'd really vouch for an MD3220i or HP MSA if you're on a small cluster but growing. A C240 is fine, but what happens if a CPU blows, or the main board? How quick is the failover?

You could buy two C240s and do FreeNAS and ZFS replication, but still, it's nice to have a person to call for those pesky software bugs.
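
For anyone curious, the replication side of that is just ZFS snapshots shipped over SSH. A minimal sketch, assuming a hypothetical pool/dataset named tank/vmstore and a second box at 10.0.0.2:

code:

# take a snapshot and do a one-time full send to the second box
zfs snapshot tank/vmstore@base
zfs send tank/vmstore@base | ssh root@10.0.0.2 zfs receive tank/vmstore

# afterwards, only ship the changes between snapshots
zfs snapshot tank/vmstore@hourly-01
zfs send -i tank/vmstore@base tank/vmstore@hourly-01 | ssh root@10.0.0.2 zfs receive -F tank/vmstore

FreeNAS wraps more or less the same thing in its GUI replication tasks; this is what's running underneath.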

Dilbert As FUCK fucked around with this message at 22:46 on May 14, 2014

sudo rm -rf
Aug 2, 2011


$ mv fullcommunism.sh
/america
$ cd /america
$ ./fullcommunism.sh


The main issue is that we get a significant discount on Cisco products (internal pricing), so I'm thinking that jerry-rigging a C240 with FreeNAS would have a big enough price advantage over even an entry-level array like the MD3220i to be worth it.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

sudo rm -rf posted:

The main issue is that we get a significant discount on Cisco products (internal pricing), so I'm thinking that jerry-rigging a C240 with FreeNAS would have a big enough price advantage over even an entry-level array like the MD3220i to be worth it.

Sure, but price isn't everything. If you plan to scale you may hit issues; if you want to go on vacation and something happens, they will call you; if you leave the company, who do they talk to?


I'm not saying that a dual-C240 setup wouldn't work, I'm just saying weigh the pros and cons. ZFS has some really neat shit; coupled with some SSDs for L2ARC, you can get amazing performance out of basic 7.2K drives.
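
To make that concrete, here's roughly what bolting the SSDs onto a pool looks like from the shell. A rough sketch only, with hypothetical pool and device names (tank, da4 through da7):

code:

# add two SSDs as L2ARC read cache for the pool
zpool add tank cache da4 da5

# optionally mirror a pair of SSDs as a SLOG to absorb sync writes (NFS/iSCSI)
zpool add tank log mirror da6 da7

# watch how the cache and log devices are actually being used
zpool iostat -v tank 5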

Thanks Ants
May 21, 2004

#essereFerrari


If you have literally any other option, then rolling your own FreeNAS thing that ends up being mission-critical is not the path you want to go down.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

madsushi posted:

Is the NetApp V-series really more than $50k?

The lowest-end models might come in under that, depending on licensing. But there's no V-Series anymore; it's just a license (FlexArray, of course) on regular FAS, starting with the 8000 series.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Caged posted:

If you have literally any other option, then rolling your own FreeNAS thing that ends up being mission-critical is not the path you want to go down.

On this note, what is advised for a software solution for a light shared storage load?

I am replacing a physical server at a remote site (with limited connectivity) with a pair of ESXi hosts. I have a handful of DL380 G6s lying around, and am hoping to set up one for their "production" shared storage, and one as a backup target for PHD Virtual.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

mattisacomputer posted:

As for price range, I would say $30,000 to $50,000 as a max, based off the VSAN idea they were proposing before.

For $50k you can definitely get an Oracle ZFS appliance that will perform quite well. Obviously your workloads will define the need, but we hit ours (cost exactly $50k for HA, 10GbE and around 13TB usable) with 20k IOPS regularly and still service 99.999% of all requests in less than 1 microsecond.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

adorai posted:

For $50k you can definitely get an Oracle ZFS appliance that will perform quite well. Obviously your workloads will define the need, but we hit ours (cost exactly $50k for HA, 10GbE and around 13TB usable) with 20k IOPS regularly and still service 99.999% of all requests in less than 1 microsecond.

Will this solution also virtualize the other storage systems that they have? That's the major need for this project, rather than adding additional storage.

Docjowles
Apr 9, 2009

Is anyone here using OpenStack Swift (their object storage / Amazon S3 analog), with or without SwiftStack's commercial upsell stuff? Our storage requirements for a class of data that's written once and then rarely accessed again have really ballooned over the last couple of years for various reasons, and the expensive-ass NetApp filers we typically use aren't a good fit for many petabytes of fairly cold data. It's not "archive and then never use again" so tape isn't what we want either. An object store backed by tons of cheap, slow disk seems ideal for our use case.

Not asking for leads on partners, we have several as well as the expertise to build it in house if the price on farming it out doesn't make sense. Just wondering if anyone has personal experience managing a large Swift deployment. I'm out at the OpenStack Summit this week so I've been taking every chance to bone up on Swift and it seems pretty awesome.

Also exploring Ceph, but the fact that Swift focuses purely on object storage and doesn't have the bulk of block and NFS/CIFS support (which we do not need) added on makes it more attractive.
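
For anyone who hasn't poked at it, the day-to-day interface to Swift is just containers and objects over HTTP. A minimal sketch with the python-swiftclient CLI, with made-up container/object names and the usual OS_AUTH_URL / OS_USERNAME / OS_PASSWORD environment variables assumed to be set:

code:

# push a file into a container (created automatically if it doesn't exist)
swift upload colddata reports/2014-q1-archive.tar.gz

# list the container, then pull an object back down
swift list colddata
swift download colddata reports/2014-q1-archive.tar.gz

# account-level stats: container count, object count, bytes used
swift stat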

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Docjowles posted:

Is anyone here using OpenStack Swift (their object storage / Amazon S3 analog), with or without SwiftStack's commercial upsell stuff? Our storage requirements for a class of data that's written once and then rarely accessed again have really ballooned over the last couple of years for various reasons, and the expensive-ass NetApp filers we typically use aren't a good fit for many petabytes of fairly cold data. It's not "archive and then never use again" so tape isn't what we want either. An object store backed by tons of cheap, slow disk seems ideal for our use case.

Not asking for leads on partners, we have several as well as the expertise to build it in house if the price on farming it out doesn't make sense. Just wondering if anyone has personal experience managing a large Swift deployment. I'm out at the OpenStack Summit this week so I've been taking every chance to bone up on Swift and it seems pretty awesome.

Also exploring Ceph, but the fact that Swift focuses purely on object storage and doesn't have the bulk of block and NFS/CIFS support (which we do not need) added on makes it more attractive.

At my last job, I frequently described our data access patterns to storage vendors as "write once, read maybe."

Have you looked at Amazon Glacier?

evol262
Nov 30, 2010
#!/usr/bin/perl

Docjowles posted:

Is anyone here using OpenStack Swift (their object storage / Amazon S3 analog), with or without SwiftStack's commercial upsell stuff? Our storage requirements for a class of data that's written once and then rarely accessed again have really ballooned over the last couple of years for various reasons, and the expensive-ass NetApp filers we typically use aren't a good fit for many petabytes of fairly cold data. It's not "archive and then never use again" so tape isn't what we want either. An object store backed by tons of cheap, slow disk seems ideal for our use case.

Not currently, but my last job did exactly this (Usenet binary back end). It works exceptionally well.

Docjowles
Apr 9, 2009

Glacier is a little too slow. The files are still wanted on-demand, but not in such a performance critical way that they need to be on 15k SAS or SSD. A lag of a few seconds is fine, several hours not so much. I guess rarely accessed is a poor description. Less frequently accessed? :)

Docjowles fucked around with this message at 17:52 on May 18, 2014

thebigcow
Jan 3, 2001

Bully!

Docjowles posted:

Glacier is a little too slow. The files are still wanted on-demand, but not in such a performance critical way that they need to be on 15k SAS or SSD. A lag of a few seconds is fine, several hours not so much. I guess rarely accessed is a poor description. Less frequently accessed? :)

You're going to be the guy with a SAN full of WD Red drives.

Thanks Ants
May 21, 2004

#essereFerrari


Anyone know roughly what the Dell Nexenta stuff comes in at?

Docjowles
Apr 9, 2009

thebigcow posted:

You're going to be the guy with a SAN full of WD Red drives.

The entire point of object storage systems like Swift and Ceph is to allow you to use commodity hw to scale out massively and cheaply. So probably!

parid
Mar 18, 2004
I think I'm going to be going down this commodity + layer of abstraction road soon. Anyone here implemented one of these systems recently? What was the experience? What should people going down this road pay attention to? What were your favorite pieces (hardware platforms, software layers, designs, etc.)?

Thanks Ants
May 21, 2004

#essereFerrari


Has anyone heard any reports on, or used, the Fujitsu DX100 S3 units? We are looking for a bunch of slowish storage that's a bit better than a Synology. Pricing seems alright, but I can't really think of anything in the same sort of area other than a FAS2220.

CrazyLittle
Sep 11, 2001

Clapping Larry

thebigcow posted:

You're going to be the guy with a SAN full of WD Red drives.

What, exactly, is the problem with commodity kit for low-performance bulk storage solutions? Is there any real advantage to buying NL-SAS when you're just going to doubly or triply duplicate the data across multiple disks and storage hosts?

(I mean, isn't that the whole point of projects like backblaze?)

Mr Shiny Pants
Nov 12, 2012

CrazyLittle posted:

What, exactly, is the problem with commodity kit for low-performance bulk storage solutions? Is there any real advantage to buying NL-SAS when you're just going to doubly or triply duplicate the data across multiple disks and storage hosts?

(I mean, isn't that the whole point of projects like backblaze?)

People are afraid they are going to be the one left holding the bag when the system goes tits up.

Take that as you will.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

CrazyLittle posted:

What, exactly, is the problem with commodity kit for low-performance bulk storage solutions? Is there any real advantage to buying NL-SAS when you're just going to doubly or triply duplicate the data across multiple disks and storage hosts?

(I mean, isn't that the whole point of projects like backblaze?)

I've come to learn as my career has progressed that Corporate IT is 75% covering your ass. There's a reason the big SAN players can charge a big premium: the ability to cover my ass if something goes wrong. I could go on and on, but yeah... CYA.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

CrazyLittle posted:

What, exactly, is the problem with commodity kit for low-performance bulk storage solutions? Is there any real advantage to buying NL-SAS when you're just going to doubly or triply duplicate the data across multiple disks and storage hosts?

(I mean, isn't that the whole point of projects like backblaze?)

There is a large OPEX cost associated with running applications without real support on commodity hardware with high failure rates. You're paying for more hardware to add extra redundancy to make up for the lack of reliability. You're paying more people to admin the systems because you don't have vendor backed technical support to troubleshoot problems, ship you replacement parts the same day, or perform parts replacements. You're paying for extra power and cooling and datacenter space because you needed extra hardware for more redundancy, and the hardware you bought likely isn't as dense or efficient as enterprise gear.

There are lots of hidden costs in lost efficiency (not just physical efficiency, but data management, flexibility, etc) that wipe away a lot of the CAPEX savings and turn them into OPEX. And you can depreciate hardware for tax benefits, but you can't depreciate power or cooling or employees. So whether it actually makes sense depends a lot on the specific organization and their needs and resources. In many cases it doesn't.

Mr Shiny Pants
Nov 12, 2012

NippleFloss posted:

There is a large OPEX cost associated with running applications without real support on commodity hardware with high failure rates. You're paying for more hardware to add extra redundancy to make up for the lack of reliability. You're paying more people to admin the systems because you don't have vendor backed technical support to troubleshoot problems, ship you replacement parts the same day, or perform parts replacements. You're paying for extra power and cooling and datacenter space because you needed extra hardware for more redundancy, and the hardware you bought likely isn't as dense or efficient as enterprise gear.


Well, to be honest, Enterprise support sometimes isn't all that great. See the tales of woe in this thread about botched firmware and the like taking down storage systems. When people are breathing down your neck you get to say: "We pay them this ridiculous amount of money, they are working on it, I did everything I could."

Even with enterprise gear you buy everything twice, so the extra hardware is true for both scenarios.

As for the replacement parts: The idea is that you don't need the same day replacement parts because there is no SPOF in the system that warrants it. The cheaper parts also make it possible to have a couple of systems on the shelf should you need them.

There is something to be said for both solutions.

KennyG
Oct 22, 2002
Here to blow my own horn.

Mr Shiny Pants posted:

There is something to be said for both solutions.

As someone who has scratched more than a million dollars in 6 months to EMC, I can tell you it is simultaneously the most reassuring and unsettling thing.

We do have a SAN capable of 500k IOPS at less than 2ms latency. However, I can't help but wonder how much more robust a system we could have if the 400GB SSDs were $300 instead of $8,000. Who cares if they last 1/10th as long? I'm still 2.5x ahead. That being said, I have already been able to bail out of a sticky situation that would have cost me my job had I been on the DIY route.
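
Back-of-the-envelope on that 2.5x, taking the list prices at face value: $8,000 / $300 is roughly 27x cheaper per drive, and even buying 10x as many drives to cover the shorter lifespan still leaves you around 2.7x ahead on raw drive spend.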

Pile Of Garbage
May 28, 2007



KennyG posted:

As someone who has scratched more than a million dollars in 6 months to EMC, I can tell you it is simultaneously the most reassuring and unsettling thing.

We do have a SAN capable of 500k IOPS at less than 2ms latency. However, I can't help but wonder how much more robust a system we could have if the 400GB SSDs were $300 instead of $8,000. Who cares if they last 1/10th as long? I'm still 2.5x ahead. That being said, I have already been able to bail out of a sticky situation that would have cost me my job had I been on the DIY route.

Getting advanced replacement on consumer hardware is usually impossible so you'd have to be running with twice, maybe three times the number of hot-spares.

Mr Shiny Pants
Nov 12, 2012

cheese-cube posted:

Getting advanced replacement on consumer hardware is usually impossible so you'd have to be running with twice, maybe three times the number of hot-spares.

Which is possible if they cost 10 times less :)

Thanks Ants
May 21, 2004

#essereFerrari


So now you need another office to keep all these spares in.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Mr Shiny Pants posted:

Well, to be honest, Enterprise support sometimes isn't all that great. See the tales of woe in this thread about botched firmware and the like taking down storage systems. When people are breathing down your neck you get to say: "We pay them this ridiculous amount of money, they are working on it, I did everything I could."

Even with enterprise gear you buy everything twice, so the extra hardware is true for both scenarios.

As for the replacement parts: The idea is that you don't need the same day replacement parts because there is no SPOF in the system that warrants it. The cheaper parts also make it possible to have a couple of systems on the shelf should you need them.

There is something to be said for both solutions.

Enterprise support is sometimes not that great *considering the amount of money paid for it*. It is always worlds better than no support, particularly as enough complaining about significant problems will often result in free hardware magically appearing at your site. Relying on your employees to support everything means that you have nowhere to escalate to and no hope of recompense if the product does not perform as you had hoped. You can build targets into a purchase contract and sue the vendor if they fail to meet them. You can't do that when your vendor is Fry's.

I'm also not sure why you think you'd buy everything twice with Enterprise gear? Unless you're running a fully redundant data center model you're not going to be buying a second SAN to just sit there in case your primary SAN fails. You're paying for built in redundancy so you don't have to try and layer it over the top.

It's incredibly hard to build a system out of consumer parts that truly has no SPOF without a significant investment in money and resources. Something like GPFS will do it, but that's not really suitable for general-purpose storage use or even cheap-and-deep backup storage. If your boss came to you and said "please develop a storage system that can provide X number of these types of IOPS, and which has an uptime of Y nines, and costs significantly less than the enterprise vendors," could you do it? Could you actually prove that it could meet those requirements? Would you stake your job on it?


KennyG posted:

As someone who has scratched more than a million dollars in 6 months to EMC, I can tell you it is simultaneously the most reassuring and unsettling thing.

We do have a SAN capable of 500k IOPS at less than 2ms latency. However, I can't help but wonder how much more robust a system we could have if the 400GB SSDs were $300 instead of $8,000. Who cares if they last 1/10th as long? I'm still 2.5x ahead. That being said, I have already been able to bail out of a sticky situation that would have cost me my job had I been on the DIY route.

You wouldn't actually be 2.5x ahead, though. EMC isn't going to sell you those 400GB SSDs for $300. They have to qualify those disks to ensure that they work as advertised, which is expensive. And they have to do that for a variety of vendors because they can't get stuck with only a single supplier in case there is a supply chain issue. And they have to write custom firmware to manage those different devices from different vendors that might otherwise behave differently enough to cause problems for the storage controllers. And at the rate SSDs are being developed, a drive is going to be EOA in favor of a new model not too long after they've finished doing that, so they have to start all over again on the next one. That all costs a lot of money, and you pay for that along with the actual drive.

Consumer SSDs (i.e. the $300 model) aren't the same as enterprise SSDs anyway. They have controllers that are tuned for things like bursty IO, and faster out of box performance, and quick boot ups. They have performance that tends to degrade substantially over time, they suffer a larger number of bit errors, and they lack the durability to run 24/7/365 for any substantial period of time without failure, and they suffer during multi-stream access. You pay more for enterprise SSDs not just because they have better endurance (which is a big deal, a higher failure rate doesn't just mean more replacements, it means a higher likelihood of multiple simultaneous failures, and data loss), but because they have reliable and predictable performance over time, because they offer better protection against bit level error events, and because they are tuned for random IO at consistently low latencies, which is what a large number of concurrent IO streams ends up looking like to a disk.

Pure Storage leverages consumer SSDs for the *lower* cost, but they have to basically rewrite the controller firmware to make them behave like enterprise SSDs, so the end result is that the arrays are still as expensive as or more expensive than competitors'.

Mr Shiny Pants
Nov 12, 2012

NippleFloss posted:

Enterprise support is sometimes not that great *considering the amount of money paid for it*. It is always worlds better than no support, particularly as enough complaining about significant problems will often result in free hardware magically appearing at your site. Relying on your employees to support everything means that you have nowhere to escalate to and no hope of recompense if the product does not perform as you had hoped. You can build targets into a purchase contract and sue the vendor if they fail to meet them. You can't do that when your vendor is Fry's.

I'm also not sure why you think you'd buy everything twice with Enterprise gear? Unless you're running a fully redundant data center model you're not going to be buying a second SAN to just sit there in case your primary SAN fails. You're paying for built in redundancy so you don't have to try and layer it over the top.

It's incredibly hard to build a system out of consumer parts that truly has no SPOF without a significant investment in money and resources. Something like GPFS will do it, but that's not really suitable for general-purpose storage use or even cheap-and-deep backup storage. If your boss came to you and said "please develop a storage system that can provide X number of these types of IOPS, and which has an uptime of Y nines, and costs significantly less than the enterprise vendors," could you do it? Could you actually prove that it could meet those requirements? Would you stake your job on it?


Well, that's the crux of it, isn't it? Would you stake your job on it?

Everywhere I've looked, one storage array equals no storage array. There is usually a second one with async or synchronous replication. Same with switches: redundant switches, paths, etc. We've had storage arrays go down during rebuilds, controller failures, and so on.

If I could get backing from management after explaining to them the scenarios of building it ourselves, and I were comfortable with the tech involved, I would certainly entertain the idea.

Would I want the extra responsibility? I don't know; it's nice to just close the door behind you and not have to care about storage you've built yourself. The tech is there, though.

Mr Shiny Pants fucked around with this message at 19:02 on May 20, 2014

Jadus
Sep 11, 2003

My EqualLogic PS6500ES arrives tomorrow; I'm so excited!

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Mr Shiny Pants posted:

Everywhere I've looked, one storage array equals no storage array. There is usually a second one with async or synchronous replication. Same with switches: redundant switches, paths, etc. We've had storage arrays go down during rebuilds, controller failures, and so on.

Redundant arrays are for DR or BCP, not for small-scale failures, and those arrays are housed in a separate location. You aren't buying extra hardware because the hardware is unreliable, you're buying it because no matter how reliable it is it can still catch on fire or get swept away in a flood. Nobody is running two VMAXes side by side in the same room in case one fails. You buy a VMAX BECAUSE it doesn't fail, and you buy a second one and put it somewhere else in case an earthquake swallows the first one.

This is distinct from "this commodity hardware is cheap and unreliable, so we need to buy extra hardware to provide the required uptime for day-to-day operations."

Yes, you buy redundant Ethernet switches, because redundancy is provided through things like vPCs, which require two switches. If you purchase director-class gear with multiple SPs and virtual segmentation you can certainly get by with one, much the same as you don't need TWO blade centers to provide adequate redundancy for your VMware environment, because the redundancy is built into the platform. Very, very risk-averse engineers and organizations may quibble with this, but if they are that risk averse they probably aren't in the market for really super cheap roll-your-own storage.

Mr Shiny Pants posted:

If I could get backing from management after explaining to them the scenarios of building it ourselves, and I were comfortable with the tech involved, I would certainly entertain the idea.

Why would your management want to back you when you don't have any obligation to continue to provide support and their only recourse if you don't live up to your end of the bargain is to fire you? If you quit and go elsewhere they have to hope that you've documented what you've done well enough that whoever they hire can come in and continue to support it. If you're Google or Microsoft or Amazon that isn't a problem, because they've got no issues hiring smart people who can figure it out, and their entire business model is built around doing everything in house, so they've got significant resources devoted to QA and documentation. But most internal IT departments aren't going to have that luxury and they're better off outsourcing that expertise.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

NippleFloss posted:

You buy a VMAX BECAUSE it doesn't fail, and you buy a second one and put it somewhere else in case an earthquake swallows the first one.

I had a 3240 controller fail, and then the second one in the HA pair fail a few days later, before the maintenance window to replace the failed part in the first one. It was a bad week.

Mr Shiny Pants
Nov 12, 2012

NippleFloss posted:

Redundant arrays are for DR or BCP, not for small-scale failures, and those arrays are housed in a separate location. You aren't buying extra hardware because the hardware is unreliable, you're buying it because no matter how reliable it is it can still catch on fire or get swept away in a flood. Nobody is running two VMAXes side by side in the same room in case one fails. You buy a VMAX BECAUSE it doesn't fail, and you buy a second one and put it somewhere else in case an earthquake swallows the first one.

This is distinct from "this commodity hardware is cheap and unreliable, so we need to buy extra hardware to provide the required uptime for day-to-day operations."

Yes, you buy redundant Ethernet switches, because redundancy is provided through things like vPCs, which require two switches. If you purchase director-class gear with multiple SPs and virtual segmentation you can certainly get by with one, much the same as you don't need TWO blade centers to provide adequate redundancy for your VMware environment, because the redundancy is built into the platform. Very, very risk-averse engineers and organizations may quibble with this, but if they are that risk averse they probably aren't in the market for really super cheap roll-your-own storage.


Why would your management want to back you when you don't have any obligation to continue to provide support and their only recourse if you don't live up to your end of the bargain is to fire you? If you quit and go elsewhere they have to hope that you've documented what you've done well enough that whoever they hire can come in and continue to support it. If you're Google or Microsoft or Amazon that isn't a problem, because they've got no issues hiring smart people who can figure it out, and their entire business model is built around doing everything in house, so they've got significant resources devoted to QA and documentation. But most internal IT departments aren't going to have that luxury and they're better off outsourcing that expertise.

Sure it fails, everything fails. And I don't know about you, but if the SAN goes down we are looking at a day or two of downtime. So no, we have two storage arrays in an active-active configuration in different datacentres. This is for DR, but also for if the first one fails. Getting a tech onsite to fix our array takes a couple of hours; checking that everything works and booting the whole infrastructure also takes a couple of hours. And that's if they can find the issue right away. Our IBM SAN went down because a second disk decided not to fill in as a spare even though the array said it was a good drive. That was a fun night. It took us two days to get it running again, even with IBM support.

Now you can say the support was worth it, and it was, but let's not pretend storage arrays don't go down. They do, and usually spectacularly.

As for the management backing: if you buy a storage array, the cost usually requires management backing anyway, otherwise you don't get the funding. So during those talks, building it yourself can be discussed (depends on company culture, for sure), along with the risks involved. If both parties feel it's worth it due to cost, flexibility or whatever, I don't see a reason why you wouldn't at least look at some solutions. IMHO.

It's also about fit. I wouldn't roll my own to host my VMware cluster on, but for something like archival storage I would certainly look at Ceph or ZFS.

I mean, VMware VSAN is basically rolling your own, and they are pushing it very hard.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

adorai posted:

I had a 3240 controller fail, and then the second one in the HA pair fail a few days later, before the maintenance window to replace the failed part in the first one. It was a bad week.

Why is your window for replacing a failed controller days? A failed controller is what can be described as a major incident and should be rectified ASAP, within hours, unless it's some kind of software bug?
