paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
iSCSI or NAS? FCOE? Capacity and performance requirements? Replication?

KS
Jun 10, 2003
Outrageous Lumpwad
What I'm looking for would need to offload 20-30TB of datastores from the Compellent array for <125k, while supporting VAAI and all the other shiny VMware-related features. Definitely needs replication support, although I'll probably buy one up front. I don't care what protocol. We already support multiple. <10k IOPS total, so I don't think any of the SATA+cache arrays from the various vendors would have a problem with it. I want to throw a half dozen dev environments on this thing and not have to worry about it.
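
Rough math on why I'm not sweating the IOPS side — rule-of-thumb numbers only (~75 IOPS per 7.2k SATA spindle, and the cache hit rates are just assumptions, not anyone's vendor figures):

code:
# Back-of-envelope: spindles needed behind a 10k IOPS workload once a
# flash cache absorbs most of the reads. Rules of thumb, not vendor specs.
target_iops = 10000
iops_per_sata_spindle = 75  # ~7.2k RPM SATA rule of thumb

for cache_hit_rate in (0.0, 0.80, 0.90):
    disk_iops = target_iops * (1 - cache_hit_rate)
    spindles = disk_iops / iops_per_sata_spindle
    print(f"{cache_hit_rate:.0%} cache hits -> ~{spindles:.0f} spindles of backend disk")

# 0% cache hits  -> ~133 spindles
# 80% cache hits -> ~27 spindles
# 90% cache hits -> ~13 spindles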

I'm looking at upgrading the Compellent controllers to SC8000s to support VAAI, plus adding 25TB more disk for around 100k. IMO that's too much. Just shopping around for alternatives. I've certainly heard good things about Nimble, but I'm looking for others' experiences.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

KS posted:

What I'm looking for would need to offload 20-30TB of datastores from the Compellent array for <125k, while supporting VAAI and all the other shiny VMware-related features. Definitely needs replication support, although I'll probably buy one up front. I don't care what protocol. We already support multiple. <10k IOPS total, so I don't think any of the SATA+cache arrays from the various vendors would have a problem with it. I want to throw a half dozen dev environments on this thing and not have to worry about it.

I'm looking at upgrading the Compellent controllers to SC8000s to support VAAI, plus adding 25TB more disk for around 100k. IMO that's too much. Just shopping around for alternatives. I've certainly heard good things about Nimble, but looking for others experiences.

The biggest thing I like about Nimble is not having to worry about storage tiering. The biggest thing I hate about Nimble is that I don't get to play with storage tiering.

We are currently running two CS-240s doing cross-site replication, as well as a CS-220 at a third site. Once we start loading them up more, we have the option to add shelves for additional storage, add larger SSDs for performance, or add 10GbE controllers for bandwidth.

Maneki Neko
Oct 27, 2000

LOL, our 3200 series filers apparently have a known issue that causes them to flip on the OMG ERROR light randomly. The only solution is to reboot the head until it happens again.

THANKS NETAPP!

Mierdaan
Sep 14, 2004

Pillbug
You know you can get full VAAI support on series 40 controllers, right?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

KS posted:

What I'm looking for would need to offload 20-30TB of datastores from the Compellent array for <125k, while supporting VAAI and all the other shiny VMware-related features.
Pre or post deduplication? If you don't need dedupe, look at Oracle (I know, I keep pimping this).

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Tintri is awesome, especially if your storage is only for vSphere.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

three posted:

Tintri is awesome, especially if your storage is only for vSphere.

Our old Nimble SE recently switched to Tintri and claims it's pretty amazing.

OldPueblo
May 2, 2007

Likes to argue. Wins arguments with ignorant people. Not usually against educated people, just ignorant posters. Bing it.

Maneki Neko posted:

LOL, our 3200 series filers apparently have a known issue that causes them to flip on the OMG ERROR light randomly. The only solution is to reboot the head until it happens again.

THANKS NETAPP!


Or upgrade ONTAP. :v: (the bug is fixed in a later release)

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I ran across the ONTAP 8 blink_on bug today; it was pretty annoying until I found out what the deal was.

KS
Jun 10, 2003
Outrageous Lumpwad
Thanks, I'll definitely talk to Tintri as well.

Mierdaan posted:

You know you can get full VAAI support on series 40 controllers, right?

We're on 30s. We got them right before they EOLed. Whoops. Turns out our VAR was garbage, and switching isn't easy. That's part of my motivation.

adorai posted:

Pre or post deduplication? If you don't need dedupe, look at Oracle (I know, I keep pimping this).

~25 TB pre-dedupe, no compression. I imagine it'd dedupe pretty well as there are 100+ OS instances in there.

Mierdaan
Sep 14, 2004

Pillbug

KS posted:

We're on 30s. We got them right before they EOLed. Whoops. Turns out our VAR was garbage, and switching isn't easy. That's part of my motivation.

Wow, total garbage. We bought CML like 2 years ago and they were already hinting at the successor to the series 40. Have you priced out moving the 30s to replication targets where you don't care about VAAI, and buying the 8000s new?

Edit: not having been through it yet, what is making an upgrade from 30s to 8000s difficult?

Mierdaan fucked around with this message at 03:38 on Jul 2, 2013

KS
Jun 10, 2003
Outrageous Lumpwad

Mierdaan posted:

Wow, total garbage. We bought CML like 2 years ago and they were already hinting at the successor to the series 40. Have you priced out moving the 30s to replication targets where you don't care about VAAI, and buying the 8000s new?

Edit: not having been through it yet, what is making an upgrade from 30s to 8000s difficult?

We do replication between two arrays, both with series 30s. There's some reluctance about just upgrading the prod side, because testing firmware updates on DR first is a useful exercise.

Price for four new controllers is $48k. I suspect we're being gouged because we're asking to be released from our VAR (Cambridge Computer, stay the gently caress away) to go with another, and Dell is requiring us to do this one deal with them first since it originated with them, back when we first talked about upgrading to SC40s a year and a half ago.

IT purchasing is bullshit all the way down, but talking about real prices paid on forums like this one takes a lot of their power away. I'm looking forward to the negotiation now that we have several realistic alternatives.

Edit: I don't think the upgrade is that hard. It's a two-step firmware upgrade process and they budget a bunch of hours for it, but no downtime. Just a mandatory professional install at $3450 per array.


KS fucked around with this message at 03:56 on Jul 2, 2013

Mierdaan
Sep 14, 2004

Pillbug
Yeah, this is where you escalate with your regional Dell storage guy, get whatever VAR you want, and negotiate new-customer pricing. They're loving/have hosed you pretty hard; it's not going to be hard to convince them you're about to walk down the street.

Maneki Neko
Oct 27, 2000

OldPueblo posted:

Or upgrade ONTAP. :v: (bug fixed in later OS)

Well, sounds like our support engineer who has been handling our case might need a punch in the dick, then. The way he was talking, we should be digging up the body of Robert Stack, because this was some Unsolved Mysteries-level poo poo.

I'll look more at that in the morning.

parid
Mar 18, 2004

Maneki Neko posted:

Well sounds like our support engineer who has been handling our case might need a punch in the dick then, the way he was talking we should be digging up the body of Robert Stack because this was some unsolved mysteries level poo poo.

I'll look more at that in the morning.

Make sure you at least get to a proper level 2 engineer before you give up on the bug. Most of the time your first "escalation" is to a level 1 specialist. If it's got a real PR number and is acknowledged as a real bug, it will eventually get a fix, and they should be able to tell you when.

I have had some craaaazy hard issues with NetApp and they have been able to solve all of them with enough time and pushing. If your current tech thinks this is beyond them, they are supposed to escalate. Sometimes they need to be reminded of that, as it's not good for their "stats".

parid fucked around with this message at 05:52 on Jul 2, 2013

OldPueblo
May 2, 2007

Likes to argue. Wins arguments with ignorant people. Not usually against educated people, just ignorant posters. Bing it.

Maneki Neko posted:

Well sounds like our support engineer who has been handling our case might need a punch in the dick then, the way he was talking we should be digging up the body of Robert Stack because this was some unsolved mysteries level poo poo.

I'll look more at that in the morning.

Might not be this, but I think this is the one I remember:

http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=472202

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Maneki Neko posted:

Well sounds like our support engineer who has been handling our case might need a punch in the dick then, the way he was talking we should be digging up the body of Robert Stack because this was some unsolved mysteries level poo poo.

I'll look more at that in the morning.

Based on your description you are hitting BURT 508436. That bug is not fixed (and may never be, because it's actually the expected and agreed-upon behavior), but starting in 8.1.3 and 8.2 there is a command that lets you toggle the FRU LEDs without having to do the "halt -s" procedure.

There is an issue with a constantly flashing LED on the 3200 systems, filed under BURT 472202, and that one is fixed in newer versions of ONTAP. But it sounds like you've got a constantly lit LED, so you'd be looking at 508436.

You may also be able to clear the LED from the SP in diag mode. If your support engineer hasn't had you try that yet, you might suggest it. Sometimes it works, and sometimes it errors out, depending on how the LED got set.

ragzilla
Sep 9, 2005
don't ask me, i only work here


KS posted:

Price for four new controllers is $48k. I suspect we're being gouged because we're asking to be released from our VAR (Cambridge Computer, stay the gently caress away) to go with another, and Dell is requiring us to do this one deal with them first since it originated with them, back when we first talked about upgrading to SC40s a year and a half ago.

Four series 8000 controllers for $48k is a decent price, assuming they're quoting them with 64GB of memory.

Maneki Neko
Oct 27, 2000

NippleFloss posted:

Based on your description you are hitting BURT 508436. That bug is not fixed (and may never be, because it's actually the expected and agreed upon behavior), but starting in 8.1.3 and 8.2 there is a command that lets you toggle the fru LEDs without having to do the "halt -s" procedure.

There is an issue with a constant flashing LED on the 3200 systems, filed under BURT 472202, and that one is fixed in newer versions of ONTAP. But it sounds like you've got a constantly lit LED so you'd be looking at 508436.

You may also be able to clear the LED from the SP in diag mode. If your support engineer hasn't had you try that yet you might suggest it. Sometimes it works, and sometimes it errors out, depending on how the LED got set.

Yeah, 508436 is our huckleberry. Do you know what the command is to clear the FRU lights? We're still on 8.1.2P1, but that might be a good motivator to upgrade.

Internet Explorer
Jun 1, 2005





Just as I thought I got away from the EMC VNX, a new client has an EMC VNX we are inheriting. Ugh. Hope they have fixed a few bugs in the past year or so.

Amandyke
Nov 27, 2004

A wha?

Internet Explorer posted:

Just as I thought I got away from the EMC VNX, a new client has an EMC VNX we are inheriting. Ugh. Hope they have fixed a few bugs in the past year or so.

They've fixed lots of bugs and moved to a new major revision.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online
Also tends to hose itself when moving to the new major version.

Amandyke
Nov 27, 2004

A wha?

Goon Matchmaker posted:

Also tends to hose itself when moving to the new major version.

Just make sure you write down your DNS and NTP settings before you go hog wild and there shouldn't be any issues.

Herv
Mar 24, 2005

Soiled Meat
Hi Storage Folks.

I am looking for a modest yet redundant SAN for hosting up to 10 customers (small footprint per customer) using VMware ESX.

I have two options on the table but was hoping to get another opinion from the experts here.

Requirements:

A measly 4TB of storage to start off with (each customer would use maybe 200GB)
Fibre Channel (iSCSI can be an option)
Fully Redundant (two of everything except primary backplane/chassis, even if there's a manual failover)

Option 1 is:
HP MSA 2040 SAN DC SFF STORAGE
HP 8/20q Fibre Channel Switch
And assorted cables, HBA's, SFP's etc.

Cost = 20k and not redundant (talking with rep today on total cost for redundancy)

Option 2 is:
DSS SAN software (2)
HP DL360 G7 (2)
Cisco MDS FC switch (2)
500GB SSDs (8)

Cost = 15k and there's two of everything, manual failover for SAN or FC switch failure.

----

Since the environment will be hosted remotely, I really want to use an all-in-one vendor / black-box approach and avoid the kludge of using so many different hardware products (although DSS has worked well in other projects). That said, using older hardware with full redundancy has been easier to manage than well-supported single points of failure.

The HP MSA approach has served me well on past projects as well, but to get them as redundant as possible sure does cost a lot more than DSS.

Could someone point me in the direction of a redundant 4TB setup using FC or iSCSI for small-footprint VM hosting? I would put the budget at around 20-25k.

Thanks in advance, and I apologize if the answer is somewhere in this thread; it's a monster, though.

Docjowles
Apr 9, 2009

Herv posted:

Hi Storage Folks.

I am looking for a modest yet redundant SAN for hosting up to 10 customers (small footprint per customer) using VMWare ESX.

Could someone point me in the direction of getting a redundant 4TB using FC or iSCSI for small footprint VM hosting? I would put the budget at around 20-25k.

Thanks in advance and I apologize if the answer is somewhere in this thread, its a monster though.

EqualLogic is another player you should probably look at, as long as iSCSI is acceptable. Or hell, even Dell's MD3200i line, since your requirements are so minimal. Both of those can be specced with dual-controller, redundant-everything setups and won't break the bank.

Docjowles fucked around with this message at 17:27 on Jul 15, 2013

Herv
Mar 24, 2005

Soiled Meat

Docjowles posted:

Equallogic is another player you should probably look at as long as iSCSI is acceptable. Or hell even Dell's MD3200i line since your requirements are so minimal. Both of those can be specced with dual controller redundant-everything setups and won't break the bank.

Thanks Doc, will toss this at the reseller.


And thank you Internet Explorer! (Never thought I would say that).
\/ \/ \/

Herv fucked around with this message at 20:20 on Jul 15, 2013

Internet Explorer
Jun 1, 2005





Herv posted:

Hi Storage Folks.

I am looking for a modest yet redundant SAN for hosting up to 10 customers (small footprint per customer) using VMWare ESX.

I have two options on the table but was hoping to get another opinion from the experts here.

Requirements:

A measly 4TB storage to start off with. (Each Customer would use maybe 200GB each)
Fibre Channel (iSCSI can be an option)
Fully Redundant (two of everything except primary backplane/chassis, even if there's a manual failover)

Option 1 is:
HP MSA 2040 SAN DC SFF STORAGE
HP 8/20q Fibre Channel Switch
And assorted cables, HBA's, SFP's etc.

Cost = 20k and not redundant (talking with rep today on total cost for redundancy)

Option 2 is:
DSS San Software (2)
2x HP DL 360 G7 (2)
Cisco MDS FC Switch (2)
500GB SSD's (8)

Cost = 15k and there's two of everything, manual failover for SAN or FC switch failure.

----

Since the environment will be hosted remotely, I really want to use an all in one vendor / black box approach, and avoid the kluge of using so many different hardware products (although DSS has worked well in other projects). Using older hardware, with full redundancy has been easier to manage than well supported single points of failure.

The HP MSA approach has served me well on past projects as well, but to get them as redundant as possible sure does cost a lot more than DSS.

Could someone point me in the direction of getting a redundant 4TB using FC or iSCSI for small footprint VM hosting? I would put the budget at around 20-25k.

Thanks in advance and I apologize if the answer is somewhere in this thread, its a monster though.

Take a look at the EqualLogic small or branch office series. It should be the PS4000.

CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!
:yaycloud::smithcloud:
I'm trying to work out a SAN storage agreement with another organization, and I'm not quite sure what the typical usable amount of a SAN is.

The SAN is a Nimble 260, so there's 36 TB raw, which they show as 25-50 TB usable; the >25 TB part is based on compression, so I'll call it 25 TB usable storage. What I don't know is how much typically gets used up by snapshots and whatever else is needed.

What's a reasonable amount of that 25 TB that can be used by VMs?

madsushi
Apr 19, 2009

Baller.
#essereFerrari

keygen and kel posted:

I'm trying to work out a SAN storage agreement with another organization, and i'm not quite sure what the typical usable amount of a SAN is.

The SAN is a Nimble 260 so there's 36 TB raw, which they show as 25-50 TB usable the > 25 TB part is based on compression so I'll say 25 TB usable storage, what I don't know is how much typically gets used up by snapshots and whatever else is needed.

What's a reasonable amount of that 25 TB can be used by VM's?

You're going to get something like 25 TB usable (spares, formatting, etc.). Now, your usage for snapshots is going to depend on a LOT of things, like your daily data delta, snapshot frequency, snapshot retention, etc. A good estimate would be about 25-40% snapshot usage. I have seen snapshot usage as low as 10% (low-change, low-retention) and as high as 60% (high-change, high-retention).

As a guess, somewhere between 15 and 18 TB will be usable for VMs, with the rest going to snapshots.
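
To put rough numbers on that — a pure back-of-envelope sketch using the ~25 TB usable and the 25-40% snapshot range above (estimates, not Nimble-specific figures):

code:
# How much of ~25 TB usable is left for VMs after reserving 25-40%
# of the pool for snapshots (the percentages are estimates).
usable_tb = 25.0

for snap_fraction in (0.25, 0.40):
    vm_tb = usable_tb * (1 - snap_fraction)
    print(f"{snap_fraction:.0%} snapshot reserve -> ~{vm_tb:.1f} TB for VMs")

# 25% -> ~18.8 TB for VMs
# 40% -> ~15.0 TB for VMs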

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Herv posted:

Hi Storage Folks.

I am looking for a modest yet redundant SAN for hosting up to 10 customers (small footprint per customer) using VMWare ESX.

I have two options on the table but was hoping to get another opinion from the experts here.

Requirements:

A measly 4TB storage to start off with. (Each Customer would use maybe 200GB each)
Fibre Channel (iSCSI can be an option)
Fully Redundant (two of everything except primary backplane/chassis, even if there's a manual failover)

Option 1 is:
HP MSA 2040 SAN DC SFF STORAGE
HP 8/20q Fibre Channel Switch
And assorted cables, HBA's, SFP's etc.

Cost = 20k and not redundant (talking with rep today on total cost for redundancy)

Option 2 is:
DSS San Software (2)
2x HP DL 360 G7 (2)
Cisco MDS FC Switch (2)
500GB SSD's (8)

Cost = 15k and there's two of everything, manual failover for SAN or FC switch failure.

----

Since the environment will be hosted remotely, I really want to use an all in one vendor / black box approach, and avoid the kluge of using so many different hardware products (although DSS has worked well in other projects). Using older hardware, with full redundancy has been easier to manage than well supported single points of failure.

The HP MSA approach has served me well on past projects as well, but to get them as redundant as possible sure does cost a lot more than DSS.

Could someone point me in the direction of getting a redundant 4TB using FC or iSCSI for small footprint VM hosting? I would put the budget at around 20-25k.

Thanks in advance and I apologize if the answer is somewhere in this thread, its a monster though.

Nimble is all but giving storage away right now, so I'd call them. Nutanix sells integrated storage/ESX nodes that provide distributed RAID and easy scale-out. You don't even have a separate storage environment to manage then, just an integrated virtual stack.

You also haven't mentioned what you want to get out of this: array-level snapshots, replication, deduplication, cloning, fast VM restore, integrated backup...

There's a lot more info required to make anything but a very general recommendation.

Herv
Mar 24, 2005

Soiled Meat

NippleFloss posted:

Nimble is all bit giving storage away right now, so I'd call them. Nutanix sells integrated storage/esx nodes that provide distributed raid and easy scale out. You don't even have a separate storage environment to manage then, just an integrated virtual stack.

You also haven't mentioned what you want to get out of this. Array level snapshots, replication, deduplication cloning, fast VM restore, integrated backup...

There's a lot more info requires to really make all but a very general recommendation.

I will look into this option as well (give my low reqs to a sales rep). The additional features sound great but might be more than needed for this implementation.

There are two sites with a 15-minute sync between them (application data / SQL TLog shipping) and low turn-up times for failover, so there's not a lot that has to happen with the VMs themselves once they are up and running. If something craps the bed, it's usually better to fail over, since the replica is relatively current and we can take our sweet time repairing any (super rare) VM that bombs out.

We use a GFS backup-to-NAS application, but integrated backup sounds pretty neat; I only have to keep backup data at one site, whichever is live at the time.

Again, thanks!

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We've got a faculty member looking to buy 100-200TB, which to us is "big data." They're looking at some of those ridiculous SuperMicro servers with drives on both sides of the chassis, sold by a SuperMicro reseller we've worked with in the past (i.e., they would sell us a complete warrantied system). We also have a Compellent SAN, but we don't think getting a pile of trays is going to be cost effective (though it might be, we're still getting quotes).

Are there any good, inexpensive SANs for big data? We don't need high performance or a lot of features because this will mostly be static data; we just want the system to be manageable and expandable.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

FISHMANPET posted:

We've got a faculty member looking to buy 100-200TB, which to us is "big data." They're looking at some of those ridiculous SuperMicro servers with drives on both sides of the chasis, sold by a SuperMicro reseller we've worked with in the past (aka they would sell us a complete warrantied system). We also have a Compellent SAN, but we don't think getting a pile of trays is going to be cost effective (though it might be, we're still getting quotes).

Those SuperMicro chassis are fine for that use case. What OS/platform is the reseller offering?

quote:

Are there any good inexpensive SANs for big data? We don't need high performance or a lot of features because this will mostly be static data, we just want the system to be manageable and expandable.

Not in the SuperMicro price range.

Mr Shiny Pants
Nov 12, 2012

FISHMANPET posted:

We've got a faculty member looking to buy 100-200TB, which to us is "big data." They're looking at some of those ridiculous SuperMicro servers with drives on both sides of the chasis, sold by a SuperMicro reseller we've worked with in the past (aka they would sell us a complete warrantied system). We also have a Compellent SAN, but we don't think getting a pile of trays is going to be cost effective (though it might be, we're still getting quotes).

Are there any good inexpensive SANs for big data? We don't need high performance or a lot of features because this will mostly be static data, we just want the system to be manageable and expandable.

Buy a refurbished Sun Thumper (X4500)?

Stuff it with 3 - 4TB x 48 drives?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We really want to stay away from something that requires us to maintain an OS on the hardware. And using ZFS is also a pain in the rear end for expandability.

The reseller doesn't put an OS on it; we'd run Solaris (which is fine, we're sadly a Solaris shop), but this group already has a pair of storage servers running ZFS and they're kind of a pain to manage. We'd rather just not have to worry about it.

It can actually be more expensive than the SuperMicro, too, because if we get the SuperMicro we'll probably end up charging them staff time to set it up and run it, but if it's a SAN in a box we probably won't charge the grant anything.

We'd also like a system more robust than individual servers, because this isn't the first time a faculty member has wanted a lot of cheap storage, and it's not going to be the last. Even if the department buys the controller heads and just charges faculty for trays of disks, we'd still be ahead of the curve compared to individual servers.

KS
Jun 10, 2003
Outrageous Lumpwad
So we use the SuperMicro SC847 running the Solaris-derived OmniOS for d2d backup storage. It's 96 TB raw with 32 x 3TB drives, 72 TB usable, and it cost about 16k with some fancy caching devices. You could add a Nexenta license if you didn't want to deal with the OS and be up around 36k. Expansion beyond that single box sucks for sure, at least if you're talking about a shared namespace or something. Performance, however, kicks rear end for the price. There's a recently released successor to the SC847 with updated internals.
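
If you're curious about the raw-to-usable math, here's a sketch — it assumes four 8-disk raidz2 vdevs, which is my guess at a sensible layout rather than necessarily our exact pool config:

code:
# 32 x 3TB drives as four 8-disk raidz2 vdevs (assumed layout);
# raidz2 gives up 2 disks per vdev to parity.
drive_tb = 3
raw_tb = 32 * drive_tb               # 96 TB raw
usable_tb = 4 * (8 - 2) * drive_tb   # 72 TB usable
print(raw_tb, usable_tb)             # 96 72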

A Nimble CS460 with two expansion shelves would be in the $190k range for 126 TB with 3 years of support and would let you expand quite a bit beyond that.

In my experience, Compellent disks would be a fair bit more than that unless your system is already over 96 drives and into the enterprise license. You can expect to pay ~45k per shelf of disks up to 96 drives and ~30k beyond it.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We might actually be at or near Enterprise licensing on our Compellent already; I think we've got 6 trays already and more on order. We also pay a lot less as a public university than businesses would.

If this were for internal use I might be OK with building it ourselves and being aware of all the limitations, but faculty just want a pile of space they can access and don't really want to think about any of its shortcomings (some of the problems with this group's existing storage servers). I guess we'll just hope and pray that Dell comes back with a decent quote for trays of 4TB disks.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

FISHMANPET posted:

We've got a faculty member looking to buy 100-200TB, which to us is "big data." They're looking at some of those ridiculous SuperMicro servers with drives on both sides of the chasis, sold by a SuperMicro reseller we've worked with in the past (aka they would sell us a complete warrantied system). We also have a Compellent SAN, but we don't think getting a pile of trays is going to be cost effective (though it might be, we're still getting quotes).

Are there any good inexpensive SANs for big data? We don't need high performance or a lot of features because this will mostly be static data, we just want the system to be manageable and expandable.


When you say "big data," what is the data going to be doing, exactly? Hot data? Cold data? How long does this data need to sit in one place before it is migrated? How does the data age? Does newly input data stay active for 3-6 months, then become infrequently accessed after that?

I would look into 3PAR, Nimble, and EqualLogic and see what they can do.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Dilbert As gently caress posted:

When you say "big Data" what is the data going to be doing exactly? Hot data? Cold Data? How long is this data need to be set in one place before it is migrated? How does the Data age? Does newly input Data stay active for 3-6 months, then not commonly accessed after that?

I would look into 3Par/Nimble and Equallogic and see what they can do.

I have no idea, and I'm guessing the researchers don't either. Everybody here has been trained to think of storage purely in terms of space; no other concern is given. I'm also only tangentially involved, so I don't have any power to say "nope, this is dumb."

I'm sure I'll hear about the quote that comes back from our reseller and just cry a bunch and move on with my life.
