KennyG
Oct 22, 2002
Here to blow my own horn.

madsushi posted:

So you might be able to wiggle a few more percentage points out of them, but it's very unlikely you take them to $450k without some serious leverage.

Very helpful.

I guess I'll throw the ol' "I like you and your product, but your competitor (truthfully) is offering me similar at 80%. What can you do to help me keep Finance from railroading me into taking that solution?" and see what happens.

$1.3M for this would be highway robbery. Why do they even do that?! Nobody is going to pay that; at that point why not just say it costs $1B so you could say hey, 99% off!

Thanks Ants
May 21, 2004

#essereFerrari


Because somewhere there's a person who genuinely believes that they are getting huge discounts.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

In my experience the 80% margin number is usually on the software side of things. Hardware is probably around 60% depending. I personally shoot for at least 50% off MSRP, but it doesn't always happen. Commodity servers I can't seem to get below around 40, maybe 45% off MSRP if I spend 6 figures.

A note about services, I've seen a huge push the last couple of years to increase 'services' revenue even if it means losing revenue on the hardware side of things. I purchased a quarter million worth of desktops from Dell a couple years ago. They took 20K off the price of the hardware if we also purchased a 10,000 dollar Dell Kace box with 100 licenses. They were hoping I would like it and they would make it up by me buying another 350 licenses for the Kace box. Didn't work though, it's still sealed in the box, I sometimes use it to hold the server room door open.

53% off list is pretty fair for not knowing exactly what you're buying. You can probably squeeze some more out of them if you offer fast payment, or wait until the quarter is about to end. If they're short on revenue targets they'll give poo poo away at the end of the quarter to make their bonus.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

skipdogg posted:

In my experience the 80% margin number is usually on the software side of things. Hardware is probably around 60% depending. I personally shoot for at least 50% off MSRP, but it doesn't always happen. Commodity servers I can't seem to get below around 40, maybe 45% off MSRP if I spend 6 figures.

A note about services, I've seen a huge push the last couple of years to increase 'services' revenue even if it means losing revenue on the hardware side of things. I purchased a quarter million worth of desktops from Dell a couple years ago. They took 20K off the price of the hardware if we also purchased a 10,000 dollar Dell Kace box with 100 licenses. They were hoping I would like it and they would make it up by me buying another 350 licenses for the Kace box. Didn't work though, it's still sealed in the box, I sometimes use it to hold the server room door open.

53% off list is pretty fair for not knowing exactly what you're buying. You can probably squeeze some more out of them if you offer fast payment, or wait until the quarter is about to end. If they're short on revenue targets they'll give poo poo away at the end of the quarter to make their bonus.

I will agree that 60% is closer for hardware, but not for enterprise storage. NetApp makes almost 60% margin on average on the SALE price, let alone MSRP.

They're giving him 60% off MSRP without too much negotiation, so clearly there's got to be some margin left, or else they wouldn't be making the sale at all.
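
To put rough numbers on the margin-vs-discount point (just a sketch: the 60% margin-on-sale and 60%-off-MSRP figures come from the posts above, the $300k list price is a made-up round number):

code:
# Rough margin vs. discount arithmetic (illustrative numbers only).
list_price = 300_000          # hypothetical MSRP, not a real quote
discount_off_list = 0.60      # ~60% off MSRP, as mentioned above
margin_on_sale = 0.60         # ~60% margin on the *sale* price, as mentioned above

sale_price = list_price * (1 - discount_off_list)   # $120,000
vendor_cost = sale_price * (1 - margin_on_sale)     # $48,000

print(f"sale price:        ${sale_price:,.0f}")
print(f"vendor cost:       ${vendor_cost:,.0f}")
print(f"cost as % of list: {vendor_cost / list_price:.0%}")
# Even at 60% off list the vendor's cost is only ~16% of MSRP,
# which is why there's usually still room to negotiate.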

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

That's good info to have. I'm not familiar with the big boy toys, to be honest. I know I got a decent deal on my VNXe and we've beat HP up really good on commodity servers. I wasn't involved with the purchase of our VNX 5500s, but I heard it was 50%+ off list.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

skipdogg posted:

That's good info to have. I'm not familiar with the big boy toys, to be honest. I know I got a decent deal on my VNXe and we've beat HP up really good on commodity servers. I wasn't involved with the purchase of our VNX 5500s, but I heard it was 50%+ off list.

This is one data point I was looking at (corroborated by other sites too):

KennyG
Oct 22, 2002
Here to blow my own horn.
I haven't signed anything but I can tell you the company I am talking to is on the left side of that graph and doesn't give away beer mugs as a promotion for their Data ONTAP OE. :effort:

The two other arrays are designed to scale to 20PB in a single namespace. If it is true that the pricing is rock bottom I appreciate that, but I definitely get the feeling that I can put the screws to them for a bit and save $50-100k

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

KennyG posted:

I haven't signed anything but I can tell you the company I am talking to is on the left side of that graph and doesn't give away beer mugs as a promotion for their Data ONTAP OE. :effort:

The two other arrays are designed to scale to 20PB in a single namespace. If it is true that the pricing is rock bottom I appreciate that, but I definitely get the feeling that I can put the screws to them for a bit and save $50-100k
If you are talking to NetApp make sure you get competitive quotes from both Nimble and Oracle. NetApp will cave on their pricing.

KennyG
Oct 22, 2002
Here to blow my own horn.
I said they DON'T give away beer mugs, so that leaves....

I did speak with NetApp but the Unified solution wasn't as good for our needs.

evil_bunnY
Apr 2, 2003

KennyG posted:

Everything is direct attach at this moment.
:catstare:

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

KennyG posted:

I said they DON'T give away beer mugs, so that leaves....

I did speak with NetApp but the Unified solution wasn't as good for our needs.

Yea, even without that caveat EMC is the only vendor that makes sense. Sounds like they're quoting you a VNX and then a couple of Isilon clusters. What is the use case for the Isilon?

KennyG
Oct 22, 2002
Here to blow my own horn.
Evil_bunny

I know! Only been here 4 months. Been trying to change it for 3.9.

Document management. Without being too specific, we are a legal services company that does e-discovery and doc review. This means companies ship us documents by the TB for legal issues and we "handle" them. Due to the nature of our business and its architecture, cluster one is going to be largely CIFS and a few other ancillary stores. The larger cluster is for archiving the first two (yes, it's VNX) and as a Hadoop archive target.

Isilon is desirable for us due to its scale-out nature, as we currently have an IT staff of 3 and are growing at a rate of 1TB+/week.

tehfeer
Jan 15, 2004
Do they speak english in WHAT?

KennyG posted:

Can we get dirty and talk about pricing?

I am trying to figure out what's doable and how much more I can gain from the deal.

I can provide specifics if important, but let's use fake numbers from fake companies for now.


ProviderSolution:
30TB Usable w/ local flash cards MSRP $300k, $140k initial quote
96TB Unified Cluster Storage (5 nodes, balanced config) MSRP $450k, $200k quote
200TB Unified Cluster Storage (4 nodes, Near Line config) MSRP $550k, $250k quote
~53% discount applied across the board

This comes to ~$650k after services and other stuff are included. They then added a 13% Q4 discount, giving me a number around $550k.

Now when they say you can get roughly 30% from negotiating, is that off the $1.3M or the $600k? If I get them down 30% from $550k that's $385k! A number that I think is more than fair. Even if it was 30% from $650k, that's $450k give or take, and doable. How do you negotiate on this crap? This is the part of the job I hate the most.



Kenny,

I went around to all the vendors in our price range and got competitive quotes. I reminded all of them that we were shopping around and looking at all their competitors. After I had "final" quotes from everyone, I decided on the vendor I wanted to go with, then used the other vendors' prices to get them to drop another 15% on the hardware and throw in free installation and training. I was also able to get Nimble to drop the price further by telling them I hated having to replace my FC switch, so they cut another $5k to cover the storage switches.

If you're looking at Nimble or Pure, they will both let you return your units within 30 days if you're not happy with them.
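
For what it's worth, a quick sketch of the discount math in KennyG's quoted numbers above (figures are the rounded ones from the quote; which base the extra ~30% applies to is exactly the open question):

code:
# Discount scenarios using the rounded figures from the quote above.
quoted_total = 650_000                     # ~$650k after services etc.
q4_discount = 0.13                         # 13% end-of-quarter discount

after_q4 = quoted_total * (1 - q4_discount)
print(f"after Q4 discount: ${after_q4:,.0f}  (quoted as 'around $550k')")

for label, base in [("30% off ~$550k", 550_000),
                    ("30% off ~$650k", 650_000)]:
    print(f"{label}: ${base * 0.70:,.0f}")
# ~$385k vs. ~$455k -- roughly a $70k swing depending on which
# number the extra negotiation is applied to.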

Crackbone
May 23, 2003

Vlaada is my co-pilot.

Edit: Wrong thread.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
Quick question: for real-time file server share change auditing, which *reasonably* priced 3rd party tool (Windows Server 2012 is still utter junk when it comes to reading logs) should I be looking at...? Just two servers, half a dozen shares on each...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

skipdogg posted:

In my experience the 80% margin number is usually on the software side of things. Hardware is probably around 60% depending. I personally shoot for at least 50% off MSRP, but it doesn't always happen. Commodity servers I can't seem to get below around 40, maybe 45% off MSRP if I spend 6 figures.

A note about services, I've seen a huge push the last couple of years to increase 'services' revenue even if it means losing revenue on the hardware side of things. I purchased a quarter million worth of desktops from Dell a couple years ago. They took 20K off the price of the hardware if we also purchased a 10,000 dollar Dell Kace box with 100 licenses. They were hoping I would like it and they would make it up by me buying another 350 licenses for the Kace box. Didn't work though, it's still sealed in the box, I sometimes use it to hold the server room door open.

53% off list is pretty fair for not knowing exactly what you're buying. You can probably squeeze some more out of them if you offer fast payment, or wait until the quarter is about to end. If they're short on revenue targets they'll give poo poo away at the end of the quarter to make their bonus.

I have an ongoing horror story about KACE, but I promised to give them one more chance to right all the wrongs before I go nuclear online - which I will do, for sure, if only so others won't buy into their BS anymore. We will see, only a few weeks left...

szlevi fucked around with this message at 02:35 on Dec 10, 2013

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
I have one more question: is anyone using ION Data Accelerator from Fusion-IO...? It seems to me like a competitive alternative to Violin's boxes (MUCH cheaper, even if you add the price of 2-3 FIO cards per server) - are those bandwidth numbers for real..?

The last time I saw such claims - see http://www.fusionio.com/load/-media-/2griol/docsLibrary/FIO_DS_ION_DataAccelerator.pdf - was when I *almost* got a box from Nimbus Data, only to see the CEO himself (!) inject some really nasty lending terms into the final doc he sent over for my signature, in a very low-brow, disgustingly sneaky manner (and then he even had the audacity to accuse me of not having funds ready - while implicitly admitting he didn't even have test boxes available in circulation... clowns in the storage circus.)

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
What kind of workloads are you doing?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

szlevi posted:

I have one more question: is anyone using ION Data Accelerator from Fusion-IO...? It seems to me like a competitive alternative to Violin's boxes (MUCH cheaper, even if you add the price of 2-3 FIO cards per server) - are those bandwidth numbers for real..?

The last time I saw such claims - see http://www.fusionio.com/load/-media-/2griol/docsLibrary/FIO_DS_ION_DataAccelerator.pdf - was when I *almost* got a box from Nimbus Data, only to see the CEO himself (!) inject some really nasty lending terms into the final doc he sent over for my signature, in a very low-brow, disgustingly sneaky manner (and then he even had the audacity to accuse me of not having funds ready - while implicitly admitting he didn't even have test boxes available in circulation... clowns in the storage circus.)

Nimbus has like 50 employees. The CEO sometimes handles support calls as well. It's pretty close to a one-man show. From what I've heard from NetApp SEs that see Nimbus in the field, they aren't lying about the performance though, at least for sequential workloads. Not sure about the FIO product, but it's not outside the realm of possibility that it can push some pretty serious throughput.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Dilbert As gently caress posted:

What kind of workloads are you doing?

We're a medical visualization firm; the actual workflow here would be high-end compositing using uncompressed data (we have our own raw file format), sometimes 4K, possibly even higher res in the future. To put it simply, I want to reach at least 10-15 fps, and frame size can go up as high as 128MB (those workstations have dual 10Gb adapters). Currently all I'm doing is using two FIO cards as transparent cache, fronting two production volumes, but 1. it's not nearly fast enough, 2. it's limited to one card/volume, and 3. it's a clunky, manual process when you un-bind the card from one volume and bind it to another, depending on the location of the next urgent project folder...
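
As a back-of-the-envelope check on the bandwidth that workflow implies (a sketch only, using the 10-15 fps and 128MB worst-case frame size above and ignoring protocol overhead):

code:
# Rough bandwidth needed for uncompressed frame playback (worst case).
frame_mb = 128                          # worst-case frame size from the post, in MB
for fps in (10, 15):
    mb_per_s = frame_mb * fps
    gbit_per_s = mb_per_s * 8 / 1000    # rough decimal conversion
    print(f"{fps} fps -> {mb_per_s} MB/s (~{gbit_per_s:.1f} Gbit/s)")
# Worst case is roughly 1.3-1.9 GB/s per workstation, i.e. more than a
# single 10Gb link can carry, which is why the dual 10Gb NICs (and
# something faster than one FIO card per volume) matter.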

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

szlevi posted:

We're a medical visualization firm; the actual workflow here would be high-end compositing using uncompressed data (we have our own raw file format), sometimes 4K, possibly even higher res in the future. To put it simply, I want to reach at least 10-15 fps, and frame size can go up as high as 128MB (those workstations have dual 10Gb adapters). Currently all I'm doing is using two FIO cards as transparent cache, fronting two production volumes, but 1. it's not nearly fast enough, 2. it's limited to one card/volume, and 3. it's a clunky, manual process when you un-bind the card from one volume and bind it to another, depending on the location of the next urgent project folder...



The problem with doing something like FIO cards in the host is that data may or may not actually be written or modified back to your storage processors in time to provide a viable copy of the media if a host fails. Could it potentially speed things up? Sure - just wait till a host locks up, freezes, or the FIO card decides to take a dump. Doctors are going to be PISSED.


Before I go further, are you using a VDI infrastructure like EPIC to do this on?

It's been a bit since I have worked with FIO cards, so apologies if I'm incorrect about their nature.

What I would look for is some kind of bottleneck. Things to check:


You can have 10Gbps cards on all workstations, but if traffic is being routed through a 1Gb/s switch, what's the point? If 128MB/s is your peak, chances are the network isn't the issue.
What video cards do these workstations have? IGP may not cut it, but something like a 50 dollar AMD/Nvidia card in the remote workstation might. Something like an R7 240 with some ample video ram can drastically change these things.
What is the latency of the client talking to the server hosting these images? Is the latency high on the Image server to datastore?
What Datastores are you using? Flash accelerated storage works wonders for things, and flash storage is cheap.
If using VDI have you thought about GPU acceleration in your servers for VM's?

Dilbert As FUCK fucked around with this message at 21:06 on Dec 11, 2013

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Dilbert As gently caress posted:

The problem with doing something like FIO cards in the host is that data may or may not actually be written or modified back to your storage processors in time to provide a viable copy of the media if a host fails. Could it potentially speed things up? Sure - just wait till a host locks up, freezes, or the FIO card decides to take a dump. Doctors are going to be PISSED.

We have artists and developers, we have no doctors. Our clients are big pharma and media firms so no end-users here. :)

quote:

Before I go further, are you using a VDI infrastructure like EPIC to do this on?

No, it'd make no sense. We need 96-128GB RAM in these machines, they sport 6-core 3.33GHz Xeons etc (they were maxed-out Precision T7500s 3 years ago when I bought them.)

quote:

It's been a bit since I have worked with FIO cards, so apologies if I'm incorrect about their nature.

What I would look for is some kind of bottleneck. Things to check:


You can have 10Gbps cards on all workstations, but if traffic is being routed through a 1Gb/s switch, what's the point? If 128MB/s is your peak, chances are the network isn't the issue.

I thought it went without saying that I have the infrastructure in place... why would anyone install dual 10-gig NICs for gigabit switches?

We've been running on 10Gb for 3-4 years now, for a few workstations. We can pull over 500MB/s from the FIO, but that's not enough for the higher-res raw stuff.

quote:

What video cards do these workstations have? IGP may not cut it, but something like a 50 dollar AMD/Nvidia card in the remote workstation might. Something like an R7 240 with some ample video ram can drastically change these things.

We have several plugins/tools we developed in CUDA, so various NV cards: a few have high-end Quadros like the K5000, the rest are desktop GTX cards (480, 570, 770). Beyond CUDA compatibility the only thing that matters to us is the amount of memory - to work with larger datasets fast, these tools load them into the video card's memory... I just bought a few GTX 770 4GB cards dirt cheap, they are great deals.

quote:

What is the latency of the client talking to the server hosting these images? Is the latency high on the Image server to datastore?

These are simple project folders on SMB 3.0 file shares (Server 2012), and that's the issue. :)

quote:

What Datastores are you using? Flash accelerated storage works wonders for things, and flash storage is cheap.
If using VDI have you thought about GPU acceleration in your servers for VM's?

Again, VDI is out of the question - we need very high-end WS performance, so that would make no sense.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Sorry, I completely misread your "We're a medical visualization firm" as medical, e.g. healthcare; it's really been an off day for me. I apologize.

The_Groove
Mar 15, 2003

Supersonic compressible convection in the sun
That sounds fairly similar to a data analysis/visualization environment I used to manage: 12 clients with a ton of memory, 10GbE, and GPUs, so virtualization didn't make sense. We ran Linux though, and ended up running IBM's GPFS filesystem with a 2MB block size on large DDN9550/DDN9900 storage systems (about 1.5PB total) with 8 NSD (I/O) servers in front of it, serving everything out over 10G. A single client could max out its 10G card when doing sequential reads/writes, and the 9900 alone could hit 4-5 GB/sec peaks under a heavy load. Granted, GPFS is not even close to free and probably pretty expensive for a relatively "small" installation like that. It's more geared towards huge clusters and HPC, but drat did it rock for that environment.

I'm not saying a different filesystem or anything will solve your issues. I just wanted to give a description of a similar environment where disk I/O was pretty sweet.

MrMoo
Sep 14, 2000

That's what pNFS and Lustre are for, and they're used in many places for exactly this.

evol262
Nov 30, 2010
#!/usr/bin/perl
Seconding that Lustre is probably the sweet spot here. Or AFS if you hate yourself, and Gluster if you run Linux workstations (it'll take quite a few to get performance up to Lustre levels).

If you want to max out 10Gb, it's going to be an enormous SAN or a distributed filesystem, and the latter is easier/better/more scalable.

firehawk
May 23, 2005

Oookkeeee!
I know that both are relatively new products, but does anyone here have anything to say about either IBM V5000 or EMC VNX5200? We're currently looking into replacing our aging DS3300 and these two seem like pretty good candidates. The DS3k is currently providing about 8TB (15k SAS) of iSCSI storage for 12 blades running about 50 VMs which do a mishmash of webhosting (with backend databases), DNS and email.

As we'd like to be able to phase out the DS3k without any major downtime (don't we all?), the V5k would seem like a more attractive alternative as it appears to be able to do non-disruptive online mirroring from volumes on the DS3k, whereas the VNX cannot. I'm also inclined to assume that IBM-to-IBM migration might be a more trouble-free experience.

Our hardware supplier is pushing EMC on us really hard, but they probably get better margins on those than on IBM. I'm going to be sending out requests for quotes next week, so I don't yet have prices for comparable configurations. While EMC is probably the more expensive one, I've read that VNX2 is at least a bit more competitively priced than its predecessors.

luminalflux
May 27, 2005



evol262 posted:

Or AFS if you hate yourself

I used to run a couple AFS cells and wrote parts of an AFS client (arla). This is absolutely true.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evol262 posted:

If you want to max out 10Gb, it's going to be an enormous SAN

This isn't really true. There are some reasonably cheap arrays out there that can easily max out a 10G link with highly sequential workloads. An E5400 can max out dual 10G links and that's not big iron.

evol262
Nov 30, 2010
#!/usr/bin/perl

NippleFloss posted:

This isn't really true. There are some reasonably cheap arrays out there that can easily max out a 10G link with highly sequential workloads. An E5400 can max out dual 10G links and that's not big iron.

Granted in some respects - medical imaging probably means enormous files. You can find cheap arrays which'll max out 10Gb with nearline SAS, clever caching, or SSDs. Doing it for multiple workstations simultaneously and with a relatively unknown I/O pattern is much harder.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Dilbert As gently caress posted:

Sorry I completely misread your "We're a medical visualization firm" as medical e.g. healthcare; It's been really an off day for me. I apologize.

Oh, no, your questions were totally valid, I wasn't clear enough. :)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

The_Groove posted:

That sounds fairly similar to a data analysis/visualization environment I used to manage: 12 clients with a ton of memory, 10GbE, and GPUs, so virtualization didn't make sense. We ran Linux though, and ended up running IBM's GPFS filesystem with a 2MB block size on large DDN9550/DDN9900 storage systems (about 1.5PB total) with 8 NSD (I/O) servers in front of it, serving everything out over 10G. A single client could max out its 10G card when doing sequential reads/writes, and the 9900 alone could hit 4-5 GB/sec peaks under a heavy load. Granted, GPFS is not even close to free and probably pretty expensive for a relatively "small" installation like that. It's more geared towards huge clusters and HPC, but drat did it rock for that environment.

I'm not saying a different filesystem or anything will solve your issues. I just wanted to give a description of a similar environment where disk I/O was pretty sweet.

Nice, thanks for the info.

Funny you should mention DDN - I have a 9550; it used to pump out ~1.5GB/s total, around 400-500MB/s per client, but it's been out of warranty for a while now and, being a single-headed unit, I don't dare put it into production anymore. :)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
The problem with pNFS, Lustre, etc. is that 1. we are a small shop and it's hard to get proper support for something like that, and 2. we are a Windows shop and clients are poorly supported at best (or don't exist at all)...

We tested StorNext with our DDN back then and it sucked - I got better results with native NTFS, seriously (and that was crap too).

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

evol262 posted:

Granted in some respects - medical imaging probably means enormous files. You can find cheap arrays which'll max out 10Gb with nearline SAS, clever caching, or SSDs. Doing it for multiple workstations simultaneously and with a relatively unknown I/O pattern is much harder.

Yes, and actually FIO just showcased their setup this summer - they had it running in their booth at SIGGRAPH: http://www.fusionio.com/siggraph-2013-demo/

This seems to be a LOT less complex for my team than a Lustre setup (I'm handy with Linux but pretty much that's it; nobody else touches it unless necessary); essentially FIO applies some very clever caching all the way to the artist's WS, using block-level (IB) access... I probably wouldn't get those client cards, but since I already have 2 FIO Duos that could drive the ioTurbine (newer version of my DirectCache) segment, I figured I might take a look at how much those ioControl boxes (ex-NexGen) cost, say, around 40-50TB... adding another pair of FIO cards in the other file server node could possibly bump me up to ~4GB/s, perhaps even further...? Granted, it won't be as fast as the demo was and I will have to build it step-by-step, but that's life. :)

Serfer
Mar 10, 2003

The piss tape is real



MrMoo posted:

That's what pnfs and lustre are for and used in many places exactly so.

Is anyone using Ceph? It seems attractive because it combines block and object, but I'm not sure how widely used it is.

evol262
Nov 30, 2010
#!/usr/bin/perl

szlevi posted:

Yes, and actually FIO just showcased their setup this summer - they had it running in their booth at SIGGRAPH: http://www.fusionio.com/siggraph-2013-demo/

This seems to be a LOT less complex for my team than a Lustre setup (I'm handy with Linux but pretty much that's it; nobody else touches it unless necessary); essentially FIO applies some very clever caching all the way to the artist's WS, using block-level (IB) access... I probably wouldn't get those client cards, but since I already have 2 FIO Duos that could drive the ioTurbine (newer version of my DirectCache) segment, I figured I might take a look at how much those ioControl boxes (ex-NexGen) cost, say, around 40-50TB... adding another pair of FIO cards in the other file server node could possibly bump me up to ~4GB/s, perhaps even further...? Granted, it won't be as fast as the demo was and I will have to build it step-by-step, but that's life. :)

FusionIO is awesome if you just want to throw money at the problem, and Infiniband lets them RDMA straight to the PCIe bus on the next server in the chain, but I don't know that there's anything particularly clever about it. It works and it's easy, though.

szlevi posted:

The problem with pNFS, Lustre, etc. is that 1. we are a small shop and it's hard to get proper support for something like that, and 2. we are a Windows shop and clients are poorly supported at best (or don't exist at all)...

We tested StorNext with our DDN back then and it sucked - I got better results with native NTFS, seriously (and that was crap too).
Being a small shop without expertise is the best reason not to use pNFS or Lustre -- even though Lustre has a Windows client, it's going to be a PITA for you.

I guess the question is this:

FusionIO works for your needs, but it's a mess to scale out and isn't really standardized. Do the benefits outweigh the costs? If yes, buy FIO. If not, get some hard numbers on the performance you need and an approximate price point and we'll get you recommendations.

Serfer posted:

Is anyone using Ceph? It seems attractive because it combines block and object, but I'm not sure how widely used it is.
It essentially has the same advantages and disadvantages as Gluster, except that it's newer, less stable, and arguably slower. It's mainline, though, and things should rapidly equalize.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evol262 posted:

Granted in some respects - medical imaging probably means enormous files. You can find cheap arrays which'll max out 10Gb with nearline SAS, clever caching, or SSDs. Doing it for multiple workstations simultaneously and with a relatively unknown I/O pattern is much harder.

Well, sure, if you're talking about random small block IO maxing out 10g is a lot harder, but you're not going to be talking about throughput in that scenario, you're going to be talking about IOPs and latency. You don't even need clever caching or SSD to do high throughput from a pretty modest SAN. Spinning platters are pretty efficient at sequential IO already so all you need is a raid/volume management scheme that divides data between spindles pretty effectively, some good readahead algorithms to help manage multiple concurrent streams, and enough CPU and internal bandwidth between the different busses that your controller itself isn't the bottleneck. Most general purpose arrays are optimized for random IO because it's harder to get right and because most IO that affects the user experience is latency sensitive small block IO. These optimizations for random IO tend to make them less effective for sequential IO, but when you get an array built for that purpose you can do a lot with relatively little hardware.

Again, to reference the E-series, which is what I'm most familiar with: Lustre running on top of a single E5460 with 30 drives can push about 10Gb/s while handling 50 concurrent IO streams. Add more disk and we can get to 20Gb/s split across even more concurrent streams.
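
As a rough sanity check on why plain spindles can hit those numbers on sequential work (a sketch with assumed per-disk figures, not vendor specs):

code:
# Aggregate sequential throughput from spinning disks; the per-drive
# 100-150 MB/s figures are assumed typical large-sequential rates,
# not measured or vendor-published numbers.
drives = 30                              # drive count from the E5460 example above
for per_drive_mb_s in (100, 150):
    total_gb_s = drives * per_drive_mb_s / 1000
    print(f"{drives} drives @ {per_drive_mb_s} MB/s -> ~{total_gb_s:.1f} GB/s raw "
          f"(~{total_gb_s * 8:.0f} Gbit/s before RAID/controller overhead)")
# 3-4.5 GB/s of raw sequential bandwidth from just 30 spindles, so the ~10Gb/s
# figure above is well within what the disks themselves can feed; the
# controller, RAID layout, and readahead decide how much you actually see.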

the spyder
Feb 18, 2011
We max out 40Gb InfiniBand using just spindle drives (massive quantities of spindle drives, mind you). This is on Solaris 11/ZFS/dual 6-core Xeons with a minimum of 144 disks.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

the spyder posted:

We max out 40Gb InfiniBand using just spindle drives (massive quantities of spindle drives, mind you). This is on Solaris 11/ZFS/dual 6-core Xeons with a minimum of 144 disks.

How are you hanging that many spindles off a single box? Are you doing awful cheap hacky solutions or enterprise class hardware with service contracts and the whole bit?

Aquila
Jan 24, 2003

I'm looking for something for nearline local backups for our systems, mostly DB backups. I'm thinking 3-6U, one or two boxes, 20-40TB usable, bonded GbE or 10GbE connected, with NFS and/or rsync, FTP, etc. transfer. While we have a lot of in-house expertise rolling just this kind of solution ourselves, I'm hoping for something very turnkey and reliable while not being horrendously expensive; moderately expensive is potentially OK. We already have a Hitachi FC SAN for DBs and VMs, but its file options appear to be so bad we're not even considering them (and they gave us a free file module).
