Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

KennyG posted:

I appreciate the help, but why do people look at me like I have 6 heads when I ask for a 5-10x improvement, when I show up with a benchmark report that shows 20k IOPS on a system that has 300-500ms latencies and has only been in production 2 months? If my system were functioning in an optimal way, I wouldn't even call them.

Look, you have various requirements for your environment; don't try to cover them all with one blanket solution. Seriously, it is much easier if you break your environment down into what it needs.

You seem to have 4 environments:

End user computing (EUC) - This is your VDI/front-end platform; scale it to fit the needs of this environment. Seriously, a VNXe 3300 (4x200GB FAST Cache and 600GB 10K drives) or a NetApp FAS and a shelf will probably do you a bunch of good, honestly; if you are scaling your environment, Pure or all-SSD is a waste for EUC. Follow it up by scoping which servers need to face whom. Obviously your HA SQL cluster should not be front-ended to the users, but what part of it is? What storage fits that need? Figure this out first.

Infrastructure - This is your heavy compute; you need to make sure low latency and high compute are available here.
Based on what you said, I would scale it like this:

>(1 or 2) flash arrays for your 100K++ IOPS with 20TB
In all honesty I'd do something like 2TB (or 4TB) of FAST Cache backed with 15K disks. I'm not saying you don't need 100K IOPS, but for your budget, I dunno, make sure you need it. Does 100% of that 100K IOPS hit the full 10TB worth of storage? If not, how much needs to be 100K+ IOPS ready? Maybe get an all-flash array at X TB and Storage DRS pool it with some 15K+SSD cache arrays.

>100TB of cheap/large storage
Anything will do this; it depends on what your IO requirements are, and NetApp/EMC/etc. all do this. Depending on how hot/cold this 100TB is, I'd go with a VNX 5100, a NetApp FAS2240-4, or 3PAR v7000s with some shelves, and front-end it with some fast cache.


Backup - Go Avamar/Data Domain; I dunno how others would perform given the scope of your environment.

DR - You need to ask the stakeholders/investors in the data and services you are providing what the "worst case" is (e.g. if poo poo hits the fan, what is the worst SLA you need?); this can't be answered by anyone but your boss or stakeholders. Scaling 1:1 is great, but you need to find out what an acceptable level of performance is if a fault/site-down occurs.

quote:

I've consulted with partners running our stack at other sites with smaller file sets and similar client bases, and they use Pure Storage on the fast side and 3PAR on the slow.

Cool. Ask them "so what's your ROI?", "how much did you spend on your solution?", "what's your SLA on RPO?", or "if you could change something, what would it be?" People love to talk about themselves and what they have done; hear them out.

quote:

I need to plan for this storage purchase to handle foreseeable needs for the next 3-5 years.

Scaling out per environmental need, or setting profiles based on SLAs, will greatly reduce the burden on you. Scaling 4 profiles/environments (or, alternatively, scaling per SLA of each environment) versus 1 or 2 mega monsters is gently caress loads easier than trying to find a needle in a haystack.

quote:

I have a single schema that is 1TB, and with the load/lag it's costing us a lot of money (like the cost of this storage project in a few months). I am not trying to waste money and I'm not bitching about how much performance costs. I am just tired of convincing people (not in my company) that I need a high-performance solution. When people go to buy a suit, the salesman doesn't say "you only go to weddings and funerals, do you really need a fifth suit?" They just ask what size.


Again, scale per environmental need; don't try to make one solution that is super large and covers it all. Four infrastructures can be easier to manage than one large one with dependencies on each other. Keep poo poo simple and focus on what the storage/network/virtual needs of the end user are.

Dilbert As FUCK fucked around with this message at 05:11 on Nov 10, 2013


KennyG
Oct 22, 2002
Here to blow my own horn.
I used DPACK over 24 hours and 7 days. I have provided this report to each of 6 vendors.

It includes avg IO sizes, read/write mix, latency, queue depth, IOPS, throughput, and CPU & memory, all graphed over time. I'm not pulling this number out of thin air.


Dilbert, I think I am trying to do what you are talking about (except that, as you alluded to, our VDI IOPS aren't as large, so I am comfortable just running that on the mission-critical tier).

At this time I am looking for information about the 100TB solution's ability to handle millions of small files. I have an avg file size of 300K, but it is skewed heavily by a handful of 2+GB files; on a statistical basis my median file size is about 11K. I can run some mode numbers on block size, but it ends up in the 8K range. The concern I have is the file system's ability to handle 100+ million files with snapshots, dedupe, etc. Anyone have any real-world info on this?
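If anyone wants to reproduce this kind of breakdown on their own file set, a rough sketch like this is all it takes (plain Python walking the tree from a client; the 2GB cutoff is just illustrative, not something from my DPACK report):

code:
#!/usr/bin/env python3
"""Rough sketch: mean vs. median file size for a directory tree."""
import os
import statistics
import sys

def file_sizes(root):
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            try:
                yield os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished or is unreadable; skip it

def summarize(root):
    sizes = list(file_sizes(root))
    if not sizes:
        print("no files found")
        return
    print(f"files:  {len(sizes):,}")
    print(f"mean:   {statistics.mean(sizes) / 1024:.1f} KB")
    print(f"median: {statistics.median(sizes) / 1024:.1f} KB")
    big = sum(s for s in sizes if s >= 2 * 1024**3)  # the 2GB+ outliers
    print(f"bytes in 2GB+ files: {big / sum(sizes):.1%}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else ".")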

Maneki Neko
Oct 27, 2000

KennyG posted:

At this time I am looking for information about the 100TB solution's ability to handle millions of small files. I have an avg file size of 300K, but it is skewed heavily by a handful of 2+GB files; on a statistical basis my median file size is about 11K. I can run some mode numbers on block size, but it ends up in the 8K range. The concern I have is the file system's ability to handle 100+ million files with snapshots, dedupe, etc. Anyone have any real-world info on this?

My only experience with this is on the NetApp side, but if you're looking at them as a NAS solution, make sure to have a conversation with them explicitly about this. They have an internal document about high file count environments (we had to sign an NDA to get it) that you'll want.

Docjowles
Apr 9, 2009

My company is 100% NetApp over NFS. We have literally billions of files in the kb-to-mb range stored there and replicated with SnapMirror, not aware of any issues. Just... don't try to do a directory listing unless you have a few weeks to kill ;) . Not sure about dedup. Disclaimer: I am not the storage admin, though I'm hoping to learn more about our NetApp stuff over the next year.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

We were quoted $33,000 for a SAN + service contract from Dell (not sure on the exact model/specs but that's not the point) through a reseller that we bought a NetApp from about 5 years ago.

We immediately thought 'what a great deal' since we're paying like $10k a year to keep maintenance on the one we already own. But then they want $9,000 to install the loving thing.

What could possibly be involved with an install that costs $9,000? Rack the drat thing, connect it to the network, and then set up a few volumes?

Docjowles
Apr 9, 2009

VARs always seem to quote you for installation, whether it's necessary or not. If you're comfortable setting it up, just tell them to knock that off the quote.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

NetApp will do high file count environments with snapshots and dedupe enabled, provided you are on a controller with enough memory or flash cache. As was mentioned, be sure to tell this to your VAR at the outset so that they can size appropriately. The big issue with HFC on NetApp is that there is so much metadata for the filesystem and if we can't keep all of that metadata in memory then performance can become very poor. So large memory models are almost always required, even if the actual IO load is fairly low. FlashCache or FlashPool can be used to cache only metadata, so those help as well. Snapshots and dedupe act at the block level so they don't care if those blocks are members of 1 billion files or 10 files, they will work the same either way. Ditto for volume snapmirror replication, though qtree snapmirror on 7-mode has challenges with HFC environments.
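To put very rough numbers on why the large memory models matter, here's a back-of-the-envelope sketch. The per-file overhead is a placeholder assumption, not a published WAFL figure, so get real sizing from your SE for your ONTAP version and feature mix:

code:
# Back-of-envelope sketch: how large the "hot" filesystem metadata gets at
# high file counts. bytes_per_file is a deliberately pessimistic placeholder,
# NOT a published WAFL figure; get real sizing numbers from your SE.
def metadata_working_set_gb(file_count, bytes_per_file=4096):
    return file_count * bytes_per_file / 1024**3

for files in (10_000_000, 100_000_000, 1_000_000_000):
    print(f"{files:>13,d} files -> ~{metadata_working_set_gb(files):,.0f} GB of metadata to keep warm")
The point is just that the metadata working set scales with file count rather than capacity, which is why a low-IO-load volume can still demand a large-memory controller.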

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Docjowles posted:

VARs always seem to quote you for installation, whether it's necessary or not. If you're comfortable setting it up, just tell them to knock that off the quote.

It's where they make their money; they may only be getting a few points on the HW, so they juice it up with some services.

KennyG
Oct 22, 2002
Here to blow my own horn.

Docjowles posted:

My company is 100% NetApp over NFS. We have literally billions of files in the kb-to-mb range stored there and replicated with SnapMirror, not aware of any issues. Just... don't try to do a directory listing unless you have a few weeks to kill ;) . Not sure about dedup. Disclaimer: I am not the storage admin, though I'm hoping to learn more about our NetApp stuff over the next year.

Thankfully our app is architected better than z:\project\ (I'm looking at you, gov't CMS from my last job). It's z:\project\##\##\##\##\, so it prevents that issue where some idiot says :hurr: what's in here?
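(For reference, the fan-out is nothing fancy; a hashed-prefix scheme like this rough sketch keeps any one directory from ever holding millions of entries. The two-level, two-hex-digit layout is only an example, not our app's actual scheme.)

code:
# Rough sketch of hash-based directory fan-out: two levels of two hex digits
# (256 entries max per level) is just an example layout, not our app's real one.
import hashlib
from pathlib import Path

def sharded_path(root, object_id, levels=2, width=2):
    digest = hashlib.sha1(object_id.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return Path(root, *parts, object_id)

print(sharded_path("/mnt/project", "invoice-000123.pdf"))
# prints something like /mnt/project/<aa>/<bb>/invoice-000123.pdf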

Not that I trust reps when they say something negative about a competitor, but a rep at NOT EMC warned me about this very thing with Isilon, so I am looking into it.

parid
Mar 18, 2004

Docjowles posted:

My company is 100% NetApp over NFS. We have literally billions of files in the kb-to-mb range stored there and replicated with SnapMirror, not aware of any issues. Just... don't try to do a directory listing unless you have a few weeks to kill ;) . Not sure about dedup. Disclaimer: I am not the storage admin, though I'm hoping to learn more about our NetApp stuff over the next year.

Another NetApp site with 100mil+ file volumes checking in. It's old-school maildirs for 20k+ mailboxes, running with snapshots/SyncMirror/quotas/dedupe without issue. I will say that NDMP-based backups are painfully slow; I'm pushing hard to move our data protection for these volumes to SnapMirror instead.

Amandyke
Nov 27, 2004

A wha?

KennyG posted:

Thankfully our app is architected better than z:\project\ (I'm looking at you, gov't CMS from my last job). It's z:\project\##\##\##\##\, so it prevents that issue where some idiot says :hurr: what's in here?

Not that I trust reps when they say something negative about a competitor, but a rep at NOT EMC warned me about this very thing with Isilon, so I am looking into it.

The "bug" is with tree delete jobs. It takes a while as it has to enumerate all the files in the directory/structure. I've heard of customers with millions of directories each holding millions of files complaining that deleting them takes a long time.

Alfajor
Jun 10, 2005

The delicious snack cake.
We took a pair of EMC CLARiiON CX4-120s offline and will not be using them anymore. Any recommendations on what to do with the hardware?
I'm thinking of listing all the drives (about 50 total) for sale on eBay, kinda like this: http://www.ebay.com/itm/EMC-CLARiiON-CX4-120-CX4-240-450GB-15K-CX-4G15-450-005048951-005048849-005049032-/150823481357 (on that note, any tips on where I can find the plastic enclosures to put the drives in safely?)

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Alfajor posted:

We took a pair of EMC CLARiiON CX4-120s offline and will not be using them anymore. Any recommendations on what to do with the hardware?
I'm thinking of listing all the drives (about 50 total) for sale on eBay, kinda like this: http://www.ebay.com/itm/EMC-CLARiiON-CX4-120-CX4-240-450GB-15K-CX-4G15-450-005048951-005048849-005049032-/150823481357 (on that note, any tips on where I can find the plastic enclosures to put the drives in safely?)

Some VARs have buyback programs for old equipment; you might want to check for any deals with whoever you bought your stuff from.

Alfajor
Jun 10, 2005

The delicious snack cake.
We got them straight from EMC, and they would have only bought them back if we replaced with a newer SAN from them... and we did not do that.

KennyG
Oct 22, 2002
Here to blow my own horn.

Amandyke posted:

The "bug" is with tree delete jobs. It takes a while as it has to enumerate all the files in the directory/structure. I've heard of customers with millions of directories each holding millions of files complaining that deleting them takes a long time.

So if I delete z:\Application, it will buy me more time to freak out and figure out how to stop it :ohdear: Sounds like a feature instead of a bug.

NullPtr4Lunch
Jun 22, 2012

Docjowles posted:

VARs always seem to quote you for installation, whether it's necessary or not. If you're comfortable setting it up, just tell them to knock that off the quote.

I was told this was mandatory for warranty reasons for my Hitachi HUS110. Seemed kind of skeezy...

Internet Explorer
Jun 1, 2005





NullPtr4Lunch posted:

I was told this was mandatory for warranty reasons for my Hitachi HUS110. Seemed kind of skeezy...

Every time I called EMC support for our VNX5300s, they gave me poo poo because I installed it myself. They wouldn't outright deny the support request, but it was a waste of 30 minutes every time I called.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Internet Explorer posted:

Every time I called EMC support for our VNX5300s, they gave me poo poo because I installed it myself. They wouldn't outright deny the support request, but it was a waste of 30 minutes every time I called.

This would make me furious. I have very specific OCD for racking and cabling stuff.

Amandyke
Nov 27, 2004

A wha?

Internet Explorer posted:

Every time I called EMC support for our VNX5300s, they gave me poo poo because I installed it myself. They wouldn't outright deny the support request, but it was a waste of 30 minutes every time I called.

Not sure how they would know you racked it yourself unless you told them you did... Not that it should matter either way; you're more than welcome to install whatever you want to install. If the install goes sideways, a CE will be dispatched anyway.

AtomD
May 3, 2009

Fun Shoe
I'm sorry if this was discussed before, but does anybody have experience with a Windows 2012 Storage Spaces solution? MS is pushing using a JBOD SAS array attached to a physical Windows Server cluster. It should allow scaling out and storage tiering (between platter and SSD only) at a much lower cost than a good SAN solution, but I've got a real bad feeling about trusting Windows to not screw something up eventually.

NullPtr4Lunch
Jun 22, 2012

AtomD posted:

... I've got a real bad feeling about trusting Windows to not screw something up eventually.

Yeah this ^

I'd sooner trust FreeNAS and ZFS than anything they cooked up over at Microsoft.

TKovacs2
Sep 21, 2009

1991, 1992, 2009 = Woooooooooooo

AtomD posted:

I'm sorry if this was discussed before, but does anybody have experience with a Windows 2012 Storage Spaces solution? MS is pushing using a JBOD SAS array attached to a physical Windows Server cluster. It should allow scaling out and storage tiering (between platter and SSD only) at a much lower cost than a good SAN solution, but I've got a real bad feeling about trusting Windows to not screw something up eventually.

I was recently at a Microsoft camp for a one-day Server 2012 lab they hosted at their campus. I was always taught, when it comes to RAID, to rely on specialized hardware to create/run/manage it rather than controlling it in software, and I've never had an experience that has led me to think that isn't correct. I brought this up to the presenter, and he pretty much focused on it being cheaper this way (which, yeah, it is) than paying for hardware that'll handle the RAID functions.

I won't speak for anyone else, but I'd NEVER trust my RAID to a software solution.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Stuff like ZFS has proven to be pretty powerful living purely in software space, and also negates some of the specific downsides of hardware cards (RAID 5 write-hole, moving drives between controllers, etc), so I don't think a blanket "software bad hardware good" statement can really apply anymore. No idea if Storage Spaces is any good or not, but I don't think it should be disqualified just because it's software.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Syano posted:

How long did it take the array to rebuild when you did that?

I want to say over a day, but I can't be sure. I think I kicked the job off on a Friday night and it was done by that Sunday when I next checked.

AtomD posted:

I'm sorry if this was discussed before, but does anybody have experience with a Windows 2012 Storage Spaces solution? MS is pushing using a JBOD SAS array attached to a physical Windows Server cluster. It should allow scaling out and storage tiering (between platter and SSD only) at a much lower cost than a good SAN solution, but I've got a real bad feeling about trusting Windows to not screw something up eventually.

I've a cluster that uses Server 2012 as an iSCSI target and it's been working great. I posted about it in the home storage thread. I did discover that the hardware RAID provided by my P410 controller gave better performance than using a M1015 in pass-through mode, but that was common for Nexenta CE, Openfiler and FreeNAS as well.

Take note that tiering is available only with Server 2012 R2 and has sort of a strange model of operation: it will use the SSD tier as an L2ARC/ZIL-style cache, but will also migrate frequently used data to your SSD tier, and over time your L2ARC-style cache will disappear as access patterns pull data up to the SSD tier. I'm not sure of the long-term performance considerations, but you'll want more SSD than is traditionally required for an L2ARC/ZIL implementation.

My gut instinct on Server 2012 Storage Spaces is that it exists in the same market as other NAS/SAN software packages. It won't compete with dedicated storage appliances from NetApp, EMC and the like, but it works great for roll your own or low budget storage requirements. I use it instead of Nexenta CE or FreeNAS because I'm a windows guy and I can manage it as such.


Agrikk fucked around with this message at 21:31 on Nov 14, 2013

AtomD
May 3, 2009

Fun Shoe
Yeah, I'd rather wait to see if any horror stories reveal themselves and then reconsider it for our next refresh.

MJP
Jun 17, 2007

Are you looking at me Senpai?

Grimey Drawer
Anyone running Lefthand out there?

We're looking to upgrade from StoreVirtual 10 to 11. We've got P4300G2s which are listed as compatible on the release notes, but the majority of our Lefthand servers are older DL320s servers.

Those aren't listed as supported - will we still be able to upgrade onto them as an unsupported platform, or will the process outright fail/refuse to proceed?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

TKovacs2 posted:

I won't speak for anyone else, but I'd NEVER trust my RAID to a software solution.

There are a bunch of enterprise software vendors that use software raid. There's nothing inherently wrong with software raid and it has a lot of flexibility over hardware raid solutions as you can integrate it more tightly with the volume manager and filesystem to make smarter choices about where to place data. Modern CPUs are also significantly faster than they used to be so there is little need to worry about offloading that work.

There are a ton of great reasons to use software raid (generally speaking, this depends on the implementation) and very few left to use dedicated hardware raid cards for anything other than, say, root disks on a server.

YOLOsubmarine fucked around with this message at 00:15 on Nov 16, 2013

TKovacs2
Sep 21, 2009

1991, 1992, 2009 = Woooooooooooo

NippleFloss posted:

There are a bunch of enterprise software vendors that use software raid. There's nothing inherently wrong with software raid and it has a lot of flexibility over hardware raid solutions as you can integrate it more tightly with the volume manager and filesystem to make smarter choices about where to place data. Modern CPUs are also significantly faster than they used to be so there is little need to worry about offloading that work.

There are a ton of great reasons to use software raid (generally speaking, this depends on the implementation) and very few left to use dedicated hardware raid cards for anything other than, say, root disks on a server.

I appreciate the info. I work in the small business world, so enterprise level stuff doesn't trickle down my way too often. Probably should have kept my mouth shut here, but maybe someone else with the same mindset will read all this and learn as well. Thanks for the free lesson!

Maneki Neko
Oct 27, 2000

PROTIP: Don't buy Safenet encryption appliances. Holy poo poo I've never seen anything as crappy and unstable before.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So with Server 2012 R2 and ESXi 5.5 both out, and both offering storage improvements, I'm wondering: what's the best way to set up a failover storage cluster now?

In the past it's been a limitation that you couldn't vMotion a machine in a failover cluster because of the way the networking is set up, but since both Server 2012 R2 and ESXi 5.5 have improved backend storage stuff, I'm wondering if there's a good way to do it now.

In another thread someone said that they'd either want a single storage server in HA, or a cluster that isn't in HA and won't vMotion; but if the server won't vMotion, I'd have to manually fail over the storage cluster and shut down the VM whenever I wanted to do host maintenance, rather than just putting the host into maintenance mode and letting it move on its own.

Any thoughts?

Novo
May 13, 2003

Stercorem pro cerebro habes
Soiled Meat

NippleFloss posted:

There are a bunch of enterprise software vendors that use software raid. There's nothing inherently wrong with software raid and it has a lot of flexibility over hardware raid solutions as you can integrate it more tightly with the volume manager and filesystem to make smarter choices about where to place data. Modern CPUs are also significantly faster than they used to be so there is little need to worry about offloading that work.

There are a ton of great reasons to use software raid (generally speaking, this depends on the implementation) and very few left to use dedicated hardware raid cards for anything other than, say, root disks on a server.

I know it's anecdotal, but all my server problems in the last 5 years have been caused either by a failed power supply or a failed RAID controller, and it's usually the RAID controller. So whenever I find myself refreshing a system I rip them out and set up Linux software RAID instead. With hardware RAID you have to fiddle with each particular controller, and each one requires different utilities and kernel modules for management. By standardizing on md I can spend all that money on more drives or servers instead of expensive-rear end cards that ruin my day and get chucked after two years (I'm looking at you, retarded-rear end MegaRaid controllers with the mouse-driven boot ROM that looks like Windows 3.1).
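The other nice part of standardizing on md is that health checks look the same on every box. A quick sketch like this, which just reads /proc/mdstat, covers the basics (it doesn't handle every md personality or edge case):

code:
# Rough sketch: flag degraded md arrays by parsing /proc/mdstat. The status
# line for each array ends with something like "[2/2] [UU]"; an underscore
# marks a missing member. Not exhaustive; `mdadm --detail` is the real source.
import re
import sys

def degraded_arrays(path="/proc/mdstat"):
    with open(path) as f:
        text = f.read()
    bad = []
    for name, body in re.findall(r"^(md\d+) : (.*?)(?=^md\d+ : |\Z)",
                                 text, re.M | re.S):
        status = re.search(r"\[\d+/\d+\] \[([U_]+)\]", body)
        if status and "_" in status.group(1):
            bad.append(name)
    return bad

if __name__ == "__main__":
    down = degraded_arrays()
    print(("degraded: " + ", ".join(down)) if down else "all md arrays look healthy")
    sys.exit(1 if down else 0)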

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I sat down with some Nimble guys today, and I like what they are selling. The sales guy told me he loves to "race for pinks" which I thought was a pretty good way of showing his confidence in the system he is selling.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

adorai posted:

I sat down with some Nimble guys today, and I like what they are selling. The sales guy told me he loves to "race for pinks" which I thought was a pretty good way of showing his confidence in the system he is selling.

Nimble was great, except when it came to replication. We had some serious issues with replication performance across a VPN, and their answer was "welp we don't know why it starts and stops replicating at random".

b0nes
Sep 11, 2001
This might be the wrong thread, but is there any way to build a NAS with an external USB drive? I want a Western Digital MyCloud drive, but the reviews I am seeing rate it as a lovely product.

Zorak of Michigan
Jun 10, 2006

Is there a cheap option for reliable and replicate-able nearline storage? One of my coworkers is pushing us toward Amazon Glacier for archival storage. I'm wondering if there are on-prem solutions that might come close to that same price target. We're mostly an EMC shop now and while I know VNX2 pricing has gotten pretty good, I haven't seen them anywhere near a penny per gig per month over the life of the array.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

madsushi posted:

Nimble was great, except when it came to replication. We had some serious issues with replication performance across a VPN, and their answer was "welp we don't know why it starts and stops replicating at random".

Currently replicating across a lovely 50meg link and having no issues.

Running the latest Firmware?

hackedaccount
Sep 28, 2009

Zorak of Michigan posted:

Is there a cheap option for reliable and replicate-able nearline storage? One of my coworkers is pushing us toward Amazon Glacier for archival storage. I'm wondering if there are on-prem solutions that might come close to that same price target. We're mostly an EMC shop now and while I know VNX2 pricing has gotten pretty good, I haven't seen them anywhere near a penny per gig per month over the life of the array.

How near-line is it gonna be? If you go with Glacier it's cheap to put in and store but outbound bandwidth fees can get costly if you access it frequently.
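If you want to put rough numbers on it, the math is trivial. The rates below are placeholders, so plug in the current rate card and your actual retrieval pattern before trusting any of it:

code:
# Quick sketch: cold storage looks cheap until you model retrieval. Both
# rates below are placeholders; substitute the provider's current pricing
# and your real access pattern.
def monthly_cost(stored_gb, retrieved_gb, store_rate, retrieval_rate):
    return stored_gb * store_rate + retrieved_gb * retrieval_rate

stored = 100 * 1024  # example: ~100 TB of archive data
for label, frac in [("truly cold (1%/mo retrieved)", 0.01),
                    ("lukewarm (10%/mo retrieved)", 0.10)]:
    cost = monthly_cost(stored, stored * frac,
                        store_rate=0.01,       # placeholder $/GB-month
                        retrieval_rate=0.09)   # placeholder $/GB retrieved
    print(f"{label}: ~${cost:,.0f}/month")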

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

b0nes posted:

This might be the wrong thread, but is there any way to build a NAS with an external USB drive? I want a Western Digital MyCloud drive, but the reviews I am seeing rate it as a lovely product.

We've got a Home NAS thread, that's probably more what you're looking for:
http://forums.somethingawful.com/showthread.php?noseen=0&threadid=2801557&perpage=40&pagenumber=224#pti35

KennyG
Oct 22, 2002
Here to blow my own horn.
Can we get dirty and talk about pricing?

I am trying to figure out what's doable and how much more I can gain from the deal.

I can provide specifics if important, but let's use fake numbers from fake companies for now.


ProviderSolution:
30TB Usable w/ local flash cards MSRP $300k, $140k initial quote
96TB Unified Cluster Storage (5 nodes, balanced config) MSRP $450k, $200k quote
200TB Unified Cluster Storage (4 nodes, Near Line config) MSRP $550k, $250k quote
~53% discount applied across the board

This comes to ~$650k after services and other stuff are included. They then added a 13% Q4 discount, giving me a number around $550k.

Now, when they say you can get roughly 30% from negotiating, is that off the $1.3M or the $600k? If I get them down 30% from $550k, that's $385k, a number that I think is more than fair. Even if it's 30% off $650k, that's $450k give or take, and doable. How do you negotiate on this crap? This is the part of the job I hate the most.

KennyG fucked around with this message at 20:01 on Nov 27, 2013


madsushi
Apr 19, 2009

Baller.
#essereFerrari
Here are some numbers that don't reflect any particular company, and actually, they're not even storage-specific. First, all of my math is a bit fuzzy (not using a calculator) but close enough.

You start by figuring out the margin. For high-end storage and many other things, the margin is around 80% of MSRP. That could be even higher or much lower, depending on the company. A lot of young companies run with thinner margins (lower MSRP) to get the sales.

But let's start with a nice round number, like 80%.

Your deal is about ~$1.4MM MSRP. Assuming 80% margin, they'd be making ~$1.2MM on the deal. But nobody pays that price.

They're giving you a 53% discount right off the top ($1.4MM * 0.47), and a 13% bonus for the quarter. Note that the 13% is compounded AFTER the 53%, so it's not a 66% discount off MSRP, but closer to 60% off MSRP.

So if you're getting 60% off MSRP, that means their margin is down to 20% of MSRP (remember, 80% MSRP was their original margin). Any discounts you get come out of their margin.

If your cost is $550k (40% of list), and their margin remaining is still 20% of list, they're making a "50% margin" off the sale value, which is right in the range of what NetApp/EMC/etc report. They are between 40-60% sale value margin at the end of the day.

So you're right in there. If you want to shave off another 10% MSRP, you're cutting their margin in half (20% MSRP to 10% MSRP, or 50% sale to 25% sale), which can be tough.

I know that many companies aim for between 10-20% MSRP margins, so you might be able to shave off a few more percentage points. But as you get closer and closer to the actual price of the gear (around 20% MSRP, or $300k), you are cutting their margins deeper and deeper. A $450k sale would mean they'd only be making ~30% sale margin (and 10% MSRP), which is below their usual ballpark.

So you might be able to wiggle a few more percentage points out of them, but it's very unlikely you take them to $450k without some serious leverage.
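If you'd rather redo my fuzzy math with a calculator, it's only a few lines. The 80% list margin is still just my assumption:

code:
# Redo of the fuzzy math above with a calculator. The 80% margin on MSRP is
# an assumption, not a published figure; swap in your own quote numbers.
msrp = 1.4e6
vendor_cost = msrp * (1 - 0.80)          # what the gear notionally costs them

def quoted_price(msrp, *discounts):
    # Discounts compound: 53% then 13% is ~59% off list, not 66%.
    price = msrp
    for d in discounts:
        price *= (1 - d)
    return price

price = quoted_price(msrp, 0.53, 0.13)
print(f"effective discount: {1 - price / msrp:.0%} off MSRP")            # ~59%
print(f"quote ${price / 1e6:.2f}MM, margin on sale value "
      f"{(price - vendor_cost) / price:.0%}")                            # ~51%

harder = price - 0.10 * msrp             # shave off another 10% of MSRP
print(f"at ${harder / 1e6:.2f}MM their margin on sale value drops to "
      f"{(harder - vendor_cost) / harder:.0%}")                          # ~35%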
