ILikeVoltron
May 17, 2003

I <3 spyderbyte!

bmoyles posted:

Yep, tier 3 was SATA, tier 1 was 15k FC.
We ended up with an MD3ki as a stopgap solution for VMware and an Isilon for NAS. Probably going to go with an EqualLogic box to replace that MD3ki later this year.

The Compellent solution was really nice, and I'd recommend it to anyone who's got the cash.

I've got a Compellent SAN in active/passive and like it very much. The Data Progression stuff from Tier 1 -> Tier X is really nice too.


ILikeVoltron
May 17, 2003

I <3 spyderbyte!

szlevi posted:

Anyone with knowledge about Compellent's Data Progression feature? How does it work - policies, period, chunk size, etc.?

It tracks block usage and progresses blocks either up or down tiers (sometimes just to a different RAID level).

It usually kicks off around 7-10pm depending on how you set it up. I've worked with their equipment for around 5 years, know several of their support staff, and am currently upgrading to some newer controllers, so I can likely answer any of your questions about their equipment.
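If it helps to picture it, here's a toy sketch of what a nightly tiering pass conceptually does. This is not Compellent's actual code - the tier names, thresholds, and per-page bookkeeping are all made up for illustration:

# Toy sketch of a nightly block/page tiering pass, loosely in the spirit
# of Data Progression. Not Compellent's implementation; the tiers,
# thresholds, and access counters below are assumptions for illustration.

TIERS = ["tier1_15k", "tier2_10k", "tier3_sata"]  # fastest -> slowest

def nightly_progression(pages, hot_threshold=100, cold_threshold=10):
    """pages: list of dicts like {"id": 42, "tier": 1, "reads_today": 57}."""
    for page in pages:
        tier = page["tier"]
        if page["reads_today"] >= hot_threshold and tier > 0:
            page["tier"] = tier - 1          # promote a hot page up a tier
        elif page["reads_today"] <= cold_threshold and tier < len(TIERS) - 1:
            page["tier"] = tier + 1          # demote a cold page down a tier
        page["reads_today"] = 0              # reset counters for the next day
    return pages

# Example: one busy page gets promoted, one idle page gets demoted.
pages = [{"id": 1, "tier": 1, "reads_today": 500},
         {"id": 2, "tier": 0, "reads_today": 2}]
print(nightly_progression(pages))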

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

szlevi posted:

Awesome, thank you. :) You mentioned this 7-10PM window - what happens if I have multiple workflow changes throughout the day? What's the shortest sampling period for this tiering algorithm, e.g. can I set it to check every 4 hours or even more often?

I'm told they're experimenting with polling Tier 0 data (SSD only) at 5-minute intervals; however, for normal 15k RPM / 7k RPM disk I believe it's daily, and there's no altering that. Basically you build out your Tier 1 shelf with the number of disks needed, for both IO and size, to keep most of your current data there.
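Back-of-the-napkin, that sizing exercise looks something like this (the per-spindle IOPS and usable capacity figures are my own rough assumptions, not Compellent numbers):

# Rough Tier 1 shelf sizing: take the larger of the spindle count needed
# for IOPS and the count needed for capacity. All figures here are
# assumed ballpark values for illustration only.
import math

required_iops = 5000          # peak IOPS the "current" data needs (assumed)
hot_data_gb = 4000            # working set to keep on Tier 1 (assumed)
iops_per_15k_disk = 180       # rough figure for a single 15k spindle
usable_gb_per_disk = 300      # usable capacity after RAID overhead (assumed)

disks_for_iops = math.ceil(required_iops / iops_per_15k_disk)
disks_for_capacity = math.ceil(hot_data_gb / usable_gb_per_disk)
print(max(disks_for_iops, disks_for_capacity))   # -> 28 disks in this example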

Also, if you want to pull out a LUN and assign it a storage profile that keeps it in Tier 1 (and never migrates), you can do that as well.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

three posted:

How do you handle systems that run on monthly cycles, whose data would have been migrated down to the slow tier by then? Disable Data Progression on those volumes, or create custom profiles?

I'd break up the data in such a way that the tier of disk it ends up sitting on is acceptable. One option is the method you describe; you could also build out your Tier 3 storage to take that hit, though it obviously won't be as fast.

The way I run today, I have it split about 40/60, with 40% going to Tier 1 and 60% going to Tier 3. I have more Tier 3 spindles, so it works out that this is better for me.

You could do something weird with the snapshot intervals too, I assume, though I wouldn't recommend that.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Kivi posted:

My company buys capacity from a 3rd-party vendor and we're having an issue where two supposedly identically specced machines perform differently, one being almost a quarter slower. I'm going to guess it's down to disk performance (they're database servers, Xeon 56c Platinums and 128 GB of RAM on both), so what would be a quick benchmark to run? Bonnie++ springs to mind, but it takes hours to run - is there anything simple and fast that would provide disk metrics we could easily compare?



FIO is fine, Iometer is fine too; your real issue is going to be setting up a proper test that gets you accurate IO numbers. That means either you already have a profile for the DB (80% reads / 20% writes / 8k blocks / etc.) or you need to identify the profile you want to use. With that in mind, you should also use a test file twice the size of the RAM in the system, so if you've got 128 GB of RAM, you should allocate a 256 GB disk to run the test on. Also, try to keep things consistent: if you end up writing the test file on an existing filesystem, make sure it's the same filesystem your DB runs on. If it's Oracle (using its own custom disk format), maybe test on XFS.

Anyway, there are a lot of things you need to account for to run a good test - RAM, CPU, and disk among them. Hope this helps.
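To make that concrete, here's roughly the kind of fio run I mean, as a small Python wrapper. The test path, queue depth, and runtime are just assumptions - swap in whatever matches your DB's profile and mount points:

# Sketch of an fio run matching the profile discussed above: random mixed
# I/O, 80% reads / 20% writes, 8k blocks, test file ~2x RAM (256G for a
# 128 GB box), direct I/O to keep the page cache out of the numbers.
# The filename, iodepth, numjobs, and runtime are assumptions - tune them.
import subprocess

cmd = [
    "fio",
    "--name=db-profile",
    "--filename=/mnt/same-fs-as-db/fio.testfile",  # hypothetical path
    "--rw=randrw", "--rwmixread=80",
    "--bs=8k",
    "--size=256G",
    "--ioengine=libaio", "--direct=1",
    "--iodepth=32", "--numjobs=4",
    "--runtime=300", "--time_based",
    "--group_reporting",
]
subprocess.run(cmd, check=True)

Run the exact same job on both boxes and compare the IOPS and completion latency figures it reports; that gives you an apples-to-apples number without waiting hours for Bonnie++.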
