three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

ILikeVoltron posted:

It marks used blocks and progresses them either up or down tiers (sometimes just a raid level)

It usually kicks off around 7-10pm depending on how you set it up. I've worked with their equipment for around 5 years, know several of their support staff and am currently upgrading to some newer controllers, so I can likely answer any of your questions about their equipment.

How do you handle systems that run on monthly cycles, which would have their data migrated down to slow speed by then? Disable data progression on those volumes, or create custom profiles?

ragzilla
Sep 9, 2005
don't ask me, i only work here


three posted:

How do you handle systems that run on monthly cycles, which would have their data migrated down to slow speed by then? Disable data progression on those volumes, or create custom profiles?

Remove the slow disk pools from the storage profile for that LUN (I'm fuzzy on the actual names in the GUI, haven't touched our Compellent in over a year).

Data progression also relies on snapshots so you have to snapshot every LUN you want data progression on.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
At the end of the day, there are plenty of ways to "trick" the Compellent. Some storage usage profiles match up nicely with its Data Progression design and will see great performance; some usage profiles simply aren't compatible.

"What about the database that only blasts yesterday's records at 3AM and then never touches them again??"

In my opinion, the right way to size storage performance is to figure out your IOPS needs, and then to give yourself enough spindles to handle those IOPS (while also considering your space requirements). Any storage vendor that tries to sneak around the basic spinning disk IOPS requirements is going to run into caveats where their system doesn't work. NetApp loves to use SATA-heavy deployments with FlashCache, Compellent uses Data Progression, and I know EMC has some SSD-based options.
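
To put rough numbers on that spindle math, here's a quick back-of-the-envelope sketch (the per-disk IOPS figures are just the usual rules of thumb, not vendor numbers, so adjust for your own workload and RAID write penalty):

    required_iops=6000        # what your workload actually needs at peak
    iops_per_15k=180          # rule-of-thumb for a 15k FC/SAS spindle
    iops_per_sata=75          # rule-of-thumb for a 7.2k SATA spindle
    echo "15k spindles:  $(( (required_iops + iops_per_15k - 1) / iops_per_15k ))"   # -> 34
    echo "SATA spindles: $(( (required_iops + iops_per_sata - 1) / iops_per_sata ))" # -> 80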

As long as you size your Compellent properly to handle your load WITHOUT all of the tricks, it will run fine. The problems start when you buy into their marketing-speak of "just buy a couple 15ks and then all SATA and everything will work out magically".

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

ILikeVoltron posted:

It marks used blocks and progresses them either up or down tiers (sometimes just a raid level)

It usually kicks off around 7-10pm depending on how you set it up. I've worked with their equipment for around 5 years, know several of their support staff and am currently upgrading to some newer controllers, so I can likely answer any of your questions about their equipment.

Awesome, thank you. :) You mentioned this 7-10PM window - what happens if I have multiple workflow changes throughout the day? What's the shortest sampling period for this tiering algorithm, e.g. can I set it to check every 4 hours or even more frequently?

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

szlevi posted:

Awesome, thank you. :) You mentioned this 7-10PM window - what happens if I have multiple workflow changes throughout the day? What's the shortest sampling period for this tiering algorithm, e.g. can I set it to check every 4 hours or even more frequently?

I'm told they are experimenting with polling tier 0 data (SSD only) at 5-minute intervals; however, for normal 15k RPM / 7k RPM disk I believe it is daily, and there is no altering that. Basically you build out your tier 1 shelf with the number of disks for IO/size that would be required to keep most of your data current.

Also, if you want to pull out a LUN and assign it a storage profile that keeps it in tier 1 (and never migrates it), you can do that as well.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

three posted:

How do you handle systems that run on monthly cycles, which would have their data migrated down to slow speed by then? Disable data progression on those volumes, or create custom profiles?

I'd break up the data in such a way that the tier of disk it'll sit on is fine. One option is to use the method you describe; you could also build out your tier 3 storage to take that hit, though it obviously won't be as fast.

The way I run today, I have it split about 40/60, with 40% going to tier 1 and 60% going to tier 3. I have more tier 3 spindles, so it works out that this is better for me.

You could do something weird with the snapshot intervals too I assume, though I wouldn't recommend that.

bort
Mar 13, 2003

Thanks ILikeVoltron and madsushi, that's really great advice.

Any other Compellent noob pitfalls I'm about to fall into?

Also, Spamtron7k, you got me thinking about SSD in case people start blaming my storage, thank you :mmmhmm:

KS
Jun 10, 2003
Outrageous Lumpwad
Ragzilla posted the biggest Compellent noob pitfall, but I wanted to draw attention to it: you must do snapshots of some sort on all volumes in order for data progression to work properly. Without them, data will not be paged down to lower tiers or even to RAID-5 on Tier 1.

If you're space-constrained and have a volume that has a 100% change rate every day (Exchange and SQL come to mind), a daily snapshot with 26-hour retention seems to be about the minimum you can get away with. You will still come out ahead on storage compared to no snapshots and the data sitting in RAID-10.

My only other advice is to use the recommended storage profile for nearly everything. The system is pretty smart, but you can outsmart it if you try. Treat custom storage profiles like VMware memory reservations/limits: something to be avoided unless you know you need it.

KS fucked around with this message at 23:14 on Dec 22, 2011

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Someone earlier said that Compellent iSCSI didn't play nicely with Solaris - can anyone corroborate that or provide some more information? That would be a serious deal breaker for us, and I'd hate to dump all our money into a vision that is worse than what we have now.

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

FISHMANPET posted:

Someone earlier said that Compellent iSCSI didn't play nicely with Solaris - can anyone corroborate that or provide some more information? That would be a serious deal breaker for us, and I'd hate to dump all our money into a vision that is worse than what we have now.
I had a lot of goofy issues with iSCSI and ZFS in Solaris 10, but I wouldn't blame them on Compellent. At the time we were using Solaris 10 Update 9 with the latest patch cluster. Every single time you rebooted the server, ZFS would mark the Compellent iSCSI LUN as degraded and I would have to manually clear the error to bring the filesystem online. The problem is that when the server boots, ZFS tries to mount all the devices before the network is up, so it can't mount the iSCSI LUN and therefore thinks it is faulty/offline. I had Oracle create an IDR (which took them months to come up with) but the issue would still happen occasionally. I think the problem was fixed in OpenSolaris, so it might be fixed in Solaris 11.

I also had another server on which I would get abysmal disk performance over iSCSI with multipathing turned on. Compellent and Oracle couldn't help me and the problem is still unresolved; as a workaround, we had to disable multipathing on the server. Funnily enough, we had an identical Compellent SAN/switch at another site that didn't have this problem, and I couldn't figure it out for the life of me. Of course, if you are going to use Fibre Channel (which I recommend) for your Compellent, then all of this is a non-issue.
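
For reference, the manual recovery after each of those reboots was basically just clearing the pool's error state once the iSCSI session was back up - roughly this (pool name is made up for the example):

    zpool status tank   # the Compellent LUN shows up DEGRADED/UNAVAIL right after boot
    zpool clear tank    # clear the error once the network and iSCSI session are up
    zpool status tank   # pool comes back ONLINE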

Also be aware that with ZFS you can't mount replays on the same server if the same filesystem is already mounted. For instance, say you have a pool called rpool mounted on your server, and on the Compellent you want to mount a replay of rpool from last week. You would think you could do a zpool import, call it rpool_old and be done, right? Well, you can't, because replays have the same device IDs, and ZFS won't let you have two devices with duplicate IDs on the same system. In order to actually use the replay of rpool, you will need to either destroy the first rpool or mount it on a different system.
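
To make that concrete, the obvious move (import the replay under a new name) is exactly what gets blocked - a sketch with made-up pool names:

    zpool import                   # the replay's pool is visible, also named 'rpool'
    zpool import rpool rpool_old   # rename-on-import, but it fails here because the
                                   # replay carries the same pool/device GUIDs as the
                                   # rpool that is already active on this system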

One thing I would mention is that in my experience, Compellent's Copilot support isn't that great when it comes to answering questions on Solaris. Oracle's support isn't all that great either since all the good engineers left Sun a long time ago. I would recommend having good Solaris admins on staff to deal with any funky Solaris issues because you will certainly pull your hair out trying to get support to fix them.

ihafarm
Aug 12, 2004
Just had a FAST Cache SSD fail in my Celerra (precursor to the VNXe line); the hot spare was promoted properly and the system performed as normal. Thankfully.

Of course, dealing with support and getting a replacement, that's a different story.

EDIT: Dispatch a tech to replace a drive?!? WTF EMC.

SPAMTRON - I've got to put out another fire, but I'll update with the relevant code release numbers later.

ihafarm fucked around with this message at 20:28 on Dec 23, 2011

Slappy Pappy
Oct 15, 2003

Mighty, mighty eagle soaring free
Defender of our homes and liberty
Bravery, humility, and honesty...
Mighty, mighty eagle, rescue me!
Dinosaur Gum

ihafarm posted:

Just had a FAST Cache SSD fail in my Celerra (precursor to the VNXe line); the hot spare was promoted properly and the system performed as normal. Thankfully.

Of course, dealing with support and getting a replacement, that's a different story.

That's great. Do you know what version of FLARE you're on? EMC support wants me to upgrade to latest before re-enabling FAST cache. We had our failure on 4.30.000.5.511

They sent me the release notes for 4.30.000.5.522 (current) but I don't see anything that remotely resembles "single SSD or disk failure causes entire enclosure to fault". I can't wait to see the RCA.

luminalflux
May 27, 2005



luminalflux posted:

This is also true of HP LeftHand systems. That is, the VAR claims that that will happen, but the HP RSSWM stuff is kinda finicky. I'll get back to you when our next disk shits itself now that I've (hopefully) fixed RSSWM.

Update: Claims are still claims :(

ihafarm
Aug 12, 2004
Update on my situation: I am pretty happy with the response from EMC. I opened my ticket around noon and the replacement drive was sent out by courier and arrived around 8pm, redirected to my house no less - and I live approximately two hours from their local service center.

SPAMTRON: I'm on the same release. I did notice that the shelf that holds the SSDs is still faulted, even after the hot spare was activated. My saving grace may simply have been that there aren't any other disks in this enclosure. Is it the Clariion that doesn't allow mixing of disk types within an enclosure, or is that a Celerra limitation?

ihafarm fucked around with this message at 06:39 on Dec 27, 2011

Bitch Stewie
Dec 17, 2011
Any of you P4000 folks updated to the current patch baseline for SAN/iQ 9.0?

I don't want to pull the trigger on 9.5 yet, but our 9.0 hasn't been patched for six months or so.

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.

Bitch Stewie posted:

Any of you P4000 folks updated to the current patch baseline for SAN/iQ 9.0?

I don't want to pull the trigger on 9.5 yet, but our 9.0 hasn't been patched for six months or so.

We did 8.0 to 9.5 in one run and it was painful, but that was really the 8.0 to 9.0 portion; 9.0 to 9.5 took 10 minutes and was pain free. I haven't checked on the thing since then, seeing as the UI is now even more bloated Java poo poo and no one has contacted me about it being down in the past few months...

Anyone use IceWeb's 6XXX series? Comments?

Nimble keeps buying us lunch, but I dunno... maybe I could take one for the team and try it out for you guys.

Bitch Stewie
Dec 17, 2011
Personally I'm becoming more and more convinced that scale-out/node/grid-based storage is the way it's headed. Just my opinion, but I look around at most of the chassis-based solutions and, unless you've got a big budget, I don't see the value there that you get with node-based solutions, especially when you want to factor in some redundancy.

luminalflux
May 27, 2005



ghostinmyshell posted:

We did 8.0 to 9.5 in one run and it was painful.

Same here, with the added fun of manually downloading everything. Also had to manually reset a node via iLO a couple times during the upgrade process since it didn't start correctly.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bitch Stewie posted:

Personally I'm becoming more and more convinced that scale-out/node/grid-based storage is the way it's headed. Just my opinion, but I look around at most of the chassis-based solutions and, unless you've got a big budget, I don't see the value there that you get with node-based solutions, especially when you want to factor in some redundancy.
Most of the mid-tier solutions from major vendors (IBM StorWize V7000 Unified, Hitachi USP, Isilon) are moving towards a hybrid approach, so I'm not convinced that pure grid systems necessarily offer any long-term benefits other than that they're just viable enough to start driving down the prices from major storage vendors.

j3rkstore
Jan 28, 2009

L'esprit d'escalier


:3:

Pretty psyched to get going on this project.

WarauInu
Jul 29, 2003
Has people's experience with VNX been bad, or has it just been the earlier models? We are getting pitched a VNX to replace our AX4 and NS-20, which are running out of hardware support.

Muslim Wookie
Jul 6, 2005

j3rkstore posted:



:3:

Pretty psyched to get going on this project.

Nice :) Make sure to rack the rails correctly and don't screw the front screws in too hard; they are just there to keep the filer from moving back and forth. Too tight and it's possible to bend the ears.

Happy to field any questions you have about the unit if you have any :)

evil_bunnY
Apr 2, 2003

I was told they removed the volume limits on the 20x0s, is that true?

Nomex
Jul 17, 2002

Flame retarded.
The limits are so high with 64-bit aggregates that you will never reach them in a 20x0 array.

Muslim Wookie
Jul 6, 2005
Sorry guys but this is not true. I have hit the limits quite often - there are many situations in which you can use a large amount of space and not need much CPU behind it.

Anyway, 64-bit aggregates do give you much more space, and the amount depends on the model of filer you are running and on what version of Data ONTAP you are running. 8.0.x has one set of limits; 8.1 will have (has...) another set that is significantly higher. To get the specific numbers, guys, jump onto kb.netapp.com and search "Storage Subsystem Technical FAQ" - within that document is a table displaying maximum aggregate sizes and also recommended RAID group sizes. Note I say RAID group sizes and not RAID type, because you'd be an idiot to use anything but RAID-DP, and thank god NetApp has finally stopped allowing the creation of traditional volumes as of 8.0.x. It's been more than five years since FlexVols came about - what idiots keep tying volumes to specific disks? That's some stone-age poo poo right there.
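
For anyone who hasn't done it, creating a 64-bit aggregate with RAID-DP and a sane RAID group size, then a FlexVol on top, is roughly this in 7-mode (syntax from memory, so check the aggr/vol man pages for your ONTAP version):

    aggr create aggr1 -B 64 -t raid_dp -r 16 32   # 64-bit aggregate, RAID-DP, RAID group size 16, 32 disks
    vol create vol1 aggr1 500g                    # FlexVol carved out of the aggregate, no disks tied to it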

Sigh, sorry about the rant - I just had some meetings with some DBAs today regarding some storage they want. They wanted traditional volumes and kept harping on about a specific RAID type they wanted. All I could hear in my head was another, similar meeting I'd had, in which another storage architect besides me was also in attendance. He had the best reply to their badgering about RAID type: "I'll tell you what RAID you are getting: RAID-none-of-your-loving-business."

madsushi
Apr 19, 2009

Baller.
#essereFerrari

evil_bunnY posted:

I was told they removed the volume limits on the 20x0s, is that true?

On ONTAP 8 and up, volume limits are essentially gone, and you can dedupe a volume up to 16TB (which is up from 1-4TB previously, based on model). You CAN'T put 8+ on a 2020 or 2050, only the 2040 and newer 2240. So if you bought a NetApp 2xxx more than a year or two ago (likely a 2020/2050) then you're still stuck.
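
And if you do land on 8.x, turning dedupe on and seeing what it actually saves is just a couple of commands in 7-mode (from memory, so double-check against your ONTAP docs):

    sis on /vol/vol1        # enable dedupe on the volume
    sis start -s /vol/vol1  # scan and dedupe the data that is already there
    df -s /vol/vol1         # show how much space dedupe has saved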

devmd01
Mar 7, 2006

Elektronik
Supersonik
Boy oh boy, we get to rezone nearly the entire SAN fabric because we hosed up the zoning assignments when we installed our first blade chassis (all 16 servers can talk to each other...). We're now on our 5th chassis. :stonk:

evil_bunnY
Apr 2, 2003

madsushi posted:

you can dedupe a volume up to 16TB (which is up from 1-4TB previously, based on model).
That's what I thought, thanks.

Mierdaan
Sep 14, 2004

Pillbug

madsushi posted:

So if you bought a NetApp 2xxx more than a year or two ago (likely a 2020/2050) then you're still stuck.

A bloo bloo bloo, this is my life.

Thankfully we get to buy some Compellent this year.

some kinda jackal
Feb 25, 2003

 
 
This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned? I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned? I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

In addition to OpenFiler, FreeNAS is an option.

Internet Explorer
Jun 1, 2005





Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned? I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

Not sure if your goal is to mess around with the SAN side of things or the virtualization side of things, but there are also some fairly cheap NAS devices that will do iSCSI.

bort
Mar 13, 2003

You might ask in the virtualization thread, too.

Bitch Stewie
Dec 17, 2011

Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned? I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

Download the Lefthand P4000 VSA and install it on a vSphere or Hyper-V host.

hackedaccount
Sep 28, 2009

Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned? I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

I used straight-up, out-of-the-box CentOS to create iSCSI targets for RHCE practice. It took some messing around to get it working, but I read and wrote files so it seems to work.
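
For anyone wanting to replicate that, the stock scsi-target-utils route looks roughly like this (IQN and backing file are made up, and for anything non-throwaway you'd persist it in /etc/tgt/targets.conf instead of running tgtadm by hand):

    yum install -y scsi-target-utils
    service tgtd start
    dd if=/dev/zero of=/srv/iscsi/lun0.img bs=1M count=1024                                # 1GB backing file
    tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2012-01.lab.example:target0   # define the target
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /srv/iscsi/lun0.img  # attach the LUN
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL                              # allow any initiator (lab only)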

complex
Sep 16, 2003

Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned? I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

I have FreeNAS, OpenFiler, NexentaStor CE, and a ReadyNAS 2100 in my lab at work. I think if you're learning iSCSI basics it is actually good to try a bunch of different ones. You'll see how each one does volumes and LUNs, mapping, network ACLs, etc.

All of them work great with ESXi 5.

some kinda jackal
Feb 25, 2003

 
 
Good call. I'm basically just loving around with vSphere 5 so nothing I do is mission critical. I was thinking about spending $12 on a PCI PATA controller so I could throw together a small JBOD with a bunch of surplus stuff I've taken out of computers through the years. I'll give all those a good look, thanks!

Hell, sounds like I could do the job with nothing more than an Ubuntu install then, if I wanted.
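
If memory serves, the classic Ubuntu route for that is the iscsitarget (IET) package - something like the following, with made-up names, and with the usual caveat that I'm going from memory:

    sudo apt-get install iscsitarget
    sudo dd if=/dev/zero of=/srv/iscsi/lun0.img bs=1M count=1024
    # IET reads its targets from /etc/iet/ietd.conf
    echo 'Target iqn.2012-01.local.lab:storage.lun0' | sudo tee -a /etc/iet/ietd.conf
    echo '    Lun 0 Path=/srv/iscsi/lun0.img,Type=fileio' | sudo tee -a /etc/iet/ietd.conf
    sudo sed -i 's/ISCSITARGET_ENABLE=false/ISCSITARGET_ENABLE=true/' /etc/default/iscsitarget
    sudo service iscsitarget restart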

KS
Jun 10, 2003
Outrageous Lumpwad

Mierdaan posted:

A bloo bloo bloo, this is my life.

Thankfully we get to buy some Compellent this year.

I've worked with a bunch of vendors and I really like our Compellents, but NetApp and 3PAR are the other vendors I'd recommend without hesitation in the entry enterprise space. I'm sorta surprised to hear you're moving away from NetApp. I doubt Compellent really picks up a lot of their customers.

Be sure you buy series 40 controllers. They will last you a lot longer.

WarauInu
Jul 29, 2003

complex posted:

I have FreeNAS, OpenFiler, NexentaStor CE, and a ReadyNAS 2100 in my lab at work. I think if you're learning iSCSI basics it is actually good to try a bunch of different ones. You'll see how each one does volumes and LUNs, mapping, network ACLs, etc.

All of them work great with ESXi 5.

How do you like NexentaStor? That's one I haven't used yet and have been looking to test.

Serfer
Mar 10, 2003

The piss tape is real



KS posted:

I've worked with a bunch of vendors and I really like our Compellents, but NetApp and 3PAR are the other vendors I'd recommend without hesitation in the entry enterprise space. I'm sorta surprised to hear you're moving away from NetApp. I doubt Compellent really picks up a lot of their customers.

Be sure you buy series 40 controllers. They will last you a lot longer.

Not the series 50 that is coming out at some point soon? (When does this NDA expire?)
