three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
We just got a Compellent SAN and have been tinkering with it. Compellent's knowledge base is pretty good, but are there any other good resources?

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Drighton posted:

Have any problem in particular? And if you aren't following their best practices document there's a chance you'll get a hosed call with support. Happened to me twice, but I don't hold it against them since their documentation really is pretty good.

Dell gave us this unit for free and specced it out for us, and their Dell Services install guy completely botched the install doing things that didn't even logically make sense. Throughput has been abysmal (we're using legacy mode).

We're having to have a second guy come out and re-do everything this week. Kind of a pain, but it was free so I can't complain too much. We used mostly Equallogic prior.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

tronester posted:

Well luckily the HP proliants have SAS controllers, and will be equipped with 6 2.5" 300GB 10k rpm SAS drives. I honestly believe that they would have enough IOPS for their relatively light workload.

They do not use shared disk clustering.

Why does anyone ever purchase 10K RPM drives? 2/3 of the performance for 9/10ths of the price.
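
A quick back-of-the-envelope way to check that kind of claim (all seek times and prices below are illustrative assumptions, not vendor specs): estimate random IOPS from seek time plus rotational latency and divide by price.

```python
# Rough IOPS-per-dollar comparison of 10K vs 15K SAS drives.
# Seek times and prices are illustrative assumptions, not vendor specs.

def random_iops(avg_seek_ms, rpm):
    """Approximate random IOPS as 1 / (average seek + average rotational latency)."""
    rotational_latency_ms = 60_000 / rpm / 2  # half a revolution, in milliseconds
    return 1000 / (avg_seek_ms + rotational_latency_ms)

drives = {
    # name: (assumed avg seek in ms, rpm, assumed street price in USD)
    "10K 300GB SAS": (3.8, 10_000, 180),
    "15K 300GB SAS": (3.4, 15_000, 200),
}

for name, (seek, rpm, price) in drives.items():
    iops = random_iops(seek, rpm)
    print(f"{name}: ~{iops:.0f} IOPS, ~{iops / price:.2f} IOPS per dollar")
```

With those made-up numbers the 15K drive comes out slightly ahead per dollar, which is the poster's point; plug in your own quotes to see whether it holds.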

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Bluecobra posted:

In my experience, their support on Solaris is lovely. The last time I called Copilot about a Solaris 10 iSCSI problem, they told me to call "Solaris" for help. Also, when we bought our first Compellent, the on-site engineer(s) couldn't figure out how to get iSCSI working on SUSE and resorted to Googling for the solution. I ended up figuring it out on my own. Based on my anecdotal evidence, it seems like this product works best for Windows shops.

I should also mention that we had one of their lovely Supermicro controllers die in our London office and it took them 8 days to get a replacement controller in the office. This was with 24x7 priority onsite support. That being said, I don't think the product is that bad, but it is probably not very well suited for Unix/Linux shops. We just had a bunch of unfortunate problems with it so now it is called the "Crapellent" around the office.

In my experience thus far, their support is terrible. Not to mention their install engineers, who all seem to be poorly trained on Compellent since they're really Dell engineers now. The SAN is okay, but I'd still rather buy EqualLogic.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Serfer posted:

Compellent is pretty big, and it's owned by Dell now. It's also not "white box" equipment, you're just showing that you have absolutely no idea what you're talking about. It's like saying EMC Avamar is a white box just because it's just a Dell 2950.
Technically, Compellent controllers are built using SuperMicro hardware (for now).

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
I don't think Compellent should be considered whitebox, but I also don't think SuperMicro is a great hardware maker. I look forward to them being moved to Dell hardware.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

ILikeVoltron posted:

It marks used blocks and progresses them either up or down tiers (sometimes just a raid level)

It usually kicks off around 7-10pm depending on how you set it up. I've worked with their equipment for around 5 years, know several of their support staff and am currently upgrading to some newer controllers, so I can likely answer any of your questions about their equipment.

How do you handle systems that run on monthly cycles, which would have their data migrated down to the slow tier by then? Disable data progression on those volumes, or create custom profiles?

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned. I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

In addition to OpenFiler, FreeNAS is an option.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

ragzilla posted:

Could be their NAS offering. IIRC their NAS was just WSS in front of a regular cluster.

The controllers are BSD based. I think their zNAS is too, since it uses ZFS.

They're coming out soon with a new NAS head equivalent to what was just released for the EQL (http://www.equallogic.com/products/default.aspx?id=10465).

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

szlevi posted:

Well, I think they are some Linux fork, at least according to my sales guy, who's a pre-Dell employee. I asked them specifically about it and got Linux every single time, even when I asked whether they're part of the BSD crowd like most SAN vendors...

...it would be interesting to learn he's wrong though. :)


They don't even suggest it if you run a Windows network; they tell you straight up "you don't want that" and that you should get the WSS ones.


Wow, that would make a LOT of sense, especially w/ 10GbE front end (FS7500 is gigabit only. :()
Just when do you think it will be introduced? March-May?

You might be right. I thought they were BSD-based like the Equallogic.

I think the NAS head is due later this year. If you have a Dell rep, they may be able to pinpoint it to a specific quarter.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

evil_bunnY posted:

1) This is kinda disingenuous: you want the raw capacity to matter somewhat, but do take into account what you'll pay to get the raid level you'll actually use (if you say RAID5 I'm going to lol) and do take into account what dedup/thin provision will get you.
2) Who complains about Netapp? Compellent?

If you can run Windows VM's and aren't bothered by the MS licenses, there are a few good reasons to not do this. It's not like you can't back them by dedup'ed datastore.

Why would you 'lol' at RAID5?

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

evil_bunnY posted:

Rebuild failures. It's happened twice to me before (once on a Dell MD which was bad enough, the other time on a semi-old EMC unit), and it's mathematically very likely to happen during any storage system's lifetime.

So I'm guessing you really hate RAID 50?
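
For anyone wondering why rebuild failures are "mathematically very likely": below is a minimal sketch of the usual unrecoverable-read-error (URE) argument. The drive count, sizes, and URE rates are assumptions (1 error per 1e14 bits is a commonly quoted spec for nearline SATA, 1e15 for enterprise drives), not figures from anyone in this thread.

```python
import math

# Probability of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID 5 set, where every surviving drive must be read
# in full. Drive sizes and URE rates below are assumptions.

def rebuild_failure_probability(drives, drive_tb, ure_per_bits=1e14):
    """P(at least one URE) = 1 - (1 - 1/ure_rate) ** bits_read."""
    bits_to_read = (drives - 1) * drive_tb * 1e12 * 8
    return -math.expm1(bits_to_read * math.log1p(-1 / ure_per_bits))

# 8 x 2TB nearline SATA drives, one failed, URE spec of 1 per 1e14 bits:
print(f"{rebuild_failure_probability(8, 2.0):.0%} chance of a URE during rebuild")
# Same geometry with a 1e15 URE spec (typical enterprise drives):
print(f"{rebuild_failure_probability(8, 2.0, ure_per_bits=1e15):.0%} with better drives")
```

RAID 50 shrinks the exposure per rebuild since only one RAID 5 leg is read, but the same math still applies to each leg.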

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Rhymenoserous posted:

Look at Nimble. It's a bit over your price range, but I'm pushing about 50% block-level dedupe (they keep referring to it as compression, which it technically isn't) on my SQL DBs, and my VMFS operating systems datastore is barely using any of the space I allocated it, to the point where I'm about to create a new datastore, migrate, and reclaim some storage.

I've just started working with this thing and compared to the frustrations of EMC and Unisphere I'm having a blast.

I would never use Nimble given how small and new they are. We looked at them in the past because one of their reps is a personal friend of one of the managers here, and they have such a small user-base and team. Just my personal opinion.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Hyrax posted:

Disclaimer: I'm not a storage guy (I do VMware stuff mostly), but I do have to run a couple of EqualLogic setups.

We added a new member to a four member pool and after it had initialized its RAID setup we started seeing tons of closed sessions from our iSCSI initiators on all of our ESX hosts. The message ends with "Volume location or membership changed". Within three seconds the initiator picks the connection back up and is fine for anywhere between fifteen minutes and an hour, then it'd do it again. Those few seconds are enough time to piss off the application that these servers are running and essentially make them unusable and that goes against our SLAs on that app. EqualLogic support suggested pulling that new member from the group which didn't fix it immediately; it took another six or so hours after that member had evacuated for the group to stabilize and stop throwing errors.

EqualLogic wanted to blame our NIC config on some of our ESX hosts (a pair of teamed 10G nics with all traffic VLAN'd out), but we'd been running fine with that config for months without any problems. Also, there are a couple of ESX hosts that have separate 1gig NICs for iSCSI traffic. Also, we have a lone Windows host that runs Veeam that also had connection issues, so I highly doubt it's a network config that's the issue.

So, does anyone have an idea what to make of the "Volume location or membership changed" message? That reads to me like the group was moving data around and pissed off the initiators, but I'm just pulling that interpretation out of my rear end. Any ideas or things that I should check on that new member before I try to put him back in? I need the capacity sooner rather than later, but I don't need another day that blows up our SLAs for the month.

What firmware are you running?

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
EQL also has limits on the number of connections in a group and in a storage pool. I don't recall what they are, but I'd look into that. I'm assuming adding a member means adding connections.
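
The actual group and pool ceilings depend on the firmware release, so check the release notes for yours, but the arithmetic for estimating where you stand is simple. A rough sketch; the limit values and the per-host session model here are placeholders and assumptions, not official EqualLogic numbers:

```python
# Rough estimate of iSCSI connection count against group/pool ceilings.
# GROUP_LIMIT and POOL_LIMIT are placeholders -- substitute the values from
# your firmware's release notes. The "one session per path per volume per
# host" model is also a simplifying assumption.

GROUP_LIMIT = 1024  # placeholder
POOL_LIMIT = 512    # placeholder

def estimated_connections(hosts, volumes, paths_per_volume_per_host):
    return hosts * volumes * paths_per_volume_per_host

conns = estimated_connections(hosts=12, volumes=20, paths_per_volume_per_host=2)
print(f"~{conns} connections (group limit {GROUP_LIMIT}, pool limit {POOL_LIMIT})")
```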

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Compellent is absolutely terrible. We were given a unit due to the amount of business we do with Dell, and it's a pile of junk.

Controller crashes that they can't explain. Blaming issues on firmware being out of date even when the array says there are no updates (you literally have to call to find out whether updates exist and to get them released to you; in the meantime, "Check for Updates" will happily tell you the array is fully up to date). Copilot support rebooting the wrong controller when one is down and bringing your entire storage down. Copilot support blaming performance issues on using thin-provisioned VMDKs. Copilot support saying a massive performance issue is due to the number of disks (we're talking each disk getting like 10 IOPS, yes 10, not 100).

Do never buy.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Intraveinous posted:

That's quite odd... What series controller(s)? What version of Storage Center are you running?

I've had nothing but extremely good experiences with Copilot, and never had a problem getting the help I need.

Yes, it's a little annoying that you have to call and open a case to update the firmware/Storage Center software. But it's nice that when you do, they remote login and do a health check before releasing the software. If you set up an alert in Knowledge Center, you'll get an email notifying you whenever there's a new version, and then you can call in and start the case.

I've had mine about a year (SC40 controllers, on 5.5.6 OS right now) running VMware and Oracle Database backends. I have plenty of thin-provisioned VMDKs that were storage-vmotioned over from an old array, but I don't see problems with that. I've never had my controllers lock up, and never had Copilot reboot either of my controllers. I routinely hit 300-600MB/sec and 11K IOPS during busy times, with latency staying below 4ms.

Don't you get a survey link every time you open a case? If you're really having that many issues, I'd give a negative survey response and wait for the calls to come in. I rated something negatively the first time a Dell contractor came out to replace a part; he had never worked on any of the Compellent gear and wasted over an hour reading manuals for a cache card replacement. I got a call from a manager asking for more details a few hours after I submitted the survey. The next time I had someone out (their lab had identified a problem in some limited cases with Emulex 8Gb FC cards, so they proactively replaced them with Qlogic 8Gb FC cards), the same guy showed up. He said that after the last visit he got a call asking him to schedule some Compellent training at Dell's expense, and he was a lot more comfortable working on them now.

I've been nothing but pleased so far.

We have the same controllers, 5.5.3 OS. We have had Dell and Compellent come into our office numerous times. We've had Storage Engineers, Dell Tiger Team, etc. We've given them beyond the benefit of the doubt. I find most people that like them have only used cheap devices or local storage.

Compellent doesn't even support any of the VAAI features except space reclamation, so I'd never use it for virtualization.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Intraveinous posted:

That's right, I remember our conversation now. You gave me your SR number which I relayed to them. My bad, I guess I should have said, "Their lab was able to reproduce and verify the problem discovered by KS." Credit where credit is due, thanks for your help.

I'm just about to add more disk to Tier 3, at which point I'll be up to about 115TB total between two arrays. Offer still stands to take the array off your hands, three.

I imagine the array itself has hardware issues, which Compellent seems unable to see. I'd bet a different unit would probably be fine; however, the experience with a bad unit has shown how poor Compellent and Copilot are.

Any support team that accidentally reboots the wrong controller and brings down the storage is bad. They've been unable to even diagnose some issues (e.g. one controller hanging on a firmware update). Their response times on difficult tickets are pretty terrible, too.

SC 6.0 finally implementing VAAI is nice. At least you can rest assured that technologies will be supported eventually.

In any case, we'll likely use this for junk projects and keep critical machines on a vendor that isn't terrible.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
You could do a lot more with 30k, but it seems you feel the need to justify it instead of putting that 30k to good use. It sounds like your company is pretty terribly run, so they won't know any better either way.

three fucked around with this message at 00:45 on May 24, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Chad Sakac owns, so that video owns by proxy.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
If you're going to make a SAN for use with virtual environments, it's pretty silly to not fully support VAAI.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Internet Explorer posted:

Oh, I missed that part. I can't imagine having 6TB worth of VMs and only having 16k USD to work with. On the Equallogic side you actually can get boxes that have a mix of storage. Same with EMC. Should be the same with NetApp.

Equallogic doesn't offer mixed SATA/SAS in the same unit, afaik; only a mix of SAS and SSD.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Beelzebubba9 posted:

I'm going to bump this too. The internet seems to have very good things to say about Nimble's product, and the people I know who use them really like them, but it wasn't in a production or similarly stressed environment.

....or do I need to be SA's $250K guinea pig?

How comfortable are you with being one of a very small number of users? It means you'll be the one running into bugs more often than with the bigger vendors, who probably have just as many bugs but have more people to find and fix them before you notice.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
You have to configure all of your hosts to tolerate ~40 seconds of the EQL array being unavailable during firmware upgrades. Maybe this is recommended with other iSCSI arrays, but I haven't run into it yet?
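
In practice that means pushing guest and initiator disk timeouts above the failover window, e.g. the Windows disk TimeOutValue registry setting or node.session.timeo.replacement_timeout in open-iscsi. A minimal sketch of the kind of sanity check involved; the 60-second target and the per-host values are assumptions, not Dell's official guidance:

```python
# Sanity-check that each host's disk/iSCSI timeout exceeds the window the
# array may be unreachable during a controller failover or firmware update.
# The 60-second target and per-host values below are illustrative assumptions.

REQUIRED_TOLERANCE_S = 60  # the ~40s observed outage plus some headroom

host_timeouts_s = {
    "esx-guest-windows": 60,  # e.g. Windows Disk\TimeOutValue
    "linux-initiator": 120,   # e.g. node.session.timeo.replacement_timeout
    "veeam-host": 30,         # a box someone forgot to adjust
}

for host, timeout in host_timeouts_s.items():
    verdict = "OK" if timeout >= REQUIRED_TOLERANCE_S else "TOO LOW"
    print(f"{host}: {timeout}s -> {verdict}")
```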

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

szlevi posted:

I never did it but IIRC my values are ~30 secs and my hosts all tolerate failovers just fine...

Do other arrays require this, and specifically state this requirement?

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
People hang out on Spiceworks forums?

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
The main problem with VSA originally was its limitations: vCenter not being able to run in the same environment, one VSA per vCenter, no ability to expand, and having to use pristine hosts. A lot of those are resolved in the next release.

I had not heard of any reliability concerns. It is still overpriced solo, but I believe the bundle pricing is a lot more reasonable.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

FISHMANPET posted:

Holy poo poo you have no idea what you're doing. This is what a SAN is. If you want to find some crazy overpriced FC gear go nuts I guess, or see if you can find some direct attached storage but I don't know if that exists anymore anyway.

If I were you I'd be way more worried about the overhead of running second equipment all over.

Well, you see, FC cables plug directly into the hard drive platters so there is no overhead like iSCSI. :psyduck:

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

EoRaptor posted:

If they made a mac mini with 2 (or more) Gb ports, I'd wedge one of those in just to serve AFP from an iSCSI LUN, but the current mini + thunderbolt to ethernet just seems to be asking for problems.

You can use the thunderbolt port as a 10Gb port.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

XMalaclypseX posted:

The transfer rates are over both ports.

What are you using to benchmark those numbers?

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Syano posted:

Our environment is small enough that I can still afford dedicated switches for our SAN. I have a buddy who is an admin over on the other side of town who constantly rides me about wasting ports by dedicating switches just to SAN traffic and how I should be just running all the traffic through the same top of rack switch and VLANing from there. Who cares I say, as long as I can afford it I am going to keep it simple and keep layer 3 off my SAN switches.

You should ride him about being terrible. You're doing it the correct way. iSCSI should have its own dedicated network.

The only justification for not doing it that way is if he has no budget, and if so then maybe he should've gone NFS instead of iSCSI.

three fucked around with this message at 00:50 on Sep 11, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

adorai posted:

A non routed vlan is not layer 3, it's still layer two. Dedicating a switch (and nics) to iscsi is a waste.

It's silly to not pay ~$15,000 for a couple of stacked switches to get: increased stability; better performance (especially if the backplane of your top-of-rack switches can't support the full bandwidth of all the ports); higher reliability, since the environment is pristine; protection from the network team causing blips that really hurt iSCSI but not typical traffic; and easier management when pinpointing issues, changes, and configuration problems.

I will concede that this is a debatable approach, but I can't believe any storage admin wouldn't dedicate NICs at the very minimum (especially with 1Gb; most environments don't need 10Gb).

three fucked around with this message at 01:28 on Sep 11, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Misogynist posted:

At this point, it really depends on whether you're using (round-robin) 1 GbE or 10 GbE for your networking. There's some pretty major cost concerns associated with the extra 10 gigabit switches if you want any kind of reasonable port density at line rate. Lots of people do converged for their 10 gigabit infrastructure, but they don't have the problems we've had with unannounced network maintenance taking down my cluster on my wedding day because of isolation response :shobon:

Price goes up a bit if using 10Gb (which most people probably don't need), but it's the foundation for a SAN infrastructure that probably costs several hundred thousand dollars.

The common belief that iSCSI is worse than FC comes from people trying to implement it on their existing network infrastructure, and the problems that causes.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

NippleFloss posted:

The issues being mentioned are pretty overblown.

[snip]

And if they do go that route then they will probably just buy FC anyway and have a generally more stable storage backbone.

Your post contradicts itself. You say it's overblown, then say FC is more stable. The reason FC is considered more stable is that it uses its own switches and people don't usually gently caress with it and gently caress it up. The issues mentioned are real-world scenarios, not hypothetical paradises where people don't do dumb things and break your storage network. (Hint: a lot of the people anyone will work with are really bad at their jobs.)

Having a pristine, less hosed with, easily monitored and managed environment is so critical to stability. And the cost is so negligible given the cost of a quality Compellent, EMC, NetApp, etc array. Don't cut corners on storage.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

HP's website is painfully designed.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
I am really looking forward to VMware making the VSA/etc awesome so that it is actually feasible for most environments. It's a long way away with all the limitations it has now, but I think it's the future. Getting rid of the SAN would be awesome.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Rhymenoserous posted:

It will never happen, there are a ton of reasons to go with a shared chunk of external storage outside of the capacity arguments that VSA are likely to solve. For a small business though I see VSA as a godsend.

I disagree. SANs aren't used because people love them for virtualization; they're used because they're a requirement for HA/DRS/etc. Nutanix is already working in this space. It's just a matter of time.

Corvettefisher posted:

The Data Protection appliance? I have it sitting in a lab, might run a few tests if there is something particular you are looking for, and get back to you on it. I have problems managing it through anything other than the web client for some reason...

The Web Client is required to use it; it won't work (along with many other new features) with the standard client. It's a decent product; I haven't used it in production, but I set it up in my lab. Lots of little "gotchas," but it's still a new product. It depends on how much VMware sinks into it. If they give it all the bells and whistles, they'll deal a serious blow to several partner companies (Veeam, PHD Virtual, and Quest to a lesser extent, since Quest has a lot of other software and is owned by Dell now).

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Rhymenoserous posted:

Do you really think an "All in one" virtualization box is really going to throw the world all a twitter? I'm kind of skeptical.

What is the benefit of continuing the traditional SAN architecture?

I would rather have a resilient scale-out infrastructure that uses cheaper technology. Scale-out SANs are already very popular (e.g. Equallogic), so let's go a step further and push that into the server, make it resilient and highly available, and ditch the behemoth SAN architecture. Solid-state drives becoming affordable and easily obtainable makes this idea easier, as well.

Push everything into the software layer.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

cheese-cube posted:

I've worked with SANs/NASs for several years and I like to think I'm somewhat on top of things but what the gently caress defines a "scale-out SAN"? A quick search on Google has simply led me to believe that it's just another lovely buzzword.

Equallogic calls it "frame-based" versus "frameless".

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Moey posted:

What if I only have a limited number of hosts and have exceeded the internal (software shared) storage in them? I would be forced to purchase another host + licensing. With a traditional SAN you would just be adding on a shelf.

Perhaps we will see server architecture change to accommodate this. There's no particular reason a server can't have "shelves" added.

Also, Equallogic, for example, can't have shelves added to it. You have to buy a whole new member, which includes two controllers. Controllers are, more or less, "compute," so you're paying roughly the same price in that approach as you would in the SAN-less approach, except that in the SAN-less strategy you also gain compute capacity in your virtual environment.
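
To make the trade-off concrete, here's a toy comparison under made-up prices and capacities (none of these numbers come from Dell or anyone in the thread): the same spend on an array member versus a hypervisor node with local disks, where only the latter also adds compute.

```python
# Toy comparison: spend the same budget on another array member vs another
# hypervisor node with local disks. All prices and capacities are made up;
# the point is only that the SAN-less spend also buys VM capacity.

BUDGET = 40_000  # hypothetical dollars

options = {
    "array member":  {"usable_tb": 20, "vm_cores": 0,  "vm_ram_gb": 0},
    "SAN-less node": {"usable_tb": 16, "vm_cores": 16, "vm_ram_gb": 256},
}

for name, box in options.items():
    print(f"{name}: {box['usable_tb']} TB usable at "
          f"${BUDGET / box['usable_tb']:,.0f}/TB, plus "
          f"{box['vm_cores']} cores / {box['vm_ram_gb']} GB RAM for VMs")
```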
