Sickening
Jul 16, 2007

Black summer was the best summer.

Sickening posted:

Anybody have any experience with EMC's pricing model? We haven't made it to the stage of the itemized quote yet, but I am interested to know where the money is made. The usual suspect is always support, but I am really curious about the pricing of their FAST Cache disks.

Quoting myself after posting in the wrong drat thread.


Sickening
Jul 16, 2007

Black summer was the best summer.

Vanilla posted:

In what sense?

They have margin everywhere, just like any such vendor, but I recall support actually being quite untouchable in terms of discount.

Are the FAST Cache disks appearing too expensive?

Also, what size fast cache disks are you going for and how many of them?

3rd day on the job, so I haven't been given any specifics yet. We have been quoted something like 60k for a 20 gig usable array, but I haven't seen the full specs, so I was just curious. My only real storage experience has been with NetApp.

Sickening
Jul 16, 2007

Black summer was the best summer.
EMC folks, does Unisphere even make sense to purchase with one unit?

Sickening
Jul 16, 2007

Black summer was the best summer.

Langolas posted:

Unisphere in general or added features? Basic management for the array is handled with Unisphere regardless, but there are added features and enablers you'd be buying licensing to use beyond the basic packages. I'll go see if I can find the older EMC sales guides I've used in the past, but I'm pretty sure general Unisphere support comes with the array when you buy it. (I need to double check this, as it may have changed, and I haven't done a quote myself in a while, so I could be way wrong on this.)

Generally I would say no to the added packages if it's just one array. It just depends on what software licensing you are looking at getting into.

Also, are you looking to do a full unified box with the NAS portion, or just block only?

Block only. Unisphere showed up as its own line item for like 2k (along with the other usual bs they try to sneak by), and it had me a little confused. I asked why it was listed separately, because I would assume it's part of the VNX system, and he said it was a requirement. I understand licensing other features like FAST Suite and Local Protection, but the GUI?

Sickening
Jul 16, 2007

Black summer was the best summer.

Dilbert As gently caress posted:

That's probably the management package, which can be handy depending on how large the virtual environment is and how you want to look at data.

It's for a very small environment.

Sickening
Jul 16, 2007

Black summer was the best summer.

Vanilla posted:

I thought Unisphere was mandatory?

It really might be. I have no clue about EMC practices. I am pretty sure none of the line item quotes I have gotten for my previous storage ever had the console as a separate 2k line item.

Sickening
Jul 16, 2007

Black summer was the best summer.

Internet Explorer posted:

If you don't have Unisphere are you going to admin the entire thing by command line? It would be like ordering something without the web management interface.

If that's the case then that's fine. This particular quote had about a third of it being pointless bullshit, which is even more than I am used to with vendors. I just don't know this vendor well enough to know if the salesperson is full of poo poo. He told me the same thing.
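
For context, GUI-less management on these arrays means naviseccli for everything. A rough sketch of what that looks like (placeholder IP, and the same factory-default credentials quoted later in the thread):

naviseccli -h 10.0.0.50 -user sysadmin -password sysadmin -scope 0 getagent
naviseccli -h 10.0.0.50 -user sysadmin -password sysadmin -scope 0 getlun

The first dumps basic array/SP info, the second lists the LUNs and their state.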

Sickening
Jul 16, 2007

Black summer was the best summer.

Langolas posted:

This is correct. It runs off the management service on the storage processors themselves.

I'm standing by my call that it's a piece of poo poo line item that you should fight them on, Sickening. It's not worth $2000 even with all the features.

Talked to EMC/CDW for more than an hour today on a conference call. They are trying to say that Unisphere is the OS of the storage processors and is a required charge. They gave a lot of excuses for why it's a separate line item. It seems to be just part of the unit. I don't really care.

If they didn't keep trying to sell installation services that cost more than 50% of the solution, I wouldn't assume they are such shady assholes. Why are storage salesmen the car salesmen of the IT industry?

At least getting initial quotes is easy. I got competing quotes from Dell, NetApp, and EMC. Who else should I be talking to?

Sickening
Jul 16, 2007

Black summer was the best summer.

Docjowles posted:

Perhaps IBM, as Misogynist said above. He put me in touch with their guys for a project and they were great; it just didn't work out because our needs were too low-end. Well, combined with the fact that my boss at the time literally didn't believe in SANs and would rather store critical production data on crappy 1U white-box servers with 4 consumer SATA drives in them. Which were EOL, and the only replacement "source" was eBay.

But if you're talking seriously with EMC, IBM is not out of your price league :)

Our budget is around 35k. I have spent too long running already-purchased storage systems, and I have only been through one other order before (which I had no influence over at all). It's my first time on the buying side with actual power. It's amazing how stupid you feel on your first go-around.

Sickening
Jul 16, 2007

Black summer was the best summer.

KillHour posted:

Are we talking iSCSI or FC here? Do you need any specific features, or is this just a place to dump a ton of data?

The Promise VessRAID might be a good place to start; VR2600FIDAME would be both iSCSI (1Gb) and FC (8Gb) in the same box with redundant controllers and 48TB raw HDD capacity for ~15k.

http://www.promise.com/media_bank/Download%20Bank/Datasheet/Vess%20R2000%20DS%20v1.6_20130924_en.pdf

FC. It's what I know, and it should meet our needs for as long as we own the next system. Tiering has always been something I have liked to play with, and it makes sense for what we do.

We are a small shop that appears to be growing at a pretty decent pace. We only have a NAS right now and everything else is local storage. We are talking 3 VMware hosts with about 15 guests. Most of those guests are SQL databases that run some pretty heavy production apps.

Sickening
Jul 16, 2007

Black summer was the best summer.
Speaking of EMC, I am having a hell of a time with what should be one of the most basic things on my new VNX. For some reason I can't find a single place in Unisphere to change the network settings of my management port. Right now I am connecting to the service port, but I can't find the drat thing in the GUI.

Sickening
Jul 16, 2007

Black summer was the best summer.

Amandyke posted:

I am going to assume that this is a block only system.

There are a couple ways to do it. With Unisphere, you would want to log in, click on System, then Hardware, then on the right click on SPA Network Settings. The window that pops up should allow you to change the IP on SP A, then just do the same for SP B.

Via CLI:
naviseccli -h ***SPA_IP*** -user sysadmin -password sysadmin -scope 0 networkadmin -set -ipv4 -address ***NEW_IP*** -subnetmask ***NEW_SUBNET*** -gateway ***NEW_GATEWAY***

Then just do the same for SPB, pointing the command at SPB's IP address.

I figured that part out at least. Them being called virtual adapters in that settings menu threw me off. The issue I am having now is that I can ping those addresses but can't access Unisphere through them.
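
A quick way to tell whether it's the network or the management service itself is to poke the HTTPS port directly. A rough sketch (placeholder IP, assuming Unisphere is on the default port 443):

curl -k https://10.0.0.50/
nc -zv 10.0.0.50 443

The first should return the Unisphere launch page if the management server is answering (-k skips the cert check); the second just tests whether the port is open. Ping working while 443 is closed usually points at the management service rather than the IP change.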

Sickening
Jul 16, 2007

Black summer was the best summer.

zen death robot posted:

I kind of forgot about this thread, but if anyone is ever planning on using block deduplication on 2nd gen VNXs, be very, very careful about doing it. I had a major issue last month where, even when we followed all of the best practices laid out in the whitepapers, we had some blocks deduplicate a little TOO well, and it brought down the SPs and we had rolling outages on the drat array for a week. The only reason we were able to get ourselves out of the mess was because we had everything behind VPLEX and had enough disk space to migrate the LUNs out of the deduplicated space into standard thin LUNs.

I really don't think the block deduplication is ready for use at all. Maybe if you're planning on using it for some kind of archival storage only or something, but really don't even bother with it then. The pool we had was built out of all flash and 15K 300GB drives, so it wasn't even an IOPS issue, it's just not "smart" enough to realize that it's got too many references back to the same block and to split it off to a new one. It'll just beat itself to death, and the process runs all on CPU0.

EDIT: I should add that this particular environment is a horribly set up "VDI" cluster where the people who actually do the day-to-day management refuse to implement linked clones. So to make up for that we used deduplication on our old NetApp arrays, and assumed we would be able to do the same on our VNX2 replacements. In some ways I'm not too broken up about this, because I'm just using it as ammo for them to start doing this poo poo the right way and start using linked clones instead of deploying full-blown images for loving virtual desktops. :rant: You can probably imagine how the deduplication engine managed to eat itself alive when you have 500 Windows 7 and another 500 Windows XP desktop deployments that are mostly identical in the same pool. It's a pretty extreme case, but it's something to consider before turning it on.

Well, you are right, your setup is pretty dumb and was destined to fail. You had 500 Windows machines targeting the same pool? Hell, were they all on the same LUN? Which VNX system are you guys using? Were your dedup settings set to max? Sounds like a system that was being too greedy, when disk is especially cheap these days.

Block dedupe is a great tool and makes a lot of sense. From a SAN perspective it's not a magic pill that is suddenly going to make your storage needs drop by 1000%. In all honesty it shines more with the NL-SAS storage pools you are tiering your less active data down to (which, yeah, is more like the archiving scenario you mentioned). Your most active data should be on your 15k and flash, but even then you need to split it up among pools to ensure your processing isn't going to be pushed.

The NetApp way of doing things and the EMC way are just different. You really shouldn't have expected your design to stay the same going from NetApp to EMC. I imagine the flash %, the dedupe policies, and the tiering all needed to be rethought (and I guess you probably aren't even using tiering effectively, if at all).

I have just gone through the same transition from NetApp to EMC, but I had the ability to build from the ground up.

Sickening
Jul 16, 2007

Black summer was the best summer.

Cavepimp posted:

Anyone know of any major reasons why I shouldn't pull the trigger on an EMC VNX 5200 for a small (3 host) VMware environment? This is a severely time-constrained project and I'm already familiar with its little brother (have a VNXe 3300 already), so this is looking like an attractive option I could get up and running quickly.

I just set up the same machine 2 weeks ago. Are you just going with block?

Sickening
Jul 16, 2007

Black summer was the best summer.

Cavepimp posted:

Yep, just going with iSCSI using the 1Gb ports (4 onboard, 4 on a card) and MPIO. The idea is to fairly closely mirror my existing VNXe's setup for simplicity. If I do that, I should be able to get it set up in a day or two max and move on to other things.

There really isn't much to block. I found it pretty painless and fast. We used FC though.
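
If you do go iSCSI with MPIO, the one ESXi-side step worth calling out is the multipathing policy. A rough sketch (the device identifier is a placeholder, and this assumes the VNX presents through the ALUA SATP):

esxcli storage nmp satp set -s VMW_SATP_ALUA_CX -P VMW_PSP_RR
esxcli storage nmp device set -d naa.6006016xxxxxxxxxxxxxxxx -P VMW_PSP_RR

The first makes round robin the default for new VNX devices; the second sets it on an existing device.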

Sickening
Jul 16, 2007

Black summer was the best summer.

sudo rm -rf posted:

Because our Domain Controllers, DHCP Server, vCenter Server, AAA Server and workstations are also VMs, and at the moment everything is hosted on a single ESXi host. I can get more hosts, their cost isn't an issue - but additional hosts do not offer me much protection without being able to use HA and vMotion. That's a reasonable need, yeah?

HA/vMotion is more for an environment that has uptime/disaster recovery needs. The environment you posted (outside of a DC) doesn't really have those needs. If 10k is the budget, you will get more bang for your buck by adding another beefy host and skipping the storage completely.

The perfect goal for you would be 2 hosts with central storage. I don't think it's in the cards with that budget.

While I would normally recommend shared storage for almost any virtual environment, your budget is poo poo, and planning for any kind of growth with that budget isn't going to happen this time around.

Sickening fucked around with this message at 21:24 on Jun 29, 2014

Sickening
Jul 16, 2007

Black summer was the best summer.

sudo rm -rf posted:

What kind of budget do you think would be the minimum required to get my DCs protected and plan for a minimal amount of growth?


This sounds like a good plan, and our hosts are pretty beefy - the discount we get on UCS hardware is significant.

Edit: I really appreciate you guys walking me through this. I'm essentially a one man team, so I don't have a lot of options for assistance outside of my own ability to research.

I would think somewhere in the 25k to 30k range. Servers, expandable storage, and the networking behind it. It might seem like a lot, but after the extra fees, support contracts, and tax, that is where you are going to be.

I guess it also depends on what minimal means to you. DCs aren't exactly going to be power-hungry VMs. They could probably sit on the cheapest of the cheap storage and not show any difference in performance. It's the rest of the infrastructure that is going to drive your disk needs as far as capacity and speed.

It just seems like any host + storage combo you get for 10k is going to be complete poo poo in a hurry. If that is your hard limit, you would be better suited getting a great host for your critical stuff and accepting some downtime restoring services from backup if a host fails. If you don't have backups at all, a host + backup solution would be more important than host + SAN.

Sickening fucked around with this message at 22:24 on Jun 29, 2014

Sickening
Jul 16, 2007

Black summer was the best summer.

sudo rm -rf posted:

What do you mean by this?

Ugh, are you serious?

Sickening
Jul 16, 2007

Black summer was the best summer.

sudo rm -rf posted:

Yes? I wanted to see if some of the equipment I needed to include in my budget was something I already had.

What do you want from me? If you're willing to impart professional advice, I'm absolutely willing to hear it. If you don't want to, cool, that's fine too. I've never used enterprise storage in a professional setting before; I've been out of college barely a year, and I've had my current (only) job for even less. I apologize if my questions are annoying, but you are free to ignore them if they are so loving unbelievable. I don't think being an rear end in a top hat about it is warranted.

I'm trying not to be a dick; it's just that you bolded the most generic part of my post, and that doesn't help me help you. I will give a more in-depth answer, just guessing at what it is that you are not clear on.

First, servers. In a server/SAN environment the only things that are going to matter are CPU and memory. The hosts are largely interchangeable and can be replaced or worked on at will. The faster the better. You want something that is going to be an upgrade to your enterprise with each purchase, but you also want as much bang for your buck as possible. Servers in your situation are easy.

Storage. Storage is the hardest part of your purchase. Buy something cheap and performance could suffer (i.e., worse than the local disk you have right now). Something cheap might also die faster or need to be replaced altogether a lot sooner. Balancing getting the most for your money while meeting all your requirements is hard. Whether you can reasonably add more disks later (if you need more capacity or speed) can also depend on how much you spend now.

Networking. You have to connect your servers to your storage somehow. iSCSI and FC are common choices. Not knowing what you are using now could mean that you are going to have to buy a new switch. Being that you work for Cisco, this is probably more of a non-issue. Still, I don't know what your total discount will be.

I say all this with the reasonable assumption that you have a backup scheme in place that works.


Sickening
Jul 16, 2007

Black summer was the best summer.

Langolas posted:

If you aren't getting the monitoring software for free/rear end cheap on your quotes, you didn't play your cards right.

Also, don't get me started on Cisco and Microsoft licensing. Holy gently caress, that can be a nightmare.

One trick I've used for customers is to have their hosts monitor disk utilization and related metrics, and if we see any performance issues, I can get EMC to give me a free performance review from their support guys. They get into the nitty gritty and find issues to fix, or tell me if I need to add some more spindles.

Second trick is to say we need to buy some more EMC equipment but need help sizing a quote: can you do a performance review and let us know what's up?
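
For the host-side collection, a rough sketch on an ESXi box would be batch-mode esxtop (the interval and sample count are just example values):

esxtop -b -d 30 -n 120 > /tmp/esxtop-perf.csv

That's an hour of 30-second samples in CSV form, including the per-device latency and queue stats you'd hand to EMC for the review.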

I haven't heard of anybody getting the storage monitoring tools for free or cheap unless they are already spending a god awful sum. Am I missing something here?
