parid
Mar 18, 2004
I have had similar experiences with NetApp really bending over backwards to fix mistakes. That's something we haven't seen from someone like, say, Commvault.

I wonder if this is a byproduct of the heavy competition (and spending) in the storage space.

NullPtr4Lunch
Jun 22, 2012

Bitch Stewie posted:

Still leading with the HUS 110. Hitachi seem deathly honest but it would be useful to know if you consider there to be any "must have" license options?

Well, given my weirdo latency problems, I wish we'd bought the Tuning Manager.

Bitch Stewie posted:

We're planning on doing FC direct connect, so other than tiering and the performance analyser license, I don't see much else that jumps off the page as something we'd need?

In retrospect, I honestly wish we'd done FC direct to begin with. It would have been cheaper, and that's before even considering the extra set of switches I had to buy because the ones the VAR recommended were wholly inadequate.

Bitch Stewie posted:

Incidentally, do you have VAAI? I'm still a little hazy on how the zero reclaim works depending on whether you have it enabled or not (we're cheap scum so we only have vSphere Standard licenses).

I didn't install their vSphere integration stuff, so I can't really speak to it.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Bitch Stewie posted:

Incidentally, do you have VAAI? I'm still a little hazy on how the zero reclaim works depending on whether you have it enabled or not (we're cheap scum so we only have vSphere Standard licenses).

Zero page reclaim just consolidates the "thick" pages within the pool and then releases any pages that are all zeros. So if you provision an eagerzeroedthick VMDK of 100GB in a DP (Dynamic Provisioning) pool, it will take up 100GB of space in the pool, but if you run zero page reclaim it will shrink back to 0GB used in the pool (or however much data you've actually written to it).

It will still benefit from the improved first-write latency that you get from eagerzeroing, but it will act as if it were thin provisioned on the storage. Not hugely useful outside of that scenario, but better than nothing. Basically dedupe that only works on zeroed pages.
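To make that concrete, here's roughly what the scenario looks like from the ESXi side (a sketch; the datastore path and VM name are made up, and the reclaim itself is kicked off from the Hitachi management tools, not from the host):

# Provision a 100GB eagerzeroedthick VMDK on a datastore backed by a DP pool.
# Every block is zeroed up front, so the pool initially allocates all 100GB.
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/hus-datastore/testvm/testvm.vmdk

# After zero page reclaim runs on the array, the all-zero pages are released
# back to the pool; the guest still sees a full 100GB disk.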

KennyG
Oct 22, 2002
Here to blow my own horn.
Out of curiosity, what kind of capacities are you running in your VNX2/VPLEX setup?

Kaddish
Feb 7, 2002
You guys complaining about IBM support, have any of you experienced Premium Support with an Account Advocate? It's pretty boss.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Kaddish posted:

You guys complaining about IBM support, have any of you experienced Premium Support with an Account Advocate? It's pretty boss.

Can he add contra-rotating cabling to your V7000?

Kaddish
Feb 7, 2002

NippleFloss posted:

Can he add contra-rotating cabling to your V7000?

No sir. Are there any SAS systems that have that type of cabling? I've only seen it with fiber.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

I'm sure it's great; meanwhile, us scum with normal 24x7, 4-hour onsite response time support get the shaft from IBM. Today at 3:45 PM EST, a controller failed in one of my V7Ks. I'm still onsite now at 12:16 AM EST, a replacement hasn't even been dispatched yet, and all they've had me do is reseat the fucking thing. Awaiting a callback from the National Duty Manager now, and it's already been a half hour since I escalated this to him for the second time.

Fuck IBM

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

mattisacomputer posted:

Fuck IBM
Well, you know the good thing about IBM? No one has ever gotten fired for buying IBM.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Kaddish posted:

No sir. Are there any SAS systems that have that type of cabling? I've only seen it with fiber.

Sure, NetApp does it with their SAS expansion shelves. I'm sure other vendors do too. It's really baffling why one wouldn't; it's a sound design for resiliency.

evil_bunnY
Apr 2, 2003

Kaddish posted:

No sir. Are there any SAS systems that have that type of cabling? I've only seen it with fiber.
My pretty entry-level NetApp does.

Kaddish
Feb 7, 2002
Oh. Well then. That's pretty dumb.

Cavepimp
Nov 10, 2006
Anyone know of any major reasons why I shouldn't pull the trigger on an EMC VNX 5200 for a small (3-host) VMware environment? This is a severely time-constrained project and I'm already familiar with its little brother (we have a VNXe 3300 already), so this is looking like an attractive option I could get up and running quickly.

Sickening
Jul 16, 2007

Black summer was the best summer.

Cavepimp posted:

Anyone know of any major reasons why I shouldn't pull the trigger on an EMC VNX 5200 for a small (3-host) VMware environment? This is a severely time-constrained project and I'm already familiar with its little brother (we have a VNXe 3300 already), so this is looking like an attractive option I could get up and running quickly.

I just set up the same machine two weeks ago. Are you just going with block?

Cavepimp
Nov 10, 2006

Sickening posted:

I just set up the same machine two weeks ago. Are you just going with block?

Yep, just going with iSCSI using the 1GbE ports (4 onboard, 4 on a card) and MPIO. The idea is to fairly closely mirror my existing VNXe's setup for simplicity. If I do that, I should be able to get it set up in a day or two max and move on to other things.
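For anyone doing the same setup, the MPIO side on ESXi 5.x mostly comes down to binding one vmkernel port per physical NIC to the software iSCSI adapter. A rough sketch (the adapter name, vmk numbers, and target address are placeholders; check yours with "esxcli iscsi adapter list"):

# enable the software iSCSI adapter (shows up as something like vmhba33)
esxcli iscsi software set --enabled=true

# bind one vmkernel port per 1GbE NIC so MPIO can use all the paths
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# point it at the array and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
esxcli storage core adapter rescan --adapter=vmhba33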

Sickening
Jul 16, 2007

Black summer was the best summer.

Cavepimp posted:

Yep, just going with iSCSI using the 1GbE ports (4 onboard, 4 on a card) and MPIO. The idea is to fairly closely mirror my existing VNXe's setup for simplicity. If I do that, I should be able to get it set up in a day or two max and move on to other things.

There really isn't much to block. I found it pretty painless and fast. We used FC though.

Langolas
Feb 12, 2011

My mustache makes me sexy, not the hat

Sickening posted:

There really isn't much to block. I found it pretty painless and fast. We used FC though.

Should be easy to set up to match his VNXe, especially with iSCSI. Just pull up the best practices from EMC's support site and go to town.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Cavepimp posted:

Yep, just going with iSCSI using the 1GbE ports (4 onboard, 4 on a card) and MPIO. The idea is to fairly closely mirror my existing VNXe's setup for simplicity. If I do that, I should be able to get it set up in a day or two max and move on to other things.

Out of curiosity, what kind of space and performance do you need, and what is that running you, ballpark?

Cavepimp
Nov 10, 2006

Moey posted:

Out of curiosity, what kind of space and performance do you need, and what is that running you, ballpark?

Not much space or performance at the time we're building it. There will only be 3-4 VMs on the cluster initially, and it's somewhat undefined exactly how we're projecting to use it. It's a bit of an odd project, but it made more sense to build the VMware environment now than it did to buy physical servers/appliances for everything we need to implement.

The config I was quoted was the 5200 + DAE, 2+1 100GB FAST Cache, 25x 600GB 10k 2.5" drives, two 4x1GbE I/O cards, 3yr 24x7x4h support, and the FAST, Local Protection, and Block suites, for right about $22k.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Look at Nimble if all you care about is iSCSI. It will be very easy to get up and running within a day, as there's very little to configure.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Was about to say the same thing. Could probably get a Nimble CS220 for around the same price. Dead simple to work with and good performance.

You will spend more time racking the thing (their rails suck) than you will deploying it.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
So I've had this exact box (an HP N54L running FreeNAS 9.2.1.2) connected to various ESXi boxes of different versions (4.0, 5.0, and 5.1) over the past several months. Each time it seemed rather finicky to get this thing connected, and I stupidly never kept track of what exactly I did to get it working, mainly because I wasn't working with production data. I basically changed iSCSI settings here and there and rescanned from VMware until it connected. Can't seem to get it connected to a host at the moment.

Anyway, I did a factory reset of FreeNAS, configured an IP address, DNS, and gateway, and set up iSCSI by...

1) Creating a portal pointed at the IP of the box, 192.168.0.32:3260
2) Setting up an initiator. I left it at the default of allowing all initiators and authorized networks; I later set the authorized network to 192.168.0.0/24
3) Creating a file extent at /mnt/ZFS36/extent with a size of 3686GB (browsed to this directory and the file exists and is 3.6TB)
4) Creating a target, then a target/extent association

I created a software iSCSI adapter, added a NIC and IP, pointed it at the portal address, and VMware picks up the target name but doesn't connect. There's got to be something simple here I'm overlooking...
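A few quick checks narrow down where it's dying (a sketch; vmhba33 stands in for whatever the software iSCSI adapter is actually named on the host):

# from the ESXi host: confirm the portal is reachable over the iSCSI vmkernel port
vmkping 192.168.0.32

# discovery working but no login? see whether a session ever gets established
esxcli iscsi session list
esxcli storage core adapter rescan --adapter=vmhba33

# from the FreeNAS box: confirm the istgt target is actually listening on 3260
netstat -an | grep 3260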

Wicaeed
Feb 8, 2005

Moey posted:

Was about to say the same thing. Could probably get a Nimble CS220 for around the same price. Dead simple to work with and good performance.

You will spend more time racking the thing (their rails suck) than you will deploying it.

How well does their replication work? Do they support any form of active failover?

I just got tentative approval from my boss to quote out a secondary SAN for our currently planned MSSQL billing environment, with a budget of $80k.

Right now we're thinking we want to purchase a second copy of our EqualLogic SAN to act as a backup in case of a primary array failure, but I'm fairly certain that EqualLogic can't fail over seamlessly in any way. It also doesn't support a lot of advanced features such as compression or dedupe, and it has absolutely no flash to speak of.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Wicaeed posted:

How well does their replication work? Do they support any form of active failover?

I just got tentative approval from my boss to quote out a secondary SAN for our currently planned MSSQL billing environment, with a budget of $80k.

Right now we're thinking we want to purchase a second copy of our EqualLogic SAN to act as a backup in case of a primary array failure, but I'm fairly certain that EqualLogic can't fail over seamlessly in any way. It also doesn't support a lot of advanced features such as compression or dedupe, and it has absolutely no flash to speak of.

Replication is a snap to set up and works fast. We are going over a 50Mb connection to our other site.

Controllers run active-passive and you can fail over live without issues. I'm able to run firmware updates without any outage.

I am currently running 2x CS240 with expansion shelves and a CS240.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

zen death robot posted:

Technically speaking the VNX uses post-process deduplication, so newly written data isn't deduplicated until later on. They might be using some bits of Data Domain's IP for the hashing and all, but since DD is mostly for backup and not live data they probably couldn't use it as is without some rather severe performance penalties.

It's also not very clear when that deduplication pass occurs. I assumed it was something that occurred during the tiering schedule, but it seems to start then and keep running until it finishes whatever newly written chunk of data it's seen up to that point. So if you had 10TB of data and then added another 5TB of VMs, it'll keep running the deduplication process in the background until that new 5TB has been scanned and deduplicated into the container. There's no progress meter for the end user to see, which makes monitoring performance kind of annoying. So whereas tiering will stop at the time you tell it to no matter what, it will not stop deduplicating that chunk of data until it's finished with that blob/chunk/whatever you want to call it. It's OK if your environment is mostly static, I'd say around 80-90% reads.

The lesson here is to just buy the right mix of NL-SAS/SAS/flash disk and let FAST-VP do its thing instead of using deduplication. I'm in the process of letting the storage costs do the talking on why we need to change how we deploy the majority of our VDI machines. It's easier to take advantage of linked clones to keep alike data in cache and cut the overall storage costs that are killing the ROI on our VDI solution than it is to brute-force it by making the storage controller do extra work. We just had a hasty rollout that was never given a second look, and now it's gotten out of hand. It'll be way too expensive to keep throwing hardware at it this way forever.

They need to lock things down a bit anyway; for example, we had someone running a 15-node Hadoop cluster on the VDI infrastructure as well. I'm just glad I'm only responsible for the storage side of things and don't have to figure out how to keep our end users (co-workers) from abusing the system like that.

For some reason I had it in my head that the VNX2 did inline dedupe. Whoops! Yes, post-process is a different beast entirely, but it should still run as a low-priority background process that gives way to user IO and doesn't cause the system to fall over.

As far as just not using it goes, you can get away with that using linked clones for VDI (though I'd argue you're not really getting the same benefits you would from deduplication, since you can still end up with many duplicate blocks existing across linked VMs if they're simply written at different times), but VDI isn't going to be the only thing running on your SAN at most places. I'm also naturally skeptical of heavy use of linked clones due to the potential for performance issues. I'd much rather leverage VAAI copy offload on NAS to create thin clones on the storage.

bigmandan
Sep 11, 2001

lol internet
College Slice
So I've had a few meetings with Dell, Nimble, and VMware (still waiting on NetApp to get back to us), and some of my colleagues like the idea of VMware vSAN. Based on our workload I think this is a bad idea, but they don't seem to think so. Also, from what I understand, managing and scaling vSAN out/up is a pain in the ass. How can I convince them that vSAN is a bad idea? I'm having a hard time articulating why.

Syano
Jul 13, 2005

bigmandan posted:

So I've had a few meetings with Dell, Nimble, and VMware (still waiting on NetApp to get back to us), and some of my colleagues like the idea of VMware vSAN. Based on our workload I think this is a bad idea, but they don't seem to think so. Also, from what I understand, managing and scaling vSAN out/up is a pain in the ass. How can I convince them that vSAN is a bad idea? I'm having a hard time articulating why.

It's really not a bad idea by default. As with everything, we'd need to know your use case... and your RAID controllers (pray they aren't PERC H310s).

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE

bigmandan posted:

So I've had a few meetings with Dell, Nimble, and VMware (still waiting on NetApp to get back to us), and some of my colleagues like the idea of VMware vSAN. Based on our workload I think this is a bad idea, but they don't seem to think so. Also, from what I understand, managing and scaling vSAN out/up is a pain in the ass. How can I convince them that vSAN is a bad idea? I'm having a hard time articulating why.

Because you're trying to fit enterprise requirements into consumer hardware.

Don't do it.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Nitr0 posted:

Because you're trying to fit enterprise requirements into consumer hardware.

Don't do it.

How did you get consumer hardware out of that?

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE
Because most people will deploy a vSAN with 7200RPM drives and a shitty RAID controller, and then wonder why their VDI infrastructure doesn't work.

For the cost of buying proper components (15k drives, SSDs, dual RAID controllers, etc.) you may as well just buy a proper storage system.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Nitr0 posted:

Because most people will deploy a vSAN with 7200RPM drives and a shitty RAID controller, and then wonder why their VDI infrastructure doesn't work.

For the cost of buying proper components (15k drives, SSDs, dual RAID controllers, etc.) you may as well just buy a proper storage system.

Yeah, I guess I assumed that if they're doing the proper legwork looking at options, they'd spec their servers properly as well. I also read "consumer" as home stuff.

I think vSAN is neat for really small SMB loads, but it probably still has room for improvement. Also, that story of the shit hardware on the HCL and everything locking up under the load of expanding a node is hilarious.

Nonetheless, I agree with you that a real SAN with redundant everything is the way to go.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
vSAN is probably great if you don't use cheap hardware. But if you do use non-cheap hardware, your costs are more than just buying a cheaper array.

:owned: both ways.

bigmandan
Sep 11, 2001

lol internet
College Slice
I think the main problem is that going the vSAN route would fit our needs right now, but what some of my colleagues don't seem to realize is that we would very quickly outgrow what vSAN provides. Going with a "normal" SAN makes sense long term. Additionally, our read/write ratio is pretty damn close to 1:1. I'm fairly new to SANs in general, but based on my research and dealings with vendors, I think that alone would justify a SAN array. So far I think two Nimble CS220s (one for replication to satisfy DR) would fit our needs now and for the next few years based on our growth. EqualLogic arrays would work as well, but I like the flexibility Nimble provides (on paper, at least).

Am I on the right track here or am I way off base?

Just to make sure what I'm thinking is sane, I'll provide a few details of our environment:
We're an ISP. We're looking to consolidate the majority of our physical servers with virtualization. Currently we have no unified storage solution. Replication offsite is going to be a must-have. Current performance across all servers, both physical and virtual, is about 50 MB/s average, 100 peak, with a 1:1 read/write ratio, averaging 1k IOPS and peaking at around 2k. Performance is limited by directly attached storage, either mirrored or RAID5. A lot of our production hardware is more than 7 years old. Most services are the usual things an ISP has: DNS, mail, web servers, RADIUS, etc. Mail accounts for half our IO. After we finish consolidation we'll end up with about 45-50 VMs: 6 DNS, 2 mail, 1 MySQL (20 schemas or so), 2 RADIUS, 4-6 virtual desktops, and the rest web servers serving various functions (customer vhosts, internal sites, etc.). Most servers are Debian, with a few Win2k8 servers that we needed for specific applications.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

bigmandan posted:

I think the main problem is that going the vSAN route would fit our needs right now, but what some of my colleagues don't seem to realize is that we would very quickly outgrow what vSAN provides. Going with a "normal" SAN makes sense long term. Additionally, our read/write ratio is pretty damn close to 1:1. I'm fairly new to SANs in general, but based on my research and dealings with vendors, I think that alone would justify a SAN array. So far I think two Nimble CS220s (one for replication to satisfy DR) would fit our needs now and for the next few years based on our growth. EqualLogic arrays would work as well, but I like the flexibility Nimble provides (on paper, at least).

Am I on the right track here or am I way off base?

Just to make sure what I'm thinking is sane, I'll provide a few details of our environment:
We're an ISP. We're looking to consolidate the majority of our physical servers with virtualization. Currently we have no unified storage solution. Replication offsite is going to be a must-have. Current performance across all servers, both physical and virtual, is about 50 MB/s average, 100 peak, with a 1:1 read/write ratio, averaging 1k IOPS and peaking at around 2k. Performance is limited by directly attached storage, either mirrored or RAID5. A lot of our production hardware is more than 7 years old. Most services are the usual things an ISP has: DNS, mail, web servers, RADIUS, etc. Mail accounts for half our IO. After we finish consolidation we'll end up with about 45-50 VMs: 6 DNS, 2 mail, 1 MySQL (20 schemas or so), 2 RADIUS, 4-6 virtual desktops, and the rest web servers serving various functions (customer vhosts, internal sites, etc.). Most servers are Debian, with a few Win2k8 servers that we needed for specific applications.

Your IO requirements are really, really low. You could probably run that on just about anything. Even vSAN would work just fine, though if you're concerned about growth it might be more problematic long term. Things like replacing a failed drive will require putting the host in maintenance mode and evacuating all VMs, which is a lot of hassle for something that would be handled very easily by a dedicated storage array with hot spares and hot-swappable drives. Data also isn't guaranteed to be local to the node hosting the VM, which adds latency. And the requirement for write mirroring to SSD on another node adds still more latency, which can definitely be felt in VDI environments. VDI is fairly write intensive and very latency sensitive, so all things being equal I would choose the lowest-latency solution possible, which is going to be an array that does not have to distribute IO over a backplane and which acknowledges writes when they hit NVRAM rather than SSD (both are fast, but NVRAM will be an order of magnitude faster).

Like everyone has said, by the time you spec out hardware for a proper VSAN deployment you're in dedicated SAN territory anyway and you might as well get one and accrue the other benefits that come with it.
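To put numbers on "really low": using the figures from the quoted post, even the peak works out to a trivial per-VM load (quick shell arithmetic; the VM count is the high end of the stated 45-50):

peak_iops=2000
vm_count=50
echo "$(( peak_iops / vm_count )) IOPS per VM at peak"   # prints: 40 IOPS per VM at peak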

Richard Noggin
Jun 6, 2005
Redneck By Default
I suspect the best use case for vSAN is extending the value of previous capex by repurposing hardware that already meets the requirements, not buying new.

Cavepimp
Nov 10, 2006
I ended up going with the VNX, mostly because of the familiarity and lack of time to research the Nimble.

Are the EMC VNX associate/specialist certs worth pursuing? After sitting through the training we had bundled and doing this implementation I'd probably be pretty close; I just don't know how much value that holds.

bigmandan
Sep 11, 2001

lol internet
College Slice

NippleFloss posted:

Your IO requirements are really, really low. You could probably run that on just about anything. Even vSAN would work just fine, though if you're concerned about growth it might be more problematic long term. Things like replacing a failed drive will require putting the host in maintenance mode and evacuating all VMs, which is a lot of hassle for something that would be handled very easily by a dedicated storage array with hot spares and hot-swappable drives. Data also isn't guaranteed to be local to the node hosting the VM, which adds latency. And the requirement for write mirroring to SSD on another node adds still more latency, which can definitely be felt in VDI environments. VDI is fairly write intensive and very latency sensitive, so all things being equal I would choose the lowest-latency solution possible, which is going to be an array that does not have to distribute IO over a backplane and which acknowledges writes when they hit NVRAM rather than SSD (both are fast, but NVRAM will be an order of magnitude faster).

Like everyone has said, by the time you spec out hardware for a proper VSAN deployment you're in dedicated SAN territory anyway and you might as well get one and accrue the other benefits that come with it.

Thanks for the info. Our VDI is pretty minimal at the moment but it's good to know about the write intensity and latency.

sudo rm -rf
Aug 2, 2011


$ mv fullcommunism.sh
/america
$ cd /america
$ ./fullcommunism.sh


I've got a budget of ~$10k and need an iSCSI solution for a small VMware environment. Right now we're at 2-3 hosts, with about 30 VMs. I can do 10GbE, as it would hook into a couple of N5Ks.

Dilbert steered me away from hacking together a solution with some discount UCS 240s a while back and pointed me in the direction of Dell's MD series. I'm currently looking at the MD3800i with 8x 2TB 7.2k SAS drives; does that sound alright?

I was looking at the configuration options and wasn't really sure what this referred to:

[screenshot of the config option not shown]

I can't tell if that's needed or not.

Docjowles
Apr 9, 2009

sudo rm -rf posted:

I've got a budget of ~$10k and need an iSCSI solution for a small VMware environment. Right now we're at 2-3 hosts, with about 30 VMs. I can do 10GbE, as it would hook into a couple of N5Ks.

Dilbert steered me away from hacking together a solution with some discount UCS 240s a while back and pointed me in the direction of Dell's MD series. I'm currently looking at the MD3800i with 8x 2TB 7.2k SAS drives; does that sound alright?

I was looking at the configuration options and wasn't really sure what this referred to:

[screenshot of the config option not shown]

I can't tell if that's needed or not.

That looks like an HBA for doing DAS (direct attach), which you wouldn't need if your goal is to use iSCSI.

More generally, do you have the option to go through a VAR, or at least work directly with a Dell sales rep? They'll have access to discounts; no one should ever pay list price for IT gear. Also, their job is to help you ensure you're buying the right thing and to answer questions like these.

Wicaeed
Feb 8, 2005
So I got to sit down for an hour with Nimble and go through a WebEx presentation about their product.

If half of what they're claiming is true, this should be a pretty simple sell to management, as long as it doesn't break the bank ($80k).

  • Reply