three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
I admire people who are fearless when pressing delete or remove operations.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Badgerpoo posted:

I have a question for you storage folks that relates to multiple data centres:

If you wanted to run two geographically distinct data centres, but with dark fiber between them, do you want the storage to be HA or do you rely on the services doing their thing? We want to be able to have services active at both sites, and be able to fail them over with little/no downtime. The networking for this is easy, but our current storage (Nimble) apparently doesn't do HA. The best solution I can think of currently is to have two Nimbles, and just make the LUNs primary at either site, dependent on where the service currently lives. The problem with this is that the replication runs over a 1Gb interface, so we won't be able to fail services over very smoothly. What do other people do?

You can't do it with Nimble, unless they've added a lot of functionality recently. You need an array that does long-distance clustering with synchronous replication and non-disruptive failover. EMC can do it with VPLEX, but it's a pretty complex architecture. NetApp can do it with MetroCluster, but there are certain failure scenarios where you will not get non-disruptive failover. Depending on the applications you are running, it is often better to handle this at the application layer. SQL 2012, Exchange 2010, Oracle, and others can do replication and automated failover through built-in functionality, and that is generally preferable to trying to build it into your storage. When you do it at the storage layer, getting the data to the other site is generally the easy part; making clients aware that they need to begin accessing the data at the new location, and doing so without causing disruption, is the hard part.


three posted:

I admire people who are fearless when pressing delete or remove operations.

"Delete Volume"
"Are you sure you want to delete the volume?"
"Of course I'm sure!"
<click yes>
<5 seconds later>
"NOOOOOOOO!!!!!!"
<quickly close web GUI and back slowly away from keyboard>

Thanks Ants
May 21, 2004

#essereFerrari


It's worse when the web UI doesn't specifically state what you are about to delete (instead just popping up a generic "are you sure you want to delete") and the action of clicking the delete button jumps the highlighted option up or down the list due to a UI bug. You have to click cancel and then do it really slowly.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Caged posted:

It's worse when the web UI doesn't specifically state what you are about to delete (instead just popping up a generic "are you sure you want to delete") and the action of clicking the delete button jumps the highlighted option up or down the list due to a UI bug. You have to click cancel and then do it really slowly.
IBM's V7000 gear makes you actually type in the total number of things you're deleting. This is a brilliant mental captcha.
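That count-typing confirmation is easy to steal for your own tooling; a minimal sketch of the idea (hypothetical, not IBM's actual interface):

```python
def confirm_delete(items, typed_count):
    """V7000-style 'mental captcha': refuse to proceed unless the operator
    typed the exact number of objects about to be deleted.
    Hypothetical sketch, not IBM's actual interface."""
    return typed_count.strip() == str(len(items))

# The prompt would list the doomed objects first, then ask for the count:
volumes = ["vol_prod_01", "vol_prod_02", "vol_scratch"]
assert confirm_delete(volumes, "3")      # operator actually counted
assert not confirm_delete(volumes, "1")  # oops, thought it was one volume
```

The point is that typing the count forces you to read the list, which a reflexive "yes" click never does.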

Wicaeed
Feb 8, 2005

theperminator posted:

Note to self: to move an EQL member to a different pool, use "Modify Member Configuration" rather than "Delete Member" from within the pool...

Setting up a new SAN, so luckily no data is involved, but now I have to drive back to the DC to plug in with serial again...
God I'm poo poo at my job.

Take that time to make sure you have each controller's serial port plugged into a separate server, and have that server recorded :)

I am in the process of creating a new EqualLogic cluster to host all of our production billing information, and I made drat sure I completed that step before we even started testing :)

Daddyo
Nov 3, 2000
I need to restart an SP on an older EMC SAN. What are the chances I kill the whole thing when I do? I'm wondering if I should just call a downtime just to make sure.

Amandyke
Nov 27, 2004

A wha?

Daddyo posted:

I need to restart an SP on an older EMC SAN. What are the chances I kill the whole thing when I do? I'm wondering if I should just call a downtime just to make sure.

Which Clariion are you talking about? If you want to totally minimize potential impact, you could manually trespass all LUNs to the other SP before you do the reboot. You would also want to make sure that you have fully redundant pathing to both SPs. Oh, and make sure you're not over 50% utilization on the array when you reboot.

Internet Explorer
Jun 1, 2005


It definitely shouldn't kill the whole thing; EMC SANs are designed for non-disruptive firmware upgrades, which restart each SP in order. It's not something I would do in the middle of the day, and I would probably do it during a maintenance window, but I doubt you'll "kill the whole thing." If iSCSI wasn't set up properly and you're only connected to one SP, you will get disconnects on those LUNs.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Internet Explorer posted:

It definitely shouldn't kill the whole thing; EMC SANs are designed for non-disruptive firmware upgrades, which restart each SP in order. It's not something I would do in the middle of the day, and I would probably do it during a maintenance window, but I doubt you'll "kill the whole thing." If iSCSI wasn't set up properly and you're only connected to one SP, you will get disconnects on those LUNs.

The environment I stepped into was set up like this. One employee would do a FW update and half the servers connecting to the SAN would poo poo the bed.

Getting to rebuild everything isn't a bad thing though.

gallop w/a boner
Aug 16, 2002

Hell Gem
We have a HP P4000 SAN that hosts various VMs, including our Citrix XenApp farm.

We occasionally get performance complaints from XenApp users. These seem to correlate with disk latency getting above a certain level when the P4000 gets busy; e.g. because of a badly configured SSIS job. This doesn't really affect the application VMs, but causes complaints about GUI responsiveness within XenApp.

My boss has asked me to look at purchasing a small, unsophisticated SAN to use solely for the XenApp session VMs.

Our requirements are pretty simple (I think):
  • Approx 5000 IOPS. I'm basing this on using ESXTOP to observe the CMD/s for the XenApp VMs. This may not be the most refined approach.
  • Has to be on the VMware HCL.
  • 2.5TB of usable space.
  • We don't need snapshots, dedupe, replication, tiering or any other high-end features.
Unfortunately, we only have a budget of ~30k for this. Can we get enough SAN for that amount of money, or should I tell my boss that this isn't feasible?

Syano
Jul 13, 2005
You can get tons of stuff for 30k. If you want to stay low-end, why not look at a Dell PowerVault with SSD cache? You could fit that in 30k.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

gallop w/a boner posted:

We have a HP P4000 SAN that hosts various VMs, including our Citrix XenApp farm.

We occasionally get performance complaints from XenApp users. These seem to correlate with disk latency getting above a certain level when the P4000 gets busy; e.g. because of a badly configured SSIS job. This doesn't really affect the application VMs, but causes complaints about GUI responsiveness within XenApp.

My boss has asked me to look at purchasing a small, unsophisticated SAN to use solely for the XenApp session VMs.

Our requirements are pretty simple (I think):
  • Approx 5000 IOPS. I'm basing this on using ESXTOP to observe the CMD/s for the XenApp VMs. This may not be the most refined approach.
  • Has to be on the VMware HCL.
  • 2.5TB of usable space.
  • We don't need snapshots, dedupe, replication, tiering or any other high-end features.
Unfortunately, we only have a budget of ~30k for this. Can we get enough SAN for that amount of money, or should I tell my boss that this isn't feasible?

Have you looked at migrating to PVS and/or local storage?

gallop w/a boner
Aug 16, 2002

Hell Gem

three posted:

Have you looked at migrating to PVS and/or local storage?

I am currently building a test lab for PVS strangely enough.

Is deploying PVS target devices to local storage a commonly done practice? I am still building up my familiarity with the product.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

gallop w/a boner posted:

I am currently building a test lab for PVS strangely enough.

Is deploying PVS target devices to local storage a commonly done practice? I am still building up my familiarity with the product.

You may be able to lower your IOPS load by even going with PVS on your SAN vs full VMs on your SAN.

PVS on local storage is relatively common though; you can handle HA at the PVS layer instead of VMware HA. Caveat being you may need more RAM in this scenario.

Maneki Neko
Oct 27, 2000

Anyone played around with the NetApp EF540s at all? We've got some requirement around shared storage but like fast stuff and giving NetApp buckets of cash, so figured this looks like a decent fit.

Docjowles
Apr 9, 2009

Wicaeed posted:

Take that time to make sure you have each controller's serial port plugged into a separate server, and have that server recorded :)

I am in the process of creating a new EqualLogic cluster to host all of our production billing information, and I made drat sure I completed that step before we even started testing :)

Or get a console server. They're pretty sweet, basically a KVM switch for serial connections.

Maneki Neko posted:

Anyone played around with the NetApp EF540s at all? We've got some requirement around shared storage but like fast stuff and giving NetApp buckets of cash, so figured this looks like a decent fit.

I think we're going to do an eval, but NetApp doesn't have a demo unit available for us until like December. It's not our first choice for a product but we're currently all NetApp so figured it's at least worth a look. If we do end up doing a POC I'll try to remember to post back here.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

gallop w/a boner posted:

We have a HP P4000 SAN that hosts various VMs, including our Citrix XenApp farm.

We occasionally get performance complaints from XenApp users. These seem to correlate with disk latency getting above a certain level when the P4000 gets busy; e.g. because of a badly configured SSIS job. This doesn't really affect the application VMs, but causes complaints about GUI responsiveness within XenApp.

My boss has asked me to look at purchasing a small, unsophisticated SAN to use solely for the XenApp session VMs.

Our requirements are pretty simple (I think):
  • Approx 5000 IOPS. I'm basing this on using ESXTOP to observe the CMD/s for the XenApp VMs. This may not be the most refined approach.
  • Has to be on the VMware HCL.
  • 2.5TB of usable space.
  • We don't need snapshots, dedupe, replication, tiering or any other high-end features.
Unfortunately, we only have a budget of ~30k for this. Can we get enough SAN for that amount of money, or should I tell my boss that this isn't feasible?

A few questions first:

What protocol are you using to access the storage?
Are you using any flash cache? IIRC the P4000 doesn't have much, if any.
What is your read-to-write ratio? I imagine it is mostly reads.

For what you are doing I would look into a VNX 5300; you can get them for a pretty good deal since the VNX2 is coming out soon.

Something like 4x 100GB EFDs + 16x 300GB 15K drives should run you under 30k and give you some FAST Cache capability for your environment.

Granted, the backend IOPS are low, but read blocks are cached in flash, giving your read I/O a much-needed performance boost.

Dilbert As FUCK fucked around with this message at 17:05 on Oct 17, 2013

Vanilla
Feb 24, 2002

Hay guys what's going on in th

gallop w/a boner posted:


[*]Approx 5000 IOPS. I'm basing this on using ESXTOP to observe the CMD/s for the XenApp VMs. This may not be the most refined approach.


Remember, application IOPS often translate into much higher backend IOPS depending on the read/write profile and RAID type. For example:

Assume those 5000 IOPS are 50% read and 50% write.

Reads = 2500 IOPS
Writes = depends on RAID type.

RAID 1 costs 2 backend IOPS per write, RAID 5 costs 4, RAID 6 costs 6. So assuming RAID 5 is used, your backend requirement could actually be in the region of 12,500 IOPS (2500 reads + 2500 writes × 4)!
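That write-penalty arithmetic is worth writing down once so you can plug in your own ratio; a small sketch using the standard penalty figures:

```python
# Backend IOPS = reads + writes * RAID write penalty.
# Standard penalties: RAID 1 = 2, RAID 5 = 4, RAID 6 = 6 backend ops per host write.
RAID_WRITE_PENALTY = {"raid1": 2, "raid5": 4, "raid6": 6}

def backend_iops(host_iops, read_fraction, raid_level):
    """Translate frontend (application) IOPS into backend spindle IOPS."""
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

print(backend_iops(5000, 0.5, "raid5"))  # 12500.0, matching the worked example
```

Run it with your measured read fraction before sizing: a 50/50 workload on RAID 6 needs well over 3x the spindle IOPS the frontend number suggests.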

gallop w/a boner
Aug 16, 2002

Hell Gem

Dilbert As gently caress posted:

A few questions first:

What protocol are you using to access the storage?
Are you using any flash cache? IIRC the P4000 doesn't have much, if any.
What is your read-to-write ratio? I imagine it is mostly reads.

For what you are doing I would look into a VNX 5300; you can get them for a pretty good deal since the VNX2 is coming out soon.

Something like 4x 100GB EFDs + 16x 300GB 15K drives should run you under 30k and give you some FAST Cache capability for your environment.

Granted, the backend IOPS are low, but read blocks are cached in flash, giving your read I/O a much-needed performance boost.

Protocols: iSCSI (sorry that is basic information I should have included)

Flash Cache: Not in the P4000 we have.

Read/Write ratio: What is best practice for determining this? I have just been watching the 'Average Read Requests' and 'Average Write Requests' counters for a XenApp VM in vSphere. Oddly write requests are approx 4 times higher than read requests using these counters?

Moey
Oct 22, 2010

I LIKE TO MOVE IT

gallop w/a boner posted:

Protocols: iSCSI (sorry that is basic information I should have included)

Flash Cache: Not in the P4000 we have.

Read/Write ratio: What is best practice for determining this? I have just been watching the 'Average Read Requests' and 'Average Write Requests' counters for a XenApp VM in vSphere. Oddly write requests are approx 4 times higher than read requests using these counters?

Have you looked into Nimble at all? You can get some pretty aggressive pricing out of them and something like a CS-220 should fit your bill.

You should be able to get a 30 day test unit and migrate your stuff onto it to make sure you are happy with the performance.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
If you're willing to entertain the idea of NFS, Tintri is awesome.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

gallop w/a boner posted:

Protocols: iSCSI (sorry that is basic information I should have included)

Flash Cache: Not in the P4000 we have.

Read/Write ratio: What is best practice for determining this? I have just been watching the 'Average Read Requests' and 'Average Write Requests' counters for a XenApp VM in vSphere. Oddly write requests are approx 4 times higher than read requests using these counters?

You can roll through it this way: http://blog.synology.com/blog/?p=2225

There are a bunch of tools out there from vendors who would love to get their foot in the door to sell you something. Capacity Planner is also a good VMware tool.

Flash caching can be really good for what you are doing, and NFS may provide some additional benefits for your XenApp workload.
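However you collect the counters, turning them into a read/write split is trivial arithmetic; a sketch:

```python
def rw_ratio(avg_read_reqs, avg_write_reqs):
    """Read/write split from per-VM 'Average Read Requests' /
    'Average Write Requests' style counters."""
    total = avg_read_reqs + avg_write_reqs
    return avg_read_reqs / total, avg_write_reqs / total

# Writes running ~4x reads, as observed on the XenApp VM above
# (the 100/400 sample values are illustrative):
reads, writes = rw_ratio(100, 400)
print(f"{reads:.0%} read / {writes:.0%} write")  # 20% read / 80% write
```

A write-heavy split like that is common for XenApp session hosts, and it feeds directly into the RAID write-penalty math when sizing spindles.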

Internet Explorer
Jun 1, 2005


gallop w/a boner posted:

I am currently building a test lab for PVS strangely enough.

Is deploying PVS target devices to local storage a commonly done practice? I am still building up my familiarity with the product.

Use PVS. It will solve your IOPS problem and you'll wonder how you ever lived without it.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

gallop w/a boner posted:

We have a HP P4000 SAN that hosts various VMs, including our Citrix XenApp farm.

We occasionally get performance complaints from XenApp users. These seem to correlate with disk latency getting above a certain level when the P4000 gets busy; e.g. because of a badly configured SSIS job. This doesn't really affect the application VMs, but causes complaints about GUI responsiveness within XenApp.

My boss has asked me to look at purchasing a small, unsophisticated SAN to use solely for the XenApp session VMs.

Our requirements are pretty simple (I think):
  • Approx 5000 IOPS. I'm basing this on using ESXTOP to observe the CMD/s for the XenApp VMs. This may not be the most refined approach.
  • Has to be on the VMware HCL.
  • 2.5TB of usable space.
  • We don't need snapshots, dedupe, replication, tiering or any other high-end features.
Unfortunately, we only have a budget of ~30k for this. Can we get enough SAN for that amount of money, or should I tell my boss that this isn't feasible?
An Oracle 7310 will run you under $20k and meet your needs. No HA though.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

VDI and VApp implementations tend to hit their limits based on random write latency, so I would stick with vendors that have an architecture that supports that kind of IO. Nimble, ZFS based appliances, NetApp, and some of the all SSD vendors would be your best bets, probably. At that price point NetApp is likely out and has way more features than you need. Since you're already an iSCSI shop Nimble is probably your best bet. Oracle ZFS appliances are also worth a look since you really don't care about features and just want fast and cheap.

MrMoo
Sep 14, 2000

NippleFloss posted:

Copying files is a really bad way to test storage performance, especially when you're copying to and from the same place. Most copy commands are single-threaded, so they have low concurrency and will drive limited throughput because the IO is serial and blocking.

This is more semantics: file copy is a reasonable serial-speed test, but clearly not a scalability or random-access test.

A server that could handle 1 trillion 1Kb/s streams could be considered "high performance" on scalability, but is pretty terrible for anything but niche, highly concurrent, non-shared-state applications.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

MrMoo posted:

This is more semantics, file copy is a reasonable serial speed test but clearly not a scalability or random access test.

A server that could handle 1 trillion 1Kb/s streams could be considered "high performance" on scalability but is pretty terrible for anything but niche highly concurrent non-shared state applications.

You don't actually know that from the IOPS number itself. You make that assumption based on your understanding of current storage technologies: that this is a workload that could plausibly produce those results.

kiwid
Sep 30, 2013

We're looking at an entry-level iSCSI SAN to implement a 3-host VM solution (VMware Essentials Plus kit) for about 15 guests. Currently all our servers are on separate physical hardware and we want to slowly migrate to a fully virtualized solution. I was looking at the MD3220i with 24x 1TB 7200RPM near-line drives, but a friend told me they were awful and to look into the EqualLogic PS solutions. He also told me to get 10k drives.

I was just wondering if I can get more insight on this before I go the more expensive route. Keep in mind, we're trying to keep costs low as we're a smaller company and don't have an enterprise budget.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

TCPIP posted:

We're looking at an entry-level iSCSI SAN to implement a 3-host VM solution (VMware Essentials Plus kit) for about 15 guests. Currently all our servers are on separate physical hardware and we want to slowly migrate to a fully virtualized solution. I was looking at the MD3220i with 24x 1TB 7200RPM near-line drives, but a friend told me they were awful and to look into the EqualLogic PS solutions. He also told me to get 10k drives.

I was just wondering if I can get more insight on this before I go the more expensive route. Keep in mind, we're trying to keep costs low as we're a smaller company and don't have an enterprise budget.

DO NOT BUILD FOR TB, BUILD FOR IOPS.

Get at least 10k drives. Dell has a great tool called DPAK; it will show you in depth what your environment needs for IOPS.

Keeping costs low is great, but how much does cheaping out cost the company in time? Storage determines a good deal of how well the virtual environment performs; it is (to some extent) the brains of the operation. If nothing else, tier your storage: put a mix of 15k, 10k and 7.2k drives in your array.


Even going with something like:
8x 146GB 15k
8x 600GB 10k
8x 1TB 7.2k

may prove more beneficial, as you can tier high-, medium-, and low-priority SLAs. Not to mention, depending on your environment, Outlook PSTs and user docs may rest fine on a 7.2k tier, where data is larger but not accessed as frequently or simultaneously, while SQL/DB/transaction/accounting servers on 15k drives get the quick response time they need, and other services may rest fine on 10k or 7.2k.
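Back-of-the-envelope numbers for a drive mix like that (the per-spindle IOPS figures are rule-of-thumb assumptions, not vendor specs, and raw capacity ignores RAID overhead and spares):

```python
# Rule-of-thumb read IOPS per spindle (assumption; real numbers vary by
# workload, queue depth, and vendor):
IOPS_PER_DRIVE = {"15k": 175, "10k": 125, "7.2k": 75}

def tier_summary(tiers):
    """tiers: list of (speed, drive_count, gb_per_drive) tuples."""
    return {
        speed: {"raw_gb": count * gb, "est_iops": count * IOPS_PER_DRIVE[speed]}
        for speed, count, gb in tiers
    }

summary = tier_summary([("15k", 8, 146), ("10k", 8, 600), ("7.2k", 8, 1000)])
# e.g. summary["15k"] -> {'raw_gb': 1168, 'est_iops': 1400}
```

Even with conservative figures, the 15k tier delivers most of its value in IOPS, while the 7.2k tier carries most of the raw capacity, which is exactly the SLA split described above.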

Dilbert As FUCK fucked around with this message at 02:31 on Oct 18, 2013

kiwid
Sep 30, 2013

Is the MD3220i a good choice with 15k/10k drives?

What do you guys typically use to store massive amounts of data (300 users with 40GB Exchange mailboxes)?

We could probably go with a DAS at that point if we use DAGs, but what about file servers that hold a lot of data?
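Rough capacity math for that mailbox requirement (the DAG copy count here is an assumption; adjust to your actual design):

```python
users, mailbox_gb = 300, 40
dag_copies = 2                  # assumption: a two-copy DAG

raw_gb = users * mailbox_gb     # 12000 GB, i.e. ~12 TB of mailbox data
with_copies_gb = raw_gb * dag_copies
print(raw_gb, with_copies_gb)   # 12000 24000
```

So before logs, whitespace, and growth headroom, full-size mailboxes for everyone is already a ~24 TB footprint across the DAG, which is why mailbox quotas and cheap near-line SAS tiers come up so often for Exchange.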

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

TCPIP posted:

Is the MD3220i a good choice with 15k/10k drives?

What do you guys typically use to store massive amounts of data (300 users with 40GB Exchange mailboxes)?

We could probably go with a DAS at that point if we use DAGs, but what about file servers that hold a lot of data?

We are an HP shop and go with MSAs/3Par.


What's your budget?

Tiering data, and looking at what you have as well as what storage you need, is important. 7.2k drives can provide high capacity for low (overall) I/O needs such as a file server or large stagnant data, while a SQL or other DB needs high-I/O, frequent R/W.

kiwid
Sep 30, 2013

Dilbert As gently caress posted:

We are an HP shop and go with MSAs/3Par.


What's your budget?

Tiering data, and looking at what you have as well as what storage you need, is important. 7.2k drives can provide high capacity for low (overall) I/O needs such as a file server or large stagnant data, while a SQL or other DB needs high-I/O, frequent R/W.

Just for storage, our budget is about 40-50k. That is including backups, since our current backup solution is a couple of lovely FreeNAS whiteboxes.

Is this possible?

We can't reuse much hardware, we're essentially starting over here after going a decade not upgrading anything.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

TCPIP posted:

Just for storage, our budget is about 40-50k. That is including backups, since our current backup solution is a couple of lovely FreeNAS whiteboxes.

Is this possible?

That is very possible; storage is pretty cheap-ish nowadays. What does your infrastructure look like (aside from the 15 servers)? What's your projected growth?

Hell, you can get an MD3220i with the drives I listed, plus support, for like 20k.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

TCPIP posted:

Is the MD3220i a good choice with 15k/10k drives?

What do you guys typically use to store massive amounts of data (300 users with 40GB Exchange mailboxes)?

We could probably go with a DAS at that point if we use DAGs, but what about file servers that hold a lot of data?

I have used a few MD3220is and enjoyed them. We were running a mix of 10k and 15k drives for different tiers.

As Dilbert said, give DPAK a spin; I am willing to bet your Exchange server uses fewer IOPS than you think.

kiwid
Sep 30, 2013

Dilbert As gently caress posted:

That is very possible; storage is pretty cheap-ish nowadays. What does your infrastructure look like (aside from the 15 servers)? What's your projected growth?

Everything is almost a decade old, running Server 2003. File server and Exchange storage are both on internal drives inside the physical servers. MSSQL databases total about 20GB (they're small). We do not have any DAS/NAS/SANs that we can reuse. We're basically starting over here.

We have about 30 users with 40-50GB Exchange mailboxes; the rest are all quota-limited at 1GB, but our CEO wants us to lift the limit due to too many user complaints. We want to 100% eliminate PST files, so we need to account for all that junk sitting on their local hard drives as well. The file server is small, actually: about 1TB of data, and it grows slowly. Exchange is our biggest storage nightmare, along with backups.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

TCPIP posted:

Everything is almost a decade old, running Server 2003. File server and Exchange storage are both on internal drives inside the physical servers. We do not have any DAS/NAS/SANs that we can reuse. We're basically starting over here.

We have about 30 users with 40-50GB Exchange mailboxes; the rest are all quota-limited at 1GB, but our CEO wants us to lift the limit due to too many user complaints. We want to 100% eliminate PST files, so we need to account for all that junk sitting on their local hard drives as well. The file server is small, actually: about 1TB of data, and it grows slowly. Exchange is our biggest storage nightmare, along with backups.

This is literally what you can get for 21k (MSRP, mind you) through Dell.


Call HP storage, EMC, and NetApp and see how far they can bend to match. Ah poo poo, well, you may want to add rails for a whole 20 bucks, and 24x7 3-year support is 800 more. This includes Dell's Fast Cache* technology.

kiwid
Sep 30, 2013

Can you get expansion units for a MD3220i if storage needs grow fast or would you be looking at a second SAN?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

TCPIP posted:

Can you get expansion units for a MD3220i if storage needs grow fast or would you be looking at a second SAN?

I believe you can add another shelf, but call your Dell rep for that. What are your storage requirements ATM in GB? Do you know your I/O load?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

TCPIP posted:

Currently all our servers are on separate physical hardware and we want to slowly migrate into a fully virtualized solution.
I'm not trying to poo poo on you here, but I am amazed that this still exists in 2013. We've been fully virtualized since 2009, and my previous employer was getting there as well.

As to your storage question, it's going to depend on your feature needs. You can pick up a pair of Oracle 7310 single head ZFS appliances with ~5TB usable that will push a shitload of IOPS and allow replication between them for around $40k, or a single 7320 HA pair with 11TB usable for around $50k.


Syano
Jul 13, 2005
PowerVault kits are great. You can add shelves any time you need, up to like 192 total drives or something like that. You can't go wrong with them for small deployments.

EDIT: I just reread some of your environment. I run a 425-user mail system, along with about 30 more guests, on a Dell MD3200i using near-line SAS drives. Granted, my environment is pretty low-IOPS, but still. Definitely look at solutions from EqualLogic, NetApp, etc., but don't count out the PowerVaults because someone told you they suck. They absolutely are fine for smallish environments.

Syano fucked around with this message at 03:15 on Oct 18, 2013
