Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

adorai posted:

I'm not trying to poo poo on you here, but I am amazed that this still exists in 2013. We've been fully virtualized since 2009, and my previous employer was getting there as well.

As to your storage question, it's going to depend on your feature needs. You can pick up a pair of Oracle 7310 single head ZFS appliances with ~5TB usable that will push a shitload of IOPS and allow replication between them for around $40k, or a single 7320 HA pair with 11TB usable for around $50k.

Be amazed at the SMB space, where 70%+ is still not virtualized

Mierdaan
Sep 14, 2004

Pillbug

TCPIP posted:

We have about 30 users with 40-50GB Exchange mailboxes and the rest are all quota-limited at 1GB, but our CEO wants us to lift the limit due to too many user complaints. We want to 100% eliminate PST files, so we need to account for all that junk that's sitting on their local hard drives as well. The file server is actually small, about 1TB of data, and it grows slowly. Exchange is our biggest storage nightmare, along with backups.

If your CEO really wants to keep your storage budget low, tell him to stop being a moron about mailbox size limits. Implement sane retention limits.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

MrMoo posted:

This is more about semantics: file copy is a reasonable serial speed test, but clearly not a scalability or random-access test.

A server that could handle 1 trillion 1Kb/s streams could be considered "high performance" on scalability, but is pretty terrible for anything but niche, highly concurrent, non-shared-state applications.

If you're trying to benchmark an application that is going to do a single stream copy of data to your SAN, which is otherwise completely idle, then I suppose it's a fine benchmark. If you're trying to benchmark for any workload that actually exists in the real world on shared storage then it's a pretty terrible one.
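
To make the single-stream-versus-shared-workload point concrete, here's a toy Python sketch: it times one sequential pass over a file, then many threads doing concurrent 4 KiB random reads, which is closer to what shared storage actually sees. This is not a real storage benchmark (a purpose-built tool like fio pointed at the actual array is the right way to measure that); the file size, block size and thread count are arbitrary assumptions, and against a local file the numbers mostly reflect the page cache, so treat the output as illustrative only.

code:
import os
import random
import tempfile
import threading
import time

FILE_SIZE = 128 * 1024 * 1024   # 128 MiB test file (arbitrary)
BLOCK = 4096                    # 4 KiB random-read size
THREADS = 16                    # concurrent "clients" (arbitrary)
READS_PER_THREAD = 2000

def make_test_file():
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(FILE_SIZE))
    return path

def sequential_read(path):
    # One big serial pass, i.e. the "file copy" style of test.
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return time.perf_counter() - start

def random_reads(path):
    # One "client" issuing small reads at random offsets.
    with open(path, "rb") as f:
        for _ in range(READS_PER_THREAD):
            f.seek(random.randrange(0, FILE_SIZE - BLOCK))
            f.read(BLOCK)

def concurrent_random(path):
    threads = [threading.Thread(target=random_reads, args=(path,))
               for _ in range(THREADS)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    path = make_test_file()
    try:
        seq = sequential_read(path)
        rnd = concurrent_random(path)
        total_random = THREADS * READS_PER_THREAD * BLOCK
        print(f"sequential pass:   {FILE_SIZE / seq / 2**20:8.1f} MiB/s")
        print(f"concurrent random: {total_random / rnd / 2**20:8.1f} MiB/s "
              f"({THREADS * READS_PER_THREAD / rnd:,.0f} IOPS)")
    finally:
        os.remove(path)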

TCPIP posted:

We have about 30 users with 40-50GB Exchange mailboxes and the rest are all quota-limited at 1GB, but our CEO wants us to lift the limit due to too many user complaints. We want to 100% eliminate PST files, so we need to account for all that junk that's sitting on their local hard drives as well. The file server is actually small, about 1TB of data, and it grows slowly. Exchange is our biggest storage nightmare, along with backups.

You don't need fast disk to run Exchange 2010. If you can get 20ms of latency you will be fine, and really it can go even higher than that without any user impact, provided your CAS servers have enough RAM to keep a large number of messages cached. So you would probably be fine with 7.2k disks from a latency perspective. However, when sizing for 2010 you need to consider the impact of background maintenance IO, which is about 5 MB/s and runs on a per-DB basis, so the more databases you have, the more throughput you need available to cover situations where all DBs are running maintenance. Large mailboxes can exacerbate this by forcing you to use more DBs than you otherwise would and by causing the maintenance to run longer than it otherwise would. This might not be a problem for you since it doesn't sound like you have a very large environment, but it's something to consider.

You also might be on the right track going with a DAS solution for your Exchange environment, if it really is your biggest storage problem. With enough DAG members to handle your failure scenarios, plus at least one lagged copy, you could pretty much do away with backups entirely and save a big chunk on your backup environment upgrades.
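
To put rough numbers on the background-maintenance point above, here's a back-of-the-envelope sketch using the ~5 MB/s-per-database figure quoted; the database count and per-spindle throughput are made-up assumptions, and Microsoft's Exchange 2010 mailbox role requirements calculator is the proper tool for real sizing.

code:
MAINT_MB_S_PER_DB = 5       # background maintenance figure quoted above
DB_COUNT = 8                # hypothetical number of databases
SEQ_MB_S_PER_SPINDLE = 100  # assumed sequential MB/s per 7.2k spindle
SPINDLES = 12               # hypothetical spindle count behind the DBs

maint_total = MAINT_MB_S_PER_DB * DB_COUNT
available = SEQ_MB_S_PER_SPINDLE * SPINDLES

print(f"worst case, all DBs in maintenance at once: {maint_total} MB/s")
print(f"assumed available throughput:               {available} MB/s")
print(f"headroom left for user and replication IO:  {available - maint_total} MB/s")

The point is simply that the maintenance line scales with database count, so fewer, larger databases (within reason) keep that worst-case number down.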

evil_bunnY
Apr 2, 2003

Dilbert As gently caress posted:

I believe you can add another rack, but call your Dell rep for that. What are your storage requirements ATM in GB? Do you know your I/O load?
You can add 2 shelves on the older stuff, and the newer ones even take 60-drive high-capacity shelves

evil_bunnY fucked around with this message at 14:37 on Oct 18, 2013

kiwid
Sep 30, 2013

Dilbert As gently caress posted:

This is literally what you can get for 21k MSRP, mind you, through Dell



Hey quick question, what type of RAID config would you do with this setup? RAID-10 or RAID-6 with 6 drives of the 3 different types of drives?

Also, I've never worked with SSD cache. Do you RAID those as well, or is each SSD by itself? Also, the SAN takes care of that poo poo automatically, right? Or is that something I set up with VMware under host cache?

kiwid fucked around with this message at 17:00 on Oct 21, 2013

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

kiwid posted:

Hey quick question, what type of RAID config would you do with this setup? RAID-10 or RAID-6 with 6 drives of the 3 different types of drives?

Also, I've never worked with SSD cache. Do you RAID those as well, or is each SSD by itself? Also, the SAN takes care of that poo poo automatically, right? Or is that something I set up with VMware under host cache?

Depends on what the data requirements are.

You may have some high-transaction servers that would love to run RAID 10 on those 15k drives; then do RAID 5 on the 10k drives for your higher-level data, and RAID 6 on the 7.2k drives for your slow data (Exchange DBs, file servers, etc.).

Really depends on the environment.


Here is how Dell handles SSD caching:
http://www.dell.com/downloads/global/products/pvaul/en/powervault-md3200i-md3220i-technical-guidebook-en.pdf
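
To put rough numbers on the RAID trade-off being described, here's a quick sketch using the standard write penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6); the per-drive IOPS figures and the 70/30 read/write mix are assumptions, so swap in numbers from your own environment.

code:
DRIVE_IOPS = {"15k": 175, "10k": 140, "7.2k": 75}   # assumed per-drive random IOPS
WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def effective_iops(drive, raid, drives, read_pct=0.7):
    # Host-visible random IOPS once the RAID write penalty is applied:
    # backend IOPS must cover reads plus (writes x penalty).
    raw = DRIVE_IOPS[drive] * drives
    return raw / (read_pct + (1 - read_pct) * WRITE_PENALTY[raid])

for drive, raid in [("15k", "RAID 10"), ("10k", "RAID 5"), ("7.2k", "RAID 6")]:
    print(f"6x {drive} in {raid}: ~{effective_iops(drive, raid, 6):.0f} host IOPS at 70/30 r/w")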

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Anyone here a fan of HP storage? I am trying to find the selling point of the MSAs, but they seem to fall short of Dell's MD series on features, mainly SSD caching. The only thing I can figure HP has going for it is that their controllers have 4GB of cache to Dell's 2GB per controller, and they are a bit cheaper.

Same with 3PAR: the 7000s are good, but compared to a VNX (and to some extent the VNXe), NetApp, or Compellent they feel limited.

Dilbert As FUCK fucked around with this message at 18:19 on Oct 22, 2013

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

HP Storage to me is a disjointed mess. They snapped up a bunch of companies, but then didn't really do much with them after the fact. I've only used LeftHand and MSA stuff from HP.

The MSAs are just rebadged DotHill arrays for the most part. They're OK at best if budget limits you to this line.

I'm pretty sure the P4xxx stuff formerly known as LeftHand is basically dead at this point. Unfamiliar with the current P4xxx line.

I don't see the EVA 4xxx listed on their website anymore so that's probably gone and I can't speak to 3PAR storage at all.

My personal opinion is there are better options out there for the money.

My personal, generalized starting-point recommendations in the storage market:

Basic iSCSI - EqualLogic

More features iSCSI - EMC VNXe or Compellent

Higher End kit - EMC VNX or NetApp


The Dell MD's seem to get a lot of love, but I have an irrational dislike for them. Mostly because of the market that uses them though.

Thanks Ants
May 21, 2004

#essereFerrari


I've had experience with an MSA P2000 iSCSI unit with a couple of expansion bays. It's easy enough to set up, the hardware design is nothing special but it's not terrible. Honestly I only ended up using it due to budget constraints, but the thing worked fine and the performance was as expected.

If I was doing it again I'd probably pick one of the lower end Dell MD arrays, especially as they can do SSD caching, which wasn't a feature when I was looking at them.

I think you could level the same complaint about acquiring vendors and not doing much with them at Dell as well, with Compellent/EqualLogic, although the latest round of products seems to be addressing this.

Thanks Ants fucked around with this message at 22:56 on Oct 22, 2013

Docjowles
Apr 9, 2009

Caged posted:

I've had experience with an MSA P2000 iSCSI unit with a couple of expansion bays. It's easy enough to set up, the hardware design is nothing special but it's not terrible. Honestly I only ended up using it due to budget constraints, but the thing worked fine and the performance was as expected.

We used one of those (MSA 2000 G3) at a past job. That pretty much sums it up. Absolutely nothing special but pretty drat cheap and it worked reliably. The web UI was the worst poo poo but we were also very behind on firmware so there's an outside chance that got fixed.

I can't say I'd actively recommend it to anyone but it did the job, as long as the job was "provide some spinning disks over the network" and nothing fancier.

Rhymenoserous
May 23, 2008

Syano posted:

PowerVault kits are great. You can add shelves any time you need, up to like 192 total drives or something like that. You can't go wrong with them for small deployments.

EDIT: I just reread some of your environment. I run a 425-user mail system, along with about 30 more guests, on a Dell MD3200i using nearline SAS drives. Granted, my environment is pretty low-IOPS, but still. Definitely look at solutions from EqualLogic, NetApp, etc., but don't count out the PowerVaults because someone told you they suck. They absolutely are fine for smallish environments.

Don't work with many small/medium businesses, do you? I had 10-year-old Dell PowerEdge servers running mission-critical stuff when I first started working at my current shop, and the prevailing attitude towards hardware purchases was generally "Just use one of the old ones lying around".

Case in point: about half a year before I was hired (call it three years ago) they purchased an entirely new software package that drove... well, everything from inventory to service calls. Rather than purchase a new server and new storage for their new program, they just took down half of the old ERP software's cluster, reinstalled Windows on it, appropriated half of its storage, and set up the new system on it. Naturally it ran like poo poo.

The excuse I heard was "Well we already owned an extra server we didn't need".

Trying to pry money out of people like this is hard. My entire first year here was spent convincing the owner that just because you spent 10k on something five years ago doesn't mean it's worth a poo poo now. Servers don't have some intrinsic value that sticks around after they are past the point where they can do any of the jobs you need to do. This isn't a car you can restore and then keep on using as a daily driver.

Now I have a rack of lovely HP servers running ESX backed by a SAN. Much better than the bare-metal RAID 0 shitboxes with DAS that I was dealing with before.

Rhymenoserous fucked around with this message at 23:01 on Oct 22, 2013

Thanks Ants
May 21, 2004

#essereFerrari


No it didn't, the UI was still terrible 3 months ago. It also had the undocumented admin account, which we managed to dodge by pure coincidence: those were the account details I found when I was Googling for the default credentials, and I changed them, because HP's documentation is pure poo poo and it's impossible to find relevant information on their website.

Edit: That's a response to Docjowles's post

TKovacs2
Sep 21, 2009

1991, 1992, 2009 = Woooooooooooo

Dilbert As gently caress posted:

Be amazed at the SMB space, where 70%+ is still not virtualized

This. Most SMBs I've done work for are still not doing any virtualization.

Hell, I just took on a client running an NT 4.0 server. Guess what my first priority is there?

evil_bunnY
Apr 2, 2003

TKovacs2 posted:

Hell, I just took on a client running an NT 4.0 server. Guess what my first priority is there?
The first priority is always "it can't cost anything". The vast majority of SMBs will flat-out bury their heads in the sand about operational risk and value.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evil_bunnY posted:

The first priority is always "it can't cost anything". The vast majority of SMBs will flat-out bury their heads in the sand about operational risk and value.

I DON'T HAVE TO BUY CALs, RIGHT?? Those are just optional, right?

Thanks Ants
May 21, 2004

#essereFerrari


"Well if you don't have to enter the CALs anywhere then I don't see how they can tell how many users we have"

I dealt with a small business once where one of the co-owners randomly decided to change the 'server' and one of the desktops to Ubuntu overnight, so they were effectively down a PC the next day and nobody could get to any file shares. This was the same guy who thought that spending time trying to trick Samba into being an AD domain controller, rather than just buying a Windows Server license, was a good use of time and money. I'm glad I gave up trying to help them.

Thanks Ants fucked around with this message at 23:10 on Oct 23, 2013

TKovacs2
Sep 21, 2009

1991, 1992, 2009 = Woooooooooooo

evil_bunnY posted:

The first priority is always "it can't cost anything". The vast majority of SMBs will flat-out bury their heads in the sand about operational risk and value.

Yep. Their other five 'servers' are Compaq or whitebox PCs running illegal copies of everything from Windows Vista to Server 2008 R2 Web Server.

Good times ahoy.

Thanks Ants
May 21, 2004

#essereFerrari


One thing I will say in defence of smaller businesses is that Microsoft software is priced really loving ridiculously high, and its licensing is the most confusing thing you'll ever read. If you've got a small number of PCs, the initial spend needed to go with some sort of volume license agreement is insane, since you pretty much have to buy all your current licenses again just to be able to buy SA. There's no way to migrate your retail boxed server licenses across and just buy SA on them, or add SA to the Windows licenses included with the PCs you've just bought. This initial hit only gets bigger as the company grows, until you're at the point where it's a gently caress ton of money to spend just to stay where you currently are, but with SA.

Either that or the rep I spoke to was utter poo poo.

Thanks Ants fucked around with this message at 23:31 on Oct 23, 2013

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Is there any comparable competitor to the Cisco MDS-9148? Looking to hook up about 4 hosts with DP 8Gb/s FC adapters to a VNX 5400 loaded with flash/15k drives.

I don't generally work much with FC, and when I do it is mostly HP...

Also, besides clock speed and max drive count, is there a good compare-and-contrast of the new VNXs? Primarily the VNX5200 and VNX5400.

Got a much larger budget than I thought to overhaul the SAN for my VMware lab environments at school.

Dilbert As FUCK fucked around with this message at 01:04 on Oct 24, 2013

evil_bunnY
Apr 2, 2003

Just size it realistically and spend on something else?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evil_bunnY posted:

Just size it realistically and spend on something else?

The new servers I got all came with DP FC cards. If I don't spend the full budget on the project, welp, it's gone (lol government).

Personally I would prefer to do 10Gb, but buying accessories such as add-in cards for the hosts is harder than buying straight-up new equipment.

My preliminary plan is:
2x MDS-9148 (16 ports)
VNX5400
4x 400GB SSDs
25x 300GB 15k drives
25x 600GB 15k drives
+ FAST Suite

Already have a PS4000 and a NetApp FAS2040 loaded with a bunch of 1TB 7.2k drives plus 2 shelves of NL-SAS drives.

The NetApp will be the onsite backup and failback point if the VNX takes a massive poo poo; the PS4000 will most likely be offsite for backups.

Pretty psyched. While I just found out my budget, I really want to do my absolute best and dedicate as much time to this as I can, because I would really like to use it as a VCDX defense some day.
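
For a rough sense of what that parts list adds up to, here's a quick raw-versus-usable capacity sketch; the layout (SSDs reserved for FAST Cache, 15k drives in 4+1 RAID 5 with one hot spare per drive type) and the lack of any formatting/right-sizing overhead are assumptions, not EMC's sizing, so use the vendor's configurator for the real numbers.

code:
shelves = [
    ("400GB SSD",  4, 400),
    ("300GB 15k", 25, 300),
    ("600GB 15k", 25, 600),
]

raw_gb = sum(count * size for _, count, size in shelves)
print(f"raw capacity: {raw_gb / 1000:.1f} TB")

def rough_usable_raid5(count, size_gb, spares=1):
    # 4+1 RAID 5 loses roughly one drive in five to parity; keep a hot spare.
    return (count - spares) * size_gb * 4 / 5

usable_gb = rough_usable_raid5(25, 300) + rough_usable_raid5(25, 600)
print(f"rough usable (SSDs as FAST Cache, 15k in 4+1 RAID 5): {usable_gb / 1000:.1f} TB")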

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
What's making you lean to EMC for the storage?

Unless there are external factors (tied to certain vendors, crazy edu discounts) that aren't as apparent, paying the EMC brand premium for a lab environment seems like a waste. Plus, FC is probably the least user-friendly protocol.

I imagine you could get way more bang for the buck elsewhere. If you need to eat up the funds, why not spend it on capacity or flash?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Dilbert As gently caress posted:

Pretty psyched. While I just found out my budget, I really want to do my absolute best and dedicate as much time to this as I can, because I would really like to use it as a VCDX defense some day.

You're looking at the VCDX in entirely the wrong way based on this post.

That said you should look at either this:
http://www.brocade.com/products/all/switches/product-details/300-switch/index.page

or this:
http://www.brocade.com/products/all/switches/product-details/6505-switch/index.page

As alternatives to the MDS 9100 series.

quote:

What's making you lean to EMC for the storage?

It's possible, if this is for education use, that an EMC array may be needed to teach EMC-centric classes. Then again, it may not be, so this question should really be answered before cutting a PO.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

three posted:

What's making you lean to EMC for the storage?

Unless there are external factors (tied to certain vendors, crazy edu discounts) that aren't as apparent, paying the EMC brand premium for a lab environment seems like a waste. Plus, FC is probably the least user-friendly protocol.

We teach EMC, and I do like VNXs.

I completely agree on the FC point; I would much prefer 10GbE and do iSCSI/NFS, then FCoE if absolutely needed. However: 1) I don't have 10Gb adapters for the hosts, while I already have 8Gb cards in them, and 2) some others want to see what was purchased on the servers fully used and want to move ahead with it. Personally I feel it introduces a level of overhead that isn't needed, but it's a customer scope requirement.

1000101 posted:

You're looking at the VCDX in entirely the wrong way based on this post.

That said you should look at either this:
http://www.brocade.com/products/all/switches/product-details/300-switch/index.page

or this:
http://www.brocade.com/products/all/switches/product-details/6505-switch/index.page

As alternatives to the MDS 9100 series.
Cool, thanks, I'll look into those and the VCDX part. Oh, I just reread what I posted; that came out a bit wrong...

quote:

It's possible, if this is for education use, that an EMC array may be needed to teach EMC-centric classes. Then again, it may not be, so this question should really be answered before cutting a PO.

Bingo!

Dilbert As FUCK fucked around with this message at 01:55 on Oct 24, 2013

GrandMaster
Aug 15, 2004
laidback

Dilbert As gently caress posted:

Is there any comparable competitor to the Cisco MDS-9148? Looking to hook up about 4 hosts to some DP 8Gb/s FC adapters to a VNX 5400, loaded with Flash/15k.

Do you really need 48 ports? With a VNX and 4 hosts you'll only be using 8 ports on each switch.
There's the MDS-9124 or the Brocade 300, which are roughly equivalent (8Gb FC, 24 ports).
I find the Brocades a bit easier to work with (zoning, etc.), but Cisco's FC gear tends to be a bit cheaper.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

GrandMaster posted:

Do you really need 48 ports? With a VNX and 4 hosts you'll only be using 8 ports on each switch.
There's the MDS-9124 or the Brocade 300, which are roughly equivalent (8Gb FC, 24 ports).
I find the Brocades a bit easier to work with (zoning, etc.), but Cisco's FC gear tends to be a bit cheaper.

:doh: Sorry, yeah, the 9124. Wow, no idea why I copied the 9148...

Pile Of Garbage
May 28, 2007



GrandMaster posted:

I find the Brocades a bit easier to work with (zoning, etc.), but Cisco's FC gear tends to be a bit cheaper.

Seconding Brocade: FabricOS has a great GUI and the CLI is very sensible.

kiwid
Sep 30, 2013

Wikipedia posted:

However, the performance of an iSCSI SAN deployment can be severely degraded if not operated on a dedicated network or subnet (LAN or VLAN).

Can someone explain why this is the case? Why do they tell you to have a separate VLAN for iSCSI traffic, and in the case of MPIO, multiple VLANs?

Wicaeed
Feb 8, 2005
Do you really need multiple VLANs for MPIO?

I'm setting up an MSSQL cluster running on an EqualLogic backend, and it whines about not having multiple networks for the storage NICs, but works fine if both NICs are on the same subnet.

kiwid
Sep 30, 2013

Wicaeed posted:

Do you really need multiple VLANs for MPIO?


Pretty much everything I read says to use a different subnet for each interface. However, I guess you can do something with static routing to make it work, but I've not looked into that.

Syano
Jul 13, 2005

Wicaeed posted:

Do you really need multiple VLANs for MPIO?



You should if you want to protect yourself from switch misconfiguration

KS
Jun 10, 2003
Outrageous Lumpwad
Not really experienced with EqualLogic, but I seem to recall they're a special snowflake where one network is the correct config. Maybe someone can confirm. That's unusual, though. Most vendors, and I believe the MS software iSCSI initiator, want two networks to do MPIO.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

KS posted:

Not really experienced with EqualLogic, but I seem to recall they're a special snowflake where one network is the correct config. Maybe someone can confirm. That's unusual, though. Most vendors, and I believe the MS software iSCSI initiator, want two networks to do MPIO.

Unsure about EQ, but the PowerVault MD series wanted a different VLAN for each address.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

kiwid posted:

Can someone explain why this is the case? Why do they tell you to have a separate VLAN for iSCSI traffic, and in the case of MPIO, multiple VLANs?

There are a fair number of good reasons. A separate VLAN means a separate Spanning Tree domain, which can mean faster and more optimized convergence and resilience against Spanning Tree problems in other VLANs. There are also configurations that can be applied on a per-VLAN basis, such as MTU size and QoS. A dedicated VLAN also limits the amount of broadcast traffic that the NICs on your storage network have to deal with. And with a dedicated VLAN you can leave it un-routed and prune the VLAN off of trunks, to limit the possible paths the data can take through the network and keep things as efficient as possible.


Wicaeed posted:

Do you really need multiple VLANs for MPIO?

I'm setting up an MSSQL cluster running on an EqualLogic backend, and it whines about not having multiple networks for the storage NICs, but works fine if both NICs are on the same subnet.

This is a bad idea and isn't guaranteed to work with all clients and all storage. The basic problem: let's say you've got two physical interfaces on your host configured with IPs in the same subnet, 192.168.1.1 and 192.168.1.2, and you want to communicate with 192.168.1.3 which is a host somewhere on the network. Which interface will traffic to 192.168.1.3 leave out of?
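
Here's a toy model of the ambiguity being described (not any vendor's actual stack): with two NICs in the same subnet, both connected routes match the target equally well, so which NIC ordinary traffic leaves from comes down to an arbitrary tie-break, whereas distinct subnets make the choice unambiguous. MPIO-aware initiators work around this by explicitly binding each iSCSI session to a source interface, but that's exactly the client-specific behaviour you'd be relying on.

code:
import ipaddress

def pick_egress(routes, destination):
    # routes: list of (interface, connected network); longest-prefix match.
    dest = ipaddress.ip_address(destination)
    matches = [(iface, net) for iface, net in routes if dest in net]
    if not matches:
        return None, []
    best = max(net.prefixlen for _, net in matches)
    ties = [iface for iface, net in matches if net.prefixlen == best]
    return ties[0], ties          # ties[0] is effectively arbitrary

same_subnet = [                   # host NICs at 192.168.1.1 and 192.168.1.2
    ("nic0", ipaddress.ip_network("192.168.1.0/24")),
    ("nic1", ipaddress.ip_network("192.168.1.0/24")),
]
split_subnets = [                 # host NICs at 192.168.1.1 and 192.168.2.2
    ("nic0", ipaddress.ip_network("192.168.1.0/24")),
    ("nic1", ipaddress.ip_network("192.168.2.0/24")),
]

print(pick_egress(same_subnet, "192.168.1.3"))    # ('nic0', ['nic0', 'nic1']) -- two-way tie
print(pick_egress(split_subnets, "192.168.2.3"))  # ('nic1', ['nic1']) -- unambiguous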

kiwid
Sep 30, 2013

NippleFloss posted:

There are a fair number of good reasons. A separate VLAN means a separate Spanning Tree domain, which can mean faster and more optimized convergence and resilience against Spanning Tree problems in other VLANs. There are also configurations that can be applied on a per-VLAN basis, such as MTU size and QoS. A dedicated VLAN also limits the amount of broadcast traffic that the NICs on your storage network have to deal with. And with a dedicated VLAN you can leave it un-routed and prune the VLAN off of trunks, to limit the possible paths the data can take through the network and keep things as efficient as possible.


This is a bad idea and isn't guaranteed to work with all clients and all storage. The basic problem: let's say you've got two physical interfaces on your host configured with IPs in the same subnet, 192.168.1.1 and 192.168.1.2, and you want to communicate with 192.168.1.3 which is a host somewhere on the network. Which interface will traffic to 192.168.1.3 leave out of?

Ah, thanks for clearing that up. I could never find anywhere that actually explained that.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NippleFloss posted:

This is a bad idea and isn't guaranteed to work with all clients and all storage. The basic problem: let's say you've got two physical interfaces on your host configured with IPs in the same subnet, 192.168.1.1 and 192.168.1.2, and you want to communicate with 192.168.1.3 which is a host somewhere on the network. Which interface will traffic to 192.168.1.3 leave out of?
A decent storage device should be able to take care of this, regardless of client, with policy-based routing (a connection is started and associated with a network interface, then all traffic related to that connection flows over the same interface). In practice, this is pretty wonky and barely works right in Linux and FreeBSD, and I wouldn't trust most storage vendor firmware based on VxWorks or whatever to just get it right.

Langolas
Feb 12, 2011

My mustache makes me sexy, not the hat

Another thing is having to deal with iSCSI initiators and targets. Some storage array ports can act as both initiator and target for certain types of replication and copying from SAN to SAN. If you don't segregate the traffic properly, the SAN can possibly log in to itself through misconfiguration and start bouncing your iSCSI ports. I've seen quite a few clients set things up on their own and wonder why their iSCSI infrastructure isn't working how they'd like.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


For something like an EqualLogic PSx100 series SAN, how exactly would you do different VLANs per path?

You have two controllers with 4 ports each. Only 4 are active at once, and they do vertical port failover, so you would need 4 VLANs. If each vertical port group was on a different VLAN, wouldn't you need 8 ports per host to connect to all the paths?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

bull3964 posted:

For something like an EqualLogic PSx100 series SAN, how exactly would you do different VLANs per path?

You have two controllers with 4 ports each. Only 4 are active at once, and they do vertical port failover, so you would need 4 VLANs. If each vertical port group was on a different VLAN, wouldn't you need 8 ports per host to connect to all the paths?
You don't need active/passive network connections to be on separate VLANs -- I don't even see how that would work, since you couldn't have the same IP address between interface pairs.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Misogynist posted:

You don't need active/passive network connections to be on separate VLANs -- I don't even see how that would work, since you couldn't have the same IP address between interface pairs.

I know, that's why I said 4 VLANs. Each pair of vertical failover ports would be on its own VLAN.

For example:

0 A/B 10.100.1.x
1 A/B 10.100.2.x
2 A/B 10.100.3.x
3 A/B 10.100.4.x

But, if you want to have two switches for redundancy, each pair of a vertical port group would be connected to a different switch (A plugged into one, B plugged into the other). If you wanted all 4 ports to be accessible from each host even during a switch failure, then you would need 8 ports on each host, 4 connected to switch A (one on each VLAN) and 4 on switch B (one on each VLAN).

If you had fewer ports on a host, I don't see how the host would be able to see all ports on the storage in the event of a switch failure and you would lose half your storage bandwidth.

Dell's best practice for that array has all 4 ports on a single VLAN, using a single VLAN for all your storage traffic.

bull3964 fucked around with this message at 05:35 on Oct 26, 2013
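
A quick enumeration of that port math, under the assumptions stated above (4 vertical port pairs, one VLAN per pair, A-side ports on one switch and B-side on the other, vertical failover moving the active port to the surviving switch; the host port layouts are hypothetical):

code:
VLANS = {"vlan1", "vlan2", "vlan3", "vlan4"}      # one per vertical port pair

def reachable_vlans(host_ports, failed_switch):
    # After vertical failover every VLAN still has an active array port on the
    # surviving switch; the host just needs a surviving port in that VLAN.
    return {vlan for vlan, switch in host_ports if switch != failed_switch}

four_port_host = {("vlan1", "switchA"), ("vlan2", "switchA"),
                  ("vlan3", "switchB"), ("vlan4", "switchB")}
eight_port_host = {(v, s) for v in VLANS for s in ("switchA", "switchB")}

for name, host in [("4-port host", four_port_host), ("8-port host", eight_port_host)]:
    ok = reachable_vlans(host, "switchA")
    print(f"{name}: reaches {len(ok)}/{len(VLANS)} array ports after switchA fails")

So the 4-port host is indeed down to half the array ports after a switch failure, which is why the single-VLAN layout in Dell's best practice is the simpler answer for this particular array.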
