Hok
Apr 3, 2003

Cog in the Machine

three posted:

I found his report from Equallogic showing it's able to hit almost full link speed.

I don't think this guy would even know if Equallogic had certain limitations. He managed to spell Equallogic incorrectly as "Equilogic" throughout the entire writeup. :psyduck:

Speaking as someone who deals with these arrays on a daily basis, this guy is completely full of it.

Assuming you've got support on the array, open a case with Equallogic; they'll be able to tell you where the bottleneck is. You'd be surprised how much info is contained in the diag logs.

If that array can't saturate a 1Gb link, there's an issue somewhere.
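
For reference, a back-of-the-envelope number on what "saturate" should look like (my own arithmetic, not from that report):

```python
# Rough usable throughput for a single 1Gb iSCSI link.
raw_mb_s = 1_000_000_000 / 8 / 1e6    # 125 MB/s at line rate
efficiency = 0.9                      # rough TCP/IP + iSCSI overhead allowance (assumption)
print(f"~{raw_mb_s * efficiency:.0f} MB/s usable")  # ~112 MB/s
```

Anything much below that on sequential I/O and the diag logs are the place to look.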

Hok
Apr 3, 2003

Cog in the Machine

Xenomorph posted:

Our current Xraids have 12 HDDs (two RAID 5s) and use two 2Gb Fibre Channel connections.
Our current Xserves have a Fibre Channel card, 2Gb each port.
<snip>
Does anyone have experience with this? With ordering from Dell? $4,000 server plus almost $4,000 for adding Fibre Channel (parts and service plan) is nuts.
<snip>
Should we just drop Fibre Channel?


Are you trying to keep your existing Xraids and use them with new servers, or are you replacing the lot?

If you're replacing the lot, don't bother with fibre unless you're talking about a decent-scale SAN.

If it's just a single server with direct attached storage, get an MD1200/1220; 6Gb SAS is a lot less hassle than fibre and a hell of a lot cheaper. If you want a couple of servers connected, go for an MD3200/3220. They've got four 6Gb SAS ports on each controller, so you can hang four servers off them and still have full redundancy on the links.

Hok
Apr 3, 2003

Cog in the Machine

Jadus posted:

Somewhat related, we're looking at an MD3220i with two hosts, and since we don't expect to grow beyond two hosts for a while, we're thinking about cutting out redundant switches between the servers and the MD3220i.

If we do this, is load balancing still possible across the links when direct connecting? For example, will it work properly to connect two NICs from server 1 to Controller 1, two NICs from server 1 to Controller 2, and the same for server 2?

Sorry for the slightly delayed response, haven't been checking in much the last week.

Yeah, if you've only got two systems there's no reason not to direct connect. You've got four ports on each controller and you can use all of 'em, and with enough links connected it will load balance fine as long as you've got MPIO sorted properly.

If you've only got two ports available on a server, connect one to each controller. This gives some load balancing, but as each virtual disk is bound to a controller you'll only get 1Gb of throughput per VD; the gain is spread across all the VDs.

Best case is four ports on each server, with two connected to each controller. That gives full load balancing and a nice perf boost; the rough port math is sketched below.
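
A toy sketch of that port math (my own illustration with made-up numbers, assuming 1Gb per MD3220i iSCSI port):

```python
# Toy illustration: a virtual disk is bound to one controller, so its
# throughput ceiling is the server's links into that controller,
# not the server's total port count.
PORT_GBIT = 1.0  # 1Gb per iSCSI port (assumption for the MD3220i)

def per_vd_ceiling(links_to_owning_controller: int) -> float:
    return links_to_owning_controller * PORT_GBIT

print(per_vd_ceiling(1))  # 2 NICs, one per controller: 1.0 Gb/s per VD
print(per_vd_ceiling(2))  # 4 NICs, two per controller: 2.0 Gb/s per VD
```

With two NICs you still get parallelism across VDs owned by different controllers; the four-NIC layout is what lets MPIO push a single VD past 1Gb.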

Hok
Apr 3, 2003

Cog in the Machine

Badgerpoo posted:

Currently one team does both the SAN switches and the storage; it's looking like the SAN switches would come to the Network team, as we know how to connect things together properly. I suppose we would be classified as a medium/large enterprise? I have no experience with SAN/NAS systems, so I'm wondering if doing this breaks something crucial in the whole process of running a cohesive service.

I'm working for a vendor these days, but I was part of a storage team in my last job, and we did everything storage related, basically from the storage arrays and tape libraries to the fibre cards in the servers.

I'm not sure how well what you describe would work. It might be OK as long as the network guys know something about storage; it's similar to normal networking, but not exactly the same.

I think to manage a fibre network you either need a storage guy who knows some networking or a networking guy who knows some storage.

Hok
Apr 3, 2003

Cog in the Machine

Badgerpoo posted:

The problem we have is that the systems team itself don't really know a great deal about SANs either. They seem to have been sold a bunch of different systems over the years and they've never quite worked properly.

Fibre Channel and Ethernet networking are converging to an extent, as there is increasing technology crossover (Ethernet is emulating some of the features of Fibre Channel with larger layer 2 domains, kinda). The idea is that we would learn Fibre Channel properly and do it properly, but would leave the management of the SAN itself to the systems guys. Operationally, when Systems want a new link they would request it from us, and we would then implement it in the best way.

Fibre Channel isn't that hard, really. Just remember the basic rules and stick to them: keep simple, obvious naming standards; make them flexible enough that you don't need exceptions; one initiator per zone; and the main one, redundancy, redundancy, redundancy.

If you've got an existing environment which hasn't been designed properly, you really need to redo large amounts of it to introduce some sanity, and doing so without having an impact on production can be difficult.

Start by doing a full audit: list everything you've got, what servers are using what storage on what arrays, and how they're currently connected. Then put a design together; work out how everything should be connected, what your zones should be, and how it all fits together.

That's the easy bit. Next you need to go from where you are to where you need to be, and to do so without having a huge impact on production. Get that right and your next job is going to pay a fuckload more than you're getting now.
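
Once the audit's in a machine-readable form, checks like the one-initiator rule take a few lines. A toy sketch (zone names and aliases are made-up examples):

```python
# Flag zones that break the one-initiator-per-zone rule.
# Zone names and member aliases here are invented for illustration.
zones = {
    "z_web01_arrA": {"initiators": ["web01_hba0"], "targets": ["arrA_spa0"]},
    "z_legacy_mess": {"initiators": ["db01_hba0", "db02_hba0"],
                      "targets": ["arrB_spa1"]},
}

for name, members in zones.items():
    if len(members["initiators"]) != 1:
        print(f"{name}: {len(members['initiators'])} initiators (want exactly 1)")
```

Running something like that over a real config dump is a decent first step in the "introduce some sanity" phase.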

Hok
Apr 3, 2003

Cog in the Machine

blk posted:

I'm thinking about replacing my nonprofit's file server.
<snip>

There are two questions here. First, what's the limiting factor on your current system: is it just storage, or are you seeing memory/CPU issues as well?

The other is how much you've got to spend. If you have the budget for something better and it's going to go away in a few months if you don't spend it, then you really should spend it.

If it's just a storage issue, upgrade the memory and add a couple of extra drives. You've probably got six drive bays (that's the most common version out there), so add two more and give yourself some extra capacity. The PERC 5 can handle eight drives; the limit will be the bays.

If the money just needs to be spent, then go for the new system. $5k will get you a fairly well-specced R710, which will blow the old system away.

Hok
Apr 3, 2003

Cog in the Machine

Vanilla posted:

Been a loooooong time since I touched iSCSI, so I have some 101 questions which I know the answers to, but things change, so worth asking!

-- With iSCSI it's still OK to just use the server's standard NICs, but you need an iSCSI TOE card if you want to boot from an iSCSI SAN?
-- All you need with iSCSI is the MS initiator, which is free.
-- Generic IP switches are fine - or do they need certain 'things'?

Anyone know some free, accepted tools for copying LUNs off of DAS onto arrays? Robocopy still liked?

Any general rules for someone moving from just DAS onto an iSCSI array?

You need an iSCSI HBA to boot from iSCSI; TOE on its own won't do it.

TOE isn't really needed these days with the amount of CPU grunt we have available, and I've seen it cause lots of issues, especially with jumbo frames in use.

And yup, the MS initiator is all that's needed on the host side.

As for the switches, they don't need to be anything special, but I'd avoid the really cheap ones.

Just make sure they support jumbo frames and flow control; the ability to VLAN off your iSCSI ports can also help.
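
If you do turn jumbo frames on, verify they actually took on every hop. A quick host-side sketch for Linux (the interface name is a made-up example; the /sys path is standard):

```python
# Sanity-check that an iSCSI-facing NIC really has a jumbo MTU.
# "eth1" is an invented example; substitute your iSCSI interface.
from pathlib import Path

iface = "eth1"
mtu = int(Path(f"/sys/class/net/{iface}/mtu").read_text())
print(f"{iface}: MTU {mtu}" + ("" if mtu >= 9000 else " - jumbo frames not enabled"))
```

The switch ports and the array interfaces need matching MTUs too; a mismatch anywhere on the path is where most of the jumbo-frame grief comes from.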

Hok
Apr 3, 2003

Cog in the Machine

Intraveinous posted:

Not sure if Dell's current PERC cards include R6 support by default, or if it's a feature you have to license (you do on some HP cards).

The PERC 5/i didn't; the 6/i and H700 do.

Hok
Apr 3, 2003

Cog in the Machine

conntrack posted:

The Compellent system looks cool, but the presales guys from Dell looked miffed when I asked them technical details about their "not mirroring" secret sauce.

This is either them lying/embellishing the truth, or they just don't know. Either way it makes them look bad.

Many people are starting the scale-out-with-servers game. Has NetApp come up with a counter product/propaganda?

They're sales, so embellishing the truth is standard practice. Having said that, I'd say it's most likely that they just don't know; Compellent is very new to Dell and everyone's still madly trying to get up to speed on it. I've got a heap of training on it still to do on the support side.

As I understand it, the way they do the RAID (I haven't read up on it in any detail yet, so I might be getting it wrong) is just RAIDing at the block level rather than the disk level.

If you create a 4+1 RAID 5, it will grab five blocks off whichever disks in whichever enclosures it decides are suitable and RAID them, then repeat using blocks from other disks until it has a big enough volume. There's some load balancing and optimization smarts behind it, but that's the basics.
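
A toy sketch of that allocation idea (entirely my own illustration of the description above, not Compellent's actual algorithm):

```python
# Block-level RAID toy model: each 4+1 stripe takes one free block
# from each of five distinct disks, repeating until the volume is built.
import random

# 12 disks with 100 free blocks each (invented numbers)
disks = {f"disk{d}": list(range(100)) for d in range(12)}

def allocate_stripe(width=5):
    """Take one free block from each of `width` different disks."""
    candidates = [d for d, free in disks.items() if free]
    return [(d, disks[d].pop()) for d in random.sample(candidates, width)]

volume = [allocate_stripe() for _ in range(10)]  # ten 4+1 stripes
print(volume[0])  # e.g. [('disk7', 99), ('disk2', 99), ...]
```

The real thing layers tiering and load balancing on top, but the key point is that stripes are built from blocks, not whole disks.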

Hok
Apr 3, 2003

Cog in the Machine

FISHMANPET posted:

I'm not 100% on this, but I don't think you even specify you want a 4+1; you just specify that in your tier you want some RAID 5 and some RAID 10, and the data progression magic figures out how many blocks to RAID 5 and how many to RAID 10 so that you get the right speed for your progression. Though I'm not 100% on that, as I wasn't entirely understanding the blocks when he answered my question.

I'm still in the early stages of working my way through the training material, so I'm not 100% on it either, but I'm curious, so I'll ask the L2 at work who spent a couple of weeks at Compellent doing training; he has some more in-depth docs than I've been able to get my hands on so far.

FISHMANPET posted:

As for support, we're in the Twin Cities, 20-40 minutes away from Eden Prairie, depending on traffic, so hopefully we won't have any problems getting on-site service quickly.

The onsite work is outsourced. I'm not sure exactly who they use in the US, but here in Oz I've heard it will be the same Unisys or Fujitsu guys Dell use for the rest of their onsite work. They'll probably just train up the guys who currently do the EMC work.

From a hardware point of view it's all pretty generic server hardware. The SAS arrays look a lot like some of the HP ones, and from my understanding there's no reason they couldn't just drop in Dell MD1200/1220 shelves instead. I'm sure the Dell hardware guys are working away at adapting some of the existing server hardware for it; if it wasn't for the need for a heap of PCI slots, any of the two-socket PowerEdges would do just fine.

Hok
Apr 3, 2003

Cog in the Machine

Vanilla posted:

I've always been very dubious of the 'treat snapshots as a backup' line. The safest method is backing up to a different medium (tape, backup disk, backup appliance).

Replicating to another array isn't going to help you if a disgruntled employee wants to cause some damage or a hacker gains access to your arrays, à la Distribute IT:

http://www.theregister.co.uk/2011/06/21/hacks_wipe_aus_web_and_data/

Get tape in there somewhere and get it going to tape regularly!!

I'm sorry, but snapshots are not backups. They're useful tools, but if the poo poo hits the fan you can't rely on them.

To trust a backup, you need three copies of the data, on two different types of media, with one of them offsite.
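
That 3-2-1 rule is mechanical enough to write down as a check. A trivial sketch (the copy list is a made-up example):

```python
# 3-2-1 rule: at least 3 copies, on 2 media types, 1 of them offsite.
# The copies listed are invented examples.
copies = [
    {"where": "production array", "media": "disk", "offsite": False},
    {"where": "backup appliance", "media": "disk", "offsite": False},
    {"where": "LTO5 tapes in the vault", "media": "tape", "offsite": True},
]

ok = (len(copies) >= 3
      and len({c["media"] for c in copies}) >= 2
      and any(c["offsite"] for c in copies))
print("3-2-1 satisfied" if ok else "not a backup you can trust yet")
```

Snapshots don't count as a copy for this purpose: they live on the same array as the data they're meant to protect.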

Hok
Apr 3, 2003

Cog in the Machine

three posted:

We've had good luck with Dell's tape drives. They're just rebranded IBM drives, I believe. v:shobon:v

The ML6000s are Quantum, the TL2000/4000s are IBM, and the smaller Dell drives are mostly Quantum.

Tape libraries are pretty reliable these days, which is more than can be said of a certain brand of LTO5 tapes that are trashing drives left, right, and center.

Hok
Apr 3, 2003

Cog in the Machine

Vanilla posted:

Yeah, saw this a while back; I had to inform someone a few months ago that they needed to upgrade before deploying any 10.7. The patch was released almost 6 months ago, as the issue was noted in the OS X Lion beta.

It happened to quite a few vendors, as it's a change in the Apple code.

I've seen this happen twice so far; one yesterday caused a pretty major outage for a customer. It's a rather amusing issue if you're not the one having to fix it.

I really don't think many EMC customers do FLARE updates on a regular basis; I regularly see CLARiiONs running versions that are years old. I think we'll be seeing this for a while yet.

Equallogic, on the other hand, put out new firmware every three weeks; they've got their customers pretty well trained by now.

Hok
Apr 3, 2003

Cog in the Machine

Oddhair posted:

VVV This document says it requires both to function, but I can't imagine the point of the thing crashing if it doesn't have multiple controllers; that kind of defeats the purpose of redundancy. I might just play with it this evening, the goonmate will be out.

You can run with one controller, but you need to put it into simplex mode; there's an SMcli command to do it.

You'll also need to reflash the NVRAM with the single-controller version.

Switching from iSCSI to SAS will probably require a full syswipe, so it's not something you want to do just for the sake of it.

Hok
Apr 3, 2003

Cog in the Machine

ozmunkeh posted:

Does anyone have any idea when Dell might be refreshing the Compellent controller offerings? I'm sure those with larger server rooms or datacenters can be a little more flexible about physical space but having to give up 6U before even attaching any disk shelves is a little much for those of us with limited rack space.

I've heard late Q4, but nothing official yet.

Hok
Apr 3, 2003

Cog in the Machine

I've seen more calls with issues on Broadcom 10G cards than with the Intel ones, although that might just be because there are more Broadcom cards out there.

I can't give solid evidence, but if I were putting together a system I was responsible for, I'd be using the Intels.

As for the performance of a software vs. hardware initiator: if the software works well, use it.

700MB/s and up is nothing to complain about.

Hok
Apr 3, 2003

Cog in the Machine

three posted:

I don't think Compellent should be considered whitebox, but I also don't think SuperMicro is a great hardware maker. I look forward to them being moved to Dell hardware.

You're not the only one. I don't think it's far off either; I'd heard Q4, but I'm not sure if they'll make that now.

Hok
Apr 3, 2003

Cog in the Machine

It's pretty simple to run tests with and without iSCSI offloading enabled; if you're concerned, do that and go with whatever gives you the best results.

In my experience offload isn't a good thing, especially with Broadcom cards.
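
For the test itself, a crude sequential-read timing is usually enough to show a difference between the two modes. A minimal sketch (the path is a made-up example; use a file much bigger than RAM so the page cache doesn't flatter the numbers):

```python
# Crude sequential-read benchmark: run once with offload on, once off.
# /mnt/iscsi/testfile is an invented example path.
import time

CHUNK = 1024 * 1024  # 1 MiB reads
total, start = 0, time.time()
with open("/mnt/iscsi/testfile", "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.time() - start
print(f"{total / elapsed / 1e6:.0f} MB/s over {total / 1e9:.1f} GB")
```

Watch CPU usage while it runs too; the whole argument for offload is CPU savings, so if the software initiator wins on throughput and the CPU isn't pegged, there's nothing left to debate.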

Hok
Apr 3, 2003

Cog in the Machine

BnT posted:

Hopefully this is an easy one, but it's focused more on the networking side of a SAN. I have inherited an iSCSI SAN with a single VLAN and a single IP network. It looks like this:

1. A SAN with 8 active and 8 failover interfaces, all plugged into multiple core switches
2. Four core switches running in an RSTP spanning-tree ring
3. Four edge switches, each with redundant trunks into two of the core switches
4. Hosts multi-homed into two of these edge switches

While this seems like a valid configuration (everything should tolerate a switch failure), it's not ideal, correct? I'm guessing that there needs to be another VLAN with different IP numbering for a second network to mesh all this together properly and allow for better use of the redundant links via MSTP or multipathing? As far as performance goes we're not touching any of the capacity, so making changes for performance isn't currently needed. My goals at this point are to provide stability and avoid downtime.

It's going to depend on what type of storage you've got; each has its own preferred way of setting up the network.

Some just want a single iSCSI VLAN with all the interfaces on it; others prefer two or more VLANs.

Hok
Apr 3, 2003

Cog in the Machine

Walked posted:

Setting up MPIO and iSCSI exactly as configured here, but nowhere do the Dell docs say anything about subnet configuration for MPIO.

You might find this page handy.

http://en.community.dell.com/techcenter/storage/w/wiki/3615.rapid-equallogic-configuration-portal-by-sis
