Echidna
Jul 2, 2003



Well, I have finally got a "proper" iSCSI array for a small Xen virtualisation setup, so I can shift away from the current homebrew DRBD/Heartbeat/IET setup.

It's the Dell MD3000i, which I saw mentioned earlier along with some vaguely negative comments. It is a budget array, but I have to say for the price it's not a bad bit of kit, especially after we got our Dell account manager to knock the price down by a huge amount, as we were ordering just in time for their end-of-month tally.

We've got it configured with dual controllers and 8x 300GB plus 7x 146GB 15K SAS drives. Throughput is around GigE wire speed - 110MB/s for both reads and writes. I'm also seeing respectable IOPS figures depending on workload; during an iozone run, it sustained around 1.5K IOPS to a RAID 5 volume.
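For the curious, the IOPS numbers came from runs roughly like this (the mount point and sizes here are illustrative placeholders, not my exact invocation) - the -I flag uses O_DIRECT so you're measuring the array rather than the host's page cache:
code:
# Illustrative iozone run - path and sizes are placeholders
# -i 0/1/2 = write/rewrite, read/reread, random read/write tests
# -r 4k    = 4KB record size, -s 8g = 8GB test file
# -I       = O_DIRECT, bypasses the local page cache
iozone -i 0 -i 1 -i 2 -r 4k -s 8g -I -f /mnt/iscsi-test/iozone.tmp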

True, the management features are a world apart from the usual Sun and HP kit I'm used to, but it does the job. My main gripes are:

  • No built-in graphing (seriously, Dell - WTF?), but you can do it from the CLI (rough sketch after this list) - see http://www.delltechcenter.com/page/MD3000i+Performance+Monitoring
  • Can't resize or change the I/O profile of a virtual disk once it's set up. This is a PITA, so make sure you set things up correctly the first time! You can, however, change the RAID level of a disk group once it's been created.
  • You need a Windows or RHEL box to run the administration GUI on - I'm sure you can probably hack a way to get the CLI running under Debian, but I haven't tried. You're probably SOL if you want to run it on anything else like Solaris. Update: it looks like the admin tool and SMcli are just shell scripts that run Java apps. I tried a quick'n'dirty hack of installing everything under RHEL, tarring up /opt/dell and /var/opt/SM and then transferring them over to a Debian Lenny host. All I had to change was the #!/bin/sh to #!/bin/bash at the top of the SMcli and SMclient wrappers, and they seem to work (steps sketched after this list). I haven't put them through any serious testing, though...
  • Can't mix SAS and SATA in the same enclosure. The controllers support SATA as well as SAS, although SATA drives don't show up as options in the Dell pricing configuration thingy. Our account manager advised us that although you technically can mix SAS and SATA in the same enclosure, they'd seen a higher-than-average number of disk failures in that configuration, due to the vibration patterns created by disks spinning at different rates (15K SAS and 7.2K SATA). If you need to mix the two types, your only real option is to attach an MD1000 array to the back (you can add up to two of these) and fill each chassis with just one type of drive.
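As promised above, here's a rough sketch of the CLI graphing workaround from that Dell TechCenter link - the array name is a placeholder, and you should check the linked page for the exact syntax on your firmware revision:
code:
# "MyArray" is a placeholder for whatever name your array is registered under.
# Sample performance stats every 5 seconds for 60 iterations, appending the
# output to a file you can graph with gnuplot/Excel/whatever.
SMcli -n MyArray -c "set session performanceMonitorInterval=5 performanceMonitorIterations=60; show allVirtualDisks performanceStats;" >> perfstats.csv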


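For anyone wanting to repeat the Debian hack from the list above, the steps boil down to something like this - the exact wrapper paths under /opt/dell may differ depending on the installer version, so treat these as illustrative:
code:
# On the RHEL box with the Dell storage manager already installed:
tar czf sm-tools.tar.gz /opt/dell /var/opt/SM

# On the Debian Lenny host (GNU tar strips/restores the leading / itself):
tar xzf sm-tools.tar.gz -C /
# The wrappers use bash-isms and Debian's /bin/sh may not be bash,
# so change the shebang at the top of each wrapper (paths illustrative):
sed -i '1s|#!/bin/sh|#!/bin/bash|' /opt/dell/*/client/SMcli /opt/dell/*/client/SMclient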
The hardware failover works nicely. The array is active/passive per virtual disk: both controllers are typically active at once, each owning separate virtual disks for load-balancing purposes, but any given virtual disk is served by only one controller at a time. When a controller fails, the remaining "good" controller takes over the virtual disks or disk groups from the failed one. Failback is pretty transparent - the GUI guides you through the steps, but I found that simply inserting a replacement HD/controller/etc. just did the job automagically.

Multipath support under RHEL/CentOS works fine with some tweaking - it uses the RDAC modules, which led to some oddness on CentOS 5.3. What tends to happen is that the first time device mapper picks up the paths, RDAC doesn't get a chance to initialise things properly (the scsi_dh_rdac module isn't loaded), so you end up with all sorts of SCSI errors in your logs. After flushing your paths (multipath -F) and restarting multipathd, things are OK. This is apparently fixed in RHEL 5.4 (https://bugzilla.redhat.com/show_bug.cgi?id=487293), so it should make its way out to CentOS from there. I'm unsure what the status is on other distros, though.
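If you do hit this on CentOS 5.3, the manual recovery amounts to the following (assuming the RDAC handler module is available on your kernel):
code:
# Load the RDAC device handler that wasn't in place at discovery time
modprobe scsi_dh_rdac
# Flush the stale multipath maps, then restart multipathd so the paths
# get rediscovered with the handler attached
multipath -F
service multipathd restart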

My multipath.conf contains the following:
code:
devices {
        device {
                # Match the MD3000i by its SCSI vendor/product identifiers
                vendor "DELL"
                product "MD3000i"
                # Group paths by controller priority, so the owning
                # controller's paths form the preferred group
                path_grouping_policy group_by_prio
                # Generate a unique WWID per LUN
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                # Use the RDAC-aware checker and priority callout for
                # this active/passive (LSI/Engenio-based) array
                path_checker rdac
                prio_callout "/sbin/mpath_prio_rdac /dev/%n"
                hardware_handler "1 rdac"
                # Fail back to the preferred controller as soon as it returns
                failback immediate
        }
}
And with everything working, multipath -ll shows the active paths on the owning controller (prio=200) and the passive "ghost" paths on the alternate controller (prio=0):
code:
360026b90002ab6f40000056a4aa9e87b dm-12 DELL,MD3000i
[size=409G][features=0][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=200][active]
 \_ 21:0:0:1  sdi 8:128 [active][ready]
 \_ 22:0:0:1  sdj 8:144 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 20:0:0:1  sdg 8:96  [active][ghost]
 \_ 23:0:0:1  sdh 8:112 [active][ghost]
Just thought I'd chime in with my experiences as I didn't see any feedback on this particular array before.

Echidna fucked around with this message at 10:08 on Sep 15, 2009

Echidna
Jul 2, 2003



lilbean posted:

Our Sun reseller gave us this bullshit before, and we called him out on it. He said it was in the sales team's training literature. Dickheads.

Interesting that you've heard that line before as well. It did sound a bit fishy to me (I've used a mixed chassis before), but I figured that regardless of the truth behind the statement, if that's what they're saying then I'd rather go with their recommended solution - I really don't want any finger-pointing when I need support later.

Ah well, it actually worked out in our favour - our account manager obviously believed it, and knew I wasn't going to buy an additional MD1000 (no physical space for it in the rack), so he swapped the SATA drives out of my quote and replaced them with higher-capacity 15K SAS drives so we'd be on a single drive type. He then slashed the price of the whole thing to well under what we'd been quoted for the original SATA solution.

It's amazing what putting an order through on the last day of the month can do, when they have targets to meet...

Echidna fucked around with this message at 06:19 on Sep 15, 2009

Echidna
Jul 2, 2003



Syano posted:

I've got an MD3200i being delivered soon. How do you like the thing?

We have an MD3220i (the one with 24 disks), and my opinion of it is mixed. On the positive side: having a total of 8 iSCSI ports is great, the management software is vastly improved, it has 4 times the cache of the MD3000i, and it's generally a pretty nice bit of kit to manage. It also uses DM-Multipath on Linux (although you do still need to load the supplied RDAC drivers).

On the downside: despite various bits of documentation proclaiming XenServer compatibility, the driver installation makes modifications to /etc/multipath.conf, which under XenServer is a symlink. You need to make sure the changes go into /etc/multipath-enabled.conf for it to work (quick check sketched below). Performance is a bit ho-hum, and it makes me wonder if Dell have deliberately crippled it so they can sell you the 3k "High performance" licence key.
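An illustrative sanity check on a XenServer host before trusting the installer's changes:
code:
# /etc/multipath.conf is a symlink on XenServer - confirm where it points
ls -l /etc/multipath.conf
# Make sure your device stanza ends up in the symlink's target,
# not in a new regular file dropped over /etc/multipath.conf
vi /etc/multipath-enabled.conf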

On the major downside: there have been big QA problems with it. It arrived with 2 LEDs on the front not working. We called out for a replacement backplane, which arrived DOA. We got another replacement; this one didn't work either, as the controllers refused to recognise it. Next day, a new backplane and a new controller turned up - again, no joy. At that point, we had Dell tech support all over the issue, with engineers in two countries working on it.

They finally managed to resolve the issue and re-created it in a lab - it appears that when replacement backplanes are tested before being shipped, they are installed in a test unit to verify that they work. However, this "locks" the backplane to the test system's controllers, so it cannot be recognised by any other system. The engineer had to hook up to our unit via a serial connection and issue some weird commands (it looked like he was changing the contents of memory locations - like an old-school "poke") before the new backplane was recognised.

Only, this replacement also had faulty LEDs on it. One week and 6 replacement parts later, we finally had a working system. I know it's a new unit, but QA issues like this should really be caught before things leave the factory - and the issue with backplanes getting locked should never have happened. The engineers and Dell support team had no idea about it before we raised it; you'd expect them to catch issues like this before customers run into them.

Echidna
Jul 2, 2003



vty posted:

I'll update when I've actually got it running and handling my VMs this weekend.

I'd be interested in how your experiences compare to mine. What virtualisation platform are you using?

Echidna
Jul 2, 2003



Well, this is interesting: http://www.theregister.co.uk/2011/03/10/netapp_goes_diversified/

Seems like a good strategy for NetApp, but I'm a little surprised that LSI let the Engenio division go. I'm pretty sure Oracle will kill the OEM deal with Sun in the not-too-distant future, but the IBM and Dell Engenio-based offerings are still going strong...
