Erwin
Feb 17, 2006

What do folks use for backup-to-disk storage? Currently we have a Buffalo TeraStation at each site. Other than not supporting SNMP monitoring (seriously, Buffalo?), they've met our needs. Is this a bad idea or acceptable? I will need to increase our B2D capacity and will probably just get a second NAS, unless that's a bad idea for some reason.

Cowboy Mark
Sep 9, 2001

Grimey Drawer
I work at a small company and have basically gravitated towards being the IT admin. It was decided we needed to replace our completely non-redundant old server with something new. High availability was discussed, and this was passed to Dell.

Dell sold us 2x servers and an MD3000i. Both servers are Server 2008 Enterprise edition, configured as a failover cluster, both connected to the MD3000i via iSCSI.

I've now been given the task of moving all of our services to this new cluster. It was quickly established that Microsoft strongly recommends that a failover cluster node not also be an Active Directory domain controller - so we've just brought in another two new servers to be domain controllers (and WSUS).

As I understand it, iSCSI works at the block level, so only one host can be connected to a given virtual disk at a time.

The fileserver role is perfect. When one host is shut down, the other node's iSCSI initiator picks up the virtual disk and the failover is seamless, though I think clients with open handles may be interrupted.

The idea is that if power is yanked from one server (as an example; it's all hooked to a beefy UPS) or something goes catastrophically wrong, the other will instantly pick up the slack and nobody has to get out of bed at 3am.

This cluster also needs to be our mail server and Sophos Antivirus master server. Our mail server has been Merak IceWarp since long before I joined, and will be for the foreseeable future; it is not 'cluster aware'. Applications running on each host of the cluster obviously need their configuration and data on shared storage (i.e. the SAN), but this is only accessible to one host at a time, and in the case of failover they need to be ready and waiting. We also have some applications that act as servers for various scientific instruments and will likely never be cluster aware - is there any way to hack these into working in a failover mode? Perhaps a script that detects that failover has occurred and fires up the various applications once the host has picked up the virtual disk from the SAN?

Also, I think the top brass were under the impression that shared storage meant literally that - that both hosts could use a single virtual disk simultaneously and that both hosts would be doing work, except in a failure when one would take over all duties. It seems we're going to have one very expensive hot standby.

Personally I'd have gone with VMware high availability and moved virtual machines around instead of this clustering lark, but I think it's a bit late for that now. Does anyone have any advice? Have Dell screwed us over and sold us something not fit for purpose?

Also I'm pretty up on IT and knew how a lot of this worked beforehand, but this is the first time I've ever physically had my hands on such kit.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Cowboy Mark posted:

Personally I'd have gone with VMware high availability and moved virtual machines around instead of this clustering lark, but I think it's a bit late for that now. Does anyone have any advice? Have Dell screwed us over and sold us something not fit for purpose?
It'll fit, because you can run Hyper-V as a clustered role. It can be active/active, and since you have Enterprise edition you can run 4 VMs on each host (not sure how the licensing compliance angle works out if one host fails). Even if you decided to pick up VMware at this point, your Windows Server Enterprise licenses aren't wasted, because each one allows 4 VMs on a single host.

Nukelear v.2
Jun 25, 2004
My optional title text

Cowboy Mark posted:

I work at a small company and have basically gravitated towards being the IT admin. It was decided we needed to replace our completely non-redundant old server with something new. High availability was discussed, and this was passed to Dell.

Nothing in iSCSI precludes two systems from working against the storage array. Windows clustering, however, means that one node will 'own' a given drive, so partition your MD logically and create groups of services and their associated disks in the cluster manager. Then distribute ownership of those service groups among the cluster nodes as needed.

Most basic Windows services can be clustered without being specially cluster aware, though you'll need to look at it on an app-by-app basis. Worst case, as adorai says, stick them in a Hyper-V VM and cluster that.

Cowboy Mark
Sep 9, 2001

Grimey Drawer
Excellent! Thank you guys. I forgot about Hyper-V. Dell shipped the servers with a 32bit OS, so I'm digging some discs out now.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
How much experience do people here have with Sun's X4500/X4540 (Thumper/Thor)? I've got one at work and I'm going to be having a lot of stupid little questions, and I'm wondering if I can get them answered here or if I need to go register on the OpenSolaris forums or something.

conntrack
Aug 8, 2003

by angerbeet
I'm looking at the Sun gear too. Does anyone know if it's possible to connect FC LUNs to the OpenStorage appliances?

Or if we could get one of their storage servers with the slick GUI?

Support is needed for this, as people get fired if there is data loss.

I'm looking to make a poor man's vFiler. I like the OpenStorage GUI but don't have any kidnapped orphans left to sell to NetApp.

lilbean
Oct 2, 2003

FISHMANPET posted:

How much experience do people here have with Sun's X4500/X4540 (Thumper/Thor)? I've got one at work and I'm going to be having a lot of stupid little questions, and I'm wondering if I can get them answered here or if I need to go register on the OpenSolaris forums or something.
I've used one for a year now on Solaris 10, beat the poo poo out of it, and I love it. H110Hawk follows the thread too and manages like a dozen of them, so this is as good a place to ask questions as any.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Sweet. Now this is probably a stupid question, but I only ask it because the purchase went by at least a few people who should know better...

Is it possible to use an iSCSI card to share a target? We don't have a lot of experience with iSCSI here, only having a StorageTek array that acts as an iSCSI target, and an iSCSI card in the server acting as the initiator. Now we've got our thumper, and I know ZFS can export volumes as iSCSI (I actually know a fair amount about ZFS, so I'm not screwed in that regard), but I'm assuming it has to do that over the system network interfaces. The more I think about this, the stupider it sounds that we got an iSCSI card to share file systems over iSCSI. That's the same as doing something like buying a network card for your hard drive, right? It doesn't make any sense?
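
For reference, the ZFS-side export looks something like this on Solaris 10 (pool and volume names are made up, and newer builds use COMSTAR rather than the shareiscsi property):
code:
# carve a 100 GB zvol out of an existing pool (names here are hypothetical)
zfs create -V 100G tank/iscsivol
# legacy Solaris 10 target: exported over the ordinary network interfaces, no special card needed
zfs set shareiscsi=on tank/iscsivol
# confirm the target was created
iscsitadm list target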

Also, since you guys manage a bunch of thumpers, what should I tell my boss as to why I shouldn't make 2 20+2 RAIDZ2 pools? I had a hard enough time convincing him to let me use *both* system drives for my root partitions (1 TB? But it has Compact Flash!) and now I'm trying to use one of these layouts from the 'ZFS Best Practices Guide'
code:
  * 5x(7+2), 1 hot spare, 17.5 TB
  * 4x(9+2), 2 hot spares, 18.0 TB
  * 6x(5+2), 4 hot spares, 15.0 TB
I'm not sure which of those I like best. 4x(9+2) is nice because we get the most storage ("WHAT? IT'S NOT 24T?") but the raid groups are a bit larger than I'd like. 5x(7+2) is my favorite, except I only have one hot spare :(. 6x(5+2) is never going to fly; too many hot spares and too much wasted space.
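
To make those options concrete, the 5x(7+2) layout would be built roughly like this (the c#t#d0 names are placeholders - check format for the real device names and keep the boot disks out of the data vdevs):
code:
# five 9-disk raidz2 vdevs plus one hot spare; all device names are assumptions
zpool create tank \
  raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c0t1d0 c1t1d0 c2t1d0 \
  raidz2 c3t1d0 c4t1d0 c5t1d0 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0 \
  raidz2 c0t3d0 c1t3d0 c2t3d0 c3t3d0 c4t3d0 c5t3d0 c0t4d0 c1t4d0 c2t4d0 \
  raidz2 c3t4d0 c4t4d0 c5t4d0 c0t5d0 c1t5d0 c2t5d0 c3t5d0 c4t5d0 c5t5d0 \
  raidz2 c0t6d0 c1t6d0 c2t6d0 c3t6d0 c4t6d0 c5t6d0 c0t7d0 c1t7d0 c2t7d0 \
  spare  c3t7d0
zpool status tank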

lilbean
Oct 2, 2003

You don't need any card for iSCSI - just use the four built-in GigE ports. If you're feeling really spendy then get a PCI-X 10GbE card.

As far as the disk layout goes, it can be pretty flexible, but try to build vdevs that have one disk from each of the controllers in them. I went with 7 RAIDZ2 vdevs and four hot spares. At first I thought that many hot spares was a waste, but then we started swapping out disks for larger ones, and we do that by failing half of a vdev onto the spares, replacing those disks, then doing it once more (which means a full vdev upgrade only means cracking the chassis twice).
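
The swap itself goes roughly like this, for anyone curious (pool and device names are hypothetical, and the exact spare behaviour varies a bit by release):
code:
# swing a data disk onto one of the configured hot spares before pulling it
zpool replace tank c0t3d0 c5t7d0   # c5t7d0 is a hot spare; wait for the resilver to finish
zpool status tank
# ...physically swap in the larger disk, then resilver back off the spare...
zpool replace tank c0t3d0
zpool detach tank c5t7d0           # return the spare to the spare pool if it doesn't detach on its own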

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
I have a CentOS 5.3 host with a "phantom" SCSI device because the LUN it used to point to on the SAN got unassigned from this host. Every time I run multipath it tries to create an mpath3 device-mapper name for it and complains that it's failing.

How do you get rid of /dev/sde if it's not really there?

edit: as usual I figure something out the moment I post about it. I ran echo "1" > /sys/block/sde/device/delete and it worked. Anyone care to tell me I just made a huge mistake?
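
For anyone finding this later, the usual order is to flush the stale multipath map before dropping the device node; something like this (mpath3/sde are from the post above and will differ on your box):
code:
multipath -ll                           # identify the stale map and its paths
multipath -f mpath3                     # flush the dead multipath map first
echo 1 > /sys/block/sde/device/delete   # then remove the underlying SCSI device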

StabbinHobo fucked around with this message at 20:45 on Feb 5, 2010

Erwin
Feb 17, 2006

I have a new EMC AX4 iSCSI array in place and it seems quite a bit slower than I think it should be. Is there a reliable way to benchmark its performance, and are there statistics for similar devices that I can compare it to? I've tried googling around but I can't find any "here is what speed you should expect with this iSCSI array" information.

oblomov
Jun 20, 2002

Meh... #overrated

Erwin posted:

I have a new EMC AX4 iSCSI array in place and it seems quite a bit slower than I think it should be. Is there a reliable way to benchmark its performance, and are there statistics for similar devices that I can compare it to? I've tried googling around but I can't find any "here is what speed you should expect with this iSCSI array" information.

You really need to provide some more info. What's slow exactly (i.e. what throughput are you getting)? Which disks do you have in there? How is it connected (how many ports, port speeds, what switch, switch config, jumbo frames, etc.)? What is it connected to (what is the server hardware, OS, application)?

Erwin
Feb 17, 2006

oblomov posted:

You really need to provide some more info. What's slow exactly (i.e. what throughput are you getting)? Which disks do you have in there? How is it connected (how many ports, port speeds, what switch, switch config, jumbo frames, etc.)? What is it connected to (what is the server hardware, OS, application)?

Sorry, I didn't provide information because I wanted to run a benchmarking test to see whether the speeds really are slower than they should be before I ask for more help. I was getting 20-30 MB/s read and write when copying files in either direction. I found CrystalDiskMark and here's what I get:

Sequential: about 30 MB/s read and write.
Random 512 KB blocks: 0.8-1 MB/s read, 28-30 MB/s write.
Random 4 KB blocks: 7 MB/s read, 2 MB/s write.

It's an iSCSI AX4. The array I'm dealing with is a 7-disk RAID 5 of 1TB SATA drives (12 disks total: 4 SAS drives for the FLARE software, 7 disks in the RAID group, one hot spare). The AX4 has dual controllers, so it has a total of 4 gigabit iSCSI ports. iSCSI is on its own switch, a ProCurve 1810-24G gigabit managed switch. Jumbo frames are currently off.

I tested from two servers, one 2008 R2 and one 2003 R2. Both use the Microsoft initiator over one regular gigabit Ethernet adapter (not an HBA). EMC PowerPath is installed on both servers.

I realize there are a few things keeping me from optimal speed: SATA drives, no jumbo frames, and no HBAs. I still feel like the speeds are lower than they should be, even considering those factors. Maybe my expectations are too high?

H110Hawk
Dec 28, 2006

lilbean posted:

I've used one for a year now on Solaris 10, beat the poo poo out of it and I love it. H10hawk follows the thread too and manages like a dozen of them, so this is as good a place to ask questions as any.

I actually quit that job a few months ago. And it was 30+ thumpers; I lost count. :X I also only ever used Solaris 10, to great success, and my replacement was hell-bent on OpenSolaris. Last I heard it kept ending in tears. Stay clear of that hippie bullshit and you should be fine.

FISHMANPET posted:

Is it possible to use an iSCSI card to share a target?

I've never used iSCSI, but from what I've read an "iSCSI card" is nothing more than a glorified network card with an iSCSI stack inside it. ZFS handles this internally, so you wasted money. I would keep that fact in your back pocket if they try to lord other things they don't understand over you, since this one is them spending money they shouldn't have. What follows is about justifying the space you're going to 'waste'.

quote:

Also, since you guys manage a bunch of thumpers, what should I tell my boss as to why I shouldn't make 2 20+2 RAIDZ2 pools? I had a hard enough time convincing him to let me use *both* system drives for my root partitions (1 TB? But it has Compact Flash!) and now I'm trying to use one of these layouts from the 'ZFS Best Practices Guide'

In all fairness, it does have a compact flash port. Use it. Hell, use it as a concession to them. Did they buy the support contract with your thumper, even Bronze? Call them suckers up and ask for a best practices configuration on your very first thumper (Of Many!), and if they balk at it call your sales rep and ask them to get it for you. Get it in an email from Sun. Tell them your honest reliability concerns.

Now, think long and hard about how many parity disks you need and how many hot spares you want. Your target with snapshots is 50% of raw space as usable. I tended to get 11T per thumper. In all honesty it isn't going to matter, because management is going to be Right(tm) and you are going to be Wrong(tm). I would set up 6 raid groups and "waste" those last 4 drives or whatever on hot spares, or just use RAIDZ instead of RAIDZ2 and reclaim a few terabytes. That gives you 4 hot spares, but you will need to monitor it very diligently for failures, as it takes forever to rebuild a raid group.
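
Nobody actually watches these things by hand, so a crude cron check along these lines is better than nothing (the mail address is a placeholder, and the exact healthy-pool wording can differ between releases):
code:
# complain whenever zpool status -x says anything other than "all pools are healthy"
zpool status -x | grep -v "all pools are healthy" && \
    zpool status | mailx -s "zpool problem on `hostname`" storage-admins@example.edu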

Caveats: update to the latest version of Solaris 10 and upgrade your zpool. When resilvering a raid group, do not take snapshots or run other similar operations. Unless they've fixed it, doing anything like that restarts the resilvering process.
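
If snapshots are on a schedule, it's worth guarding them against that; a minimal sketch, assuming a pool called tank (the status wording differs slightly between releases):
code:
# skip the scheduled snapshot while a resilver is running (tank/data is a placeholder dataset)
if zpool status tank | grep "resilver in progress" > /dev/null; then
    echo "resilver running, skipping snapshot"
    exit 0
fi
zfs snapshot tank/data@`date +%Y%m%d-%H%M`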

Edit: Oh, and stop swearing at Solaris. It can hear you, and it will punish you. Instead, embrace it, and hold a smug sense of superiority over others for knowing how things were done Back In The Day. Back when they did things the Right Way(tm). :clint:

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

H110Hawk posted:

I've never used iSCSI, but from what I've read an "iSCSI card" is nothing more than a glorified network card with an iSCSI stack inside it. ZFS handles this internally, so you wasted money. I would keep that fact in your back pocket if they try to lord other things they don't understand over you, since this one is them spending money they shouldn't have.

Haha, suckers.


quote:

In all fairness, it does have a compact flash port. Use it. Hell, use it as a concession to them. Did they buy the support contract with your thumper, even Bronze? Call them suckers up and ask for a best practices configuration on your very first thumper (Of Many!), and if they balk at it call your sales rep and ask them to get it for you. Get it in an email from Sun. Tell them your honest reliability concerns.


Third of a few, actually. I work for a university in a perpetually poor department. The only time we get thumpers is when big phat grants come in, or when we invent new departments (my current situation). The first thumper has a 1-disk UFS root, 6 disks in a RAIDZ for one of my boss's tests, and a 23-disk RAIDZ2 for everything else. No hot spares for either of those pools (not sure where the other 18 disks are). The second thumper has a single UFS root disk, two 23-disk RAIDZs, and a single hot spare between all of them :rolleye:

My 'boss' doesn't really have much power here, and can in fact be easily overruled by other people who know better. Except the Solaris admin is gone this week, so it's up to me to be a bastion of sanity. Ironically, the Solaris admin doesn't know much about ZFS, so he defers to my knowledge. So it will go something like this: 'boss' asks Solaris guy, Solaris guy asks me, I tell Solaris guy, Solaris guy tells 'boss', 'boss' tells me to do what I told him to do all along.

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades
On the topic of ZFS, we had a flaky drive the other day in our J4400 storage array and we decided to offline the drive and assign a hot spare to it. It took 50 hours to resilver a 1TB drive. Granted, this is a giant raidz2 pool with 24 disks and two hot spares, so I kind of expected a long rebuild time. I'm thinking about going a step further and doing raidz3, because to me 50 hours is a pretty big window for Murphy's law to kick in and gently caress poo poo up.

Erwin
Feb 17, 2006

Erwin posted:

Sorry, I didn't provide information because I wanted to run a benchmarking test to see whether the speeds really are slower than they should be before I ask for more help. I was getting 20-30 MB/s read and write when copying files in either direction. I found CrystalDiskMark and here's what I get:

Sequential: about 30 MB/s read and write.
Random 512 KB blocks: 0.8-1 MB/s read, 28-30 MB/s write.
Random 4 KB blocks: 7 MB/s read, 2 MB/s write.

It's an iSCSI AX4. The array I'm dealing with is a 7-disk RAID 5 of 1TB SATA drives (12 disks total: 4 SAS drives for the FLARE software, 7 disks in the RAID group, one hot spare). The AX4 has dual controllers, so it has a total of 4 gigabit iSCSI ports. iSCSI is on its own switch, a ProCurve 1810-24G gigabit managed switch. Jumbo frames are currently off.

I tested from two servers, one 2008 R2 and one 2003 R2. Both use the Microsoft initiator over one regular gigabit Ethernet adapter (not an HBA). EMC PowerPath is installed on both servers.

I realize there are a few things keeping me from optimal speed: SATA drives, no jumbo frames, and no HBAs. I still feel like the speeds are lower than they should be, even considering those factors. Maybe my expectations are too high?

Can anybody give me an idea as to whether these speeds are to be expected? The application that the server is for has been installed, and it's hanging whenever you do anything that involves reading files from the SAN.

Nukelear v.2
Jun 25, 2004
My optional title text

Erwin posted:

Can anybody give me an idea as to whether these speeds are to be expected? The application that the server is for has been installed, and it's hanging whenever you do anything that involves reading files from the SAN.

Yeah, those are pretty terrible. From my shittiest SAN - an ESXi 4 VM running against a Dell MD3000i - I get (5 passes at 100 MB; numbers in MB/s):
Seq: 108.3 read / 69 write
512K: 101.1 read / 69 write
4K: 8.8 read / 4.9 write

No fancy HBAs, no jumbo frames. It is using VMware round robin across two NICs, however.

I don't know anything about AX4s or EMC in general, but I would guess your cache setup is messed up - maybe something like a LUN owned by controller 0 being accessed via controller 1.

Erwin
Feb 17, 2006

Nukelear v.2 posted:

Yeah, those are pretty terrible. From my shittiest SAN - an ESXi 4 VM running against a Dell MD3000i - I get (5 passes at 100 MB; numbers in MB/s):
Seq: 108.3 read / 69 write
512K: 101.1 read / 69 write
4K: 8.8 read / 4.9 write

No fancy HBAs, no jumbo frames. It is using VMware round robin across two NICs, however.

I don't know anything about AX4s or EMC in general, but I would guess your cache setup is messed up - maybe something like a LUN owned by controller 0 being accessed via controller 1.

That's good to know. I've opened a ticket with EMC.

Syano
Jul 13, 2005

Erwin posted:

Can anybody give me an idea as to whether these speeds are to be expected? The application that the server is for has been installed, and it's hanging whenever you do anything that involves reading files from the SAN.

Is there any particular reason jumbo frames are off? We pulled a CX3-10 array off a Cisco and put it on a Dell, and the performance was absolutely abysmal until we turned jumbo frames on. I didn't realize how much of a difference the two switches would make until I saw it with my own two eyes. Not sure if the ProCurve is your culprit, but it's worth a shot if you can turn jumbo frames on.
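
Once they're on, it's worth checking that the larger frames actually survive end to end; a quick sanity check from any Linux host on the iSCSI VLAN (the array IP is a placeholder; on Windows the equivalent is ping -f -l 8972):
code:
# 9000-byte MTU minus 20 bytes of IP and 8 bytes of ICMP header = 8972-byte payload
ping -M do -s 8972 -c 3 192.168.50.10   # -M do sets don't-fragment, so an undersized path fails loudly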

Erwin
Feb 17, 2006

Syano posted:

Is there any particular reason jumbo frames are off? We pulled a CX3-10 array off a Cisco and put it on a Dell, and the performance was absolutely abysmal until we turned jumbo frames on. I didn't realize how much of a difference the two switches would make until I saw it with my own two eyes. Not sure if the ProCurve is your culprit, but it's worth a shot if you can turn jumbo frames on.

The contractor who set up the SAN didn't enable them, and I haven't been able to schedule downtime to enable them (I'm under the impression that the AX4 will reset connections when changing MTU size). It's certainly something that should be done, but I don't know if it's the entire cause of the poor performance. I'll see what EMC says.

Nukelear v.2
Jun 25, 2004
My optional title text

Erwin posted:

The contractor who set up the SAN didn't enable them, and I haven't been able to schedule downtime to enable them (I'm under the impression that the AX4 will reset connections when changing MTU size). It's certainly something that should be done, but I don't know if it's the entire cause of the poor performance. I'll see what EMC says.

If this is a platform that you need to schedule downtime on, and if I read your original post correctly, I would suggest adding a second switch and (at least) a second NIC to your hosts and doing MPIO. Once you get your main performance issue resolved, this will give you even more performance and, more importantly, availability.

In the example numbers above, that was off a pair of Dell PC6224s, which are cheap as dirt.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

I'm helping some engineers out, setting up a scalability lab to test some of our software products, and we're looking at a SAN to use for an Oracle 11g DB.

At the max we'll have 5 DL380s connected to it via FC or 10GbE. Right now we're considering 5.4TB raw on either a:

EMC CX4 FC
LeftHand P4xxx 10Gig


I'm also willing to look at an EqualLogic solution, or a NetApp 2050, but the NetApp will probably be cost-prohibitive.

Anyone have any pros or cons for these units? Budget is up to 50K, maybe 60. I get aggressive pricing from my VARs. The CX4 is borderline at our max price range and an AX4 might be a better fit, but it doesn't offer 10Gig. Drives need to be 15K SAS or FC.

Cowboy Mark
Sep 9, 2001

Grimey Drawer
Quick question about the MD3000i: the status LED keeps flashing between blue and intermittent amber - this seems to be because the RAID module owner is constantly alternating and it's warning that the preferred path is not being used. Am I correct in thinking that this is the node doing round-robin MPIO? Is this how it is expected to be used?

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
I just got my new storage array set up, and I have a server available for benchmark testing. It's a dual dual-core with 32GB of RAM and a dual-port HBA. Each port goes through a separate FC switch, one hop to the array. All links are 4Gbps. Very basic setup.

I've created one 1.1TB RAID-10 LUN and one 1.7TB RAID-50 LUN; both have their own, but identical, sets of underlying spindles.

Running CentOS 5.4, with the EPEL repo set up and sysbench, iozone, and bonnie++ installed.

I'm pretty familiar with sysbench and have a batch of commands to compare to a run on different equipment earlier in the thread. But not so much with bonnie and iozone. I'd be particularly interested in anyone with an md3000 to compare with.
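
For comparability, something like the following would do (the mount point is a placeholder; the file sizes just need to comfortably exceed the 32GB of RAM so the page cache doesn't flatter the numbers):
code:
# sequential write/read plus random read/write, 64 KB records, 64 GB test file
iozone -s 64g -r 64k -i 0 -i 1 -i 2 -f /mnt/r10/iozone.tmp
# bonnie++ equivalent; -n 0 skips the small-file creation tests
bonnie++ -d /mnt/r10 -s 64g -n 0 -u root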

EoRaptor
Sep 13, 2003

by Fluffdaddy
Never mind. :/

EoRaptor fucked around with this message at 16:51 on Feb 17, 2010

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
Why are the numbers for sdb so different from the ones for dm-2, and where the hell does dm-5 come from?
code:
Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     1.00  0.00  1.80     0.00    22.40    12.44     0.00    0.78   0.78   0.14
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00     1.00  0.00  1.80     0.00    22.40    12.44     0.00    0.78   0.78   0.14
sdb               0.00 73195.80  0.00 1169.20     0.00 595726.40   509.52   130.10  110.91   0.81  95.10
sdc               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sde               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-0              0.00     0.00  0.00  2.80     0.00    22.40     8.00     0.00    0.50   0.50   0.14
dm-1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00  0.00 74363.40     0.00 594907.20     8.00  8295.81  111.20   0.01  95.10
dm-3              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-4              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-5              0.00     0.00  0.00 74363.40     0.00 594907.20     8.00  8295.92  111.20   0.01  95.10
code:
[root@localhost ~]# ls -l /dev/dm-*
brw-rw---- 1 root root 253, 2 Feb 17 07:22 /dev/dm-2
brw-rw---- 1 root root 253, 3 Feb 17 07:21 /dev/dm-3
code:
[root@localhost ~]# lvdisplay /dev/vg-r1/lvol0
  --- Logical volume ---
  LV Name                /dev/vg-r1/lvol0
  VG Name                vg-r1
  LV UUID                6f2nvg-c24t-lFez-sxWc-Hpmg-ZIlF-hIX4J5
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.06 TB
  Current LE             276991
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5

[root@localhost ~]# vgdisplay vg-r1
  --- Volume group ---
  VG Name               vg-r1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.06 TB
  PE Size               4.00 MB
  Total PE              276991
  Alloc PE / Size       276991 / 1.06 TB
  Free  PE / Size       0 / 0
  VG UUID               WuYvoz-G9M0-9J9I-p8lC-flxW-vHKb-FJAirR

[root@localhost ~]# pvdisplay /dev/dm-2
  --- Physical volume ---
  PV Name               /dev/dm-2
  VG Name               vg-r1
  PV Size               1.06 TB / not usable 4.00 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              276991
  Free PE               0
  Allocated PE          276991
  PV UUID               9BZFZ8-2Pbk-zCKr-eya2-q2Rr-3z0U-7XR9mY
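
For what it's worth, dmsetup maps the anonymous dm-N nodes back to real names - per the lvdisplay above, 253:5 (dm-5) is just the vg-r1/lvol0 logical volume, sitting on top of dm-2, the physical volume (presumably the multipath map):
code:
dmsetup ls --tree   # shows what each dm device is stacked on (LV -> PV/mpath -> sd devices)
dmsetup info -c     # one line per device: name, major:minor (253:N), open count, and so on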


Nukelear v.2
Jun 25, 2004
My optional title text

StabbinHobo posted:

I'm pretty familiar with sysbench and have a batch of commands to compare to a run on different equipment earlier in the thread. But not so much with bonnie and iozone. I'd be particularly interested in anyone with an md3000 to compare with.

I just so happen to have an MD3000 sitting idle. It's connected to Windows hosts, though, so you'll have to live with iozone. But I can run some numbers against an 8-disk RAID 10 using 15K RPM SAS drives. I'll get something generated tomorrow when I get back to work.

ozmunkeh
Feb 28, 2008

hey guys what is happening in this thread
So, calling around trying to find a replacement for our old LeftHand boxes. I need something in the 2-4 TB range for 2-3 ESX hosts running a typical 6-8 server Windows environment (file, Exchange, SQL, SharePoint, etc.). I spoke with someone from EMC last week and just got quoted for a dual-blade NX4 with 15 x 450GB SAS drives and CIFS/NFS/iSCSI for $40K+.

What's the difference between the NX4 and the AX4? I was surprised when they started going on about the NX4 after reading all the AX4 talk in this thread. Is he just trying to sell me what seems to be the more expensive unit or is the AX4 going away? We really don't have any out of the ordinary requirements.

I've been playing phone tag with NetApp for a couple of days and look forward to seeing what they have, but was a little thrown by the above EMC quote.

Klenath
Aug 12, 2005

Shigata ga nai.

Insane Clown Pussy posted:

What's the difference between the NX4 and the AX4?

The NX4 is a Celerra-family product, which is EMC's NAS product line and has CIFS/NFS interfaces out of the box. If you need or WILL need a NAS (or NAS capabilities), you'd want to go the NX4 route.

The AX4 is a Clariion-family product, which is NOT a NAS product line by design. If you don't need a NAS, an AX4 will work fine (or whatever model they're at now). You can add NAS capabilities later but, if I remember my EMC rep correctly, you have to put a device in front of the Clariion to get NAS functionality.

I'm sure there's a host of other nit-picky differences (e.g. redundant storage processors, FC and/or iSCSI options) and maybe you can tweak one's configuration to look like the other, but by default that's the basic difference.

ozmunkeh
Feb 28, 2008

hey guys what is happening in this thread
Thanks, that makes sense. I think the sales guy took my offhand comment about NFS and ran with it. $40K+ is way more than I want to spend on this.

Klenath
Aug 12, 2005

Shigata ga nai.

Insane Clown Pussy posted:

$40K+ is way more than I want to spend on this.

EMC is known to be rather expensive.

If you plan to leverage some of their array-level features (LUN cloning/snapshotting, array replication for DR, caching), they implement them quite well and efficiently from what I've seen. Other vendors usually have these abilities also, but EMC tends to stand out when you start talking high-end or high-utilization arrays.

If all you need is basic SAN storage (even with a NAS front-end), you can probably get what you need elsewhere for a lot less money.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
If you don't need HA, you might want to take a look at the lower-end 7000 series devices from Sun. They are the only vendor that won't nickel-and-dime you on every little feature.

Zerotheos
Jan 2, 2005
Z
Does anyone have any experience, horror stories, success stories, etc. regarding NexentaStor? I've been looking into it as a cheaper alternative to a Sun 7000 series. I've been running 2.2 through its paces as well as checking out the 3.0 alpha. I've been using a Dell R710, which is what we'd use if we built this out for production.

Primary application is NFS for vSphere. I've been pretty impressed with it so far. The PERC 6/i and 6/E aren't ideal for it - they force hardware RAID, so I've set up some ghetto one-disk RAID 0s. I know I'd need some SAS HBAs if I wanted to do it properly. I'll probably use a pair of Intel e1000 4-port PCIe cards for LACP, or invest in some 10G gear.

I've had the latest nightly build of 3.0 alpha up and running on the same hardware and ZFS dedupe is working and integrated into the management GUI.

Anyway, I'm looking for any experiences anyone may have, particularly regarding their support, CDP, off-the-shelf SSDs for ZIL/L2ARC, poo poo exploding and losing all your data, etc.

rage-saq
Mar 21, 2001

Thats so ninja...

Insane Clown Pussy posted:

So, calling around trying to find a replacement for our old LeftHand boxes. I need something in the 2-4 TB range for 2-3 ESX hosts running a typical 6-8 server Windows environment (file, Exchange, SQL, SharePoint, etc.)

Just get another LeftHand node; that way you can use your existing LeftHand stuff for secondary storage / backups / etc. It's a good platform, and if your infrastructure is right it will perform more than adequately for the kind of environment you're talking about.

zapateria
Feb 16, 2003
Reposting a question from January if it's ok. We have some other options now:

We're planning a secondary site for a disaster recovery solution. We have around 7TB of VMware data now, and another 7TB of databases on physical blades. Does anyone have an opinion of this beast (literally) as a complete storage solution for this scenario?

http://www.nexsan.com/satabeast.php

We're thinking of getting at least 2 ESX hosts at the 2nd site and transferring backups from the primary site daily to the 2nd site's storage as the source of recovery, then going from there (for our non-VM servers, either get replacement physical servers or recover the backups into new VMs).

We're having a meeting with HP next week where they'll probably recommend a LeftHand solution; I just want to get some alternatives.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

If you don't need HA, you might want to take a look at the lower-end 7000 series devices from Sun. They are the only vendor that won't nickel-and-dime you on every little feature.
Interestingly, the Fishworks stuff also has better analytics than most of the solutions I've seen.

lilbean
Oct 2, 2003

Misogynist posted:

Interestingly, the Fishworks stuff also has better analytics than most of the solutions I've seen.
I wish I could add that UI to my Thumper running Solaris 10.

Klenath
Aug 12, 2005

Shigata ga nai.

zapateria posted:

secondary site for a disaster recovery

7TB of VMware data & another 7TB of databases on physical blades

SATABeast

meeting with HP next week, want some alternatives

My concern is that you would move from a higher-performance, and maybe Fibre Channel connected (?), EVA4400 array to a SATA-based bulk storage product (SATABeast or LeftHand). If your DR site is expected to work like your primary site, then the SATABeast/LeftHand might not be able to handle the IOPS. Shoving 7TB of databases onto the same array as 7TB of VMware data when they are currently separated has good potential to drastically change your array IOPS requirements.

Someone previously suggested direct attach, which may still be your best option for a cold site as it could be less costly than many SAN solutions. You might even be able to upgrade to higher speed (SAS) direct attached disk to boost your potential IOPS and still come under the total cost of implementing a SATABeast/LeftHand array (don't forget the supporting infrastructure!).

I think the same person also suggested looking into an array which would support array-to-array replication with your existing EVA4400. This is also a very good idea down the road, especially if management wants to speed up recovery times. You may not have to buy another EVA4400, but perhaps a smaller-scale array that is compatible with your EVA4400 for array-to-array replication - probably over iSCSI. You may get stuck having to implement an FC infrastructure at your DR site for host connectivity, however, so you may want to save this idea for when you have to re-evaluate your storage environment altogether.
