Catch 22
Dec 1, 2003
Damn it, Damn it, Damn it!

bmoyles posted:

100k might not get you much SAN from Pillar...
I'd recommend checking out Compellent, too, but again, 100k is going to be a somewhat small SAN.
What?!? Please give your definition of "Small SAN"?


1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

brent78 posted:

I just saw an article about Pillar Data laying off 30% of their workforce... and here I am with 100k to spend on a SAN, and I can't even get them to return my phone call. Anyone using the LeftHand VSA in production? It sounds very cool and scary at the same time.

Oblomov has posted some pretty positive stuff about Lefthand in this thread.

That said, look at a couple of manufacturers to make sure you get exactly what you want. Just because you have 100 large to spend doesn't necessarily mean you should spend all of it.

bmoyles
Feb 15, 2002

United Neckbeard Foundation of America

Fun Shoe

Catch 22 posted:

What?!? Please give your definition of "Small SAN"?
I've got a quote from Compellent on a clustered solution that pushed 170k for about 8TB raw.

Catch 22
Dec 1, 2003
Damn it, Damn it, Damn it!

bmoyles posted:

I've got a quote from Compellent on a clustered solution that pushed 170k for about 8TB raw.
Still seems high with clustering (assuming you mean 2 mirrored SANs) and full SAS.

Boltsky
Sep 3, 2004
Super karate monkey death car

My company has about 200 TB of file storage on a custom homegrown solution with low I/O requirements. I've been talking with EMC and they're steering me toward Atmos, which they only announced recently (November), so it's still pretty new. Does anyone have experience with these new clustered storage solutions? I've also briefly looked at Caringo, as well as Gluster on the open source side.

mkosmo
Jul 15, 2006


Hey Storage Gurus, I have a question for you if you will permit:

At work we have an EMC Clariion (CX3-80) now replacing our old Intransa and a couple Isilon units (which have performed very poorly for our I/O needs).

EMC promised us >=350 megabytes per second per Data Mover and we're not really seeing that, despite having worked with their engineers for months. Also, it appears there's no jumbo frame support on the 10GbE interfaces, which could be costing us some performance. In addition, getting CIFS and NFS working cooperatively on one file system proved to be a hassle.

Any idea what's up with that? What other issues have you seen with EMC, performance-related or otherwise?

rage-saq
Mar 21, 2001

Thats so ninja...

mkosmo posted:

Hey Storage Gurus, I have a question for you if you will permit:

At work we have an EMC Clariion (CX3-80) now replacing our old Intransa and a couple Isilon units (which have performed very poorly for our I/O needs).

EMC promised us >=350 megabytes per second per Data Mover and we're not really seeing that, despite having worked with their engineers for months. Also, it appears there's no jumbo frame support on the 10GbE interfaces, which could be costing us some performance. In addition, getting CIFS and NFS working cooperatively on one file system proved to be a hassle.

Any idea what's up with that? What other issues have you seen with EMC, performance-related or otherwise?

First, my two cents: the EMC engineer who sold and implemented the system for you should have designed it to meet your workload's performance requirements.

Now on to the real question of why this is a difficult thing to achieve.

>350MB/s is quite a bit of throughput, but throughput alone is not the most important factor in disk performance; on its own it can actually be extremely misleading. Disk performance is a very careful balancing act of the right RAID level at the right block size, for an optimal IOPS and throughput level that best matches your I/O pattern.

For example

350MB/s @ 8 IOPS would be a pretty poorly performing disk system, whereas 80MB/s @ 5000 IOPS could be an extremely well performing disk system.

A few things will affect your performance that you'll want to look into. Be sure to use IOMeter for your performance testing; anything else is basically full of lies (I'm looking at you, HDTach).

1: I/O pattern. This means what % is read, what % is write, and what % is random vs sequential.
2: Block sizes. The block size of your stripe (I think EMC calls this the element size?) and the block size of your filesystem's partition.

For the same number of disks in a disk group, you have to balance the block size between IOPS and throughput (MB/s).
The smaller the block size, the more IOPS you'll get, but at a lower MB/s. Conversely, the larger the block size, the more MB/s you'll get, but at a much lower IOPS.

Additionally, you'll get better IOPS AND MB/s the more sequential your workload is, and more still if your workload is read-heavy rather than write-heavy.

An 8KB block size on both the disk and filesystem side is one of the better-balanced configurations. With a 24x 450GB 15k disk group on an HP EVA4400 I've gotten in excess of 6000 IOPS (some of that figure would be cached performance) at 250-300 MB/s in certain access patterns. That suits database servers and anything with a highly random, mildly write-heavy workload, like operating system drives and things like that.

For fileservers you can increase your block size to 32KB or higher and get good throughput, but at a lower IOPS rate (which is generally fine for a file server).

IOMeter is going to become your best friend for evaluating your disk/filesystem configurations to see if you're getting the performance your workload needs.
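For anyone who wants to play with the block size trade-off described above, here's a quick back-of-the-envelope sketch. The IOPS figures are made up for illustration, not benchmarks from any real array:

```python
# Back-of-the-envelope: the same disk group trades IOPS for MB/s as the
# I/O block size grows. IOPS figures below are hypothetical.

def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """MB/s implied by an IOPS figure at a given I/O block size."""
    return iops * block_size_kb / 1024  # KB/s -> MB/s

# Hypothetical results for one disk group at three block sizes:
for block_kb, iops in [(8, 6000), (32, 2500), (256, 400)]:
    mbs = throughput_mb_s(iops, block_kb)
    print(f"{block_kb:>3} KB blocks: {iops:>4} IOPS = {mbs:6.1f} MB/s")
```

The point being: a huge MB/s number at tiny IOPS (or vice versa) tells you almost nothing until you know the block size and access pattern behind it.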

rage-saq fucked around with this message at 21:30 on Feb 4, 2009

bmoyles
Feb 15, 2002

United Neckbeard Foundation of America

Fun Shoe

Catch 22 posted:

Still seems high with clustering (assuming you mean 2 mirrored SANs) and full SAS.

Nope. Clustered heads. To be completely fair, this might be close to list as I asked for ballpark pricing, so after discount it would've been somewhat cheaper, but still in that same ballpark. To add a set of tier 3 storage (750G disks, same aggregate capacity) would've added another 40k onto the cost.

Edit: this doesn't have Compellent, but it's interesting nonetheless:
http://storagemojo.com/storagemojos-pricing-guide/
The Pillar pricing is in line with the quote we got from them as well.

For example:
SLM 500-SAN (SAN Slammer): $39,840
BRX 500-144F15J (Brick, 144GB FC 15000RPM drives, JBOD configuration): $27,325

Now, if I remember correctly, a brick is 12 drives, so $30k there gets you 1.7TB.

You're at 70k for a 1.7TB SAN, and this is before support and software.

We added a bit more to our original quote, came up with just under 4TB of storage, a 2nd tier, as well as built-in NFS support from Pillar, and it was listing at almost 200k.
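Sanity-checking the brick math, using the list prices from the quote above (support and software excluded, as noted):

```python
# Per-TB math from the Pillar quote: a brick is 12 x 144GB FC drives;
# entry cost is one Slammer plus one brick, before support and software.

brick_drives = 12
drive_gb = 144
brick_price = 27_325     # BRX 500-144F15J
slammer_price = 39_840   # SLM 500-SAN

raw_tb = brick_drives * drive_gb / 1000   # raw capacity of one brick
entry_cost = slammer_price + brick_price  # Slammer + one brick
print(f"{raw_tb:.2f} TB raw for ${entry_cost:,}")  # ~1.7 TB for ~$67k
```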

bmoyles fucked around with this message at 00:17 on Feb 5, 2009

Catch 22
Dec 1, 2003
Damn it, Damn it, Damn it!

bmoyles posted:

Nope. Clustered heads. To be completely fair, this might be close to list as I asked for ballpark pricing, so after discount it would've been somewhat cheaper, but still in that same ballpark. To add a set of tier 3 storage (750G disks, same aggregate capacity) would've added another 40k onto the cost.
750G drives? Sounds like SATA. Now your I/O and fabric needs might push you to another SAN, but I just pulled an EMC with dual active/active heads, iSCSI, and 10TB raw (16x 148GB 15000RPM SAS and 8x 1TB SATA, mixed across two shelves/bricks) with some software options for <40K.


EDIT: looked again at the final: 25K, and I had them toss in the 3rd year of warranty for free

Catch 22 fucked around with this message at 00:30 on Feb 5, 2009

bmoyles
Feb 15, 2002

United Neckbeard Foundation of America

Fun Shoe

Yeap, the tier 3 was SATA, tier 1 was 15k FC.
We ended up with an MD3ki as a stopgap solution for VMware and Isilon for NAS. Prolly gonna go with an EqualLogic box to replace that MD3ki later this year.

The Compellent solution was really nice, and I'd recommend it to anyone who's got the cash.

Intrepid00
Nov 10, 2003

I'm tired of the PMs asking if I actually poisoned kittens, instead look at these boobies.

bmoyles posted:

Yeap, the tier 3 was SATA, tier 1 was 15k FC.
We ended up with an MD3ki as a stopgap solution for VMware and Isilon for NAS. Prolly gonna go with an EqualLogic box to replace that MD3ki later this year.

The Compellent solution was really nice, and I'd recommend it to anyone who's got the cash.

If you are going to look at Equallogic, look at Lefthand too.

oblomov
Jun 20, 2002

Meh... #overrated

Catch 22 posted:

What?!? Please give your definition of "Small SAN"?

$100K is smallish. I think anything up to, say, $150K is on the small side. To give you an example, we just paid about half a mil for a NetApp 6080 SAN with a whole bunch of FAS storage, a few hundred TB. And that's really a mid-size SAN, not high-end, IMO, although it's heading toward the high end.

On the LeftHand, I'm still testing it in the lab and it's pretty good from everything I'm seeing. Don't expect huge I/O, though: figure 1600-1800 IOPS per G2 node (SAS). So the max you can get is maybe 40K IOPS out of a cluster of 20-25 boxes.
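A quick back-of-the-envelope on those numbers, assuming (optimistically) linear scaling across nodes, which real scale-out clusters won't quite hit:

```python
# Rough scaling estimate for a scale-out iSCSI cluster: per-node IOPS
# times node count. efficiency < 1.0 models inter-node overhead.

def cluster_iops(nodes: int, iops_per_node: int, efficiency: float = 1.0) -> int:
    """Aggregate IOPS for a cluster, assuming near-linear scaling."""
    return int(nodes * iops_per_node * efficiency)

# 20-25 G2 SAS nodes at ~1600-1800 IOPS each:
print(cluster_iops(20, 1600), "-", cluster_iops(25, 1800), "IOPS")  # brackets the ~40K figure
```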

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

bmoyles posted:

Yeap, the tier 3 was SATA, tier 1 was 15k FC.
We ended up with an MD3ki as a stopgap solution for VMware and Isilon for NAS. Prolly gonna go with an EqualLogic box to replace that MD3ki later this year.

The Compellent solution was really nice, and I'd recommend it to anyone who's got the cash.

I've got a Compellent SAN in active/passive and like it very much. The data progression stuff from Tier 1 -> Tier X is really nice too.

Nomex
Jul 17, 2002

Flame retarded.

Anyone here dealt with Compellent Storage Center equipment? I'm trying to find out what drawbacks they may have from people who've actually used the stuff.

Mr. Fossey
Mar 31, 2003

Fresh bananas for the whole crew!

Has anyone played around with Sun's 7000 Storage line? Specifically the 7210?

I can get a sweetheart of a deal, but even the best deal is no good if it's not ready yet.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


Mr. Fossey posted:

Has anyone played around with Sun's 7000 Storage line? Specifically the 7210?

I can get a sweetheart of a deal, but even the best deal is no good if it's not ready yet.
It's a fantastic tier-3 NAS for the money but don't try to use it as a SAN yet.

Mr. Fossey
Mar 31, 2003

Fresh bananas for the whole crew!

Misogynist posted:

It's a fantastic tier-3 NAS for the money but don't try to use it as a SAN yet.

We are thinking of using it primarily for 5-6TB over CIFS, and possibly a handful of VMs over iSCSI. The most intense would be an 80-user Exchange VM. Is the SAN piece something that will come into its own as the software matures, or are there hardware or architecture inadequacies?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


Mr. Fossey posted:

We are thinking of using it primarily for 5-6TB over CIFS, and possibly a handful of VMs over iSCSI. The most intense would be an 80-user Exchange VM. Is the SAN piece something that will come into its own as the software matures, or are there hardware or architecture inadequacies?
It's mostly the software end. For example, the iSCSI target server, as far as I'm aware, doesn't support SCSI-3 persistent reservations. That means you can't run Windows Clustering off of it, which is kind of a big deal for a lot of people. There are a few other shortcomings, but they're mostly related to feature set rather than performance and stability; we're quite happy with the performance. There's support for it in OpenSolaris nightlies now, though, so I'm sure it won't be much longer before it finds its way into some Amber Road updates.

brent78
Jun 23, 2004

I killed your cat, you druggie bitch.

Just wanted to post that I got a shelf of EqualLogic 5000VX set up in the lab and I'm very impressed with its performance. It's configured with 16 x 300GB 15k disks, an active/backup controller, and all 6 GigE ports connected to a pair of 3750s. Using jumbo frames and receive flow control as well. I'm achieving 200 MB/s writes with ease, and it barely sweats with mixed random reads/writes. This shelf as configured was 40k; not the cheapest thing out there, but on par with other 15k SAS. The equivalent NetApp or EMC solution would have been double, considering all their retarded licensing costs. Ohh, you want iSCSI? Caa-ching.

oblomov
Jun 20, 2002

Meh... #overrated

brent78 posted:

Just wanted to post that I got a shelf of EqualLogic 5000VX set up in the lab and I'm very impressed with its performance. It's configured with 16 x 300GB 15k disks, an active/backup controller, and all 6 GigE ports connected to a pair of 3750s. Using jumbo frames and receive flow control as well. I'm achieving 200 MB/s writes with ease, and it barely sweats with mixed random reads/writes. This shelf as configured was 40k; not the cheapest thing out there, but on par with other 15k SAS. The equivalent NetApp or EMC solution would have been double, considering all their retarded licensing costs. Ohh, you want iSCSI? Caa-ching.

EqualLogic is not bad at all performance-wise. Management is straightforward, support is good, and the hardware is pretty neat. However, I must say I like LeftHand more, mainly for the flexibility of its software. Also, to be fair to NetApp (less so to EMC), you'll see pricing converge as you "fill up" on nodes. With NetApp and EMC (and Hitachi, HP EVA, etc.) you pay a lot more up front, but once you start scaling up, pricing is going to be much closer to (if still more than) EqualLogic and LeftHand. So once you compare, say, a NetApp 3160 with a whole bunch of shelves against a similarly large EqualLogic deployment, prices are much closer than you'd think at the start.

There are other advantages to EqualLogic (LeftHand too) compared to traditional SANs, though.

Intrepid00
Nov 10, 2003

I'm tired of the PMs asking if I actually poisoned kittens, instead look at these boobies.

brent78 posted:

Just wanted to post that I got a shelf of EqualLogic 5000VX set up in the lab and I'm very impressed with its performance. It's configured with 16 x 300GB 15k disks, an active/backup controller, and all 6 GigE ports connected to a pair of 3750s. Using jumbo frames and receive flow control as well. I'm achieving 200 MB/s writes with ease, and it barely sweats with mixed random reads/writes. This shelf as configured was 40k; not the cheapest thing out there, but on par with other 15k SAS. The equivalent NetApp or EMC solution would have been double, considering all their retarded licensing costs. Ohh, you want iSCSI? Caa-ching.

Chiming in as well that you should give LeftHand a try. We just purchased it and haven't regretted it yet.

spiny
May 20, 2004

round and round and round


Is it possible to buy a NAS or even a SAN bare board? I have a dead SureStore tape drive and two spare SATA disks, and I'd like to combine the two into an external storage box :) If it was just one drive that would be easy, but two drives is causing problems. SATA 'hubs' are silly money at the moment, so the only other option I can think of is to get two SATA > USB converters AND a small USB hub and stick the lot in the case. Expensive and not very elegant.

Any thoughts, people?

Intrepid00
Nov 10, 2003

I'm tired of the PMs asking if I actually poisoned kittens, instead look at these boobies.

spiny posted:

Is it possible to buy a NAS or even a SAN bare board? I have a dead SureStore tape drive and two spare SATA disks, and I'd like to combine the two into an external storage box :) If it was just one drive that would be easy, but two drives is causing problems. SATA 'hubs' are silly money at the moment, so the only other option I can think of is to get two SATA > USB converters AND a small USB hub and stick the lot in the case. Expensive and not very elegant.

Any thoughts, people?

Check out SanMelody from Datacore. Not exactly what you want, but might still fit what you need.

brent78
Jun 23, 2004

I killed your cat, you druggie bitch.

Intrepid00 posted:

Chiming in as well that you should give LeftHand a try. We just purchased it and haven't regretted it yet.
One step ahead... I have a call in now for a fully populated SAS shelf with dual controllers. My rep said I probably wouldn't be happy with its remote management because it's based off a DL185. Nonetheless, working to get one in house in the next week or two. They keep trying to sell me on LeftHand's VSA (Virtual SAN Appliance). Are they moving away from hardware-based solutions?

Edit: What's a ballpark figure for a fully populated SAS LeftHand solution?

brent78 fucked around with this message at 23:27 on Feb 11, 2009

Rhymenoserous
May 23, 2008


mkosmo posted:

Hey Storage Gurus, I have a question for you if you will permit:

At work we have an EMC Clariion (CX3-80) now replacing our old Intransa and a couple Isilon units (which have performed very poorly for our I/O needs).

EMC promised us >=350 megabytes per second per Data Mover and we're not really seeing that, despite having worked with their engineers for months. Also, it appears there's no jumbo frame support on the 10GbE interfaces, which could be costing us some performance. In addition, getting CIFS and NFS working cooperatively on one file system proved to be a hassle.

Any idea what's up with that? What other issues have you seen with EMC, performance-related or otherwise?

Can you give us an idea of the environment you're working in? Give as much detail as possible: NAS or SAN, and so on.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Rhymenoserous posted:

Can you give us an idea of the environment you're working in? Give as much detail as possible: NAS or SAN, and so on.

also how many disks, what type, what raid config, etc.

Intrepid00
Nov 10, 2003

I'm tired of the PMs asking if I actually poisoned kittens, instead look at these boobies.

brent78 posted:

Edit: What's a ballpark figure for a fully populated SAS LeftHand solution?

Take whatever the hardware costs and add like another 10-20. This is very rough; the other guy who had a lab with clusters of them can probably give a much better figure.

Who's trying to push you to the VSA? LeftHand or the reseller? They just literally came out with a G2 box for one of their NSMs.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS


kind of a corner case question...

pretend for a moment you've been stuck with a pretty decent SAN. We're talking RAID 10 across 40 15k spindles and 2GB of write cache (mirrored, 4GB raw): plenty of raw IOPS horsepower.

but you need nas

your application is specifically designed around a shared filesystem (GFS), and changing that would require lots of rewrite work. GFS, for various reasons, is not an option going forward. So it's NFS or something more exotic, and exotic makes me angry.

what product do you shim in between the servers and the SAN to present that SCSI storage as NFS? Preferably under 30k with 4hr support; a failover pair would be nice too. Now, I know about the obvious "pair of RHEL boxes active/passive'ing a GFS volume", but I also want to evaluate my alternatives. Extra special bonus points if it can do snapshots and replication.

Does netapp make a "gateway" model this cheap?

The ideal product would be two 1U boxes running some embedded NAS software on an SSD, with Ethernet and Fibre Channel ports, all manageable through a web interface with *very* good performance analysis options.

Can you tell I wish sun would sell a 7001 gateway-only product real bad?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

You can, in fact, use a NetApp as a gateway in front of just about any array.

http://www.netapp.com/us/products/storage-systems/v3000/

http://www.netapp.com/us/products/storage-systems/v3100/

http://www.netapp.com/us/products/storage-systems/v6000/

oblomov
Jun 20, 2002

Meh... #overrated


How well do these V-filers work? We haven't tried them yet, and we were thinking of fronting some EMC and Hitachi storage with one.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

oblomov posted:

How well do these V-filers work? We haven't tried them yet, and we were thinking of fronting some EMC and Hitachi storage with one.

Just FYI, this would be unsupported by EMC.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

oblomov posted:

How well do these V-filers work? We haven't tried them yet, and we were thinking of fronting some EMC and Hitachi storage with one.

I have a customer that's front-ending HDS USPs with it, and he's pretty happy with it. That was actually my first and only experience with it: a series of AMS arrays fronted by an HDS USP, which in turn has the NetApp in front of it. They're using iSCSI for their ESX project.

Two of my colleagues at work seem to think pretty highly of it though.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS


yea, I stumbled across the V3020 the other day and it seemed perfect, until my SAN vendor said it'd be unsupported on both sides.

Right now I'm looking at Exanet, anyone got any opinions?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

I think unless you put a Celerra in front of your EMC, EMC won't support you, period.

Another option is ONStor, though.

Mierdaan
Sep 14, 2004



Pillbug

Performance question.

I've got an ESXi server with a VMFS LUN on our NetApp FAS2020. I need to create a file server VM that has to serve up two shares, 500G each. I can't cram all of this inside the VMFS LUN because the A-SIS engine on the FAS2020 won't run against a volume larger than 500G, so I'm stuck separating at least the shares out in some way. Will I see any performance benefit from creating these as VMDKs in additional VMFS LUNs, or from just hooking the Server 2008 VM directly (iSCSI) to the LUNs and letting it format them with NTFS? What's best practice here?

Thanks storage goons.

Catch 22
Dec 1, 2003
Damn it, Damn it, Damn it!

Mierdaan posted:

Performance question.

I've got an ESXi server with a VMFS LUN on our NetApp FAS2020. I need to create a file server VM that has to serve up two shares, 500G each. I can't cram all of this inside the VMFS LUN because the A-SIS engine on the FAS2020 won't run against a volume larger than 500G, so I'm stuck separating at least the shares out in some way. Will I see any performance benefit from creating these as VMDKs in additional VMFS LUNs, or from just hooking the Server 2008 VM directly (iSCSI) to the LUNs and letting it format them with NTFS? What's best practice here?

Thanks storage goons.

I have mine set up with the first VMDK as the boot and OS drive, then an RDM to the LUN from the host. I think this performs better than the iSCSI initiator pulling from the guest, but I don't have metrics to prove it.

Mierdaan
Sep 14, 2004



Pillbug

Thanks. I found a study on VMware's website that seems to indicate it doesn't make too much of a difference, and honestly this isn't a high-I/O file server. I'm probably worrying too much.

Nomex
Jul 17, 2002

Flame retarded.

I'm just gonna put this out there for anyone looking for cheap SAN stuff:

You can get an HP Enterprise Virtual Array 4400 dual controller with 12 x 400GB 10k FC drives and 5TB of licensing for less than $12k. The part number is AJ813A. Need more space? Order a second one and use just the shelf, then keep a spare set of controllers. You can get ~38TB for less than $96k this way. The only things you need to add are SFPs and switches.
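The math on that, for anyone pricing it out (these are the ballpark figures from the post, not a formal quote):

```python
# EVA4400 math from the post: eight AJ813A bundles, reusing the extra
# controllers as spares. Ballpark list prices, not a vendor quote.

bundles = 8
drives_per_bundle = 12
drive_gb = 400
price_per_bundle = 12_000

raw_tb = bundles * drives_per_bundle * drive_gb / 1000
total_cost = bundles * price_per_bundle
print(f"~{raw_tb:.1f} TB raw for under ${total_cost:,}")  # ~38.4 TB / $96,000
```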

Nomex fucked around with this message at 17:20 on Mar 5, 2009

complex
Sep 16, 2003



What kind of performance hit will deduplication incur?

Say I have 26 servers, A through Z. They all have 72G drives now, but they only use ~10GB each, and assume ~3GB of that is exactly the same base OS image.

Deduplication will obviously save us a lot of space. I've seen the NetApp demo videos and it sounds awesome. But people are now telling me that performance will suffer. Still others say that all your deduped blocks will probably be sitting in cache or on SSD anyway, so performance actually increases.

I can see both sides of it: if I'm just reading the same block all the time (say, a shared object in Linux or a DLL in Windows), then if that block is deduped I'll be winning. But let's say I modify that block; the storage array then has to pull that block out and start keeping a second copy of it, and managing that slows the array down.

Thoughts?
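A toy sketch of what the array is doing under the hood: identical blocks are stored once and reference-counted, and modifying a shared block forces a copy-out plus bookkeeping, which is exactly where the write-path overhead comes from. This is a simplification for illustration, not NetApp's actual A-SIS implementation:

```python
# Toy content-addressed block store: identical blocks dedupe to one copy;
# overwriting a shared block triggers the copy-out plus reference
# bookkeeping that costs write performance. Not any vendor's real design.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}  # content hash -> block data
        self.refs = {}    # content hash -> reference count

    def write(self, data: bytes) -> str:
        h = hashlib.sha256(data).hexdigest()
        if h not in self.blocks:          # new content: store one copy
            self.blocks[h] = data
        self.refs[h] = self.refs.get(h, 0) + 1
        return h

    def overwrite(self, old: str, data: bytes) -> str:
        # The dedup write penalty lives here: drop a reference to the
        # shared block, then store the modified content separately.
        self.refs[old] -= 1
        if self.refs[old] == 0:
            del self.blocks[old], self.refs[old]
        return self.write(data)

# 26 servers, one identical base-OS block: one physical copy, 26 refs.
store = DedupStore()
handles = [store.write(b"shared base OS block") for _ in range(26)]
print(len(store.blocks), store.refs[handles[0]])  # 1 26
```

Read-mostly shared blocks are cheap (one hot copy to cache); it's the writes to shared blocks that make the array earn its keep.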


Intrepid00
Nov 10, 2003

I'm tired of the PMs asking if I actually poisoned kittens, instead look at these boobies.

Update since we put Lefthand boxes in production...

They are awesome :c00lbert:

Users are starting to notice the increased performance as crap is moved off the DAS.
