Beelzebubba9
Feb 24, 2004
Dear IT Goons,

After reading the entire SAN/NAS thread here in SH/SC and stumbling on this article:

http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/

I had a chat with my boss and he’d like me to build out a prototype SAN as part of our push to see if we can start adopting cheaper, commodity or open-source devices in non-business-critical roles at my company. I have a box running OpenFiler under my desk that I set up with an iSCSI target volume, and it’s been humming along reliably for a while now. It’s slow, but that’s to be expected of three Caviar Greens in RAID 5 communicating with my server over our office LAN. What I’d like to do is build on what I learned making the OpenFiler box and put together a prototype SAN for what may become the bulk storage we deploy to our data centers. The following is what I’d like the SAN to do initially:

  1. Act as a data store for our local ESXi box. There are a number of servers and desktops here that are used for testing and remote access, and I’d like to virtualize them to cut down on maintenance and consolidate our resources. None of these will be under high load or require much in the way of IOps.
  2. Act as a bulk network share to store backups on. Our critical data is still backed up to tape, which is great for off-site storage, but bad for file recovery when Pat Q. User deletes the sales presentation they have to give in 15 minutes.

The ultimate goal is to build an inexpensive, reliable storage box that can host 20+ TB of data as a slower, cheaper complement to our real SANs in our corporate and QA environments. I’m looking to deploy these to continue our push towards virtualization and better access to local data. My questions are as follows:

  1. Is this actually a bad idea? I understand I will never see the performance or reliability of our EMC CLARiiONs, but that’s not the point. The total cost of the unit I’ve spec’d out is well under the annual support costs of a single one of the EMC units (with a lot more storage), so I think it’s worth testing. Or should I just get a few QNAPs?
  2. I was going to use BackBlaze’s parts list as a loose guide (save the case). I’d like to use a Sandy Bridge based CPU for AES-NI support, and SuperMicro doesn’t make any server class Socket 1155 motherboards. Does anyone have a suggestion for a S1155 motherboard that would be suitable for use in a SAN?
  3. What are some good guidelines for increasing the reliability of a software RAID array? Critical data on the array will *always* be backed up off the SAN, but I was wondering if there’s a good source of information or best-practices guide on how to configure a software RAID array. I have never had an array die on me, so I don’t have much experience fixing them if they get degraded beyond a simple drive rebuild. I was thinking of going with RAID 60 if I can find support for it, but that might be hard. Would RAID 10 and a hot spare be smarter? (There’s a rough sketch of that layout after this list.)
  4. I have a SAN running OpenFiler and would like to try out FreeNAS. Is there any other option I should keep in mind?
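And here’s that rough sketch for question 3, using Linux md (mdadm) purely as an illustration; the device names and alert address are hypothetical, and this isn’t necessarily what we’d actually deploy:

code:
# 8-disk RAID 10 with one hot spare (hypothetical devices /dev/sdb through /dev/sdj)
mdadm --create /dev/md0 --level=10 --raid-devices=8 --spare-devices=1 /dev/sd[b-j]

# watch rebuild/resync progress and overall array health
cat /proc/mdstat
mdadm --detail /dev/md0

# have mdadm alert on degraded arrays or failed drives (address is a placeholder)
mdadm --monitor --scan --daemonise --mail=storage-alerts@example.com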

If anyone has any other suggestions or aspects I haven’t thought of, I’d really appreciate it. Thanks in advance!

Cpt.Wacky
Apr 17, 2005

WarauInu posted:

How do you like NexentaStor? That's one I haven't used yet and have been looking to test.

I've had it in production with light VM load for a month or two now. The only other stuff I tried was FreeNAS and OpenFiler, but I never made it through their lovely installers.

The web interface runs on a non-standard port, which is annoying when you're trying to remember how to get to it. The style of the web interface is a little dated, and it can be hard to get a sense of how the box is performing at a glance, but the information is all there.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Beelzebubba9 posted:

[*]Is this actually a bad idea? I understand I will never see the performance or reliability of our EMC CLARiiONs, but that’s not the point. The total cost of the unit I’ve spec’d out is well under the annual support costs of a single one of the EMC units (with a lot more storage), so I think it’s worth testing. Or should I just get a few QNAPs?

There's nothing wrong with building your own SAN, and it definitely can/does work. Your biggest issue with a homemade solution is going to be support/knowledge. If you build these things yourself, how many people at your organization know how to fix them? Too often one IT guy gets assigned to "try building a SAN" and then he's the only guy who knows the software and tech well enough to fix/maintain it later. If you're going to roll your own, make sure you keep your coworkers informed and educated so that they can troubleshoot an EMC or a homemade box equally well. Otherwise you end up spending all of your time in the weeds trying to keep these things running.

Nomex
Jul 17, 2002

Flame retarded.

Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned. I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

You may wish to evaluate NFS as well as iSCSI. NFS and VMware play very well together. If you're deduplicating your VMware environments, iSCSI and FC won't report correct datastore usage, as VMware has no way of seeing anything but the raw space. NFS will, because it's just a network-attached drive. (This was in VMware 4.1; if it's changed in 5, someone please correct me.) You can also mount way larger volumes with NFS: you're limited to 2TB datastores with iSCSI, but NFS is limited only by the maximum volume size on your storage array. You are limited to 255 VMDKs per datastore, though. Also, NFS is way better at handling locking. You can get into a situation (and I have) where a VMDK becomes locked and the only way to clear it is to bounce the VM host; with NFS you can simply delete the .lck file and off you go.
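
If you do go the NFS route, mounting a datastore from the ESXi shell is roughly this (ESXi 5.x esxcli syntax; the hostname and export path below are just placeholders):

code:
# mount an NFS export as a datastore (hostname/share are hypothetical)
esxcli storage nfs add --host=filer01.example.com --share=/vol/vmstore --volume-name=nfs_vmstore

# list mounted NFS datastores and their state
esxcli storage nfs list

# on an NFS datastore, the locks show up as .lck-* files next to the VM's other files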

Beelzebubba9 posted:

[*]Is this actually a bad idea? I understand I will never see the performance or reliability of our EMC CLARiiONs, but that’s not the point. The total cost of the unit I’ve spec’d out is well under the annual support costs of a single one of the EMC units (with a lot more storage), so I think it’s worth testing. Or should I just get a few QNAPs?

Further to what madsushi wrote, if something catastrophic happens to your home built storage solution, the blame will probably be entirely on you. When you use a major vendor it might not be your neck on the line.

Nomex fucked around with this message at 18:28 on Jan 9, 2012

Beelzebubba9
Feb 24, 2004

madsushi posted:

There's nothing wrong with building your own SAN, and it definitely can/does work. Your biggest issue with a homemade solution is going to be support/knowledge. If you build these things yourself, how many people at your organization know how to fix them? Too often one IT guy gets assigned to "try building a SAN" and then he's the only guy who knows the software and tech well enough to fix/maintain it later. If you're going to roll your own, make sure you keep your coworkers informed and educated so that they can troubleshoot an EMC or a homemade box equally well. Otherwise you end up spending all of your time in the weeds trying to keep these things running.

We have a pretty high turnover rate here, so this worries me a little bit. I'm hoping that a combination of a well-documented off-the-shelf product and good documentation on my end will help mitigate support issues if I get hit by a bus, but even then it'll be patchy.

Nomex posted:

Further to what madsushi wrote, if something catastrophic happens to your home built storage solution, the blame will probably be entirely on you. When you use a major vendor it might not be your neck on the line.

My IT Director is immensely knowledgeable, so I assume he wouldn't have sent me off on this project if he wasn't willing to trade some risk for a much lower cost per GB. What I'm interested in are the best practices for improving the reliability of a SAN beyond the obvious methods (hot spare, server-class hardware, backing up critical data off the SAN).

Bitch Stewie
Dec 17, 2011

Beelzebubba9 posted:

What I'm interested in are the best practices for improving the reliability of a SAN beyond the obvious methods (hot spare, server-class hardware, backing up critical data off the SAN).

Something is always your weak link, whether it's your backplane, your motherboard, your PSU, your single RAID controller, or the room itself, so it's about minimizing the risks you think you're most exposed to.

I'd be looking at something like the P4000 VSA, which is a "proper" storage appliance that does replication between systems; that way, if your whitebox is cheap enough, you just buy two and replicate everything.

some kinda jackal
Feb 25, 2003

 
 

Nomex posted:

You may wish to evaluate NFS as well as iSCSI. NFS and VMware play very well together. If you're deduplicating your VMware environments, iSCSI and FC won't report correct datastore usage, as VMware has no way of seeing anything but the raw space. NFS will, because it's just a network-attached drive. (This was in VMware 4.1; if it's changed in 5, someone please correct me.) You can also mount way larger volumes with NFS: you're limited to 2TB datastores with iSCSI, but NFS is limited only by the maximum volume size on your storage array. You are limited to 255 VMDKs per datastore, though. Also, NFS is way better at handling locking. You can get into a situation (and I have) where a VMDK becomes locked and the only way to clear it is to bounce the VM host; with NFS you can simply delete the .lck file and off you go.

Good advice. I'll certainly try both. Right now this is not production but just me learning vSphere so it's good to try each and get a feel for what they do :)

joem83
Oct 4, 2007

Sometimes, you have to shake it thrice.
I took a new job last year working for a software development company as their dedicated sys admin for a government software development contract. We inherited two racks of outdated GFE servers and equipment to be used as a test bed for development. Most of it is junk that I've deemed unworthy of my time, but there's an AX-100 SAN up there with about 3 TB of storage in it that I think I can put to good use.

I want to try to get the AX-100 up and running but they neglected to include the documentation or any of the software that came with it when they originally bought it way back when. It looks like it came as a package from Dell with two Silkworm 3250 fiber switches, a UPS, and the actual AX-100 unit.

Does anyone have any experience with this setup, or perhaps some documentation/software they can share with me? I was reading a review of the product and they mentioned it being absurdly easy to set up: all you had to do was follow the quick start guide.

I found this page, but I get an HTTP Status 500 when I try to view the installation/planning docs:
http://www.emc.com/microsites/clari...100/support.esp

Thanks in advance!

joem83 fucked around with this message at 22:38 on Jan 9, 2012

Erwin
Feb 17, 2006

joem83 posted:

AX-100

I probably have the manual somewhere around here on a CD. What's the legality of giving you the PDF?

Also, I've got an AX-100 collecting dust if you wanna buy it :) Also fifteen 500GB drives, new in packaging, that we never used (Dell branded).

joem83
Oct 4, 2007

Sometimes, you have to shake it thrice.

Erwin posted:

I probably have the manual somewhere around here on a CD. What's the legality of giving you the PDF?

Also, I've got an AX-100 collecting dust if you wanna buy it :) Also fifteen 500GB drives, new in packaging, that we never used (Dell branded).

Bah, I didn't even want this one in the first place, let alone yours :v:

I'm not sure on the legality of sending out the documents. I think the product is end of life as of April 2011, so I bet it's okay. If you could send me those PDFs I'd be really grateful Erwin!

some kinda jackal
Feb 25, 2003

 
 
I can't imagine a situation where an installation manual would ever fall in the realm of :filez:

BelDin
Jan 29, 2001
So it seems that 12 fully populated EMC DAE4Ps (8x15x146GB FC and 4x15x500GB SATAII) have arrived on our dock for free courtesy of a closed government office. No head unit, just the DAEs. Hopefully, they didn't run the drives through a degausser and they are still functional.

Has anyone had any experience buying used EMC equipment on the second-hand market? I know we can hook them up to our NS-120, but that will slow down the fibre channel backend to 4Gb/s. I was thinking of buying something like a CX3-20c and hooking them up to it, but I'm pretty sure they are EOL and would like the option to at least get the last Flare update made for the system.

Would we be ahead to get a newer NX4 for use with them?

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

Beelzebubba9 posted:

Homemade SAN stuff

If I were in your shoes, I would definitely implement something that uses ZFS as the back-end storage. You can use Solaris 10/11, OpenSolaris and the various forks out there, or FreeBSD. If you want something point-and-click, FreeNAS is based on FreeBSD and has ZFS, but I have never used it. From a command-line perspective, ZFS is stupidly easy to administer considering all the commands start with either zpool or zfs. I am most comfortable doing this on Solaris, but thanks to Uncle Larry you would need to pay $1,000 per CPU per year for the privilege of running Solaris 10/11. Given that OpenSolaris hasn't been updated in years, you are left with the OpenSolaris forks and FreeBSD. If I were to do this, I would pick FreeBSD, since ZFS has been on it for some time now and the community support is pretty good. The only thing I don't know is how well iSCSI exports and NFS are handled on FreeBSD, which may make me want to veer back to one of the OpenSolaris forks.

When building the server, I would put in as much memory as I can. I would also get two SSD drives for a mirrored ZFS intent log to improve synchronous write performance. For disks, I would go with enterprise-grade SAS drives and stay far away from the WD Green drives. Once all of that is out of the way, make sure you have a good plan for how you want to lay out the vdevs. If I were to use this controller, I would be able to hook up 8 disks per controller. If I had two of these cards and 16 2TB drives, I would probably set up my vdevs like this:

code:
#zpool status dpool
  pool: dpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        dpool	    ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0
            c2t5d0  ONLINE       0     0     0
            c2t6d0  ONLINE       0     0     0

        spares
          c1t7d0    AVAIL   
          c2t7d0    AVAIL   
 
	cache
	  c3t0d0    ONLINE
	  c3t1d0    ONLINE
The above config would result in roughly 17-18TB of usable space. If you're good with RAID-Z1 instead, you would get 21-22TB. RAID-Z1 is similar to RAID 5 in terms of redundancy and RAID-Z2 is similar to RAID 6. The above config also has two hot spares.
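
For reference, building roughly that layout from the command line would look something like this (same device names as above; the two log devices are hypothetical extra SSDs for the mirrored intent log I mentioned):

code:
# two 7-disk raidz2 vdevs, one hot spare per controller, two SSDs as read cache (L2ARC)
zpool create dpool \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
    spare c1t7d0 c2t7d0 \
    cache c3t0d0 c3t1d0

# mirrored ZFS intent log on two more SSDs (hypothetical device names)
zpool add dpool log mirror c4t0d0 c4t1d0

# periodic scrubs are cheap insurance on an array this size
zpool scrub dpool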

Beelzebubba9 posted:

I was going to use BackBlaze’s parts list as a loose guide (save the case). I’d like to use a Sandy Bridge based CPU for AES-NI support, and SuperMicro doesn’t make any server class Socket 1155 motherboards. Does anyone have a suggestion for a S1155 motherboard that would be suitable for use in a SAN?

Supermicro actually does, but for single processors only. The reason for this is that all the Ivy Bridge stuff isn't out yet, so you are going to have to wait a few months for the dual-processor boards to come out. If you want dual processors now, I suggest getting a 1366-based motherboard instead.

Muslim Wookie
Jul 6, 2005
I guess you could go ahead and make the homemade SAN, but don't be like every other jerk who thinks he's stumbled upon some secret no one else has thought of.

Aside from that, my experience is that people say these solutions are for non-critical business data, but in that case, why are you storing it? There's just no such thing as non-critical business data. Dev environment disappears? x amount of devs now sitting around doing nothing. Test environment disappears? x amount of testing not going ahead. About the only exception I can think of is ISO stores.

And then consider that usually the person who builds the homemade SAN looks after it forever. How much is your salary? Is that still less than maintenance? And how do you like being the go-to fall guy forever? I've found that in IT it's always the storage's fault; that's the first place people point fingers, and it's almost never actually the SAN's fault. I don't know, but when you start looking at all the angles, these things really start to fall apart in proper businesses.

I will never ever ever ever ever ever again run a homebrew SAN in an enterprise and any boss that asks me to will get my resignation letter as soon as I can land another job.

Having said all that it's pretty fun initially and I like to do homebrew SANs at home :)

some kinda jackal
Feb 25, 2003

 
 
To go with the above post:

The guy who runs the labs here (not my boss, but close enough I guess) got wind of me building a little whitebox SAN (if that poo poo disappears then I guess I'm out some personal time that is worthless anyway) for my vmware test lab and suggested that I build a second one to store some experiment data.

Uhhhh no.

bort
Mar 13, 2003

marketingman posted:

any boss that asks me to will get my resignation letter as soon as I can land another job.

That's a little more dramatic than what I'd do. I would surely probe the boss's reasoning for asking you to do this. Is the company in dire financial straits? Did he screw up budgeting and not have money allocated for storage? Does he think he's smarter than every other jerk and has stumbled upon some secret no one else has thought of?

quote:

There's just no such thing as non-critical business data. Dev environment disappears? x amount of devs now sitting around doing nothing. Test environment disappears? x amount of testing not going ahead.

:tipshat: very well said, sir.

Muslim Wookie
Jul 6, 2005
Even more simply, when I'm doing a requirements gathering exercise I am without fail told "oh don't worry about those data sets, they aren't important" and my response is the same every single time:

"Not important? Then why are you storing it?"

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

marketingman posted:

Even more simply, when I'm doing a requirements gathering exercise I am without fail told "oh don't worry about those data sets, they aren't important" and my response is the same every single time:

"Not important? Then why are you storing it?"

Well to be fair there are plenty of use cases for transient storage that are not worth spending a lot of money to protect.

A good example might be a bunch of VMs in VMware lab manager/vCloud director. These things all get spun off of base templates that might actually be backed up but the storage you need is mostly a scratch space because you need a place to store this temporary data to run tests.

The test results and source code, of course, are safely stored/backed up/protected, but the actual VMs hold little to no value. If I lost 100 VMs because my cheap storage fell over, I could get them back with a day's worth of work. Unless that day's worth of lost productivity costs me more than buying a bigger/better storage array, it doesn't make sense.

That said I ask that question too and hope to get some sort of intelligent answer back. Some people can articulate the above and others can't. If they can't I tend to err on the side of caution.

Muslim Wookie
Jul 6, 2005

1000101 posted:

Unless that day's worth of lost productivity costs me more than buying a bigger/better storage array, it doesn't make sense.

I hear what you are saying and I agree but (and I'm sorry for sounding like a jerk) within my client base that last sentence is always the case.

Bitch Stewie
Dec 17, 2011
So, let's say you have 24x 2TB Nearline SAS spindles in a pair of MD1200 cabinets and you need as many random read IOPS out of them as you can get without sacrificing capacity down to RAID 10 levels.

What config would you go with?

This is for Commvault, if anyone is familiar with it.

Thanks.

Slappy Pappy
Oct 15, 2003

Mighty, mighty eagle soaring free
Defender of our homes and liberty
Bravery, humility, and honesty...
Mighty, mighty eagle, rescue me!
Dinosaur Gum

Bitch Stewie posted:

So, let's say you have 24x 2TB Nearline SAS spindles in a pair of MD1200 cabinets and you need as many random read IOPS out of them as you can get without sacrificing capacity down to RAID 10 levels.

What config would you go with?

This is for Commvault, if anyone is familiar with it.

Thanks.

Why would you need to maximize random read IOPS for CommVault? Are you using CommVault's dedupe agent? Unless you are, I would guess the backup data is more sequential.

Bitch Stewie
Dec 17, 2011

Spamtron7000 posted:

Why would you need to maximize random read IOPS for CommVault? Are you using CommVault's dedupe agent? Unless you are, I would guess the backup data is more sequential.

We're aux copying dedupe data to tape so it's rather random as it's rehydrating it.

Slappy Pappy
Oct 15, 2003

Mighty, mighty eagle soaring free
Defender of our homes and liberty
Bravery, humility, and honesty...
Mighty, mighty eagle, rescue me!
Dinosaur Gum

Bitch Stewie posted:

We're aux copying dedupe data to tape so it's rather random as it's rehydrating it.

Got it. I think you have to choose speed vs. capacity. RAID10 vs RAID5.

I'm curious how the CommVault dedupe is working out and whether you've implemented variable block length for dedupe or whether you have everything in the same policy. We are evaluating it but I'm not impressed with their salesman's treatment of "variable block length dedupe" so I'm leaning toward re-upping on our DataDomains or switching to Quantum or ExaGrid.

Bitch Stewie
Dec 17, 2011

Spamtron7000 posted:

Got it. I think you have to choose speed vs. capacity. RAID10 vs RAID5.

I'm curious how the CommVault dedupe is working out and whether you've implemented variable block length for dedupe or whether you have everything in the same policy. We are evaluating it but I'm not impressed with their salesman's treatment of "variable block length dedupe" so I'm leaning toward re-upping on our DataDomains or switching to Quantum or ExaGrid.

I think it's likely to be multiple RAID 5 sets right now (that's what their building blocks white paper suggests), with enough mount paths that hopefully we get lots/all of the spindles working at the same time.

As for the dedupe, I like it. We've got around 100TB of backup data stored in around 8.5TB of physical disk right now.

I've not tried variable block length or global dedupe, but there's a reason for that: our current MA was never purchased with D2D or dedupe in mind. We got a very good deal on dedupe so we went with it, but had to make do with our current server.

With the new server my (rough) plan is to use global dedupe, not sure about variable block length, I need to look into that.

Have you read their building blocks white paper? It's very useful IMO.

As for Commvault vs. DD/ExaGrid, I've never used either, but I know the discount we get on Commvault, I know the (rough) list prices on DataDomain, and I don't see how it's even close, tbh.

Nukelear v.2
Jun 25, 2004
My optional title text

Bitch Stewie posted:

So, let's say you have 24x 2TB Nearline SAS spindles in a pair of MD1200 cabinets and you need as many random read IOPS out of them as you can get without sacrificing capacity down to RAID 10 levels.

What config would you go with?

This is for Commvault, if anyone is familiar with it.

Thanks.

RAID 5/6 is probably your best bet for that workload. Smaller blocks tend to widen the gap in favor of RAID 10, though. From my last benchmark on an HP P2000:

code:
Random Read, 16 disks
RAID	Size	IO/s	MB/s	Avg Lat
6	128	1039	130	61
10 	128	1088	136	58

Sequential Read, 16 disks
RAID	Size	IO/s	MB/s	Avg Lat
6	128	6126	765	xx
10	128	4676	584	13
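As a rough sanity check on spindle count (the per-disk figure is a rule-of-thumb assumption, not a measurement): random reads don't pay a RAID write penalty, which is why RAID 6 and RAID 10 land so close together above.

code:
# rule of thumb: a 7.2k Nearline SAS spindle delivers ~75 random IOPS
# (the 16-disk numbers above work out to ~65-70 IOPS per disk, same ballpark)
echo $((24 * 75))   # -> 1800, i.e. roughly what 24 spindles can do for random reads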

Bitch Stewie
Dec 17, 2011
Thank you for those.

Yeah, I think the way it's looking, it will be multiple smaller RAID sets, the idea being that Commvault likes to read and write data to lots of drives simultaneously.

I need to do a shitload of testing with IOMeter, I guess, but my gut reaction given the physical cabinet layouts is that 2x 5+1 RAID 5s per cabinet should see me good.

Bitch Stewie fucked around with this message at 19:23 on Jan 13, 2012

Oddhair
Mar 21, 2004

Let me preface this by saying I don't have any Enterprise storage experience, and this isn't for enterprise (well, yet anyway.)

I've posted previously in this thread that I have an MD3000 array at home with SAS controllers (SFF8470 connectors.)
The SAS card I was given with it won't start on my WHS box. I was also given a MegaRAID SAS 8888 ELP, which has two internal and two external ports, but is a RAID controller rather than an HBA. The RAID card started up fine on WHS, but I can't tell if the external ports could be used to control the array, and I don't want to drop any money on cables to adapt SFF8088 to SFF8470 if it's totally incapable of working, nor can I really justify the $250+ price tag on another HBA. Since I have an iSCSI controller for the MD3000, I do have an out, but I like the higher bandwidth offered by SAS (especially since I have a single iSCSI controller and not two.)

In the short term, I plan on getting some experience managing the array while also using it to back up my WHS data, as well as moving some data off the WHS so two failing drives can be RMAed. For the long term, I'm pushing for my boss to virtualize some of our aging servers and storage (we have a couple of PIII machines :froggonk:)

Thanks in advance for any help, I simply can't tell if the 8888 can be used like this in light of the fact it says "connect up to 240 drives." I'll also ask the friend I got all this from.

Beelzebubba9
Feb 24, 2004

Bluecobra posted:

Words

This was an awesome post; thank you immensely!

Bluecobra posted:

Supermicro actually does, but for single processors only. The reason for this is that all the Ivy Bridge stuff isn't out yet, so you are going to have to wait a few months for the dual-processor boards to come out. If you want dual processors now, I suggest getting a 1366-based motherboard instead.

Would there be any reason a single SNB CPU wouldn't be fast enough to power a SAN of the type we're discussing?

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Serfer posted:

Not the series 50 that is coming out at some point soon? (When does this NDA expire?)

You mean the new ones, built on Dell servers? Summer, as I heard... at least that's what I was told recently by Dell/Compellent. I just got offered a nice setup, 24x 200GB SSD for tier 1 (around 3TB configured) and 12x 15K for tier 2 (~5TB configured), but there's not even a remote chance of getting any of the new Series 50.


KS posted:

Be sure you buy series 40 controllers. They will last you a lot longer.

I think it's a bigger deal that they finally announced Storage Center 6.0, their first 64-bit OS: they doubled the cache (RAM, that is) in the controllers, effective immediately as I heard and, more importantly, they will be able to use smaller block size for data progression (current is 512k I think.)

What I'm really interested in is when they will be able to tier data across other Dell storage boxes, e.g. EqualLogic. They are saying it's a year away, but I hope Dell is putting more engineers on this project, because it would be a killer feature...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Mierdaan posted:

A bloo bloo bloo, this is my life.

Thankfully we get to buy some Compellent this year.

Have you got any of your gear yet? I'm in the process of getting my quotes in and I'm curious what you think... I'm concerned that Compellent is still built on Server 2008 R2 file servers.

Besides not having a fast storage tier, my biggest PITA is the NAS layer: Windows CIFS is just pure, oozing poo poo. Really, utter poo poo.
Whatever you try to use, it will destroy performance with its stupid crap 4k-and-smaller IO sizes. I tried to custom-tailor our compositing tools to use large IOs, and when I look at the other side (a couple of Dell NX3000 NAS boxes, the Fusion-io card's management software) it's not only re-buffering all the time (you can see the load go up, then fall back to zero, only to start again), but everything is very small, 3k-4k max...

When I tested Server 8 with Windows 8 it seemed to work better; even a file copy over gigabit was stable at 115-120MB/s, which is a lot more impressive than anything I've ever seen from Windows-to-Windows networking.
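(For context, that's essentially wire speed; quick math, assuming plain GigE with typical TCP/SMB framing overhead:)

code:
# GigE raw rate: 1,000,000,000 bits/s divided by 8 = 125 MB/s
echo $((1000000000 / 8 / 1000000))   # -> 125 MB/s before protocol overhead
# subtract roughly 5-8% for Ethernet/IP/TCP/SMB framing -> ~115-118 MB/s of real payload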
Problem is, even if I don't get to spend $100k+ on Compellent (= no way to bug the hell out of their support to optimize the NAS layer) and I just buy another EQL full of SSDs, e.g. a PS6110S*, or, ad absurdum, a bunch of SSD or RAM-based cards stacked into my NAS units, it still won't help me as long as I go through Windows Server's uber-lovely CIFS. :(

drat EMC, they bought Isilon...

*: EQL PS6110 is just around the corner (new 24-slot PS6100-series chassis w/ 10GbE), all-SSD version is supposedly coming with 400GB drives.

szlevi fucked around with this message at 10:09 on Jan 14, 2012

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
FYI ElReg posted a summary of the Swedish EMC cockup: http://www.theregister.co.uk/2012/01/13/tieto_emc_crash/

Vanilla
Feb 24, 2002

Hay guys what's going on in th

szlevi posted:


drat EMC, they bought Isilon...


Come.

Mierdaan
Sep 14, 2004

Pillbug

szlevi posted:

Have you got any of your gear yet? I'm in the process of getting my quotes in and I'm curious what you think... I'm concerned that Compellent is still built on Server 2008 R2 file servers.

No, we're probably ordering next week. Our first quote came back surprisingly low, so we're having them requote with some SSDs for VDI base images.

My new boss is pushing Compellent really hard, he's not even interested in talking to other vendors too much. Of course we're not pushing the cutting edge of storage technology around here, so it's pretty much six of one, half-dozen of the other for us and he's already got the budget approved, so :effort:

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

szlevi posted:

Have you got any of your gear yet? I'm in the process of getting my quotes in and I'm curious what you think... I'm concerned that Compellent is still built on Server 2008 R2 file servers.

What controller are you talking about? I saw our SC40 boot up from the serial console before and it looked like it was running some flavor of BSD.

ragzilla
Sep 9, 2005
don't ask me, i only work here


Bluecobra posted:

What controller are you talking about? I saw our SC40 boot up from the serial console before and it looked like it was running some flavor of BSD.

Could be their NAS offering. IIRC their NAS was just WSS in front of a regular cluster.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

ragzilla posted:

Could be their NAS offering. IIRC their NAS was just WSS in front of a regular cluster.

The controllers are BSD based. I think their zNAS is too, since it uses ZFS.

They're coming out with a new NAS head equivalent to what was just released for the EQL soon (http://www.equallogic.com/products/default.aspx?id=10465).

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Bluecobra posted:

What controller are you talking about? I saw our SC40 boot up from the serial console before and it looked like it was running some flavor of BSD.

I'm talking about their NAS fronts: they are standard Dell NX3000 boxes, except they actually run Storage Server 2008 R2 as opposed to Dell's Storage Server 2008 (they canceled their R2 upgrade when the NX3500 cluster variations got under way, though the Windows-based NAS cluster is already officially dead IIRC).

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

three posted:

The controllers are BSD based.


Well, I think they are some Linux fork, at least according to my sales guy, who's a pre-Dell employee. I asked them specifically about it, pointing out that most SAN vendors are part of the BSD crowd, and I got "Linux" every single time...

...it would be interesting to learn he's wrong though. :)

quote:

I think their zNAS is too, since it uses ZFS.

They don't even suggest it if you run a Windows network; they tell you straight up "you don't want that" and that you should get the WSS ones.

quote:

They're coming out with a new NAS head equivalent to what was just released for the EQL soon (http://www.equallogic.com/products/default.aspx?id=10465).

Wow, that would make a LOT of sense, especially w/ 10GbE front end (FS7500 is gigabit only. :()
Just when do you think it will be introduced? March-May?

szlevi fucked around with this message at 08:50 on Jan 15, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

szlevi posted:

Well, I think they are some Linux fork, at least according to my sales guy, who's a pre-Dell employee. I asked them specifically about it, pointing out that most SAN vendors are part of the BSD crowd, and I got "Linux" every single time...

...it would be interesting to learn he's wrong though. :)


They don't even suggest it if you run a Windows network; they tell you straight up "you don't want that" and that you should get the WSS ones.


Wow, that would make a LOT of sense, especially w/ 10GbE front end (FS7500 is gigabit only. :()
Just when do you think it will be introduced? March-May?

You might be right. I thought they were BSD-based like the Equallogic.

I think the NAS head is due later this year. If you have a Dell rep, they may be able to pinpoint it to a specific Quarter.

KS
Jun 10, 2003
Outrageous Lumpwad

szlevi posted:

I think it's a bigger deal that they finally announced Storage Center 6.0, their first 64-bit OS: they doubled the cache (RAM, that is) in the controllers, effective immediately as I heard and, more importantly, they will be able to use smaller block size for data progression (current is 512k I think.)

It is definitely a bigger deal, but it requires Series 40 or better controllers at the moment, hence my recommendation. 512K is the smallest page size; the default is 2MB.

6.0 also adds full VAAI support which is nice.
