madsushi
Apr 19, 2009

Baller.
#essereFerrari

Mierdaan posted:

Oh, I tried that in 1.1 and it was terrible. I'll give it another shot!

They rebuilt it from the ground up in 2.0. It now just runs through your browser (works in IE and Chrome if you set the "Browser" .exe path to the right file) and is much snappier. Really my only gripe is that it's still missing SnapVault configuration stuff.

ozmunkeh
Feb 28, 2008

hey guys what is happening in this thread
Does anyone have any idea when Dell might be refreshing the Compellent controller offerings? I'm sure those with larger server rooms or datacenters can be a little more flexible about physical space but having to give up 6U before even attaching any disk shelves is a little much for those of us with limited rack space.

Oddhair
Mar 21, 2004

Crackbone posted:

Is there any resale value in a Dell MD3000 (bare, or with 15 176G 10K SAS drives inside)? I inherited this from a company buyout, and it's honestly more of a hassle than it's worth in our environment. I checked eBay and there appear to be tons of them not selling at $2500 or higher.

Not much value; I just inherited one with 15x 500GB 7,200 RPM SAS disks. Admittedly the giver is an old friend of mine who used to work in storage, but it seems (from links and searches following your request) that the value is pretty low.

Hok
Apr 3, 2003

Cog in the Machine

ozmunkeh posted:

Does anyone have any idea when Dell might be refreshing the Compellent controller offerings? I'm sure those with larger server rooms or datacenters can be a little more flexible about physical space but having to give up 6U before even attaching any disk shelves is a little much for those of us with limited rack space.

I've heard late Q4, nothing official yet though

xarph
Jun 18, 2001


TobyObi posted:

Yes.

For bonus points, the enterprise line has a flash interface now.

That then can launch the old java interface in the right circumstances.

Guess what I've spent the last month or so setting up?

Oh, the career-limiting poo poo I could write about the 18 months I spent working on HiCommand :allears:

I think the most I can say is that the UI group, the sales/service group, the hardware design group, the management software group, and the group that built the underlying middleware to bridge the UI group and the management software group are all different companies. They all had names starting with "Hitachi", but all communications between them (down to "what does this checkbox do") had to be vetted by lawyers, because ~trade secrets~.

KS
Jun 10, 2003
Outrageous Lumpwad

ozmunkeh posted:

Does anyone have any idea when Dell might be refreshing the Compellent controller offerings? I'm sure those with larger server rooms or datacenters can be a little more flexible about physical space but having to give up 6U before even attaching any disk shelves is a little much for those of us with limited rack space.

Even if they refresh it I can't see them getting much smaller -- they have 7 expansion slots, which is probably too many to fit in a 2U design.

Serfer
Mar 10, 2003

The piss tape is real



ZombieReagan posted:

Definitely, this...FAST-Cache will help keep you from having to add more drives to a pool just for IO most of the time.

Things I hate? EMC's lower end hardware won't support FAST cache. It won't even support SSD's at all. It's loving stupid. We need a SAN at every one of our offices, but can't justify spending $45,000 on a higher end SAN for each location. So we have NX4's currently, and would like to eventually upgrade to VNXe, but without FAST cache, it's still a ridiculous proposition.

I'm actually looking at building our own poo poo with Gluster (I really don't want to do this). It will cost roughly the same, but could be 100% SSD. Why can't someone offer something for smaller offices that still have a really high IO need?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

ZombieReagan posted:

Definitely, this...FAST-Cache will help keep you from having to add more drives to a pool just for IO most of the time. Just be aware that you have to add the SSD's in pairs (mirroring), and as far as I can tell you have to destroy the FAST-Cache group in order to expand it. Shouldn't be a major issue, just do it during off-peak times.

Still beats NetApp PAM-II cards accelerating reads only, and having to take a controller offline to plug it in. :cool:

Indeed, I was looking at a report last week where the Fast Cache was servicing about 80% of the busy IO without going to disk. Loads of FC drives sitting there at low utilisation - I wish they'd gone for all high-cap drives now!

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Serfer posted:

Things I hate? EMC's lower end hardware won't support FAST cache. It won't even support SSD's at all. It's loving stupid. We need a SAN at every one of our offices, but can't justify spending $45,000 on a higher end SAN for each location. So we have NX4's currently, and would like to eventually upgrade to VNXe, but without FAST cache, it's still a ridiculous proposition.

I'm actually looking at building our own poo poo with Gluster (I really don't want to do this). It will cost roughly the same, but could be 100% SSD. Why can't someone offer something for smaller offices that still have a really high IO need?

EMC is especially anal about flash drives and it's all to do with things like data protection, failure rates, reliability, etc.

The only vendor that churns out an SSD that most of the industry trusts for enterprise workloads is STEC (Zeus IOPS range). However, pretty much every other vendor goes to STEC also - HP, Netapp, HDS, etc. So they have these great drives in high demand and they can't make enough of them - result is a pretty high price.

Price has come down massively, but compared to a VNXe it's just not feasible. Spending $10000 on a VNXe and then filling it with a few flash drives costing about $10000 isn't going to make sense today - so it's not offered.

Given time and more SSD suppliers (rumours of a second are floating around) you'll start to see them more and more in different arrays and in greater numbers.

Serfer
Mar 10, 2003

The piss tape is real



Vanilla posted:

EMC is especially anal about flash drives and it's all to do with things like data protection, failure rates, reliability, etc.

I get that, and I think it's dumb. Regardless of the drives costing $10k a piece (which is ridiculous as well), sticking two or four in an NX4 or VNXe is still going to be $30,000 cheaper than going with the next step up that does allow SSD's.

Support for cheaper ones like Intel drives would be nice, but I get their reluctance (all the issues with Intel's firmware), which is why I was looking at building my own.

Internet Explorer
Jun 1, 2005





Serfer posted:

I get that, and I think it's dumb. Regardless of the drives costing $10k a piece (which is ridiculous as well), sticking two or four in an NX4 or VNXe is still going to be $30,000 cheaper than going with the next step up that does allow SSD's.

Support for cheaper ones like Intel drives would be nice, but I get their reluctance (all the issues with Intel's firmware), which is why I was looking at building my own.

Not that I don't disagree, but the SSDs from EMC do not cost 10k a piece. Maybe half that, MSRP.

optikalus
Apr 17, 2008

Serfer posted:

I'm actually looking at building our own poo poo with Gluster (I really don't want to do this). It will cost roughly the same, but could be 100% SSD. Why can't someone offer something for smaller offices that still have a really high IO need?

Make sure you run some tests before you set your heart on gluster. The performance was just acceptable at best in my test cases, which was still quite a bit slower than even NFS on an LVM vol. Also, this was with a TCPoIB scheme and gluster would hang/crash when using RDMA. My benchmarks were done a year ago, so maybe they're completely invalid now.

You can PM me if you like and I'll provide you with my bonnie++ benchmark results.
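
For comparison, a typical bonnie++ run against a Gluster mount looks something like this (the mount point is just a placeholder, and the working set should be at least 2x RAM so the page cache doesn't flatter the numbers):

code:

# throughput-only run: 16384 MiB working set, -n 0 skips the small-file
# creation phase, -u is required when running as root
bonnie++ -d /mnt/gluster -s 16384 -n 0 -u root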

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Internet Explorer posted:

Not that I don't disagree, but the SSDs from EMC do not cost 10k a piece. Maybe half that, MSRP.
And do you use only one? What if the SSD you are using for write cache fails? I know in the sun world it is best practice to run a mirror of two (or more) for write cache.
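
In ZFS terms that mirrored write cache is a mirrored log vdev, which is a one-liner to add (pool and device names below are made up):

code:

# mirrored SLOG so a single SSD failure doesn't take the write log with it
zpool add tank log mirror c3t0d0 c3t1d0
# L2ARC read cache doesn't need mirroring; a dead cache device just drops out
zpool add tank cache c3t2d0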

Internet Explorer
Jun 1, 2005





adorai posted:

And do you use only one? What if the SSD you are using for write cache fails? I know in the sun world it is best practice to run a mirror of two (or more) for write cache.

I read in this thread that you run two, but I am not sure how true that is. Wouldn't surprise me. My point was he said 10k a piece, which is not accurate.

Serfer
Mar 10, 2003

The piss tape is real



optikalus posted:

Make sure you run some tests before you set your heart on gluster. The performance was just acceptable at best in my test cases, which was still quite a bit slower than even NFS on an LVM vol. Also, this was with a TCPoIB scheme and gluster would hang/crash when using RDMA. My benchmarks were done a year ago, so maybe they're completely invalid now.

I don't have my heart set on anything; I want to explore my options (supporting a bunch of in-house built stuff is really not what I want to do).

Internet Explorer posted:

I read in this thread that you run two, but I am not sure how true that is. Wouldn't surprise me. My point was he said 10k a piece, which is not accurate.

You do need at least two, and I guess I was wrong on the price. I was trying to remember what my EMC rep told me off hand at our last meeting. Either way, I can't use it where I really need it, and since our IO loads are only going to go up, I don't see much future for us in spinning disks, but nobody makes something in SSD that we can afford for our other offices.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

adorai posted:

And do you use only one? What if the SSD you are using for write cache fails? I know in the sun world it is best practice to run a mirror of two (or more) for write cache.

So the minimum number of SSDs you'd need is three: one RAID 1 pair and one hot spare.

They probably don't cost 10k apiece; it actually depends on the model of array. The price changes so often that it's probably dropping a few percent a month. In another few years we'll be throwing them in like candy.

GrandMaster
Aug 15, 2004
laidback

Vanilla posted:

(edit: then again not sure if the CX3 could have gone there)


It can't; Unisphere came out with FLARE 30 and the CX3 only goes up to FLARE 29.
I believe you can run it off-array though. I'll have to get around to doing that some day, as our CX3s still have about 2 years of maintenance left :(

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I gotta say FreeNAS 8.2 is really impressive.

zapateria
Feb 16, 2003
How much would a NetApp equivalent of 2 x HP EVA4400 SANs with Continuous Access/Business Copy and about 25TB of disk cost?

We're having so much trouble with our EVAs that we're thinking about ditching the whole thing. Our HP partner has sent us two alternative offers: one replaces one of the EVAs with a 6300 with SAS disks, the other replaces all 60 disks with 15K FC disks and moves the 10K disks to the other EVA. Both are gonna cost us about 100K.

Also what other vendors have comparable solutions?

Internet Explorer
Jun 1, 2005





zapateria posted:

How much would a NetApp equivalent of 2 x HP EVA4400 SANs with Continuous Access/Business Copy and about 25TB of disk cost?

We're having so much trouble with our EVAs that we're thinking about ditching the whole thing. Our HP partner has sent us two alternative offers: one replaces one of the EVAs with a 6300 with SAS disks, the other replaces all 60 disks with 15K FC disks and moves the 10K disks to the other EVA. Both are gonna cost us about 100K.

Also what other vendors have comparable solutions?

At that price point I would also look at Dell's EqualLogic line and EMC's VNX/VNXe line.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

zapateria posted:

How much would a NetApp equivalent of 2 x HP EVA4400 SANs with Continuous Access/Business Copy and about 25TB of disk cost?

The question is super vague, but 25 TB of NetApp storage without HA and all SATA would probably be close to $100k.

Drighton
Nov 30, 2005

All the Google results I found back when setting up our iSCSI SAN said that 100-130 MB/s was acceptable. Of course, I think that was based on a 1Gb network, but I couldn't find any indication of the type of network used on any of those sites.

So today I'm testing the secondary site's iSCSI SAN using Iometer, and I'm getting 800MB/s.

That's 64k 100% sequential writes. Every new test I perform seems to come out different; it was just reporting about 500MB/s. I keep thinking (hoping, really) that I'm doing something wrong, so I just changed it to 64k random 50/50 read/write and I'm seeing 800MB/s again.

I say hoping because we've already started using the primary site's SAN, so we can't really make changes as we like anymore. Both sites are using the same equipment from server to switch (10Gb) to SAN. The only difference is we have redundant switches and controllers at the primary site. Configuration on both sites' switches is the same except for the LAG between the redundant switches.

So am I doing something wrong in Iometer, or is this what I should be expecting out of a 10Gb iSCSI SAN? And where should I start looking on the primary site's switches to get this fixed?
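
If it helps anyone compare, the closest fio equivalent of what I'm running in Iometer would be something like this (target path and sizes are placeholders):

code:

# 64k 100% sequential writes, then the 64k 50/50 random read/write mix
fio --name=seq64k --filename=/mnt/santest/fio.dat --size=10g --bs=64k \
    --rw=write --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based
fio --name=rand64k --filename=/mnt/santest/fio.dat --size=10g --bs=64k \
    --rw=randrw --rwmixread=50 --ioengine=libaio --direct=1 --iodepth=32 \
    --runtime=60 --time_based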

Edit: After some troubleshooting with the switches, we narrowed it down to a configuration problem on the controller, and the support rep recommended an additional VLAN, so it looks like I have a weekend project coming up.

Drighton fucked around with this message at 01:23 on Sep 21, 2011

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

Drighton posted:

Edit: After some troubleshooting with the switches, we narrowed it down to a configuration problem on the controller, and the support rep recommended an additional VLAN, so it looks like I have a weekend project coming up.

Are both your switches handling only iSCSI traffic? If so, it would be better to remove the LAG between them, and create a separate iSCSI subnet for each switch so you have a truly redundant switch fabric. This would require dual 10GbE NICs in each server. I made a crappy little diagram to illustrate what I mean. Also, you should be using jumbo frames (MTU=9000) and every server/controller/switchport in that VLAN would need to be configured for that MTU size.
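
On the ESX side the jumbo frame piece is only a couple of commands on a 4.x host (the vSwitch, portgroup and addressing below are made-up examples; the physical switch ports and the array interfaces need the matching MTU as well):

code:

# bump the iSCSI vSwitch to MTU 9000
esxcfg-vswitch -m 9000 vSwitch1
# on 4.x the vmkernel port has to be created with the jumbo MTU
esxcfg-vswitch -A iSCSI-A vSwitch1
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 -m 9000 iSCSI-A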


Drighton
Nov 30, 2005

Bluecobra posted:

Also, you should be using jumbo frames (MTU=9000) and every server/controller/switchport in that VLAN would need to be configured for that MTU size.

That is exactly the plan now, although the Broadcom drivers for VMware would require that we use the ESX software iSCSI initiator to utilize jumbo frames. We figured iSCSI offload was the better choice of the two.
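
If we do go the software initiator route, the port binding on ESX/ESXi 4.x is roughly this (the vmk/vmhba numbers are whatever the host actually reports):

code:

# enable the software iSCSI initiator, then bind the iSCSI vmkernel ports to it
esxcfg-swiscsi -e
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33   # verify the bindings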

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

ZombieReagan posted:

I just got a quote earlier today for a FAS3210 with 2x24 600GB 15K RPM SAS drives, 1x24 2TB SATA, in HA, with NFS,CIFS,iSCSI,FC and 2 10GbE cards for $110K. That's without SnapMirror though, but you don't really need that unless you've got another filer to mirror to. It's less than 25TB usable in SAS disks, but it gives you an idea.

Not exactly a high-end unit, but it's not bad.

It depends. I don't know NetApp, but my 48x2TB EQL PS6510E (~42TB RAID10/70TB RAID6) was around $80k in Jan (end of last Q at Dell) as part of a package, and I bet you can get it w/ the FS7500 HA NAS cluster under $100k...
I'd say if you know how to play your cards (VAR etc) at the end of this quarter (in a week or two) you can even get a PS6010XV (perhaps even XVS) + PS6010E + FS7500 setup around $100k or so...

Drighton
Nov 30, 2005

Yet more problems! I think we've isolated the issue to the Broadcom BCM57711 cards. While using the software initiators in either Microsoft or vSphere we can achieve 700MB/s up to 1.2GB/s. But when we try using the Offload Engine our speeds drop to 10MB/s up to 100MB/s with the latest driver update.

This is on a simpler environment, with 1 switch and 1 controller, but the symptoms are consistent on all 10 of these cards. We've confirmed our SAN configuration is correct with their techs, and we've stumped the VMware support guys - they are doing further research. Dell is now doing their required troubleshooting before we can get replacements, and I've even hit up Broadcom for support (no reply yet).

Does anything stand out to anyone here? The last troubleshooting step I can try is to load Windows directly on one of these machines and test the performance that way. I believe this is also the only way to update the firmware on these cards (which I've found on Dell's website, but not Broadcom's :confused: ).

We've also looked into the Intel cards - is my understanding correct that they do not have an equivalent iSCSI offload engine? From what I've read it looks like they just reduce the impact the software initiator has on the processors.

E: It never fails. When I post about it, I get the answer. Flow Control in ESXi is hidden very well. Throw the command to enable it: instant fix.
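
For anyone who goes hunting for it later, the knob is ethtool's pause settings from the ESX/ESXi shell (at least on the builds I've seen; your vmnic names will obviously differ):

code:

# check current pause-frame settings, then force flow control on for the iSCSI uplink
ethtool --show-pause vmnic2
ethtool --pause vmnic2 rx on tx on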

Drighton fucked around with this message at 20:10 on Oct 13, 2011

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
we get very high throughput for relatively little CPU without toe or iscsi cards. We use software initiators only.

Hok
Apr 3, 2003

Cog in the Machine
I've seen more calls with issues on Broadcom 10G cards than with the Intel ones, although that might be because there are more Broadcom cards out there.

I basically can't give solid evidence, but if I were putting together a system I was responsible for, I'd be using the Intels.

As for the performance with software vs. hardware initiators: if the software works well, then use it.

700MB/s and up is nothing to complain about.

Drighton
Nov 30, 2005

Hok posted:

700MB/s and up is nothing to complain about.

Totally agree. With flow control enabled, hardware adapters and software adapters both run the same: 600-700MB/s. I expect the host was just flooding the controller to achieve that 1GB/s, and that might have been a problem if we had more than just one VM accessing a volume. I'm disappointed that we aren't getting the payoff I expected using HBAs, but that may come with a bit more tweaking. At least ultimately we've relieved the CPU of that load (however small it might be v:shobon:v) without losing performance.

It's going to be a busy weekend.

Internet Explorer
Jun 1, 2005





Post in this thread if your average latency hangs out at about 100ms (with spikes up to 4000ms) on your SAN for reads and you're hosting all your infrastructure on it.

Sup. :(

Can't wait for our new EMC VNX to come save the day.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Drighton posted:

Yet more problems! I think we've isolated the issue to the Broadcom BCM57711 cards. While using the software initiators in either Microsoft or vSphere we can achieve 700MB/s up to 1.2GB/s. But when we try using the Offload Engine our speeds drop to 10MB/s up to 100MB/s with the latest driver update.

This is on a simpler environment, with 1 switch and 1 controller, but the symptoms are consistent on all 10 of these cards. We've confirmed our SAN configuration is correct with their techs, and we've stumped the VMware support guys - they are doing further research. Dell is now doing their required troubleshooting before we can get replacements, and I've even hit up Broadcom for support (no reply yet).

Does anything stand out to anyone here? The last troubleshooting step I can try is to load Windows directly on one of these machines and test the performance that way. I believe this is also the only way to update the firmware on these cards (which I've found on Dell's website, but not Broadcom's :confused: ).

We've also looked into the Intel cards - is my understanding correct that they do not have an equivalent iSCSI offload engine? From what I've read it looks like they just reduce the impact the software initiator has on the processors.

E: It never fails. When I post about it, I get the answer. Flow Control in ESXi is hidden very well. Throw the command to enable it: instant fix.

Ah, don't even get me started on the BCM57711...

Broadcom IS JUNK. Seriously, I run dozens of BCM57711 cards in my servers, two per server, and ANY OFFLOAD BREAKS SOMETHING - different things with different drivers, but they all screw up your speed and/or connectivity (which is more than ridiculous if you think about it: what's the point of offloading?).

I used to buy only BCM-based stuff to match my switches and onboard NICs, but NEVER AGAIN; at least twice we spent weeks figuring out these issues and in the end it was always a goddamn junk Broadcom driver/HBA issue...

...never use iSOE and also avoid TOE on the BCM57711, stick to all software-based iSCSI connections - BCM's offload is pure, oozing sh!t.

FYI, I recently bought some dual-port Intel ones for the price of a BCM5710 (single-port version) and they work like they're supposed to. From now on it's Intel for me.

PS: did I mention the firmware update this summer that wiped out all settings on all adapters...? That was 'fun' too, thanks to Broadcom.

szlevi fucked around with this message at 17:21 on Oct 14, 2011

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
We just got a Compellent SAN and have been tinkering with it. Compellent's knowledge base is pretty good, but are there any other good resources?

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

three posted:

We just got a Compellent SAN and have been tinkering with it. Compellent's knowledge base is pretty good, but are there any other good resources?

What kind of config?

Drighton
Nov 30, 2005

Have any problem in particular? And if you aren't following their best practices document there's a chance you'll get a hosed call with support. Happened to me twice, but I don't hold it against them since their documentation really is pretty good.

Slappy Pappy
Oct 15, 2003

Mighty, mighty eagle soaring free
Defender of our homes and liberty
Bravery, humility, and honesty...
Mighty, mighty eagle, rescue me!
Dinosaur Gum
I'm evaluating backup target replacements for our Data Domains. We back up our colo and then replicate to the home office to do tape-outs from here. We use Simpana for backups. It's worked great for 3 years but now we've outgrown the DDs. EMC is getting extremely sassy with their pricing so I'm looking elsewhere. I'm evaluating Quantum, Exagrid and Oracle/ZFS as hardware solutions. I've also read that CommVault has rewritten their dedupe so it can do global variable block length at each client. Intriguing. The end result is that instead of paying EMC $200k for whiteboxes I can pay CommVault $35k in software and then buy my own whiteboxes.

First - does anyone have experience moving from Data Domains to Simpana Dedupe and can you tell me how it's going? Second, does anyone have good solutions for cheap but supportable 20+ TB NFS whiteboxes to use as backup targets?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
For a backup target that can replicate, an OpenIndiana box seems like a good choice. Two boxes, each with 2x 7+2 raidz2 arrays of 2TB disks, two hot spares, and some cache could probably be done for under $10k.
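
Something like this layout per box, for example (device names are made up):

code:

# two 9-disk (7+2) raidz2 vdevs plus two hot spares out of 20 x 2TB disks
zpool create backup \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 \
  raidz2 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0 c0t16d0 c0t17d0 \
  spare c0t18d0 c0t19d0
# optional L2ARC if you throw in an SSD
zpool add backup cache c1t0d0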

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Drighton posted:

Have any problem in particular? And if you aren't following their best practices document there's a chance you'll get a hosed call with support. Happened to me twice, but I don't hold it against them since their documentation really is pretty good.

Dell gave us this unit for free and specced it out for us, and their Dell Services install guy completely botched the install doing things that didn't even logically make sense. Throughput has been abysmal (we're using legacy mode).

We're having to have a second guy come out and re-do everything this week. Kind of a pain, but it was free so I can't complain too much. We used mostly Equallogic prior.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I got handed a pair of machines, each with 16 2TB drives, running Solaris 10. Guess how the drives are set up.





A 16-disk RAIDZ1. I'm supposed to get them mirroring to each other and comment on how I feel about them, except for the drive layout.

My gut tells me this is the stupidest loving thing ever and almost guarantees data loss, but can anyone point to some hard data I can use to shame the idiot who set this up, and hopefully get it fixed?

The justification from my boss (not the one who set it up) is that since we're going to mirror the machines (with a nightly ZFS send/receive), it doesn't matter if a machine goes down because of hard drive death. Never mind that the act of syncing back 30TB of data is sure to kick off a couple of dead disks in your backup array, but gently caress, what do I know.
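
The nightly mirror itself is simple enough, which I guess is the appeal - something like this (pool and host names made up):

code:

# initial full copy, then incrementals between nightly snapshots
zfs snapshot -r tank@base
zfs send -R tank@base | ssh backup-box zfs receive -Fd tank
# each night afterwards:
zfs snapshot -r tank@nightly-20111014
zfs send -R -i tank@base tank@nightly-20111014 | ssh backup-box zfs receive -Fd tank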

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

FISHMANPET posted:

The justification from my boss (not the one who set it up) is that since we're going to mirror the machines (with a nightly ZFS send/receive), it doesn't matter if a machine goes down because of hard drive death. Never mind that the act of syncing back 30TB of data is sure to kick off a couple of dead disks in your backup array, but gently caress, what do I know.

It sounds like your boss doesn't value your time. RAIDZ1 will protect you from one disk dying, but in my experience cheap SATA disks take a long time to re-silver in big pools. I used to have a 24x1TB enclosure with one RAIDZ2 + 2 hot spares and it would take 3-4 days to re-silver. In your case, if a second drive fails during the re-silver you will have to rebuild the array from scratch and re-sync all that data, which will take some time. I should also mention that in my case of having a giant 24-disk raidz2, performance wasn't stellar even with 32GB of system memory and two SSD L2ARC drives. Part of the problem is that any time there was a read or a write, a given file would be spread across 22 disks, which adds to slowness. What I would do is benchmark the pool as it is right now and try to simulate a disk re-silver. I would then consider re-creating the pool with two raidz1 vdevs + two hot spares (see below) and compare the results.

code:

  pool: horse_porn
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        horse_porn   ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            c1t1d0   ONLINE       0     0     0
            c1t2d0   ONLINE       0     0     0
            c1t3d0   ONLINE       0     0     0
            c1t4d0   ONLINE       0     0     0
            c1t5d0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t7d0   ONLINE       0     0     0
          raidz1-1   ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
            c1t11d0  ONLINE       0     0     0
            c1t12d0  ONLINE       0     0     0
            c1t13d0  ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0
        spares
          c1t15d0    AVAIL
          c1t16d0    AVAIL
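Simulating the re-silver on whatever pool you're testing is just a matter of swapping a data disk for a spare and watching how long it takes - something like this (device and pool names from the example above; adjust to your real layout):

code:

# pull a data disk out of the pool, resilver onto a spare, then watch the progress/ETA
zpool offline horse_porn c1t7d0
zpool replace horse_porn c1t7d0 c1t15d0
zpool status -v horse_porn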
If you haven't already, take a look at the ZFS entries on the Solaris Internals wiki:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

Drighton
Nov 30, 2005

three posted:

Dell gave us this unit for free and specced it out for us, and their Dell Services install guy completely botched the install doing things that didn't even logically make sense. Throughput has been abysmal (we're using legacy mode).

We're having to have a second guy come out and re-do everything this week. Kind of a pain, but it was free so I can't complain too much. We used mostly Equallogic prior.

Dell just acquired them. I'd be surprised if that tech was a Compellent guy in a Dell polo. We got in literally weeks before it happened and have had Compellent guys helping us all the way, and they've been terrific. I guess the Dell techs are still being trained.
