|
Mierdaan posted:Oh; I tried that in 1.1 and it was terrible, I'll give it another shot! They rebuilt it from the ground up in 2.0. It now just runs through your browser (works in IE and Chrome if you set the "Browser" .exe path to the right file) and is much snappier. Really my only gripe is that it's still missing SnapVault configuration stuff.
|
# ? Sep 12, 2011 17:33 |
|
|
Does anyone have any idea when Dell might be refreshing the Compellent controller offerings? I'm sure those with larger server rooms or datacenters can be a little more flexible about physical space but having to give up 6U before even attaching any disk shelves is a little much for those of us with limited rack space.
|
# ? Sep 12, 2011 21:29 |
|
Crackbone posted:Is there any resale value in a Dell MD3000 (bare, or with 15 176G 10K SAS drives inside)? I inherited this from a company buyout, and it's honestly more of a hassle than it's worth in our environment. I checked ebay and there appear to be tons of them not selling at $2500 or higher. Not much value; I just inherited one with 15x 500GB 7.2K RPM SAS disks. Admittedly the giver is an old friend of mine who used to work in storage, but it seems (from links and searches following your request) that the value is pretty low.
|
# ? Sep 13, 2011 05:46 |
|
ozmunkeh posted:Does anyone have any idea when Dell might be refreshing the Compellent controller offerings? I'm sure those with larger server rooms or datacenters can be a little more flexible about physical space but having to give up 6U before even attaching any disk shelves is a little much for those of us with limited rack space. I've heard late Q4, nothing official yet though
|
# ? Sep 13, 2011 12:15 |
|
TobyObi posted:Yes. Oh, the career-limiting poo poo I could write about the 18 months I spent working on HiCommand I think the most I can say is that the UI group, the sales/service group, the hardware design group, the management software group, and the group that built the underlying middleware to bridge the UI group and the management software group are all different companies. They all had names starting with "Hitachi", but all communications between them (down to "what does this checkbox do") had to be vetted by lawyers, because ~trade secrets~.
|
# ? Sep 14, 2011 18:30 |
|
ozmunkeh posted:Does anyone have any idea when Dell might be refreshing the Compellent controller offerings? I'm sure those with larger server rooms or datacenters can be a little more flexible about physical space but having to give up 6U before even attaching any disk shelves is a little much for those of us with limited rack space. Even if they refresh it I can't see them getting much smaller -- they have 7 expansion slots, which is probably too many for a 2U design.
|
# ? Sep 14, 2011 19:49 |
|
ZombieReagan posted:Definitely, this...FAST-Cache will help keep you from having to add more drives to a pool just for IO most of the time. I'm actually looking at building our own poo poo with Gluster (I really don't want to do this). It will cost roughly the same, but could be 100% SSD. Why can't someone offer something for smaller offices that still have a really high IO need?
|
# ? Sep 15, 2011 06:36 |
|
ZombieReagan posted:Definitely, this...FAST-Cache will help keep you from having to add more drives to a pool just for IO most of the time. Just be aware that you have to add the SSD's in pairs (mirroring), and as far as I can tell you have to destroy the FAST-Cache group in order to expand it. Shouldn't be a major issue, just do it during off-peak times. Indeed, I was looking at a report last week where the FAST Cache was servicing about 80% of the busy IO without going to disk. Loads of FC drives sitting there at low utilisation - wish they'd gone for all high-cap drives now!
|
# ? Sep 15, 2011 10:39 |
|
Serfer posted:Things I hate? EMC's lower end hardware won't support FAST cache. It won't even support SSD's at all. It's loving stupid. We need a SAN at every one of our offices, but can't justify spending $45,000 on a higher end SAN for each location. So we have NX4's currently, and would like to eventually upgrade to VNXe, but without FAST cache, it's still a ridiculous proposition. EMC is especially anal about flash drives and it's all to do with things like data protection, failure rates, reliability, etc. The only vendor that churns out an SSD that most of the industry trusts for enterprise workloads is STEC (Zeus IOPS range). However, pretty much every other vendor goes to STEC also - HP, Netapp, HDS, etc. So they have these great drives in high demand and they can't make enough of them - result is a pretty high price. Price has come down massively but compared to a VNXe it's just not feasible. Spending $10000 on a VNXe and then filling it with a few flash drives costing about $10000 isn't going to make sense today - so it's not offered. Given time and more SSD suppliers (rumours of a second are floating around) you'll start to see them more and more in different arrays and in greater numbers.
|
# ? Sep 15, 2011 10:46 |
|
Vanilla posted:EMC is especially anal about flash drives and it's all to do with things like data protection, failure rates, reliability, etc. Support for cheaper ones like Intel drives would be nice, but I get their reluctance (all the issues with Intel's firmware), which is why I was looking at building my own.
|
# ? Sep 15, 2011 18:49 |
|
Serfer posted:I get that, and I think it's dumb. Regardless of the drives costing $10k a piece (which is ridiculous as well), sticking two or four in an NX4 or VNXe is still going to be $30,000 cheaper than going with the next step up that does allow SSD's. Not that I don't disagree, but the SSDs from EMC do not cost 10k a piece. Maybe half that, MSRP.
|
# ? Sep 15, 2011 18:55 |
|
Serfer posted:I'm actually looking at building our own poo poo with Gluster (I really don't want to do this). It will cost roughly the same, but could be 100% SSD. Why can't someone offer something for smaller offices that still have a really high IO need? Make sure you run some tests before you set your heart on Gluster. The performance was just acceptable at best in my test cases, which was still quite a bit slower than even NFS on an LVM vol. Also, this was with a TCPoIB scheme and Gluster would hang/crash when using RDMA. My benchmarks were done a year ago, so maybe they're completely invalid now. You can PM me if you like and I'll provide you with my bonnie++ benchmark results
|
# ? Sep 15, 2011 20:13 |
|
Internet Explorer posted:Not that I don't disagree, but the SSDs from EMC do not cost 10k a piece. Maybe half that, MSRP.
|
# ? Sep 16, 2011 00:53 |
|
adorai posted:And do you use only one? What if the SSD you are using for write cache fails? I know in the sun world it is best practice to run a mirror of two (or more) for write cache. I read in this thread that you run two, but I am not sure how true that is. Wouldn't surprise me. My point was he said 10k a piece, which is not accurate.
|
# ? Sep 16, 2011 01:01 |
|
optikalus posted:Make sure you run some tests before you set your heart on gluster. The performance was just acceptable at best in my test cases, which was still quite a bit slower than even NFS on a LVM vol. Also, this was with a TCPoIB scheme and gluster would hang/crash when using RDMA. My benchmarks were done a year ago, so maybe they're completely invalid now. Internet Explorer posted:I read in this thread that you run two, but I am not sure how true that is. Wouldn't surprise me. My point was he said 10k a piece, which is not accurate.
|
# ? Sep 16, 2011 05:00 |
|
adorai posted:And do you use only one? What if the SSD you are using for write cache fails? I know in the sun world it is best practice to run a mirror of two (or more) for write cache. So the minimum number of SSDs you'd need is three: one RAID 1 pair and one hot spare. They probably don't cost 10k a piece; it does actually depend on the model of array. The price changes so often that it's probably dropping a few % a month. In another few years we'll be throwing them in like candy.
|
# ? Sep 16, 2011 06:02 |
|
Vanilla posted:(edit: then again not sure if the CX3 could have gone there) It can't; Unisphere came out with FLARE 30 and the CX3 only goes up to FLARE 29, I believe. You can run it off-array though; will have to get around to doing that some day as our CX3's still have about 2 years of maintenance left
|
# ? Sep 16, 2011 08:14 |
|
I gotta say FreeNAS 8.2 is really impressive
|
# ? Sep 19, 2011 01:28 |
|
How much would a NetApp equivalent of 2 x HP EVA4400 SANs with Continuous Access/Business Copy and about 25TB of disk cost? We're having so much trouble with our EVAs that we're thinking about ditching the whole thing. Our HP partner has sent us two alternative offers: one is replacing one of the EVAs with a 6300 with SAS disks, the other is replacing all 60 disks with 15K FC disks and moving the 10K disks to the other EVA. Both are gonna cost us about 100K. Also, what other vendors have comparable solutions?
|
# ? Sep 19, 2011 13:32 |
|
zapateria posted:How much would a NetApp equivalent of 2 x HP EVA4400 SANs with continous access/business copy and about 25TB disk cost? At that price point I would also look at Dell's Equallogic line and EMC's VNX/VNXe line.
|
# ? Sep 19, 2011 14:34 |
|
zapateria posted:How much would a NetApp equivalent of 2 x HP EVA4400 SANs with continous access/business copy and about 25TB disk cost? The question is super vague, but 25TB of NetApp storage without HA and all SATA would probably be close to $100k.
|
# ? Sep 19, 2011 23:14 |
|
All the Google results I found back when setting up our iSCSI SAN said that 100-130 MB/s was acceptable. Of course, I think that was based on a 1Gb network, but I couldn't find any indication of the type of network used on any of those sites.

So today I'm testing the secondary site's iSCSI SAN using iometer, and I'm getting 800MB/s. That's 64k 100% sequential writes. Every new test I perform seems to be different; it was just reporting about 500MB/s. I keep hoping I'm doing something wrong; I say hoping because we've already started using the primary site's SAN, so we can't really make changes as we like anymore.

Both sites are using the same equipment from server to switch (10Gb) to SAN. The only difference is we have redundant switches and controllers at the primary site. Configuration on both sites' switches is the same except for the LAG between the redundant switches.

So am I doing something wrong in iometer, or is this what I should be expecting out of a 10Gb iSCSI SAN? And where should I start looking on the primary site's switches to get this fixed?

Edit: After some troubleshooting with the switches, narrowed it down to a configuration problem on the controller, and the support rep recommended an additional VLAN, so looks like I have a weekend project coming up. Drighton fucked around with this message at 01:23 on Sep 21, 2011 |
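A back-of-envelope on link speeds (my own rough math, with the ~10% protocol overhead figure being an assumption for illustration, not a measured number) suggests the old 100-130 MB/s advice was just a saturated 1Gb link, and that several hundred MB/s is the right ballpark for 10Gb:

```python
# Rough usable iSCSI throughput per Ethernet link speed.
# The 10% overhead figure is an assumption, not from any vendor doc.

def usable_mb_per_s(link_gbit, overhead=0.10):
    """Approximate usable throughput (MB/s) after protocol overhead."""
    usable_bits = link_gbit * 1e9 * (1 - overhead)
    return usable_bits / 8 / 1e6  # bits -> bytes -> megabytes

print(round(usable_mb_per_s(1)))   # ~112 MB/s: matches the old 100-130 MB/s advice
print(round(usable_mb_per_s(10)))  # ~1125 MB/s: 500-800 MB/s fits comfortably
```

So 800MB/s on a healthy 10Gb path is believable, and anything stuck near 100MB/s points at a 1Gb bottleneck or misconfiguration somewhere in the path.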
# ? Sep 20, 2011 20:41 |
|
Drighton posted:Edit: After some troubleshooting with the switches, narrowed it down to a configuration problem on the controller, and the support rep recommended an additional VLAN, so looks like I have a weekend project coming up.
|
# ? Sep 22, 2011 00:05 |
|
Bluecobra posted:Also, you should be using jumbo frames (MTU=9000) and every server/controller/switchport in that VLAN would need to be configured for that MTU size. That is exactly the plan now. Although the Broadcom drivers for VMware would require we use the ESX Software iSCSI initiator to utilize Jumbo Frames. We figured iSCSI Offload was the better choice of the two.
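Worth noting the raw framing win from jumbo frames is modest; a quick sketch (assuming plain TCP/IPv4 headers and standard Ethernet framing overheads — my numbers, not anything from the Broadcom or VMware docs) puts wire efficiency at roughly 95% vs 99%. The bigger benefit is usually fewer packets per second for the CPU and NIC to process:

```python
def wire_efficiency(mtu):
    """Fraction of wire time spent on payload, assuming 40 bytes of
    TCP/IPv4 headers per packet and 38 bytes of Ethernet framing
    (14B header + 4B FCS + 8B preamble + 12B inter-frame gap)."""
    payload = mtu - 40   # what's left for iSCSI data
    on_wire = mtu + 38   # what the frame actually costs on the wire
    return payload / on_wire

print(f"{wire_efficiency(1500):.1%}")  # 94.9%
print(f"{wire_efficiency(9000):.1%}")  # 99.1%
```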
|
# ? Sep 22, 2011 02:28 |
|
ZombieReagan posted:I just got a quote earlier today for a FAS3210 with 2x24 600GB 15K RPM SAS drives, 1x24 2TB SATA, in HA, with NFS,CIFS,iSCSI,FC and 2 10GbE cards for $110K. That's without SnapMirror though, but you don't really need that unless you've got another filer to mirror to. It's less than 25TB usable in SAS disks, but it gives you an idea. It depends. I don't know NetApp but my 48x2TB EQL PS6510E (~42TB RAID10/70TB RAID6) was around $80k in Jan (end of last Q at Dell) as part of a package but I bet you can get it w/ the FS7500 HA NAS cluster under $100k... I'd say if you know how to play your cards (VAR etc) at the end of this quarter (in a week or two) you can even get a PS6010XV (perhaps even XVS) + PS6010E + FS7500 setup around $100k or so...
|
# ? Oct 10, 2011 19:40 |
|
Yet more problems! I think we've isolated the issue to the Broadcom BCM57711 cards. While using the software initiators in either Microsoft or vSphere we can achieve 700MB/s up to 1.2GB/s. But when we try using the Offload Engine our speeds drop to 10MB/s up to 100MB/s with the latest driver update. This is on a simpler environment, with 1 switch and 1 controller, but the symptoms are consistent on all 10 of these cards.

We've confirmed our SAN configuration is correct with their techs, and we've stumped the VMware support guys - they are doing further research. Dell is now doing their required troubleshooting before we can get replacements, and I've even hit up Broadcom for support (no reply yet). Does anything stand out to anyone here?

The last troubleshooting step I can try is to load Windows directly on one of these machines and test the performance that way. I believe this is also the only way to update the firmware on these cards (which I've found on Dell's website, but not Broadcom's).

We've also looked into the Intel cards - is my understanding correct that they do not have an equivalent iSCSI offload engine? From what I've read it looks like they just reduce the impact the software initiator has on the processors.

E: It never fails. When I post about it, I get the answer. Flow Control in ESXi is hidden very well. Throw the command to enable it: instant fix. Drighton fucked around with this message at 20:10 on Oct 13, 2011 |
# ? Oct 13, 2011 16:53 |
|
we get very high throughput for relatively little CPU without TOE or iSCSI cards. We use software initiators only.
|
# ? Oct 13, 2011 22:49 |
|
I've seen more calls with issues on Broadcom 10G cards than with the Intel ones, although that might be because there are more Broadcom cards out there. I basically can't give solid evidence, but if I was putting together a system I was responsible for I'd be using the Intels. As for the performance with software vs hardware initiator, if the software works well then use it. 700MB/s and up is nothing to complain about.
|
# ? Oct 14, 2011 11:07 |
|
Hok posted:700MB/s and up is nothing to complain about. Totally agree. With Flow Control enabled, hardware adapters and software adapters both run the same: 600MB/s - 700MB/s. I expect the host was just flooding the controller to achieve that 1GB/s, and that might have been a problem if we had more than just one VM accessing a volume. I'm disappointed that we aren't getting the payoff I expected using HBAs, but that may come with a bit more tweaking. At least ultimately we've relieved the CPU of that load (however small it might be vv) without losing performance. It's going to be a busy weekend.
|
# ? Oct 14, 2011 15:27 |
|
Post in this thread if your average latency hangs out at about 100ms (with spikes up to 4000ms) on your SAN for reads and you're hosting all your infrastructure on it. Sup. Can't wait for our new EMC VNX to come save the day.
|
# ? Oct 14, 2011 16:54 |
|
Drighton posted:Yet more problems! I think we've isolated the issue to the Broadcom BCM57711 cards. While using the software initiators in either Microsoft or vSphere we can achieve 700MB/s up to 1.2GB/s. But when we try using the Offload Engine our speeds drop to 10MB/s up to 100MB/s with the latest driver update. Ah, don't even start me with BCM57711... Broadcom IS JUNK. Seriously, I run dozens of BCM57711 cards in my servers, two per server, and ANY OFFLOAD BREAKS SOMETHING - different things in different drivers, but they all screw up your speed and/or connectivity (more than ridiculous if you think about it; what's the point in offloading?). I used to buy only BCM-based stuff to match my switches and onboard NICs but NEVER AGAIN; at least twice we spent weeks figuring out these issues and it was always a goddamn junk Broadcom drivers/HBA issue at the end... ...never use iSOE and also avoid TOE on BCM57711, stick to all sw-based iSCSI connections - BCM's offload is pure, oozing sh!t. FYI recently bought some dual-port Intel ones for the price of a BCM5710 (single-port version) and it works like it's supposed to. From now on, Intel, that is. PS: did I mention when a firmware update in the Summer wiped out all settings on all adapters...? That was 'fun' too, thanks to Broadcom. szlevi fucked around with this message at 17:21 on Oct 14, 2011 |
# ? Oct 14, 2011 17:17 |
|
We just got a Compellent SAN and have been tinkering with it. Compellent's knowledge base is pretty good, but are there any other good resources?
|
# ? Oct 16, 2011 02:31 |
|
three posted:We just got a Compellent SAN and have been tinkering with it. Compellent's knowledge base is pretty good, but are there any other good resources? What kind of config?
|
# ? Oct 17, 2011 17:36 |
|
Have any problem in particular? And if you aren't following their best practices document there's a chance you'll get a hosed call with support. Happened to me twice, but I don't hold it against them since their documentation really is pretty good.
|
# ? Oct 17, 2011 17:55 |
|
I'm evaluating backup target replacements for our Data Domains. We back up our colo and then replicate to the home office to do tape-outs from here. We use Simpana for backups. It's worked great for 3 years but now we've outgrown the DD's. EMC is getting extremely sassy with their pricing so I'm looking elsewhere. I'm evaluating Quantum, Exagrid and Oracle/ZFS as hardware solutions. I've also read that CommVault has re-written their dedupe so it can do global variable block length at each client. Intriguing. End result is that instead of paying EMC $200k for whiteboxes I can pay CommVault $35k in software and then buy my own whiteboxes. First - does anyone have experience moving from Data Domains to Simpana Dedupe and can you tell me how it's going? Second, does anyone have good solutions for cheap but supportable 20+ TB NFS whiteboxes to use as backup targets?
|
# ? Oct 18, 2011 01:09 |
|
for a backup target that can replicate, an OpenIndiana box seems like a good choice. 2 boxes each with 2x 7+2 raidz2 arrays of 2TB disks with two hot spares and some cache could probably be done for under $10k.
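As a quick capacity sanity check on that layout (ignoring ZFS metadata overhead and the TB-vs-TiB gap, so real usable space comes in noticeably lower):

```python
def raidz_usable_tb(vdevs, disks_per_vdev, parity, disk_tb):
    """Raw usable capacity of a pool of raidz vdevs, before ZFS
    overhead; parity disks per vdev don't count toward capacity."""
    return vdevs * (disks_per_vdev - parity) * disk_tb

# Two 9-disk raidz2 vdevs (7 data + 2 parity) of 2TB drives, per box:
print(raidz_usable_tb(vdevs=2, disks_per_vdev=9, parity=2, disk_tb=2))  # 28
```

so each box nets about 28TB raw, comfortably above the 20+ TB target asked about above.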
|
# ? Oct 18, 2011 01:12 |
|
Drighton posted:Have any problem in particular? And if you aren't following their best practices document there's a chance you'll get a hosed call with support. Happened to me twice, but I don't hold it against them since their documentation really is pretty good. Dell gave us this unit for free and specced it out for us, and their Dell Services install guy completely botched the install doing things that didn't even logically make sense. Throughput has been abysmal (we're using legacy mode). We're having to have a second guy come out and re-do everything this week. Kind of a pain, but it was free so I can't complain too much. We used mostly Equallogic prior.
|
# ? Oct 18, 2011 14:02 |
|
I got handed a pair of machines each with 16 2TB drives, running Solaris 10. Guess how the drives are set up. 16 disk RAIDZ1. I'm supposed to get them mirroring to each other and comment on how I feel about them, except for the drive layout. My gut tells me this is the stupidest loving thing ever and almost guarantees data loss, but can anyone point to some hard data I can use to shame the idiot who set this up, and hopefully get it fixed? The justification from my boss (not the one who set it up) is that since we're going to mirror the machines (with a nightly ZFS send/receive) it doesn't matter if a machine goes down because of hard drive death. Nevermind that the act of syncing back 30TB of data is sure to kick off a couple dead disks in your backup array, but gently caress, what do I know.
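One piece of hard-ish data for the shaming: the standard back-of-envelope on unrecoverable read errors (UREs) during a resilver. Assuming drives at their quoted 1e-14 bit error rate (typical for SATA) and independent errors — both assumptions that flatter the array — a 16-disk RAIDZ1 of 2TB drives has to read ~30TB off the 15 surviving disks with zero parity left, and the odds of getting through that cleanly are grim:

```python
import math

def p_clean_resilver(n_disks, disk_tb, ber=1e-14):
    """Chance a RAIDZ1 resilver reads every surviving bit without a
    single URE. Assumes independent errors at the quoted rate, which
    is optimistic for the array (real errors cluster, drives age)."""
    bits_read = (n_disks - 1) * disk_tb * 1e12 * 8
    return math.exp(-ber * bits_read)  # Poisson approximation

print(f"{p_clean_resilver(16, 2):.0%}")  # roughly a 9% chance of a clean rebuild
```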
|
# ? Oct 18, 2011 14:49 |
|
FISHMANPET posted:The justification from my boss (not the one who set it up) is that since we're going to mirror the machines (with a nightly ZFS send/receive) that it doesn't matter if a machine goes down because of hard drive death. Nevermind that the act of syncing back 30TB of data is sure to kick of a couple dead disks in your backup array, but gently caress, what do I know.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
|
# ? Oct 18, 2011 16:09 |
|
|
three posted:Dell gave us this unit for free and specced it out for us, and their Dell Services install guy completely botched the install doing things that didn't even logically make sense. Throughput has been abysmal (we're using legacy mode). Dell just acquired them. I'd be surprised if that tech was a Compellent guy in a Dell polo. We got in literally weeks before it happened and have had Compellent guys helping us all the way, and they've been terrific. I guess the Dell techs are still being trained.
|
# ? Oct 18, 2011 16:25 |