|
We just got a Compellent SAN and have been tinkering with it. Compellent's knowledge base is pretty good, but are there any other good resources?
|
# ¿ Oct 16, 2011 02:31 |
|
Drighton posted:Having any problems in particular? And if you aren't following their best practices document, there's a chance you'll get a hosed call with support. Happened to me twice, but I don't hold it against them since their documentation really is pretty good. Dell gave us this unit for free and specced it out for us, and their Dell Services install guy completely botched the install, doing things that didn't even logically make sense. Throughput has been abysmal (we're using legacy mode). We're having a second guy come out and redo everything this week. Kind of a pain, but it was free so I can't complain too much. We mostly used EqualLogic prior to this.
|
# ¿ Oct 18, 2011 14:02 |
|
tronester posted:Well luckily the HP ProLiants have SAS controllers, and will be equipped with 6 2.5" 300GB 10K RPM SAS drives. I honestly believe that they would have enough IOPS for their relatively light workload. Why does anyone ever purchase 10K RPM drives? 2/3 of the performance for 9/10ths of the price.
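That "2/3 of the performance" figure roughly falls out of the mechanics: per-spindle random IOPS is dominated by average seek time plus half a rotation. A rough sketch, using typical vendor-style seek figures rather than numbers for any specific drive:

```python
# Rough per-spindle random IOPS estimate: 1 / (avg seek + avg rotational latency).
# Seek times below are typical ballpark figures, not measurements of any real drive.
def est_iops(rpm, avg_seek_ms):
    rotational_latency_ms = 60_000 / (2 * rpm)   # half a revolution, in milliseconds
    return 1000 / (avg_seek_ms + rotational_latency_ms)

iops_10k = est_iops(10_000, 4.5)   # ~133 IOPS
iops_15k = est_iops(15_000, 3.5)   # ~182 IOPS
print(f"10K: {iops_10k:.0f} IOPS, 15K: {iops_15k:.0f} IOPS, "
      f"ratio: {iops_10k / iops_15k:.2f}")
```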
|
# ¿ Nov 3, 2011 22:40 |
|
Bluecobra posted:In my experience, their support on Solaris is lovely. The last time I called Copilot about a Solaris 10 iSCSI problem, they told me to call "Solaris" for help. Also, when we bought our first Compellent, the on-site engineer(s) couldn't figure out how to get iSCSI working on SUSE and resorted to Googling for the solution. I ended up figuring it out on my own. Based on my anecdotal evidence, it seems like this product works best for Windows shops. From my experience thus far, their support is terrible. Not to mention their install engineers are awful; they're Dell engineers now and seem to have had hardly any Compellent training. The SAN is okay, but I'd still rather buy EqualLogic.
|
# ¿ Nov 12, 2011 18:41 |
|
Serfer posted:Compellent is pretty big, and it's owned by Dell now. It's also not "white box" equipment; you're just showing that you have absolutely no idea what you're talking about. It's like saying EMC Avamar is white box just because it runs on a Dell 2950.
|
# ¿ Dec 2, 2011 22:06 |
|
I don't think Compellent should be considered white box, but I also don't think SuperMicro is a great hardware maker. I look forward to them being moved to Dell hardware.
|
# ¿ Dec 4, 2011 03:53 |
|
ILikeVoltron posted:It marks used blocks and progresses them either up or down tiers (sometimes just a RAID level). How do you handle systems that run on monthly cycles, which would have their data migrated down to the slow tier by then? Disable data progression on those volumes, or create custom profiles?
|
# ¿ Dec 21, 2011 15:37 |
|
Martytoof posted:This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned. I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now. In addition to OpenFiler, FreeNAS is an option.
|
# ¿ Jan 7, 2012 06:23 |
|
ragzilla posted:Could be their NAS offering. IIRC their NAS was just WSS in front of a regular cluster. The controllers are BSD-based. I think their zNAS is too, since it uses ZFS. They're coming out soon with a new NAS head equivalent to what was just released for the EQL (http://www.equallogic.com/products/default.aspx?id=10465).
|
# ¿ Jan 15, 2012 05:46 |
|
szlevi posted:Well, I think they are some linux-fork, at least according to my sales guy who's a pre-Dell employee. I asked them specifically about it and I got linux every single time I have asked, pointing out if they are part of the BSD-crowd like most SAN vendors... You might be right. I thought they were BSD-based like the EqualLogic. I think the NAS head is due later this year. If you have a Dell rep, they may be able to pin it down to a specific quarter.
|
# ¿ Jan 15, 2012 21:47 |
|
evil_bunnY posted:1) This is kinda disingenuous: you want the raw capacity to matter somewhat, but do take into account what you'll pay to get the raid level you'll actually use (if you say RAID5 I'm going to lol) and do take into account what dedup/thin provision will get you. Why would you 'lol' at RAID5?
|
# ¿ Jan 24, 2012 14:38 |
|
evil_bunnY posted:Rebuild failures. It's happened twice to me before (once on a Dell MD which was bad enough, the other time on a semi-old EMC unit), and it's mathematically very likely to happen during any storage system's lifetime. So I'm guessing you really hate RAID 50?
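For context on why rebuild failures are "mathematically very likely": a RAID 5 rebuild has to read every surviving drive end to end, and one unrecoverable read error anywhere can kill it. A back-of-envelope sketch, assuming the commonly quoted consumer-SATA spec of one URE per 1e14 bits and a hypothetical 7-drive array of 2 TB disks:

```python
# Chance of hitting at least one unrecoverable read error (URE) while rebuilding
# a degraded RAID 5. Assumes the consumer-SATA spec rate of 1 URE per 1e14 bits;
# enterprise drives are usually rated 1e15 or better, so scale accordingly.
def rebuild_ure_probability(surviving_drives, drive_tb, ure_rate=1e-14):
    bits_read = surviving_drives * drive_tb * 1e12 * 8   # every surviving drive read in full
    return 1 - (1 - ure_rate) ** bits_read

# Hypothetical 7-drive RAID 5 (6 surviving data drives, 2 TB each):
print(f"{rebuild_ure_probability(6, 2):.0%}")   # roughly a 62% chance the rebuild hits a URE
```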
|
# ¿ Jan 24, 2012 15:23 |
|
Rhymenoserous posted:Look at Nimble, it's a bit over your price range, but I'm pushing about 50% block-level dedupe (they keep referring to it as compression, which it technically isn't) on my SQL DBs, and my VMFS operating systems datastore is barely using any of the space I allocated it, to the point where I'm about to create a new datastore, migrate, and reclaim some storage. I would never use Nimble given how small and new they are. We looked at them in the past because one of their reps is a personal friend of one of the managers here, and they have such a small user base and team. Just my personal opinion.
|
# ¿ Feb 14, 2012 22:50 |
|
Hyrax posted:Disclaimer: I'm not a storage guy (I do VMware stuff mostly), but I do have to run a couple of EqualLogic setups. What firmware are you running?
|
# ¿ Feb 23, 2012 22:04 |
|
EQL also has limits on the number of connections in a group and in a storage pool. I don't recall exactly what they are, but I'd look into that. I'm assuming adding a member means adding connections.
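For a rough sense of how fast those connections add up, here is some illustrative arithmetic; all figures below are hypothetical, and the actual group/pool limits should be checked against Dell's documentation for your firmware:

```python
# Back-of-envelope iSCSI connection count for an EqualLogic group.
# All numbers are hypothetical, purely to show how the total grows.
hosts = 12
volumes_per_host = 8
sessions_per_volume = 2   # e.g. MPIO with two host NICs

total_connections = hosts * volumes_per_host * sessions_per_volume
print(total_connections)   # 192 connections counting against the group/pool limit
```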
|
# ¿ Feb 23, 2012 23:15 |
|
Compellent is absolutely terrible. We were given a unit due to the amount of business we do with Dell, and it's a pile of junk. Controller crashes that they can't explain; blaming issues on firmware being out of date even when the array says there are no updates (you literally have to call to find out if there are updates and to get them to release them to you, but in the meantime if you do "Check for Updates" your array will tell you it's all up to date); Copilot support rebooting the wrong controller when one is down and bringing your entire storage down; Copilot support blaming performance issues on using thin-provisioned VMDKs; Copilot support saying a massive performance issue is due to the number of disks (we're talking each disk getting like 10 IOPS, yes 10, not 100). Do never buy.
|
# ¿ Apr 6, 2012 23:56 |
|
Intraveinous posted:That's quite odd... What series controller(s)? What version of Storage Center are you running? We have the same controllers, running Storage Center 5.5.3. We have had Dell and Compellent come into our office numerous times. We've had storage engineers, the Dell Tiger Team, etc. We've given them beyond the benefit of the doubt. I find most people that like them have only used cheap devices or local storage. Compellent doesn't even support any of the VAAI features except space reclamation, so I'd never use it for virtualization.
|
# ¿ Apr 30, 2012 21:27 |
|
Intraveinous posted:That's right, I remember our conversation now. You gave me your SR number which I relayed to them. My bad, I guess I should have said, "Their lab was able to reproduce and verify the problem discovered by KS." Credit where credit is due, thanks for your help. I imagine the array itself has hardware issues, which Compellent seems unable to see. I'd bet if I had a different unit it would probably be fine; however, the experience with a bad unit has shown how poor Compellent and Copilot are. Any support that accidentally reboots the wrong controller and brings down the storage is bad. They've been unable to even diagnose some issues (e.g. one controller hanging on a firmware update). Their response times on difficult tickets are pretty terrible, too. SC 6.0 finally implementing VAAI is nice. At least you can rest assured technologies will be supported eventually. In any case, we'll likely use this for junk projects and keep critical machines on a vendor that isn't terrible.
|
# ¿ May 3, 2012 18:43 |
|
You could do a lot more with 30k, but it seems you feel the need to justify it instead of putting that 30k to good use. It sounds like your company is pretty terribly run, so they won't know any better either way. three fucked around with this message at 00:45 on May 24, 2012 |
# ¿ May 24, 2012 00:41 |
|
Chad Sakac owns, so that video owns by proxy.
|
# ¿ May 26, 2012 20:08 |
|
If you're going to make a SAN for use with virtual environments, it's pretty silly to not fully support VAAI.
|
# ¿ Jun 14, 2012 20:03 |
|
Internet Explorer posted:Oh, I missed that part. I can't imagine having 6TB worth of VMs and only having 16k USD to work with. On the Equallogic side you actually can get boxes that have a mix of storage. Same with EMC. Should be the same with NetApp. EqualLogic doesn't offer mixed SATA/SAS in the same unit, AFAIK; only a mix of SAS and SSD.
|
# ¿ Jul 30, 2012 21:41 |
|
Beelzebubba9 posted:I'm going to bump this too. The internet seems to have very good things to say about Nimble's product, and the people I know who use them really like them, but it wasn't in a production or similarly stressed environment. How comfortable are you with being one of a very small number of users? It means you'll be the one running into bugs; the bigger vendors probably have just as many, but they also have far more customers hitting them and getting them fixed before you ever notice.
|
# ¿ Aug 15, 2012 21:54 |
|
You have to configure all of your hosts to tolerate ~40 seconds of the EQL array being unavailable during firmware upgrades. Maybe this is recommended with other iSCSI arrays too, but I haven't run into it yet?
|
# ¿ Aug 16, 2012 20:07 |
|
szlevi posted:I never did it but IIRC my values are ~30 secs and my hosts all tolerate failovers just fine... Do other arrays require this, and specifically state this requirement?
|
# ¿ Aug 17, 2012 02:37 |
|
People hang out on Spiceworks forums?
|
# ¿ Aug 30, 2012 14:50 |
|
The main problem with the VSA originally was the limitations, and a lot of those are resolved in the next release: vCenter not being able to run in the same environment, only one VSA per vCenter, no ability to expand, and having to use pristine hosts. I had not heard of any reliability concerns. It is still overpriced solo, but I believe the bundle pricing is a lot more reasonable.
|
# ¿ Sep 2, 2012 19:40 |
|
FISHMANPET posted:Holy poo poo you have no idea what you're doing. This is what a SAN is. If you want to find some crazy overpriced FC gear go nuts I guess, or see if you can find some direct attached storage but I don't know if that exists anymore anyway. Well, you see, FC cables plug directly into the hard drive platters, so there's no overhead like with iSCSI.
|
# ¿ Sep 7, 2012 04:15 |
|
EoRaptor posted:If they made a mac mini with 2 (or more) Gb ports, I'd wedge one of those in just to serve AFP from an iSCSI LUN, but the current mini + thunderbolt to ethernet just seems to be asking for problems. You can use the Thunderbolt port as a 10Gb port.
|
# ¿ Sep 8, 2012 21:07 |
|
XMalaclypseX posted:The transfer rates are over both ports. What are you using to benchmark those numbers?
|
# ¿ Sep 10, 2012 18:21 |
|
Syano posted:Our environment is small enough that I can still afford dedicated switches for our SAN. I have a buddy who is an admin over on the other side of town who constantly rides me about wasting ports by dedicating switches just to SAN traffic and how I should be just running all the traffic through the same top of rack switch and VLANing from there. Who cares I say, as long as I can afford it I am going to keep it simple and keep layer 3 off my SAN switches. You should ride him about being terrible. You're doing it the correct way. iSCSI should have its own dedicated network. The only justification for not doing it that way is if he has no budget, and if so then maybe he should've gone NFS instead of iSCSI. three fucked around with this message at 00:50 on Sep 11, 2012 |
# ¿ Sep 11, 2012 00:48 |
|
adorai posted:A non routed vlan is not layer 3, it's still layer two. Dedicating a switch (and nics) to iscsi is a waste. It's silly not to pay ~$15,000 for a couple of stacked switches given what you get: increased stability; better performance (especially if the backplane of your top-of-rack switches can't support the full bandwidth of all the ports; rough math on that below); higher reliability, since the environment is pristine; protection from the network team causing blips that really hurt iSCSI but not typical traffic; and easier management when pinpointing issues, changes, and configuration problems. However, I will concede that this is a debatable approach, but I can't believe any storage admin wouldn't dedicate NICs at the very minimum (especially with 1Gb; most environments don't need 10Gb). three fucked around with this message at 01:28 on Sep 11, 2012 |
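On the backplane point, the check is simple arithmetic. A sketch with made-up figures, not taken from any particular switch's datasheet:

```python
# Is a top-of-rack switch oversubscribed for storage traffic?
# Illustrative numbers only; substitute your switch's real specs.
ports = 48
port_gbps = 1
backplane_gbps = 56                     # hypothetical switching capacity

demand_gbps = ports * port_gbps * 2     # full duplex, every port at line rate
print(f"demand {demand_gbps} Gb/s vs backplane {backplane_gbps} Gb/s "
      f"-> {demand_gbps / backplane_gbps:.1f}:1 oversubscribed")
```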
# ¿ Sep 11, 2012 01:18 |
|
Misogynist posted:At this point, it really depends on whether you're using (round-robin) 1 GbE or 10 GbE for your networking. There's some pretty major cost concerns associated with the extra 10 gigabit switches if you want any kind of reasonable port density at line rate. Lots of people do converged for their 10 gigabit infrastructure, but they don't have the problems we've had with unannounced network maintenance taking down my cluster on my wedding day because of isolation response. The price goes up a bit if you're using 10Gb (which most people probably don't need), but it's the foundation for a SAN infrastructure that probably cost several hundred thousand dollars. The common belief that iSCSI is worse than FC largely comes from people trying to run it over their existing network infrastructure and the problems that causes.
|
# ¿ Sep 11, 2012 01:35 |
|
NippleFloss posted:The issues being mentioned are pretty overblown. Your post contradicts itself. You're saying it's overblown and then say FC is more stable. The reason FC is considered more stable is because it uses its own switches and people don't usually gently caress with it and gently caress it up. The issues mentioned are real-world scenarios, not hypothetical paradises where people don't do dumb things and break your storage network. (Hint: a lot of the people anyone will work with are really bad at their jobs.) Having a pristine, less hosed-with, easily monitored and managed environment is critical to stability. And the cost is negligible given the cost of a quality Compellent, EMC, NetApp, etc. array. Don't cut corners on storage.
|
# ¿ Sep 11, 2012 05:07 |
|
Amandyke posted:Like here? http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/12169-304616-3930449-3930449-3930449-4118659.html?dnr=1 HP's website is painfully designed.
|
# ¿ Sep 12, 2012 01:58 |
|
I am really looking forward to VMware making the VSA/etc awesome so that it is actually feasible for most environments. It's a long way away with all the limitations it has now, but I think it's the future. Getting rid of the SAN would be awesome.
|
# ¿ Nov 8, 2012 17:06 |
|
Rhymenoserous posted:It will never happen, there are a ton of reasons to go with a shared chunk of external storage outside of the capacity arguments that VSA are likely to solve. For a small business though I see VSA as a godsend. I disagree. SANs aren't used because people love them for virtualization; they're used because they're a requirement for HA/DRS/etc. Nutanix is already working in this space. It's just a matter of time. Corvettefisher posted:The Data Protection appliance? I have it sitting in a lab, might run a few tests if there is something particular you are looking for, and get back to you on it. I Have problems managing it through anything other than the web client for some reason... The Web Client is required to use it; it won't work (along with many other new features) with the standard client. It's a decent product; I haven't used it in production, but I set it up in my lab. Lots of little "gotchas," but it's still a new product. It depends on how much VMware sinks into it. If they give it all the bells and whistles, they'll deal a serious blow to the business of several partner companies (Veeam, PHD Virtual, and Quest to a lesser extent, since Quest has a lot of other software and is owned by Dell now).
|
# ¿ Nov 8, 2012 19:39 |
|
Rhymenoserous posted:Do you really think an "All in one" virtualization box is really going to throw the world all a twitter? I'm kind of skeptical. What is the benefit of continuing the traditional SAN architecture? I would rather have a resilient scale-out infrastructure that uses cheaper technology. Scale-out SANs are already very popular (e.g. Equallogic), so let's go a step further and push that into the server, make it resilient and highly available, and ditch the behemoth SAN architecture. Solid-state drives becoming affordable and easily obtainable makes this idea easier, as well. Push everything into the software layer.
|
# ¿ Nov 8, 2012 20:32 |
|
cheese-cube posted:I've worked with SANs/NASs for several years and I like to think I'm somewhat on top of things but what the gently caress defines a "scale-out SAN"? A quick search on Google has simply led me to believe that it's just another lovely buzzword. EqualLogic's term for it is "frameless," as opposed to "frame-based."
|
# ¿ Nov 8, 2012 22:13 |
Moey posted:What if I only have a limited number of hosts and have exceeded the internal (software shared) storage in them? I would be forced to purchase another host + licensing. With a traditional SAN you would just be adding on a shelf. Perhaps we will see the server architecture change to accommodate this; there's no particular reason a server can't have "shelves" added. Also, EqualLogic, for example, can't have shelves added to it. You have to buy a whole new member, which includes two controllers; controllers are, more or less, "compute," so you're paying the same price with that approach as you would with the SAN-less approach, except that in the SAN-less strategy you also gain compute capacity in your virtual environment.
|
# ¿ Nov 9, 2012 03:49 |