|
My boss had the misfortune of trying to buy some Compellent kit right as the acquisition was wrapping up - it was a pretty high-priority purchase for him, and they couldn't get anyone from Compellent on the phone because they'd all hosed off to the Bahamas for two weeks or something.
|
# ? Oct 18, 2011 16:55 |
|
|
|
Let's say I want about 3TB (effective) of 15k SAS and 30TB of nearline SAS. What else should I be looking at besides Dell (Equallogic and Compellent), NetApp 2240 (the new one), IBM's NetApp OEM stuff (3500, 3700?), and EMC VNX(e)? Budget would be about 80k eurobucks. I'm more used to fixing other people's mistakes than making my own, so this is new for me. Help! If that's of any interest, they have an MD3000i (fast drives) and 2 MD1000s full of SATA drives now. evil_bunnY fucked around with this message at 20:42 on Oct 18, 2011 |
# ? Oct 18, 2011 20:30 |
|
Are you going to be doing any replication? I would stick to Equallogic, NetApp, and EMC. I think the Compellent would be out of your price range, if US pricing is similar. You should make sure you're checking IOPS and read/write ratios, not just eyeballing how much space you need.
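To make the "check IOPS, not just capacity" point concrete, here's a minimal back-of-envelope sketch. The write penalties are the commonly cited per-RAID-level figures, and the workload numbers are made up for illustration, not taken from anyone's environment:

```python
# Back-of-envelope backend IOPS estimate (illustrative numbers, not vendor specs).
# Backend IOPS = reads + writes * RAID write penalty.

RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(frontend_iops, read_ratio, raid_level):
    """Translate a measured frontend workload into backend disk IOPS."""
    reads = frontend_iops * read_ratio
    writes = frontend_iops * (1 - read_ratio)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# Example: 5000 frontend IOPS at a 70/30 read/write mix on RAID5:
# 3500 reads + 1500 writes * 4 = 9500 backend IOPS
print(backend_iops(5000, 0.7, "raid5"))
```

The point being that a write-heavy workload on parity RAID can need several times the spindle IOPS the frontend number suggests.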
|
# ? Oct 18, 2011 20:57 |
|
Internet Explorer posted:Are you going to be doing any replication? I would stick to Equallogic, NetApp, and EMC. I think the Compellent would be out of price range, if US pricing is similar. You should make sure you're checking IOPS and read/write ratios and not just eyeballing how much space you need. No replication. Integrating into the mothership SAN infrastructure is out of the budget. evil_bunnY fucked around with this message at 21:18 on Oct 18, 2011 |
# ? Oct 18, 2011 21:08 |
|
FISHMANPET posted:can anyone point to some hard data I can use to shame the idiot who set this up, and hopefully get it fixed? The best practices guide, however, does state: consider using a RAIDZ1 main pool with a RAIDZ1 backup pool rather than higher-level RAIDZ or mirroring (touching on the value of backups vs. stronger RAIDZ). If it were me, assuming they are enterprise drives, I would go 3x 4+1 vdevs with 1 hot spare. With consumer-class drives I would go with 2x 5+2 vdevs with 2 hot spares.
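For reference, a rough capacity comparison of the two suggested layouts (reading 4+1 as RAIDZ1 and 5+2 as RAIDZ2), assuming the 16-bay shelf discussed in the thread and hypothetical 1TB drives; raw usable space only, ignoring ZFS metadata overhead:

```python
# Usable-capacity comparison of the suggested ZFS layouts for a 16-bay shelf
# (hypothetical 1 TB drives; raw data capacity only, no ZFS overhead modeled).

def usable_tb(vdevs, data_disks, parity_disks, spares, drive_tb=1.0):
    used = vdevs * (data_disks + parity_disks) + spares
    assert used <= 16, "layout doesn't fit a 16-bay shelf"
    return vdevs * data_disks * drive_tb

# 3x RAIDZ1 (4 data + 1 parity) + 1 hot spare -> 12 TB usable across 16 bays
print(usable_tb(vdevs=3, data_disks=4, parity_disks=1, spares=1))
# 2x RAIDZ2 (5 data + 2 parity) + 2 hot spares -> 10 TB usable across 16 bays
print(usable_tb(vdevs=2, data_disks=5, parity_disks=2, spares=2))
```

So the safer consumer-drive layout costs about 2TB of usable space on the same shelf.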
|
# ? Oct 18, 2011 23:08 |
|
Internet Explorer posted:I think the Compellent would be out of price range, if US pricing is similar. We're currently in the bidding process to replace our entire SAN infrastructure. Talked with my boss a few days ago about it, he's gotten Dell and HP down to 70% off list...even after that, Compellent is still twice as much as the HP for the same basic spec. It sure is nice, though...
|
# ? Oct 19, 2011 13:40 |
|
FISHMANPET posted:16 disk RAIDZ1. I'm supposed to get them mirroring to each other and comment on how I feel about them, except for the drive layout. Have a look at the potential scenarios. One disk failure isn't a big deal. Two disks failing risks data loss, or will mean that one array is no longer functional. If two fail on one array, that array is broken until replacement drives are installed and it's mirrored again. If one or more drives fail on each array, there's a serious risk of data loss. Even if there's no data loss, there will be a huge performance hit while the drives are swapped and the arrays rebuild. Then you run the risk of disks failing during the rebuild. With mirrored RAIDZ2, two drives failing (either one on each array or two on one array) is no big deal, and so on. What I can't understand is that with so many disks there are no hot spares. Hot spares at least start repairing something before needing intervention, and negate some of the issues in the above scenarios. Also, by simple combinatorics, multiple disk failures are more likely with such a large number of disks.
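The combinatorics point can be sketched with a simple binomial model. The 3% annual failure rate is an assumption, and real disk failures are correlated (same batch, same shelf, rebuild stress), so treat this as illustrative only:

```python
# Rough illustration of why multi-disk failure gets likelier as disk count grows
# (assumes independent failures, which real shelves routinely violate).
from math import comb

def prob_at_least_k_failures(n_disks, k, p_fail=0.03):
    """P(>= k of n disks fail in a window), binomial model, assumed annual p_fail."""
    return 1 - sum(comb(n_disks, i) * p_fail**i * (1 - p_fail)**(n_disks - i)
                   for i in range(k))

# A 16-disk RAIDZ1 vdev dies on its 2nd concurrent failure; a 4+1 vdev also
# dies on its 2nd, but only 5 disks are exposed to the same parity group.
print(prob_at_least_k_failures(16, 2))  # double failure across all 16 disks
print(prob_at_least_k_failures(5, 2))   # double failure inside one small vdev
```

Even with these toy numbers, a double failure is roughly an order of magnitude more likely across 16 disks than within a 5-disk vdev.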
|
# ? Oct 20, 2011 03:42 |
|
evil_bunnY posted:
Get Dell and EMC fighting. EMC took 15k off the top of a 36TB quote (600GB 15k and 2TB NL-SAS blend in an EMC VNXe) just because we dropped the EQL name.
|
# ? Oct 20, 2011 05:27 |
|
incoherent posted:Get Dell and EMC fighting. EMC took 15k off the top of a 36TB quote (600GB 15k and 2TB NL-SAS blend in an EMC VNXe) just because we dropped the EQL name. Do this. Also, 80k euros isn't VNXe territory, but it's easily the lower end of the VNX range if you need to go there.
|
# ? Oct 20, 2011 11:01 |
|
Spamtron7000 posted:I'm evaluating backup target replacements for our Data Domains. We back up our colo and then replicate to the home office to do tape-outs from here. We use Simpana for backups. It's worked great for 3 years but now we've outgrown the DDs. EMC is getting extremely sassy with their pricing so I'm looking elsewhere. I'm evaluating Quantum, ExaGrid and Oracle/ZFS as hardware solutions. I've also read that CommVault has rewritten their dedupe so it can do global variable-block-length dedupe at each client. Intriguing. End result is that instead of paying EMC $200k for whiteboxes I can pay CommVault $35k in software and then buy my own whiteboxes. Firstly, make sure a few vendors are involved, which will help get the price down. DDs are awesome, and if they've done the job and you like them, just push the price down as best you can. EMC end of year soon! Personally, with dedupe backup I always prefer an appliance/array-based approach. http://www.theregister.co.uk/2011/09/10/dcig_dedupe_report/ This link has a (paid-for and biased) guide, but it's always worth having a look at the metrics they use to compare these things.
|
# ? Oct 20, 2011 11:05 |
|
Vanilla posted:Firstly make sure a few vendors are involved which will work on getting the price down.
|
# ? Oct 20, 2011 12:18 |
|
ZombieReagan posted:We've beat the poo poo out of a FAS3140 and a FAS6080, and yeah...we get the 100ms spikes more often than I ever care to admit. The good news is our VNX is installed and in use, and our Exchange admin ran a Jetstress test on the NL-SAS pool I built for our new servers. That is really good news. We are getting our VNX shortly and I cannot wait to put it through its paces.
|
# ? Oct 20, 2011 13:02 |
|
We have 2 x VNX 5700 on the way. Super excited.
|
# ? Oct 20, 2011 14:38 |
|
Vanilla posted:Firstly make sure a few vendors are involved which will work on getting the price down. Thanks for the link to the report. My experience with Data Domain has been great, but EMC has gotten ridiculous with list prices on the DDs. I know I can get them to around 65-70% off of list, but even then, they're on some serious crack. Setting list price at $200k for a piece of hardware that literally costs them no more than $8k each is just monumentally stupid. I realize they have to try and recoup some of the $2.1 billion they paid for Data Domain and its dedupe patents, but I'm not spending as much as my SAN costs to back up my SAN onto a whitebox and then replicate it to another whitebox. They insist they won't get beaten on price, but I'm not so sure this time. ExaGrid is definitely in the mix, but their sales team is really green. They are really "motivated" and I hate motivated salespeople. loving leave me alone already.
|
# ? Oct 20, 2011 17:52 |
|
Spamtron7000 posted:. Corvettefisher posted:My Freenas box just died today... Only 150G of data, good thing it was only a test machine evil_bunnY fucked around with this message at 18:57 on Oct 20, 2011 |
# ? Oct 20, 2011 18:01 |
|
My Freenas box just died today... Only 150G of data, good thing it was only a test machine
Dilbert As FUCK fucked around with this message at 18:16 on Oct 20, 2011 |
# ? Oct 20, 2011 18:14 |
|
incoherent posted:Get Dell and EMC fighting. EMC took 15k off the top of a 36TB quote (600GB 15k and 2TB NL-SAS blend in an EMC VNXe) just because we dropped the EQL name. Works with NetApp and EMC too. My rep told me there was no way he was going to get beat by EMC.
|
# ? Oct 20, 2011 19:11 |
|
Anyone have NetApp V62xx series filers in production today and how many?
|
# ? Oct 23, 2011 02:17 |
|
Last year, my boss gave me his 2TB external to fill with linux isos to play on the boxee box I convinced him to get. A couple of weeks ago, he gave me two 1TBs to get him the latest point releases. I kind of split up each folder among each drive as I wasn't sure what he did or did not have. Today, I see snapshot error messages about backups on one of our command view eva servers, so I log in to take a look. He had provisioned a 2 TB LUN and was in the process of copying everything from the two 1TBs to it in order to consolidate the folders, then copy it back to the drives. Gotta love abusing company resources, my boss owns.
|
# ? Oct 24, 2011 16:07 |
|
Anyone know if there are any sneaky ways around HP's lack of official support for installing their HD-status monitoring utilities on DL180 servers running ESXi? We have some great DIY SANs thanks to their VSA, but obviously the VSAs only see their virtual disks because of the virtualization layer, and there's no easy way to remotely check for failed HDs. Anyone know some other way to poll the Smart Array controller?
|
# ? Oct 25, 2011 13:44 |
|
Stoo posted:Anyone know if there are any sneaky ways around HP's lack of official support for installation of their utilities for monitoring HD status on DL180 servers running ESXi? We have some great DIY SANs thanks to their VSA but obviously VSAs only see their virtual disks thanks to the virtual abstraction and there's no easy way to remotely check for failed HDs. Anyone know if there's a way to poll the smart array controller somehow else? Get the ESXi ISO from HP that contains the CIM providers; it will push individual HD info up to VMware. https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=HPVM09
|
# ? Oct 25, 2011 17:40 |
|
Has anyone installed the Failover Manager for an HP P4000 series? The only download I could find for the latest FOM (9.5) was a zip file, but the docs say nothing about a zip containing a couple of VMDKs and an OVF. I may be under the incorrect assumption that to install the FOM, you create a new VM on a single ESX host, and that it keeps quorum between SAN cluster members.
|
# ? Oct 26, 2011 19:28 |
|
Just figured out what an OVF file is and what it does. I'M LEARNING.
|
# ? Oct 26, 2011 20:45 |
|
We're finally starting to meet with vendors and quote out SANs/blades. For background, we're a relatively small company (low initial capital) expanding into hosted offerings (needs to be scalable), including public cloud and hosted VDI solutions (relatively high IOPS). Met with Dell/Compellant today, and I've got meetings lined up with EMC, NetApp, and HP. Anyone else I should make sure to talk to over the next couple months?
|
# ? Oct 26, 2011 23:23 |
|
Check into IBM's Storwize V7000.
|
# ? Oct 27, 2011 03:29 |
|
InferiorWang posted:Has anyone installed the Fail Over Manager for an HP4000 series? The only download I could find for the latest FOM(9.5) was a zip file, but the docs say nothing about a zip containing a couple VMDKs and an OVF. The FOM does indeed keep quorum. Don't, however, put its storage on the SAN, or it won't be operational if one is lost. You also only need it if you don't have 3 or more regular boxes, as those would be better to run the manager. When I installed it, the installer created the VM from start to finish; I just had to set the drive to use. I used the Hyper-V one though, so the process of setting it up is probably different.
|
# ? Oct 27, 2011 05:29 |
|
Regex posted:We're finally starting to meet with vendors and quote out SANs/blades.
|
# ? Oct 27, 2011 14:04 |
|
Intrepid00 posted:The FOM does indeed keep quorum. Don't however put its storage in the SAN or it won't be operational if one is lost. You also only need to use it if you don't have 3 or more regular boxes as they would be better to run the manager. Thanks. I put it on the ESXi cluster, but just on the local datastore, not the SAN itself. After it was installed and configured, I screwed up the networking config on one of the P4300s, but the whole storage cluster didn't go offline, so I guess it's doing its job.
|
# ? Oct 27, 2011 21:04 |
|
Misogynist posted:Isilon (now owned by EMC) would probably fit your model nicely. Isilon is quickly becoming my favourite platform ever. It's all sorts of amazing. It can do crazy IOPS; we recently looked into a 5x S-Series node cluster with SSDs that was rated at 75k IOPS and 1.5GB/sec. However, given what Regex said, I'd recommend against Isilon. - Isilon is not geared towards small, transactional IOPS. I'd look towards a traditional FC array for this. - Isilon starts at around 20-30TB, so it may already be well out of the park for a small company. I'd say for a hosting company, if you've got the money, go Vblock. Storage, networking, security, and compute all in one package, and UCS does offer a high memory footprint for more VMs. It may not be that cheap, but it makes life for service providers and hosting companies about a million times easier.
|
# ? Oct 29, 2011 01:19 |
|
Let me know if any of you storage goons are going to be at NetApp Insight this week, we can share a McRib in the McDonalds of the MGM Grand.
|
# ? Oct 31, 2011 16:25 |
|
Regex posted:We're finally starting to meet with vendors and quote out SANs/blades. I'm hearing some buzz around Nutanix for scale-out VDI. And if money were no object, I'd echo Vanilla's Vblock recommendation--30 days from PO to production is pretty cool.
|
# ? Oct 31, 2011 20:56 |
|
Is it possible to purchase a loaded storage array for an iSCSI SAN at less than 5000 dollars for 1-2TB of usable space? My client needs just that, to function as a storage platform for vSphere. They are limited to $5,000 by government red tape (otherwise they are looking at a lengthy 4-6 month approval process). Pretty much anything decent I've seen starts at $10k.
|
# ? Nov 3, 2011 18:28 |
|
tronester posted:Is it possible to purchase a loaded storage array for an iSCSI SAN at less than 5000 dollars for 1-2TB of usable space? Would loading OpenFiler on a Dell (or whitebox) server do the trick? Cram 4-6 1TB HD's in there.
|
# ? Nov 3, 2011 18:42 |
|
tronester posted:Is it possible to purchase a loaded storage array for an iSCSI SAN at less than 5000 dollars for 1-2TB of usable space? It's been a while since I checked their site, but Cybernetics has always had crazy good pricing on iSCSI SANs. While we didn't end up going with them (our CFO wanted to stay within one brand for our server/storage refresh; there was nothing technically wrong with their products), they undercut Dell by nearly 75%. They had a 16TB array for something crazy like $9k. I'm sure if you talk to them, you'll be able to find something in your price range.
|
# ? Nov 3, 2011 18:44 |
|
Bob Morales posted:Would loading OpenFiler on a Dell (or whitebox) server do the trick? Cram 4-6 1TB HD's in there. That's actually something I've considered. They have a ton of HP DL360 G5s that aren't being used since we virtualized most of their infrastructure. But is OpenFiler or FreeNAS a viable option for the enterprise? I don't have any experience with either, but I could certainly learn.
|
# ? Nov 3, 2011 18:49 |
|
So I've been playing around with our new VNX 5300 and its replication partner, and let me just say, man, this has been quite an experience. Coming from an Equallogic box, and not having a storage admin, it is not nearly as kiddy-proof. I'd be more excited about it if I did not have 5 million other things going on (Set up our SAN ASAP! Fix this person's printer ASAP!). I was running the VNX Initialization Assistant and it got to the point where it set an IP address and hostname and then bombed out. Apparently there is no way to reset the Control Station (or reinstall the OS on it) without them sending out a tech so that you can run the VIA again; it all has to be done manually. The support has been hit or miss. We've had a hard time getting hold of anyone in their support chat who knows anything, and they supposedly "dispatched a tech" over 48 hours ago, but we were able to get it working and are now updating the firmware. It definitely seems like these SANs were built from the bottom up, with dozens of little tools and separate interfaces, which is very different from the Equallogic side of things. I am very excited to put it through its paces and then put it in production, though.
|
# ? Nov 3, 2011 18:49 |
|
tronester posted:That's actually something I've considered. Also keep in mind that the iSCSI targets used by OpenFiler, FreeNAS and basically any NAS distribution other than Nexenta will not be able to support shared disk clustering of Windows servers using MSCS. This may not be an issue for your environment, though.
|
# ? Nov 3, 2011 18:55 |
|
Misogynist posted:Openfiler or FreeNAS isn't nearly as much of an issue as the fact that cramming a handful of SATA disks into some enclosure is not going to give you nearly enough IOPS to support even a lightly used vSphere environment. How many spindles do you think would be necessary for about 50-60 lightly used VMs? Would an enclosure with 12 in RAID6 + a hot spare cut it? I'm going to try to set it up as a science experiment anyway, but knowing whether I should be disappointed in advance would be helpful.
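As a sanity check on the 12-disk idea, here's a rough spindle-count sketch. All figures are assumptions, not measurements from this environment: ~75 IOPS per 7.2k SATA spindle, a RAID6 write penalty of 6, and ~20 IOPS per lightly used VM at a 70/30 read/write mix:

```python
# Rough spindle math for a 12-disk RAID6 box (assumed typical figures, not
# a vendor sizing tool: ~75 IOPS per 7.2k SATA spindle, RAID6 penalty of 6,
# lightly used VMs at ~20 IOPS each with a 70/30 read/write mix).

SPINDLE_IOPS = 75       # per 7.2k SATA spindle, assumed
WRITE_PENALTY = 6       # RAID6

def supported_vms(spindles, vm_iops=20, read_ratio=0.7):
    raw = spindles * SPINDLE_IOPS
    # Frontend IOPS the spindles can sustain once writes cost 6x on RAID6:
    effective = raw / (read_ratio + (1 - read_ratio) * WRITE_PENALTY)
    return effective / vm_iops

print(supported_vms(12))  # ~18 light VMs, well short of 50-60
```

Under these assumptions the 12-disk SATA RAID6 box supports roughly 18 light VMs, which backs up the skepticism in the quoted post; reaching 50-60 would take roughly three times the spindles, faster disks, or a friendlier RAID level.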
|
# ? Nov 3, 2011 20:05 |
|
tronester posted:Is it possible to purchase a loaded storage array for an iSCSI SAN at less than 5000 dollars for 1-2TB of usable space? Drobo? 8TB iSCSI at a $4,999 list. http://www.drobostore.com/store/drobo/en_US/buy/productID.233947000
|
# ? Nov 3, 2011 20:11 |
|
|
Internet Explorer posted:The support has been hit or miss. Have had a hard time getting a hold of anyone in their support chat who knows anything, and they supposedly "dispatched a tech" over 48 hours ago, but we were able to get it working and are now updating the firmware. This has been our feedback to EMC as well, both on their rather awful support and the interfaces/tools used to manage their arrays. They may have a ton of market share and solid architecture, but they are far behind in those two areas IMO. It's nice to have an ear direct into their engineering teams to give feedback on their UI, hopefully they are able to make some progress.
|
# ? Nov 3, 2011 20:13 |