|
oblomov posted:Backing up or replicating (locally) large amounts of data, how do you guys do it? So, my new project will require me to backup/replicate/copy/whatever about 100TB of data to tertiary storage. I may be a little late with this. You should look into a data de-duplicating solution for the backup and tertiary storage. Check out Data Domain. They can be optioned to mount as SMB, NFS, FC or iSCSI. I've had one that I've been playing with for a little while now. My 300GB test data set deduplicated down to 101GB on the first pass. Speed is pretty good too: 3GB/min over a single gigabit link. As it just shows up as disk space, it's supported by pretty much every backup product you can think of too.
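If you're wondering how a 3:1 reduction is even possible: dedup appliances chunk the incoming stream, hash each chunk, and store any given chunk only once. Here's a rough Python sketch of the fixed-size-chunk version (Data Domain actually uses variable-length segmentation, and 8KB and the file name are just numbers I picked):
code:
import hashlib

def dedup_estimate(path, chunk_size=8192):
    """Hash fixed-size chunks and count how many are unique.
    Real appliances use variable-length chunking, but the principle
    is the same: identical chunks get stored once and referenced."""
    seen = set()
    total = unique = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += len(chunk)
            digest = hashlib.sha1(chunk).digest()
            if digest not in seen:
                seen.add(digest)
                unique += len(chunk)
    return total, unique

total, unique = dedup_estimate("backup.img")  # hypothetical test file
print(f"{total} bytes in, {unique} bytes stored ({total / unique:.1f}x)")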
|
# ¿ Dec 11, 2008 19:15 |
|
ddavis posted:I inherited a Dell AX100 that's a bit long in the tooth. It hosts our Exchange database, SQL database, file shares, and some MS .vhds. When a lot of customers buy SAN hardware, they buy for long term reliability, performance and cost, in that order. You're going to get 100x the support from EMC or HP when something fucks up, versus building it yourself. Big vendors will also guarantee their product will work with other major products at a certain SLA. Nomex fucked around with this message at 05:35 on Dec 13, 2008 |
# ¿ Dec 13, 2008 05:32 |
|
Jadus posted:In general, what are people doing to back up these large multi-TB systems? To add on to what others have said: because you're going to be dealing with a large number of unused files, you should look at archiving off to secondary storage everything that hasn't been used in, say, 3 months. As it's mostly unchanging, it doesn't need to be backed up as regularly as your production data. You can use a program like EMC DiskXtender to flip data back and forth transparently between primary and secondary storage as well. As for the actual data backup, if you don't want to use tape, a bunch of vendors offer data de-duplicated disk based backup solutions that are faster and more reliable than tape. For example, Data Domain makes a hardware product, while EMC offers Avamar as software.
|
# ¿ Jan 28, 2009 13:56 |
|
Anyone here dealt with Compellent Storage Center equipment? I'm trying to find out what drawbacks they may have from people who've actually used the stuff.
|
# ¿ Feb 6, 2009 15:10 |
|
I'm just gonna put this out there for anyone looking for cheap SAN stuff: You can get an HP Enterprise Virtual Array 4400 dual controller with 12 x 400GB 10k FC drives and 5TB of licensing for less than $12k. The part number is AJ813A. Need more space? Order a second one and use just the shelf, then keep a spare set of controllers. You can get ~38TB for less than $96k this way. The only things you need to add are SFPs and switches. Nomex fucked around with this message at 18:20 on Mar 5, 2009 |
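The math on that, in case anyone wants to check it:
code:
unit_price = 12_000          # rough AJ813A price from above
drives, size_gb = 12, 400    # per enclosure
units = 8

raw_tb = units * drives * size_gb / 1000   # 38.4 TB raw
print(f"{raw_tb:.1f} TB raw for ${units * unit_price:,}")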
# ¿ Mar 5, 2009 18:16 |
|
zapateria posted:Our company has two office locations and we're planning to use the second as a disaster recovery location. Ideally, I would say get a second EVA 4400 with enough FATA disks to cover your storage needs, then get 2 Brocade 7500 SAN extension switches. You can then pick up a Continuous Access EVA license and enable EVA asynchronous replication between your primary and DR sites. This has zero downtime cost: you can plug in and configure all the required equipment while everything is live. This won't be the cheapest option unfortunately, but it will be the best. Also, you probably won't have to size the EVA to be as large as your primary storage, as a lot of your primary disk is probably carved into RAID 10. You can set all the DR stuff to RAID 5 and sacrifice performance for space. Nomex fucked around with this message at 00:37 on Jan 10, 2010 |
# ¿ Jan 10, 2010 00:32 |
|
adorai posted:Our secondary site has enough storage to hold all of the data, and just enough performance for our critical apps to keep running, so we have all sata disk on a single controller at our DR site. We are comfortable with letting our non critical apps be down so long as the data is intact. In that case, you can get an EVA starter kit for pretty cheap. Call your HP rep and have him quote you on model #AJ700B. That's the part number for the 4TB (10 x 400GB 10k FC) model. If that's not a good fit, there are a few more options here. The starter kits tend to be a lot cheaper than just buying an EVA. I forgot to mention: if you do decide to go this route, DO NOT under any circumstances let anyone talk you into using HP MPX110 IP distance gateways. They're complete poo poo. Nomex fucked around with this message at 06:35 on Jan 10, 2010 |
# ¿ Jan 10, 2010 06:33 |
|
Insane Clown Pussy posted:We were sold an underperforming Lefthand system that was discontinued within a few weeks of the purchase. I get the feeling they were offloading old stock. There's nothing particularly wrong with them that I've noticed but everything about dealing with them was like pulling teeth. Well, I shouldn't say everything. Dealing with their support was a pleasure, soured only by the number of times we had to contact them. One module was DOA and its replacement died within a month or two, but since then they've been fine and given us little trouble. They had a lot of problems during the transition; however, Lefthand equipment is now built on HP ProLiant gear, and the reliability and support are a lot better now that the transition is done. FISHMANPET posted:I was wondering if anybody could provide a useful link on SAS expanders? I've seen all sorts of SAS cards that say they support 100+ drives, but I don't understand how they do it. Google isn't helpful for once, and I'm just really curious. A SAS expander is basically a switch: the controller's handful of PHYs fan out through the expander to dozens of drive PHYs, with all the drives sharing the controller's bandwidth. Some manufacturers make internal cards too: http://www.amazon.com/Hewlett-Packard-468406-B21-Sas-Expander-Card/dp/B0025ZQ16K
|
# ¿ Mar 24, 2010 19:51 |
|
Zerotheos posted:I'm not familiar with this, so I thought I'd briefly read up on it. I understand what you're saying but 3PAR themselves seem to disagree that single parity (regardless of wide striping) is good enough with growing drive capacities. This doesn't sound like something they're doing just to appease dumb customers. It mentions their system was still vulnerable to double disk failures and I don't think I'd feel better about that just because it rebuilds faster than a normal RAID5 array. I can only speak for HP, but I'm sure 3PAR and others are similar. On an HP EVA, disks are divided into 8-disk parity groups, and you also have fault tolerance disks. In a RAID 5 configuration, you can lose 1 disk per parity group, plus however many disks you have set aside as fault tolerant. Say you have a 1.2TB usable VRAID 5 set on 300GB disks (equal in size to 4+1 RAID 5), with 5 parity groups and 2 fault tolerance disks: you can lose 5 disks (as long as it's 1 per set) plus an additional 2 disks from anywhere before you lose data. This means your chances of losing data in a properly configured EVA are extremely remote. EoRaptor posted:The difference in RAW space isn't actually 'real'. With HP, I lose half of that space right off (network mirror), and I can then sacrifice more with different raid levels if I want. They have the same software features (snaps for anything that supports VSS, pretty much), and I just need to figure out the backup support. The products are really close. For backup support you just need to present the LUNs as read-only to the backup server, then use whatever backup type and product you want. Nomex fucked around with this message at 04:45 on Apr 2, 2010 |
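To put numbers on it (back-of-envelope only, and it assumes each rebuild finishes before the next disk dies; two simultaneous failures in one parity group will still lose data):
code:
# Back-of-envelope version of the VRAID 5 example above.
parity_groups = 5   # five 4+1 VRAID 5 sets on 300GB disks
per_group = 1       # single parity: survives one failure per set
spares = 2          # fault tolerance (protection level) disks

survivable = parity_groups * per_group + spares
print(f"survivable sequential failures: {survivable}")  # -> 7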
# ¿ Apr 2, 2010 04:41 |
|
quote != edit
|
# ¿ Apr 2, 2010 04:44 |
|
Intraveinous posted:OK, I'm finally caught back up. This is such a great thread in general, so thanks to everyone contributing so far. If you're worried about fault tolerance, you might want to go with an SB40c storage blade and 6 of the MDL SSDs in RAID 10. That would give you about 60k random read IOPS and ~15k write IOPS.
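The math behind those numbers, roughly (per-drive figures are my own assumptions, not HP's spec sheet values):
code:
drives = 6
read_iops_per_drive = 10_000   # assumed random read rating per MDL SSD
write_iops_per_drive = 5_000   # MLC drives write much slower than they read

array_reads = drives * read_iops_per_drive         # reads hit every drive
array_writes = drives * write_iops_per_drive // 2  # RAID 10 mirrors each write

print(f"~{array_reads:,} random read IOPS, ~{array_writes:,} write IOPS")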
|
# ¿ May 26, 2010 17:27 |
|
brent78 posted:About to pick up 40 TB of Compellent storage. I liked their solution the best out of Lefthand, Equallogic and Netapp. Anything I should be aware of before dropping this PO? You can't control how it does its disk platter tiering: it'll move data around the platters on its own, and you can't tell it what data to move or when to move it (tiering from primary to secondary storage is controllable, though). It can cause some performance issues.
|
# ¿ Jun 11, 2010 00:27 |
|
Nukelear v.2 posted:This thread needs a bump. If you decide to go with the LFF SAS FC option, an EVA4400 starter kit will work out to be almost the same price as an MSA2000FC (possibly cheaper), but offers higher availability, better expandability and better performance.
|
# ¿ Jun 24, 2010 14:20 |
|
Misogynist posted:I got to see what happens when an IBM SAN gets unplugged in the middle of production hours today, thanks to a bad controller and a SAN head design that really doesn't work well with narrow racks. If it's a DS4xxx unit, I would schedule a maintenance window so you can power it off and reboot it properly. DS units are touchy, and you might see some glitches down the road.
|
# ¿ Jun 30, 2010 16:47 |
|
You should see what happens when you turn one on in the wrong order. I hate IBM DS equipment with the fury of a thousand suns.
|
# ¿ Jun 30, 2010 18:53 |
|
Intraveinous posted:This was the way we ended up getting approval for. BL460c + SB40c with SSDs. Now that I'm getting down to actually buying things, I wondered about using something other than HP's MDL SSDs. Performance numbers for them aren't the greatest, and although I'll be dramatically increasing the performance no matter what, I can't help but worry about using midline drives with a 1 year warranty in a production box. For the price point on the HP 60GB MDL SSDs, I can get 100GB (28% overhead) "Enterprise" SSDs from other vendors. Examples would be the recently announced SandForce 1500 controller based offerings from OCZ, Super Talent, etc. The SF1500 allows MLC, eMLC, or SLC flash to be used, has a supercapacitor for flushing the write buffers in case of a power outage (these will be on UPS and generators, but still nice in case someone does something stupid), promises read/write rates up to near the limits of SATA 2, and comes with 3-5 year warranties, vs HP's puny 1 year. You could go with 6 Intel X25-E drives instead. They're still unsupported, but they have a 5 year warranty and use SLC flash. Also, they're rated for 35,000/3,300 read/write IOPS each. They might be older tech, but they're pretty reliable. On a side note, I've got a customer who's going to be stacking 10 FusionIO drives in a DL980 as soon as the server is released. I can't wait to run some benchmarks on that.
|
# ¿ Jul 23, 2010 19:13 |
|
shablamoid posted:They have 10 VMs setup on the root of the server, one of which is a medium to heavy load SQL server. They also have all of their users (100~) with roaming profiles and a couple of users who use GIS all day, which makes up the bulk of the data. Run a defrag task in each VM.
|
# ¿ Sep 3, 2010 23:04 |
|
If you can convince your company to, go with a disk based backup solution, something like a small Data Domain or HP D2D. Backup and restore speeds are way faster than tape, which means if you do have to rebuild a server, it'll be down for a lot less time. Later on, if the company wants, you can get a second unit off site and replicate the data between them. Tape will be cheaper, but slower and less reliable.
|
# ¿ Sep 8, 2010 21:58 |
|
Mausi posted:Given that remote access cards only support standard ethernet, I'm going to guess that he's running 2x 10GbE for data, and 1x 100Mb ethernet for the remote access. Any server HP sells that's larger than a DL380 has 4 10GbE links. I've got a few customers using all four, though probably not to saturation; it's more for fault tolerance. We generally present each 10 gig link to VMware as four 2.5 gig links.
|
# ¿ Sep 24, 2010 16:55 |
|
Nebulis01 posted:I'm curious, I've never worked with 10GE but do you run into queue depth issues like this? I'm assuming it would split the queue to appropriate levels? I haven't had any problems.
|
# ¿ Sep 29, 2010 14:50 |
|
three posted:The consultant sent us his formal write-up of his findings. Here are some snippets: Is there a time of the day or month when this DB has low I/O? If there is, you should try copying a very large file from the server to the storage. This will at least show you if it is actually the DB causing the issue or if it's a problem with the setup.
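Something as crude as this will do it (the path is hypothetical — point it at a filesystem on the array, and make the file big enough to blow through the controller cache):
code:
import os, time

path = "/mnt/san_volume/throughput_test.bin"  # hypothetical mount point
block = b"\0" * (1024 * 1024)   # 1 MiB per write
blocks = 4096                   # 4 GiB total, enough to get past cache

start = time.time()
with open(path, "wb") as f:
    for _ in range(blocks):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())        # make sure it actually hit the array
elapsed = time.time() - start

print(f"{blocks / elapsed:.0f} MiB/s sequential write")
os.remove(path)
If that number looks healthy, the array and fabric are probably fine and you can start looking at the DB's access pattern instead.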
|
# ¿ Oct 18, 2010 02:44 |
|
How many disks did you say were in this array? Also what size and speed are they?
Nomex fucked around with this message at 17:32 on Oct 18, 2010 |
# ¿ Oct 18, 2010 17:27 |
|
three posted:Sorry, didn't see this. 16 15k SAS disks, RAID 10 with 2 spares. Given the number of disks you have and the RAID level, 2100 IOPS would be about the maximum you would see. Obviously the cache is going to play into this, but for uncached data it looks like you're approaching the limit, at least on the far right side of the graph.
|
# ¿ Oct 18, 2010 23:32 |
|
Most 15k drives are good for 150-180 IOPS, so you'd be pretty close with 14 in RAID 10. I would definitely try adding more drives if that's an available option. Another question first, though: are you having performance issues with the database? If you're only pulling 1900 IOPS and you're good for 2100, there should be no performance issues.
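For anyone following along, the spindle math works like this (the 70/30 mix is an assumption — measure yours with perfmon):
code:
spindles = 14        # 16 disks minus 2 spares
per_disk = 150       # conservative figure for a 15k drive
read_pct = 0.70      # assumed read/write mix -- measure yours

backend = spindles * per_disk                     # 2100 back-end IOPS
host = backend / (read_pct + (1 - read_pct) * 2)  # RAID 10 doubles writes
print(f"back-end: {backend}, host-visible at 70/30: ~{host:.0f} IOPS")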
Nomex fucked around with this message at 18:49 on Oct 19, 2010 |
# ¿ Oct 19, 2010 01:27 |
|
skipdogg posted:It's not uncommon to get 40% off list or more. Especially if you hit them at the end of the quarter and are ready to buy now. Look at the HP X1400 and X1600 network storage appliances. They're built on the DL180 storage server platform and run Windows Storage Server. They'll wind up being quite a bit cheaper than a DL360 and a shelf, and you can get them up to 24TB raw in one 2U server.
|
# ¿ Oct 29, 2010 03:26 |
|
mrbucket posted:I've got an EMC CX3-10 in building A and building B. I've only done replication over fiber, but do you have to present the arrays to each other?
|
# ¿ Oct 29, 2010 21:32 |
|
As an HP vendor I'd be interested to know your reasoning behind that.
|
# ¿ Nov 19, 2010 21:56 |
|
I got a FusionIO ioDrive to play with, but I'm having some issues. VMware formats the drive with 512 byte sectors. I've made sure it starts at sector 128, so it should be write-aligned for 4k blocks in the VM; however, I'm getting absolutely terrible 4k random IO. Does anyone know if there's any way to format the drive with 4k blocks? Or does anyone have any suggestions?
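For reference, the alignment math I'm relying on — sector 128 at 512 bytes is a 64KiB offset, which divides evenly by 4k:
code:
sector_size = 512
start_lba = 128

offset = start_lba * sector_size   # 65536 bytes = 64 KiB
print(offset % 4096 == 0)          # True: the partition is 4k aligned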
|
# ¿ Dec 6, 2010 13:12 |
|
Shaocaholica posted:That's what I was thinking but I don't have any PCs right now I'd like to use for this task. Honestly, there's no point in using these drives. They'll be U320 at best, and a single SSD can saturate that bus. You could probably fill a whole shelf of them and still not get anywhere near the performance one 2.5" SSD can give you.
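The bus math (the SSD number is a ballpark, not any specific drive's spec):
code:
u320_bus = 320    # MB/s SCSI U320 bus, shared by every drive on it
ssd_sata3 = 500   # MB/s, ballpark for a single modern SATA-3 SSD

print(ssd_sata3 > u320_bus)   # True: one SSD outruns the whole shelf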
|
# ¿ Dec 17, 2010 01:39 |
|
szlevi posted:Words You should take a look at the 3PAR T series arrays. They were recently purchased by HP, so they would still fit into your single vendor requirement. You can mix SSD, FC and SATA disks, they can be equipped to do both FC and iSCSI, and they do autonomic storage tiering.
|
# ¿ Jan 2, 2011 01:55 |
|
three posted:We're using VMware View, and we're pretty happy with it. It definitely takes a decent amount of getting used to, learning how things should work, and training desktop support technicians to get comfortable supporting it. We did a pilot with the actual machines running off a 320GB FusionIO drive, with all the user data offloaded to an HP EVA. The virtual desktops would load at freakish speed. I would definitely recommend hosting the VMs on solid state storage if you're going the View route.
|
# ¿ Feb 1, 2011 14:47 |
|
devmd01 posted:Got my first toe in the water today with enterprise storage beyond raid arrays in servers, w000! Please tell me your boss ungrouped and ejected the disk properly before swapping it.
|
# ¿ Feb 8, 2011 21:41 |
|
To me it doesn't sound like the network team needs to take over the switches, it sounds like the SAN team needs to hire somebody competent.
|
# ¿ Feb 16, 2011 14:30 |
|
Most major vendors have sizing tools and are usually willing to help you purchase a right-sized solution for what you're doing. You can't really ballpark IOPS, as there are a ton of factors that will change the answer, including RAID level, block size, application, array features, etc.
|
# ¿ Feb 23, 2011 23:40 |
|
If you're sizing for a major app like Exchange or Oracle, the vendor will also be able to help you with your projections.
|
# ¿ Feb 24, 2011 03:53 |
|
Not really. You can bench domain services servers (file/print, DC, DNS, etc.) to get an idea of how much IO you need. For things like Exchange, SharePoint, Oracle, BES, etc., you can get some baselines from the vendors, or just bench them yourself. Perfmon is your friend.
|
# ¿ Feb 24, 2011 22:04 |
|
I just inherited an environment where they're about to get a FAS6210. One of the workloads will be 8 SQL servers, each needing a contiguous 4TB volume, so I'll need 32TB worth of volumes total. I'm wondering what the best practice would be for carving up my aggregates: should I just make 1 large aggregate per FAS, or would it be better to split them into smaller ones? This was my first week working with NetApp, so I'm not sure what would be recommended.
|
# ¿ Apr 14, 2011 22:24 |
|
Would you put multiple workloads in 1 aggregate to maximize the number of disks globally? Should I only have 2 aggregates total? If so, that makes my life a lot easier.
|
# ¿ Apr 16, 2011 05:20 |
|
That's correct. The iLO port is for management of the hardware only. Everything else goes through the two other ports.
|
# ¿ Jun 13, 2011 00:14 |
|
Posts Only Secrets posted:I have 2 fibre cards in the expansion slot already, along with a fibre switch. The adapter wouldn't work. The drives that MSA uses have hot plug SCSI connectors right on the drive boards, so you wouldn't be able to slide the drive cages in with an adapter installed. The only way you could really get any capacity out of it would be to get some 300 or 450GB hot plug SCSI drives. Even then you won't be breaking 4TB, and 1 modern SSD on a SATA-3 controller will eclipse its maximum performance.
|
# ¿ Jul 18, 2011 02:50 |