Nomex
Jul 17, 2002

Flame retarded.

oblomov posted:

Backing up or replicating (locally) large amounts of data: how do you guys do it? So, my new project will require me to back up/replicate/copy/whatever about 100TB of data to tertiary storage.

I will already be replicating to a remote DR system, but I also want to do a backup or replication job to local storage. I ruled out NetBackup with VTL or tapes, since that is really unmanageable with this much storage, and now I'm trying to figure out what else is out there. So far, the best option seems to be SAN vendor-based replication of data to a nearby, cheaper storage SAN.

So, with NetApp, for example, I could take the primary 3170 SAN cluster and then SnapMirror or SnapVault that to a NearPoint SAN (basically a 3140 or something). It would be similar with, say, EqualLogic from Dell, or EMC. Other than this sort of thing, which requires a bunch of overhead for snapshots, is there any sort of block-level streaming backup software that could be used (a la MS DPM 2007)?

I haven't kept up with EMC recently, but their Celerra stuff looks interesting. Is anyone here familiar with it?

I may be a little late with this. You should look into a data de-duplicating solution for the backup and tertiary storage. Check out Data Domain: their boxes can be optioned to mount as SMB, NFS, FC or iSCSI. I've had one to play with for a little while now. My 300GB test data set deduplicated down to 101GB on the first pass, and speed is pretty good too: 3GB/min over a single gigabit link. Since it just shows up as disk space, it's supported by pretty much every backup product you can think of.
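
If it helps anyone, the dedup ratio and backup-window math on those numbers is trivial; here's a quick sketch (purely illustrative, the figures are just from my test set, not vendor specs):

```python
# Back-of-the-envelope math on the figures above. All numbers come from my
# 300GB test set; treat them as illustrative, not as vendor specs.

test_set_gb = 300           # original data size
after_dedup_gb = 101        # size on the Data Domain after the first pass
throughput_gb_per_min = 3   # observed over a single gigabit link

dedup_ratio = test_set_gb / after_dedup_gb                    # roughly 3:1 first pass
first_backup_minutes = test_set_gb / throughput_gb_per_min    # ~100 minutes

print(f"dedup ratio: {dedup_ratio:.1f}:1")
print(f"first full backup: ~{first_backup_minutes:.0f} minutes")
```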

Nomex
Jul 17, 2002

Flame retarded.

ddavis posted:

I inherited a Dell AX100 that's a bit long in the tooth. It hosts our Exchange database, SQL database, file shares, and some MS .vhds.

I'm looking to replace this soon and go full tilt with a few VMware servers to virtualize our entire environment.
My question is: one of the key benefits of virtualization (at least from my perspective) is that it divorces the server from the underlying hardware. Servers become pretty generic and interchangeable. Why then isn't it common to follow this same line of thinking and run a SAN on commodity hardware? You'd get all the benefits of an enterprise SAN without the proprietary software. Then down the road when you need to replace something that's no longer supported, it's trivial. Or upgrading becomes much cheaper because you can easily embrace whatever disk interface gives the best performance/cost ratio. Does it mainly have to do with support contracts? Or am I missing some big-picture aspect?

Something like Openfiler, FreeNAS or even this article seems like a viable solution, since iSCSI and NAS are both supported in VMware.

When a lot of customers buy SAN hardware, they buy for long-term reliability, performance and cost, in that order. You're going to get 100x the support from EMC or HP when something fucks up, versus building it yourself. Big vendors will also guarantee their product will work with other major products at a certain SLA.

Nomex fucked around with this message at 05:35 on Dec 13, 2008

Nomex
Jul 17, 2002

Flame retarded.

Jadus posted:

In general, what are people doing to back up these large multi-TB systems?

Our company is currently looking at scanning most of the paper trail from the last 20 years and putting it on disk. We've already got a direct-attached MD3000 from Dell so we're not worried about storage space. However, backing up that data doesn't seem to be as easy.

If it's to tape, does LTO4 provide enough speed to complete a backup within a reasonable window? If it's backup to disk, what are you doing for offsite backups, and how can you push so much data within the same window?

I think I may be missing something obvious here, and if so proceed to call me all sorts of names, but I don't see an ideal solution.

To add on to what others have said: because you're going to be dealing with a large amount of rarely used files, you should look at archiving everything that hasn't been touched in, say, 3 months off to secondary storage. Since it's mostly unchanging, it doesn't need to be backed up as regularly as your production data. You can use a program like EMC DiskXtender to move data back and forth transparently between primary and secondary storage as well. As for the actual backup, if you don't want to use tape, a bunch of vendors offer de-duplicated disk-based backup solutions that are faster and more reliable than tape. For example, Data Domain makes a hardware appliance, and EMC has Avamar on the software side.

Nomex
Jul 17, 2002

Flame retarded.
Anyone here dealt with Compellent Storage Center equipment? I'm trying to find out what drawbacks they may have from people who've actually used the stuff.

Nomex
Jul 17, 2002

Flame retarded.
I'm just gonna put this out there for anyone looking for cheap SAN stuff:

You can get an HP Enterprise Virtual Array 4400 dual controller with 12 x 400GB 10k FC drives and 5TB of licensing for less than $12k. The part number is AJ813A. Need more space? Order a second one and use just the shelf, then keep a spare set of controllers. You can get ~38TB for less than $96k this way. The only things you need to add are SFPs and switches.
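
The ~38TB figure is just raw math on stacking those kits; here's a quick sketch of the arithmetic (raw capacity before VRAID overhead and sparing, using the prices quoted above as an upper bound):

```python
# Raw math behind the "~38TB for less than $96k" figure above. Raw capacity
# only; VRAID overhead and sparing will cut into the usable number.

kit_price = 12_000      # AJ813A dual-controller kit, as quoted above (upper bound)
drives_per_kit = 12
drive_size_gb = 400     # 10k FC drives

kits = 8
raw_tb = kits * drives_per_kit * drive_size_gb / 1000   # 38.4 TB raw
total_cost = kits * kit_price                           # just under $96,000

print(f"{kits} kits: {raw_tb:.1f}TB raw for under ${total_cost:,}")
```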

Nomex fucked around with this message at 18:20 on Mar 5, 2009

Nomex
Jul 17, 2002

Flame retarded.

zapateria posted:

Our company has two office locations and we're planning to use the second as a disaster recovery location.

Our primary location has the following gear:

4 HP BL460c G1 blades running ESX3.5
6 HP BL460c G1 blades running Win2003 with Oracle/MSSQL
1 HP EVA4400 with about 17TB storage, 15TB in use

We're probably looking at 1 or 2 days of acceptable downtime before we have things up and running at the secondary location, so for the physical servers we'll just order new hardware and restore backups in case of a disaster.

First step is to set up one or two ESXi hosts with a storage system and transfer backups of our VMs from the primary location. We have a gigabit WAN link between the locations.

What kind of storage system would be suitable as a cheap and offline kind of solution at the secondary location to take over maybe 25 VMs with stuff like domain controllers, print servers, etc.?

Ideally, I would say get a second EVA 4400 with enough FATA disks to cover your storage needs, then get two Brocade 7500 SAN extension switches. You can then pick up a Continuous Access EVA license and enable asynchronous replication between your primary and DR sites. There's no downtime cost: you can plug in all the required equipment and configure it while live. It won't be the cheapest option, unfortunately, but it will be the best.

Also, you probably won't have to size the DR EVA as large as your primary storage, since a lot of your primary disk is probably carved into RAID 10. You can set all the DR storage to RAID 5 and trade performance for space.
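
To put rough numbers on that last point, here's an illustrative sketch of the spindle-count difference; the disk size and usable target below are made-up example figures, not sizing advice:

```python
# Illustrative only: why a RAID 5 DR copy needs fewer spindles than a
# RAID 10 production copy for the same usable capacity. Disk size and
# usable target below are made-up example numbers.

usable_tb_needed = 15    # roughly what the primary EVA has in use
disk_tb = 0.45           # hypothetical 450GB drives, for illustration

# RAID 10 keeps half the raw space; a 4+1-style RAID 5 keeps about 80%.
disks_raid10 = usable_tb_needed / (disk_tb * 0.5)
disks_raid5 = usable_tb_needed / (disk_tb * 0.8)

print(f"RAID 10: ~{disks_raid10:.0f} disks, RAID 5: ~{disks_raid5:.0f} disks")
```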

Nomex fucked around with this message at 00:37 on Jan 10, 2010

Nomex
Jul 17, 2002

Flame retarded.

adorai posted:

Our secondary site has enough storage to hold all of the data, and just enough performance for our critical apps to keep running, so we have all SATA disk on a single controller at our DR site. We are comfortable with letting our non-critical apps be down so long as the data is intact.

In that case, you can get an EVA starter kit for pretty cheap. Call your HP rep and have him quote you on model #AJ700B. That's the part number for the 4TB (10 x 400GB 10k FC) model. If that's not a good fit, there are a few more options here. The starter kits tend to be a lot cheaper than just buying an EVA.

I forgot to mention, if you do decide to go this route, DO NOT under any circumstances let anyone talk you into using HP MPX110 IP distance gateways. They're complete poo poo.

Nomex fucked around with this message at 06:35 on Jan 10, 2010

Nomex
Jul 17, 2002

Flame retarded.

Insane Clown Pussy posted:

We were sold an underperforming Lefthand system that was discontinued within a few weeks of the purchase. I get the feeling they were offloading old stock. There's nothing particularly wrong with them that I've noticed, but everything about dealing with them was like pulling teeth. Well, I shouldn't say everything: dealing with their support was a pleasure, soured only by the number of times we had to contact them. One module was DOA and its replacement died within a month or two, but since then they've been fine and given us little trouble.

This was when they were still Lefthand. I called their support one time a couple months after they became HP and nobody had a clue what the gently caress was going on - much like trying to deal with AT&T. I don't know what they're like these days.

They had a lot of problems during the transition; however, LeftHand equipment is now built on HP ProLiant gear, and the reliability and support are a lot better now that the transition is done.

FISHMANPET posted:

I was wondering if anybody could provide a useful link on SAS expanders? I've seen all sorts of SAS cards that say they support 100+ drives, but I don't understand how they do it. Google isn't helpful for once, and I'm just really curious.

Some manufacturers make internal cards too:

http://www.amazon.com/Hewlett-Packard-468406-B21-Sas-Expander-Card/dp/B0025ZQ16K

Nomex
Jul 17, 2002

Flame retarded.

Zerotheos posted:

I'm not familiar with this, so I thought I'd briefly read up on it. I understand what you're saying, but 3PAR themselves seem to disagree that single parity (regardless of wide striping) is good enough with growing drive capacities. This doesn't sound like something they're doing just to appease dumb customers. It mentions their system was still vulnerable to double disk failures, and I don't think I'd feel better about that just because it rebuilds faster than a normal RAID 5 array.

I can only speak for HP, but I'm sure 3PAR and others are similar. On an HP EVA, disks are divided into 8-disk parity groups, and the disk group also has fault-tolerance disks. In a VRAID 5 configuration, you can lose one disk per parity group, plus however many disks you have set aside for fault tolerance. So say you have a 1.2TB usable VRAID 5 set on 300GB disks (equal in size to a 4+1 RAID 5), with 5 parity groups and 2 protection drives: you can lose 5 disks (as long as it's one per group) plus an additional 2 disks from anywhere before you lose data. This means your chances of losing data on a properly configured EVA are extremely remote.
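
As a worked version of that failure count (my numbers, mirroring the example above; illustrative, not a statement of how the EVA firmware actually lays data out):

```python
# Worked version of the failure-tolerance example above (EVA-style VRAID 5).
# Numbers mirror the scenario in the post; this is illustrative only.

parity_groups = 5            # RAID 5 parity groups in the disk group
losses_per_group = 1         # RAID 5 survives one failure per group
protection_disks = 2         # fault-tolerance / protection capacity

max_failures = parity_groups * losses_per_group + protection_disks
print(f"survivable failures (one per group, plus protection): {max_failures}")
# -> 7 disks, provided no parity group loses two members before rebuild
```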

EoRaptor posted:

The difference in RAW space isn't actually 'real'. With HP, I lose half of that space right off (network mirror), and I can then sacrifice more with different raid levels if I want. They have the same software features (snaps for anything that supports VSS, pretty much), and I just need to figure out the backup support. The products are really close.

For backup support you just need to present the LUNs as read only to the backup server, then use whatever backup type and product you want.

Nomex fucked around with this message at 04:45 on Apr 2, 2010

Nomex
Jul 17, 2002

Flame retarded.
quote != edit

Nomex
Jul 17, 2002

Flame retarded.

Intraveinous posted:

OK, I'm finally caught back up. This is such a great thread in general, so thanks to everyone contributing so far.

My question is if anyone has any experience with the FusionIO line of enterprise PCI-Express SSDs, AKA HP StorageWorks I/O Accelerator Mezzanine cards for the C-Series blades. I believe IBM OEMs their standard form factor PCIe cards as well, but I don't know what they call them.

Basically, I have a fairly small (~30GB) index db sitting in front of a much larger Oracle RAC db. This index handles the majority of queries from a website that gets about 15 million hits a month, and only when a user drills down into a query does it fetch from the main RAC db.

The index is running right now on a fairly ancient (we're about to have a tenth birthday party for it) IBM RS/6000 box and an SSA-attached 6 x 9GB disk (4+1, 1 hot spare) RAID 5 array that was set up long before I was around. It sits at 100-105% utilization in topas 24x7, pulling between 700 and 1000 IOPS of 99% random small reads.

AFAIK, nothing says I can't replace this box with non-IBM Power hardware, so I'm thinking about dumping it on a BL460/465c blade (CPU licensing costs will likely skew things in Intel's favor, since I should still be able to get a dual-core 550x CPU) with one of the 80GB SSDs. FusionIO and HP have been claiming north of 100K IOPS and 600-800MB/sec read rates from this kit.

I'm sure once I eliminate the disk I/O bottleneck, I'll find another, but this seems like the perfect use for the part. Considering that I was looking at 5-10x more money, wasted space (both disk and rack unit), plus a bunch of extra power to short stroke an array to get even 3-5K IOPS, I'm having a hard time finding a fault, even if I only get 25% of advertised performance.

My one big worry would be fault tolerance. The data is pretty static, generated at timed intervals by a script from the larger database, so I'm not worried about the data loss as much as the downtime if it fails. A half-height blade would (in theory) let me put two of them in (if I didn't need any other expansion at all) and do a software mirror, but am I being stupid? I'm not going to be able to hot-swap a mezzanine card no matter what I do.

I'd have another blade at our DR site that could be failed over to in that case, but if I can avoid that problem as much as possible, that would be ideal.

So anyway, please tell me I've found exactly the right thing for this job, or that I'm an idiot. Although please, if it's the latter, tell me why and suggest something else to look into.

If you're worried about fault tolerance, you might want to go with an SB40c storage blade and 6 of the MDL SSDs in RAID 10. That would give you about 60k random read IOPS and ~15k writes.

Nomex
Jul 17, 2002

Flame retarded.

brent78 posted:

About to pick up 40 TB of Compellent storage. I liked their solution the best out of Lefthand, Equallogic and Netapp. Anything I should be aware of before dropping this PO?

You can't control how it does its disk platter tiering: it moves data around the platters on its own, and you can't tell it what data to move or when to move it (tiering from primary to secondary storage is controllable, though). That can cause some performance issues.

Nomex
Jul 17, 2002

Flame retarded.

Nukelear v.2 posted:

This thread needs a bump.

Building up a smallish HA MSSQL cluster, and my old cheap standby MD3000 is definitely looking long in the tooth. So I'm going back to my first love, the HP MSA, and I must say the P2000 G3 MSA looks very tempting. Anyone use either the FC or SAS variants of this and have any opinions on it? I've also been reading that small form factor drives are the 'wave of the future' for enterprise storage; logically it seems better, but I haven't really heard too much about it, so I'm also trying to decide whether the SFF variant is the better choice.

If you decide to go with the LFF SAS FC option, an EVA4400 starter kit will work out to be almost the same price as an MSA2000FC (possibly cheaper), but offers higher availability, better expandability and better performance.

Nomex
Jul 17, 2002

Flame retarded.

Misogynist posted:

I got to see what happens when an IBM SAN gets unplugged in the middle of production hours today, thanks to a bad controller and a SAN head design that really doesn't work well with narrow racks.

(Nothing, if you plug it right back in. It's battery-backed and completely skips the re-initialization process. Because of this incidental behavior, I still have a job.)

If it's a DS4xxx unit I would schedule a maintenance window so you can power it off and reboot it properly. DS units are touchy, and you might see some glitches down the road.

Nomex
Jul 17, 2002

Flame retarded.
You should see what happens when you turn one on in the wrong order. I hate IBM DS equipment with the fury of a thousand suns.

Nomex
Jul 17, 2002

Flame retarded.

Intraveinous posted:

This is the approach we ended up getting approval for: BL460c + SB40c with SSDs. Now that I'm getting down to actually buying things, I wondered about using something other than HP's MDL SSDs. Performance numbers for them aren't the greatest, and although I'll be dramatically increasing performance no matter what, I can't help but worry about using midline drives with a 1-year warranty in a production box. For the price point of the HP 60GB MDL SSDs, I can get 100GB (28% overhead) "enterprise" SSDs from other vendors. Examples would be the recently announced SandForce 1500 controller-based offerings from OCZ, Super Talent, etc. The SF1500 allows MLC, eMLC, or SLC flash to be used, has a supercapacitor for flushing the write buffers in case of a power outage (these will be on UPS and generators, but it's still nice in case someone does something stupid), promises read/write rates up to near the limits of SATA 2, and comes with 3-5 year warranties, vs. HP's puny 1 year.

Being such new stuff, I'm a little hesitant to put prod on Sandforce, and in an "unsupported" configuration, but I'm also hesitant to spend the money on HP's drives which aren't rated for high end workloads, have a shorter warranty, and are slower all around.

HP is supposed to be releasing their "Third Generation Enterprise SSDs" some time in the next few months, but I can't really wait around any longer, as the performance problems are getting more and more common on the current kit.

TL;DR version:
For an array of 6x SSDs in the storage blade, do I stick with the supported but slower, lower-rated midline SATA HP drives, or go balls-out, bleeding edge with unsupported SandForce 1500-based enterprise SSDs like the OCZ Deneva Reliability/Vertex 2 EX or Super Talent TeraDrive FT2, for about the same cost?

You could go with 6 Intel X25-E drives instead. They're still unsupported, but they have a 5-year warranty and use SLC flash. Also, they're rated for 35,000/3,300 read/write IOPS each. They might be older tech, but they're pretty reliable.

On a side note, I've got a customer who's going to be stacking 10 Fusion IO drives in a DL980 as soon as the server is released. I can't wait to run some benchmarks on that.

Nomex
Jul 17, 2002

Flame retarded.

shablamoid posted:

They have 10 VMs set up on the root of the server, one of which is a medium-to-heavy load SQL server. They also have all of their users (~100) on roaming profiles, and a couple of users who use GIS all day, which makes up the bulk of the data.

Run a defrag task in each VM.

Nomex
Jul 17, 2002

Flame retarded.
If you can, convince your company to go with a disk-based backup solution, something like a small Data Domain or HP D2D. Backup and rebuild speeds are way faster than tape, which means that if you do have to rebuild a server, it'll be down for a lot less time. Later on, if the company wants, you can get a second unit off-site and replicate the data between them. Tape will be cheaper, but slower and less reliable.

Nomex
Jul 17, 2002

Flame retarded.

Mausi posted:

Given that remote access cards only support standard ethernet, I'm going to guess that he's running 2x 10GbE for data, and 1x 100Mb ethernet for the remote access.

I've not seen anyone plan for more than 2x 10GbE into a single server.

Tell you what tho, 10GbE upsets the VMware health check utility - starts complaining that all your services are on a single switch.

Any server HP sells that's larger than a DL380 has four 10GbE links. I've got a few customers using all four, though probably not to saturation; it's more for fault tolerance. We generally present each 10-gig link to VMware as four 2.5-gig links.

Nomex
Jul 17, 2002

Flame retarded.

Nebulis01 posted:

I'm curious: I've never worked with 10GbE, but do you run into queue depth issues like this? I'm assuming it would split the queue to appropriate levels?

I haven't had any problems.

Nomex
Jul 17, 2002

Flame retarded.

three posted:

The consultant sent us his formal write-up of his findings. Here are some snippets:



Thoughts? We're thinking about sending this to our Dell rep to have them refute the iSCSI complaints as well.

Is there a time of day or month when this DB has low I/O? If there is, you should try copying a very large file from the server to the storage. That will at least show you whether it's actually the DB causing the issue or a problem with the setup.

Nomex
Jul 17, 2002

Flame retarded.
How many disks did you say were in this array? Also what size and speed are they?

Nomex fucked around with this message at 17:32 on Oct 18, 2010

Nomex
Jul 17, 2002

Flame retarded.

three posted:

Sorry, didn't see this. 16 15k-SAS disks, Raid-10 with 2 spares.



Given the number of disks you have and the RAID level, 2100 IOPS would be about the maximum you would see. Obviously the cache is going to play into this, but for uncached data it looks like you're approaching the limit, at least on the far right side of the graph.

Nomex
Jul 17, 2002

Flame retarded.
Most 15k drives are good for 150-180 IOPS, so you'd be pretty close to that ceiling with 14 in RAID 10. I would definitely try adding more drives if that's an available option. Another question first, though: are you actually having performance issues with the database? If you're only pulling 1900 IOPS and the array is good for about 2100, there shouldn't be a problem.
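
For anyone following along, the ~2100 figure is just per-spindle math; a quick sketch with rule-of-thumb numbers (planning figures, not measurements):

```python
# Rule-of-thumb math behind the ~2100 IOPS ceiling above.
# Per-spindle IOPS for 15k drives is a rough planning number, not a spec.

total_disks = 16
hot_spares = 2
data_disks = total_disks - hot_spares      # 14 spindles actually serving IO
iops_per_15k_disk = 150                    # typical planning range: 150-180

# For reads, RAID 10 lets every spindle serve IO; random writes hit two disks.
read_iops = data_disks * iops_per_15k_disk            # ~2100
write_iops = data_disks * iops_per_15k_disk // 2      # ~1050 for pure writes

print(f"random read ceiling:  ~{read_iops} IOPS")
print(f"random write ceiling: ~{write_iops} IOPS")
```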

Nomex fucked around with this message at 18:49 on Oct 19, 2010

Nomex
Jul 17, 2002

Flame retarded.

skipdogg posted:

It's not uncommon to get 40% off list or more. Especially if you hit them at the end of the quarter and are ready to buy now.

I've got a bit of a conundrum... A satellite office of engineers needs some major storage. They have ~960GB right now and use every bit of it. I'm looking at getting them about 4TB or so.

The status quo in my company would be to load up a DL380G7 with 16 * 600GB 10K drives. This is extremely expensive and it's really pointless to use that size server for what's basically a storage box.

We're an HP shop, so ordering Dell kit is unlikely. I've been thinking a smaller DL360 and attached MSA60 or 70. What are some other options I should be considering?

Look at the HP X1400 and X1600 network storage appliances. They're built on the DL180 storage server platform and run Windows Storage Server. They'll wind up being quite a bit cheaper than a DL360 plus a shelf, and you can get up to 24TB raw in one 2U server.

Nomex
Jul 17, 2002

Flame retarded.

mrbucket posted:

I've got an EMC CX3-10 in building A and building B.

I bought and paid for MirrorView/S. A tech came by and installed MirrorView/A as well, since I couldn't get /S to work and figured it was a latency issue.

I can't get them to replicate for the life of me. Both are on the same VLAN at the same campus facility. The EMC tech says "welp, it's your network" and sends me off. Several times.

I've got a 4gbps fiber link between the buildings. Gig-e to the ports dedicated to mirrorview. Ping works fine.

:confused:

I've only done replication over fiber, but do you have to present the arrays to each other?

Nomex
Jul 17, 2002

Flame retarded.
As an HP vendor I'd be interested to know your reasoning behind that.

Nomex
Jul 17, 2002

Flame retarded.
I got a FusionIO IODrive to play with, but I'm having some issues. VMware formats the drive with 512-byte sectors. I've made sure the partition starts at sector 128, so it should be write-aligned for 4k blocks in the VM; however, I'm getting absolutely terrible 4k random IO. Have a look:



Does anyone know if there's any way to format the drive with 4k blocks? Or does anyone have any suggestions?
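
For reference, the alignment arithmetic I'm relying on is just this (a quick sanity-check sketch for the 512-byte-sector, sector-128 layout described above):

```python
# Quick sanity check of the alignment described above: 512-byte sectors,
# partition starting at sector 128, guest doing 4k IO.

sector_bytes = 512
start_sector = 128
io_block_bytes = 4096

offset_bytes = start_sector * sector_bytes      # 65536
aligned = offset_bytes % io_block_bytes == 0

print(f"partition starts at byte {offset_bytes}, 4k aligned: {aligned}")
# True here, so the starting offset shouldn't be what's killing 4k random IO.
```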

Nomex
Jul 17, 2002

Flame retarded.

Shaocaholica posted:

That's what I was thinking, but I don't have any PCs right now I'd like to use for this task.

Mac Mini - nope
Shuttle - nope
HTPC - guh, I could but I'd rather not dig it out from its cave
Various laptops - nope

Are there any old workstations I could buy that come with a hot-swap 80-pin backplane type thing? I've found some old Dell workstations with SCSI onboard for around the same price as actually getting a SCSI card, which is better for me since I don't really have a machine to put the card into.

Honestly, there's no point in using those drives. They'll be U320 at best, and a single SSD can saturate that bus. You could probably fill a whole shelf with them and still not come close to the performance one 2.5" drive can get you.

Nomex
Jul 17, 2002

Flame retarded.

szlevi posted:

Words

You should take a look at the 3PAR T-series arrays. 3PAR was recently purchased by HP, so they'd still fit your single-vendor requirement. You can mix SSD, FC and SATA disks, they can be equipped to do both FC and iSCSI, and they do autonomic storage tiering.

Nomex
Jul 17, 2002

Flame retarded.

three posted:

We're using VMware View, and we're pretty happy with it. It definitely takes a decent amount of getting used to and learning how things should work, and training desktop support technicians in getting comfortable with supporting it.

As far as the technology goes, PCoIP isn't quite as good as HDX (XenDesktop), but it has a lot of potential... and everything else about View is much better than XenDesktop (which has a very cobbled together feel), in my opinion.

Our users are much happier with their virtual desktops, as well. Every single person we migrated over during our pilot preferred their experience with VDI over their physical desktop (which was, to be fair, slightly older), and none wanted to be moved back to physical.

To relate this back to storage: storage is the #1 bottleneck people run into with VDI. We've been using Equallogic units, and we plan to add more units to our group to increase IOPS/Capacity as needed. (Currently our users are on the same SAN(s) as our server virtualization, and this is why I want to move them to their own group.)

We did a pilot with the actual machines running off a FusionIO 320GB drive, with all the user data offloaded to an HP EVA. The virtual desktops would load at freakish speed. I would definitely recommend hosting the VMs on solid state storage if you're going the View route.

Nomex
Jul 17, 2002

Flame retarded.

devmd01 posted:

Got my first toe in the water today with enterprise storage beyond raid arrays in servers, w000!

Boss: "We had a failed drive on the san, I already swapped in our cold spare, I need you to call it in. Here's how you log in to CommandView EVA, and let me give you a 2 minute tutorial on what buttons NOT to press. Have fun!"

:v:

Please tell me your boss ungrouped and ejected the disk properly before swapping it.

Nomex
Jul 17, 2002

Flame retarded.
To me it doesn't sound like the network team needs to take over the switches, it sounds like the SAN team needs to hire somebody competent.

Nomex
Jul 17, 2002

Flame retarded.
Most major vendors have sizing tools and are usually willing to help you buy a right-sized solution for what you're doing. You can't really ballpark IOPS, as there are a ton of factors that will change the answer, including RAID level, block size, application, array features, etc.
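
As one example of why a single ballpark number doesn't work: the same front-end workload turns into very different back-end IOPS depending on RAID level. A quick sketch using the standard write-penalty rule of thumb (the workload numbers here are made up):

```python
# Illustrative example of why RAID level alone changes the answer.
# Uses the standard write-penalty rule of thumb; workload numbers are made up.

frontend_iops = 2000
read_fraction = 0.7
write_penalty = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

reads = frontend_iops * read_fraction
writes = frontend_iops * (1 - read_fraction)

for level, penalty in write_penalty.items():
    backend = reads + writes * penalty
    print(f"{level}: ~{backend:.0f} back-end IOPS for the same workload")
```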

Nomex
Jul 17, 2002

Flame retarded.
If you're sizing for a major app like Exchange or Oracle, the vendor will also be able to help you with your projections.

Nomex
Jul 17, 2002

Flame retarded.
Not really. You can bench domain services servers (file/print, DC, DNS, etc.) to get an idea of how much IO you need. For things like Exchange, SharePoint, Oracle, BES, etc., you can get some baselines from the vendors, or just bench them yourself. Perfmon is your friend.

Nomex
Jul 17, 2002

Flame retarded.
I just inherited an environment where they're about to get a FAS6210. One of the workloads will be 8 SQL servers, each needing a contiguous 4TB volume, so I'll need 32TB worth of volumes total. I'm wondering what the best practice would be for carving up my aggregates. Should I just make one large aggregate per FAS, or would it be better to split them into smaller ones? This was my first week working with NetApp, so I'm not sure what would be recommended.

Nomex
Jul 17, 2002

Flame retarded.
Would you put multiple workloads in one aggregate to maximize the number of disks globally? Should I only have two aggregates total? If so, that makes my life a lot easier.

Nomex
Jul 17, 2002

Flame retarded.
That's correct. The iLO port is for management of the hardware only. Everything else goes through the two other ports.

Nomex
Jul 17, 2002

Flame retarded.

Posts Only Secrets posted:

I have 2 fibre cards in the expansion slot already, along with a fibre switch.

That's why I'm wondering about using the adapter; I have practically everything needed to get this running.


Edit: I'm running out for the night, but I'll post pics of the hardware when I get in.

The adapter wouldn't work. The drives the MSA uses have hot-plug SCSI connections right on the drive system board, so you wouldn't be able to slide the drive cages in with an adapter installed. The only way you could really use it and get any serious capacity would be to get some 300 or 450GB hot-plug SCSI drives. Even then you won't be breaking 4TB, and one modern SSD on a SATA 3 controller will eclipse its maximum performance.
