Rhymenoserous
May 23, 2008

mkosmo posted:

Hey Storage Gurus, I have a question for you if you will permit:

At work we have an EMC Clariion (CX3-80) now replacing our old Intransa and a couple Isilon units (which have performed very poorly for our I/O needs).

EMC promised us >=350 megabytes per second per datamover and we're not really seeing that, despite working with their engineers for months. It also appears there's no jumbo frame support on the 10GbE interfaces, which could be hurting our performance. In addition, getting CIFS and NFS working cooperatively on one file system proved to be a hassle.

Any idea what's up with that? What other issues have you seen with EMC, performance-related or otherwise?

Can you give us an idea of the environment you're working in? Give as much detail as you can: NAS or SAN, and so on.

Rhymenoserous
May 23, 2008

Nukelear v.2 posted:

Doing a new SMB VMware deployment; thinking the NetApp FAS2240-2 looks like my best bet in this space. Any other opinions on this?

We will have:
4-6 physical hosts running about 50 VMs, not heavy IO. ~1.5TB non-deduped
One physical SQL Server 2008 database cluster, heavy-ish IO. 2TB currently

Dedupe is the only fancy feature I'm really interested in. Reliability of the unit is paramount.


Price point I'm looking at is low-end san territory, $20k or so to start plus reasonable expansion costs.

EMC VNXe: dedupe doesn't appear as good, and this thread basically turned into a poo poo-on-EMC-support factory, so that's a turnoff.

EqualLogic didn't seem particularly compelling; no 10GbE or 8Gb FC options in this price range.


NFS still seems to be the nicest way to present deduped volumes to VMware, but I guess VAAI is going to change this, so it won't be as much of an issue if the array supports that. Can't think of a compelling reason to pick iSCSI over NFS (or vice versa) besides that.

I'm assuming I could direct-connect my database pair via the NetApp's 4 x SAS ports, or are those reserved for disk shelf expansion?
Also, I don't currently work with any NetApp resellers. Is it worth finding one, or should I just go straight to NetApp? Who's going to have better pricing?

Look at Nimble. It's a bit over your price range, but I'm pushing about 50% block-level dedupe (they keep referring to it as compression, which it technically isn't) on my SQL DBs, and my VMFS operating-system datastore is barely using any of the space I allocated it, to the point where I'm about to create a new datastore, migrate, and reclaim some storage.

I've just started working with this thing and compared to the frustrations of EMC and Unisphere I'm having a blast.

Rhymenoserous
May 23, 2008

optikalus posted:

In EMC's defense, it was only the initial install guys that were garbage. We had some great techs come after the install to assist with new shelf installations, flare upgrades, etc.

However, they should be sending top guys to make sure the system gets up and running and is installed properly and professionally. First impressions are the most important. Wasting 60+ hours of the client's time for a simple installation should never happen.

The initial install team always belongs to the sales guy who sold you the crap, and they're rarely qualified CEs. I've found that when EMC sends a team out for big projects, they generally do it in fours. One guy to be the salesman. One guy to do the work. Two guys to discuss where they're going for lunch.

Rhymenoserous
May 23, 2008

Misogynist posted:

It's perfectly fine as long as you're loading the rack with the heaviest stuff on the bottom. Unless you're stuffing towers onto a shelf, sliding rails are the only reason to even have four posts.

[ASK] Me about the guy who came before me putting UPS + Batteries in the middle of the rack (And stacked on top of the rack... and on the floor....).

I keep expecting to push up a ceiling tile to find a UPS hanging from a wire harness up there.

Rhymenoserous
May 23, 2008

Hyrax posted:


EqualLogic wanted to blame the NIC config on some of our ESX hosts (a pair of teamed 10G NICs with all traffic VLAN'd out)

Honestly it would take less time to pull the team, set up a single iSCSI network on one of the 10G lines and throw the continuing errors in their face than it took for me to write this post (almost). I'd do that during downtime.

Rhymenoserous
May 23, 2008

madsushi posted:

Inflating your dedupe ratio by stacking only OS drives into one volume is bad for your overall dedupe amount. You get the BEST dedupe results (total number of GBs saved) by stacking as MUCH data into a single volume as possible. The ideal design would be a single, huge volume with all of your data in it with dedupe on.

EFB

Several times...

This is also very dependent on the device. There I contributed.
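
To put some made-up numbers behind madsushi's point (Python, purely illustrative): a small OS-only volume can show a prettier ratio while saving fewer actual GB than one big mixed volume.

code:
# Hypothetical volumes, just to show dedupe ratio vs. total GB saved.
volumes = {
    "os_only":   {"logical_gb": 500,  "physical_gb": 100},   # 5.0x, 400 GB saved
    "big_mixed": {"logical_gb": 4000, "physical_gb": 2800},  # 1.4x, 1200 GB saved
}

for name, v in volumes.items():
    ratio = v["logical_gb"] / v["physical_gb"]
    saved = v["logical_gb"] - v["physical_gb"]
    print(f"{name}: {ratio:.1f}x dedupe ratio, {saved} GB actually saved")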

Rhymenoserous
May 23, 2008
Yeah there is, but odds are he's either pushing retirement age or considering hanging himself.

Rhymenoserous
May 23, 2008

adorai posted:

Take a look at Nimble.

Love my nimble.

Rhymenoserous
May 23, 2008

NippleFloss posted:

You probably get that answer because it's not a common thing to do: the benefits of active/active for NAS aren't that great. Any single export or share is only going to be accessible from a single physical host, just as it would be in an active/passive config. Yes, you can balance all the exports better if you have active/active, but if your intent is to run both hosts at above 50% utilization, then when you do have a failure event you're going to be hosed.

That said, CentOS would be my choice if I was intent on doing something like that.

[ASK] Me about working for a webhosting company that ran both datamovers on an EMC box up to 80% utilization... and what happened when one of them failed.

Bonus: The plastic pull tabs that allow you to remove the datamover from the chassis were so brittle they snapped. Cue another two hours of downtime while the CE tried to pry it out with a screwdriver.

So glad I don't work there anymore.

Rhymenoserous
May 23, 2008

Aniki posted:

Edit: I am still going to look into Nimble and some other options, but it seems like it is going to be hard to beat NetApp's management software.

Buy a Nimble, seriously. I've worked with EMC, Dell, and NetApp. They all have their pros and cons, but when it comes to ease of provisioning and managing backup groups, dedupe, disk quotas, and initiator groups, the Nimble is a hell of a lot less annoying to actually use.

Also I'm getting some really good dedupe here, and the performance has been absolutely wild (I'm doing iSCSI over 10GbE).

Rhymenoserous
May 23, 2008

Aniki posted:

I'm looking at their white paper right now. I'll contact them later today and at the very least, I can use them to compare to NetApp. The bulk of our data won't be subject to Dedupe, since it will be archived calls, which are already compressed, but I know that becomes a lot more important with getting rid of the overhead from running VMs. Do you have a broad idea of how Nimble compares to NetApp in pricing, is it typically cheaper, about the same, or more expensive?

What do you consider to be the cons of NetApp and were those cons before or after they changed their management software in November?

I predate the change. The last time I worked on a NetApp was six years ago, give or take, and the last thing I did was trade the thing in for an EMC. The box we had suffered from some pretty serious I/O issues.

I'm sure their culture has changed, but I had a hell of a time getting support out of those guys. Meanwhile Nimble actually drove an engineer down and, step by step, we integrated my entire virtual environment into the new storage array. No reading a whitepaper, no reading best practices: boots on the ground giving a hand.

For dedupe on virtual machines I'm using about 2TB of *provisioned* storage for OS VMDKs that is compressing down to about... 90G, give or take.

For my stuff that's not easy to dedupe (Fileserv) I'm getting about 1.31x.

If you do anything with SQL, it dedupes by a considerable amount there as well.

Rhymenoserous
May 23, 2008

complex posted:

No, I meant CS240. Instead of "usable" I should have said "effective". http://www.nimblestorage.com/products/nimble-cs-series-family/

I have a CS240 and the compression on VMDKs exceeds the estimated 50% compression. Your compression ratio may be different.

I have a CS220 and the compression on VMDKs far, far, far exceeds 50%.

EDIT: Put succinctly, when I was shopping for this thing I sized it out to pretty much what we needed, with about 2TB of "growing space" and the assumption that compression might not work at all. Now I'm trying to find more data to huck on this thing because I've used about a quarter of what I was expecting to use.

Rhymenoserous fucked around with this message at 22:05 on Mar 22, 2012

Rhymenoserous
May 23, 2008

FISHMANPET posted:

I just spent most of the day trying to setup a Microsoft Failover cluster using our Sun x4500 as the iSCSI initiator, only to discover that Solaris 10 (at least not update 8) doesn't support SCSI-3 commands over iSCSI, mainly persistent reservations.

It looks like it's available in Solaris 11, but does anyone know if it made it into a later Solaris 10 update?

I understand it doesn't like MPIO either (I could be wrong).

Rhymenoserous
May 23, 2008

Internet Explorer posted:

Storage and Virtualization are both incredibly complex topics, especially when you start talking about different vendors and different products. I wish I was half as knowledgeable as some of the people in this thread or the Virtualization thread. I would love to work on this stuff all day.

This is pretty much my goal. I'm on my third major virtualization project in the last 6 years. Every company I go to shall virtualize. Every.

Rhymenoserous
May 23, 2008

adorai posted:

Sure it is. A lot of our servers have an M: drive, which is just a connected LUN used for mountpoints and random file storage. I'd like to snapshot this M: drive every day and never have to delete a snapshot. But I can only do that for 8-9 months. And there is no real reason the filesystem shouldn't support it, other than someone decided to use one type of integer in the code rather than a larger one. And it's not a big enough deal to matter when it comes to purchasing, but it causes me a bit of frustration from time to time, and that does count for something.

Snapshot storage efficiency being what it is, there's no reason not to change daily snapshots to hourly, or even twice-daily if you're not feeling froggy, unless your change rate is through the ceiling. Our ability to retain snaps has come a long way; at this point a limit of 255 without intervening software is pretty silly.
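
Back-of-the-envelope on that 255 cap (Python, cadences are just examples): daily snapshots run out after roughly the 8-9 months adorai mentions, while hourly burns through the limit in under two weeks.

code:
# How far back a 255-snapshot cap reaches at different cadences.
MAX_SNAPSHOTS = 255

for label, per_day in [("daily", 1), ("twice daily", 2), ("hourly", 24)]:
    days = MAX_SNAPSHOTS / per_day
    print(f"{label:12s}: ~{days:.0f} days (~{days / 30:.1f} months) of retention")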

Rhymenoserous
May 23, 2008

Aniki posted:

I talked to Nimble today. Their SAN is very interesting, though I'm not sure how much some of their performance and compression features are going to matter for our situation. I do like the ability to take 10,000+ snapshots, and I like that when you update the controllers it will automatically take a controller offline, apply the updates, then bring it back and repeat the process for the other controller. Their management software seems decent; it looks like they do all or most of the same things that NetApp does, albeit in what seemed to be a slightly clunkier way. As for cost, I think they said the street price for the 220 is $40k, and they were very adamant about being willing to do what it takes to beat NetApp's price. They also offered to send out a SAN for us to try for 10 days, which we may do.

I also talked to CDW, who we would be purchasing the NetApp equipment through. I gave them some more detailed information about what Nimble was willing to do with their price and also mentioned that we have some other equipment that we need. They're going to come back with a couple of quotes, and I'm curious to see what kind of discounts they will offer. I'm hoping that we'll get some pretty heavy discounts, which would make getting this project approved a lot easier.

CDW resells Nimble now too, so make sure you bark up that tree as well.

If you buy a Nimble, let me list you as a referrer so I can get a free iPad :colbert:

Rhymenoserous
May 23, 2008

NippleFloss posted:

I hate to say it, but their usable space calculations are misleading. You start with 12 disks. In an active/active configuration you must have at minimum two raid groups, as each controller requires its own aggregate. With raid-DP each raid group uses two parity disks, so that knocks your 12 disks down to 8. Best practice is to leave at least one hot spare available on each controller, which would drop you down to six data disks.

But even assuming you don't do that and use all 8 drives as data disks, you still don't get 16T. For a couple of reasons a 2T disk does not actually provide 2T of storage. Radek does a good job of explaining in this thread: https://communities.netapp.com/thread/8509. Basically block checksums + disk geometry differences require that disks are "rightsized" to a smaller value than the size stated by the disk manufacturer. In the case of a 2T SATA disk you get around 1.7T. See this output from one of my filers:

29.22: NETAPP X306_HJUPI02TSSM NA02 1695.4GB (3907029168 512B/sect)

That is a 2T disk.

So at best you get 8*1.7T, or 13.6T, of which WAFL reserve takes a fixed 10% off the top. That leaves you around 12T, with no spares on either controller.

If you are being sold 16T of usable storage you should ask them to explain exactly how they are getting that number given right-sizing, raid-dp penalty, and WAFL reserve.

I've dealt with a lot of unhappy people who thought they were getting more storage than they really were because the sales team did a poor job of selling real, usable capacity. It certainly does NetApp no favors to disappoint new customers that way.

I'm glad someone else is breaking this down: I knew 16TB of usable, allocatable space at that price point didn't sound right at all.
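
To put his numbers in one place, here's a quick sanity-check sketch in Python (the 1.7T right-sized figure and 10% WAFL reserve are straight from the post above; everything else is just arithmetic):

code:
# Rough usable-capacity check for 12 x "2TB" SATA in an active/active FAS config.
TOTAL_DISKS = 12
RAID_GROUPS = 2            # one aggregate per controller
PARITY_PER_GROUP = 2       # raid-dp
SPARES = 0                 # best case; best practice is one per controller
RIGHTSIZED_TB = 1.7        # what a "2TB" disk actually gives you
WAFL_RESERVE = 0.10

data_disks = TOTAL_DISKS - RAID_GROUPS * PARITY_PER_GROUP - SPARES
raw_tb = data_disks * RIGHTSIZED_TB
usable_tb = raw_tb * (1 - WAFL_RESERVE)
print(f"{data_disks} data disks -> {raw_tb:.1f}T raw -> ~{usable_tb:.1f}T usable")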

Rhymenoserous
May 23, 2008

szlevi posted:

This is why I hate NetApp, EMC, etc., for this BS nickel-and-diming on features - and this is exactly the main reason why I went with EqualLogic last time and will always go with all-inclusive vendors only: all features are available from day 1, no rip-off prices later, after I've already bought into the system.

Yep, that's a big part of why I went with Nimble. The listed feature set is the listed feature set. There are no little stars by anything that link to a footnote saying "only valid if you purchased some lovely overpriced software add-on".

Also gently caress replication manager.

Rhymenoserous
May 23, 2008

NippleFloss posted:

Not sure where the VNX to EqualLogic transition is coming from. Going from a unified block/file scale-up SAN to an iSCSI-only scale-out SAN doesn't make much sense to me. The VNX line is a direct response to NetApp, and NetApp would be the obvious competitor.

The VNXe line was put out to compete with Equallogic.

Efb. By days.

Rhymenoserous
May 23, 2008

lol internet. posted:

I'm a bit of a SAN noob but these are most likely easy questions for you guys.

1. I have two physical QLogic HBAs on an IBM blade. I would like to connect them both to 1 LUN via iSCSI on a NetApp SAN. How would I set up MPIO? Do I just configure both HBAs to point at the LUN and the MPIO will figure itself out? Or is there third-party software I would install on the OS?

Needs more info. Is this going to a 10G network? If so, use one for iSCSI and the other for redundancy.

quote:

2. For NetApp, can someone explain initiator groups to me? From my understanding, you need an initiator group mapped to a LUN so it will be available as a target. Also, it stops other nodes from accessing the LUN unless their IQN is put inside the initiator group.

I can't speak for NetApp, but initiator groups on my Nimble are so you aren't broadcasting 300 different initiators to a single box. If I just dumped everything out in the wild without grouping, when I fired up the iSCSI initiator I'd see pretty much every chunk of iSCSI storage flopping around. It's messy and harder to manage. So instead I create a group for each server (or one to hold all of my VMFS partitions) and present them logically to the array.

quote:

3. What are the pros\cons\best practices for Volume size\LUN per Volume\Lun sizes etc..\

Start small. It's easier to grow if you need it than it is to reclaim if you need it. Don't create huge fuckoff LUNs. It's tempting, but it WILL bite you in the rear end unless you have an unlimited budget. Keep like data together as much as possible. Honestly these questions would be a lot easier to answer if you told us what you were doing.

quote:

4. Pros\Cons between hardware\software initiators? I assume hardware has better performance overall.

The difference is overhead, but in a modern setup, overhead is not a big enough deal to matter. Most array vendors seem to prefer software initiators as their software can mod the initiator timeouts and such on the fly. Do whatever the storage vendor suggests.

quote:

5. I mapped a LUN through a physical HBA initiator but when I boot up into windows and check the network connections, the IP address of the network adapter is 169.x.x.x? is this right?

It's all dependent on how you set it up. I route my poo poo through a segregated 10G VLAN on my switches because it gives me better scalability and I can monitor what's going on. If an HBA starts dropping packets I can find out easily.

quote:

Any references or websites which are helpful in making me understand the concepts and real-life usage of SANs would be awesome. I understand SANs are used everywhere, but I'd like to know why X scenario would be better than Y scenario using ____ method.

Thanks!

Again it would be easier for you to just tell us what you want out of it.

Rhymenoserous
May 23, 2008

skipdogg posted:

I'm really curious as to how big this Exchange environment is as well. We outsourced Exchange though, and it's been wonderful. It's someone else's problem now.

I'm trying to get this done now.

Rhymenoserous
May 23, 2008

FISHMANPET posted:

Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad.

Slow as gently caress. Prone to causing disk failure. There's your specific criticism of Drobo.

(gently caress drobo I've got two of them)

Rhymenoserous
May 23, 2008

Beelzebubba9 posted:

I'm going to bump this too. The internet seems to have very good things to say about Nimble's product, and the people I know who use them really like them, but it wasn't in a production or similarly stressed environment.

....or do I need to be SA's $250K guinea pig?

I've got a Nimble CS-220 in production. Ask away.

Rhymenoserous
May 23, 2008

Number19 posted:

Is the performance as good as they advertise? Have you had any major issue with bugs? Do you use the snapshot backup feature and if so are you replicating it to an offsite unit or just using a regular backup system?

I'm curious about the hardware since it looks like it's a SuperMicro storage chassis with Nimble branding plastered on top of it. Are the bulk storage drives midline SAS or just regular SATA drives? I know that the SSDs are just consumer level MLC Intel drives but I'm curious about the other drives and their reliability.

1. The performance is pretty loving good, honestly. I have the 220, which nets me about 12 TB of storage, and it handles two production databases with 150+ users each, as well as a bunch of ancillary reporting DBs and my primary file and print server/voicemail server, etc. Oh, and I'm almost 100% virtual, so the VHDs are running off of it as well. From my understanding the array is pretty good at figuring out what data is most used and shuffling it over to the SSDs; my SSD cache hit rate is something like 95%.

2. Bugs: I've been in production for 6 months now and I've only run into one bug: a disk reporting bug which was ironed out in the next patch. I've gotten a much higher bug rate out of EMC than these guys to be frank.

3. I am using the snapshot backup feature but not yet replicating (Waiting for budget to buy another unit).

4. Bulk storage is high capacity SATA

If I had a criticism of the system, it's that it's actually a little too simple. It's a fairly barebones SAN, so you don't get as many of the integration features as you would see in others. Want to restore a backup? You are going to be mounting the latest snapshot: there are no tools to, say, automatically mount the last several, like you may see in other companies' arrays.

But it also compresses like a motherfucker. I'm getting near 12 TB of usable storage out of 7TB of physical disk (post-RAID, of course).

Rhymenoserous
May 23, 2008

three posted:

You have to configure all of your hosts to tolerate ~40 seconds of the EQL array being unavailable during firmware upgrades. Maybe this is recommended with other iSCSI arrays, but I haven't run into it yet?

This is fairly normal. Usually the handoff is smooth (i.e., within a few seconds); this is just a rear end-covering number.
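
For what it's worth, on Windows hosts the knob that usually covers this is the standard SCSI disk timeout. A minimal sketch (Python via winreg, run as admin inside the guest; 60 seconds is an assumed value, check what your array vendor's host integration kit actually sets):

code:
# Bump the Windows disk timeout so a ~40s controller handoff doesn't
# surface as I/O errors. Standard registry location; reboot to apply.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Disk"
TIMEOUT_SECONDS = 60  # assumed value, verify against your vendor's docs

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TimeoutValue", 0, winreg.REG_DWORD, TIMEOUT_SECONDS)
print(f"Disk TimeoutValue set to {TIMEOUT_SECONDS}s")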

Rhymenoserous
May 23, 2008

Number19 posted:

Do you just back up the snapshots to tape/whatever?

Yeah I read this in the documentation that they make public and it's one of the things that scares me the most. I guess if you used a backup system to handle the actual restore process it would work well enough. The restore methods for SQL and Exchange in their demo videos are pretty :stare:

Apparently they are teaming up with CommVault to provide some sort of full solution but that's gonna add a lot of cost to it. Unless you're already using CommVault that is. I'm not but I hate Backup Exec a lot so I might be able to make that change too (assuming that CommVault is any better).

Right now we're not offloading anything (yeah, I know). Please bear in mind that this organization had zero operational-level backups/snapshots of any kind, with the exception of the SQL databases (and only the DB backups themselves), prior to me being hired about 9 months ago.

They had 10 servers. No server backups. No file-level backups. Oh, and the primary and secondary ERP servers were sitting on a UPS with a dead battery pack, and we're in a region well known for power spikes during: thunderstorms, wind storms, a toddler crossing the yard, the owner burping, the local power company spontaneously combusting, etc. Everything was on 6-7 year old servers: all bare-metal installs.

My first day on the job I was getting a walkthrough of the server room and a thunderstorm shot through. I watched as five servers snapped off then back on six times in a row and had a small panic attack. I stomped my feet and yelled at the guy in charge and demanded he purchase a new UPS because "This is how you lose raid arrays".

Two weeks later I lost my first RAID array! No backups! An old custom software system we couldn't get support for to do a reinstall! FUN!

FAKE EDIT: The best part is the guy who ran the company's IT previously had actually purchased a UPS to replace the one that was dead. He just didn't buy one with enough power capacity or the right power input for our building. So he hid it behind the racks so no one would yell at him. It now sits on my workbench as a testament to :wtc:

Real Edit: Now almost everything is on 4 brand new HP servers running ESX 5, managed from my shiny vCenter interface, everything on SAN and snapshotted. I've got copies of every major production VM tucked away on USB drives. It's not perfect, but it's a hell of a lot better than the frantic sobbing that was going on months back.

Rhymenoserous fucked around with this message at 21:33 on Aug 16, 2012

Rhymenoserous
May 23, 2008

Docjowles posted:

Raising my hand here as a storage idiot again. I'm still evaluating my options for buying an entry-level SAN to replace our aging HP MSAs. One roadblock I'm hitting is that my boss is really, really mistrustful of a SAN as a single point of failure. Despite having redundant controllers, each with their own ethernet ports and dedicated switches, their own power supplies plugged into different PDUs, and the disks themselves protected with RAID, he still sees one physical box and thinks there's a decent chance the entire thing will die. For this reason we have two MSAs, and our applications' architecture is such that we can lose one entire enclosure and still limp along.

Is that an eventuality most of you plan for, losing both controllers in your SAN (or the entire RAID array)? Edit: We do have a DR site, I'm talking about within your primary site so one can poo poo the bed without needing the DR site.

The thing is: it's not really a single box. It's two boxes in a single chassis; they don't share a common interface and usually use heartbeats of some form to check each other's health. I have only seen both controllers go down a single time in the last 10 years of working with various models of SAN, and that was at a budget web hosting company running one of the old EMC NS series all-in-one SAN/NAS cabinets. The reason it went down? They went active/active on the controllers when they went past the point where a single controller could handle the workload they were dealing with. So what happened was they had two controllers spinning at about 75% capacity and one took a poo poo. Do you know what happens when you dump double the load on an already taxed controller? It takes a poo poo too.
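
The headroom rule of thumb from that story, as a trivial sketch (Python, numbers invented): if the two controllers' combined load is more than one controller can carry, failover just means the survivor falls over too.

code:
# Can one controller absorb the whole load if its partner dies?
controller_a = 0.75   # utilization, as a fraction of one controller's capacity
controller_b = 0.75

failover_load = controller_a + controller_b
if failover_load > 1.0:
    print(f"Failover load is {failover_load:.0%} of one controller: it dies too")
else:
    print(f"Failover load is {failover_load:.0%}: you survive losing a controller")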

Rhymenoserous
May 23, 2008

NippleFloss posted:

FWIW the guys at Nimble seem pretty intent on building themselves into a real competitor, rather than selling out at the first opportunity. A lot of their executive staff came from high positions in other well established companies like NetApp and Data Domain so they had ample opportunity to cash in in those positions.

The other reason I think it's unlikely is that I don't really see who would buy them. Dell and HP have already acquired very similar products. Nimble has nothing in their portfolio to offer NetApp. IBM has invested a lot recently in developing their own unified storage platform, and picking up Nimble would require taking focus off of that. EMC already has products that fill the same niche Nimble does. Hitachi is really the only vendor left that they could help, but that product doesn't fit very well with the rest of Hitachi's portfolio.

Of course EMC has a tendency to try to buy everyone at some point or another, so who knows.

EMC would actually do well to pick up the product. The basic storage architecture EMC uses hasn't really changed over the last 10 years. They could also learn something about GUI development, because gently caress Unisphere right up its worthless rear end (still better than what came before, but drat).

Of course they won't do it because then they wouldn't be able to make bank by selling those people those sweet NAS management classes (That one was pretty good tbh) or the terrible VNX SAN management class (Ugh kill me).

Rhymenoserous
May 23, 2008

Number19 posted:

If I still have to offload backups and manage restores using a data protection suite then I'm probably just going to evaluate the Nimble from a performance perspective and compare the overall costs to a solution that has all those features like a Netapp filer.

It has replication features; I'm just waiting for my company to stop dragging its feet and let me purchase one of the smaller arrays to hold offsite snapshots.

Rhymenoserous
May 23, 2008

Number19 posted:

Yeah they're coming after us hard too, and we're not even all that big of a shop. We're 6ish months away from making any sort of decision and they're already trying to get me to see a demo and want to see if they can accelerate our timeline.

They did this to me too. I was told in confidence by a tech that they put some pretty hard quotas on their sales guys, and that he actually transferred to the technical end to avoid the stress.

I remember a point a long time ago when NetApp was just as annoying.

Docjowles posted:

I think a lot of it is this, yeah. Firmware update screwing the pooch, or one controller dying and the failover not happening correctly, stuff like that.

My rear end in a top hat used to pucker during EMC FLARE code updates.

Rhymenoserous
May 23, 2008

NippleFloss posted:


At NetApp, Professional Services are not treated as a separate profit center. Generally PS is meant to basically break even internally while accelerating sales. I'm told that at EMC, PS is a large profit-generating portion of the organization, distinct from sales. I suspect that a lot of this is by design, and why EMC products are still behind the curve when it comes to simplicity and manageability.

This is absolutely true: on the higher end units EMC drat near hands you the hardware while selling you support contracts that will make your wallet bleed.

Rhymenoserous
May 23, 2008

cheese-cube posted:

See, that's what I thought the main reason would be: the ability to leverage existing switching equipment. However, what about environments which require storage bandwidth greater than 1Gb but do not already have 10Gb switching equipment? Is that the point where deploying FC becomes cost-effective?

On that subject have 16Gb FC HBAs and switches hit the market yet or are vendors still finalising their designs?

Not even then, sometimes, because of port aggregation. I got lucky (or unlucky depending on your ~views) in that my office uses nothing but Dell PowerConnect switches, which all have the option to buy a fairly inexpensive module that you can plug 10G HBAs into.

So I slapped together a nice 10G iSCSI backbone fairly quickly. Works like a champ too.

Rhymenoserous
May 23, 2008

Bea Nanner posted:

Okay, so my RAID 5 disappeared overnight last night. Not exactly sure what happened yet, still looking into it. I did just update my AMD drivers, which includes that RAIDXpert utility, so that may have something to do with it. Anyway, I don't have all my specs and such on hand, so I am asking this in a generic sense. If I need additional help, I'll make a thread in Haus of Tech Support.

1) What is the best tool to recreate the array without losing my data? I see a few tools via Google search, but I am wondering if there is a standard recommended way of doing this.

2) My controller is onboard, so I am thinking of getting a more legitimate controller card. What are my options here? I need something with at least 5 SATA III ports, can handle RAID 5, and can detect an existing array.

This seems to be the front-runner:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816103220

Am I going to find anything better/cheaper/more effective?

Have you tried rolling the drivers back?

Rhymenoserous
May 23, 2008
You have good backups right? No guts no glory.

Rhymenoserous
May 23, 2008
'Arf a brick at it.

Rhymenoserous fucked around with this message at 20:53 on Aug 27, 2012

Rhymenoserous
May 23, 2008
I'm going to make this suggestion again: roll back to before the update that broke everything, then go get a real backup solution. Since this is all on a DAS, rolling back your drivers shouldn't affect anything on the DAS itself.

RAID is not backup. If this server + DAS is holding "live" data then you need, at the bare minimum, another device to which you back up that live data. Not to beat up on you, but at one point you say "this is the backup" yet at other points you're implying there is live data on this that would cause issues if it went missing.

Which is it?

Or is it both?

Rhymenoserous
May 23, 2008

Boogeyman posted:

I can tell you exactly what that is. Every month we have to run a process that looks at about a billion records individually, does some stuff, then inserts a new copy of each of those records into another table. Moving from local disk to the SAN greatly increased the length of that process, and they're convinced that it's due to the latency.


:catstare:

I think we found your problem.

Rhymenoserous
May 23, 2008
Ggggah I need to stay out of the Spiceworks forums. If I see Scott Alan Miller yap on about how Local Disk is the best thing since sliced bread and SAN is poo poo one more time I'm just going to steal the axe he wandered out of the dark ages with and execute him.

The guy constantly harps about the network being a ~major failure point~ for a SAN, which is true but I kinda wonder what wildly hosed up networks he's been setting up if his SAN takes a poo poo due to network problems more than once every two years or so.

Rhymenoserous
May 23, 2008

Misogynist posted:

I, for one, like to ensure that my systems that nobody can interact with because the network is down stay up and running in the event of a complete service outage! :awesome:

I'd say that but I'd probably get banned over there. The entire community worships the guy as some kind of genius storage/VM expert but half of the poo poo he spouts is absolutely archaic or dumb.

Every time a medium business pipes in asking for a SAN solution, he tries to convince them to do the old "roll your own" with a 1U HP chassis, a DAS, and a copy of FreeNAS. I personally think that's great... for a project SAN or a lab. But I'll be hosed if I ever put anything in production where the last line of "yell at people till it works" is in my office.


Rhymenoserous
May 23, 2008

Nukelear v.2 posted:

This guy is insane. We need a SAN.txt thread that can be filled with everything this guy posts. Propose we change the thread title to 'Enterprise Storage Megathread: SAN is never what you want, only what you get stuck with.'

I'm so glad I'm not the only one that noticed. I mean I've been working with various SAN solutions over the years and I know the pitfalls. He never seems to hit any of the ACTUAL pitfalls and always ends up talking about how local disk will make IO operations on the host VMDK 1% faster or some dumb bullshit metric that no one will ever care about.

I expect at any minute to find out that he's basing all of his iSCSI knowledge on when he tried to set up an iSCSI VLAN on his old 100Mb network.

Nukelear v.2 posted:

^^^
Why would he propose that? It combines all the failure points of a SAN with all the failure points of DAS, in addition to the 'roll-your-own' problems. Crazy.

Oops my bad, not Freenas: Openfiler

He even created a lovely acronym to describe his lovely device:

SAM-SD (Scott Allen Miller - Storage Device)

http://community.spiceworks.com/topic/99354-what-is-a-sam-sd

Because no one had thought of slapping a DAS on a lovely old server and installing OpenFiler before. It needed its own name.

Rhymenoserous fucked around with this message at 22:44 on Aug 29, 2012
