H110Hawk
Dec 28, 2006

lilbean posted:

I was just gonna post this. I ordered a 4540 today (more cores, more RAM, etc). Can't loving wait for it to arrive.

I've started the process of getting a try-and-buy of a 4540+J4000. I'm waiting to see if they've fixed a minor sata driver problem.

http://www.dreamhoststatus.com/index.php?s=file+server+upgrades

H110Hawk
Dec 28, 2006

lilbean posted:

Hm, haven't heard of that one. Do you have a bug ID for that? Is it a performance related issue?

They seem to not be making a big deal of it. The X4500 shipped with a broken sata driver, which they consider low priority, even though the box is basically six 8-port sata controllers with some cpu and memory stuffed in the back. We had to install IDR137601 (or higher) for Solaris 10u4 to fix it. The thumpers all ship with u3, so first you have to do a very tedious upgrade process.

Sorry, I don't have a bug ID. OpenSolaris suffers as well, google "solaris sata disconnect bug" or "solaris marvell" and you will find some people who hit it. It's pretty much anyone who puts any kind of load on a thumper. Or in my case, 29 thumpers.
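
For what it's worth, installing the IDR itself is just the normal patch workflow once you're on u4; a rough sketch (the exact IDR file/revision name is whatever Sun support ships you):

code:
# confirm the box is actually on u4 first -- the IDR won't apply to u3
cat /etc/release

# unpack whatever Sun support sends, then install it like any other patch
unzip IDR137601-02.zip
patchadd IDR137601-02

# verify it took
showrev -p | grep IDR137601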

H110Hawk
Dec 28, 2006

lilbean posted:

Jeeeeesus Christ, 29 of them? Nicely done.

Thanks. :) They're big, loud, HEAVY, NOISY monsters, but if you don't care about power redundancy you can stuff 6 of them in a 120v 60amp rack! Once they're purring along with that IDR they're lots of fun. :3:

quote:

How do you have your ZPOOLs laid out on them? We're basically going to use ours for a backup disk (with NetBackup 6.5's disk staging setup), so we'll be writing and reading in massive sequential chunks. We plan on benchmarking with different setups like 40 drives in mirrors of 2, raidz and raidz2 vdevs (in different group sizes).

Only disks 0 and 1 are bootable (c5t0d0 and c5t4d0), but you are correct, they come in an SVM mirror. It makes upgrades not so scary, since if you totally bone it somewhere, you can revert quickly. The new X4540 seems to be able to boot from flash, which will be quite nice, adding 2 more spindles to the mix.

Right now we're only getting 11TB usable out of a machine, with 5-disk raidz2 vdevs and a handful of spare disks.
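
If you're curious what that looks like at pool-creation time, it's roughly this (controller/target names made up here; a real thumper layout spreads each vdev across the six controllers):

code:
zpool create tank \
  raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 \
  raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 \
  spare c0t7d0 c1t7d0
...and so on for the rest of the 5-disk vdevs.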

Oh, and a stock thumper zpool won't rebuild from a spare, either. It gets to 100% and starts over. Enjoy! :cheers:

H110Hawk
Dec 28, 2006

lilbean posted:

Well with only one it shouldn't be too much trouble. As for the weight, well I think I'll make our co-op student rack mount the thing - and take the cost out of his paycheck if he breaks it by dropping it.

Hah! I hope your disability insurance is paid up.

quote:

Yeesh, is that with the unpatched Solaris 10 that comes with it? I'd planned on a fresh install once I get it with the latest ISOs and then patching it.

Yup! Thing should ship with a damned working copy of Solaris. :(


Wedge of Lime posted:

The 'Marvell bugs' have now been fixed as part of official patches:

127128-11 : Solaris 10 U5 Kernel Feature patch
138053-02 : marvell88sx driver patch

If you're running an X4500 I would recommend moving to these over the IDR. Sun does take this issue seriously, it's just that getting this thing fixed has not been easy :(

Do you work for Sun? If so, I would like to speak with you privately about this stuff. I've sunk a lot of man-hours into this thing, trying to patch something with an IDR for U4 of Solaris based on a plan from our gold support contract.

It looks like the marvell patch was just released a month ago. I'll have to ask my sales rep why we weren't notified about it.
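
For anyone else on an X4500, it's a ten-second check to see whether a box already has those two patches (IDs from the quote above); if not, they install like any other Solaris patch:

code:
showrev -p | egrep '127128|138053'
# download from SunSolve, unpack, then:
patchadd 127128-11
patchadd 138053-02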

quote:

Also, before doing anything with ZFS please read this:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Yes, read this, it is awesome.

H110Hawk
Dec 28, 2006
I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours; it just needs to work.

Our requirements:

Hardware raid, dual parity preferred (RAID6), BBU
Cheap!
Runs or attachable to Debian Etch, 2.6 kernel.
Power-dense per gb.
Cheap!

To give you an idea, we currently have a 24-bay server with a 3ware 9690SA hooked up to a SAS/SATA backplane, and have stuffed it full of 1TB disks. People are using SFTP/FTP and soon RSYNC to upload data, but once the disks are full they are likely to simply stay that way with minimal I/O.

We are looking for the cheapest/densest cost/gig. If that is buying an external array to attach to this system via SAS/FC, OK! If it's just buying more of these systems, then so be it. I've started poking around at Dot Hill, LSI, and just a JBOD looking system, but I figured I would ask here as well, since LSI makes it hard to contact a sales rep, and Dot Hill stuff on CDW has no pricing.

The ability to buy the unit without any disks is a big plus, we are frequently able to purchase disks well below retail.

I need about 50-100tb usable in the coming month or two, then it will scale back. I am putting these in 60amp/110v four post 19" racks.

Edit: Oh and I will murder anyone who suggests coraid, and not just because it is neither power dense nor hardware raid.

H110Hawk
Dec 28, 2006

complex posted:

This is for the 50GB backup offer, I presume?

Hrm? :downs:

rage-saq posted:

Get an HP SmartArray P800 card and then attach up to 8 MSA60 12xLFF enclosures.
P800 is about $1k, each shelf is about $3k, and then add your 1TB LFF SATA drives and you are good to go. If you need more, attach another P800 and more shelves, etc.

Right now our current theory is a 3ware 9690SA card with these:

http://www.siliconmechanics.com/i20206/4u-24drive-jbod-sas_sata.php

So your solution is about 2x the cost. :) It's a Supermicro backplane, and we're getting a demo unit in about 5-10 days. Any horror stories about the card? Backplane? Is there something cheaper per rack U per GB? (Or moderately close, monthly cost of the rack and all.)

H110Hawk
Dec 28, 2006

Misogynist posted:

If you're really afraid of Sun OSes, you could also run Linux on the thing, but if you're trying to manage a 48TB pool without using ZFS you're kind of an rear end in a top hat.

I have 26 thumpers. :)

H110Hawk
Dec 28, 2006

M@ posted:

I've got one 6080 :monocle:

I've worked with a bunch of goons in the past and no one's ever said anything bad about me to my face :)

Sorry about that 6080. :(

I'll vouch for M@, he's yet to let me down, and I've bought one or two things from him.

I'll have a look-see at the iXsystems stuff, thanks. A 48-in-4U beige-box chassis is just the kind of thing I was looking for to knock off this Sun X4500/J4000 stuff.

Edit: Hah, I was just looking at the Xyratex stuff and being annoyed at no obvious link to resellers. http://xyratex.com/products/storage-systems/storage-F5404E.aspx I've had good luck with their DS14 shelves, I can't imagine this being much worse, besides the onboard raid.

H110Hawk fucked around with this message at 22:15 on Sep 18, 2008

H110Hawk
Dec 28, 2006

Catch 22 posted:

What? Is that a tool? Google turns up nothing for that. How did you make that chart?

If I had to guess it's just snmp data graphed with something like cacti/rrdtool.

Edit: Jesus, you would think they could just sell you a SNMP license.
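
If it really is just SNMP into rrdtool, the whole pipeline is a handful of commands. A bare-bones sketch (community string, host name, and interface index are placeholders):

code:
# one-time: an RRD that keeps 5-minute samples
rrdtool create switch.rrd --step 300 \
  DS:in:COUNTER:600:U:U DS:out:COUNTER:600:U:U \
  RRA:AVERAGE:0.5:1:8928

# from cron every 5 minutes
IN=$(snmpget -v2c -c public -Oqv switch1 IF-MIB::ifInOctets.1)
OUT=$(snmpget -v2c -c public -Oqv switch1 IF-MIB::ifOutOctets.1)
rrdtool update switch.rrd N:$IN:$OUT

# then graph whatever window you want
rrdtool graph traffic.png DEF:in=switch.rrd:in:AVERAGE LINE1:in#0000FF:"in"

Cacti is basically that plus a web UI and templates.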

H110Hawk fucked around with this message at 07:05 on Sep 21, 2008

H110Hawk
Dec 28, 2006

Maneki Neko posted:

We once hit a bug in Data Ontap while resizing a LUN that knocked one head offline, then the other head took over and continued the same operation, hit the same bug and died too. It took about an hour for everything to come back up and be happy after replaying the logs, but I would classify that as a failure. :)

Hah! We had this same thing happen with our BlueArc once, only the bug triggering data was committed into the logs. Replaying the logs caused both the clustered heads to crash. Wound up having to wipe the logs (data loss ahoy!) and go from there.

Never again with BlueArc. I've never had an error like that on a NetApp.

H110Hawk
Dec 28, 2006

Catch 22 posted:

You can store data on the OS LUN, but it's best not to.

That's why you carve up a SAN. The 4 dedicated OS drives you buy need to be small and fast. 36GB 15K.

Coming from the world of netapp, this seems ridiculous to me. On a single tray why would I give up 4 disks to OS? What is it even paging, and why doesn't it simply pre-allocate what it needs to manage your system? Doesn't a SAN OS know everything about itself when you initialize it?

That stinks of a horrible design flaw. Am I missing something obvious here besides "buy more spindles?"

H110Hawk
Dec 28, 2006

Catch 22 posted:

Winner Winner Chicken Dinner, I would say, but he can't with the AX150.

My main point is the AX150 seems like a pretty poorly designed solution if I have to burn 25% of my available spindles for what should be some paltry OS space consumption. What the heck are they storing on those disks?

H110Hawk
Dec 28, 2006

Catch 22 posted:

That paltry OS is rather large depending on the SAN series you pick. The CX line referenced in the above post requires 62GB.

That seems excessive. :)

To diverge from the thrilling debate about some Dell bottom of the barrel disk enclosure, why am I seeing a shitton of ECC (and similar) errors on my filers in the past week?

In the past 7 days I've had transient ECC errors, a watchdog reset, and two filers with NMI's being shot off the PCI bus. We also had our Cisco 6509 register:

code:
Sep 30 21:51:42 PDT: %SYSTEM_CONTROLLER-SP-3-ERROR: Error condition detected: TM_NPP_PARITY_ERROR
Sep 30 21:51:42 PDT: %SYSTEM_CONTROLLER-SP-3-EXCESSIVE_RESET: System Controller is getting reset so frequently
I would chalk it up to power, but this is at three different datacenter locations, with three different power feeds, one of which is many miles away. We've also had countless webservers and stuff just arbitrarily falling over. Has anyone else been having a "when it rains it pours" week with really random errors?

At this point I'm blaming bogons and the LHC.

H110Hawk
Dec 28, 2006

Mierdaan posted:

Sorry we don't all have 23 thumpers to brag about. Seriously dude, you're a loving dick sometimes.

It was a joke. Catch22 and I seemed to be having quite the lively (read: boring) argument over something I had no idea about, and I had actual content I was curious about related to this thread.

Aquila posted:

Actually he has 27, but I had lost three and not bothered to rma a fourth when I worked there.

Normally when people quit they steal stuff, not alert others of what they find. In that sense, you failed at quitting.

H110Hawk
Dec 28, 2006

Saukkis posted:

Maybe he did that to distract you from the real loot. When was the last time you took a look at your core router? Are you sure it hasn't been replaced by a WRT54G? It could explain why my site is down. :argh:

Don't be silly, those three thumpers are worth FAR more than some paltry core router.

(And we use an Airport Express. Can't you read my witty avatar?)

H110Hawk
Dec 28, 2006

Ray_ posted:

I know I can pickup a StorVault for pretty cheap, but as far as I know they are single-controller only.

StorVault is indeed single controller only. No clustering options. They are fairly robust in that netapp sort of way, but unless they've upgraded ontap it has some strange bugs that will bite you in the long run. Memory leaks in various supported/unsupported features will eventually cause them to stop working.

Failover was kinda hokey last time I tested it, in that you couldn't just move all of your disks to a cold chassis and have it work instantly. License codes are tied to the chassis, etc. Support having trouble comprehending requests was also a minor issue.

H110Hawk
Dec 28, 2006

Catch 22 posted:

That's not failover you know...

Yeah, it is a bad habit of mine. At work we have a system for "failing over" hardware onto new hardware. It is not the same as what everyone else refers to as a high-availability clustered failover. Sorry about that misuse of the term. I did mention no clustering/single controller. :)

H110Hawk
Dec 28, 2006

Catch 22 posted:

Cool. As long as you know there are better ways. I would hate for you to think that is the only "failover" available out there. Good God I think I would shoot myself if it was.

code:
tastesgreat> cf status
Cluster enabled, lessfilling is up.
At least some of our hardware is somewhat fault tolerant. :v:

quote:

Question: Why do you have it setup for cold replacements like that? Or are you referencing that you are using StorVault meaning there is no other kind of recovery other than cold replacements.

Cold replacement (or warm replacements in the case of some of our web servers) is a failsafe problem solver. We just order N+1 of everything and leave one sitting there. It scales out for us since normally N winds up being a very large number. Backplane goes out? Just throw in a new entire system and then work to replace the backplane on your now cold and dead system.

In reality, N+1 is actually closer to N plus 5-10% of N, depending on hardware class, reliability, ease and speed of repairs, etc.

The StorVault wound up being kind of a special case for us, we tried out two of them, own three of them (cold spare!), and they didn't work for our needs. We're migrating users off them, slowly, but the above bugs hold true for anyone who is going to use one. Internally they are pretty much like a FAS250 or whatever, only with truly off the shelf hardware.

H110Hawk
Dec 28, 2006

optikalus posted:

Well, my "SAN" has just turned itself into a pile of crap.

I lost two drives and the thing is still running thanks to the hotspare and RAID5, but obviously if I lose one more drive, I'm totally F'd. I overnighted two new drives to the DC which are supposed to be identical to the ones in there, but it detects them as being just a few MB smaller so the controller won't use them.

What is your "SAN"? What brand+part number hard disks did you have in it? Did you order the exact same model disk, down to the revision, or a "compatible" one?

Guaranteed sector count is something you need to pay attention to in the future. It is a good idea to only carve 95% of your disks into your array; that leaves the top end of them as a fudge factor in case you ever have to switch disk manufacturers.
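
One way to build in that slack, sketched with Linux software raid since I don't know your controller: partition each member to ~95% and build the array on the partitions instead of the raw disks.

code:
# leave ~5% of each disk unallocated so a slightly-smaller replacement still fits
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart raid 1MiB 95%
# ...repeat for sdc, sdd, sde, then:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1

Many hardware raid controllers have a similar "drive coercion" setting buried in the array setup that does the same thing.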

How long between disk failures? Were you able to gank the old disk before the second one failed?

H110Hawk
Dec 28, 2006

optikalus posted:

I quote "SAN" because it isn't really a dedicated SAN at all, just a RAID box with SAN protocols running on it.

Fujitsu drives

Pretty much a vanilla fileserver, then? Those are some real cost savers. We're using them at work now over netapp for a lot of things. Nothing wrong with them, so long as you don't go overboard on the cheap factor. I would suggest picking up some Hitachi or Seagate 5-year warranty disks and rebuilding from scratch. This can save you a lot of money over Netapp/HDS/EMC, and likely give you as much reliability as you need.

I don't know about recent years, but around 8-10 years ago my uncle worked for a company that assembled and sold raid systems (I forget the name). He managed the disk burn-in and certification for their units. He told me never, ever to buy Fujitsu disks: they apparently hit 50% failure in about half the time of the other major brands. We got a batch of brand new Fujitsu-label 15k disks, and they still seem to fail about twice as often as the Seagate equivalents. I don't have hard numbers, though, so I don't know if it's just the bad taste left in my mouth by my creepy uncle, or if they really do still suck.

H110Hawk
Dec 28, 2006

Wicaeed posted:

How hopelessly outdated are these things? I believe the HDD magazine has 36GB drives in it, and I know my company has about 9 or 10 older magazines with 16GB drives in them.

As long as it has something like ontap 6 on it you should be able to get a feel for it. Have fun. See if they have any hard-copy manuals for it, or ontap installation diskettes.

H110Hawk
Dec 28, 2006

Cultural Imperial posted:

I'm an employee of NetApp and I can try to put you in touch with someone more responsive if you're still interested.

What do you do at NetApp?

H110Hawk
Dec 28, 2006

Wicaeed posted:

Erg, I guess I had a mistype, it's not a 270, it's a Net Appliance NetApp F720 :v: and it uses Eurologic NetApp XL501R Fibre Channel JBOD FC9 (I know our company has a bunch of FC7's laying around)

How outdated is that? :)

For learning it is fine. OnTAP 6.5 or something will be the latest OS it runs, since they stopped making Alpha versions sometime around then, maybe 6.4. It is going to be brutal on your electric bill, though. :)

H110Hawk
Dec 28, 2006

Wicaeed posted:

Welp, I made the plunge :D

Got the filer head, two HD magazines with seven 32GB drives in them each, new in box ( :lol: ), Data OnTap 6.5 OS software (it says it's for a FAS250, dunno if it will work), and a shrinkwrapped box with Data ONTAP 6.5.1R1 software in it, as well as all the cabling :)

As long as it's the correct architecture type (Alpha vs. i386) you will be fine. I don't believe any of the ontap images are different from one another; the only thing licensing changes is which processes it will or won't run.

H110Hawk
Dec 28, 2006

Mierdaan posted:

(sales bullshit)

edit: nevermind, I asked him the question again via email and he recanted. Huzzah!

Always get them to put it in writing. It helps keep sales people honest. If you're really not sure, call up Netapp and ask them directly!

H110Hawk
Dec 28, 2006

Wicaeed posted:

Can someone explain to me how Netapp does their licensing?

Expensively, and based on raw capacity/filer capacity.

quote:

I figure it probably wouldn't be worth my time/money to buy a license from Netapp, am I right?

You might as well give them a call. The worst thing that happens is they laugh at you. If you can't get a license from them, and are willing to work out something a little hokey just to get the units legally working, shoot me a PM/IM and I can tell you about a couple of companies that can help.

H110Hawk
Dec 28, 2006

Wicaeed posted:

Well that's part of the problem, I can't access the CLI. Rather, the OS won't load all the way.


I'm guessing it's a CPU issue

Oh, jeez. I misread the filer model you had. F720 is a very old filer. It looks like your PCI cards are out of order. Try moving your NVRAM card to slot 1 and putting the FastEthernet card in Slot 2 (or whatever).

Don't bother calling netapp. Shoot me an IM, I tried sending you one and it said "refused by client."

Edit: \/ I tried sending you an IM on AIM.

H110Hawk fucked around with this message at 22:55 on Dec 11, 2008

H110Hawk
Dec 28, 2006

Wicaeed posted:

I assume there's going to be a way to reset it to something I want, correct?

I know nothing at all about ONTAP 5. In 6+ you would hit Control-C sometime after it starts booting but before it prints the OS banner, then pick the option saying you need to reset the password. In ONTAP 5.3.6R1, according to NOW:

quote:

How to reset the filer password

If you forget your filer password, reset it by using the system boot diskette. To avoid security problems, take care to limit access to the system boot diskette.

Procedure for resetting the password:

1. Reboot from diskette as described in "Booting from system boot diskette."
2. When the boot menu appears, enter 3 to choose Change Password.
3. When the filer prompts you for a new password, enter it at the prompt. The system prints the following message:

Password Changed

Hit Return to reboot:

4. Remove the diskette from the filer's diskette drive and reboot the filer by pressing the Enter key.

H110Hawk
Dec 28, 2006
Jesus Christ. 14 hours later:

code:
peeler-rescue> vol online boot
Volume 'boot' is now online.
I just felt like sharing. It's been a long day.

H110Hawk
Dec 28, 2006

tinabeatr posted:

Does anyone have any experience with BlueArc?

There is no pole long enough.

H110Hawk
Dec 28, 2006

skullone posted:

And whatever NAS/SAN you get, it'll suck. There will always be odd performance problems that you'll have to spend hours troubleshooting before your vendor will listen to what you're saying, only to have them say it's a known problem and a patch will be ready in a few weeks.

I'm not bitter or anything :)

Seems to be the standard Sun way of doing business. We had similar problems with the X4500 units, which shipped with a plainly faulty SATA driver that wouldn't be fixed until Update 4... no, Update 5. You should also be ready for a real heck of a time if you ever start doing what the ZFS papers say you should be able to do with the filesystem: things start to get hairy with the management tools around a few thousand nested filesystems. Update 6 resolved a lot of those issues, but some of it is still there.
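
The "what the papers say" part is mostly the one-filesystem-per-user pattern, and it doesn't take much to hit the point where the tools crawl; illustrative only:

code:
# one ZFS filesystem per user, as the docs encourage -- a few thousand of
# these and zfs list / snapshots / boot-time mounts all get painfully slow
for u in $(cut -d: -f1 /etc/passwd); do
  zfs create tank/home/$u
done
zfs list -r tank/home | wc -l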

From what I hear, and contrary to what their sales guys insisted, BlueArc appears to be doing demo units now, or is it still their "if we think you like it you have to buy it" try-and-buy program?

In theory I will have a couple of Titan 2200? 2400? units for sale w/ NFS, Clustering, and Data Migrator licenses. Anyone interested? Might also sell the disk trays if we don't have another use for them, Engenio (LSI), around 10 trays of FC and 10 trays of sata. Exact numbers available for serious inquiries.

H110Hawk
Dec 28, 2006

skullone posted:

You guys are scaring me... I already have a drive with predictive failure on my Sun box. Haven't reported it to Sun yet... but now I'm thinking "this RAID-Z set with hot spares isn't looking as good as RAID-Z2 anymore"

It's a lot cheaper to keep a sata disk on hand than it is to replace dead data. Just buy a Hitachi/Seagate disk of the correct size for your array. It costs, what, $150? Swap it in, deal with Sun, swap their part in.
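
The swap itself is one command once your off-the-shelf disk is physically in the failing disk's bay (device name made up):

code:
zpool replace tank c5t3d0
zpool status -x     # watch the resilver
# when Sun's official replacement arrives, do it again with their disk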

H110Hawk
Dec 28, 2006

InferiorWang posted:

I hate sales people. I want to get pricing on some lefthand gear, but I don't want to listen to any of their spiel, or get follow up calls only to have the person get pissy when I remind them we're a public school and everything comes down to dollars, not necessarily doing things the proper way. I don't want to talk to a reseller either. All I want to know is how much it costs.

Are there any resources available which might have this information available without having to talk to someone or does it pretty much come down to having to put up with sales people?

Get a full retail quote from them for "the right way", and have them itemize costs as much as they can, claiming government red tape. Then generate a quote for what you actually want, cut the price 50%, and make up a PO. Call them up, send them your proposal, and tell them it's this or EMC. See what happens. You generally won't be able to cut the service contract by nearly 50%, and I assume one is required by your school policy.

If they still balk, remind them that you have to keep a service contract on the device for 3? 5? years due to that same government red tape. Don't be afraid to lie outright to them; their sales guys will do the same to you.

H110Hawk fucked around with this message at 17:14 on Mar 18, 2009

H110Hawk
Dec 28, 2006

Catch 22 posted:

You can also save by doing the install yourself (if they are charging for it), and if you get the manufacturer behind you they can discount the vendor's quote, passing the savings to you. In my case I got a discount and an extra warranty year for nothing. Call LeftHand and talk with them. It can pay off.

Self-install of most stuff is a snap, assuming you aren't afraid of lifting disk trays. Just make sure a self-install doesn't conflict with your service contract and warranty. If it does, they should do it for free since you are paying for it with the service contract.

H110Hawk
Dec 28, 2006

InferiorWang posted:

My problem is I can't even get anyone to listen to me about doing this and getting away from bare metal machines for our critical data without having some semblance of a dollar figure attached to it. I have to approach this rear end backwards from how most normal people would approach it.

Remember above where I said lie to them? Do that. I imagine at this point they're calling you and wanting to set up meetings? Tell them to bring their spreadsheets because you have to talk cost as well as performance. Let them go through their whole dog and pony show with the glossy brochures and PowerPoint, then ask them what "that" (pointing at the screen) costs. Play hardball right back.

H110Hawk
Dec 28, 2006

brent78 posted:

I have 4 Coraid SR-1521's with 15 x 500GB drives each (7.5 TB RAW each), willing to unload real cheap to the first people who PM me. I was using them for a media project and now no longer need them.

I might take them. Shoot me a PM with your initial cost quote. How used are they?

H110Hawk
Dec 28, 2006

Sock on a Fish posted:

I came in this morning to find my Solaris box had crashed, and when I brought it back up it threw this at me:


The linked URL says that these types of errors are unrecoverable, but my OS is still chugging along and after the scrub finished the error disappeared. Am I in the clear or should I still be wary?

The filesystem is now suspect, because you don't know whether a block's data was changed and then re-checksummed so that it appears valid. I would check your logs for information about the crash itself, rather than just what zpool is telling you about your pool.
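
Concretely, on Solaris I'd look at something like the following rather than trusting the now-clean error counter alone (stock paths):

code:
zpool status -v                    # any CKSUM counts or permanent errors?
fmdump -eV | tail -50              # FMA error telemetry from around the crash
grep -i panic /var/adm/messages    # what actually took the box down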

H110Hawk
Dec 28, 2006

Sock on a Fish posted:

Say I took the mirror pool containing c1t0d0s0 and c1t8d0s0, and then one at a time removed each device from the pool and then added back as c1t(0|8)d0, allowing the pool to resilver in between moves. Also, let's say I rebooted the machine only to discover that I'd wiped out my boot sector.

How would I go about getting that back? I'm not finding any kind of recovery disks for Solaris 10.

Whoops. In theory you can just dd the boot sector from one of your old disks onto a new one. If they're in the mirror, the worst you'll do is hose one of them. Remember to do it to the disk device itself (c1t0d0) or the whole-disk partition (c1t0d0s2).

grub-install or whatnot *should* work; googling around turned up this untested bit:

http://opensolaris.org/jive/message.jspa?messageID=179529

As for what to dd on and off, the boot sector is a set size, and from there it should be enough to get you reading ZFS. Googling "dd grub" gave this:

http://www.sorgonet.com/linux/grubrestore/

You will want to use the stage1 loader first and see what happens. I don't know where they switch from raw disk reading to actually being bootstrapped.
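
On Solaris 10 x86 the supported route is installgrub rather than grub-install; a hedged sketch, along with the blunt dd option described above (device names assume the mirror in the quote):

code:
# supported: write stage1/stage2 onto the slice you boot from
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

# blunt instrument: copy the MBR (first 512-byte sector) from the surviving
# mirror half -- only sane if both disks are partitioned identically
dd if=/dev/rdsk/c1t8d0p0 of=/dev/rdsk/c1t0d0p0 bs=512 count=1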

H110Hawk
Dec 28, 2006

Syano posted:

What about provisioning? Is it usually worth it to split up the disks into separate raid groups or just build one raid set from all the available disks? Or is this something you need to know more about your IO load to make a decision on?

This depends on your workload. If you make one large raid set and then carve up LUNs, you will have the maximum possible I/O throughput, but any one VM can bog down the rest of them with an I/O spike.

Consider a logging, email, and sql server running on your array, each with its own LUN. We all know logging services sometimes go batshit insane and start logging things several times per second until you kill them. Do you want that to be able to bog down your email and sql service until you fix it? There are very real tradeoffs, and my suggestion to you is, if you have the time, to test out several setups.

You may also want to consider which workloads have the heaviest read load (email, typical sql) and which have the heaviest write load (logging, atypical sql) when deciding what to combine. Figure out how the various read and write caches interact and you may be able to squeeze out some extra iops, but I will leave that part to the more experienced in this field.
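
Before committing to a layout, it's worth pulling a day or two of per-device numbers off the hosts to see each workload's real read/write mix; nothing fancier than iostat is needed:

code:
iostat -xn 30     # Solaris: r/s, w/s and %b per device
iostat -dxk 30    # Linux (sysstat package): rkB/s vs wkB/s per device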

H110Hawk
Dec 28, 2006

bmoyles posted:

What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed.

They suck balls, for lack of a more elegant way of phrasing it. Their "device" is just Plan 9 with an AoE stack on it. They're slow, latent, and bug-prone. We bought 400-500TB worth of them a few years back, using 750GB disks, and regretted it every second of the way.
