lilbean
Oct 2, 2003

LE posted:

Don't forget about Sun!
I was just gonna post this. I ordered a 4540 today (more cores, more RAM, etc). Can't loving wait for it to arrive.

I plan on using it mostly for staging backups to disk, but also as a general NFS storage server.

lilbean
Oct 2, 2003

H110Hawk posted:

I've started the process of getting a try-and-buy of a 4540+J4000. I'm waiting to see if they've fixed a minor sata driver problem.

http://www.dreamhoststatus.com/index.php?s=file+server+upgrades
Hm, haven't heard of that one. Do you have a bug ID for that? Is it a performance related issue?

lilbean
Oct 2, 2003

H110Hawk posted:

They seem to not be making a big deal of it. The X4500 shipped with a broken SATA driver, which they consider low priority, even though the box is basically six 8-port SATA cards with some CPU and memory stuffed in the back. We had to install IDR137601 (or higher) for Solaris 10u4 to fix it. The Thumpers all ship with u3, so first you have to do a very tedious upgrade process.

Sorry, I don't have a bug ID. OpenSolaris suffers as well, google "solaris sata disconnect bug" or "solaris marvell" and you will find some people who hit it. It's pretty much anyone who puts any kind of load on a thumper. Or in my case, 29 thumpers.
Jeeeeesus Christ, 29 of them? Nicely done. How do you have your zpools laid out on them? We're basically going to use ours as a backup-to-disk target (with NetBackup 6.5's disk staging setup), so we'll be writing and reading in massive sequential chunks. We plan on benchmarking different layouts, like 40 drives in mirrors of two, and raidz and raidz2 vdevs in different group sizes.

It'll probably take us weeks just to figure out the best layout for our load.
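
For anyone curious, the layouts I'm planning to benchmark look roughly like this (the c#t#d# names are placeholders, not the Thumper's real device paths):
code:
# 40 drives as 20 two-way mirrors
zpool create tank mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0 ...

# raidz vdevs in groups of 5
zpool create tank raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 \
                  raidz c5t0d0 c0t1d0 c1t1d0 c2t1d0 c3t1d0 ...

# raidz2 vdevs in groups of 8 (and so on for other group sizes)
zpool create tank raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c0t1d0 c1t1d0 ...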

Edit: As for the u3, I'm not too worried about that. We've used LiveUpgrade extensively to move things from Solaris 8 to 10 as well as for patching systems with less downtime, so I imagine our Thumper's system disk will be an SVM mirror across 2 of the physical disks, with a third being reserved for the upgrades.
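
For anyone who hasn't done the LiveUpgrade shuffle, it's roughly this (the BE name and media path are made up):
code:
# build an alternate boot environment on the spare system disk
lucreate -n s10u6 -m /:/dev/dsk/c2t0d0s0:ufs

# upgrade the inactive BE from the new media, then switch to it
luupgrade -u -n s10u6 -s /mnt/s10u6_media
luactivate s10u6
init 6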

lilbean
Oct 2, 2003

H110Hawk posted:

Thanks. :) They're big, loud, HEAVY, NOISY monsters, but if you don't care about power redundancy you can stuff 6 of them in a 120v 60amp rack! Once they're purring along with that IDR they're lots of fun. :3:
Well, with only one it shouldn't be too much trouble. As for the weight, I think I'll make our co-op student rack-mount the thing - and take the cost out of his paycheck if he breaks it by dropping it.

quote:

Oh, and a stock thumper zpool won't rebuild from a spare, either. It gets to 100% and starts over. Enjoy! :cheers:
Yeesh, is that with the unpatched Solaris 10 that comes with it? I'd planned on a fresh install once I get it with the latest ISOs and then patching it.

lilbean
Oct 2, 2003

Here's a storage question that's unrelated to SANs and NASes - I just upgraded my company's backup system from LTO2 to LTO4 (which is loving awesome, I might add). But now I've got LTO2 tapes coming back from offsite every week and I'm not sure what to do with them. Blanking them is easy enough, but is there somewhere that recycles them or something? Anyone else have to deal with this?

lilbean
Oct 2, 2003

Catch 22 posted:

IronMountain will shred them. Yes SHRED them. They even shred harddrives. HARDDRIVES!
Yeah I know that, but it'd be a waste to shred a couple hundred tapes.

lilbean
Oct 2, 2003

Catch 22 posted:

Shredding is cooler than being green, but...
http://www.dell.com/content/topics/segtopic.aspx/dell_recycling?c=us&cs=19&l=en&s=dhs

Looks like an odd site, but hey.
http://www.recycleyourmedia.com/
Thanks, the second site looks perfect actually. And yeah, shredders rock - like so:
http://www.youtube.com/watch?v=9JL77ECcOoQ

lilbean
Oct 2, 2003

H110Hawk posted:

code:
tastesgreat> cf status
Cluster enabled, lessfilling is up.
Edit: I'm retarded.

lilbean
Oct 2, 2003

I'm scoping out a low-cost SAN to share between a couple of database servers for an active-passive failover setup. They're Sun servers, so I'm inclined to pick up something along the lines of the J4200. That'll suit my needs fairly well, and I don't need anything too high-caliber since it's for staging and testing databases. Anyone have experience with the J4000 line, or can anyone recommend something similar? It'll basically come to around $4K with a couple of HBAs and a full load of 250 gigabyte disks.

lilbean
Oct 2, 2003

Alright, I feel like a cheap bastard for asking this, but here goes. We have an X4540 at work and it's awesome. It came loaded up with 250 gigabyte Seagate SATA drives, and I'd like to upgrade one of the vdevs (6 drives) plus a hot spare to 1 terabyte drives. My Sun vendor (unsurprisingly) wants $850 for the Sun-branded Seagate ES.2 1TB drives. I can get the same drives from CDW for about $250. These are Canadian prices, by the way.

I've also had Sun J4200 disk arrays running here since January or so, and a few of them have had their 250GB drives replaced with the cheaper Seagate Barracuda 7200.11 drives (after upgrading their firmware, fortunately), and they're working fine. Only one has failed, with block errors, and I haven't seen any strange RAID dropouts, caching issues, or other weirdness that could be attributed to non-enterprise firmware.

So the real question is, can I cheap out and use the 7200.11/7200.12 drives for the X4540 without any issue? They're literally half the cost of the ES.2 disks. Also, I'm not worried about support since we've confirmed that issues not caused by third-party disks are still supported.
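
If I do cheap out, the swap itself is just the usual one-disk-at-a-time replace - something like this, with a made-up device name, and as far as I know the vdev only grows once every disk in it is the bigger size:
code:
# pull a 250GB disk, slot the 1TB drive into the same bay, resilver onto it
zpool replace tank c3t4d0
zpool status tank    # wait for the resilver before touching the next disk

# repeat for the other five disks and the spare; on newer builds
# 'zpool set autoexpand=on tank' picks up the extra space, otherwise
# an export/import of the pool does it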

lilbean
Oct 2, 2003

Bluecobra posted:

Why don't you just use the same model drives that Sun uses? According to the Sun System Handbook, the X4540 uses either 1TB Hitachi HUA721010KLA330 disks or 1TB Seagate ST31000340NS disks. Though I can't find anything that says you can or cannot mix disks in the X4500/X4540, I don't see why you couldn't.
Those are the ES.2 disks, which are over twice the cost of the 7200.12 disks. Mixing and matching is fine; I'm just more worried about the consumer-firmware drives causing issues.

lilbean
Oct 2, 2003

Bluecobra posted:

$159 for a 1TB enterprise-quality disk isn't that much money. Plus I'd be careful of using other disks unless you're sure that stuff like ZFS cache flushes work correctly.
That's pretty much what I'm asking, but I've used the non-ES.2 drives in our J4200s with no issues (on ZFS as well). Plus the ES.2 drives are literally twice the price of the non-enterprise ones.

Saukkis posted:

Those 7200 series drives probably won't support TLER/ERC, so that may be a problem. See if you can find a way to enable it.
I thought TLER was specific to Western Digital drives.

lilbean fucked around with this message at 20:43 on Jun 29, 2009

lilbean
Oct 2, 2003

A while back I mentioned filling up a 4540 with lovely consumer disks, and last Friday I went ahead and did just that. Well, it's not full yet - to begin with I've replaced one ZFS vdev of 6 disks, plus one of the available hot spares, with Seagate 1TB 7200.12 drives ($100 CDN each). Over the past week and a bit we've pounded the poo poo out of them, since our 4540 is our primary backup server and acts as a disk-to-disk staging area. No problems whatsoever so far.

Also, I think my manager is ready to blow me for being able to expand at 20 cents per usable gigabyte (after RAIDZ2 and a spare are taken into account).
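
Rough math on that, for anyone checking:
code:
7 drives x $100 CDN                = $700  (6-disk RAIDZ2 vdev + 1 spare)
usable space = 4 data disks x 1TB  = ~4,000 GB
$700 / 4,000 GB                    = ~$0.18 per usable GB, call it 20 cents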

lilbean
Oct 2, 2003

It's neat, but each power supply runs about half of the components. That's doubling a point of failure, and a pretty crappy compromise.

lilbean
Oct 2, 2003

ragzilla posted:

Depends on their environment - if they also implement redundancy at the network or application level (storing the same file on multiple of these boxes), a la waffleimages/LeftHand, it doesn't really matter if one of the boxes dies on rebuild; just get the copies redistributed and bring the node back into the cluster when it's fixed. But at that point you're just reinventing MogileFS/GoogleFS.
Oh sure, I know you can replicate everything at a higher-than-device level. It just seems to me that I'd rather add a few thousand dollars for a higher grade power system.

lilbean
Oct 2, 2003

H110Hawk posted:

Agreed. You should compliment him on his quick thinking on the vibration patterns, though. Ask him if it was his idea. Sales guys occasionally need to be laughed at and called out.
Our Sun reseller gave us this bullshit before, and we called him out on it. He said it was in the sales team's training literature. Dickheads.

lilbean
Oct 2, 2003

Echidna posted:

It's amazing what putting an order through on the last day of the month can do, when they have targets to meet...
Oh absolutely. My last big order was six Sun T5140 systems and a couple of J4200 disk arrays for them, and our CDW sales rep was falling over himself to get us to order by the end of the month (in January). He called me twice a day to check on the status and whatnot, until I finally e-mailed him to tell him to calm down.

So he calls me and says, "Look, if I make this sale I win a 42-inch plasma TV." As if I give a poo poo. So one of our other Sun sales guys gets us the lowest price (Parameter Driven Solutions in Toronto) and we go with them.

Then the CDW sales guy leaves a threatening message on my voicemail! He's angry as gently caress and saying poo poo like "I worked for a month on your quote and I DON'T LOSE SALES" and so on. I didn't reply, but CDW's website helpfully lists the contact information for your rep's manager, which let me forward the voicemail to him. So the salesman no longer works there :)

Fake edit: Also, I've seen Apple sales people and sales engineers give the same lines of bullshit about vibration testing.

lilbean
Oct 2, 2003

FISHMANPET posted:

How much experience do people here have with Sun's x4500/x4550 (thumper/thor)? I've got one at work and I'm going to be having a lot of stupid little questions, and I'm wondering if I can get them answered here or if I need to go register on the OpenSolaris forums or something.
I've used one for a year now on Solaris 10, beaten the poo poo out of it, and I love it. H110Hawk follows the thread too and manages a couple dozen of them, so this is as good a place to ask questions as any.

lilbean
Oct 2, 2003

You don't need any card for iSCSI - just use the four built-in GigE ports. If you're feeling really spendy then get a PCIx 10gE card.

As far as the disk layout goes, it can be pretty flexible, but try to build vdevs that take one disk from each of the controllers. I went with 7 RAIDZ2 vdevs and four hot spares. At first I thought that many hot spares was a waste, but then we started swapping disks out for larger ones, and we do that by failing half of a vdev over to the spares, replacing those disks, then doing the same for the other half (which means a full vdev upgrade only cracks the chassis twice).
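
Roughly what that looks like, with made-up device names (the exact spare bookkeeping takes a detach or two depending on the ZFS version):
code:
# 1. push three of the vdev's members over to hot spares
zpool replace tank c2t3d0 c5t7d0    # c5t7d0/c6t7d0/c7t7d0 are spares
zpool replace tank c3t3d0 c6t7d0
zpool replace tank c4t3d0 c7t7d0
# 2. after the resilver, pull those three disks and slot in the bigger ones
# 3. resilver back onto the new disks and free the spares up again
zpool replace tank c2t3d0
zpool detach tank c5t7d0
# ...same for the other two, then repeat 1-3 for the other half of the vdev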

lilbean
Oct 2, 2003

Misogynist posted:

Interestingly, the Fishworks stuff also has better analytics than most of the solutions I've seen.
I wish I could add that UI to my Thumper running Solaris 10.

lilbean
Oct 2, 2003

Serfer posted:

Well, I was trying to figure out how they had drives connected to both systems really. I wanted to build something similar and use something like Nexenta, but it's looking more like these things are specialized and not available to people who want to roll their own.
You may be better off looking at just building a couple of systems and using something like DRBD to mirror a slab of disks.
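
A minimal DRBD resource for that kind of two-box mirror looks something like this (hostnames, disks and addresses are all invented); whatever filesystem or export you want then sits on /dev/drbd0 on whichever node is primary:
code:
# /etc/drbd.d/slab0.res
resource slab0 {
  protocol C;                  # synchronous replication
  on filer-a {
    device    /dev/drbd0;
    disk      /dev/sdb;        # the local slab of disks (RAID volume)
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on filer-b {
    device    /dev/drbd0;
    disk      /dev/sdb;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}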

lilbean
Oct 2, 2003

I would just mitigate most of the risk with a UPS and - if you can afford it - systems with redundant power. A motherboard can still blow and take the whole system down instantly, but most drives honor the flush and sync commands well enough not to worry too much.

lilbean
Oct 2, 2003

Sooo, I'm in the enviable position of looking at SSD-based disk arrays right now. I'm looking at the Sun F5100; it's pretty expensive but seems completely solid (and loving awesome). Anyone have experience with other SSD arrays? Should I even be thinking about filling a standard disk array with flash disks (probably X25-Es)?

lilbean
Oct 2, 2003

adorai posted:

It would be useful to know the intended purpose and budget.
It's for a very IO-heavy production OLTP database of about 400GB (Oracle 10 on Solaris SPARC systems). I think the applications on it could use a lot of tender loving care, but the powers that be decided it's better to fix it with hardware (of course!)

The floated budget is around 50K. Currently we have a pair of Sun 2530s with fast disks and multipath I/O but hey, if that's not enough then that's not enough. :downs:

EoRaptor posted:

There is also the scenario that the disk controller can't keep up with the i/o abilities of the drives, and bottlenecks the whole setup. No idea if this is a realistic possibility.
Yeah, this was my objection too.

Edit: Another option is adding a pair or more of SSDs to the Oracle host itself and carving them up between some of the high-activity datafiles, the ZIL, and maybe giving the rest over to L2ARC.
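
The ZFS side of that is pretty painless - something along these lines, with made-up device names and assuming the datafiles already live in a zpool:
code:
# slice up one SSD, then hand the slices to the pool
zpool add orapool log c4t0d0s0      # small slice as a dedicated ZIL (slog)
zpool add orapool cache c4t0d0s6    # the rest as L2ARC
# a slice on the second SSD could be its own little pool for hot datafiles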

lilbean fucked around with this message at 03:04 on May 21, 2010

lilbean
Oct 2, 2003

FISHMANPET posted:

Psst, It's Solaris 10 on an Oracle SPARC system.
Oh I know, god I know. We're actually planning to go to Linux/x86 next year.

Edit: Also, to be fair to the SPARCs, the Intel/AMDs rock their loving world on the CPU-bound queries, but for IO throughput the SPARCs have kept the lead. At least in our environment. And ZFS is awesome.

lilbean fucked around with this message at 03:11 on May 21, 2010

lilbean
Oct 2, 2003

EnergizerFellow posted:

:words:
Yeah, the SSDs in a standard array are out (thankfully I have enough pull to kibosh some plans). The working set is the 400 GB - the whole system is probably 1TB, but more than half is local backups, low-traffic schemas, etc. There are a *lot* of terrible, terrible ORM-generated queries hammering the database, involving numerous nested views across instances and schemas and other horrible poo poo (Cartesian products, etc). Obviously the right answer is to fix the products, but ... yeah.

We don't have any investment in a SAN deployment currently. We use SAS arrays full of 15K disks for everything, with at most two hosts connected to each array for failover, so there isn't really a cost to migrate off of them.

I have been seriously looking at using SSDs as a front-end to the disk arrays, but the bitch is that only ZFS can really take advantage of that, from what I know. If we do end up pulling the trigger in January or February and moving to x86 RHEL systems, then we'll have to figure out the deployment plan all over again to use them decently. If we got an SSD array, then both Linux and Solaris could take advantage of it fully (in theory).

I looked at the RamSan stuff a long time ago but had totally forgotten about them, so I'll check again and try to get pricing.

lilbean
Oct 2, 2003

EnergizerFellow posted:

What's the budget we're working with? Also, pretty much this...
The budget for the storage improvement is 50K. We already have a pair of Sun 2530s with 12x600GB 15K SAS disks, and three HBAs to put in each host (active-passive).

Edit: Also, there's very little that's long-term about this. The idea is to temporarily paper over the busted applications with hardware to buy us time to actually fix them, with the caveat that we may never have the time. I wish I was joking.

lilbean
Oct 2, 2003

adorai posted:

Can you just get ahold of Sun and pick up a few Zeus SSDs for your ZIL and L2ARC on the existing 2530s?

edit: I see that the 2530s are apparently not using ZFS. If you don't need HA you could try a 7200 series unit from Sun.
Right now the setup is ZFS on a single 2530. We do hardware RAID10 on two groups of six disks, assign one group to each controller, and then use ZFS on the host to combine the two LUNs into one pool. Additionally we have two SSDs (Sun's rebranded X25-E drives) that we're going to use for the L2ARC, a few hot Oracle partitions, and possibly the ZIL. I did a bit of testing for the ZIL, though, and it seems the two controllers in the array soak up the log writes into their cache anyway, so the system didn't get a big boost from moving the ZIL to the SSDs.

We have a second 2530 that's going to be used for secondary purposes - other hosts, backups from Oracle, and so on - but none of those uses are essential... So we have the option of growing the pool across that chassis as well, in the same config, to increase the spindle count and get more cache.
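
In zpool terms the current layout (and that possible expansion across the second chassis) is roughly this, with invented LUN names:
code:
# one hardware RAID10 LUN per 2530 controller; ZFS stripes across the two
zpool create orapool c5t600A0B80001111d0 c5t600A0B80002222d0

# growing onto the second 2530 would just be adding its two LUNs
zpool add orapool c5t600A0B80003333d0 c5t600A0B80004444d0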

The 50K dream spend is to basically either improve on that current scenario or replace it entirely with something bonerific like the F5100.

Edit: It's basically a pretty solid setup right now, and this is obviously throwing more money at apps with poor performance.

lilbean fucked around with this message at 23:48 on May 21, 2010

lilbean
Oct 2, 2003

EnergizerFellow posted:

Why not striped for the host-level combination?
When top-level vdevs are added to a zpool like that, writes get striped across the devices anyway, so with multipathing plus the striping it's pretty damned fast.

You sound bang on about 50K being low for the expansion (which is why I'm leaning towards the flash array).

lilbean
Oct 2, 2003

Thanks, excellent suggestions all around.

EoRaptor - I can't post anything, but there's a good mix of CPU-bound and IO-bound queries. There are dozens of apps; some of them perform as expected and a lot don't. We're also potentially replacing the server as part of this exercise, with a separate budget. It's currently a SPARC T2+ system (1.2GHz). We've tested on Intel systems and get some boost, but the SPARC's memory bandwidth seems to nullify most of the advantage. On Monday we're going to profile the apps on an i7, as we're hoping the DDR3 memory bus will be better.

If the SPARC really does have an advantage then gently caress. We could get a faster T2+ system or possibly an M4000 :-( Who the gently caress buys an M4000 in 2010? Anyways, that's life.

lilbean
Oct 2, 2003

Bluecobra posted:

I'm really pissed that loving Oracle is EOLing the X4540 in just a few days. They also silently killed their JBODs like the J4400/J4500 arrays. Unified Storage is nice and all, but it's way pricier to get 48TB raw than it is with an X4540. Also, now you can only buy Oracle "Premier" support, which is roughly three times the price of Sun Gold support. gently caress you Oracle. :arghfist:
Wait what? The 4540 was one of the best things Sun ever made. Goddamnit.

lilbean
Oct 2, 2003

Speaking of Dell, anybody actually use the MD3200 product? It *looks* decent, and we need a successor to the Sun 2530 series - does it fit the bill?

lilbean
Oct 2, 2003

Syano posted:

I have one in production at the moment. For an entry-level SAN, I'm not sure it can be beat. The one big feature I wish it had was replication, but other than that it's been a great unit.
Okay thanks. We're pretty much using the Sun gear as barebones arrays that are shared between hosts for failover of databases and virtual machines, so replication is not a deal-killer.
