|
LE posted:Don't forget about Sun! I plan on using it mostly for staging backups to disk, but also as a general NFS storage server.
|
# ¿ Aug 29, 2008 15:28 |
|
H110Hawk posted:I've started the process of getting a try-and-buy of an X4540 + J4000. I'm waiting to see if they've fixed a minor SATA driver problem.
|
# ¿ Aug 29, 2008 16:24 |
|
H110Hawk posted:They seem to not be making a big deal of it. The X4500 shipped with a broken SATA driver, which they consider low priority, even though the box is essentially six 8-port SATA controllers with some CPU and memory stuffed in the back. We had to install IDR137601 (or higher) for Solaris 10u4 to fix it. The thumpers all ship with u3, so first you have to do a very tedious upgrade process. It'll probably take us weeks just to figure out the best layout for our load. Edit: As for the u3, I'm not too worried about that. We've used LiveUpgrade extensively to move things from Solaris 8 to 10 as well as for patching systems with less downtime, so I imagine our Thumper's system disk will be an SVM mirror across two of the physical disks, with a third being reserved for the upgrades.
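For reference, the LiveUpgrade flow that makes the u3-to-u4 jump less painful looks roughly like this. A minimal sketch only: the spare slice, install image path, and patch directory are all hypothetical, and the exact IDR package name is whatever Sun ships you. code:
# Create an alternate boot environment on a spare slice,
# copying the running u3 root into it.
lucreate -n s10u4 -m /:/dev/dsk/c5t4d0s0:ufs

# Upgrade the inactive BE to u4 from an install image.
luupgrade -u -n s10u4 -s /net/installserver/s10u4

# Apply the IDR to the inactive BE, then activate it.
luupgrade -t -n s10u4 -s /var/tmp/patches IDR137601
luactivate s10u4

# Must be init 6 (not reboot) so the luactivate hooks run.
init 6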
|
# ¿ Aug 30, 2008 00:30 |
|
H110Hawk posted:Thanks. They're big, HEAVY, NOISY monsters, but if you don't care about power redundancy you can stuff six of them in a 120V/60A rack! Once they're purring along with that IDR they're lots of fun. quote:Oh, and a stock thumper zpool won't rebuild from a spare, either. It gets to 100% and starts over. Enjoy!
|
# ¿ Sep 1, 2008 20:31 |
|
Here's a storage question that's unrelated to SANs and NASes: I just upgraded my company's backup system from LTO2 to LTO4 (which is loving awesome, I might add). But now I'm stuck with LTO2 tapes coming back from offsite every week and I'm not sure what to do with them. Blanking them is easy enough, but is there somewhere that recycles them or something? Anyone else have to deal with that?
|
# ¿ Sep 30, 2008 18:07 |
|
Catch 22 posted:Iron Mountain will shred them. Yes, SHRED them. They even shred hard drives. HARD DRIVES!
|
# ¿ Sep 30, 2008 18:44 |
|
Catch 22 posted:Shredding is cooler than being green, but... http://www.youtube.com/watch?v=9JL77ECcOoQ
|
# ¿ Sep 30, 2008 19:02 |
|
I'm scoping out a low-cost SAN to share between a couple of database servers for an active-passive failover setup. They're Sun servers, so I'm inclined to pick up something along the lines of the J4200. That'll suit my needs fairly well, and I don't need anything too high-caliber since it's for staging and testing databases. Anyone have experience with the J4000 line, or can anyone recommend something similar? It basically comes down to around $4K with a couple of HBAs and a full load of 250GB disks.
|
# ¿ Dec 19, 2008 22:20 |
|
Alright, I feel like a cheap bastard for asking this, but here goes. We have an X4540 at work and it's awesome. It came loaded with 250GB Seagate SATA drives, and I'd like to upgrade one of the vdevs (six drives) plus a hot spare to 1TB drives. My Sun vendor (unsurprisingly) wants $850 for the Sun-branded Seagate ES.2 1TB drives. I can get the same drives from CDW for about $250. These are Canadian prices, by the way. I also have Sun J4200 disk arrays here, running since January or so, and a few of them have had their 250GB drives replaced with the cheaper Seagate Barracuda 7200.11 drives (after upgrading their firmware first, fortunately) and they're working fine. Only one has failed, with block errors, and I haven't had any strange RAID dropouts, caching issues, or other problems that could be attributed to non-enterprise firmware. So the real question is: can I cheap out and use the 7200.11/7200.12 drives in the X4540 without any issue? They're literally half the cost of the ES.2 disks. Also, I'm not worried about support, since we've confirmed that issues not caused by the third-party disks are still covered.
|
# ¿ Jun 29, 2009 15:38 |
|
Bluecobra posted:Why not just use the same model drives that Sun uses? According to the Sun System Handbook the X4540 uses either 1TB Hitachi HUA721010KLA330 disks or 1TB Seagate ST31000340NS disks. I can't find anything that says whether you can mix disks in the X4500/X4540, though I don't see why you couldn't.
|
# ¿ Jun 29, 2009 18:07 |
|
Bluecobra posted:$159 for a 1TB enterprise-quality disk isn't that much money. Plus I'd be careful about using other disks unless you're sure that stuff like ZFS cache flushes works correctly. Saukkis posted:Those 7200-series drives probably won't support TLER/ERC, so that may be a problem. See if you can find a way to enable it. lilbean fucked around with this message at 20:43 on Jun 29, 2009 |
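On the "find a way to enable it" front: if the drive firmware exposes SCT ERC (no guarantee on consumer 7200.11/.12s), smartmontools can set it, though the setting typically resets on power cycle so it needs re-applying at boot. A hedged sketch; the device path is hypothetical. code:
# Show the current SCT ERC (error recovery control) timeouts.
smartctl -l scterc /dev/rdsk/c1t2d0

# Set read/write recovery to 7 seconds (units are 100 ms),
# approximating TLER behaviour.
smartctl -l scterc,70,70 /dev/rdsk/c1t2d0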
# ¿ Jun 29, 2009 20:41 |
|
A while back I mentioned filling up a 4540 with lovely consumer disks, and last Friday I went ahead and did just that. Well, not full yet: to begin with I've replaced one ZFS vdev of six disks, plus one of the available hot spares, with Seagate 1TB 7200.12 drives ($100 CDN each). Over the last week we've pounded the poo poo out of them, since our 4540 is our primary backup server and acts as a disk-to-disk staging area. No problems whatsoever so far. Also, I think my manager is ready to blow me for being able to expand at 20 cents per usable gigabyte (after RAIDZ2 and a spare are taken into account).
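The mechanics of the swap, for anyone following along, are just a replace-and-resilver loop per disk. Rough sketch with a hypothetical pool name, and the export/import at the end assumes a ZFS old enough to lack the autoexpand property. code:
# Swap the new 1TB disk into the same slot, then tell ZFS.
zpool replace tank c1t0d0

# Wait for the resilver to finish before touching the next disk.
zpool status -v tank

# After all six members are 1TB, re-import to pick up the new
# size (needs a maintenance window on a busy backup box).
zpool export tank && zpool import tank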
|
# ¿ Aug 28, 2009 21:09 |
|
It's neat, but each power supply runs about half of the components. That doubles your points of failure, which is a pretty crappy compromise.
|
# ¿ Sep 2, 2009 23:10 |
|
ragzilla posted:Depends on their environment: if they also implement redundancy at the network or application levels (storing the same file on multiples of these boxes) a la waffleimages/LeftHand, it doesn't really matter if one of the boxes dies on rebuild; just get copies redistributed and bring the node back into the cluster when it's fixed. But at that point you're just reinventing MogileFS/GoogleFS.
|
# ¿ Sep 3, 2009 01:19 |
|
H110Hawk posted:Agreed. You should compliment him on his quick thinking on the vibration patterns, though. Ask him if it was his idea. Sales guys occasionally need to be laughed at and called out.
|
# ¿ Sep 14, 2009 16:35 |
|
Echidna posted:It's amazing what putting an order through on the last day of the month can do, when they have targets to meet... So he calls me and he says, "Look, if I make this sale I win a 42-inch plasma TV." As if I give a poo poo. So one of our other Sun sales guys gets us the lowest price (Parameter Driven Solutions in Toronto) and we go with them. Then the CDW sales guy leaves a threatening message on my voicemail! He's angry as gently caress, saying poo poo like "I worked for a month on your quote and I DON'T LOSE SALES" and so on. I didn't reply, but CDW's website helpfully lists your rep's manager's contact information, which let me forward the voicemail to him. So the salesman no longer works there. Fake edit: Also, I've seen Apple sales people and sales engineers give the same lines of bullshit about vibrational testing.
|
# ¿ Sep 15, 2009 12:33 |
|
FISHMANPET posted:How much experience do people here have with Sun's X4500/X4540 (Thumper/Thor)? I've got one at work and I'm going to be having a lot of stupid little questions, and I'm wondering if I can get them answered here or if I need to go register on the OpenSolaris forums or something.
|
# ¿ Feb 5, 2010 19:49 |
|
You don't need any card for iSCSI; just use the four built-in GigE ports. If you're feeling really spendy, get a PCIe 10GbE card. As far as the disk layout goes it can be pretty flexible, but try to build vdevs that take one disk from each of the controllers. I went with seven RAIDZ2 vdevs and four hot spares. At first I thought that many hot spares was a waste, but then we started swapping disks out for larger ones, and we do that by failing half of a vdev over to the spares, replacing them, then doing it once more (so a full vdev upgrade means cracking the chassis only twice).
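To make the one-disk-per-controller idea concrete, here's roughly what pool creation looks like, assuming the six controllers enumerate as c0 through c5. Device names are purely illustrative, and only three of the seven vdevs are shown. code:
# Each raidz2 vdev takes one disk from each controller, so a
# dead controller costs every vdev only one disk.
zpool create tank \
  raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
  raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
  raidz2 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0 \
  spare  c0t7d0 c1t7d0 c2t7d0 c3t7d0

# Sanity-check the grouping before loading data.
zpool status tank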
|
# ¿ Feb 5, 2010 20:06 |
|
Misogynist posted:Interestingly, the Fishworks stuff also has better analytics than most of the solutions I've seen.
|
# ¿ Mar 3, 2010 16:28 |
|
Serfer posted:Well, really I was trying to figure out how they had the drives connected to both systems. I wanted to build something similar using something like Nexenta, but it's looking more like these things are specialized and not available to people who want to roll their own.
|
# ¿ Mar 7, 2010 21:34 |
|
I would just mitigate most of the risk by using a UPS and, if you can afford it, systems with redundant power. A motherboard can still blow and take down the whole system instantly, but most drives honor the flush and sync commands well enough that you shouldn't worry too much.
|
# ¿ Mar 10, 2010 19:02 |
|
Sooo, I'm in the enviable position of looking at SSD-based disk arrays right now. I'm looking at the Sun F5100; it's pretty expensive but seems completely solid (and loving awesome). Anyone have experience with other SSD arrays? Should I even be thinking about filling a standard disk array with flash disks (probably X25-Es)?
|
# ¿ May 21, 2010 02:23 |
|
adorai posted:It would be useful to know the intended purpose and budget. The floated budget is around $50K. Currently we have a pair of Sun 2530s with fast disks and multipath I/O, but hey, if that's not enough then that's not enough. EoRaptor posted:There is also the scenario that the disk controller can't keep up with the I/O abilities of the drives, and bottlenecks the whole setup. No idea if this is a realistic possibility. Edit: Another option is adding a pair or more of SSDs to the Oracle host itself and carving them up between some of the high-activity datafiles, the ZIL, and maybe giving the rest over to L2ARC. lilbean fucked around with this message at 03:04 on May 21, 2010 |
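For the SSDs-in-the-host option, the ZFS half is a couple of one-liners; carving out the datafile space is the fiddly part. Sketch only, with a hypothetical pool name and slice layout for a pair of SSDs. code:
# Mirrored slog so synchronous Oracle writes hit flash first.
zpool add tank log mirror c2t0d0s0 c2t1d0s0

# Leftover slices as L2ARC; cache devices can't be mirrored,
# but losing one just costs cache hits.
zpool add tank cache c2t0d0s1 c2t1d0s1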
# ¿ May 21, 2010 03:00 |
|
FISHMANPET posted:Psst, it's Solaris 10 on an Oracle SPARC system. Edit: Also, to be fair to the SPARCs, the Intel/AMD boxes rock their loving world on the CPU-bound queries, but for I/O throughput the SPARCs have kept the lead. At least in our environment. And ZFS is awesome. lilbean fucked around with this message at 03:11 on May 21, 2010 |
# ¿ May 21, 2010 03:08 |
|
EnergizerFellow posted:We don't have any investment in any SAN deployment currently. We use SAS arrays full of 15K disks for everything, and at most have two hosts connected to each array for a failover situation, so there's not really a cost to migrate off of it. I have been seriously looking at using SSDs as the front-end to the disk arrays, but the bitch is that only ZFS can really take advantage of that, from what I know. If we do end up pulling the trigger in January or February and moving to x86 RHEL systems, then we'll have to figure out the deployment plan again for how to use them decently. If we got an SSD array, then both Linux and Solaris could take advantage of it completely (in theory). I looked at the RamSan stuff a long time ago but had totally forgotten about them, so I'll check again and try to get pricing.
|
# ¿ May 21, 2010 14:01 |
|
EnergizerFellow posted:What's the budget we're working with? Also, pretty much this... Edit: Also, there's very little that's long-term about this. The idea is to temporarily patch over the busted applications now, to buy us time to fix them properly, with the caveat that we may never have the time. I wish I was joking.
|
# ¿ May 21, 2010 21:17 |
|
adorai posted:Can you just get ahold of Sun and pick up a few Zeus SSDs for your ZIL and L2ARC on the existing 2530s? We have a second 2530 that is going to be used for secondary purposes (other hosts, backups from Oracle, and so on) but none of those uses are strictly necessary... So we have the option of growing the pool across that chassis as well, in the same config, to improve the spindle count and get more cache. The $50K dream spend is to basically either improve on that current scenario or replace it entirely with something bonerific like the F5100. Edit: It's basically a pretty solid setup right now, and this is obviously throwing more money at apps with poor performance. lilbean fucked around with this message at 23:48 on May 21, 2010 |
# ¿ May 21, 2010 23:45 |
|
EnergizerFellow posted:Why not striped for the host-level combination? You sound bang on about 50K being low for the expansion (which is why I'm leaning towards the flash array).
|
# ¿ May 22, 2010 03:08 |
|
Thanks, excellent suggestions all around. EoRaptor: I can't post anything, but there's a good mix of CPU-bound and IO-bound queries. There are dozens of apps; some of them perform as expected and a lot don't. We're potentially replacing the server too as part of this exercise, with a separate budget. It's currently a SPARC T2+ system (1200MHz). We've tested on Intel systems and get some boost, but the SPARC's memory speed seems to nullify most of the advantage. On Monday we're going to profile the apps on an i7, as we're hoping the DDR3 memory bus will be better. If the SPARC really does have an advantage then gently caress. We could get a faster T2+ system or possibly an M4000 :-( Who the gently caress buys an M4000 in 2010? Anyways, that's life.
|
# ¿ May 22, 2010 17:14 |
|
Bluecobra posted:I'm really pissed that loving Oracle is EOLing the X4540 in just a few days. They also silently killed their JBODs like the J4400/J4500 arrays. Unified Storage is nice and all, but it's way more pricey to get 48TB raw than it is to go with an X4540. Also, now you can only buy Oracle "Premier" support, which is roughly three times the price of Sun Gold support. gently caress you Oracle.
|
# ¿ Oct 30, 2010 01:42 |
|
Speaking of Dell, anybody actually use the MD3200 product? It *looks* decent, and we need a successor to the Sun 2530 series - does it fit the bill?
|
# ¿ Dec 14, 2010 22:07 |
|
Syano posted:I have one in production at the moment. For an entry-level SAN I'm not sure it can be beat. The one big feature I wish it had was replication, but other than that it's been a great unit.
|
# ¿ Dec 16, 2010 17:18 |