Trouser Mouse Bear
Mar 20, 2004
Bancount - 1

StabbinHobo posted:

Yea, that's almost exactly what I was saying. They're implementing it because it makes for good buzzphrase-packed marketing copy. Keep in mind, the design in question only reduces the risk by a massive degree; it's never fully gone. Therefore they can claim with a straight face to be making something more reliable with RAID 6, even if so minutely that it's almost deceptive to act like it matters.


edit: remember that what you think of as "RAID 5" or "RAID 6" is really just a wildly oversimplified, dumbed-down example of what actually gets implemented on RAID controllers and in storage arrays. 3PAR, for instance, doesn't even write their RAID 5 algorithm; they license it from a 3rd-party software developer and design it into custom ASICs. Think of it like "3G" for cellular or "HDTV" for video: there are a lot of different vendors making a lot of different implementation decisions under the broad term.

Sorry to revisit this again, but I think the following, taken from http://www.thestoragearchitect.com/2008/10/29/understanding-eva-revisited/, explains what StabbinHobo said quite succinctly (for EVAs, anyway).

thestoragearchitect.com posted:

* EVA disks are placed in groups – usually recommended to be one single group unless there’s a compelling reason not to (like different disk types e.g. FC/FATA).

* Disk groups are logically divided into Redundancy Storage Sets, which can be from 6-11 disks in size, depending on the number of disks in the group, but ideally 8 drives.

* Virtual LUNs are created across all disks in a group, however to minimise the risk of data loss from disk failure, equal slices of LUNs (called PSEGs) are created in each RSS with additional parity to recreate the data within the RSS if a disk failure occurs. PSEGs are 2MB in size.

* In the event of a drive failure, data is moved dynamically/automagically to spare space reserved on each remaining disk.

The v in vRAID should really be the tip off to understanding it.
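
To make those numbers a bit more concrete, here's a rough back-of-envelope sketch (not HP's actual allocator, just the arithmetic implied by the quote above; the LUN sizes are made-up examples):

code:
# Rough arithmetic for the EVA layout quoted above. Illustrative only, not
# HP's actual allocator; the LUN sizes are made-up examples.

PSEG_MB = 2        # PSEG size quoted above
RSS_SIZE = 8       # "ideally 8 drives" per Redundancy Storage Set

def pseg_count(lun_gb):
    """Number of 2 MB PSEGs needed to hold a virtual LUN of lun_gb gigabytes."""
    return (lun_gb * 1024) // PSEG_MB

if __name__ == "__main__":
    for lun_gb in (10, 100, 500):
        print(f"{lun_gb:>4} GB LUN -> {pseg_count(lun_gb):,} PSEGs, "
              f"spread over RSSes of {RSS_SIZE} disks (plus parity within each RSS)")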


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Can anyone unfortunate enough to be managing an IBM SAN tell me if there's a way to get performance counters on a physical array, or am I limited to trying to aggregate my LUN statistics together using some kind of LUN-array mapping and a cobbled-together SMcli script?

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Misogynist posted:

Can anyone unfortunate enough to be managing an IBM SAN tell me if there's a way to get performance counters on a physical array, or am I limited to trying to aggregate my LUN statistics together using some kind of LUN-array mapping and a cobbled-together SMcli script?

I just started working at IBM (XIV really). PM me or post your specific question and hardware and I'll look it up on Monday.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

paperchaseguy posted:

I just started working at IBM (XIV really). PM me or post your specific question and hardware and I'll look it up on Monday.
I ended up just modifying my Nagios perfdata collector plugin to map the LUNs to their physical arrays, then aggregate together the LUN-level statistics into some pretty and easily-graphable numbers. Today, I woke up early to upgrade our SAN firmware, fix some physical cabling issues and fix the totally broken-rear end multipathing on our fabric, so I look to be getting the hang of this pretty quickly. :)
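
A minimal sketch of what that kind of roll-up can look like (the LUN-to-array mapping and counter names below are hypothetical, not the actual plugin):

code:
# Hypothetical sketch of rolling LUN-level counters up to their physical array,
# roughly the shape of the modified Nagios perfdata collector described above.
# The LUN-to-array mapping and the counter names are made up for illustration.
from collections import defaultdict

LUN_TO_ARRAY = {           # in practice this would be scraped from SMcli output
    "lun_web01": "array1",
    "lun_db01":  "array1",
    "lun_db02":  "array2",
}

def aggregate(lun_stats):
    """Sum per-LUN counters (e.g. IOPS, MB/s) into per-array totals."""
    totals = defaultdict(lambda: defaultdict(float))
    for lun, counters in lun_stats.items():
        array = LUN_TO_ARRAY.get(lun, "unknown")
        for name, value in counters.items():
            totals[array][name] += value
    return totals

if __name__ == "__main__":
    sample = {
        "lun_web01": {"iops": 220.0, "mb_per_s": 14.5},
        "lun_db01":  {"iops": 1480.0, "mb_per_s": 96.0},
        "lun_db02":  {"iops": 310.0, "mb_per_s": 22.3},
    }
    for array, counters in aggregate(sample).items():
        print(array, dict(counters))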

I'm going to be setting up a DS5300/DS5100 at our DR site this week, though, so I'll let you know if I have any specific questions about it.

Vulture Culture fucked around with this message at 21:59 on May 13, 2010

GrandMaster
Aug 15, 2004
laidback

Vanilla posted:

So I have a ton of perfmon stats from a certain server.

What tools do you use to analyse these? I know there's the Windows Performance Monitor tool, but I've found it a bit 'hard'.

Do you know of any third-party tools for analysing perfmon outputs?

we use this:
http://pal.codeplex.com/

It's brilliant: you feed it the log files in either binary or CSV format, it churns through the logs and spits out an HTML report, makes pretty graphs, highlights any issues, and has explanations for what each counter means.

Maneki Neko
Oct 27, 2000

Oh hay IBM people. I've got a DS4300 that I need to wipe the drives on so we can surplus it. Does IBM have a nice fancy utility to do that in bulk or anything? Didn't see anything in the storage manager client, but we are using a version that's roughly 400 years old.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
This is a long shot, but what the hell, lemme run it by you guys.

I've got a pair of Brocade 300 switches (rebranded as IBM SAN24B-4), and I'm trying to connect up each switch's Ethernet management port to a separate Cisco Nexus fabric. Each one is run with a 5020, and the switches link up to a 2148T FEX. Problem is, whenever I do this, there is no link. I can hook up the 300 to a laptop and get a link. I can hook up the 300s to each other and get a link on both. I can hook it up to some old-rear end Cisco 100-megabit 16-port switch and get a link. I can hook up other devices, like an IBM SAN and a Raritan KVM, and get links. But for some reason, the goddamn things just will not show a link when I hook them up to the gigabit ports on the 2148T.

Any ideas? The only thing I can think is that the Nexus has issues with stuff below 1 gigabit, but if that's the case, that's some of the most braindead poo poo I've ever heard.

Vulture Culture fucked around with this message at 21:12 on May 20, 2010

Maneki Neko
Oct 27, 2000

Misogynist posted:

This is a long shot, but what the hell, lemme run it by you guys.

I've got a pair of Brocade 300 switches (rebranded as IBM SAN24B-4), and I'm trying to connect up each switch's Ethernet management port to a separate Cisco Nexus fabric. Each one is run with a 5020, and the switches link up to a 2148T FEX. Problem is, whenever I do this, there is no link. I can hook up the 300 to a laptop and get a link. I can hook up the 300s to each other and get a link on both. I can hook it up to some old-rear end Cisco 100-megabit 16-port switch and get a link. I can hook up other devices, like an IBM SAN and a Raritan KVM, and get links. But for some reason, the goddamn things just will not show a link when I hook them up to the gigabit ports on the 2148T.

Any ideas? The only thing I can think is that the Nexus has issues with stuff below 1 gigabit, but if that's the case, that's some of the most braindead poo poo I've ever heard.

From what I recall during discussions with our resellers when we were picking up our 5000s + fabric extenders, Nexus 2000 is all 1gig, no 100 meg.

EDIT: Apparently the 2248s do 100/1000.

Maneki Neko fucked around with this message at 21:19 on May 20, 2010

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Okay, cool. I scrounged up a Cisco 2940. Let's see if that gets us through for the time being.

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys

Maneki Neko posted:

From what I recall during discussions with our resellers when we were picking up our 5000s + fabric extenders, Nexus 2000 is all 1gig, no 100 meg.

EDIT: Apparently the 2248s do 100/1000.
Correct on the 2148T.

We just built out a new datacenter extension with 2148Ts all over the place and people were wondering why the DRACs didn't work... Good times. At least the 2248TP is launching next month, which does support 100 Mbit.

lilbean
Oct 2, 2003

Sooo, I'm in the enviable position of looking at SSD-based disk arrays right now. I'm looking at the Sun F5100, and it's pretty expensive but seems completely solid (and loving awesome). Anyone have experience with other SSD arrays? Should I even be thinking about filling a standard disk array with flash disks (probably X25-Es?)

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

lilbean posted:

Sooo, I'm in the enviable position of looking at SSD-based disk arrays right now. I'm looking at the Sun F5100, and it's pretty expensive but seems completely solid (and loving awesome). Anyone have experience with other SSD arrays? Should I even be thinking about filling a standard disk array with flash disks (probably X25-Es?)
It would be useful to know the intended purpose and budget.

EoRaptor
Sep 13, 2003

by Fluffdaddy

lilbean posted:

Sooo, I'm in the enviable position of looking at SSD-based disk arrays right now. I'm looking at the Sun F5100, and it's pretty expensive but seems completely solid (and loving awesome). Anyone have experience with other SSD arrays? Should I even be thinking about filling a standard disk array with flash disks (probably X25-Es?)

I believe the major concern with using SSDs in an otherwise 'off the shelf' array setup is that the disk controller probably won't understand TRIM, and will therefore be unable to present TRIM capabilities to the OS or pass them along to each drive correctly. Eventually, this will cause performance degradation of the array, though I can't honestly say how much. Lifespan should be unaffected.

There is also the scenario that the disk controller can't keep up with the i/o abilities of the drives, and bottlenecks the whole setup. No idea if this is a realistic possibility.

Dedicated units, particularly ones that present over a shared file system (NFS, CIFS, etc.), should be okay, though I don't have any direct knowledge of the Sun/Oracle stuff.

lilbean
Oct 2, 2003

adorai posted:

It would be useful to know the intended purpose and budget.
It's for a very IO-heavy production OLTP database of about 400GB (Oracle 10 on Solaris SPARC systems). I think the applications on it could use a lot of tender-loving care, but the powers that be decided it's better to fix with hardware (of course!)

The floated budget is around 50K. Currently we have a pair of Sun 2530s with fast disks and multipath I/O but hey, if that's not enough then that's not enough. :downs:

EoRaptor posted:

There is also the scenario that the disk controller can't keep up with the i/o abilities of the drives, and bottlenecks the whole setup. No idea if this is a realistic possibility.
Yeah, this was my objection too.

Edit: Another option is adding a pair or more of SSDs to the Oracle host itself and carving them up between some of the high-activity datafiles and the ZIL, and maybe giving the rest over to L2ARC.

lilbean fucked around with this message at 03:04 on May 21, 2010

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

lilbean posted:

(Oracle 10 on Solaris SPARC systems

Psst, It's Solaris 10 on an Oracle SPARC system.

lilbean
Oct 2, 2003

FISHMANPET posted:

Psst, It's Solaris 10 on an Oracle SPARC system.
Oh I know, god I know. We're actually planning to go to Linux/x86 next year.

Edit: Also, to be fair to the SPARCs, the Intel/AMDs rock their loving world on the CPU-bound queries, but for IO throughput the SPARCs have kept the lead. At least in our environment. And ZFS is awesome.

lilbean fucked around with this message at 03:11 on May 21, 2010

TobyObi
Mar 7, 2005
Ain't nobody Obi like Toby ;)

lilbean posted:

Sooo, I'm in the enviable position of looking at SSD-based disk arrays right now. I'm looking at the Sun F5100, and it's pretty expensive but seems completely solid (and loving awesome). Anyone have experience with other SSD arrays? Should I even be thinking about filling a standard disk array with flash disks (probably X25-Es?)

The F5100 is a phenomenal bit of kit. I had one in a test centre not that long ago, but I passed on it for the F20 cards due to our use case.

Don't fill a standard disk array with flash disks. You will want to stab yourself in the face as they thrash the poo poo out of each other. If the controller of the SSD has any onboard TRIM style algorithm, they will go nuts in a RAID group. The controller will not be able to handle the burst I/O that the SSD can produce, so you end up speed limiting the entire thing.

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys

lilbean posted:

Sooo, I'm in the enviable position of looking at SSD-based disk arrays right now.
:awesome:

But in all seriousness, what part of your current storage isn't up to snuff? Latency on reads and/or writes? Lack of IOps in reads/writes? Throughput? Not enough of the right cache types? Of that 400GB, how much is the typical working set?

quote:

I'm looking at the Sun F5100, and it's pretty expensive but seems completely solid (and loving awesome).
While quick, it's basically four logical groups across 16 4-channel SAS ports, and it's not very big. You can definitely use it for ZIL and L2ARC if you're already deep into the Sun clustered storage world.

quote:

Anyone have experience with other SSD arrays?
If you're already invested in an existing SAN infrastructure that will let you use 3rd party FC trays, look into the Texas Memory Systems RamSan. TMS is a NetApp partner company.

TobyObi posted:

Don't fill a standard disk array with flash disks. You will want to stab yourself in the face as they thrash the poo poo out of each other. If the controller of the SSD has any onboard TRIM style algorithm, they will go nuts in a RAID group. The controller will not be able to handle the burst I/O that the SSD can produce, so you end up speed limiting the entire thing.
Pretty much, yep. You need to think of SSDs as big, super fast cache to front lots and lots of 15K spindles. May as well go SAS (and iSCSI/NFS) unless you have a huge investment in an FC infrastructure.

EnergizerFellow fucked around with this message at 05:05 on May 21, 2010

oblomov
Jun 20, 2002

Meh... #overrated
Can anyone recommend an open source distributed file system that can stretch across multiple JBOD systems, a la Google FS? I am looking for one that will stripe across different drives in this "storage cluster". I've looked at ParaScale, HDFS, OpenAFS, etc. HDFS seems the most promising of the bunch, but its target workload of a multitude of huge files is not quite what I was looking for.

Basically we've got a potential project where we may need to store a whole bunch of archive/backup/tier 3 and 4 data on a fairly small budget, and I wanted to explore the possibility of "rolling my own" "storage cloud".

lilbean
Oct 2, 2003

EnergizerFellow posted:

:words:
Yeah, the SSDs in a standard array are out (thankfully I have enough pull to kibosh some plans). The working set is the 400 GB - the whole system is probably 1 TB, but more than half is local backups, low-traffic schemas, etc. There are a *lot* of terrible, terrible ORM-generated queries hammering the database that involve numerous nested views across instances and schemas and other horrible poo poo (Cartesian products, etc). Obviously the right answer is to fix the products, but... yeah.

We don't have any investment in a SAN deployment currently. We use SAS arrays full of 15K disks for everything, and at most have two hosts connected to each array for failover, so there's not really a cost to migrate off of it.

I have been seriously looking at using the SSDs as the front-end to the disk arrays, but the bitch is that only ZFS can really take advantage of that, from what I know. If we do end up pulling the trigger in January or February and moving to x86 RHEL systems, then we'll have to figure out the deployment plan again for how to use them decently. If we got an SSD array, then Linux and Solaris could take advantage of it completely (in theory).

I looked at the RamSan stuff a long time ago but had totally forgotten about them, so I'll check again and try to get pricing.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

oblomov posted:

Can anyone recommend an open source distributed file system that can stretch across multiple JBOD systems, a la Google FS? I am looking for one that will stripe across different drives in this "storage cluster". I've looked at ParaScale, HDFS, OpenAFS, etc. HDFS seems the most promising of the bunch, but its target workload of a multitude of huge files is not quite what I was looking for.

Basically we've got a potential project where we may need to store a whole bunch of archive/backup/tier 3 and 4 data on a fairly small budget, and I wanted to explore the possibility of "rolling my own" "storage cloud".
Ceph and Gluster are the two big ones that I'm aware of.

Edit: There's also MogileFS.

Syano
Jul 13, 2005
I am looking for something entry-level. We have about 14 servers we want to virtualize, and we need some flexibility in the device so we are able to grow with it, but my vendors keep coming back with these out-of-the-ballpark priced solutions. What do you guys suggest for a good entry-level unit?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Syano posted:

I am looking for something entry-level. We have about 14 servers we want to virtualize, and we need some flexibility in the device so we are able to grow with it, but my vendors keep coming back with these out-of-the-ballpark priced solutions. What do you guys suggest for a good entry-level unit?
"14 servers we want to virtualize" means absolutely nothing, since a "server" is not a unit of measurement, and "so we are able to grow with it" means very different things to different people. Do you need to grow your capacity? Your speeds? For how long? Where is your capacity/performance now? Where do you anticipate it in five years? If you want a solution that still scales out half a decade from now, you're going to be paying for it. You need to look at what it is you really need, and what the real business case is for it.

The big cost savings of virtualization is simple consolidation, and you can't determine how to maximize your consolidation ratio if you don't have the right numbers.

From a storage perspective, we need to know what kind of IOPS (I/O transfers per second) you're looking for, and what the basic performance profile is of your applications (random vs. sequential I/O, reads vs. writes, and where your bottlenecks are). We also need to know what kind of reliability you're looking for, what your plans are regarding replication and disaster recovery features, whether you need to boot from SAN, and how you plan to back all of this up.

You need to think about these things from both a business and technical perspective. In terms of the raw numbers, here are the places you need to look (I'm assuming you're a Windows shop):

\PhysicalDisk(_Total)\Disk Transfers/sec
\PhysicalDisk(_Total)\Disk Bytes/sec
\PhysicalDisk(_Total)\Current Disk Queue Length

You should probably let those perfmon logs cook for about a week before you take a look at them and start figuring out specs.
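
As a rough sketch of what "figuring out specs" from that log can look like, assuming the counters above have been exported to CSV (the file name, substring column matching, and the 95th-percentile rule of thumb are all just illustrative assumptions):

code:
# Rough sketch of turning a week of perfmon samples into sizing numbers.
# Assumes the counters above were logged and exported to CSV; the file name
# and the percentile choice are illustrative assumptions, not a standard.
import csv
import statistics

COUNTER = r"\PhysicalDisk(_Total)\Disk Transfers/sec"

def load_samples(csv_path, counter=COUNTER):
    """Pull every sample of one counter out of a perfmon CSV export."""
    samples = []
    with open(csv_path, newline="") as fh:
        reader = csv.DictReader(fh)
        # perfmon prefixes columns with the machine name, so match by substring
        column = next(c for c in reader.fieldnames if counter in c)
        for row in reader:
            value = row[column].strip()
            if value:
                samples.append(float(value))
    return samples

if __name__ == "__main__":
    iops = sorted(load_samples("week_of_perfmon.csv"))
    p95 = iops[int(0.95 * (len(iops) - 1))]
    print(f"samples:  {len(iops)}")
    print(f"average:  {statistics.mean(iops):.0f} IOPS")
    print(f"95th pct: {p95:.0f} IOPS   (size for something near this, not the mean)")
    print(f"peak:     {iops[-1]:.0f} IOPS")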

These numbers, by themselves, don't indicate what your real requirements are. You need to look at the bottlenecks in your application to make sure that your disk is really performing where it should be. For example, if your system is starved for memory, you may be paging, or your database servers may be reaching out to disk a lot more than they need to because they don't have the memory they need for a query cache.

Your best bet is to post over in the Virtualization Megathread, and I'll help you out with getting all the other numbers you need.

Vulture Culture fucked around with this message at 15:07 on May 21, 2010

Syano
Jul 13, 2005
Welp! I mainly wanted just some pointers to some entry-level SAN units, but since you are willing to help to THAT extent, I will get my performance numbers together and move on over to the virtualization thread when I get a moment.

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys

lilbean posted:

:words:
What's the budget we're working with? Also, pretty much this...

Misogynist posted:

Do you need to grow your capacity? Your speeds? For how long? Where is your capacity/performance now? Where do you anticipate it in five years? If you want a solution that still scales out half a decade from now, you're going to be paying for it. You need to look at what it is you really need, and what the real business case is for it.
[...]
From a storage perspective, we need to know what kind of IOPS (I/O transfers per second) you're looking for, and what the basic performance profile is of your applications (random vs. sequential I/O, reads vs. writes, and where your bottlenecks are). We also need to know what kind of reliability you're looking for, what your plans are regarding replication and disaster recovery features, whether you need to boot from SAN, and how you plan to back all of this up.

lilbean
Oct 2, 2003

EnergizerFellow posted:

What's the budget we're working with? Also, pretty much this...
The budget for the storage improvement is 50K. We already have a pair of Sun 2530s with 12x600GB 15K SAS disks, and three HBAs to put in each host (active-passive).

Edit: Also, there's very little long-term about this. The idea is to temporarily shore up the busted applications with this hardware now, to give us time to fix them properly, with the caveat that we may never have the time. I wish I was joking.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

lilbean posted:

The budget for the storage improvement is 50K. We already have a pair of Sun 2530s with 12x600GB 15K SAS disks, and three HBAs to put in each host (active-passive).
Can you just get ahold of Sun and pick up a few Zeus SSDs for your ZIL and L2ARC on the existing 2530s?

edit: I see that the 2530s are apparently not using ZFS. If you don't need HA, you could try a 7200 series unit from Sun.

lilbean
Oct 2, 2003

adorai posted:

Can you just get ahold of Sun and pick up a few Zeus SSDs for your ZIL and L2ARC on the existing 2530s?

edit: I see that the 2530s are apparently not using ZFS. If you don't need HA, you could try a 7200 series unit from Sun.
Right now the setup is ZFS on a single 2530. We do hardware RAID10 on two groups of six disks, assign one group to each controller and then use ZFS on the host to concatenate the pools. Additionally we have two SSDs (Sun's rebranded X25E drives) that we're going to use for the L2ARC, a few hot Oracle partitions and possibly the ZIL. I did a bit of testing though for the ZIL and it seems the two controllers in the array soak up the log writes to their cache anyways, and the system didn't get a big boost from moving the ZIL to SSDs.
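
For the curious, that layout boils down to something like the following zpool commands (pool and device names are placeholders; whether the ZIL ends up on the SSDs is left as the open question it is above):

code:
# Sketch of the pool layout described above, expressed as the zpool commands it
# implies. Pool and device names are placeholders, not the real paths.
import subprocess

ARRAY_LUNS = ["c0t600A0B8000AAAA01d0", "c0t600A0B8000BBBB01d0"]  # one RAID10 LUN per 2530 controller
CACHE_SSDS = ["c1t2d0", "c1t3d0"]                                # the rebranded X25-E drives

def run(cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Two hardware-RAID10 LUNs as top-level vdevs: ZFS stripes writes across them.
run(["zpool", "create", "oradata"] + ARRAY_LUNS)

# SSDs as L2ARC. A separate log vdev (ZIL) could be added the same way with
# "zpool add oradata log <device>" if testing ever shows it is worth it.
run(["zpool", "add", "oradata", "cache"] + CACHE_SSDS)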

We have a second 2530 that is going to be used for secondary purposes - other hosts, backups from Oracle, and so on but none of those uses are necessary... So we have the option of growing the pool across that chassis as well in the same config to improve spindle count and get more cache.

The 50K dream spend is to basically either improve on that current scenario or replace it entirely with something bonerific like the F5100.

Edit: It's basically a pretty solid setup right now, and it's obviously throwing more money at apps with poor performance.

lilbean fucked around with this message at 23:48 on May 21, 2010

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys

lilbean posted:

The budget for the storage improvement is 50K.
That's not much in the storage world, sadly. Something like a Netapp FAS3100 w/ a pair of Flash Cache (PAM II) cards and 2+ 15K SAS trays will set you back $100K+, easily. Something cheaper would be an EqualLogic PS6010XV or PS6510X.

lilbean posted:

I did a bit of testing though for the ZIL and it seems the two controllers in the array soak up the log writes to their cache anyways, and the system didn't get a big boost from moving the ZIL to SSDs.
Sounds about right on the ZIL. Realistically, all the ZIL buys you is faster confirmation from the NAS/SAN on synchronous writes, i.e. cached synchronous writes. If you aren't limited on synchronous writes now, the ZIL indeed won't buy you much.

quote:

Right now the setup is ZFS on a single 2530. We do hardware RAID10 on two groups of six disks, assign one group to each controller and then use ZFS on the host to concatenate the pools. Additionally we have two SSDs (Sun's rebranded X25E drives) that we're going to use for the L2ARC, a few hot Oracle partitions and possibly the ZIL.
Why not striped for the host-level combination?

EnergizerFellow fucked around with this message at 02:55 on May 22, 2010

lilbean
Oct 2, 2003

EnergizerFellow posted:

Why not striped for the host-level combination?
When zpools are concatenated the writes are striped across devices, so with multipathing and the striping it's pretty damned fast.

You sound bang on about 50K being low for the expansion (which is why I'm leaning towards the flash array).

EoRaptor
Sep 13, 2003

by Fluffdaddy
Do you have any performance profiling data available? Which queries take the longest to run, which tables have the highest lock contention? Even just basic iostat stuff, such as read and write queue depth, average I/O size, etc. How high is the CPU usage on the Oracle machines themselves?

I'm wondering if the database is even disk I/O bound. It might be getting data from disk just fine, but simply be starved of CPU throughput or local memory bandwidth. You've said that application improvements are a no-go, but how about better hardware in the Oracle cluster itself?

oblomov
Jun 20, 2002

Meh... #overrated

Misogynist posted:

Ceph and Gluster are the two big ones that I'm aware of.

Edit: There's also MogileFS.

Cool, appreciate the info. GlusterFS seems promising; going to check that out along with Hadoop's HDFS and OpenAFS. MogileFS seemed interesting but requires their libraries to write to it, which is not quite what I was looking for. Going to check out ParaScale again as well, even though it's commercial. Ceph looks cool but seems a bit too raw even for a dev/test environment.

oblomov
Jun 20, 2002

Meh... #overrated

lilbean posted:

When zpools are concatenated the writes are striped across devices, so with multipathing and the striping it's pretty damned fast.

You sound bang on about 50K being low for the expansion (which is why I'm leaning towards the flash array).

$50K can get you 2 x EqualLogic PS6010XV (you would need to beat up your Dell rep), each with 16x450GB SAS drives. Your writes will be striped across both units, so in effect you can have 28 drives (2 are hot spares per unit) striped in a single storage pool with 2 active controllers. What's your OS on the server side?
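
Spelling out that drive arithmetic (raw figures only; usable space depends on the RAID policy, which isn't specified here):

code:
# Back-of-envelope for the two-member EqualLogic group described above.
# Raw figures only; usable capacity depends on the RAID policy chosen,
# which isn't specified in the post.
MEMBERS = 2
DRIVES_PER_MEMBER = 16
HOT_SPARES_PER_MEMBER = 2
DRIVE_GB = 450

data_drives = MEMBERS * (DRIVES_PER_MEMBER - HOT_SPARES_PER_MEMBER)   # 28
raw_tb = data_drives * DRIVE_GB / 1000.0

print(f"{data_drives} data drives striped across {MEMBERS} members")
print(f"~{raw_tb:.1f} TB raw before RAID overhead")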

IMO, a NetApp FAS3100 with a PAM card is gonna run you a lot more than $100K. Hell, a shelf of 450GB fiber drives is going to run over $30K. However, a 2150 (or whatever the hell is the equivalent right now) with a fiber shelf can probably be had for around $50K. NFS license is pretty cheap for 2000 series too.

EoRaptor
Sep 13, 2003

by Fluffdaddy

oblomov posted:

$50K can get you 2 x EqualLogic PS6010XV (you would need to beat up your Dell rep), each with 16x450GB SAS drives. Your writes will be striped across both units, so in effect you can have 28 drives (2 are hot spares per unit) striped in a single storage pool with 2 active controllers. What's your OS on the server side?

Note: the multi-device striping works on any platform, but will only get full throughput where an optimized multipath driver is available. Currently, only Windows has such a driver, though a VMware one is in development. Other platforms have to wait for a redirect from the device they are hitting if the data they are seeking is elsewhere, leading to a latency hit at best.

EqualLogic has some great ideas, but some really weird drawbacks. A single EQ device filled with SSDs might actually be a faster solution, though it depends on what you are bound by (I/O, throughput, latency, etc). We are back at our SSD device discussion, however, and there are better players than EqualLogic, I feel.

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys

oblomov posted:

IMO, a NetApp FAS3100 with a PAM card is gonna run you a lot more than $100K. Hell, a shelf of 450GB fiber drives is going to run over $30K.
Yeah, more like ~$200K. Nothing like overkill. ;)

quote:

However, a 2150 (or whatever the hell is the equivalent right now) with a fiber shelf can probably be had for around $50K. NFS license is pretty cheap for 2000 series too.
Yeah, the FAS2040 is a much more reasonable option. Some back of the envelope numbers would put a FAS2040 w/ DS4243 tray, 15K spindles, and NFS option at ~$70K and ~5500 IOPS.
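
One way to reproduce that ~5500 IOPS back-of-envelope figure (the per-spindle number is a rule-of-thumb assumption, not a NetApp spec):

code:
# Reproducing the ~5500 IOPS back-of-envelope figure above. The per-spindle
# number is a common rule of thumb for random small-block I/O, not a spec.
SPINDLES = 24                 # a DS4243 tray holds 24 drives
IOPS_PER_15K_SPINDLE = 230    # rough rule-of-thumb assumption

estimate = SPINDLES * IOPS_PER_15K_SPINDLE
print(f"~{estimate} IOPS from a full tray of 15K spindles,")
print("before RAID write penalty, cache hits, or controller limits move it either way")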

The FAS2040 is the only model I'd touch in that whole lineup. The FAS2020 and FAS2050 are effectively a major generation behind and I'd expect to get an EOL notice on them any day now. I'd also expect a FAS2040 variant with 10G and more NVRAM to show up soonish as well.

oblomov
Jun 20, 2002

Meh... #overrated

EoRaptor posted:

Note: the multi-device striping works on any platform, but will only get full throughput where an optimized multipath driver is available. Currently, only Windows has such a driver, though a VMware one is in development. Other platforms have to wait for a redirect from the device they are hitting if the data they are seeking is elsewhere, leading to a latency hit at best.

EqualLogic has some great ideas, but some really weird drawbacks. A single EQ device filled with SSDs might actually be a faster solution, though it depends on what you are bound by (I/O, throughput, latency, etc). We are back at our SSD device discussion, however, and there are better players than EqualLogic, I feel.

Well, there are workarounds with vSphere without a native driver (no idea why Dell delayed that till 4.1). I have a bunch of EqualLogics running on MPIO just fine (you just need to follow the Dell/VMware guidelines on how many iSCSI vNICs to create for the physical ones and then do some command-line config on the ESX hosts, not particularly difficult stuff).

That said, I'm just not sure of the state of Linux iSCSI initiators beyond simple failover stuff. We usually use NetApp NFS for Linux, beyond a few boxes with not-too-intensive I/O. 10GbE helps with EqualLogic, but does not address the MPIO volume addressing limitations. One can configure most of that manually (in Windows at least, so I imagine Red Hat/SUSE have similar options). Red Hat/SUSE drivers are supposed to be coming out Q3/Q4, along with the vSphere 4.1 plugin, I believe.

oblomov
Jun 20, 2002

Meh... #overrated

EnergizerFellow posted:

Yeah, more like ~$200K. Nothing like overkill. ;)

Yeah, the FAS2040 is a much more reasonable option. Some back of the envelope numbers would put a FAS2040 w/ DS4243 tray, 15K spindles, and NFS option at ~$70K and ~5500 IOPS.

The FAS2040 is the only model I'd touch in that whole lineup. The FAS2020 and FAS2050 are effectively a major generation behind and I'd expect to get an EOL notice on them any day now. I'd also expect a FAS2040 variant with 10G and more NVRAM to show up soonish as well.

Oh yeah, forgot about the 2040. We have a few of the older FAS2020 and FAS2050 boxes around, and they are pretty nice. However, we started buying EqualLogic lately for lower-end tasks like this one; they are very price-competitive (especially on the Windows side).

All of that said, I am exploring distributed/clustered file system options on home-built JBODs now for a couple of different things. I can certainly see why Google/Amazon use that sort of thing (well, and they have a few hundred devs working on the problem too :P). If only ZFS was cluster-aware...

lilbean
Oct 2, 2003

Thanks, excellent suggestions all around.

EoRaptor - I can't post anything, but there's a good mix of CPU-bound and IO-bound queries. There are dozens of apps; some of them perform as expected and a lot don't. We're potentially replacing the server too as part of this exercise, with a separate budget. It's currently a SPARC T2+ system (1200 MHz). We've tested on Intel systems and get some boost, but the SPARC's memory speed seems to nullify most of the advantage. On Monday we're going to profile the apps on an i7, as we're hoping the DDR3 memory bus will be better.

If the SPARC really does have an advantage then gently caress. We could get a faster T2+ system or possibly an M4000 :-( Who the gently caress buys an M4000 in 2010? Anyways, that's life.

EoRaptor
Sep 13, 2003

by Fluffdaddy

lilbean posted:

EoRaptor - I can't post anything, but there's a good mix of CPU-bound and IO-bound queries. There are dozens of apps; some of them perform as expected and a lot don't. We're potentially replacing the server too as part of this exercise, with a separate budget. It's currently a SPARC T2+ system (1200 MHz). We've tested on Intel systems and get some boost, but the SPARC's memory speed seems to nullify most of the advantage. On Monday we're going to profile the apps on an i7, as we're hoping the DDR3 memory bus will be better.

Yeah, I didn't expect hard numbers, just that you had looked at the database performance counters, and know that you are i/o bound (i/o wait) not database limited (lock contention/memory limited)

For a server, look at a Xeon 56xx or 75xx series cpu, cram as much memory into the system as you can afford, and you should end up with a database monster. It's not going to be cheap, but the cpu and memory throughput is probably untouchable at the price point.


oblomov
Jun 20, 2002

Meh... #overrated

EoRaptor posted:

Yeah, I didn't expect hard numbers, just that you had looked at the database performance counters, and know that you are i/o bound (i/o wait) not database limited (lock contention/memory limited)

For a server, look at a Xeon 56xx or 75xx series cpu, cram as much memory into the system as you can afford, and you should end up with a database monster. It's not going to be cheap, but the cpu and memory throughput is probably untouchable at the price point.

For memory throughput especially, the 75xx series is really really good.
