Nomex
Jul 17, 2002

Flame retarded.

Misogynist posted:

I got to see what happens when an IBM SAN gets unplugged in the middle of production hours today, thanks to a bad controller and a SAN head design that really doesn't work well with narrow racks.

(Nothing, if you plug it right back in. It's battery-backed and completely skips the re-initialization process. Because of this incidental behavior, I still have a job.)

If it's a DS4xxx unit I would schedule a maintenance window so you can power it off and reboot it properly. DS units are touchy, and you might see some glitches down the road.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Nomex posted:

If it's a DS4xxx unit I would schedule a maintenance window so you can power it off and reboot it properly. DS units are touchy, and you might see some glitches down the road.
We already offlined the other controller and brought it back, no worries. The touchy controller is actually why we were performing the maintenance in the first place -- configurations got out of sync between A and B, and we ended up fully replacing Controller B because there were some PCI bus issues in the logs.

I also have a call open with engineering asking why the gently caress the controllers on a $250,000 SAN are allowed to ever be out of sync with one another for any reason, but at least because of the battery-backed memory I'm not helping our Exchange admin rebuild corrupt mailboxes while I do it!

Vulture Culture fucked around with this message at 17:21 on Jun 30, 2010

Nomex
Jul 17, 2002

Flame retarded.
You should see what happens when you turn one on in the wrong order. I hate IBM DS equipment with the fury of a thousand suns.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Nomex posted:

You should see what happens when you turn one on in the wrong order. I hate IBM DS equipment with the fury of a thousand suns.
The whole problem is that you can't even refer to "IBM DS equipment," because it's a series of completely different architectures. The DS4000/DS5000 ("Midrange Storage") stuff is all rebranded LSI, whereas the DS6000/DS8000 is all IBM in the core, running on Power processors.

Confused_Donkey
Mar 16, 2003
...

ozmunkeh posted:

Also very interested to hear from people who have Compellent kit.
Something of a budget freeze here so the project (about 1/5th the size of yours) has been put on hold. I liked the Compellent offering but the best price we could get was 50% more than the comparable Lefthand we were also looking at. The only weird thing was the physical size of the units. As said, we'd be a small installation and the controllers alone take up 6U before you even get around to adding any of the 2U disk trays. That's exactly double the size of the Lefthand setup. Nothing of consequence but strange nonetheless.

I dream of the day I get the email "ozm, go ahead with the Compellent and throw a couple Juniper EX2200 switches in while you're at it". One day.....

We run 2 Compellent systems (one for SQL clustering, the other for Hyper-V clustering) and they tend to work pretty well. Ours are the older Model 20 controllers (Xeon, not dual-core Xeon) and FC shelves (146GB FC, 300GB FC, and 500GB FATA).

The only challenges I have with them are the random little software bugs that plague the systems. For instance, we had one system where the primary controller failed, but it caused the secondary to reboot and get stuck in a reboot loop because the primary was still technically there (the Fibre was active, but the controller was dead). A month later I'm STILL waiting for an explanation from Compellent on why the darn thing crashed. Nothing yet, though.

The Java interface is typical Java; it likes to act up depending on the version. Finally, the other issue I have is the block-level disk spanning. The thing is impressive in how it moves data around, but it's not the quickest in the world.

I still have an MSA1000 and an MSA1500 system running, and their performance is about 20% higher than the Compellent's (the MSA systems are running 72.8GB 15K U320 drives); then again, the MSA doesn't really have much to think about in the end.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
So it looks like Enhanced Remote Mirroring on the IBM DS4000/5000 doesn't actually replicate any LUNs bigger than 2TB, even though it lets you configure the relationship, runs it and then tells you that it's completely synchronized. Also, this limitation isn't documented anywhere.

Hope this doesn't burn anyone else! :)

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Anyone know a good way to get a Mac to connect to an iSCSI network? There are software initiators, but our Xserve is a piece of poo poo, and I'd rather just get a dedicated card. Does anybody make a card with OSX drivers? I know QLogic doesn't (at least according to their site). Apple seems to be all on board the Fibre Channel bandwagon, and ignoring iSCSI.

namaste friends
Sep 18, 2004

by Smythe

FISHMANPET posted:

Anyone know a good way to get a Mac to connect to an iSCSI network? There are software initiators, but our Xserve is a piece of poo poo, and I'd rather just get a dedicated card. Does anybody make a card with OSX drivers? I know QLogic doesn't (at least according to their site). Apple seems to be all on board the Fibre Channel bandwagon, and ignoring iSCSI.

Why not just use NFS?

edit:
Generally speaking, TCP offload engines don't give you that much advantage over using a dedicated NIC. If you've got a good amount of RAM in your Mac, it's more trouble than it's worth.

namaste friends fucked around with this message at 20:16 on Jul 15, 2010

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Cultural Imperial posted:

Why not just use NFS?

edit:
Generally speaking, TCP offload engines don't give you that much advantage over using a dedicated NIC. If you've got a good amount of RAM in your Mac, it's more trouble than it's worth.

RAM, Hahahahahahaha. Like I said, our server is a POS. I think it only has 3 GB. The guy who was paying for it wanted to be a cheapass, and my boss didn't have the balls to tell him to gently caress off, so we turned everything down to -11. We've got like 10T that we can share out via NFS from our thumper, but it's being a bastard in OSX. We were trying to use NFS reshares and that just sucked. Our next step is to try and get each client to mount the NFS share directly from our thumper, instead of from the Xserve.

I hate Apple, but I also hate cheapasses.

namaste friends
Sep 18, 2004

by Smythe

FISHMANPET posted:

RAM, Hahahahahahaha. Like I said, our server is a POS. I think it only has 3 GB. The guy who was paying for it wanted to be a cheapass, and my boss didn't have the balls to tell him to gently caress off, so we turned everything down to -11. We've got like 10T that we can share out via NFS from our thumper, but it's being a bastard in OSX. We were trying to use NFS reshares and that just sucked. Our next step is to try and get each client to mount the NFS share directly from our thumper, instead of from the Xserve.

I hate Apple, but I also hate cheapasses.

What kind of data is stored on your thumper?

Maneki Neko
Oct 27, 2000

Not quite sure if this is the domain of this thread or maybe that other centralized storage thread, but was curious what (if anything) people are doing on the cheap.

Got a friend who works at a place that has a fine NetApp setup, but through some shenanigans with a different storage vendor, they now have a giant pile of SATA drives, which he was looking to just throw in a giant case and use as a dumping ground for things (likely over NFS), with the expectation of eventually spooling it off to tape.

The hardware side seems pretty straightforward, but the software side gets fairly interesting from there. He could just do Linux with something like XFS/DRBD or get a bit more exotic, but things look a little hazy from there.

OpenSolaris/ZFS looks to be circling the bowl since the Oracle buyout, and Nexenta building off of that seems interesting, but who knows what will end up happening there once Oracle finally cuts off OpenSolaris.

What other options out there are actually worth considering?

Maneki Neko fucked around with this message at 04:08 on Jul 16, 2010

AmericanCitizen
Nov 25, 2003

I am the ass-kickin clown that'll twist you like a balloon animal. I will beat your head against this bumper until the airbags deploy.

Maneki Neko posted:

Got a friend who works at a place that has a fine NetApp setup, but through some shenanigans with a different storage vendor, they now have a giant pile of SATA drives, which he was looking to just throw in a giant case and use as a dumping ground for things (likely over NFS), with the expectation of eventually spooling it off to tape.

We have a similar situation at my place. NetApp for the real stuff, but we use a ton of relatively low performance disk for backups and a variety of other needs.

We have three of these:
http://www.serversdirect.com/config.asp?config_id=SDR-A8301-T42

And really, they're perfectly suited to the task though you could certainly spend a lot more money on something else (and of course you should if it's worth it for your scenario, but it's not in our particular case.)

We load them up with CentOS and make big xfs partitions.

vvv It definitely depends on what you're using them for, but we honestly haven't had any problems with them (much to my surprise, really.)

AmericanCitizen fucked around with this message at 04:58 on Jul 16, 2010

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

AmericanCitizen posted:

We have three of these:
http://www.serversdirect.com/config.asp?config_id=SDR-A8301-T42
We just threw out about a dozen of these and couldn't be happier

NeuralSpark
Apr 16, 2004

FISHMANPET posted:

RAM, Hahahahahahaha. Like I said, our server is a POS. I think it only has 3 GB. The guy who was paying for it wanted to be a cheapass, and my boss didn't have the balls to tell him to gently caress off, so we turned everything down to -11. We've got like 10T that we can share out via NFS from our thumper, but it's being a bastard in OSX. We were trying to use NFS reshares and that just sucked. Our next step is to try and get each client to mount the NFS share directly from our thumper, instead of from the Xserve.

I hate Apple, but I also hate cheapasses.

With only 3GB of RAM is it even an Intel box?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Maneki Neko posted:

What other options out there are actually worth considering?
Short term: stick with OpenSolaris; long term: Linux + btrfs.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

NeuralSpark posted:

With only 3GB of RAM is it even an Intel box?

Maybe it's four. It's brand new, but the guy wouldn't spend more than $2000 (being split 50/50 with another department, so $4000 total) which resulted in the minimum amount of RAM, one of the slowest CPUs, and the smallest hard drives.

TobyObi
Mar 7, 2005
Ain't nobody Obi like Toby ;)

Maneki Neko posted:

OpenSolaris/ZFS looks to be circling the bowl since the Oracle buyout, and Nexenta building off of that seems interesting, but who knows what will end up happening there once Oracle finally cuts off OpenSolaris.
Oracle aren't going to drop OpenSolaris and ZFS.

A large part of the purchase was the 7000 series, which are these software components behind pretty clicky buttons.

Whether OpenSolaris remains actually "open" is another question though.

As for btrfs, with the main driver of this being Oracle for Oracle Unbreakable Linux, they may drop a lot of the development push behind it and work on rolling it into ZFS.

Maneki Neko
Oct 27, 2000

TobyObi posted:

Oracle aren't going to drop OpenSolaris and ZFS.

A large part of the purchase was the 7000 series, which are these software components behind pretty clicky buttons.

Whether OpenSolaris remains actually "open" is another question though.

As for btrfs, with the main driver of this being Oracle for Oracle Unbreakable Linux, they may drop a lot of the development push behind it and work on rolling it into ZFS.

Sure, I didn't mean ZFS was going anywhere, just that the OpenSolaris community is probably going to implode at some point unless Oracle actually decides to do something to support it. I'm sure Oracle will keep Solaris around and ZFS kicking; they want to sell more poo poo.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
even if oracle isn't going to kill the stuff it bought for nefarious reasons like they do to almost every other business they buy, they're going to do it simply by virtue of being a giant enterprise closed-source software company.

you can argue they didn't "kill" berkeleydb when they bought it, they just choked it off from the rest of the community. the same exact thing happened to innodb. the same thing will happen to mysql and opensolaris now.

fishworks may live on as a top-notch storage product, but from here on out it will be developed to compete with netapp and force you into the same dozens-of-sub-license-line-items way of doing business.

wwb
Aug 17, 2004

I'm not exactly looking for enterprise storage, but I suspect this is the best place to ask this question.

Amongst other things, I'm responsible for keeping our development network humming. Right now it's in need of an upgrade off of a single Hyper-V box with local storage to something more modern and NAS-like. Now, it is a development network, so we don't have as strict uptime or reliability requirements as one would have in a more production-oriented network. Nor do I have the budget to spend the kind of money one would on a more permanent fixture.

Anyhow, we realized the best thing would be to get our hands on an economical, iSCSI-based SAN solution, and we've found two options that sound at least somewhat appealing. The first would be an el-cheapo dedicated NAS box; we were looking at the Sans Digital AN4L [no, that isn't anal]. The other option on the table would be to use OpenFiler to build a beige-box SAN, as we've got the bulk of the parts lying around to make that happen.

Anyone have any experience with either of the two, or have recommendations for a sub-$1000 delivered solution to get me ~1TB of iSCSI NAS space?

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

Nomex posted:

If you're worried about fault tolerance, you might want to go with an sb40c storage blade and 6 of the MDL SSDs in RAID 10. That would give you about 60k random read IOPS and ~15k writes.

This is what we ended up getting approval for: BL460c + SB40c with SSDs. Now that I'm getting down to actually buying things, I wondered about using something other than HP's MDL SSDs. Performance numbers for them aren't the greatest, and although I'll be dramatically increasing performance no matter what, I can't help but worry about using midline drives with a 1-year warranty in a production box. For the price point of the HP 60GB MDL SSDs, I can get 100GB (28% overhead) "Enterprise" SSDs from other vendors. Examples would be the recently announced SandForce 1500 controller-based offerings from OCZ, Super Talent, etc. The SF1500 allows MLC, eMLC, or SLC flash to be used, has a supercapacitor for flushing the write buffers in case of a power outage (these will be on UPS and generators, but it's still nice in case someone does something stupid), promises read/write rates up to near the limits of SATA 2, and comes with 3-5 year warranties, vs. HP's puny 1 year.

Being such new stuff, I'm a little hesitant to put prod on SandForce, and in an "unsupported" configuration, but I'm also hesitant to spend the money on HP's drives, which aren't rated for high-end workloads, have a shorter warranty, and are slower all around.

HP is supposed to be releasing their "Third Generation Enterprise SSDs" some time in the next few months, but I can't really wait around any longer, as the performance problems are getting more and more common on the current kit.

TL;DR version:
For an array of 6x SSDs in a storage blade: stick with the supported but slower, lower-rated midline SATA HP drives, or go balls-out and bleeding edge with unsupported SandForce 1500-based enterprise SSDs like the OCZ Deneva Reliability/Vertex 2 EX or Super Talent TeraDrive FT2 for about the same cost?

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

Nukelear v.2 posted:

This thread needs a bump.

Building up a smallish HA MSSQL cluster and my old cheap standby MD3000 is definitely looking long in the tooth. So I'm going back to my first love, the HP MSA and I must say the P2000 G3 MSA looks very tempting. Anyone use either the FC or SAS variants of this and have any opinions on it? I've also been reading that small form factor drives are the 'wave of the future' for enterprise storage, logically it seems to be better but I haven't really heard too much about it, so I'm also trying to decide if the SFF variant is the better choice.

We have a couple of the P2000 MSAs and have been happy with them by and large. Ours are the SAS versions. One is used for a small SQL server install, and the other for a remote web server farm that allows uploads of documents into our doc mgmt system.

Nomex
Jul 17, 2002

Flame retarded.

Intraveinous posted:

This is what we ended up getting approval for: BL460c + SB40c with SSDs. Now that I'm getting down to actually buying things, I wondered about using something other than HP's MDL SSDs. Performance numbers for them aren't the greatest, and although I'll be dramatically increasing performance no matter what, I can't help but worry about using midline drives with a 1-year warranty in a production box. For the price point of the HP 60GB MDL SSDs, I can get 100GB (28% overhead) "Enterprise" SSDs from other vendors. Examples would be the recently announced SandForce 1500 controller-based offerings from OCZ, Super Talent, etc. The SF1500 allows MLC, eMLC, or SLC flash to be used, has a supercapacitor for flushing the write buffers in case of a power outage (these will be on UPS and generators, but it's still nice in case someone does something stupid), promises read/write rates up to near the limits of SATA 2, and comes with 3-5 year warranties, vs. HP's puny 1 year.

Being such new stuff, I'm a little hesitant to put prod on SandForce, and in an "unsupported" configuration, but I'm also hesitant to spend the money on HP's drives, which aren't rated for high-end workloads, have a shorter warranty, and are slower all around.

HP is supposed to be releasing their "Third Generation Enterprise SSDs" some time in the next few months, but I can't really wait around any longer, as the performance problems are getting more and more common on the current kit.

TL;DR version:
For an array of 6x SSDs in a storage blade: stick with the supported but slower, lower-rated midline SATA HP drives, or go balls-out and bleeding edge with unsupported SandForce 1500-based enterprise SSDs like the OCZ Deneva Reliability/Vertex 2 EX or Super Talent TeraDrive FT2 for about the same cost?

You could go with 6 Intel X25-E drives instead. They're still unsupported, but they have a 5 year warranty and use SLC flash. Also they're rated for 35,000/3,300 read/write IOPS each. They might be older tech, but pretty reliable.

On a side note, I've got a customer who's going to be stacking 10 Fusion IO drives in a DL980 as soon as the server is released. I can't wait to run some benchmarks on that.
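For a rough sanity check on those drive ratings, here's a back-of-envelope sketch of what six of them could do in RAID 10. This is only a toy model (reads assumed to be spread across all six members, each host write assumed to cost two drive writes); real results depend heavily on the controller and queue depth.

code:

# Back-of-envelope RAID 10 IOPS estimate for a 6-drive SSD set.
# Assumes reads are served by all members and each host write turns
# into two drive writes (one per mirror side).

DRIVES = 6
READ_IOPS_PER_DRIVE = 35_000   # Intel X25-E rated random read IOPS (from the post above)
WRITE_IOPS_PER_DRIVE = 3_300   # Intel X25-E rated random write IOPS
WRITE_PENALTY = 2              # RAID 10 mirror write penalty

est_read_iops = DRIVES * READ_IOPS_PER_DRIVE
est_write_iops = DRIVES * WRITE_IOPS_PER_DRIVE // WRITE_PENALTY

print(f"Estimated random read IOPS:  {est_read_iops:,}")   # ~210,000
print(f"Estimated random write IOPS: {est_write_iops:,}")  # ~9,900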

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

Nomex posted:

You could go with 6 Intel X25-E drives instead. They're still unsupported, but they have a 5 year warranty and use SLC flash. Also they're rated for 35,000/3,300 read/write IOPS each. They might be older tech, but pretty reliable.

On a side note, I've got a customer who's going to be stacking 10 Fusion IO drives in a DL980 as soon as the server is released. I can't wait to run some benchmarks on that.

I figured if I were doing an unsupported config anyway, I may as well take advantage of the additional speed offered by the new drives (285/275MBps R/W, 50K IOPS aligned 4K random write), though with the X25-E I would at least be getting something with a better-known reliability record.

On the 10x Fusion IO drives, how do they plan to stack them? Software RAID? The InfiniBand-attached chassis with IO drives that Fusion used for one of the National Laboratories looks insanely nice.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
IBM SAN guys: I figured out IBM's mostly-undocumented recovery procedure for an accidentally-deleted LUN. Is there any way to make recover logicalDrive (or equivalent) preserve the old NAA ID of the deleted LUN? I'm paranoid about resignaturing in VMware, especially when boot LUNs get involved.

Vulture Culture fucked around with this message at 06:24 on Jul 28, 2010

GanjamonII
Mar 24, 2001
Does anyone here run Oracle on iSCSI? We're having some performance issues.
I apologise in advance if I mess up the terminology or details, as my experience is on the application side and some of this stuff is kinda new to me (as in, new since we discovered the issue). There are real server/storage folks looking at this too, but I think there is a lot of expertise on the forums as well. Also, I'm responsible for the application running on this DB.

We have a NetApp 3070 supplying about 1.8TB to our Oracle box (Server 2008 x64), which is running 4 databases. The server is a beefy 4-socket HP blade (BL-something-or-other G6, can't remember the model) with 32GB of RAM and 2x 1Gb NICs teamed for the production LAN. We're using Microsoft's iSCSI initiator software.

Our DBA ran the AWR report for Oracle, which shows that it's waiting on I/O something like 70+% of the time. It reported the throughput at something stupid like 478KB/s read and 68KB/s write, which I can pull on my home DSL connection, so either that's not accurate or something is really wrong, because these things are in a serious datacenter about 3 feet apart. The LUNs are in a volume/aggregate/whatever with 26 disks, so there should be plenty of spindles, and the report from the storage team is that the disks are basically idling along at under 10% average utilization.

Oracle is reporting average request latency of 35-50+ms for some of the database files, whereas our storage team reports average request latency on the filer of something like 4ms. So it seems there is something going on between Oracle and the filer. CPU usage on the servers is low, and there aren't any network issues we're aware of, though we're checking into it.

This is supporting a business-critical application, and it looks like our DB is going to increase significantly in size over the next 6 months. Performance for the application overall is borderline right now: it's very slow but still usable, but it's definitely not acceptable and users are not happy with it.

Anyone have any advice? It would be really appreciated.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

GanjamonII posted:

Oracle is reporting average request latency of 35-50+ms for some of the database files, whereas our storage team reports average request latency on the filer of something like 4ms. So it seems there is something going on between Oracle and the filer. CPU usage on the servers is low, and there aren't any network issues we're aware of, though we're checking into it.
Two things I would try/check: Unteam the NICs, use one for iSCSI and one for normal traffic, and verify that your LUNs are properly aligned.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Are the iSCSI target and initiator on the same subnet? Can you put them on the same subnet?

Are you using jumbo frames? Does it work any better if you fall back to 1500 MTU?

Do your performance problems go away if you dismantle the port channel for testing? (The port channel probably isn't doing you any good whatsoever, because port channels only distribute traffic from different hosts, and can't saturate more than a single link with traffic from a single other host. You can round-robin your traffic to the Oracle server (writes), but you'd probably get better application performance dedicating one NIC to the iSCSI initiator and one to Oracle connections from applications.)

Since you're running from a software iSCSI initiator, do you see anything funny going on (e.g. lots of retransmits) if you run Wireshark as you test?
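A minimal sketch of that retransmit check, assuming tshark (Wireshark's CLI) is installed on the host running the software initiator; the interface name is a placeholder, and 3260 is just the default iSCSI target port.

code:

# Watch for TCP retransmissions on iSCSI traffic while you drive load at the LUN.
# Assumes tshark is installed and "eth1" is the iSCSI-facing NIC; older tshark
# versions use -R for the display filter, newer ones use -Y instead.
import subprocess

IFACE = "eth1"          # assumption: the interface carrying iSCSI traffic
ISCSI_PORT = "3260"     # default iSCSI target port

subprocess.run([
    "tshark",
    "-i", IFACE,
    "-f", f"tcp port {ISCSI_PORT}",          # capture filter: iSCSI traffic only
    "-R", "tcp.analysis.retransmission",     # display filter: retransmitted segments
])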

Vulture Culture fucked around with this message at 03:52 on Jul 29, 2010

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

verify that your LUNs are properly aligned.
He's on Server 2008, so I doubt that's it unless the server was upgraded. (Server 2008 aligns any new partitions to a 1MB boundary, which should be good for pretty much any segment width.)

Probably a good thing to check just for due diligence, though. Besides, it never hurts for guys interacting with SANs to actually understand segments, stripes and alignment. :)
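If anyone does want to run that due-diligence check, here's a quick sketch for a Windows host using the stock wmic tool; the 1 MiB boundary is the Server 2008 default mentioned above, and older installs aligned at 63 sectors (32,256 bytes) will show up as misaligned.

code:

# List partition starting offsets on a Windows host and flag any that are not
# aligned to a 1 MiB boundary (the Server 2008 default). Assumes "wmic" is available.
import subprocess

ALIGNMENT = 1024 * 1024  # 1 MiB

out = subprocess.check_output(
    ["wmic", "partition", "get", "Name,StartingOffset", "/format:csv"],
    text=True,
)

for line in out.splitlines():
    parts = [p.strip() for p in line.split(",")]
    if len(parts) < 3 or not parts[-1].isdigit():
        continue  # skip the header row and blank lines
    name, offset = parts[1], int(parts[2])
    status = "OK" if offset % ALIGNMENT == 0 else "MISALIGNED"
    print(f"{name}: offset {offset} bytes -> {status}")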

namaste friends
Sep 18, 2004

by Smythe

GanjamonII posted:

Does anyone here run Oracle on iSCSI? We're having some performance issues.
I apologise in advance if I mess up the terminology or details, as my experience is on the application side and some of this stuff is kinda new to me (as in, new since we discovered the issue). There are real server/storage folks looking at this too, but I think there is a lot of expertise on the forums as well. Also, I'm responsible for the application running on this DB.

We have a NetApp 3070 supplying about 1.8TB to our Oracle box (Server 2008 x64), which is running 4 databases. The server is a beefy 4-socket HP blade (BL-something-or-other G6, can't remember the model) with 32GB of RAM and 2x 1Gb NICs teamed for the production LAN. We're using Microsoft's iSCSI initiator software.

Our DBA ran the AWR report for Oracle, which shows that it's waiting on I/O something like 70+% of the time. It reported the throughput at something stupid like 478KB/s read and 68KB/s write, which I can pull on my home DSL connection, so either that's not accurate or something is really wrong, because these things are in a serious datacenter about 3 feet apart. The LUNs are in a volume/aggregate/whatever with 26 disks, so there should be plenty of spindles, and the report from the storage team is that the disks are basically idling along at under 10% average utilization.

Oracle is reporting average request latency of 35-50+ms for some of the database files, whereas our storage team reports average request latency on the filer of something like 4ms. So it seems there is something going on between Oracle and the filer. CPU usage on the servers is low, and there aren't any network issues we're aware of, though we're checking into it.

This is supporting a business-critical application, and it looks like our DB is going to increase significantly in size over the next 6 months. Performance for the application overall is borderline right now: it's very slow but still usable, but it's definitely not acceptable and users are not happy with it.

Anyone have any advice? It would be really appreciated.

Is this a new oracle install?
Are you using snapdrive?
What initiator version?
How have you configured your target NICs on the filer? Are they vif'd?
What ontap and snapdrive versions are you running?
What oracle version?
Are your servers clustered?
How complicated is your networking? ie multiple vlans?
Misogynist and Adorai asked very good questions about your network. I'd also take a look at basic stuff like duplex and autonegotiate on the switches.

Finally, have you opened a case? If not, I suggest you do so now. You're paying for support anyway. Might as well use it. This sort of problem is well within the responsibilities of the support centre.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Anyone know the standard log write size for Exchange 2010, perchance? I know the DB page size was increased to 32k, which actually makes it possible to run on RAID-5 in our environment, but I can't find anything about the transaction logs.

Nebulis01
Dec 30, 2003
Technical Support Ninny
So I have a stupid question. I'm new to administering a SAN so please bear with me.

We just recently got a Dell Equalogic PS4000X. I configured it out of the box for RAID6 since it's mainly going to be used for read access. However I was exploring some of the volume options and it appears you can assign a raid type for the volume, but it won't change any of the member drives to fit this configuration.

My question is this:

Is it possible for a member to have more than one raid type or do all 16 drives have to be allocated to the same raid type? It seems like that would be a stupid design but I've not found anything in the documentation to answer it, or I'm blind and can't locate it.

Any help would be much appreciated.

namaste friends
Sep 18, 2004

by Smythe

Misogynist posted:

Anyone know the standard log write size for Exchange 2010, perchance? I know the DB page size was increased to 32k, which actually makes it possible to run on RAID-5 in our environment, but I can't find anything about the transaction logs.

From here: http://technet.microsoft.com/en-us/library/bb331958.aspx

quote:

If a database suddenly stops, cached changes aren't lost just because the memory cache was destroyed. When the database restarts, Exchange scans the log files, and reconstructs and applies any changes not yet written to the database file. This process is called replaying log files. The database is structured so that Exchange can determine whether any operation in any log file has already been applied to the database, needs to be applied to the database, or doesn't belong to the database.

Rather than write all log information to a single large file, Exchange uses a series of log files, each exactly one megabyte, or 1,024 kilobytes (KB), in size. When a log file is full, Exchange closes it and renames it with a sequential number. The first log that's filled ends with the name Enn00000001.log. The nn refers to a two-digit number known as the base name or log prefix.
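Not an answer to the write-size question, but since the logs come in fixed 1 MB files, here's a rough sizing sketch for daily log churn. Every input below is an assumption to be replaced with your own messaging profile; the point is only to show the arithmetic.

code:

# Rough estimate of daily Exchange transaction log volume, given that each
# log file is a fixed 1 MB. All inputs are placeholder assumptions.

MAILBOXES = 2000
MESSAGES_PER_MAILBOX_PER_DAY = 100     # sent + received, assumed
AVG_MESSAGE_SIZE_KB = 75               # assumed average message size
LOG_OVERHEAD_FACTOR = 1.5              # assumed logging overhead vs. raw message data

log_data_mb = (MAILBOXES * MESSAGES_PER_MAILBOX_PER_DAY
               * AVG_MESSAGE_SIZE_KB * LOG_OVERHEAD_FACTOR) / 1024
log_files_per_day = int(log_data_mb)   # one log file per MB

print(f"~{log_data_mb:,.0f} MB of logs/day -> ~{log_files_per_day:,} log files/day")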

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

Nebulis01 posted:

So I have a stupid question. I'm new to administering a SAN so please bear with me.

We just recently got a Dell Equalogic PS4000X. I configured it out of the box for RAID6 since it's mainly going to be used for read access. However I was exploring some of the volume options and it appears you can assign a raid type for the volume, but it won't change any of the member drives to fit this configuration.

My question is this:

Is it possible for a member to have more than one raid type or do all 16 drives have to be allocated to the same raid type? It seems like that would be a stupid design but I've not found anything in the documentation to answer it, or I'm blind and can't locate it.

Any help would be much appreciated.
I don't have specific experience with the EqualLogic kit, but the way it works in the EVA world is that your disk group (made up of all or some of the disks you have) is turned into a large RAID volume. You can choose "Single drive failure protection", similar to RAID 5, or "Double drive failure protection", similar to RAID 6. From there, each LUN (vDisk) you create on that disk group has a "vRAID" level that you assign when you create it. The EVA secret sauce then splits things out for that vDisk and spreads them around the larger disk group. So say you have a Double Protection disk group and you create a vDisk with vRAID1: you achieve that by having each block stored on two different disks of the disk group. vRAID5 is similar, except each block is striped across multiple disks, with a parity block stored as well.

I'm not sure if that's exactly how EQL works or not, but I hope it wasn't a waste of time.
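As a purely conceptual illustration of that idea, here's a toy model of "per-LUN redundancy on top of a shared disk group". This is not HP's or Dell's actual placement logic, just a sketch of how each block of a vRAID1-style LUN can land on any two disks of the group rather than on a fixed mirror pair.

code:

# Toy model of vRAID1-style placement on a shared disk group: each block of a
# LUN is written to two different disks chosen from the whole group.
# Purely illustrative; not the real EVA or EqualLogic algorithm.
import random

DISK_GROUP = [f"disk{n:02d}" for n in range(16)]  # e.g. a 16-drive member

def place_vraid1_block(block_id: int) -> tuple:
    """Pick two distinct disks from the group to hold the block and its mirror."""
    rng = random.Random(block_id)        # deterministic per block, for the demo
    return tuple(rng.sample(DISK_GROUP, 2))

for block in range(5):
    primary, mirror = place_vraid1_block(block)
    print(f"LUN block {block}: {primary} + {mirror}")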

GanjamonII
Mar 24, 2001

Cultural Imperial posted:

Is this a new oracle install?
Are you using snapdrive?
What initiator version?
How have you configured your target NICs on the filer? Are they vif'd?
What ontap and snapdrive versions are you running?
What oracle version?
Are your servers clustered?
How complicated is your networking? ie multiple vlans?
Misogynist and Adorai asked very good questions about your network. I'd also take a look at basic stuff like duplex and autonegotiate on the switches.

Finally, have you opened a case? If not, I suggest you do so now. You're paying for support anyway. Might as well use it. This sort of problem is well within the responsibilities of the support centre.

Everyone who replied to my query: thank you. I haven't posted an update because I can't post at work and my internet at home just got reconnected after moving.
We found that the servers/enclosure/network hadn't been configured as per the PTM. All traffic was going over the production network, not the storage VLAN. We had purchased NICs with hardware iSCSI, which were disabled, and the correct cabling hadn't been run to the blade enclosure, blah blah blah. All getting fixed this weekend, hopefully. We also found Oracle was misconfigured in terms of some parameters and memory configuration, which looks like it was slowing it down a bit anyway.

vty
Nov 8, 2007

oh dott, oh dott!
I'm having a difficult time rolling out a PowerVault MD3220i iSCSI network; could anyone give me a tip? I'm sure a few people in here are running MDs and have dealt with this. I'm missing something small, I'm sure, but I've never deployed any iSCSI before. Everything is brand new, including the network topology. I'm not sure if this is a network misconfiguration or a server misconfig right now. I'm assuming server, simply based on how basic I currently have the network set up (to the point that everything is on the SAN subnet).

DETAILS--
The host/management machine is 2k8 and has 10 NICs. The switch is a PowerConnect 5424.

MD3220i - I haven't been able to configure this at all because of no TCP connectivity, so we'll assume it's still in the default factory configuration.

Server1nic1- 192.168.128.110 /16
Server1nic2- 192.168.128.111 /16
Switch - 192.168.128.100 /16
MDmgmt1 - 192.168.128.101
MDmgmt2 - 192.168.128.102

The raid controllers have their own ips also, I believe 192.168.130.101, .131.101

MY STEPS--
I've installed the MD3200i resources (MPIO/iSCSI initiator) on the management server, but I'm not able to discover anything.

I then went into the Windows iSCSI initiator and attempted to discover from there. Nothing.

I've actually gone so far as to completely reconfigure everything a few times, including the switches. Then I thought maybe there was an issue where it was searching for the iSCSI targets over the wrong NIC, so I disabled all of the other NICs (WAN/etc.) and only had the two on the management server going. I then went into the switch and specified the IPs of the iSCSI targets (by default it's 3260 on 0.0.0.0). Nothing.

I then said screw it, set up DHCP, restarted the SAN, and it still won't pull a DHCP lease.

I'm out of ideas.

Edit: Oh, the cables all work and the TX/RX lights on all of the equipment (SAN included) are active. I cannot ping any of the supposed default IPs on the SAN from the switch or the server.

vty fucked around with this message at 21:51 on Aug 8, 2010

Nukelear v.2
Jun 25, 2004
My optional title text

vty posted:

I'm having a difficult time rolling out a PowerVault MD3220i iSCSI network; could anyone give me a tip? I'm sure a few people in here are running MDs and have dealt with this.


If it's the same management software as the MD3000i, then it should automatically discover your MD; this is in the management software, not the iSCSI initiator. You need to use the management software first to configure the unit. If you can't ping the management interface on the MD, then you have some switch or host IP issues you need to work through first.
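To rule out basic connectivity before fighting the management software, a quick sketch like this sweeps the addresses vty listed (the two controller IPs are the ones he said he believes are defaults). It assumes a Windows management host, hence ping -n; use -c on Linux.

code:

# Verify basic IP reachability to the MD management/controller addresses.
# Assumes a Windows host ("ping -n"); the target IPs are the ones listed above.
import subprocess

TARGETS = {
    "switch":      "192.168.128.100",
    "MD mgmt 1":   "192.168.128.101",
    "MD mgmt 2":   "192.168.128.102",
    "RAID ctrl A": "192.168.130.101",
    "RAID ctrl B": "192.168.131.101",
}

for name, ip in TARGETS.items():
    result = subprocess.run(
        ["ping", "-n", "1", "-w", "1000", ip],   # one echo, 1000 ms timeout
        stdout=subprocess.DEVNULL,
    )
    status = "reachable" if result.returncode == 0 else "NO RESPONSE"
    print(f"{name:12s} {ip:17s} {status}")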

vty
Nov 8, 2007

oh dott, oh dott!
Man, I don't know what the deal was with it not pulling a DHCP lease from my servers AND not kicking over to its default IP range after the 150-second interval. It was probably sitting there with an APIPA IP.

Anyway, I had the brilliant idea of plugging in my home Linksys router to DHCP it and... voila.

Gonna run this network on 192.168.1.0/24, baby! (Kidding.)

Syano
Jul 13, 2005
I've got an MD3200i being delivered soon. How do you like the thing?

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.

GanjamonII posted:

Everyone who replied to my query: thank you. I haven't posted an update because I can't post at work and my internet at home just got reconnected after moving.
We found that the servers/enclosure/network hadn't been configured as per the PTM. All traffic was going over the production network, not the storage VLAN. We had purchased NICs with hardware iSCSI, which were disabled, and the correct cabling hadn't been run to the blade enclosure, blah blah blah. All getting fixed this weekend, hopefully. We also found Oracle was misconfigured in terms of some parameters and memory configuration, which looks like it was slowing it down a bit anyway.

Also, don't be afraid to open a case with NetApp; they don't mind these kinds of cases if you think something is weird, as long as you act like a reasonable human being.
