paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
The Clariion reserves a certain amount of space for the OS on the first five disks. Any page file is contained in that space and is fixed in size. Since these are slow disks in a slow Clariion, the large increase in I/O on those disks may have slowed the OS to the point where it couldn't cope. You should put your least performance-sensitive applications on the first five disks.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Catch 22 posted:

5 or 4 disks? You're talking about the CX line.

The AX line can be 3 or 4 disks, depending on how many SPs you have.

Heh. :downs: I've installed hundreds of clariions but never an AX.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
micropolis drives :argh:

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Ray_ posted:

The 32KB alignment is different from the VMware-recommended 64KB alignment. Is this suggestion just for Exchange?


I'm going a little nuts here. Most of my previous SAN work was with DataCore (my former company's vendor of choice), and it was ridiculously simplified. I'm thinking that for my Exchange LUNs I should set the stripe size to 64KB, align the partition at a 32KB boundary, and set the allocation unit size to 32KB. Does that sound correct?

On any Windows (or Linux) host you should use diskpar or diskpart prior to creating a partition on a SAN drive. This helps eliminate stripe crossing, which can degrade performance.

Stripe sizing can help performance, but it depends on your application's I/O profile. I would follow the recommended sizing for your application.
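For example, a minimal diskpart script along these lines sets the offset at partition-creation time (run it with diskpart /s; the disk number and the 64KB figure are just illustrations of the numbers discussed above, not a recommendation for your array):

code:
rem align.txt -- run with: diskpart /s align.txt
rem disk 1 is hypothetical; check "list disk" first
select disk 1
create partition primary align=64
assign letter=E

Then format with whatever allocation unit your application vendor recommends, e.g. format E: /FS:NTFS /A:32K /Q.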

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

foobat posted:

question to the emc guys

I inherited a CX3 about a year ago, mirrored, with 52TB of storage, set up by the guy before me. We've got a setup where each of the LUNs is basically exported as an NFS share on a RHEL5 box. So

emcpowera -> home01
emcpowerb -> home02
emcpower? -> home09

This of course becomes a pain when adding users, or when having to shuffle them around if a LUN runs out of space. I'd like to just use the volume manager to make them one "homes" volume or something appropriate, but I have no idea whether this is best practice or just a bad idea.

Of course, I'd also like to set up some sort of failover for our NFS service, but that's another story.

Cheers

You want one home directory instead of several? Sure, that's possible with a logical volume manager. Or you could merge the LUNs into one large metavolume. The problem is that you then have a 52TB filesystem. What happens when you reboot and have to run an fsck? Or when the host crashes and you certainly will have to clean it? I haven't seen a lot of large Linux/ext3 filesystems so I can't say for sure, but I would expect the fscks to take a very long time. What happens now when you reboot, and how large is your biggest filesystem? The largest filesystem I can remember was a 10TB NTFS volume that was a bear after a crash. They eventually cut it down to ten 1TB filesystems.
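If you do go the LVM route anyway, the rough shape of it is just pooling the PowerPath pseudo-devices into one volume group. A sketch only, using the device names from your post; pvcreate wipes them, so this is a migration, not an in-place change:

code:
# pool the LUNs into one volume group (destroys existing data -- copy it off first)
pvcreate /dev/emcpowera /dev/emcpowerb        # ...and the rest of the emcpower devices
vgcreate vg_homes /dev/emcpowera /dev/emcpowerb
# one big logical volume across everything, then an ext3 filesystem on top
lvcreate -l 100%FREE -n lv_homes vg_homes
mkfs.ext3 /dev/vg_homes/lv_homes
mount /dev/vg_homes/lv_homes /export/homes

Which lands you right back at the giant-fsck problem above, so think hard about whether several smaller volumes with users spread across them is the better trade.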

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Rhymenoserous posted:

Can you give us an idea of the environment being worked in? Give as much detail as possible: NAS or SAN, and so on.

Also: how many disks, what type, what RAID config, etc.?

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

complex posted:

Anyone well versed with EMC Symmetrix arrays? We have only a single admin, and he isn't very good...

Whenever he presents a LUN (or multiple LUNs to the same machine) it comes along with a 1MB LUN. He says this is required for the EMC and we should just ignore it, but I'm not so sure.

What is this 1MB LUN, and do we need it? If not, what can I tell our SAN admin in order to stop this madness?

It's probably the VCM device, and yes, this is perfectly normal for a Symm-attached host.

edit: more info. The VCM device is a device presented down every path that has LUN masking enabled. It contains the LUN masking information (the symmaskdb). If you don't have it on an FA port, the hosts attached to that port will see all devices. You don't "need" it from the host side, and you can safely ignore it.
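If you want to see it for yourself from a host with Solutions Enabler installed, something like this will show it (the array ID is made up):

code:
# syminq lists the Symm devices the host sees, including the little VCM device
syminq
# dump the masking database that the VCM device backs
symmaskdb list database -sid 1234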

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Weird Uncle Dave posted:

This is probably an invitation to all sorts of weird PMs, but do any of you do SAN consulting and/or sales?

I'm pretty sure I'm in over my head with my boss's request to virtualize several of our bigger physical servers. The virtualizing part is easy enough, but I don't really know enough about how to measure my hardware requirements, or how to shop intelligently for a low-end SAN that will meet those requirements, and I don't want to clutter the thread with all my newbie-level questions.

I work for EMC, there's at least one other person here who does, and there are several other storage professionals as well. You really won't clutter up the thread; it's not super active. Fire away with any questions.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Weird Uncle Dave posted:

Is Windows perfmon, monitoring the number of disk operations per second, a decent approximation of IOPS? I'm only really worried about disk speed for one server (about 4000 email addresses); the others (a little-used Web server and a small database server) aren't a big problem. Network bandwidth and RAM are cheap by comparison.

The Exchange 2007 average user profile estimates 0.4 IOPS per user account. I like to use 0.5 IOPS because it gives some wiggle room and is easy to calculate: 4000 users × 0.5 = 2000 IOPS. A 15k FC drive maxes out at around 180 IOPS for typical I/O sizes. You could probably do all of this on one 15-disk enclosure of FC drives, maybe a little more depending on how much capacity you need.

perfmon will give you a realistic estimate of IOPS.
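If you want the napkin math spelled out, it's nothing fancier than this (the 0.5 IOPS/user and 180 IOPS/drive figures are just the rules of thumb above, and it ignores RAID write penalty and cache):

code:
# back-of-envelope Exchange spindle count
users=4000
required=$(( users / 2 ))                    # 0.5 IOPS per user -> 2000 IOPS
drive_iops=180                               # 15k FC drive, typical I/O sizes
echo "required IOPS: $required"
echo "15k spindles:  $(( (required + drive_iops - 1) / drive_iops ))"   # ~12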

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Weird Uncle Dave posted:

That sounds like way more than I'd need. Right now the mail server is an eight-year-old Dell PowerEdge, all the email is on three 15krpm SCSI drives in RAID 5, and everything works perfectly well.

I left perfmon running over lunch, and the drive with all the mailboxes peaked at about 100 IOPS. Granted, that was lunchtime, so I'll leave things running for a couple of days and peek in on them occasionally.

If you have 4000 addresses but many of them are aliases, unused accounts, etc., that will obviously lower your requirements. Maybe a 4+1 SATA appliance would be better for you. SATA does 50-80 IOPS per drive as a rule of thumb.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

oblomov posted:

Speaking of storage, does anyone have experience with fairly large systems, as in 600-800TB, with most of that being short-term archive-type storage? If so, what do you use? NetApp wasn't really a great solution for this due to volume size limitations, which I guess one could mask with a software solution on top, but that's clunky. They just came out with 8.0, but I have zero experience with that revision. What about EMC, say a Clariion 960, has anyone used that? Symmetrix would do this, but that's just stupidly expensive. Most of my experience is NetApp, with Equallogic thrown in for good measure (over the last year or so).

I've put together a few CX4-480s and 960s, though I was mostly designing for performance (mail systems with 100k+ users). At the extreme, you can get 740TB+ usable with the 960 these days. (With 1TB and 2TB drives I would recommend RAID 6 since they take forever to rebuild.) Soon you will be able to get 800TB raw on a single floor tile.

For short-term archiving, are you going to tape? Consider a CDL or a Data Domain DDX:

http://www.datadomain.com/pdf/DataDomain-DDXArraySeries-Datasheet.pdf

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

crazyfish posted:

Doing it by hand from the CLI isn't necessary, because the GUI has options for all cluster sizes.

In any case, we had already found a solution by creating a spanning volume of several ~16TB LUNs; my question was primarily out of curiosity, to see whether we could make the management portion a bit easier.

The reason for the big filesystem is integration with an application that will only take a single path for its data, and which also refuses to work with a network drive, so any kind of NAS-type solution won't work.

Let me save you from wanting to kill yourself when this thing takes 10 hours to chkdsk: use mount points if at all possible and keep your partitions to 1TB.
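A rough sketch of the mount-point approach with diskpart on a 2008-era box (the disk number and path are invented, and the target folder has to exist and be empty first):

code:
rem mountpoints.txt -- run with: diskpart /s mountpoints.txt
select disk 2
create partition primary
format fs=ntfs quick
assign mount=D:\data\vol01

Repeat per 1TB LUN and the application still sees one directory tree under D:\data.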

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Misogynist posted:

Can anyone unfortunate enough to be managing an IBM SAN tell me if there's a way to get performance counters on a physical array, or am I limited to trying to aggregate my LUN statistics together using some kind of LUN-array mapping and a cobbled-together SMcli script?

I just started working at IBM (XIV really). PM me or post your specific question and hardware and I'll look it up on Monday.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Mausi posted:

Something like a NetApp will maintain a hash table of where every logical block is physically located for any given level of the snapshot. True copy-on-write will simply allocate an unwritten block, write the data, and then change the hash table to point to the new data. Any reads will also check the hash table.
Very minor performance overhead, brilliant for all sorts of things.

XIV does this, but at the block level. XIV snapshots are incredibly easy to work with.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Misogynist posted:

Erm, doesn't any block storage device by definition do this at the block level?

XIV does redirect-on-write. It's taking a snapshot not of the LUN's blocks but of the LUN's block pointers. Most block storage does copy-on-first-write, which creates much more load.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

GrandMaster posted:

Are there many EMC users here?
We just had a nightmare day at the office - an EMC engineer was in to install 3 new DAEs on our CX3-40. The first tray went in with no problems, but when the second was plugged in, the bus faulted and took the entire array down. The "non-disruptive" upgrade pretty much brought down our entire call centre :(

After the EMC engineering lab did all their research, it looks like a dodgy LCC in the DAE was the cause.

Has anyone else seen anything like this happen before?

Yes. A few years ago I saw a bad LCC in a Clariion when adding DAEs. I don't think it took down the whole array, though.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

GrandMaster posted:

Just heard back from support: they will be replacing the cabling on the SPA-side bus 0, as there were some other strange bus errors. It looks like SPA crashed and SPB didn't, so I'm not sure why the LUNs didn't all trespass and stay online :(

The LCC took out a whole enclosure in my case. Yes, the LUNs should have trespassed unless the whole enclosure faulted. This appears to be a rare but real Achilles' heel of the Clariion.
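For what it's worth, if LUNs end up stranded on the wrong SP after something like this, you can usually push them back by hand with Navisphere CLI once the failed side is healthy again, along these lines (the address and credentials are placeholders; check the syntax for your FLARE rev):

code:
# have SPA reclaim the LUNs it is the default owner of
naviseccli -h <SPA_address> -User admin -Password <pw> -Scope 0 trespass mine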

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Do you mean one team does the SAN switches and another team does the storage? That's kind of rare, though one place I went to had us put the zoning in a spreadsheet and hand it off to the customer, while the contract team put together the storage for the hosts.

In larger shops it is quite common to have a separate storage team that does the switches and arrays. Now, with virtualization, the storage teams are tending to get merged back in with the server teams.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
That would be a little out of the ordinary, as the network team usually doesn't know much about SANs. It sounds like management is trying to put a cap on something they understand (loose cables) while risking something they don't understand (putting the SAN in inexperienced hands).

From what you've said it doesn't sound like a good idea. It could just be that the storage people need some procedural help: knowing to remove cables once they aren't being used.

What exactly is going wrong with the cabling? Just how much hardware do you have?

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
If it's just the cables that are the problem, maybe assign fibre cable management to the storage team.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
As a very rough rule of thumb, I use 120 IOPS per 10k disk, 180 IOPS per 15k disk, and 60 IOPS per SATA disk. But yes, any major vendor will help you size it if you can collect some iostat data or give them some good projections.
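To make that concrete, this is the sort of napkin math the vendor will do with your iostat numbers (the 3600 IOPS peak is invented, and it ignores RAID write penalty and cache hits):

code:
# spindle counts per tier for a measured peak workload
peak=3600
for tier in "15k:180" "10k:120" "SATA:60"; do
  name=${tier%%:*}; per_disk=${tier##*:}
  echo "$name: $(( (peak + per_disk - 1) / per_disk )) disks before RAID overhead"
done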

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Haha, well, maybe you should have a support contract and a CE to do disk swaps if your co-workers can't count to zero.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
The DCX is the competitor to the 9506 and 9509: you get redundant CPs, a big backplane, etc. Most people who need a lot of ports don't buy a bunch of small switches for a core-edge SAN; they buy director class for a dual SAN. It really depends on the port count, bandwidth, and availability you need.

I don't think the features and functionality are vastly different between the 300 and the directors. It's mostly the port count, the backplane, and some additional redundancy. If you need hundreds of ports it may be easier to go with director class.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
lol

Try searching for something, tell us how that goes!

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Information Lifecycle Management

gently caress if anyone else knows what that means, either.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

grobbendonk posted:

I'm a team leader on the storage team at a fairly large UK company, and we're currently in the final stage of discussions with three of the larger vendors in the market (block-based) to refresh our existing EMC estate (DMX / Clariion / VMAX).

I was wondering if anyone has had any practical experience with the IBM Storwize V7000 midrange disk system? Is there anything to be aware of, or does it do exactly as advertised?

I've used it; it's a nice platform. It runs the SVC code base, which is pretty easy to implement. I work for IBM, so feel free to ask more specific questions. IBM is pretty conservative with its marketing material, so yes, the V7000 works as advertised.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Nukelear v.2 posted:

WTF. It would not let him do a software update because the VNX had a run-time greater than 41 days. They made him power cycle his array to do a software update. Any idea why that would be?

Cleaning up memory leaks is my guess.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Check into IBM's Storwize V7000.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Misogynist posted:

Has anyone in here had an opportunity to play around with the IBM Storwize V7000 kit? How do you like it? What's awesome, and what are your pain points with it? How does it compare to full-on SVC and SONAS?

I work for IBM. The V7000 is good midrange storage that uses the SVC stack. It has less memory than an SVC, so while it can do virtualization, you wouldn't want to put a ton of arrays behind it.

Usability is extremely nice; there's a GUI tour available online. Performance is very good for midrange. About a month ago they announced the V7000 Unified (SAN plus NAS).

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
There's a good writeup out there on the difference between SATA, SAS, and NL-SAS.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

cheese-cube posted:

Does anyone here have any experience with FCIP (Fibre Channel tunneling)? I'm going to be working on a project soon that involves merging two physically separate FC fabrics via FCIP for the purpose of volume copy/mirroring. We will be using IBM V7000 SANs connected via FC, with IBM SAN06B-R multiprotocol routers doing the FCIP tunneling.

Any feedback or anecdotal accounts would be awesome. Alternatively I have extensive experience working with IBM hardware, including SANs, so I can answer questions anyone may have.

I have set up FCIP on both Cisco and Brocade/IBM. Make sure the link between sites is solid (I had one that fritzed out every 30 seconds) and that the network group has given you enough bandwidth. There are two ways to configure the fabrics: either stretch your fabrics across the sites, or keep separate fabrics and do interfabric VSAN-to-VSAN routing (on Cisco this requires the Enterprise license). Each option has pluses and minuses, but the former is a bit easier. If you go that way, set one switch up with a different domain ID and no config; once the two sites are communicating properly the fabrics will merge automatically. The Brocade work I did was a bit longer ago, but it's similar in concept.

code:
local                              remote
Fabric A switch 1 ----- Fabric A switch 2
Fabric B switch 3 ----- Fabric B switch 4
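In case it helps, the Cisco side boils down to something like this on each switch (a from-memory MDS sketch; the IPs and interface numbers are invented, and the Brocade/SAN06B-R side is configured with its own tunnel and circuit commands instead):

code:
! enable FCIP and bring up the IP storage port
feature fcip
interface GigabitEthernet2/1
  ip address 10.1.1.1 255.255.255.0
  no shutdown
! the profile binds the tunnel to the local IP; the fcip interface points at the peer
fcip profile 10
  ip address 10.1.1.1
interface fcip10
  use-profile 10
  peer-info ipaddr 10.2.2.2
  no shutdown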

evil_bunnY posted:

What's the sticker like on those (v7000s)? Couple of $100k?

The entry-level units list for a lot less than that. Of course it depends on how much storage you want, but they're very competitively priced for midrange, especially considering the features.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
http://www.youtube.com/watch?v=yHJOz_y9rZE

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

the spyder posted:

Does anyone have experience with 1+ PB storage systems? We have a project that may require up to 7.5TB of data collection per day.

I know some guys collecting data at about that rate, say 5TB/day: unstructured files on an IBM SONAS.

If at all possible, lean on technology that will reduce the real disk space you need: thin provisioning, online dedupe, data compression, etc.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

skipdogg posted:

We're looking at a new SAN to consolidate some aging infrastructure onto VMware and to provide some bulk storage. Right now we're looking at an EMC VNXe 3300 with 15 x 600GB 15k drives and 6 or 8 x 2TB drives for NFS/CIFS storage. What else should we be looking at in that price range? The only thing I can think of off the top of my head is the P4000/LeftHand stuff, but I would lose out on the Network RAID stuff unless I doubled the price. My HP reseller mentioned a small EVA, but I'm not sure how well that would be received.

EMC is the incumbent, as the storage guy likes their stuff, so I need something in that price range.

Look at the V7000 Unified.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Is this your question: you want to put the CX300 behind your Celerra and use its block disk space for another NAS share? You can do that, as long as the CX300 and Celerra firmware versions are a supported interoperability combination.

Don't do this unless you really know what you're doing, though. Celerras are FRAGILE.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Um... wow. I'm not sure which is more surprising: a) that he was a director and didn't know permission issues were a possibility after an NTFS/CIFS move, or b) that he was a director and was performing the move himself.

Did he not use robocopy or a similar migration tool?

Hope you've got good backups, lol!
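For the record, the usual way to keep NTFS permissions intact on a move like that is robocopy with the security switches, roughly like this (the paths are invented):

code:
rem copy data plus ACLs, owner and auditing info; retry once per failure, log everything
robocopy \\oldfiler\share \\newfiler\share /E /COPYALL /R:1 /W:1 /LOG:C:\temp\migrate.log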

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
For a $700 million company, two sysadmins seems like a pretty small number.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
That's pretty inexcusable anywhere but a startup. And they could have (should have) paid Dell a small amount extra for documentation.

Obviously they're woefully understaffed on top of being questionably competent. Where are the adults?

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Powdered Toast Man posted:

COO just came over and asked what the flying gently caress is going on. He kinda got the brushoff. So.

so... there are no adults?

skipdogg posted:

This is where I'm at. I can't believe anyone would do something like this without an easily restorable, verified backup and a rollback plan. If I didn't feel bad for PTM, I'd call it a troll post.

Oh, I believe it. It's far from the dumbest thing I've heard of an ostensibly experienced IT professional doing.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
:george:
