The Clariion reserves a certain amount of space for the OS on the first five disks, and any page file is contained in that space at a fixed size. Since these are slow disks in a slow Clariion, the big jump in I/O on those disks may have slowed the OS down to the point where it couldn't cope. Put your lowest-performance applications on the first five disks.
Catch 22 posted:5 or 4 disks? You're talking about the CX line. Heh.
Micropolis drives
Ray_ posted:The 32kb alignment is different from the VMware-recommended 64kb alignment. Is this suggestion just for Exchange? On any Windows (or Linux) host you should run diskpar or diskpart before creating a partition on a SAN drive. That helps eliminate stripe crossing, which can degrade performance. Stripe sizing can also help performance, but it depends on your application's I/O profile; I would follow the recommended sizing for your application.
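To make the stripe-crossing point concrete, here's a rough Python sketch of the arithmetic. The 64 KB stripe element, the 64 KB host I/O size, and the old 63-sector partition offset are illustrative assumptions, not values from any particular array or host.

```python
# Illustrative only: why a misaligned partition start turns one back-end I/O into two.
STRIPE_ELEMENT = 64 * 1024   # assumed stripe element size, in bytes
IO_SIZE = 64 * 1024          # assumed host I/O size, in bytes

def elements_touched(partition_offset, io_number, element=STRIPE_ELEMENT, io_size=IO_SIZE):
    """Count how many stripe elements a single host I/O lands on."""
    start = partition_offset + io_number * io_size
    end = start + io_size - 1
    return end // element - start // element + 1

legacy_offset = 63 * 512     # old Windows default: partition data starts at sector 63
aligned_offset = 64 * 1024   # diskpar/diskpart-style 64 KB alignment

for label, offset in (("misaligned", legacy_offset), ("aligned", aligned_offset)):
    total = sum(elements_touched(offset, n) for n in range(1000))
    print(f"{label}: {total} stripe-element accesses for 1000 host I/Os")
# misaligned comes out to 2000 accesses, aligned to 1000: every misaligned I/O
# straddles a stripe boundary and costs the array two operations instead of one.
```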
foobat posted:question to the emc guys You want one home directory instead of three? Sure, that's possible with a logical volume manager, or you could merge the three LUNs into one large metavolume. The problem is that you then have a 52TB filesystem. What happens when you reboot and have to run an fsck, or after a host crash when you'll certainly have to clean it? I haven't seen a lot of large Linux/ext3 filesystems so I can't say for sure, but I would expect the fscks to take a very long time. What happens now when you reboot, and how large is your biggest filesystem today? The largest filesystem I can remember was a 10TB NTFS volume that was a bear after a crash; they eventually cut it down to ten 1TB filesystems.
Rhymenoserous posted:Can you give us an idea of the environment being worked in? Give as much detail as possible if you can. NAS or SAN, etc. Also how many disks, what type, what RAID config, etc.
complex posted:Anyone well versed with EMC Symmetrix arrays? We have only a single admin, and he isn't very good... It's probably the VCM device, and yes, this is perfectly normal for a Symm-attached host. edit: more info, the VCM device is a device that is presented on all paths with LUN masking enabled. It contains the LUN masking information (the symmaskdb). If you don't have this on an FA port, hosts attached to that port will see all devices. You don't "need" it from the host side, and you can safely ignore it.
Weird Uncle Dave posted:This is probably an invitation to all sorts of weird PMs, but do any of you do SAN consulting and/or sales? I work for EMC, there's at least one other person here who does, and several other professionals. You really won't clutter up the thread, it's not super active. Fire away with any questions.
Weird Uncle Dave posted:Is Windows perfmon, monitoring number of disk operations per second, a decent approximation of IOPS? I'm only really worried about disk speed for one server (about 4000 email addresses); the others (a little-used Web server and a small database server) aren't a big problem. Network bandwidth and RAM are cheap by comparison. The Exchange 2007 average user profile estimates 0.4 IOPS per user account. I like to use 0.5 IOPS because it gives some wiggle room and is easy to calculate: 4,000 accounts x 0.5 IOPS is 2,000 IOPS. A 15k FC drive maxes out at around 180 IOPS at typical I/O sizes, so you could probably do all of this on one 15-disk enclosure of FC drives, maybe a little more depending on capacity. And yes, perfmon's Disk Transfers/sec counter gives you a realistic estimate of IOPS.
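If you want to play with those numbers, here's the back-of-the-envelope math in Python. The 0.5 IOPS per mailbox and 180 IOPS per drive come straight from the post above; the RAID write penalty is deliberately ignored to keep the arithmetic obvious.

```python
import math

# Rough Exchange sizing from the figures above. These are planning rules of
# thumb, not measured values, and RAID write penalty is not included.
mailboxes = 4000
iops_per_mailbox = 0.5        # padded up from the Exchange 2007 "average" 0.4
iops_per_15k_fc_drive = 180   # rough ceiling for a 15k FC spindle

required_iops = mailboxes * iops_per_mailbox
drives = math.ceil(required_iops / iops_per_15k_fc_drive)
print(f"{required_iops:.0f} front-end IOPS -> about {drives} x 15k FC drives")
# -> 2000 front-end IOPS -> about 12 drives, which fits in a single 15-disk DAE.
```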
Weird Uncle Dave posted:That sounds like way more than what I'd need. Right now, the mail server is an eight-year-old Dell PowerEdge, and all the email is on three 15krpm SCSI drives, RAID-5'd, and everything works perfectly well. If you have 4000 addresses but many of them are aliases, unused accounts, etc., that will obviously lower your requirements. Maybe a 4+1 SATA appliance would be better for you; SATA does 50-80 IOPS per drive as a rule of thumb.
oblomov posted:Speaking of storage, anyone have experience with fairly large systems, as in 600-800TB, with most of that being short-term archive type of storage? If so, what do you guys use? NetApp wasn't really a great solution for this due to volume size limitations, which I guess one could mask with a software solution on top, but that's clunky. They just came out with 8.0, but I have zero experience with that revision. What about EMC, say a Clariion 960, anyone used that? Symmetrix would do this, but that's just stupidly expensive. Most of my experience is NetApp with some Equallogic thrown in for good measure (over the last year or so). I've put together a few CX4-480s and 960s, though I was mostly designing for performance (mail systems with 100k+ users). At the extreme you can get 740TB+ usable out of the 960 these days (with 1TB and 2TB drives I would recommend RAID 6, since they take forever to rebuild), and soon you will be able to get 800TB raw on a single floor tile. For short-term archiving, are you going to tape? Have you considered a CDL or a Data Domain DDX? http://www.datadomain.com/pdf/DataDomain-DDXArraySeries-Datasheet.pdf
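For a rough sense of how raw capacity turns into usable capacity with big drives in RAID 6, here's a small Python sketch. The drive count, the 14+2 group width, and the formatting/spare allowance are placeholder assumptions for illustration, not vendor-published figures.

```python
# Placeholder numbers: how raw capacity shrinks to usable with RAID 6 groups.
drive_tb = 2.0          # marketed capacity per drive
drive_count = 480       # assumed number of drives in the frame
group_width = 16        # assumed 14+2 RAID 6 groups
parity_drives = 2
overhead = 0.07         # assumed allowance for formatting, vault and spare drives

groups = drive_count // group_width
usable_tb = groups * (group_width - parity_drives) * drive_tb * (1 - overhead)
print(f"{drive_count * drive_tb:.0f} TB raw -> roughly {usable_tb:.0f} TB usable")
# -> 960 TB raw -> roughly 781 TB usable under these assumptions
```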
crazyfish posted:Doing it by hand from the CLI isn't necessary because the GUI has options for all cluster sizes. Let me save you from wanting to kill yourself when this thing takes 10 hours to chkdsk: use mount points if at all possible, and keep your partitions to 1TB.
Misogynist posted:Can anyone unfortunate enough to be managing an IBM SAN tell me if there's a way to get performance counters on a physical array, or am I limited to trying to aggregate my LUN statistics together using some kind of LUN-array mapping and a cobbled-together SMcli script? I just started working at IBM (XIV really). PM me or post your specific question and hardware and I'll look it up on Monday.
Mausi posted:Something like a NetApp will maintain a hash table of where every logical block is physically located for any given level of the snapshot. True copy-on-write will simply allocate an unwritten block, write the data, and then change the hash table to point to the new data. Any reads will also check the hash table. XIV does this, but at the block level. XIV snapshots are incredibly easy to work with.
Misogynist posted:Erm, doesn't any block storage device by definition do this at the block level? XIV does redirect-on-write: it takes a snapshot not of the LUN blocks but of the LUN's block pointers. Most block storage does copy-on-first-write, which creates much more load (the original block has to be read and copied aside before the new data can be written in place).
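Here's a minimal Python sketch of the difference, assuming nothing about any vendor's actual implementation; it's just the pointer bookkeeping that makes one scheme cheaper than the other. The names volmap, snapmap, storage, and free_block are made up for illustration.

```python
# Toy model: "storage" maps physical block -> data, "volmap" maps logical block
# (LBA) -> physical block. Both schemes snapshot by copying pointers, not data.

def take_snapshot(volmap):
    return dict(volmap)

def write_redirect_on_write(volmap, storage, lba, data, free_block):
    # New data goes to a fresh physical block and the live map is repointed.
    # The snapshot keeps referencing the untouched old block. One physical write.
    storage[free_block] = data
    volmap[lba] = free_block
    return 1  # physical I/Os for this host write

def write_copy_on_first_write(volmap, snapmap, storage, lba, data, free_block):
    # If the snapshot still shares this block, copy the old data aside first
    # (a read plus a write), then overwrite in place (another write).
    ios = 1
    if snapmap.get(lba) == volmap[lba]:
        storage[free_block] = storage[volmap[lba]]  # preserve old data for the snapshot
        snapmap[lba] = free_block
        ios += 2
    storage[volmap[lba]] = data
    return ios
```

The first overwrite of each shared block costs three I/Os under copy-on-first-write versus one under redirect-on-write, which is where the extra load comes from.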
GrandMaster posted:are there many emc users here? Yes. A few years ago I saw a bad LCC in a Clariion when adding DAEs. I don't think it took down the whole array, though.
GrandMaster posted:just heard back from support, they will be replacing the cabling on the SPA side bus 0 as there were some other strange bus errors. It looks like SPA crashed and SPB didn't, so I'm not sure why the LUNs didn't all trespass and stay online The LCC took out a whole enclosure in my case. Yes, the LUNs should have trespassed unless the whole enclosure faulted. This appears to be a rare but occasional Achilles' heel of the Clariion.
Do you mean one team does the SAN switches, and another team does the storage? That's kind of rare, though one place I went to had us put the zoning in a spreadsheet and hand it off to the customer while the contract team put together the storage for the hosts. In larger shops it is quite common to have a separate storage team that does both the switches and the arrays. Now, with virtualization, storage teams are tending to get merged back in with the server teams.
That would be a little out of the ordinary, as the network team usually doesn't know much about SANs. It sounds like management is trying to put a cap on something they understand (loose cables) while risking something they don't understand (putting the SAN in inexperienced hands). From what you've said it doesn't sound like a good idea. It could just be that the storage people need some procedural help, like knowing to remove cables once they aren't being used. What exactly is going wrong with the cabling? Just how much hardware do you have?
If it's just the cables that are the problem, maybe assign fibre cable management to the storage team.
As a very rough rule of thumb, I use 120 IOPS/10k disk, 180 IOPS/15k disk, 60 IOPS/5k SATA. But yes, any major vendor will help you size it if you can collect some iostat data or give some good projections.
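If you want to turn those rules of thumb plus a perfmon/iostat number into a drive count, here's a tiny Python helper. The 70/30 read/write split and the RAID 5 write penalty of 4 are assumptions to swap out for your own workload.

```python
import math

# Rules of thumb from the post above; planning figures, not guarantees.
IOPS_PER_DRIVE = {"15k": 180, "10k": 120, "sata": 60}

def drives_needed(peak_host_iops, drive_type, read_fraction=0.7, write_penalty=4):
    """Estimate spindles needed once the RAID write penalty is applied.

    write_penalty=4 assumes RAID 5 (each host write becomes ~4 back-end I/Os);
    use 2 for RAID 10 or 6 for RAID 6.
    """
    reads = peak_host_iops * read_fraction
    writes = peak_host_iops * (1 - read_fraction) * write_penalty
    return math.ceil((reads + writes) / IOPS_PER_DRIVE[drive_type])

print(drives_needed(2000, "15k"))   # ~22 drives for a 70/30 mix on RAID 5
```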
haha, well maybe you should have a support contract and a CE to do disk swaps if your co-workers can't count to zero.
The DCX is the competitor to the 9506 and 9509: you get redundant CPs, a big backplane, etc. Really it depends on your port needs, bandwidth requirements, and the availability you need. Most people who need a lot of ports don't buy a bunch of small switches for a core-edge SAN; they buy director class for a dual SAN. I don't think the features and functionality are vastly different between the 300 and the directors; it's more the port count, the backplane, and some additional redundancy. If you need hundreds of ports it may be easier to go with director class.
lol Try searching for something, tell us how that goes!
Information Lifecycle Management. gently caress if anyone else knows what that means, either.
grobbendonk posted:I'm a team leader for the storage team for a fairly large UK company, and we're currently in the final stage of discussions with three of the larger vendors in the market (block based) to refresh our existing EMC estate (DMX / Clariion / VMAX). I've used it; it's a nice platform. It comes with the SVC stack, which is pretty easy to implement. I work for IBM, so feel free to ask more specific questions. IBM is pretty conservative with its marketing material, so yes, the v7000 works as advertised.
Nukelear v.2 posted:WTF. It would not let him do a software update because the VNX had a run-time greater than 41 days. They made him power cycle his array to do a software update. Any idea why that would be? Cleaning up memory leaks is my guess.
Check into IBM's Storwize V7000.
Misogynist posted:Has anyone in here had an opportunity to play around with the IBM Storwize V7000 kit? How do you like it? What's awesome, and what are your pain points with it? How does it compare to full-on SVC and SONAS? I work for IBM. The V7000 is good mid-range storage that uses the SVC stack. It has less memory than an SVC, so while it can do virtualization you wouldn't want to put a ton of arrays behind it. Usability is extremely nice; here's a GUI tour. Performance is very good for midrange. About a month ago they announced the V7000 Unified (SAN plus NAS).
Here's a good link on the difference between SATA, SAS, and NL-SAS.
cheese-cube posted:Anyone here have any experience with FCIP (Fibre Channel Tunneling)? I'm going to be working on a project soon that involves merging two physically separate FC fabrics via FCIP for the purpose of volume copy/mirroring. We will be using IBM V7000 SANs connected via FC with IBM SAN06B-R Multi-Protocol Routers to do the FCIP tunneling. I have set up FCIP on both Cisco and Brocade/IBM. Make sure your link between sites is solid (I had one that fritzed out every 30 seconds) and that the network group has given you enough bandwidth. There are two ways to configure the fabrics: either stretch your fabrics across the sites, or keep separate fabrics and do inter-fabric VSAN-to-VSAN communication (on Cisco this requires the enterprise license). Each option has pluses and minuses, but the former is a bit easier. If you go that route, set one switch up with a different domain ID and no config; once the two sites are communicating properly the fabrics will merge automatically. The Brocade stuff I did was a bit longer ago but is similar in concept. code:
evil_bunnY posted:What's the sticker like on those (v7000s)? Couple of $100k? The entry-level units list a lot lower than that. Of course it depends on how much storage you want, but they are very competitively priced midrange, especially considering the features.
http://www.youtube.com/watch?v=yHJOz_y9rZE
the spyder posted:Does anyone have experience with 1+ PB storage systems? We have a project that may require up to 7.5TB of data collection per day. I know some guys collecting data at about this rate, say 5TB/day, as unstructured files on an IBM SONAS. If at all possible, lean on technology that will reduce the real disk space you need: thin provisioning, online dedupe, data compression, etc.
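Just to put that ingest rate in perspective, here's the quick arithmetic in Python. The retention window, the dedupe/compression ratio, and the RAID overhead are made-up knobs for illustration, not recommendations.

```python
# Illustrative capacity math for a 7.5 TB/day collection rate.
ingest_tb_per_day = 7.5
retention_days = 180           # assumed short-term archive window
dedupe_compress_ratio = 2.0    # assumed 2:1 savings from dedupe/compression
raw_to_usable = 0.75           # assumed fraction of raw left after RAID/spares

logical_tb = ingest_tb_per_day * retention_days
on_disk_tb = logical_tb / dedupe_compress_ratio
raw_tb = on_disk_tb / raw_to_usable
print(f"{logical_tb:.0f} TB logical over {retention_days} days -> "
      f"~{on_disk_tb:.0f} TB on disk -> ~{raw_tb:.0f} TB raw")
# -> 1350 TB logical -> ~675 TB on disk -> ~900 TB raw under these assumptions
```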
skipdogg posted:We're looking at a new SAN to consolidate some aging infra on VMware and provide some bulk storage. Right now we're looking at an EMC VNXe 3300 with 15 x 600GB 15K and 6 or 8 x 2TB drives for NFS/CIFS storage. What else should we be looking at in that price range? The only thing I can think of off the top of my head is the P4000/LeftHand stuff, but I would lose out on the Network RAID crap unless I doubled the price. My HP reseller mentioned a small EVA, but I'm not sure how well that would be received. Look at the V7000 Unified.
Is this your question: you want to put the CX300 behind your Celerra and use its block disk space as another NAS share? You can do that as long as the combination of CX300 and Celerra firmware versions is supported. Don't do this unless you really know what you're doing, though. Celerras are FRAGILE.
Um... wow. I'm not sure which is more surprising: a) that he was a director and didn't know permission issues were a possibility after an NTFS/CIFS move, or b) that he was a director and was performing the move himself. Did he not use robocopy or a similar migration tool? Hope you've got good backups, lol!
For a $700 million company, two sysadmins seems like a pretty small number.
That's pretty inexcusable for anywhere but a startup. And they could have (should have) paid Dell to give them documentation for a small amount extra. Obviously they're woefully understaffed on top of questionably competent. Where are the adults?
Powdered Toast Man posted:COO just came over and asked what the flying gently caress is going on. He kinda got the brushoff. So. So... there are no adults? skipdogg posted:This is where I'm at. I can't believe anyone would do something like this without an easily restorable, verified backup and a rollback plan. If I didn't feel bad for PTM, I would call it a troll post. Oh, I believe it. It's far from the dumbest thing I've heard an ostensibly experienced IT professional do.