Mierdaan
Sep 14, 2004

Pillbug
To be on the paranoid side: for an upcoming datacenter move, we had a local Compellent partner quote out the work of powering down, unracking, transporting 0.2 miles to another building, reracking, recabling, and bringing up our array.

$6500.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mierdaan posted:

To be on the paranoid side: for an upcoming datacenter move, we had a local Compellent partner quote out the work of powering down, unracking, transporting 0.2 miles to another building, reracking, recabling, and bringing up our array.

$6500.
We're having IBM come in to move a tape library 24 inches to the left. Over $2,000.

some_admin
Oct 11, 2011

Grimey Drawer
Storage throughput prediction question:

I have 7 LTO-4 drives in a nice tape library, attached over Fibre Channel to a robust host (24 cores, 8GB RAM)

@ 120MB/s per drive = 432GB/hr x 24 hours x 7 drives = 72,576GB = ~72TB maximum per day

is my math all messed up?
I know this number is not likely to be reflected in reality but really all I need is the absolute best case possible.
I'm pretty sure I am beating my current horse pretty hard, just want to confirm my mathses.

On a side note, has anyone moved from LTO-4 to LTO-5 and seen giant performance gains?
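For anyone double-checking the arithmetic, here's the same best-case ceiling as a quick sketch. It assumes the nominal 120 MB/s native LTO-4 rate, no compression, and zero time lost to tape loads, seeks, or robot moves, so real numbers will come in lower:

```python
# Best-case tape library throughput ceiling from nominal drive rates only.
# Assumes 120 MB/s native LTO-4 per drive, 7 drives streaming 24x7.

DRIVE_RATE_MB_S = 120
DRIVES = 7

gb_per_hour_per_drive = DRIVE_RATE_MB_S * 3600 / 1000   # 432 GB/hr per drive
gb_per_day_all_drives = gb_per_hour_per_drive * 24 * DRIVES
gb_per_week_all_drives = gb_per_day_all_drives * 7

print(f"{gb_per_day_all_drives:,.0f} GB/day  (~{gb_per_day_all_drives / 1000:.0f} TB)")
print(f"{gb_per_week_all_drives:,.0f} GB/week (~{gb_per_week_all_drives / 1000:.0f} TB)")
```

Note the 7s: 432 GB/hr x 24 hr x 7 drives is ~72 TB per *day* across the library; a full week at that rate would be roughly 508 TB, which is why the "in a week" framing undersells the ceiling.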

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Here is where I am at.

We have a NetApp 3240 HA pair that we think is close to capacity on performance. Occasionally we get reports of slowness from our users, and while we haven't been able to diagnose the source, we are pretty sure it's SAN latency at this point. Compounding the problem is that we have to provide VDI for 250 seats in about 3 months or less. We are prepared with a working solution, but don't have the SAN capacity. Our choices are:

Add more spindles to both our Production and DR NetApp ($50k for each site)
Add Flash Cache to both our Production and DR NetApp ($50k for each site)
Purchase another SAN that is low capacity but high IO, plus a DR counterpart ($unknown)

Since I won't need any features beyond 10GbE iSCSI and high IO, do you think I am better off upgrading my NetApp and paying their premium, or should I look elsewhere? I am not sure I can actually beat an outlay of $100k for both sides, but I was thinking it might be possible. What vendors should I be looking at? I've inquired about pricing on a 6TB Oracle 7320, but won't have a quote until next week. Anyone have pricing experience on that gear?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We've got about 6000 ZFS Unix home directories shared out from a single server, and it takes fooorever to boot. Is there any way to keep individual file systems per home directory, but reduce the boot time? Some way to split the load across multiple servers, like the Solaris equivalent of Microsoft DFS?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

Here is where I am at.

We have a Netapp 3240 HA pair that we think is close to capacity on performance. Occasionally we get reports of slowness from our users, and while we haven't been able to diagnose the source, we are pretty sure it's SAN latency at this point. Compounding the problem is that we have to provide VDI for 250 seats in about 3 months or less. We are prepared with a working solution, but don't have the SAN capacity. Our choices are:

What kind of load do you have on the 3240 right now? If you grab a simple statit and some sysstat -x output during the event, that will often give you enough to determine whether you're seeing an issue with load, and whether it's disk- or controller-bound. You should definitely get a definitive answer on whether you're truly having performance issues before you consider adding more load, especially in the form of VDI, which can be taxing on a system depending on how it is configured.

If you just want someone to look over some data from the filer and see how it looks, but don't want to go through the trouble of opening a case, I can take a look at the output from some of the perf commands and tell you if it looks like it's behaving well or not.

It can be way too easy to just sort of go over a cliff and see some real problems with performance all at once if you don't size properly, so I'd definitely recommend getting some solid answers about how your system is performing now and whether it has the additional headroom before making any decisions.

FISHMANPET posted:

We've got about 6000 ZFS Unix home directories shared out from a single server, and it takes fooorever to boot. Is there any way to keep individual file systems per home directory, but reduce the boot time? Some way to split the load across multiple servers, like the Solaris equivalent of Microsoft DFS?

Why do you want to have a separate filesystem for each user? Anyway, I suppose you could use NFS referrals, depending on what NFS version you're running.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

NippleFloss posted:

If you just want someone to look over some data from the filer and see how it looks, but don't want to go through the trouble of opening a case, I can take a look at the output from some of the perf commands and tell you if it looks like it's behaving well or not.
I opened a case today, prior to making any purchases. Thanks for the offer though.

edit: I did pull this just now

code:
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 55%   1087      2      0    1494    7843  37123  121808  40011       0      0    17s    93%  100%  Hn   99%       1      0    404       0      0     272  24950
 48%   1046      1      0    1358    8951  27687   97281  30903       0      0    13s    93%  100%  Bn  100%      17      0    294       0      0     284  13318
 57%    887      1      0    1632    6251  86925  158132  34488       0      0     0s    93%  100%  Hf   95%       7      0    737       0      0     155  72316
 56%    807      1      0    1569    6734  89861  160486  25156       0      0     0s    94%  100%  Hf   95%       1      0    760       0      0     127  75922
 58%   1046      1      0    1475    8423  47103  122193  37370       0      0     5s    81%  100%  Bn   98%       2      0    426       0      0     198  35380
Probably not a good time, since the daily backups are running right now. I'll pull them again in the AM.

adorai fucked around with this message at 07:12 on Feb 2, 2013

YOLOsubmarine
Oct 19, 2004

adorai posted:

code:
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 55%   1087      2      0    1494    7843  37123  121808  40011       0      0    17s    93%  100%  Hn   99%       1      0    404       0      0     272  24950
 48%   1046      1      0    1358    8951  27687   97281  30903       0      0    13s    93%  100%  Bn  100%      17      0    294       0      0     284  13318
 57%    887      1      0    1632    6251  86925  158132  34488       0      0     0s    93%  100%  Hf   95%       7      0    737       0      0     155  72316
 56%    807      1      0    1569    6734  89861  160486  25156       0      0     0s    94%  100%  Hf   95%       1      0    760       0      0     127  75922
 58%   1046      1      0    1475    8423  47103  122193  37370       0      0     5s    81%  100%  Bn   98%       2      0    426       0      0     198  35380
Probably not a good time, since the daily backups are running right now. I'll pull them again in the AM.

You're overloading your disks and you're seeing back-to-back CPs, which are pretty bad news. At least in this interval you're basically spending all of your time writing to disk and still not getting writes flushed from RAM before the next CP is triggered by write activity. You've either got an undersized aggregate or some hot disks in the aggregate; a statit would tell you which. If that's what you see when running backups, then pretty much any time you're running backups you can expect high write latencies and generally sluggish performance.

The good news is that at those loads you're probably disk limited, rather than controller limited. How are you doing backups that it puts that much load on the system?
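For anyone reading along, a rough way to spot those back-to-back CPs in pasted sysstat -x output is to scan the CP-type column for entries starting with "B". This is a sketch only; the field indices are assumptions based on the 23-column layout pasted above and will vary across ONTAP versions:

```python
# Flag sysstat -x intervals showing back-to-back CPs ("B" consistency points)
# alongside disk utilization. Field positions assume the 23-column layout in
# the output quoted above; other ONTAP versions may lay columns out differently.

SAMPLE = """\
 55%   1087      2      0    1494    7843  37123  121808  40011       0      0    17s    93%  100%  Hn   99%       1      0    404       0      0     272  24950
 48%   1046      1      0    1358    8951  27687   97281  30903       0      0    13s    93%  100%  Bn  100%      17      0    294       0      0     284  13318
 57%    887      1      0    1632    6251  86925  158132  34488       0      0     0s    93%  100%  Hf   95%       7      0    737       0      0     155  72316
"""

def analyze(lines):
    findings = []
    for line in lines.splitlines():
        f = line.split()
        cp_type = f[14]                        # "CP ty" column
        disk_util = int(f[15].rstrip('%'))     # "Disk util" column
        b2b = cp_type.startswith('B')          # next CP started before the last finished
        findings.append((cp_type, disk_util, b2b))
    return findings

for cp_type, util, b2b in analyze(SAMPLE):
    flag = "BACK-TO-BACK CP!" if b2b else ""
    print(f"CP type {cp_type:3s}  disk util {util:3d}%  {flag}")
```

Any interval that flags here while backups run is an interval where writes were arriving faster than the previous consistency point could be committed to disk.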

FISHMANPET
Mar 3, 2007

NippleFloss posted:

Why do you want to have a separate filesystem for each user? Anyway, I suppose you could use NFS referrals, depending on what NFS version you're running.

Because it's ZFS best practice. NFS referrals would work if we were on NFSv4. I suppose that's another reason to move in that direction.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

NippleFloss posted:

How are you doing backups that it puts that much load on the system?

Looks to be massive iSCSI out (and out in general), to the tune of over 1Gbps read from disk. My guess is Exchange/SQL verification jobs, along with a solid 320Mbps of writes, so it sounds like there's a lot going on in general. Backup windows always suck.
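For reference, this is how the sysstat kB/s columns map to the line rates quoted here; a quick sketch using decimal units:

```python
# Convert sysstat kB/s figures to megabits per second (decimal units).
def kbs_to_mbps(kb_per_s: int) -> float:
    return kb_per_s * 8 / 1000   # kB/s -> Mbit/s

disk_read = kbs_to_mbps(121808)   # first sample's disk read column: ~974 Mbps
disk_write = kbs_to_mbps(40011)   # same sample's disk write column: ~320 Mbps
print(f"disk read ~{disk_read:.0f} Mbps, disk write ~{disk_write:.0f} Mbps")
```

The later samples in that paste read closer to 160,000 kB/s from disk, i.e. over 1.2 Gbps, which is where the "over 1Gbps" figure comes from.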

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

madsushi posted:

Looks to be massive iSCSI out (and out in general), to the tune of over 1Gbps read from disk. My guess is Exchange/SQL verification jobs, along with a solid 320Mbps of writes, so it sounds like there's a lot going on in general. Backup windows always suck.

We have 14 SnapManager for SQL backups that begin around 6pm. There are probably 2-3 running for those stats. They all do a verify.

edit: Saturday morning stats. We have about 20% of our workforce in on saturdays.

code:
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 26%   3338      1      0    3564   18186   3535   41904  40593       0      0     1     96%   60%  Hv   38%      23      0    202       0      0     187   1690
 11%    550      2      0     756    6577   4277   18959  25513       0      0    11s    92%   33%  2f   23%       1      0    203       0      0     154   1820
 15%    707      1      0     881    7990   3725   17224  14574       0      0    11s    94%   31%  9    44%       3      0    170       0      0     179   1381
 19%    368      1      0     550    4463   1938   17547  18216       0      0     1     89%   39%  Z    42%       1      0    180       0      0     152   1506
 19%   2260      3      0    2540   12704  10489   36693  25054       0      0     1     96%   26%  3f   35%      23      0    254       0      0     210   8503
 11%    394      9      0     618    4116   2246   21196  24051       0      0     1     94%   31%  2    32%       1      0    214       0      0     236   1842
 15%   1134      1      0    1356   11876   3534   25105  23168       0      0     1     95%   31%  2    35%       2      0    219       0      0     423   1782
  9%    401      1      0     640    5630   2292    9604   7785       0      0     1     95%   19%  T    22%       1      0    237       0      0     552   1893
Looks much better to me.

adorai fucked around with this message at 14:57 on Feb 2, 2013

YOLOsubmarine
Oct 19, 2004

adorai posted:

We have 14 snapmanager for SQL backups that begin around 6pm. There are probably 2-3 running for those stats. They all do a verify.

Looks much better to me.

Yea, SQL verifications would do it. You could put the verifications on a separate schedule and stagger them to keep multiple verification jobs from blowing up your disk all at once.

Your disk utilization looks better in that sysstat and you're not having any abnormal CPs, but it looks like your sysstat interval is pretty long since you're seeing multiple CPs per interval. I'd be curious to see what a "sysstat -x 1" shows. It's also strange that your disk reads and writes are both much higher than the amount of net send/receive. It could just be an artifact of the long sysstat intervals not reporting correctly, but generally you'd want to see your network IO match up pretty well with disk IO. In one interval you're sending out 3.5 MB but reading 41.9 MB from disk, so where is all of that read IO going? It could be normal internal system processes like reallocates or de-dupe, or it could be that you have to read a bunch of data for CPs, which is a bad thing.

YOLOsubmarine
Oct 19, 2004

http://techcrunch.com/2013/02/05/dell-goes-private-in-leveraged-buyout-deal-by-michael-dell-and-partners-worth-24-4-billion/

So Dell has gone private in an LBO. I'll be curious to see how this affects their storage line. Generally when companies go private they try to pare down product lines and trim the less profitable products and divisions to make it easier to manage debt service. I think Compellent and EQL are both pretty strong for them so I'd imagine they will hang on to both, but I'd be curious to hear other opinions. Also wondering if a potential strategic change to focus more on the enterprise and cloud, and away from SMB and PCs might mean that EQL gets a bit of short shrift since it's generally targeted to smaller deployments.

FISHMANPET
Mar 3, 2007

I don't think Dell is going to divest much of anything, since they've been on a buying spree. They're trying to turn into a different company, and Michael Dell thinks that would be easier if the company were private.
This article covers some of the tax angles, and also what reason a CEO would have for buying his own company:
http://www.slate.com/articles/busin...minimizing.html

Nomex
Jul 17, 2002

Flame retarded.

adorai posted:

Compounding the problem is that we have to provide VDI for 250 seats in about 3 months or less. We are prepared with a working solution, but don't have the SAN capacity. Our choices are:

Add more spindles to both our Production and DR Netapp ($50k for each site)
Add Flash Cache to both our Production and DR Netapp ($50k for each site)
Purchase another SAN that is low capacity but high IO and a DR counterpart ($Unknown)

Since I won't need any features beyond 10Gbe iSCSI and high IO, do you think I am better off upgrading my Netapp and paying their premium, or should I look elsewhere? I am not sure if I can actually beat an outlay of $100k for both sides, but I was thinking that it might be possible. What vendors should I be looking at? I've inquired about pricing on a 6TB Oracle 7320, but won't have a quote until next week. Anyone have any pricing experience on that gear?

The method that we took was to load a FusionIO card into each of our VDI servers, then serve up the production VDI image from there. All the dev stuff is still on spindles, but everyone boots from the FusionIO card. We were originally going to go with 512GB of extra Flash Cache and a couple of 15k DS4243s, but the flash-backed production image works great and cost less.

pr0digal
Sep 12, 2008

Alan Rickman Overdrive
While I read through the thread I figured I would pose a question.

I do IT for a media company and, to be honest, I know jack poo poo about SAN solutions aside from the Apple XSAN with 336 TB of Pegasus RAIDs attached we're currently running, along with two ActiveRAIDs running SANmp. Everything is over Fibre, controlled by those wonderful QLogic boxes, though the XSAN requires two Ethernet drops for its metadata control. We've got an expansion coming up in a different building, our ActiveRAIDs are pretty much at capacity, and I'm really starting to hate SANmp. We edit 1080i video encoded in Apple ProRes 422 (LT) with a data rate of around 100 megs/s, usually involving multiple streams plus heavy effects work sometimes. To the point that we've managed to piss off the XSAN every so often.

My question to you all is: what are some viable solutions for our needs? We'll probably have 10 machines hooked up to this SAN and only need between 60 and 72TB of storage. Fibre is preferable due to our bandwidth needs, but from reading the thread Ethernet can do just as well. The main thing we're gunning for is an ISIS 5000 system from Avid, because as a department we are looking to switch from Final Cut to Avid in the near future, and since the ISIS system is kind of all-in-one for Avid, that would be great! But of course we have to convince upper management of that, and upper management always needs alternatives. I'm hesitant to look at Active Storage solutions because they are possibly going out of business, and I have EMC breathing down my neck about their Isilon system. So I would appreciate some suggestions regarding affordable storage solutions for a media company, because honestly I'm in over my head when it comes to SAN solutions.

Amandyke
Nov 27, 2004

A wha?

pr0digal posted:

I'm hesitant to look at Active Storage solutions because they are possibly going out of business and I have EMC breathing down my neck about their Isilion system. So I would appreciate some suggestions regarding affordable storage solutions for a media company because honestly I'm in over my head when it comes to SAN solutions.

I may be a bit biased, but Isilon basically rocks.

Nomex
Jul 17, 2002

Flame retarded.

pr0digal posted:

While I read through the thread I figured I would pose a question.

I do IT for a media company and to be honest I know jack poo poo about SAN solutions aside from the Apple XSAN with 336 TB of Peagus RAIDs attached we're currently running along with two ActiveRAIDs running SANMp. Everything is over Fibre controlled by those wonderful QLogic boxes though the Xsan requires two ethernet drops for its metadata control. We got an expansion coming up in a different building and our ActiveRAIDs are pretty much at capacity and I'm really starting to hate SANMp. We edit 1080i video encoded in Apple ProRes 422 (LT) with a data rate of around 100 megs/s and usually involving multiple streams plus heavy effect work sometimes. To the point we've managed to piss off the XSAN every so often.

My question to you all is what are some viable solutions for our needs. We'll probably have 10 machines hooked up to this SAN and only need between 60 and 72TB of storage. Fibre is preferable due to our bandwidth needs but from reading the thread Ethernet can do just as well. The main thing we're gunning for is an ISIS 5000 system from Avid because as a department we are looking to switch to Avid from Final Cut in the near future and since the ISIS system is kind of all-in-one for AVID that would be great! But of course we have to convince upper management of that and upper management always needs alternatives. I'm hesitant to look at Active Storage solutions because they are possibly going out of business and I have EMC breathing down my neck about their Isilion system. So I would appreciate some suggestions regarding affordable storage solutions for a media company because honestly I'm in over my head when it comes to SAN solutions.

Do you know what kind of IO requirements you're looking at? Given that it's video editing I'd guess long sequential reads and writes, but how much data does each system work on at one time? Are you able to grab any metrics off the XSAN?

pr0digal
Sep 12, 2008

Alan Rickman Overdrive

Nomex posted:

Do you know what kind of IO requirements you're looking at? Given that it's video editing I'd guess long sequential reads and writes, but how much data does each system work on at one time? Are you able to grab any metrics off the XSAN?
I ran the AJA System Test app to simulate read/write of a 4 gig 1080 29.97 8-bit file. It reads/writes at around 130 MB/s both ways. This exceeds the data rate for LT footage, but that's only for a single file at 4 gigs; longer clips well exceed 4 gigs, and there are also multiple streams. So nothing in the GB/s range, but upwards of 200-300 MB/s, so probably well under what you guys are used to, but I figured this was the best place to ask. I also ran the Blackmagic disk speed test and apparently we can read/write up to 2K! Nifty. Much longer explanation below because I have no idea if my speed tests meant anything. I feel like such a scrub asking these questions because, aside from some permissions issues and the occasional spinny ball of death, the XSAN serves us just fine. What I'm looking for is an alternative to an XSAN/ISIS system. Our needs aren't as hardcore as others' because we're a video editing house; we don't need a super top-of-the-line SAN because we aren't running a render farm.

When the XSAN is at full tilt I'll run iostat on the volumes (though it's not letting me run it on the full 96TB volume for some reason) and see what pops up for tps, which is read and write IOPS combined.

I can grab metrics off the XSAN as a whole for network traffic and fibre traffic. One of our shows averages about 1.5 TB of footage per episode, and according to the AJA Data Calculator the ProRes 422 (LT) data rate is 15 Mbytes/second (which is at odds with what Apple says at 102), and an hour-long clip is around 53 gigs. The interview clips for shows are often around an hour and there are usually around 10, so 500 gigs just for interview clips, plus all the b-roll, recree, etc. living in one Final Cut project (or multiple ones) with multiple streams of video plus effects work. This is only for the editors, of which there are 15 working simultaneously, plus six iMacs running over AFP to one of the XSAN volumes. The assistant editors have a heavier load, as we ingest all the footage and are responsible for backing it up and transcoding it, often at the same time. So lots of concurrent read/write to the volumes on six systems. Like, we could be transcoding 300+ clips to ProRes while also backing up raw footage to the volumes. Also, our Final Cut Server implementation reads the footage from the volumes and compresses it. So just on our side we've got almost 30 machines accessing our two XSAN volumes.

I hope that gives a decently clear picture of our needs.

pr0digal fucked around with this message at 05:57 on Feb 14, 2013
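The 15 MB/s vs. 102 discrepancy above looks like a megabits-vs-megabytes mixup: Apple's published figure for ProRes 422 (LT) at 1920x1080/29.97 is roughly 102 Mbit/s, not MB/s. A quick sanity check (the 102 Mbit/s rate is taken from Apple's published ProRes figures and is an assumption here):

```python
# Reconcile the two ProRes 422 (LT) data-rate figures quoted above.
# Apple's ~102 figure is megabits/s; the AJA calculator's 15 is megabytes/s.

APPLE_RATE_MBIT_S = 102            # published rate, 1920x1080 @ 29.97 (assumption)
mb_per_s = APPLE_RATE_MBIT_S / 8   # megabits -> megabytes: ~12.75 MB/s
gb_per_hour = mb_per_s * 3600 / 1000

print(f"{mb_per_s:.2f} MB/s -> ~{gb_per_hour:.1f} GB per hour of footage")
# ~12.75 MB/s -> ~46 GB/hr of pure video; at the AJA figure of 15 MB/s it's
# ~54 GB/hr, which matches the ~53 gigs per hour-long clip mentioned above
# once container and audio overhead are included.
```

So the two sources agree once the units line up, and either figure supports the ~1.5 TB-per-episode estimate.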

YOLOsubmarine
Oct 19, 2004

In addition to basic capacity and performance you should determine whether you want features like replication, efficient snapshots, etc. How will you be backing the data up? If it's through a normal dump process of some sort then you need to account for that workload as well. If you're looking to get rid of SANmp then you're likely going to want to go with a NAS solution which will limit you somewhat.

There are a ton of SAN vendors out there and most can propose a solution that will likely meet your needs, so you'll have to differentiate based on price, feature set, power and space consumption, etc. Since it sounds like you're mostly doing sequential work, Isilon is probably a good fit, but you could also look at some of the flash vendors like Violin, since you don't need a ton of space and flash will help with lower latencies on any random portions of your workload that come up.

pr0digal
Sep 12, 2008

Alan Rickman Overdrive

NippleFloss posted:

In addition to basic capacity and performance you should determine whether you want features like replication, efficient snapshots, etc. How will you be backing the data up? If it's through a normal dump process of some sort then you need to account for that workload as well. If you're looking to get rid of SANmp then you're likely going to want to go with a NAS solution which will limit you somewhat.

There's a ton of SAN vendors out there and most can propose some solution that will likely meet your needs, so you'll have to differentiate based on price, feature set, power and space consumption, etc...It sounds like you're mostly doing sequential work Isilon is probably a good fit but you could also look at some of the flash vendors like Violin that might be a good fit since you don't need a ton of space and flash will help with lower latencies on any random portions of your workload that come up.

We will probably stick with SANmp for the existing licenses, but I'm hesitant to expand further using Active Storage, as there are swirling rumors of them going out of business. We're talking with an Avid rep today to get price points, and hopefully an Isilon rep soon after that.

We have an LTO-5 archive system in place, though I wish we had a mirror of the XSAN/Active Storage to back everything up to.

YOLOsubmarine
Oct 19, 2004

pr0digal posted:

We will probably stick with SANMp for the existing licenses but I'm hesitant to expand more using Active Storage as there are swirling rumors of them going out of business. We're talking with an Avid rep today and getting price points and hopefully an Isilon rep soon after that.

We have a LTO-5 archive system in place though I wish we had a mirror of the XSAN/Active Storage to backup everything to.

If you're planning on sticking with SANmp, that gives you some additional options. NetApp E-Series gear is low footprint and does very well at sequential I/O; it's generally used for things like streaming video or big data analytics. Dell might be able to do something for you with Compellent, though that will probably be more than you want to pay. It behooves you to talk to a few different vendors and let them KNOW that you're talking to other vendors. Prices will come down pretty quickly if you get them into a bidding war, particularly among some of the bigger players like EMC, Dell, and NetApp.

Beelzebubba9
Feb 24, 2004
Gents,

I will be racking a Nimble CS460 with a full 2.4TB of cache and wiring it up to our Cisco UCS via 10Gb next week. I know there has been some interest in their products - especially regarding performance, and this is their top-end model - so if any of you have questions or tests/benchmarks/etc. you would like me to run on it, I'd be more than happy to. This pile of parts is my baby, so I have a ton of flexibility as to what I do with it before it goes into production.

Feel free to PM me or respond to this post and I'll see what I can do.

-B

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I just want to know how much it cost.

underlig
Sep 13, 2007
Lost the password on an IBM DS3512. This link http://joeraff.org/tag/ds3512/ refers to a console cable you can hook up to the SAN to clear a locked state, and other sites tell me I can use the same to reset the password.

I just want to know if anyone's actually done it and how risk-free it is. The password reset shouldn't drop all the disks or whatever, but since this is our only SAN I can't really experiment.

IBM's manual says that port on the SAN is for support technicians only.

Pile Of Garbage
May 28, 2007



underlig posted:

Lost password on an IBM DS3512, this link http://joeraff.org/tag/ds3512/ refers to a console-cable you can hook up to the san to clear a locked state, and other sites tells me i can use the same to reset the password.

I just want to know if anyone's actually done it and how risk-free it is? The password reset shouldn't drop all the disks or whatever, but since this is our only san i can't really experiment.

IBMs manual says that port on the san is for support technicians only.

Is the device still under its factory warranty, or covered by an extended ServicePac warranty agreement? If yes, call them up and log a fault so they can send out a tech to do the procedure. From the looks of it you run the risk of really loving things up with that procedure. Also, it says on that website that you need a username/password specific to the controller model, and that you have to contact IBM to get that anyway.

Zephirus
May 18, 2004

BRRRR......CHK

underlig posted:

Lost password on an IBM DS3512, this link http://joeraff.org/tag/ds3512/ refers to a console-cable you can hook up to the san to clear a locked state, and other sites tells me i can use the same to reset the password.

I just want to know if anyone's actually done it and how risk-free it is? The password reset shouldn't drop all the disks or whatever, but since this is our only san i can't really experiment.

IBMs manual says that port on the san is for support technicians only.

I've done similar things on other DS units (4x00s, 3400s), but not on a DS35. Assuming they're all LSI units, and it looks like they are, then you may be able to do it.

The command is, I think, clearSYMbolPassword. According to the internet, loadDebug isp clearSYMbolPassword "Unld ffs:Debug" is the correct syntax, but I can't confirm this.

You should have gotten the cable to do this with the unit.

A word of warning: I would absolutely not do this on a system that is live - call IBM if you are putting any data at risk. It will be worth the callout fees to save your butt if things go wrong.

cheese-cube posted:

Also it says on that website that you need a username/password specific to the controller model and that you have to contact IBM to get that anyway.

They say this, but as far as I'm aware it's changed maybe twice in 10 years?

Zephirus fucked around with this message at 11:56 on Feb 17, 2013

underlig
Sep 13, 2007
Yeah, I contacted the vendor to see if they either had documentation about the system they set up, or could help me contact IBM to get a technician out :)

I do not want to be the guy who fcked everything up

underlig fucked around with this message at 16:48 on Feb 17, 2013

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
does anyone have experience with an Oracle 7320 HA storage appliance? I am faced with the dilemma of either adding space to my NetApp or purchasing TWO 7320 HA pairs. Both solutions are close to the same price. These are my options:

Two DS4243 shelves each with 24 600GB SAS disks for a 3240 (and equivalent for DR site, only in FC shelves on a 3140)

versus

Two 7320 HA pairs, each with two 512GB read accelerators (one for each head) and two write accelerators (in raid1 in case of failure), and 20x 2.5 900GB 10k sas disks per pair.

The netapp solution will give me more spindles and more space, but I expect the oracle solution will give me more IOPS for VDI based on the cache. I can't prove the higher IOPS because it really depends on the cache hit rate. I also looked at read accelerators for NetApp, but they were far more expensive than I think they should be, and I do not think cache alone will give me the IOPS I need, I need some additional spindles as well.

help me goons.
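To put some rough numbers on the cache-hit question, here's the back-of-envelope sketch I've been staring at. All the figures (per-disk IOPS, the cache ceiling) are placeholder assumptions, not vendor specs, so treat it as a shape-of-the-curve thing only:

```python
# Rough effective-read-IOPS model for a cached array.
# Assumption: every cache miss costs one backend disk op, so the
# array saturates when miss_rate * offered_iops == spindle capacity.

def effective_read_iops(spindles, iops_per_disk, hit_rate, cache_iops=50000):
    """Blend cache hits (served from flash) with misses (spindle-bound)."""
    disk_iops = spindles * iops_per_disk
    miss_rate = 1.0 - hit_rate
    if miss_rate == 0:
        return cache_iops  # pure cache hits; capped by the cache itself
    # Throughput is limited by whichever saturates first.
    return min(cache_iops, disk_iops / miss_rate)

# 20x 10k SAS disks (~140 IOPS each, assumed) at various hit rates:
for hit in (0.0, 0.5, 0.8, 0.9):
    print(hit, round(effective_read_iops(20, 140, hit)))
```

Point being: at a 90% hit rate the cache is worth ~10x the raw spindles, at 50% only 2x, and nobody can tell me the hit rate in advance.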

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

does anyone have experience with an Oracle 7320 HA storage appliance? I am faced with the dilemma of either adding space to my NetApp or purchasing TWO 7320 HA pairs. Both solutions are close to the same price. These are my options:

Two DS4243 shelves each with 24 600GB SAS disks for a 3240 (and equivalent for DR site, only in FC shelves on a 3140)

versus

Two 7320 HA pairs, each with two 512GB read accelerators (one for each head) and two write accelerators (in raid1 in case of failure), and 20x 2.5 900GB 10k sas disks per pair.

The NetApp solution will give me more spindles and more space, but I expect the Oracle solution will give me more IOPS for VDI because of the cache. I can't prove the higher IOPS claim because it really depends on the cache hit rate. I also looked at read accelerators for NetApp, but they were far more expensive than I think they should be, and I don't think cache alone will give me the IOPS I need; I need some additional spindles as well.

help me goons.

Be wary of Oracle appliances. They can generally post very good IOPS numbers, but they aren't terribly feature-rich and they've been known to have stability problems. Given that they're based on ZFS and most of those developers have long since left Oracle, you're left with a marginally supported product that Oracle doesn't seem terribly interested in. I've heard of a myriad of little issues with those boxes, ranging from replication problems, to degraded performance with a large number of snapshots, to very long failover times (think a few minutes versus a few seconds), to drive replacements requiring a reboot before the new drive is visible...

They can certainly post some very solid I/O numbers but it's not really enterprise quality storage.

The other thing to consider is that VDI tends to be more write heavy than read heavy so you want to have enough spindles on the back end to deal with the write workload, and unlike with read caches there is really no way to "accelerate" a write workload other than adding more disk to absorb it. You also have the option of doing your read caching at the server layer with something like Fusion I/O, or asking about NetApp FlashPool instead of FlashCache to see if that brings the price more in line with what you're expecting.
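The write side is easy to sanity-check with the standard RAID write-penalty arithmetic (backend IOPS = reads + penalty × writes). Quick sketch with made-up workload numbers; plug in whatever your assessment tool actually measured:

```python
import math

# Standard per-write backend-op multipliers for common RAID levels.
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(front_iops, write_pct, raid):
    """Front-end IOPS translated into what the spindles actually see."""
    writes = front_iops * write_pct
    reads = front_iops - writes
    return reads + writes * RAID_WRITE_PENALTY[raid]

def spindles_needed(front_iops, write_pct, raid, iops_per_disk=140):
    """Minimum 10k SAS spindle count (assumed ~140 IOPS/disk) to absorb it."""
    return math.ceil(backend_iops(front_iops, write_pct, raid) / iops_per_disk)

# 5000 front-end IOPS at 70% writes (a plausible steady-state VDI mix):
print(backend_iops(5000, 0.7, "raid5"))   # → 15500.0
print(spindles_needed(5000, 0.7, "raid5"))  # → 111
```

That's why a write-heavy VDI load eats spindles no matter how big the read cache is: the 70% write fraction dominates once the RAID penalty is applied.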

evil_bunnY
Apr 2, 2003

Yeah, I'd skip any Oracle ZFS stuff. That ship's long since sailed. When they got acquired and then proceeded to close the ZFS source, pretty much the whole core team up and left.

Alfajor
Jun 10, 2005

The delicious snack cake.
Long shot question:

Environment is
Site A:
EMC Clariion CX4-120 SAN
VM server cluster (3 physical hosts)
Site B:
EMC Clariion CX4-120 SAN, same hardware as one above. It's basically an exact copy, with the same LUNs allocated as at siteA.


In site A, all the VMs are stored in a "VM Storage LUN" on that SAN. Is it OK to set up an asynchronous mirror to site B? I'd like it to run every 24 hours, from A to B.
The biggest concern is whether reading the VM files in the "VM Storage LUN" for copying could cause problems while the VMs are running.
My guess is no, because the VMs read their files from storage once and then don't touch that store while they're running... but I really don't want to just guess on this one. Any ideas?

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Why wouldn't the VMs write to the storage at site A? I don't understand your concern here. SnapView (as part of MirrorView/A) is pretty crappy, but in theory this should work.

Amandyke
Nov 27, 2004

A wha?
I don't quite understand your question either. If you're asking whether, while MirrorView is synchronizing the data from site A to site B, you'll still have access to that data at site A - the answer is yes.

Alfajor
Jun 10, 2005

The delicious snack cake.
It's my boss's concern, I don't think there's anything to worry about, but since I can't find a straight answer anywhere, I can't go back to him and say I'm 100% sure we're doing the right thing.

The concern is: would a VM stored at site A be in any way corrupted because MirrorView is reading the data to copy it to site B?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Alfajor posted:

It's my boss's concern, I don't think there's anything to worry about, but since I can't find a straight answer anywhere, I can't go back to him and say I'm 100% sure we're doing the right thing.

The concern is: would a VM stored at site A be in any way corrupted because MirrorView is reading the data to copy it to site B?
No.

hackedaccount
Sep 28, 2009
To further ease his mind, the same technology has been doing this with big, active databases for a long time (think financial institutions).

Alfajor
Jun 10, 2005

The delicious snack cake.
That's what I thought :) Now if only I could find an official document that said something like that.
Anyways, thanks. At least now I know what's right, and that's a good thing to be sure of.

Amandyke
Nov 27, 2004

A wha?

Alfajor posted:

That's what I thought :) Now if only I could find an official document that said something like that.
Anyways, thanks. At least now I know what's right, and that's a good thing to be sure of.

Here's the white paper that describes how mirrorview works. http://www.emc.com/collateral/hardware/white-papers/h2417-mirrorview-know-cx-series-flare-wp-ldv.pdf


Bitter[HATE]
Jul 28, 2000
I am EDGAR and today is THE BIG DAY.
Now that the Backblaze Pod 3.0 is out, anyone have any experience with using one of them?

http://blog.backblaze.com/2013/02/20/180tb-of-good-vibrations-storage-pod-3-0/

We are looking to back up about 100TB offsite for disaster recovery, and this looks like a really good deal. It works out to around $19,000 for a completed unit, where 45drives.com builds the unit and then you populate it with your own hard drives. They sell a version with redundant power supplies and OS drives. The plan is to seed it here with CrashPlan PROe and send it off to the colo. Anyone seen a better deal for that much storage? We have shitloads of data but tiiiny budgets :(
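Here's the rough $/TB math I'm using to justify it. The drive price and the usable-space overhead are both assumptions (whatever RAID/filesystem config we land on), so swap in your own numbers:

```python
# Back-of-envelope $/usable-TB for a DIY storage pod.
# Assumes the quoted chassis price excludes drives, 4TB drives at
# ~$200 each (circa-2013 pricing), and ~15% lost to RAID/filesystem.

def cost_per_usable_tb(chassis, bays, drive_tb, drive_cost, overhead=0.15):
    """Total build cost divided by post-overhead usable capacity in TB."""
    raw_tb = bays * drive_tb
    usable_tb = raw_tb * (1 - overhead)
    total = chassis + bays * drive_cost
    return total / usable_tb

# 45 bays of 4TB drives on a built chassis:
print(round(cost_per_usable_tb(19000, 45, 4, 200), 2))  # → 183.01
```

Under $200/usable TB is hard to beat for offsite bulk storage, even before haggling on drives.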
