what is this
Sep 11, 2001

it is a lemur
Literally take the above kbase article where microsoft says "you need block level storage for the exchange database server", do a find-and-replace on microsoft exchange, and put in your application name.


Print it out, give it to customers and sales people, and walk away a hero as your support calls for all this dumb concurrency, file locking, and other bullshit go to zero.


namaste friends
Sep 18, 2004

by Smythe

adorai posted:

I'm not sure how it is any more of a 'hack' than trying to run a database off of NFS.

I know of at least two large telecoms/outsourcers that standardize their oracle databases on NFS. One of these telecoms runs the billing system for a national utility company using 10 GbE with NFS.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Cultural Imperial posted:

I know of at least two large telecoms/outsourcers that standardize their oracle databases on NFS. One of these telecoms runs the billing system for a national utility company using 10 GbE with NFS.
Oracle specifically supports it. I cannot think of a single other relational database that does, but if you know of one, please enlighten me.

namaste friends
Sep 18, 2004

by Smythe

adorai posted:

Oracle specifically supports it. I cannot think of a single other relational database that does, but if you know of one, please enlighten me.

I believe DB2, MaxDB and Sybase support NFS. That said, my point of contention is that it is not impossible, nor is it unheard of for customers to choose NFS over FC and iscsi as a storage protocol. Please understand I'm not trying to tell you that one should choose NFS over FC or iscsi. All I'm saying is that one shouldn't rule NFS out as a 'hack' for reasons which I've previously stated in this thread. I think it all depends on the customer's requirements and I don't presume to understand individual clients' decision making processes other than I can only assume that they are making competent decisions based upon clear requirements as dictated by their business.

what is this
Sep 11, 2001

it is a lemur

Cultural Imperial posted:

I know of at least two large telecoms/outsourcers that standardize their oracle databases on NFS. One of these telecoms runs the billing system for a national utility company using 10 GbE with NFS.

Oracle (1) only supports it if it's a validated, controlled NFS with all the bits and pieces being something they support, (2) charges obscene amounts of money, (3) is not the system in question here, and (4) is different from basically all other RDBMSes in this regard.

I also pointed out oracle as the exception earlier in the discussion. In any case it's neither here nor there. (btw I'm not experienced with sybase but a quick google reveals "Ensure that all database devices, including quorum devices, are located on raw partitions. Do not use the Network File System (NFS).")

The point is that for the issues experienced in this case, NFS is the root cause. Block level storage 100% solves the problems of caching, file locking, and so on. Moving to a SAN will solve the problems he is experiencing, but he keeps skipping over this suggestion and instead getting into arguments about whether NFS is a file system or a protocol for exposing files over a network.


I think many of us are still interested in knowing which RDBMS is being used here...
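For a sense of what that "validated, controlled NFS" looks like in practice, here is a minimal sketch of the sort of tightly pinned-down mount options Oracle's docs describe for datafiles over kernel NFS on Linux. The filer name and paths are made-up placeholders, and the exact option list varies by platform and Oracle version, so treat this as illustrative only:

code:

# Illustrative only: typical Oracle-documented NFS mount options for datafiles
# on Linux over kernel NFS. "filer01" and both paths are placeholders.
OPTS="rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0"
mount -t nfs -o "$OPTS" filer01:/vol/oradata /u02/oradata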

what is this
Sep 11, 2001

it is a lemur
also for DB2 a quick google (DB2 NFS) shows as the first result:


quote:

The installation of DB2® products on an NFS (Network File System) mounted directory is not recommended. Running DB2 products on an NFS-mounted directory ...

and googling about MaxDB (MaxDB NFS) reveals lots of forum posts about people having problems with NFS, and recommendations that they switch to iSCSI or FC.

what is this fucked around with this message at 07:43 on Dec 21, 2010

conntrack
Aug 8, 2003

by angerbeet
Like was said before, make two flavours of the app.

One "enterprise" flavour that requires FC/iSCSI to get a valid support contract.

Make the other one a virtual appliance with a preconfigured Linux install. Tell the customers to use whatever storage they want and take it up with vmware/xen/jebus if there are problems that don't come from your virtualised server.

It would also give you a lot more buzzwords for marketing.

Syano
Jul 13, 2005
What do you guys think of the HP 4000 G2 series arrays? A vendor is trying to close some business before week's end and he has offered me a really good deal on a 4300 7.2TB array. I wasn't even completely sure I was going to buy that particular array, but the price certainly is right. I just want to make sure I'm not buying some horrible kit.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

It's their LeftHand stuff. They're moving away from that name.

I like it, no issues with it at all. It has some limitations that other kit doesn't have, but if you understand how it works and it fits your environment, go for it.

Syano
Jul 13, 2005
What limitations would you say it has?

For reference our only array in production at the moment is a Dell powervault 3200i

Intrepid00
Nov 10, 2003

I'm tired of the PM’s asking if I actually poisoned kittens, instead look at these boobies.

Syano posted:

What limitations would you say it has?

For reference our only array in production at the moment is a Dell powervault 3200i

You'll be moving up in features for sure. One limitation that comes to mind when using our LeftHand boxes is that while you can clone a volume, you can never split it from the point where you cloned it. The two volumes are forever attached by a common snapshot. I'm actually upgrading to 9.x, which just came out, but I'm not even sure what's new about it besides upgrades being totally automatic (the process, that is; you still have to start it).

I'll find out more about 9, but I needed to get them all in the cluster to match on version number. I think I was told before that it brings MPIO improvements.

The box you have now really uses different SAN storage methods from the LeftHand. Take a closer look at the P4000 boxes (LeftHand); you'll like them over what you've got.

Cultural Imperial posted:

I know of at least two large telecoms/outsourcers that standardize their oracle databases on NFS. One of these telecoms runs the billing system for a national utility company using 10 GbE with NFS.

:xd:

Intrepid00 fucked around with this message at 21:54 on Jan 1, 2011

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
I'd be interested to know those P4000 limitations too.

I've been lurking here for a while, but now I'm getting ready to close on a $100k+ solution and I'm curious what others might say about my offers: Dell vs HP vs IBM...
The original idea was to keep my fast (~1.4GB/s r/w over 4x FC4) ~20TB storage as tier1 and create one slow but large tier2 (preferably SATA/iSCSI) and one very fast but small tier0 (utilizing some SSD/memory-based solution).

My RFP called for at least 40-50TB *usable* space (after all RAID, snapshots etc accounted for), preferably iSCSI storage for tier2 and 2x 640GB Fusion-io Duo cards or comparable solutions for tier0 and the following two MS clusters:
- one dual-node Server 2008 R2 Ent failover cluster (CSV) running Hyper-V (dual 12-core Opterons w/ 64GB memory per node)
- one dual-node Storage Server 2008 R2 Ent running all file shares (CIFS), namespaces (DFS-N), everything shared off all storage tiers while, of course, taking care of necessary data tiering (probably involving FCI + my scripting kung-fu unless vendor supplies something useful.)
Backend should be all 10GbE (both iSCSI and regular data) and completely redundant (dual-dual cards, dual switches etc.)
RFP also calls for backup option with a smaller (eg 24-slot) tape library w/ LTO5 and preferably Commvault - I passionately hate BackupExec - but it'll probably end up as a simple money question ie effect on the bottom line.
All this should come from one vendor - we don't have a large team, but we do a lot of high-end 3D work as well as 2D, and I don't want to deal with any finger-pointing if something doesn't work as promised. (OEM'd gear is OK, of course, as long as it's covered by the regular support.)

+ one added wish: I'd love to have an option at a later point to hook up a few stations from my 3D group via 10GBASE-T.

IBM submitted a very good solution with certain basics still missing/being worked out: it's built on the new Storwize V7000, which is 0% Storwize but 70% SVC (the sw that runs on the box) + 15% DS8xxx (EasyTier) + 5% XIV (nice GUI) + 10% DS5xxx (smaller-sized but still controllers + enclosures storage). The biggest missing item is 10GbE iSCSI: supposedly the chipset is there - or so I was told - but it needs to be validated and then a firmware upgrade can enable it...?
The huge advantage here is the SVC which can virtualize all my existing and future storage and EasyTier which would also solve all my data-tiering woes.
This is also the downside: absolutely ridiculous license fees and I'm not entirely convinced about this extra layer between my storage and OSes... does anyone know more about this SVC thing, how does it deal with block sizes etc?
Another downside is that not only are they almost 50% more expensive, but the included storage space is by far the smallest, one-fourth/one-fifth of any competing offer.


HP obviously offers their (post-LeftHand, so it's 100% HP) StorageWorks scale-out capacity P4500 G2 w/ DL385 G7s and their new X3000-series NAS heads, but they have absolutely no data tiering as far as I know, not even within their P4000 boxes (the offer comes with 5 boxes). This won't be a problem for now but could be if I have to add a new system in 2-3 years' time (the quoted 120TB raw should hold out for 2-3 years in normal circumstances). OTOH they offer decent sync replication, which could make a difference if I have to build out a new floor in the next building... they have the virtual VSA though, which scales up to 10TB, but I'm reluctant to think of it as anything more than a DR solution or a dev platform for my R&D guys.
HP offers great edge switches; I could get two 5406s w/ modules for around $10-12k - no 10GBASE-T whatsoever, though.

Dell offers their EQL PS6510E 'Sumo' unit w/ R715s and NX3000-line NASes, but so far their pricing is horrible. Last week I almost dropped them, but at the last moment they came back promising to match HP - I only gave them one item as an idea of what kind of discounts to aim for - so I'm assuming they will. Even if they do, they will still lack sync replication, they only offer data tiering within EQL, and even async replication only works to another EQL unit (though it could be any unit, incl. the cheapest PS4000E). They're also behind HP in only now readying their NX-line NAS with Storage Server 2008 R2 Ent... did I mention they still do not offer remote VM snapshotting for Hyper-V, unlike HP?
On the other hand, EQL boxes look much sturdier than P4000 ones and it's a single box w/ 48 spindles... that's higher individual IOPS and no need to waste bandwidth and storage space on replication (though they are pretty much even at around ~7xTB usable if I take ~30% replication on HP).
Also, they just bought Compellent and they are readying their Ocarina-enabled firmware, or perhaps an appliance (remember their purchase of Exanet?), which means pretty good future upgrades for their entire storage line.
Dell can also throw in a KACE box, which would make me soften up on the lateness of the new NAS heads - a messy return and exchange when the new one is out - and they seem willing to sell and support Commvault without my buying a dedicated backup box.
They have a pretty cool 24-port RJ45 10GbE switch but no real competitors to a pair of 5406 + modules.

I must add that I tested a Dell PS6010XV demo unit for two weeks and I was impressed by the simplicity - I'm also testing a VSA in Hyper-V, and while as I understand it SANiQ 9.0 introduced a lot of positive changes, it still seems much more convoluted/complex to manage than SANHQ from Dell, even when it comes to basics (how the hell do you configure an SSL mail account for reporting?)

I welcome comments from all admins and architects, with or without experience etc - thanks!

Note: it's going to be cross-posted in another server-related forum.

szlevi fucked around with this message at 23:55 on Jan 1, 2011

Maneki Neko
Oct 27, 2000

Intrepid00 posted:

:xd:

Not sure what that's about, as even Oracle themselves runs their poo poo on NFS.

Nomex
Jul 17, 2002

Flame retarded.

szlevi posted:

Words

You should take a look at the 3Par T series arrays. They were recently purchased by HP, so they would still fit into your single vendor requirement. You can mix SSD, FC and SATA disks, they can be equipped to do both FC and iSCSI and they do autonomic storage tiering.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Nomex posted:

You should take a look at the 3Par T series arrays. They were recently purchased by HP, so they would still fit into your single vendor requirement. You can mix SSD, FC and SATA disks, they can be equipped to do both FC and iSCSI and they do autonomic storage tiering.

True, but I already have frame-based storage, my DDN S2A9xx-series system, and I'm trying to avoid lining up another one. Also I don't think they would sell it at the same price they sell their 120TB scale-out P4500... :)

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

RE: P4000 limitations, I wouldn't really call them limitations, it's just the way the boxes work. I had some engineers looking for a SAN, but they wanted granular control of the disks and what LUN etc they went to. They wanted to say disks 0,1,2,3 are part of this, and 4-10 are here, etc. The LeftHand boxes don't do that. They spread the data out automagically. The network RAID also immediately halves your usable storage if you only run 2 nodes.

I actually like them a lot, just plug in a few more nodes and let the software take care of everything.
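To put rough numbers on the two-node point, here is a back-of-the-envelope sketch assuming Network RAID-10 (every block mirrored onto the second node) and using the 7.2TB starter-kit figure from earlier in the thread as the example raw capacity; per-node hardware RAID overhead is ignored:

code:

#!/bin/sh
# Illustrative capacity math for a two-node cluster running Network RAID-10.
# 7.2TB is just the starter-kit figure used as an example above.
RAW_TB=7.2
# Every block is mirrored on the other node, so usable space is roughly raw / 2.
echo "usable ~ $(echo "$RAW_TB / 2" | bc -l) TB"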

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

skipdogg posted:

I had some engineers looking for a SAN, but they wanted granular control of the disks and what LUN etc they went to. They wanted to say disks 0,1,2,3 are part of this, and 4-10 are here, etc. The LeftHand boxes don't do that. They spread the data out automagically.

Equallogic also works this way.

Maneki Neko
Oct 27, 2000

Anyone using the commercial version of NexentaStor?

We've been looking around at some options for a side storage project and this looks like a fairly decent hands-off option vs. rolling our own software stack for a cheapo giant pile of storage.

Mainly curious if people are happy with the support they're getting, etc. Also sounds like potentially a less iffy future now (assuming that OpenIndiana doesn't crumble).

Maneki Neko fucked around with this message at 18:52 on Jan 3, 2011

Nebulis01
Dec 30, 2003
Technical Support Ninny

three posted:

Equallogic also works this way.

Not the P4000 series. I wish it did, but apparently the P6x00 series does.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

three posted:

Equallogic also works this way.

Well, it's one RAID mode per box, true but you can switch off the autopilot and map out your LUNs/volumes manually IIRC - I don't think P4000-series boxes allow anything like that...?

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Nebulis01 posted:

Not the P4000 series. I wish it did, but apparently the P6x00 series does.

AFAIK all EQL boxes, regardless of generation or model numbers, run exactly the same firmware, same features, same everything - are you sure you are not in some manual mode?

Nebulis01
Dec 30, 2003
Technical Support Ninny

szlevi posted:

AFAIK all EQL boxes, regardless of generation or model numbers, run exactly the same firmware, same features, same everything - are you sure you are not in some manual mode?

Well it's quite possible I'm retarded. It's my first SAN. I put a ticket into EQL support and they said it couldn't be done. I can't find anything in the documentation to let me do it either.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
I'm confused. Are we talking about Equallogic PS4000s?

I thought the 4000 series was almost the exact same as the 6000 series except that you could only have 2 members in a group (or something to that effect)?

(And this isn't a real limitation, but one imposed by Dell because they don't want people buying the cheaper versions and putting them in huge groups.)

Nebulis01
Dec 30, 2003
Technical Support Ninny
Yes, the box I'm talking about is the EqualLogic PS4000X (sorry about the make/model confusion)

It's limited to 2 members in a group, 256 Volumes, 2048 Snapshots, 128 Snapshots/volume, 32 volumes for replication, and 128 replicas per volume. The PS6x00 series has a substantial increase in all of those metrics.

Nebulis01 fucked around with this message at 20:30 on Jan 3, 2011

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Nebulis01 posted:

Yes, the box I'm talking about is the EqualLogic PS4000X (sorry about the make/model confusion)

It's limited to 2 members in a group, 256 Volumes, 2048 Snapshots, 128 Snapshots/volume, 32 volumes for replication, and 128 replicas per volume. The PS6x00 series has a substantial increase in all of those metrics.

Correct, but these are all artificial scaling limits imposed by Dell, as three correctly noted above - there's no difference at RAID/LUN levels etc. One RAID type per box and volumes are handled internally (unless you turn this auto mode off manually).

Intrepid00
Nov 10, 2003

I'm tired of the PM’s asking if I actually poisoned kittens, instead look at these boobies.

Maneki Neko posted:

Not sure what that's about, as even Oracle themselves runs their poo poo on NFS.

We are talking about network file storage vs iSCSI right?

Boner Buffet
Feb 16, 2006
Has anyone here had any experience with HP's VSA (virtual storage appliance)? We're probably going to move forward with a basic P4300 7.2TB SAN. The guy who is the admin for the local vo-tech school has the same unit, but has two and replicates data from the active to the passive each night at a DR site. I'd love to do that, but it was hard enough to convince everyone on one SAN; two would be close to impossible.

A VSA and an HP server with a bunch of storage seem like a cheaper option. The active SAN will host VMware datastores and half a dozen iSCSI LUNs for non-VMware HA cluster resources.

I'm not looking too much at performance for a DR site, because in the case of a disaster, we're only looking to bring certain services/servers online, not everything.

what is this
Sep 11, 2001

it is a lemur

Intrepid00 posted:

We are talking about network file storage vs iSCSI right?

Yes, and not to rehash this discussion for the 20th time, but it's more like "only oracle supports hosting database filesystem writes over NFS." And even then it's only in their rigorously controlled environment where they validate all the parts of the NFS chain.

For almost every database you want block level storage, whether that be an iSCSI LUN presented to the OS, fibre channel, or direct attached storage.
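For anyone wondering what "an iSCSI LUN presented to the OS" involves on the host side, here is a minimal open-iscsi sketch on Linux. The portal address, target IQN, device name and mount point are made-up placeholders, and a real setup would add multipathing, persistent logins and sensible device naming:

code:

# Minimal sketch: present an iSCSI LUN as local block storage for a database.
# Portal IP, IQN, /dev/sdb and the mount point are illustrative placeholders.
iscsiadm -m discovery -t sendtargets -p 192.168.10.50   # ask the array which targets it offers
iscsiadm -m node -T iqn.2001-05.com.example:dbvol01 -p 192.168.10.50 --login
mkfs.xfs /dev/sdb                                        # the LUN shows up as a plain block device
mkdir -p /u01/dbdata
mount /dev/sdb /u01/dbdata                               # database files sit on a local filesystem, no NFS in the path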

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

InferiorWang posted:

Has anyone here had any experience with HP's VSA (virtual storage appliance)? We're probably going to move forward with a basic P4300 7.2TB SAN. The guy who is the admin for the local vo-tech school has the same unit, but has two and replicates data from the active to the passive each night at a DR site. I'd love to do that, but it was hard enough to convince everyone on one SAN; two would be close to impossible.

He uses three because you need at least 3 nodes for HA - in his case the VSA is acting as the FOM, taking care of the quorum (rerouting all volumes to one node when the other one goes down).

quote:

A VSA and an HP server with a bunch of storage seem like a cheaper option. The active SAN will host VMware datastores and half a dozen iSCSI LUNs for non-VMware HA cluster resources.

Not sure how important HA is for you but again, you cannot get HA working with two nodes - you will need a third node, at least with a virtual FOM running.

quote:

I'm not looking too much at performance for a DR site, because in the case of a disaster, we're only looking to bring certain services/servers online, not everything.

Based on my limited testing of the VSA it should work fine for you - it's a fully working P4000-series node in a virtual form except it's limited in terms of capacity (10TB per VSA I believe) and sizing/striping (2TB per volume /X number of disks per RAID5) etc.

Boner Buffet
Feb 16, 2006
Sorry, I wasn't clear with my post. He has two units altogether. One active, and one that has the data replicated to it once a day. It's not true HA but rather an offsite copy of the LUNs (volumes in LeftHand-ese, I believe) in the event of a true disaster or data loss. This is also accepting that we'd lose whatever data was created or changed between the catastrophic data loss and the last replication.

The piece I really need to dig into is how I would then make that DR site live. I'm thinking a third VMWare host would be at the DR site and we'd bring the most critical virtual machines online. I work at a K-12 school district so downtime doesn't mean millions of dollars in business being lost every second. Automatic failover is not a concern. However, having a relatively straightforward and documented plan to cut over to a DR site is what I'm looking for, even if that cut over takes a couple of hours to get going.

Thanks for the info though. I wouldn't expect any single volume to be larger than 2TB, or combined to be more than 10TB, seeing as we're only going with the 7.2TB starter kit anyway on the primary SAN.

Just because, here's a crappy visio diagram of what I'm thinking of:

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

InferiorWang posted:

Just because, here's a crappy visio diagram of what I'm thinking of:



You better seal that end pipe, or all your data's going to run into the floor of your DR site.

Boner Buffet
Feb 16, 2006
it's ok because the floor has a drain and sump pump.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

InferiorWang posted:

Sorry, I wasn't clear with my post. He has two units altogether. One active, and one that has the data replicated to it once a day. It's not true HA but rather an offsite copy of the LUNs (volumes in LeftHand-ese, I believe) in the event of a true disaster or data loss. This is also accepting that we'd lose whatever data was created or changed between the catastrophic data loss and the last replication.

I see - FYI it's not only losing data but staying offline as well if you don't have HA...

quote:

The piece I really need to dig into is how I would then make that DR site live.

Errr... it's called HA, you might have heard it somewhere... ;)

quote:

I'm thinking a third VMWare host would be at the DR site and we'd bring the most critical virtual machines online. I work at a K-12 school district so downtime doesn't mean millions of dollars in business being lost every second. Automatic failover is not a concern. However, having a relatively straightforward and documented plan to cut over to a DR site is what I'm looking for, even if that cut over takes a couple of hours to get going.

It's not just the hours but the PITB of re-linking everything, only to link it back once the primary site is up again - again, I think your best bet is to set up another VM running the FOM (it's free, AFAIR) so you can have HA between your P4300 and the remote VSA... I'm not an actual P4000-series user (yet) so make sure you talk to someone from HP about this setup.

quote:

Thanks for the info though. I wouldn't expect any single volume to be larger than 2TB, or combined to be more than 10TB, seeing as we're only going with the 7.2TB starter kit anyway on the primary SAN.

To be sure check my facts on ITRC, I'm relying on vague memories...

quote:

Just because, here's a crappy visio diagram of what I'm thinking of:



What's really sad is that Visio 2010 still sports the same god-awful graphics...

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

szlevi posted:

Errr... it's called HA, you might have heard it somewhere... ;)
HA and DR are completely different concepts.

If you want an easy, drop-in, push-one-button DR solution in a VMware environment with all VMDKs, you would probably use SRM. It's like $3k a proc.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

adorai posted:

HA and DR are completely different concepts.

If you want an easy, drop-in, push-one-button DR solution in a VMware environment with all VMDKs, you would probably use SRM. It's like $3k a proc.

It's Per-VM now, I believe.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

three posted:

It's Per-VM now, I believe.
after I posted that I remembered that fact, but chose not to edit, because gently caress VMware for that one.

Boner Buffet
Feb 16, 2006
I plan on migrating our old NetWare-based cluster to Linux, so in addition to serving up datastores for VMware, the SAN will be offering up iSCSI LUNs directly to those virtual-machine-based cluster nodes, so I don't think that would work for me.

Also, we're probably going with the Essentials Plus bundle from VMware, which might not have all the bells and whistles including SRM, but I have to dig into it to be sure.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

InferiorWang posted:

Also, we're probably going with the Essentials Plus bundle from VMware, which might not have all the bells and whistles including SRM, but I have to dig into it to be sure.
It definitely doesn't come with SRM; that's an entirely different product. If you are using LUNs connected up to VMs, you are going to require one of the following to achieve your DR plan:

A) Scripting
B) Built in SAN utilities (such as vFilers on NetApp)
C) manual intervention

Boner Buffet
Feb 16, 2006
Thanks for the feedback. Manual intervention is fine assuming it's something that can be properly documented and done by someone trained, again, even if it takes an hour or two to get up and running.

There would have to be a real disaster, like the building burning down or being pulled into the gamma quadrant through a wormhole, for me to go to the redundant box, which is why I'm looking at the VSA as a cheaper alternative to a second P4300 which may never have to be put into production.


1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

InferiorWang posted:

The piece I really need to dig into is how I would then make that DR site live. I'm thinking a third VMWare host would be at the DR site and we'd bring the most critical virtual machines online. I work at a K-12 school district so downtime doesn't mean millions of dollars in business being lost every second. Automatic failover is not a concern. However, having a relatively straightforward and documented plan to cut over to a DR site is what I'm looking for, even if that cut over takes a couple of hours to get going.


I think you're walking down the right road for your budget. I assume the DR site is probably running free ESXi, so you'll have to do a couple of things:

1. Enable sshd/remote support shell on the ESXi server
2. Create yourself a handy recovery script to bring the volumes up! Now you have something you can type in, then go get a cup of coffee and come back.

Things you'll want to know how to use in the script:

Check with the VSA to figure out how to present your replicated LUN to your server. It probably entails breaking replication and making it writable. Ideally this can be scripted via the CLI.

Next some VMware specific stuff:
'esxcfg-volumes' - This command lets you tell VMware it's okay to mount a replicated volume. You'll want to let it resignature the LUNs in question.

'esxcfg-rescan' - Use this to rescan the iSCSI initiator after you present the LUN and allow for a re-sign (I don't recall if this is 100% required in 4.x anymore; the last script I wrote for this was for 3.5).

Since it's ESXi, you're going to want to fiddle about in 'vmware-vim-cmd' (this is a way to get into the VMware management agent via the CLI) and feed it arguments via a find of the .vmx files in your new datastore after the re-scan.

At this point you can actually use vmware-vim-cmd to power everything on for you and answer the question "hey did you move this or copy this?" (you probably just want to say you moved it.)

I had to build something like this for one of my customers whose outsourced IT is probably worse than herding cats. I use ssh key authentication for everything, and all some guy has to do is run "StartDR.sh" and the script does everything he needs.
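A rough sketch of what such a StartDR.sh could look like on ESXi 4.x, purely to illustrate the commands described above: the HBA name, VMFS label and datastore path are placeholders, the VSA-side "break replication and make it writable" step is array-specific and left out, and a resignatured datastore usually comes back with a snap-... prefix, so the glob would need adjusting in practice.

code:

#!/bin/sh
# StartDR.sh -- illustrative sketch only; all names below are placeholders.

# 1. Rescan the software iSCSI adapter so the now-writable replica LUN is seen.
esxcfg-rescan vmhba33

# 2. List the volumes ESXi sees as snapshots/replicas, then let it
#    resignature and mount the replica VMFS volume.
esxcfg-volumes -l
esxcfg-volumes -r DR_DATASTORE

# 3. Register every .vmx on the recovered datastore and power the VMs on.
#    (vim-cmd is the same management-agent CLI referred to above as vmware-vim-cmd.)
for VMX in /vmfs/volumes/DR_DATASTORE/*/*.vmx; do
    VMID=$(vim-cmd solo/registervm "$VMX")
    vim-cmd vmsvc/power.on "$VMID"
    # If ESXi raises the "did you move it or copy it?" question, answer it
    # via vim-cmd vmsvc/message -- "moved" is usually the right choice.
done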
