Reinstatement after failover

I have just set up a physical Data Guard environment with a single primary and standby, along with the broker and the observer. Everything works as expected except for one thing, and I don't know whether it is a problem or not; I have read conflicting information. The problem (or possible problem) is that when I issue a "shutdown abort" on the primary, everything works except that I expected the broker to attempt to reinstate the failed primary as a standby. That only happens once I manually "startup mount" the aborted database. Oracle Support told me to issue the "shutdown abort" from the broker rather than in the database, which I did, but I got the same result: the former standby becomes the primary as it should, but the failed database stays shut down. Once I manually mount the database, the broker reinstates it and gets everything back to normal.
Is the broker supposed to try to "startup mount" the failed database? It seems like the answer is no, but support tells me yes. Which is right?

Never mind. Got another response from support and they verified that the way it is working is the way it is expected to work...so no problem.
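For anyone who finds this later, the manual flow that worked here looks roughly like the sketch below. This is illustrative only; 'primdb' and 'stbydb' are placeholder broker database names, and it assumes fast-start failover with an observer and Flashback Database enabled (reinstatement requires it).

-- On the failed former primary, after the observer has failed the configuration over:
SQL> startup mount;                 -- the broker does not restart the aborted instance for you

-- Once the database is mounted, the observer normally reinstates it automatically;
-- it can also be done by hand from DGMGRL connected to the new primary:
DGMGRL> connect sys@stbydb
DGMGRL> show configuration;         -- the failed primary reports ORA-16661 (needs reinstatement)
DGMGRL> reinstate database 'primdb';
DGMGRL> show configuration;         -- both databases should now report SUCCESS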

Similar Messages

  • Server 2012 File Server Cluster Shadow Copies Disappear Some Time After Failover

    Hello,
    I've seen similar questions posted on here before; however, I have yet to find a solution that worked for us, so I'm adding my process in the hope that someone can point out where I went wrong.
    The problem: after failover, shadow copies are only available for a short time on the secondary server. Before the scheduled task to create new shadow copies runs, the existing shadow copies are deleted. When this happens, failing back shows them missing on the primary server as
    well.
    We have a 2-node cluster (hereafter server1 and server2) with a quorum disk. There are 8 disk resources mapped to the cluster via iSCSI: 4 of these disks are set up as storage and the other 4 are currently set up as shadow copy volumes
    for their respective storage volumes.
    Previously we weren't using separate shadow copy volumes and were seeing the same issue described in the topic title. I followed two other topics on here that seemed close and then set up the separate shadow copy volumes; however, that has yet to alleviate the
    issue. These are the two other topics:
    Topic 1: https://social.technet.microsoft.com/Forums/windowsserver/en-US/ba0d2568-53ac-4523-a49e-4e453d14627f/failover-cluster-server-file-server-role-is-clustered-shadow-copies-do-not-seem-to-travel-to?forum=winserverClustering
    Topic 2: https://social.technet.microsoft.com/Forums/windowsserver/en-US/c884c31b-a50e-4c9d-96f3-119e347a61e8/shadow-copies-missing-after-failover-on-2008-r2-cluster
    After reading both of those topics I did the following:
    1) Add the 4 new volumes to the cluster for shadow copies
    2) Made each storage volume dependent on its shadow copy volume in FCM
    3) Went to the currently active node directly and opened "My Computer", then went to the properties of each storage volume and set up shadow copies to go to the respective shadow copy volume drive letter, with the correct size for spacing, etc.
    4) Went back to FCM, right-clicked the corresponding storage volume, chose "Configure Shadow Copy", and set the schedule for 12:00 noon and 5:00 PM.
    5) I noticed that on the nodes the task was created and that the task would failover between the nodes and appeared correct.
    6) Everything appears to failover correctly, all volumes come up, drive letters are same, shadow copy storage settings are the same, and 4 scheduled tasks for shadow copy appear on the current node after failover.
    Thinking everything was set up according to best practice, I did some testing by changing file contents throughout the day and making sure that previous versions were created as scheduled on server1. I then rebooted server1 to simulate failure. Server2
    picked up the role within about 10 seconds and files were available. I checked and I could still see previous versions, created on server1, for the files after failover. Unfortunately that didn't last: the next day, before noon, I was going
    to make more changes to files to ensure that not only could we see the shadow copies created while server1 owned the file server role, but also that the copies created on server2 would be seen on failback. I was disappointed to discover that
    the shadow copies were all gone, and failing back didn't bring them back either.
    Does anyone have any insight into this issue?  I must be missing a switch somewhere or perhaps this isn't even possible with our cluster type based on this: http://technet.microsoft.com/en-us/library/cc779378%28v=ws.10%29.aspx
    Now here's an interesting part: shadow copies on 1 of our 4 volumes have been retained from both nodes throughout the testing, but I can't figure out what makes it different. I do suspect that perhaps the "Disk #s" in Computer Management / Disk
    Management need to be the same between servers? For example, on server1 the disk # for cluster volume 1 might be "Disk4" while on server2 the same volume might be called "Disk7"; however, I think operations like this
    and shadow copy are based on the disk GUID, so perhaps this shouldn't matter.
    Edit: I checked the disk numbers and see no correlation between what I'm seeing with shadow copies and what happens to the numbers. All other items, quotas, etc. fail over and work correctly despite these differences:
    Disk Numbers on Server 1:
    Format: "shadow/storerelation volume = Disk Number"
    aHome storage1 =   16 
    aShared storage2 = 09
    sHome storage3 =   01
    sShared storage4 = 04
    aHome shadow1 =   10
    aShared shadow2 = 11
    sHome shadow3 =   02
    sShared shadow4 = 05
    Disk numbers on Server 2:
    aHome storage1 = 16 (SAME)
    aShared storage2 = 04 (DIFF)
    sHome storage3 = 05 (DIFF)
    sShared storage4 = 08 (DIFF)
    aHome shadow1 = 10 (SAME)
    aShared shadow2 = 11 (SAME)
    sHome shadow3 = 06 (DIFF)
    sShared shadow4 = 09 (DIFF)
    Thanks in advance for your assistance/guidance on this matter!

    Hello Alex,
    Thank you for your reply.  I will go through your questions in order as best I can, though I'm not the backup expert here.
    1) "Did you see any event ID when the VSS fail?
    please offer us more information about your environment, such as what type backup you are using the soft ware based or hard ware VSS device."
    I saw a number of events on inspection.  Interestingly enough, the event ID 60 issues did not occur on the drive where shadow copies did remain after the two reboots.  I'm putting my event notes in a code block to try to preserve formatting/readability.
     I've written down events from both server 1 and 2 in this code block, documenting the first reboot causing the role to move to server 2 and then the second reboot going back to server 1:
    JANUARY 2
    9:34:20 PM - Server 1 - Event ID: 1074 - INFO - Source: User 32 - Standard reboot request from explorer.exe (Initiated by me)
    9:34:21 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Volume Shadow Copy service entered the running state."
    9:34:21 PM - Server 1 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy49
    F:
    T:
    The locale specific resource for the desired message is not present"
    9:34:21 PM - Server 1 - Event ID 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy1
    H:
    V:
    The locale specific resource for the desired message is not present"
    ***The above event repeats with only the number changing, drive letters stay same, citing VolumeShadowCopy# numbers 6, 13, 18, 22, 27, 32, 38, 41, 45, 51,
    9:34:21 PM - Server 1 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy4
    E:
    S:
    The locale specific resource for the desired message is not present"
    ***The above event repeats with only the number changing, drive letters stay same, citing VolumeShadowCopy# numbers 5, 10, 19, 21, 25, 29, 37, 40, 46, 48, 48
    9:34:28 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The NetBackup Legacy Network Service service entered the stopped state."
    9:34:28 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Volume Shadow Copy service entered the stopped state.""
    9:34:29 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The NetBackup Client Service service entered the stopped state."
    9:34:30 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The NetBackup Discovery Framework service entered the stopped state."
    10:44:07 PM - Server 2 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Volume Shadow Copy service entered the running state."
    10:44:08 PM - Server 2 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Microsoft Software Shadow Copy Provider service entered the running state."
    10:45:01 PM - Server 2 - Event ID: 48 - ERROR - Source: bxois - "Target failed to respond in time to a NOP request."
    10:45:01 PM - Server 2 - Event ID: 20 - ERROR - Source: bxois - "Connection to the target was lost. The initiator will attempt to retry the connection."
    10:45:01 PM - Server 2 - Event ID: 153 - WARN - Source: disk - "The IO operation at logical block address 0x146d2c580 for Disk 7 was retried."
    10:45:03 PM - Server 2 - Event ID: 34 - INFO - Source: bxois - "A connection to the target was lost, but Initiator successfully reconnected to the target. Dump data contains the target name."
    JANUARY 3
    At around 2:30 PM I rebooted Server 2, having seen that the shadow copies were missing after the previous failover. Here are the relevant events from the flip back to server 1.
    2:30:34 PM - Server 2 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy24
    F:
    T:
    The locale specific resource for the desired message is not present"
    2:30:34 PM - Server 2 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy23
    E:
    S:
    The locale specific resource for the desired message is not present"
    We are using Symantec NetBackup. The client agent is installed on both server1 and server2. We're backing them up based on the complete drive letter of each storage volume (this makes recovery easier). I believe this is what you would call "software-based
    VSS". We don't have the infrastructure/setup to do hardware-based snapshots. The drives reside on a Compellent SAN mapped to the cluster via iSCSI.
    2) "Confirm the following registry is exist:
    - HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VSS\Settings"
    The key is there, however the DWORD value is not, would that mean that the
    default value is being used at this point?

  • How to restore standby from primary after failover.

    Hi, we have a DR setup, but the standby is fed archives through an application, not through Data Guard.
    So for applying an application patch I can't switch over the database.
    I have to shut down the primary database and fail over to the standby database to apply the patch.
    Can you please tell me how I can restore my standby database to the actual primary after failover?
    Our actual activity will be in 2 weeks' time; we want to apply the patch on Saturday.
    The DR drill is on 8th Nov.

    Your earlier statement "Can you please tell how can I restore my standby database to actual primary after failover"
    means something different from "SO have to restore from cold backup of primary database"
    What you seem to want is a procedure to recreate the Standby from a Backup of the Primary (i.e. FROM the Primary TO the Standby).
    You haven't specified your Oracle version. DG has been available since 9i (it was even backported to 8i), but I will presume that you are using 9i.
    You should follow the same process as used for the first creation of the standby, except that you don't have to repeat some of the setup steps (listener, tnsnames, etc.):
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96653/create_ps.htm#63563
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
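    As a very rough sketch of that recreation process (illustrative only, with placeholder paths; the standby instance, listener and tnsnames setup from the original creation are assumed to still be in place):
    -- On the primary: take a fresh backup of the datafiles (hot or cold), then
    SQL> alter database create standby controlfile as '/tmp/standby01.ctl';
    SQL> alter system archive log current;
    -- Copy the datafile backups, the standby control file and the archived logs to the
    -- standby host, restore the files into place, then on the standby:
    SQL> startup nomount;
    SQL> alter database mount standby database;
    SQL> alter database recover managed standby database disconnect from session;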

  • Server for NFS deletes the partially copied file after failover

    Hello, 
    I have found some strange behaviour in Server for NFS in a Microsoft failover cluster.
    I have a two-node failover cluster (Windows Server 2012 R2) with the Server for NFS role installed on both nodes. I mounted the export from a client (Windows Server 2012 R2) where the Client for NFS role is installed,
    and started copying a large file from the client's local drive to the mounted export.
    During this copy operation I moved the export to the second node (using the Failover Cluster Manager wizard). What I found is that the copy is interrupted with the
    error message "there is a problem accessing the file path", and on retry the copy starts all over again; I mean, if 50% was copied previously, then
    after the retry it starts from 0%.
    I watched the file system events on the second node with procmon, and I found that after the disk gets attached to the second node, a delete operation is called for this file.
    What I need to know is why this delete is called and who makes this call. In the stack of the delete operation I could trace nfssvr.sys, but I am not sure what attributes of the file caused the
    delete.

    Hi,
    It took us some time to create a failover cluster for testing.
    We tried copying a large folder which contains many files, and a large VHD file we created.
    The result is the same each time: it does not restart the copy after failover. Thus, as you said, this is not by design, and it should be an issue on your side which causes the delete and the redo of the copy.
    Is there any application or specific network setting that could terminate a disconnected copy process? Currently I do not have much information about this.
    Edit: I noticed that yours is an NFS share, and in our test we are using an SMB share. We will do the test again to see if there is any difference.

  • Creating standby DB after failover

    Hi,
    I have performed a failover to my standby DB, and now I need to re-create the standby DB for the new production.
    But there is some confusion, because previously, in my production database, the db_name and unique name were the same, say
    test1, while the db_name and db_unique_name for the standby were test1 and test2. I created the standby that way, using test1 and test2 in log_archive_config.
    After the failover the scenario has changed: the production database now has two different values, db_name test1 and db_unique_name test2, and I need to create a standby from this. It confuses me: how will I create the standby, and what will the db_unique_name for the new standby be?
    Please help me...
    regards,

    user8983130 wrote:
    thanks.
    We use db_unique_name in log_archive_config, OK? And in log_archive_dest_state_2 we use a service name; does it need to be the same as the db_unique_name?
    DB_NAME should be the same across the primary and physical standby databases.
    For DB_UNIQUE_NAME, choose a different name for each database.
    Service names, whatever you use in FAL_CLIENT/FAL_SERVER (or) DEST_n, have no relation to DB_NAME/DB_UNIQUE_NAME; it is just the service name, however you want to call it.
    HTH.
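    For illustration, a minimal parameter sketch using the names from this thread (test1/test2 as the existing db_name/db_unique_name, and test3 as an assumed placeholder unique name and service name for the new standby):
    # New primary (the former standby), pfile/spfile:
    db_name            = test1
    db_unique_name     = test2
    log_archive_config = 'dg_config=(test2,test3)'
    log_archive_dest_2 = 'service=test3 valid_for=(online_logfiles,primary_role) db_unique_name=test3'
    # New standby, pfile/spfile:
    db_name            = test1        # db_name stays the same everywhere
    db_unique_name     = test3        # each database gets its own unique name
    log_archive_config = 'dg_config=(test2,test3)'
    fal_server         = test2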

  • Certs did not working after failover

    We had to fail over our primary ACE to the secondary because the primary ACE crashed and we had to replace it. After we failed over, certain certs stopped working.
    To fix this problem I had to remove the cert from the ssl-proxy service and re-add it. Has anybody run into a problem like this? Why would this fail after failover to the secondary ACE?

    Oracle-User wrote:
    Hi All
    We are facing issues with FNDWRR.exe after we fail over to the DR apps servers. We are not able to retrieve concurrent job logs using a web browser. The Apache log file shows a "Premature end of script headers" error message.
    [Sat May 25 19:51:21 2013] [error] [client 10.64.224.134] [ecid: 1369507879:10.166.3.16:3282:0:13,0] Premature end of script headers: /d01/app/ebs/gap/apps_st/comn/webapps/oacore/html/bin/FNDWRR.exe
    Any idea what can be causing this issue?
    Thanks in advance!
    Did AutoConfig complete successfully?
    Please relink FNDWRR.exe manually or via adadmin and check then.
    Thanks,
    Hussein

  • Enqueue replication server does not terminate after failover

    Hi,
    We are trying to set up high availability for the enqueue server, where the enqueue server runs on node A and the ERS runs on node B at all times.
    Whenever the enqueue server is stopped on node A, it automatically fails over to node B, but after replication of the lock table it does not terminate the ERS running on node B. As a result, both the enqueue server and the ERS keep running on the same host (failover node B), which should not be the case.
    We haven't configured polling in this scenario; SAP Note 1018968 describes the same behaviour, however it is applicable only to versions 640 and 700.
    Ideally, when the enqueue server switches to node B, it should terminate the ERS on that node after replication, and the HA software would then take care of restarting it on node A.
    We have ERS version 701; could anyone please let me know if the same behaviour applies to version 701 as well,
    or whether any additional configuration needs to be done to make it work?
    Thanks in advance.
    Cheers !!!
    Ashish

    Hi Naveed,
    Stopping the ERS is supposed to be taken care of by SAP only, not by the HA software.
    Once the ERS stops on node B, a fault is reported, and as a result the HA software will restart the ERS on node A.
    Please refer to the section of SAP Note 1018968 titled 'Enqueue replication server does not terminate after failover':
    "Therefore, the cluster software must only organise the restart of the replication server and does not need to do anything for the shutdown."
    Another blog about the same:
    http://www.symantec.com/connect/blogs/veritas-cluster-server-sap  
    - After the successful lock table takeover, the Enqueue Replication Server will fault on this node (initiated by SAP). Veritas Cluster Server recognizes this failure and initiates a failover to a remaining node to create SAP Enqueue redundancy again. The Enqueue Replication Server will receive the complete Enqueue table from the Enqueue Server (SCS) and later Enqueue lock updates in a synchronous fashion.
    So it is nothing to do with the HA software; it is SAP that should control the ERS on node B.
    Cheers !!!
    Ashish

  • Can I know what is "Commit" after failover to Azure ?

    Can I know what "Commit" means after failover to Azure?
    I want to understand the "Commit" button for protected items.
    SETUP RECOVERY: [ Between an on-premises Hyper-V site and Azure ]
    After failover from the on-premises Hyper-V site to Azure, the protected item shows a "Commit" button.
    The "Commit" job includes a "Prerequisite check" and a "Commit" step.
    Regards,
    Yoshihiro Kawabata

    In ASR, a failover can be thought of as a two-phase activity:
    1) The actual failover, where you bring up the VM in Azure using the latest recovery point available.
    2) Committing the failover to that point.
    Now the question in your mind will be why we have these two phases. The reason is as follows.
    Let's say you have configured your VM to have 24 recovery points with hourly app-consistent snapshots. When you fail over, ASR automatically picks the latest point in time available for failover (say 9:35 AM). If you are not happy with that recovery
    point because of some consistency issue in the application data, you can use the Change Recovery Point button (gesture) in the ASR portal to choose a different recovery point (say an app-consistent snapshot from 9:00 AM that day) to perform the failover.
    Once you are satisfied with the snapshot that is failed over in Azure, you can hit the Commit button. Once you hit the Commit button, you will not be able to change your recovery point anymore.
    Let me know if you have more questions.
    Regards,
    Anoop KV

  • Reinstate the formerly primary db after failover

    We have a Data Guard configuration: Oracle 10g R2, Windows Server 2003, RAC with two nodes, ASM, Data Guard Broker, and the same configuration on both primary and standby. After we fail over to the standby DB, we need to rebuild the former primary DB. I know the steps for using Flashback Database to reinstate the DB; unfortunately, due to performance reasons, we cannot use Flashback Database. Today I read an article with this statement about rebuilding the former primary DB:
    If Flashback Database is not on, the database needs to be built up again from a new level 0 RMAN backup.
    I am just curious whether anyone has done this before, using an RMAN level 0 backup instead of flashback to rebuild the former primary DB. This is what I am thinking:
    1) From new primary, get the scn number
    select to_char(standby_became_primary_scn) from v$database;
    2) On the former primary (now the standby-to-be), use the level 0 backup to restore the DB to this SCN,
    then run the "alter database convert to physical standby;" command in SQL*Plus. Shut down the DB and restart it using real-time redo apply.
    3) From the new primary, run REINSTATE DATABASE (or the equivalent) from DGMGRL.
    As I can only do the failover test next week, I am interested to hear whether someone has actually done this before and can provide me with more guidance on this task.
    Thanks a lot in advance

    The Oracle documentation says:
    You can use any backup copy of the primary database to create the physical standby database, as long as you have the necessary archived redo log files to completely recover the database. Oracle recommends that you use the Recovery Manager utility (RMAN).
    I prefer RMAN, because it does not require shutting down the primary database. With RMAN you can take a level 0 backup, or use the "duplicate ... for standby" command, to create the standby database. In principle you could also use a cold file copy of your database, but with ASM you must use RMAN, as far as I know.
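    For what it's worth, a minimal sketch of the backup-based RMAN duplication route on 10g (illustrative only; 'newprim' and 'oldprim' are placeholder connect strings, and the backup pieces are assumed to be reachable from the former primary host):
    -- On the new primary:
    RMAN> backup incremental level 0 database plus archivelog;
    RMAN> backup current controlfile for standby;
    -- On the former primary host, start the instance NOMOUNT, then:
    RMAN> connect target sys@newprim
    RMAN> connect auxiliary sys@oldprim
    RMAN> duplicate target database for standby dorecover nofilenamecheck;
    -- Finally, re-enable the rebuilt standby in the broker configuration, e.g.:
    DGMGRL> enable database 'oldprim';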

  • SQL 2012 AlwaysOn cluster IP not moving after failover, causing database to be read-only

    SQL Server Cluster Name: SQLDAG01
    SQL Server Cluster IP: 10.0.0.50
    Cluster Listener IP: 10.0.0.60
    Node 1 Name: SQL01
    Node 1 IP: 10.0.0.51
    Node 2 Name: SQL02
    Node 2 IP: 10.0.0.52
    Everything is fine when SQL01 is the primary. When failing over to SQL02, everything looks fine in the dashboard, but for some reason the cluster IP, 10.0.0.50, is stuck on node 1. The databases are configured to provide secondary read access. When executing
    a query on SQLDAG01, I get an error that the database is in read-only mode. Connectivity tests verify that SQLDAG01, 10.0.0.50, connects to SQL01 even though SQL02 is now the primary.
    I've been Googling this for the better part of the day with no luck. Any suggestions? Is there a PowerShell command to force the cluster IP to move to the active node, or something? Also, I'm performing the failover as recommended, from Management Studio connected
    to the secondary node.

    This was the answer: it had been set up to use the cluster name instead of the application name. Whoever installed SharePoint connected it to SBTSQLDAG01 instead of SHAREPOINT01. Once we changed SharePoint to connect to SHAREPOINT01, the failover worked as
    expected. We did have a secondary issue with the ARP cache and had to install the hotfix from http://support.microsoft.com/kb/2582281 to resolve it. One of the SharePoint app servers was failing to
    ping the SQL node after a failover; the ARP entry was stuck pointing to the previous node. This article actually helped a lot in resolving that: http://blog.serverfault.com/2011/05/11/windows-2008-and-broken-arp/
    One thing I did notice is that the SQL failover wizard does not move the cluster groups "Available Storage" and "Cluster Group"; I had to move those through the command line after using the wizard. I'm going to provide the client with a PowerShell script that
    moves all cluster groups when they need to do a manual failover. This also happens to be why the SharePoint issue started: "Cluster Group" is what responds to the cluster name SBTSQLDAG01. Moving that group over to the node that has the active SQL cluster
    group also made it work properly, but using the application name is the correct method.
    Thanks everyone for all your help. Although the nitpicking about terminology really didn't help; that was a pointless argument and we really could have done without it. Yeah, I know 2008 calls it "Failover Cluster Manager" and MSCS is the "2003 term", but
    really, they're basically the same thing and we don't need to derail the conversation because of it. Also, if you look at the screenshot below you can clearly see "AlwaysOn High Availability" in SQL Management Studio. That's what it's called in SQL,
    that's where you do all the work. Trying to tell me it's "not a feature" is wrong, pointless, and asinine, and doesn't get us anywhere.
    Sorry it took so long to get back, I was off the project for a couple weeks while they were resolving some SAN issues that caused the failover to happen in the first place.

  • Message Bridge Destination points to old IP, after failover with HA pair

    Hi
    Weblogic Version in use is 7.0sp7
    We have this Messaging Bridge configured, between two instances of Weblogic server (Each weblogic server exists on Different Weblogic domains)
    Both the weblogic servers run on the same machine.
    So we have configured the message bridge destinations with localhost, instead of an IP or DNS name, viz.
    t3://localhost:8001
    t3://localhost:8020 respectively
    where 8001 and 8020 are the listen ports of the respective weblogic servers.
    This machine is an HA pair, controlled by the VCS cluster server.
    This means that when we fail over from one node of the HA pair to the other, the transition (which also involves the IP change from node A to node B) should ideally be transparent to WebLogic.
    But when it comes to these messaging bridge destinations,
    the messaging bridge resolves the localhost in the message bridge destination to the old node's IP after the failover is done using the VCS cluster server.
    To me it looks like WebLogic is caching the IP details somewhere and probably reusing them, unaware of the fact that the IP of the machine has changed after the failover.
    I tried configuring the message bridge destinations with addresses like t3://<DNSname>:8001 and 8020 respectively, but still it tries to connect to the IP of the old node only.
    I did come across this patch in WebLogic 7.0 SP5, which is pretty close to the issue that I am encountering:
    CR126201 In a multiple-server domain, if a Managed Server was rebooted to use a different address or port
    number, the JTA subsystem failed to update the address information. This would cause the
    following exception when the changed server was rebooted:
    javax.naming.CommunicationException. Root exception is
    java.net.ConnectException: t3://ip_address:port_number:
    Destination unreachable;
    nested exception is: java.net.ConnectException: Connection
    refused; No available router to destination
    Would really appreciate if someone can help with this issue.
    Regards
    Azad

    I'm not quite sure what the correct solution might be, but a possible work-around might be to hard-code the two addresses directly into the bridge URL. For example: t3://host:7010,host:7020. WebLogic will try each address in turn, starting from the first one until one succeeds or all fail.
    Another solution might be to configure an "external listen address" on each server. This is the address that "reconnectable" external clients (such as EJBs or a bridge) will use for reconnect in place of the server's actual listen address.
    Tom

  • Restore/recover after failover to a Logical Standby database

    I have a question about how to recover or restore back to my original environment after I failover to my Logical Standby database.
    My setup is as follows: Oracle version 11.2.0.3, Non-RAC.
    1. A Primary database at one location.
    2. A Physical Standby database at a second location.
    3. A Logical Standby database at a third location.
    All three databases have Flashback Database on.
    All three databases are configured through the Data Guard Broker.
    All three databases have a Fast Recovery Area.
    Suppose I lose my primary database (1) and my physical standby database (2), so I fail over to my only remaining database, the logical standby database (3).
    What type of databases are left after this failover?
    What are my recover/restore/instantiate options?
    The Data Guard Concepts book, section 13.2.2, explains how to bring the old primary into the Data Guard configuration as a new logical standby database.
    How do I get back to the original configuration above (a primary, a physical, and a logical standby) without having to re-create some databases?
    Will Flashback Database through the broker or Cloud Control/Grid Control rewind my databases?

    Suppose I lose my primary database and my physical standby database, so I fail over to my only remaining database, the logical standby database.
    What type of databases are left after this failover?
    What are my recover/restore/instantiate options?
    Then forget about your current primary database; it is out of the network.
    AFAIK
    you will now have only your remaining standby databases (physical and/or logical). You perform the failover of the current standby to the primary role, so you end up with only one primary database, with a new incarnation, and you have to re-create a standby database again.
    The Data Guard Concepts book on 13.2.2 says how to bring the old Primary into the Data Guard Configuration as a new Logical Standby database.
    How do I get back to the original configuration above, a primary, a Physical, and a Logical Standby without having to re-create some databases?
    Will Flashback database thru the Broker or Cloud Control/Grid Control rewind my databases?
    Check this, but I am not much aware of Cloud Control and all, sorry for that. ;-) I'm sure Uwe/Mseberg can answer this ;-)
    http://www.idevelopment.info/data/Oracle/DBA_tips/Data_Guard/DG_49.shtml#Flashing%20Back%20a%20Failed%20Primary%20Database%20into%20a%20Logical%20Standby%20Database
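    As a rough illustration of the flashback-and-reinstate route covered in the link above (a sketch only: 'oldprim' is a placeholder broker database name, Flashback Database must already have been enabled on the failed primary, and converting a failed primary into a logical standby has extra documented steps beyond the physical case):
    -- Mount the failed former primary:
    SQL> startup mount;
    -- Then, from DGMGRL connected to the new primary:
    DGMGRL> show configuration;            -- the failed database is flagged as needing reinstatement
    DGMGRL> reinstate database 'oldprim';  -- the broker flashes it back and converts it to a standby
    DGMGRL> show configuration;            -- should report SUCCESS once reinstatement completes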

  • How to re-build the Production database after failover

    We performed a failover in our environment using the method below. The situation was at its worst: we were not able to bring the production database up, and the only choice left was failover.
    We enabled flashback, created a guaranteed restore point, and then performed the failover.
    SQL> select max(sequence#) from v$log_history;
    MAX(SEQUENCE#)
    9221
    SQL> alter system set db_recovery_file_dest_size=14G;
    System altered.
    SQL> alter system set db_recovery_file_dest='/u01/oradata/flashback';
    System altered.
    SQL> alter database recover managed standby database cancel;
    Database altered.
    SQL> alter database flashback on;
    Database altered.
    SQL> create restore point before_open_standby guarantee flashback database;
    Restore point created.
    SQL> alter database activate standby database;
    Database altered.
    SQL> select database_role from v$database;
    DATABASE_ROLE
    PRIMARY
    SQL> shutdown immediate;
    ORA-01109: database not open
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    SQL> select max(sequence#) from v$log_history;
    MAX(SEQUENCE#)
    9221 (the max log sequence is the same after the failover as well)
    After that, nearly 30 more log sequences were generated, but the numbering restarted from 1.
    Now we need to rebuild the production DB and sync it with the standby. Please help us with the steps and suggest some documents.
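    In outline, the usual options look something like the sketch below (illustrative only; which option applies depends on whether Flashback Database, or at least the required archived logs, are available on the old production side, and <scn> stands for the value returned by the first query):
    -- Option 1: flash the old production DB back and convert it to a standby
    SQL> select to_char(standby_became_primary_scn) from v$database;   -- run on the NEW primary
    -- then on the old production DB:
    SQL> startup mount;
    SQL> flashback database to scn <scn>;
    SQL> alter database convert to physical standby;
    SQL> shutdown immediate;
    SQL> startup mount;
    SQL> alter database recover managed standby database disconnect from session;
    -- Option 2: if flashback is not possible on the old production DB, re-create it as a
    -- standby from an RMAN backup of the new primary (level 0 backup or DUPLICATE ... FOR STANDBY).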

    Hi,
    Please take a look at this http://shivanandarao.wordpress.com/2012/08/28/dataguard-failover/
    SHANOJ     
    If you feel your questions have been answered, then please consider closing your threads as answered by awarding appropriate points rather than leaving them open. Follow the forum etiquette.

  • Outlook not switched over to active DAG member after failover

    I have a 2-server DAG stretched across sites for DR. Both of these servers are multi-role and have the HT, CAS, and Mailbox roles. Everything with the DAG seems to be working fine, as did what turned out to be an accidental failover to the remote
    site. A failback also went fairly smoothly.
    However, I've now discovered that some Outlook clients switched over to the DR server automatically and now they will not switch back. I will come right out and say that I do not have a CAS array; I only learned about the CAS array while
    researching the problem. After refreshing my own mailbox profile, everything swapped back, but obviously I don't want to go around everywhere making that change. I don't even know how many people are still connecting to the DR server. In
    fact, the only problem for the people connecting to the DR server is a little extra latency.
    Obviously, I would like this to work better, and I would like a good way to get the Outlook users swapped back to where they should be right now.

    There is a public folder and it is on the same server as the mailboxes.
    2014-05-05T19:58:17.555Z,239,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00.0312510,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T19:58:17.571Z,239,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T19:58:17.602Z,239,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0781275,,
    2014-05-05T20:22:59.321Z,217,222,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00,LogonId: 0,
    2014-05-05T20:22:59.337Z,217,222,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00.0156255,LogonId: 1,
    2014-05-05T20:22:59.368Z,217,222,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00.0468765,LogonId: 2,
    2014-05-05T20:22:59.400Z,217,222,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,07:43:06.1863315,,
    2014-05-05T20:28:35.067Z,240,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00.0156255,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:28:35.098Z,240,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:28:35.113Z,240,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0625020,,
    2014-05-05T20:28:35.348Z,241,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:28:35.379Z,241,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00.0156255,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:28:35.395Z,241,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0468765,,
    2014-05-05T20:28:42.411Z,242,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:28:42.442Z,242,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00.0156255,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:28:42.457Z,242,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0468765,,
    2014-05-05T20:28:57.708Z,243,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:28:58.005Z,244,0,/O=Company/OU=domain/cn=Recipients/cn=user,,SearchProtocolHost.exe,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:28:58.020Z,244,1,/O=Company/OU=domain/cn=Recipients/cn=user,,SearchProtocolHost.exe,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:28:58.052Z,244,2,/O=Company/OU=domain/cn=Recipients/cn=user,,SearchProtocolHost.exe,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0468765,,
    2014-05-05T20:28:58.677Z,243,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogon,0,00:00:00.9531555,"Logon: Delegate, /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=other user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; LogonId: 0",
    2014-05-05T20:29:05.286Z,243,7,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogon,0,00:00:01.0469085,"Logon: Delegate, /O=Company/OU=domain/cn=Recipients/cn=cjoynt in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; LogonId: 1",
    2014-05-05T20:29:06.552Z,243,27,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogon,0,00:00:00.6718965,"Logon: Delegate, /O=Company/OU=domain/cn=Recipients/cn=cjoynt in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; LogonId: 2",
    2014-05-05T20:33:09.013Z,243,36,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00,LogonId: 0,
    2014-05-05T20:33:09.028Z,243,36,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00.0156255,LogonId: 1,
    2014-05-05T20:33:09.060Z,243,36,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00.0468765,LogonId: 2,
    2014-05-05T20:33:09.091Z,243,36,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:04:11.3830440,,
    2014-05-05T20:33:15.435Z,245,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:33:15.482Z,245,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:33:15.497Z,245,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0625020,,
    2014-05-05T20:33:15.763Z,246,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:33:15.779Z,246,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:33:15.810Z,246,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0468765,,
    2014-05-05T20:33:15.810Z,247,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:33:15.841Z,247,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:33:15.857Z,247,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0468765,,
    2014-05-05T20:34:26.375Z,248,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00.0156255,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:34:26.390Z,248,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:34:26.422Z,248,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0625020,,
    2014-05-05T20:34:26.937Z,249,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:34:26.953Z,249,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:34:26.984Z,249,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0468765,,
    2014-05-05T20:34:27.234Z,250,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:34:27.250Z,250,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:34:27.281Z,250,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0468765,,
    2014-05-05T20:34:28.234Z,251,0,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00.0156255,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:34:29.219Z,251,1,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogon,0,00:00:00.9687810,"Logon: Delegate, /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=other user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; LogonId: 0",
    2014-05-05T20:34:30.250Z,251,2,/O=Company/OU=domain/cn=Recipients/cn=user,,OUTLOOK.EXE,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,DelegateLogon,0,00:00:01.0312830,"Logon: Delegate, /O=Company/OU=domain/cn=Recipients/cn=cjoynt in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; LogonId: 1",
    2014-05-05T20:36:00.971Z,252,0,/O=Company/OU=domain/cn=Recipients/cn=user,,SearchProtocolHost.exe,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:36:00.987Z,252,1,/O=Company/OU=domain/cn=Recipients/cn=user,,SearchProtocolHost.exe,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:36:01.003Z,252,2,/O=Company/OU=domain/cn=Recipients/cn=user,,SearchProtocolHost.exe,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0312510,,
    2014-05-05T20:36:07.909Z,253,0,/O=Company/OU=domain/cn=Recipients/cn=user,,SearchProtocolHost.exe,12.0.6672.5000,Cached,10.1.1.32,fe80::45df:6c5e:aec6:fad1%14,ncacn_ip_tcp,,Connect,0,00:00:00.0156255,"SID=S-1-5-21-2073836326-1490033320-1384523041-3159, Flags=None",
    2014-05-05T20:36:07.925Z,253,1,/O=Company/OU=domain/cn=Recipients/cn=user,,SearchProtocolHost.exe,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,OwnerLogon,1144 (rop::WrongServer),00:00:00,"Logon: Owner, /O=Company/OU=domain/cn=Recipients/cn=user in database Mailbox Database 2063624589 last mounted on Server1.Company.com at 5/2/2014 4:36:10 AM, currently Mounted; Redirected: this server is not in a preferred site for the database, suggested new server: /o=Company/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=Server1",RopHandler: Logon:
    2014-05-05T20:36:07.940Z,253,2,/O=Company/OU=domain/cn=Recipients/cn=user,,SearchProtocolHost.exe,12.0.6672.5000,Cached,,,ncacn_ip_tcp,,Disconnect,0,00:00:00.0312510,,

  • PGW2200 Standby still Platform:OOS after failover

    Pardon me for being a rookie CSC user. We've got an active/standby server setup and had a failover, but the standby never came back up. We've power-cycled (since we didn't have the root password handy at first) and then reinstalled the software. There seem to be some good procedures at: http://www.cisco.com/en/US/products/hw/vcallcon/ps2027/products_tech_note09186a008010ff0a.shtml#prov_fail
    which we've followed, but trying prov-sync & dply from the active didn't work. Also, messing with /etc/init.d/CiscoMGC stop/start with pom.dataSync=yes or no didn't seem to matter. Has anyone seen this "stuck" state in their paired systems? Attached are the gory details of the debugs from the standby while the active did its prov-sync & dply. I did a diff of XECfgParm.dat and things look good there. I also tried loading the new active into our VSPT GUI, but that failed; I was hoping the VSPT back-door approach would sync things back to a proper ACT/STBY config. Also, pings work fine from/to both servers. Any help would be appreciated. Thanks
    godiva mml> rtrv-ne      
       MGC-01 - Media Gateway Controller 2010-11-17 14:42:57.585 EST
    M  RTRV
       "Type:MGC"
       "Hardware platform:sun4u sparc SUNW,Netra-240"
       "Vendor:"Cisco Systems, Inc.""
       "Location:MGC-01 - Media Gateway Controller"
       "Version:"9.3(2)""
       "Platform State:OOS"
    rover mml> rtrv-ne
       MGC-02 - Media Gateway Controller 2010-11-17 14:43:45.213 EST
    M  RTRV
       "Type:MGC"
       "Hardware platform:sun4u sparc SUNW,Netra-240"
       "Vendor:"Cisco Systems, Inc.""
       "Location:MGC-02 - Media Gateway Controller"
       "Version:"9.3(2)""
       "Platform State:ACTIVE"

    Thanks much!  We ended up getting a Cisco engineer involved, but this doc would have got us back on target to the previous config by these means. Part of the original problem was that the standby that turned active did not have the /opt/CiscoMGC/etc/CONFIG_LIB/new directory at all. I copied it over from the other side, which did allow VSPT to pull configs in and prov-sta to be accepted. Though logs and core files are still being looked at, I think assigning a TG to the new OPC caused the crash and also the pointer mismatches when trying to update/pull in other configs. Our SS7 links were down or bouncing after the failover. I know there are docs for new OPC provisioning which include caveats, so I'll have to go back and review. Again, thanks; you nailed it with a quick revert back to a sane db via the manual method.
