RAC Standby database sync

Hi all,
I have a two-node RAC database, and it also has a standby database on another standalone server with ASM.
How can I check whether these two databases are in sync? The instances show different sequence numbers.
Please let me know.
thanks

You can check the CURRENT_SCN of the database from V$DATABASE on both the standby and the production database.
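For example, a minimal sketch of that comparison (run as SYSDBA on each database; the &standby_scn substitution variable is only an illustration, used on the primary to turn the standby's SCN into a rough timestamp):

-- Run on the primary and on the standby, then compare the two values
SELECT current_scn FROM v$database;

-- Optional, on the (open) primary: approximate point in time the standby has reached
SELECT scn_to_timestamp(&standby_scn) AS standby_as_of FROM dual;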

Similar Messages

  • 2 Node RAC standby database

    Hi,
    I am planning to create a 2-node RAC physical standby database, which uses ASM, from a 2-node RAC production database. I am familiar with the RMAN duplicate (11g) network-based backup method for creating a physical standby database, but I am not sure how this works when creating a 2-node RAC standby with ASM.
    Could anyone help me with the document ID that describes how to create a RAC standby database using ASM as storage?
    Source:
    OS: Linux 64 bit
    Oracle: 11.2.0.1
    RAC/ASM: yes/yes
    Target:
    OS: Linux 64 bit
    Oracle: 11.2.0.1
    RAC/ASM: yes/yes
    Thanks in advance!!
    Regards,

    Hi
    You can follow these steps (a hedged command sketch follows at the end of this reply):
    1. Install CRS.
    2. Create the ASM instance at the standby site.
    3. Prepare the parameter file for the standby database.
    4. Start the database instance on one node in NOMOUNT state.
    5. Create a standby control file at the primary database.
    6. Copy the control file created at the primary to the standby site.
    7. Copy the control file onto the file system and modify the control file location in init<SID>.ora.
    8. Start the standby database in MOUNT state.
    9. Create a text backup of the control file (backup to trace) at the primary site.
    10. Shut down the standby database, change the location of the control file to inside ASM, and bring it to NOMOUNT state.
    11. Re-create the control file from the text backup of the control file.
    12. Back up the primary database and copy the backup to the standby site.
    13. Use RMAN to place the datafiles inside ASM, if you are not already using RMAN for backup and restore.
    14. Start the standby database in MOUNT state and recover through MRP or a foreground process.
    15. Use SRVCTL to register the ASM instances, the database, and the database instances.
    For a standby database of a RAC primary, only one node can be up, so remember to start the instance on one node only.
    Hope this helps.
    Tinku
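    A minimal, hedged sketch of the key commands behind steps 4-8 and 14 above (paths and names are placeholders, not taken from the original post):
    -- On the primary: create a standby controlfile and a parameter file for the standby
    ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';
    CREATE PFILE='/tmp/initSTBY.ora' FROM SPFILE;
    -- On one standby node only: start the instance and mount the standby controlfile
    STARTUP NOMOUNT PFILE='/tmp/initSTBY.ora';
    ALTER DATABASE MOUNT STANDBY DATABASE;
    -- Once the datafiles are restored into ASM with RMAN: start managed recovery
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;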

  • RAC Standby Database rebuild

    Hi
    Current 10.2.0.4 setup with Data Guard Broker (Unix, no OEM setup):
    RAC primary: 2-node cluster
    RAC standby: 2-node cluster
    The RAC standby database is going to be opened read/write.
    It will then require a rebuild of the standby, i.e. delete the database files and duplicate the database using RMAN.
    1) My question is: how do I rebuild a RAC standby database with an existing, enabled DG Broker configuration?
    2) Is there any documentation on the steps to follow?
    3) Should the DG Broker configuration be dropped or just disabled, and if so, at which stage?
    Configuration changes on the RAC primary should be kept to a minimum, as it is in use and no outage is expected there.
    Regards
    Me

    Have you given any thought to using Flashback Database (depending on how long your standby will be open read/write)? Before you open it read/write, you could copy your dr*.dat files (the broker configuration files), stop the broker, and create a guaranteed restore point. After you have finished and are ready to convert back to a standby, you would flash the database back to the restore point, convert it back to a physical standby, restart the broker, and shut down. Restore the dr*.dat files to their location and then start up the standby. From there it should fetch the logs from the time the broker was down and catch back up. A hedged command sketch is shown below.
    If you would rather rebuild, again you would need to take the broker out of the equation and use RMAN to create a clone for the standby; once it is in place you would reconfigure the broker (you may be able to simply restore the dr*.dat files from before the open operation, but I am not sure).
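    A hedged sketch of the flashback approach above (the restore point name is illustrative only; copying the dr*.dat files is an OS-level step not shown):
    -- Before opening read/write: stop redo apply and create a guaranteed restore point
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    CREATE RESTORE POINT before_open GUARANTEE FLASHBACK DATABASE;
    -- ... standby is opened and used read/write ...
    -- When finished: flash back and convert back to a physical standby
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    FLASHBACK DATABASE TO RESTORE POINT before_open;
    ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    DROP RESTORE POINT before_open;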

  • Can I use data guard to create a RAC standby database for a non RAC primary

    Hi,
    we need to convert our production database to RAC, but the normal methods would mean a long outage. Is it possible to create a standby as a single-node RAC database and, when ready, do a graceful switchover to the standby database and open it for business? The next step would be to create another RAC node from this on the original server.
    The servers are already cluster-aware, using ASM etc.
    Oracle 10.2

    Yes, you will be able to set up a RAC standby for a non-RAC primary. The primary just needs an available destination for redo shipping; it does not matter whether that destination is RAC-enabled or not. And of course, since you are on 10.2, only one node will be running MRP anyway, and that instance runs in standby mount mode.
    You may follow the sequence below (a hedged command sketch for steps 5 and 6 follows this reply):
    1. Set up a new standby as RAC-enabled.
    2. Perform a switchover.
    3. Shut down the old primary (which is the standby now).
    4. Install CRS and the RDBMS on the old primary and its new node.
    5. Set cluster_database=TRUE and cluster_database_instances=<required number of instances>.
        With the above modification, mount the standby database in standby mode and start MRP.
    6. Introduce the database and instances to the OCR using the SRVCTL add commands.
    7. Once your database is synchronized with the primary, do a switchover.
    8. Now you can repeat steps 3 to 6 on the other site too.   <- if you need your secondary site to be RAC-enabled as well
    9. Finally, both sites should be RAC-enabled.
    Hope this is helpful!
    Thanks,
    Asif Haliyal
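    A hedged sketch of steps 5 and 6 above (database name, instance names, node names and the ORACLE_HOME path are placeholders):
    -- Step 5: enable the RAC parameters in the spfile, then restart into standby mount
    ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE;
    ALTER SYSTEM SET cluster_database_instances=2 SCOPE=SPFILE;
    -- Step 6: register the database and instances in the OCR (run from the OS prompt)
    -- srvctl add database -d PROD -o /u01/app/oracle/product/10.2.0/db_1
    -- srvctl add instance -d PROD -i PROD1 -n node1
    -- srvctl add instance -d PROD -i PROD2 -n node2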

  • Primary Database and Standby Database sync

    How can I know that the standby database is in sync with the primary database?

    Query the V$LOG_HISTORY view on the standby database, which records the latest log sequence number that has been applied. For example, issue the following query:
    SQL> SELECT THREAD#, MAX(SEQUENCE#) AS "LAST_APPLIED_LOG"
    2> FROM V$LOG_HISTORY
    3> GROUP BY THREAD#;
    THREAD# LAST_APPLIED_LOG
    1 967
    In this example, the archived redo log with log sequence number 967 is the most recently applied log.
    You can also use the APPLIED column in the V$ARCHIVED_LOG fixed view on the standby database to find out which log is applied on the standby database. The column displays YES for the log that has been applied. For example:
    SQL> SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;
    THREAD# SEQUENCE# APP
    1 2 YES
    1 3 YES
    1 4 YES
    1 5 YES
    1 6 YES
    1 7 YES
    1 8 YES
    1 9 YES
    1 10 YES
    1 11 NO

  • Creating a RAC standby database for a single instance database

    Dear All,
    I have a task of migrating a 500GB single instance database to a two-node RAC database with a little downtime at hand. My migration strategy is to:
    1) Create a RAC physical standby for the Single Instance database
    2) Switchover to RAC standby.
    Primary and Standby OS and DB configurations:
    OS: Windows Server EE 2003 (64-bit)
    DB: Oracle 10g Database Release 2 (10.2.0.4)
    Oracle 10g Clusterware Release 2 (10.2.0.4)
    To create a RAC standby, I will:
    a) Install Clusterware (10.2.0.1)
    b) Install Database (10.2.0.1)
    c) Patch both Clusterware and Database (10.2.0.4)
    d) Create ASM instance for both the nodes (+ASM1 & +ASM2)
    e) create standby controlfile on primary
    f) Move standby controlfile, RMAN backup of primary, pfile, listener.ora, tnsnames.ora, password file to standby host-1
    g) make necessary changes to the pfile on standby host-1 like cluster_database, instance_name, thread, ...
    h) mount standby database and restore backup
    Kindly validate my steps and if there already exists such a document then please do provide me with a link.
    Regards

    Please refer to the MAA white paper:
    [http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10g_RACPrimaryRACPhysicalStandby.pdf]
    [MAA website|http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm]
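    In addition to the MAA paper, a hedged RMAN outline of a backup-based standby creation in 10.2 (the connection strings and the assumption that the backup pieces are visible on the standby host are mine, not from the post):
    -- On the primary: back up the database plus a standby controlfile and archive logs
    RMAN> BACKUP DATABASE INCLUDE CURRENT CONTROLFILE FOR STANDBY PLUS ARCHIVELOG;
    -- On the standby host, with the auxiliary instance started NOMOUNT:
    RMAN> CONNECT TARGET sys@prim
    RMAN> CONNECT AUXILIARY sys@stby
    RMAN> DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER;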

  • Standby database sync issue

    Hi,
    We sometimes get a message that the standby database is 6 hours behind. We check on production by issuing this:
    SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
    THREAD# MAX(SEQUENCE#)
    2 177416
    1 169771
    in Standby
    SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
    THREAD# MAX(SEQUENCE#)
    1 169771
    2 177416
    Does this mean that both are in sync?

    Viacheslav Ostapenko wrote:
    Unfortunately, v$archived_log doesn't provide actual information - sometimes there is very strange behaviour where, according to this view, the gap increases without any reason and then suddenly it goes back in sync with the primary - we got a lot of false triggering when we used this view for monitoring.
    I asked to check on both the primary & the standby, not only on the primary.
    Thanks
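    Since MAX(SEQUENCE#) in V$ARCHIVED_LOG only shows what has been received/registered, a hedged check on the standby that also looks at what has actually been applied could be:
    -- On the standby: last sequence received vs. last sequence applied, per thread
    SELECT thread#,
           MAX(sequence#) AS last_received,
           MAX(CASE WHEN applied = 'YES' THEN sequence# END) AS last_applied
    FROM   v$archived_log
    GROUP  BY thread#
    ORDER  BY thread#;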

  • Need query to find out standby database sync information

    Hi All,
    Please provide me a query that gives information about how many standby databases are configured. I need to generate an hourly sync-status report for the databases in our project. I have more than 120 production databases, and some of them have more than one standby database; some of the standbys are local (some production databases are in the datacenter and some are in the DR datacenter).
    Is it possible to configure this in 12c OEM? If so, please guide me. How can I create a report on the sync status of all databases within one report?
    Thanks
    Ganesh

    Hello;
    I like this one:
    Monitor Data Guard Transport
    Best Regards
    mseberg
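    For an hourly report, one possible building block (10.2 onwards, run on each standby; this is not an OEM 12c answer) is V$DATAGUARD_STATS, roughly like this:
    -- On each standby: transport and apply lag as interval strings
    SELECT name, value, time_computed
    FROM   v$dataguard_stats
    WHERE  name IN ('transport lag', 'apply lag');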

  • How to sync the standby database with the primary?

    Hi. We have a Data Guard setup for our production databases (10.2.0.3).
    I need one clarification regarding keeping the standby database in sync with the primary. We identified that one of our databases is not in sync with the primary, and for a long time the archives have not been getting applied at the standby.
    We decided to take an incremental (from SCN) backup from the primary and use it at the standby to resolve the gap, as described in the book "Oracle Data Guard Concepts and Administration".
    But before doing that, we observed that a couple of structural changes had not been applied at the standby (e.g. some tablespaces that exist in the primary were never created on the standby).
    How do we apply these changes at the standby, given that the standby is in mount state? (Can I open this database and create those tablespaces, or do you suggest another way?)
    Thanks for your reply.

    This points to a setup problem - something missing, a parameter set wrong, etc.
    While your question asks something different (how to correct it), I would also consider how to prevent it. Unless you simply have a gap or logs not applied, you have a deeper issue.
    If you do have a gap or logs not applied, then correct it and check whether the missing objects are now present.
    The key parameters for Oracle 10 are :
    FAL_SERVER=STANDBY
    FAL_CLIENT=PRIMARY
    STANDBY_FILE_MANAGEMENT=AUTO
    DB_FILE_NAME_CONVERT='STANDBY','PRIMARY'
    LOG_FILE_NAME_CONVERT='STANDBY','PRIMARY'
    log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/PRIMARY/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
    log_archive_dest_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    LOG_ARCHIVE_MAX_PROCESSES=30
    Check them on both sides for issues.
    If your standby is really trashed consider using RMAN to duplicate it. You might end up saving time. I have Oracle 11 short notes on this but it will work on Oracle 10 too.
    http://www.visi.com/~mseberg/duprman.html
    Otherwise use statements like these to resolve your gap.
    ALTER DATABASE REGISTER LOGFILE '/u01/app/oracle/oradata/STANDBY/archive/PRIMARY_1_20_716110538.arc';
    ALTER DATABASE REGISTER LOGFILE '/u01/app/oracle/oradata/STANDBY/archive/PRIMARY_1_21_716110538.arc';
    See Section 5.8, Managing Archive Gaps, of Oracle document B14239-05.
    Best Regards
    mseberg
    CKPT is correct, sorry I missed the Welcome to OTN!
    Edited by: mseberg on Jul 5, 2011 10:34 AM
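    A hedged sketch of detecting and then clearing a gap on the standby (the archive file name is a placeholder in the same style as the examples above):
    -- On the standby: has a gap been detected?
    SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;
    -- After copying the missing archives across manually, register each one, then restart apply
    -- ALTER DATABASE REGISTER LOGFILE '/u01/app/oracle/oradata/STANDBY/archive/PRIMARY_1_22_716110538.arc';
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;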

  • Standby database query

    Hi All,
    How do I find the standby databases that are available for a primary database?
    Suppose I have 2 standby databases for the primary. How do I find out the standby databases from the primary database, other than checking the log_archive_dest parameters?
    For example: I have a 2-node RAC database and a 2-node RAC standby database.
    I want to find out the standby node names and instance names from the primary node.
    OS:AIX
    DB:10.2.0.5
    Edited by: user13364377 on Oct 7, 2011 8:26 AM

    user13364377 wrote:
    "... other than checking the log_archive_dest parameter?"
    Why the limitation/restriction against doing the obvious? Is this another "interview" question?
    How do you change a flat tire without using a jack and lug wrench?

  • Backup standby database

    We have 11g RAC implemented on Windows with a physical standby on the DR site. Please advise how I can take a backup of the physical standby database.

    I have done cold backups of RAC standby databases in ASM from another database on the same box that is open, using the DBMS_FILE_TRANSFER.COPY_FILE utility. I do the following steps: 1) use the following script to create a cold-backup script while the standby is in mount mode; 2) shut down the standby; 3) run the resulting script in another database that is in open mode (DBMS_FILE_TRANSFER.COPY_FILE will only run in an open database); 4) restart the standby's application of archive logs after the cold backup is done.
    -- script to create cold backup of RAC ASM Standby
    set pages 0
    set feedback off
    set lines 300
    col dummy noprint
    set wrap off
    -- for gzip's to work
    col name for a50
    select name from v$datafile;
    select 'cd /stg/oracle/backups/cold_backup' from dual;
    select 'CREATE OR REPLACE DIRECTORY asm_files AS '''||'&substr_ASM_PREFIX_OF_FILENAME'||''' ;' from dual;
    -- grants not needed if running from sys
    select 'GRANT WRITE ON DIRECTORY asm_files TO "ENWEBP1";' from dual;
    select 'CREATE OR REPLACE DIRECTORY DSK_FILES AS ''/stg/oracle/backups/cold_backup'';' from dual;
    select 'GRANT WRITE ON DIRECTORY dsk_files TO "ENWEBP1";' from dual;
    select substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||'.A' dummy,'exec DBMS_FILE_TRANSFER.COPY_FILE ( ''asm_files'' , '''
    ||substr(name,length(name)-instr(reverse(name),'/',1)+2,100)
    ||''' , ''dsk_files'' , '''||substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||''' );' cmd
    from v$datafile
    where upper(substr(name,1,length('&substr_ASM_PREFIX_OF_FILENAME')))=upper('&substr_ASM_PREFIX_OF_FILENAME')
    union
    select substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||'.B' dummy,'host gzip '
    ||substr(name,length(name)-instr(reverse(name),'/',1)+2,100) cmd from v$datafile
    where upper(substr(name,1,length('&substr_ASM_PREFIX_OF_FILENAME')))=upper('&substr_ASM_PREFIX_OF_FILENAME')
    order by dummy;
    select name||'.A' dummy,'CREATE OR REPLACE DIRECTORY asm_files AS '''||substr(name,1,length(name)-instr(reverse(name),'/',1))||''' ;'
    from v$controlfile
    union
    select name||'.b' dummy,'exec DBMS_FILE_TRANSFER.COPY_FILE ( ''asm_files'' , '''||substr(name,length(name)-instr(reverse(name),'/',1)+2,100)
    ||''' , ''dsk_files'' , '''||substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||''' );'
    from v$controlfile order by dummy;
    set pages 100
    set feedback on
    If I do happen to need a restore, I: 1) shut down the standby; 2) erase the datafiles and controlfiles (I keep the redo logs, but the last script below will drop and recreate them with the CLEAR command, so you can also remove the redo logs if you wish); 3) use the following script to create a restore script.
    -- Script to create cold restore of RAC standby database in ASM
    set pages 0
    set feedback off
    set lines 300
    col dummy noprint
    set wrap off
    -- for gzip's to work
    col name for a50
    select name from v$datafile;
    select 'cd /stg/oracle/backups/cold_backup' from dual;
    select 'rm '||name from v$datafile order by name;
    select 'CREATE OR REPLACE DIRECTORY asm_files AS '''||'&substr_ASM_PREFIX_OF_FILENAME'||''' ;' from dual;
    -- grants not needed if running from sys
    select 'GRANT WRITE ON DIRECTORY asm_files TO "ENWEBP1";' from dual;
    select 'CREATE OR REPLACE DIRECTORY DSK_FILES AS ''/stg/oracle/backups/cold_backup'';' from dual;
    select 'GRANT WRITE ON DIRECTORY dsk_files TO "ENWEBP1";' from dual;
    select substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||'.A' dummy,'host gunzip '
    ||substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||'.gz' cmd from v$datafile
    where upper(substr(name,1,length('&substr_ASM_PREFIX_OF_FILENAME')))=upper('&substr_ASM_PREFIX_OF_FILENAME')
    union
    select substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||'.B' dummy,'exec DBMS_FILE_TRANSFER.COPY_FILE ( ''dsk_files'' , '''
    ||substr(name,length(name)-instr(reverse(name),'/',1)+2,100)
    ||''' , ''asm_files'' , '''||substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||''' );' cmd
    from v$datafile
    where upper(substr(name,1,length('&substr_ASM_PREFIX_OF_FILENAME')))=upper('&substr_ASM_PREFIX_OF_FILENAME')
    union
    select substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||'.C' dummy,'host gzip '
    ||substr(name,length(name)-instr(reverse(name),'/',1)+2,100) cmd from v$datafile
    where upper(substr(name,1,length('&substr_ASM_PREFIX_OF_FILENAME')))=upper('&substr_ASM_PREFIX_OF_FILENAME')
    order by dummy;
    select name||'.A' dummy,'CREATE OR REPLACE DIRECTORY asm_files AS '''||substr(name,1,length(name)-instr(reverse(name),'/',1))||''' ;'
    from v$controlfile
    union
    select name||'.b' dummy,'exec DBMS_FILE_TRANSFER.COPY_FILE ( ''dsk_files'' , '''||substr(name,length(name)-instr(reverse(name),'/',1)+2,100)
    ||''' , ''asm_files'' , '''||substr(name,length(name)-instr(reverse(name),'/',1)+2,100)||''' );'
    from v$controlfile order by dummy;
    set pages 100
    set feedback on
    and finally 4) I clear the redo logs with the following script:
    set pages 0
    set feedback off
    select 'alter database recover managed standby database cancel;' from dual;
    select 'alter database clear logfile group '||group#||';' from v$log order by group#;
    set feedback on
    set pages 100
    and 5) Open database and restart replication
    For Example:
    ENWEBP1 > @cr8_COLD_BACKUP_dbms_FILE_TRANSFER_COPY_FILE_from_ASM_to_FS.sql
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    cd /stg/oracle/backups/cold_backup
    CREATE OR REPLACE DIRECTORY asm_files AS '+DATA1/nwebp/datafile' ;
    GRANT WRITE ON DIRECTORY asm_files TO "ENWEBP1";
    CREATE OR REPLACE DIRECTORY DSK_FILES AS '/stg/oracle/backups/cold_backup';
    GRANT WRITE ON DIRECTORY dsk_files TO "ENWEBP1";
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'content_data.290.623955671' , 'dsk_files' , 'content_data.290.623955671' );
    host gzip content_data.290.623955671
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'gsis_data.284.623955547' , 'dsk_files' , 'gsis_data.284.623955547' );
    host gzip gsis_data.284.623955547
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'site_data.288.623955617' , 'dsk_files' , 'site_data.288.623955617' );
    host gzip site_data.288.623955617
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'sysaux.264.621275031' , 'dsk_files' , 'sysaux.264.621275031' );
    host gzip sysaux.264.621275031
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'system.262.621275025' , 'dsk_files' , 'system.262.621275025' );
    host gzip system.262.621275025
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'undotbs1.263.621275031' , 'dsk_files' , 'undotbs1.263.621275031' );
    host gzip undotbs1.263.621275031
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'undotbs2.266.621275035' , 'dsk_files' , 'undotbs2.266.621275035' );
    host gzip undotbs2.266.621275035
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'undotbs3.267.621275037' , 'dsk_files' , 'undotbs3.267.621275037' );
    host gzip undotbs3.267.621275037
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'users.268.621275039' , 'dsk_files' , 'users.268.621275039' );
    host gzip users.268.621275039
    CREATE OR REPLACE DIRECTORY asm_files AS '+DATA1/nwebp' ;
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'control01.ctl' , 'dsk_files' , 'control01.ctl' );
    CREATE OR REPLACE DIRECTORY asm_files AS '+DATA1/nwebp' ;
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'control02.ctl' , 'dsk_files' , 'control02.ctl' );
    CREATE OR REPLACE DIRECTORY asm_files AS '+DATA1/nwebp' ;
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'asm_files' , 'control03.ctl' , 'dsk_files' , 'control03.ctl' );
    ENWEBP1 > @cr8_COLD_RESTORE_dbms_FILE_TRANSFER_COPY_FILE_from_FS_to_ASM.sql
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    CREATE OR REPLACE DIRECTORY asm_files AS '+DATA1/nwebp/datafile' ;
    GRANT WRITE ON DIRECTORY asm_files TO "ENWEBP1";
    CREATE OR REPLACE DIRECTORY DSK_FILES AS '/stg/oracle/backups/cold_backup';
    GRANT WRITE ON DIRECTORY dsk_files TO "ENWEBP1";
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    Enter value for substr_asm_prefix_of_filename: +DATA1/nwebp/datafile
    cd /stg/oracle/backups/cold_backup
    rm +DATA1/nwebp/datafile/content_data.290.623955671
    rm +DATA1/nwebp/datafile/gsis_data.284.623955547
    rm +DATA1/nwebp/datafile/site_data.288.623955617
    rm +DATA1/nwebp/datafile/sysaux.264.621275031
    rm +DATA1/nwebp/datafile/system.262.621275025
    rm +DATA1/nwebp/datafile/undotbs1.263.621275031
    rm +DATA1/nwebp/datafile/undotbs2.266.621275035
    rm +DATA1/nwebp/datafile/undotbs3.267.621275037
    rm +DATA1/nwebp/datafile/users.268.621275039
    rm +DATA3/nwebp/datafile/content_index01.dbf
    rm +DATA3/nwebp/datafile/gsis_index01.dbf
    rm +DATA3/nwebp/datafile/polls_data01.dbf
    rm +DATA3/nwebp/datafile/polls_index01.dbf
    rm +DATA3/nwebp/datafile/profile_data01.dbf
    rm +DATA3/nwebp/datafile/profile_index01.dbf
    rm +DATA3/nwebp/datafile/site_index01.dbf
    CREATE OR REPLACE DIRECTORY asm_files AS '+DATA1/nwebp/datafile' ;
    GRANT WRITE ON DIRECTORY asm_files TO "ENWEBP1";
    CREATE OR REPLACE DIRECTORY DSK_FILES AS '/stg/oracle/backups/cold_backup';
    GRANT WRITE ON DIRECTORY dsk_files TO "ENWEBP1";
    host gunzip content_data.290.623955671.gz
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'content_data.290.623955671' , 'asm_files' , 'content_data.290.623955671' );
    host gzip content_data.290.623955671
    host gunzip gsis_data.284.623955547.gz
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'gsis_data.284.623955547' , 'asm_files' , 'gsis_data.284.623955547' );
    host gzip gsis_data.284.623955547
    host gunzip site_data.288.623955617.gz
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'site_data.288.623955617' , 'asm_files' , 'site_data.288.623955617' );
    host gzip site_data.288.623955617
    host gunzip sysaux.264.621275031.gz
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'sysaux.264.621275031' , 'asm_files' , 'sysaux.264.621275031' );
    host gzip sysaux.264.621275031
    host gunzip system.262.621275025.gz
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'system.262.621275025' , 'asm_files' , 'system.262.621275025' );
    host gzip system.262.621275025
    host gunzip undotbs1.263.621275031.gz
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'undotbs1.263.621275031' , 'asm_files' , 'undotbs1.263.621275031' );
    host gzip undotbs1.263.621275031
    host gunzip undotbs2.266.621275035.gz
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'undotbs2.266.621275035' , 'asm_files' , 'undotbs2.266.621275035' );
    host gzip undotbs2.266.621275035
    host gunzip undotbs3.267.621275037.gz
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'undotbs3.267.621275037' , 'asm_files' , 'undotbs3.267.621275037' );
    host gzip undotbs3.267.621275037
    host gunzip users.268.621275039.gz
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'users.268.621275039' , 'asm_files' , 'users.268.621275039' );
    host gzip users.268.621275039
    CREATE OR REPLACE DIRECTORY asm_files AS '+DATA1/nwebp' ;
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'control01.ctl' , 'asm_files' , 'control01.ctl' );
    CREATE OR REPLACE DIRECTORY asm_files AS '+DATA1/nwebp' ;
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'control02.ctl' , 'asm_files' , 'control02.ctl' );
    CREATE OR REPLACE DIRECTORY asm_files AS '+DATA1/nwebp' ;
    exec DBMS_FILE_TRANSFER.COPY_FILE ( 'dsk_files' , 'control03.ctl' , 'asm_files' , 'control03.ctl' );
    ENWEBP1 > @RMAN_Cr8_Clear_Standby_logs.sql
    alter database recover managed standby database cancel;
    alter database clear logfile group 1;
    alter database clear logfile group 2;
    alter database clear logfile group 3;
    alter database clear logfile group 4;
    alter database clear logfile group 5;
    alter database clear logfile group 6;
    alter database clear logfile group 7;
    alter database clear logfile group 8;
    alter database clear logfile group 9;
    alter database clear logfile group 10;
    alter database clear logfile group 11;
    alter database clear logfile group 12;
    alter database clear logfile group 13;
    alter database clear logfile group 14;
    alter database clear logfile group 15;
    Alan

  • Logical Standby Database Not Getting Sync With Primary Database

    Hi All,
    I am using a Primary DB and Logical Standby DB configuration in Oracle 10g:-
    Version Name:-
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
    PL/SQL Release 10.2.0.5.0 - Production
    CORE 10.2.0.5.0 Production
    TNS for Solaris: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production
    We built the logical standby last week, and to date the logical DB is not in sync. I have checked the init parameters and I don't see any problems with them. The archive log destinations are also fine.
    We have an important table named "HPD_HELPDESK" whose record count is growing gradually on the primary, whereas on the logical standby it is not growing. There is a difference of about 19K records between the two tables.
    I have checked the alert log, but it does not show any error messages. Please find the last few lines of the alert log on the logical standby database below:
    RFS LogMiner: Registered logfile [oradata_san1/oradata/remedy/arch/ars1_1703_790996778.arc] to LogMiner session id [1]
    Tue Aug 28 14:56:52 GMT 2012
    RFS[2853]: Successfully opened standby log 5: '/oracle_data/oradata/remedy/stbyredo01.log'
    Tue Aug 28 14:56:58 GMT 2012
    RFS LogMiner: Client enabled and ready for notification
    Tue Aug 28 14:57:00 GMT 2012
    RFS LogMiner: Registered logfile [oradata_san1/oradata/remedy/arch/ars1_1704_790996778.arc] to LogMiner session id [1]
    Tue Aug 28 15:06:40 GMT 2012
    RFS[2854]: Successfully opened standby log 5: '/oracle_data/oradata/remedy/stbyredo01.log'
    Tue Aug 28 15:06:47 GMT 2012
    RFS LogMiner: Client enabled and ready for notification
    Tue Aug 28 15:06:49 GMT 2012
    RFS LogMiner: Registered logfile [oradata_san1/oradata/remedy/arch/ars1_1705_790996778.arc] to LogMiner session id [1]
    I am not able to trace why the records are not growing in the logical DB. Please provide your inputs.
    Regards,
    Arijit

    How do you know that there's such a gap between the tables?
    If your standby db is a physical standby, then it is not open and you can't query your table without cancelling the recovery of the managed standby database.
    What does it say if you execute this SQL?
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    The ARCH processes should be connected and MRP should be waiting for a file.
    If you query for archive gaps, do you get any hits?
    select * from gv$archive_gap;
    If you're not working in a RAC environment, you need to query v$archive_gap instead!
    Did you check whether the archives generated from the primary instance are transferred and present in the file system of your standby database?
    I believe your standby is not in recovery_mode anymore or has an archive_gap, which is the reason why it doesn't catch up anymore.
    Hope it helps a little,
    Regards,
    Sebastian
    PS: I'm working on 11g, so unfortunately I'm not quite sure whether these views exist in 10gR2. It's worth a try, though!
    Edited by: skahlert on 31.08.2012 13:46
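    Since this is a logical standby (SQL Apply rather than MRP), a hedged progress check could also use DBA_LOGSTDBY_PROGRESS and the SQL Apply processes, something like:
    -- On the logical standby: how far SQL Apply has read vs. applied
    SELECT applied_scn, applied_time, read_scn, newest_scn, newest_time
    FROM   dba_logstdby_progress;
    -- State of the SQL Apply processes (coordinator, readers, builders, appliers)
    SELECT type, status_code, status FROM v$logstdby_process;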

  • Manual Standby Database not in sync with missing archivelogs

    Hello,
    OS: Solaris
    DB: Oracle 11.2.0.1 EE
    Not Using ASM or RAC
    I have a Production database that is in archivelog mode and a Standby DR server.
    Both servers (Prod, Standby) have exact same structure and db name/version.
    We manually scp the archive logs and recover them on a manual standby database via SQL scripts run from cron (i.e. set autorecovery on; recover standby database;).
    We recently got out of sync with our log files and have not been applying them to the standby. As part of prod maintenance, these log files were deleted and are not available anymore.
    I've tried several ways to "rebuild" our standby database. I have tried shutting down prod, backing up all the db files and scp-ing them to the standby, re-creating the standby controlfile, and then startup mount and recover standby.
    Every time I try to apply a new archive log via recover standby, these are the errors:
    ORA-00279: change 211077622 generated at 1/27/2012 12:18:42 needed for thread 1
    ORA-00289: suggestion : /oradump/arch/PROD/PROD_arch_1_69486_736618850.arc
    ORA-00280: change 211077622 for thread 1 is in sequence #69486
    ORA-00308: cannot open archived log '/oradump/arch/PROD/PROD_arch_1_69486_736618850.arc'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-10879: error signaled in parallel recovery slave
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/oradevices/PROD/oraPRODsystem1.dbf'
    When I check v$log_history, the new logs have not been applied.
    I've also tried the "Restore from incremental backup via SCN" method with same results.
    Is there a way to re-create the standby clean and ensure that the log chain that is currently broken gets fixed or reset?
    I would eventually like to get DataGuard in here, but that's not the case at the moment.
    Thanks for your suggestions.
    -Dav

    If you are using a cold backup to create the standby database, check whether you have followed these steps:
    1. Remove all the datafiles and controlfiles from the standby database.
    2. Create a new standby controlfile of the production database for the standby, using the following command:
    alter database create standby controlfile as 'Location';
    3. Move the new controlfile to the location on the standby database server specified in the initialization parameter file.
    4. Restore all the datafiles, taken with the cold backup, to their appropriate locations.
    5. startup nomount
    6. alter database mount standby database;
    7. recover standby database;
    scp the archive log sequences that the database asks for from production.
    You can try these steps.
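    For the incremental-from-SCN method the original poster mentioned, a hedged outline (the tag and staging path are illustrative; the SCN comes from the ORA-00279 message above, and the standby controlfile usually also needs to be refreshed given the ORA-01152 on datafile 1):
    -- On the standby: the SCN to start the incremental backup from
    SELECT MIN(checkpoint_change#) FROM v$datafile_header;
    -- On the primary: incremental backup from that SCN, plus a fresh standby controlfile
    RMAN> BACKUP INCREMENTAL FROM SCN 211077622 DATABASE TAG 'STBY_CATCHUP';
    RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY;
    -- Copy the backup pieces to the standby host, catalog them there and apply
    RMAN> CATALOG START WITH '/oradump/stby_catchup/';
    RMAN> RECOVER DATABASE NOREDO;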

  • Need advice regarding physical standby databases in rac environment on orac

    need advice regarding physical standby databases in rac environment on oracle 10G r2
    I would like to have a primary (10-node RAC cluster) shipping to a physical standby (3-node RAC cluster) using LGWR.
    So I have a lot of questions:
    1) What will be the performance overhead on the primary if we use the LGWR SYNC option?
    2) Does the overhead depend on the physical distance between the primary and the physical standby?
    3) Do you recommend a separate private network for shipping logs between the primary and the standby?
    4) I know that DGMGRL supports RAC only from 10g. So are there any known issues or bugs when using DGMGRL in RAC environments?
    Thanks in advance
    -Satish

    Generally you should have the same CPU architecture and the same operating system, but it is not mandatory to have exactly the same CPU model, the same number of CPUs, the same RAM size, etc.
    Actually, starting with 11.1 you don't even need the same hardware setup: it is possible to have the primary and standby databases on different platforms: http://download.oracle.com/docs/cd/B28359_01/server.111/b28294/standby.htm#i72053.

  • Differences in creating a Standby database in RAC

    Hi All
       OS...: Solaris 11 SPARC 64
       DB...: 11.2.0.3.6 (64-bit)
       My experience with creating physical standby databases is limited to single instance to single instance. I am reading the book "Oracle Database 11g Release 2 High Availability", but in parallel I would like to hear some opinions and recommendations from the experienced DBAs here on how to implement a RAC-to-RAC solution. Below are my doubts:
       1) For a single instance, when using an RMAN backup to create the physical standby, I have to manually create a static listener entry. But in a RAC environment the listener is a resource, so do I have to create that resource in the Clusterware manually, or is there another approach?
       2) I have to create a pfile and start the instance in NOMOUNT mode, but in RAC, do I have to start all the nodes in NOMOUNT, or is just one node sufficient?
       3) The archives will be replicated from FRA to FRA. Is this the best approach?
       My main concern is with the listener and SCAN questions... If you have any considerations, they will be most welcome.
       Thanks in advance.

    The following link might be helpful:
    http://www.oracledba.org/11gR2/dr/11gR2_dataguard_RAC_to_RAC.html
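    On doubt 1: the RMAN duplicate still needs a statically registered service on the standby node; in a RAC environment this is usually added to that node's listener.ora (Grid Infrastructure home) rather than created as a separate Clusterware resource. A hedged example with placeholder names:
    # listener.ora on the standby node used for the duplicate
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME = STBY_DGMGRL)
          (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
          (SID_NAME = STBY1)
        )
      )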
