Data Guard replication

Hi all,
I have a question about the replication method in Data Guard.
I have configured Data Guard for my 11g database and everything is working fine. I want to empty the tables in the standby, or perhaps drop the schema and re-create it with empty tables.
After that, is it possible to duplicate the primary again and sync the standby up with the primary?
Thinking it over, I could use Data Pump to load the data, but how would I then sync it with the primary?
Thanks in advance.
Feel free to ask questions.

>
And after that is it possible to duplicate the primary again and sync it up with primary.
That means you simply want to recreate your standby from the primary, unaffected by whatever changes you made earlier to the standby. If my understanding of your statement is correct, you are duplicating the primary again to create the standby, so the changes you made in the standby are no longer there, because it is recreated.
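The recreation described above can be scripted with RMAN. A minimal sketch, assuming 11g, Oracle Net aliases PRIM/STBY for the primary and standby (placeholder names), and enough disk on the standby host:

```sql
-- Run from RMAN after restarting the standby instance NOMOUNT:
--   rman TARGET sys@PRIM AUXILIARY sys@STBY
-- Recreates the standby over the network from the live primary.
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  NOFILENAMECHECK;
```

Once the duplicate finishes, restart managed recovery on the standby (e.g. `ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT`) and redo shipping brings it back in sync with the primary.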

Similar Messages

  • Is this possible in Oracle Data Guard 11g?

    Dear All,
    I was looking at replication scenarios when this came to mind.
    We have set up two database servers at two different locations, thousands of miles apart. Both servers run Oracle 11gR1 on Linux.
    We have been facing connectivity problems: users all over the world access our application through one of these servers, and when there are connectivity problems the application is no longer accessible.
    I want to set up a standby database server, so that if there is a connectivity problem on one server, or it is down for any reason, applications automatically shift to the second available server. The primary server is obviously the first server, and in case of lost connectivity or any other problem the second should be in use.
    In this case we have to keep an exact copy of the primary server on the second server. I want to know if Oracle Data Guard will provide us such functionality when the servers are at remote locations. How will the data be copied to the secondary server?
    Kindly let me know if there is any document available to set up such a scenario. I will be thankful to you.
    Regards,
    Imran Baig

    A Data Guard physical standby will keep a copy of your database on a standby server. Fast-start failover with an observer (a third party that can see both the primary and the standby) gives you the capability to fail over to the standby automatically should connectivity to the primary be interrupted. Fast-start failover is configured with a threshold that determines how long it waits after losing connectivity before actually performing the failover. Your application will need to reconnect to the new primary after failover.
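    The setup described above can be sketched in DGMGRL. This is only an outline, assuming a broker configuration already exists with databases named prim and stby (placeholder names); fast-start failover for a physical standby requires SYNC transport and MaxAvailability protection:

    ```sql
    -- Hypothetical DGMGRL session (dgmgrl sys@prim); names are placeholders.
    EDIT DATABASE prim SET PROPERTY LogXptMode = 'SYNC';
    EDIT DATABASE stby SET PROPERTY LogXptMode = 'SYNC';
    EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
    -- Seconds to wait after losing the primary before failing over:
    EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
    ENABLE FAST_START FAILOVER;
    -- On a third host that can see both sites, keep the observer running:
    -- dgmgrl sys@prim "START OBSERVER"
    ```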

  • EM console HA page says "Data Guard not configured for this database"

    I am attempting to use EM console's "High Availability Console" to create a standby database, but the option is not available. Under "Data Guard Summary" the page shows "Oracle Data Guard is not configured on this database."
    How do I configure this option so that I can use the console to create standby databases? I have looked everywhere I can think of and cannot find any instructions!

    Click on your database name, and you'll be led to a page which shows you all the statistics of your DB.
    Click on the "Availability" sub-tab.
    You'll see the option to "Add Standby Database". You can take it from there. (You'll be prompted to log in to your database -- log in with SYS.)
    Now click on "Add Standby Database".
    The process I normally follow is this:
    1. Take an RMAN backup of your primary.
    2. Create a standby control file from the primary.
    3. Get your init file from the primary.
    4. Copy everything over.
    5. Make the init file changes (FAL_CLIENT, FAL_SERVER on the standby, any kind of directory mapping, point the init file to the standby control file on the standby side).
    6. Bring up the standby DB (doing all the other changes that are required).
    Then you can import that DB into OEM,
    or you could just use the wizard from OEM :)
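    For step 5, the standby-side init file changes usually look something like the fragment below. All names and paths are placeholders; adjust to your environment:

    ```
    # Hypothetical standby init.ora fragment (prim/stby are placeholder
    # DB_UNIQUE_NAMEs and Oracle Net aliases)
    db_unique_name=stby
    control_files='/u01/oradata/stby/standby01.ctl'   # the standby control file
    fal_server=prim    # where this standby fetches archive-log gaps from
    fal_client=stby    # the alias the primary uses to reach this standby
    log_archive_config='DG_CONFIG=(prim,stby)'
    log_archive_dest_2='SERVICE=prim VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prim'
    db_file_name_convert='/u01/oradata/prim','/u01/oradata/stby'
    log_file_name_convert='/u01/oradata/prim','/u01/oradata/stby'
    standby_file_management=AUTO
    ```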

  • What is Data Guard

    Hello,
    Please let me know about Data Guard. I'd welcome any document you can provide about it.
    Thanks in advance,
    Saqib

    Start with the
    Data Guard Concepts and Administration guide.
    Yoann.

  • Unable to ship redo logs to standby DB (Oracle Data Guard)

    Hi all,
    We have configured Oracle Data Guard between our production (NPP) and standby (NPP_DR) databases.
    The configuration is complete; however, the production database is unable to ship redo logs to the standby DB.
    We keep getting the error "PING[ARC0]: Heartbeat failed to connect to standby 'NPP_DR'. Error is 12154." on the primary.
    The primary and DR are on different boxes.
    Please see the logs below from the production alert log file and the npp_arc0_18944.trc trace file:
    npp_arc0_18944.trc:
    2011-01-19 09:17:38.007 62692 kcrr.c
    Error 12154 received logging on to the standby
    Error 12154 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'NPP_DR'
    Error 12154 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'NPP_DR'
    2011-01-19 09:17:38.007 62692 kcrr.c
    PING[ARC0]: Heartbeat failed to connect to standby 'NPP_DR'. Error is 12154.
    2011-01-19 09:17:38.007 60970 kcrr.c
    kcrrfail: dest:2 err:12154 force:0 blast:1
    2011-01-19 09:22:38.863
    Redo shipping client performing standby login
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
    Alert log file on primary:
    Error 12154 received logging on to the standby
    Wed Jan 19 09:02:35 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:07:36 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:12:37 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:13:10 2011
    Incremental checkpoint up to RBA [0x2cc.2fe0.0], current log tail at RBA [0x2cc.2fe9.0]
    Wed Jan 19 09:17:38 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:22:38 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:27:39 2011
    Error 12154 received logging on to the standby
    However, we are able to tnsping from primary to DR
    Tnsping Results
    From Primary:
    juemdbp1:oranpp 19> tnsping NPP_DR
    TNS Ping Utility for HPUX: Version 10.2.0.4.0 - Production on 19-JAN-2011 09:32:50
    Copyright (c) 1997,  2007, Oracle.  All rights reserved.
    Used parameter files:
    /oracle/NPP/102_64/network/admin/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (COMMUNITY = SAP.WORLD) (PROTOCOL = TCP) (HOST = 10.80.51.101) (PORT = 49160))) (CONNECT_DATA = (SID = NPP) (SERVER = DEDICATED)))
    OK (60 msec)
    Tnsnames.ora in Primary:
    Filename......: tnsnames.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/TNSNAMES.ORA#4 $
    NPP.WORLD=
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = nppjorp)
            (PORT = 49160)))
        (CONNECT_DATA =
          (SID = NPP)
          (GLOBAL_NAME = NPP.WORLD)))
    NPP_HQ.WORLD=
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = nppjorp)
            (PORT = 49160)))
        (CONNECT_DATA =
          (SID = NPP)
          (SERVER = DEDICATED)))
    NPP_DR.WORLD=
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = 10.80.51.101)
            (PORT = 49160)))
        (CONNECT_DATA =
          (SID = NPP)
          (SERVER = DEDICATED)))
    NPPLISTENER.WORLD=
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = nppjorp)
          (PORT = 49160)))
    Listener.ora in Primary
    Filename......: listener.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/LISTENER.ORA#4 $
    ADMIN_RESTRICTIONS_LISTENER = on
    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = NPP.WORLD))
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = NPP))
        (ADDRESS =
          (COMMUNITY = SAP.WORLD)
          (PROTOCOL = TCP)
          (HOST = nppjorp)
          (PORT = 49160)))
    STARTUP_WAIT_TIME_LISTENER = 0
    CONNECT_TIMEOUT_LISTENER = 10
    TRACE_LEVEL_LISTENER = OFF
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = NPP)
          (ORACLE_HOME = /oracle/NPP/102_64)))
    Thank You,
    Salman Qayyum

    Hi,
    Please find the remaining post ...
    Tnsnames.ora in DR:
    Filename......: tnsnames.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/TNSNAMES.ORA#4 $
    NPP.WORLD=
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = nppjor)
            (PORT = 49160)))
        (CONNECT_DATA =
          (SID = NPP)
          (GLOBAL_NAME = NPP.WORLD)))
    NPP_HQ.WORLD=
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = hq_nppjor)
            (PORT = 49160)))
        (CONNECT_DATA =
          (SID = NPP)
          (SERVER = DEDICATED)))
    NPP_DR.WORLD=
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = nppjor)
            (PORT = 49160)))
        (CONNECT_DATA =
          (SID = NPP)
          (SERVER = DEDICATED)
          (SERVICE_NAME = NPP_DR)))
    NPPLISTENER.WORLD=
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = nppjor)
          (PORT = 49160)))
    Listener.ora in DR
    Filename......: listener.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/LISTENER.ORA#4 $
    ADMIN_RESTRICTIONS_LISTENER = on
    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = NPP.WORLD))
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = NPP))
        (ADDRESS =
          (COMMUNITY = SAP.WORLD)
          (PROTOCOL = TCP)
          (HOST = nppjor)
          (PORT = 49160))
        (ADDRESS =
          (COMMUNITY = SAP.WORLD)
          (PROTOCOL = TCP)
          (HOST = 10.80.50.101)
          (PORT = 49160)))
    STARTUP_WAIT_TIME_LISTENER = 0
    CONNECT_TIMEOUT_LISTENER = 10
    TRACE_LEVEL_LISTENER = OFF
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = NPP)
          (ORACLE_HOME = /oracle/NPP/102_64)))
    /etc/hosts settings in Primary
    host:oranpp 25> grep nppjor /etc/hosts
    10.32.243.54    nppjor.sabic.com        nppjor
    10.32.50.115    nppjorp.sabic.com        nppjorp
    /etc/hosts settings in DR
    host:oranpp 11> grep nppjor /etc/hosts
    10.32.243.54    hq_nppjor.sabic.com     hq_nppjor
    10.80.243.54    nppjor.sabic.com        nppjor
    10.80.50.115    nppjorp.sabic.com        nppjorp
    Thank You,
    Salman Qayyum
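    Since tnsping works but ARC0 still gets ORA-12154, one thing worth checking (an assumption, not a confirmed diagnosis) is whether the connect identifier in LOG_ARCHIVE_DEST_2 resolves under the environment of the database server process, which can differ from the shell where tnsping was run. The alias in tnsnames.ora is NPP_DR.WORLD, so a dest set to SERVICE=NPP_DR may fail to resolve if no NAMES.DEFAULT_DOMAIN is appended. A sketch (the dest attributes shown are placeholders; keep your actual ones):

    ```sql
    -- On the primary, see what ARC actually tries to connect to:
    SHOW PARAMETER log_archive_dest_2
    -- If the service name lacks the .WORLD domain, try the fully
    -- qualified alias:
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=NPP_DR.WORLD VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=NPP_DR'
      SCOPE=BOTH;
    ```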

  • Can anyone please tell me how to configure Oracle Data Guard, in steps

    Can anyone please tell me how to configure Oracle Data Guard, step by step?

    Hi,
    http://docs.oracle.com/cd/E11882_01/server.112/e25608/create_ps.htm#i63561
    http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/prod/ha/dataguard/physstby/physstdby.htm
    Regards
    Yoonas

  • Data Domain - Replication Acceleration vs Application Acceleration?

    http://www.datadomain.com/pdf/DataDomain-Cisco-SolutionBrief.pdf
    I recently read an article by Cisco detailing the WAAS appliance's capability to add additional deduplication to Data Domain replication traffic being forwarded across the WAN. After reading the article and speaking with my SE, he recommended a 674 in Application Acceleration mode vs the WAE 7341 in Replication Acceleration mode. The article also states Application Acceleration mode was used.
    Why is Application Acceleration mode recommended over Replication Acceleration mode for this traffic?
    I have a project requiring 15-20 GB/day of incremental backups to be replicated between 3 sites. The sites are 90-110 ms apart and the links are around 20 Mbps at each site. Would a 674 in Application Acceleration mode really make a difference for the Data Domain replications?

    Is there any possibility you could dig up the configuration that was used on the WAAS Application Accelerator during the Data Domain testing? I see the Data Domain replication service runs over port TCP 4126. My SE recommends disabling the WAE 674's DRE functions for the Data Domain traffic and simply relying on LZ & TFO. How do you disable DRE but still use LZ and TFO?
    I see 3 common settings:
    action optimize full                          LZ+TFO+DRE
    action optimize DRE no compression none       TFO only
    action pass-through                           bypass
    Can you do LZ+TFO only? None of the applications in the link below show this type of action setting. This leads me to believe my SE was really suggesting turning off DRE completely for the WAAS.
    This WAAS needs to optimize traffic for;
    Lotus-Notes, HTTP, CIFS, Directory Services, RDP, and Data Domain
    Can all applications above + Data Domain be optimized from a pair of WAE 674s in AA mode?
    http://cisco.biz/en/US/docs/app_ntwk_services/waas/waas/v407/configuration/guide/apx_apps.html
    policy-engine application
       name Data-Domain
       classifier Data-Domain
          match dst port eq 4126
    exit
       map basic
          name Data-Domain classifier Data-Domain action optimize DRE no compression LZ
    exit

  • Convert Standby to Primary and Primary to Standby using Data Guard

    What are the steps involved to do the following?
    Convert the standby to primary and the primary to standby database server using Data Guard.
    Please help.

    That would be a role switchover.
    Check the Oracle documentation and follow the steps:
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14239/role_management.htm#i1030646
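    The switchover steps in that chapter boil down to the following sketch (10g syntax; run as SYSDBA, and check v$database.switchover_status before each step):

    ```sql
    -- On the current primary:
    SELECT switchover_status FROM v$database;  -- expect TO STANDBY / SESSIONS ACTIVE
    ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    -- On the current standby:
    SELECT switchover_status FROM v$database;  -- expect TO PRIMARY
    ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    ALTER DATABASE OPEN;
    -- Back on the old primary (now the standby), resume managed recovery:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    ```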

  • Data type Replication and ODBC

    I want to convert a table that has a column with the LONG datatype to a
    datatype supported by replication. Currently the LONG column
    contains more than 4,000 bytes, so I can't convert it to VARCHAR2.
    If I convert to CLOB, then the application we are using
    connects through ODBC drivers, and CLOB is not supported by
    our ODBC driver.
    Has anyone run into this situation? What are the recommended or
    practical solutions for this problem?
    Thanks.
    --Pravin

    Thanks,
    I used the data type java.sql.Timestamp and it works fine ;) This data type parses the date into the format yyyy-MM-dd hh:mm:ss, so basically it does not matter whether I use a string or this type :) but this is better ;)
    The problem is the time zone! :( With java.sql.Timestamp it is not possible to set a time zone! :(
  • Transfer bulk data using replication

    Hi,
    We have transactional replication set up between two databases, where one is the publisher and the other the subscriber. We are using a push subscription for this setup.
    The problem comes when we have bulk data updates on the publisher. On the publisher side the update command completes in 4 minutes, while the same takes approx 30 minutes to reach the subscriber side. We have tried customizing different properties in the agent
    profile, like MaxBatchSize, SubscriptionStreams etc., but none of this is of any help. I have tried breaking up the command and a lot of permutations and combinations, but no success.
    The data that we are dealing with is around 10 million rows, and our production environment is not able to handle this.
    Please help. Thanks in advance!
    Samagra

    How are the production publisher server and subscriber server configured? Are both the same? How about the network bandwidth? Have you tried the same task during working hours and during off hours? I am thinking the problem may be the network as well as the configuration of both servers.
    If you are doing huge operations with replication this is always expected; either you need hardware sized for it, or you should divide the workload on your publisher server to avoid all these issues. Can you split the transactions?
    Raju Rasagounder, Sr MSSQL DBA

  • Data Source Replication

    Hello,
    We've just refreshed our BI production system, and the ECC production system has been refreshed and upgraded. We are connecting to the source system just fine.
    The problem we are having is that some, but not all, of the datasources in our process chains are getting the error that the datasource needs to be replicated.
    Now I understand that the timestamps are different. Where do I find the timestamp in BI? I can find it on the extract structure in ECC 6.0.
    Also, we used the context menu on the source system and chose activate and replicate as well, so why were not all of the datasources replicated?
    Archana

    Archana,
    You will find the datasource timestamp in table RSDS in BI.
    You will find the generated export datasource timestamps for the BI system in ROOSGEN in BI.
    You will find the timestamps of an R/3 datasource in the ROOSOURCE table in R/3, or in RSOLTPSOURCE in BW.
    Dianne - You need to look up the datasource in RSOLTPSOURCE in BW and ROOSOURCE in R/3.
    These tables have timestamps; if the timestamp in RSOLTPSOURCE is greater than the timestamp in
    ROOSOURCE then you don't need to replicate, else you need to replicate the datasource.
    You can create an ABAP program which would help you find the datasources that need replication.
    Maybe the datasources were activated and you have to activate the transfer structures of the replicated datasources.
    A better way than finding the datasources that need replication would be: right-click on the source system and, in the context menu, choose "replicate data sources".
    Then go to program RS_TRANSTRU_ACTIVATE_ALL, mention the source system, and run it. This will activate the required transfer structures.
    Hope it helps,
    Regards,
    Santosh Nagaraj

  • Data source replication error in BW quality system

    Hi experts,
    When I replicate the data source in the BW quality system, it gives the error below:
    "Invalid call sequence for interfaces when recording changes."
    Please find the long text:
    Message no. TK425
    Diagnosis
    The function you selected uses interfaces to record changes made to your objects in requests and tasks.
    Before you save the changes, a check needs to be carried out to establish whether the objects may be changed. This check must be performed before you begin to make changes.
    System Response
    The function terminates.
    Procedure
    The function you have used must correct the two Transport Organizer interfaces required. Contact your system administrator.
    After replication of the data source, changes are not reflected in the BW quality system.
    Regards,
    venuscm

    Hi,
    Hope you have checked whether the DataSource has been transported to R/3 quality and activated before replicating in BW quality.
    Check the RFC connections between the systems.
    Check the RSLOGSYSMAP table in both BW and R/3 to confirm the system mappings.
    Then try replicating again.
    Regards,
    Suman

  • Data Streams replication

    Hi Gurus,
    Just thought of checking with you all for a possible solution to a problem.
    The problem is with Streams replication.
    The client has come back stating that data is not replicating to the target database.
    Could someone please come up with the possible factors for this failure?
    I checked that the DB links are working fine.
    Apart from this, any suggestion will be highly appreciated.
    Thanks in advance.

    Not much information, so let's grasp whatever we can.
    Can you post the results of the following queries?
    (Please enclose the response in the tags [ code] and [ /code],
    otherwise we simply cannot read the response.)
    set lines 190 pages 66 feed off pause off verify off
    col rsn format A28 head "Rule Set name"
    col rn format A30 head "Rule name"
    col rt format A64 head "Rule text"
    COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
    COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10
    col CHECKPOINT_RETENTION_TIME head "Checkpoint|Retention|time" justify c
    col LAST_ENQUEUED_SCN for 999999999999 head "Last scn|enqueued" justify c
    col las format 999999999999 head "Last remote|confirmed|scn Applied" justify c
    col REQUIRED_CHECKPOINT_SCN for 999999999999 head "Checkpoint|Require scn" justify c
    col nam format A31 head 'Queue Owner and Name'
    col table_name format A30 head 'table Name'
    col queue_owner format A20 head 'Queue owner'
    col table_owner format A20 head 'table Owner'
    col rsname format A34 head 'Rule set name'
    col cap format A22 head 'Capture name'
    col ti format A22 head 'Date'
    col lct format A18 head 'Last|Capture time' justify c
    col cmct format A18 head 'Capture|Create time' justify c
    col emct format A18 head 'Last enqueued|Message creation|Time' justify c
    col ltme format A18 head 'Last message|Enqueue time' justify c
    col ect format 999999999 head 'Elapsed|capture|Time' justify c
    col eet format 9999999 head 'Elapsed|Enqueue|Time' justify c
    col elt format 9999999 head 'Elapsed|LCR|Time' justify c
    col tme format 999999999999 head 'Total|Message|Enqueued' justify c
    col tmc format 999999999999 head 'Total|Message|Captured' justify c
    col scn format 999999999999 head 'Scn' justify c
    col emn format 999999999999 head 'Enqueued|Message|Number' justify c
    col cmn format 999999999999 head 'Captured|Message|Number' justify c
    col lcs format 999999999999 head 'Last scn|Scanned' justify c
    col AVAILABLE_MESSAGE_NUMBER format 999999999999 head 'Last system| scn'  justify c
    col capture_user format A20 head 'Capture user'
    col ncs format 999999999999 head 'Captured|Start scn' justify c
    col capture_type format A10 head 'Capture |Type'
    col RULE_SET_NAME format a15 head "Rule set Name"
    col NEGATIVE_RULE_SET_NAME format a15 head "Neg rule set"
    col status format A8 head 'Status'
    -- For each table in APPLY site, given by this query
    col SOURCE_DATABASE format a30
    set linesize 150
    select distinct SOURCE_DATABASE,
           source_object_owner||'.'||source_object_name own_obj,
           SOURCE_OBJECT_TYPE objt, instantiation_scn, IGNORE_SCN,
           apply_database_link lnk
    from  DBA_APPLY_INSTANTIATED_OBJECTS order by 1,2;
    -- do
    execute DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(  source_object_name=> 'owner.table',
               source_database_name => 'source_database' ,  instantiation_scn => NULL );
    -- List instantiation objects at source
    select TABLE_OWNER, TABLE_NAME,SCN, to_char(TIMESTAMP,'DD-MM-YYYY HH24:MI:SS') ti
    from dba_capture_prepared_tables  order by table_owner ;
    col LOGMINER_ID head 'Log|ID'  for 999
    select  LOGMINER_ID, CAPTURE_USER,  start_scn ncs,  to_char(STATUS_CHANGE_TIME,'DD-MM HH24:MI:SS') change_time
        ,CAPTURE_TYPE,RULE_SET_NAME, negative_rule_set_name , status from dba_capture
    order by logminer_id ;
    set lines 190
    col rsname format a22 head 'Rule set name'
    col delay_scn head 'Delay|Scanned' justify c
    col delay2 head 'Delay|Enq-Applied' justify c
    col state format a24
    col process_name format a8 head 'Process|Name' justify c
    col LATENCY_SECONDS head 'Lat(s)'
    col total_messages_captured head 'total msg|Captured'
    col total_messages_enqueued head 'total msg|Enqueue'
    col ENQUEUE_MESG_TIME format a17 head 'Row creation|initial time'
    col CAPTURE_TIME head 'Capture at'
    select a.logminer_id , a.CAPTURE_NAME cap, queue_name , AVAILABLE_MESSAGE_NUMBER, CAPTURE_MESSAGE_NUMBER lcs,
          AVAILABLE_MESSAGE_NUMBER-CAPTURE_MESSAGE_NUMBER delay_scn,
          last_enqueued_scn , applied_scn las , last_enqueued_scn-applied_scn delay2
          from dba_capture a, v$streams_capture b where a.capture_name = b.capture_name (+)
    order by logminer_id ;
    SELECT c.logminer_id,
             SUBSTR(s.program,INSTR(s.program,'(')+1,4) PROCESS_NAME,
             c.sid,
             c.serial#,
             c.state,
             to_char(c.capture_time, 'HH24:MI:SS MM/DD/YY') CAPTURE_TIME,
             to_char(c.enqueue_message_create_time,'HH24:MI:SS MM/DD/YY') ENQUEUE_MESG_TIME ,
            (SYSDATE-c.capture_message_create_time)*86400 LATENCY_SECONDS,
            c.total_messages_captured,
            c.total_messages_enqueued
       FROM V$STREAMS_CAPTURE c, V$SESSION s
       WHERE c.SID = s.SID
      AND c.SERIAL# = s.SERIAL#
    order by logminer_id ;
    -- Which is the lowest required archive:
    -- this query assume you have only one logminer_id or your must add ' and session# = <id_nn>  '
    set serveroutput on
    DECLARE
    hScn number := 0;
    lScn number := 0;
    sScn number;
    ascn number;
    alog varchar2(1000);
    begin
      select min(start_scn), min(applied_scn) into sScn, ascn
        from dba_capture ;
      DBMS_OUTPUT.ENABLE(2000);
      for cr in (select distinct(a.ckpt_scn)
                 from system.logmnr_restart_ckpt$ a
                 where a.ckpt_scn <= ascn and a.valid = 1
                   and exists (select * from system.logmnr_log$ l
                       where a.ckpt_scn between l.first_change# and
                         l.next_change#)
                  order by a.ckpt_scn desc)
      loop
        if (hScn = 0) then
           hScn := cr.ckpt_scn;
        else
           lScn := cr.ckpt_scn;
           exit;
        end if;
      end loop;
      if lScn = 0 then
        lScn := sScn;
      end if;
       -- select min(name) into alog from v\$archived_log where lScn between first_change# and next_change#;
      -- dbms_output.put_line('Capture will restart from SCN ' || lScn ||' in log '||alog);
        dbms_output.put_line('Capture will restart from SCN ' || lScn ||' in the following file:');
       for cr in (select name, first_time  , SEQUENCE#
                   from DBA_REGISTERED_ARCHIVED_LOG
                   where lScn between first_scn and next_scn order by thread#)
      loop
         dbms_output.put_line(to_char(cr.SEQUENCE#)|| ' ' ||cr.name||' ('||cr.first_time||')');
      end loop;
    end;
    /
    -- List all archives
    prompt If 'Name' is empty then the archive is not on disk anymore
    prompt
    set linesize 150 pagesize 0 heading on embedded on
    col name          form A55 head 'Name' justify l
    col st    form A14 head 'Start' justify l
    col end    form A14 head 'End' justify l
    col NEXT_CHANGE#   form 9999999999999 head 'Next Change' justify c
    col FIRST_CHANGE#  form 9999999999999 head 'First Change' justify c
    col SEQUENCE#     form 999999 head 'Logseq' justify c
    select thread#, SEQUENCE# , to_char(FIRST_TIME,'MM-DD HH24:MI:SS') st,
           to_char(next_time,'MM-DD HH24:MI:SS') End,FIRST_CHANGE#,
           NEXT_CHANGE#, NAME name
            from ( select thread#, SEQUENCE# , FIRST_TIME, next_time,FIRST_CHANGE#,
                     NEXT_CHANGE#, NAME name
                     from v$archived_log  order by first_time desc  )
    where rownum <= 30 ;
    -- APPLY status
    col apply_tag format a8 head 'Apply| Tag'
    col QUEUE_NAME format a24
    col DDL_HANDLER format a20
    col MESSAGE_HANDLER format a20
    col NEGATIVE_RULE_SET_NAME format a20 head 'Negative|rule set'
    col apply_user format a20
    col uappn format A30 head "Apply name"
    col queue_name format A30 head "Queue name"
    col apply_captured format A14 head "Type of|Applied Events" justify c
    col rsn format A24 head "Rule Set name"
    col sts format A8 head "Apply|Process|Status"
    col apply_tag format a8 head 'Apply| Tag'
    col QUEUE_NAME format a24
    col DDL_HANDLER format a20
    col MESSAGE_HANDLER format a20
    col NEGATIVE_RULE_SET_NAME format a20 head 'Negative|rule set'
    col apply_user format a20
    set linesize 150
      select apply_name uappn, queue_owner, DECODE(APPLY_CAPTURED, 'YES', 'Captured', 'NO',  'User-Enqueued') APPLY_CAPTURED,
           RULE_SET_NAME rsn , apply_tag, STATUS sts  from dba_apply;
      select QUEUE_NAME,DDL_HANDLER,MESSAGE_HANDLER, NEGATIVE_RULE_SET_NAME, APPLY_USER, ERROR_NUMBER,
             to_char(STATUS_CHANGE_TIME,'DD-MM-YYYY HH24:MI:SS')STATUS_CHANGE_TIME
      from dba_apply ;
    set head off
    select  ERROR_MESSAGE from  dba_apply;
    -- propagation status
    set head on
    col rsn format A28 head "Rule Set name"
    col rn format A30 head "Rule name"
    col rt format A64 head "Rule text"
    col d_dblk format A40 head 'Destination dblink'
    col nams format A41 head 'Source queue'
    col namd format A66 head 'Remote queue'
    col prop format A40 head 'Propagation name '
    col rsname format A20 head 'Rule set name'
    COLUMN TOTAL_TIME HEADING 'Total Time Executing|in Seconds' FORMAT 999999
    COLUMN TOTAL_NUMBER HEADING 'Total Events Propagated' FORMAT 999999999
    COLUMN TOTAL_BYTES HEADING 'Total mb| Propagated' FORMAT 9999999999
    COL PROPAGATION_NAME format a26
    COL SOURCE_QUEUE_NAME format a34 head "Source| queue name" justify c
    COL DESTINATION_QUEUE_NAME format a24 head "Destination| queue name" justify c
    col QUEUE_TO_QUEUE format a9 head "Queue to| Queue"
    col RULE_SET_NAME format a18
    set linesize 125
    prompt
    set lines 190
    select PROPAGATION_NAME prop,  RULE_SET_NAME rsname , nvl(DESTINATION_DBLINK,'Local to db') d_dblk,NEGATIVE_RULE_SET_NAME
                  from dba_propagation ;
      select SOURCE_QUEUE_OWNER||'.'|| SOURCE_QUEUE_NAME nams , DESTINATION_QUEUE_OWNER||'.'|| DESTINATION_QUEUE_NAME||
              decode( DESTINATION_DBLINK,null,'','@'|| DESTINATION_DBLINK) namd, status , QUEUE_TO_QUEUE
                  from dba_propagation ;
    -- Archive numbers
    set linesize 150 pagesize 0 heading on embedded on
    col name          form A55 head 'Name' justify l
    col st    form A14 head 'Start' justify l
    col end    form A14 head 'End' justify l
    col NEXT_CHANGE#   form 9999999999999 head 'Next Change' justify c
    col FIRST_CHANGE#  form 9999999999999 head 'First Change' justify c
    col SEQUENCE#     form 999999 head 'Logseq' justify c
    select thread#, SEQUENCE# , to_char(FIRST_TIME,'MM-DD HH24:MI:SS') st,
           to_char(next_time,'MM-DD HH24:MI:SS') End,FIRST_CHANGE#,
           NEXT_CHANGE#, NAME name
            from ( select thread#, SEQUENCE# , FIRST_TIME, next_time,FIRST_CHANGE#,
                     NEXT_CHANGE#, NAME name
                     from v$archived_log  order by first_time desc  )
    where rownum <= 30 ;
    -- List queues
    col queue_table format A26 head 'Queue Table'
    col queue_name format A32 head 'Queue Name'
    col primary_instance format 9999 head 'Prim|inst'
    col secondary_instance format 9999 head 'Sec|inst'
    col owner_instance format 99 head 'Own|inst'
    COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
    COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
    COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999
    select
         a.owner||'.'|| a.name nam, a.queue_table,
                  decode(a.queue_type,'NORMAL_QUEUE','NORMAL', 'EXCEPTION_QUEUE','EXCEPTION',a.queue_type) qt,
                  trim(a.enqueue_enabled) enq, trim(a.dequeue_enabled) deq, (NUM_MSGS - SPILL_MSGS) MEM_MSG, spill_msgs, x.num_msgs msg,
                  x.INST_ID owner_instance
                  from dba_queues a , sys.gv_$buffered_queues x
            where
                   a.qid = x.queue_id (+) and a.owner not in ( 'SYS','SYSTEM','WMSYS','SYSMAN')  order by a.owner ,qt desc;
    -- List instantiated objects
    set linesize 150
    select distinct SOURCE_DATABASE,
           source_object_owner||'.'||source_object_name own_obj,
           SOURCE_OBJECT_TYPE objt, instantiation_scn, IGNORE_SCN,
           apply_database_link lnk
    from  DBA_APPLY_INSTANTIATED_OBJECTS
    order by 1,2;
    -- Propagation senders : to run on source site
    prompt
    prompt ++ EVENTS AND BYTES PROPAGATED FOR EACH PROPAGATION++
    prompt
    COLUMN Elapsed_propagation_TIME HEADING 'Elapsed |Propagation Time|(Seconds)' FORMAT 9999999999999999
    COLUMN TOTAL_NUMBER HEADING 'Total Events|Propagated' FORMAT 9999999999999999
    COLUMN SCHEDULE_STATUS HEADING 'Schedule|Status'
    column elapsed_dequeue_time HEADING 'Total Dequeue|Time (Secs)'
    column elapsed_propagation_time HEADING 'Total Propagation|Time (Secs)' justify c
    column elapsed_pickle_time HEADING 'Total Pickle| Time(Secs)' justify c
    column total_time HEADING 'Elapsed|Pickle Time|(Seconds)' justify c
    column high_water_mark HEADING 'High|Water|Mark'
    column acknowledgement HEADING 'Target |Ack'
    prompt pickle : Pickling is the action of building the messages, wrapping the LCRs before enqueuing
    prompt
    set linesize 150
    SELECT p.propagation_name,q.message_delivery_mode queue_type, DECODE(p.STATUS,
                    'DISABLED', 'Disabled', 'ENABLED', 'Enabled') SCHEDULE_STATUS, q.instance,
                    q.total_number TOTAL_NUMBER, q.TOTAL_BYTES/1048576 total_bytes,
                    q.elapsed_dequeue_time/100 elapsed_dequeue_time, q.elapsed_pickle_time/100 elapsed_pickle_time,
                    q.total_time/100 elapsed_propagation_time
      FROM  DBA_PROPAGATION p, dba_queue_schedules q
            WHERE   p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2), q.destination)
      AND q.SCHEMA = p.SOURCE_QUEUE_OWNER
      AND q.QNAME = p.SOURCE_QUEUE_NAME
      order by q.message_delivery_mode, p.propagation_name;
    -- propagation receiver : to run on apply site
    COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
    COLUMN DST_QUEUE_NAME HEADING 'Target|Queue|Name' FORMAT A20
    COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A15
    COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
    COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
    COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99
    SELECT SRC_QUEUE_NAME,
           SRC_DBNAME,DST_QUEUE_NAME,
           (ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
           (ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
           (ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME, TOTAL_MSGS,HIGH_WATER_MARK
      FROM V$PROPAGATION_RECEIVER;
    -- Apply reader
    col rsid format 99999 head "Reader|Sid" justify c
    COLUMN CREATION HEADING 'Message|Creation time' FORMAT A17 justify c
    COLUMN LATENCY HEADING 'Latency|in|Seconds' FORMAT 9999999
    col deqt format A15 head "Last|Dequeue Time" justify c
    SELECT APPLY_NAME, sid rsid , (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
            TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD') CREATION, TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD') deqt,
            DEQUEUED_MESSAGE_NUMBER  FROM V$STREAMS_APPLY_READER;
    -- Do we have any library cache lock ?
    set head on pause off feed off linesize 150
    column event format a48 head "Event type"
    column wait_time format 999999 head "Total| waits"
    column seconds_in_wait format 999999 head " Time waited "
    column sid format 9999 head "Sid"
    column state format A17 head "State"
    col seq# format 999999
    select
      w.sid, s.status,w.seq#, w.event
      , w.wait_time, w.seconds_in_wait , w.p1 , w.p2 , w.p3 , w.state
    from
      v$session_wait w , v$session s
    where
      s.sid = w.sid and
      w.event != 'pmon timer'                  and
      w.event != 'rdbms ipc message'           and
      w.event != 'PL/SQL lock timer'           and
       w.event != 'SQL*Net message from client' and
      w.event != 'client message'              and
      w.event != 'pipe get'                    and
      w.event != 'Null event'                  and
      w.event != 'wakeup time manager'         and
      w.event != 'slave wait'                  and
      w.event != 'smon timer'
      and w.event != 'class slave wait'
      and w.event != 'LogMiner: wakeup event for preparer'
      and w.event != 'Streams AQ: waiting for time management or cleanup tasks'
      and w.event != 'LogMiner: wakeup event for builder'
      and w.event != 'Streams AQ: waiting for messages in the queue'
      and w.event != 'ges remote message'
      and w.event != 'gcs remote message'
      and w.event != 'Streams AQ: qmn slave idle wait'
      and w.event != 'Streams AQ: qmn coordinator idle wait'
      and w.event != 'ASM background timer'
      and w.event != 'DIAG idle wait'
      and w.seconds_in_wait > 0
    /

  • Database Upgrade in Data Guard

    Hi all,
    I have an Active Data Guard configuration running, and I want to upgrade my database. What should I do? Should I stop Data Guard, shut down both the primary and the standby databases, upgrade the primary first and then the standby, and then rebuild the Data Guard configuration?
    Or is there a better recommended approach?

    Hi CKPT,
    Can you help me find the best method to upgrade from 10.2.0.4 to 11.2.0.3 with a physical standby database? My plan is:
    1. Stop applying redo
    2. Stop all standby services
    3. Upgrade the standby
    4. Switch over to the upgraded standby
    5. Upgrade the old primary if step 4 completes without errors
    6. Switch back
    Please suggest the best method with minimal downtime. Note that I have more than 4 TB of data. Thanks.
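
    The switchovers in steps 4 and 6 can be sketched in SQL*Plus as below. This is a minimal sketch only: always check SWITCHOVER_STATUS first, and note that Redo Apply cannot run between different major releases, so a genuine rolling upgrade from 10.2 to 11.2 requires SQL Apply on a (transient) logical standby rather than a plain physical standby.

        -- On the current primary:
        SELECT switchover_status FROM v$database;  -- expect TO STANDBY or SESSIONS ACTIVE
        ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
        SHUTDOWN IMMEDIATE
        STARTUP MOUNT

        -- On the standby being promoted:
        SELECT switchover_status FROM v$database;  -- expect TO PRIMARY
        ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
        ALTER DATABASE OPEN;

    With the Data Guard broker configured, the same operation is a single `SWITCHOVER TO <standby>` command in DGMGRL.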

  • Truncate data in replication

    Hi,
    I have a problem with the removeDatabase and truncateClass functions. I want to clear all data in a database at low cost, and these two APIs seem to meet my needs, but they always give me com.sleepycat.je.rep.DatabasePreemptedException when a request reaches a replica node; the master node works fine. Below is my code:
    trans = env.beginTransaction(null, null);
    closeDatabase(); // Close the database and entity store handles
    env.removeDatabase(trans, "persist#EntityStore#xxx.BdbMember#mobile");
    env.removeDatabase(trans, "persist#EntityStore#xxx.BdbMember");
    env.removeDatabase(trans, "persist#EntityStore#xxx.Card#businessNo");
    env.removeDatabase(trans, "persist#EntityStore#xxx.Card");
    env.removeDatabase(trans, "persist#EntityStore#xxx.PointFreeze");
    env.removeDatabase(trans, "persist#EntityStore#xxx.RedMemberBlockPeriod");
    reOpenDatabase("persist#EntityStore#xxx.BdbMember");
    reOpenDatabase("persist#EntityStore#xxx.Card");
    reOpenDatabase("persist#EntityStore#xxx.PointFreeze");
    reOpenDatabase("persist#EntityStore#xxx.RedMemberBlockPeriod");
    trans.commit();

    private Database reOpenDatabase(String dbName) throws Exception {
        Transaction trans = null;
        Database db = null;
        try {
            trans = env.beginTransaction(null, null);
            DatabaseConfig dc = new DatabaseConfig();
            dc.setTransactional(true);
            dc.setAllowCreate(true);
            db = env.openDatabase(trans, dbName, dc);
            trans.commit();
        } catch (Exception e) {
            if (trans != null) {
                trans.abort();
            }
            throw e;
        }
        return db;
    }
    Please do help me!
    Thanks
    Thanks

    You are using Berkeley DB, Java Edition, High Availability, which is a different product from Berkeley DB, (C version). That's why we suggested that you move your question to the Berkeley DB, Java Edition forum. I know that the product names are confusing.
    Please read the javadoc for DatabasePreemptedException at http://download.oracle.com/docs/cd/E17277_02/html/java/com/sleepycat/je/rep/DatabasePreemptedException.html. There you will find out that this happens when a database has been truncated, renamed, or deleted on the master. You will get this exception on the replica node. It tells you that the database has had a major change, and you must close and reopen your cursors, database and environment handles. Please read the javadoc for more details.
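
    The close-and-reopen pattern the javadoc describes can be sketched generically as below. This is a hedged sketch only: PreemptedException and Handle are stand-ins for DatabasePreemptedException and the JE Database/Environment handles, which are not reproduced here.

    ```java
    // Generic sketch of "close the stale handle and reopen" on preemption.
    // PreemptedException and Handle are hypothetical stand-ins for the
    // real com.sleepycat.je types.
    import java.util.function.Supplier;

    public class PreemptionRetry {
        static class PreemptedException extends RuntimeException {}

        static class Handle {
            private boolean open = true;
            int read() {
                if (!open) throw new IllegalStateException("closed");
                return 42; // placeholder payload
            }
            void close() { open = false; }
        }

        /** Run a read, reopening the handle once if it was preempted. */
        static int readWithRetry(Supplier<Handle> reopen, Handle handle) {
            try {
                return handle.read();
            } catch (PreemptedException e) {
                handle.close();              // discard the stale handle
                Handle fresh = reopen.get(); // reopen database/environment
                return fresh.read();         // retry with the new handle
            }
        }

        public static void main(String[] args) {
            // Simulate a handle invalidated by the master's removeDatabase.
            Handle stale = new Handle() {
                @Override int read() { throw new PreemptedException(); }
            };
            System.out.println(readWithRetry(Handle::new, stale)); // prints 42
        }
    }
    ```

    In real JE HA code the same idea applies to cursors, Database handles, and the EntityStore: on DatabasePreemptedException, close them all and reopen before retrying the operation.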
