RAC for downstream capture process

I have created a real-time downstream capture process in a RAC environment to protect the process from instance failure, but I have some doubts:
1- Do I need to create the standby redo log groups for each instance in the cluster, or are they shared by all?
2- If one instance goes down and we send redo from the source via the following service, defined in the source TNSNAMES.ORA:
RAC_STR=
 (DESCRIPTION=
  (ADDRESS_LIST=
   (ADDRESS=
    (PROTOCOL=TCP)
    (HOST=VIP-instance1)
    (PORT=1521))
   (ADDRESS=
    (PROTOCOL=TCP)
    (HOST=VIP-instance1)
    (PORT=1521)))
  (CONNECT_DATA=
   (SERVICE_NAME=RAC-global_name)))
will the configured process be able to continue capturing changes without redo data loss?
Appreciate any explanation.

> if one instance goes down and we send redo from the source via the service defined above, will the configured process be able to continue capturing changes without redo data loss?

You will not experience data loss if one of the RAC instances goes down - the next one will take over your downstream capture process and continue to mine redo from the source database. But you definitely need to correct your tnsnames entry, because it points twice to the same RAC instance "VIP-instance1".
Downstream capture on RAC unfortunately has other problems, which I have already experienced, but maybe they will not affect your configuration. The undocumented problems (or bugs which are open and not yet solved) are:
1. if your RAC DB has a physical standby, it can happen that the standby stops registering redo from the upstream Streams database.
2. if your RAC DB has both downstream and local capture, then when more than 2 RAC instances are running, the local capture cannot continue with the current redo log (only after a log switch)
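For reference, a hedged sketch of a corrected entry (assuming the second node's VIP is named VIP-instance2, which is not stated in the thread), so that the address list actually fails over between both instances:

RAC_STR=
 (DESCRIPTION=
  (ADDRESS_LIST=
   (ADDRESS=(PROTOCOL=TCP)(HOST=VIP-instance1)(PORT=1521))
   (ADDRESS=(PROTOCOL=TCP)(HOST=VIP-instance2)(PORT=1521)))
  (CONNECT_DATA=
   (SERVICE_NAME=RAC-global_name)))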

Similar Messages

  • Error running Archived-Log Downstream Capture Process

    I have created an Archived-Log Downstream Capture Process with reference to the following link:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_ccap.htm#i1011654
    After starting the capture process, I get the following error in the trace:
    ============================================================================
    Trace file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_13572.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /home/oracle/app/oracle/product/11.2.0/dbhome_1
    System name: Linux
    Node name: localhost.localdomain
    Release: 2.6.18-194.el5
    Version: #1 SMP Fri Apr 2 14:58:14 EDT 2010
    Machine: x86_64
    Instance name: orcl
    Redo thread mounted by this instance: 1
    Oracle process number: 37
    Unix process pid: 13572, image: [email protected] (CP01)
    *** 2011-08-20 14:21:38.899
    *** SESSION ID:(146.2274) 2011-08-20 14:21:38.899
    *** CLIENT ID:() 2011-08-20 14:21:38.899
    *** SERVICE NAME:(SYS$USERS) 2011-08-20 14:21:38.899
    *** MODULE NAME:(STREAMS) 2011-08-20 14:21:38.899
    *** ACTION NAME:(STREAMS Capture) 2011-08-20 14:21:38.899
    knlcCopyPartialCapCtx(), setting default poll freq to 0
    knlcUpdateMetaData(), before copy IgnoreUnsuperrTable:
    source:
    Ignore Unsupported Error Table: 0 entries
    target:
    Ignore Unsupported Error Table: 0 entries
    knlcUpdateMetaData(), after copy IgnoreUnsuperrTable:
    source:
    Ignore Unsupported Error Table: 0 entries
    target:
    Ignore Unsupported Error Table: 0 entries
    knlcfrectx_Init: rs=STRMADMIN.RULESET$_66, nrs=., cuid=0, cuid_prv=0, flags=0x0
    knlcObtainRuleSetNullLock: rule set name "STRMADMIN"."RULESET$_66"
    knlcObtainRuleSetNullLock: rule set name
    knlcmaInitCapPrc+
    knlcmaGetSubsInfo+
    knlqgetsubinfo
    subscriber name EMP_DEQ
    subscriber dblinke name
    subscriber name APPLY_EMP
    subscriber dblinke name
    knlcmaTerm+
    knlcmaTermSrvs+
    knlcmaTermSrvs-
    knlcmaTerm-
    knlcCCAInit()+, err = 26802
    knlcnShouldAbort: examining error stack
    ORA-26802: Queue "STRMADMIN"."STREAMS_QUEUE" has messages.
    knlcnShouldAbort: examing error 26802
    knlcnShouldAbort: returning FALSE
    knlcCCAInit: no combined capture and apply optimization err = 26802
    knlzglr_GetLogonRoles: usr = 91,
    knlqqicbk - AQ access privilege checks:
    userid=91, username=STRMADMIN
    agent=STRM05_CAPTURE
    knlqeqi()
    knlcRecInit:
    Combined Capture and Apply Optimization is OFF
    Apply-state checkpoint mode is OFF
    last_enqueued, last_acked
    0x0000.00000000 [0] 0x0000.00000000 [0]
    captured_scn, applied_scn, logminer_start, enqueue_filter
    0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908]
    flags=0
    Starting persistent Logminer Session : 13
    krvxats retval : 0
    CKPT_FREE event=FALSE CCA=FALSE Checkptfreq=1000 AV/CDC flags=0
    krvxssp retval : 0
    krvxsda retval : 0
    krvxcfi retval : 0
    #1: krvxcfi retval : 0
    #2: krvxcfi retval : 0
    About to call krvxpsr : startscn: 0x0000.0004688c
    state before krvxpsr: 0
    dbms_logrep_util.get_checkpoint_scns(): logminer sid = 13 applied_scn = 288908
    dbms_logrep_util.get_checkpoint_scns(): prev_ckpt_scn = 0 curr_ckpt_scn = 0
    *** 2011-08-20 14:21:41.810
    Begin knlcDumpCapCtx:*******************************************
    Error 1304 : ORA-01304: subordinate process error. Check alert and trace logs
    Capture Name: STRM05_CAPTURE : Instantiation#: 65
    *** 2011-08-20 14:21:41.810
    ++++ Begin KNST dump for Sid: 146 Serial#: 2274
    Init Time: 08/20/2011 14:21:38
    ++++Begin KNSTCAP dump for : STRM05_CAPTURE
    Capture#: 1 Logminer_Id: 13 State: DICTIONARY INITIALIZATION [ 08/20/2011 14:21:38]
    Capture_Message_Number: 0x0000.00000000 [0]
    Capture_Message_Create_Time: 01/01/1988 00:00:00
    Enqueue_Message_Number: 0x0000.00000000 [0]
    Enqueue_Message_Create_Time: 01/01/1988 00:00:00
    Total_Messages_Captured: 0
    Total_Messages_Created: 0 [ 01/01/1988 00:00:00]
    Total_Messages_Enqueued: 0 [ 01/01/1988 00:00:00]
    Total_Full_Evaluations: 0
    Elapsed_Capture_Time: 0 Elapsed_Rule_Time: 0
    Elapsed_Enqueue_Time: 0 Elapsed_Lcr_Time: 0
    Elapsed_Redo_Wait_Time: 0 Elapsed_Pause_Time: 0
    Apply_Name :
    Apply_DBLink :
    Apply_Messages_Sent: 0
    ++++End KNSTCAP dump
    ++++ End KNST DUMP
    +++ Begin DBA_CAPTURE dump for: STRM05_CAPTURE
    Capture_Type: DOWNSTREAM
    Version:
    Source_Database: ORCL2.LOCALDOMAIN
    Use_Database_Link: NO
    Logminer_Id: 13 Logfile_Assignment: EXPLICIT
    Status: ENABLED
    First_Scn: 0x0000.0004688c [288908]
    Start_Scn: 0x0000.0004688c [288908]
    Captured_Scn: 0x0000.0004688c [288908]
    Applied_Scn: 0x0000.0004688c [288908]
    Last_Enqueued_Scn: 0x0000.00000000 [0]
    Capture_User: STRMADMIN
    Queue: STRMADMIN.STREAMS_QUEUE
    Rule_Set_Name[+]: "STRMADMIN"."RULESET$_66"
    Checkpoint_Retention_Time: 60
    +++ End DBA_CAPTURE dump
    +++ Begin DBA_CAPTURE_PARAMETERS dump for: STRM05_CAPTURE
    PARALLELISM = 1 Set_by_User: NO
    STARTUP_SECONDS = 0 Set_by_User: NO
    TRACE_LEVEL = 7 Set_by_User: YES
    TIME_LIMIT = -1 Set_by_User: NO
    MESSAGE_LIMIT = -1 Set_by_User: NO
    MAXIMUM_SCN = 0xffff.ffffffff [281474976710655] Set_by_User: NO
    WRITE_ALERT_LOG = TRUE Set_by_User: NO
    DISABLE_ON_LIMIT = FALSE Set_by_User: NO
    DOWNSTREAM_REAL_TIME_MINE = FALSE Set_by_User: NO
    MESSAGE_TRACKING_FREQUENCY = 2000000 Set_by_User: NO
    SKIP_AUTOFILTERED_TABLE_DDL = TRUE Set_by_User: NO
    SPLIT_THRESHOLD = 1800 Set_by_User: NO
    MERGE_THRESHOLD = 60 Set_by_User: NO
    +++ End DBA_CAPTURE_PARAMETERS dump
    +++ Begin DBA_CAPTURE_EXTRA_ATTRIBUTES dump for: STRM05_CAPTURE
    USERNAME Include:YES Row_Attribute: YES DDL_Attribute: YES
    +++ End DBA_CAPTURE_EXTRA_ATTRIBUTES dump
    ++ LogMiner Session Dump Begin::
    SessionId: 13 SessionName: STRM05_CAPTURE
    Start SCN: 0x0000.00000000 [0]
    End SCN: 0x0000.00046c2d [289837]
    Processed SCN: 0x0000.0004689e [288926]
    Prepared SCN: 0x0000.000468d4 [288980]
    Read SCN: 0x0000.000468e2 [288994]
    Spill SCN: 0x0000.00000000 [0]
    Resume SCN: 0x0000.00000000 [0]
    Branch SCN: 0x0000.00000000 [0]
    Branch Time: 01/01/1988 00:00:00
    ResetLog SCN: 0x0000.00000001 [1]
    ResetLog Time: 08/18/2011 16:46:59
    DB ID: 740348291 Global DB Name: ORCL2.LOCALDOMAIN
    krvxvtm: Enabled threads: 1
    Current Thread Id: 1, Thread State 0x01
    Current Log Seqn: 107, Current Thrd Scn: 0x0000.000468e2 [288994]
    Current Session State: 0x20005, Current LM Compat: 0xb200000
    Flags: 0x3f2802d8, Real Time Apply is Off
    +++ Additional Capture Information:
    Capture Flags: 4425
    Logminer Start SCN: 0x0000.0004688c [288908]
    Enqueue Filter SCN: 0x0000.0004688c [288908]
    Low SCN: 0x0000.00000000 [0]
    Capture From Date: 01/01/1988 00:00:00
    Capture To Date: 01/01/1988 00:00:00
    Restart Capture Flag: NO
    Ping Pending: NO
    Buffered Txn Count: 0
    -- Xid Hash entry --
    -- LOB Hash entry --
    -- No TRIM LCR --
    Unsupported Reason: Unknown
    --- LCR Dump not possible ---
    End knlcDumpCapCtx:*********************************************
    *** 2011-08-20 14:21:41.810
    knluSetStatus()+{
    *** 2011-08-20 14:21:44.917
    knlcapUpdate()+{
    Updated streams$_capture_process
    finished knlcapUpdate()+ }
    finished knluSetStatus()+ }
    knluGetObjNum()+
    knlsmRaiseAlert: keltpost retval is 0
    kadso = 0 0
    KSV 1304 error in slave process
    *** 2011-08-20 14:21:44.923
    ORA-01304: subordinate process error. Check alert and trace logs
    knlz_UsrrolDes()
    knstdso: state object 0xb644b568, action 2
    knstdso: releasing so 0xb644b568 for session 146, type 0
    knldso: state object 0xa6d0dea0, action 2 memory 0x0
    kadso = 0 0
    knldso: releasing so 0xa6d0dea0
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01304: subordinate process error. Check alert and trace logs
    Any suggestions???
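    Presumably the status query referenced below was something like the following (a hedged reconstruction from the column headings; DBA_CAPTURE is the standard view):
    SELECT capture_name, status, error_message FROM dba_capture;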

    Output of the above query
    ==============================
    CAPTURE_NAME STATUS ERROR_MESSAGE
    STRM05_CAPTURE ABORTED ORA-01304: subordinate process error. Check alert and trace logs
    Alert log.xml
    =======================
    <msg time='2011-08-25T16:58:01.865+05:30' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='localhost.localdomain' host_addr='127.0.0.1' module='STREAMS'
    pid='30921'>
    <txt>Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_30921.trc:
    ORA-01304: subordinate process error. Check alert and trace logs
    </txt>
    </msg>
    The orcl_cp01_30921.trc has the same thing posted in the first message.

  • DAC Metadata Table information for Change Capture process

    Hi All,
    are there any separate metadata tables available for the Change Capture process configured in DAC for Siebel, like w_etl_step_run?
    I am surprised that the information is not available elsewhere.
    Please help with your answers.
    - M.

    This forum is for the OLAP option of the Oracle database.
    Post your DAC question in the BI Apps forum at:
    Business Intelligence Applications

  • Source DB on RAC, Archived Log Downstream Capture:Logs could not be shipped

    I don't have much experience in Oracle RAC.
    We are implementing Oracle Streams using Archived-Log Downstream capture. Source and Target DBs are 11gR2.
    The source DB is in RAC (uses scan listeners).
    To prevent users from accessing the source DB, the DBA of the source DB shut down the listener on port 1521 (changed the port number to 0000 in some file). There was one more listener on port 1523 that was up and running. We used port 1523 to create the DB link between the 2 databases.
    But, because the listener on port 1521 was down, the archived logs from the source DB could not be shipped to the shared drive. As per the source DB DBA, the two instances in the RAC use this listener/port to communicate with each other.
    As such, when we ran the DBMS_CAPTURE_ADM.CREATE_CAPTURE procedure from the target DB, the LogMiner Data Dictionary that was extracted from the source DB to the redo logs was not available to the target DB and the Streams implementation failed.
    It seems that for the archived logs to ship from the source DB to the shared drive, we need the listener on port 1521 up and running. (Correct me if I am wrong.)
    My question is:
    Is there a way to shut down a listener to prevent users from accessing the DB and have another listener up so that the archived logs can be shipped to the shared drive? If so, can you please give the details/example?
    We asked the same question to the DBA of the source DB and we were told that it could not be done.
    Thanks in advance.

    Make sure that the dblink "using" clause is referencing a service name that uses a listener that is up and operational. There is no requirement that the listener be on port 1521 for Streams or for shipping logs.
    Chapter 4 of the 2Day+ Data Replication and Integration manual has instructions for configuring downstream capture in Tutorial: Configuring Two-Database Replication with a Downstream Capture Process
    http://docs.oracle.com/cd/E11882_01/server.112/e17516/tdpii_repcont.htm#BABIJCDG
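    A minimal sketch of what that might look like (all host, service, and account names below are placeholders, not taken from the thread): a TNS alias pointing at the 1523 listener, referenced by the dblink's "using" clause.
    STREAMS_SRC =
     (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCP)(HOST=source-scan.example.com)(PORT=1523))
      (CONNECT_DATA=(SERVICE_NAME=srcdb.example.com)))
    -- at the capture database, as the Streams administrator:
    CREATE DATABASE LINK srcdb.example.com
      CONNECT TO strmadmin IDENTIFIED BY "password"
      USING 'STREAMS_SRC';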

  • Create multiple capture processes for same table depending on column value

    Hi,
    is it possible to create multiple real-time downstream capture processes to capture changes for the same table depending on a column value?
    Prakash

    I found it - by using subset rules.
    Prakash

  • Downstream Capturing - Two Sources

    Hi
    We are planning to have two source databases and one consolidated/merge database. We are planning to use Streams. I have just configured downstream real-time capture from one source database.
    As I can have only one real-time downstream capture process, the second source will need an archived-log downstream capture process.
    How do I configure this archived-log downstream capture process? Where in the code is the difference between real-time and archived-log?
    Thanks
    Sree

    You will find the steps for configuring downstream capture in the 11.2 Streams Replication Administrator's Guide, towards the end of chapter 1. Here is the URL of the online doc section that gives examples of the source init.ora parameter LOG_ARCHIVE_DEST_2 for real-time mining and archived-log mining:
    http://docs.oracle.com/cd/E11882_01/server.112/e10705/prep_rep.htm#i1014093
    In addition to the different LOG_ARCHIVE_DEST_* parameter settings between real-time and archived-log mining, real-time mining requires standby logfiles at the downstream mining database. The instructions for that are also in the Streams Replication Administrator's Guide.
    Finally, the capture parameter DOWNSTREAM_REAL_TIME_MINE must be set to Y for real-time mining. The default is N for archived-log mining.
    Chapter 2 in that same manual covers how to configure downstream capture using the MAINTAIN_SCHEMAS procedure.
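    A hedged sketch of both pieces (the service and capture names are placeholders): the source-side transport setting for real-time mining, and the capture parameter that enables real-time mining at the downstream database.
    -- on the source: ship online redo to the downstream database's standby logfiles
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=downstrm ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=downstrm';
    -- on the downstream database: switch the capture process to real-time mining
    BEGIN
      DBMS_CAPTURE_ADM.SET_PARAMETER(
        capture_name => 'strm_capture',              -- placeholder name
        parameter    => 'downstream_real_time_mine',
        value        => 'Y');
    END;
    /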

  • Is it possible to move some of the capture processes to another rac node?

    Hi All,
    Is it possible to move some of the ODI (Oracle Data Integrator) capture processes running on node1 to node2? Once moved, will they work as usual? If it's possible, please provide me with the steps.
    Appreciate your response
    Best Regards
    SK.

    Hi Cezar,
    Thanks for your post. I have a related question regarding this.
    Is it really necessary to have multiple capture and multiple apply processes, one for each schema in ODI? Because if set to automatic configuration, ODI seems to create a capture and a related apply process for each schema, which I guess leads to the specific performance problem (high CPU etc.) I mentioned in my other post: Re: Is it possible to move some of the capture processes to another rac node?
    Is there a way to use just one capture and one apply process for all of the schemas in ODI?
    Thanks a million.
    Edited by: oyigit on Nov 6, 2009 5:31 AM

  • Capture process status waiting for Dictionary Redo: first scn....

    Hi
    I am facing an issue with Oracle Streams.
    The following message is shown in the capture state:
    waiting for Dictionary Redo: first scn 777777777 (e.g.)
    Archive_log_dest=USE_DB_RECOVERY_FILE_DEST
    I have a space-related issue...
    I restored the archive logs to another partition, e.g. /opt/arc_log
    What should I do:
    1) make the DB start reading archive logs from the above location
    or
    2) how do I move some archive logs from /opt/arc_log back to USE_DB_RECOVERY_FILE_DEST so the DB starts processing...
    Regards

    Hi -
    Bad news.
    As per note 418755.1
    A. Confirm checkpoint retention. Periodically, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter, checkpoint_retention_time, controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible scn available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved and the Streams metadata tables previous to this scn (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
    Check if the archived redo logfile it is requesting is about 60 days old. You need all archived redo logs from the requested logfile onwards; if any are missing then you are out of luck. It doesn't matter that they have been mined and captured already; capture still needs these files for a restart. It has always been like this and IMHO is a significant limitation of Streams.
    If you cannot recover the logfiles, then you will need to rebuild the capture process and ensure that any gap in captured data has been resynced manually, using tags to fix the data.
    Rgds
    Mark Teehan
    Singapore
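    If the logs are recoverable, the retention window can also be shortened going forward so less history is needed on a restart; a hedged example per the note above (the capture name is a placeholder):
    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name              => 'my_capture',
        checkpoint_retention_time => 7);   -- retain 7 days of checkpoints instead of the default 60
    END;
    /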

  • Capture Process hangs and LOGMINER Stops - "wait for transaction" ???

    HI all
    Any ideas why LogMiner would stop mining the logs at the capture site DB (it just hangs 40 logs short of the current archivelog)?
    The capture process has a status of Capturing Changes,
    and the wait event on the capture process is "wait for transaction".
    How do I diagnose what's wrong with the capture process? It has been this way for 4 days!

    Hi
    Yes, we have had to explicitly register archivelogs also.
    Unfortunately this archivelog is registered, so I am not sure. It appears to have been the result of a large DML transaction, and I am not 100% sure whether the archivelog is corrupt (however I doubt it, as in 5 years as a DBA I have not once hit a corruption - but there is always a first).
    Any thoughts on how to proceed?
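    A hedged starting point for the diagnosis is to sample the capture state and message counters a few times and see whether anything is still moving (standard dynamic view):
    SELECT capture_name, state, total_messages_captured, capture_message_number
      FROM v$streams_capture;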

  • Excise invoice capture process for dealer invoice

    Hi,
    As I read in many threads, to capture the excise invoice for a dealer invoice, in the PO for the item we have to maintain the base price as the sum of the base value + excise duty amounts, and then using the MRP indicator we capture the duty amounts for the dealer invoice. But I have a query: when the dealer makes the invoice, they pass on the duty, and the duty amount is not initially known (we do not know which excise invoice amount the dealer will pass). So while creating the PO, how can we enter the exact amount as BASE VALUE + DUTY amount? Do we have to change the PO, or can we process the capture of excise duty without changing the PO amount? Also, which tax code should be selected in the PO? If we select the regular tax code which we use for excise invoice capture for other suppliers, then the system will calculate the tax amount in the PO as well.
    regards,
    zafar

    At the time of PO creation you will know the duties which the dealer will be passing on, but you will not know the base price of the material, as the vendor will add his own amount to the original base price and pass on the invoice.
    So you will create the PO with the original base price and excise only, but at the time of GRN the vendor will have added his profit to the base price of the supplied material, so you need to change the base price during GRN to match the excise duties and tick the MRP indicator, so that at the time of MIRO the duties flow correctly.
    Hope this may help you.
    BR,
    Patil

  • The (stopped) Capture process & RMAN

    Hi,
    We have a working 1-table bi-directional replication with Oracle 10.2.0.4 on SPARC/Solaris.
    Every night, RMAN backs up the database and collects/removes the archive logs (delete all inputs).
    My understanding from Oracle Streams Concepts and Administration is that RMAN will not remove an archived log needed by a capture process (I think for the LogMiner session).
    Fine.
    But now, suppose I stop the capture process for a long time (more than a day), whatever the reason.
    It's not clear what the behaviour is...
    I'm afraid that:
    - RMAN will collect the archived logs (since there is no longer a LogMiner session because of the stopped capture process)
    - when I restart the capture process, it will try to start from the last known SCN and the (new) LogMiner session will not find the redo logs.
    If that's correct, is it possible to restart the capture process with an updated SCN so that I do not run into this problem?
    How do I find this SCN?
    (In the case of a long interruption, we have a specific script which synchronizes the table. It would be run first, before restarting the capture process.)
    Thanks for your answers.
    JD

    RMAN backup in 10g is Streams aware. It will not delete any logs that contain the required_checkpoint_scn and above. This is true only if the capture process is running in the same database (local capture) where the RMAN backup runs.
    If you are using downstream capture, then RMAN is not aware of which logs Streams needs and may delete them. One additional reason why logs may be deleted is space pressure in the flash recovery area.
    Please take a look at the following documentation:
    Oracle® Streams Concepts and Administration
    10g Release 2 (10.2)
    Part Number B14229-04
    CHAPTER 2 - Streams Capture Process
    Section - RMAN and Archived Redo Log Files Required by a Capture Process
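    To answer the "How do I find this SCN?" part directly: the SCN that must still be covered by archived logs is exposed in DBA_CAPTURE (standard view, hedged sketch):
    SELECT capture_name, first_scn, required_checkpoint_scn
      FROM dba_capture;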

  • Queue-to-Queue Propagation VS. Downstream Capture

    Can someone please provide some insights into the advantages and disadvantages of using Queue-to-Queue Propagation VS. Downstream Capture ?
    Thanks for your input.
    -Reid

    As far as my knowledge goes, "Q-to-Q propagation" is a way of messaging between different queues belonging to different stages of replication (like staging and propagation) and has its own job processes, whereas downstream capture is just a capture where changes are captured at a database other than the one where the changes actually occurred. The database where these changes occur is called the "local database", while the database where these changes are captured is called the downstream database, because from there the changes will be "downstreamed" to the different nodes where the apply processes reside.
    Kapil

  • Restart downstream capture server, dequeue operation did not work

    Hi, everyone. I created a real-time downstream capture on SUSE Linux 10 with Oracle 10gR2, and it worked. But after I restarted the downstream DB, replication didn't work, while the capture, apply, and propagation processes were all in normal status. In V$BUFFERED_QUEUES I saw that messages that were not dequeued spilled to disk. Later I configured real-time downstream capture on a Windows XP platform and got the same situation. I searched this forum and found a thread about this. I changed AQ_TM_PROCESSES from 0 to 1 on the Windows XP platform, and replication started to work. But when I changed this parameter on the SUSE Linux platform, replication still didn't work and messages were still spilled to disk in V$BUFFERED_QUEUES. I don't know why. I did all of this in VMware Workstation. I'm not a DBA and don't have a Metalink account. Any help with this is appreciated.
    thanks.
    JunWang

    Spilled to disk? Are they in the aq$_<queue_name>_p table or in the queue table? Anything in dba_apply_error?
    Can you give the following output :
    set lines 190
    -- reader
    col DEQUEUED_MESSAGE_NUMBER for 999999999999
    SELECT ap.APPLY_NAME, DECODE(ap.APPLY_CAPTURED,'YES','Captured LCRS', 'NO','User-Enqueued','UNKNOWN') APPLY_CAPTURED,
           SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME, r.STATE, r.TOTAL_MESSAGES_DEQUEUED, r.sga_used
           FROM V$STREAMS_APPLY_READER r, V$SESSION s, DBA_APPLY ap
           WHERE r.SID = s.SID AND
                 r.SERIAL# = s.SERIAL# AND
                 r.APPLY_NAME = ap.APPLY_NAME;
    SELECT APPLY_NAME, sid rsid , (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
            TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD') CREATION, TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD') deqt,
            DEQUEUED_MESSAGE_NUMBER  FROM V$STREAMS_APPLY_READER;
    -- coordinator : compare to reader to see if there is an effective apply problem
    col totr format 999999999 head "Total|Received"
    col tad format 9999999 head "Total|Admin"
    col appn format A22 head "Apply name"
    col terr format 9999999 head "Total|Errors"
    col twd format 9999999 head "Total|Wait"
    col TOTAL_ROLLBACKS format 9999999 head "Total|Rollback"
    col twc format 9999999 head "Total|Wait|Commits"
    select apply_name appn, apply#,sid,state, total_received totr, total_applied tap, total_wait_deps twd, TOTAL_ROLLBACKS,
          total_wait_commits twc, total_errors terr, to_char(hwm_time,'DD-MM HH24:MI:SS')hwt
    from v$streams_apply_coordinator order by apply_name;
    -- any errors?
    SELECT queue_name,source_commit_scn scn, message_count, source_database,LOCAL_TRANSACTION_ID, ERROR_MESSAGE FROM DBA_APPLY_ERROR order by message_count desc;
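    To quantify the spill itself, a hedged addition to the scripts above (V$BUFFERED_QUEUES is the standard view):
    SELECT queue_schema, queue_name, num_msgs, spill_msgs
      FROM v$buffered_queues;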

  • Multisource env. + downstream capture

    Hi all,
    Ok, here's our situation, I'd love some feedback:
    we have three databases, let's call them A, B, and C.
    All three have the same structure (same schemas, objects, etc...).
    Databases A and B have different non-overlapping data.
    We want to consolidate (ie, merge) databases A and B into C.
    While A and B were tiny, this was trivial; we simply used nightly exports
    and imports (with ignore=y) into C from A and B.
    This was convenient as:
    1. code was simple to create/maintain/debug/support
    2. all the processing was done on the server database C was on
    Now that A and B are huge, this is no longer practical.
    Looking through the streams replication doc I noticed the ability
    to support multisource replication along with downstream capture.
    This sounds perfect for us. Consider our current requirements:
    1. this is a nightly process, not real time - so using downstream capture
    to mine the arch logs after they've been shipped over is perfectly acceptable
    2. we need all the resources on databases A and B, so using downstream capture
    to do all the leg work on the server where C is on is great
    3. this is not bidirectional replication. Databases A and B do not know each
    other exist and never need to share data. Database C is read only and would
    never have to send any changes to A or B.
    so basically, we have a multisource, one way replication environment.
    I'm curious if anyone else has a similar setup and if you're using Streams (in particular
    downstream capture) as a solution.
    thanks!

    Thanks Serge. Yeah, from the doc and various metalink notes I suspected it
    could be done. What would be helpful is feedback regarding personal experience
    with a similar configuration (problems, considerations, joyful experiences, etc...)
    thanks,
    ant
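    For what it's worth, a minimal sketch of the shape this takes at C (every name below is a placeholder, and a real setup also needs DBMS_CAPTURE_ADM.BUILD run at each source with the resulting first SCN supplied): one downstream capture process per source, each with its own queue.
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name        => 'strmadmin.queue_from_a',
        capture_name      => 'capture_from_a',
        source_database   => 'A.EXAMPLE.COM',
        use_database_link => FALSE);
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name        => 'strmadmin.queue_from_b',
        capture_name      => 'capture_from_b',
        source_database   => 'B.EXAMPLE.COM',
        use_database_link => FALSE);
    END;
    /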

  • Downstream Capture using archivelog files

    DB Version 11.2
    I have not been able to find a demo/tutorial of using Streams with Downstream Capture where only archivelogs from the source DB are available (they were moved to a remote system using a DVD/CD/tape). All the examples that I have been able to find use a network connection.
    Does anyone know of such an example?
    Thank you.

    Hi!
    Can you please elaborate your question more clearly?
    An explanation of downstream capture:
    I am assuming that we have two databases, one production database and one DR (disaster recovery) database.
    We want changes from the production database to be replicated to the DR database.
    For performance reasons we want no capture process running on production.
    To achieve that, we use downstream capture, in which both the capture and apply processes run on the DR database. The question then is how that capture process will capture changes from the production database.
    So we configure a Data Guard style log transport between the two databases. The production database must be in archivelog mode, whereas the DR database can be in noarchivelog mode. Archived log files from the production database are then copied to the DR database automatically over a network connection, and the capture process captures changes from these archives.
    Now, why do you want the archives to be copied on DVDs?
    regards
    Usman
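    Regarding the original question: when archived logs arrive without any network transport (DVD/CD/tape), they can be assigned to a downstream capture process explicitly after being copied into place; a hedged sketch (the path and capture name are placeholders):
    ALTER DATABASE REGISTER LOGICAL LOGFILE
      '/u01/transferred_logs/arch_1_107.arc' FOR 'my_capture';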
