Multisource env. + downstream capture

Hi all,
Ok, here's our situation, I'd love some feedback:
we have three databases, let's call them A, B, and C.
All three have the same structure (same schemas, objects, etc...).
Databases A and B have different non-overlapping data.
We want to consolidate (i.e., merge) databases A and B into C.
While A and B were tiny, this was trivial; we simply used nightly exports
and imports (with ignore=y) into C from A and B.
This was convenient as:
1. code was simple to create/maintain/debug/support
2. all the processing was done on the server database C was on
Now that A and B are huge, this is no longer practical.
Looking through the Streams replication documentation I noticed the ability
to support multisource replication along with downstream capture.
This sounds perfect for us. Consider our current requirements:
1. this is a nightly process, not real time - so using downstream capture
to mine the archived logs after they've been shipped over is perfectly acceptable
2. we need all the resources on databases A and B, so using downstream capture
to do all the legwork on the server hosting C is great
3. this is not bidirectional replication. Databases A and B do not know each
other exist and never need to share data. Database C is read only and would
never have to send any changes to A or B.
So basically, we have a multisource, one-way replication environment.
I'm curious if anyone else has a similar setup and if you're using Streams (in particular
downstream capture) as a solution.
thanks!

Thanks Serge. Yeah, from the doc and various Metalink notes I suspected it
could be done. What would be helpful is feedback regarding personal experience
with a similar configuration (problems, considerations, joyful experiences, etc...)
thanks,
ant
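For anyone sketching this out later, the skeleton at database C could look roughly like the following, assuming a Streams administrator STRMADMIN and one queue plus one archived-log downstream capture process per source (all names and SCN values below are illustrative placeholders, not from this thread):

-- Run at downstream database C as the Streams administrator.
-- DBMS_CAPTURE_ADM.BUILD must be run at each source first; the SCN it
-- returns goes into first_scn below.
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STRMADMIN.QT_FROM_A',
    queue_name  => 'STRMADMIN.Q_FROM_A');

  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'STRMADMIN.Q_FROM_A',
    capture_name       => 'CAPTURE_FROM_A',
    source_database    => 'A.EXAMPLE.COM',
    use_database_link  => FALSE,
    first_scn          => 1234567,        -- from DBMS_CAPTURE_ADM.BUILD at A
    logfile_assignment => 'explicit');    -- shipped archived logs are registered manually

  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STRMADMIN.QT_FROM_B',
    queue_name  => 'STRMADMIN.Q_FROM_B');

  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'STRMADMIN.Q_FROM_B',
    capture_name       => 'CAPTURE_FROM_B',
    source_database    => 'B.EXAMPLE.COM',
    use_database_link  => FALSE,
    first_scn          => 2345678,        -- from DBMS_CAPTURE_ADM.BUILD at B
    logfile_assignment => 'explicit');
END;
/

Each queue then gets its own rules and its own apply process at C, the same as in a single-source setup; the two streams stay completely independent, which matches the "A and B never see each other" requirement.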

Similar Messages

  • Multiple sources with single downstream capture

    Is it possible to have multiple source machines all send their redo logs to a single downstream capture DB that will collect LCRs and queue the LCRs for all the source machines?

    Yes, that's what downstream replication is all about. Read the 11g manual if you want to know how to do that.

  • Configure log file transfer to downstream capture database!

    Dear all,
    I am setting up bidirectional replication between two Linux-based database servers running Oracle 11gR1.
    I am following the Oracle Streams Administrator's Guide and have completed all the pre-configuration tasks, but I am confused by the step where we have to configure log file transfer to the downstream capture database.
    I am unable to understand this from the documentation.
    I mean, how do I configure Oracle Net so that the source databases can communicate with each other in bidirectional replication?
    Configure authentication at both databases to support the transfer of redo data? How can I do this?
    The third thing is the parameter setting, which obviously I can do.
    Kindly help me through this step.
    Regards, Imran

    and what about this:
    Configure authentication at both databases to support the transfer of redo data?
    Thanks, Imran

    For communication between the two databases, you create a Streams administrator at both databases. The strmadmin users talk to each other.
    Regards,
    S.K.
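
    If it helps, the redo transport piece for one direction typically looks like the sketch below; the names srcdb/dstdb and the alias DSTDB are placeholders, and the authentication approach shown (Data Guard-style, via the password file) is an assumption about your setup rather than something stated in the thread:

    -- On the SOURCE database: ship its redo to the downstream capture database.
    ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(srcdb,dstdb)';
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=DSTDB ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dstdb';
    ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;

    -- On the DOWNSTREAM capture database: accept redo from the source.
    ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(srcdb,dstdb)';

    -- Authentication: redo transport authenticates via the password file
    -- (REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE plus matching SYS passwords or a
    -- copied password file), not via the strmadmin schema.

    For bidirectional replication you would repeat this in the other direction, since each database is a redo source for the other.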

  • Downstream Capturing - Two Sources

    Hi
    We are planning to have two source databases and one consolidated/merge database. We are planning to use Streams. I just configured real-time downstream capture from one source database.
    As I understand it, I can have only one real-time downstream capture process, so the other source will need an archived-log downstream capture process.
    How do I configure this archived-log downstream capture process? Where in the code is the difference between real-time and archived-log capture?
    Thanks
    Sree

    You will find the steps for configuring downstream capture in the 11.2 Streams Replication Administrator's Guide towards the end of chapter 1. Here is the URL to the online doc section that gives examples of the source init.ora parameter log_archive_dest_2 for real-time mining and archive log mining:
    http://docs.oracle.com/cd/E11882_01/server.112/e10705/prep_rep.htm#i1014093
    In addition to the different log_archive_dest_* parameter settings between real-time and archive log mining, real-time mining requires standby redo logs at the downstream mining database. The instructions for that are also in the Streams Replication Administrator's Guide.
    Finally, the capture parameter DOWNSTREAM_REAL_TIME_MINE must be set to Y for real-time mining. The default is N for archive log mining.
    Chapter 2 in that same manual covers how to configure downstream capture using the MAINTAIN_SCHEMAS procedure.
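
    For what it's worth, that capture-side switch is just a capture parameter; a minimal sketch, where the capture name is a placeholder:

    -- Archived-log downstream capture: leave the default (N).
    -- Real-time downstream capture: set the parameter to Y after creating
    -- the capture process and adding standby redo logs at the downstream DB.
    BEGIN
      DBMS_CAPTURE_ADM.SET_PARAMETER(
        capture_name => 'REALTIME_CAPTURE',          -- placeholder name
        parameter    => 'downstream_real_time_mine',
        value        => 'Y');
    END;
    /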

  • RAC for downstreams capture process

    I have created a real-time downstream capture process on a RAC to protect the process from any failure, but I have some doubts about this:
    1- Do I need to create a group of standby redo logs for each instance in the cluster, or is it shared by all?
    2- If one instance goes down and redo is sent from the source via the following service defined in the source TNSNAMES:
    RAC_STR =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = RAC-global_name)
        )
      )
    will the configured process be able to continue capturing changes without redo data loss?
    Appreciate any explanation.

    > if one instance goes down and redo is sent from the source via the service above - will the configured process be able to continue capturing changes without redo data loss?

    You will not experience data loss if one of the RAC instances goes down - the next one will take over your downstream capture process and continue to mine redo from the source database. But you definitely need to correct your tnsnames, because it is pointing twice to the same RAC instance "VIP-instance1" (see the corrected sketch below).
    Downstream capture on RAC unfortunately has other problems, which I've already experienced, but maybe they will not concern your configuration. The undocumented problems (or bugs which are open and not solved yet) are:
    1. if your RAC DB has a physical standby, it can happen that it stops registering redo from the upstream Streams database.
    2. if your RAC DB has both downstream and local capture and more than 2 RAC instances are running, the local capture can't continue with the current redo log (only after a log switch).
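
    For reference, a corrected entry along those lines might look like this sketch (VIP-instance2 is a placeholder for the second node's VIP):

    RAC_STR =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance2)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = RAC-global_name)
        )
      )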

  • Queue-to-Queue Propagation VS. Downstream Capture

    Can someone please provide some insights into the advantages and disadvantages of using Queue-to-Queue Propagation VS. Downstream Capture ?
    Thanks for your input.
    -Reid

    As far as my knowledge is concerned, "Q-to-Q propagation" is a way of messaging between different queues belonging to different stages of replication (such as staging and propagation) and it has its own job processes, whereas downstream capture is simply capture where changes are captured at some database other than the one where the changes actually occurred. The database where these changes occur is called the "local database", while the database where these changes are captured is called the downstream database, because from there the changes will be "downstreamed" to the different nodes where the apply processes reside.
    Kapil

  • Source DB on RAC, Archived Log Downstream Capture:Logs could not be shipped

    I don't have much experience in Oracle RAC.
    We are implementing Oracle Streams using Archived-Log Downstream capture. Source and Target DBs are 11gR2.
    The source DB is in RAC (uses scan listeners).
    To prevent users from accessing the source DB, the DBA of the source DB shut down the listener on port 1521 (changed the port number to 0000 in some file). There was one more listener, on port 1523, that was up and running. We used port 1523 to create the DB link between the two databases.
    But because the listener on port 1521 was down, the archived logs from the source DB could not be shipped to the shared drive. As per the source DB DBA, the two instances in the RAC use this listener/port to communicate with each other.
    As such, when we ran the DBMS_CAPTURE_ADM.CREATE_CAPTURE procedure from the target DB, the LogMiner data dictionary that was extracted from the source DB to the redo logs was not available to the target DB and the Streams implementation failed.
    It seems that for the archived logs to be shipped from the source DB to the shared drive, we need the listener on port 1521 up and running. (Correct me if I am wrong.)
    My question is:
    Is there a way to shut down a listener to prevent users from accessing the DB and have another listener up so that the archived logs can be shipped to the shared drive? If so, can you please give the details/an example?
    We asked the same question to the DBA of the source DB and we were told that it could not be done.
    Thanks in advance.

    Make sure that the dblink "using" clause is referencing a service name that uses a listener that is up and operational. There is no requirement that the listener be on port 1521 for Streams or for shipping logs.
    Chapter 4 of the 2Day+ Data Replication and Integration manual has instructions for configuring downstream capture in Tutorial: Configuring Two-Database Replication with a Downstream Capture Process
    http://docs.oracle.com/cd/E11882_01/server.112/e17516/tdpii_repcont.htm#BABIJCDG
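
    A minimal sketch of such a link, assuming a TNS alias (here called STREAMS_SRC) that resolves through the working port-1523 listener; the link name, user, and password are illustrative, and the link name usually matches the source's global database name:

    -- At the downstream (capture) database: the link used to reach the source.
    -- The 'using' string just needs to resolve through a listener that is up;
    -- it does not have to be the port-1521 listener.
    CREATE DATABASE LINK src_db.example.com
      CONNECT TO strmadmin IDENTIFIED BY strmadminpw
      USING 'STREAMS_SRC';   -- TNS alias pointing at the port-1523 listener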

  • Restart downstream capture  server,dequeue operation did not work

    Hi, everyone. I created real-time downstream capture on SUSE Linux 10 with Oracle 10gR2, and it worked. But after I restarted the downstream DB, the replication didn't work even though the capture, apply, and propagation processes were all in normal status. In V$BUFFERED_QUEUES I saw messages that were not dequeued spilled to disk. Later I configured real-time downstream capture on a Windows XP platform and got the same situation. I searched this forum and found a thread about this. I changed AQ_TM_PROCESSES from 0 to 1 on the Windows XP platform, and the replication started to work. But when I changed this parameter on the SUSE Linux platform, the replication still didn't work and messages were spilled to disk in V$BUFFERED_QUEUES. I don't know why. I did all of this in VMware Workstation. I'm not a DBA and don't have a Metalink account. Any help with this is appreciated.
    thanks.
    JunWang

    Spilled to disk? Are they in the aq$_<queue_name>_p table or in the queue table? Anything in dba_apply_error?
    Can you give the following output :
    set lines 190
    -- reader
    col DEQUEUED_MESSAGE_NUMBER for 999999999999
    SELECT ap.APPLY_NAME, DECODE(ap.APPLY_CAPTURED,'YES','Captured LCRS', 'NO','User-Enqueued','UNKNOWN') APPLY_CAPTURED,
           SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME, r.STATE, r.TOTAL_MESSAGES_DEQUEUED, r.sga_used
           FROM V$STREAMS_APPLY_READER r, V$SESSION s, DBA_APPLY ap
           WHERE r.SID = s.SID AND
                 r.SERIAL# = s.SERIAL# AND
                 r.APPLY_NAME = ap.APPLY_NAME;
    SELECT APPLY_NAME, sid rsid , (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
            TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD') CREATION, TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD') deqt,
            DEQUEUED_MESSAGE_NUMBER  FROM V$STREAMS_APPLY_READER;
    -- coordinator : compare to reader to see if there is an effective apply problem
    col totr format 999999999 head "Total|Received"
    col tad format 9999999 head "Total|Admin"
    col appn format A22 head "Apply name"
    col terr format 9999999 head "Total|Errors"
    col twd format 9999999 head "Total|Wait"
    col TOTAL_ROLLBACKS format 9999999 head "Total|Rollback"
    col twc format 9999999 head "Total|Wait|Commits"
    select apply_name appn, apply#,sid,state, total_received totr, total_applied tap, total_wait_deps twd, TOTAL_ROLLBACKS,
          total_wait_commits twc, total_errors terr, to_char(hwm_time,'DD-MM HH24:MI:SS')hwt
    from v$streams_apply_coordinator order by apply_name;
    -- any errors?
    SELECT queue_name,source_commit_scn scn, message_count, source_database,LOCAL_TRANSACTION_ID, ERROR_MESSAGE FROM DBA_APPLY_ERROR order by message_count desc;

  • Error running Archived-Log Downstream Capture Process

    I have created an Archived-Log Downstream Capture process with reference to the following link:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_ccap.htm#i1011654
    After starting the capture process, I get the following error in the trace file:
    ============================================================================
    Trace file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_13572.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /home/oracle/app/oracle/product/11.2.0/dbhome_1
    System name: Linux
    Node name: localhost.localdomain
    Release: 2.6.18-194.el5
    Version: #1 SMP Fri Apr 2 14:58:14 EDT 2010
    Machine: x86_64
    Instance name: orcl
    Redo thread mounted by this instance: 1
    Oracle process number: 37
    Unix process pid: 13572, image: [email protected] (CP01)
    *** 2011-08-20 14:21:38.899
    *** SESSION ID:(146.2274) 2011-08-20 14:21:38.899
    *** CLIENT ID:() 2011-08-20 14:21:38.899
    *** SERVICE NAME:(SYS$USERS) 2011-08-20 14:21:38.899
    *** MODULE NAME:(STREAMS) 2011-08-20 14:21:38.899
    *** ACTION NAME:(STREAMS Capture) 2011-08-20 14:21:38.899
    knlcCopyPartialCapCtx(), setting default poll freq to 0
    knlcUpdateMetaData(), before copy IgnoreUnsuperrTable:
    source:
    Ignore Unsupported Error Table: 0 entries
    target:
    Ignore Unsupported Error Table: 0 entries
    knlcUpdateMetaData(), after copy IgnoreUnsuperrTable:
    source:
    Ignore Unsupported Error Table: 0 entries
    target:
    Ignore Unsupported Error Table: 0 entries
    knlcfrectx_Init: rs=STRMADMIN.RULESET$_66, nrs=., cuid=0, cuid_prv=0, flags=0x0
    knlcObtainRuleSetNullLock: rule set name "STRMADMIN"."RULESET$_66"
    knlcObtainRuleSetNullLock: rule set name
    knlcmaInitCapPrc+
    knlcmaGetSubsInfo+
    knlqgetsubinfo
    subscriber name EMP_DEQ
    subscriber dblinke name
    subscriber name APPLY_EMP
    subscriber dblinke name
    knlcmaTerm+
    knlcmaTermSrvs+
    knlcmaTermSrvs-
    knlcmaTerm-
    knlcCCAInit()+, err = 26802
    knlcnShouldAbort: examining error stack
    ORA-26802: Queue "STRMADMIN"."STREAMS_QUEUE" has messages.
    knlcnShouldAbort: examing error 26802
    knlcnShouldAbort: returning FALSE
    knlcCCAInit: no combined capture and apply optimization err = 26802
    knlzglr_GetLogonRoles: usr = 91,
    knlqqicbk - AQ access privilege checks:
    userid=91, username=STRMADMIN
    agent=STRM05_CAPTURE
    knlqeqi()
    knlcRecInit:
    Combined Capture and Apply Optimization is OFF
    Apply-state checkpoint mode is OFF
    last_enqueued, last_acked
    0x0000.00000000 [0] 0x0000.00000000 [0]
    captured_scn, applied_scn, logminer_start, enqueue_filter
    0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908]
    flags=0
    Starting persistent Logminer Session : 13
    krvxats retval : 0
    CKPT_FREE event=FALSE CCA=FALSE Checkptfreq=1000 AV/CDC flags=0
    krvxssp retval : 0
    krvxsda retval : 0
    krvxcfi retval : 0
    #1: krvxcfi retval : 0
    #2: krvxcfi retval : 0
    About to call krvxpsr : startscn: 0x0000.0004688c
    state before krvxpsr: 0
    dbms_logrep_util.get_checkpoint_scns(): logminer sid = 13 applied_scn = 288908
    dbms_logrep_util.get_checkpoint_scns(): prev_ckpt_scn = 0 curr_ckpt_scn = 0
    *** 2011-08-20 14:21:41.810
    Begin knlcDumpCapCtx:*******************************************
    Error 1304 : ORA-01304: subordinate process error. Check alert and trace logs
    Capture Name: STRM05_CAPTURE : Instantiation#: 65
    *** 2011-08-20 14:21:41.810
    ++++ Begin KNST dump for Sid: 146 Serial#: 2274
    Init Time: 08/20/2011 14:21:38
    ++++Begin KNSTCAP dump for : STRM05_CAPTURE
    Capture#: 1 Logminer_Id: 13 State: DICTIONARY INITIALIZATION [ 08/20/2011 14:21:38]
    Capture_Message_Number: 0x0000.00000000 [0]
    Capture_Message_Create_Time: 01/01/1988 00:00:00
    Enqueue_Message_Number: 0x0000.00000000 [0]
    Enqueue_Message_Create_Time: 01/01/1988 00:00:00
    Total_Messages_Captured: 0
    Total_Messages_Created: 0 [ 01/01/1988 00:00:00]
    Total_Messages_Enqueued: 0 [ 01/01/1988 00:00:00]
    Total_Full_Evaluations: 0
    Elapsed_Capture_Time: 0 Elapsed_Rule_Time: 0
    Elapsed_Enqueue_Time: 0 Elapsed_Lcr_Time: 0
    Elapsed_Redo_Wait_Time: 0 Elapsed_Pause_Time: 0
    Apply_Name :
    Apply_DBLink :
    Apply_Messages_Sent: 0
    ++++End KNSTCAP dump
    ++++ End KNST DUMP
    +++ Begin DBA_CAPTURE dump for: STRM05_CAPTURE
    Capture_Type: DOWNSTREAM
    Version:
    Source_Database: ORCL2.LOCALDOMAIN
    Use_Database_Link: NO
    Logminer_Id: 13 Logfile_Assignment: EXPLICIT
    Status: ENABLED
    First_Scn: 0x0000.0004688c [288908]
    Start_Scn: 0x0000.0004688c [288908]
    Captured_Scn: 0x0000.0004688c [288908]
    Applied_Scn: 0x0000.0004688c [288908]
    Last_Enqueued_Scn: 0x0000.00000000 [0]
    Capture_User: STRMADMIN
    Queue: STRMADMIN.STREAMS_QUEUE
    Rule_Set_Name[+]: "STRMADMIN"."RULESET$_66"
    Checkpoint_Retention_Time: 60
    +++ End DBA_CAPTURE dump
    +++ Begin DBA_CAPTURE_PARAMETERS dump for: STRM05_CAPTURE
    PARALLELISM = 1 Set_by_User: NO
    STARTUP_SECONDS = 0 Set_by_User: NO
    TRACE_LEVEL = 7 Set_by_User: YES
    TIME_LIMIT = -1 Set_by_User: NO
    MESSAGE_LIMIT = -1 Set_by_User: NO
    MAXIMUM_SCN = 0xffff.ffffffff [281474976710655] Set_by_User: NO
    WRITE_ALERT_LOG = TRUE Set_by_User: NO
    DISABLE_ON_LIMIT = FALSE Set_by_User: NO
    DOWNSTREAM_REAL_TIME_MINE = FALSE Set_by_User: NO
    MESSAGE_TRACKING_FREQUENCY = 2000000 Set_by_User: NO
    SKIP_AUTOFILTERED_TABLE_DDL = TRUE Set_by_User: NO
    SPLIT_THRESHOLD = 1800 Set_by_User: NO
    MERGE_THRESHOLD = 60 Set_by_User: NO
    +++ End DBA_CAPTURE_PARAMETERS dump
    +++ Begin DBA_CAPTURE_EXTRA_ATTRIBUTES dump for: STRM05_CAPTURE
    USERNAME Include:YES Row_Attribute: YES DDL_Attribute: YES
    +++ End DBA_CAPTURE_EXTRA_ATTRIBUTES dump
    ++ LogMiner Session Dump Begin::
    SessionId: 13 SessionName: STRM05_CAPTURE
    Start SCN: 0x0000.00000000 [0]
    End SCN: 0x0000.00046c2d [289837]
    Processed SCN: 0x0000.0004689e [288926]
    Prepared SCN: 0x0000.000468d4 [288980]
    Read SCN: 0x0000.000468e2 [288994]
    Spill SCN: 0x0000.00000000 [0]
    Resume SCN: 0x0000.00000000 [0]
    Branch SCN: 0x0000.00000000 [0]
    Branch Time: 01/01/1988 00:00:00
    ResetLog SCN: 0x0000.00000001 [1]
    ResetLog Time: 08/18/2011 16:46:59
    DB ID: 740348291 Global DB Name: ORCL2.LOCALDOMAIN
    krvxvtm: Enabled threads: 1
    Current Thread Id: 1, Thread State 0x01
    Current Log Seqn: 107, Current Thrd Scn: 0x0000.000468e2 [288994]
    Current Session State: 0x20005, Current LM Compat: 0xb200000
    Flags: 0x3f2802d8, Real Time Apply is Off
    +++ Additional Capture Information:
    Capture Flags: 4425
    Logminer Start SCN: 0x0000.0004688c [288908]
    Enqueue Filter SCN: 0x0000.0004688c [288908]
    Low SCN: 0x0000.00000000 [0]
    Capture From Date: 01/01/1988 00:00:00
    Capture To Date: 01/01/1988 00:00:00
    Restart Capture Flag: NO
    Ping Pending: NO
    Buffered Txn Count: 0
    -- Xid Hash entry --
    -- LOB Hash entry --
    -- No TRIM LCR --
    Unsupported Reason: Unknown
    --- LCR Dump not possible ---
    End knlcDumpCapCtx:*********************************************
    *** 2011-08-20 14:21:41.810
    knluSetStatus()+{
    *** 2011-08-20 14:21:44.917
    knlcapUpdate()+{
    Updated streams$_capture_process
    finished knlcapUpdate()+ }
    finished knluSetStatus()+ }
    knluGetObjNum()+
    knlsmRaiseAlert: keltpost retval is 0
    kadso = 0 0
    KSV 1304 error in slave process
    *** 2011-08-20 14:21:44.923
    ORA-01304: subordinate process error. Check alert and trace logs
    knlz_UsrrolDes()
    knstdso: state object 0xb644b568, action 2
    knstdso: releasing so 0xb644b568 for session 146, type 0
    knldso: state object 0xa6d0dea0, action 2 memory 0x0
    kadso = 0 0
    knldso: releasing so 0xa6d0dea0
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01304: subordinate process error. Check alert and trace logs
    Any suggestions???

    Output of above query
    ==============================
    CAPTURE_NAME STATUS ERROR_MESSAGE
    STRM05_CAPTURE ABORTED ORA-01304: subordinate process error. Check alert and trace logs
    Alert log.xml
    =======================
    <msg time='2011-08-25T16:58:01.865+05:30' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='localhost.localdomain' host_addr='127.0.0.1' module='STREAMS'
    pid='30921'>
    <txt>Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_30921.trc:
    ORA-01304: subordinate process error. Check alert and trace logs
    </txt>
    </msg>
    The orcl_cp01_30921.trc has the same thing posted in the first message.

  • Downstream Capture using archivelog files

    DB Version 11.2
    I have not been able to find a demo/tutorial of using Streams with Downstream Capture where only archivelogs from the source DB are available (they were moved to a remote system using a DVD/CD/tape). All the examples that I have been able to find use a network connection.
    Does anyone know of such an example?
    Thank you.

    Hi!
    Can you please elaborate your question more clearly?
    Explanation of downstream capture:
    I am assuming we have two databases, one a production database and one a DR (disaster recovery) database.
    We want changes from the production database to be replicated to the DR database.
    For performance reasons, we want no capture process to be running on production.
    To achieve that, we use downstream capture, in which both the capture and apply processes run on the DR database. The question then is how that capture process will capture changes from the production database.
    So we configure a Data Guard-style log transport between these two databases. The production database must be in archivelog mode, whereas the DR database can be in noarchivelog mode. Archived log files from the production database are copied to the DR database automatically over the network, and the capture process then captures changes from those archives.
    So why do you want the archives to be copied onto DVDs?
    regards
    Usman
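
    That said, if the archived logs really do arrive out of band (DVD/tape/manual copy), downstream capture can still be fed by registering each file explicitly. A hedged sketch, where the capture name, source name, SCN, and file path are placeholders:

    -- At the downstream database: create the capture with explicit log
    -- assignment, then register each copied archive log by hand.
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name         => 'STRMADMIN.STREAMS_QUEUE',
        capture_name       => 'OFFLINE_CAPTURE',        -- placeholder
        source_database    => 'PROD.EXAMPLE.COM',       -- placeholder
        use_database_link  => FALSE,
        first_scn          => 1234567,   -- SCN from DBMS_CAPTURE_ADM.BUILD at the source
        logfile_assignment => 'explicit');
    END;
    /

    -- For every archived log copied over from the source:
    ALTER DATABASE REGISTER LOGICAL LOGFILE
      '/u01/shipped_logs/prod_1_107_740348291.arc'      -- placeholder path
      FOR 'OFFLINE_CAPTURE';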

  • Downstream capture

    Hi all!
    We have a non-RAC production database, and I've successfully duplicated it to an ASM RAC using RMAN duplicate. Now my plan is to make the RAC instance our report server and to configure it for replication using downstream capture from our production server.
    Duplicating my prod server takes time (more than 12 hours). How will I make my RAC instance as fresh as my prod? The 12 hours of restoration creates a gap between my prod and my report server. Will the SCN mentioned when setting up downstream capture solve it? As I understand it, downstream capture uses archived logs. Or do I need to roll forward my RAC instance using an incremental backup? I need my report server to be as fresh as my prod because users generate reports every day.
    Please guide me.
    Thanks a lot!

    Hi,
    GoldenGate is a good solution for replication; I have used it. It is more straightforward than Streams.
    You need to dive in and explore, since you won't find much practical documentation on it. This installation guide might help you get started: http://download.oracle.com/docs/cd/E18101_01/doc.1111/e17799.pdf
    Regards,
    Mansoor

  • Downstream Capture and Extended Datatype Support

    Wondering if anyone has tried downstream capture on tables with datatypes that are not natively supported (e.g. MDSYS.SDO_GEOMETRY)?
    I've had a look at this for upstream capture but at my site we want to,
    a) reduce possible impact of the capture processes on the source database
    and
    b) avoid adding triggers to the source tables being replicated, as the application vendor isn't keen on changes to "their" schema. The shadow tables might not be quite such an issue in the source schema, but if they can be avoided then that would be a bonus too.
    My first thoughts are that with downstream capture, EDS would still need to be applied on these tables in the source database to handle these data types.
    If anyone has been down this path and has some insights I'd love to hear them.
    Thanks

    Hi Bernard,
    I have read the generated SQL files (see the README file in Metalink: Extended Datatype Support (EDS) for Streams [ID 556742.1]):
    run1_source_data_1.sql
    run1_source_streams_2.sql
    run1_dest_data_1.sql
    run1_dest_streams_2.sql
    and I now understand how it works.
    But I still have some problems finding the description of undocumented functions like:
    SYS_OP_NII(...),
    SYS_ET_IMAGE_TO_BLOB and
    the hint /*+ relational("A") restrict_all_ref_cons */ in INSERT and UPDATE.
    I have marked it as answered because I waited too long and nobody gave me a comment or answer.
    I thought nobody would take a look at my case.
    But now you are the first one, I'm happy. :-)
    regards
    hqt200475
    Edited by: hqt200475 on Jul 1, 2011 4:53 AM
    Edited by: hqt200475 on Jul 1, 2011 5:18 AM

  • Can you help me about change data captures in 10.2.0.3

    Hi,
    I did some research on Change Data Capture and I am trying to implement it between two databases for two small tables in 10g Release 2. My CDC implementation uses archived logs to replicate data.
    The Change Data Capture mode is asynchronous autolog archive mode. It works correctly (except for DDL). Now I have some questions about a CDC implementation for large tables.
    I have one scenario to implement, but I cannot find exactly how to do it correctly.
    I have one table (named test) that consists of 100,000,000 rows; every day 1,000,000 transactions occur on this table, and I manually archive data older than one year. This table is in the source DB. I want to replicate this table to another staging database by using Change Data Capture.
    My questions about this scenario are the following.
    1. How can I do the initial load? (The test table has 100,000,000 rows in the source DB.)
    2. CDC uses a change table (named test_ch) that consists of extra rows describing the operations for the staging table. But I need the original table (named test) for the application to work in the staging database. How can I move the data from the change table (test_ch) to the original table (test) in the staging database? (I'd prefer not to use a view for the test table.)
    3. How can I remove some data from the change table (test_ch) in the staging DB? Will it cause a problem or not?
    4. Is there a way to replicate DDL operations between the two databases?
    5. How can I find the last applied log on the staging DB in CDC? How can I find the archive gap between the source DB and the staging DB?
    6. How do I maintain the change tables in the staging DB?

    Asynchronous CDC uses Streams to generate the change records. Basically, it is a pre-packaged DML handler that converts the changes into inserts into the change table. You indicated that you want the changes to be written to the original table, which is the default behavior of Streams replication. That is why I recommended that you use Streams directly.
    Yes, it is possible to capture changes from a production redo/archive log at another database. This capability is called "downstream" capture in the Streams manuals. You can configure this capability using the MAINTAIN_* procedures in the DBMS_STREAMS_ADM package (where * is one of TABLES, SCHEMAS, or GLOBAL depending on the granularity of change capture).
    A couple of tips for using these procedures for downstream capture:
    1) Don't forget to set up log shipping to the downstream capture database. Log shipping is set up exactly the same way for Streams as for Data Guard. Instructions can be found in the Streams Replication Administrator's Guide. This configuration has probably already been done as part of your initial CDC setup.
    2) Run the command at the database that will perform the downstream capture. This database can also be the destination (or target) database where the changes are to be applied.
    3) Explicitly define the parameters capture_queue_name and apply_queue_name to be the same queue name. Example:
    capture_queue_name=>'STRMADMIN.STREAMS_QUEUE'
    apply_queue_name=>'STRMADMIN.STREAMS_QUEUE'
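
    Putting tips 2 and 3 together, a call along these lines could be run at the downstream database; every name below is a placeholder, and the network-instantiation option is an assumption about the setup:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
        schema_names                  => 'SCOTT',
        source_directory_object       => NULL,
        destination_directory_object  => NULL,
        source_database               => 'PROD.EXAMPLE.COM',
        destination_database          => 'STAGE.EXAMPLE.COM',
        capture_name                  => 'DOWNSTREAM_CAPTURE',
        capture_queue_name            => 'STRMADMIN.STREAMS_QUEUE',
        apply_name                    => 'DOWNSTREAM_APPLY',
        apply_queue_name              => 'STRMADMIN.STREAMS_QUEUE',  -- same queue as capture (tip 3)
        instantiation                 => DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA_NETWORK);
    END;
    /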

  • Capture streaming does not work after upgrade of the source database.

    Hello,
    We have a complex system with 2 x RAC databases on 10.2.0.4 (source) and 2 x single-instance databases (target) on 11.2.0.2.
    Streams runs only from source to target.
    After upgrading the RAC databases to 11.2.0.2, streaming works from only one RAC to one single-instance database.
    The first RAC streams to the first single-instance database only, and the second RAC to the second only.
    The first source-target pair is streaming fine; the second capture aborts just after starting, with the following errors:
    Streams CAPTURE CP05 for STREAMS started with pid=159, OS id=21174
    Wed Mar 28 10:41:55 2012
    Propagation Sender/Receiver (CCA) for Streams Capture and Apply STREAMS with pid=189, OS id=21176 started.
    Wed Mar 28 10:43:05 2012
    Streams APPLY AP05 for STREAMS started with pid=134, OS id=21696
    Wed Mar 28 10:43:06 2012
    Streams Apply Reader for STREAMS started AS0G with pid=191 OS id=21709
    Wed Mar 28 10:43:06 2012
    Streams Apply Server for STREAMS started AS04 with pid=192 OS id=21711
    Wed Mar 28 10:43:30 2012
    Streams CAPTURE CP05 for STREAMS with pid=159, OS id=21174 is in combined capture and apply mode.
    Capture STREAMS is handling 1 applies.
    Streams downstream capture STREAMS uses downstream_real_time_mine: TRUE
    Starting persistent Logminer Session with sid = 621 for Streams Capture STREAMS
    LOGMINER: Parameters summary for session# = 621
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
    LOGMINER: Memory Size = 10M, Checkpoint interval = 1000M
    LOGMINER: SpillScn 0, ResetLogScn 7287662065313
    LOGMINER: summary for session# = 621
    LOGMINER: StartScn: 12620843936763 (0x0b7a.84eb6bfb)
    LOGMINER: EndScn: 0
    LOGMINER: HighConsumedScn: 12620843936763 (0x0b7a.84eb6bfb)
    LOGMINER: session_flag 0x1
    LOGMINER: LowCkptScn: 12620843920280 (0x0b7a.84eb2b98)
    LOGMINER: HighCkptScn: 12620843920281 (0x0b7a.84eb2b99)
    LOGMINER: SkipScn: 12620843920280 (0x0b7a.84eb2b98)
    Wed Mar 28 10:44:53 2012
    LOGMINER: session#=621 (STREAMS), reader MS00 pid=198 OS id=22578 sid=1148 started
    Wed Mar 28 10:44:53 2012
    LOGMINER: session#=621 (STREAMS), builder MS01 pid=199 OS id=22580 sid=1338 started
    Wed Mar 28 10:44:53 2012
    LOGMINER: session#=621 (STREAMS), preparer MS02 pid=200 OS id=22582 sid=1519 started
    LOGMINER: Begin mining logfile for session 621 thread 1 sequence 196589, /opt/app/oracle/admin/singledb/stdbyarch/singledb_1_196589_569775692.arc
    Errors in file /opt/app/oracle/diag/rdbms/singledb/singledb/trace/singledb_ms00_22578.trc (incident=113693):
    ORA-00600: internal error code, arguments: [krvxruts004], [11.2.0.0.0], [10.2.0.4.0], [], [], [], [], [], [], [], [], []
    Incident details in: /opt/app/oracle/diag/rdbms/singledb/singledb/incident/incdir_113693/singledb_ms00_22578_i113693.trc
    Use ADRCI or Support Workbench to package the incident.
    See Note 411.1 at My Oracle Support for error and packaging details.
    krvxerpt: Errors detected in process 198, role reader.
    We have 5 streaming processes running.
    When we rebuilt one of them, everything worked fine, but the others are too big to rebuild.
    Has anybody seen such behaviour?
    Oracle development is already working on it, but we need a faster solution.
    Thanks
    Jurrai

  • Capture Sticks Hangs on SCN until Out Of Memory

    Newbie using Streams to upgrade from a 10.1 Enterprise Edition database on Red Hat 2 to 10.2 on Red Hat 4. Downstream capture and apply from archived logs. It has worked fine in test. In production I hit something huge, or weird, or whatever, and capture keeps working on one SCN until it runs out of memory after capturing, creating and enqueuing mega messages. It takes a couple of hours to finally croak. This has happened a couple of times now on two separate attempts to replicate the database (i.e. NOT the same SCN each time). Where can I look to see what's causing this? Thanks very much.
    error:
    ++++ Begin KNST dump for Sid: 263 Serial#: 10
    Init Time: 12/04/2008 20:06:12
    ++++Begin KNSTCAP dump for : STRM01_CAPTURE
    Capture#: 1 Logminer_Id: 1 State: CREATING LCR [ 12/05/2008 00:30:09]
    Capture_Message_Number: 0x0002.d82c2bcc [12216708044]
    Capture_Message_Create_Time: 12/03/2008 10:07:28
    Enqueue_Message_Number: 0x0002.d82c2bcc [12216708044]
    Enqueue_Message_Create_Time: 12/03/2008 10:07:28
    Total_Messages_Captured: 549588
    Total_Messages_Created: 1063856 [ 12/05/2008 00:30:09]
    Total_Messages_Enqueued: 202946 [ 12/05/2008 00:30:09]
    Total_Full_Evaluations: 2
    Elapsed_Capture_Time: 26378 Elapsed_Rule_Time: 0
    Elapsed_Enqueue_Time: 5134 Elapsed_Lcr_Time: 1551701
    Elapsed_Redo_Wait_Time: 0 Elapsed_Pause_Time: 0
    ++++End KNSTCAP dump
    ++++ End KNST DUMP

    Hello
    A few questions:
    1. What RDBMS patchset have you applied on top of 10.2?
    2. You said "it runs out of memory"; could you indicate what errors you are running into? Can you paste the complete error message? The section you pasted into the thread is only a part of the capture trace and it does not indicate any error.
    3. How are you verifying that the capture process kept mining one particular SCN?
    4. Let me know what value you have set for the aq_tm_processes parameter.
    Thanks,
    Rijesh
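
    For question 4 (and the spill symptom), these quick checks may help; the commented ALTER SYSTEM line mirrors the change mentioned in the original post:

    -- Current setting of the AQ time-manager processes parameter:
    SHOW PARAMETER aq_tm_processes

    -- Buffered-queue spill per queue (spill_msgs > 0 means messages went to disk):
    SELECT queue_schema, queue_name, num_msgs, spill_msgs
      FROM v$buffered_queues;

    -- The change the original poster made on the Windows box:
    -- ALTER SYSTEM SET aq_tm_processes = 1;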
