Can a Streams "capture process" skip an archivelog?

DB: 10.2.0.5, on Windows 2003 SP2 32-bits
A Streams capture process in our database is stuck reading one of the archive log files; its status in the v$streams_capture view is 'CREATING LCR' and it is not moving at all.
I think the archivelog is corrupted and guess that skipping the log might help??
Any idea?

Find the transaction identifier in the trace file; for example, in this trace the transaction is '0x000a.008.00019347'.
Convert it from hex to decimal; in this example '0x000a.008.00019347' becomes '10.8.103239'.
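If you would rather let the database do the conversion, here is a quick sketch (purely a convenience; the values are the ones from the trace below) using TO_NUMBER with a hexadecimal format mask on each component of the xid:

-- Convert xid 0x000a.008.00019347 from hex to decimal, component by component
SELECT TO_NUMBER('000A', 'XXXX')         AS undo_seg,
       TO_NUMBER('008', 'XXX')           AS slot,
       TO_NUMBER('00019347', 'XXXXXXXX') AS seq
FROM   dual;
-- Returns 10, 8, 103239  =>  '10.8.103239'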
Example of trace file:
++++++++++++ Dumping Current LogMiner Lcr: +++++++++++++++
++ LCR Dump Begin: 0x000007FF3F75D8A0 - cannot_support
op: 255, Original op: 255, baseobjn: 74480, objn: 74480, objv: 1
DF: 0x00000003, DF2: 0x00000010, MF: 0x08240000, MF2: 0x00000000
PF: 0x00000000, PF2: 0x00000000
MergeFlag: 0x03, FilterFlag: 0x01
Id: 1, iotPrimaryKeyCount: 0, numChgRec: 0
NumCrSpilled: 0
RedoThread#: 1, rba: 0x000604.00014fd2.014c
scn: 0x0000.36a4b03c, (scn: 0x0000.36a4b03c, scn_sqn: 1, lcr_sqn: 0)xid: *0x000a.008.00019347*, parentxid: 0x000a.008.00019347, proxyxid: 0x0000.000.00000000, unsupportedReasonCode: 0,
ncol: 5 newcount: 0, oldcount: 0
LUBA: 0x3.c004eb.8.8.122f2
Filter Flag: UNDECIDED
++ KRVXOA Dump Begin:
Object Number: 74480 BaseObjNum: 74480 BaseObjVersion: 1
Then stop the capture process and execute the following procedure:
exec dbms_capture_adm.set_parameter('your_capture_process_name','_ignore_transaction','your_transaction_id_in_decimal_notation');
Now you can restart the capture process and it will ignore the transaction.
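Putting it together, a minimal sketch of the whole sequence (the capture name is a placeholder; use your own capture process name and the decimal xid from your trace):

-- Stop the capture process
exec dbms_capture_adm.stop_capture('YOUR_CAPTURE_NAME');

-- Tell it to skip the problem transaction (decimal notation)
exec dbms_capture_adm.set_parameter('YOUR_CAPTURE_NAME', '_ignore_transaction', '10.8.103239');

-- Restart; the capture skips the transaction and moves on
exec dbms_capture_adm.start_capture('YOUR_CAPTURE_NAME');

Once the capture has moved past the problem log, it is usually wise to clear the parameter again (calling set_parameter with a NULL value resets it to its default) so that no other transaction is silently skipped.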

Similar Messages

  • ORA-26744: STREAMS capture process "STRING" does not support "STRING"

    Hi All,
    I have configured Oracle Streams at schema level using Note "How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]".
    All the changes were being reflected perfectly and it was running smoothly, but today I suddenly hit the error below and the capture aborted:
    ORA-26744: STREAMS capture process "STREAM_CAPTURE" does not support "AMSATMS_PAWS"."B_SEARCH_PREFERENCE" because of the following reason:
    ORA-26783: Column data type not supported
    A couple of suggestions on the forum are to add a negative rule set. Please suggest how I add a negative rule set, and if this table is added to the negative rule set, how will changes to it be reflected in the target database?
    Please help me...
    Thanks

    I have no idea why it treats your XMLType stored as CLOB like binary XMLType. From the doc, we read:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/ap_restrictions.htm#BABGIFEA
    Unsupported Data Types for Capture Processes
    A capture process does not capture the results of DML changes to columns of the following data types:
        *       SecureFile CLOB, NCLOB, and BLOB
        *      BFILE
        *      ROWID
        *      User-defined types (including object types, REFs, varrays, and nested tables)
        *      XMLType stored object relationally or as binary XML                   <----------------------------
        *      The following Oracle-supplied types: Any types, URI types, spatial types, and media types
    A capture process raises an error if it tries to create a row LCR for a DML change to a column of
    an unsupported data type. When a capture process raises an error, it writes the LCR that caused
    the error into its trace file, raises an ORA-26744 error, and becomes disabled.
    To support this data type, see:
    NOTE:556742.1 - Extended Datatype Support (EDS) for Streams
    To exclude the table, see:
    NOTE:239623.1 - How To Exclude A Table From Schema Capture And Replication When Using Schema Level Streams Replication
    It sounds like a specific patch is needed. You did not state which version of Oracle you are running.
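    For the negative rule set question above, here is a minimal sketch, assuming your capture process is STREAM_CAPTURE and its queue is strmadmin.streams_queue (adjust to your setup). Calling ADD_TABLE_RULES with inclusion_rule => false puts the rule in the capture's negative rule set, so changes to that table are discarded at capture time and no longer replicated to the target; Note 239623.1 describes the same approach in detail:
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name     => 'AMSATMS_PAWS.B_SEARCH_PREFERENCE',
        streams_type   => 'capture',
        streams_name   => 'STREAM_CAPTURE',
        queue_name     => 'strmadmin.streams_queue',
        include_dml    => true,
        include_ddl    => true,
        inclusion_rule => false);  -- false = add the rule to the NEGATIVE rule set
    END;
    /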

  • Can I configure capture process of streams on physical standby

    We have a high-OLTP system with a Data Guard setup that includes both a physical and a logical standby. We are planning to build a data warehouse and set up Streams to feed data into it. Instead of taxing the primary, I was wondering if I could use the physical standby as the source for my Streams (basically, configure the capture process on the physical standby).
    Appreciate your help in advance!
    Thanks

    Thanks for the reply, Tekicora.
    This means that on the primary I will have another destination that I send the archives to (in addition to the physical standby), and that destination will be my source database for Streams, where I can configure the capture process. If this understanding is right, then I have the following questions:
    1) Can I use a cascaded standby to relieve my primary from having another log destination, and use that database as the source?
    2) Do you know if a physical standby can be used as the source in 11g? We are planning to move to 11g soon.
    Thanks
    Smitha

  • Capture process: Can it write to a queue table in another database?

    The capture process reads the archived redo logs. It then writes the appropriate changes into the queue table in the same database.
    Can the Capture process read the archived redo logs and write to a queue table in another database?
    HP-UX
    Oracle 9.2.0.5

    What you are asking is not possible directly in 9i, i.e. the capture process cannot read the logs and write to a queue somewhere else.
    If the other database is also Oracle with platform and version compatibility, then you can use the 10g downstream capture feature to accomplish this.
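    For reference, a rough sketch of what creating a 10g downstream capture looks like (run on the downstream database; the queue, capture and source names here are illustrative, and the redo/archivelog transport from the source must be configured separately as described in the Streams documentation):
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name        => 'strmadmin.streams_queue',
        capture_name      => 'downstream_capture',
        source_database   => 'SOURCE_DB.WORLD',
        use_database_link => true);  -- dictionary information is fetched from the source over a db link
    END;
    /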

  • Capture process issue...archive log missing!!!!!

    Hi,
    The Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and does not proceed beyond them to capture updates made on the table.
    We have accidentally lost some archivelogs and have no backups of them.
    Now I am going to recreate the capture process.
    How can I start the capture process from a new SCN?
    And what is the better way to remove archive log files from the central server, given that their SCNs are still used by capture processes?
    Thanks,
    Faziarain

    When using dbms_streams_adm to add a capture, also perform a dbms_capture_adm.build. You will then see 'YES' in the dictionary_begin column of v$archived_log, which means that the first_change# of that archivelog is the first suitable SCN for starting the capture.
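    A minimal sketch of that sequence (names are generic):
    SET SERVEROUTPUT ON
    DECLARE
      scn NUMBER;
    BEGIN
      -- dump a fresh copy of the data dictionary into the redo stream
      DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
      DBMS_OUTPUT.PUT_LINE('First suitable SCN: ' || scn);
    END;
    /

    -- the archivelog holding that dictionary build is flagged here
    SELECT name, sequence#, first_change#
    FROM   v$archived_log
    WHERE  dictionary_begin = 'YES';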
    RMAN is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use RMAN to purge the archives, then you need to check the minimum required SCN in your system with a script and act accordingly.
    Since 10g I recommend using RMAN, but nevertheless here is the script I wrote in 9i, back in the days when RMAN would eat the archives needed by Streams with great appetite.
    #!/usr/bin/ksh
    # program : watch_arc.sh
    # purpose : check your archive directory and if actual percentage is > MAX_PERC
    #           then undertake the action coded by -a param
    # Author : Bernard Polarski
    # Date   :  01-08-2000
    #           12-09-2005      : added option -s MAX_SIZE
    #           20-11-2005      : added option -f to check if an archive is applied on data guard site before deleting it
    #           20-12-2005      : added option -z to check if an archive is still needed by logminer in a streams operation
    # set -xv
    #--------------------------- default values if not defined --------------
    # put here default values if you don't want to code then at run time
    MAX_PERC=85
    ARC_DIR=
    ACTION=
    LOG=/tmp/watch_arch.log
    EXT_ARC=
    PART=2
    #------------------------- Function section -----------------------------
    get_perc_occup() {
      cd $ARC_DIR
      if [ $MAX_SIZE -gt 0 ];then
           # size is given in mb, we calculate all in K
           TOTAL_DISK=`expr $MAX_SIZE \* 1024`
           USED=`du -ks . | tail -1| awk '{print $1}'`    # in Kb!
      else
        USED=`df -k . | tail -1| awk '{print $3}'`    # in Kb!
        if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
               TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
        elif [ `uname -s` = AIX ] ;then
               TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
        elif [ `uname -s` = ReliantUNIX-N ] ;then
               TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
        else
                 # works on Sun
                 TOTAL_DISK=`df -b . | sed  '/avail/d' | awk '{print $2}'`
        fi
      fi
      USED100=`expr $USED \* 100`
      USG_PERC=`expr $USED100 / $TOTAL_DISK`
      echo $USG_PERC
    }
    #------------------------ Main process ------------------------------------------
    usage() {
        cat <<EOF
                  Usage : watch_arc.sh -h
                          watch_arc.sh  -p <MAX_PERC> -e <EXTENTION> -l -d -m <TARGET_DIR> -r <PART>
                                        -t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
                                        -s <MAX_SIZE (meg)> -i <SID> -g -f
                  Note :
                           -c compress file after move using either compress or gzip (if available)
                              if -c is given without -m then file will be compressed in ARCHIVE DIR
                           -d Delete selected files
                           -e Extention of files to be processed
                           -f Check if log has been applied, required -i <sid> and -g if v8
                       -g Version 8 (use svrmgrl instead of sqlplus)
                           -i Oracle SID
                           -l List file that will be processing using -d or -m
                           -h help
                           -m move file to TARGET_DIR
                       -p Max percentage above which action is triggered.
                              Actions are of type -l, -d  or -m
                           -t ARCHIVE_DIR
                           -s Perform action if size of target dir is bigger than MAX_SIZE (meg)
                           -v report action performed in LOGFILE
                           -r Part of files that will be affected by action :
                                2=half, 3=a third, 4=a quarter .... [ default=2 ]
                           -z Check if log is still needed by logminer (used in streams),
                                    it requires -i <sid> and also -g for Oracle 8i
              This program lists, deletes or moves half of the files whose extension is given [default 'arc'].
              It checks the size of the archive directory and, if the percentage occupancy is above the given limit,
              it performs the action on the older half of the files.
            How to use this prg :
                    run this file from the crontab, say, each hour.
         example
     1) Delete archives sharing a common arch disk: when usage reaches 85% of 2500 MB, delete half of the files
     whose extension is 'arc', using the default affected part (default is -r 2)
         0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
     2) Delete archives sharing a common disk with other DBs in /archive: act at 90% of 140 GB, deleting
     a quarter of all files (-r 4) whose extension is 'dbf', but first connect as sysdba to the POLDEV db (-i) to check
     that they are applied (-f is a Data Guard option)
         watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
     3) Delete archives of DB POLDEV when usage reaches 75%, affecting a third of the files, but connect to the DB to check
     that LogMiner does not still need each archive (-z). This is useful in 9iR2 when using RMAN, since RMAN's 'delete input'
     does not take LogMiner's needs into account.
         watch_arc.sh -e arc -t /archive/standby/CITSPRD  -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
    EOF
    }
    #------------------------- Function section -----------------------------
    if [ "x-$1" = "x-" ];then
          usage
          exit
    fi
    MAX_SIZE=-1  # disable this feature if it is not specifically selected
    while getopts  c:e:p:m:r:s:i:t:v:dhlfgz ARG
      do
        case $ARG in
           e ) EXT_ARC=$OPTARG ;;
           f ) CHECK_APPLIED=YES ;;
           g ) VERSION8=TRUE;;
           i ) ORACLE_SID=$OPTARG;;
           h ) usage
               exit ;;
           c ) COMPRESS_PRG=$OPTARG ;;
           p ) MAX_PERC=$OPTARG ;;
           d ) ACTION=delete ;;
           l ) ACTION=list ;;
           m ) ACTION=move
               TARGET_DIR=$OPTARG
               if [ ! -d $TARGET_DIR ] ;then
                   echo "Dir $TARGET_DIR does not exits"
                   exit
               fi;;
           r)  PART=$OPTARG ;;
           s)  MAX_SIZE=$OPTARG ;;
           t)  ARC_DIR=$OPTARG ;;
           v)  VERBOSE=TRUE
               LOG=$OPTARG
               if [ ! -f $LOG ];then
                   > $LOG
               fi ;;
           z)  LOGMINER=TRUE;;
        esac
    done
    if [ "x-$ARC_DIR" = "x-" ];then
         echo "NO ARC_DIR : aborting"
         exit
    fi
    if [ "x-$EXT_ARC" = "x-" ];then
         echo "NO EXT_ARC : aborting"
         exit
    fi
    if [ "x-$ACTION" = "x-" ];then
         echo "NO ACTION : aborting"
         exit
    fi
    if [ ! "x-$COMPRESS_PRG" = "x-" ];then
       if [ ! "x-$ACTION" =  "x-move" ];then
             ACTION=compress
       fi
    fi
    if [ "$CHECK_APPLIED" = "YES" ];then
       if [ -n "$ORACLE_SID" ];then
             export PATH=$PATH:/usr/local/bin
             export ORAENV_ASK=NO
             export ORACLE_SID=$ORACLE_SID
             . /usr/local/bin/oraenv
       fi
       if [ "$VERSION8" = "TRUE" ];then
          ret=`svrmgrl <<EOF
    connect internal
    select max(sequence#) from v\\$log_history ;
    EOF`
    LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
       else
        ret=`sqlplus -s '/ as sysdba' <<EOF
    set pagesize 0 head off pause off
    select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
    EOF`
       LAST_APPLIED=`echo $ret | awk '{print $1}'`
       fi
    elif [ "$LOGMINER" = "TRUE" ];then
       if [ -n "$ORACLE_SID" ];then
             export PATH=$PATH:/usr/local/bin
             export ORAENV_ASK=NO
             export ORACLE_SID=$ORACLE_SID
             . /usr/local/bin/oraenv
       fi
        var=`sqlplus -s '/ as sysdba' <<EOF
    set pagesize 0 head off pause off serveroutput on
    DECLARE
    hScn number := 0;
    lScn number := 0;
    sScn number;
    ascn number;
    alog varchar2(1000);
    begin
      select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
      DBMS_OUTPUT.ENABLE(2000);
      for cr in (select distinct(a.ckpt_scn)
                 from system.logmnr_restart_ckpt\\$ a
                 where a.ckpt_scn <= ascn and a.valid = 1
                   and exists (select * from system.logmnr_log\\$ l
                       where a.ckpt_scn between l.first_change# and l.next_change#)
                  order by a.ckpt_scn desc)
      loop
        if (hScn = 0) then
           hScn := cr.ckpt_scn;
        else
           lScn := cr.ckpt_scn;
           exit;
        end if;
      end loop;
      if lScn = 0 then
        lScn := sScn;
      end if;
       select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
      dbms_output.put_line(alog);
    end;
    EOF`
      # if there is no archive that must be kept, instead of a number we just get "PL/SQL procedure successfully completed"
      ret=`echo $var | awk '{print $1}'`
      if [ ! "$ret" = "PL/SQL" ];then
         LAST_APPLIED=$ret
      else
         unset LOGMINER
      fi
    fi
    PERC_NOW=`get_perc_occup`
    if [ $PERC_NOW -gt $MAX_PERC ];then
         cd $ARC_DIR
         cpt=`ls -tr *.$EXT_ARC | wc -w`
         if [ ! "x-$cpt" = "x-" ];then
              MID=`expr $cpt / $PART`
              cpt=0
              ls -tr *.$EXT_ARC |while read ARC
                  do
                     cpt=`expr $cpt + 1`
                     if [ $cpt -gt $MID ];then
                          break
                     fi
                     if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
                        VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
                        if [ $VAR -gt $LAST_APPLIED ];then
                             continue
                        fi
                     fi
                     case $ACTION in
                          'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
                                     if [ "x-$VERBOSE" = "x-TRUE" ];then
                                           echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
                                     fi ;;
                          'delete' ) rm $ARC_DIR/$ARC
                                     if [ "x-$VERBOSE" = "x-TRUE" ];then
                                           echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
                                     fi ;;
                          'list'   )   ls -l $ARC_DIR/$ARC ;;
                          'move'   ) mv  $ARC_DIR/$ARC $TARGET_DIR
                                     if [ ! "x-$COMPRESS_PRG" = "x-" ];then
                                           $COMPRESS_PRG $TARGET_DIR/$ARC
                                           if [ "x-$VERBOSE" = "x-TRUE" ];then
                                                 echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
                                           fi
                                     else
                                           if [ "x-$VERBOSE" = "x-TRUE" ];then
                                                 echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
                                           fi
                                     fi ;;
                      esac
              done
          else
              echo "Warning : The filesystem is not full due to archive logs !"
              exit
          fi
    elif [ "x-$VERBOSE" = "x-TRUE" ];then
         echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
    fi

  • Streams capture aborted in 9.2

    Good Day ,
    Streams Capture process was aborted.
    This is the trace file
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Windows 2000 Version 5.2 , CPU type 586
    Instance name: rmi
    Redo thread mounted by this instance: 1
    Oracle process number: 16
    Windows thread id: 5396, image: ORACLE.EXE (CP01)
    *** SESSION ID:(28.841) 2009-04-09 09:29:45.000
    knlcfuusrnm: = STRMADMIN
    knlqeqi()
    knlcReadLogDictSCN: Got scn from logmnr_dictstate$:0x0001.4bda483d
    knlcInitCtx:
    captured_scn, applied_scn, logminer_start, enqueue_filter
    0x0000.00051828 0x0001.4be1945a 0x0001.4be1945a 0x0001.4be1945a
    last_enqueued, last_acked
    0x0000.00000000 0x0001.4be1945a
    knlrgdpctx
    DefProc streams$_defproc# 555
    DefProc: (defproc col,segcol num)=(1,1)
    DefProc: (defproc col,segcol num)=(2,2)
    DefProc: (defproc col,segcol num)=(3,3)
    DefProc: (defproc col,segcol num)=(4,4)
    DefProc: (defproc col,segcol num)=(5,5)
    DefProc: (defproc col,segcol num)=(6,6)
    DefProc: (defproc col,segcol num)=(7,7)
    DefProc: (defproc col,segcol num)=(8,8)
    DefProc: (defproc col,segcol num)=(9,9)
    DefProc: (defproc col,segcol num)=(10,10)
    DefProc: (defproc col,segcol num)=(11,11)
    DefProc: (defproc col,segcol num)=(12,12)
    DefProc: (defproc col,segcol num)=(13,13)
    DefProc: (defproc col,segcol num)=(14,14)
    DefProc: (defproc col,segcol num)=(15,15)
    knlcgetobjcb: obj# 4
    knlcgetobjcb: obj# 16
    knlcgetobjcb: obj# 18
    knlcgetobjcb: obj# 19
    knlcgetobjcb: obj# 20
    knlcgetobjcb: obj# 21
    knlcgetobjcb: obj# 22
    knlcgetobjcb: obj# 31
    knlcgetobjcb: obj# 32
    knlcgetobjcb: obj# 156
    knlcgetobjcb: obj# 230
    knlcgetobjcb: obj# 234
    knlcgetobjcb: obj# 240
    knlcgetobjcb: obj# 245
    knlcgetobjcb: obj# 249
    knlcgetobjcb: obj# 253
    knlcgetobjcb: obj# 258
    knlcgetobjcb: obj# 283
    knlcgetobjcb: obj# 288
    knlcgetobjcb: obj# 298
    knlcgetobjcb: obj# 306
    capturing objects for queue STRMADMIN.STREAMS_QUEUE 240 31 234 16 19 230 4 283 21 20 258 298 249 22 306 32 156 253 245 288 18
    Starting persistent Logminer Session : 1
    krvxats retval : 0
    krvxssp retval : 0
    krvxsda retval : 0
    krvxcfi retval : 0
    #2: krvxcfi retval : 0
    About to call krvxpsr : startscn: 0x0001.4be1945a
    knluSetStatus()+{
    knlcapUpdate()+{
    Updated streams$_capture_process
    finished knlcapUpdate()+ }
    finished knluSetStatus()+ }
    error 308 in STREAMS process
    ORA-00308: cannot open archived log 'C:\ORACLE\ORA92\RDBMS\1_893.DBF'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-00308: cannot open archived log 'C:\ORACLE\ORA92\RDBMS\1_893.DBF'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    The archived logs are in the correct location. Why is it using the path C:\ORACLE\ORA92\RDBMS\1_893.DBF when the archived log file 1_893.dbf is available in the archive log directory Z:\archivedlog\.. ?
    I also got an error in the propagation process:
    ORA-12500: TNS:listener failed to start a dedicated server process

    Hello,
    The Streams capture reader slave reads the control file and picks the first archive log destination it sees. You cannot force Streams to read the archived logs from one specific archive destination; it just picks the first one it finds, on the assumption that all archived logs in all destinations are mirrored copies.
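    To see which destinations the reader slave has to choose from (and in which order), a simple check like this is enough:
    SELECT dest_id, destination, status
    FROM   v$archive_dest
    WHERE  status <> 'INACTIVE'
    ORDER  BY dest_id;
    If the first valid destination still points at the old local directory (here C:\ORACLE\ORA92\RDBMS), either make sure the files are present there or clean up the archive destination settings so that only Z:\archivedlog is listed.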
    As already stated in the previous post, 9.2.0.1 is just the base release; you would have to upgrade to at least 9.2.0.3 (the first stable release for Streams) or 9.2.0.4 (for the few platforms where 9.2.0.3 does not exist). If you are staying on 9iR2, I would suggest upgrading to 9.2.0.8, the terminal patchset for 9iR2, where the majority of the bugs are fixed. On top of that you need to apply all the recommended one-off patches (which are included in the RDBMS bundle patches for Windows) as per Note:437838.1.
    Hope this clarifies your questions.
    Thanks,
    Rijesh

  • The (stopped) Capture process & RMAN

    Hi,
    We have a working 1-table bi-directional replication with Oracle 10.2.0.4 on SPARC/Solaris.
    Every night, RMAN backs up the database and collects/removes the archive logs (delete all input).
    My understanding from the Oracle Streams Concepts and Administration guide is that RMAN will not remove an archived log needed by a capture process (I think because of the LogMiner session).
    Fine.
    But now, if I stop the capture process for a long time (more than a day), whatever the reason,
    it's not clear what the behaviour is...
    I'm afraid that:
    - RMAN will remove the archived logs (since there is no longer a LogMiner session, because of the stopped capture process)
    - When I restart the capture process, it will try to start from the last known SCN and the (new) LogMiner session will not find the redo logs.
    If that's correct, is it possible to restart the capture process with an updated SCN so that I do not run into this problem?
    How do I find this SCN?
    (In the case of a long interruption, we have a specific script which synchronizes the table. It would be run first, before restarting the capture process.)
    Thanks for your answers.
    JD

    RMAN backup in 10g is Streams-aware. It will not delete any logs that contain the required_checkpoint_scn and above. This is true only if the capture process is running in the same database (local capture) where the RMAN backup is running.
    If you are using downstream capture, then RMAN is not aware of what logs that streams needs and may delete those logs. One additional reason why logs may be deleted is due to space pressure in flash recovery area.
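    You can check that boundary yourself before letting RMAN loose; a quick sketch:
    -- lowest SCN the capture still needs for a restart
    SELECT capture_name, required_checkpoint_scn, status
    FROM   dba_capture;

    -- archived logs at or above that SCN must not be deleted
    SELECT name, sequence#, first_change#, next_change#
    FROM   v$archived_log
    WHERE  next_change# > (SELECT MIN(required_checkpoint_scn) FROM dba_capture);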
    Please take a look at the following documentation:
    Oracle® Streams Concepts and Administration
    10g Release 2 (10.2)
    Part Number B14229-04
    CHAPTER 2 - Streams Capture Process
    Section - RMAN and Archived Redo Log Files Required by a Capture Process

  • Limit the Capture process to just INSERTS

    Hi,
    Source: 10.2.0.3
    Downstream Capture DB: 10.2.0.3
    Destination DB: 11.1.0.7
    Is it possible to limit the Streams Capture process to only include INSERTS? We are only interested in INSERTS into the table and are not concerned with capturing any updates or deletes that are performed against the table.
    When configuring the capture and apply I've set:
    include_dml => true,
    Is it possible to have the capture and apply process run at a finer granularity and just capture and apply the INSERTs that have been performed against the source database tables?
    Thanks in advance.

    Go to Morgan's Library at www.psoug.org and look up DBMS_STREAMS_ADM.
    Scroll down to where the demo shows "and_condition => ':lcr.get_command_type() != ''DELETE''');"
    That should point you in the right direction.
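    Based on that demo, here is a hedged sketch of what the capture-side rule could look like (the table, capture and queue names are illustrative; the and_condition keeps only INSERTs):
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name     => 'SCOTT.MY_TABLE',
        streams_type   => 'capture',
        streams_name   => 'downstream_capture',
        queue_name     => 'strmadmin.streams_queue',
        include_dml    => true,
        include_ddl    => false,
        inclusion_rule => true,
        and_condition  => ':lcr.get_command_type() = ''INSERT''');
    END;
    /
    The same and_condition can be put on the apply-side rules as well; updates and deletes are then filtered out at capture time and never travel to the destination.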

  • Rman-08137 can't delete archivelog because the capture process need it

    When I use the RMAN utility to delete old archivelogs on the server, it shows: RMAN-08137 can't delete archivelog because the capture process needs it. How can I resolve the problem?

    It is likely that the "extract" process still requires those archive logs, as it is monitoring transactions that have not yet been "captured" and written out to a GoldenGate trail.
    Consider the case of doing the following: ggsci> add extract foo, tranlog, begin now
    After pressing "return" on that "add extract" command, any new transactions will be monitored by GoldenGate. Even if you never start extract foo, the GoldenGate + rman integration will keep those logs around. Note that this GG+rman integration is a relatively new feature, as of GG 11.1.1.1 => if "add extract foo" prints out "extract is registered", then you have this functionality.
    Another common "problem" is deleting "extract foo", but forgetting to "unregister" it. For example, to properly "delete" a registered "extract", one has to run "dblogin" first:
    ggsci> dblogin userid <userid> password <password>
    ggsci> delete extract foo
    However, if you just do the following, the extract is deleted, but not unregistered. Only a warning is printed.
    ggsci> delete extract foo
    <warning: to unregister, run the command "unregister...">
    So then one just has to follow the instructions in the warning:
    ggsci> dblogin ...
    ggsci> unregister extract foo logretention
    But what if you didn't know the name of the old extracts, or were not even aware if there were any existing registered extracts? You can run the following to find out if any exist:
    sqlplus> select count(*) from dba_capture;
    The actual extract name is not exactly available, but it can be inferred:
    sqlplus> select capture_name, capture_user from dba_capture;
    CAPTURE_NAME CAPTURE_USER
    ================ ==================
    OGG$_EORADF4026B1 GGS
    In the above case, my actual "capture" process was called "eora". All OGG processes will be prefixed by OGG in the "capture_name" field.
    Btw, you can disable this "logretention" feature by adding in a tranlog option in the param file,
    TRANLOGOPTIONS LOGRETENTION DISABLED
    Or just manually "unregister" the extract. (Not doing a "dblogin" before "add extract" should also work in theory... but it doesn't. The extract is still registered after startup. Not sure if that's a bug or a feature.)
    Cheers,
    -Michael

  • Can we capture changes made to the objects other than tables using streams

    Hello All,
    I have set up schema-level Streams replication using a local capture process. I can capture all the DML changes on tables but have some issues capturing DDL. Even though Streams is used for sharing data between databases or within a database, I was wondering if we can replicate changes made to objects like views, procedures, functions and triggers at the source database. I am not able to replicate changes made to views in my setup.
    Also, when I run "select source_database, source_object_type, instantiation_scn from dba_apply_instantiated_objects", under the column 'source_object_type' I just see TABLE in all the rows selected.
    Thanks,
    Sunny boy

    Hello
    This could be a problem with the rules configured for your capture, propagation or apply, or a problem with your instantiation.
    You can replicate functions, views, procedures, triggers etc. using Streams schema-level replication or by configuring the rules.
    Please note that objects like functions, views, procedures and triggers will not appear in the DBA_APPLY_INSTANTIATED_OBJECTS view. Because you do a schema-level instantiation, only the INSTANTIATION_SCN in DBA_APPLY_INSTANTIATED_SCHEMAS accounts for these objects. Tables, on the other hand, get recursively instantiated, so you do see an entry for them in DBA_APPLY_INSTANTIATED_OBJECTS.
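    In other words, make sure the capture (and apply) rules include DDL. A minimal sketch for the capture side of a schema-level setup (the schema, capture and queue names are illustrative):
    BEGIN
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name    => 'TEST',
        streams_type   => 'capture',
        streams_name   => 'streams_capture',
        queue_name     => 'strmadmin.streams_queue',
        include_dml    => true,
        include_ddl    => true,  -- needed to capture CREATE/ALTER of views, procedures, triggers, ...
        inclusion_rule => true);
    END;
    /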
    It works fine for me. Please see the below from my database (database is 10.2.0.3):
    on capture site_
    SQL> connect strmadmin/strmadmin
    Connected.
    SQL> select capture_name,rule_set_name,status from dba_capture;
    CAPTURE_NAME RULE_SET_NAME STATUS
    STREAMS_CAPTURE RULESET$_33 ENABLED
    SQL> select rule_name from dba_rule_set_rules where rule_set_name='RULESET$_33';
    RULE_NAME
    TEST41
    TEST40
    SQL> set long 100000
    SQL> select rule_condition from dba_rules where rule_name='TEST41';
    RULE_CONDITION
    ((:ddl.get_object_owner() = 'TEST' or :ddl.get_base_table_owner() = 'TEST') and
    :ddl.is_null_tag() = 'Y' and :ddl.get_source_database_name() = 'SOURCE.WORLD')
    SQL> select rule_condition from dba_rules where rule_name='TEST40';
    RULE_CONDITION
    ((:dml.get_object_owner() = 'TEST') and :dml.is_null_tag() = 'Y' and :dml.get_source_database_name() = 'SOURCE.WORLD')
    SQL> select * from global_name;
    GLOBAL_NAME
    SOURCE.WORLD
    SQL> conn test/test
    Connected.
    SQL> select object_name,object_type,status from user_objects;
    OBJECT_NAME OBJECT_TYPE STATUS
    TEST_NEW_TABLE TABLE VALID
    TEST_VIEW VIEW VALID
    PRC1 PROCEDURE VALID
    TRG1 TRIGGER VALID
    FUN1 FUNCTION VALID
    5 rows selected.
    on apply site_
    SQL> connect strmadmin/strmadmin
    Connected.
    SQL> col SOURCE_DATABASE for a22
    SQL> select source_database,source_object_owner,source_object_name,source_object_type,instantiation_scn
    2 from dba_apply_instantiated_objects;
    SOURCE_DATABASE SOURCE_OBJ SOURCE_OBJECT_NAME SOURCE_OBJE INSTANTIATION_SCN
    SOURCE.WORLD TEST TEST_NEW_TABLE TABLE 9886497863438
    SQL> select SOURCE_DATABASE,SOURCE_SCHEMA,INSTANTIATION_SCN from
    2 dba_apply_instantiated_schemas;
    SOURCE_DATABASE SOURCE_SCHEMA INSTANTIATION_SCN
    SOURCE.WORLD TEST 9886497863438
    SQL> select * from global_name;
    GLOBAL_NAME
    TARGET.WORLD
    SQL> conn test/test
    Connected.
    SQL> select object_name,object_type,status from user_objects;
    OBJECT_NAME OBJECT_TYPE STATUS
    TEST_VIEW VIEW VALID
    PRC1 PROCEDURE VALID
    TRG1 TRIGGER VALID
    FUN1 FUNCTION VALID
    TEST_NEW_TABLE TABLE VALID
    5 rows selected.
    These functions, views, procedures and triggers were created on the source and were replicated automatically to the target site TARGET.WORLD. Note that none of these objects appear in the DBA_APPLY_INSTANTIATED_OBJECTS view.
    I have used the rules given above for capture. For propagation I don't have a rule set at all, and for apply I use the same rules as the capture rules.
    Please verify your environment and let me know if you need further help.
    Thanks,
    Rijesh

  • Develop streaming data processing applications in C# with Stream Computing Platform and Storm in HDInsight. Can this be done with Visual Studio Community sign up?

    Hello,
    I am a student and love Visual Studio Community 2013 for implementing some of my research projects. I am currently working on a project that involves streaming data analysis. I found this article (http://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-storm-scpdotnet-csharp-develop-streaming-data-processing-application/) but do not have an MSDN subscription (I cannot afford it), as required in the article. Can this be done somehow with the Visual Studio Community 2013 sign-up?
    Thank you all in advance for your time and for your help.
    J.K.W

    Hi,
    I just confirmed that the key point with Visual Studio Community is that, although it is free like Express, it does not have the limitations that Visual Studio Express had. So, to answer your question: yes, you can do all your development work as a student with the VS Community 2013 sign-up. You can also refer to this blog for more details -
    http://blogs.msdn.com/b/quick_thoughts/archive/2014/11/12/visual-studio-community-2013-free.aspx
    Regards,
    DebarchanS
    DebarchanS - MSFT

  • Streams capture waiting for dictionary redo log

    Hi ,
    The Streams capture is waiting for the dictionary redo log.
    As per the alert logs, LogMiner is able to register the logfile:
    RFS LogMiner: Registered logfile [TEMP_114383_1_628824420.arc] to LogMiner session id [142]
    Fri Feb 13 00:00:39 2009
    Capture Name: C_REF, Capture Process Number: C001, Session ID: 675, Session Serial Number: 2707, State: WAITING FOR DICTIONARY REDO, Redo Entries Scanned: 0, Total LCRs Enqueued: 0
    Capture Process Name: C_REF, Capture Process Queue: CA_REF, Positive Rule Set: RULESET$_80, Negative Rule Set: (null), Capture Process Status: ENABLED
    Capture Name: C_REF, Capture Process Queue: CA_REF, START_SCN: 8586133398117, Status: ENABLED, Status Changed: 12-Feb-2009, CAPTURED_SCN: 8586133398117, APPLIED_SCN: 8586133398117, USE: YES, FIRST_SCN: 8586133398117
    CONSUMER_NAM SEQUENCE# FIRST_SCN NEXT_SCN TO_DATE(FIR TO_DATE(NEX NAME
    C_REF 114378 8586133399062 8586162685837 12-Feb-2009 12-Feb-2009 /TEMP_114378_1_628824420.arc
    C_REF 114379 8586162685837 8586163112496 12-Feb-2009 12-Feb-2009 /TEMP_114379_1_628824420.arc
    C_REF 114380 8586163112496 8586163984886 12-Feb-2009 12-Feb-2009 /TEMP_114380_1_628824420.arc
    C_REF 114381 8586163984886 8586163986301 12-Feb-2009 12-Feb-2009 /TEMP_114381_1_628824420.arc
    C_REF 114382 8586163986301 8586163987651 12-Feb-2009 12-Feb-2009 /TEMP_114382_1_628824420.arc
    C_REF 114383 8586163987651 8586163989497 12-Feb-2009 13-Feb-2009 /TEMP_114383_1_628824420.arc
    C_REF 114384 8586163989497 8586163989674 13-Feb-2009 13-Feb-2009 /TEMP_114384_1_628824420.arc
    Capture Name: C_REF, LogMiner ID: 142, Last Redo SCN: 8586166339742, Time of Last Redo SCN: 00:10:13 02/13/09
    I am still not able to work out why, even though the archivelogs are registered, they are not mined by LogMiner.
    Can you please help? I am stuck in this situation. I have rebuilt Streams by completely removing the Streams configuration, and I have also dropped and recreated strmadmin.

    Perhaps I missed it in your post but I didn't see a version number or any information as to what form of Streams was implemented or how.
    There are step-by-step instructions for debugging Streams applications at metalink. I would suggest you find the directions for your version and follow them.
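    As a starting point for that debugging, the queries below (nothing version-specific) show which SCN the capture needs and which registered logs its LogMiner session can actually see:
    -- which dictionary SCN does the capture need, and is it enabled?
    SELECT capture_name, status, first_scn, required_checkpoint_scn
    FROM   dba_capture;

    -- which archived logs are registered for the capture?
    SELECT consumer_name, sequence#, first_scn, next_scn, dictionary_begin, name
    FROM   dba_registered_archived_log
    ORDER  BY sequence#;
    The capture only moves past WAITING FOR DICTIONARY REDO once a registered log with dictionary_begin = 'YES' covering its first_scn is available; if that log is gone for good, the capture has to be recreated after a fresh DBMS_CAPTURE_ADM.BUILD.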

  • Streams Capture Error: ORA-01333: failed to establish Logminer Dictionary

    I get the following error:
    ORA-01333: failed to establish Logminer Dictionary
    ORA-01304: subordinate process error. Check alert and trace logs
    ORA-29900: operator binding does not exist
    when the capture process is started. I am trying to get schema-to-schema replication going within a database. I have tried a few different scripts to get this replication going for the EMPLOYEES table from the HR schema to an identical HR2 schema.
    One of the scripts I used is given below.
    If anyone could point out what could possibly be wrong, or what parameter is not set, it would be greatly appreciated. The database is Oracle 11g running in ARCHIVELOG mode.
    CREATE TABLESPACE streams_tbs
    DATAFILE 'C:\app\oradata\stream_files\ORCL\streams_tbs.dbf' SIZE 25M;
    -- Create the Streams administrator user in the database, as follows:
    CREATE USER strmadmin
    IDENTIFIED BY strmadmin
    DEFAULT TABLESPACE streams_tbs
    TEMPORARY TABLESPACE temp
    QUOTA UNLIMITED ON streams_tbs;
    -- Grant the CONNECT, RESOURCE, and DBA roles to the Streams administrator:
    GRANT CONNECT, RESOURCE, DBA
    TO strmadmin;
    --Grant the required privileges to the Streams administrator:
    BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'strmadmin',
    grant_privileges => true);
    END;
    --Granting these roles can assist with administration:
    GRANT SELECT_CATALOG_ROLE
    TO strmadmin;
    GRANT SELECT ANY DICTIONARY
    TO strmadmin;
    commit;
    -- Setup queues
    CONNECT strmadmin/strmadmin@ORCL
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE();
    END;
    --Capture
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'HR.EMPLOYEES',
    streams_type => 'capture',
    streams_name => 'capture_stream',
    queue_name =>
    'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true,
    inclusion_rule => true);
    END;
    --Apply
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'HR2.EMPLOYEES',
    streams_type => 'apply',
    streams_name => 'apply_stream',
    queue_name =>
    'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true,
    source_database => 'ORCL',
    inclusion_rule => true);
    END;
    --Start Capture
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name =>
    'capture_stream');
    END;
    --Start the apply Process
    BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_stream',
    parameter => 'disable_on_error',
    value => 'n');
    END;
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_stream');
    END;
    Any suggestions?

    From what I can understand from the alert logs and the trace, somehow Oracle is not able to allocate a new log, and it also cannot load the library unit SYS.XMLSEQUENCEFROMXMLTYPE.
    I logged into EM and looked for issues; under the recovery settings it showed 100% of the allocated space being used for ARCHIVE LOGS. I'm trying to change that, restart the database server, and see if I can get it to work.
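    To confirm and relieve the full recovery area from SQL*Plus rather than EM, something along these lines is usually enough (the new size is only an example):
    -- how full is the flash recovery area?
    SELECT space_limit/1024/1024 AS limit_mb,
           space_used/1024/1024  AS used_mb
    FROM   v$recovery_file_dest;

    -- give it more room, then let the log switches proceed
    ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
    Alternatively, back up and delete archived logs with RMAN so the space is actually reclaimed; deleting the files at OS level does not update the recovery area accounting.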
    Here are some of the extracts from the alert log:
    Logminer Bld: Done
    STREAMS: dictionary dumped, now wait for inflight txn
    knlciWaitForInflightTxns: wait for inflight txns at this scn:
    scn: 0x0000.008905a3
    [8979875]
    knlciWaitForInflightTxns: Done with waiting for inflight txns at this scn:
    scn: 0x0000.008905a3
    [8979875]
    Thread 1 cannot allocate new log, sequence 417
    Checkpoint not complete
    Current log# 2 seq# 416 mem# 0: C:\APP\ORADATA\ORCL\REDO02.LOG
    Thu May 22 09:04:45 2008
    Thread 1 advanced to log sequence 417
    Current log# 3 seq# 417 mem# 0: C:\APP\ORADATA\ORCL\REDO03.LOG
    Thu May 22 09:04:45 2008
    Logminer Bld: Build started
    Thread 1 cannot allocate new log, sequence 418
    Checkpoint not complete
    Current log# 3 seq# 417 mem# 0: C:\APP\ORADATA\ORCL\REDO03.LOG
    Thread 1 advanced to log sequence 418
    Current log# 1 seq# 418 mem# 0: C:\APP\ORADATA\ORCL\REDO01.LOG
    Thu May 22 09:04:48 2008
    Logminer Bld: Lockdown Complete. DB_TXN_SCN is 0 8980165 LockdownSCN is 8980165
    Thread 1 cannot allocate new log, sequence 419
    Checkpoint not complete
    Current log# 1 seq# 418 mem# 0: C:\APP\ORADATA\ORCL\REDO01.LOG
    Thread 1 advanced to log sequence 419
    Current log# 2 seq# 419 mem# 0: C:\APP\ORADATA\ORCL\REDO02.LOG
    Thu May 22 09:04:57 2008
    Thu May 22 09:04:57 2008
    Logminer Bld: Done
    AND then other part
    Errors in file c:\app\diag\rdbms\orcl\orcl\trace\orcl_ms01_1500.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-29900: operator binding does not exist
    ORA-06540: PL/SQL: compilation error
    ORA-06553: PLS-907: cannot load library unit SYS.XMLSEQUENCEFROMXMLTYPE (referenced by SYS.XMLSEQUENCE)
    ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 83
    ORA-06512: at line 1
    LOGMINER: session#=601, builder MS01 pid=53 OS id=1500 sid=127 stopped
    Thanks for your help. I'll post again if I still cannot get it to work.

  • Capture process status waiting for Dictionary Redo: first scn....

    Hi
    I am facing an issue with Oracle Streams.
    The message below is found in the capture state:
    waiting for Dictionary Redo: first scn 777777777 (e.g.)
    Archive_log_dest=USE_DB_RECOVERY_FILE_DEST
    I have a space-related issue...
    I restored the archive logs to another partition, e.g. /opt/arc_log
    What should I do?
    1) Can the db start reading archive logs from the above location?
    or
    2) How can I move some archive logs back to USE_DB_RECOVERY_FILE_DEST from /opt/arc_log so the db starts processing them?
    Regards

    Hi -
    Bad news.
    As per note 418755.1
    A. Confirm checkpoint retention. Periodically, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter, checkpoint_retention_time, controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible scn available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved and the Streams metadata tables previous to this scn (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
    Check whether the archived redo logfile it is requesting is about 60 days old. You need all archived redo logs from the requested logfile onwards; if any are missing, then you are out of luck. It does not matter that they have already been mined and captured; the capture still needs these files for a restart. It has always been like this and IMHO it is a significant limitation of Streams.
    If you cannot recover the logfiles, then you will need to rebuild the capture process and ensure that any gap in the captured data is resynced manually, using tags to fix the data.
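    For the future, the checkpoint retention window can be changed with ALTER_CAPTURE; a small sketch (the capture name and the 7-day value are illustrative):
    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name              => 'YOUR_CAPTURE_NAME',
        checkpoint_retention_time => 7);  -- keep 7 days of checkpoint metadata instead of the default 60
    END;
    /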
    Rgds
    Mark Teehan
    Singapore

  • Location of Capture Process and Perf Overhead

    Hi,
    We are just starting to look at Streams technology. I am reading the doc and it implies that the capture process runs on the source database node. I am concerned about the overhead on the OLTP box. I have a few questions I was hoping to get clarification on.
    1. Can I send the redo log to another node/db with data dictionary info and run the capture there? I would like to offload the perf overhead to another box and I thought Logminer could do it, so why not Streams.
    2. If I run the capture process on one node/db can the initial queue I write to be on another node/db or is it implicit to where I run the capture process? I think I know this answer but would like to hear yours.
    3. Are there any performance metrics on the cost of the capture process to an OLTP system? I realize there are many variables but am wondering whether I should even be concerned with offloading the capture process.
    Many thanks in advance for your time.
    Regards,
    Tom

    In the current release, Oracle Streams performs all capture activities at the source site. The ability to capture the changes from the redo logs at an alternative site is planned for a future release. Captured changes are stored in an in-memory buffer queue on the local database. Multi-cpu servers with enough available memory should be able to handle the overhead of capture.
