First SCN

10.2.0.5, 4-node RAC cluster with Streams.
We have a few XA transactions on the source database, and the commit/rollback/failure of these transactions is not understood by Streams. The first SCN position gets set to the start_scn of such a transaction and never advances. In certain scenarios, when we restart Streams it goes back to the first_scn.
I have a few questions:
1) How can we monitor the first_scn of the capture process? Are there any DBA views we can refer to?
2) Once we encounter a first SCN which is void, we need to manually advance it. We need to get the min(scn) from gv$transaction and compare it to the checkpoint SCN.
The first_scn will be set to whichever SCN is lower. Is there any procedure we can use for altering the capture process first SCN?
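For reference, a rough sketch of the comparison in question 2 (assuming the open-transaction SCN comes from GV$TRANSACTION.START_SCN and the "checkpoint scn" means DBA_CAPTURE.REQUIRED_CHECKPOINT_SCN; the capture name is a placeholder):
-- Sketch only: take the lower of the oldest open-transaction SCN and the
-- capture's required checkpoint SCN as the candidate value.
SELECT LEAST(NVL(t.min_txn_scn, c.required_checkpoint_scn),
             c.required_checkpoint_scn) AS candidate_scn
FROM   (SELECT MIN(start_scn) AS min_txn_scn FROM gv$transaction) t,
       (SELECT required_checkpoint_scn
          FROM dba_capture
         WHERE capture_name = 'YOUR_CAPTURE') c;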
Thanks in advance

You can query dba_capture:
SELECT CAPTURE_NAME, FIRST_SCN FROM DBA_CAPTURE;
You can reset the start SCN using the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure:
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'your_capture_name',
    start_scn    => your_new_start_SCN_as_a_number_not_string);
END;
/
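To alter the first SCN specifically, ALTER_CAPTURE also accepts a first_scn parameter in 10.2; the value can only be moved forward, never back. A sketch, with a placeholder capture name:
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'your_capture_name',
    first_scn    => your_new_first_SCN_as_a_number);  -- must not be lower than the current FIRST_SCN
END;
/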

Similar Messages

  • Hung on "Waiting for dictionary redo: first scn"

    Hi I'm on Oracle 10.2.0.4 with the following Streams setup with databases in brackets: [capture process -> capture queue -> propagation] -> [apply queue -> apply process].
    The capture was running but is now stuck on "waiting for dictionary redo: first scn 12345". The capture initially paused because the target database was brought down for a week, which caused the propagation on the source to become disabled after it could no longer connect. I stopped/started the capture and propagation on the source once the target database was brought back online.
    All of the archived log files back to the log file containing the dictionary build and specified SCN have been restored to the original archive destination directory on the source database server. I've tried stopping and starting the capture after restoring the archived log files, but it remains in the same status. The first 40 out of roughly 700 archived log files are currently registered in DBA_REGISTERED_ARCHIVED_LOG.
    Do all of the archived log files need to be registered before Streams capture will start scanning/capturing again from the beginning? Or if not, what else might be needed to get the capture moving once again?
    Thanks for any help.

    For what it's worth, I was able to get the capture to start capturing changes again.
    What I had to do was...
    1. stop the existing capture process
    2. record the necessary first/captured SCN values for recreating the capture process (queues were empty)
    3. drop the existing capture process while preserving the rule set
    4. recreate the capture process with the existing rule set and appropriate first & start SCN values
    5. manually register all archived log files, from the one containing the dictionary build/first SCN onward
    6. start the new capture process
    I still have no idea what the issue was to begin with.
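    For anyone hitting the same thing, here is a rough sketch of steps 4 and 5 (every name, SCN and path below is a placeholder, not the actual values from my environment):
    -- Step 4 (sketch): recreate the capture with the preserved rule set and the recorded SCNs
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name    => 'strmadmin.capture_queue',
        capture_name  => 'your_capture',
        rule_set_name => 'strmadmin.your_ruleset',
        first_scn     => 1234567,    -- recorded FIRST_SCN (dictionary build)
        start_scn     => 1234890);   -- recorded start SCN, >= FIRST_SCN
    END;
    /
    -- Step 5 (sketch): register each restored archived log for the capture process
    ALTER DATABASE REGISTER LOGICAL LOGFILE '/arch/dest/ARC00041_0123456789.001'
      FOR 'your_capture';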

  • Capture process status waiting for Dictionary Redo: first scn....

    Hi
    I am facing an issue in Oracle Streams.
    The below message is found in the capture state:
    waiting for Dictionary Redo: first scn 777777777 (e.g.)
    Archive_log_dest=USE_DB_RECOVERY_FILE_DEST
    I have a space-related issue, so I restored the archive logs to another partition, e.g. /opt/arc_log.
    What should I do?
    1) Can the DB start reading archive logs from the above location?
    or
    2) How do I move some archive logs from /opt/arc_log back to USE_DB_RECOVERY_FILE_DEST so the DB starts processing?
    Regards

    Hi -
    Bad news.
    As per note 418755.1
    A. Confirm checkpoint retention. Periodically, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter, checkpoint_retention_time, controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible scn available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved and the Streams metadata tables previous to this scn (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
    Check if the archived redo log file it is requesting is about 60 days old. You need all archived redo logs from the requested log file onwards; if any are missing then you are out of luck. It doesn't matter that they have already been mined and captured; capture still needs these files for a restart. It has always been like this and IMHO is a significant limitation of Streams.
    If you cannot recover the log files, then you will need to rebuild the capture process and ensure that any gap in captured data is resynced manually, using tags to fix the data.
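    For completeness, a sketch of changing the retention (the value is in days; 30 is just an example, not a recommendation):
    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name              => 'your_capture_name',
        checkpoint_retention_time => 30);  -- keep 30 days of checkpoint data
    END;
    /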
    Rgds
    Mark Teehan
    Singapore

  • WAITING FOR DICTIONARY REDO: FIRST SCN

    Hi all,
    I configured Streams almost 4 months ago and there wasn't a problem until today. Last night the source database powered down due to a UPS problem. Now I am seeing the WAITING FOR DICTIONARY REDO: FIRST SCN 83683730 error in V$STREAMS_CAPTURE. The SCN points 4 months back and I don't have that archive log in my RMAN backups.
    What are your suggestions?
    Best Regards

    Hi -
    Bad news.
    As per note 418755.1
    A. Confirm checkpoint retention. Periodically, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter, checkpoint_retention_time, controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible scn available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved and the Streams metadata tables previous to this scn (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
    Check if the archived redo log file it is requesting is about 60 days old. You need all archived redo logs from the requested log file onwards; if any are missing then you are out of luck. It doesn't matter that they have already been mined and captured; capture still needs these files for a restart. It has always been like this and IMHO is a significant limitation of Streams.
    If you cannot recover the log files, then you will need to rebuild the capture process and ensure that any gap in captured data is resynced manually, using tags to fix the data.
    Rgds
    Mark Teehan
    Singapore

  • Get more information from customer AI_SC_GET_SAP_CUSTOMER_NUMBERS

    Hello,
    In our solution manager, we are trying to get more customer information from sap.
    In report AI_SC_GET_SAP_CUSTOMER_NUMBERS that call function BCSN_Z001_GET_VARS_DEBITS in remote RFC  SAP-OSS to get all customer numbers.
    i see that this function fill the table l_debits that have more information than the customer as well as name, phone, country, etc..
    but we only can get the customer numbers.
    is there any possibility to get more information as name, address, contact person, etc.. from Sap to  solution manager synchronization process to get all customers data up-to-date ? thought this report or another one ?
    thanks:
    Luis
    [update 09/08/2013] - just for try to answers my first scn post but still interested on any information from SAP for allow that information that can be important to have on solution manager after SMP update.

    up

  • FIRST_TIME and NEXT_TIME columns in v$archived_log view

    Oracle Version      : 11.2
    OS Platform      : Solaris 10
    Could anyone please explain what FIRST_TIME and NEXT_TIME columns in v$archived_log are?
    Oracle Documentation (below) and googling didn't help.
    http://docs.oracle.com/cd/E18283_01/server.112/e17110/dynviews_1016.htm
    Didn't quite understand what these mean:
    FIRST_TIME    DATE    Timestamp of the first change
    NEXT_TIME     DATE    Timestamp of the next change
    Did a search on OTN as well. A similar OTN post unfortunately ended up in abuse and insults!
    description of NEXT_TIME column in V$ARCHIVED_LOG View

    For each archive log file, there is a first_time and a next_time.
    first_time refers to the timestamp of the first SCN recorded in that log; next_time refers to the timestamp of the first change in the next archive log.
    So the next_change#/next_time of one archive log equals the first_change#/first_time of the next log.
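    For example, the relationship can be seen with a query along these lines (a sketch; column formatting trimmed):
    SELECT sequence#,
           first_change#,
           TO_CHAR(first_time, 'DD-MON-YYYY HH24:MI:SS') AS first_time,
           next_change#,
           TO_CHAR(next_time,  'DD-MON-YYYY HH24:MI:SS') AS next_time
    FROM   v$archived_log
    ORDER  BY sequence#;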
     SEQUENCE#  FIRST_CHANGE#  FIRST_TIME            NEXT_CHANGE#  NEXT_TIME
        116329     1042518947  16-DEC-2011 04:05:16    1042534917  16-DEC-2011 04:07:04
        116330     1042534917  16-DEC-2011 04:07:04    1042550495  16-DEC-2011 04:08:24

  • Capture  WAITING FOR DICTIONARY REDO: FILE

    Hi Experts,
    Due to a large transaction, the source A1 database auto-switched and archived its redo log files.
    After this event, the capture on our destination A2 DB got the state WAITING FOR DICTIONARY REDO: FILE G:\ORACLE\ORADATA\VMSDBSEA\ARCHIVE\ARC10201_0639211808.001.
    We use bi-directional Streams on Oracle 10gR4 on 32-bit Windows 2003.
    I think this is a capture problem in A2 only. Last weekend, Streams stopped the capture process due to low SGA.
    I restarted the A2 capture this morning.
    What do I need to do? Just add a new first SCN, or register the redo file?
    Please help me in detail.
    I find this case at http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10727/capture.htm#1006565
    1. A capture process is configured to capture changes to tables.
    2. A database administrator stops the capture process. When the capture process is stopped, it records the SCN of the change it was currently capturing.
    3. User applications continue to make changes to the tables while the capture process is stopped.
    4. The capture process is restarted three hours after it was stopped.
    However, I do not see how to set a new SCN to restart the capture.
    Thanks,
    Jim

    I tried to register an archive log file and got this message:
    SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE 'ARC23250_0621282506.001' FOR 'STREAM_CAPTURE';
    ALTER DATABASE REGISTER LOGICAL LOGFILE 'ARC23250_0621282506.001' FOR 'STREAM_CAPTURE'
    ERROR at line 1:
    ORA-01284: file cannot be opened
    So what is the issue?
    Thanks
    JIm

  • Unable to register one extract, but the other two are OK.

    Enviroment:
    OS:Windows Server 2003 64bit
    GG:
    Oracle GoldenGate Command Interpreter for Oracle
    Version 11.2.1.0.6 16211226 OGGCORE_11.2.1.0.6_PLATFORMS_130418.1829
    Windows x64 (optimized), Oracle 11g on Apr 19 2013 17:38:40
    There are three extracts on the same server. Now I'd like to register all three. Two of them register fine, but one extract hits the error message below:
    2013-12-22 21:43:50  ERROR   OGG-01755  Cannot register or unregister EXTRACT E_ZF_K because of the following SQL error: OCI Error ORA-00001: unique constraint (SYSTEM.LOGMNR_SESSION_UK1) violated
    ORA-06512: at "SYS.DBMS_CAPTURE_ADM_INTERNAL", line 453
    ORA-06512: at "SYS.DBMS_CAPTURE_ADM", line 289
    ORA-06512: at line 1 (status = 1). See Extract user privileges in the Oracle GoldenGate for Oracle Installation and Setup Guide.
    What should I do? I also want to know the detailed cause of this error. Thanks in advance!

    Hi,
    Ongoing DDL operations on the Oracle database do not allow the PL/SQL package DBMS_CAPTURE_ADM.BUILD to build the LogMiner Data Dictionary as part of the GGSCI REGISTER EXTRACT command.
    NOTE: The DBMS_CAPTURE_ADM.BUILD procedure is the same as the DBMS_LOGMNR_D.BUILD procedure.
    exec dbms_goldengate_auth.grant_admin_privilege('USER_NAME');
    Connect to the Oracle database as SYSDBA  # sqlplus /nologin
    SQL> connect / as sysdba
    In the SYSDBA session, determine that there are no EXCLUSIVE DDL sessions.
    (Summary)  SQL> select mode_held, count(*) from dba_ddl_locks group by mode_held;
    (In detail)  SQL> select mode_held, name, type from dba_ddl_locks where mode_held = 'Exclusive' order by mode_held;
    (To identify a specific user process)
    SQL> select l.name, l.type, l.mode_held, s.sid, s.program, s.username, p.spid, p.pid
         from dba_ddl_locks l, v$session s, v$process p
         where l.mode_held = 'Exclusive' and l.session_id = s.sid and s.paddr = p.addr;
    When there are no EXCLUSIVE mode DDL locks, re-run the GGSCI REGISTER EXTRACT command.
    If this does not work, then try the following steps:
    1. turn on sql trace
    2. identify SQL and its bind variables:
    declare
      extract_name          varchar2(100)  := :1;
      source_global_name    varchar2(4000) := :2;
      firstScn              number         := :3;
      outbound_server_name  varchar2(30);
      outbound_capture_name varchar2(30);
      capture_queue_name    varchar2(30);
      queue_table_name      varchar2(30);
      outbound_comment      varchar2(125);
    BEGIN
      dbms_xstream_gg_adm.wait_for_inflight_txns := 'n';
      dbms_xstream_gg_adm.synchronization        := 'none';
      dbms_xstream_gg_adm.is_goldengate          := true;
      /* Construct the queue table name */
      queue_table_name := SUBSTR('OGG$Q_TAB_' || extract_name, 1, 30);
      /* Construct the capture queue name */
      capture_queue_name := SUBSTR('OGG$Q_' || extract_name, 1, 30);
      /* Create the capture queue */
      dbms_streams_adm.set_up_queue(queue_table => queue_table_name,
                                    queue_name  => capture_queue_name);
      /* Construct the outbound capture name */
      outbound_capture_name := SUBSTR('OGG$CAP_' || extract_name, 1, 30);
      /* Create the capture, specifying the first scn */
      DBMS_XSTREAM_GG.SET_GG_SESSION();
      dbms_capture_adm.create_capture(queue_name      => capture_queue_name,
                                      capture_name    => outbound_capture_name,
                                      first_scn       => firstScn,
                                      source_database => source_global_name);
      /* Construct the outbound server name */
      outbound_server_name := SUBSTR('OGG$' || extract_name, 1, 30);
      /* Construct the comment associated with this outbound server */
      outbound_comment := extract_name || ' GoldenGate Extract';
      DBMS_XSTREAM_GG_ADM.ADD_OUTBOUND(server_name            => outbound_server_name,
                                       capture_name           => outbound_capture_name,
                                       source_database        => source_global_name,
                                       committed_data_only    => FALSE,
                                       wait_for_inflight_txns => 'N',
                                       synchronization        => 'NONE',
                                       start_scn              => firstScn,
                                       comment                => outbound_comment);
      DBMS_XSTREAM_GG.SET_GG_SESSION(FALSE);
    END;
    3. run the sql manually under sqlplus.
    it hits:
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01502: index 'SYS.I_WRI$_OPTSTAT_TAB_OBJ#_ST' or partition of such index is
    in unusable state
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 746
    ORA-06512: at line 16
    4.
    SQL> select owner,status from dba_indexes where index_name='I_WRI$_OPTSTAT_TAB_OBJ#_ST';
    OWNER STATUS
    SYS UNUSABLE
    5. rebuild the index, but it did not help:
    alter index I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST rebuild;
    6. drop and recreate the index. Then the extract can be registered
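    (Side note: the rebuild shown in step 5 targets a different index than the one reported UNUSABLE in step 4; presumably the statement intended there was something along these lines, a sketch:)
    ALTER INDEX SYS.I_WRI$_OPTSTAT_TAB_OBJ#_ST REBUILD;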
    Thanks,
    GG Lover

  • How to delete the standby archive log files in ASM?

    Hi Experts
    we have a real-time downstream replication that uses a location in ASM for the shipped log files.
    set up by
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=+BOBOASM/NANPUT/standbyarchs/
    VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)' Scope=BOTH;
    What shall I do to clean up those files?
    Any procedure or script to do that?
    Thanks

    Hello Haggylein
    check this out, seems to work
    --- redologs used or not?
    ---- when purgeable we can delete it
    COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
    COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A25
    COLUMN FIRST_SCN HEADING 'First SCN' FORMAT 99999999999
    COLUMN NEXT_SCN HEADING 'Next SCN' FORMAT 99999999999
    COLUMN PURGEABLE HEADING 'Purgeable?' FORMAT A10
    SELECT r.CONSUMER_NAME,
    r.NAME,
    r.FIRST_SCN,
    r.NEXT_SCN,
    r.PURGEABLE
    FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
    WHERE r.CONSUMER_NAME = c.CAPTURE_NAME and PURGEABLE = 'YES';
    -- Now the script
    -- to be executed on the downstream database
    -- generate the list of logs to be purged and executed in a ksh script
    -- sqlplus "/as sysdba" @$HOME/bin/generate_list.sql
    SET NEWPAGE 0
    SET SPACE 0
    SET LINESIZE 150
    SET PAGESIZE 0
    SET TERMOUT OFF
    SET ECHO OFF
    SET FEEDBACK OFF
    SET HEADING OFF
    SET MARKUP HTML OFF SPOOL OFF
    spool list_purgeable_arch_redologs.ksh
    SELECT 'asmcmd ls ' || r.NAME
    FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
    WHERE r.CONSUMER_NAME = c.CAPTURE_NAME and PURGEABLE = 'YES';
    spool off
    exit
    # eventually we can call it from a script
    # !ksh
    # delete of the shipped redologs
    # to be performed on node 2
    # not to be used on
    $HOME/bin/export ORACLE_SID=+ASM2
    ./list_purgeable_arch_redologs.ksh
    exit
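    Note that as posted, the spooled script only runs asmcmd ls against each file. To actually remove the purgeable logs you would presumably spool asmcmd rm instead, along these lines (a sketch, using the same views as above):
    spool purge_arch_redologs.ksh
    SELECT 'asmcmd rm ' || r.NAME
    FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
    WHERE r.CONSUMER_NAME = c.CAPTURE_NAME and r.PURGEABLE = 'YES';
    spool off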

  • 10gR2 Logical Standby database not applying logs

    No errors are appearing in the logs and I've started the apply process (ALTER DATABASE START LOGICAL STANDBY APPLY), but when I query dba_logstdby_log, none of the logs for the last 4 days show as applied and the first SCN is still listed as current. Any thoughts on where I should start looking?
    the latest event in DBA_LOGSTDBY_EVENTS is the startup of the log mining and apply.
    I do not have standby redo logs so I cannot do real-time apply, though I am looking at implementing this. Obviously, this is pretty new to me.

    Sorry I didn't mention this before, the logs are being transferred, I verified their location on the os and it matches the location in the dba_logstdby_log view.

  • One to Many table level replication

    Hi All,
    I was configuring Streams replication from one table to many (3) tables in the same database (10.2.0.4).
    Below figure states my requirement.
                                        |--------->TEST2.TAB2(Destination) 
                                        |
    TEST1.TAB1(Source) ---------------->|--------->TEST3.TAB3(Destination)
                                        |
                                        |--------->TEST4.TAB4(Destination)
    Below are the steps I followed, but replication is not working.
    CREATE USER strmadmin
    IDENTIFIED BY strmadmin
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to strmadmin;
    BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'strmadmin',
    grant_privileges => true);
    END;
    check  that the streams admin is created:
    SELECT * FROM dba_streams_administrator;
    SELECT supplemental_log_data_min,
    supplemental_log_data_pk,
    supplemental_log_data_ui,
    supplemental_log_data_fk,
    supplemental_log_data_all FROM v$database;
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    alter table test1.tab1 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    alter table test2.tab2 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    alter table test3.tab3 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    alter table test4.tab4 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    conn strmadmin/strmadmin
    var first_scn number;
    set serveroutput on
    DECLARE  scn NUMBER;
    BEGIN
      DBMS_CAPTURE_ADM.BUILD(
             first_scn => scn);
      DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
      :first_scn := scn;
    END;
    exec dbms_capture_adm.prepare_table_instantiation(table_name=>'test1.tab1');
    begin
    dbms_streams_adm.set_up_queue(
    queue_table => 'strm_tab',
    queue_name => 'strm_q',
    queue_user => 'strmadmin');
    end;
    var first_scn number;
    exec :first_scn:= 2914584
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
         queue_name         => 'strm_q',
         capture_name       => 'capture_tab1',
         rule_set_name      => NULL,
         source_database    => 'SIVIN1',
         use_database_link  => false,
         first_scn          => :first_scn,
         logfile_assignment => 'implicit');
    END;
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
         table_name         => 'test1.tab1',
         streams_type       => 'capture',
         streams_name       => 'capture_tab1',
         queue_name         => 'strm_q',
         include_dml        => true,
         include_ddl        => false,
         include_tagged_lcr => true,
         source_database    => 'SIVIN1',
         inclusion_rule     => true);
    END;
    BEGIN 
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
              table_name         => 'test2.tab2',
              streams_type       => 'apply',
              streams_name       => 'apply_tab2',
              queue_name         => 'strm_q',
              include_dml        => true,
              include_ddl        => false,
              include_tagged_lcr => true,
              source_database    => 'SIVIN1',
              inclusion_rule     => true);
    END;
    BEGIN 
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
              table_name         => 'test3.tab3',
              streams_type       => 'apply',
              streams_name       => 'apply_tab3',
              queue_name         => 'strm_q',
              include_dml        => true,
              include_ddl        => false,
              include_tagged_lcr => true,
              source_database    => 'SIVIN1',
              inclusion_rule     => true);
    END;
    BEGIN 
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
              table_name         => 'test4.tab4',
              streams_type       => 'apply',
              streams_name       => 'apply_tab4',
              queue_name         => 'strm_q',
              include_dml        => true,
              include_ddl        => false,
              include_tagged_lcr => true,
              source_database    => 'SIVIN1',
              inclusion_rule     => true);
    END;
    select STREAMS_NAME,
          STREAMS_TYPE,
          TABLE_OWNER,
          TABLE_NAME,
          RULE_TYPE,
          RULE_NAME
    from DBA_STREAMS_TABLE_RULES;
    begin 
      dbms_streams_adm.rename_table(
           rule_name       => 'TAB245' ,
           from_table_name => 'test1.tab1',
           to_table_name   => 'test2.tab2',
           step_number     => 0,
           operation       => 'add');
    end;
    begin 
      dbms_streams_adm.rename_table(
           rule_name       => 'TAB347' ,
           from_table_name => 'test1.tab1',
           to_table_name   => 'test3.tab3',
           step_number     => 0,
           operation       => 'add');
    end;
    begin 
      dbms_streams_adm.rename_table(
           rule_name       => 'TAB448' ,
           from_table_name => 'test1.tab1',
           to_table_name   => 'test4.tab4',
           step_number     => 0,
           operation       => 'add');
    end;
    col apply_scn format 999999999999
    select dbms_flashback.get_system_change_number apply_scn from dual;
    begin 
      dbms_apply_adm.set_table_instantiation_scn(
      source_object_name   => 'test1.tab1',
      source_database_name => 'SIVIN1',
      instantiation_scn    => 2916093);
    end;
    exec dbms_capture_adm.start_capture('capture_tab1');
    exec dbms_apply_adm.start_apply('apply_tab2');
    exec dbms_apply_adm.start_apply('apply_tab3');
    exec dbms_apply_adm.start_apply('apply_tab4');
    Could anyone please help me? Please let me know where I have gone wrong.
    If the above steps are not correct, then please let me know the desired steps.
    -Yasser

    First of all, I suggest implementing it with one destination first.
    Here is a good example which I have done.
    Just use it and test it; then prepare your other schemas' tables (the 3 destinations, I mean).
    alter system set global_names =TRUE scope=both;
    oracle@ulfet-laptop:/MyNewPartition/oradata/my$ mkdir Archive
    shutdown immediate
    startup mount
    alter database archivelog
    alter database open
    ALTER SYSTEM SET log_archive_format='MY_%t_%s_%r.arc' SCOPE=spfile;
    ALTER SYSTEM SET log_archive_dest_1='location=/MyNewPartition/oradata/MY/Archive MANDATORY' SCOPE=spfile;
    # alter system set streams_pool_size=25M scope=both;
    create tablespace streams_tbs datafile '/MyNewPartition/oradata/MY/streams_tbs01.dbf' size 25M autoextend on maxsize unlimited;
    grant dba to strmadmin identified by streams;
    alter user strmadmin default tablespace streams_tbs quota unlimited on streams_tbs;
    exec dbms_streams_auth.grant_admin_privilege( -
    grantee => 'strmadmin', -
    grant_privileges => true)
    grant dba to demo identified by demo;
    create table DEMO.EMP as select * from HR.EMPLOYEES;
    alter table demo.emp add constraint emp_emp_id_pk primary key (employee_id);
    begin
    dbms_streams_adm.set_up_queue (
         queue_table     => 'strmadmin.streams_queue_table',
         queue_name     => 'strmadmin.streams_queue');
    end;
    select name, queue_table from dba_queues where owner='STRMADMIN';
    set linesize 150
    col rule_owner for a10
    select rule_owner, streams_type, streams_name, rule_set_name, rule_name from dba_streams_rules;
    BEGIN
         dbms_streams_adm.add_table_rules(
         table_name          => 'HR.EMPLOYEES',
         streams_type          => 'CAPTURE',
         streams_name          => 'CAPTURE_EMP',
         queue_name          => 'STRMADMIN.STREAMS_QUEUE',
         include_dml          => TRUE,
         include_ddl          => FALSE,
         inclusion_rule     => TRUE);
    END;
    select capture_name, rule_set_name,capture_user from dba_capture;
    BEGIN
         DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
         capture_name           => 'CAPTURE_EMP',
         attribute_name     => 'USERNAME',
         include          => true);
    END;
    select source_object_owner, source_object_name, instantiation_scn from dba_apply_instantiated_objects;
    --no rows returned   - why?
    DECLARE
         iscn      NUMBER;
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_nUMBER();
         DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
         source_object_name     => 'HR.EMPLOYEES',
         source_database_name     => 'MY',
         instantiation_scn     => iscn);
    END;
    conn strmadmin/streams
    SET SERVEROUTPUT ON
    DECLARE
         emp_rule_name_dml VARCHAR2(30);
         emp_rule_name_ddl VARCHAR2(30);
    BEGIN
         DBMS_STREAMS_ADM.ADD_TABLE_RULES(
         table_name           => 'hr.employees',
         streams_type           => 'apply',
         streams_name           => 'apply_emp',
         queue_name           => 'strmadmin.streams_queue',
         include_dml           => true,
         include_ddl           => false,
         source_database      => 'my',
         dml_rule_name      => emp_rule_name_dml,
         ddl_rule_name      => emp_rule_name_ddl);
    DBMS_OUTPUT.PUT_LINE('DML rule name: '||emp_rule_name_dml);
    DBMS_OUTPUT.PUT_LINE('DDL rule name: '||emp_rule_name_ddl);
    END;
    BEGIN
         DBMS_APPLY_ADM.SET_PARAMETER(
         apply_name => 'apply_emp',
         parameter => 'disable_on_error',
         value => 'n');
    END;
    SELECT a.apply_name, a.rule_set_name, r.rule_owner, r.rule_name
    FROM dba_apply a, dba_streams_rules r
    WHERE a.rule_set_name=r.rule_set_name;
    -- select rule_name field`s value and write below -- example EMPLOYEES16
    BEGIN
         DBMS_STREAMS_ADM.RENAME_TABLE(
         rule_name           => 'STRMADMIN.EMPLOYEES14',
         from_table_name     => 'HR.EMPLOYEES',
         to_table_name          => 'DEMO.EMP',
         operation          => 'ADD'); -- can be ADD or REMOVE
    END;
    BEGIN
         DBMS_APPLY_ADM.START_APPLY(
         apply_name => 'apply_emp');
    END;
    BEGIN
         DBMS_CAPTURE_ADM.START_CAPTURE(
         capture_name => 'capture_emp');
    END;
    alter user hr identified by hr;
    alter user hr account unlock;
    conn hr/hr
    insert into employees values
    (400,'Ilqar','Ibrahimov','[email protected]','123456789',sysdate,'ST_MAN',30000,0,110,110);
    insert into employees values
    (500,'Ulfet','Tanriverdiyev','[email protected]','123456789',sysdate,'ST_MAN',30000,0,110,110);
    conn demo/demo
    grant all on emp to public;
    select last_name, first_name from emp where employee_id=300;
    strmadmin/streams
    select apply_name, queue_name,status from dba_apply;
    select capture_name, status from dba_capture;

  • ORA-26667 during checkpoint retention check

    Hi guys,
    I have an environment which is replicated two ways (DML, DDL) on RDBMS 10.2.0.4; each database runs on RAC (one on 3 nodes, the other on 2 nodes). Everything was running without problems for the last 3 months, however last weekend I got:
    Sat Feb 14 23:18:41 2009
    STREAMS Warning: ORA-26667 during checkpoint retention check  <= this is the first message that we got
    Sun Feb 15 05:18:43 2009
    STREAMS Capture C 1: first scn changed.
    scn: 0x0009.79468518
    However, I took a look that day and didn't find any issue. I looked for that problem in Metalink but didn't find anything; this message came out when the checkpoint retention time moved.
    The checkpoint retention time in Streams is the number of days that the capture process retains checkpoints before purging them automatically. A capture process periodically computes the age of a checkpoint by subtracting the next_time of the archived redo log that corresponds to the checkpoint from the first_time of the archived redo log file containing the required checkpoint SCN.
    If the resulting value is greater than the checkpoint retention time, then the capture process automatically purges the checkpoint by advancing the dba_capture.first_scn value. All the archived logs retained by the capture process can be seen in the dba_registered_archived_log view.
    The default retention time is 60 days.
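    (For reference, the relevant values can be checked with queries like these, a sketch:)
    SELECT capture_name, first_scn, required_checkpoint_scn, checkpoint_retention_time
    FROM   dba_capture;
    SELECT consumer_name, name, first_scn, next_scn, purgeable
    FROM   dba_registered_archived_log;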
    I didn't find any error in the RDBMS, therefore I didn't do anything, but on Monday morning I got a lot of replication errors caused by
    ORA-01403. After fixing all those errors, changes on the database where I got that error were not replicated to the other database, which could explain why many tables got out of sync. It seems the reason was that the propagation process was hanging; I stopped and restarted it but no changes were received on the other database. As a workaround I recreated it and the problem was fixed. Have you had a problem like this before?
    Rgds

    Browsing on Metalink, it seems the way to trace the root cause of this kind of problem is to run the Oracle Streams Healthcheck right away after hitting it. It seems the error I got was just a coincidence, because there is nothing related.

  • Rman syntext for deleting archivelogs

    I am looking for RMAN syntax to delete all archivelogs but leave the last 30 minutes on disk. I tried the following but it did not work:
    delete archivelog until time 'sysdate - 30/(24*60)';
    any help would be appreciated.
    Thanks,
    mkjoco

    There was no error, but RMAN kept deleting extra logs that were created after the requested time. The good news is that it is working now with the following command:
    "delete copy of archivelog all completed before 'sysdate - 30/(24*60)';"
    I found this on the net. It seems to me that with "UNTIL TIME" RMAN was looking at the first SCN time in the log, but with "COPY OF" it looks at the file creation time.
    Don't take my word for it, this is just my guess... if someone can verify this, that would be nice.
    Thanks

  • Rman granularity backup

    Hi all,
    I have a question about the rman policy implementation. I do backups of one of my database via rman. I do a L0 backup on Sunday and a L1 every other days. I backup also the archivelog files. My retention policy is 14 days.
    Now, due to a disk space problem, I would like to delete the backed-up archivelogs older than 7 days.
    I tried to add this line in the daily rman script:
    "delete noprompt backup of archivelog until time 'SYSDATE-7' "
    but in this way I also delete the archivelogs needed to restore the L1 backups.
    In other words: I would use 2 levels of backup granularity:
    - from now to 7 days ago: all archivelogs (so I can restore the db to any SCN)
    - from 7 days to 14 days ago: only the L0 and L1 backups and only the archivelogs they need (I can restore only from the incremental backups).
    Is there a way to resolve this problem?
    thanks
    Federico

    Hi Khurram,
    I cannot increase my retention policy due to a disk space problem.
    I don't understand your advice. Consider this:
    - backup tag "first" SCN 100 (10 day old)
    - all archivelog from SCN 90 to 100 (10 day old)
    - backup tag "second" SCN 110 (3 day old)
    - all archivelog from SCN 101 110 (3 day old)
    I would like to find a way (maybe an automatic way) to delete the older archivelog than 7 day but I must to keep the archivelog containing the SCN 100 because without it I cannot restore the DB to the 1° backup.
    Federico

  • Required_checkpoint_scn and first_scn

    Hi All,
    Could anyone please help me in understanding required_checkpoint_scn and first_scn usage in STREAMS.
    It seems like both are used for the same functionality according to the manuals.
    From link: http://download.oracle.com/docs/cd/B14117_01/server.101/b10727/capture.htm#1011367
    When the capture process is restarted, it scans the redo log from the required checkpoint SCN forward. Therefore, the redo log file that includes the required checkpoint SCN, and all subsequent redo log files, must be available to the capture process.
    The first SCN for a capture process can be reset to a higher value, but it cannot be reset to a lower value. Therefore, a capture process will never need the redo log files that contain information prior to its first SCN. Query the DBA_LOGMNR_PURGED_LOG data dictionary view to determine which archived redo log files will never be needed by any capture process.
    Please help me understand the dependency between, and the difference of, these two SCN values.
    -Yasser

    See this link.
    REQUIRED_CHECKPOINT_SCN
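    Both values are exposed in DBA_CAPTURE, so you can watch them side by side; FIRST_SCN is the lowest SCN the capture could ever be reset back to, while REQUIRED_CHECKPOINT_SCN is where it actually resumes scanning the redo after a restart:
    SELECT capture_name, first_scn, required_checkpoint_scn
    FROM   dba_capture;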
