Lowest SCN in an existing LogMiner dictionary

What is the SQL query to know the lowest SCN in the existing Logminer dictionary that can be used as the FIRST_SCN of a NEW capture process?
Thanks.

It sounds like your wording is what causes the problem, especially 'existing' and 'new', which restrict the context. For my answer, I will suppose that the data you propose to replicate does not yet exist at the target site.
You would like to know how far back in time you can go to replicate the source tables. In theory there is no limit: you could even instantiate the new target table at SCN 1, load every archived log that ever existed on your DB, prepare the table with SCN=0, create the empty structure on the target, and let the new capture mine all your archived logs.
Alas, this does not work. If you try it, you will see 'MULTI VERSION DATA DICTIONARY' (nicknamed MTVD) in the target site's alert log. There is another player: the dictionary export into the archived logs. Streams needs to validate every LCR against a version of the data dictionary that is fixed in time, to guarantee past consistency (in the meantime, did you add a column?). This export is performed by DBMS_CAPTURE_ADM.BUILD, which dumps the current data dictionary into the archived logs (it is run automatically when you set up a new capture with DBMS_STREAMS_ADM.ADD_(TABLE/SCHEMA)_RULES).
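For reference, this export can also be run manually; a minimal sketch (the OUT parameter returns the SCN at which the dictionary was dumped):
SET SERVEROUTPUT ON
DECLARE
  v_first_scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => v_first_scn);
  DBMS_OUTPUT.PUT_LINE('Dictionary exported at SCN ' || v_first_scn);
END;
/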
So the real restriction on your request is the presence of a past point-in-time copy of the data dictionary in an archived log. And this is also the answer to your question: the lowest SCN you can use is the lowest SCN where v$archived_log.dictionary_begin = 'YES'.
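As a sketch, that query would be (on RAC or with standby destinations you may want extra filters, e.g. standby_dest = 'NO'):
SELECT MIN(first_change#) AS first_scn
  FROM v$archived_log
 WHERE dictionary_begin = 'YES';
or, to list every usable candidate instead of only the oldest:
SELECT DISTINCT first_change#, name
  FROM v$archived_log
 WHERE dictionary_begin = 'YES';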
The export itself, which installs a set of old data into the target, can be produced by restoring a copy of the source DB to the point in the past where you want to start the replication and taking an export from that restored, back-in-time DB.
But you cannot do anything to force the presence of a data dictionary copy in a past archived log.
So once you have re-created a back-in-time export and spotted an old SCN in v$archived_log where dictionary_begin = 'YES', you can load all the archives starting from that one, create your NEW capture, and instantiate the source and apply sites at your export SCN (which must also lie between the FIRST_CHANGE# and NEXT_CHANGE# of the archive with dictionary_begin = 'YES').
All of this holds for a NEW capture; if you want to reuse an existing capture, injecting old data for a not-yet-ever-replicated table you want to add is a bit more complicated.
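To pin the NEW capture to that dictionary copy, FIRST_SCN can be passed explicitly at creation time; a sketch with hypothetical queue and capture names (the rules are then added with DBMS_STREAMS_ADM):
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name   => 'strmadmin.streams_queue',
    capture_name => 'new_capture',
    first_scn    => 1234567);  -- hypothetical: use a FIRST_CHANGE# of an archive with dictionary_begin = 'YES'
END;
/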

Similar Messages

  • Streams Capture Error: ORA-01333: failed to establish Logminer Dictionary

    I get the following error:
    ORA-01333: failed to establish Logminer Dictionary
    ORA-01304: subordinate process error. Check alert and trace logs
    ORA-29900: operator binding does not exist
    when the capture process is started. I am trying to get schema-to-schema replication going within a database. I have tried a few different scripts to get this replication going for the EMPLOYEES table from the HR to the HR2 schema, which are identical.
    One of the scripts I used is given below.
    If anyone could point out what could possibly be wrong, or which parameter is not set, it would be greatly appreciated. The database is Oracle 11g running in ARCHIVELOG mode.
    CREATE TABLESPACE streams_tbs
    DATAFILE 'C:\app\oradata\stream_files\ORCL\streams_tbs.dbf' SIZE 25M;
    -- Create the Streams administrator user in the database, as follows:
    CREATE USER strmadmin
    IDENTIFIED BY strmadmin
    DEFAULT TABLESPACE streams_tbs
    TEMPORARY TABLESPACE temp
    QUOTA UNLIMITED ON streams_tbs;
    -- Grant the CONNECT, RESOURCE, and DBA roles to the Streams administrator:
    GRANT CONNECT, RESOURCE, DBA
    TO strmadmin;
    --Grant the required privileges to the Streams administrator:
    BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'strmadmin',
    grant_privileges => true);
    END;
    /
    --Granting these roles can assist with administration:
    GRANT SELECT_CATALOG_ROLE
    TO strmadmin;
    GRANT SELECT ANY DICTIONARY
    TO strmadmin;
    commit;
    -- Setup queues
    CONNECT strmadmin/strmadmin@ORCL
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE();
    END;
    /
    --Capture
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'HR.EMPLOYEES',
    streams_type => 'capture',
    streams_name => 'capture_stream',
    queue_name =>
    'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true,
    inclusion_rule => true);
    END;
    /
    --Apply
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'HR2.EMPLOYEES',
    streams_type => 'apply',
    streams_name => 'apply_stream',
    queue_name =>
    'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true,
    source_database => 'ORCL',
    inclusion_rule => true);
    END;
    /
    --Start Capture
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name =>
    'capture_stream');
    END;
    /
    --Start the apply Process
    BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_stream',
    parameter => 'disable_on_error',
    value => 'n');
    END;
    /
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_stream');
    END;
    /
    Any suggestions?

    From what I can understand from the alert log and the trace, Oracle is somehow unable to allocate a new log, and it also cannot load the library unit SYS.XMLSEQUENCEFROMXMLTYPE.
    I logged into EM and looked for issues; under the recovery settings it showed 100% of the allocated space being used for archived logs. I'm going to change that, restart the database, and see if I can get it to work.
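    If the recovery area really is full, a quick check and a bump (size hypothetical) would look like this; backing up and deleting archived logs through RMAN frees the space as well:
    SQL> select * from v$recovery_file_dest;
    SQL> alter system set db_recovery_file_dest_size = 10G scope=both;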
    Here are some of the extracts from the alert log:
    Logminer Bld: Done
    STREAMS: dictionary dumped, now wait for inflight txn
    knlciWaitForInflightTxns: wait for inflight txns at this scn:
    scn: 0x0000.008905a3
    [8979875]
    knlciWaitForInflightTxns: Done with waiting for inflight txns at this scn:
    scn: 0x0000.008905a3
    [8979875]
    Thread 1 cannot allocate new log, sequence 417
    Checkpoint not complete
    Current log# 2 seq# 416 mem# 0: C:\APP\ORADATA\ORCL\REDO02.LOG
    Thu May 22 09:04:45 2008
    Thread 1 advanced to log sequence 417
    Current log# 3 seq# 417 mem# 0: C:\APP\ORADATA\ORCL\REDO03.LOG
    Thu May 22 09:04:45 2008
    Logminer Bld: Build started
    Thread 1 cannot allocate new log, sequence 418
    Checkpoint not complete
    Current log# 3 seq# 417 mem# 0: C:\APP\ORADATA\ORCL\REDO03.LOG
    Thread 1 advanced to log sequence 418
    Current log# 1 seq# 418 mem# 0: C:\APP\ORADATA\ORCL\REDO01.LOG
    Thu May 22 09:04:48 2008
    Logminer Bld: Lockdown Complete. DB_TXN_SCN is 0 8980165 LockdownSCN is 8980165
    Thread 1 cannot allocate new log, sequence 419
    Checkpoint not complete
    Current log# 1 seq# 418 mem# 0: C:\APP\ORADATA\ORCL\REDO01.LOG
    Thread 1 advanced to log sequence 419
    Current log# 2 seq# 419 mem# 0: C:\APP\ORADATA\ORCL\REDO02.LOG
    Thu May 22 09:04:57 2008
    Thu May 22 09:04:57 2008
    Logminer Bld: Done
    AND then other part
    Errors in file c:\app\diag\rdbms\orcl\orcl\trace\orcl_ms01_1500.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-29900: operator binding does not exist
    ORA-06540: PL/SQL: compilation error
    ORA-06553: PLS-907: cannot load library unit SYS.XMLSEQUENCEFROMXMLTYPE (referenced by SYS.XMLSEQUENCE)
    ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 83
    ORA-06512: at line 1
    LOGMINER: session#=601, builder MS01 pid=53 OS id=1500 sid=127 stopped
    Thanks for your help. I'll post again if I still cannot get it to work.
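    As an aside, the repeated 'Checkpoint not complete / cannot allocate new log' messages in the alert log above usually mean the online redo logs are too few or too small for the redo burst of the dictionary build; a sketch of the usual remedy, with hypothetical paths and sizes:
    SQL> alter database add logfile group 4 ('C:\APP\ORADATA\ORCL\REDO04.LOG') size 200M;
    SQL> alter database add logfile group 5 ('C:\APP\ORADATA\ORCL\REDO05.LOG') size 200M;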

  • Capture and Logminer Dictionary

    Hi All,
    Please clarify my understanding and doubts in STREAMS.
    1. FIRST_SCN in dba_capture marks the point from which the LogMiner dictionary has been stored in the log files, but FIRST_SCN keeps advancing due to CHECKPOINT_FREQUENCY and CHECKPOINT_RETENTION_TIME, and the log files below FIRST_SCN are marked as purgeable in the DBA_LOGMNR_PURGED_LOG view.
    My doubt is: how are object names and column names resolved using the LogMiner dictionary when FIRST_SCN keeps advancing and the log file containing the LogMiner dictionary has been deleted after querying DBA_LOGMNR_PURGED_LOG?
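    As an aside, from 10.2 onward the purge window can be widened so that FIRST_SCN advances more slowly; a sketch, assuming a capture named 'capture_stream':
    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name              => 'capture_stream',
        checkpoint_retention_time => 30);  -- keep 30 days of checkpoints
    END;
    /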
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    set linesize 172
    set pagesize 66
    column  Status format a10     heading "Status"
    column  first_change# format 999999999999
    column  next_change#  format 999999999999
    column  ft    format a21      heading "First time" justify c
    column  nt    format a21      heading "Next time" justify c
    column  sta   format a7       heading "Standby|Dest" justify c
    column  del   format a7       heading "Deleted|By Rman" justify c
    column  dic   format a3       heading "Dic|Beg"
    prompt
    select thread#, SEQUENCE#,  FIRST_TIME, NEXT_TIME,
           applied, '    '|| status status, sta, del, registrar,  DICTIONARY_BEGIN Dic
        from ( SELECT thread#, SEQUENCE# ,  FIRST_TIME, NEXT_TIME,
                      applied, status, '  '||standby_dest  sta,'  '||deleted del, registrar , DICTIONARY_BEGIN
                      FROM V$ARCHIVED_LOG ORDER BY first_time desc)
       where rownum <= 40;
                                                                                     Standby Deleted                       Dic
       THREAD#  SEQUENCE# FIRST_TIME         NEXT_TIME          APPLIED   Status      Dest   By Rman REGISTRAR             Beg
             2       3139 18-FEB-10          18-FEB-10          NO            A        NO      NO    ARCH                  NO
             2       3139 18-FEB-10          18-FEB-10          NO            A        YES     NO    ARCH                  NO
             1       3509 18-FEB-10          18-FEB-10          NO            A        NO      NO    ARCH                  NO
             1       3509 18-FEB-10          18-FEB-10          YES           A        YES     NO    ARCH                  NO
             2       3138 18-FEB-10          18-FEB-10          NO            A        NO      NO    ARCH                  NO
             2       3138 18-FEB-10          18-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3508 18-FEB-10          18-FEB-10          NO            A        NO      NO    ARCH                  NO
             1       3508 18-FEB-10          18-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3507 17-FEB-10          18-FEB-10          NO            A        NO      NO    ARCH                  NO
             1       3507 17-FEB-10          18-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3506 17-FEB-10          17-FEB-10          NO            A        NO      NO    ARCH                  NO
             1       3506 17-FEB-10          17-FEB-10          YES           A        YES     NO    ARCH                  NO
             2       3137 17-FEB-10          18-FEB-10          YES           A        YES     NO    ARCH                  NO
             2       3137 17-FEB-10          18-FEB-10          NO            A        NO      NO    ARCH                  NO
             2       3136 17-FEB-10          17-FEB-10          NO            A        NO      NO    ARCH                  NO
             2       3136 17-FEB-10          17-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3505 17-FEB-10          17-FEB-10          NO            A        NO      NO    ARCH                  NO
             1       3505 17-FEB-10          17-FEB-10          YES           A        YES     NO    ARCH                  NO
             2       3135 17-FEB-10          17-FEB-10          NO            A        NO      NO    ARCH                  NO
             2       3135 17-FEB-10          17-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3504 16-FEB-10          17-FEB-10          NO            A        NO      NO    ARCH                  NO
             1       3504 16-FEB-10          17-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3503 16-FEB-10          16-FEB-10          NO            A        NO      NO    ARCH                  NO
             1       3503 16-FEB-10          16-FEB-10          YES           A        YES     NO    ARCH                  NO
             2       3134 16-FEB-10          17-FEB-10          NO            A        NO      NO    ARCH                  NO
             2       3134 16-FEB-10          17-FEB-10          YES           A        YES     NO    ARCH                  NO
             2       3133 16-FEB-10          16-FEB-10          NO            D        NO      YES   ARCH                  NO
             2       3133 16-FEB-10          16-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3502 16-FEB-10          16-FEB-10          NO            D        NO      YES   ARCH                  NO
             1       3502 16-FEB-10          16-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3501 15-FEB-10          16-FEB-10          NO            D        NO      YES   FGRD                  NO
             1       3501 15-FEB-10          16-FEB-10          YES           A        YES     NO    ARCH                  NO
             2       3132 15-FEB-10          16-FEB-10          NO            D        NO      YES   ARCH                  NO
             2       3132 15-FEB-10          16-FEB-10          YES           A        YES     NO    ARCH                  NO
             2       3131 15-FEB-10          15-FEB-10          NO            D        NO      YES   ARCH                  NO
             2       3131 15-FEB-10          15-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3500 15-FEB-10          15-FEB-10          NO            D        NO      YES   ARCH                  NO
             1       3500 15-FEB-10          15-FEB-10          YES           A        YES     NO    ARCH                  NO
             1       3500 15-FEB-10          15-FEB-10          YES           A        YES     NO    ARCH                  NO
             2       3130 15-FEB-10          15-FEB-10          NO            D        NO      YES   ARCH                  NO
    40 rows selected.
    None of the log files contains the LogMiner dictionary; what will happen if I restart the capture process? From which SCN (START_SCN, CAPTURED_SCN, APPLIED_SCN, FIRST_SCN, REQUIRED_CHECKPOINT_SCN, MAX_CHECKPOINT_SCN) does the capture start mining or capturing changes?
    SQL> select START_SCN,CAPTURED_SCN,APPLIED_SCN,FIRST_SCN,REQUIRED_CHECKPOINT_SCN,MAX_CHECKPOINT_SCN from dba_capture;
    START_SCN CAPTURED_SCN APPLIED_SCN  FIRST_SCN REQUIRED_CHECKPOINT_SCN MAX_CHECKPOINT_SCN
    3.6116E+10   3.7127E+10  3.7127E+10 3.6116E+10              3.7127E+10         3.7127E+10
    2. Lastly, how do I find whether bi-directional Streams are in sync, and how do I measure the gap in terms of LCR or time metrics?
    -Yasser
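    For question 2, a rough latency check can be sketched from the documented Streams dynamic views (column names as in 10.2; adapt the process names to yours):
    SQL> select capture_name, (sysdate - capture_message_create_time) * 86400 latency_seconds from v$streams_capture;
    SQL> select apply_name, (hwm_time - hwm_message_create_time) * 86400 latency_seconds from v$streams_apply_coordinator;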

    Thanks a lot for replying.
    Please find details below.
    SQL> select bytes/1024/1024 MB from dba_segments where segment_name='LOGMNR_RESTART_CKPT$' and owner='SYSTEM';
            MB
            15
    How come the capture process is able to resolve object and column names without the LogMiner dictionary information? Where is the dictionary information stored?
    From which SCN does the capture process start mining if I restart it?
    What is point-in-time recovery using Streams, and why do we need to set FIRST_SCN there?
    START_SCN, CAPTURED_SCN, APPLIED_SCN, FIRST_SCN and REQUIRED_CHECKPOINT_SCN are all used in Streams... these SCN values create a lot of confusion...
    -Yasser

  • DB6CONV/DB6 - Is there a way to reuse existing compression dictionary

    As some of you probably know, DB6CONV handles compression by retaining the compression flag while performing the table move, but unfortunately not the compression dictionary. This causes some tables to increase in size, because the new dictionary gets built from a sample of 1000 rows. Please see an example below where the data size of a compressed table increased after the table move.
    For table SAPR3./BI0/ACCA_O0900, please see the statistics below.
    Old Table:
    Data Size = 10,423,576 KB
    Index Size =  9,623,776 KB
    New Table (both data and index) with Index Compression (Moved  using DB6CONV, took 1 hr for 100 million row table):
    Data Size = 16,352,352 KB
    Index Size =  4,683,296 KB
    Reorg table with reset dictionary (By DB2 CLP takes 1hr and 32 Min)
    Data Size = 8,823,776 KB
    Index Size =  4,677,792 KB
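    For reference, the reset-dictionary figures above come from a plain DB2 CLP reorg along these lines (a sketch, table name as in the example):
    REORG TABLE SAPR3."/BI0/ACCA_O0900" RESETDICTIONARY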
    We are on DB2 9.5 and will soon be migrating to DB2 9.7. In order to use the reclaimable tablespace feature that comes with DB2 9.7, we are planning to create new tablespaces (especially for objects like PSA/change logs in BW) and then move the compressed tables after enabling index compression, but in our experience DB6CONV is not going to be the right tool.
    Is there a way for DB6CONV or DB2 to reuse the existing compression dictionary of the source table when it performs a table move to a new tablespace?
    Thanks,
    Raja

    hi raja,
    no, DB6CONV cannot reuse the existing compression dictionary - this is in general not possible.
    BUT: the good news is that the next version V5 of DB6CONV will (amongst other new features) handle compression in a much better way! Like R3load and online_table_move, the compression dictionary will then be created based on (if possible) 20 MB of sampled data, ensuring optimal compression.
    this new version will become generally available within the next few weeks.
    regards, frank

  • Changes to existing Reports,Dictionary objects when we migrate DataBase to SAP HANA

    Hi Experts,
    Can you please let me know what changes affect existing ABAP Dictionary objects, reports, module pool programs, etc. that we built using ABAP, if we migrate our underlying database to SAP HANA from any other RDBMS.
                           Thanks in advance.
    Regards,
    Sandeep Rajanala

    Dear Sandeep Rajanala,
    In general, the existing ABAP code (reports, classes, module pool programs, function modules, etc.) runs on SAP HANA as before, and the existing dictionary objects like tables, views, DDIC types and structures continue to exist and work as before after migration.
    However, if the existing ABAP code relies on technical specifics of the old database, ABAP code changes might be necessary (which is the case for any database migration).
    For example,
    Your ABAP code uses a feature that is specific to the old database and not in the SQL standard, and therefore not in Open SQL (consumed via Native SQL using EXEC SQL or the ADBC interface).
    You have ABAP code that relies on unspecified, undocumented implementation behaviour of the old database.
    The above cases may need minor ABAP code changes during the HANA migration in order to ensure that your ABAP code runs with functional correctness after migration.
    The SAP note 1785057 gives guidelines on migrating the existing ABAP Custom Code for HANA.
    Additionally we have several code inspector checks to find the ABAP code that requires changes and you can find details about these check in SAP note 1912445.
    In addition you can find an elaborate guide for transitioning custom ABAP code to HANA here which has a dedicated section on required and recommended adaptations during custom code migration.
    You can also find the recording of the SAP TechEd 2013 hands-on session, which focuses entirely on ABAP custom code migration aspects for HANA, here.
    Hope this answers your question and gives you some pointers on how you could prepare your ABAP for migrating to HANA.
    Please revert back if you have further questions.
    Best Regards
    Sundar.

  • Table or view does not exist - Data Dictionary Import Wizard(Data Modeler)

    Hi All,
    In Data Modeler's Data Dictionary Import Wizard, I am able to connect to the database, but on moving to the second stage (Select Schema/Database) I get the error "ORA-00942: table or view does not exist".
    I am able to run select * from all_tables, and I can open many tables as well.
    Could anyone tell me whether I'm missing a privilege that is causing this error?
    Thanks in advance for you support.
    Thanks.
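    A generic first check, not specific to Data Modeler, is to list what the connecting user can actually do in that session:
    SQL> select * from session_privs order by privilege;
    SQL> select * from session_roles;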

    Hi,
    Thanks for your response, and sorry for my late reply; I was away.
    Yes, it is showing "Connection established successfully".
    The log file is as below:
    2012-08-02 10:37:26,471 [main] INFO ApplicationView - Oracle SQL Developer Data Modeler 3.1.1.703
    2012-08-02 10:39:42,889 [AWT-EventQueue-0] ERROR AbstractDBMExtractionWizardImpl - java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
    Please see the Oracle version details:
    Oracle Database 11g Enterprise Edition
    11.1.0.6.0
    64bit Production
    Thanks again and waiting for reply.

  • How do you add a function to an existing Rules Dictionary via the SDK?

    The SDK how-to describes creating a new dictionary and adding functions and other items. However, from reading the Java API docs it is not apparent how one would add a new function to an existing dictionary.
    Once you have a reference to the RuleDictionary, there is a getDataModel method, but it returns an oracle.rules.sdk.datamodel.DataModel instead of an oracle.rules.sdk.editor.datamodel.DataModel. This DataModel class appears to be different from the DataModel class in the editor sub-package; furthermore, it is not described in the API docs.
    There is an addFunction method, but I do not see how to create an instance of the Function class.
    Thanks,
    Bret

    Look at the oracle.rules.sdk.editor.datamodel.DataModel Constructor:
    Constructor Summary
    DataModel(RuleDictionary dict)
    Constructor used to edit a datamodel.
    So, the call would look something like:
    RuleDictionary dict = ...;
    oracle.rules.sdk.editor.datamodel.DataModel dm = new oracle.rules.sdk.editor.datamodel.DataModel(dict);

  • Standby Database fails to read dictionary from redo log

    hi,
    I am attempting to create a logical standby database on the same machine as the primary database. I have executed the steps outlined in the Oracle documentation several times, but I end up with the same error. Details of the setup and the error are provided below. Please help. Thanks.
    ==========
    OS: REdhat 8 (2.4.18-14)
    RDBMS: Oracle EE Server 9.2.0.3.0
    primary db init details:
    *.log_archive_dest_1='LOCATION=/usr3/oracle/admin/lbsp/archive/ MANDATORY'
    *.log_archive_dest_2='SERVICE=STDBY'
    standby db init details:
    log_archive_dest_1='LOCATION=/usr3/oracle/admin/stdby/archive/'
    standby_archive_dest='/usr3/oracle/admin/lbsp/archive_pdb/'
    Standby alert log file (tail)
    LOGSTDBY event: ORA-01332: internal Logminer Dictionary error
    Sun Jul 13 11:37:20 2003
    Errors in file /usr3/oracle/admin/stdby/bdump/stdby_lsp0_13691.trc:
    ORA-01332: internal Logminer Dictionary error
    LSP process trace file:
    Instance name: stdby
    Redo thread mounted by this instance: 1
    Oracle process number: 18
    Unix process pid: 13691, image: oracle@prabhu (LSP0)
    *** 2003-07-13 11:37:19.972
    *** SESSION ID:(12.165) 2003-07-13 11:37:19.970
    <krvrd.c:krvrdfdm>: DDL or Dict mine error exit. 384<krvrd.c:krvrdids>: Failed to mine dictionary. flgs 180
    knahcapplymain: encountered error=1332
    *** 2003-07-13 11:37:20.217
    ksedmp: internal or fatal error
    . (memory dump)
    KNACDMP: Unassigned txns = { }
    KNACDMP: *******************************************************
    error 1332 detected in background process
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01332: internal Logminer Dictionary error
    another trace file created by error is: stdby_p001_13695.trc
    Instance name: stdby
    Redo thread mounted by this instance: 1
    Oracle process number: 20
    Unix process pid: 13695, image: oracle@prabhu (P001)
    *** 2003-07-13 11:37:19.961
    *** SESSION ID:(22.8) 2003-07-13 11:37:19.908
    krvxmrs: Leaving by exception: 604
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01031: insufficient privileges
    ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 68
    ORA-06512: at line 1
    There are no errors anywhere during the creation, mounting or opening of the standby database. After the initial log registration, any log switch on the primary is communicated to the standby and visible in DBA_LOGSTDBY_LOG. Also, archived logs from the primary are successfully copied by the system to the directory pointed to by the standby DB's standby_archive_dest parameter.
    I noticed that somehow, every time I issue the "ALTER DATABASE START LOGICAL STANDBY APPLY" command, the procedures and packages related to logmnr become invalid. I compile them, and after the next "APPLY" they become invalid again.
    Invalid object list:
    OBJECT_TYPE OBJECT_NAME
    VIEW DBA_LOGSTDBY_PROGRESS
    PACKAGE BODY DBMS_INTERNAL_LOGSTDBY
    PACKAGE BODY DBMS_STREAMS_ADM_UTL
    VIEW LOGMNR_DICT
    PACKAGE BODY LOGMNR_DICT_CACHE
    PROCEDURE LOGMNR_GTLO3
    PROCEDURE LOGMNR_KRVRDA_TEST_APPLY
    Can anybody point out what I am doing wrong? Thanks for the help.
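    For reference, the invalid LogMiner objects can be listed and then recompiled in bulk with the stock utlrp.sql script; a sketch:
    SQL> select object_type, object_name from dba_objects where status = 'INVALID' and object_name like 'LOGMNR%';
    SQL> @?/rdbms/admin/utlrp.sql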

    ORA-15001: diskgroup "ORAREDO3" does not exist or is not mounted
    Have you mentioned the parameter LOG_FILE_NAME_CONVERT in the standby when the online redo log locations are different?
    Post from the standby:
    SQL> select name, state from v$asm_diskgroup;
    FAL[client, MRP0]: Error 1031 connecting to MKS01P_PRD for fetching gap sequence
    ORA-01031: insufficient privileges
    Post from primary & standby:
    SQL> select * from v$pwfile_users;
    OTN has failed 100% to help you, so why did you post another question?
    First close all your old answered threads, and then continue your updates in your existing thread.
    Edited by: CKPT on Jul 9, 2012 11:45 AM

  • Resetting SCN from removed Capture Process

    I've come across a problem in Oracle Streams where the Capture Processes seem to get stuck. There are no reported errors in the alert log and no trace files, but the capture process fails to continue capturing changes. It stays enabled, but in an awkward state where the OEM Console reports zeros across the board (0 messages, 0 enqueued), when in fact there had been accurate totals in the past.
    Restarting the capture process does no good. The capture process just switches its state back and forth between Dictionary Initialization and Initializing. The only thing that seems to kickstart Streams again is to remove the capture process and recreate the same process.
    However, my problem is that I want to set the start_scn of the new capture process to the captured_scn of the removed capture process, so that the new one can start from where the old one left off. But I'm getting an error that this cannot be performed (cannot capture from the specified SCN).
    Am I understanding this correctly? Or should the new capture process automatically start from where the removed one left off?
    Thanks

    Hi,
    I seem to have the same problem.
    I now have a latency of about 3 days while nothing happened in the database, so I want to be able to move the capture process to a later SCN. Setting the START_SCN gives me an error (I can't remember it now, unfortunately). Sometimes the capture process seems to get stuck in an archived log. It then takes a long time to move on, and when it does, it sprints through a bunch of logs before getting stuck again. During that time all the statuses look good and no heavy CPU usage is monitored. We saw that the capture builder has the highest CPU load, where I would expect the capture reader to be busy.
    I am able to set the FIRST_SCN, so a rebuild of the LogMiner dictionary might help a bit. But then again: why would the capture process need such a long time to process archived logs in which no relevant events are expected?
    In my case the Streams solution is being considered as a candidate for a replication solution where Quest's SharePlex is considered too expensive and unable to meet the requirements. One main reason it is considered inadequate is that it is not able to catch up after a database restart or a heavy batch. Now it seems that our capture process might suffer from the same problem. I sincerely hope I'm wrong and it proves to be capable.
    Regards,
    Martien
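    A sketch of the FIRST_SCN route mentioned above: dump a fresh dictionary, then move the capture's FIRST_SCN forward to it (capture name hypothetical):
    DECLARE
      v_scn NUMBER;
    BEGIN
      DBMS_CAPTURE_ADM.BUILD(first_scn => v_scn);
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name => 'capture_stream',
        first_scn    => v_scn);
    END;
    /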

  • Problem with logminer in Data Guard configuration

    Hi all,
    I am experiencing a strange problem with applying logs on the logical standby side of a Data Guard configuration.
    I have set up the configuration step by step as described in the documentation (Oracle Data Guard Concepts and Administration, chapter 4).
    Everything went fine until I issued
    ALTER DATABASE START LOGICAL STANDBY APPLY;
    I saw that the log apply process had started by checking the output of
    SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';
    and
    SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
    but after a few minutes it stopped, and querying DBA_LOGSTDBY_EVENTS I saw the following records:
    ORA-16111: log mining and apply setting up
    ORA-01332: internal Logminer Dictionary error
    Alert log says the following:
    LOGSTDBY event: ORA-01332: internal Logminer Dictionary error
    Wed Jan 21 16:57:57 2004
    Errors in file /opt/oracle/admin/whouse/bdump/whouse_lsp0_5817.trc:
    ORA-01332: internal Logminer Dictionary error
    Here is the end of the whouse_lsp0_5817.trc
    error 1332 detected in background process
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01332: internal Logminer Dictionary error
    But the most useful info I found in one more trace file (whouse_p001_5821.trc):
    krvxmrs: Leaving by exception: 604
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01031: insufficient privileges
    ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 68
    ORA-06512: at line 1
    It seems that somewhere the correct privileges were not granted, or something like that. By the way, I was doing all the operations under the SYS account (as SYSDBA).
    Could somebody give me a clue where my mistake could be, or what was done the wrong way?
    Thank you in advance.


  • Patching ST_PI, open conversions in ABAP dictionary

    Hello
    during patching of ST-PI 2008 the tool stops at the CHECK_REQUIREMENTS phase with this screen:
    open conversions in ABAP dictionary
    Some open conversion requests still exist in the ABAP Dictionary for the
    following ABAP Dictionary objects. To avoid inconsistencies and loss of 
    data, you must process these conversions first.                                                                               
    Proceed as follows:                                                                               
    - Open a new session.                                                   
    - Start the Database Utility (transaction SE14).                        
    - Correct the inconsistencies for the specified objects.                
    - Repeat the import phase. If no more inconsistencies are found, the    
      import continues.                                                     
    INDEX GLPCA-Z01
    Now no index exists in the dictionary or in the database, but it probably existed in the past and was deleted. In any case, the table GLPCA is consistent in the database and as a runtime object.
    I've activated it again, but the problem is always the same.
    any idea?
    thanks
    Nicola

    - start transaction SE14
    - use the menu to go to "DB Requests - Created with import"
    Check if there are any outstanding conversions.
    Markus

  • A way to reuse existing classes instead of generating the stub ones?

    Hello to all,
    I am using eclipse and weblogic 10.3 as an application server.
    I have a 1st project deployed with some exposed web services. I need to access these services from a 2nd project, so I run the clientgen ant task, which generates the client interface along with some stub classes. These stub classes are basically a copy of the ones from the 1st project.
    My question is this:
    Is there a way to reuse the original classes that the 1st project is using, by making the first project a dependency of the second? Or do I have to use the generated stub classes?
    Thanks in advance! Any help is appreciated.


  • Logical standby stuck waiting for dictionary logs

    Hello,
    I created a logical standby (v. 10.2.0.3). It has been in the state 'WAITING FOR DICTIONARY LOGS' for a few days now.
    Does anyone know how to get past this? It looks like the log that contained the LogMiner dictionary has already been applied.
    Thanks in advance for any insight you can provide.
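    One way to check whether the standby registered the dictionary-bearing log and how far apply has progressed, using the documented logical standby view:
    SQL> select sequence#, first_change#, dict_begin, dict_end, applied
           from dba_logstdby_log
          order by sequence#;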

    Make sure the archived logs are reaching the standby. Check the alert logs on both databases to confirm there is no archival problem.

  • Tables not exist in BW

    Hello,
    Can someone help with this? When I check the largest tables in my SAP BW, some tables have a suffix like "@kj", and when I try to check them they do not exist in the dictionary. What can I do to delete their contents, or to verify that they are not eating space on disk?
    Regards
    DBA

    Thanks Raf,
    Looks like this is my problem and this note help to solve:
    The consistency check between the database objects and dictionary objects in DB02 or DBACockpit reports tables with the following pattern in section "Unknown objects in ABAP Dictionary" (the <*> means multiple alphanumeric characters and the # is one numeric character):
    /BI0/<*>@k# (e.g., /BI0/F0BE_C01@k0)
    /BIC/<*>@k# (e.g., /BIC/FZSALES@k3 or /BIC/B0000200@k9)
    You see large tables in DB02 with the above naming convention
    The attempt to delete a PSA Request fails with one of the following errors:
               RSDU_TABLE_DROP_PARTITION_MSS: Error While Creating Clone Table
               RSDU_TABLE_TRUNC_PARTITION_MSS: Error While Calling Module MSS_TRUNC_PARTITION_FROM_TABLE
    The attempt to delete Infocube requests or compress an Infocube fails with the error:  CL_RSDRD_REQ_PHYS_DEL:REQUEST_DELETE RSDU_FACT_REQUID_DELETE SQL-ERROR
    I followed the note and fixed the problem successfully.
    Regards
    DBA

  • (V9I) ORACLE9I NEW FEATURE: LogMiner features and usage

    Product: ORACLE SERVER
    Date written: 2002-11-13
    (V9I) ORACLE9I NEW FEATURE: LogMiner features and usage
    ==========================================
    Purpose
    This note describes the new features of the Oracle 9i LogMiner and how to use them.
    (For the general concepts and features of the 8i LogMiner, see BUL. 12033.)
    Explanation
    LogMiner, first introduced in 8i, is used to analyze the redo log files or archived log files of Oracle 8 and later.
    9i includes all the functionality provided in 8i. What is new: the dictionary information for LogMiner analysis could previously only be written to a flat file (output), but from 9i onward it can also be stored using the online redo logs, and a feature was added to skip just the affected portion when block corruption occurs.
    The 9i new features are summarized as follows.
    1. 9i New features
    1) DDL support; note that only redo/archived log files from 9i onward can be analyzed
         : DDL appears in the OPERATION column of V$LOGMNR_CONTENTS
    2) The dictionary information built for LogMiner analysis can be stored in the online redo logs
         : The database must be running in archivelog mode.
         : Create the dictionary with DBMS_LOGMNR_D.BUILD
         : It can be created in a flat file, as before, or in the redo logs
         : Example) Flat file
              - SQL> EXECUTE dbms_logmnr_d.build
                   (DICTIONARY_FILENAME => 'dictionary.ora'
                   ,DICTIONARY_LOCATION => '/oracle/database'
                   ,OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
     Example) Redo log
              - SQL> EXECUTE dbms_logmnr_d.build
                   (OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
    3) When redo log block corruption occurs, the corrupted portion is skipped and the analysis continues
         : In 8i, log corruption terminated LogMiner and the analysis had to be retried
         : In 9i it can be skipped with the SKIP_CORRUPTION option of DBMS_LOGMNR.START_LOGMNR
    4) Display only committed transactions
         : the COMMITTED_DATA_ONLY option of DBMS_LOGMNR.START_LOGMNR
    5) Support for DML involving index clusters (not provided in 8i)
    6) Analysis of chained and migrated rows
    2. Restrictions (not supported by the 9i LogMiner)
    1) LONG and LOB data types
    2) Object types
    3) Nested tables
    4) Object REFs
    5) IOTs (index-organized tables)
    3. LogMiner Views
    1) V$LOGMNR_CONTENTS - contents of the redo log files currently being analyzed
    2) V$LOGMNR_DICTIONARY - the dictionary file in use
    3) V$LOGMNR_LOGS - the redo log files being analyzed
    4) V$LOGMNR_PARAMETERS - the current parameter values set for LogMiner
    4. Setup for using LogMiner
    1) Create the dictionary for LogMiner (flat file or online redo log)
    2) Register the archived log files or redo log files
    3) Start the redo log analysis
    4) Query the redo log contents
    5) End LogMiner
    5. LogMiner Example
    1) Check the location where the flat file will be created
    SQL> show parameter utl
    NAME TYPE VALUE
    utl_file_dir string /home/ora920/product/9.2.0/smlee
    2) Define the flat file that will hold the dictionary information -> here dictionary.ora
    SQL> execute dbms_logmnr_d.build -
    (dictionary_filename => 'dictionary.ora', -
    dictionary_location => '/home/ora920/product/9.2.0/smlee', -
    options => dbms_logmnr_d.store_in_flat_file);
    PL/SQL procedure successfully completed.
    3) Switch the log file and note the current log file name and the current time.
    SQL> alter system switch logfile;
    System altered.
    SQL> select member from v$logfile, v$log
    2 where v$logfile.group# = v$log.group#
    3 and v$log.status='CURRENT';
    MEMBER
    /home/ora920/oradata/ORA920/redo02.log
    SQL> select current_timestamp from dual;
    CURRENT_TIMESTAMP
    13-NOV-02 10.37.14.887671 AM +09:00
    4) For the test, create table emp30, then update it and drop it
    SQL> create table emp30 as
    2 select employee_id, last_name, salary from hr.employees
    3 where department_id=30;
    Table created.
    SQL> alter table emp30 add (new_salary number(8,2));
    Table altered.
    SQL> update emp30 set new_salary = salary * 1.5;
    6 rows updated.
    SQL> rollback;
    Rollback complete.
    SQL> update emp30 set new_salary = salary * 1.2;
    6 rows updated.
    SQL> commit;
    Commit complete.
    SQL> drop table emp30;
    Table dropped.
    SQL> select current_timestamp from dual;
    CURRENT_TIMESTAMP
    13-NOV-02 10.39.20.390685 AM +09:00
    5) Start LogMiner (work in a different session)
    SQL> connect /as sysdba
    Connected.
    SQL> execute dbms_logmnr.add_logfile ( -
    logfilename => -
    '/home/ora920/oradata/ORA920/redo02.log', -
    options => dbms_logmnr.new)
    PL/SQL procedure successfully completed.
    SQL> execute dbms_logmnr.start_logmnr( -
    dictfilename => '/home/ora920/product/9.2.0/smlee/dictionary.ora', -
    starttime => to_date('13-NOV-02 10:37:44','DD_MON_RR HH24:MI:SS'), -
    endtime => to_date('13-NOV-02 10:39:20','DD_MON_RR HH24:MI:SS'), -
    options => dbms_logmnr.ddl_dict_tracking + dbms_logmnr.committed_data_only)
    PL/SQL procedure successfully completed.
    6) Query the v$logmnr_contents view
    SQL> select timestamp, username, operation, sql_redo
    2 from v$logmnr_contents
    3 where username='HR'
    4 and (seg_name = 'EMP30' or seg_name is null);
    TIMESTAMP                USERNAME           OPERATION           SQL_REDO           
    13-NOV-02 10:38:20          HR               START               set transaction read write;
    13-NOV-02 10:38:20          HR               DDL               CREATE TABLE emp30 AS
                                                      SELECT EMPLOYEE_ID, LAST_NAME,
                                                      SALARY FROM HR.EMPLOYEES
                                                      WHERE DEPARTMENT_ID=30;
    13-NOV-02 10:38:20          HR                COMMIT
    commit;
    13-NOV-02 10:38:50          HR               DDL               ALTER TABLE emp30 ADD
                                                      (new_salary NUMBER(8,2));
    13-NOV-02 10:39:02          HR               UPDATE          UPDATE "HR"."EMP30" set
                                                      "NEW_SALARY" = '16500' WHERE
                                                      "NEW_SALARY" IS NULL AND ROWID
                                                      ='AAABnFAAEAALkUAAA';
    13-NOV-02 10:39:02-10     HR               DDL               DROP TABLE emp30;
    7) End LogMiner.
    SQL> execute dbms_logmnr.end_logmnr
    PL/SQL procedure successfully completed.
    Reference Documents
    Note. 148616.1
