Statspack Rollback per transaction % vs Log Miner rollback = 1

Hi,
I'm investigating a high Rollback per transaction % (30%) in our 9.2.0.8 EE database.
My first thought was to check all the archive logs from the period covered by the statspack snapshot range,
but it looks like there is a huge discrepancy between the two tools.
What I found from v$logmnr_contents where rollback = 1 was that
only 0.5% of transactions were rolled back.
I'm wondering what could cause such a difference. Is it true that
a unique constraint violation (or other exception) is not recorded in the redo logs as a rolled-back transaction?
Any ideas?
The main problem is still: what is causing the high rollback %?
Regards
G

I'm investigating an increase in undo usage and a Rollback per transaction % of 30.
It's a strange issue because v$transaction is not showing any demanding transactions, rather small, quick ones (OLTP system).
Here is sp report:
DB Name      DB Id       Instance  Inst Num  Release    Cluster  Host
XXXX         1497360911  XXXXX     1         9.2.0.8.0  NO       XXXXX

             Snap Id  Snap Time            Sessions  Curs/Sess  Comment
Begin Snap:    89346  23-Sep-09 08:00:02        638       13.0
  End Snap:    89365  23-Sep-09 20:00:05        710       19.1
   Elapsed:   720.05  (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
        Buffer Cache:   3,072M   Std Block Size:     8K
      Shared Pool Size:   2,048M     Log Buffer:   2,000K
Load Profile
~~~~~~~~~~~~              Per Second    Per Transaction
         Redo size:      344,858.21       6,796.87
       Logical reads:      131,932.52       2,600.28
       Block changes:       2,211.70         43.59
       Physical reads:       7,457.34        146.98
      Physical writes:        236.52         4.66
         User calls:       6,197.31        122.14
           Parses:       2,236.26         44.07
        Hard parses:         20.63         0.41
           Sorts:        330.72         6.52
           Logons:         0.34         0.01
          Executes:       3,097.97         61.06
        Transactions:         50.74
% Blocks changed per Read:  1.68  Recursive Call %:   35.95
Rollback per transaction %:  30.58    Rows per Sort:   31.38
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      Buffer Nowait %:  99.97    Redo NoWait %:  100.00
      Buffer Hit  %:  94.37  In-memory Sort %:  100.00
      Library Hit  %:  99.63    Soft Parse %:   99.08
     Execute to Parse %:  27.82     Latch Hit %:   99.83
Parse CPU to Parse Elapsd %:  30.18   % Non-Parse CPU:   95.65
Shared Pool Statistics    Begin  End
       Memory Usage %: 100.00 100.00
  % SQL with executions>1:  38.24  35.74
% Memory for SQL w/exec>1:  76.94  86.29
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                           % Total
Event                        Waits  Time (s) Ela Time
CPU time                           109,020  32.54
db file sequential read            40,144,341   96,883  28.91
db file scattered read             26,524,497   66,948  19.98
sbtwrite2                    4,141,351   23,994   7.16
SQL*Net message from dblink          33,497,630   17,097   5.10
     -------------------------------------------------------------
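For context, the LogMiner cross-check described above might look roughly like this (a sketch only; it assumes the archive logs for the snapshot window have already been added with DBMS_LOGMNR.ADD_LOGFILE and START_LOGMNR has been run):

SELECT COUNT(DISTINCT xidusn || '.' || xidslt || '.' || xidsqn) AS rolled_back_txns
FROM   v$logmnr_contents
WHERE  rollback = 1;

SELECT COUNT(DISTINCT xidusn || '.' || xidslt || '.' || xidsqn) AS total_txns
FROM   v$logmnr_contents;

Comparing the two counts gives the share of mined transactions that contain at least one rollback record, which is what I compared against the statspack ratio.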

Similar Messages

  • Rollback per transaction too high

    Hi, I am investigating performance problems on our database, which is used by a third-party application.
    Using statspack I have established that the
    Rollback per transaction % is very high (between 60-70%).
    I am on Oracle version 8.1.7 and the application is written in PowerBuilder. It uses lots of sessions per user;
    a typical user uses 5-7 sessions.
    How can I find what is causing all this rollback?
    Could it be related to all these sessions?

    Hi,
    The Rollback per transaction statistic will report on all ROLLBACK statements that were issued, regardless of whether there was anything to roll back or not.
    You can find more information about the statspack report in MetaLink note 228913.1 - Systemwide Tuning using STATSPACK Reports.
    Nicolas.
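    For reference, the ratio statspack reports can also be checked directly against the underlying counters (a hedged sketch; statspack derives the percentage from the deltas of these v$sysstat statistics between snapshots):
    -- Rollback per transaction % = 100 * user rollbacks / (user commits + user rollbacks)
    SELECT ROUND(100 * MAX(DECODE(name, 'user rollbacks', value)) /
                 (MAX(DECODE(name, 'user commits', value)) +
                  MAX(DECODE(name, 'user rollbacks', value))), 2) AS rollback_pct
    FROM   v$sysstat
    WHERE  name IN ('user commits', 'user rollbacks');
    Note that this query uses cumulative values since instance startup, while statspack uses the difference between two snapshots.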

  • Rollback per transaction issue

    Hi,
    I found in our live database a Rollback per transaction of 92.86%. How can I troubleshoot this value, what is the reason for it, and how can I solve it?

    Hi,
    this is more likely to be an application issue, not a database one. I have heard of web servers that do a rollback after each query (!), maybe it's one of them, or a similar error in the application design. I have also once come across a case where there was a great number of fake commits on the system (so-called "read-only commits") which were caused by an internal Oracle bug. The problem was resolved by applying a patch -- so you may want to check MOS articles for similar symptoms.
    Best regards,
    Nikolay
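    To narrow down which sessions (and therefore which part of the application) are issuing the rollbacks, a per-session breakdown can help. A minimal sketch using v$sesstat and v$statname:
    SELECT s.sid, s.username, s.program, st.value AS user_rollbacks
    FROM   v$session s, v$sesstat st, v$statname n
    WHERE  st.sid = s.sid
    AND    n.statistic# = st.statistic#
    AND    n.name = 'user rollbacks'
    AND    st.value > 0
    ORDER  BY st.value DESC;
    Sessions at the top of the list point to the module or job that is rolling back most often.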

  • Log miner doesn't show all transactions on a table

    I'm playing a little with log miner on oracle 11gR2 on a 32bit CentOS Linux install, but it looks like it's not showing me all DML on my test table. Am I doing something wrong?
    Hi, there's my test case:
    - Session #1, create table and insert first row:
    SQL> create table stolf.test_table (
    col1 number,
    col2 varchar(10),
    col3 varchar(10),
    col4 varchar(10));
    2 3 4 5
    Table created.
    SQL> insert into stolf.test_table (col1, col2, col3, col4) values ( 0, 20100305, 0, 0);
    1 row created.
    SQL> commit;
    SQL> select t.ora_rowscn, t.* from stolf.test_table t;
    ORA_ROWSCN COL1 COL2 COL3 COL4
    1363624 0 20100305 0 0
    - Execute shell script to insert a thousand lines into table:
    for i in `seq 1 1000`; do
    sqlplus -S stolf/<passwd><<-EOF
    insert into stolf.test_table (col1, col2, col3, col4) values ( ${i}, 20100305, ${i}, ${i} );
    commit;
    EOF
    done
    - Session #1, switch logfiles:
    SQL> alter system switch logfile;
    System altered.
    SQL> alter system switch logfile;
    System altered.
    SQL> alter system switch logfile;
    System altered.
    - Session #2, start logminer with continuous_mine on, startscn = first row ora_rowscn, endscn=right now. The select on v$logmnr_contents should return at least a thousand rows, but it returns three rows instead :
    BEGIN
    SYS.DBMS_LOGMNR.START_LOGMNR(STARTSCN=>1363624, ENDSCN=>timestamp_to_scn(sysdate), OPTIONS => sys.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + sys.DBMS_LOGMNR.COMMITTED_DATA_ONLY + SYS.DBMS_LOGMNR.CONTINUOUS_MINE);
    END;
    SQL> select SCN, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SQL_REDO IS NOT NULL AND seg_owner = 'STOLF';
    SCN
    SQL_REDO
    SQL_UNDO
    1365941
    insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('378','20100429','378','378');
    delete from "STOLF"."TEST_TABLE" where "COL1" = '378' and "COL2" = '20100429' and "COL3" = '378' and "COL4" = '378' and ROWID = 'AAASOHAAEAAAATfAAB';
    1367335
    insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('608','20100429','608','608');
    delete from "STOLF"."TEST_TABLE" where "COL1" = '608' and "COL2" = '20100429' and "COL3" = '608' and "COL4" = '608' and ROWID = 'AAASOHAAEAAAATfAAm';
    1368832
    insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('849','20100429','849','849');
    delete from "STOLF"."TEST_TABLE" where "COL1" = '849' and "COL2" = '20100429' and "COL3" = '849' and "COL4" = '849' and ROWID = 'AAASOHAAEAAAATbAAA';

    Enable supplemental logging.
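    For reference, minimal database-level supplemental logging can be enabled and verified with the two statements below (a short sketch; without at least minimal supplemental logging, LogMiner output can be incomplete):
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    SELECT supplemental_log_data_min FROM v$database;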
    Please see below,
    SQL> shut immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.
    Total System Global Area  422670336 bytes
    Fixed Size                  1300352 bytes
    Variable Size             306186368 bytes
    Database Buffers          109051904 bytes
    Redo Buffers                6131712 bytes
    Database mounted.
    SQL> alter database archivelog;
    Database altered.
    SQL> alter database open;
    Database altered.
    SQL> alter system checkpoint;
    System altered.
    SQL> drop table test_Table purge;
    Table dropped.
    SQL> create table test_table(
      2  col1 number,
    col2 varchar(10),
    col3 varchar(10),
    col4 varchar(10));  3    4    5
    Table created.
    SQL> insert into test_table (col1, col2, col3, col4) values ( 0, 20100305, 0, 0);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select t.ora_rowscn, t.* from test_table t;
    ORA_ROWSCN       COL1 COL2       COL3       COL4
       1132572          0 20100305   0          0
    SQL> for i in 1..1000 loop
    SP2-0734: unknown command beginning "for i in 1..." - rest of line ignored.
    SQL> begin
      2  for i in 1..1000 loop
      3  insert into test_table values(i,20100429,i,i);
      4  end loop; commit;
      5  end;
      6  /
    PL/SQL procedure successfully completed.
    SQL> alter system switch logfile;
    System altered.
    SQL> /
    SQL> select * from V$version;
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for Linux: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    In the second session,
    SQL> l
      1  select SCN, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SQL_REDO IS NOT NULL
      2* and seg_owner='SYS' and table_name='TEST_TABLE'
           SCN SQL_REDO / SQL_UNDO
    ---------- ---------------------------------------------------------------------------
               insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('2','20100429','2','2');
               delete from "SYS"."TEST_TABLE" where "COL1" = '2' and "COL2" = '20100429' and "COL3" = '2' and "COL4" = '2' and ROWID = 'AAASPKAABAAAVpSAAC';
       1132607 insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('3','20100429','3','3');
               delete from "SYS"."TEST_TABLE" where "COL1" = '3' and "COL2" = '20100429' and "COL3" = '3' and "COL4" = '3' and ROWID = 'AAASPKAABAAAVpSAAD';
       1132607 insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('4','20100429','4','4');
    <<trimming the output>>
               insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('997','20100429','997','997');
               delete from "SYS"."TEST_TABLE" where "COL1" = '997' and "COL2" = '20100429' and "COL3" = '997' and "COL4" = '997' and ROWID = 'AAASPKAABAAAVpVACU';
       1132607 insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('998','20100429','998','998');
               delete from "SYS"."TEST_TABLE" where "COL1" = '998' and "COL2" = '20100429' and "COL3" = '998' and "COL4" = '998' and ROWID = 'AAASPKAABAAAVpVACV';
       1132607 insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('999','20100429','999','999');
               delete from "SYS"."TEST_TABLE" where "COL1" = '999' and "COL2" = '20100429' and "COL3" = '999' and "COL4" = '999' and ROWID = 'AAASPKAABAAAVpVACW';
       1132607 insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('1000','20100429','1000','1000');
               delete from "SYS"."TEST_TABLE" where "COL1" = '1000' and "COL2" = '20100429' and "COL3" = '1000' and "COL4" = '1000' and ROWID = 'AAASPKAABAAAVpVACX';
    1000 rows selected.
    HTH
    Aman....

  • Enq: MN - contention with log miner

    One of our customers is hitting enq: MN - contention wait event.
    UKJA@ukja102> exec print_table('select * from v$lock_type where type = ''MN''');
    TYPE                          : MN
    NAME                          : LogMiner
    ID1_TAG                       : session ID
    ID2_TAG                       : 0
    IS_USER                       : NO
    DESCRIPTION                   : Synchronizes updates to the LogMiner dictionary
    and prevents multiple instances from preparing the same LogMiner session
    -----------------
    The situation is as follows:
    - Batch job is generating massive redo
    - Frequent log file switching occurs during this job
    - Multiple sessions are mining archive logs due to some business requirement
    - From time to time, one session holds the MN lock in exclusive mode and other sessions wait for the MN lock to be released.
    The holding session and the waiting sessions are executing the same SQL statement, like the following:
    SELECT SCN, SQL_REDO, SEG_OWNER, SEG_NAME, OPERATION_CODE, CSF, DATA_OBJ#,
    (XIDUSN || '_' || XIDSLT || '_' || XIDSQN) AS XID, ROW_ID, ROLLBACK, TIMESTAMP FROM V$LOGMNR_CONTENTS
    WHERE (OPERATION_CODE IN (7, 36)
    OR ( ( ROLLBACK = 0 OR (ROLLBACK = 1 AND CSF = 0) ) AND ( OPERATION_CODE IN (1, 2, 3)
    AND ((SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE1')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE2')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE3')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE4')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE5')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE6')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE7')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE8')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE9')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE10')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE11')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE12')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE13')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE14')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE15')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE16')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE17')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE18')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE19')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE20')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE21')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE22')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE23')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE24')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE25')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE26')
    OR (SEG_OWNER = 'TEST1' AND TABLE_NAME = 'TABLE27')))))
    But my experiments show that normal log miner operations do not need the MN lock, thus no MN lock contention is reproducible, even under very frequent log file switching.
    Does anyone have experience and/or information on this lock?
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================

    My first test case was flawed, so I posted wrong info. :(
    Further investigation shows that certain steps of typical log miner operations do need the MN lock in exclusive mode. For example, the dbms_logmnr.start_logmnr procedure needs an exclusive MN lock.
    Excerpt from 10704 event trace file.
    *** 2009-05-19 13:57:53.812
    ksqgtl *** MN-00000000-00000000 mode=6 flags=0x21 timeout=600 ***
    ksqgtl: no transaction
    ksqgtl: use existing ksusetxn DID
    ksqgtl:
         ksqlkdid: 0001-0016-00000014
    *** 2009-05-19 13:57:53.828
    *** ksudidTrace: ksqgtl
         ksusesdi:   0000-0000-00000000
         ksusetxn:   0001-0016-00000014
    ksqgtl: RETURNS 0
    *** 2009-05-19 13:57:53.828
    ksqrcl: MN,0,0
    ksqrcl: returns 0
    Starting a log mining operation apparently requires modifying the log miner dictionary.
    This means that multiple sessions can't start log mining concurrently, but once they have started successfully, other kinds of work can be done concurrently.
    Any operation that needs to access the log miner dictionary would require the MN lock. I will contact the customer who reported this problem and ask them to investigate further.
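    As a quick check, something along these lines (a minimal sketch, not taken from the trace above) shows who currently holds or waits on the MN enqueue:
    SELECT sid, type, lmode, request
    FROM   v$lock
    WHERE  type = 'MN';
    A session with lmode = 6 holds the enqueue in exclusive mode; sessions with request > 0 are the waiters.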
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================

  • Monitor/trace per transaction performance stats

    Is it possible to trace stats like CPU time and number of I/Os for each committed transaction and log them, one row per transaction? Thanks, Stan

    Yes, it is possible through statspack, which gives you all the other performance-related information as well. But it is also a very resource-consuming job, so be careful while using it.
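    If statspack is too heavy-handed for this, a lighter (hedged) alternative is to snapshot a few session statistics around the transaction of interest and log the deltas yourself. A minimal sketch, run from the session executing the transaction:
    -- run once before and once after the transaction, then subtract the values
    SELECT n.name, s.value
    FROM   v$mystat s, v$statname n
    WHERE  n.statistic# = s.statistic#
    AND    n.name IN ('CPU used by this session',
                      'physical reads',
                      'db block gets',
                      'consistent gets',
                      'redo size');
    The statistic names above exist in current releases but may vary slightly between versions.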

  • Not getting SCN details in Log Miner

    Oracle 11g
    Windows 7
    Hi DBAs,
    I am not getting the SCN details in log miner. Below are the steps:
    SQL> show parameter utl_file_dir
    NAME                                 TYPE        VALUE                         
    utl_file_dir                         string                                    
    SQL> select name,issys_modifiable from v$parameter where name ='utl_file_dir';
    NAME               ISSYS_MOD                                                                      
    utl_file_dir    FALSE                                                          
    SQL>  alter system set utl_file_dir='G:\oracle11g' scope=spfile;
    System altered.
    SQL> shut immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 1071333376 bytes                                                          
    Fixed Size                  1334380 bytes                                                          
    Variable Size             436208532 bytes                                                          
    Database Buffers          629145600 bytes                                                          
    Redo Buffers                4644864 bytes                                                          
    Database mounted.
    Database opened.
    SQL> show parameter utl_file_dir
    NAME                                 TYPE        VALUE                                             
    utl_file_dir                         string      G:\oracle11g\logminer_dir 
    SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    SUPPLEME                                                                                           
    NO                                                                                                 
    SQL>  ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    Database altered.
    SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    SUPPLEME                                                                                           
    YES                                                                                                
    SQL> /* Minimum supplemental logging is now enabled. */
    SQL>
    SQL> alter system switch logfile;
    System altered.
    SQL> select g.group# , g.status , m.member
      2       from v$log g, v$logfile m
      3       where g.group# = m.group#
      4       and g.status = 'CURRENT';
        GROUP# STATUS                                                                                 
    MEMBER                                                                                             
             1 CURRENT                                                                                 
    G:\ORACLE11G\ORADATA\MY11G\REDO01.LOG                                                              
    SQL> /* start fresh with a new log file which is the group 1.*/
    SQL> create table scott.test_logmnr
      2  (id  number,
      3  name varchar2(10)
      4  );
    Table created.
    SQL> BEGIN
      2    DBMS_LOGMNR_D.build (
      3      dictionary_filename => 'logminer_dic.ora',
      4      dictionary_location => 'G:\oracle11g');
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    SQL> /*
    SQL>   This has recorded the dictionary information into the file
    SQL>   "G:\oracle11g\logminer_dic.ora".
    SQL> */
    SQL> conn scott/
    Connected.
    SQL> insert into test_logmnr values (1,'TEST1');
    1 row created.
    SQL> insert into test_logmnr values (2,'TEST2');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from test_logmnr;
            ID NAME                                                                                    
             1 TEST1                                                                                   
             2 TEST2                                                                                   
    SQL> update test_logmnr set name = 'TEST';
    2 rows updated.
    SQL> select * from test_logmnr;
            ID NAME                                                                                    
             1 TEST                                                                                    
             2 TEST                                                                                    
    SQL> commit;
    Commit complete.
    SQL> delete from test_logmnr;
    2 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> select * from test_logmnr;
    no rows selected
    SQL> conn / as sysdba
    Connected.
    SQL> select g.group# , g.status , m.member
      2       from v$log g, v$logfile m
      3       where g.group# = m.group#
      4       and g.status = 'CURRENT';
        GROUP# STATUS   MEMBER
             1 CURRENT  G:\ORACLE11G\ORADATA\MY11G\REDO01.LOG
    SQL> begin
      2        dbms_logmnr.add_logfile
      3        (
      4         logfilename => 'G:\oracle11g\oradata\my11g\REDO01.LOG',
      5         options     => dbms_logmnr.new
      6        );
      7       end;
      8       /
    PL/SQL procedure successfully completed.
    SQL> select filename from v$logmnr_logs;
    FILENAME                                                                                           
    G:\oracle11g\oradata\my11g\REDO01.LOG                                                              
    SQL> BEGIN
      2    -- Start using all logs
      3    DBMS_LOGMNR.start_logmnr (
      4      dictfilename => 'G:\oracle11g\logminer_dic.ora');
      5 
      6   END;
      7  /
    PL/SQL procedure successfully completed.
    SQL> DROP TABLE myLogAnalysis;
    Table dropped.
    SQL> create table myLogAnalysis
      2       as
      3       select * from v$logmnr_contents;
    Table created.
    SQL> begin
      2         DBMS_LOGMNR.END_LOGMNR();
      3       end;
      4       /
    PL/SQL procedure successfully completed.
    SQL> set lines 1000
    SQL> set pages 500
    SQL> column scn format a6
    SQL> column username format a8
    SQL> column seg_name format a11
    SQL> column sql_redo format a33
    SQL> column sql_undo format a33
    SQL> select scn , seg_name , sql_redo , sql_undo
      2  from   myLogAnalysis
      3  where username = 'SCOTT'
      4  AND (seg_owner is null OR seg_owner = 'SCOTT');
    SCN        SEG_NAME     SQL_REDO / SQL_UNDO
    ---------- ------------ ------------------------------------------------------------------
                            set transaction read write;
                            commit;
                            set transaction read write;
    ########## TEST_LOGMNR  insert into "SCOTT"."TEST_LOGMNR"("ID","NAME") values ('1','TEST1');
                            delete from "SCOTT"."TEST_LOGMNR" where "ID" = '1' and "NAME" = 'TEST1' and ROWID = 'AAARjeAAEAAAADPAAA';
    ########## TEST_LOGMNR  insert into "SCOTT"."TEST_LOGMNR"("ID","NAME") values ('2','TEST2');
                            delete from "SCOTT"."TEST_LOGMNR" where "ID" = '2' and "NAME" = 'TEST2' and ROWID = 'AAARjeAAEAAAADPAAB';
                            commit;
                            set transaction read write;
    ########## TEST_LOGMNR  update "SCOTT"."TEST_LOGMNR" set "NAME" = 'TEST' where "NAME" = 'TEST1' and ROWID = 'AAARjeAAEAAAADPAAA';
                            update "SCOTT"."TEST_LOGMNR" set "NAME" = 'TEST1' where "NAME" = 'TEST' and ROWID = 'AAARjeAAEAAAADPAAA';
    ########## TEST_LOGMNR  update "SCOTT"."TEST_LOGMNR" set "NAME" = 'TEST' where "NAME" = 'TEST2' and ROWID = 'AAARjeAAEAAAAD
                            update "SCOTT"."TEST_LOGMNR" set "NAME" = 'TEST2' where "NAME" = 'TEST' and ROWID = 'AAARjeAAEAAAAD

    Kindly type
    desc v$logmnr_contents
    Please notice that SCN is a *number* column, not varchar2.
    By using format a6 you are forcing Oracle to display a number that is too big for a character format. Hence the ##.
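    In other words, give SCN a numeric format (or drop the column format altogether). A corrected version of the formatting above might look like this:
    column scn format 999999999999
    column seg_name format a11
    column sql_redo format a33
    column sql_undo format a33
    select scn, seg_name, sql_redo, sql_undo
    from   myLogAnalysis
    where  username = 'SCOTT'
    and    (seg_owner is null or seg_owner = 'SCOTT');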
    Sybrand Bakker
    Senior Oracle DBA

  • Chatty replication protocol (2.6 messages per transaction): bad settings?

    Hi,
    I'm trying to determine why, for a two-site replication setup (Linux, BDB 4.6.21 with the latest patch), it seems that a single transaction can take up to 100 ms to be replicated under heavy load, while the non-replicated transaction takes 5 ms.
    The transaction in question is a fully-synchronized update (i.e., a commit at site R1 should not return until R2 has also committed and flushed the transaction to disk), and there are only two sites in the replication group, the master and the replica.
    Here are the statistics I gathered from my application, at the replica after it was promoted to the master (the master having been killed):
    Replication: txnsApplied=3584, repMsgsProcessed=9390, repMsgsSent=17, repMsgsSendFailures=4, nextLSN=1/1162804, nextPages=0, pagesRecords=6, pagesRequested=0, status=3, waitingLSN=0/0, waitingPages=0
    Replication Manager: connectionAttemptsFailed=0, connectionsDropped=1, repMessagesDropped=0, repMessagesQueued=0, permFailed=4
    It seems like the replication protocol is quite chatty (2.6 messages per transaction), which may explain some of the latency.
    I use 2048-byte pages, and the typical transaction size is no more than 1024 bytes; I experimented with relaxing the transactional guarantees (setTxnNoSync et al) but had a similar experience.
    Does anyone have any suggestions for this environment configuration to increase the throughput of this kind of replication setup? Your help would be kindly appreciated.

    Replication: txnsApplied=3584, repMsgsProcessed=9390,
    repMsgsSent=17, repMsgsSendFailures=4,
    nextLSN=1/1162804, nextPages=0, pagesRecords=6,
    pagesRequested=0, status=3, waitingLSN=0/0,
    waitingPages=0
    Replication Manager: connectionAttemptsFailed=0,
    connectionsDropped=1, repMessagesDropped=0,
    repMessagesQueued=0, permFailed=4
    It seems like the replication protocol is quite
    chatty (2.6 messages per transaction), which may
    explain some of the latency.
    The replication protocol is that each log record written to the
    log is also transmitted to the client. Therefore, if your transaction
    has a lot of operations, or if even a single update results in several
    log records written (splitting a page, for example), each of those
    log records will be transmitted to the client.
    Does anyone have any suggestions for this environment
    configuration to increase the throughput of this kind
    of replication setup? Your help would be kindly
    appreciated.
    You can try the bulk message configuration option. Look at
    the dbenv->rep_set_config method with the DB_REP_CONF_BULK
    flag. That option collects the log records locally in memory and
    then sends them all either on a txn_commit operation or if the
    buffer fills up. That should result in fewer messages.
    However, it is also interesting that there were 4 message send failures
    as well as 4 permFailed counts in your statistics. It is likely whatever
    caused those resulted in some messages to rerequest the missing
    records.
    Sue LoVerso
    Oracle

  • Can I use multiple Apple Store gift cards per transaction, or does it have to be one gift card per transaction?

    Can I use multiple Apple Store gift cards (£25 each) per transaction, or does it have to be only one gift card per transaction? Does anyone know? Thanks.

    Hello, Eddy
    notice that you have posted to an old thread (Oct-Nov 2013) that actually took quite some time to get attention back then**
    I recommend that you contact the Store Group regarding your purchase question - don't let the "iTune-ish" URL and page title throw you off... it's the right place
    Store Group - Contact Us - Apple Support
    ** next time you have a question, I advise checking the dates carefully, and if the thread is older than a month or so AND it has no answer to your specific issue, start a NEW question. You did a fine job of stating your exact issue - many folks, not so much.
    Good day,
    CCC

  • Log miner end-of-file on communication channel

    Hi,
    I'm trying to use log miner, but when I perform a select from the
    v$logmnr_contents view such as
    select operation from v$logmnr_contents where username = 'FRED'
    I get an ORA-03113: end-of-file on communication channel.
    The trace files give no information except the very unhelpful 'internal
    error'.
    Has anyone had this problem? Is it possible to read the archive logs without
    logminer? I really need to read the logs because someone updated the wrong data in the database and I need to recover it.
    Thanks in advance,
    steve.

    Hi Joel,
    Here is SGA information:
    select * from v$sgastat where name = 'free memory';
    POOL         NAME          BYTES
    -----------  -----------  ----------
    shared pool  free memory    75509528
    large pool   free memory    16777216
    java pool    free memory    83886080
    Thank you for your time,
    Katya

  • Date time in the transaction access log of ST03N

    Hi,
    How do I get the date and time in the log of transaction access by user
    in transaction ST03N?
    Please let me know if any other transaction is available where
    I can get transaction access logs user-wise.
    Points will be rewarded.
    Thanks

    Hi,
    SM19 (configuration) / SM20 (evaluation), if the audit log is maintained, will fetch the date and time of the tcodes accessed by a user.
    Rakesh

  • Large number of objets - log miner scalability?

    We have been consolidating several departmental databases into one big RAC database. Moreover, in test databases we are cloning test cells (for example, an application schema gets cloned hundreds of times so that our users can test independently of each other).
    So, our acceptance test database now has about 500,000 objects in it. We have production databases with over 2 million objects.
    We are using streams. At this time we're using a local capture, but our architecture aims to use downstream capture soon... We are concerned about the resources required for the log miner data dictionary build.
    We are currently not using DBMS_LOGMNR_D.build directly, but rather indirectly through the DBMS_STREAMS_ADM.add_table_rule. We only want to replicate about 30 tables.
    We are surprised to find that the log miner always builds a complete data dictionary for every object in the database (tables, partitions, columns, users, and so on).
    Apparently there is no way to create a partial data dictionary even by using DBMS_LOGMNR_D.BUILD directly...
    Lately, it took more than 2 hours just to build the log miner data dictionary on a busy system! And we ended up with an ORA-01280 error. So we started all over again...
    We just increased our redo log size recently. I haven't had a chance to test after the change. Our redo log was only 4MB, we increased it to 64MB to reduce checkpoint activity. This will probably help...
    Has anybody encountered slow log miner dictionary builds?
    Any advice?
    Thanks you in advance.
    Jocelyn

    Hello Jocelyn,
    In a Streams environment, the logminer dictionary build is done using the DBMS_CAPTURE_ADM.BUILD procedure. You should not be using DBMS_LOGMNR_D.BUILD for this.
    In a Streams environment, DBMS_STREAMS_ADM.ADD_TABLE_RULE dumps the dictionary only the first time you call it, since the capture process does not exist yet; the capture process (and a dictionary dump along with it) is created on that first call. The logminer dictionary holds information about all objects: tables, partitions, columns, users, etc. The dictionary dump takes time depending on the number of objects in the database, since if the number of objects is very high the data dictionary itself will be big.
    Your redo log size of 64MB is too small for a production system; you should consider a redo log size of at least 200M.
    You can have a complete logminer dictionary build using DBMS_CAPTURE_ADM.BUILD and then create a capture process using the FIRST_SCN returned from the BUILD procedure.
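    A minimal sketch of that sequence (the variable name is only an illustration):
    SET SERVEROUTPUT ON
    DECLARE
      build_scn NUMBER;
    BEGIN
      -- dump the LogMiner dictionary into the redo stream and capture the SCN
      DBMS_CAPTURE_ADM.BUILD(first_scn => build_scn);
      DBMS_OUTPUT.PUT_LINE('FIRST_SCN for the capture process: ' || build_scn);
    END;
    /
    The returned SCN is then passed as first_scn when the capture process is created.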
    Let me know if you have more doubts.
    Thanks,
    Rijesh

  • Log Miner is finding DDL for "new" tables, but also DML for "older" tables.

    Oracle 10.2.0.5.0 Standard Edition
    (at some point in the past it was "downgraded" from Enterprise Edition).
    It's driving me crazy: I create a table, then insert/update rows, but Log Miner only shows me the create.
    However, if I do an insert/update on an "older" table, I see the DML. The tables are in the same tablespace, both are logging, and the database is forcing logging.
    I'm out of ideas; any input would be appreciated.
    Thanks!
    ####### CREATE THE ORACLE.LOGMNR1 TABLE ########
    SQL> create table ORACLE.LOGMNR1
      2  (col1 varchar2(100));
    Table created.
    ####### INSERT  ROW AND UPDATE A ROW IN ORACLE.LOGMNR1 TABLE ########
    SQL> insert into ORACLE.LOGMNR1 values ('testing insert');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> update ORACLE.LOGMNR1 set col1 = 'TESTING UPDATE';
    1 row updated.
    SQL> commit;
    Commit complete.
    ####### INSERT 2 ROWS INTO AN OLD TABLE EPACS.COLUMN_COUNTS  ########
    SQL> insert into epacs.column_counts
      2  values ('TEST1',99,'TEST2',88,SYSDATE);
    1 row created.
    insert into epacs.column_counts
       values('TEST3',77,'TEST4',66,SYSDATE);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    ####### INSERT ANOTHER ROW INTO ORACLE.LOGMNR1 TABLE ########
    SQL> insert into LOGMNR1 values ('ONE MORE TEST');
    1 row created.
    SQL> COMMIT;
    Commit complete.
    ####### CREATE THE ORACLE.LOGMNRAFTER TABLE ########
    SQL> CREATE TABLE LOGMNRAFTER (COL1 VARCHAR2(100));
    Table created.
    ####### INSERT A ROW INTO ORACLE.LOGMNRAFTER TABLE ########
    SQL> INSERT INTO LOGMNRAFTER VALUES('FINISHED');
    1 row created.
    SQL> COMMIT;
    Commit complete.
    ####### MINE THE LOGS FOR ACTIVITY ########
    SQL> edit
    Wrote file afiedt.buf
      1  select to_char(timestamp,'yyyy/mm/dd hh24:mi:ss'), username,
      2          operation, sql_redo
      3          from v$logmnr_contents
      4      where
      5      seg_owner in( 'ORACLE','EPACS')
      6  and
      7      operation <> 'UNSUPPORTED'  
      8*          order by timestamp
    SQL> /
    ####### IT FINDS THE CREATE OF THE ORACLE.LOGMNR1 TABLE, BUT NO INSERTS ########
    2013/10/09 14:02:05 ORACLE                                                     
    DDL                                                                            
    create table LOGMNR1                                                           
    (col1 varchar2(100));                                                          
    ####### IT DOES FIND INSERTS FOR THE OLD EPACS.COLUMN_COUNTS TABLE ########                                                   
    2013/10/09 14:03:54 ORACLE                                                     
    INSERT                                                                         
    insert into "EPACS"."COLUMN_COUNTS"("TABLE_NM","TABLE_ROW_QTY","COLUMN_NM","COLU
    MN_QTY","LAST_UPDATE") values ('TEST1','99','TEST2','88','09-OCT-13');         
    2013/10/09 14:05:09 ORACLE                                                     
    INSERT                                                                         
    insert into "EPACS"."COLUMN_COUNTS"("TABLE_NM","TABLE_ROW_QTY","COLUMN_NM","COLU
    MN_QTY","LAST_UPDATE") values ('TEST3','77','TEST4','66','09-OCT-13');         
    ####### AND IT FINDS THE CREATE FOR THE ORACLE.LOGMNRAFTER TABLE ########
    2013/10/09 14:06:11 ORACLE                                                     
    DDL                                                                            
    CREATE TABLE LOGMNRAFTER (COL1 VARCHAR2(100));                                 
    ###### BOTH TABLES ARE "LOGGING" AND LIVE IN THE SAME TABLESPACE ######
    ###### LOGGING IS FORCED AT THE DATABASE LEVEL ####
    SQL> select force_logging from v$database;
    YES                                                                            
    SQL> select owner,table_name,logging
      2  from dba_tables where owner in ('EPACS','ORACLE')
      3  and table_name in('COLUMN_COUNTS','LOGMNR1');
    EPACS                          COLUMN_COUNTS                  YES              
    ORACLE                         LOGMNR1                        YES              
    SQL> SPOOL OFF

    Neither the table showing only DDL nor the table showing DML has supplemental logging.
    thanks.
    select count(*) from ALL_LOG_GROUPS
       where LOG_GROUP_TYPE='ALL COLUMN LOGGING' and OWNER='ORACLE' and table_name='LMTEST1'
    SQL> /
      COUNT(*)
             0
        select count(*) from ALL_LOG_GROUPS
       where LOG_GROUP_TYPE='ALL COLUMN LOGGING' and OWNER='EPACS' and table_name='COLUMN_COUNTS'
      COUNT(*)
             0
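    For what it's worth, if supplemental logging did turn out to be the missing piece, it could be added per table like this (a sketch using the tables from the test above):
    ALTER TABLE oracle.logmnr1 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    ALTER TABLE epacs.column_counts ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;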
    Edited to add: apparently this is an issue with the database configuration and not log miner. I ran the same test against the production database and got both the DDL and the DML. I used exactly the same test script, including the logminer "setup", only changing the names of the log files and the directory.

  • Enabling log miner

    I have to enable log miner on an 11gR2 database.
    I am following How to Setup LogMiner [ID 111886.1]:
    1. Make sure to specify an existing directory that Oracle has permissions
       to write to by the PL/SQL procedure by setting the initialization
       parameter UTL_FILE_DIR in the init.ora.
       For example, set the following to use /oracle/logs:
         UTL_FILE_DIR =/oracle/database
       Be sure to shutdown and restart the instance after adding UTL_FILE_DIR to the init or spfile. 
    I am using an spfile. How can I modify the initialization parameter without modifying the file?
    We are using Oracle Fail Safe Manager and we restart the database through the GUI; when I restart, it will always read the spfile. That's why I want to know how I can add the parameter
    UTL_FILE_DIR without modifying the init.ora file.
    If I do
    1) create pfile from spfile
    then how can I start the database from this pfile, and afterwards
    2) create spfile from pfile?

    Hi,
    you can do it with scope=spfile if you don't want to create/modify a pfile, for example:
    SQL> alter system set utl_file_dir='/backup/logminer' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    SQL> sho parameter utl_file_dir
    NAME          TYPE    VALUE
    ------------  ------  ------------------
    utl_file_dir  string  /backup/logminer
    Edited to add: to start the database from a pfile, Nikolay Ivankin already gave you the correct answer.
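    For completeness, the pfile route asked about in the question might look roughly like this (a sketch; the path is a placeholder):
    SQL> CREATE PFILE='C:\oracle\initMYDB.ora' FROM SPFILE;
    -- edit the pfile to add UTL_FILE_DIR, then:
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP PFILE='C:\oracle\initMYDB.ora'
    -- and, if you want to keep using an spfile afterwards:
    SQL> CREATE SPFILE FROM PFILE='C:\oracle\initMYDB.ora';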

  • Document types per transaction

    Hi Experts,
    How can I find out which document types are determined per transaction?
    For example, what document type can be defaulted for transactions like FB60 / FB65, etc.?
    Please indicate the config area.
    warm regards
    marias

    Check the SPRO node under below path:
    Financial Accounting (New)>Accounts Receivable and Accounts Payable>Business Transactions>Incoming Invoices/Credit Memos>Incoming Invoices/Credit Memos - Enjoy>Define Document Types for Enjoy Transaction
    Regards
    Sreenivas
