Enabling LogMiner

I have to enable LogMiner on an 11g R2 database.
I am following How to Setup LogMiner [ID 111886.1]:
1. Make sure to specify an existing directory that Oracle has permission
   to write to from PL/SQL, by setting the initialization
   parameter UTL_FILE_DIR in the init.ora.
   For example, set the following to use /oracle/database:
     UTL_FILE_DIR=/oracle/database
   Be sure to shut down and restart the instance after adding UTL_FILE_DIR to the init file or spfile.
I am using an spfile. How can I modify the initialization parameter without editing the file?
We are using Oracle Fail Safe Manager and restart the database through its GUI. A restart always reads the spfile, which is why I want to know how to add the
UTL_FILE_DIR parameter without modifying the init.ora file.
If I do:
1) create pfile from spfile
how can I then start the database from this pfile, and
2) create spfile from pfile

hi,
you can do it with scope=spfile if you don't want to create/modify a pfile, for example:
SQL> alter system set utl_file_dir='/backup/logminer' scope=spfile;
SQL> shutdown immediate;
SQL> startup
SQL> show parameter utl_file_dir
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir                         string      /backup/logminer
-- edited
to start up from a pfile, Nikolay Ivankin already gave you the correct answer
Edited by: Fran on 16-abr-2012 6:45
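For the pfile route asked about in the question, a minimal sketch; the pfile path and file name here are illustrative, not from the original post:

```sql
-- 1) Dump the current spfile to an editable text pfile (path is hypothetical)
CREATE PFILE='/tmp/initTEST.ora' FROM SPFILE;

-- 2) Edit /tmp/initTEST.ora to add:  UTL_FILE_DIR=/backup/logminer
--    then restart from that pfile:
SHUTDOWN IMMEDIATE
STARTUP PFILE='/tmp/initTEST.ora'

-- 3) Optionally write the change back, so future restarts from the GUI
--    (which read the spfile) keep the new setting:
CREATE SPFILE FROM PFILE='/tmp/initTEST.ora';
```

With the scope=spfile approach above none of this is necessary; the pfile route is only useful when you really do want a text file to edit by hand.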

Similar Messages

  • Not getting SCN details in Log Miner

    Oracle 11g
    Windows 7
    Hi DBAs,
    I am not getting the SCN details in LogMiner. Below are the steps I followed:
    SQL> show parameter utl_file_dir
    NAME                                 TYPE        VALUE                         
    utl_file_dir                         string                                    
    SQL> select name,issys_modifiable from v$parameter where name ='utl_file_dir';
    NAME               ISSYS_MOD                                                                      
    utl_file_dir    FALSE                                                          
    SQL>  alter system set utl_file_dir='G:\oracle11g' scope=spfile;
    System altered.
    SQL> shut immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 1071333376 bytes                                                          
    Fixed Size                  1334380 bytes                                                          
    Variable Size             436208532 bytes                                                          
    Database Buffers          629145600 bytes                                                          
    Redo Buffers                4644864 bytes                                                          
    Database mounted.
    Database opened.
    SQL> show parameter utl_file_dir
    NAME                                 TYPE        VALUE                                             
    utl_file_dir                         string      G:\oracle11g\logminer_dir 
    SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    SUPPLEME                                                                                           
    NO                                                                                                 
    SQL>  ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    Database altered.
    SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    SUPPLEME                                                                                           
    YES                                                                                                
    SQL> /* Minimum supplemental logging is now enabled. */
    SQL>
    SQL> alter system switch logfile;
    System altered.
    SQL> select g.group# , g.status , m.member
      2       from v$log g, v$logfile m
      3       where g.group# = m.group#
      4       and g.status = 'CURRENT';
        GROUP# STATUS                                                                                 
    MEMBER                                                                                             
             1 CURRENT                                                                                 
    G:\ORACLE11G\ORADATA\MY11G\REDO01.LOG                                                              
    SQL> /* start fresh with a new log file which is the group 1.*/
    SQL> create table scott.test_logmnr
      2  (id  number,
      3  name varchar2(10)
      4  );
    Table created.
    SQL> BEGIN
      2    DBMS_LOGMNR_D.build (
      3      dictionary_filename => 'logminer_dic.ora',
      4      dictionary_location => 'G:\oracle11g');
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    SQL> /*
    SQL>   This has recorded the dictionary information into the file
    SQL>   "G:\oracle11g\logminer_dic.ora".
    SQL> */
    SQL> conn scott/
    Connected.
    SQL> insert into test_logmnr values (1,'TEST1');
    1 row created.
    SQL> insert into test_logmnr values (2,'TEST2');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from test_logmnr;
            ID NAME                                                                                    
             1 TEST1                                                                                   
             2 TEST2                                                                                   
    SQL> update test_logmnr set name = 'TEST';
    2 rows updated.
    SQL> select * from test_logmnr;
            ID NAME                                                                                    
             1 TEST                                                                                    
             2 TEST                                                                                    
    SQL> commit;
    Commit complete.
    SQL> delete from test_logmnr;
    2 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> select * from test_logmnr;
    no rows selected
    SQL> conn / as sysdba
    Connected.
    SQL> select g.group# , g.status , m.member
      2       from v$log g, v$logfile m
      3       where g.group# = m.group#
      4       and g.status = 'CURRENT';
        GROUP# STATUS   MEMBER
             1 CURRENT  G:\ORACLE11G\ORADATA\MY11G\REDO01.LOG
    SQL> begin
      2        dbms_logmnr.add_logfile
      3        (
      4         logfilename => 'G:\oracle11g\oradata\my11g\REDO01.LOG',
      5         options     => dbms_logmnr.new
      6        );
      7      end;
      8  /
    PL/SQL procedure successfully completed.
    SQL> select filename from v$logmnr_logs;
    FILENAME                                                                                           
    G:\oracle11g\oradata\my11g\REDO01.LOG                                                              
    SQL> BEGIN
      2    -- Start using all logs
      3    DBMS_LOGMNR.start_logmnr (
      4      dictfilename => 'G:\oracle11g\logminer_dic.ora');
      5 
      6   END;
      7  /
    PL/SQL procedure successfully completed.
    SQL> DROP TABLE myLogAnalysis;
    Table dropped.
    SQL> create table myLogAnalysis
      2       as
      3       select * from v$logmnr_contents;
    Table created.
    SQL> begin
      2         DBMS_LOGMNR.END_LOGMNR();
      3       end;
      4       /
    PL/SQL procedure successfully completed.
    SQL> set lines 1000
    SQL> set pages 500
    SQL> column scn format a6
    SQL> column username format a8
    SQL> column seg_name format a11
    SQL> column sql_redo format a33
    SQL> column sql_undo format a33
    SQL> select scn , seg_name , sql_redo , sql_undo
      2  from   myLogAnalysis
      3  where username = 'SCOTT'
      4  AND (seg_owner is null OR seg_owner = 'SCOTT');
       SCN SEG_NAME
    SQL_REDO
    SQL_UNDO
    set transaction read write;
    commit;
    set transaction read write;
    ########## TEST_LOGMNR
    insert into "SCOTT"."TEST_LOGMNR"("ID","NAME") values ('1','TEST1');
    delete from "SCOTT"."TEST_LOGMNR" where "ID" = '1' and "NAME" = 'TEST1' and ROWID = 'AAARjeAAEAAAADPAAA';
    ########## TEST_LOGMNR
    insert into "SCOTT"."TEST_LOGMNR"("ID","NAME") values ('2','TEST2');
    delete from "SCOTT"."TEST_LOGMNR" where "ID" = '2' and "NAME" = 'TEST2' and ROWID = 'AAARjeAAEAAAADPAAB';
    commit;
    set transaction read write;
    ########## TEST_LOGMNR
    update "SCOTT"."TEST_LOGMNR" set "NAME" = 'TEST' where "NAME" = 'TEST1' and ROWID = 'AAARjeAAEAAAADPAAA';
    update "SCOTT"."TEST_LOGMNR" set "NAME" = 'TEST1' where "NAME" = 'TEST' and ROWID = 'AAARjeAAEAAAADPAAA';
    ########## TEST_LOGMNR
    update "SCOTT"."TEST_LOGMNR" set "NAME" = 'TEST' where "NAME" = 'TEST2' and ROWID = 'AAARjeAAEAAAAD
    update "SCOTT"."TEST_LOGMNR" set "NAME" = 'TEST2' where "NAME" = 'TEST' and ROWID = 'AAARjeAAEAAAAD

    Kindly type:
    desc v$logmnr_contents
    Please notice that SCN is a *number* column, not varchar2.
    By using "column scn format a6" you are forcing Oracle to display a number that is too big as a char; hence the ##.
    Sybrand Bakker
    Senior Oracle DBA
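    A minimal sketch of the fix Sybrand describes: give SCN a numeric format mask instead of a character one (the mask width here is illustrative):

    ```sql
    -- SCN is a NUMBER, so use a numeric format mask rather than "format a6"
    column scn format 9999999999999999
    column seg_name format a11
    column sql_redo format a33
    column sql_undo format a33

    select scn, seg_name, sql_redo, sql_undo
    from   myLogAnalysis
    where  username = 'SCOTT'
    and    (seg_owner is null or seg_owner = 'SCOTT');
    ```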

  • Log miner doesn't show all transactions on a table

    I'm playing a little with LogMiner on Oracle 11gR2 on a 32-bit CentOS Linux install, but it looks like it's not showing me all the DML on my test table. Am I doing something wrong?
    Hi, there's my test case:
    - Session #1, create table and insert first row:
    SQL> create table stolf.test_table (
    col1 number,
    col2 varchar(10),
    col3 varchar(10),
    col4 varchar(10));
    2 3 4 5
    Table created.
    SQL> insert into stolf.test_table (col1, col2, col3, col4) values ( 0, 20100305, 0, 0);
    1 row created.
    SQL> commit;
    SQL> select t.ora_rowscn, t.* from stolf.test_table t;
    ORA_ROWSCN       COL1 COL2       COL3       COL4
       1363624          0 20100305   0          0
    - Execute a shell script to insert a thousand rows into the table:
    for i in `seq 1 1000`; do
    sqlplus -S stolf/<passwd><<-EOF
    insert into stolf.test_table (col1, col2, col3, col4) values ( ${i}, 20100429, ${i}, ${i} );
    commit;
    EOF
    done
    - Session #1, switch logfiles:
    SQL> alter system switch logfile;
    System altered.
    SQL> alter system switch logfile;
    System altered.
    SQL> alter system switch logfile;
    System altered.
    - Session #2, start LogMiner with CONTINUOUS_MINE on, STARTSCN = the first row's ORA_ROWSCN, ENDSCN = right now. The select on v$logmnr_contents should return at least a thousand rows, but it returns only three:
    BEGIN
      SYS.DBMS_LOGMNR.START_LOGMNR(
        STARTSCN => 1363624,
        ENDSCN   => timestamp_to_scn(sysdate),
        OPTIONS  => SYS.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                  + SYS.DBMS_LOGMNR.COMMITTED_DATA_ONLY
                  + SYS.DBMS_LOGMNR.CONTINUOUS_MINE);
    END;
    /
    SQL> select SCN, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SQL_REDO IS NOT NULL AND seg_owner = 'STOLF';
    SCN
    SQL_REDO
    SQL_UNDO
    1365941
    insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('378','20100429','378','378');
    delete from "STOLF"."TEST_TABLE" where "COL1" = '378' and "COL2" = '20100429' and "COL3" = '378' and "COL4" = '378' and ROWID = 'AAASOHAAEAAAATfAAB';
    1367335
    insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('608','20100429','608','608');
    delete from "STOLF"."TEST_TABLE" where "COL1" = '608' and "COL2" = '20100429' and "COL3" = '608' and "COL4" = '608' and ROWID = 'AAASOHAAEAAAATfAAm';
    1368832
    insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('849','20100429','849','849');
    delete from "STOLF"."TEST_TABLE" where "COL1" = '849' and "COL2" = '20100429' and "COL3" = '849' and "COL4" = '849' and ROWID = 'AAASOHAAEAAAATbAAA';

    Enable supplemental logging.
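    The one-liner above corresponds to minimal database-level supplemental logging, which LogMiner needs in order to reconstruct complete redo/undo SQL; a sketch:

    ```sql
    -- Enable minimal supplemental logging at the database level
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

    -- Verify: should return YES
    SELECT supplemental_log_data_min FROM v$database;
    ```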
    Please see below,
    SQL> shut immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.
    Total System Global Area  422670336 bytes
    Fixed Size                  1300352 bytes
    Variable Size             306186368 bytes
    Database Buffers          109051904 bytes
    Redo Buffers                6131712 bytes
    Database mounted.
    SQL> alter database archivelog;
    Database altered.
    SQL> alter database open;
    Database altered.
    SQL> alter system checkpoint;
    System altered.
    SQL> drop table test_Table purge;
    Table dropped.
    SQL> create table test_table(
      2  col1 number,
    col2 varchar(10),
    col3 varchar(10),
    col4 varchar(10));  3    4    5
    Table created.
    SQL> insert into test_table (col1, col2, col3, col4) values ( 0, 20100305, 0, 0);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select t.ora_rowscn, t.* from test_table t;
    ORA_ROWSCN       COL1 COL2       COL3       COL4
       1132572          0 20100305   0          0
    SQL> for i in 1..1000 loop
    SP2-0734: unknown command beginning "for i in 1..." - rest of line ignored.
    SQL> begin
      2  for i in 1..1000 loop
      3  insert into test_table values(i,20100429,i,i);
      4  end loop; commit;
      5  end;
      6  /
    PL/SQL procedure successfully completed.
    SQL> alter system switch logfile;
    System altered.
    SQL> /
    SQL> select * from V$version;
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for Linux: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production

    In the second session,
    SQL> l
      1  select SCN, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SQL_REDO IS NOT NULL
      2* and seg_owner='SYS' and table_name='TEST_TABLE'
           SCN
    SQL_REDO
    --------------------------------------------------------------------------------
    SQL_UNDO
    --------------------------------------------------------------------------------
       1132607
    insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('2','20100429','2','2');
    delete from "SYS"."TEST_TABLE" where "COL1" = '2' and "COL2" = '20100429' and "COL3" = '2' and "COL4" = '2' and ROWID = 'AAASPKAABAAAVpSAAC';
       1132607
    insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('3','20100429','3','3');
    delete from "SYS"."TEST_TABLE" where "COL1" = '3' and "COL2" = '20100429' and "COL3" = '3' and "COL4" = '3' and ROWID = 'AAASPKAABAAAVpSAAD';
       1132607
    insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('4','20100429','4','4');
    <<trimming the output>>
       1132607
    insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('997','20100429','997','997');
    delete from "SYS"."TEST_TABLE" where "COL1" = '997' and "COL2" = '20100429' and "COL3" = '997' and "COL4" = '997' and ROWID = 'AAASPKAABAAAVpVACU';
       1132607
    insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('998','20100429','998','998');
    delete from "SYS"."TEST_TABLE" where "COL1" = '998' and "COL2" = '20100429' and "COL3" = '998' and "COL4" = '998' and ROWID = 'AAASPKAABAAAVpVACV';
       1132607
    insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('999','20100429','999','999');
    delete from "SYS"."TEST_TABLE" where "COL1" = '999' and "COL2" = '20100429' and "COL3" = '999' and "COL4" = '999' and ROWID = 'AAASPKAABAAAVpVACW';
       1132607
    insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('1000','20100429','1000','1000');
    delete from "SYS"."TEST_TABLE" where "COL1" = '1000' and "COL2" = '20100429' and "COL3" = '1000' and "COL4" = '1000' and ROWID = 'AAASPKAABAAAVpVACX';
    1000 rows selected.
    SQL>
    HTH
    Aman....
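    For readers following along: the "Enable supplemental logging" prerequisite above is mentioned but never shown. A minimal sketch (run as SYSDBA; it must be enabled before the redo of interest is generated):

    ```sql
    -- Minimal supplemental logging: required for LogMiner to reconstruct
    -- usable row-level SQL_REDO/SQL_UNDO.
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

    -- Verify it is on (expect YES):
    SELECT supplemental_log_data_min FROM v$database;
    ```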

  • Enable Log feature on WRTP54G

    Hi-
    I have the Linksys WRTP54G (Firmware Version: 5.01.04) and Vonage VOIP service. I just want to enable the Log feature on the router, available under the Administration tab, but every time I specify an IP to push log messages to, I get a prompt for a username and password that is different from the admin credentials that I have. Does anyone know the default username and password, or has this been locked down by Vonage? I did purchase the router through them when I ordered their service.
    Thanks,
    merker19

    This is one of the irritating things about this router.  Many of the features of this router are "owned" by Vonage, including router firmware upgrades.
    To enable logging, you must call Vonage and get the password for the ID "Admin" (yes, it is an upper case "A").
    This password changes every 5 min and you must contact Vonage to get it; then if you want to change the IP, you must call them back and get the new password. As such, it is best to set the IP to *.*.*.255 (e.g. 192.168.1.255). This will broadcast the logging data to all addresses in your network, so you don't have issues with DHCP or with changing the destination in the future.

  • Log miner end-of-file on communication channel

    Hi,
    I'm trying to use LogMiner, but when I perform a select from the
    v$logmnr_contents view, such as
    select operation from v$logmnr_contents where username = 'FRED'
    I get ORA-03113: end-of-file on communication channel.
    The trace files give no information except the very unhelpful 'internal
    error'.
    Has anyone had this problem? Is it possible to read the archive logs without
    LogMiner?? I really need to read the logs because someone updated the wrong data in the database and I need to recover it.
    Thanks in advance,
    steve.

    Hi Joel,
    Here is SGA information:
    select * from v$sgastat where name = 'free memory';
    POOL         NAME          BYTES
    -----------  -----------  ----------
    shared pool  free memory    75509528
    large pool   free memory    16777216
    java pool    free memory    83886080
    Thank you for your time,
    Katya

  • How to Enable logging of the ASA 5525?

    I need help enabling logging on the ASA 5525 for all new rules created today from the firewall module, as well as rules that were changed, deleted, or disabled.
    I could not find, at any of the severity levels below, the syslog message IDs for new firewall rules.
    0 or emergencies—System is unusable.
    1 or alerts—Immediate action needed.
    2 or critical—Critical conditions.
    3 or errors—Error conditions.
    4 or warnings—Warning conditions.
    5 or notifications—Normal but significant conditions.
    6 or informational—Informational messages.
    7 or debugging—Debugging messages.
    Thank you.

    You cannot log only those changes but you can log *all* changes.
    The messages 111008 and 111010 are the ones to look for (as described in this post).

  • Large number of objets - log miner scalability?

    We have been consolidating several departmental databases into one big RAC database. Moreover, in our test databases we are cloning test cells (for example, an application schema gets cloned hundreds of times so that our users may test independently from each other).
    So, our acceptance test database now has about 500,000 objects in it. We have production databases with over 2 million objects.
    We are using Streams. At this time we're using a local capture, but our architecture aims to use downstream capture soon... We are concerned about the resources required for the LogMiner data dictionary build.
    We are currently not using DBMS_LOGMNR_D.BUILD directly, but rather indirectly through DBMS_STREAMS_ADM.ADD_TABLE_RULE. We only want to replicate about 30 tables.
    We are surprised to find that LogMiner always builds a complete data dictionary covering every object in the database (tables, partitions, columns, users, and so on).
    Apparently there is no way to create a partial data dictionary, even by using DBMS_LOGMNR_D.BUILD directly...
    Lately, it took more than 2 hours just to build the LogMiner data dictionary on a busy system! And we ended up with an ORA-01280 error, so we started all over again...
    We just increased our redo log size recently; I haven't had a chance to test since the change. Our redo logs were only 4MB each; we increased them to 64MB to reduce checkpoint activity. This will probably help...
    Has anybody else encountered slow LogMiner dictionary builds?
    Any advice?
    Thank you in advance.
    Jocelyn

    Hello Jocelyn,
    In a Streams environment, the LogMiner dictionary build is done using the DBMS_CAPTURE_ADM.BUILD procedure. You should not be using DBMS_LOGMNR_D.BUILD for this.
    In a Streams environment, DBMS_STREAMS_ADM.ADD_TABLE_RULE dumps the dictionary only the first time you call it, when the capture process does not yet exist: the capture process is created on that first call, and a dictionary dump is taken along with it. The LogMiner dictionary holds information about all the objects - tables, partitions, columns, users, etc. - so the dump takes time in proportion to the number of objects in the database; if that number is very high, the data dictionary itself will be big.
    Your redo log size of 64MB is too small for a production system; you should consider a redo log size of at least 200M.
    You can do a complete LogMiner dictionary build using DBMS_CAPTURE_ADM.BUILD and then create a capture process using the FIRST_SCN returned from the BUILD procedure.
    Let me know if you have more doubts.
    Thanks,
    Rijesh
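    A minimal sketch of the build-then-capture sequence Rijesh describes (the queue name strmadmin.streams_queue, the capture name my_capture, and the SCN value are illustrative assumptions, not from the thread):

    ```sql
    -- Dump the LogMiner dictionary to the redo stream and capture the SCN
    -- at which the build started.
    SET SERVEROUTPUT ON
    DECLARE
      scn NUMBER;
    BEGIN
      DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
      DBMS_OUTPUT.PUT_LINE('FIRST_SCN = ' || scn);
    END;
    /

    -- Then create the capture process starting from that SCN.
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name   => 'strmadmin.streams_queue',  -- assumed queue
        capture_name => 'my_capture',               -- assumed name
        first_scn    => 1234567);                   -- value printed by BUILD above
    END;
    /
    ```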

  • Log Miner is finding DDL for "new" tables, but also DML for "older" tables.

    oracle 10.2.0.5.0 Standard Edition
    (at some point in the past it was "downgraded" from enterprise edition).
    It's making me crazy: I create a table, then insert/update rows, and LogMiner only shows me the create.
    However, if I do an insert/update on an "older" table, I see the DML. The tables are in the same tablespace, both LOGGING, and the database is forcing logging.
    I'm out of ideas; any input would be appreciated.
    thanks!
    ####### CREATE THE ORACLE.LOGMNR1 TABLE ########
    SQL> create table ORACLE.LOGMNR1
      2  (col1 varchar2(100));
    Table created.
    ####### INSERT  ROW AND UPDATE A ROW IN ORACLE.LOGMNR1 TABLE ########
    SQL> insert into ORACLE.LOGMNR1 values ('testing insert');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> update ORACLE.LOGMNR1 set col1 = 'TESTING UPDATE';
    1 row updated.
    SQL> commit;
    Commit complete.
    ####### INSERT 2 ROWS INTO AN OLD TABLE EPACS.COLUMN_COUNTS  ########
    SQL> insert into epacs.column_counts
      2  values ('TEST1',99,'TEST2',88,SYSDATE);
    1 row created.
    insert into epacs.column_counts
       values('TEST3',77,'TEST4',66,SYSDATE);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    ####### INSERT ANOTHER ROW INTO ORACLE.LOGMNR1 TABLE ########
    SQL> insert into LOGMNR1 values ('ONE MORE TEST');
    1 row created.
    SQL> COMMIT;
    Commit complete.
    ####### CREATE THE ORACLE.LOGMNRAFTER TABLE ########
    SQL> CREATE TABLE LOGMNRAFTER (COL1 VARCHAR2(100));
    Table created.
    ####### INSERT A ROW INTO ORACLE.LOGMNRAFTER TABLE ########
    SQL> INSERT INTO LOGMNRAFTER VALUES('FINISHED');
    1 row created.
    SQL> COMMIT;
    Commit complete.
    ####### MINE THE LOGS FOR ACTIVITY ########
    SQL> edit
    Wrote file afiedt.buf
      1  select to_char(timestamp,'yyyy/mm/dd hh24:mi:ss'), username,
      2          operation, sql_redo
      3          from v$logmnr_contents
      4      where
      5      seg_owner in( 'ORACLE','EPACS')
      6  and
      7      operation <> 'UNSUPPORTED'  
      8*          order by timestamp
    SQL> /
    ####### IT FINDS THE CREATE OF THE ORACLE.LOGMNR1 TABLE, BUT NO INSERTS ########
    2013/10/09 14:02:05 ORACLE                                                     
    DDL                                                                            
    create table LOGMNR1                                                           
    (col1 varchar2(100));                                                          
    ####### IT DOES FIND INSERTS FOR THE OLD EPACS.COLUMN_COUNTS TABLE ########                                                   
    2013/10/09 14:03:54 ORACLE                                                     
    INSERT                                                                         
    insert into "EPACS"."COLUMN_COUNTS"("TABLE_NM","TABLE_ROW_QTY","COLUMN_NM","COLUMN_QTY","LAST_UPDATE") values ('TEST1','99','TEST2','88','09-OCT-13');
    2013/10/09 14:05:09 ORACLE                                                     
    INSERT                                                                         
    insert into "EPACS"."COLUMN_COUNTS"("TABLE_NM","TABLE_ROW_QTY","COLUMN_NM","COLUMN_QTY","LAST_UPDATE") values ('TEST3','77','TEST4','66','09-OCT-13');
    ####### AND IT FINDS THE CREATE FOR THE ORACLE.LOGMNRAFTER TABLE ########
    2013/10/09 14:06:11 ORACLE                                                     
    DDL                                                                            
    CREATE TABLE LOGMNRAFTER (COL1 VARCHAR2(100));                                 
    ###### BOTH TABLES ARE "LOGGING" AND LIVE IN THE SAME TABLESPACE ######
    ###### LOGGING IS FORCED AT THE DATABASE LEVEL ####
    SQL> select force_logging from v$database;
    YES                                                                            
    SQL> select owner,table_name,logging
      2  from dba_tables where owner in ('EPACS','ORACLE')
      3  and table_name in('COLUMN_COUNTS','LOGMNR1');
    EPACS                          COLUMN_COUNTS                  YES              
    ORACLE                         LOGMNR1                        YES              
    SQL> SPOOL OFF

    Neither the table showing only DDL nor the table showing DML has supplemental logging.
    thanks.
    select count(*) from ALL_LOG_GROUPS
       where LOG_GROUP_TYPE='ALL COLUMN LOGGING' and OWNER='ORACLE' and table_name='LMTEST1'
    SQL> /
      COUNT(*)
             0
        select count(*) from ALL_LOG_GROUPS
       where LOG_GROUP_TYPE='ALL COLUMN LOGGING' and OWNER='EPACS' and table_name='COLUMN_COUNTS'
      COUNT(*)
             0
    Message was edited by: user12156890
    Apparently this is an issue with the database configuration and not LogMiner. I ran the same test against the production database and got both the DDL and DML. I used exactly the same test script, including the LogMiner setup, obviously changing the names of the log files and of a directory.
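    Since supplemental logging is the usual culprit when DML is missing from V$LOGMNR_CONTENTS, here is a hedged sketch of how it is typically enabled and verified (table name taken from the test above):

    ```sql
    -- Database-wide minimal supplemental logging (prerequisite for
    -- row-level redo reconstruction by LogMiner):
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

    -- Optionally, per-table logging of all columns, so SQL_UNDO carries
    -- every column value:
    ALTER TABLE oracle.logmnr1 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

    -- Check what is currently enabled:
    SELECT supplemental_log_data_min,
           supplemental_log_data_pk,
           supplemental_log_data_all
    FROM   v$database;
    ```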

  • When using wusa.exe to install MSU update package and enabling logging using the /log switch, what format are the log files in?

    I have been installing a number of hotfixes for Windows 7 using MSU files and the wusa.exe utility included in Windows. I thought it would be a good idea to generate separate log files for each update as it was installed since wusa.exe now supports this
    option using /log:<file name>. However, the log files created do not seem to be regular text files or any other log file format that I immediately recognize. When opened in Notepad or Wordpad you can see that they contain a lot of additional binary data
    which can't be read by a regular text viewer.
    Does anyone know what format these log files are in? What tool should you use to read them?

    Only Microsoft can manage to design something as stupid as this. If you start wusa from the command line, it pops up the available command line switches. For log, it just says: " /log - installer will enable logging". It doesn't say that you should specify the log file name, hence not HOW you specify it (" /log:<path\filename>"). It doesn't say what extension to use for the file, let alone which file type it is (" /log:<path\filename.evtx>" - it is a binary Windows event log, which Event Viewer can open). You open it in Notepad, you cannot read it.
    You open it in SCCM's cmtrace.exe/trace32.exe, you get nothing. IT IS NOT POSSIBLE TO IMPLEMENT THIS IN A WORSE WAY. How can something be so bad? How can Microsoft get such stupidity in-house, when you need to go through six interviews or something to get in??
    I cannot believe it - unfortunately, this is seen again and again.

  • When creating a tablespace why should we enable LOGGING when a database is already on ARCHIVE LOG mode

    Question :
    When creating a tablespace why should we enable LOGGING when a database is already on ARCHIVE LOG mode ?
    Example:
    Create Tablespace
    CREATE SMALLFILE TABLESPACE "TEST_DATA"
    LOGGING
    DATAFILE '+DG_TEST_DATA_01(DATAFILE)' SIZE 10G
    AUTOEXTEND ON NEXT  500K MAXSIZE 31000M
    EXTENT MANAGEMENT LOCAL
    SEGMENT SPACE MANAGEMENT AUTO;
    LOGGING: Generate redo logs for creation of tables, indexes and  partitions, and for subsequent inserts. Recoverable
    Are objects not logged, and not recoverable, if we do not enable LOGGING? And what is it that ARCHIVELOG mode does?

    What does ARCHIVELOG mode do?
    Whenever your database is in ARCHIVELOG mode, Oracle backs up the filled redo log files in the form of archives, so that the database can be recovered to a consistent state in case of any failure.
    Archive logging is essential for production databases, where the loss of a transaction might be fatal.
    Why LOGGING?
    LOGGING is the safest method to ensure that all the changes made in the tablespace are captured in the redo logs and available for recovery.
    It is just a matter of the level at which logging is defined:
    FORCE LOGGING at the database level
    LOGGING at the tablespace level
    LOGGING at the schema-object level
    Before the existence of FORCE LOGGING, Oracle provided only the LOGGING and NOLOGGING options. These two options have higher precedence at the schema-object level than at the tablespace level; therefore, it was possible to override a LOGGING setting at the tablespace level with a NOLOGGING setting at the schema-object level.
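    The precedence described above can be illustrated with a short sketch (FORCE LOGGING overrides any object-level NOLOGGING):

    ```sql
    -- Force redo generation for all operations, overriding any NOLOGGING
    -- set at the tablespace or schema-object level:
    ALTER DATABASE FORCE LOGGING;

    -- Verify:
    SELECT force_logging FROM v$database;

    -- To revert to honouring object-level settings again:
    ALTER DATABASE NO FORCE LOGGING;
    ```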

  • How to enable logging in OBIEE 10G

    I have LDAP authentication in OBIEE 10 G and users are only residing in LDAP.
    I have a requirement to enable logging for few users. Can suggest me what all steps are required. I created user in RPD and enabled logging for that user. Kept same user ID as in LDAP. Still i cant see the logs.

    oh.. its 10g.. okay.
    In this case NQS authentication will override the LDAP authentication. Do not create the users in the RPD; this can be taken care of using init blocks, as below.
    When using LDAP we can set the log level using an init block (see below). It can also be set at runtime using Set Variable from the Answers -> Advanced tab.
    Your init block should be something like:
    case when :USER in ('a','b') then 2 else 0 end
    with the result set into the LOGLEVEL session variable.
    Pls mark correct/helpful

  • Enable Logging for JSF?

    Some of my JSF actions are not working correctly, and I'd like to enable logging for the Faces framework to be able to see which actions are being called, etc. Is there some way to get Faces to write out to a container log file, or even the console? (I'm using Log4J, and the Faces JARs are in my WEB-INF/lib directory, if that helps.)
    thanks,

    Faces uses the standard Java logging facility. You need to configure logging.properties, which is in the lib directory of the JVM.

  • Log miner

    begin dbmn_logmnr.start_logmnr(starttime => '01-oct-2011 00:00:00',endtime =>'21-feb-2012 00:00:00',options => dbms_logmnr.dict_from_online_catalog+dbms_logmnr.continuous_mine);
    ERROR at line 1:
    ORA-06550: line 1, column 7:
    PLS-00201: identifier 'DBMN_LOGMNR.START_LOGMNR' must be declared
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    how to fix this problem....?
    i need to start log_miner

    915855 wrote:
    begin dbmn_logmnr.start_logmnr(starttime => '01-oct-2011 00:00:00',endtime =>'21-feb-2012 00:00:00',options => dbms_logmnr.dict_from_online_catalog+dbms_logmnr.continuous_mine);
    ERROR at line 1:
    ORA-06550: line 1, column 7:
    PLS-00201: identifier 'DBMN_LOGMNR.START_LOGMNR' must be declared
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    how to fix this problem....?
    i need to start log_miner
    Please read; there is a complete chapter on how to use LogMiner:
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/logminer.htm#SUTIL019
    And as mentioned by Vivek, check the spellings of the package.
    Aman....
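    For completeness, a corrected version of the failing call might look like the sketch below (note DBMS_LOGMNR, not DBMN_LOGMNR, and explicit TO_DATE conversions for the time window):

    ```sql
    BEGIN
      DBMS_LOGMNR.START_LOGMNR(
        starttime => TO_DATE('01-oct-2011 00:00:00', 'DD-MON-YYYY HH24:MI:SS'),
        endtime   => TO_DATE('21-feb-2012 00:00:00', 'DD-MON-YYYY HH24:MI:SS'),
        options   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
                     DBMS_LOGMNR.CONTINUOUS_MINE);
    END;
    /
    ```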

  • Does Yosemite 10.10.3 now enable Mac Mini 2014 to drive 4K SST displays at 60Hz?

    Per "Using 4K displays and Ultra HD TVs with your Mac - Apple Support", updated 10 APR 15, Apple indicates Yosemite 10.10.3 now enables the late 2014 Mac Mini to drive 4K SST displays @ 60Hz. Several others on the web have said this isn't accurate. Before I buy a new display, can someone clarify the situation? Can this software update enable the Mini to now drive these displays? I thought it was a hardware issue.
    Thanks!

    Specs are:
    Graphics Card
    Intel HD Graphics 5000 (1.4 GHz) Intel Iris 5100 (2.6 and 2.8 GHz)
    Graphics Memory
    Up to 1.5 GB shared from main memory
    Display Connection
    1 - HDMI port or Thunderbolt digital video output
    Display Support
    Supports an HDMI-compatible device while using one Thunderbolt display or support for two Thunderbolt displays
    Display Modes
    Dual display extended, video mirroring, and AirPlay Mirroring
    External Resolution
    Up to 2560 by 1600 (Thunderbolt) or 4096 by 2160 (HDMI)
    As you can see it does support 4K using HDMI.

  • Use of LOG MINER - VERY URGENT

    Hello,
    I have a situation:
    I want to recover the database till Friday (Last week).
    I am ready to loose the data of Sat/Sun/Mon and Tuesday.
    In other words, I need to get back to Friday's database state. I have backups of the last 30 days. My database is running in ARCHIVELOG mode.
    Can I use LOGMINER to go back to Friday's state? If yes, then HOW? I have never used LogMiner, nor have I been involved in a recovery, so please guide me!!!
    If anyone knows the answer and the methodology, can they suggest it to me? It would be a great help!!!!
    Thanks in advance.
    Himanshu

    Hi,
    You have it slightly wrong if you think you can go back to Friday's database state through LogMiner alone. LogMiner is used to get information out of the redo logs, which hold the historic changes to the database, so it serves a variety of purposes:
    1. the changes made to the database (insert, update, delete, DDL)
    2. the SCN at which the changes were made
    3. the SCN at which the changes were committed
    4. the name of the user who issued the changes, etc.
    You can get this information with LogMiner either through the command-line utility or through OEM; LogMiner gives you pinpoint information. Suppose you don't know the time and name of a dropped table: with LogMiner you can find the SCN of the change that dropped the table, and then restore your database to just before that SCN. Once you have that SCN, run:
    run {
    set until scn <the number you got through LogMiner>;
    restore database;
    recover database;
    }
    That gives you the desired state of the database.
    I still feel one of your questions remains unanswered: you said you did not have a media failure and your database is still up and running smoothly. Even so, for this sort of point-in-time recovery you must restore your database; this is mandatory, because transaction activity can only be rolled forward to a desired time, not rolled back to it, which is why you need to restore all your database files from before the desired time.
    thanks..
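    The locate-the-SCN step described above can be sketched as follows (it assumes LogMiner has already been started over the relevant period; the LIKE pattern is illustrative):

    ```sql
    -- Find the DDL that caused the damage and note its SCN:
    SELECT scn, timestamp, username, operation, sql_redo
    FROM   v$logmnr_contents
    WHERE  operation = 'DDL'
    AND    UPPER(sql_redo) LIKE 'DROP TABLE%';

    -- Then, in RMAN, restore/recover to just before that SCN:
    -- RUN {
    --   SET UNTIL SCN <scn_found_above>;
    --   RESTORE DATABASE;
    --   RECOVER DATABASE;
    -- }
    ```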
