Logminer problem
Hi:
Why does my v$logmnr_contents view have NULL in the SESSION_INFO and USERNAME columns? I hope this is not an error in my DB.
I'm working with 10g R2 on Windows 2003 Server.
Thanks,
Remi
From http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm#sthref1894:
You must enable supplemental logging prior to generating log files that will be analyzed by LogMiner.
When you enable supplemental logging, additional information is recorded in the redo stream that is needed to make the information in the redo log files useful to you. Therefore, at the very least, you must enable minimal supplemental logging, as the following SQL statement shows:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
To determine whether supplemental logging is enabled, query the V$DATABASE view, as the following SQL statement shows:
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
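If SUPPLEMENTAL_LOG_DATA_MIN comes back NO, a minimal sketch of the fix (note that only redo generated after enabling it carries the extra information, so switch logfiles and mine only the newer logs):
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
-- if NO:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER SYSTEM SWITCH LOGFILE;
-- re-run LogMiner against log files generated after this point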
Similar Messages
-
Problem with logminer in Data Guard configuration
Hi all,
I am experiencing a strange problem with applying logs on the logical standby database side of a Data Guard configuration.
I've set up the configuration step by step as described in the documentation (Oracle Data Guard Concepts and Administration, chapter 4).
Everything went fine until I issued
ALTER DATABASE START LOGICAL STANDBY APPLY;
I saw that the log apply process had started by checking the output of
SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';
and
SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
but in a few minutes it stopped, and querying DBA_LOGSTDBY_EVENTS I saw the following records:
ORA-16111: log mining and apply setting up
ORA-01332: internal Logminer Dictionary error
Alert log says the following:
LOGSTDBY event: ORA-01332: internal Logminer Dictionary error
Wed Jan 21 16:57:57 2004
Errors in file /opt/oracle/admin/whouse/bdump/whouse_lsp0_5817.trc:
ORA-01332: internal Logminer Dictionary error
Here is the end of the whouse_lsp0_5817.trc
error 1332 detected in background process
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01332: internal Logminer Dictionary error
But the most useful info was in one more trace file (whouse_p001_5821.trc):
krvxmrs: Leaving by exception: 604
ORA-00604: error occurred at recursive SQL level 1
ORA-01031: insufficient privileges
ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 68
ORA-06512: at line 1
Seems that somewhere the correct privileges were not given, or something like that. By the way, I was doing all the operations under the SYS account (as SYSDBA).
Could somebody give me a clue where my mistake could be, or what was done the wrong way?
Thank you in advance.
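For the ORA-01332 internal LogMiner dictionary error above, one common recovery sequence is to rebuild the dictionary and restart apply - a sketch only; it does not by itself explain the underlying ORA-01031 privilege failure:
-- on the logical standby: stop SQL apply
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
-- on the primary: put a fresh LogMiner dictionary into the redo stream
EXECUTE DBMS_LOGSTDBY.BUILD;
-- back on the standby: restart apply
ALTER DATABASE START LOGICAL STANDBY APPLY;
-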
LogMiner puzzle - CLOB datatype
Hello, everybody!
Sorry for the cross-post here and in "Database\SQL and PL/SQL" forum, but the problem I am trying to dig is somewhere between those two areas.
I need a bit of advice on whether the following behavior is wrong and requires an SR to be initiated – or whether I am just missing something.
Setting:
- Oracle 11.2.0.3 Enterprise Edition 64-bit on Win 2008.
- Database is running in ARCHIVELOG mode with supplemental logging enabled
- DB_SECUREFILE=PERMITTED (so, by default LOBs will be created as BasicFiles - but I didn't notice any behavior difference comparing to SecureFile implementation)
Test #1. Initial discovery of a problem
1. Setup:
- I created a table MISHA_TEST that contains a CLOB column:
create table misha_test (a number primary key, b_cl CLOB)
- I ran an anonymous block that inserts into this table WITHOUT referencing the CLOB column:
begin
insert into misha_test (a) values (1);
commit;
end;
2. I looked at the generated logs via LogMiner and found the following entries in V$LOGMNR_CONTENTS:
SQL_REDO
set transaction read write;
insert into "MISHA_TEST"("A","B_CL") values ('1',EMPTY_CLOB());
set transaction read write;
commit;
update "MISHA_TEST" set "B_CL" = NULL where "A" = '1' and ROWID = 'AAAj90AAKAACfqnAAA';
commit;
And here I am puzzled: why do we have two operations for a single insert – first write EMPTY_CLOB() into B_CL and then update it to NULL? But I didn't even touch the column B_CL! Seems very strange – why can't we write NULL to B_CL from the very beginning, instead of first creating a pointer and then destroying it?
Key question:
- Why should a NULL value in a CLOB column be handled differently from a NULL value in a VARCHAR2 column?
Test #2. Quantification
Question:
- Having a LOB column in the table seems to cause overhead by generating more redo. But can it be quantified?
Assumption:
- My understanding is that CLOBs defined with "storage in row enabled = true" (the default) behave like VARCHAR2(4000) up to ~4K of size, and only when the size goes above 4K do we start using real LOB mechanisms.
Basic test:
1. Two tables:
- With CLOB:
create table misha_test_clob2 (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl CLOB)
- With VARCHAR2:
create table misha_test_clob (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl VARCHAR2(4000))
2. Switch logfile / insert 1000 rows populating only A_NR / switch logfile:
insert into misha_test_clob (a_nr)
select level
from dual
connect by level < 1001
3. Check the sizes of the generated logs:
- With CLOB – 689,664 bytes
- With VARCHAR2 – 509,440 bytes (or about a 26% reduction)
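The same comparison can be scripted per session instead of by archived-log size; a minimal sketch, assuming a dedicated session and using the 'redo size' statistic from v$mystat:
SELECT n.name, s.value
FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
WHERE n.name = 'redo size';
INSERT INTO misha_test_clob (a_nr)
SELECT level FROM dual CONNECT BY level < 1001;
COMMIT;
-- re-run the v$mystat query; the delta in VALUE is the redo this session generated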
Summary:
- The overhead is real. A table with a VARCHAR2 column is cheaper to maintain, even if you are not using that column, so adding LOB columns to a table "just in case" is a really bad idea.
- Having LOB columns in a table that takes tons of INSERT operations is expensive.
Just to clarify the real business case - I have a table with some number of attributes, and one attribute has the CLOB datatype. The frequency of inserts into this table is pretty high; the frequency of using the CLOB column is pretty low (NOT NULL ~0.1%). But because of that CLOB column I generate a lot more log data than I need (about 30% extra). Seems like a real waste! For now I have asked the development team to split the table in two, but that's still a band-aid.
So, does anybody care? Comments/suggestions are very welcome!
Thanks a lot!
Michael Rosenblum -
A clob datatype and LogMiner question?
HI,
I am using LogMiner to capture all DML against rows with a CLOB datatype, and have found a problem.
--log in as scott/tiger
conn scott/tiger
SQL> desc clobtest
Name Null? Type
SNO NUMBER
CLOBTYPE CLOB
--make an update
update clobtest set CLOBTYPE = 'Hello New York' where sno = 11;
commit;
After using LogMiner to analyze the redo log files, I query:
select sql_redo from v$logmnr_contents where username = 'SCOTT';
update "SCOTT"."CLOBTEST" set "CLOBTYPE" = 'Hello New York' where and ROWID = 'AAD0ZqAAEAAAAhsAAC';
My question:
As to the captured DML
update "SCOTT"."CLOBTEST" set "CLOBTYPE" = 'Hello New York' where and ROWID = 'AAD0ZqAAEAAAAhsAAC';
it shows "where and", why there is missing after where clause????? --(anyway, I can overcome this by using REGEXP_REPLACE(sql_redo,'where and','where ')
Thanks
Roy
Edited by: ROY123 on Mar 16, 2010 10:25 AM
I checked the LogMiner documentation:
http://74.125.93.132/search?q=cache:19bBhYX3Xs4J:download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm+NOTE:LogMiner+does+not+support+these+datatypes+and+table+storage+attributes:&cd=1&hl=en&ct=clnk&gl=us
It says 10gR2 supports the LOB datatype,
but then why does the WHERE clause omit the CLOB column (becoming "where and rowid")?
Edited by: ROY123 on Mar 16, 2010 2:12 PM -
Error using Logminer, Please help !
I am trying to understand how to use logminer.
I have a database (oracle 9.2.0.4)and have completed the following steps:
1) set the UTL_FILE_DIR parameter.
2) Run the dbmslm.sql and dbmslmd.sql scripts
3) Created a directory and a file under that database
/oracle/vbi/eltest/logmnr/dict_01.ora
where :
Database name : eltest
Directory name: logmnr ( lrwxrwxrwx permissions)
Dict file : dict_01.ora (-rwxrwxrwx permissions)
4) I checked the V$LOGMNR_CONTENTS and V$LOGMNR_DICTIONARY views just to make sure the dbmslm.sql and dbmslmd.sql scripts were executed.
Now I am trying to extract the data dictionary to an external file, but it gives me an error.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dict_01.ora','/oracle/vbi/eltest/logmnr/', OPTIONS=>DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
ERROR at line 1:
ORA-01336: specified dictionary file cannot be opened
ORA-29280: invalid directory path
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 928
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2016
ORA-06512: at line 1
Can anyone please tell me why I am getting this error and what I need to resolve it ?
Thanks in advance.could you post the utl_file_dir parameter value?
This is clearly indicates that the problem is related to the directory. -
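A hedged check along those lines - DBMS_LOGMNR_D.BUILD only succeeds when dictionary_location matches a directory literally listed in utl_file_dir, and a trailing slash that is not in the utl_file_dir entry is a classic cause of ORA-29280:
SQL> SHOW PARAMETER utl_file_dir
-- the value must contain the exact string /oracle/vbi/eltest/logmnr
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dict_01.ora', '/oracle/vbi/eltest/logmnr', OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
-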
Logminer error while creating a dictionary file
Hi ,
I am working on logminer.
I got the following error while creating a dictionary file:
please help me out.
SQL> show parameter utl_
NAME TYPE VALUE
utl_file_dir string $ORACLE_BASE/utl_file/cecms
SQL> execute dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora',dictionary_location => '/u00/oracle/utl_file/cecms');
BEGIN dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora',dictionary_location => '/u00/oracle/utl_file/cecms'); END;
ERROR at line 1:
ORA-01336: specified dictionary file cannot be opened
ORA-29280: invalid directory path
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 928
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2052
ORA-06512: at line 1
Asif
ORA-01336: specified dictionary file cannot be opened
Cause: The dictionary file or directory does not exist or is inaccessible.
Action: Make sure that the dictionary file and directory exist and are accessible.
Probably an issue with the rights on the directories.
To confirm this, try writing a sample procedure with UTL_FILE that creates a file at this location and capture the error. It will give you more details about the problem.
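A minimal UTL_FILE test along those lines (a sketch; note that utl_file_dir is taken literally, so a value of $ORACLE_BASE/utl_file/cecms will not match the expanded path /u00/oracle/utl_file/cecms unless the environment variable was expanded when the parameter was set):
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- try to create a file in the directory the BUILD call uses
  f := UTL_FILE.FOPEN('/u00/oracle/utl_file/cecms', 'utl_test.txt', 'w');
  UTL_FILE.PUT_LINE(f, 'test');
  UTL_FILE.FCLOSE(f);
END;
/
-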
Error when trying to use LogMiner with Oracle 8.1.6.0.0
Hi everybody,
I'm trying to use LogMiner with Oracle 8.1.6.0.0. When I execute the following code with SQL*Plus, I have an error.
BEGIN
DBMS_LOGMNR.START_LOGMNR
(options =>
dbms_logmnr.dict_from_online_catalog);
END;
The error displayed by SQL*Plus is:
PLS-00302: 'DICT_FROM_ONLINE_CATALOG' must be declared.
Please, how to solve this problem?
Thank you in advance for your answers.
user639304 wrote:
Hi everybody,
I'm trying to use LogMiner with Oracle 8.1.6.0.0. When I execute the following code with SQL*Plus, I have an error.
BEGIN
DBMS_LOGMNR.START_LOGMNR
(options =>
dbms_logmnr.dict_from_online_catalog);
END;
The error displayed by SQL*Plus is:
PLS-00302: 'DICT_FROM_ONLINE_CATALOG' must be declared.
Please, how to solve this problem?
Thank you in advance for your answers.
Looking at the 8.1.7 doc set (the oldest available on Tahiti) I get no hits when searching for 'dict_from_online_catalog'. Searching the 9.2 doc set turns up a reference. It looks like you are trying to use an option that isn't available in your version of Oracle.
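On 8.1.x the LogMiner dictionary has to come from a flat file instead of the online catalog; a hedged sketch (all paths are hypothetical, and the dictionary directory must be listed in utl_file_dir):
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/some/dictionary/dir');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/path/to/archived_log.arc', Options => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(DictFileName => '/some/dictionary/dir/dictionary.ora');
-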
Oracle streams configuration problem
Hi all,
I'm trying to configure Oracle Streams on my source database (Oracle 9.2), and when I execute the package DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS') I get the error below:
ERROR at line 1:
ORA-01353: existing Logminer session
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2238
ORA-06512: at line 1
When checking some docs, they say I have to destroy all LogMiner sessions, but when I look at the v$session view I cannot identify any LogMiner session. I would appreciate some help, because I need the Streams tool for schema synchronization between my production database and my data warehouse database.
What I want to know is how to destroy or stop a LogMiner session.
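For reference, a LogMiner session started in the current session can be ended cleanly with the call below (a sketch; it will not help if the leftover session belongs to another or a dead process):
EXECUTE DBMS_LOGMNR.END_LOGMNR;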
Thanks for your help
regards
raitsarevo
Thanks Werner, it's OK now, my problem is solved, and below is the output of your script.
While I'm at it: if you have some docs or advice for my database schema synchronization - is using Oracle Streams the best option, or can I use anything else (but not the Data Guard concept or a standby database, because I only want to apply DML changes, not DDL)? Docs for Oracle Streams, and especially for schema (not table) synchronization, would be welcome.
Many thanks again, and please send to my email address [email protected] if needed.
ABILLITY>DELETE FROM system.logmnr_uid$;
1 row deleted.
ABILLITY>DELETE FROM system.logmnr_session$;
1 row deleted.
ABILLITY>DELETE FROM system.logmnrc_gtcs;
0 rows deleted.
ABILLITY>DELETE FROM system.logmnrc_gtlo;
13 rows deleted.
ABILLITY>EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
PL/SQL procedure successfully completed.
regards
raitsarevo -
Problem in add_subset_rules
Applying subset rules in Oracle streams
Posted: Feb 28, 2008 8:47 PM
Hi All,
I am working on configuring Streams. I am able to do table replication, both unidirectional and bidirectional. I am facing a problem with add_subset_rules: the capture, propagation & apply processes are not showing any error. The following is the script I am using to configure add_subset_rules. Please guide me on what is wrong & how to go about it.
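Before the setup script, a quick hedged status check of the three components (view names as in 10g; details may differ slightly in 9.2):
SELECT capture_name, status FROM dba_capture;
SELECT propagation_name, status FROM dba_propagation;
SELECT apply_name, status FROM dba_apply;
SELECT apply_name, error_number, error_message FROM dba_apply_error;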
The Global Database Name of the Source Database is POCSRC. The Global Database Name of the Destination Database is POCDESTN. In the example setup, the DEPT table belonging to the SCOTT schema has been used for demonstration purposes.
Section 1 - Initialization Parameters Relevant to Streams
• COMPATIBLE: 9.2.0.
• GLOBAL_NAMES: TRUE
• JOB_QUEUE_PROCESSES : 2
• AQ_TM_PROCESSES : 4
• LOGMNR_MAX_PERSISTENT_SESSIONS : 4
• LOG_PARALLELISM: 1
• PARALLEL_MAX_SERVERS:4
• SHARED_POOL_SIZE: 350 MB
• OPEN_LINKS : 4
• Database running in ARCHIVELOG mode.
Steps to be carried out at the Destination Database (POCDESTN.)
1. Create Streams Administrator :
connect SYS/pocdestn@pocdestn as SYSDBA
create user STRMADMIN identified by STRMADMIN default tablespace users;
2. Grant the necessary privileges to the Streams Administrator :
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
GRANT SELECT ANY DICTIONARY TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'ENQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'DEQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'MANAGE_ANY',
grantee => 'STRMADMIN',
admin_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
3. Create streams queue :
connect STRMADMIN/STRMADMIN@POCDESTN
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
4. Add apply rules for the table at the destination database :
BEGIN
DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
TABLE_NAME=>'SCOTT.EMP',
STREAMS_TYPE=>'APPLY',
STREAMS_NAME=>'STRMADMIN_APPLY',
QUEUE_NAME=>'STRMADMIN.STREAMS_QUEUE',
DML_CONDITION=>'empno =7521',
INCLUDE_TAGGED_LCR=>FALSE,
SOURCE_DATABASE=>'POCSRC');
END;
5. Specify an 'APPLY USER' at the destination database:
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STRMADMIN_APPLY',
apply_user => 'SCOTT');
END;
6. BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STRMADMIN_APPLY',
parameter => 'DISABLE_ON_ERROR',
value => 'N' );
END;
7. Start the Apply process :
BEGIN
DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
END;
Section 3 - Steps to be carried out at the Source Database (POCSRC.)
1. Move LogMiner tables from SYSTEM tablespace:
By default, all LogMiner tables are created in the SYSTEM tablespace. It is a good practice to create an alternate tablespace for the LogMiner tables.
CREATE TABLESPACE LOGMNRTS DATAFILE 'd:\oracle\oradata\POCSRC\logmnrts.dbf' SIZE 25M AUTOEXTEND ON MAXSIZE UNLIMITED;
BEGIN
DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
2. Turn on supplemental logging for DEPT table :
connect SYS/password as SYSDBA
ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP emp_pk
(empno) ALWAYS;
3. Create Streams Administrator and Grant the necessary privileges :
3.1 Create Streams Administrator :
connect SYS/password as SYSDBA
create user STRMADMIN identified by STRMADMIN default tablespace users;
3.2 Grant the necessary privileges to the Streams Administrator :
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
GRANT SELECT ANY DICTIONARY TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'ENQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'DEQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'MANAGE_ANY',
grantee => 'STRMADMIN',
admin_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
4. Create a database link to the destination database :
connect STRMADMIN/STRMADMIN@pocsrc
CREATE DATABASE LINK POCDESTN connect to
STRMADMIN identified by STRMADMIN using 'POCDESTN';
Test the database link to be working properly by querying against the destination database.
Eg : select * from global_name@POCDESTN;
5. Create streams queue:
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table =>'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
6. Add capture rules for the table at the source database:
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.EMP',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'POCSRC');
END;
7. Add propagation rules for the table at the source database.
This step will also create a propagation job to the destination database.
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.emp',
streams_name => 'STRMADMIN_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@POCDESTN',
include_dml => true,
include_ddl => true,
source_database => 'POCSRC');
END;
Section 4 - Export, import and instantiation of tables from Source to Destination Database
1. If the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database
Export from the Source Database:
Specify the OBJECT_CONSISTENT=Y clause on the export command.
By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN).
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.TEST FILE=DEPT.dmp GRANTS=Y ROWS=Y LOG=exportDEPT.log OBJECT_CONSISTENT=Y INDEXES=Y STATISTICS = NONE
Import into the Destination Database:
Specify STREAMS_INSTANTIATION=Y clause in the import command.
By doing this, the streams metadata is updated with the appropriate information in the destination database corresponding to the SCN that is recorded in the export file.
imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y CONSTRAINTS=Y FILE=DEPT.dmp IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importDEPT.log STREAMS_INSTANTIATION=Y
2. If the objects are already present in the destination database, check that they are also consistent at the data level; otherwise the apply process may fail with error ORA-1403 when applying a DML on an inconsistent row. There are 2 ways of instantiating the objects at the destination site.
1. By means of Metadata-only export/import :
Export from the Source Database by specifying ROWS=N
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.DEPT FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.EMP FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
For Test table -
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.TEST FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
Import into the destination database using IGNORE=Y
imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y FILE=tables.dmp IGNORE=Y
LOG=importTables.log STREAMS_INSTANTIATION=Y
2. By manually instantiating the objects
Get the Instantiation SCN at the source database:
connect STRMADMIN/STRMADMIN@POCSRC
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
Instantiate the objects at the destination database with this SCN value.
The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table are to be applied by the apply process. If the commit SCN of an LCR from the source database is less than or equal to this instantiation SCN , then the apply process discards the LCR. Else, the apply process applies the LCR.
connect STRMADMIN/STRMADMIN@POCDESTN
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'SCOTT.DEPT',
source_database_name => 'POCSRC',
instantiation_scn => &iscn);
END;
connect STRMADMIN/STRMADMIN@POCDESTN
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'SCOTT.EMP',
source_database_name => 'POCSRC',
instantiation_scn => &iscn);
END;
Enter value for iscn:
<Provide the value of SCN that you got from the source database>
Finally start the Capture Process:
connect STRMADMIN/STRMADMIN@POCSRC
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
END;
Please mail me [email protected]
Thanks.
Raghunath
A) We need to check that the table is effectively instantiated correctly on both sites:
Did you check that the table instantiation is done on both DBs, for both CAPTURE and APPLY?
Since it is multimaster you should see rows in system.logmnrc_gtlo and system.logmnrc_gtcs:
break on OWNERNAME on table_name on report
col global_name format a25
col OWNERNAME format a20
select OWNERNAME, lvl0name table_name , start_scn,global_name,baseobj#, INTCOLS,PROPERTY
from system.logmnrc_gtlo o, system.logmnrc_dbname_uid_map m where m.logmnr_uid=o.logmnr_uid order by 1,2;
On each site you should see at least 2 rows per table, with the object_id and init SCN for each site as they exist on each site. You may also see older variations of your objects, as these 2 tables are usually never purged.
B) We need to see what rules are produced from the setup you published and where they apply;
please post the rule sets and the rules contents:
SELECT a.RULE_SET_OWNER, a.RULE_SET_NAME, b.rule_owner||'.'|| b.rule_name rnol,b.RULE_SET_rule_COMMENT
from dba_rule_sets a, dba_rule_set_rules b where
--rule_set_eval_context_name not like 'AQ$%' and
a.rule_set_owner = b.rule_set_owner (+)
and a.rule_set_name = b.rule_set_name (+) order by RULE_SET_OWNER,b.rule_set_name, b.rule_owner,b.rule_name;
select rule_owner,rule_name,substr(rule_condition,1,200) rc from dba_rules order by rule_owner,rule_name;
After that we should see the subset rule appearing, linked to a rule set. -
Hi all,
I was trying to use the 'Change data capture' feature in ODI. I was able to start the journal for one of my models. In the Operator the process failed, giving the error:
java.sql.SQLException: ORA-00439: feature not enabled: Streams Capture
Then I thought the problem might have been because the db user did not have privileges, so I executed the following PL/SQL block at the SQL prompt:
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE => 'DATA2');
END;
/
as per the instructions in the designer (DATA2 is the name of the db user from which the model takes the data). Still the same error came. Then I found that in the V$OPTION view the value of the parameter 'Streams Capture' was FALSE. Now I am trying to set this 'Streams Capture' parameter to 'TRUE'. The UPDATE command didn't seem to work; the error I got was:
ORA-02030: can only select from fixed tables/views
How do I set the 'Streams Capture' parameter to 'TRUE'?
And am I on the right track? Please help.
P.S : I am using the Oracle 10g Express Edition.
Regards,
Divya
I'm not sure that Express has the LogMiner functionality available. I think this may be an Enterprise feature.
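V$OPTION is a fixed view, which is exactly what ORA-02030 is saying - it cannot be updated, and the flag only reflects what the installed edition's binaries support. A hedged check:
SELECT parameter, value FROM v$option WHERE parameter = 'Streams Capture';
-- VALUE = FALSE means the feature is simply absent from this edition; it cannot be switched on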
-
I want to start logminer tool from java
To start the LogMiner tool, the first step is:
alter system set utl_file_dir='C:\oracle\product\10.2.0\logminer_dir' scope=spfile;
shutdown immediate
startup
I am working on eclipse
The ALTER command executes, but for the startup and shutdown immediate I am trying to use the program below. I am getting the following error in Eclipse:
"Exception in thread "main" java.lang.Error: Unresolved compilation problems:
DatabaseStartupMode cannot be resolved or is not a field
DatabaseShutdownMode cannot be resolved or is not a field
DatabaseShutdownMode cannot be resolved or is not a field
at DBStartup1.main(DBStartup1.java:35)"
How can I resolve it?
import java.sql.*;
import java.util.Properties;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.pool.OracleDataSource;
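// NOTE (assumption): OracleConnection.DatabaseStartupMode and DatabaseShutdownMode
// exist only in the 11g JDBC driver (ojdbc5/ojdbc6); compiling against an older
// 10.2 driver produces exactly the "cannot be resolved" errors quoted above.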
/*
 * To logon as sysdba, you need to create a password file for user "sys":
 * orapwd file=C:\oracle\product\10.2.0\db_1\dbs\orapw entries=5
 * and add the following setting in init.ora:
 * REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
 * then restart the database.
 */
public class DBStartup1 {
static final String DB_URL = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=XYZ.com)(PORT=1521))"
+ "(CONNECT_DATA=(SERVICE_NAME=rdbms.devplmt.XYZ.com)))";
public static void main(String[] argv) throws Exception {
// Starting up the database:
OracleDataSource ds = new OracleDataSource();
Properties prop = new Properties();
prop.setProperty("user","system");
prop.setProperty("password","oracle");
prop.setProperty("internal_logon","sysdba");
prop.setProperty("prelim_auth","true");
ds.setConnectionProperties(prop);
ds.setURL(DB_URL);
OracleConnection conn = (OracleConnection)ds.getConnection();
//OracleConnection.getMode();
conn.startup (OracleConnection.DatabaseStartupMode.NO_RESTRICTION);
conn.close();
// Mounting and opening the database
OracleDataSource ds1 = new OracleDataSource();
Properties prop1 = new Properties();
prop1.setProperty("user","sys");
prop1.setProperty("password","manager");
prop1.setProperty("internal_logon","sysdba");
ds1.setConnectionProperties(prop1);
ds1.setURL(DB_URL);
OracleConnection conn1 = (OracleConnection)ds1.getConnection();
Statement stmt = conn1.createStatement();
stmt.executeUpdate("ALTER DATABASE MOUNT");
stmt.executeUpdate("ALTER DATABASE OPEN");
stmt.close();
conn1.close();
// Shutting down the database
OracleDataSource ds2 = new OracleDataSource();
Properties prop2 = new Properties();
prop.setProperty("user","sys");
prop.setProperty("password","manager");
prop.setProperty("internal_logon","sysdba");
ds2.setConnectionProperties(prop);
ds2.setURL(DB_URL);
OracleConnection conn2 = (OracleConnection)ds2.getConnection();
conn2.shutdown(OracleConnection.DatabaseShutdownMode.IMMEDIATE);
Statement stmt1 = conn2.createStatement();
stmt1.executeUpdate("ALTER DATABASE CLOSE NORMAL");
stmt1.executeUpdate("ALTER DATABASE DISMOUNT");
stmt1.close();
conn2.shutdown(OracleConnection.DatabaseShutdownMode.FINAL);
conn2.close();
}
}
See other thread with the same name!
-
Problem building logical standby database
Hi all,
I am trying to build a logical standby database on the Sun OS 10/Oracle 10gR2 platform. I am following the Oracle document http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ls.htm#BEIGHEIA
I have created a physical standby database and am converting it to a logical standby database. I ensured that my physical standby is in sync with the primary.
Procedure DBMS_LOGSTDBY.BUILD executes successfully on primary.
The problem is that the command *'alter database recover to logical standby test;'* DOESN'T END. No error in the archive log. I have identified the archived redo log that contains the LogMiner dictionary and the starting SCN, and applied that archive log on the standby. Still the above command doesn't end.
Any help is appreciated.
SQL> alter database recover to logical standby m2test;
This command doesn't return an SQL> prompt. The alert log says it is waiting for log sequence 25. The command has been running for more than 5 hours but still has not completed.
Alertlog:
Thu Feb 5 22:14:25 2009
alter database recover to logical standby m2test
Thu Feb 5 22:14:25 2009
Media Recovery Start: Managed Standby Recovery (mtest)
Thu Feb 5 22:14:25 2009
Managed Standby Recovery not using Real Time Apply
parallel recovery started with 2 processes
Media Recovery Waiting for thread 1 sequence 25
The documentation says:
If a dictionary build is not successfully performed on the primary database, this command will never complete.
But the dictionary build on primary is successful.
SQL> execute dbms_logstdby.build;
PL/SQL procedure successfully completed.
I used the following queries to find which archive log contains the dictionary build, and made sure that archive log sequence 22 was applied on the standby.
SQL> SELECT NAME FROM V$ARCHIVED_LOG
WHERE (SEQUENCE#=(SELECT MAX(SEQUENCE#)
FROM V$ARCHIVED_LOG
WHERE DICTIONARY_BEGIN = 'YES' AND STANDBY_DEST='NO')); 2 3 4
NAME
/oradata/mtest/archive/mtest_1_22_677975686.arc
SQL> SELECT MAX(FIRST_CHANGE#) FROM V$ARCHIVED_LOG
WHERE DICTIONARY_BEGIN='YES'; 2
MAX(FIRST_CHANGE#)
177407
SQL>
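Since the conversion is waiting for sequence 25 while the dictionary sits in sequence 22, one hedged line of attack is to confirm that sequences 23-25 actually reached the standby, and to register any missing one manually (the file name below is hypothetical, following the pattern above):
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;
SELECT sequence#, name FROM v$archived_log WHERE sequence# BETWEEN 23 AND 25;
ALTER DATABASE REGISTER LOGFILE '/oradata/mtest/archive/mtest_1_25_677975686.arc';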
Edited by: user592715 on Feb 6, 2009 3:22 PM -
Performance problem - event : cursor: pin S wait on X
Hi,
Below is a 17-minute AWR report of an Oracle PeopleSoft DB on a 10.2.0.4 instance on an HP-UX machine.
During this time the customers complained about poor performance.
There were 4,104.23 executions per second and 3,784.95 parses,
which means that almost every statement was parsed. Since the Soft Parse % = 99.77,
it seems that most of the parses were soft parses.
During those 17 min, the DB Time = 721.74 min, and the "Top 5 Timed Events"
section shows "cursor: pin S wait on X" at the top of the Timed Events.
Attached are some details from the AWR report.
Could you please suggest where to focus?
Thanks
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
xxxx 2993006132 xxxx 1 10.2.0.4.0 NO xxxx
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 18085 25-Mar-10 10:30:41 286 14.9
End Snap: 18086 25-Mar-10 10:48:39 301 15.1
Elapsed: 17.96 (mins)
DB Time: 721.74 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 4,448M 4,368M Std Block Size: 8K
Shared Pool Size: 2,736M 2,816M Log Buffer: 2,080K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 3,831,000.13 271,096.84
Logical reads: 164,733.47 11,657.20
Block changes: 17,757.42 1,256.59
Physical reads: 885.19 62.64
Physical writes: 504.92 35.73
User calls: 5,775.09 408.67
Parses: 3,784.95 267.84
Hard parses: 8.55 0.60
Sorts: 212.37 15.03
Logons: 0.77 0.05
Executes: 4,104.23 290.43
Transactions: 14.13
% Blocks changed per Read: 10.78 Recursive Call %: 24.14
Rollback per transaction %: 0.18 Rows per Sort: 57.86
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.98 Redo NoWait %: 99.97
Buffer Hit %: 99.47 In-memory Sort %: 100.00
Library Hit %: 99.73 Soft Parse %: 99.77
Execute to Parse %: 7.78 Latch Hit %: 99.77
Parse CPU to Parse Elapsd %: 3.06 % Non-Parse CPU: 89.23
Shared Pool Statistics Begin End
Memory Usage %: 34.44 34.78
% SQL with executions>1: 76.52 60.40
% Memory for SQL w/exec>1: 73.75 99.18
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
cursor: pin S wait on X 1,378,354 13,462 10 31.1 Concurrenc
db file sequential read 878,684 8,779 10 20.3 User I/O
CPU time 4,998 11.5
local write wait 2,692 2,442 907 5.6 User I/O
cursor: pin S 1,932,830 2,270 1 5.2 Other
Time Model Statistics DB/Inst: xxxx/xxxx Snaps: 18085-18086
Statistic Name Time (s) % of DB Time
sql execute elapsed time 21,690.6 50.1
parse time elapsed 17,504.9 40.4
DB CPU 4,998.0 11.5
hard parse elapsed time 372.1 .9
connection management call elapsed time 183.9 .4
sequence load elapsed time 125.8 .3
PL/SQL execution elapsed time 89.2 .2
PL/SQL compilation elapsed time 9.2 .0
inbound PL/SQL rpc elapsed time 5.5 .0
hard parse (sharing criteria) elapsed time 5.5 .0
hard parse (bind mismatch) elapsed time 0.5 .0
failed parse elapsed time 0.1 .0
repeated bind elapsed time 0.0 .0
DB time 43,304.1 N/A
background elapsed time 3,742.3 N/A
background cpu time 114.8 N/A
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
Concurrency 1,413,633 97.5 14,283 10 92.8
User I/O 925,010 .3 11,485 12 60.7
Other 1,984,969 .2 2,858 1 130.3
Application 1,342 46.4 1,873 1396 0.1
Configuration 12,116 63.6 1,857 153 0.8
System I/O 582,094 .0 1,444 2 38.2
Commit 17,253 .6 1,057 61 1.1
Network 6,180,701 .0 68 0 405.9
Wait Events DB/Inst: xxxx/xxxx Snaps: 18085-18086
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
cursor: pin S wait on X 1,378,354 100.0 13,462 10 90.5
db file sequential read 878,684 .0 8,779 10 57.7
local write wait 2,692 91.2 2,442 907 0.2
cursor: pin S 1,932,830 .0 2,270 1 126.9
log file switch (checkpoint 2,669 49.1 1,510 566 0.2
enq: RO - fast object reuse 542 86.5 1,398 2580 0.0
log file sync 17,253 .6 1,057 61 1.1
control file sequential read 450,043 .0 579 1 29.6
log file parallel write 17,903 .0 558 31 1.2
enq: TX - row lock contentio 295 52.2 475 1610 0.0
buffer busy waits 7,338 4.4 348 47 0.5
buffer exterminate 322 92.5 302 938 0.0
read by other session 24,694 .0 183 7 1.6
library cache lock 59 94.9 167 2825 0.0
log file sequential read 109,494 .0 161 1 7.2
latch: cache buffers chains 18,662 .0 149 8 1.2
log buffer space 2,493 .0 139 56 0.2
Log archive I/O 3,592 .0 131 37 0.2
free buffer waits 6,420 99.1 130 20 0.4
latch free 42,812 .0 121 3 2.8
Streams capture: waiting for 845 6.0 106 125 0.1
latch: library cache 2,074 .0 96 46 0.1
db file scattered read 12,437 .0 80 6 0.8
enq: SQ - contention 150 14.0 71 471 0.0
SQL*Net more data from clien 331,961 .0 41 0 21.8
latch: shared pool 320 .0 32 100 0.0
LGWR wait for redo copy 5,307 49.1 29 5 0.3
SQL*Net more data to client 254,217 .0 17 0 16.7
control file parallel write 1,038 .0 15 14 0.1
latch: library cache lock 477 .4 14 29 0.0
latch: row cache objects 6,013 .0 10 2 0.4
SQL*Net message to client 5,587,878 .0 10 0 366.9
latch: redo allocation 1,274 .0 9 7 0.1
log file switch completion 62 .0 6 92 0.0
Streams AQ: qmn coordinator 1 100.0 5 4882 0.0
latch: cache buffers lru cha 434 .0 4 9 0.0
block change tracking buffer 111 .0 4 35 0.0
wait list latch free 135 .0 3 21 0.0
enq: TX - index contention 132 .0 2 17 0.0
latch: session allocation 139 .0 2 14 0.0
latch: object queue header o 379 .0 2 4 0.0
row cache lock 15 .0 2 107 0.0
latch: redo copy 56 .0 1 17 0.0
latch: library cache pin 184 .0 1 5 0.0
write complete waits 14 28.6 1 51 0.0
latch: redo writing 251 .0 1 3 0.0
enq: MN - contention 3 .0 1 206 0.0
enq: CF - contention 16 .0 0 23 0.0
log file single write 24 .0 0 13 0.0
os thread startup 3 .0 0 102 0.0
reliable message 66 .0 0 4 0.0
enq: JS - queue lock 2 .0 0 136 0.0
latch: cache buffer handles 46 .0 0 5 0.0
buffer deadlock 65 100.0 0 4 0.0
latch: undo global data 73 .0 0 3 0.0
change tracking file synchro 24 .0 0 6 0.0
change tracking file synchro 30 .0 0 3 0.0
kksfbc child completion 2 100.0 0 52 0.0
SQL*Net break/reset to clien 505 .0 0 0 0.0
db file parallel read 3 .0 0 30 0.0
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
SQL*Net more data from dblin 127 .0 0 0 0.0
SQL*Net more data to dblink 319 .0 0 0 0.0
latch: enqueue hash chains 20 .0 0 2 0.0
latch: checkpoint queue latc 5 .0 0 5 0.0
SQL*Net message to dblink 6,199 .0 0 0 0.4
enq: TX - allocate ITL entry 1 .0 0 22 0.0
direct path read 5,316 .0 0 0 0.3
latch: messages 24 .0 0 1 0.0
enq: US - contention 3 .0 0 4 0.0
direct path write 1,178 .0 0 0 0.1
rdbms ipc reply 1 .0 0 1 0.0
library cache load lock 2 .0 0 0 0.0
direct path write temp 3 .0 0 0 0.0
direct path read temp 3 .0 0 0 0.0
SQL*Net message from client 5,587,890 .0 135,002 24 366.9
wait for unread message on b 7,809 21.8 3,139 402 0.5
LogMiner: client waiting for 262,604 .1 3,021 12 17.2
LogMiner: wakeup event for b 1,405,104 2.4 2,917 2 92.3
Streams AQ: qmn slave idle w 489 .0 2,650 5420 0.0
LogMiner: wakeup event for p 123,723 32.1 2,453 20 8.1
Streams AQ: waiting for time 9 55.6 1,790 198928 0.0
LogMiner: reader waiting for 45,193 51.3 1,526 34 3.0
Streams AQ: waiting for mess 297 99.3 1,052 3542 0.0
Streams AQ: qmn coordinator 470 33.8 1,050 2233 0.0
Streams AQ: delete acknowled 405 32.3 1,049 2591 0.0
jobq slave wait 379 77.8 958 2529 0.0
LogMiner: wakeup event for r 16,591 10.6 125 8 1.1
SGA: MMAN sleep for componen 3,928 99.3 35 9 0.3
SQL*Net message from dblink 6,199 .0 31 5 0.4
single-task message 108 .0 8 74 0.0
class slave wait 3 .0 0 0 0.0
Background Wait Events DB/Inst: xxxx/xxxx Snaps: 18085-18086
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 17,916 .0 558 31 1.2
Log archive I/O 3,592 .0 131 37 0.2
log file sequential read 3,636 .0 47 13 0.2
events in waitclass Other 6,149 42.4 40 7 0.4
log file switch (checkpoint 30 53.3 19 619 0.0
control file parallel write 1,038 .0 15 14 0.1
db file sequential read 1,166 .0 6 5 0.1
control file sequential read 2,986 .0 6 2 0.2
latch: shared pool 4 .0 4 917 0.0
latch: library cache 5 .0 3 646 0.0
free buffer waits 160 98.8 2 10 0.0
buffer busy waits 2 .0 1 404 0.0
latch: redo writing 19 .0 0 23 0.0
log file single write 24 .0 0 13 0.0
os thread startup 3 .0 0 102 0.0
log buffer space 7 .0 0 35 0.0
latch: cache buffers chains 16 .0 0 8 0.0
log file switch completion 1 .0 0 71 0.0
latch: library cache lock 3 66.7 0 11 0.0
latch: redo copy 1 .0 0 20 0.0
direct path read 5,316 .0 0 0 0.3
latch: row cache objects 3 .0 0 1 0.0
direct path write 1,174 .0 0 0 0.1
latch: library cache pin 3 .0 0 0 0.0
rdbms ipc message 20,401 24.2 11,112 545 1.3
Streams AQ: qmn slave idle w 489 .0 2,650 5420 0.0
Streams AQ: waiting for time 9 55.6 1,790 198928 0.0
pmon timer 379 94.5 1,050 2771 0.0
Streams AQ: delete acknowled 406 32.3 1,050 2586 0.0
Streams AQ: qmn coordinator 470 33.8 1,050 2233 0.0
smon timer 146 .0 1,039 7118 0.0
SGA: MMAN sleep for componen 3,928 99.3 35 9 0.3
Operating System Statistics DB/Inst: xxxx/xxxx Snaps: 18085-18086
Statistic Total
AVG_BUSY_TIME 68,992
AVG_IDLE_TIME 37,988
AVG_IOWAIT_TIME 28,529
AVG_SYS_TIME 11,748
AVG_USER_TIME 57,214
BUSY_TIME 552,209
IDLE_TIME 304,181
IOWAIT_TIME 228,489
SYS_TIME 94,253
USER_TIME 457,956
LOAD 2
OS_CPU_WAIT_TIME 147,872,604,500
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 49,152
VM_OUT_BYTES 0
PHYSICAL_MEMORY_BYTES 25,630,269,440
NUM_CPUS 8
NUM_CPU_SOCKETS 8
mbobak wrote:
So, this is a parsing-related wait. You already mentioned that you're doing lots of parsing, mostly soft. Do you have the session_cached_cursors parameter set to a reasonable value? In 10g, I believe the default is 50, which is probably not a bad starting point. You may get additional benefits with moderate increases, perhaps to the 100-200 range. It can be costly to do so, but can the extra parsing be addressed in the application? Is there anything you can do to reduce parsing in the application? When the problem occurs, how is the CPU consumption on the box? Are the CPUs pegged? Are you bottlenecked on CPU resources? Finally, there are bugs around 10.2.0.x and mutexes, so you may want to open an SR w/ Oracle support and determine if the root cause is actually a bug.
Mark,
I think you might read a little more into the stats than you have done - averaging etc. notwithstanding.
There are 8.55 "hard" parses per second - which in 17.96 minutes is about 9,500 hard parses - and there are 1.3M pin S wait on X: which is about 130 per hard parse (and 1.9M pin S). So the average statistics might be showing an interesting impact on individual actions.
The waits on "local write wait" are worth nothing. There are various reasons for this, one of which is the segment header block writes and index root block writes when you truncate a table - which could also be a cause of the "enq: RO - fast object reuse" waits in the body of the report.
Truncating tables tends to invalidate cursors and cause hard parsing.
So I would look for code that is popular, executed from a number of sessions, and truncates tables.
There were some bugs in this area relating to global temporary tables - but they should have been fixed in 10.2.0.4.
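As a follow-up, one hedged way to see which statements the mutex waits attach to (v$active_session_history requires the Diagnostics Pack license):
SELECT sql_id, COUNT(*) AS samples
FROM v$active_session_history
WHERE event = 'cursor: pin S wait on X'
GROUP BY sql_id
ORDER BY samples DESC;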
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
-
Understanding logminer results -- inserting row into table with CLOB field
Using LogMiner I have noticed that inserts of rows that contain a CLOB field (I assume this applies to other LOB-type fields as well; I have only tested with CLOB so far) are actually recorded as two DML entries.
--the first entry is the insert operation that inserts all values with an EMPTY_CLOB() for the CLOB field
--the second entry is the update that sets the actual CLOB value (+this is true even if the value of the CLOB field is not being set explicitly+)
This separation makes sense, as the values may be stored in separate locations, etc.
However, what I am tripping over is the fact that the first entry, the Insert, has a RowId value of 'AAAAAAAAAAAAAAAAAA', which is invalid if I attempt to use it in a flashback query such as:
SELECT * FROM PERSON AS OF SCN ##### WHERE RowId = 'AAAAAAAAAAAAAAAAAA'
The second operation, the Update of the CLOB field, has the valid RowId.
Now, again, this makes sense if the insert of the new row is not really considered "done" until the two steps are done. However, is there some way to group these operations together when analyzing the log contents, to know that these two operations are a "matched set"?
Not a total deal breaker, but would be nice to know what is happening under the hood here so I don't act on any false assumptions.
Thanks for any input.
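One hedged way to treat the pair as a matched set: both operations carry the same transaction identifier, so grouping (or ordering, as the query further down already does) by the XID column ties the INSERT to its companion UPDATE:
SELECT xid, COUNT(*) AS ops, MIN(scn) AS first_scn, MAX(scn) AS last_scn
FROM v$logmnr_contents
WHERE table_name = 'TESTTABLE'
GROUP BY xid;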
To replicate:
Create a table with a CLOB field:
CREATE TABLE DEVUSER.TESTTABLE
(
ID NUMBER
, FULLNAME VARCHAR2(50)
, AGE NUMBER
, DESCRIPTION CLOB
);
Capture the before SCN:
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
Insert a new row in the test table:
INSERT INTO TESTTABLE(ID,FULLNAME,AGE) VALUES(1,'Robert BUILDER',35);
COMMIT;
Capture the after SCN:
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
Start a LogMiner session with the bracketing SCN values and options etc.:
EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN=>2619174, ENDSCN=>2619191, -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.NO_SQL_DELIMITER)Query the logs for the changes in that range:
SELECT
commit_scn, xid,operation,table_name,row_id
,sql_redo,sql_undo, rs_id,ssn
FROM V$LOGMNR_CONTENTS
ORDER BY xid asc, sequence# asc
Results:
2619178 0C00070028000000 START AAAAAAAAAAAAAAAAAA set transaction read write
2619178 0C00070028000000 INSERT TESTTABLE AAAAAAAAAAAAAAAAAA insert into "DEVUSER"."TESTTABLE" ...
2619178 0C00070028000000 UPDATE TESTTABLE AAAFEXAABAAALEJAAB update "DEVUSER"."TESTTABLE" set "DESCRIPTION" = NULL ...
2619178 0C00070028000000 COMMIT AAAAAAAAAAAAAAAAAA commit
Edited by: 958701 on Sep 12, 2012 9:05 AM
Edited by: 958701 on Sep 12, 2012 9:07 AM
Scott,
Thanks for the reply.
I am inserting into the table over a database link.
I am using the new version of HTML Db (2.0)
HTML Db is connected to an Oracle 10 database I think, however the table I am trying to insert data into (via the database link) is in an Oracle 8 database - this is why we created a link to it as we couldn't have the HTML Db interacting with the Oracle 8 database directly due to compatibility problems (or so I've been told)
Simon -
Oracle physru rolling upgrade problem
Hi, I'm having a problem with the Oracle physru script provided in MOS note #949322 that I was hoping I could get some help with.
My system consists of two Oracle 11.1.0.7.0 databases, one primary and one physical standby.
The hosts are two Oracle Solaris 10 Sparc x64 machines.
My goal is to upgrade from 11.1.0.7.0 to 11.1.0.7.8 using the script that is provided; however, I'm bumping into problems at the step where the script checks the apply lag on the logical standby (function "get_apply_lag"). The lag keeps increasing, and to me that indicates a problem with the redo apply process. However, when I query the DBA_LOGSTDBY_EVENTS view I get the following:
SQL> SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS ORDER BY EVENT_TIMESTAMP, COMMIT_SCN;
EVENT_TIME STATUS EVENT
20-SEP-11 16:02:12 ORA-16111: log mining and apply setting up
20-SEP-11 16:02:12 Apply LWM 848622, HWM 848622, SCN 848622
Showing the output from Primary archive dest 2:
SQL> show parameter log_archive_dest_2;
NAME TYPE VALUE
log_archive_dest_2 string service="test08db2", LGWR ASYNC NOAFFIRM delay=0 OPTIONAL compression=DISABLE max_failure=0 max_connections=1 reopen=300 db_unique_name="db_test08db2" net_timeout=30 valid_for=(online_logfile,primary_role)
SQL> select * from v$LOGSTDBY_STATS;
NAME VALUE
logminer session id 1
number of preparers 1
number of appliers 11
server processes in use 15
maximum SGA for LCR cache (MB) 50
maximum events recorded 2000000000
preserve commit order TRUE
transaction consistency FULL
record skipped errors Y
record skipped DDLs Y
record applied DDLs N
NAME VALUE
record unsupported operations Y
realtime apply Y
apply delay (minutes) 0
coordinator state IDLE
coordinator startup time 20-SEP-11 16:02:11
coordinator uptime (seconds) 727
txns received from logminer 62
txns assigned to apply 29
txns applied 29
txns discarded during restart 33
large txns waiting to be assigned 0
NAME VALUE
rolled back txns mined 4
DDL txns mined 2
CTAS txns mined 0
bytes of redo mined 8195884
bytes paged out 0
pageout time (seconds) 0
bytes checkpointed 709802
checkpoint time (seconds) 0
system idle time (seconds) 479
standby redo logs mined 0
archived logs mined 4
NAME VALUE
gap fetched logs mined 2
standby redo log reuse detected 0
logfile open failures 0
current logfile wait (seconds) 0
total logfile wait (seconds) 0
thread enable mined 0
thread disable mined 0
distinct txns in queue 0
41 rows selected.
SQL> select type, high_scn, status, pid from v$logstdby order by type;
TYPE HIGH_SCN STATUS PID
ANALYZER 850320 ORA-16116: no work available 20702
APPLIER 848808 ORA-16116: no work available 20719
APPLIER 850320 ORA-16116: no work available 20731
APPLIER 850204 ORA-16116: no work available 20727
APPLIER 848895 ORA-16116: no work available 20725
APPLIER 848665 ORA-16116: no work available 20705
APPLIER 848677 ORA-16116: no work available 20709
APPLIER 848728 ORA-16116: no work available 20713
APPLIER 848740 ORA-16116: no work available 20715
APPLIER 848796 ORA-16116: no work available 20717
APPLIER 848842 ORA-16116: no work available 20721
TYPE HIGH_SCN STATUS PID
APPLIER 848854 ORA-16116: no work available 20723
BUILDER 850320 ORA-16116: no work available 20221
COORDINATOR 850326 ORA-16116: no work available 20119
PREPARER 850318 ORA-16116: no work available 20223
READER 850326 ORA-16116: no work available 20217
16 rows selected.
Physru Script Output:
### Initialize script to either start over or resume execution
Sep 20 14:35:41 2011 [0-1] Identifying rdbms software version
Sep 20 14:35:41 2011 [0-1] database nobilldb is at version 11.1.0.7.0
Sep 20 14:35:42 2011 [0-1] database nobilldb is at version 11.1.0.7.0
Sep 20 14:35:44 2011 [0-1] verifying flashback database is enabled at db_test08db1 and db_test08db2
Sep 20 14:35:44 2011 [0-1] verifying available flashback restore points
Sep 20 14:35:45 2011 [0-1] verifying DG Broker is disabled
Sep 20 14:35:46 2011 [0-1] looking up prior execution history
Sep 20 14:35:46 2011 [0-1] purging script execution state from database db_test08db1
Sep 20 14:35:46 2011 [0-1] purging script execution state from database db_test08db2
Sep 20 14:35:47 2011 [0-1] starting new execution of script
### Stage 1: Backup user environment in case rolling upgrade is aborted
Sep 20 14:35:47 2011 [1-1] stopping media recovery on db_test08db2
Sep 20 14:35:48 2011 [1-1] creating restore point PRUP_0000_0001 on database db_test08db2
Sep 20 14:35:49 2011 [1-1] backing up current control file on db_test08db2
Sep 20 14:35:50 2011 [1-1] created backup control file /opt/oracle/product/11.1.0.7/dbs/PRUP_0001_db_test08db2_f.f
Sep 20 14:35:50 2011 [1-1] creating restore point PRUP_0000_0001 on database db_test08db1
Sep 20 14:35:51 2011 [1-1] backing up current control file on db_test08db1
Sep 20 14:35:52 2011 [1-1] created backup control file /opt/oracle/product/11.1.0.7/dbs/PRUP_0001_db_test08db1_f.f
NOTE: Restore point PRUP_0000_0001 and backup control file PRUP_0001_db_test08db2_f.f
can be used to restore db_test08db2 back to its original state as a
physical standby, in case the rolling upgrade operation needs to be aborted
prior to the first switchover done in Stage 4.
### Stage 2: Create transient logical standby from existing physical standby
Sep 20 14:35:53 2011 [2-1] verifying RAC is disabled at db_test08db2
Sep 20 14:35:53 2011 [2-1] verifying database roles
Sep 20 14:35:54 2011 [2-1] verifying physical standby is mounted
Sep 20 14:35:54 2011 [2-1] verifying database protection mode
Sep 20 14:35:55 2011 [2-1] verifying transient logical standby datatype support
Sep 20 14:36:00 2011 [2-2] starting media recovery on db_test08db2
Sep 20 14:36:11 2011 [2-2] confirming media recovery is running
Sep 20 14:36:12 2011 [2-2] waiting for v$dataguard_stats view to initialize
Sep 20 14:36:13 2011 [2-2] waiting for apply lag on db_test08db2 to fall below 30 seconds
Sep 20 14:36:13 2011 [2-2] apply lag is now less than 30 seconds
Sep 20 14:36:14 2011 [2-2] stopping media recovery on db_test08db2
Sep 20 14:36:15 2011 [2-2] executing dbms_logstdby.build on database db_test08db1
Sep 20 14:36:27 2011 [2-2] converting physical standby into transient logical standby
Sep 20 14:36:52 2011 [2-3] opening database db_test08db2
Sep 20 14:37:28 2011 [2-4] configuring transient logical standby parameters for rolling upgrade
Sep 20 14:37:29 2011 [2-4] starting logical standby on database db_test08db2
Sep 20 14:37:37 2011 [2-4] waiting until logminer dictionary has fully loaded
Sep 20 14:37:58 2011 [2-4] dictionary load 17% complete
Sep 20 14:38:09 2011 [2-4] dictionary load 32% complete
Sep 20 14:38:19 2011 [2-4] dictionary load 43% complete
Sep 20 14:38:30 2011 [2-4] dictionary load 59% complete
Sep 20 14:38:40 2011 [2-4] dictionary load 62% complete
Sep 20 14:38:50 2011 [2-4] dictionary load 70% complete
Sep 20 14:39:01 2011 [2-4] dictionary load 72% complete
Sep 20 14:39:11 2011 [2-4] dictionary load 75% complete
Sep 20 14:40:54 2011 [2-4] dictionary load is complete
Sep 20 14:41:00 2011 [2-4] waiting for v$dataguard_stats view to initialize
Sep 20 14:41:01 2011 [2-4] waiting for apply lag on db_test08db2 to fall below 30 seconds
Sep 20 14:42:02 2011 [2-4] current apply lag: 316
Sep 20 14:42:32 2011 [2-4] current apply lag: 316
Sep 20 14:43:03 2011 [2-4] current apply lag: 376
Sep 20 14:43:33 2011 [2-4] current apply lag: 376
Sep 20 14:44:03 2011 [2-4] current apply lag: 437
Sep 20 14:44:34 2011 [2-4] current apply lag: 437
Sep 20 14:45:04 2011 [2-4] current apply lag: 497
Sep 20 14:45:35 2011 [2-4] current apply lag: 497
Sep 20 14:46:05 2011 [2-4] current apply lag: 558
Sep 20 14:46:36 2011 [2-4] current apply lag: 558
Sep 20 14:47:06 2011 [2-4] current apply lag: 618
I would appreciate any help I could get, I'm stuck =/
Regards,
http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-upgrades-made-easy-131972.pdf
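Beyond the white paper, a hedged first diagnostic is to confirm that redo keeps arriving and being mined while the lag grows, e.g.:
SELECT name, value FROM v$dataguard_stats WHERE name IN ('apply lag', 'transport lag');
SELECT applied_scn, latest_scn, mining_scn FROM dba_logstdby_progress;
-- if mining_scn advances but applied_scn does not, the bottleneck is on the apply side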