Oracle 11g: insert/update operations are taking more time.
Hello All,
In Oracle 11g (Windows 2008, 32-bit environment) we are facing the following issue.
1) We are inserting/updating data in some tables (4-5 tables) and firing queries at a very high rate.
2) After some time (say 15 days under the same load), we feel that Oracle insert/update operations are taking more time.
Query 1: How do we confirm that Oracle is actually taking more time in insert/update operations?
Query 2: How do we rectify the problem?
We have a multithreaded environment.
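Regarding Query 1, one way (a sketch, not the only approach) to check whether the DML itself is slowing down is to snapshot v$sql periodically over the 15-day window and compare the average elapsed time per execution for the insert/update statements:

```sql
-- Sketch: average elapsed time per execution for DML statements.
-- Run this periodically (e.g. daily) and compare avg_ms over time;
-- 2 = INSERT and 6 = UPDATE in v$sql.command_type.
SELECT sql_id,
       executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1000, 2) AS avg_ms,
       SUBSTR(sql_text, 1, 60) AS sql_text
  FROM v$sql
 WHERE command_type IN (2, 6)
 ORDER BY elapsed_time DESC;
```

If you are licensed for the Diagnostics Pack, AWR (DBA_HIST_SQLSTAT) gives the same comparison with built-in history.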
Thanks
With Regards
Hemant.
Liron Amitzi wrote:
Hi Nicolas,
Just a short explanation:
Suppose you have a table with one column (let's say a number), the table is empty, and you have an index on the column.
When you insert a row, the value of the column is inserted into the index. Inserting one value into an index that holds 10 values is fast; inserting one value into an index that holds 1 million values takes longer.
My second example: take the same table, and say I insert 10 rows and delete the previous 10. I always have 10 rows in the table, so the index should stay small. But this is not correct. If I insert values 1-10, then delete 1-10 and insert 11-20, then delete 11-20 and insert 21-30, and so on, then because the index is sorted, the slots where 1-10 were stored are now empty. Oracle will not fill them up. So the index becomes larger and larger as I insert more rows (even though I delete the old ones).
The solution here is simply to rebuild the index once in a while.
Hope it is clear.
Liron Amitzi
Senior DBA consultant
[www.dbsnaps.com]
[www.orbiumsoftware.com]
Hmmm, index space not reused? Index rebuild once in a while? That was what I understood from your previous post, but nothing is less sure.
This is a misconception of how indexes work.
I would suggest reading the following interesting doc; it has a lot of nice examples (including index space reuse) to understand, and in conclusion:
http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf
"+Index Rebuild Summary+
+•*The vast majority of indexes do not require rebuilding*+
+•Oracle B-tree indexes can become “unbalanced” and need to be rebuilt is a myth+
+•*Deleted space in an index is “deadwood” and over time requires the index to be rebuilt is a myth*+
+•If an index reaches “x” number of levels, it becomes inefficient and requires the index to be rebuilt is a myth+
+•If an index has a poor clustering factor, the index needs to be rebuilt is a myth+
+•To improve performance, indexes need to be regularly rebuilt is a myth+"
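In the same spirit of checking before rebuilding, a common (if imperfect) way to see how much deleted space an index actually carries is to validate its structure and query INDEX_STATS; a sketch, with a hypothetical index name:

```sql
-- Sketch: measure deleted leaf-row space in an index.
-- Caution: ANALYZE ... VALIDATE STRUCTURE locks the table,
-- so run it on a quiet system. idx_t1_n1 is a placeholder name.
ANALYZE INDEX idx_t1_n1 VALIDATE STRUCTURE;

-- INDEX_STATS holds one row, visible only to the session
-- that ran the ANALYZE.
SELECT name,
       height,
       lf_rows,
       del_lf_rows,
       ROUND(100 * del_lf_rows / NULLIF(lf_rows, 0), 1) AS pct_deleted
  FROM index_stats;
```

As the linked paper argues, even a high pct_deleted does not by itself prove a rebuild will help, since deleted space is normally reused by subsequent inserts.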
Good reading,
Nicolas.
Similar Messages
-
Update query which is taking more time
Hi
I am running an update query which is taking more time; any help to run this faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you
Edited by: 960991 on Apr 16, 2013 7:11 AM
960991 wrote:
Hi
I am running an update query which is taking more time; any help to run this faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you
Updates with subqueries can be slow. Get an execution plan for the update to see what SQL is doing.
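Getting that execution plan might look like the following sketch (using the posted table and column names; the unbalanced parenthesis is closed here, and the GROUP BY is dropped because a scalar subquery returns a single group anyway):

```sql
-- Sketch: capture and display the optimizer's plan for the update.
EXPLAIN PLAN FOR
UPDATE arm538e_tmp t
   SET t.qtr5 = (SELECT SUM(NVL(m.net_sales_value, 0)) / 1000
                   FROM mnthly_sales_actvty m
                  WHERE m.vndr# = t.vndr#
                    AND m.cust_type_cd = t.cust_type
                    AND m.cust_type_cd <> 13
                    AND m.yymm BETWEEN 201301 AND 201303);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

A full scan of mnthly_sales_actvty repeated per updated row in that plan would suggest a missing index on (vndr#, cust_type_cd, yymm).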
Some things to look at ...
1. Are you sure you posted the right SQL? I could not "balance" the parentheses - 4 "(" and 3 ")"
2. The unnecessary "(" ")" around "sum" in the subquery are confusing
3. Updates with subqueries can be slow. The t.qtr5 value seems to evaluate to a constant. You might improve performance by computing the value beforehand and using a variable instead of the subquery
4. Subquery appears to be correlated - good! Make sure the subquery is properly indexed if it reads < 20% of the rows in the table (this figure depends on the version of Oracle)
5. Is t.qtr5 part of an index? It is a bad idea to update indexed columns.
-
Update query in SQL taking more time
Hi
I am running an update query which is taking more time; any help to run this faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you
This is the Reports forum. Ask this in the SQL and PL/SQL forum.
-
Query in timesten taking more time than query in oracle database
Hi,
Can anyone please explain why a query in TimesTen is taking more time
than the same query in the Oracle database?
I describe my settings and what I have done, step by step, below.
1. This is the table I created in the Oracle database
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
CREATE TABLE student (
id NUMBER(9) PRIMARY KEY,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2.THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
declare
firstname varchar2(12);
lastname varchar2(12);
catt number(9);
begin
for cntr in 1..2599999 loop
firstname:=(cntr+8)||'f';
lastname:=(cntr+2)||'l';
if cntr like '%9999' then
dbms_output.put_line(cntr);
end if;
insert into student values(cntr,firstname, lastname);
end loop;
end;
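As an aside, loading 2,599,999 rows one INSERT at a time is itself slow; a bulk-bind sketch of the same block (the 10,000-row batch size is an arbitrary assumption) would be:

```sql
-- Sketch: bulk-bind version of the population block.
DECLARE
  TYPE t_num_tab IS TABLE OF NUMBER;
  TYPE t_chr_tab IS TABLE OF VARCHAR2(12);
  v_id    t_num_tab := t_num_tab();
  v_first t_chr_tab := t_chr_tab();
  v_last  t_chr_tab := t_chr_tab();
BEGIN
  FOR cntr IN 1 .. 2599999 LOOP
    v_id.EXTEND;    v_id(v_id.COUNT)       := cntr;
    v_first.EXTEND; v_first(v_first.COUNT) := (cntr + 8) || 'f';
    v_last.EXTEND;  v_last(v_last.COUNT)   := (cntr + 2) || 'l';
    -- flush a batch every 10,000 rows (and at the end)
    IF MOD(cntr, 10000) = 0 OR cntr = 2599999 THEN
      FORALL i IN 1 .. v_id.COUNT
        INSERT INTO student VALUES (v_id(i), v_first(i), v_last(i));
      v_id.DELETE; v_first.DELETE; v_last.DELETE;
    END IF;
  END LOOP;
  COMMIT;
END;
/
```

FORALL sends each batch to the SQL engine in one round trip instead of one context switch per row.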
3. MY DSN IS SET THE FOLLOWING WAY:
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
I SET THE TIMING..
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen takes more time
than the query in the Oracle database?
Message was edited by: Dipesh Majumdar
user542575
Message was edited by:
user542575
TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART (
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken from Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
I uncomment one column at a time and rerun. I improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No Columns ORACLE TimesTen
1 1.05 0.94
2 1.07 1.47
3 2.04 1.8
4 2.06 2.08
5 2.09 2.4
6 3.01 2.67
7 4.02 3.06
8 4.03 3.37
9 4.04 3.62
10 4.06 4.02
11 4.08 4.31
12 4.09 4.61
13 5.01 4.76
14 5.02 5.06
15 5.04 5.25
16 5.05 5.48
17 5.08 5.84
18 6 6.21
19 6.02 6.34
20 6.04 6.75
-
Oracle Drop command taking more time in AIX When compared to Windows
Hi
We are firing a normal DROP command on our database; the database version is 10.2.0.4 and it is running on AIX v5. The command is taking more time than usual.
There are no waits on the session. Let me put the scenario in simpler terms: I have a nested table NT_TEST with a record count of 0, and I drop it using "DROP TABLE NT_TEST". On Windows it gets dropped in, say, 218 milliseconds.
When I drop the same table on AIX, it takes around 6 seconds.
There are no locks, flashback is off, no triggers are involved, and the drop is not waiting on any transaction to complete.
Could anybody help me out on this?
Quite possible. On most Unixes, asynchronous I/O requires purchasing extra options, and most sites just purchase the default O/S. Or asynchronous I/O is not enabled by default, or the database is located on a ufs file system.
You provided no info at all on your AIX configuration. AIX does have asynchronous I/O, it is not always enabled, and the number of slaves is usually way too low.
As root, run smit aio and report the results.
At this point your unfavorable comparison of Windows and AIX is quite unfair. Unix doesn't run out of the box.
You know that famous joke 'If operating systems were airplanes'?
In that joke, the Unix airplane is assembled on the spot, but the Windows airplane is three months late.
Sybrand Bakker
Senior Oracle DBA
-
Taking More Time while inserting into the table (With foreign key)
Hi All,
I am facing problem while inserting the values into the master table.
The problem,
Table A -- User Master Table (Reg No, Name, etc)
Table B -- Transaction Table (Foreign key reference with Table A).
While inserting data into Table B, I need to insert the reg no into Table B as well, which is mandatory. I followed the logic mentioned in the SRDemo.
While inserting, we need to query Table A first to have the values in TableABean.java.
final TableA tableA= (TableA )uow.executeQuery("findUser",TableA .class, regNo);
Then, we need to create the instance for TableB
TableB tableB= (TableB)uow.newInstance(TableB.class);
tableB.setID(bean.getID);
tableA.addTableB(tableB); --- this is to insert the regNo of TableA into TableB. This line executes the query "select * from TableB where RegNo = <tableA.getRegNo>".
This query takes too much time when TableB has many rows for that particular registration no. Because of this, inserting into TableB takes more time.
For example: regNo 101 has few entries in TableB, so inserting a record takes less than 1 sec;
regNo 102 has more entries in TableB, so inserting a record takes more than 2 sec.
The time delay differs between users when they enter transactions in TableB.
I need to avoid this, since in future it will take even more time (from 2 sec to 10 sec) if the volume of data increases.
Please help me to resolve this issue...I am facing it now in production.
Thanks & Regards
VB
Hello,
Looks like you have a 1:M relationship from TableA to TableB, with a 1:1 back pointer from TableB to TableA. If triggering the 1:M relationship is causing delays you want to avoid, there are two quick ways I can see:
1) Don't map it. Leave the TableA->TableB 1:M unmapped, and instead just query for the relationship when you do need it. This means you do not need to call tableA.addTableB(tableB); you only need to call tableB.setTableA(tableA), so that the TableB->TableA relation gets set. This might not be the best option, depending on your application's usage, but it does allow you to page the TableB results or add other query performance options when you do need the data.
2) You are currently using lazy loading for the TableA->TableB relationship - if it is untriggered, don't bother calling tableA.addTableB(tableB); only call tableB.setTableA(tableA). This requires using the TopLink API to a) verify the collection is an IndirectContainer type, and b) check that it hasn't been triggered. If it has been triggered, you will still need to call tableA.addTableB(tableB), but it won't result in a query. Check out the oracle.toplink.indirection.IndirectContainer class and its isInstantiated() method. This can cause problems in highly concurrent environments, though, as other threads may have triggered the indirection before you commit your transaction, so that the A->B collection is not up to date - this might require refreshing TableA if so.
Change tracking would probably be the best option to use here, and is described in the EclipseLink wiki:
http://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Attribute_Change_Tracking_Policy
Best Regards,
Chris
-
My update query is taking more tablespace, so please advise
Hi All,
My update statement suddenly started consuming more tablespace, so I got the error 'unable to extend the tablespace'. After extending the tablespace, the update completed, but the extended tablespace also filled up immediately. Can you please suggest some solutions for this case?
"-- Proc to update the lvl 2 dfus in the dfuprojdraftstatic table with cust orders
procedure update_lvl2_dpds_custorders is
cv cur_type;
v_lvl1_dmdgroup varchar2(50) := null;
v_lvl1_loc varchar2(50) := null;
v_first_one number;
a number;
begin
for x in (select distinct dmdgroup lvl2_dmdgroup, loc lvl2_loc
from dfuprojdraftstatic s
where (dmdunit,dmdgroup,loc) in (select dmdunit,dmdgroup,loc from dfuview v
where u_dfu_lvl = 2)
and dmdunit in (select dmdunit from dmdunit
where u_moe = in_moe) ) loop
v_first_one := 1;
-- Using dynamic sql, loop through the lvl 1 dmdgroups and locs for this lvl 2 dmdgroup and loc
--open cv for v_sql_stmt using v_cntry_cd, x.lvl2_dmdgroup, v_cntry_cd, x.lvl2_loc;
open cv for v_sql_stmt using x.lvl2_dmdgroup,in_moe;
loop
fetch cv into v_lvl1_dmdgroup, v_lvl1_loc;
exit when cv%notfound;
UPDATE dfuprojdraftstatic s
SET u_cust_ordr = (SELECT decode(v_first_one, 1, 0, u_cust_ordr) + nvl(sum(qty), 0)
FROM cust_orders o, mars_week_cal c
WHERE o.mrkt_moe = in_moe
AND o.dmdunit = s.dmdunit
AND o.dmdgroup = v_lvl1_dmdgroup
AND o.loc = v_lvl1_loc
AND o.start_date BETWEEN c.week_start_dat AND c.week_end_dat
AND c.week_start_dat = s.startdate
WHERE startdate >= v_max_dpd
AND (dmdunit,dmdgroup,loc) in (select dmdunit,dmdgroup,loc from dfuview v
where u_dfu_lvl = 2 )
AND dmdunit in (select dmdunit from dmdunit
where u_moe = in_moe )
AND s.dmdgroup = x.lvl2_dmdgroup
AND s.loc = x.lvl2_loc;
dbms_output.put_line('A '||a+1);
if v_first_one = 1 then
v_first_one := 0;
end if;
end loop;
close cv;
end loop;
commit;
end; -- update_lvl2_dpds_custorders
Thanks and Regards,
Sudhakar.M
Without seeing the full code (for example, the definition of v_sql_stmt), and without knowing your tables and data, my guess would be that, based on the code provided, the update process is extremely inefficient and is likely updating the same row multiple times in this procedure, and even more likely across the entire set of procedures.
Based on the error message, it appears that you are running a package called batch_upd_dfuprojdraftstatic. Based on the name of the procedure you show, I also suspect that there are several other procedures that update single columns of dfuprojdraftstatic one after the other.
I know that the table dfuprojdraftstatic is at least part of the base for a materialized view (mlog$_dfuprojdraftstatic is the materialized view log attached to the table to track changes for refreshing the materialized view). For each update you issue, Oracle creates a record in the materialized view log tracking what changed. If you do "update col1", that creates a record in the log for each row; if you subsequently do "update col2", that creates a second record for each row. However, if you do "update col1, col2", that creates only a single record per row. So, updating all of the columns requiring updates at the same time will reduce the number of materialized view log entries, which should reduce the space required.
You may also want to look at purging some of the data from the log using the dbms_mview package.
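John's two suggestions can be sketched as follows (col1 and col2 are hypothetical placeholders for the real columns updated by the separate procedures, as is the predicate):

```sql
-- Sketch: combine the single-column updates so each row generates
-- one mlog$_ record instead of one record per column updated.
UPDATE dfuprojdraftstatic
   SET col1 = 0,
       col2 = 0
 WHERE startdate >= SYSDATE;  -- placeholder predicate

-- Sketch: purge materialized view log entries that all registered
-- materialized views have already consumed.
EXEC DBMS_MVIEW.PURGE_LOG(master => 'DFUPROJDRAFTSTATIC');
```

Purging only removes rows no registered materialized view still needs, so it is safe to run between refreshes.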
John -
Hi
update TBLMCUSTOMERUSAGEDETAIL set UNUSEDVALUE=0 where CUSTOMERUSAGEDETAILID is not null;
this table has only 440 records, but the update is taking more time and does not complete.
What is the reason for this?
Your early response is appreciated.
Thanks in advance
Edited by: user647572 on Sep 13, 2010 4:13 AM
Steps to generate a 10046 trace
SQL >alter session set max_dump_file_size=unlimited;
SQL >alter session set timed_statistics=true;
SQL >alter session set events '10046 trace name context forever, level 12';
SQL > ------- execute the query-------
SQL > ---If it completes just EXIT
A tracefile will be generated in your user_dump_dest
Format this file with TKPROF
$ tkprof <output_from_10046> <new_file_name> sys=no explain=<username>/<password>
* username/password of the user who executed the query
Upload ALL the tracefiles to Metalink.
1) Check what it is waiting on.
2) Where is more time spent: parse, execute, or fetch?
This is how you can do the analysis.
Kind Regards,
Rakesh Jayappa
Edited by: Rakesh jayappa on Sep 13, 2010 11:52 PM
-
Custom Report taking more time to complete Normal
Hi All,
A custom report (Aging Report) in Oracle is taking more time to complete Normal.
In one instance, the report takes 5 min; in the other instance it takes 40-50 min to complete.
We have enabled the trace and checked the trace file, but all the queries are working fine.
Could you please suggest me regarding this issue.
Thanks in advance!!
TKPROF: Release 10.1.0.5.0 - Production on Tue Jun 5 10:49:32 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
Error in CREATE TABLE of EXPLAIN PLAN table: APPS.prof$plan_table
ORA-00922: missing or invalid option
parse error offset: 1049
EXPLAIN PLAN option disabled.
SELECT DISTINCT OU.ORGANIZATION_ID , OU.BUSINESS_GROUP_ID
FROM
HR_OPERATING_UNITS OU WHERE OU.SET_OF_BOOKS_ID =:B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.05 11 22 0 3
total 3 0.00 0.05 11 22 0 3
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
3 HASH UNIQUE (cr=22 pr=11 pw=0 time=52023 us cost=10 size=66 card=1)
3 NESTED LOOPS (cr=22 pr=11 pw=0 time=61722 us)
3 NESTED LOOPS (cr=20 pr=11 pw=0 time=61672 us cost=9 size=66 card=1)
3 NESTED LOOPS (cr=18 pr=11 pw=0 time=61591 us cost=7 size=37 card=1)
3 NESTED LOOPS (cr=16 pr=11 pw=0 time=61531 us cost=7 size=30 card=1)
3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=11 pr=9 pw=0 time=37751 us cost=6 size=22 card=1)
18 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK1 (cr=1 pr=1 pw=0 time=17135 us cost=1 size=0 card=18)(object id 43610)
3 TABLE ACCESS BY INDEX ROWID HR_ALL_ORGANIZATION_UNITS (cr=5 pr=2 pw=0 time=18820 us cost=1 size=8 card=1)
3 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK (cr=2 pr=0 pw=0 time=26 us cost=0 size=0 card=1)(object id 43657)
3 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK (cr=2 pr=0 pw=0 time=32 us cost=0 size=7 card=1)(object id 44020)
3 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=2 pr=0 pw=0 time=52 us cost=1 size=0 card=1)(object id 330960)
3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=2 pr=0 pw=0 time=26 us cost=2 size=29 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 11 0.01 0.05
asynch descriptor resize 2 0.00 0.00
INSERT INTO FND_LOG_MESSAGES ( ECID_ID, ECID_SEQ, CALLSTACK, ERRORSTACK,
MODULE, LOG_LEVEL, MESSAGE_TEXT, SESSION_ID, USER_ID, TIMESTAMP,
LOG_SEQUENCE, ENCODED, NODE, NODE_IP_ADDRESS, PROCESS_ID, JVM_ID, THREAD_ID,
AUDSID, DB_INSTANCE, TRANSACTION_CONTEXT_ID )
VALUES
( SYS_CONTEXT('USERENV', 'ECID_ID'), SYS_CONTEXT('USERENV', 'ECID_SEQ'),
:B16 , :B15 , SUBSTRB(:B14 ,1,255), :B13 , SUBSTRB(:B12 , 1, 4000), :B11 ,
NVL(:B10 , -1), SYSDATE, FND_LOG_MESSAGES_S.NEXTVAL, :B9 , SUBSTRB(:B8 ,1,
60), SUBSTRB(:B7 ,1,30), SUBSTRB(:B6 ,1,120), SUBSTRB(:B5 ,1,120),
SUBSTRB(:B4 ,1,120), :B3 , :B2 , :B1 ) RETURNING LOG_SEQUENCE INTO :O0
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 20 0.00 0.03 4 1 304 20
Fetch 0 0.00 0.00 0 0 0 0
total 21 0.00 0.03 4 1 304 20
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 LOAD TABLE CONVENTIONAL (cr=1 pr=4 pw=0 time=36498 us)
1 SEQUENCE FND_LOG_MESSAGES_S (cr=0 pr=0 pw=0 time=24 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 4 0.01 0.03
SELECT MESSAGE_TEXT, MESSAGE_NUMBER, TYPE, FND_LOG_SEVERITY, CATEGORY,
SEVERITY
FROM
FND_NEW_MESSAGES M, FND_APPLICATION A WHERE :B3 = M.MESSAGE_NAME AND :B2 =
M.LANGUAGE_CODE AND :B1 = A.APPLICATION_SHORT_NAME AND M.APPLICATION_ID =
A.APPLICATION_ID
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.03 4 12 0 2
total 5 0.00 0.03 4 12 0 2
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 NESTED LOOPS (cr=6 pr=2 pw=0 time=15724 us cost=3 size=134 card=1)
1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=20 us cost=1 size=9 card=1)
1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
1 TABLE ACCESS BY INDEX ROWID FND_NEW_MESSAGES (cr=4 pr=2 pw=0 time=15693 us cost=2 size=125 card=1)
1 INDEX UNIQUE SCAN FND_NEW_MESSAGES_PK (cr=3 pr=1 pw=0 time=6386 us cost=1 size=0 card=1)(object id 34367)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 4 0.00 0.03
DELETE FROM MO_GLOB_ORG_ACCESS_TMP
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.02 3 4 4 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 3 4 4 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
0 DELETE MO_GLOB_ORG_ACCESS_TMP (cr=4 pr=3 pw=0 time=29161 us)
1 TABLE ACCESS FULL MO_GLOB_ORG_ACCESS_TMP (cr=3 pr=2 pw=0 time=18165 us cost=2 size=13 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 3 0.01 0.02
SET TRANSACTION READ ONLY
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.01 0 0 0 0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
begin Fnd_Concurrent.Init_Request; end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 148 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 148 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
log file sync 1 0.01 0.01
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
declare X0rv BOOLEAN; begin X0rv := FND_INSTALLATION.GET(:APPL_ID,
:DEP_APPL_ID, :STATUS, :INDUSTRY); :X0 := sys.diutil.bool_to_int(X0rv);
end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 9 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 9 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 8 0.00 0.00
SQL*Net message from client 8 0.00 0.00
begin fnd_global.bless_next_init('FND_PERMIT_0000');
fnd_global.initialize(:session_id, :user_id, :resp_id, :resp_appl_id,
:security_group_id, :site_id, :login_id, :conc_login_id, :prog_appl_id,
:conc_program_id, :conc_request_id, :conc_priority_request, :form_id,
:form_application_id, :conc_process_id, :conc_queue_id, :queue_appl_id,
:server_id); fnd_profile.put('ORG_ID', :org_id);
fnd_profile.put('MFG_ORGANIZATION_ID', :mfg_org_id);
fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :coa);
fnd_profile.put('APPS_MAINTENANCE_MODE', :amm); end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 8 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 8 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SELECT FPI.STATUS, FPI.INDUSTRY, FPI.PRODUCT_VERSION, FOU.ORACLE_USERNAME,
FPI.TABLESPACE, FPI.INDEX_TABLESPACE, FPI.TEMPORARY_TABLESPACE,
FPI.SIZING_FACTOR
FROM
FND_PRODUCT_INSTALLATIONS FPI, FND_ORACLE_USERID FOU, FND_APPLICATION FA
WHERE FPI.APPLICATION_ID = FA.APPLICATION_ID AND FPI.ORACLE_ID =
FOU.ORACLE_ID AND FA.APPLICATION_SHORT_NAME = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 7 0 1
total 4 0.00 0.00 0 7 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 NESTED LOOPS (cr=7 pr=0 pw=0 time=89 us)
1 NESTED LOOPS (cr=6 pr=0 pw=0 time=93 us cost=4 size=76 card=1)
1 NESTED LOOPS (cr=5 pr=0 pw=0 time=77 us cost=3 size=67 card=1)
1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=19 us cost=1 size=9 card=1)
1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
1 TABLE ACCESS BY INDEX ROWID FND_PRODUCT_INSTALLATIONS (cr=3 pr=0 pw=0 time=51 us cost=2 size=58 card=1)
1 INDEX RANGE SCAN FND_PRODUCT_INSTALLATIONS_PK (cr=2 pr=0 pw=0 time=27 us cost=1 size=0 card=1)(object id 22583)
1 INDEX UNIQUE SCAN FND_ORACLE_USERID_U1 (cr=1 pr=0 pw=0 time=7 us cost=0 size=0 card=1)(object id 22597)
1 TABLE ACCESS BY INDEX ROWID FND_ORACLE_USERID (cr=1 pr=0 pw=0 time=7 us cost=1 size=9 card=1)
SELECT P.PID, P.SPID, AUDSID, PROCESS, SUBSTR(USERENV('LANGUAGE'), INSTR(
USERENV('LANGUAGE'), '.') + 1)
FROM
V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.AUDSID =
USERENV('SESSIONID')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1883 us cost=1 size=108 card=2)
1 HASH JOIN (cr=0 pr=0 pw=0 time=1869 us cost=1 size=100 card=2)
1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1059 us cost=0 size=58 card=2)
182 FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=285 us cost=0 size=1288 card=161)
1 FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=617 us cost=0 size=21 card=1)
181 FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=187 us cost=0 size=10500 card=500)
1 FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=4 us cost=0 size=4 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
asynch descriptor resize 2 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 6 0.00 0.00 0 0 0 0
Execute 6 0.01 0.02 0 165 0 4
Fetch 1 0.00 0.00 0 0 0 1
total 13 0.01 0.02 0 165 0 5
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 37 0.00 0.00
SQL*Net message from client 37 1.21 2.19
log file sync 1 0.01 0.01
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 49 0.00 0.00 0 0 0 0
Execute 89 0.01 0.07 7 38 336 24
Fetch 29 0.00 0.09 15 168 0 27
total 167 0.02 0.16 22 206 336 51
Misses in library cache during parse: 3
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
asynch descriptor resize 6 0.00 0.00
db file sequential read 22 0.01 0.14
48 user SQL statements in session.
1 internal SQL statements in session.
49 SQL statements in session.
0 statements EXPLAINed in this session.
Trace file compatibility: 10.01.00
Sort options: prsela exeela fchela
1 session in tracefile.
48 user SQL statements in trace file.
1 internal SQL statements in trace file.
49 SQL statements in trace file.
48 unique SQL statements in trace file.
928 lines in trace file.
1338833753 elapsed seconds in trace file. -
Snapshot Refresh taking More Time
Dear All,
We are facing a Snapshot refresh problem currently in Production Environment.
Oracle Version : Oracle8i Enterprise Edition Release 8.1.6.1.0
Currently we have created a Snapshot on a Join with 2 remote tables using Synonyms.
ex:
CREATE SNAPSHOT XYZ REFRESH COMPLETE WITH ROWID
AS
SELECT a.* FROM SYN1 a, SYN2 b
Where b.ACCT_NO=a.ACCT_NO;
We have created an index on the above snapshot XYZ:
Create index XYZ_IDX1 on XYZ (ACCT_NO);
a. The explain plan of the above query shows the index scan on SYN1.
If we run the above SELECT statement directly, it hardly takes 2 seconds to execute.
b. But the Complete Refresh of Snapshot XYZ is almost taking 20 Mins for just truncating and inserting 500 records and is generating huge Disk Reads as SYN1 in remote table consists of 32 Million records whereas SYN2 contains only 500 Records.
If we truncate and insert into a table ourselves, as performed by the complete refresh of the snapshot, it hardly takes 4 seconds to refresh the table.
Please let me know what might be the possible reasons for the complete refresh of the snapshot taking more time.
Dear All,
While refreshing the Snapshot XYZ,I could find the following.
a. Sort/Merge operation was performed while inserting the data into Snapshot.
INSERT /*+ APPEND */ INTO "XYZ"
SELECT a.* FROM SYN1 a, SYN2 b Where b.ACCT_NO=a.ACCT_NO;
The above operation performed huge disk reads.
b. By increasing the session parameter SORT_AREA_SIZE, the time decreased by 50%, but the disk reads are still huge.
I would like to know why a sort/merge operation is performed by the above INSERT.
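A sort/merge or hash join here usually means the 32-million-row remote table is being shipped across the database link and joined locally, which would explain the huge disk reads. A hedged sketch of one thing worth testing (object names are taken from the post; note that Oracle has historically ignored DRIVING_SITE inside DML, so test the SELECT on its own first):

```sql
-- Hypothetical sketch: ask Oracle to execute the join at the remote site,
-- so only the ~500 matching rows travel over the database link.
SELECT /*+ DRIVING_SITE(a) */ a.*
FROM   SYN1 a, SYN2 b
WHERE  b.ACCT_NO = a.ACCT_NO;
```

If the remote-driven SELECT is fast, one workaround is to materialize its result into a local staging table and refresh the snapshot from that.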
Edited by: Prashanth Deshmukh on Mar 13, 2009 10:54 AM
Edited by: Prashanth Deshmukh on Mar 13, 2009 10:55 AM -
Bulk Collect taking more time. Please suggest.
I am working on oracle 11g
I have one normal insert proc
CREATE OR REPLACE PROCEDURE test2
AS
BEGIN
INSERT INTO first_table
(citiversion, financialcollectionid,
dataitemid, dataitemvalue,
unittypeid, financialinstanceid,
VERSION, providerid, user_comment,
userid, insert_timestamp,
latestflag, finalflag, filename,
final_ytdltm_flag, nmflag, partition_key )
SELECT citiversion, financialcollectionid,
dataitemid, dataitemvalue, unittypeid,
new_fi, VERSION, providerid,
user_comment, userid,
insert_timestamp, latestflag,
finalflag, filename, '', nmflag,1
FROM secon_table
WHERE financialinstanceid = 1
AND changeflag = 'A'
AND processed_flg = 'N';
END test2;
To improve performance I converted the normal INSERT INTO ... SELECT into a BULK COLLECT version:
CREATE OR REPLACE PROCEDURE test
AS
BEGIN
DECLARE
CURSOR get_cat_fin_collection_data(n_instanceid NUMBER) IS
SELECT citiversion,
financialcollectionid,
dataitemid,
dataitemvalue,
unittypeid,
new_fi,
VERSION,
providerid,
user_comment,
userid,
insert_timestamp,
latestflag,
finalflag,
filename,
nmflag,
1
FROM secon_table
WHERE financialinstanceid = n_instanceid
AND changeflag = 'A'
AND processed_flg = 'N';
TYPE data_array IS TABLE OF get_cat_fin_collection_data%ROWTYPE;
l_data data_array;
BEGIN
OPEN get_cat_fin_collection_data(1);
LOOP
FETCH get_cat_fin_collection_data BULK COLLECT
INTO l_data limit 100;
FORALL i IN 1 .. l_data.COUNT
INSERT INTO first_table VALUES l_data (i);
EXIT WHEN l_data.count =0;
END LOOP;
CLOSE get_cat_fin_collection_data;
END;
END test;
But the bulk collect version is taking more time.
Below are the timings:
SQL> set timing on
SQL> exec test
PL/SQL procedure successfully completed
Executed in 16.703 seconds
SQL> exec test2
PL/SQL procedure successfully completed
Executed in 9.406 seconds
SQL> rollback;
Rollback complete
Executed in 2.75 seconds
SQL> exec test
PL/SQL procedure successfully completed
Executed in 16.266 seconds
SQL> rollback;
Rollback complete
Executed in 2.812 seconds
Normal insert: 9.4 seconds
Bulk insert: 16.266 seconds
I am processing 1 lakh (100,000) rows.
Can you please tell me the reason why bulk collect is taking more time? According to my knowledge it should take less time.
Please suggest, do I need to check any parameter?
Please help.
Edited by: 976747 on Feb 4, 2013 1:12 AM
Can you please tell me the reason why bulk collect is taking more time? According to my knowledge it should take less time.
Please suggest, do I need to check any parameter?
In that case, your knowledge is flawed.
Pure SQL is almost always faster than PL/SQL.
If your INSERT INTO ... SELECT is executing slowly, then it is probably because the SELECT statement is taking a long time to execute. How many rows are being selected and inserted by your query?
You might also consider tuning the Select statement. For more information on Posting a Tuning request, read {message:id=3292438} and post the relevant information. -
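To make the "pure SQL is faster" point concrete, a hedged sketch of the single-statement form with a direct-path insert (the APPEND hint is an addition, not something from the original procedure; it assumes no conflicting triggers or constraints and that an exclusive lock on first_table during the insert is acceptable):

```sql
-- One SQL statement: no PL/SQL loop, no SQL<->PL/SQL context switches.
-- APPEND requests a direct-path insert above the high-water mark.
INSERT /*+ APPEND */ INTO first_table
  (citiversion, financialcollectionid, dataitemid, dataitemvalue,
   unittypeid, financialinstanceid, version, providerid, user_comment,
   userid, insert_timestamp, latestflag, finalflag, filename,
   final_ytdltm_flag, nmflag, partition_key)
SELECT citiversion, financialcollectionid, dataitemid, dataitemvalue,
       unittypeid, new_fi, version, providerid, user_comment, userid,
       insert_timestamp, latestflag, finalflag, filename, '', nmflag, 1
FROM   secon_table
WHERE  financialinstanceid = 1
AND    changeflag = 'A'
AND    processed_flg = 'N';
COMMIT;  -- direct-path inserted rows cannot be read until committed
```

The BULK COLLECT / FORALL pattern only pays off when you must apply procedural logic per batch; when the statement is a straight copy, the extra fetch loop is pure overhead.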
Dear Gurus/Masters/All,
I request your valuable assistance in tuning one of my SQL statements.
We are using Oracle 10.2.0.4 and the OS is HP-UX 11.23 (ia64).
In my production environment one SQL statement is taking more time to complete. According to the EXPLAIN PLAN, I observed that the execution of one of its WHERE conditions is causing the issue.
I took the explain plan of the WHERE condition which is causing the issue. It is doing a full table scan to satisfy the criteria, but a normal index exists on this column.
Main Query WHERE condition and Explain Plan.
SELECT column list ....
FROM
SIEBEL.S_ADDR_PER T1,
SIEBEL.S_PTY_PAY_PRFL T2,
SIEBEL.S_INVLOC T3,
SIEBEL.S_ORDER T4,
SIEBEL.S_ORG_EXT T5,
SIEBEL.S_POSTN T6,
SIEBEL.S_PARTY T7,
SIEBEL.S_PROJ T8,
SIEBEL.S_CON_ADDR T9,
SIEBEL.S_ORG_EXT T10,
SIEBEL.S_USER T11,
SIEBEL.S_DOC_QUOTE T12,
SIEBEL.S_ACCNT_POSTN T13,
SIEBEL.S_INS_CLAIM T14,
SIEBEL.S_USER T15,
SIEBEL.S_ORG_EXT T16,
SIEBEL.S_ASSET T17,
SIEBEL.S_ORDER_TNTX T18,
SIEBEL.S_ORG_EXT_TNTX T19,
SIEBEL.S_PERIOD T20,
SIEBEL.S_DEPOSIT_TNT T21,
SIEBEL.S_ADDR_PER T22,
SIEBEL.S_PAYMENT_TERM T23,
SIEBEL.S_ORG_EXT_X T24,
SIEBEL.S_ORG_EXT T25,
SIEBEL.S_INSCLM_ELMNT T26,
SIEBEL.S_INVOICE T27
WHERE
T25.BU_ID = T10.PAR_ROW_ID (+) AND
T26.INSCLM_ID = T14.ROW_ID (+) AND
T27.ELEMENT_ID = T26.ROW_ID (+) AND
T27.LAST_UPD_BY = T15.PAR_ROW_ID (+) AND
T4.QUOTE_ID = T12.ROW_ID (+) AND
T3.CG_ASSSET_ID = T17.ROW_ID (+) AND
T27.BL_ADDR_ID = T22.ROW_ID (+) AND
T8.BU_ID = T5.PAR_ROW_ID (+) AND
T27.PER_PAY_PRFL_ID = T2.ROW_ID (+) AND
T27.REMIT_ORG_EXT_ID = T16.PAR_ROW_ID (+) AND
T27.PROJ_ID = T8.ROW_ID (+) AND
T27.BL_PERIOD_ID = T20.ROW_ID (+) AND
T27.PAYMENT_TERM_ID = T23.ROW_ID (+) AND
T12.BU_ID = T19.PAR_ROW_ID (+) AND
T27.ACCNT_ID = T25.PAR_ROW_ID (+) AND
T27.ORDER_ID = T18.ROW_ID (+) AND
T4.SRC_INVLOC_ID = T3.ROW_ID (+) AND
T27.ORDER_ID = T4.ROW_ID (+) AND
T27.ACCNT_ID = T24.PAR_ROW_ID (+) AND
T18.PR_DEPOSIT_ID = T21.ROW_ID (+) AND
T27.BL_ADDR_ID = T9.ADDR_PER_ID (+) AND T27.ACCNT_ID = T9.ACCNT_ID (+) AND
T27.BL_ADDR_ID = T1.ROW_ID (+) AND
T25.PR_POSTN_ID = T13.POSITION_ID (+) AND T25.ROW_ID = T13.OU_EXT_ID (+) AND
T13.POSITION_ID = T7.ROW_ID (+) AND
T13.POSITION_ID = T6.PAR_ROW_ID (+) AND
T6.PR_EMP_ID = T11.PAR_ROW_ID (+) AND
(T27.INVC_TYPE_CD = :1)
ORDER BY
T27.INVC_DT;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2576210427
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 39M| 71G| | 624M (1)|278:42:59 |
| 1 | SORT ORDER BY | | 39M| 71G| 150G| 624M (1)|278:42:59 |
| 2 | NESTED LOOPS OUTER | | 39M| 71G| | 610M (1)|272:11:24 |
| 3 | NESTED LOOPS OUTER | | 39M| 70G| | 515M (1)|229:48:41 |
| 4 | NESTED LOOPS OUTER | | 39M| 69G| | 483M (1)|215:41:04 |
| 5 | NESTED LOOPS OUTER | | 39M| 68G| | 483M (1)|215:41:04 |
| 6 | NESTED LOOPS OUTER | | 39M| 67G| | 483M (1)|215:41:04 |
| 7 | NESTED LOOPS OUTER | | 39M| 66G| | 406M (1)|181:17:50 |
| 8 | NESTED LOOPS OUTER | | 39M| 65G| | 343M (1)|153:12:57 |
| 9 | NESTED LOOPS OUTER | | 39M| 64G| | 311M (1)|139:04:56 |
| 10 | NESTED LOOPS OUTER | | 39M| 63G| | 185M (1)| 82:37:56 |
| 11 | NESTED LOOPS OUTER | | 39M| 54G| | 108M (1)| 48:11:29 |
| 12 | NESTED LOOPS OUTER | | 39M| 53G| | 108M (1)| 48:11:29 |
| 13 | NESTED LOOPS OUTER | | 39M| 51G| | 76M (1)| 34:03:51 |
| 14 | NESTED LOOPS OUTER | | 39M| 49G| | 76M (1)| 34:03:51 |
| 15 | NESTED LOOPS OUTER | | 39M| 46G| | 76M (1)| 34:03:51 |
| 16 | NESTED LOOPS OUTER | | 39M| 44G| | 76M (1)| 34:03:51 |
| 17 | NESTED LOOPS OUTER | | 39M| 40G| | 65M (1)| 29:25:49 |
| 18 | NESTED LOOPS OUTER | | 39M| 39G| | 65M (1)| 29:25:49 |
| 19 | NESTED LOOPS OUTER | | 39M| 38G| | 65M (1)| 29:25:49 |
| 20 | NESTED LOOPS OUTER | | 39M| 34G| | 65M (1)| 29:17:44 |
| 21 | NESTED LOOPS OUTER | | 39M| 32G| | 65M (1)| 29:17:08 |
| 22 | NESTED LOOPS OUTER | | 39M| 31G| | 65M (1)| 29:09:04 |
| 23 | NESTED LOOPS OUTER | | 39M| 30G| | 2043K (9)| 00:54:42 |
| 24 | NESTED LOOPS OUTER | | 39M| 30G| | 2043K (9)| 00:54:42 |
| 25 | NESTED LOOPS OUTER | | 39M| 25G| | 2015K (7)| 00:53:57 |
| 26 | NESTED LOOPS OUTER | | 39M| 22G| | 2015K (7)| 00:53:57 |
| 27 | NESTED LOOPS OUTER | | 39M| 16G| | 2015K (7)| 00:53:57 |
|* 28 | TABLE ACCESS FULL | S_INVOICE | 39M| 9G| | 2015K (7)| 00:53:57 |
| 29 | TABLE ACCESS BY INDEX ROWID| S_PROJ | 1 | 188 | | 1 (0)| 00:00:01 |
|* 30 | INDEX UNIQUE SCAN | S_PROJ_P1 | 1 | | | 1 (0)| 00:00:01 |
| 31 | TABLE ACCESS BY INDEX ROWID | S_PAYMENT_TERM | 1 | 156 | | 1 (0)| 00:00:01 |
|* 32 | INDEX UNIQUE SCAN | S_PAYMENT_TERM_P1 | 1 | | | 1 (0)| 00:00:01 |
| 33 | TABLE ACCESS BY INDEX ROWID | S_INSCLM_ELMNT | 1 | 77 | | 1 (0)| 00:00:01 |
|* 34 | INDEX UNIQUE SCAN | S_INSCLM_ELMNT_P1 | 1 | | | 1 (0)| 00:00:01 |
| 35 | TABLE ACCESS BY INDEX ROWID | S_INS_CLAIM | 1 | 134 | | 1 (0)| 00:00:01 |
|* 36 | INDEX UNIQUE SCAN | S_INS_CLAIM_P1 | 1 | | | 1 (0)| 00:00:01 |
| 37 | TABLE ACCESS BY INDEX ROWID | S_PERIOD | 1 | 19 | | 1 (0)| 00:00:01 |
|* 38 | INDEX UNIQUE SCAN | S_PERIOD_P1 | 1 | | | 1 (0)| 00:00:01 |
| 39 | TABLE ACCESS BY INDEX ROWID | S_USER | 1 | 25 | | 2 (0)| 00:00:01 |
|* 40 | INDEX UNIQUE SCAN | S_USER_U2 | 1 | | | 1 (0)| 00:00:01 |
| 41 | TABLE ACCESS BY INDEX ROWID | S_ORDER_TNTX | 1 | 26 | | 2 (0)| 00:00:01 |
|* 42 | INDEX UNIQUE SCAN | S_ORDER_TNTX_P1 | 1 | | | 1 (0)| 00:00:01 |
| 43 | TABLE ACCESS BY INDEX ROWID | S_DEPOSIT_TNT | 1 | 45 | | 1 (0)| 00:00:01 |
|* 44 | INDEX UNIQUE SCAN | S_DEPOSIT_TNT_P1 | 1 | | | 1 (0)| 00:00:01 |
| 45 | TABLE ACCESS BY INDEX ROWID | S_ORDER | 1 | 101 | | 2 (0)| 00:00:01 |
|* 46 | INDEX UNIQUE SCAN | S_ORDER_P1 | 1 | | | 1 (0)| 00:00:01 |
| 47 | TABLE ACCESS BY INDEX ROWID | S_INVLOC | 1 | 47 | | 1 (0)| 00:00:01 |
|* 48 | INDEX UNIQUE SCAN | S_INVLOC_P1 | 1 | | | 1 (0)| 00:00:01 |
| 49 | TABLE ACCESS BY INDEX ROWID | S_DOC_QUOTE | 1 | 21 | | 1 (0)| 00:00:01 |
|* 50 | INDEX UNIQUE SCAN | S_DOC_QUOTE_P1 | 1 | | | 1 (0)| 00:00:01 |
|* 51 | TABLE ACCESS FULL | S_ORG_EXT_TNTX | 1 | 94 | | 0 (0)| 00:00:01 |
| 52 | TABLE ACCESS BY INDEX ROWID | S_PTY_PAY_PRFL | 1 | 74 | | 1 (0)| 00:00:01 |
|* 53 | INDEX UNIQUE SCAN | S_PTY_PAY_PRFL_P1 | 1 | | | 1 (0)| 00:00:01 |
| 54 | TABLE ACCESS BY INDEX ROWID | S_ADDR_PER | 1 | 84 | | 2 (0)| 00:00:01 |
|* 55 | INDEX UNIQUE SCAN | S_ADDR_PER_P1 | 1 | | | 1 (0)| 00:00:01 |
| 56 | TABLE ACCESS BY INDEX ROWID | S_ADDR_PER | 1 | 57 | | 1 (0)| 00:00:01 |
|* 57 | INDEX UNIQUE SCAN | S_ADDR_PER_P1 | 1 | | | 1 (0)| 00:00:01 |
| 58 | TABLE ACCESS BY INDEX ROWID | S_ORG_EXT | 1 | 32 | | 1 (0)| 00:00:01 |
|* 59 | INDEX UNIQUE SCAN | S_ORG_EXT_U3 | 1 | | | 1 (0)| 00:00:01 |
| 60 | TABLE ACCESS BY INDEX ROWID | S_ORG_EXT | 1 | 32 | | 1 (0)| 00:00:01 |
|* 61 | INDEX UNIQUE SCAN | S_ORG_EXT_U3 | 1 | | | 1 (0)| 00:00:01 |
| 62 | TABLE ACCESS BY INDEX ROWID | S_ORG_EXT | 1 | 256 | | 2 (0)| 00:00:01 |
|* 63 | INDEX UNIQUE SCAN | S_ORG_EXT_U3 | 1 | | | 1 (0)| 00:00:01 |
| 64 | TABLE ACCESS BY INDEX ROWID | S_ACCNT_POSTN | 1 | 32 | | 3 (0)| 00:00:01 |
|* 65 | INDEX RANGE SCAN | S_ACCNT_POSTN_U1 | 1 | | | 2 (0)| 00:00:01 |
| 66 | TABLE ACCESS BY INDEX ROWID | S_POSTN | 1 | 21 | | 1 (0)| 00:00:01 |
|* 67 | INDEX UNIQUE SCAN | S_POSTN_U2 | 1 | | | 1 (0)| 00:00:01 |
| 68 | TABLE ACCESS BY INDEX ROWID | S_USER | 1 | 25 | | 2 (0)| 00:00:01 |
|* 69 | INDEX UNIQUE SCAN | S_USER_U2 | 1 | | | 1 (0)| 00:00:01 |
| 70 | TABLE ACCESS BY INDEX ROWID | S_ORG_EXT | 1 | 32 | | 2 (0)| 00:00:01 |
|* 71 | INDEX UNIQUE SCAN | S_ORG_EXT_U3 | 1 | | | 1 (0)| 00:00:01 |
| 72 | TABLE ACCESS BY INDEX ROWID | S_ASSET | 1 | 24 | | 2 (0)| 00:00:01 |
|* 73 | INDEX UNIQUE SCAN | S_ASSET_P1 | 1 | | | 2 (0)| 00:00:01 |
| 74 | TABLE ACCESS BY INDEX ROWID | S_CON_ADDR | 1 | 36 | | 3 (0)| 00:00:01 |
|* 75 | INDEX RANGE SCAN | S_CON_ADDR_U1 | 1 | | | 2 (0)| 00:00:01 |
|* 76 | INDEX UNIQUE SCAN | S_PARTY_P1 | 1 | 12 | | 1 (0)| 00:00:01 |
| 77 | TABLE ACCESS BY INDEX ROWID | S_ORG_EXT_X | 1 | 37 | | 2 (0)| 00:00:01 |
|* 78 | INDEX RANGE SCAN | S_ORG_EXT_X_U1 | 1 | | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
28 - filter("T27"."INVC_TYPE_CD"=:1)
30 - access("T27"."PROJ_ID"="T8"."ROW_ID"(+))
32 - access("T27"."PAYMENT_TERM_ID"="T23"."ROW_ID"(+))
34 - access("T27"."ELEMENT_ID"="T26"."ROW_ID"(+))
36 - access("T26"."INSCLM_ID"="T14"."ROW_ID"(+))
38 - access("T27"."BL_PERIOD_ID"="T20"."ROW_ID"(+))
40 - access("T27"."LAST_UPD_BY"="T15"."PAR_ROW_ID"(+))
42 - access("T27"."ORDER_ID"="T18"."ROW_ID"(+))
44 - access("T18"."PR_DEPOSIT_ID"="T21"."ROW_ID"(+))
46 - access("T27"."ORDER_ID"="T4"."ROW_ID"(+))
48 - access("T4"."SRC_INVLOC_ID"="T3"."ROW_ID"(+))
50 - access("T4"."QUOTE_ID"="T12"."ROW_ID"(+))
51 - filter("T12"."BU_ID"="T19"."PAR_ROW_ID"(+))
53 - access("T27"."PER_PAY_PRFL_ID"="T2"."ROW_ID"(+))
55 - access("T27"."BL_ADDR_ID"="T1"."ROW_ID"(+))
57 - access("T27"."BL_ADDR_ID"="T22"."ROW_ID"(+))
59 - access("T8"."BU_ID"="T5"."PAR_ROW_ID"(+))
61 - access("T27"."REMIT_ORG_EXT_ID"="T16"."PAR_ROW_ID"(+))
63 - access("T27"."ACCNT_ID"="T25"."PAR_ROW_ID"(+))
65 - access("T25"."ROW_ID"="T13"."OU_EXT_ID"(+) AND "T25"."PR_POSTN_ID"="T13"."POSITION_ID"(+))
67 - access("T13"."POSITION_ID"="T6"."PAR_ROW_ID"(+))
69 - access("T6"."PR_EMP_ID"="T11"."PAR_ROW_ID"(+))
71 - access("T25"."BU_ID"="T10"."PAR_ROW_ID"(+))
73 - access("T3"."CG_ASSSET_ID"="T17"."ROW_ID"(+))
75 - access("T27"."BL_ADDR_ID"="T9"."ADDR_PER_ID"(+) AND "T27"."ACCNT_ID"="T9"."ACCNT_ID"(+))
filter("T27"."ACCNT_ID"="T9"."ACCNT_ID"(+))
76 - access("T13"."POSITION_ID"="T7"."ROW_ID"(+))
78 - access("T27"."ACCNT_ID"="T24"."PAR_ROW_ID"(+))
117 rows selected.
SQL> EXPLAIN PLAN FOR
2 SELECT * FROM SIEBEL.S_INVOICE T27 WHERE T27.INVC_TYPE_CD=:1;
Explained.
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
PLAN_TABLE_OUTPUT
Plan hash value: 1810797629
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 39M| 9G| 2016K (8)| 00:53:59 |
|* 1 | TABLE ACCESS FULL| S_INVOICE | 39M| 9G| 2016K (8)| 00:53:59 |
Predicate Information (identified by operation id):
1 - filter("T27"."INVC_TYPE_CD"=:1)
13 rows selected.
Edited by: KODS on Feb 13, 2013 1:08 PM
Dear Ivan,
Please find the details below.
select * from dba_indexes where index_name = 'S_INVOICE_U1';
OWNER INDEX_NAME INDEX_TYPE TABLE_OWNER TABLE_NAME TABLE_TYPE UNIQUENESS COMPRESSION PREFIX_LENGTH TABLESPACE_NAME INI_TRANS MAX_TRANS INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE PCT_THRESHOLD INCLUDE_COLUMN FREELISTS FREELIST_GROUPS PCT_FREE LOGGING BLEVEL LEAF_BLOCKS DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR STATUS NUM_ROWS SAMPLE_SIZE LAST_ANALYZED DEGREE INSTANCES PARTITIONED TEMPORARY GENERATED SECONDARY BUFFER_POOL USER_STATS DURATION PCT_DIRECT_ACCESS ITYP_OWNER ITYP_NAME PARAMETERS GLOBAL_STATS DOMIDX_STATUS DOMIDX_OPSTATUS FUNCIDX_STATUS JOIN_INDEX IOT_REDUNDANT_PKEY_ELIM DROPPED
SIEBEL S_INVOICE_U1 NORMAL SIEBEL S_INVOICE TABLE UNIQUE DISABLED CRMSBL_AEM_INDEX 2 255 65536 1 2147483645 10 NO 3 902796 196739390 1 1 125598294 VALID 196739390 196739390 10-02-13 1 1 NO N N N DEFAULT NO YES NO NO NO
select * from dba_ind_columns where index_name = 'S_INVOICE_U1' order by column_position;
INDEX_OWNER INDEX_NAME TABLE_OWNER TABLE_NAME COLUMN_NAME COLUMN_POSITION COLUMN_LENGTH CHAR_LENGTH DESCEND
SIEBEL S_INVOICE_U1 SIEBEL S_INVOICE INVC_NUM 1 200 50 ASC
SIEBEL S_INVOICE_U1 SIEBEL S_INVOICE INVC_TYPE_CD 2 120 30 ASC
SIEBEL S_INVOICE_U1 SIEBEL S_INVOICE CONFLICT_ID 3 60 15 ASC -
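The dictionary output above shows that INVC_TYPE_CD is only the second column of the unique index S_INVOICE_U1 (INVC_NUM, INVC_TYPE_CD, CONFLICT_ID), so a predicate on INVC_TYPE_CD alone cannot drive a range scan of that index. A hedged sketch of a dedicated index, worth testing only if the bound value is actually selective; the plan estimates 39M of ~197M rows for this bind, and at that selectivity the full table scan is the right choice (the index name is an assumption):

```sql
-- Hypothetical single-column index; check the selectivity of the bind
-- value first, e.g. with a COUNT(*) GROUP BY INVC_TYPE_CD or a histogram.
CREATE INDEX SIEBEL.S_INVOICE_X1 ON SIEBEL.S_INVOICE (INVC_TYPE_CD);
```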
Taking more time in INDEX RANGE SCAN compare to the full table scan
Hi all ,
Below is the version of my database.
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for HPUX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I have gather the table statistics and plan change for sql statment.
SELECT P1.COMPANY, P1.PAYGROUP, P1.PAY_END_DT, P1.PAYCHECK_OPTION,
P1.OFF_CYCLE, P1.PAGE_NUM, P1.LINE_NUM, P1.SEPCHK FROM PS_PAY_CHECK P1
WHERE P1.FORM_ID = :1 AND P1.PAYCHECK_NBR = :2 AND
P1.CHECK_DT = :3 AND P1.PAYCHECK_OPTION <> 'R'
Plan before the gather stats.
Plan hash value: 3872726522
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 14306 (100)| |
| 1 |  TABLE ACCESS FULL| PS_PAY_CHECK | 1 | 51 | 14306 (4)| 00:02:52 |
Plan after the gather stats:
Operation                               Object Name          Rows  Bytes  Cost
SELECT STATEMENT Optimizer Mode=CHOOSE                          1            4
TABLE ACCESS BY INDEX ROWID             SYSADM.PS_PAY_CHECK     1     51     4
  INDEX RANGE SCAN                      SYSADM.PS0PAY_CHECK     1            3
After gathering stats the plan looks good, but when I execute the query it takes 5 hours; before the gather stats it finished within 2 hours. I do not want to restore my old statistics. Below is the data for the tables, and when I observe the run I see a lot of db file scattered reads.
NAME TYPE VALUE
_optimizer_cost_based_transformation string OFF
filesystemio_options string asynch
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string choose
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
SQL> select count(*) from sysadm.ps_pay_check;
  COUNT(*)
  1270052
SQL> select num_rows, blocks from dba_tables where table_name = 'PS_PAY_CHECK';
  NUM_ROWS     BLOCKS
   1270047      63166
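One thing worth checking before anything else is whether the fresh statistics include histograms on the filtered columns, since the plan flipped right after the gather. A hedged sketch of a re-gather (the parameter choices are assumptions for illustration, not a fix confirmed by the thread):

```sql
-- Re-gather table stats with automatic sampling and histograms where
-- Oracle detects skew (e.g. on FORM_ID / CHECK_DT), cascading to indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SYSADM',
    tabname          => 'PS_PAY_CHECK',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/
```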
Event Waits Time (s) (ms) Time Wait Class
db file sequential read 1,584,677 6,375 4 36.6 User I/O
db file scattered read 2,366,398 5,689 2 32.7 User I/O
Please let me know why it is taking more time in the INDEX RANGE SCAN compared to the full table scan?
suresh.ratnaji wrote:
NAME TYPE VALUE
_optimizer_cost_based_transformation string OFF
filesystemio_options string asynch
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string choose
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
Please let me know why it is taking more time in the INDEX RANGE SCAN compared to the full table scan?
Suresh,
Any particular reason why you have a non-default value for a hidden parameter, _optimizer_cost_based_transformation?
On my 10.2.0.1 database, its default value is "linear". What happens when you reset the value of the hidden parameter to default? -
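A hedged sketch of the test being suggested; the default value "linear" is taken from the reply above, and hidden parameters should normally only be changed under Oracle Support's guidance:

```sql
-- Reset the hidden parameter to its reported 10.2 default for this
-- session only, then re-run the query and compare the plans.
ALTER SESSION SET "_optimizer_cost_based_transformation" = 'linear';
```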
Hi Experts,
We are using ATG 9.4 and we have around 70 thousand products; I don't think that is a large number, but when we push projects from the BCC it takes more time.
Here is my problem: we have 2 data centers, and each data center has more than 100 instances (agents). We need to push to each data center, and the BCC takes hours to push. Can you suggest what can best be done to avoid these issues? Any solution would be much appreciated.
Thanks
Krish
We usually find performance problems can be traced back to the database layer.
Have you considered cleaning up the versioned schemas, either by using the OOTB purging functionality:
http://docs.oracle.com/cd/E26180_01/Platform.94/ATGCAProgGuide/html/s1301purgingassetversions01.html
or a complete rebaselining of your Publishing environment
For the two data centres you may want to consider using a product like GoldenGate
FAQ: Using Oracle GoldenGate with Oracle Commerce (Doc ID 1670439.1)
https://support.oracle.com/rs?type=doc&id=1670439.1
Whitepaper: Oracle ATG Web Commerce Maximum Availability Architecture (MAA) on Exadata and Exalogic (Doc ID 1590928.1)
https://support.oracle.com/rs?type=doc&id=1590928.1
++++
Thanks
Gareth
Admin server taking more time to start
Hi All,
I installed OIM 11g R2 in windows server2008 ,
While starting the admin server, it is taking more time.
Steps I have done:
Changed the memory parameters in the setSOADomainEnv, setOIMDomainEnv and oimenv.bat files.
While starting the admin server, it shows the memory parameters as Xms768m.....xms1024m, like that.
Can anyone please suggest what changes need to be made to make it start faster?
Thanks,
Valli
user623166 wrote:
Hi Gurus
my database oracle 10.2.0.3 in AIX
I have started expdp to selectively export around 130 tables, but it is taking more time to start; almost 20 minutes have passed since it started.
$ expdp system/*** parfile=parfile.par
Export: Release 10.2.0.3.0 - 64bit Production on Friday, 22 June, 2012 17:21:26
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_TABLE_02": system/******** parfile=parfile.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
I am not sure that this is enough information for us to suggest something. For starters, can you try selecting fewer than 120 tables and see what's going on?
Aman....