ADDM Findings Top SQL Statements
Hi,
I have run addmrpt.sql over one month of snapshots, and every ADDM report lists the SQL statement with id **** under the Top SQL Statements finding, with a recommendation to tune it using the SQL Tuning Advisor. It has an elapsed time of 2-4 seconds. But when I go through the AWR report, the instance efficiency percentages all look normal.
Can anyone please confirm whether I should ask my vendor to tune this SQL statement?
Thanks,
TK
Below is the SQL I found in ADDM. As I understand it, this is an internal/monitoring SQL, not an application SQL. When I searched the internet I found that it is related to the check_oracle_health plugin. Can you please help me understand this Nagios concept and the relationship it has with the Oracle software? And also, how should I identify which methods we are using to monitor the database internally?
SELECT
a.tablespace_name "Tablespace",
b.status "Status",
b.contents "Type",
b.extent_management "Extent Mgmt",
a.bytes bytes,
a.maxbytes bytes_max,
c.bytes_free + NVL(d.bytes_expired,0) bytes_free
FROM
(
-- used and maximum available space per datafile
-- aggregated by tablespace name
-- => bytes
-- => maxbytes
SELECT
a.tablespace_name,
SUM(a.bytes) bytes,
SUM(DECODE(a.autoextensible, 'YES', a.maxbytes, 'NO', a.bytes))
maxbytes
FROM
dba_data_files a
GROUP BY
tablespace_name
) a,
sys.dba_tablespaces b,
(
-- free space per tablespace
-- => bytes_free
SELECT
a.tablespace_name,
SUM(a.bytes) bytes_free
FROM
dba_free_space a
GROUP BY
tablespace_name
) c,
(
-- free space from expired extents
-- specifically for undo tablespaces
-- => bytes_expired
SELECT
a.tablespace_name,
SUM(a.bytes) bytes_expired
FROM
dba_undo_extents a
WHERE
status = 'EXPIRED'
GROUP BY
tablespace_name
) d
WHERE
a.tablespace_name = c.tablespace_name
AND a.tablespace_name = b.tablespace_name
AND a.tablespace_name = d.tablespace_name
UNION ALL
SELECT
d.tablespace_name "Tablespace",
b.status "Status",
b.contents "Type",
b.extent_management "Extent Mgmt",
sum(a.bytes_free + a.bytes_used) bytes, -- allocated
SUM(DECODE(d.autoextensible, 'YES', d.maxbytes, 'NO', d.bytes))
bytes_max,
SUM(a.bytes_free + a.bytes_used - NVL(c.bytes_used, 0)) bytes_free
FROM
sys.v_$TEMP_SPACE_HEADER a,
sys.dba_tablespaces b,
sys.v_$Temp_extent_pool c,
dba_temp_files d
WHERE
c.file_id(+) = a.file_id
and c.tablespace_name(+) = a.tablespace_name
and d.file_id = a.file_id
and d.tablespace_name = a.tablespace_name
and b.tablespace_name = a.tablespace_name
GROUP BY
b.status,
b.contents,
b.extent_management,
d.tablespace_name
ORDER BY
1
Similar Messages
-
Help for identifing top sql statements
Hi all,
We are doing load testing on Oracle 9i with 500 concurrent users. At some point the database hung. I would like to know which query is taking the most time and which resource is being used the most. Can anybody help in this regard.
Thanks in advance

Some useful information can be found in V$SQLAREA, for example:
SQL> select SQL_TEXT, EXECUTIONS, DISK_READS, BUFFER_GETS, ROWS_PROCESSED, CPU_TIME, ELAPSED_TIME
2 from v$sqlarea
3* order by CPU_TIME desc; -
Dear DBAs
The application team has asked me to provide the top SQL statements in the testing database to help them improve their code. A developer showed me an OEM Top SQL example from his last job.
Currently I do not have OEM installed. I am on 10.2.0.1.0. I was thinking
of creating a job to run the SQL and email it every day.
What SQL can I use to generate that report? And what does OEM report on when it shows Top SQL?
Thanks

OEM will show you Top SQL by different categories (each of which can be sorted), such as:
Disk Reads Per Execution
Buffer Gets Per Execution
Executions
Disk Reads
Buffer Gets
Buffer Gets Per Row
Buffer Cache Hit Ratio
Sorts
Shareable memory
Rows Processed
CPU Time
Elapsed Time
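Since OEM isn't available to the poster, a scheduled query against the AWR history views can approximate that Top SQL page. This is a sketch of my own (not from the reply above); it assumes an AWR/Diagnostics Pack license and the 10.2 view names:

```sql
-- Sketch: top 10 SQL by elapsed time over the last day, aggregated
-- from AWR snapshot deltas (requires the Diagnostics Pack).
-- (single instance assumed; add dbid/instance_number to the join on RAC)
SELECT *
FROM (SELECT st.sql_id,
             SUM(st.elapsed_time_delta) / 1e6 AS elapsed_secs,
             SUM(st.cpu_time_delta) / 1e6     AS cpu_secs,
             SUM(st.executions_delta)         AS execs
      FROM   dba_hist_sqlstat st
             JOIN dba_hist_snapshot sn ON sn.snap_id = st.snap_id
      WHERE  sn.begin_interval_time > SYSDATE - 1
      GROUP BY st.sql_id
      ORDER BY elapsed_secs DESC)
WHERE ROWNUM <= 10;
```

A job that spools this output and mails it daily would match what the poster planned to set up.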
Now having said that... just because a SQL statement is the top SQL statement does not mean that anything needs to be tuned. Your developers should be examining their SQL using insight gained from reading articles and documentation.
Tuning use explain plan:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:231814117467
SQL Tuning
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sql_1016.htm#sthref1061
Automatic SQL Tuning
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sql_tune.htm#g42443
In particular:
12.2.4 Using SQL Tuning Advisor with Oracle Enterprise Manager
These links should point you in the right direction:
Of course there are books and Oracle classes that concentrate on these issues as well.
Regards
Tim Boles -
How can I extract previously executed SQL statements from the dictionary?
We use oracle11gR2 on win2008R2.
How can I extract previously executed SQL statements from the dictionary? (Is it possible from V$SQLTEXT or another dictionary view?)

IvicaArsov wrote:
Hi,
You can find executed SQL in the V$SQLAREA view (if it is still in memory). If it no longer exists there and you have AWR enabled, you can query the DBA_HIST_SQLTEXT table, but remember that the statistics in the AWR tables are filtered (1 out of 10).
I.Arsov
As you know, when taking a snapshot Oracle captures only the top SQL statements and stores them in AWR.
But can you please link a source saying DBA_HIST_SQLTEXT contains only 1-out-of-10 filtered data? For DBA_HIST_ACTIVE_SESS_HISTORY, yes, it contains 1-out-of-10 sampled data. -
Expensive SQL Statements of the day?
Hi...
How can I find out the expensive SQL statements of the day, please?
Our SAP Version is ECC 6.0 on Oracle 10.2.0.4.0
Rgds

Hi Srinivas,
On ECC 6.0, go to transaction ST04 --> Performance --> SQL Cache --> you will see the SQL statements, which you need to sort by Total Execution Time (ms).
I have tried this on one of my R/3 4.7EE systems: ST04N --> Resource Consumption --> Top SQL Statements --> you will see the 50 most expensive SQL statements in terms of wait time.
Also read: Re: Diagnosis of the expensive ABAP programs having database time > 90%
Please Read before Posting in the Performance and Tuning Forum
Regards,
Kanthi Kiran -
I'm trying to understand the difference between the top SQL statements and what is showing up in the longops view. I pull a report showing my top 25 SQL statements, yet when I look at V$SESSION_LONGOPS the statements taking more than 1 minute are different from what shows up as my top SQL.
Should I be spending my time tuning what's in longops or my top SQL statements?

Top SQL in a Statspack or AWR report shows the top SQL during the time period for which you ran the report,
e.g. if you run the report for 4-5 pm, it will show the SQL which consumed the most resources (CPU, gets, reads, etc.) in that window.
V$SESSION_LONGOPS displays the status of various operations that run for longer than 6 seconds (in absolute time). These operations currently include many backup and recovery functions, statistics gathering, and query execution, and more
operations are added in every Oracle release. "You can also set rows in this view using the procedure DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS."
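To see what V$SESSION_LONGOPS is tracking right now, here is a quick sketch of my own (standard columns from the 10g reference; not part of the original reply):

```sql
-- Sketch: operations V$SESSION_LONGOPS considers still in progress,
-- with a rough percent-done computed from SOFAR/TOTALWORK.
SELECT sid, serial#, opname, target,
       ROUND(sofar * 100 / NULLIF(totalwork, 0), 1) AS pct_done,
       elapsed_seconds, time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork
ORDER BY time_remaining DESC;
```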
By Jonathan Lewis:
the hidden parameter _sqlexec_progression_cost, with the description "sql execution progression monitoring cost threshold", has a default value of 1,000;
cost = 1000 equates to an elapsed time of 6 seconds -
How to get top and bottom records in a SQL statement (urgent)
Hi,
I want to get the TOP 2 and BOTTOM 2 records (top 2 SAL, bottom 2 SAL) from the following query result for each department. How do I get this using a SQL statement? Thanks
SQL> SELECT A.DNAME, B.ENAME, B.SAL FROM DEPT A, EMP B WHERE A.DEPTNO = B.DEPTNO ORDER BY DNAME, SAL
DNAME          ENAME       SAL
ACCOUNTING     KING       5000
               CLARK      2450
               MILLER     1300
RESEARCH       SCOTT      3000
               FORD       3000
               JONES      2975
               ADAMS      1100
               SMITH       800
SALES          BLAKE      2850
               ALLEN      1600
               TURNER     1500
               WARD       1250
               MARTIN     1250
               JAMES       950
14 rows selected.

Search for "top-N query" in the Oracle documentation.
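The question as asked wants the top 2 and bottom 2 per department, which plain ROWNUM cannot do by itself. Here is a sketch using analytic functions (my own addition, available since 8i), ranking each department's rows from both ends:

```sql
-- Sketch: top 2 and bottom 2 salaries within each department,
-- using two ROW_NUMBER rankings over the same partition.
SELECT dname, ename, sal
FROM (SELECT a.dname, b.ename, b.sal,
             ROW_NUMBER() OVER (PARTITION BY a.dname ORDER BY b.sal DESC) rn_top,
             ROW_NUMBER() OVER (PARTITION BY a.dname ORDER BY b.sal)      rn_bot
      FROM   dept a, emp b
      WHERE  a.deptno = b.deptno)
WHERE  rn_top <= 2 OR rn_bot <= 2
ORDER  BY dname, sal DESC;
```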
Example :
for top 2 (highest salaries)
SELECT * FROM
(SELECT empno FROM emp ORDER BY sal DESC)
WHERE ROWNUM < 3;
for bottom 2 (lowest salaries)
SELECT * FROM
(SELECT empno FROM emp ORDER BY sal)
WHERE ROWNUM < 3; -
This SQL statement always in Top Activity, with PX Deq Credit: send blkd
Hi gurus,
The following SQL statement is always among the Top Activity. I can see in the details in Enterprise Manager that it suffers from PX Deq Credit: send blkd.
This is the statement:
SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode;

The statement is invoked by our front end (a web-based app) for a browse table displayed on a web page. The result can be 10 to 8000 rows. It is used to display the current stock availability for a particular product in a particular warehouse. The stock availability itself is kept in a view: V_STOCK_DETAIL.
These are the parameters relevant to the optimizer:
SQL> show parameter user_dump_dest
user_dump_dest string /u01/app/oracle/admin/ITTDB/udump
SQL> show parameter optimizer
_optimizer_cost_based_transformation string OFF
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
db_file_multiblock_read_count integer 16
SQL> column sname format a20
SQL> column pname format a20
SQL> show parameter db_block_size
db_block_size integer 8192

Here is the output of EXPLAIN PLAN:
SQL> explain plan for
SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
Explain complete.
Elapsed: 00:00:00:31
SQL> select * from table(dbms_xplan.display)
PLAN_TABLE_OUTPUT
Plan hash value: 3252950027
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ
Distrib |
| 0 | SELECT STATEMENT | | 1 | 169 | 6 (17)| 00:00:01 | | |
|
| 1 | PX COORDINATOR | | | | | | | |
|
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 169 | 6 (17)| 00:00:01 | Q1,03 | P->S | QC
(RAND) |
| 3 | HASH GROUP BY | | 1 | 169 | 6 (17)| 00:00:01 | Q1,03 | PCWP |
|
| 4 | PX RECEIVE | | 1 | 169 | 6 (17)| 00:00:01 | Q1,03 | PCWP |
|
| 5 | PX SEND HASH | :TQ10002 | 1 | 169 | 6 (17)| 00:00:01 | Q1,02 | P->P | HA
SH |
| 6 | HASH GROUP BY | | 1 | 169 | 6 (17)| 00:00:01 | Q1,02 | PCWP |
|
| 7 | NESTED LOOPS OUTER | | 1 | 169 | 5 (0)| 00:00:01 | Q1,02 | PCWP |
|
| 8 | MERGE JOIN CARTESIAN | | 1 | 119 | 4 (0)| 00:00:01 | Q1,02 | PCWP |
|
| 9 | SORT JOIN | | | | | | Q1,02 | PCWP |
|
| 10 | NESTED LOOPS | | 1 | 49 | 4 (0)| 00:00:01 | Q1,02 | PCWP |
|
| 11 | BUFFER SORT | | | | | | Q1,02 | PCWC |
|
| 12 | PX RECEIVE | | | | | | Q1,02 | PCWP |
|
| 13 | PX SEND BROADCAST | :TQ10000 | | | | | | S->P | BR
OADCAST |
|* 14 | INDEX RANGE SCAN | PRODUCTS_IDX2 | 1 | 25 | 2 (0)| 00:00:01 | | |
|
| 15 | PX BLOCK ITERATOR | | 1 | 24 | 2 (0)| 00:00:01 | Q1,02 | PCWC |
|
|* 16 | MAT_VIEW ACCESS FULL | MV_CONVERT_UOM | 1 | 24 | 2 (0)| 00:00:01 | Q1,02 | PCWP |
|
| 17 | BUFFER SORT | | 1 | 70 | 2 (0)| 00:00:01 | Q1,02 | PCWP |
|
| 18 | BUFFER SORT | | | | | | Q1,02 | PCWC |
|
| 19 | PX RECEIVE | | 1 | 70 | 4 (0)| 00:00:01 | Q1,02 | PCWP |
|
| 20 | PX SEND BROADCAST | :TQ10001 | 1 | 70 | 4 (0)| 00:00:01 | | S->P | BR
OADCAST |
|* 21 | TABLE ACCESS BY INDEX ROWID| STOCK | 1 | 70 | 4 (0)| 00:00:01 | | |
|
|* 22 | INDEX RANGE SCAN | STOCK_PK | 1 | | 2 (0)| 00:00:01 | | |
|
|* 23 | TABLE ACCESS BY INDEX ROWID | MV_TRANS_STOCK | 1 | 50 | 3 (0)| 00:00:01 | Q1,02 | PCWP |
|
|* 24 | INDEX RANGE SCAN | MV_TRANS_STOCK_IDX1 | 1 | | 2 (0)| 00:00:01 | Q1,02 | PCWP |
|
Predicate Information (identified by operation id):
14 - access("P"."PRODUCT"=:PPRODUCT)
16 - filter("CON"."PRODUCT"=:PPRODUCT)
21 - filter("STOCK"."LOC_CODE"=:PLOCCODE)
22 - access("STOCK"."PRODUCT"=:PPRODUCT AND "STOCK"."WH_CODE"=:PWHCODE)
23 - filter("STS"(+)='N')
24 - access("PRODUCT"(+)=:PPRODUCT AND "WH_CODE"(+)=:PWHCODE AND "LOC_CODE"(+)=:PLOCCODE AND "RACK"(+)="STOCK"."RACK" AND
"BATCH"(+)="STOCK"."BATCH" AND "EXP_DATE"(+)="STOCK"."EXP_DATE")
42 rows selected.
Elapsed: 00:00:00:06

Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
SQL> SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
Execution Plan
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS 1 169 6
1 0 PX COORDINATOR
2 1 PX SEND QC (RANDOM) SYS.:TQ10003 1 169 6 :Q1003 P->S QC (RANDOM)
3 2 HASH GROUP BY 1 169 6 :Q1003 PCWP
4 3 PX RECEIVE 1 169 6 :Q1003 PCWP
5 4 PX SEND HASH SYS.:TQ10002 1 169 6 :Q1002 P->P HASH
6 5 HASH GROUP BY 1 169 6 :Q1002 PCWP
7 6 NESTED LOOPS OUTER 1 169 5 :Q1002 PCWP
8 7 MERGE JOIN CARTESIAN 1 119 4 :Q1002 PCWP
9 8 SORT JOIN :Q1002 PCWP
10 9 NESTED LOOPS 1 49 4 :Q1002 PCWP
11 10 BUFFER SORT :Q1002 PCWC
12 11 PX RECEIVE :Q1002 PCWP
13 12 PX SEND BROADCAST SYS.:TQ10000 S->P BROADCAST
14 13 INDEX RANGE SCAN ITT_NEW.PRODUCTS_IDX2 1 25 2
15 10 PX BLOCK ITERATOR 1 24 2 :Q1002 PCWC
16 15 MAT_VIEW ACCESS FULL ITT_NEW.MV_CONVERT_UOM 1 24 2 :Q1002 PCWP
17 8 BUFFER SORT 1 70 2 :Q1002 PCWP
18 17 BUFFER SORT :Q1002 PCWC
19 18 PX RECEIVE 1 70 4 :Q1002 PCWP
20 19 PX SEND BROADCAST SYS.:TQ10001 1 70 4 S->P BROADCAST
21 20 TABLE ACCESS BY INDEX ROWID ITT_NEW.STOCK 1 70 4
22 21 INDEX RANGE SCAN ITT_NEW.STOCK_PK 1 2
23 7 TABLE ACCESS BY INDEX ROWID ITT_NEW.MV_TRANS_STOCK 1 50 3 :Q1002 PCWP
24 23 INDEX RANGE SCAN ITT_NEW.MV_TRANS_STOCK_IDX1 1 2 :Q1002 PCWP
Statistics
570 recursive calls
0 physical write total IO requests
0 physical write total multi block requests
0 physical write total bytes
0 physical writes direct temporary tablespace
0 java session heap live size max
0 java session heap object count
0 java session heap object count max
0 java session heap collected count
0 java session heap collected bytes
83 rows processed
Elapsed: 00:00:03:24
SQL> disconnect
Commit complete
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

The TKPROF output for this statement looks like the following:
TKPROF: Release 10.2.0.3.0 - Production on Thu Apr 23 12:39:29 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: ittdb_ora_9566_mytrace1.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.04 0.12 0 10 4 0
Fetch 43 0.05 2.02 0 73 0 83
total 45 0.10 2.15 0 83 4 83
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 164
Rows Row Source Operation
83 PX COORDINATOR (cr=83 pr=0 pw=0 time=2086576 us)
0 PX SEND QC (RANDOM) :TQ10003 (cr=0 pr=0 pw=0 time=0 us)
0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us)
0 MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=0 us)
0 SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
1 INDEX RANGE SCAN PRODUCTS_IDX2 (cr=2 pr=0 pw=0 time=62 us)(object id 135097)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 MAT_VIEW ACCESS FULL MV_CONVERT_UOM (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
83 TABLE ACCESS BY INDEX ROWID STOCK (cr=78 pr=0 pw=0 time=1635 us)
83 INDEX RANGE SCAN STOCK_PK (cr=4 pr=0 pw=0 time=458 us)(object id 135252)
0 TABLE ACCESS BY INDEX ROWID MV_TRANS_STOCK (cr=0 pr=0 pw=0 time=0 us)
0 INDEX RANGE SCAN MV_TRANS_STOCK_IDX1 (cr=0 pr=0 pw=0 time=0 us)(object id 143537)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PX Deq: Join ACK 17 0.00 0.00
PX qref latch 2 0.00 0.00
PX Deq Credit: send blkd 72 1.95 2.00
PX Deq: Parse Reply 26 0.01 0.01
SQL*Net message to client 43 0.00 0.00
PX Deq: Execute Reply 19 0.00 0.01
SQL*Net message from client 43 0.00 0.04
PX Deq: Signal ACK 12 0.00 0.00
enq: PS - contention 1 0.00 0.00
********************************************************************************

The DBMS_XPLAN.DISPLAY_CURSOR output:
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
PLAN_TABLE_OUTPUT
SQL_ID 402b8st7vt6ku, child number 2
SELECT /*+ gather_plan_statistics */ S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K, FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.P
RODUCT,
Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K, FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P
.UOM_K ),
0) QtyOut_K, FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K, S.LOC_CODE FROM
V_STOCK_DETAIL S JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC
_CODE =
:pLocCode
Plan hash value: 3252950027
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem |
1Mem | Used-Mem |
| 1 | PX COORDINATOR | | 1 | | 83 |00:00:02.25 | 83 | |
| |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 0 | 21 | 0 |00:00:00.01 | 0 | |
| |
| 3 | HASH GROUP BY | | 0 | 21 | 0 |00:00:00.01 | 0 | |
| |
| 4 | PX RECEIVE | | 0 | 21 | 0 |00:00:00.01 | 0 | |
| |
| 5 | PX SEND HASH | :TQ10002 | 0 | 21 | 0 |00:00:00.01 | 0 | |
| |
| 6 | HASH GROUP BY | | 0 | 21 | 0 |00:00:00.01 | 0 | |
| |
| 7 | NESTED LOOPS OUTER | | 0 | 21 | 0 |00:00:00.01 | 0 | |
| |
| 8 | MERGE JOIN CARTESIAN | | 0 | 21 | 0 |00:00:00.01 | 0 | |
| |
| 9 | SORT JOIN | | 0 | | 0 |00:00:00.01 | 0 | 73728 |
73728 | |
| 10 | NESTED LOOPS | | 0 | 1 | 0 |00:00:00.01 | 0 | |
| |
| 11 | BUFFER SORT | | 0 | | 0 |00:00:00.01 | 0 | 73728 |
73728 | |
| 12 | PX RECEIVE | | 0 | | 0 |00:00:00.01 | 0 | |
| |
| 13 | PX SEND BROADCAST | :TQ10000 | 0 | | 0 |00:00:00.01 | 0 | |
| |
|* 14 | INDEX RANGE SCAN | PRODUCTS_IDX2 | 1 | 1 | 1 |00:00:00.01 | 2 | |
| |
| 15 | PX BLOCK ITERATOR | | 0 | 1 | 0 |00:00:00.01 | 0 | |
| |
|* 16 | MAT_VIEW ACCESS FULL | MV_CONVERT_UOM | 0 | 1 | 0 |00:00:00.01 | 0 | |
| |
| 17 | BUFFER SORT | | 0 | 21 | 0 |00:00:00.01 | 0 | 73728 |
73728 | |
| 18 | BUFFER SORT | | 0 | | 0 |00:00:00.01 | 0 | 73728 |
73728 | |
| 19 | PX RECEIVE | | 0 | 21 | 0 |00:00:00.01 | 0 | |
| |
| 20 | PX SEND BROADCAST | :TQ10001 | 0 | 21 | 0 |00:00:00.01 | 0 | |
| |
|* 21 | TABLE ACCESS BY INDEX ROWID| STOCK | 1 | 21 | 83 |00:00:00.01 | 78 | |
| |
|* 22 | INDEX RANGE SCAN | STOCK_PK | 1 | 91 | 83 |00:00:00.01 | 4 | |
| |
|* 23 | TABLE ACCESS BY INDEX ROWID | MV_TRANS_STOCK | 0 | 1 | 0 |00:00:00.01 | 0 | |
| |
|* 24 | INDEX RANGE SCAN | MV_TRANS_STOCK_IDX1 | 0 | 1 | 0 |00:00:00.01 | 0 | |
| |
Predicate Information (identified by operation id):
14 - access("P"."PRODUCT"=:PPRODUCT)
16 - access(:Z>=:Z AND :Z<=:Z)
filter("CON"."PRODUCT"=:PPRODUCT)
21 - filter("STOCK"."LOC_CODE"=:PLOCCODE)
22 - access("STOCK"."PRODUCT"=:PPRODUCT AND "STOCK"."WH_CODE"=:PWHCODE)
23 - filter("STS"='N')
24 - access("PRODUCT"=:PPRODUCT AND "WH_CODE"=:PWHCODE AND "LOC_CODE"=:PLOCCODE AND "RACK"="STOCK"."RACK" AND "BATCH"="STOCK"."B
ATCH" AND
"EXP_DATE"="STOCK"."EXP_DATE")
53 rows selected.
Elapsed: 00:00:00:12

I'm looking forward to suggestions on how to improve the performance of this statement.
Thank you very much,
xtanto

xtanto wrote:
Hi sir,
How to prevent the query from doing parallel query ?
Because as you see actually I am not issuing any Parallel hints in the query.
Thank you,
xtanto

Kristanto,
there are a couple of points to consider:
1. Your SQL*Plus version seems to be outdated. Please use a SQL*Plus version that corresponds to your database version - e.g. the AUTOTRACE output is odd.
2. I would suggest repeating your exercise using serial execution (the plan, the autotrace, the tracing). You can disable parallel queries by issuing this in your session:
ALTER SESSION DISABLE PARALLEL QUERY;
This way the output of the tools is much more meaningful, however you might get a different execution plan, therefore the results might not be representative for your parallel execution.
3. The function calls might pose a problem. If they do, one possible damage limitation has been provided by hoek. Even better would be to replace the PL/SQL function with equivalent plain SQL. However, since you say that it does not generate too many rows, it might not hurt too much here. You can check the impact of the functions by running a similar query that omits the function calls.
4. The parallel execution plan contains a MERGE JOIN CARTESIAN operation which could be an issue if the estimates of the optimizer are incorrect. If the serial execution still uses this operation the TKPROF and DBMS_XPLAN.DISPLAY_CURSOR output will reveal whether this is a problem or not.
5. The execution of the statement seems to take only 2-3 seconds in your tests. Is this in the right ballpark? If yes, why should this statement be problematic? How often does it get executed?
6. The statement uses bind variables, so you might have executions that use different execution plans depending on the bind values passed when the statement got optimized. You can use DBMS_XPLAN.DISPLAY_CURSOR with NULL as the "child_number" parameter, or DBMS_XPLAN.DISPLAY_AWR (if you have an AWR license), to check whether you have multiple execution plans for the statement. Please note that older versions might have already aged out of the shared pool, so the AWR repository might be a more reliable source (but only if the statement has been sampled).
7. You have disabled cost based transformations: "_optimizer_cost_based_transformation" = OFF. Why?
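For point 6, here is a minimal sketch using the SQL_ID 402b8st7vt6ku shown earlier in the thread (passing NULL as the child number returns every cached child cursor, so plan differences between bind sets become visible):

```sql
-- Sketch: display the plan of every cached child cursor for one SQL_ID.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('402b8st7vt6ku', NULL, 'TYPICAL'));
```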
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
High-load sql statement using v$sql
Hi,
Can anyone please tell me how to find the high-load SQL statements and their users from the V$SQL view?
What is the query to find them?
Thank you!

Hello,
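For the literal question, here is a sketch against V$SQL (my own, not from the reply; it assumes 10g+, where V$SQL exposes SQL_ID, and joins PARSING_USER_ID to DBA_USERS to get the user):

```sql
-- Hypothetical sketch: top 10 statements by elapsed time still in the
-- shared pool, together with the schema that parsed them.
SELECT *
FROM (SELECT u.username,
             s.sql_id,
             s.elapsed_time,
             s.cpu_time,
             s.executions,
             s.sql_text
      FROM   v$sql s
             JOIN dba_users u ON u.user_id = s.parsing_user_id
      ORDER BY s.elapsed_time DESC)
WHERE ROWNUM <= 10;
```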
You can run ADDM report and check its findings it will tell you tome stuff like the following:
Finding
67 SQL statements consuming significant database time were found.
40.7 Time spent on the CPU by the instance was responsible for a substantial part of database time.
20.7 Individual SQL statements responsible for significant user I/O wait were found.
13.7 Individual database segments responsible for significant user I/O wait were found.

Kind regards
Mohamed Elazab -
Emailing Top SQL report metrics
I have performance monitoring configured on a database using 10g EM Grid Control. Is there a mechanism through which I could email the 'Top SQL' page that we see in EM? Ideally I would like to come up with the SQL that EM uses to get the top SQL for a particular 24-hour period. Based on what I found, most of the information is in perfstat.stats$sql_summary. Has anyone done this before, and if so, could you share the SQL script?
Thanks,
Sunil.

Hello Alan,
Thank you very much for your answer. I implemented your solution as follows: I programmed a job exactly as you said and then created a custom report which displayed the 'output' column from MGMT$JOB_STEP_HISTORY. Unfortunately, although the 'output' column is a CLOB and displays very nicely in any SQL development tool, when included in a report Information Publisher seems to strip all the end-of-lines out of it, so it is displayed horribly, with no line breaks... Maybe with some PL/SQL I could somehow trick Information Publisher into displaying end-of-line characters...
In the end, I will use one of two workarounds:
- program a job just as you said and have Grid Control notify me by email on success; the notification also includes the job's output, so I can get the Top SQL reports via email. If more than one person has to be notified this way, an email group would have to be set up by the email administrators in the company, plus a new Grid Control user (a simple administrator with view privileges on the target; no super-user is necessary) with a notification rule that sends emails when the programmed job(s) succeed.
- send the output of the ADDM report by email from the target DB - Grid Control is no longer involved; the steps are:
select task_name tname from dba_advisor_tasks where created = (select max(created) from dba_advisor_tasks);
SELECT dbms_advisor.get_task_report('&addm_task_name', 'TEXT', 'TYPICAL') FROM dual;
Send the result by email. See Note 730746.1.
Thanks a lot,
Ana. -
Top SQL elapsed time - how true is that?
hi guys,
I am viewing an ADDM report and am concerned about a top SQL activity entry.
I have a SQL statement with the top elapsed time:
2,372 2,275 128,690,535 0.00 4.28 fphtjnquzzpud [email protected] (TNS V1-V3) SELECT USER_ID FROM SUBSCRIBER...
q1) What is the meaning of elapsed time?
Is it the time taken per execution of a SQL statement?
q2) Is the elapsed time shown in the report above the sum of the elapsed times of all executions of the SQL?
If so, the more frequently I run a SQL statement, the higher its elapsed time will be, and thus the higher the DB time.
But how can we say that it is an inefficient SQL just because it has the highest elapsed time, when it may simply run far more frequently than the other SQLs?
Please advise
Regards,
Noob

Hi,
In the addmrpt.sql output below, the time shown (20393 seconds) is the elapsed time of the SQL:
SQL statements consuming significant database time were found.
RECOMMENDATION 1: SQL Tuning, 24% benefit (20393 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"9tkc7559d5drs".
RELEVANT OBJECT: SQL statement with SQL_ID 9tkc7559d5drs and
PLAN_HASH 1303522576

But in awrrpt.sql the elapsed time shown is the total across all executions of the SQL in the DB.
^LSQL ordered by Elapsed Time DB/Inst: STEELP/STEELP1 Snaps: 13011-13012
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
6,761 1,869 1 6761.3 11.2 4hsz1t5dsmhy4
Module: XXLOTDET

So check the addmrpt report.
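To separate "run very often" from "slow per run" - the distinction the question is really about - here is a sketch (my own, not from the reply) computing elapsed time per execution from the cursor cache:

```sql
-- Sketch: total elapsed vs. elapsed per execution, from V$SQLAREA.
-- A statement with a huge total but tiny per-execution time is
-- probably just frequent, not inefficient.
SELECT sql_id,
       executions,
       elapsed_time / 1e6                         AS total_elapsed_secs,
       elapsed_time / 1e6 / NULLIF(executions, 0) AS secs_per_exec
FROM   v$sqlarea
ORDER BY elapsed_time DESC;
```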
thanks.
baskar.l -
Hi All,
I need to get the top SQL queries in an 11g database.
I don't know about PT.
Users will carry out some testing activity on the database; I need to get the top SQL from the database.
There is no access to AWR, ADDM or the Enterprise Manager console.
I need to get the output of the queries by executing some SQL.
Can some body help me in this regards.
Thanking You
Pramodh>
Whether it is possible to get the full set of sql, i tried executing this it is gives the report part by part.
>
I am not sure what the problem is.
You get the top 5 SQL statements because of the "<6" part of the code. It would be the top 10 with "<11", and so on.
Maybe you have to format the output a little in SQL*Plus to make it more readable - I leave that up to you :-)
Kind regards
Uwe
http://uhesse.wordpress.com -
Can someone help me correct this sql statement in a jsp page?
I've been getting java.sql.SQLException: Incorrect syntax error for one of my nested SQL statements. I can't seem to find similar examples online, so I reckon someone here could help; I'd really appreciate it.
Since I'm building the nested SQL in a JSP page, it has to be assembled with lots of quoted string fragments, which gets very confusing when they are nested.
Here is the SQL statement, without the quoting, that I want to use:
select top 5 * from(
select top+"'"+offset+"'"+" * from prod where cat=" +"'" cat "'"+"
)order by prodID desc
when i put this in my jsp pg, i had to add "" to become:
String sql = "select top 5 * from(" + "select top " + "'" + offset + "'" + " * from prod where cat=" + "'" + cat + "'" + ")order by prodID desc";
All those quotes are confusing me to no end, so I can't figure out the correct syntax. The error says the syntax error is near the offset.

If offset is, say, 10, and cat is, say, "new", then it looks like you're going to produce the SQL:
select top 5 * from(
select top '10' * from prod where cat='new'
)order by prodID desc

That looks exactly like incorrect syntax to me... TOP almost certainly can't handle a string literal as its operand... you almost certainly want "top 10" instead of "top '10'"...
If you use PreparedStatement, you don't have to remember what to quote and what not to, and you can keep your SQL in a single static final String to boot... -
Need to take a monthly report of SQL statements... Is there any possibility?
Hi,
We have a requirement to find out the list of expensive sql statements in our ECC 6.0 system.
I am aware that we can see expensive sql statements which are being executed online in the TCode ST04 or DB02old.
But I want the list of statements on a monthly basis.
Is there any possibility to find out the list of expensive SQL statements for the previous 3 to 4 months? If so, how do we do that?
Any report or Tcode with navigations.
Please help.
Regards,
Sudheer.
Hi,
> We have a requirement to find out the list of expensive sql statements in our ECC 6.0 system.
nice.
> I am aware that we can see expensive sql statements which are being executed online in the TCode ST04 or DB02old.
expensive SQL in DB02, really?
> But I want the list of statements on a monthly basis.
> Is there any possibility to find out the list of expensive sql statements for the previous 3 to 4 months?
> If so, how do we do that?
Up to now there is no transaction available with such information. The databases partially offer solutions.
Some are switched on by default, others have to be set up before they can be used. So it depends on the
database and the database version you are using.
On ORACLE 10g for example we have the AWR (Automatic Workload Repository) which should be switched on by default (depends again on the license). On ORACLE 9i statspack is available but has to be activated first.
On DB6 (latest database and SAP release) we have the so called performance warehouse.
On other databases there might be solutions too.
Besides that, you can always build your own solution: grab the top 10 SQL statements from the SQL cache and
persist them at regular intervals. I have seen such solutions as well.
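That roll-your-own approach can be sketched in SQL. A minimal sketch, assuming Oracle and SELECT privilege on v$sql; the my_sql_history table name and its column choices are made up for illustration:

```sql
-- One-time setup: an empty history table shaped like the columns we want to keep.
CREATE TABLE my_sql_history AS
SELECT SYSDATE snap_time, sql_id, executions, elapsed_time, sql_text
  FROM v$sql
 WHERE 1 = 0;

-- Run this from a scheduled job (e.g. hourly) to persist the current top 10
-- statements by elapsed time; over months this builds the history ST04 lacks.
INSERT INTO my_sql_history
SELECT SYSDATE, sql_id, executions, elapsed_time, sql_text
  FROM (SELECT sql_id, executions, elapsed_time, sql_text
          FROM v$sql
         ORDER BY elapsed_time DESC)
 WHERE ROWNUM <= 10;
```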
Kind regards,
Hermann -
Extracting SQL statement from a Webi document's data provider using SDK.
Hi all,
Is it possible to extract the SQL statement from an existing Webi document's data provider using the BO SDK? I've searched through the class library but haven't found any information on this yet. If you have done it, could you provide some guidance? Many thanks.
I found the following Java code that might be of some help to you. I realize you are using .NET, but this might push you down the right path.
The trick here is to use the Report Engine SDK to get the DataProvider of the DocumentInstance. Then, look at the SQLDataProvider to get your SQLContainer.
My apologies for the poor formatting. This didn't copy and paste over to the forums very well. I've cleaned up as much as I could.
<%@ page import="com.crystaldecisions.sdk.framework.*" %>
<%@ page import="com.crystaldecisions.sdk.exception.SDKException" %>
<%@ page import="com.crystaldecisions.sdk.occa.infostore.*" %>
<%@ page import="com.businessobjects.rebean.wi.*" %>
<%
boolean loginSuccessful = false;
IEnterpriseSession oEnterpriseSession = null;
String username = "username";
String password = "password";
String cmsname = "cms_name";
String authenticationType = "secEnterprise";

try {
    // Log in.
    oEnterpriseSession = CrystalEnterprise.getSessionMgr().logon(username, password, cmsname, authenticationType);
    if (oEnterpriseSession == null) {
        out.print("<FONT COLOR=RED><B>Unable to login.</B></FONT>");
    } else {
        loginSuccessful = true;
    }
} catch (SDKException sdkEx) {
    out.print("<FONT COLOR=RED><B>ERROR ENCOUNTERED</B><BR>" + sdkEx + "</FONT>");
}

if (loginSuccessful) {
    IInfoObject oInfoObject = null;
    String docname = "WebI document name";

    // Grab the InfoStore from the enterprise session.
    IInfoStore oInfoStore = (IInfoStore) oEnterpriseSession.getService("", "InfoStore");

    // Query for the report object in the CMS. See the Developer Reference guide
    // for more information on the query language.
    String query = "SELECT TOP 1 * "
                 + "FROM CI_INFOOBJECTS "
                 + "WHERE SI_INSTANCE = 0 AND SI_KIND = 'Webi' "
                 + "AND SI_NAME='" + docname + "'";
    IInfoObjects oInfoObjects = (IInfoObjects) oInfoStore.query(query);

    if (oInfoObjects.size() > 0) {
        // Retrieve the latest instance of the report.
        oInfoObject = (IInfoObject) oInfoObjects.get(0);

        // Initialize the Report Engine.
        ReportEngines oReportEngines = (ReportEngines) oEnterpriseSession.getService("ReportEngines");
        ReportEngine oReportEngine = (ReportEngine) oReportEngines.getService(ReportEngines.ReportEngineType.WI_REPORT_ENGINE);

        // Open the document.
        DocumentInstance oDocumentInstance = oReportEngine.openDocument(oInfoObject.getID());

        DataProvider oDataProvider = null;
        SQLDataProvider oSQLDataProvider = null;
        SQLContainer oSQLContainer_root = null;
        SQLNode oSQLNode = null;
        SQLSelectStatement oSQLSelectStatement = null;
        String sqlStatement = null;

        out.print("<TABLE BORDER=1>");
        for (int i = 0; i < oDocumentInstance.getDataProviders().getCount(); i++) {
            oDataProvider = oDocumentInstance.getDataProviders().getItem(i);
            out.print("<TR><TD COLSPAN=2 BGCOLOR=KHAKI>Data Provider Name: " + oDataProvider.getName() + "</TD></TR>");
            if (oDataProvider instanceof SQLDataProvider) {
                oSQLDataProvider = (SQLDataProvider) oDataProvider;
                oSQLContainer_root = oSQLDataProvider.getSQLContainer();
                if (oSQLContainer_root != null) {
                    for (int j = 0; j < oSQLContainer_root.getChildCount(); j++) {
                        oSQLNode = (SQLNode) oSQLContainer_root.getChildAt(j);
                        oSQLSelectStatement = (SQLSelectStatement) oSQLNode;
                        sqlStatement = oSQLSelectStatement.getSQL();
                        out.print("<TR><TD>" + (j + 1) + "</TD><TD>" + sqlStatement + "</TD></TR>");
                    }
                }
            } else {
                out.print("<TR><TD COLSPAN=2>Data Provider is not a SQLDataProvider. SQL statement cannot be retrieved.</TD></TR>");
            }
        }
        out.print("</TABLE>");
        oDocumentInstance.closeDocument();
    }
    oEnterpriseSession.logoff();
}
%>