PX Deq Credit: send blkd (AWR)
Hello,
We are having some performance issues, so I took an AWR report; the top events are as follows.
Top 5 Timed Events                                        Avg %Total
~~~~~~~~~~~~~~~~~~                                       wait   Call
Event                            Waits    Time (s)       (ms)   Time Wait Class
PX Deq Credit: send blkd       170,984      14,575         85   82.0 Other
CPU time                                     1,544               8.7
PX Deq: Signal ACK             183,850         979          5    5.5 Other
db file sequential read         20,979         102          5     .6 User I/O
kksfbc child completion            860          33         38     .2 Other
What in the world is the PX Deq Credit: send blkd event? I use Database Control to monitor performance, and for average active sessions it is always at the top under "Other" - the pink band that indicates the Other wait class is always high. When I drill into that category, I am pointed to PX Deq Credit: send blkd.
So what in the world is PX Deq Credit: send blkd, and how can we get it down? This is 10g on AIX. Some of the relevant parameters:
SGA set to 8 GB
PGA about 1 GB
processes is 150
parallel_max_servers is 80
This box has 16 GB of memory and hosts only one database.
SQL> show parameter parallel;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
fast_start_parallel_rollback         string      LOW
parallel_adaptive_multi_user         boolean     TRUE
parallel_automatic_tuning            boolean     FALSE
parallel_execution_message_size      integer     2152
parallel_instance_group              string
parallel_max_servers                 integer     80
parallel_min_percent                 integer     0
parallel_min_servers                 integer     0
parallel_server                      boolean     FALSE
parallel_server_instances            integer     1
parallel_threads_per_cpu             integer     2
recovery_parallelism                 integer
Well, if you only have 4 CPUs then I think you should address the degree of parallelism you're using first.
Having said that, I'd urge you to experiment with the message size. I can think of theoretical reasons why either small or large sizes could be beneficial here, and I found that a higher size gave fewer waits with a longer average duration, which netted out to a marginal benefit. The default is at the lower end of possible values.
First though, don't choke those CPUs too hard if you can avoid it.
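To make that experiment concrete, here is a hedged sketch (the parameter name is real; the value 8192 is only an example, and since the parameter is static it only takes effect after an instance restart):

```sql
-- Check the current PX message buffer size (the default here, 2152 bytes,
-- sits near the low end of the possible range)
SELECT name, value
  FROM v$parameter
 WHERE name = 'parallel_execution_message_size';

-- Try a larger buffer; static parameter, so it needs a restart to apply
ALTER SYSTEM SET parallel_execution_message_size = 8192 SCOPE = SPFILE;
```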
Similar Messages
-
PX Deq Credit: send blkd At AWR "Top 5 Timed Events"
PX Deq Credit: send blkd At Top 5 Timed Events
Hi ,
Below are examples of "Top 5 Timed Events" in my Staging data warehouse database.
ALWAYS at the very top of the Top 5 Timed Events is the event PX Deq Credit: send blkd.
Oracle says it is an idle event, but since it is always at the top of my AWR reports,
and all the other events are far behind it, I have a feeling that it may indicate
a problem.
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
PX Deq Credit: send blkd 3,152,038 255,152 81 95.6 Other
direct path read 224,839 4,046 18 1.5 User I/O
CPU time 3,217 1.2
direct path read temp 109,209 2,407 22 0.9 User I/O
db file scattered read 31,110 1,436 46 0.5 User I/O
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
PX Deq Credit: send blkd 6,846,579 16,359 2 50.4 Other
direct path read 101,363 5,348 53 16.5 User I/O
db file scattered read 105,377 4,991 47 15.4 User I/O
CPU time 3,795 11.7
direct path read temp 70,208 940 13 2.9 User I/O
Here is some more information:
It's a 500 GB database on Linux Red Hat 4 with 8 CPUs and 16 GB of memory.
It is stored on ASM.
From the spfile:
SQL> show parameter parallel
NAME_COL_PLUS_SHOW_PARAM VALUE_COL_PLUS_SHOW_PARAM
parallel_adaptive_multi_user TRUE
parallel_automatic_tuning FALSE
parallel_execution_message_size 4096
parallel_instance_group
parallel_max_servers 240
parallel_min_percent 0
parallel_min_servers 0
parallel_server FALSE
parallel_server_instances 1
parallel_threads_per_cpu 2
recovery_parallelism 0
Thanks.
Metalink Note 280939.1 said:
"Consider the use of different numbers for the DOP on your tables.
On large tables and their indexes use a high degree, like #CPU.
For smaller tables use DOP (#CPU)/2 as a start value."
Question 1:
"On large tables" - does Metalink mean a table that is large by
size (GB), or by number of rows?
That's one of those vague things that people say without considering that it
could have different meanings. Most people assume that a table that is
large in GB is also large in number of rows.
As far as PQ is concerned, I think that a large number of rows may matter more than large size, because (a) in multi-layer queries you pass rows around, and (b) although the initial rows may be big, you might not need all the columns to run the query, so GB becomes less relevant once the data scan is complete.
As a strategy for setting DOP on the tables, by the way, it sounds quite
good. The difficulty is in the fine-tuning.
Question 2:
I checked how many parallel operations had been
downgraded and found that less than 4% had been
downgraded. Do you think that i still have to consider
reducing the DOP ?
Having lots of slaves means you are less likely to get downgrades. But it's the number of slaves active for a single query that introduces the dequeue waits - so yes, I think you do need to worry about the DOP. (Counter-intuitively, the few downgraded queries may have been performing better than the ones running at full DOP.)
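For reference, the downgrade check mentioned in the question can be done from the instance statistics; a sketch, using the statistic names as they appear in 10g:

```sql
-- How often parallel operations ran at full DOP vs. were downgraded
-- (includes 'not downgraded', 'downgraded to serial', and the pct buckets)
SELECT name, value
  FROM v$sysstat
 WHERE name LIKE 'Parallel operations%';
```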
The difficulty is this - do you need to choose a strategy, or do you just need to fix a couple of queries.
Strategy 1: set DOP to 1 on all tables and indexes, then hint all queries that you think need to run parallel, possibly identifying a few tables and indexes that could benefit from an explicit setting for DOP.
Strategy 2: set DOP to #CPUs on all very large tables and their indexes and #CPUs/2 on the less large tables and their indexes. Check for any queries that perform very badly and either hint different degrees, or fine-tune the degree on a few tables.
Strategy 3: leave parallelism at default, identify particularly badly performing queries and either put in hints for DOP, or use them to identify any tables that need specific settings for DOP.
Starting from scratch, I would want to adopt strategy 1.
Starting from where you are at present, I would spend a little time checking to see if I could get some clues from any extreme queries - i.e. following strategy 3; but if under a lot of time pressure and saw no improvement I would switch to strategy 2.
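A sketch of what strategy 1 looks like in practice (the table and index names here are made up for illustration):

```sql
-- Strategy 1: take implicit parallelism away from the objects...
ALTER TABLE sales NOPARALLEL;
ALTER INDEX sales_pk NOPARALLEL;

-- ...then request it explicitly only in the queries that benefit
SELECT /*+ parallel(s, 4) */ COUNT(*)
  FROM sales s;
```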
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
This SQL statement always in Top Activity, with PX Deq Credit: send blkd
Hi gurus,
The following SQL statement is always among the Top Activity. I can see in the details in Enterprise Manager that it suffers from PX Deq Credit: send blkd.
This is the statement:
SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode;

The statement is invoked by our front end (a web-based app) for a browse table displayed on a web page. The result can be 10 to 8,000 rows. It is used to display the current stock availability for a particular product in a particular warehouse. The stock availability itself is kept in a view: V_Stock_Detail.
These are the parameters relevant to the optimizer:
SQL> show parameter user_dump_dest
user_dump_dest string /u01/app/oracle/admin/ITTDB/udump
SQL> show parameter optimizer
_optimizer_cost_based_transformation string OFF
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
db_file_multiblock_read_count        integer     16
SQL> show parameter db_block_size
db_block_size                        integer     8192

Here is the output of EXPLAIN PLAN:
SQL> explain plan for
SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
Explain complete.
Elapsed: 00:00:00:31
SQL> select * from table(dbms_xplan.display)
PLAN_TABLE_OUTPUT
Plan hash value: 3252950027
| Id  | Operation                                | Name                | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
|   0 | SELECT STATEMENT                         |                     |     1 |   169 |     6  (17)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                          |                     |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)                    | :TQ10003            |     1 |   169 |     6  (17)| 00:00:01 |  Q1,03 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                         |                     |     1 |   169 |     6  (17)| 00:00:01 |  Q1,03 | PCWP |            |
|   4 |     PX RECEIVE                           |                     |     1 |   169 |     6  (17)| 00:00:01 |  Q1,03 | PCWP |            |
|   5 |      PX SEND HASH                        | :TQ10002            |     1 |   169 |     6  (17)| 00:00:01 |  Q1,02 | P->P | HASH       |
|   6 |       HASH GROUP BY                      |                     |     1 |   169 |     6  (17)| 00:00:01 |  Q1,02 | PCWP |            |
|   7 |        NESTED LOOPS OUTER                |                     |     1 |   169 |     5   (0)| 00:00:01 |  Q1,02 | PCWP |            |
|   8 |         MERGE JOIN CARTESIAN             |                     |     1 |   119 |     4   (0)| 00:00:01 |  Q1,02 | PCWP |            |
|   9 |          SORT JOIN                       |                     |       |       |            |          |  Q1,02 | PCWP |            |
|  10 |           NESTED LOOPS                   |                     |     1 |    49 |     4   (0)| 00:00:01 |  Q1,02 | PCWP |            |
|  11 |            BUFFER SORT                   |                     |       |       |            |          |  Q1,02 | PCWC |            |
|  12 |             PX RECEIVE                   |                     |       |       |            |          |  Q1,02 | PCWP |            |
|  13 |              PX SEND BROADCAST           | :TQ10000            |       |       |            |          |        | S->P | BROADCAST  |
|* 14 |               INDEX RANGE SCAN           | PRODUCTS_IDX2       |     1 |    25 |     2   (0)| 00:00:01 |        |      |            |
|  15 |            PX BLOCK ITERATOR             |                     |     1 |    24 |     2   (0)| 00:00:01 |  Q1,02 | PCWC |            |
|* 16 |             MAT_VIEW ACCESS FULL         | MV_CONVERT_UOM      |     1 |    24 |     2   (0)| 00:00:01 |  Q1,02 | PCWP |            |
|  17 |          BUFFER SORT                     |                     |     1 |    70 |     2   (0)| 00:00:01 |  Q1,02 | PCWP |            |
|  18 |           BUFFER SORT                    |                     |       |       |            |          |  Q1,02 | PCWC |            |
|  19 |            PX RECEIVE                    |                     |     1 |    70 |     4   (0)| 00:00:01 |  Q1,02 | PCWP |            |
|  20 |             PX SEND BROADCAST            | :TQ10001            |     1 |    70 |     4   (0)| 00:00:01 |        | S->P | BROADCAST  |
|* 21 |              TABLE ACCESS BY INDEX ROWID | STOCK               |     1 |    70 |     4   (0)| 00:00:01 |        |      |            |
|* 22 |               INDEX RANGE SCAN           | STOCK_PK            |     1 |       |     2   (0)| 00:00:01 |        |      |            |
|* 23 |         TABLE ACCESS BY INDEX ROWID      | MV_TRANS_STOCK      |     1 |    50 |     3   (0)| 00:00:01 |  Q1,02 | PCWP |            |
|* 24 |          INDEX RANGE SCAN                | MV_TRANS_STOCK_IDX1 |     1 |       |     2   (0)| 00:00:01 |  Q1,02 | PCWP |            |
Predicate Information (identified by operation id):
14 - access("P"."PRODUCT"=:PPRODUCT)
16 - filter("CON"."PRODUCT"=:PPRODUCT)
21 - filter("STOCK"."LOC_CODE"=:PLOCCODE)
22 - access("STOCK"."PRODUCT"=:PPRODUCT AND "STOCK"."WH_CODE"=:PWHCODE)
23 - filter("STS"(+)='N')
24 - access("PRODUCT"(+)=:PPRODUCT AND "WH_CODE"(+)=:PWHCODE AND "LOC_CODE"(+)=:PLOCCODE AND "RACK"(+)="STOCK"."RACK" AND
"BATCH"(+)="STOCK"."BATCH" AND "EXP_DATE"(+)="STOCK"."EXP_DATE")
42 rows selected.
Elapsed: 00:00:00:06

Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
SQL> SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
Execution Plan
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS 1 169 6
1 0 PX COORDINATOR
2 1 PX SEND QC (RANDOM) SYS.:TQ10003 1 169 6 :Q1003 P->S QC (RANDOM)
3 2 HASH GROUP BY 1 169 6 :Q1003 PCWP
4 3 PX RECEIVE 1 169 6 :Q1003 PCWP
5 4 PX SEND HASH SYS.:TQ10002 1 169 6 :Q1002 P->P HASH
6 5 HASH GROUP BY 1 169 6 :Q1002 PCWP
7 6 NESTED LOOPS OUTER 1 169 5 :Q1002 PCWP
8 7 MERGE JOIN CARTESIAN 1 119 4 :Q1002 PCWP
9 8 SORT JOIN :Q1002 PCWP
10 9 NESTED LOOPS 1 49 4 :Q1002 PCWP
11 10 BUFFER SORT :Q1002 PCWC
12 11 PX RECEIVE :Q1002 PCWP
13 12 PX SEND BROADCAST SYS.:TQ10000 S->P BROADCAST
14 13 INDEX RANGE SCAN ITT_NEW.PRODUCTS_IDX2 1 25 2
15 10 PX BLOCK ITERATOR 1 24 2 :Q1002 PCWC
16 15 MAT_VIEW ACCESS FULL ITT_NEW.MV_CONVERT_UOM 1 24 2 :Q1002 PCWP
17 8 BUFFER SORT 1 70 2 :Q1002 PCWP
18 17 BUFFER SORT :Q1002 PCWC
19 18 PX RECEIVE 1 70 4 :Q1002 PCWP
20 19 PX SEND BROADCAST SYS.:TQ10001 1 70 4 S->P BROADCAST
21 20 TABLE ACCESS BY INDEX ROWID ITT_NEW.STOCK 1 70 4
22 21 INDEX RANGE SCAN ITT_NEW.STOCK_PK 1 2
23 7 TABLE ACCESS BY INDEX ROWID ITT_NEW.MV_TRANS_STOCK 1 50 3 :Q1002 PCWP
24 23 INDEX RANGE SCAN ITT_NEW.MV_TRANS_STOCK_IDX1 1 2 :Q1002 PCWP
Statistics
570 recursive calls
0 physical write total IO requests
0 physical write total multi block requests
0 physical write total bytes
0 physical writes direct temporary tablespace
0 java session heap live size max
0 java session heap object count
0 java session heap object count max
0 java session heap collected count
0 java session heap collected bytes
83 rows processed
Elapsed: 00:00:03:24
SQL> disconnect
Commit complete
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

The TKPROF output for this statement looks like the following:
TKPROF: Release 10.2.0.3.0 - Production on Thu Apr 23 12:39:29 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: ittdb_ora_9566_mytrace1.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.04 0.12 0 10 4 0
Fetch 43 0.05 2.02 0 73 0 83
total 45 0.10 2.15 0 83 4 83
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 164
Rows Row Source Operation
83 PX COORDINATOR (cr=83 pr=0 pw=0 time=2086576 us)
0 PX SEND QC (RANDOM) :TQ10003 (cr=0 pr=0 pw=0 time=0 us)
0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us)
0 MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=0 us)
0 SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
1 INDEX RANGE SCAN PRODUCTS_IDX2 (cr=2 pr=0 pw=0 time=62 us)(object id 135097)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 MAT_VIEW ACCESS FULL MV_CONVERT_UOM (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
83 TABLE ACCESS BY INDEX ROWID STOCK (cr=78 pr=0 pw=0 time=1635 us)
83 INDEX RANGE SCAN STOCK_PK (cr=4 pr=0 pw=0 time=458 us)(object id 135252)
0 TABLE ACCESS BY INDEX ROWID MV_TRANS_STOCK (cr=0 pr=0 pw=0 time=0 us)
0 INDEX RANGE SCAN MV_TRANS_STOCK_IDX1 (cr=0 pr=0 pw=0 time=0 us)(object id 143537)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PX Deq: Join ACK 17 0.00 0.00
PX qref latch 2 0.00 0.00
PX Deq Credit: send blkd 72 1.95 2.00
PX Deq: Parse Reply 26 0.01 0.01
SQL*Net message to client 43 0.00 0.00
PX Deq: Execute Reply 19 0.00 0.01
SQL*Net message from client 43 0.00 0.04
PX Deq: Signal ACK 12 0.00 0.00
enq: PS - contention 1 0.00 0.00
********************************************************************************

The DBMS_XPLAN.DISPLAY_CURSOR output:
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
PLAN_TABLE_OUTPUT
SQL_ID 402b8st7vt6ku, child number 2
SELECT /*+ gather_plan_statistics */ S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
Plan hash value: 3252950027
| Id  | Operation                                | Name                | Starts | E-Rows | A-Rows |   A-Time    | Buffers |  OMem |  1Mem | Used-Mem |
|   1 |  PX COORDINATOR                          |                     |      1 |        |     83 | 00:00:02.25 |      83 |       |       |          |
|   2 |   PX SEND QC (RANDOM)                    | :TQ10003            |      0 |     21 |      0 | 00:00:00.01 |       0 |       |       |          |
|   3 |    HASH GROUP BY                         |                     |      0 |     21 |      0 | 00:00:00.01 |       0 |       |       |          |
|   4 |     PX RECEIVE                           |                     |      0 |     21 |      0 | 00:00:00.01 |       0 |       |       |          |
|   5 |      PX SEND HASH                        | :TQ10002            |      0 |     21 |      0 | 00:00:00.01 |       0 |       |       |          |
|   6 |       HASH GROUP BY                      |                     |      0 |     21 |      0 | 00:00:00.01 |       0 |       |       |          |
|   7 |        NESTED LOOPS OUTER                |                     |      0 |     21 |      0 | 00:00:00.01 |       0 |       |       |          |
|   8 |         MERGE JOIN CARTESIAN             |                     |      0 |     21 |      0 | 00:00:00.01 |       0 |       |       |          |
|   9 |          SORT JOIN                       |                     |      0 |        |      0 | 00:00:00.01 |       0 | 73728 | 73728 |          |
|  10 |           NESTED LOOPS                   |                     |      0 |      1 |      0 | 00:00:00.01 |       0 |       |       |          |
|  11 |            BUFFER SORT                   |                     |      0 |        |      0 | 00:00:00.01 |       0 | 73728 | 73728 |          |
|  12 |             PX RECEIVE                   |                     |      0 |        |      0 | 00:00:00.01 |       0 |       |       |          |
|  13 |              PX SEND BROADCAST           | :TQ10000            |      0 |        |      0 | 00:00:00.01 |       0 |       |       |          |
|* 14 |               INDEX RANGE SCAN           | PRODUCTS_IDX2       |      1 |      1 |      1 | 00:00:00.01 |       2 |       |       |          |
|  15 |            PX BLOCK ITERATOR             |                     |      0 |      1 |      0 | 00:00:00.01 |       0 |       |       |          |
|* 16 |             MAT_VIEW ACCESS FULL         | MV_CONVERT_UOM      |      0 |      1 |      0 | 00:00:00.01 |       0 |       |       |          |
|  17 |          BUFFER SORT                     |                     |      0 |     21 |      0 | 00:00:00.01 |       0 | 73728 | 73728 |          |
|  18 |           BUFFER SORT                    |                     |      0 |        |      0 | 00:00:00.01 |       0 | 73728 | 73728 |          |
|  19 |            PX RECEIVE                    |                     |      0 |     21 |      0 | 00:00:00.01 |       0 |       |       |          |
|  20 |             PX SEND BROADCAST            | :TQ10001            |      0 |     21 |      0 | 00:00:00.01 |       0 |       |       |          |
|* 21 |              TABLE ACCESS BY INDEX ROWID | STOCK               |      1 |     21 |     83 | 00:00:00.01 |      78 |       |       |          |
|* 22 |               INDEX RANGE SCAN           | STOCK_PK            |      1 |     91 |     83 | 00:00:00.01 |       4 |       |       |          |
|* 23 |         TABLE ACCESS BY INDEX ROWID      | MV_TRANS_STOCK      |      0 |      1 |      0 | 00:00:00.01 |       0 |       |       |          |
|* 24 |          INDEX RANGE SCAN                | MV_TRANS_STOCK_IDX1 |      0 |      1 |      0 | 00:00:00.01 |       0 |       |       |          |
Predicate Information (identified by operation id):
14 - access("P"."PRODUCT"=:PPRODUCT)
16 - access(:Z>=:Z AND :Z<=:Z)
filter("CON"."PRODUCT"=:PPRODUCT)
21 - filter("STOCK"."LOC_CODE"=:PLOCCODE)
22 - access("STOCK"."PRODUCT"=:PPRODUCT AND "STOCK"."WH_CODE"=:PWHCODE)
23 - filter("STS"='N')
24 - access("PRODUCT"=:PPRODUCT AND "WH_CODE"=:PWHCODE AND "LOC_CODE"=:PLOCCODE AND "RACK"="STOCK"."RACK" AND
     "BATCH"="STOCK"."BATCH" AND "EXP_DATE"="STOCK"."EXP_DATE")
53 rows selected.
Elapsed: 00:00:00:12

I'm looking forward to suggestions on how to improve the performance of this statement.
Thank you very much,
xtanto

xtanto wrote:
Hi sir,
How can I prevent the query from doing a parallel query?
Because, as you can see, I am not actually issuing any PARALLEL hints in the query.
Thank you,
xtanto

Kristanto,
there are a couple of points to consider:
1. Your SQL*Plus version seems to be outdated. Please use a SQL*Plus version that corresponds to your database version. E.g. the AUTOTRACE output is odd.
2. I would suggest to repeat your exercise using serial execution (the plan, the autotrace, the tracing). You can disable parallel queries by issuing this in your session:
ALTER SESSION DISABLE PARALLEL QUERY;
This way the output of the tools is much more meaningful, however you might get a different execution plan, therefore the results might not be representative for your parallel execution.
3. The function calls might pose a problem. If they do, one possible damage limitation has been provided by hoek. Even better would be to replace the PL/SQL function with equivalent plain SQL. However, since you say the query does not generate too many rows, the functions might not hurt too much here. You can check their impact by running a similar query that omits the function calls.
4. The parallel execution plan contains a MERGE JOIN CARTESIAN operation which could be an issue if the estimates of the optimizer are incorrect. If the serial execution still uses this operation the TKPROF and DBMS_XPLAN.DISPLAY_CURSOR output will reveal whether this is a problem or not.
5. The execution of the statement seems to take only 2-3 seconds in your tests. Is this in the right ballpark? If so, why should this statement be problematic? How often does it get executed?
6. The statement uses bind variables, so you might have executions that use different execution plans depending on the bind values passed when the statement got optimized. You can use DBMS_XPLAN.DISPLAY_CURSOR with NULL as the "child_number" parameter, or DBMS_XPLAN.DISPLAY_AWR (if you have an AWR license), to check whether you have multiple execution plans for the statement. Note that older versions might already have been aged out of the shared pool, so the AWR repository may be a more reliable source (but only if the statement has been sampled).
7. You have disabled cost based transformations: "_optimizer_cost_based_transformation" = OFF. Why?
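For point 6, a sketch of the check; the sql_id is the one shown in the DISPLAY_CURSOR output earlier in the thread, and passing NULL as the child number reports every cached child cursor:

```sql
-- One plan section per child cursor still in the shared pool
SELECT * FROM TABLE(
  dbms_xplan.display_cursor('402b8st7vt6ku', NULL, 'ALLSTATS LAST')
);
```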
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
"PX Deq Credit: send blkd" , Oracle 11g, ASH, and OEM
In our 10g databases (monitored by OEM 10g) we can sometimes see this event depicted magnificently in Pepto-Bismol pink, when a developer has overdone it with a parallelism hint or table parallelism setting.
Last night, on one of our 11g databases, monitored by OEM 12c, we were trying to see if parallelism would help 2 long-running queries. (it was not a success.) I was specifically watching for the PX Deq Credit: send blkd and similar waits via OEM, and I also ran several ASH reports to verify there were no unusual events. No sign of these waits.
Overnight, the developer who was working with me sent me a screen shot from Toad showing these waits. I looked again and still saw no signs via ASH or OEM. I queried DBA_HIST_ACTIVE_SESS_HISTORY and found some logged there from days or weeks ago but that was all. I ran an AWR report and sure enough:
                                                        Avg
                                          %Time  Total Wait   wait    Waits  % DB
Event                         Waits       -outs    Time (s)   (ms)     /txn  time
PX Deq Credit: send blkd      13,656,665      0     314,939     23  2,474.0

(this was over an 8-hour span)
Has something changed in Oracle 11g that prevents these from showing up in ASH or OEM?
Thanks,
Mike

Hi,
It could be due to the flush/purge log jobs running every night as default DB maintenance jobs.
Thanks,
Ajay More -
"PX Deq Credit: send blkd" wait event as pink line on EM
Hi,
We have waits "Other" and see pink lines on EM on a particular query.
Wait type is "PX Deq Credit: send blkd".
After I checked this wait type on Google, I learned that this wait happens because of parallel queries.
My question is: we have a lot of queries in our DB, but why do we see this event type only when selecting from a materialized view? This MV has just 19,000 rows, which is tiny compared to the 100 GB tables in our 2.5 TB DB.
What else could be causing these pink lines? Do materialized views cause such a thing?
Thanks in advance,

KAYSERI wrote:
After I checked this wait type on Google, I learned that this wait happens because of parallel queries.
My question is: we have a lot of queries in our DB, but why do we see this event type only when selecting from a materialized view? This MV has just 19,000 rows, which is tiny compared to the 100 GB tables in our 2.5 TB DB.
What else could be causing these pink lines? Do materialized views cause such a thing?

For comments about whether the event is idle or not, here's something I wrote a couple of years ago on this forum:
Re: PX Deq Credit: send blkd At AWR "Top 5 Timed Events"
When you are "selecting from a materialized view", is the MV the only thing in the query, and are you selecting from it with an explicit reference to the MV name? Or do you mean that you are running queries that include the MV as one of the tables - perhaps after rewrite?
MVs do not cause parallel execution. However I would check your parallel execution settings at the database level and the table and index level for this MV; and I would check the statistics on this MV to see if they are consistent with your opinion of its size.
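A sketch of those checks (the MV name here is hypothetical; a DEGREE other than 1 on the MV's container table or its indexes invites parallel plans):

```sql
-- Parallel setting and statistics on the MV's container table
SELECT table_name, degree, num_rows, blocks, last_analyzed
  FROM user_tables
 WHERE table_name = 'MY_MV';          -- hypothetical MV name

-- ...and on its indexes
SELECT index_name, degree
  FROM user_indexes
 WHERE table_name = 'MY_MV';
```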
Regards
Jonathan Lewis -
PX Deq Credit: send blkd wait event
Hi all,
I'm working on a performance issue for an Oracle 10.2.0.5 DB under HP UX.
While taking a look at the AWR I can see the following in the Top 5 Timed Events section:
The first is:
PX Deq Credit: send blkd
I checked, and the tables and indexes are not set to parallel; maybe they use queries with a PARALLEL hint.
In the Time Model Statistics section, the first entry is sql execute elapsed time (95% of DB time).
Is "PX Deq Credit: send blkd" a problem?
I mean, is the DB waiting on that, or is it just idle time that I shouldn't worry about?
In the wait class breakdown, Other is 94% of DB time.
Thanks in advance.
Edited by: Diego on 20-dic-2011 6:35

Hi jgarry,
How can I check how many slaves there are?
This is the output of the TOP command:
User CPU % Thrd Disk Memory Block
Process Name PID Name (1600% max) Cnt IOrate RSS/VSS On
ora_s000_dbs 23134 oracle 56.7 1 15.2 413.8mb 427.6mb SLEEP
ora_p007_dbs 23239 oracle 12.1 1 0.0 399.3mb 413.0mb SLEEP
ora_p009_dbs 23243 oracle 11.9 1 0.0 399.3mb 413.0mb SLEEP
ora_p010_dbs 23245 oracle 11.9 1 0.0 399.3mb 413.0mb SLEEP
ora_p017_dbs 23259 oracle 11.7 1 0.0 399.3mb 413.0mb SLEEP
ora_p013_dbs 23251 oracle 11.7 1 0.0 399.3mb 413.0mb SLEEP
ora_p011_dbs 23247 oracle 11.7 1 0.0 399.3mb 413.0mb SLEEP
ora_p002_dbs 23229 oracle 11.5 1 0.0 399.3mb 413.0mb SLEEP
ora_p003_dbs 23231 oracle 11.5 1 0.0 399.3mb 413.0mb SLEEP
ora_p008_dbs 23241 oracle 11.5 1 0.0 399.3mb 413.0mb SLEEP
ora_p005_dbs 23235 oracle 11.5 1 0.0 399.3mb 413.0mb SLEEP
ora_p015_dbs 23255 oracle 11.3 1 0.0 399.3mb 413.0mb SLEEP
ora_p006_dbs 23237 oracle 11.3 1 0.0 399.3mb 413.0mb SLEEP
ora_p012_dbs 23249 oracle 11.1 1 0.0 399.3mb 413.0mb SLEEP
ora_p001_dbs 23227 oracle 11.1 1 0.0 399.4mb 413.1mb SLEEP
ora_p016_dbs 23257 oracle 11.1 1 0.0 399.3mb 413.0mb SLEEP
ora_p019_dbs 23263 oracle 10.9 1 0.0 399.3mb 413.0mb SLEEP
ora_p014_dbs 23253 oracle 10.9 1 0.0 399.3mb 413.0mb SLEEP
ora_p018_dbs 23261 oracle 10.9 1 0.0 399.3mb 413.0mb SLEEP
ora_p004_dbs 23233 oracle 10.9 1 0.0 399.3mb 413.0mb SLEEP
ora_p023_dbs 23271 oracle 10.7 1 0.0 399.3mb 413.0mb SLEEP
ora_p000_dbs 23225 oracle 10.5 1 0.0 399.4mb 413.1mb SLEEP
ora_p020_dbs 23265 oracle 10.5 1 0.0 399.3mb 413.0mb SLEEP
ora_p025_dbs 23275 oracle 10.5 1 0.0 399.3mb 413.0mb SLEEP
ora_p022_dbs 23269 oracle 10.3 1 0.0 399.3mb 413.0mb SLEEP
ora_p028_dbs 23281 oracle 10.3 1 0.0 399.3mb 413.0mb SLEEP
ora_p021_dbs 23267 oracle 10.1 1 0.0 399.3mb 413.0mb SLEEP
ora_p029_dbs 23283 oracle 10.1 1 0.0 399.3mb 413.0mb SLEEP
ora_p024_dbs 23273 oracle 10.1 1 0.0 399.3mb 413.0mb SLEEP
ora_p027_dbs 23279 oracle 9.9 1 0.0 399.3mb 413.0mb SLEEP
ora_p026_dbs 23277 oracle 9.7 1 0.0 399.3mb 413.0mb SLEEP
What is ora_s000_dbs? It uses more than 50% of the CPU.
Could the problem be that there are too many slaves?
How can I know how many there are, and what can I do to reduce them?
I think the more CPUs Oracle has, the more slaves/threads are fired.
Thanks a lot. -
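For what it's worth, the PX slaves can be counted from inside the database rather than from top; a sketch (the ora_pNNN processes in the listing above are the PX slaves, while ora_s000 is a shared server process, which is a different thing):

```sql
-- One row per parallel execution server currently alive
SELECT server_name, status, pid, spid
  FROM v$px_process;

-- Summary: servers in use, available, and started since instance startup
SELECT * FROM v$px_process_sysstat;
```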
PX Deq Credit: send blkd -
Linux 2.6.18-128.el5 x86-64
Oracle DB 11.1.0.7
Clusterware 11.1.0.7
ASM 11.1.0.7
Eight (8) node rac cluster
I am stumped and am asking for some guidance before resorting to creating an SR.
The query "select count(*) from gv$transaction where start_date < :b1" hangs when more than one node in the cluster is up. The sql statement is executed from within a stored procedure. Checking wait events yields "PX Deq Credit: send blkd". If I execute the same sql statement standalone, or within an anonymous pl/sql block, from any node, the statement works fine. When invoked from within the application it hangs every time. The package is wrapped, so I cannot see what transpired prior to this statement. The offending statement never completes. I have googled and checked Metalink without much success.
I ran the racdiag.sql script in the hope of spotting something obvious, and in preparation for opening an SR (sigh). Again, I am stumped for the moment. Thank you.
RCVR RCVRINST SID SNDR EVENT SNDRINST SNDRSID
A QC 4 2075 P#### PX Deq Credit: send blkd 1
PZ98 4 2089 QC PX Deq: Execution Msg 196609 2140
Username QC/Slave SlaveSet SID Slave INS STATE WAIT_EVENT QC SID QC INS Req. DOP Actual DOP
GENEVA_ADMIN QC 2075 4 WAIT PX Deq Credit: send blkd 2075
SYS QC 2140 4 WAIT PX Deq: Execute Reply 2140
- pz98 (Slave) 1 2058 1 WAIT PX Deq: Execution Msg 2140 4 8 8
- pz98 (Slave) 1 2028 2 WAIT PX Deq: Execution Msg 2140 4 8 8
- pz98 (Slave) 1 2127 3 WAIT PX Deq: Execution Msg 2140 4 8 8
- pz98 (Slave) 1 2096 4 WAIT PX Deq: Execution Msg 2140 4 8 8
- pz98 (Slave) 1 2164 5 WAIT PX Deq: Execution Msg 2140 4 8 8
- pz98 (Slave) 1 2034 6 WAIT PX Deq: Execution Msg 2140 4 8 8
- pz98 (Slave) 1 2110 7 WAIT PX Deq: Execution Msg 2140 4 8 8
- pz98 (Slave) 1 2009 8 WAIT PX Deq: Execution Msg 2140 4 8 8
10 rows selected.

Hi,
I hit the same issue on Oracle 10gR2 and was told that it is an Oracle bug; refer to Metalink note 762412.1, which might be helpful.
We are still planning to implement the solution, but the patch upgrade is not yet scheduled. Please let me know if this solves your issue. -
PX DEQ CREDIT SEND BLKD on GV$
Hi All,
We experience the "PX DEQ CREDIT SEND BLKD" wait, the pink spike on OEM, when there is a query on GV$ views in 10g (10.2.0.5), especially on:
SELECT event, sql_id, sql_plan_hash_value, sql_opcode, session_id, session_serial#,
module, action, client_id, DECODE(wait_time, 0, 'W', 'C'), 1, time_waited,
service_hash, user_id, program, sample_time, p1, p2, p3, current_file#,
current_obj#, current_block#, qc_session_id, qc_instance_id, INST_ID
FROM gv$active_session_history WHERE sample_time >= :1 AND sample_time <= :2By DBSNMP user.
Even tested just
Select * from gv$active_session_history;The results is same waits on PX DEQ CREDIT SEND BLKD.
Since the issue appeared we have increased parallel_execution_message_size from 2K to 4K, but it still shows the same waits.
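For reference, parallel_execution_message_size is a static parameter, so each value tested needs an spfile change and an instance bounce; something like (assuming an spfile is in use):

```sql
-- Static parameter: SCOPE=SPFILE plus an instance restart is required.
ALTER SYSTEM SET parallel_execution_message_size = 8192 SCOPE = SPFILE;
-- after restarting, confirm the value actually in effect:
SHOW PARAMETER parallel_execution_message_size
```

Larger messages mean fewer, longer waits at the cost of more shared/large pool memory per buffer, which matches the earlier observation in this thread that a higher size netted only a marginal benefit.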
Here are some of my findings and reading on it, though:
"PX Deq Credit: send blkd" - there are two different scenarios where it can appear: one as an "idle" event and one as a performance threat.
When PX slaves feed a query co-ordinator (QC), only one can supply data at a time and the others go into the "PX Deq Credit: Send blkd" with a timeout of 2 seconds.
The end user doesn't see the result set appearing any more slowly because of this.
When one layer of PX slaves is passing data up the tree to the next layer, then there is competition for the PX table queues (virtual tables) with PX slaves blocked and unable to write into the virtual tables. Waits at this point are time-wasting events.
PX Deq Credit: send blkd indicates that a producer wants to send data to a consumer, but the consumer is still busy with previous requests and so isn't ready to receive it, i.e. it is falling behind. Reducing the DOP would reduce both how often this happens and for how long. But we are not setting a DOP on the query, as it is run automatically by the DBSNMP user.
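One rough way to see who is actually reporting the wait, and which slave set they belong to, is to group recent ASH samples by coordinator and slave set (a sketch only, assuming Diagnostics Pack licensing for ASH):

```sql
-- Sketch: group recent 'PX Deq Credit: send blkd' samples by coordinator
-- and slave set. A set blocked feeding another set suggests genuine
-- back-pressure; slaves blocked feeding the QC itself are often benign.
SELECT px.qcsid, px.server_group, px.server_set, COUNT(*) AS samples
FROM   v$active_session_history ash
JOIN   v$px_session px
  ON   px.sid = ash.session_id
WHERE  ash.event = 'PX Deq Credit: send blkd'
GROUP  BY px.qcsid, px.server_group, px.server_set
ORDER  BY samples DESC;
```

Note this queries the single-instance v$ views deliberately; a gv$ version would itself spawn cross-instance PX slaves, which is exactly the mechanism under investigation here.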
I will be testing with 8K soon, but I would like to know if anyone has ideas or suggestions on the issue, or has encountered it as well.
Thanks. Can anyone please give any pointers?
-
PX Deq Credit: send blkd hangs while refreshing a materialized view
Hi All,
When we refresh the materialized view it now takes more than 2 hrs 30 min; initially it took about 1 hr 40 min.
We are using parallelism and the base tables are partitioned. In the tkprof report I see the insert mostly waiting on the PX Deq Credit: send blkd event. When I check the ASH report I don't find any query related to the MV running, yet the MV refresh was still going on.
TKPROF: Release 11.2.0.1.0 - Development on Wed Jun 5 16:27:29 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Trace file: CHDFCI_p001_43384918_PARALLEL.trc
Sort options: exeela prsela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
EXPLAIN PLAN option disabled.
SQL ID: 2x210q5g30m4t
Plan Hash: 2058446196
INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND */ INTO
"APPS"."GL_BAL_MV" SELECT * FROM
GL_BAL_V
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 362.20 9372.04 1158765 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 362.20 9372.04 1158765 0 0 0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 175 (recursive depth: 1)
Rows Row Source Operation
0 LOAD AS SELECT (cr=0 pr=0 pw=0 time=0 us)
0 PX COORDINATOR (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND QC (RANDOM) :TQ10003 (cr=0 pr=0 pw=0 time=0 us cost=1041298 size=389555904 card=2028937)
78448967 HASH JOIN BUFFERED (cr=0 pr=1158765 pw=1158765 time=276842112 us cost=1041298 size=389555904 card=2028937)
410944 BUFFER SORT (cr=0 pr=0 pw=0 time=492466 us)
410944 PX RECEIVE (cr=0 pr=0 pw=0 time=34526636 us cost=64715 size=147944250 card=1643825)
0 PX SEND HASH :TQ10001 (cr=0 pr=0 pw=0 time=0 us cost=64715 size=147944250 card=1643825)
0 PARTITION RANGE ALL PARTITION: 1 39 (cr=0 pr=0 pw=0 time=0 us cost=64715 size=147944250 card=1643825)
0 TABLE ACCESS FULL GL_CODE_COMBINATIONS PARTITION: 1 39 (cr=0 pr=0 pw=0 time=0 us cost=64715 size=147944250 card=1643825)
78448967 PX RECEIVE (cr=0 pr=0 pw=0 time=2453949696 us cost=976582 size=395060280 card=3873140)
0 PX SEND HASH :TQ10002 (cr=0 pr=0 pw=0 time=0 us cost=976582 size=395060280 card=3873140)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=976582 size=395060280 card=3873140)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us cost=32 size=133920 card=2480)
0 PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us cost=32 size=133920 card=2480)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=32 size=133920 card=2480)
0 TABLE ACCESS FULL GL_SETS_OF_BOOKS (cr=0 pr=0 pw=0 time=0 us cost=7 size=108 card=6)
0 TABLE ACCESS FULL GL_PERIODS (cr=0 pr=0 pw=0 time=0 us cost=24 size=44640 card=1240)
0 PX BLOCK ITERATOR PARTITION: 1 39 (cr=0 pr=0 pw=0 time=0 us cost=976550 size=30099548160 card=627073920)
0 TABLE ACCESS FULL GL_BALANCES PARTITION: 1 39 (cr=0 pr=0 pw=0 time=0 us cost=976550 size=30099548160 card=627073920)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PX Deq: Execution Msg 3 0.16 0.17
PX Deq Credit: send blkd 1061004 1.99 5084.61
PX Deq: Table Q Normal 250856 2.00 2306.87
asynch descriptor resize 1 0.00 0.00
Disk file operations I/O 10 0.23 0.26
direct path write temp 3608 1.20 958.39
latch free 26 0.02 0.19
PX qref latch 7647924 0.05 11.85
direct path read temp 578 0.43 35.19
PX Deq Credit: need buffer 4037 0.08 5.84
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PX Deq: Execution Msg 3 0.47 0.75
PX Deq: Slave Session Stats 1 0.15 0.15
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 362.20 9372.04 1158765 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 362.20 9372.04 1158765 0 0 0
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PX Deq: Execution Msg 3 0.16 0.17
PX Deq Credit: send blkd 1061004 1.99 5084.61
PX Deq: Table Q Normal 250856 2.00 2306.87
asynch descriptor resize 1 0.00 0.00
Disk file operations I/O 10 0.23 0.26
direct path write temp 3608 1.20 958.39
latch free 26 0.02 0.19
PX qref latch 7647924 0.05 11.85
direct path read temp 578 0.43 35.19
PX Deq Credit: need buffer 4037 0.08 5.84
1 user SQL statements in session.
0 internal SQL statements in session.
1 SQL statements in session.
0 statements EXPLAINed in this session.
Trace file: CHDFCI_p001_43384918_PARALLEL.trc
Trace file compatibility: 11.1.0.7
Sort options: exeela prsela fchela
1 session in tracefile.
1 user SQL statements in trace file.
0 internal SQL statements in trace file.
1 SQL statements in trace file.
1 unique SQL statements in trace file.
8986825 lines in trace file.
9372 elapsed seconds in trace file.
When I checked the ASH report during this time, I didn't see anything running related to the MV.
I am using parallel degree 8 for GL_BALANCES.
Please suggest.
Hi,
Even after enabling parallel DML, the same plan is generated and the MV refresh takes the same time.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
PLAN_TABLE_OUTPUT
| 0 | INSERT STATEMENT | | | | 1027K(100)| | | | | | |
| 1 | LOAD AS SELECT | | | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | 1998K| 365M| 1027K (1)|999:59:59 | | | Q1,03 | P->S | QC (RAND) |
| 4 | HASH JOIN BUFFERED | | 1998K| 365M| 1027K (1)|999:59:59 | | | Q1,03 | PCWP | |
| 5 | BUFFER SORT | | | | | | | | Q1,03 | PCWC | |
| 6 | PX RECEIVE | | 1642K| 141M| 64715 (0)|999:59:59 | | | Q1,03 | PCWP | |
| 7 | PX SEND HASH | :TQ10001 | 1642K| 141M| 64715 (0)|999:59:59 | | | | S->P | HASH |
| 8 | PARTITION RANGE ALL | | 1642K| 141M| 64715 (0)|999:59:59 | 1 | 39 | | | |
| 9 | TABLE ACCESS FULL | GL_CODE_COMBINATIONS | 1642K| 141M| 64715 (0)|999:59:59 | 1 | 39 | | | |
| 10 | PX RECEIVE | | 3820K| 371M| 963K (1)|999:59:59 | | | Q1,03 | PCWP | |
PLAN_TABLE_OUTPUT
| 11 | PX SEND HASH | :TQ10002 | 3820K| 371M| 963K (1)|999:59:59 | | | Q1,02 | P->P | HASH |
| 12 | HASH JOIN | | 3820K| 371M| 963K (1)|999:59:59 | | | Q1,02 | PCWP | |
| 13 | BUFFER SORT | | | | | | | | Q1,02 | PCWC | |
| 14 | PX RECEIVE | | 2480 | 130K| 32 (4)| 00:40:12 | | | Q1,02 | PCWP | |
| 15 | PX SEND BROADCAST | :TQ10000 | 2480 | 130K| 32 (4)| 00:40:12 | | | | S->P | BROADCAST |
| 16 | HASH JOIN | | 2480 | 130K| 32 (4)| 00:40:12 | | | | | |
| 17 | TABLE ACCESS FULL| GL_SETS_OF_BOOKS | 6 | 108 | 7 (0)| 00:08:48 | | | | | |
| 18 | TABLE ACCESS FULL| GL_PERIODS | 1240 | 44640 | 24 (0)| 00:30:09 | | | | | |
| 19 | PX BLOCK ITERATOR | | 618M| 27G| 963K (1)|999:59:59 | 1 | 39 | Q1,02 | PCWC | |
| 20 | TABLE ACCESS FULL | GL_BALANCES | 618M| 27G| 963K (1)|999:59:59 | 1 | 39 | Q1,02 | PCWP | |
--------------------------------------------------------------------------------------------------------------------------------------------------
Please find the completion time for the MV refresh below.
14:58:47 SQL> alter session enable parallel dml;
Session altered.
Elapsed: 00:00:00.27
14:59:50 SQL> exec dbms_mview.REFRESH ('GL_BAL_MV','C',atomic_refresh=>FALSE);
PL/SQL procedure successfully completed.
Elapsed: 02:30:58.37
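Since the tkprof output shows the insert's producers stalling on the table queues, one cheap follow-up is v$pq_tqstat, which is only populated in the session that just ran the parallel statement; a sketch:

```sql
-- Run from the same session, immediately after the parallel INSERT/refresh.
-- Skewed NUM_ROWS across slaves of one server set means one consumer is
-- doing a disproportionate share while the rest of the pipeline waits.
SELECT dfo_number, tq_id, server_type, process,
       num_rows, bytes, waits, timeouts
FROM   v$pq_tqstat
ORDER  BY dfo_number, tq_id, server_type DESC, process;
```

Here the HASH distributions into :TQ10002/:TQ10003 would be the ones to check for skew on the join keys.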
Thanks -
Top Time Event "PX Deq Credit: send blkd"
Hi,
I have this event at the top of the Wait Events section of my Statspack report. Does anyone know how to minimize the time spent on this event? Will increasing parallel_max_servers help me?
Tks,
Paulo.
What's your Oracle version and OS? Are you using 8i with OPS (Oracle Parallel Server)? -
How to set the correct shared pool and buffer cache sizes using AWR
Hi All,
I want to know how to set the correct sizes for shared_pool_size and db_cache_size using the shared pool advisory and buffer pool advisory of an AWR report. I have pasted both advisory sections below.
Shared Pool Advisory
* SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
* Note there is often a 1:Many correlation between a single logical object in the Library Cache, and the physical number of memory objects associated with it. Therefore comparing the number of Lib Cache objects (e.g. in v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Shared Pool Size(M) SP Size Factr Est LC Size (M) Est LC Mem Obj Est LC Time Saved (s) Est LC Time Saved Factr Est LC Load Time (s) Est LC Load Time Factr Est LC Mem Obj Hits (K)
4,096 1.00 471 25,153 184,206 1.00 149 1.00 9,069
4,736 1.16 511 27,328 184,206 1.00 149 1.00 9,766
5,248 1.28 511 27,346 184,206 1.00 149 1.00 9,766
5,760 1.41 511 27,346 184,206 1.00 149 1.00 9,766
6,272 1.53 511 27,346 184,206 1.00 149 1.00 9,766
6,784 1.66 511 27,346 184,206 1.00 149 1.00 9,766
7,296 1.78 511 27,346 184,206 1.00 149 1.00 9,766
7,808 1.91 511 27,346 184,206 1.00 149 1.00 9,766
8,320 2.03 511 27,346 184,206 1.00 149 1.00 9,766
Buffer Pool Advisory
* Only rows with estimated physical reads >0 are displayed
* ordered by Block Size, Buffers For Estimate
P Size for Est (M) Size Factor Buffers (thousands) Est Phys Read Factor Estimated Phys Reads (thousands) Est Phys Read Time Est %DBtime for Rds
D 4,096 0.10 485 1.02 1,002 1 0.00
D 8,192 0.20 970 1.00 987 1 0.00
D 12,288 0.30 1,454 1.00 987 1 0.00
D 16,384 0.40 1,939 1.00 987 1 0.00
D 20,480 0.50 2,424 1.00 987 1 0.00
D 24,576 0.60 2,909 1.00 987 1 0.00
D 28,672 0.70 3,394 1.00 987 1 0.00
D 32,768 0.80 3,878 1.00 987 1 0.00
D 36,864 0.90 4,363 1.00 987 1 0.00
D 40,960 1.00 4,848 1.00 987 1 0.00
D 45,056 1.10 5,333 1.00 987 1 0.00
D 49,152 1.20 5,818 1.00 987 1 0.00
D 53,248 1.30 6,302 1.00 987 1 0.00
D 57,344 1.40 6,787 1.00 987 1 0.00
D 61,440 1.50 7,272 1.00 987 1 0.00
D 65,536 1.60 7,757 1.00 987 1 0.00
D 69,632 1.70 8,242 1.00 987 1 0.00
D 73,728 1.80 8,726 1.00 987 1 0.00
D 77,824 1.90 9,211 1.00 987 1 0.00
D 81,920 2.00 9,696 1.00 987 1 0.00
My shared pool size is 4gb and db_cache_size is 40Gb.
Please help me in configuring the correct size for this.
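The advisories above largely answer the question already: Est LC Time Saved Factr and Est Phys Read Factor stay at 1.00 at and beyond the current sizes, i.e. the advisor predicts no gain from growing either pool. The same data can be read live; a sketch of the equivalent queries:

```sql
-- Sketch: the live views behind the AWR advisory sections. Factors that
-- stay at ~1.00 above the current size mean the advisor predicts no
-- benefit from growing that pool.
SELECT shared_pool_size_for_estimate AS sp_mb,
       shared_pool_size_factor       AS size_factor,
       estd_lc_time_saved_factor
FROM   v$shared_pool_advice;

SELECT size_for_estimate AS cache_mb,
       size_factor,
       estd_physical_read_factor
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT' AND block_size = 8192;
```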
Thanks and Regards.
Hi,
Actually the batch load is taking too much time. Please find below the 1-hour AWR report.
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 6557 27-Nov-11 16:00:06 126 1.3
End Snap: 6558 27-Nov-11 17:00:17 130 1.6
Elapsed: 60.17 (mins)
DB Time: 34.00 (mins)
Report Summary
Cache Sizes
Begin End
Buffer Cache: 40,960M 40,960M Std Block Size: 8K
Shared Pool Size: 4,096M 4,096M Log Buffer: 25,908K
Load Profile
Per Second Per Transaction Per Exec Per Call
DB Time(s): 0.6 1.4 0.00 0.07
DB CPU(s): 0.5 1.2 0.00 0.06
Redo size: 281,296.9 698,483.4
Logical reads: 20,545.6 51,016.4
Block changes: 1,879.5 4,667.0
Physical reads: 123.7 307.2
Physical writes: 66.4 164.8
User calls: 8.2 20.4
Parses: 309.4 768.4
Hard parses: 8.5 21.2
W/A MB processed: 1.7 4.3
Logons: 0.7 1.6
Executes: 1,235.9 3,068.7
Rollbacks: 0.0 0.0
Transactions: 0.4
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.66 In-memory Sort %: 100.00
Library Hit %: 99.19 Soft Parse %: 97.25
Execute to Parse %: 74.96 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 92.41 % Non-Parse CPU: 98.65
Shared Pool Statistics
Begin End
Memory Usage %: 80.33 82.01
% SQL with executions>1: 90.90 86.48
% Memory for SQL w/exec>1: 90.10 86.89
Top 5 Timed Foreground Events
Event Waits Time(s) Avg wait (ms) % DB time Wait Class
DB CPU 1,789 87.72
db file sequential read 27,531 50 2 2.45 User I/O
db file scattered read 26,322 30 1 1.47 User I/O
row cache lock 1,798 20 11 0.96 Concurrency
OJVM: Generic 36 15 421 0.74 Other
Host CPU (CPUs: 24 Cores: 12 Sockets: )
Load Average Begin Load Average End %User %System %WIO %Idle
0.58 1.50 2.8 0.7 0.1 96.6
Instance CPU
%Total CPU %Busy CPU %DB time waiting for CPU (Resource Manager)
2.2 63.6 0.0
Memory Statistics
Begin End
Host Mem (MB): 131,072.0 131,072.0
SGA use (MB): 50,971.4 50,971.4
PGA use (MB): 545.5 1,066.3
% Host Mem used for SGA+PGA: 39.30 39.70
RAC Statistics
Begin End
Number of Instances: 2 2
Global Cache Load Profile
Per Second Per Transaction
Global Cache blocks received: 3.09 7.68
Global Cache blocks served: 1.86 4.62
GCS/GES messages received: 78.64 195.27
GCS/GES messages sent: 53.82 133.65
DBWR Fusion writes: 0.52 1.30
Estd Interconnect traffic (KB) 65.50
Global Cache Efficiency Percentages (Target local+remote 100%)
Buffer access - local cache %: 99.65
Buffer access - remote cache %: 0.02
Buffer access - disk %: 0.34
Global Cache and Enqueue Services - Workload Characteristics
Avg global enqueue get time (ms): 0.0
Avg global cache cr block receive time (ms): 1.7
Avg global cache current block receive time (ms): 1.0
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.0
Global cache log flushes for cr blocks served %: 1.4
Avg global cache cr block flush time (ms): 0.9
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.0
Global cache log flushes for current blocks served %: 0.1
Avg global cache current block flush time (ms): 0.0
Global Cache and Enqueue Services - Messaging Statistics
Avg message sent queue time (ms): 0.0
Avg message sent queue time on ksxp (ms): 0.4
Avg message received queue time (ms): 0.5
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.0
% of direct sent messages: 79.13
% of indirect sent messages: 17.10
% of flow controlled messages: 3.77
Cluster Interconnect
Begin End
Interface IP Address Pub Source IP Pub Src
en9 10.51.10.61 N Oracle Cluster Repository
Time Model Statistics
Statistic Name Time (s) % of DB Time
sql execute elapsed time 1,925.20 94.38
DB CPU 1,789.38 87.72
connection management call elapsed time 99.65 4.89
PL/SQL execution elapsed time 89.81 4.40
parse time elapsed 46.32 2.27
hard parse elapsed time 25.01 1.23
Java execution elapsed time 21.24 1.04
PL/SQL compilation elapsed time 11.92 0.58
failed parse elapsed time 9.37 0.46
hard parse (sharing criteria) elapsed time 8.71 0.43
sequence load elapsed time 0.06 0.00
repeated bind elapsed time 0.02 0.00
hard parse (bind mismatch) elapsed time 0.01 0.00
DB time 2,039.77
background elapsed time 122.00
background cpu time 113.42
Operating System Statistics
Statistic Value End Value
NUM_LCPUS 0
NUM_VCPUS 0
AVG_BUSY_TIME 12,339
AVG_IDLE_TIME 348,838
AVG_IOWAIT_TIME 221
AVG_SYS_TIME 2,274
AVG_USER_TIME 9,944
BUSY_TIME 299,090
IDLE_TIME 8,375,051
IOWAIT_TIME 6,820
SYS_TIME 57,512
USER_TIME 241,578
LOAD 1 2
OS_CPU_WAIT_TIME 312,200
PHYSICAL_MEMORY_BYTES 137,438,953,472
NUM_CPUS 24
NUM_CPU_CORES 12
GLOBAL_RECEIVE_SIZE_MAX 1,310,720
GLOBAL_SEND_SIZE_MAX 1,310,720
TCP_RECEIVE_SIZE_DEFAULT 16,384
TCP_RECEIVE_SIZE_MAX 9,223,372,036,854,775,807
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 9,223,372,036,854,775,807
TCP_SEND_SIZE_MIN 4,096
Operating System Statistics - Detail
Snap Time Load %busy %user %sys %idle %iowait
27-Nov 16:00:06 0.58
27-Nov 17:00:17 1.50 3.45 2.79 0.66 96.55 0.08
Foreground Wait Class
* s - second, ms - millisecond - 1000th of a second
* ordered by wait time desc, waits desc
* %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
* Captured Time accounts for 95.7% of Total DB time 2,039.77 (s)
* Total FG Wait Time: 163.14 (s) DB CPU time: 1,789.38 (s)
Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) %DB time
DB CPU 1,789 87.72
User I/O 61,229 0 92 1 4.49
Other 102,743 40 31 0 1.50
Concurrency 3,169 10 24 7 1.16
Cluster 58,920 0 11 0 0.52
System I/O 45,407 0 6 0 0.29
Configuration 107 7 1 5 0.03
Commit 383 0 0 1 0.01
Network 15,275 0 0 0 0.00
Application 52 8 0 0 0.00
Foreground Wait Events
* s - second, ms - millisecond - 1000th of a second
* Only events with Total Wait Time (s) >= .001 are shown
* ordered by wait time desc, waits desc (idle events last)
* %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn % DB time
db file sequential read 27,531 0 50 2 18.93 2.45
db file scattered read 26,322 0 30 1 18.10 1.47
row cache lock 1,798 0 20 11 1.24 0.96
OJVM: Generic 36 42 15 421 0.02 0.74
db file parallel read 394 0 7 19 0.27 0.36
control file sequential read 22,248 0 6 0 15.30 0.28
reliable message 4,439 0 4 1 3.05 0.18
gc current grant busy 7,597 0 3 0 5.22 0.16
PX Deq: Slave Session Stats 2,661 0 3 1 1.83 0.16
DFS lock handle 3,208 0 3 1 2.21 0.16
direct path write temp 4,842 0 3 1 3.33 0.15
library cache load lock 39 0 3 72 0.03 0.14
gc cr multi block request 37,008 0 3 0 25.45 0.14
IPC send completion sync 5,451 0 2 0 3.75 0.10
gc cr block 2-way 4,669 0 2 0 3.21 0.09
enq: PS - contention 3,183 33 1 0 2.19 0.06
gc cr grant 2-way 5,151 0 1 0 3.54 0.06
direct path read temp 1,722 0 1 1 1.18 0.05
gc current block 2-way 1,807 0 1 0 1.24 0.03
os thread startup 6 0 1 108 0.00 0.03
name-service call wait 12 0 1 47 0.01 0.03
PX Deq: Signal ACK RSG 2,046 50 0 0 1.41 0.02
log file switch completion 3 0 0 149 0.00 0.02
rdbms ipc reply 3,610 0 0 0 2.48 0.02
gc current grant 2-way 1,432 0 0 0 0.98 0.02
library cache pin 903 32 0 0 0.62 0.02
PX Deq: reap credit 35,815 100 0 0 24.63 0.01
log file sync 383 0 0 1 0.26 0.01
Disk file operations I/O 405 0 0 0 0.28 0.01
library cache lock 418 3 0 0 0.29 0.01
kfk: async disk IO 23,159 0 0 0 15.93 0.01
gc current block busy 4 0 0 35 0.00 0.01
gc current multi block request 1,206 0 0 0 0.83 0.01
ges message buffer allocation 38,526 0 0 0 26.50 0.00
enq: FB - contention 131 0 0 0 0.09 0.00
undo segment extension 8 100 0 6 0.01 0.00
CSS initialization 8 0 0 6 0.01 0.00
SQL*Net message to client 14,600 0 0 0 10.04 0.00
enq: HW - contention 96 0 0 0 0.07 0.00
CSS operation: action 8 0 0 4 0.01 0.00
gc cr block busy 33 0 0 1 0.02 0.00
latch free 30 0 0 1 0.02 0.00
enq: TM - contention 49 6 0 0 0.03 0.00
enq: JQ - contention 19 100 0 1 0.01 0.00
SQL*Net more data to client 666 0 0 0 0.46 0.00
asynch descriptor resize 3,179 100 0 0 2.19 0.00
latch: shared pool 3 0 0 3 0.00 0.00
CSS operation: query 24 0 0 0 0.02 0.00
PX Deq: Signal ACK EXT 72 0 0 0 0.05 0.00
KJC: Wait for msg sends to complete 269 0 0 0 0.19 0.00
latch: object queue header operation 4 0 0 1 0.00 0.00
gc cr block congested 5 0 0 0 0.00 0.00
utl_file I/O 11 0 0 0 0.01 0.00
enq: TO - contention 3 33 0 0 0.00 0.00
SQL*Net message from client 14,600 0 219,478 15033 10.04
jobq slave wait 7,726 100 3,856 499 5.31
PX Deq: Execution Msg 10,556 19 50 5 7.26
PX Deq: Execute Reply 2,946 31 27 9 2.03
PX Deq: Parse Reply 3,157 35 3 1 2.17
PX Deq: Join ACK 2,976 28 2 1 2.05
PX Deq Credit: send blkd 7 14 0 4 0.00
Background Wait Events
* ordered by wait time desc, waits desc (idle events last)
* Only events with Total Wait Time (s) >= .001 are shown
* %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn % bg time
os thread startup 140 0 13 90 0.10 10.35
db file parallel write 8,233 0 6 1 5.66 5.08
log file parallel write 3,906 0 6 1 2.69 4.62
log file sequential read 350 0 5 16 0.24 4.49
control file sequential read 13,737 0 5 0 9.45 3.72
DFS lock handle 2,990 27 2 1 2.06 1.43
db file sequential read 921 0 2 2 0.63 1.39
SQL*Net break/reset to client 18 0 1 81 0.01 1.19
control file parallel write 2,455 0 1 1 1.69 1.12
ges lms sync during dynamic remastering and reconfig 24 100 1 50 0.02 0.98
library cache load lock 35 0 1 24 0.02 0.68
ASM file metadata operation 3,483 0 1 0 2.40 0.65
enq: CO - master slave det 1,203 100 1 0 0.83 0.46
kjbdrmcvtq lmon drm quiesce: ping completion 9 0 1 62 0.01 0.46
enq: WF - contention 11 0 0 35 0.01 0.31
CGS wait for IPC msg 32,702 100 0 0 22.49 0.19
gc object scan 28,788 100 0 0 19.80 0.15
row cache lock 535 0 0 0 0.37 0.14
library cache pin 370 55 0 0 0.25 0.12
ksxr poll remote instances 19,119 100 0 0 13.15 0.11
name-service call wait 6 0 0 19 0.00 0.10
gc current block 2-way 304 0 0 0 0.21 0.09
gc cr block 2-way 267 0 0 0 0.18 0.08
gc cr grant 2-way 355 0 0 0 0.24 0.08
ges LMON to get to FTDONE 3 100 0 24 0.00 0.06
enq: CF - contention 145 76 0 0 0.10 0.05
PX Deq: reap credit 8,842 100 0 0 6.08 0.05
reliable message 126 0 0 0 0.09 0.05
db file scattered read 19 0 0 3 0.01 0.05
library cache lock 162 1 0 0 0.11 0.04
latch: shared pool 2 0 0 27 0.00 0.04
Disk file operations I/O 504 0 0 0 0.35 0.04
gc current grant busy 148 0 0 0 0.10 0.04
gcs log flush sync 84 0 0 1 0.06 0.04
ges message buffer allocation 24,934 0 0 0 17.15 0.02
enq: CR - block range reuse ckpt 83 0 0 0 0.06 0.02
latch free 22 0 0 1 0.02 0.02
CSS operation: action 13 0 0 2 0.01 0.02
CSS initialization 4 0 0 6 0.00 0.02
direct path read 1 0 0 21 0.00 0.02
rdbms ipc reply 153 0 0 0 0.11 0.01
db file parallel read 2 0 0 8 0.00 0.01
direct path write 5 0 0 3 0.00 0.01
gc current multi block request 49 0 0 0 0.03 0.01
gc current block busy 5 0 0 2 0.00 0.01
enq: PS - contention 24 50 0 0 0.02 0.01
gc cr multi block request 54 0 0 0 0.04 0.01
ges generic event 1 100 0 10 0.00 0.01
gc current grant 2-way 35 0 0 0 0.02 0.01
kfk: async disk IO 183 0 0 0 0.13 0.01
Log archive I/O 3 0 0 2 0.00 0.01
gc buffer busy acquire 2 0 0 3 0.00 0.00
LGWR wait for redo copy 123 0 0 0 0.08 0.00
IPC send completion sync 18 0 0 0 0.01 0.00
enq: TA - contention 11 0 0 0 0.01 0.00
read by other session 2 0 0 2 0.00 0.00
enq: TM - contention 9 89 0 0 0.01 0.00
latch: ges resource hash list 135 0 0 0 0.09 0.00
PX Deq: Slave Session Stats 12 0 0 0 0.01 0.00
KJC: Wait for msg sends to complete 89 0 0 0 0.06 0.00
enq: TD - KTF dump entries 8 0 0 0 0.01 0.00
enq: US - contention 7 0 0 0 0.00 0.00
CSS operation: query 12 0 0 0 0.01 0.00
enq: TK - Auto Task Serialization 6 100 0 0 0.00 0.00
PX Deq: Signal ACK RSG 24 50 0 0 0.02 0.00
log file single write 6 0 0 0 0.00 0.00
enq: WL - contention 2 100 0 1 0.00 0.00
ADR block file read 13 0 0 0 0.01 0.00
ADR block file write 5 0 0 0 0.00 0.00
latch: object queue header operation 1 0 0 1 0.00 0.00
gc cr block busy 1 0 0 1 0.00 0.00
rdbms ipc message 103,276 67 126,259 1223 71.03
PX Idle Wait 6,467 67 12,719 1967 4.45
wait for unread message on broadcast channel 7,240 100 7,221 997 4.98
gcs remote message 218,809 84 7,213 33 150.49
DIAG idle wait 203,228 95 7,185 35 139.77
shared server idle wait 121 100 3,630 30000 0.08
ASM background timer 3,343 0 3,611 1080 2.30
Space Manager: slave idle wait 723 100 3,610 4993 0.50
heartbeat monitor sleep 722 100 3,610 5000 0.50
ges remote message 73,089 52 3,609 49 50.27
dispatcher timer 66 88 3,608 54660 0.05
pmon timer 1,474 82 3,607 2447 1.01
PING 1,487 19 3,607 2426 1.02
Streams AQ: qmn slave idle wait 125 0 3,594 28754 0.09
Streams AQ: qmn coordinator idle wait 250 50 3,594 14377 0.17
smon timer 18 50 3,505 194740 0.01
JOX Jit Process Sleep 73 100 976 13370 0.05
class slave wait 56 0 605 10806 0.04
KSV master wait 2,215 98 1 0 1.52
SQL*Net message from client 109 0 0 2 0.07
PX Deq: Parse Reply 27 44 0 1 0.02
PX Deq: Join ACK 30 40 0 1 0.02
PX Deq: Execute Reply 20 30 0 0 0.01
Streams AQ: RAC qmn coordinator idle wait 259 100 0 0 0.18
Wait Event Histogram
* Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
* % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
* % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
* Ordered by Event (idle events last)
% of Waits
Event Total Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
ADR block file read 13 100.0
ADR block file write 5 100.0
ADR file lock 6 100.0
ARCH wait for archivelog lock 3 100.0
ASM file metadata operation 3483 99.6 .1 .1 .2
CGS wait for IPC msg 32.7K 100.0
CSS initialization 12 50.0 50.0
CSS operation: action 21 28.6 9.5 61.9
CSS operation: query 36 86.1 5.6 8.3
DFS lock handle 6198 98.6 1.2 .1 .1
Disk file operations I/O 909 95.7 3.6 .7
IPC send completion sync 5469 99.9 .1 .0 .0
KJC: Wait for msg sends to complete 313 100.0
LGWR wait for redo copy 122 100.0
Log archive I/O 3 66.7 33.3
OJVM: Generic 36 55.6 44.4
PX Deq: Signal ACK EXT 72 98.6 1.4
PX Deq: Signal ACK RSG 2070 99.7 .0 .1 .0 .1
PX Deq: Slave Session Stats 2673 99.7 .2 .1 .0
PX Deq: reap credit 44.7K 100.0
SQL*Net break/reset to client 20 95.0 5.0
SQL*Net message to client 14.7K 100.0
SQL*Net more data from client 32 100.0
SQL*Net more data to client 689 100.0
asynch descriptor resize 3387 100.0
buffer busy waits 2 100.0
control file parallel write 2455 96.6 2.2 .6 .6 .1
control file sequential read 36K 99.4 .3 .1 .1 .1 .1 .0
db file parallel read 397 8.8 .8 5.5 12.6 17.4 46.3 8.6
db file parallel write 8233 85.4 10.3 2.3 1.4 .4 .1
db file scattered read 26.3K 79.2 1.5 8.2 10.5 .6 .1 .0
db file sequential read 28.4K 60.2 3.3 18.0 18.1 .3 .1 .0
db file single write 2 100.0
direct path read 2 50.0 50.0
direct path read temp 1722 95.8 2.8 .1 .5 .8 .1
direct path write 6 83.3 16.7
direct path write temp 4842 96.3 2.7 .5 .2 .0 .0 .2
enq: AF - task serialization 1 100.0
enq: CF - contention 145 99.3 .7
enq: CO - master slave det 1203 98.9 .8 .2
enq: CR - block range reuse ckpt 83 100.0
enq: DR - contention 2 100.0
enq: FB - contention 131 100.0
enq: HW - contention 97 100.0
enq: JQ - contention 19 89.5 10.5
enq: JS - job run lock - synchronize 3 100.0
enq: MD - contention 1 100.0
enq: MW - contention 2 100.0
enq: PS - contention 3207 99.5 .4 .1
enq: TA - contention 11 100.0
enq: TD - KTF dump entries 8 100.0
enq: TK - Auto Task Serialization 6 100.0
enq: TM - contention 58 100.0
enq: TO - contention 3 100.0
enq: TQ - DDL contention 1 100.0
enq: TS - contention 1 100.0
enq: UL - contention 1 100.0
enq: US - contention 7 100.0
enq: WF - contention 11 81.8 18.2
enq: WL - contention 2 50.0 50.0
gc buffer busy acquire 2 50.0 50.0
gc cr block 2-way 4934 99.9 .1 .0 .0
gc cr block busy 35 68.6 31.4
gc cr block congested 6 100.0
gc cr disk read 2 100.0
gc cr grant 2-way 4824 100.0 .0
gc cr grant congested 2 100.0
gc cr multi block request 37.1K 99.8 .2 .0 .0 .0 .0 .0
gc current block 2-way 2134 99.9 .0 .0
gc current block busy 7 14.3 14.3 14.3 28.6 28.6
gc current block congested 2 100.0
gc current grant 2-way 1337 99.9 .1
gc current grant busy 7123 99.2 .2 .2 .0 .0 .3 .1
gc current grant congested 2 100.0
gc current multi block request 1260 99.8 .2
gc object scan 28.8K 100.0
gcs log flush sync 65 95.4 3.1 1.5
ges LMON to get to FTDONE 3 100.0
ges generic event 1 100.0
ges inquiry response 2 100.0
ges lms sync during dynamic remastering and reconfig 24 16.7 29.2 54.2
ges message buffer allocation 63.1K 100.0
kfk: async disk IO 23.3K 100.0 .0 .0
kjbdrmcvtq lmon drm quiesce: ping completion 9 11.1 88.9
ksxr poll remote instances 19.1K 100.0
latch free 52 59.6 40.4
latch: call allocation 2 100.0
latch: gc element 1 100.0
latch: gcs resource hash 1 100.0
latch: ges resource hash list 135 100.0
latch: object queue header operation 5 40.0 40.0 20.0
latch: shared pool 5 40.0 20.0 20.0 20.0
library cache load lock 74 9.5 5.4 8.1 17.6 10.8 13.5 35.1
library cache lock 493 99.2 .4 .4
library cache pin 1186 98.4 .3 1.2 .1
library cache: mutex X 6 100.0
log file parallel write 3897 72.9 1.5 17.1 7.5 .6 .3 .1
log file sequential read 350 4.6 3.1 59.4 30.0 2.9
log file single write 6 100.0
log file switch completion 3 33.3 66.7
log file sync 385 90.4 3.6 4.7 .8 .5
name-service call wait 18 5.6 5.6 5.6 16.7 44.4 22.2
os thread startup 146 100.0
rdbms ipc reply 3763 99.7 .3
read by other session 2 50.0 50.0
reliable message 4565 99.7 .2 .0 .0 .1
row cache lock 2334 99.3 .2 .1 .1 .3
undo segment extension 8 50.0 37.5 12.5
utl_file I/O 11 100.0
ASM background timer 3343 57.0 .3 .1 .1 .1 21.1 21.4
DIAG idle wait 203.2K 3.4 .2 .4 18.0 41.4 14.8 21.8
JOX Jit Process Sleep 73 2.7 97.3
KSV master wait 2213 99.4 .1 .2 .3
PING 1487 81.0 19.0
PX Deq Credit: send blkd 7 57.1 14.3 14.3 14.3
PX Deq: Execute Reply 2966 59.8 .8 9.5 5.6 10.2 2.6 11.4
PX Deq: Execution Msg 10.6K 72.4 12.1 2.6 2.5 .1 5.6 4.6 .0
PX Deq: Join ACK 3006 77.9 22.1 .1
PX Deq: Parse Reply 3184 67.1 31.1 1.6 .2
PX Idle Wait 6466 .2 8.7 4.3 4.8 .3 .1 5.0 76.6
SQL*Net message from client 14.7K 72.4 2.8 .8 .5 .9 .4 2.8 19.3
Space Manager: slave idle wait 722 100.0
Streams AQ: RAC qmn coordinator idle wait 259 100.0
Streams AQ: qmn coordinator idle wait 250 50.0 50.0
Streams AQ: qmn slave idle wait 125 100.0
class slave wait 55 67.3 7.3 1.8 5.5 1.8 7.3 9.1
dispatcher timer 66 6.1 93.9
gcs remote message 218.6K 7.7 1.8 1.2 1.6 1.7 15.7 70.3
ges remote message 72.9K 29.7 5.1 2.7 2.2 1.5 4.0 54.7
heartbeat monitor sleep 722 100.0
jobq slave wait 7725 .1 .0 99.9
pmon timer 1474 18.4 81.6
rdbms ipc message 103.3K 20.7 2.7 1.5 1.3 .9 .7 40.7 31.6
shared server idle wait 121 100.0
smon timer 18 100.0
wait for unread message on broadcast channel 7238 .3 99.7
Wait Event Histogram Detail (64 msec to 2 sec)
* Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
* Units for % of Total Waits: ms is milliseconds s is 1024 milliseconds (approximately 1 second)
* % of Total Waits: total waits for all wait classes, including Idle
* % of Total Waits: value of .0 indicates value was <.05%; value of null is truly 0
* Ordered by Event (only non-idle events are displayed)
% of Total Waits
Event Waits 64ms to 2s <32ms <64ms <1/8s <1/4s <1/2s <1s <2s >=2s
ASM file metadata operation 6 99.8 .1 .1
DFS lock handle 6 99.9 .1 .0
OJVM: Generic 16 55.6 2.8 41.7
PX Deq: Signal ACK RSG 3 99.9 .0 .1
PX Deq: Slave Session Stats 3 99.9 .0 .0 .0
SQL*Net break/reset to client 1 95.0 5.0
control file sequential read 1 100.0 .0
db file parallel read 34 91.4 8.6
db file scattered read 4 100.0 .0 .0
db file sequential read 6 100.0 .0 .0 .0
direct path write temp 11 99.8 .1 .1 .0
enq: WF - contention 2 81.8 18.2
gc cr block 2-way 1 100.0 .0
gc cr multi block request 1 100.0 .0
gc current block 2-way 1 100.0 .0
gc current block busy 2 71.4 28.6
gc current grant busy 8 99.9 .0 .1
ges lms sync during dynamic remastering and reconfig 13 45.8 20.8 33.3
kjbdrmcvtq lmon drm quiesce: ping completion 8 11.1 11.1 77.8
latch: shared pool 1 80.0 20.0
library cache load lock 26 64.9 14.9 12.2 4.1 4.1
log file parallel write 2 99.9 .0 .0
log file sequential read 10 97.1 2.0 .6 .3
log file switch completion 2 33.3 66.7
name-service call wait 4 77.8 22.2
os thread startup 146 100.0
reliable message 4 99.9 .0 .1
row cache lock 2 99.7 .0 .0 .3
Wait Event Histogram Detail (4 sec to 2 min)
* Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
* Units for % of Total Waits: s is 1024 milliseconds (approximately 1 second); m is 64*1024 milliseconds (approximately 67 seconds or 1.1 minutes)
* % of Total Waits: total waits for all wait classes, including Idle
* % of Total Waits: value of .0 indicates value was <.05%; value of null is truly 0
* Ordered by Event (only non-idle events are displayed)
% of Total Waits
Event Waits 4s to 2m <2s <4s <8s <16s <32s < 1m < 2m >=2m
row cache lock 6 99.7 .3
Wait Event Histogram Detail (4 min to 1 hr)
No data exists for this section of the report.
Service Statistics
* ordered by DB Time
Service Name DB Time (s) DB CPU (s) Physical Reads (K) Logical Reads (K)
ubshost 1,934 1,744 445 73,633
SYS$USERS 105 45 1 404
SYS$BACKGROUND 0 0 1 128
ubshostXDB 0 0 0 0
Service Wait Class Stats
* Wait Class info for services in the Service Statistics section.
* Total Waits and Time Waited displayed for the following wait classes: User I/O, Concurrency, Administrative, Network
* Time Waited (Wt Time) in seconds
Service Name User I/O Total Wts User I/O Wt Time Concurcy Total Wts Concurcy Wt Time Admin Total Wts Admin Wt Time Network Total Wts Network Wt Time
ubshost 60232 90 2644 4 0 0 13302 0
SYS$USERS 997 2 525 19 0 0 1973 0
SYS$BACKGROUND 1456 2 1258 14 0 0 0 0
I am not able to paste the whole AWR report, so I have pasted some of its sections.
Please help.
Thanks and regards, -
'%Total Call Time ' in AWR report
Hi.
I have a quick question here,
For the AWR report's 'Top 5 Timed Events' section, does anybody know how the '%Total Call Time' is calculated for each event listed there?
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
PX Deq Credit: send blkd 5,682,600 3,816 1 39.8 Other
db file scattered read 91,236 1,681 18 17.5 User I/O
CPU time 1,347 14.0
log file sync 99,426 752 8 7.8 Commit
log file parallel write 97,921 523 5 5.4 System I/O
Thanks,
Lei922884 wrote:
Hi,
Db version is 10.2.0.4.0
What is the meaning of Waits and % Total Call Time in the Top 5 Timed Events of an AWR report? Waits is the number of times a session waited on a particular call.
% Total Call Time is the total time spent in this event divided by the DB time, converted to a percentage.
It gives you some idea of how significant this event was in the total time spent waiting by the user. Unfortunately the SQL*Net times are excluded from the calculation, so there is a component of time that (from the end user's perspective) is lost.
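Since 10g exposes the AWR base tables directly, the percentage can be recomputed by hand as a cross-check. This is only a sketch: `&begin_snap`/`&end_snap` are placeholder snapshot ids, and it assumes the standard 10g DBA_HIST views.

```sql
-- %Total Call Time = event wait time / DB time * 100, taken as the delta
-- between two AWR snapshots (both figures are stored in microseconds).
SELECT e.event_name,
       ROUND((MAX(e.time_waited_micro) - MIN(e.time_waited_micro)) / 1e6, 1) AS wait_s,
       ROUND(100 * (MAX(e.time_waited_micro) - MIN(e.time_waited_micro))
                 / (MAX(t.value) - MIN(t.value)), 2)                         AS pct_total_call_time
FROM   dba_hist_system_event e
JOIN   dba_hist_sys_time_model t
       ON  t.snap_id         = e.snap_id
       AND t.dbid            = e.dbid
       AND t.instance_number = e.instance_number
WHERE  t.stat_name = 'DB time'
AND    e.snap_id IN (&begin_snap, &end_snap)
GROUP  BY e.event_name
ORDER  BY wait_s DESC;
```

As a sanity check against the report above: 3,816 s of "PX Deq Credit: send blkd" reported as 39.8% implies a DB time of roughly 3816 / 0.398, i.e. about 9,600 seconds for the interval.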
How to read AWR report?Where i have to start from it?
The best place to start is probably still the white paper about statspack produced by Oracle 11 years ago: http://www.oracle.com/technetwork/database/focus-areas/performance/statspack-opm4-134117.pdf
How are the values calculated in AWR? For example, DB time is 556.15 and elapsed time is 1,439.73.
Elapsed time is the clock time between the start and end of the snapshots, reported in minutes - in your case your report covers 24 hours, which is generally far too long to be useful.
DB time is the time your sessions were active "inside" the database - again in minutes - and it's a measure of how much time you spent working. It is the sum of wait time and CPU time.
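Dividing the two also gives the average active sessions (AAS) for the interval; a quick worked example with the numbers quoted above (both in minutes):

```sql
-- Average active sessions = DB time / elapsed time (same units)
SELECT ROUND(556.15 / 1439.73, 2) AS avg_active_sessions FROM dual;
-- about 0.39: on average, well under one session was active in the database
```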
Regards
Jonathan Lewis -
Request for interpretation of AWR sections
Hi there
I am new to the world of AWRs and performance tuning.
Could someone please let me know how to interpret the following information in an AWR report:
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00
Redo NoWait %: 100.00
Buffer Hit %: 99.98
In-memory Sort %: 100.00
Library Hit %: 99.07
Soft Parse %: 98.73
Execute to Parse %: 72.75
Latch Hit %: 96.90
Parse CPU to Parse Elapsd %: 76.90
% Non-Parse CPU: 99.97
Time Model Statistics
* Total time in database user-calls (DB Time): 134,448s
* Statistics including the word "background" measure background process time, and so do not contribute to the DB time statistic
* Ordered by % of DB time desc, Statistic name
Statistic Name                                 Time (s)     % of DB Time
sql execute elapsed time                       130,172.50   96.82
DB CPU                                         71,318.25    53.05
PL/SQL execution elapsed time                  2,710.48     2.02
parse time elapsed                             69.60        0.05
hard parse elapsed time                        53.39        0.04
repeated bind elapsed time                     25.46        0.02
PL/SQL compilation elapsed time                20.26        0.02
hard parse (sharing criteria) elapsed time     10.04        0.01
sequence load elapsed time                     4.17         0.00
hard parse (bind mismatch) elapsed time        1.99         0.00
Wait Class
* s - second; cs - centisecond (100th of a second); ms - millisecond (1000th of a second); us - microsecond (1000000th of a second)
* ordered by wait time desc, waits desc
* %Timeouts: value of 0 indicates value was < .5%; value of null is truly 0
Wait Class   Waits       %Time-outs   Total Wait Time (s)   Avg wait (ms)   %Total Call Time
CPU time                              71,318                                53.05
Other        1,671,221   2            32,638                20              24.28
User I/O     7,330,309   0            28,679                4               21.33
System I/O   771,496                  4,678                 6               3.48
Wait Events
* s - second; cs - centisecond (100th of a second); ms - millisecond (1000th of a second); us - microsecond (1000000th of a second)
* ordered by wait time desc, waits desc (idle events last)
* %Timeouts: value of 0 indicates value was < .5%; value of null is truly 0
Event                      Waits       %Time-outs   Total Wait Time (s)   Avg wait (ms)   Waits/txn
PX Deq Credit: send blkd   1,598,158   0            32,525                20              11.00
direct path read           199,076                  9,380                 47              1.37
db file sequential read    3,632,410                8,563                 2               25.01
direct path read temp      714,772                  5,375                 8               4.92
db file scattered read     286,910                  4,496                 16              1.98
Based on these sections, what can be said about the database performance?
My manager has asked me to provide an analysis, so I think just telling him that "all is well" may not be enough :-)
Best regards,
Sorry, I know this AWR was for a 24-hour period and as such is not very useful. The DB is not OLTP but used for batch processing.
I spotted one query that has " SELECT /*+ parallel(ABC 4) */ ... ". Would removing the "parallel" hint possibly improve performance?
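One low-risk way to test that before editing the application SQL (a sketch, assuming the statement can be run interactively): disable parallel query at session level, which makes subsequent statements in that session run serially regardless of hints, and compare timings.

```sql
-- A/B test without touching the application code:
ALTER SESSION DISABLE PARALLEL QUERY;
-- run the suspect statement here and note elapsed time and waits
ALTER SESSION ENABLE PARALLEL QUERY;
```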
The new server has 16 CPUs versus 8 on the old one, and 128 GB of RAM versus 32 GB.
Sorry, but what is the "top n cpu" section you have referred to above?
I just ran two reports for 30-minute periods, and here is what I see for both:
3:30pm to 4:00pm (when I see high CPU usage):
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00
Redo NoWait %: 100.00
Buffer Hit %: 100.00
In-memory Sort %: 100.00
Library Hit %: 98.44
Soft Parse %: 98.69
Execute to Parse %: 59.63
Latch Hit %: 93.83
Parse CPU to Parse Elapsd %: 189.47
% Non-Parse CPU: 100.00
Time Model Statistics
Statistic Name                                 Time (s)   % of DB Time
DB CPU                                         7,209.95   99.87
sql execute elapsed time                       7,052.04   97.69
parse time elapsed                             0.36       0.00
PL/SQL execution elapsed time                  0.34       0.00
hard parse elapsed time                        0.12       0.00
PL/SQL compilation elapsed time                0.04       0.00
hard parse (sharing criteria) elapsed time     0.04       0.00
hard parse (bind mismatch) elapsed time        0.02       0.00
connection management call elapsed time        0.02       0.00
repeated bind elapsed time                     0.00       0.00
DB time                                        7,219.01
Wait Class
Wait Class   Waits   %Time-outs   Total Wait Time (s)   Avg wait (ms)   %Total Call Time
CPU time                          7,210                                 99.87
System I/O   3,388                4                     1               0.06
9:00am - 9:30am (High Wait time):
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00
Redo NoWait %: 100.00
Buffer Hit %: 100.00
In-memory Sort %: 100.00
Library Hit %: 98.91
Soft Parse %: 98.91
Execute to Parse %: 63.54
Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 74.07
% Non-Parse CPU: 99.98
Time Model Statistics
Statistic Name                                 Time (s)    % of DB Time
sql execute elapsed time                       10,560.70   99.99
DB CPU                                         964.60      9.13
parse time elapsed                             0.51        0.00
hard parse elapsed time                        0.41        0.00
PL/SQL execution elapsed time                  0.37        0.00
connection management call elapsed time        0.17        0.00
PL/SQL compilation elapsed time                0.07        0.00
sequence load elapsed time                     0.01        0.00
hard parse (bind mismatch) elapsed time        0.00        0.00
hard parse (sharing criteria) elapsed time     0.00        0.00
repeated bind elapsed time                     0.00        0.00
DB time                                        10,562.04
Wait Class
Wait Class   Waits        %Time-outs   Total Wait Time (s)   Avg wait (ms)   %Total Call Time
Other        30,720       0            8,678                 282             82.16
Network      36,427,917                1,292                 0               12.23
CPU time                               965                                   9.13
Does this info give a better picture? I know this is only a very small portion of the report.
Best regards -
Hi,
Some processes are very slow. When I queried V$SESSION_WAIT I saw many "PX Deq: Execution Msg" and "PX Deq Credit: send blkd" events - 40 in total - and 137 sessions on "SQL*Net message from client".
Thanks
Look in v$session at the STATUS column for the sessions in question. If the status is 'INACTIVE' and the value of the column LAST_CALL_ET is large (it's in seconds), then these could be dead/runaway sessions.
That is, the front-end process may no longer exist, in which case you would want to kill the sessions. You can also see sessions like this for connection-pooled applications, where the application opens a lot of connections by default but does not have enough load to use them; those sessions will be idle.
Inactive sessions normally do not have much, if any, performance impact on Oracle, except that an inactive session can be holding resources, especially locks for uncommitted table changes. Killing the session frees the locks and resources (PGA memory, for one) held by it.
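A minimal sketch of that check (the one-hour threshold is an arbitrary example, not a recommendation):

```sql
-- Candidate dead/runaway sessions: user sessions idle for over an hour
SELECT sid, serial#, username, status, last_call_et
FROM   v$session
WHERE  type = 'USER'
AND    status = 'INACTIVE'
AND    last_call_et > 3600      -- LAST_CALL_ET is in seconds
ORDER  BY last_call_et DESC;

-- a session confirmed dead can then be removed with, e.g.:
-- ALTER SYSTEM KILL SESSION '<sid>,<serial#>';
```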
The statspack/AWR information is more difficult. You have to review it and get a general feel for the reports, then compare good-period versus bad-period reports to see where you can find what appear to be significant differences. The reports do have a heavy-hitter SQL section; the source of the problem is often located there.
If the customer has identified a specific process as having an issue, then tracing that process and running tkprof on the trace may yield faster results, as it is likely an application tuning issue and you would be working from a known issue rather than looking database-wide and trying to locate the problem areas.
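For the tracing route, a sketch using DBMS_MONITOR (the sid/serial# values below are placeholders for the session you identified):

```sql
-- Enable extended SQL trace, including wait events, for one session
EXEC dbms_monitor.session_trace_enable(session_id => 123, serial_num => 45, waits => TRUE, binds => FALSE);
-- ... let the slow process run, then switch tracing off:
EXEC dbms_monitor.session_trace_disable(session_id => 123, serial_num => 45);
-- and format the resulting trace file on the server, e.g.:
--   tkprof <instance>_ora_<spid>.trc report.txt sort=exeela,fchela
```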
HTH -- Mark D Powell -- -
Hi,
Statspack report me that on our database, there is wait event about this PX Deq :
PX Deq: Table Q Normal
PX Deq: Execute Reply
PX Deq Credit: send blkd
What is this exactly ?
Nicolas.
Here is some additional info from Metalink.
PX Deq: Table Q Normal
Indicates that a slave is waiting for data to arrive on its input table queue.
In a parallel execution environment we have a producer/consumer model. One slave set works on the data (e.g. reads data from disk, does a join) - this is called the producer slave set - and the other slave set waits to get that data so it can start its work; the slaves in this set are called consumers. The wait event "PX Deq: Table Q Normal" means that the slaves in the consumer set have to wait for rows (data) from the other slave set before they can start their work.
PX Deq: Execute Reply
The QC (query coordinator) is expecting a response (acknowledgement) to a control message from the slaves, or is expecting to dequeue data from the producer slave set. In other words, it is waiting for the slaves to finish executing the SQL statement and to send the results of the query back to the QC.
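To watch this producer/consumer layout live while a parallel query runs, the PX views can be joined to the wait interface (a sketch; both views exist in 10g):

```sql
-- Map each PX slave to its query coordinator and show what it is waiting on
SELECT px.qcsid, px.sid, px.server_group, px.server_set,
       w.event, w.seconds_in_wait
FROM   v$px_session px
JOIN   v$session_wait w ON w.sid = px.sid
ORDER  BY px.qcsid, px.server_group, px.server_set, px.sid;
```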