Buffer (sort) operator
Hi,
I'm trying to understand what the "buffer sort" operation is in the following explain plan:
0 SELECT STATEMENT
-1 MERGE JOIN CARTESIAN
--2 TABLE ACCESS FULL PLAYS
--3 BUFFER SORT
---4 TABLE ACCESS FULL MOVIE
In the Oracle 9i Database Performance Guide and Reference, "buffer sort" is not mentioned, although all the other explain plan operations are.
What does it mean? Does it take place in main memory or is it an external sort?
Thank you.
A BUFFER SORT typically means that Oracle reads data blocks into private memory, because the blocks will be accessed multiple times during execution of the SQL statement. In other words, Oracle sacrifices some extra memory to reduce the overhead of accessing the blocks multiple times in shared memory.
Hope this will clear your doubts.
Thanks.
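As a minimal sketch you can try yourself (assuming the standard SCOTT demo tables EMP and DEPT; the exact plan varies by version and statistics), a join with no join condition typically reproduces a MERGE JOIN CARTESIAN with a BUFFER SORT on its inner input:

```sql
-- With no join predicate the optimizer produces MERGE JOIN CARTESIAN;
-- it buffers the inner row source with BUFFER SORT so the rows can be
-- re-read from private memory for every outer row.
EXPLAIN PLAN FOR
SELECT e.ename, d.dname
FROM   emp e, dept d;          -- no WHERE clause: Cartesian product

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Despite the name, no ordering of rows is implied; BUFFER SORT here only buffers.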
Similar Messages
-
Long time on buffer sort with a insert and select through a dblink
I am doing a fairly simple "insert into select from" statement through a dblink, but something is going very wrong on the other side of the link. I am getting a huge buffer sort time in the explain plan (line 9), and I'm not sure why. When I try to run SQL tuning on it from the other side of the dblink, I get an ORA-600 error, "ORA-24327: need explicit attach before authenticating a user".
Here is the original sql:
INSERT INTO PACE_IR_MOISTURE@PRODDMT00 (SCHEDULE_SEQ, LAB_SAMPLE_ID, HSN, SAMPLE_TYPE, MATRIX, SYSTEM_ID)
SELECT DISTINCT S.SCHEDULE_SEQ, PI.LAB_SAMPLE_ID, PI.HSN, SAM.SAMPLE_TYPE, SAM.MATRIX, :B1 FROM SCHEDULES S
JOIN PERMANENT_IDS PI ON PI.HSN = S.SCHEDULE_ID
JOIN SAMPLES SAM ON PI.HSN = SAM.HSN
JOIN PROJECT_SAMPLES PS ON PS.HSN = SAM.HSN
JOIN PROJECTS P ON PS.PROJECT_SEQ = PS.PROJECT_SEQ
WHERE S.PROC_CODE = 'DRY WEIGHT' AND S.ACTIVE_FLAG = 'C' AND S.COND_CODE = 'CH' AND P.WIP_STATUS IN ('WP','HO')
AND SAM.WIP_STATUS = 'WP';
Here is the sql as it appears on proddmt00:
INSERT INTO "PACE_IR_MOISTURE" ("SCHEDULE_SEQ","LAB_SAMPLE_ID","HSN","SAMPLE_TYPE","MATRIX","SYSTEM_ID")
SELECT DISTINCT "A6"."SCHEDULE_SEQ","A5"."LAB_SAMPLE_ID","A5"."HSN","A4"."SAMPLE_TYPE","A4"."MATRIX",:B1
FROM "SCHEDULES"@! "A6","PERMANENT_IDS"@! "A5","SAMPLES"@! "A4","PROJECT_SAMPLES"@! "A3","PROJECTS"@! "A2"
WHERE "A6"."PROC_CODE"='DRY WEIGHT' AND "A6"."ACTIVE_FLAG"='C' AND "A6"."COND_CODE"='CH' AND ("A2"."WIP_STATUS"='WP' OR "A2"."WIP_STATUS"='HO') AND "A4"."WIP_STATUS"='WP' AND "A3"."PROJECT_SEQ"="A3"."PROJECT_SEQ" AND "A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A4"."HSN" AND "A5"."HSN"="A6"."SCHEDULE_ID";
Here is the explain plan on proddmt00:
PLAN_TABLE_OUTPUT
SQL_ID cvgpfkhdhn835, child number 0
INSERT INTO "PACE_IR_MOISTURE" ("SCHEDULE_SEQ","LAB_SAMPLE_ID","HSN","SAMPLE_TYPE","MATRIX","SYSTEM_ID")
SELECT DISTINCT "A6"."SCHEDULE_SEQ","A5"."LAB_SAMPLE_ID","A5"."HSN","A4"."SAMPLE_TYPE","A4"."MATRIX",:B1
FROM "SCHEDULES"@! "A6","PERMANENT_IDS"@! "A5","SAMPLES"@! "A4","PROJECT_SAMPLES"@! "A3","PROJECTS"@! "A2"
WHERE "A6"."PROC_CODE"='DRY WEIGHT' AND "A6"."ACTIVE_FLAG"='C' AND "A6"."COND_CODE"='CH' AND
("A2"."WIP_STATUS"='WP' OR "A2"."WIP_STATUS"='HO') AND "A4"."WIP_STATUS"='WP' AND
"A3"."PROJECT_SEQ"="A3"."PROJECT_SEQ" AND "A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A4"."HSN" AND
"A5"."HSN"="A6"."SCHEDULE_ID"
Plan hash value: 3310593411
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | INSERT STATEMENT | | | | | 5426M(100)| | | |
| 1 | HASH UNIQUE | | 1210K| 118M| 262M| 5426M (3)|999:59:59 | | |
|* 2 | HASH JOIN | | 763G| 54T| 8152K| 4300M (1)|999:59:59 | | |
| 3 | REMOTE | | 231K| 5429K| | 3389 (2)| 00:00:41 | ! | R->S |
| 4 | MERGE JOIN CARTESIAN | | 1254G| 61T| | 1361M (74)|999:59:59 | | |
| 5 | MERGE JOIN CARTESIAN| | 3297K| 128M| | 22869 (5)| 00:04:35 | | |
| 6 | REMOTE | SCHEDULES | 79 | 3002 | | 75 (0)| 00:00:01 | ! | R->S |
| 7 | BUFFER SORT | | 41830 | 122K| | 22794 (5)| 00:04:34 | | |
| 8 | REMOTE | PROJECTS | 41830 | 122K| | 281 (2)| 00:00:04 | ! | R->S |
| 9 | BUFFER SORT | | 380K| 4828K| | 1361M (74)|999:59:59 | | |
| 10 | REMOTE | PROJECT_SAMPLES | 380K| 4828K| | 111 (0)| 00:00:02 | ! | R->S |
Predicate Information (identified by operation id):
2 - access("A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A6"."SCHEDULE_ID")
Please use code tags when posting execution plans; it keeps the formatting readable.
From the looks of your explain plan, these entries:
| 4 | MERGE JOIN CARTESIAN | | 1254G| 61T| | 1361M (74)|999:59:59 |
| 5 | MERGE JOIN CARTESIAN| | 3297K| 128M| | 22869 (5)| 00:04:35 |
are causing extensive CPU processing, probably due to the Cartesian joins (which include sorting)... does "61T" mean 61 terabytes? Holy hell.
From the looks of the explain plan these tables don't appear to be partitioned... can you confirm?
Why are you selecting DISTINCT? If this is for an ETL or data-warehouse procedure, DISTINCT is almost never a good idea... it's horrible for performance.
Edited by: TheDudeNJ on Oct 13, 2009 1:11 PM
-
Confusion in FILTER and SORT operations in the execution plan
Hi
I have been working on tuning of a sql query:
SELECT SUM(DECODE(CR_FLG, 'C', NVL(TOT_AMT, 0), 0)),
SUM(DECODE(CR_FLG, 'C', 1, 0)),
SUM(DECODE(CR_FLG, 'R', NVL(TOT_AMT, 0), 0)),
SUM(DECODE(CR_FLG, 'R', 1, 0)),
SUM(DECODE(CR_FLG, 'C', NVL(TOT_AMT, 0), -1 * NVL(TOT_AMT, 0))),
SUM(1)
FROM TS_TEST
WHERE SMY_DT BETWEEN TO_DATE(:1, 'DD-MM-YYYY') AND
TO_DATE(:1, 'DD-MM-YYYY');
Table TS_TEST is range partitioned on smy_dt and there is an index on the smy_dt column. The explain plan of the query is:
SQL> explain plan for SELECT SUM(DECODE(CR_FLG, 'C', NVL(TOT_AMT, 0), 0)),
2 SUM(DECODE(CR_FLG, 'C', 1, 0)),
3 SUM(DECODE(CR_FLG, 'R', NVL(TOT_AMT, 0), 0)),
4 SUM(DECODE(CR_FLG, 'R', 1, 0)),
5 SUM(DECODE(CR_FLG, 'C', NVL(TOT_AMT, 0), -1 * NVL(TOT_AMT, 0))),
6 SUM(1)
7 FROM TS_TEST
8 WHERE SMY_DT BETWEEN TO_DATE(:1, 'DD-MM-YYYY') AND
9 TO_DATE(:1, 'DD-MM-YYYY');
Explained.
SQL> @E
PLAN_TABLE_OUTPUT
Plan hash value: 766961720
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 14 | 15614 (1)| 00:03:08 | | |
| 1 | SORT AGGREGATE | | 1 | 14 | | | | |
|* 2 | FILTER | | | | | | | |
| 3 | TABLE ACCESS BY GLOBAL INDEX ROWID| T_TEST | 79772 | 1090K| 15614 (1)| 00:03:08 | ROWID | ROWID |
|* 4 | INDEX RANGE SCAN | I_SMY_DT | 143K| | 442 (1)| 00:00:06 | | |
Predicate Information (identified by operation id):
2 - filter(TO_DATE(:1,'DD-MM-YYYY')<=TO_DATE(:1,'DD-MM-YYYY'))
4 - access("SMY_DT">=TO_DATE(:1,'DD-MM-YYYY') AND "SMY_DT"<=TO_DATE(:1,'DD-MM-YYYY'))
17 rows selected.
SQL>
I am not able to understand the FILTER and SORT operations. As there is an index on the SMY_DT column, the index range scan is fine. But why a FILTER (step 2) and a SORT (step 1) operation after that?
Oracle version is 10.2.0.3 on AIX 5.3 64 bit.
Any other information required please let me know.
Regards,
Amardeep Sidhu
SORT AGGREGATE tells you that an aggregate operation was performed which returns one row, as opposed to SORT ORDER BY or HASH GROUP BY, which indicate grouping, where more than one row can be returned.
SQL> SELECT SUM(comm) FROM emp;
SUM(COMM)
2200
Execution plan
Plan hash value: 2083865914
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 2 | 3 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 2 | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 28 | 3 (0)| 00:00:01 |
SQL> SELECT AVG(comm) FROM emp;
AVG(COMM)
550
Execution plan
Plan hash value: 2083865914
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 2 | 3 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 2 | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 28 | 3 (0)| 00:00:01 |
SQL> SELECT MIN(comm) FROM emp;
MIN(COMM)
0
Execution plan
Plan hash value: 2083865914
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 2 | 3 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 2 | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 28 | 3 (0)| 00:00:01 |
SQL> SELECT deptno, SUM(comm) FROM emp GROUP BY deptno;
DEPTNO SUM(COMM)
30 2200
20
10
Execution plan
Plan hash value: 4067220884
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3 | 15 | 4 (25)| 00:00:01 |
| 1 | HASH GROUP BY | | 3 | 15 | 4 (25)| 00:00:01 |
| 2 | TABLE ACCESS FULL| EMP | 14 | 70 | 3 (0)| 00:00:01 |
SQL>
Edited by: Łukasz Mastalerz on Jan 14, 2009 11:41 AM -
Please review the following SQL and its execution plan. Why am I seeing 2 WINDOW SORT operations even though the analytic function "row_number" has been used only once in the SQL?
Also, in step 3 of the plan, why do the "bytes" go from 35 GB (step 4) to 88 GB when the row count remains the same? In fact, since I'm selecting just the 1st row, both the row count and the "bytes" should have gone down. Shouldn't they?
SELECT orddtl.ord_dtl_key, orddtl.ld_nbr, orddtl.actv_flg,
orddtl.ord_nbr
FROM (SELECT /*+ parallel(od, 8) parallel(sc,8) */ od.ord_dtl_key, od.ld_nbr, od.actv_flg,
od.ord_nbr,
ROW_NUMBER () OVER (PARTITION BY od.ord_dtl_key, od.START_TS ORDER BY sc.START_TS DESC)
rownbr
FROM edw.order_detail od LEFT OUTER JOIN edw.srvc_code sc
ON ( sc.srvc_cd_key = od.srvc_cd_key
AND od.part_nbr = sc.part_nbr
AND od.item_cre_dt >= sc.START_TS
AND od.item_cre_dt < sc.END_TS )
WHERE od.part_nbr = 11 ) orddtl
WHERE orddtl.rownbr = 1;
Execution Plan
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 88M| 121G| | 2353K (65)| 00:33:07 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | 88M| 121G| | 2353K (65)| 00:33:07 | | | Q1,02 | P->S | QC (RAND) |
|* 3 | VIEW | | 88M| 121G| | 2353K (65)| 00:33:07 | | | Q1,02 | PCWP | |
|* 4 | WINDOW SORT PUSHED RANK | | 88M| 35G| 75G| 2353K (65)| 00:33:07 | | | Q1,02 | PCWP | |
| 5 | PX RECEIVE | | 88M| 35G| | 2353K (65)| 00:33:07 | | | Q1,02 | PCWP | |
| 6 | PX SEND HASH | :TQ10001 | 88M| 35G| | 2353K (65)| 00:33:07 | | | Q1,01 | P->P | HASH |
|* 7 | WINDOW CHILD PUSHED RANK| | 88M| 35G| | 2353K (65)| 00:33:07 | | | Q1,01 | PCWP | |
|* 8 | HASH JOIN RIGHT OUTER | | 88M| 35G| | 1610K (92)| 00:22:39 | | | Q1,01 | PCWP | |
| 9 | PX RECEIVE | | 1133K| 32M| | 1197 (20)| 00:00:02 | | | Q1,01 | PCWP | |
| 10 | PX SEND BROADCAST | :TQ10000 | 1133K| 32M| | 1197 (20)| 00:00:02 | | | Q1,00 | P->P | BROADCAST |
| 11 | PX BLOCK ITERATOR | | 1133K| 32M| | 1197 (20)| 00:00:02 | KEY | KEY | Q1,00 | PCWC | |
| 12 | TABLE ACCESS FULL | SRVC_CODE | 1133K| 32M| | 1197 (20)| 00:00:02 | 1 | 1 | Q1,00 | PCWP | |
| 13 | PX BLOCK ITERATOR | | 88M| 32G| | 188K (27)| 00:02:39 | KEY | KEY | Q1,01 | PCWC | |
| 14 | TABLE ACCESS FULL | ORDER_DETAIL | 88M| 32G| | 188K (27)| 00:02:39 | 1 | 1 | Q1,01 | PCWP | |
Predicate Information (identified by operation id):
3 - filter("orddtl"."rownbr"=1)
4 - filter(ROW_NUMBER() OVER ( PARTITION BY "od"."ORD_DTL_KEY","od"."START_TS" ORDER BY INTERNAL_FUNCTION("SC"."START_TS"(+))
DESC )<=1)
7 - filter(ROW_NUMBER() OVER ( PARTITION BY "od"."ORD_DTL_KEY","od"."START_TS" ORDER BY INTERNAL_FUNCTION("SC"."START_TS"(+))
DESC )<=1)
8 - access("od"."part_nbr"="SC"."part_nbr"(+) AND "SC"."SRVC_CD_KEY"(+)="od"."SRVC_CD_KEY")
filter("od"."ITEM_CRE_DT"<"SC"."END_TS"(+) AND "od"."ITEM_CRE_DT">="SC"."START_TS"(+))
Thanks Jonathan for your reply.
This type of pattern happens quite frequently in parallel execution with aggregation. A layer of slave processes can do partial aggregation before passing a reduced result set to the query co-ordinator to finish the job.
I wouldn't be 100% sure without building a model to check, but I think the logic of your query allows the eight slaves to identify each "row_number() = 1" for the data set they have collected, and then allows the query co-ordinator to do the window sort on the eight incoming rows (for each key) and determine which one of the eight is ultimately the highest date.
So is it a normal pattern? Will steps 7 and 4 do the same amount of work as stated in the Predicate Information part of the execution plan?
You’re correct! There are 8 slave processes that appear to be performing WINDOW CHILD PUSHED RANK (step 7 in the execution plan), as you can see in the following output. Per the execution plan and your comment, each one appears to be finding a partial set of rows with row_number() <= 1. It's apparently doing lots of work and is very slow even with 8 processes, so I'm not sure how slow the QC would be doing the same work just by itself.
And as you can see below, step 7 is very slow, and half of the slaves are performing a multi-pass sort operation. Even though 35 GB was estimated for that operation, why is it estimating a work area size of only 6-14 MB? It's allocating far less PGA than expected. PGA_AGGREGATE_TARGET was set to approximately 11 GB, and this was currently the only query/operation on the instance.
Why isn't it allocating more PGA for that operation? [My apologies for diverting from my original question.]
I have included PGA stats as well, taken 5-10 minutes later than the other PQ session information. They still show that there is no shortage of PGA.
Moreover, I have observed this behavior (under-allocation of PGA) for WINDOW SORT operations in other SQL statements too. Is it normal behavior? I'm on 10.2.0.4.
select
decode(px.qcinst_id,NULL,username,
' - '||lower(substr(pp.SERVER_NAME,
length(pp.SERVER_NAME)-4,4) ) )"Username",
decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
to_char( px.server_set) "SlaveSet",
to_char(s.sid) "SID",
to_char(px.inst_id) "Slave INST",
decode(sw.state,'WAITING', 'WAIT', 'NOT WAIT' ) as STATE,
case sw.state WHEN 'WAITING' THEN substr(sw.event,1,30) ELSE NULL end as wait_event ,
to_char(s.ROW_WAIT_OBJ#) wait_OBID,
decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
to_char(px.qcinst_id) "QC INST",
px.req_degree "Req. DOP",
px.degree "Actual DOP"
from gv$px_session px,
gv$session s ,
gv$px_process pp,
gv$session_wait sw
where px.sid=s.sid (+)
and px.serial#=s.serial#(+)
and px.inst_id = s.inst_id(+)
and px.sid = pp.sid (+)
and px.serial#=pp.serial#(+)
and sw.sid = s.sid
and sw.inst_id = s.inst_id
order by
decode(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),
px.QCSID,
decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
px.SERVER_SET,
px.INST_ID
UNAME QC/Slave SlaveSet SID Slave INS STATE WAIT_EVENT WAIT_OBID QC SID QC INS Req. DOP Actual DOP
APPS_ORD QC 1936 2 WAIT PX Deq: Execute Reply 71031 1936
- p006 (Slave) 1 1731 2 WAIT PX Deq: Execution Msg 71021 1936 2 8 8
- p007 (Slave) 1 2159 2 WAIT PX Deq: Execution Msg 71021 1936 2 8 8
- p002 (Slave) 1 2090 2 WAIT PX Deq: Execution Msg 71021 1936 2 8 8
- p005 (Slave) 1 1965 2 WAIT PX Deq: Execution Msg 71021 1936 2 8 8
- p001 (Slave) 1 1934 2 WAIT PX Deq: Execution Msg 71021 1936 2 8 8
- p004 (Slave) 1 1843 2 WAIT PX Deq: Execution Msg 71021 1936 2 8 8
- p000 (Slave) 1 1778 2 WAIT PX Deq: Execution Msg 71021 1936 2 8 8
- p003 (Slave) 1 1751 2 WAIT PX Deq: Execution Msg 71021 1936 2 8 8
- p009 (Slave) 2 2138 2 NOT WAIT 71031 1936 2 8 8
- p012 (Slave) 2 1902 2 NOT WAIT 71031 1936 2 8 8
- p008 (Slave) 2 1921 2 NOT WAIT 71031 1936 2 8 8
- p013 (Slave) 2 2142 2 NOT WAIT 71031 1936 2 8 8
- p015 (Slave) 2 2091 2 NOT WAIT 71031 1936 2 8 8
- p014 (Slave) 2 2122 2 NOT WAIT 71031 1936 2 8 8
- p010 (Slave) 2 2146 2 NOT WAIT 71031 1936 2 8 8
- p011 (Slave) 2 1754 2 NOT WAIT 71031 1936 2 8 8
SELECT operation_type AS type ,
workarea_address WADDR,
operation_id as OP_ID,
policy ,
vwa.sql_id,
vwa.inst_id i#,
vwa.sid ,
vwa.qcsid QCsID,
vwa.QCINST_ID QC_I#,
s.username uname,
ROUND(active_time /1000000,2) AS a_sec ,
ROUND(work_area_size /1024/1024,2) AS wsize ,
ROUND(expected_size /1024/1024,2) AS exp ,
ROUND(actual_mem_used/1024/1024,2) AS act ,
ROUND(max_mem_used /1024/1024,2) AS MAX ,
number_passes AS p#,
ROUND(tempseg_size/1024/1024,2) AS temp
FROM gv$sql_workarea_active vwa ,
gv$session s
where vwa.sid = s.sid
and vwa.inst_id = s.inst_id
order by vwa.sql_id, operation_id, vwa.inst_id, username, vwa.qcsid
TYPE WADDR OP_ID POLI SQL_ID I# SID QCSID QC_I# UNAME A_SEC WSIZE EXP ACT MAX P# TEMP
WINDOW (SORT) 07000003D2B03F90 7 AUTO 8z5s5wdy94ty3 2 2146 1936 2 APPS_ORD 1181.22 13.59 13.59 7.46 90.98 1 320
WINDOW (SORT) 07000003D2B03F90 7 AUTO 8z5s5wdy94ty3 2142 1936 2 APPS_ORD 1181.07 7.03 7.03 4.02 90.98 0 288
WINDOW (SORT) 07000003D2B03F90 7 AUTO 8z5s5wdy94ty3 2091 1936 2 APPS_ORD 1181.06 7.03 7.03 4.5 90.98 0 288
WINDOW (SORT) 07000003D2B03F90 7 AUTO 8z5s5wdy94ty3 1921 1936 2 APPS_ORD 1181.09 13.59 13.59 2.24 90.98 1 320
WINDOW (SORT) 07000003D2B03F90 7 AUTO 8z5s5wdy94ty3 2138 1936 2 APPS_ORD 1181.16 7.03 7.03 1.34 90.98 0 288
WINDOW (SORT) 07000003D2B03F90 7 AUTO 8z5s5wdy94ty3 1754 1936 2 APPS_ORD 1181.09 14.06 14.06 5.77 90.98 1 320
WINDOW (SORT) 07000003D2B03F90 7 AUTO 8z5s5wdy94ty3 2122 1936 2 APPS_ORD 1181.15 6.56 6.56 .24 90.98 0 288
WINDOW (SORT) 07000003D2B03F90 7 AUTO 8z5s5wdy94ty3 1902 1936 2 APPS_ORD 1181.12 14.06 14.06 9.12 90.98 1 320
HASH-JOIN 07000003D2B03F28 8 AUTO 8z5s5wdy94ty3 2142 1936 2 APPS_ORD 1183.24 98.64 98.64 100.44 100.44 0
HASH-JOIN 07000003D2B03F28 8 AUTO 8z5s5wdy94ty3 2138 1936 2 APPS_ORD 1183.24 98.64 98.64 100.44 100.44 0
HASH-JOIN 07000003D2B03F28 8 AUTO 8z5s5wdy94ty3 2122 1936 2 APPS_ORD 1183.24 98.64 98.64 100.44 100.44 0
HASH-JOIN 07000003D2B03F28 8 AUTO 8z5s5wdy94ty3 2091 1936 2 APPS_ORD 1183.24 98.64 98.64 100.44 100.44 0
HASH-JOIN 07000003D2B03F28 8 AUTO 8z5s5wdy94ty3 1921 1936 2 APPS_ORD 1183.24 98.64 98.64 100.44 100.44 0
HASH-JOIN 07000003D2B03F28 8 AUTO 8z5s5wdy94ty3 1902 1936 2 APPS_ORD 1183.24 98.64 98.64 100.44 100.44 0
HASH-JOIN 07000003D2B03F28 8 AUTO 8z5s5wdy94ty3 2146 1936 2 APPS_ORD 1183.24 98.64 98.64 100.44 100.44 0
HASH-JOIN 07000003D2B03F28 8 AUTO 8z5s5wdy94ty3 1754 1936 2 APPS_ORD 1183.24 98.64 98.64 100.44 100.44 0
sum 872.07 838.21
PGA Stats – taken 5-10 minutes later than above.
select name, decode(unit,'bytes',round(value/1048576,2)||' MB', value) value from v$pgastat
NAME VALUE
aggregate PGA target parameter 11264 MB
aggregate PGA auto target 9554.7 MB
global memory bound 1024 MB
total PGA inuse 902.21 MB
total PGA allocated 3449.64 MB
maximum PGA allocated 29155.44 MB
total freeable PGA memory 2140.56 MB
process count 107
max processes count 379
PGA memory freed back to OS 77240169.56 MB
total PGA used for auto workareas 254.14 MB
maximum PGA used for auto workareas 22797.02 MB
total PGA used for manual workareas 0 MB
maximum PGA used for manual workareas 16.41 MB
over allocation count 0
bytes processed 323796668.77 MB
extra bytes read/written 183362312.02 MB
cache hit percentage 63.84
recompute count (total) 2054320
SELECT
PGA_TARGET_FOR_ESTIMATE/1048576 ESTMTD_PGA_MB,
PGA_TARGET_FACTOR PGA_TGT_FCTR,
ADVICE_STATUS ADV_STS,
BYTES_PROCESSED/1048576 ESTMTD_MB_PRCD,
ESTD_EXTRA_BYTES_RW/1048576 ESTMTD_XTRA_MB,
ESTD_PGA_CACHE_HIT_PERCENTAGE ESTMTD_CHIT_PCT,
ESTD_OVERALLOC_COUNT O_ALOC_CNT
FROM V$PGA_TARGET_ADVICE
ESTMTD_PGA_MB PGA_TGT_FCTR ADV ESTMTD_MB_PRCD ESTMTD_XTRA_MB ESTMTD_CHIT_PCT O_ALOC_CNT
1,408 .125 ON 362,905,053 774,927,577 32 19973
2,816 .25 ON 362,905,053 571,453,995 39 709
5,632 .5 ON 362,905,053 249,201,001 59 5
8,448 .75 ON 362,905,053 216,717,381 63 0
11,264 1 ON 362,905,053 158,762,256 70 0
13,517 1.2 ON 362,905,053 153,025,642 70 0
15,770 1.4 ON 362,905,053 153,022,337 70 0
18,022 1.6 ON 362,905,053 153,022,337 70 0
20,275 1.8 ON 362,905,053 153,022,337 70 0
22,528 2 ON 362,905,053 153,022,337 70 0
33,792 3 ON 362,905,053 153,022,337 70 0
45,056 4 ON 362,905,053 153,022,337 70 0
67,584 6 ON 362,905,053 153,022,337 70 0
90,112 8 ON 362,905,053 153,022,337 70 0 -
Hi,
I'm getting a buffer sort in the explain plan.
Please let me know what it means.
Thanks,
Kumar.
It means that Oracle is caching some data from the row source into private memory in order to avoid having to read it multiple times.
-
I have a query that shows me that one of my tables is doing a full scan and a buffer sort, but I don't have any ORDER BY clause, or DISTINCT, or anything... why does the BUFFER SORT appear???
Thanks!
Probably because the database needs to take an interim result set and put it in a certain sequence in order to make the next phase of the query more efficient.
Something like
take the username and dept number from the emp table, sort in dept number sequence, then go to the dept table to get the department name. -
Swap, temporary tablespace and sort operations
Hello.
I have Oracle 8.1.7 on Linux RH 7.1. I see a very interesting situation: when users begin to execute large selects with many sort operations, swapping grows, but the temporary tablespace doesn't grow. As I understand it, when Oracle has no memory to use as sort_area_size, it uses the temporary tablespace. But it looks like when Oracle asks for memory, Linux begins to swap (in order to give memory to Oracle). I mean that Oracle doesn't use the temporary tablespace but uses swap instead. Is that true? Is it a problem? Is it an Oracle, Linux, or configuration bug? Is it better to use swap or the temporary tablespace? Which is faster?
Thanks for all advice and ideas. And sorry for my poor English.
Log in to your database as DBA (SYS AS SYSDBA) and issue the following query:
SQL> select name, value from v$parameter where name = 'sga_max_size' ;
See the value defined for this parameter. If it is larger than what you have configured as your SGA size, Oracle will assume that it can expand the SGA to the "sga_max_size" value, and will try to expand the SGA when required. This results in Oracle asking the Linux kernel for more memory, and Linux then starts to use swap space.
Try changing the value of this parameter and see if it helps. -
Will the MAX function carry out a sort operation?
Hi friends,
Will the MAX function perform a sort operation in temporary
segments? Please list the set of operations that will take
place for the following query (apart from parsing and fetching the
data).
Ex:
delete from tab1 p1
where p1.rowid < (select max(p2.rowid)
from tab1 p2 where
p1.c1 = p2.c1);
Regards,
G. Rajakumar.
You can get that information using the EXPLAIN PLAN command:
DELETE STATEMENT
DELETE TAB1
FILTER
TABLE ACCESS FULL TAB1
SORT AGGREGATE TAB1
TABLE ACCESS FULL TAB1 -
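A plan like the one above can be reproduced with a minimal sketch (hypothetical demo table TAB1 with a single column C1; the exact plan shape varies by version and statistics). The SORT AGGREGATE step comes from evaluating MAX() for each candidate row:

```sql
-- Hypothetical demo: classic de-duplication that keeps the highest
-- ROWID per C1 value; MAX(rowid) in the correlated subquery is what
-- produces the SORT AGGREGATE step in the plan.
CREATE TABLE tab1 (c1 NUMBER);
INSERT INTO tab1 VALUES (1);
INSERT INTO tab1 VALUES (1);
INSERT INTO tab1 VALUES (2);

EXPLAIN PLAN FOR
DELETE FROM tab1 p1
WHERE p1.rowid < (SELECT MAX(p2.rowid)
                  FROM   tab1 p2
                  WHERE  p1.c1 = p2.c1);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Note that SORT AGGREGATE, despite its name, does not sort the data; it just scans and keeps the running aggregate.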
Hi All,
Can anyone please explain where and why a SORT operation will be performed while executing the code below?
PROCEDURE process_all_rows
IS
TYPE employees_aat IS TABLE OF employees%ROWTYPE INDEX BY PLS_INTEGER;
l_employees employees_aat;
BEGIN
SELECT * BULK COLLECT INTO l_employees FROM employees;
FOR indx IN 1 .. l_employees.COUNT
LOOP
analyze_compensation (l_employees(indx));
END LOOP;
END process_all_rows;
The code from below link:
http://www.oracle.com/technetwork/issue-archive/2008/08-mar/o28plsql-095155.html
Thanks in advance.
Red Penyon wrote:
An associative array (formerly called PL/SQL table or index-by table) is a set of key-value pairs. Each key is a unique index, used to locate the associated value with the syntax variable_name(index).
The data type of the index can be either a string type or PLS_INTEGER. Indexes are stored in sorted order, not creation order. For string types, the sort order is determined by the initialization parameters NLS_SORT and NLS_COMP.
So then are associative arrays with name-value pairs, where the name is numeric, not sorted? Why would an associative array indexed by number not be sorted?
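As a minimal illustrative sketch (hypothetical keys and values): an associative array indexed by PLS_INTEGER is also kept in key order, which you can observe by traversing it with the FIRST/NEXT collection methods regardless of insertion order:

```sql
DECLARE
  TYPE t_aa IS TABLE OF VARCHAR2(10) INDEX BY PLS_INTEGER;
  aa t_aa;
  k  PLS_INTEGER;
BEGIN
  -- insert keys deliberately out of order
  aa(30) := 'thirty';
  aa(10) := 'ten';
  aa(20) := 'twenty';
  -- FIRST/NEXT walk the keys in ascending order: 10, 20, 30
  k := aa.FIRST;
  WHILE k IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(k || ' => ' || aa(k));   -- SET SERVEROUTPUT ON to see it
    k := aa.NEXT(k);
  END LOOP;
END;
/
```

So numerically indexed arrays are "sorted" in the sense that traversal by FIRST/NEXT visits keys in ascending numeric order, whatever order they were assigned in.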
Why Sort operation on clustered columstore index insert?
Looking at the execution plan for a clustered columnstore index insert I noticed a Sort operation. My T-SQL has no sort and I understand that the clustered columnstore is not a sorted index. Why would there be a Sort operation in the execution plan?
This is running on:
Microsoft SQL Server 2014 - 12.0.2000.8 (X64)
Feb 20 2014 20:04:26
Copyright (c) Microsoft Corporation
Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)
Hello,
It's because of how a columnstore index works: the index is created and compressed at the column level, not at the row level. SQL Server sorts the data so that equal values sit next to each other, which lets it compute better-compressed index segments.
Olaf Helper
Group By, Sort operation on pl/sql collection
Hi,
Is it possible to do a group by or a order by operation on a pl/sql table of records?
Thanks,
Ashish
If you are building your collection from the database, then why don't you sort in the SQL that you use to get the data from the database? If that does not help, then this link will help you:
sort data of pl/sql table
Thanks -
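A minimal sketch of the first suggestion (assuming an HR-style EMPLOYEES table with a LAST_NAME column): let the SQL engine do the ordering before the collection is built, so no sort is needed on the PL/SQL side:

```sql
DECLARE
  TYPE t_names IS TABLE OF VARCHAR2(30);
  l_names t_names;
BEGIN
  -- the ORDER BY (or a GROUP BY) runs in the SQL engine, so the
  -- collection is already sorted when BULK COLLECT populates it
  SELECT last_name
  BULK COLLECT INTO l_names
  FROM   employees
  ORDER BY last_name;

  FOR i IN 1 .. l_names.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(l_names(i));   -- SET SERVEROUTPUT ON to see it
  END LOOP;
END;
/
```

Sorting or grouping in SQL is almost always cheaper than re-implementing it over a PL/SQL table of records.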
Sort operations in several orders
I have a requirement for a report where I could list all the operations for a revision.
The user wants to be able to sort these operations so the maintenance can be performed in the right order. One idea was to use the sort field on the operation, but then the user has to go into each and every order, then into the operation, and update the field.
Any ideas on how to solve this?
Has anyone had any similar requirements?Hi Kristoffer,
' I have a requirement for a report where I could list all the operations for a revision. '
The Revision Selection field on the General/Administration tab of transaction IW37N gives the desired results. Isn't this the requirement?
See this picture.
Jogeswara Rao K -
10g: parallel pipelined table func - distributing DISTINCT data sets
Hi,
I want to distribute data records, selected from a cursor, via a parallel pipelined table function to multiple worker threads for processing and returning result records.
The tables I am selecting data from are partitioned and subpartitioned.
All tables share the same partitioning/subpartitioning scheme.
Each table has a column Subpartition_Key, which is hashed to a physical subpartition.
E.g. the Subpartition_Key ranges from 000...999, but we have only 10 physical subpartitions.
The select of records is done partition-wise, one partition after another (in bulks).
The parallel running worker threads select more data from other tables for their processing (a 2nd-level select).
Now my goal is to distribute the initial records to the worker threads in such a way that they operate on distinct subpartitions, to decouple access to resources (for the 2nd-level select).
But I cannot just use 'parallel_enable(partition curStage1 by hash(subpartition_key))' for the distribution.
hash(subpartition_key) (hashing A) does not match the hashing B used to assign the physical subpartition on INSERT into the tables.
Even when I remodel hashing B, calculate some SubPartNo(subpartition_key) and use that for 'parallel_enable(partition curStage1 by hash(SubPartNo))', it doesn't work.
'parallel_enable(partition curStage1 by range(SubPartNo))' doesn't help either. The load distribution is unbalanced: some worker threads get data of one subpartition, some of multiple subpartitions, and some are idle.
How can I distribute the records to the worker threads according to a given subpartition scheme?
[Amendment:
Actually the hashing for the parallel_enable is counterproductive here; it would be better to have some 'parallel_enable(partition curStage1 by SubPartNo)'.]
- many thanks!
best regards,
Frank
Edited by: user8704911 on Jan 12, 2012 2:51 AM
Hello
A couple of things to note. First, when you use partition by hash (or range) on 10gR2 and above, there is an additional BUFFER SORT operation versus using partition by ANY. For small data sets this is not necessarily an issue, but the temp space used by this stage can be significant for larger data sets, so be sure to check temp space usage for this process or you could run into problems later.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 1722K| 24 (0)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,01 | P->S | QC (RAND) |
| 3 |****BUFFER SORT**** | | 8168 | 1722K| | | | | Q1,01 | PCWP | |
| 4 | VIEW | | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 5 | COLLECTION ITERATOR PICKLER FETCH| TF | | | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 100 | 4800 | 2 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 100 | 4800 | 2 (0)| 00:00:01 | | | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | TEST_TAB | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 20 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------------
It may be that in this case you can use clustering with partition by ANY to achieve your goal...
create or replace package test_pkg as
type Test_Tab_Rec_t is record (
Tracking_ID number(19),
Partition_Key date,
Subpartition_Key number(3),
sid number
);
type Test_Tab_Rec_Tab_t is table of Test_Tab_Rec_t;
type Test_Tab_Rec_Hash_t is table of Test_Tab_Rec_t index by binary_integer;
type Test_Tab_Rec_HashHash_t is table of Test_Tab_Rec_Hash_t index by binary_integer;
type Cur_t is ref cursor return Test_Tab_Rec_t;
procedure populate;
procedure report;
function tf(cur in Cur_t)
return test_list pipelined
parallel_enable(partition cur by hash(subpartition_key));
function tf_any(cur in Cur_t)
return test_list PIPELINED
CLUSTER cur BY (Subpartition_Key)
parallel_enable(partition cur by ANY);
end;
create or replace package body test_pkg as
procedure populate
is
Tracking_ID number(19) := 1;
Partition_Key date := current_timestamp;
Subpartition_Key number(3) := 1;
begin
dbms_output.put_line(chr(10) || 'populate data into Test_Tab...');
for Subpartition_Key in 0..99
loop
for ctr in 1..1
loop
insert into test_tab (tracking_id, partition_key, subpartition_key)
values (Tracking_ID, Partition_Key, Subpartition_Key);
Tracking_ID := Tracking_ID + 1;
end loop;
end loop;
dbms_output.put_line('...done (populate data into Test_Tab)');
end;
procedure report
is
recs Test_Tab_Rec_Tab_t;
begin
dbms_output.put_line(chr(10) || 'list data per partition/subpartition...');
for item in (select partition_name, subpartition_name from user_tab_subpartitions where table_name='TEST_TAB' order by partition_name, subpartition_name)
loop
dbms_output.put_line('partition/subpartition = ' || item.partition_name || '/' || item.subpartition_name || ':');
execute immediate 'select * from test_tab SUBPARTITION(' || item.subpartition_name || ')' bulk collect into recs;
if recs.count > 0
then
for i in recs.first..recs.last
loop
dbms_output.put_line('...' || recs(i).Tracking_ID || ', ' || recs(i).Partition_Key || ', ' || recs(i).Subpartition_Key);
end loop;
end if;
end loop;
dbms_output.put_line('... done (list data per partition/subpartition)');
end;
function tf(cur in Cur_t)
return test_list pipelined
parallel_enable(partition cur by hash(subpartition_key))
is
sid number;
input Test_Tab_Rec_t;
output test_object;
begin
select userenv('SID') into sid from dual;
loop
fetch cur into input;
exit when cur%notfound;
output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
pipe row(output);
end loop;
close cur;
return;
end;
function tf_any(cur in Cur_t)
return test_list PIPELINED
CLUSTER cur BY (Subpartition_Key)
parallel_enable(partition cur by ANY)
is
sid number;
input Test_Tab_Rec_t;
output test_object;
begin
select userenv('SID') into sid from dual;
loop
fetch cur into input;
exit when cur%notfound;
output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
pipe row(output);
end loop;
close cur;
return;
end;
end;
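For completeness: the thread never shows the supporting objects (test_tab, test_object, test_list, or the two target tables), so the definitions below are my assumptions, not the original poster's DDL. The test case presumably relies on something along these lines:

```sql
-- Hypothetical supporting objects (not shown in the thread) that the
-- test_pkg example appears to depend on.

-- Range/hash composite-partitioned source table:
create table test_tab (
  tracking_id      number(19),
  partition_key    date,
  subpartition_key number(3)
)
partition by range (partition_key)
subpartition by hash (subpartition_key) subpartitions 10
(partition p_max values less than (maxvalue));

-- Object and collection types returned by the pipelined functions:
create type test_object as object (
  tracking_id      number(19),
  partition_key    date,
  subpartition_key number(3),
  sid              number
);
/
create type test_list as table of test_object;
/

-- Target tables for the two insert tests:
create table test_tab_part_hash (
  tracking_id      number(19),
  partition_key    date,
  subpartition_key number(3),
  sid              number
);
create table test_tab_part_any_cluster (
  tracking_id      number(19),
  partition_key    date,
  subpartition_key number(3),
  sid              number
);
```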
XXXX> with parts as (
2 select --+ materialize
3 data_object_id,
4 subobject_name
5 FROM
6 user_objects
7 WHERE
8 object_name = 'TEST_TAB'
9 and
10 object_type = 'TABLE SUBPARTITION'
11 )
12 SELECT
13 COUNT(*),
14 parts.subobject_name,
15 target.sid
16 FROM
17 parts,
18 test_tab tt,
19 test_tab_part_hash target
20 WHERE
21 tt.tracking_id = target.tracking_id
22 and
23 parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24 GROUP BY
25 parts.subobject_name,
26 target.sid
27 ORDER BY
28 target.sid,
29 parts.subobject_name
30 /
XXXX> INSERT INTO test_tab_part_hash select * from table(test_pkg.tf(CURSOR(select * from test_tab)))
2 /
100 rows created.
Elapsed: 00:00:00.14
XXXX>
XXXX> INSERT INTO test_tab_part_any_cluster select * from table(test_pkg.tf_any(CURSOR(select * from test_tab)))
2 /
100 rows created.
--using partition by hash
XXXX> with parts as (
2 select --+ materialize
3 data_object_id,
4 subobject_name
5 FROM
6 user_objects
7 WHERE
8 object_name = 'TEST_TAB'
9 and
10 object_type = 'TABLE SUBPARTITION'
11 )
12 SELECT
13 COUNT(*),
14 parts.subobject_name,
15 target.sid
16 FROM
17 parts,
18 test_tab tt,
19 test_tab_part_hash target
20 WHERE
21 tt.tracking_id = target.tracking_id
22 and
23 parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24 GROUP BY
25 parts.subobject_name,
26 target.sid
27 /
COUNT(*) SUBOBJECT_NAME SID
3 SYS_SUBP31 1272
1 SYS_SUBP32 1272
1 SYS_SUBP33 1272
3 SYS_SUBP34 1272
1 SYS_SUBP36 1272
1 SYS_SUBP37 1272
3 SYS_SUBP38 1272
1 SYS_SUBP39 1272
1 SYS_SUBP32 1280
2 SYS_SUBP33 1280
2 SYS_SUBP34 1280
1 SYS_SUBP35 1280
2 SYS_SUBP36 1280
1 SYS_SUBP37 1280
2 SYS_SUBP38 1280
1 SYS_SUBP40 1280
2 SYS_SUBP33 1283
2 SYS_SUBP34 1283
2 SYS_SUBP35 1283
2 SYS_SUBP36 1283
1 SYS_SUBP37 1283
1 SYS_SUBP38 1283
2 SYS_SUBP39 1283
1 SYS_SUBP40 1283
1 SYS_SUBP32 1298
1 SYS_SUBP34 1298
1 SYS_SUBP36 1298
2 SYS_SUBP37 1298
4 SYS_SUBP38 1298
2 SYS_SUBP40 1298
1 SYS_SUBP31 1313
1 SYS_SUBP33 1313
1 SYS_SUBP39 1313
1 SYS_SUBP40 1313
1 SYS_SUBP32 1314
1 SYS_SUBP35 1314
1 SYS_SUBP38 1314
1 SYS_SUBP40 1314
2 SYS_SUBP33 1381
1 SYS_SUBP34 1381
1 SYS_SUBP35 1381
3 SYS_SUBP36 1381
3 SYS_SUBP37 1381
1 SYS_SUBP38 1381
2 SYS_SUBP36 1531
1 SYS_SUBP37 1531
2 SYS_SUBP38 1531
1 SYS_SUBP39 1531
1 SYS_SUBP40 1531
2 SYS_SUBP33 1566
1 SYS_SUBP34 1566
1 SYS_SUBP35 1566
1 SYS_SUBP37 1566
1 SYS_SUBP38 1566
2 SYS_SUBP39 1566
3 SYS_SUBP40 1566
1 SYS_SUBP32 1567
3 SYS_SUBP33 1567
3 SYS_SUBP35 1567
3 SYS_SUBP36 1567
1 SYS_SUBP37 1567
2 SYS_SUBP38 1567
62 rows selected.
--using partition by any cluster by subpartition_key
Elapsed: 00:00:00.26
XXXX> with parts as (
2 select --+ materialize
3 data_object_id,
4 subobject_name
5 FROM
6 user_objects
7 WHERE
8 object_name = 'TEST_TAB'
9 and
10 object_type = 'TABLE SUBPARTITION'
11 )
12 SELECT
13 COUNT(*),
14 parts.subobject_name,
15 target.sid
16 FROM
17 parts,
18 test_tab tt,
19 test_tab_part_any_cluster target
20 WHERE
21 tt.tracking_id = target.tracking_id
22 and
23 parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24 GROUP BY
25 parts.subobject_name,
26 target.sid
27 ORDER BY
28 target.sid,
29 parts.subobject_name
30 /
COUNT(*) SUBOBJECT_NAME SID
11 SYS_SUBP37 1253
10 SYS_SUBP34 1268
4 SYS_SUBP31 1289
10 SYS_SUBP40 1314
7 SYS_SUBP39 1367
9 SYS_SUBP35 1377
14 SYS_SUBP36 1531
5 SYS_SUBP32 1572
13 SYS_SUBP33 1577
17 SYS_SUBP38 1609
10 rows selected.

Bear in mind though that this does require a sort of the incoming data set, but does not require buffering of the output:
PLAN_TABLE_OUTPUT
Plan hash value: 2570087774
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 1722K| 24 (0)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,00 | PCWP | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| TF_ANY | | | | | | | Q1,00 | PCWP | |
| 5 | SORT ORDER BY | | | | | | | | Q1,00 | PCWP | |
| 6 | PX BLOCK ITERATOR | | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL | TEST_TAB | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 20 | Q1,00 | PCWP | |
----------------------------------------------------------------------------------------------------------------------------------------------

HTH
David -
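The advice above about watching temp space can be acted on directly; the query below is a sketch against the v$ views (view and column names exist in 10g and later, but exact availability may vary by version and privileges):

```sql
-- Temp segment usage per session while the parallel insert runs:
-- run this from a second session to see how much temp the
-- BUFFER SORT stage is consuming.
select s.sid,
       u.sql_id,
       u.tablespace,
       u.segtype,
       round(u.blocks * t.block_size / 1024 / 1024) as mb_used
from   v$tempseg_usage  u
       join v$session       s on s.saddr = u.session_addr
       join dba_tablespaces t on t.tablespace_name = u.tablespace
order by mb_used desc;
```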
What does this last statement mean? It is as though the query runs just as it would without any hints.
oracle doc:
Using Parallel Execution
Examples of Distributed Transaction Parallelization
This section contains several examples of distributed transaction processing.
Example 1 Distributed Transaction Parallelization
In this example, the DML statement queries a remote object:
INSERT /*+ APPEND PARALLEL (t3,2) */ INTO t3 SELECT * FROM t4@dblink;
The query operation is executed serially without notification because it references a remote object.

Randolf,
Since I have a real db link, why not test it myself? (See my questions at the end of this thread.)
SQL> insert /*+ append parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
51 rows created.
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID 1nhmuzb4ayq7x, child number 0
insert /*+ append parallel(lcl) */ into local_tab lcl select /*+
parralel(dst) */ dst.* from distant_tab dst
Plan hash value: 2098243032
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | INSERT STATEMENT | | | | 57 (100)| | | |
| 1 | LOAD AS SELECT | | | | | | | |
| 2 | REMOTE | distant_tab | 51 | 3009 | 57 (0)| 00:00:01 | XLCL~ | R->S |
Remote SQL Information (identified by operation id):
2 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOI
C_CODE","BUR_UIC_CODE","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE",
"VALID_TO_DATE","SORTING","SO_NEEDED" FROM "distant_tab" "DST" (accessing'XLCL_XDST.WORLD' )
Let enable parallel DML and repeat the same insert
SQL> alter session enable parallel dml;
Session altered.
SQL> insert /*+ append parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
51 rows created.
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID 1nhmuzb4ayq7x, child number 1
insert /*+ append parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst
Plan hash value: 2511483212
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ/Ins |IN-OUT| PQ Distrib |
| 0 | INSERT STATEMENT | | | | 57 (100)| | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 51 | 3009 | 57 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |
| 3 | LOAD AS SELECT | | | | | | Q1,01 | PCWP | |
| 4 | PX RECEIVE | | 51 | 3009 | 57 (0)| 00:00:01 | Q1,01 | PCWP | |
| 5 | PX SEND ROUND-ROBIN| :TQ10000 | 51 | 3009 | 57 (0)| 00:00:01 | | S->P | RND-ROBIN |
| 6 | REMOTE | distant_tab | 51 | 3009 | 57 (0)| 00:00:01 | XLCL~ | R->S | |
Remote SQL Information (identified by operation id):
6 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOIC_CODE","BUR_UIC_COD
E","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE","VALID_TO_DATE","SORTING","SO_NEEDED"
FROM "distant_tab" "DST" (accessing 'XLCL_XDST.WORLD' )
SQL> select * from local_tab;
select * from local_tab
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
SQL> select
2 dfo_number,
3 tq_id,
4 server_type,
5 process,
6 num_rows,
7 bytes,
8 waits,
9 timeouts,
10 avg_latency,
11 instance
12 from
13 v$pq_tqstat
14 order by
15 dfo_number,
16 tq_id,
17 server_type desc,
18 process
19 ;
DFO_NUMBER TQ_ID SERVER_TYP PROCES NUM_ROWS BYTES WAITS TIMEOUTS AVG_LATENCY INSTANCE
1 0 Producer QC 51 4451 0 0 0 1
1 1 Consumer QC 1 683 14 6 0 1
This time parallel DML has been used.
What If I create a trigger on the local_tab table and repeat the insert?
SQL> create or replace trigger local_tab_trg
2 before insert on local_tab
3 for each row
4 begin
5 null;
6 end;
7 /
Trigger created.
SQL> insert /*+ append parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
51 rows created.
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID 1nhmuzb4ayq7x, child number 1
insert /*+ append parallel(lcl) */ into local_tab lcl select /*+
parralel(dst) */ dst.* from distant_tab dst
Plan hash value: 1788691278
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | INSERT STATEMENT | | | | 57 (100)| | | |
| 1 | LOAD TABLE CONVENTIONAL | | | | | | | |
| 2 | REMOTE | distant_tab | 51 | 3009 | 57 (0)| 00:00:01 | XLCL~ | R->S |
Remote SQL Information (identified by operation id):
2 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOIC_CODE",
"BUR_UIC_CODE","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE","VALID_TO_DATE","S
ORTING","SO_NEEDED" FROM "distant_tab" "DST" (accessing 'XLCL_XDST.WORLD' )
Parallel execution has been disabled by the existence of this trigger, in both the distant and the local database.
SQL> drop trigger local_tab_trg;
Trigger dropped.
Now I want to test an insert using only the append hint as shown below
SQL> insert /*+ append */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
51 rows created.
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID 4pkxbmy8410s9, child number 0
insert /*+ append */ into local_tab lcl select /*+ parralel(dst) */
dst.* from distant_tab dst
Plan hash value: 2098243032
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | INSERT STATEMENT | | | | 57 (100)| | | |
| 1 | LOAD AS SELECT | | | | | | | |
| 2 | REMOTE | distant_tab | 51 | 3009 | 57 (0)| 00:00:01 | XLCL~ | R->S |
Remote SQL Information (identified by operation id):
2 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOI
C_CODE","BUR_UIC_CODE","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE","V
ALID_TO_DATE","SORTING","SO_NEEDED" FROM "distant_tab" "DST" (accessing
'XLCL_XDST.WORLD' )
SQL> select * from local_tab;
select * from local_tab
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
Question 1: What does this last ORA-12838 mean, if the execution plan is not showing a parallel DML insert?
Particularly when I repeat the same insert using a parallel hint without the append hint
SQL> insert /*+ parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
51 rows created.
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID 40uqkc82n1mqn, child number 0
insert /*+ parallel(lcl) */ into local_tab lcl select /*+ parralel(dst)
*/ dst.* from distant_tab dst
Plan hash value: 2511483212
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ/Ins |IN-OUT| PQ Distrib |
| 0 | INSERT STATEMENT | | | | 57 (100)| | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 51 | 3009 | 57 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |
| 3 | LOAD AS SELECT | | | | | | Q1,01 | PCWP | |
| 4 | PX RECEIVE | | 51 | 3009 | 57 (0)| 00:00:01 | Q1,01 | PCWP | |
| 5 | PX SEND ROUND-ROBIN| :TQ10000 | 51 | 3009 | 57 (0)| 00:00:01 | | S->P | RND-ROBIN |
| 6 | REMOTE | distant_tab | 51 | 3009 | 57 (0)| 00:00:01 | A1124~ | R->S | |
Remote SQL Information (identified by operation id):
6 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOI
C_CODE","BUR_UIC_CODE","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE","V
ALID_TO_DATE","SORTING","SO_NEEDED" FROM "distant_tab" "DST" (accessing'XLCL_XDST.WORLD' )
SQL> select * from local_tab;
select * from local_tab
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
"in that particular case the REMOTE data was joined to some local data and there was an additional BUFFER SORT operation in the execution plan, which (in most cases) shows up when a serial to parallel distribution (S->P) is involved in an operation with further re-distribution of data, and the BUFFER SORT had to buffer all the data from remote before the local operation could continue - defeating any advantage of speeding up the local write operation by parallel DML. A serial direct path insert was actually faster in that particular case."
Question 2: Isn't this the normal behavior of distributed DML, where the driving site is always the site where the DML operation is done? And isn't this why the BUFFER SORT had to buffer all the data from the remote site to the driving (local) site?
http://jonathanlewis.wordpress.com/2008/12/05/distributed-dml/
Best regards
Mohamed Houri -
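Regarding Question 1: ORA-12838 is raised after any direct-path insert, serial or parallel. The LOAD AS SELECT in the serial plan is still a direct-path load, so the table cannot be queried in the same transaction until a commit. A minimal sketch (table name is made up for illustration):

```sql
create table t_demo (n number);

-- APPEND forces a direct-path load even in a serial plan:
insert /*+ append */ into t_demo
select rownum from dual connect by level <= 10;

-- Querying the table in the same transaction raises ORA-12838:
select count(*) from t_demo;

-- After committing, the table is readable again:
commit;
select count(*) from t_demo;
```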
Dear All,
Please let me know how to publish my explain plan output properly here. Earlier I used [code] ..... [/code]
Thanks,
Kods

Who knows?
Are you just looking at random explain plans to find a problem?
Or did you look at the explain plan for a specific performance problem?
Please format code and explain plan output with code tags.
See also this thread, How to post a SQL tuning request:
HOW TO: Post a SQL statement tuning request - template posting
At the end of the day are the row estimates accurate?
If they are, then everything's probably ok.
If not, then everything might be ok or it might not be.
MERGE JOIN CARTESIAN + BUFFER SORT operations are often, but not always, an indication of a problem, particularly when the associated estimates are inaccurate for whatever reason.
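One common way to end up with the MERGE JOIN CARTESIAN / BUFFER SORT pair is a missing or accidentally self-referential join predicate (compare the first post's `JOIN PROJECTS P ON PS.PROJECT_SEQ = PS.PROJECT_SEQ`, which compares a column with itself). A self-contained illustration with made-up table names:

```sql
create table plays (movie_id number, play_date date);
create table movie (movie_id number, title varchar2(100));

-- The predicate compares a column with itself, so no real join
-- condition links the two tables and the optimizer falls back
-- to a cartesian product.
explain plan for
select *
from   plays p
       join movie m on m.movie_id = m.movie_id;

select * from table(dbms_xplan.display);
-- Typically shows a shape like:
--   MERGE JOIN CARTESIAN
--     TABLE ACCESS FULL  PLAYS
--     BUFFER SORT
--       TABLE ACCESS FULL  MOVIE
```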