Tuning issues
Hi,
I want to improve my PL/SQL tuning skills.
Right now I only look at explain plans, and if there is a full table scan I try to force the optimizer to use an index scan instead.
Sometimes we create new indexes, but I lack experience in PL/SQL tuning.
Currently I am watching Steven Feuerstein's videos about common mistakes in PL/SQL development.
I want to develop a plan for learning and improving my PL/SQL tuning.
As far as I know, traces and the alert log are used for SQL tuning and performance issues in Oracle databases. I have also heard of a tool called tkprof.
Could you please recommend some tutorials or books about PL/SQL tuning?
I would also be glad if you could point me to concepts, like tracing, that I can use in tuning and performance improvement work.
Regards&Thanks
Hi,
First, you have to understand how the database works. I would recommend reading Oracle Database Concepts:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/toc.htm
BTW, there is also a book in Oracle documentation - Performance Tuning Guide:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm
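The tracing and tkprof mentioned above work well alongside those guides. A minimal sketch of tracing one session (the trace file names below are made up for the example):

```sql
-- Make the session's trace file easier to find
ALTER SESSION SET tracefile_identifier = 'mytrace';

-- Turn on SQL trace with wait events and bind values (event 10046, level 12)
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the statements you want to analyze ...

-- Turn tracing off
ALTER SESSION SET events '10046 trace name context off';
```

The raw trace file lands in the user dump directory and can then be formatted from the OS shell with tkprof, e.g. `tkprof orcl_ora_1234_mytrace.trc report.txt sys=no sort=exeela`.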
Similar Messages
-
Profitability analysis activated in POS Interface, performance/tuning issue?
We are about to go live with a SAP retail system. All purchases made by customers in store are sent into SAP via an IDoc through the so-called POS (point of sale) interface.
A receipt received via an IDoc creates material documents, invoice documents, accounting documents, controlling documents, profit center documents and profitability analysis documents.
Each day we receive all receipts from each store, collected in one IDoc per store.
With profitability analysis deactivated, an average store's sales for an average day are posted in about 40 seconds. When we post with profitability analysis activated, the average time per store is almost 75 seconds.
How can simple postings to profitability analysis increase the posting time by almost 50%? Is this a performance/tuning issue?
Best regards
Carl-Johan
Points will be assigned generously for info that leads to better performance!
Which CO document does the system create: a CCA document? On which cost centre? A PCA document?
What is the CE category of the CE used for posting the variance?
-
A SQL tuning issue - SQL runs much slower in test than in production?
Hi Buddies,
I am working on a SQL tuning issue: a SQL statement runs much slower in test than in production.
I compared the two explain plans in test and production;
it seems that in test, the CBO refuses to use index SUBLEDGER_ENTRY_I2.
We rebuilt it and re-gathered the index statistics, ran it again, still slow.
I compared init.ora parameters like hash_area_size and sort_area_size in test; they are the same as in production.
I wonder if any expert friend can shed some light on this.
in production,
SQL> set autotrace traceonly
SQL> SELECT rpt_horizon_subledger_entry_vw.onst_offst_cd,
            rpt_horizon_subledger_entry_vw.bkng_prd,
            rpt_horizon_subledger_entry_vw.systm_afflt_cd,
            rpt_horizon_subledger_entry_vw.jrnl_id,
            rpt_horizon_subledger_entry_vw.ntrl_accnt_cd,
            rpt_horizon_subledger_entry_vw.gnrl_ldgr_chrt_of_accnt_nm,
            rpt_horizon_subledger_entry_vw.lgl_entty_brnch_cd,
            rpt_horizon_subledger_entry_vw.crprt_melob_cd AS corp_mlb_cd,
            rpt_horizon_subledger_entry_vw.onst_offst_cd, SUM (amt) AS amount
       FROM rpt_horizon_subledger_entry_vw
      WHERE rpt_horizon_subledger_entry_vw.bkng_prd = '092008'
        AND rpt_horizon_subledger_entry_vw.jrnl_id = 'RCS0002100'
        AND rpt_horizon_subledger_entry_vw.systm_afflt_cd = 'SAFF01'
      GROUP BY rpt_horizon_subledger_entry_vw.onst_offst_cd,
            rpt_horizon_subledger_entry_vw.bkng_prd,
            rpt_horizon_subledger_entry_vw.systm_afflt_cd,
            rpt_horizon_subledger_entry_vw.jrnl_id,
            rpt_horizon_subledger_entry_vw.ntrl_accnt_cd,
            rpt_horizon_subledger_entry_vw.gnrl_ldgr_chrt_of_accnt_nm,
            rpt_horizon_subledger_entry_vw.lgl_entty_brnch_cd,
            rpt_horizon_subledger_entry_vw.crprt_melob_cd,
            rpt_horizon_subledger_entry_vw.onst_offst_cd;
491 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=130605 Card=218764 Bytes=16407300)
1 0 SORT (GROUP BY) (Cost=130605 Card=218764 Bytes=16407300)
2 1 VIEW OF 'RPT_HORIZON_SUBLEDGER_ENTRY_VW' (Cost=129217 Card=218764 Bytes=16407300)
3 2 SORT (UNIQUE) (Cost=129217 Card=218764 Bytes=35877296)
4 3 UNION-ALL
5 4 HASH JOIN (Cost=61901 Card=109382 Bytes=17719884)
6 5 TABLE ACCESS (FULL) OF 'GNRL_LDGR_CHRT_OF_ACCNT' (Cost=2 Card=111 Bytes=3774)
7 5 HASH JOIN (Cost=61897 Card=109382 Bytes=14000896)
8 7 TABLE ACCESS (FULL) OF 'SUBLEDGER_CHART_OF_ACCOUNT' (Cost=2 Card=57 Bytes=1881)
9 7 HASH JOIN (Cost=61893 Card=109382 Bytes=10391290)
10 9 TABLE ACCESS (FULL) OF 'HORIZON_LINE' (Cost=34 Card=4282 Bytes=132742)
11 9 HASH JOIN (Cost=61833 Card=109390 Bytes=7000960)
12 11 TABLE ACCESS (BY INDEX ROWID) OF 'SUBLEDGER_ENTRY' (Cost=42958 Card=82076 Bytes=3611344)
13 12 INDEX (RANGE SCAN) OF 'SUBLEDGER_ENTRY_I2' (NON-UNIQUE) (Cost=1069 Card=328303)
14 11 TABLE ACCESS (FULL) OF 'HORIZON_SUBLEDGER_LINK' (Cost=14314 Card=9235474 Bytes=184709480)
15 4 HASH JOIN (Cost=61907 Card=109382 Bytes=18157412)
16 15 TABLE ACCESS (FULL) OF 'GNRL_LDGR_CHRT_OF_ACCNT' (Cost=2 Card=111 Bytes=3774)
17 15 HASH JOIN (Cost=61903 Card=109382 Bytes=14438424)
18 17 TABLE ACCESS (FULL) OF 'SUBLEDGER_CHART_OF_ACCOUNT' (Cost=2 Card=57 Bytes=1881)
19 17 HASH JOIN (Cost=61899 Card=109382 Bytes=10828818)
20 19 TABLE ACCESS (FULL) OF 'HORIZON_LINE' (Cost=34 Card=4282 Bytes=132742)
21 19 HASH JOIN (Cost=61838 Card=109390 Bytes=7438520)
22 21 TABLE ACCESS (BY INDEX ROWID) OF 'SUBLEDGER_ENTRY' (Cost=42958 Card=82076 Bytes=3939648)
23 22 INDEX (RANGE SCAN) OF 'SUBLEDGER_ENTRY_I2' (NON-UNIQUE) (Cost=1069 Card=328303)
24 21 TABLE ACCESS (FULL) OF 'HORIZON_SUBLEDGER_LINK' (Cost=14314 Card=9235474 Bytes=184709480)
Statistics
25 recursive calls
18 db block gets
343266 consistent gets
370353 physical reads
0 redo size
15051 bytes sent via SQL*Net to client
1007 bytes received via SQL*Net from client
34 SQL*Net roundtrips to/from client
1 sorts (memory)
1 sorts (disk)
491 rows processed
in test
SQL> set autotrace traceonly
SQL> SELECT rpt_horizon_subledger_entry_vw.onst_offst_cd,
            rpt_horizon_subledger_entry_vw.bkng_prd,
            rpt_horizon_subledger_entry_vw.systm_afflt_cd,
            rpt_horizon_subledger_entry_vw.jrnl_id,
            rpt_horizon_subledger_entry_vw.ntrl_accnt_cd,
            rpt_horizon_subledger_entry_vw.gnrl_ldgr_chrt_of_accnt_nm,
            rpt_horizon_subledger_entry_vw.lgl_entty_brnch_cd,
            rpt_horizon_subledger_entry_vw.crprt_melob_cd AS corp_mlb_cd,
            rpt_horizon_subledger_entry_vw.onst_offst_cd, SUM (amt) AS amount
       FROM rpt_horizon_subledger_entry_vw
      WHERE rpt_horizon_subledger_entry_vw.bkng_prd = '092008'
        AND rpt_horizon_subledger_entry_vw.jrnl_id = 'RCS0002100'
        AND rpt_horizon_subledger_entry_vw.systm_afflt_cd = 'SAFF01'
      GROUP BY rpt_horizon_subledger_entry_vw.onst_offst_cd,
            rpt_horizon_subledger_entry_vw.bkng_prd,
            rpt_horizon_subledger_entry_vw.systm_afflt_cd,
            rpt_horizon_subledger_entry_vw.jrnl_id,
            rpt_horizon_subledger_entry_vw.ntrl_accnt_cd,
            rpt_horizon_subledger_entry_vw.gnrl_ldgr_chrt_of_accnt_nm,
            rpt_horizon_subledger_entry_vw.lgl_entty_brnch_cd,
            rpt_horizon_subledger_entry_vw.crprt_melob_cd,
            rpt_horizon_subledger_entry_vw.onst_offst_cd;
no rows selected
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=92944 Card=708 Bytes=53100)
1 0 SORT (GROUP BY) (Cost=92944 Card=708 Bytes=53100)
2 1 VIEW OF 'RPT_HORIZON_SUBLEDGER_ENTRY_VW' (Cost=92937 Card=708 Bytes=53100)
3 2 SORT (UNIQUE) (Cost=92937 Card=708 Bytes=124962)
4 3 UNION-ALL
5 4 HASH JOIN (Cost=46456 Card=354 Bytes=60180)
6 5 TABLE ACCESS (FULL) OF 'SUBLEDGER_CHART_OF_ACCOUNT' (Cost=2 Card=57 Bytes=1881)
7 5 NESTED LOOPS (Cost=46453 Card=354 Bytes=48498)
8 7 HASH JOIN (Cost=11065 Card=17694 Bytes=1362438)
9 8 HASH JOIN (Cost=27 Card=87 Bytes=5133)
10 9 TABLE ACCESS (FULL) OF 'HORIZON_LINE' (Cost=24 Card=87 Bytes=2175)
11 9 TABLE ACCESS (FULL) OF 'GNRL_LDGR_CHRT_OF_ACCNT' (Cost=2 Card=111 Bytes=3774)
12 8 TABLE ACCESS (FULL) OF 'HORIZON_SUBLEDGER_LINK' (Cost=11037 Card=142561 Bytes=2566098)
13 7 TABLE ACCESS (BY INDEX ROWID) OF 'SUBLEDGER_ENTRY' (Cost=2 Card=1 Bytes=60)
14 13 INDEX (UNIQUE SCAN) OF 'SUBLEDGER_ENTRY_PK' (UNIQUE) (Cost=1 Card=1)
15 4 HASH JOIN (Cost=46456 Card=354 Bytes=64782)
16 15 TABLE ACCESS (FULL) OF 'SUBLEDGER_CHART_OF_ACCOUNT' (Cost=2 Card=57 Bytes=1881)
17 15 NESTED LOOPS (Cost=46453 Card=354 Bytes=53100)
18 17 HASH JOIN (Cost=11065 Card=17694 Bytes=1362438)
19 18 HASH JOIN (Cost=27 Card=87 Bytes=5133)
20 19 TABLE ACCESS (FULL) OF 'HORIZON_LINE' (Cost=24 Card=87 Bytes=2175)
21 19 TABLE ACCESS (FULL) OF 'GNRL_LDGR_CHRT_OF_ACCNT' (Cost=2 Card=111 Bytes=3774)
22 18 TABLE ACCESS (FULL) OF 'HORIZON_SUBLEDGER_LINK' (Cost=11037 Card=142561 Bytes=2566098)
23 17 TABLE ACCESS (BY INDEX ROWID) OF 'SUBLEDGER_ENTRY' (Cost=2 Card=1 Bytes=73)
24 23 INDEX (UNIQUE SCAN) OF 'SUBLEDGER_ENTRY_PK' (UNIQUE) (Cost=1 Card=1)
Statistics
1134 recursive calls
0 db block gets
38903505 consistent gets
598254 physical reads
60 redo size
901 bytes sent via SQL*Net to client
461 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
34 sorts (memory)
0 sorts (disk)
0 rows processed
Thanks a lot in advance
Jerry
Hi
Basically there are two kinds of tables
- fact
- lookup
The number of records in a lookup table is usually small.
The number of records in a fact table is usually huge.
However, in test systems the number of records in a fact table is often also small.
This results in different execution plans.
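The row counts behind those different plans can be checked in the data dictionary; a minimal sketch (the table names follow the posted plan, and whether the statistics are stale is an assumption to verify):

```sql
-- Compare what the optimizer believes about each table
SELECT table_name, num_rows, blocks, last_analyzed
  FROM user_tables
 WHERE table_name IN ('SUBLEDGER_ENTRY', 'HORIZON_SUBLEDGER_LINK');

-- Re-gather statistics if they are stale or missing
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'SUBLEDGER_ENTRY',
    cascade => TRUE);  -- also gathers statistics on the table's indexes
END;
/
```

If the test tables genuinely hold far fewer rows, the plans will legitimately differ; copying production statistics to test with DBMS_STATS.EXPORT_TABLE_STATS / IMPORT_TABLE_STATS is one way to reproduce the production plan there.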
I notice again that you don't post version and platform info, and you didn't make sure your explain plan is properly indented.
Please read the FAQ to learn how to keep it properly indented.
Also, using the word 'buddies' is, as far as I am concerned, nearing disrespect and rudeness.
Sybrand Bakker
Senior Oracle DBA -
Hi, I would like some help tuning the following query:
SELECT distinct
cs.study,
de.STUDY_SITE Site,
iv.last_name INV,
de.PATIENT ,
pv.visit_number Visit,
-- REPLACE(asm.assessor_name, ',', '|') assessor_name,
de.DISCREPANCY_REV_STATUS_CODE Status
FROM
RXC.discrepancy_management de, -- a view driven by table rxc.discrepancy_entries
RXC.procedures pd,
clinical_studies cs,
ocl_investigators iv,
rxa_des.clinical_planned_events pv,
dcis dc
-- s999$stable.asmasm asm -- a view, pls see comments below
WHERE
cs.study='999'
and de.PROCEDURE_ID=pd.PROCEDURE_ID
and de.clinical_study_id=cs.clinical_study_id
and iv.investigator_id=de.investigator_id
and de.clin_plan_eve_id=pv.clin_plan_eve_id
and pv.clin_study_id=cs.clinical_study_id
and cs.clinical_study_id=dc.clinical_study_id
and de.patient not like '99%'
and de.dci_id= dc.dci_id
and de.discrepancy_status_code not in ('OBSOLETE')
and de.discrepancy_rev_status_code in ('INV REVIEW','INV PENDING')
--and de.patient = asm.pt(+)
--and de.clin_plan_eve_name = asm.cpevent(+)
--and asm.assessor_name(+) is not null
and nvl(asm.subsetsn, 99) in (12,15,99) -- this might not work
The above query takes 16 seconds to run but when I add in the view "s999$stable.asmasm asm" execution time drops
to about 25 mins. Looking at the related explain plans the difference is an index lookup on RXC.DISCREPANCY_ENTRIES (the driving
table used in view RXC.discrepancy_management) is dropped and replaced by a full table scan resulting in a large increase in the cost.
I've tried using the following hints without any success...
/*+ NO_MERGE(s9999$stable.asmasm) INDEX(DISCREPANCY_MANAGEMENT.DISCREPANCY_ENTRIES DISCREPANCY_ENT_CS_IDX) */
Any help on the cause and fix for this issue would be much appreciated.
When your query takes too long ...
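As a side note on the failed hints above: an INDEX hint cannot reference a schema-qualified view name. To reach a table inside a view, Oracle's global hint syntax uses the view alias and the table name separated by a dot; a sketch of the syntax only (the alias de and the index name are taken from the post):

```sql
SELECT /*+ INDEX(de.discrepancy_entries DISCREPANCY_ENT_CS_IDX) */
       cs.study, de.patient  -- remaining select list as in the original query
  FROM RXC.discrepancy_management de,
       clinical_studies cs
 WHERE de.clinical_study_id = cs.clinical_study_id;
```

Whether the hint actually helps still depends on the optimizer's costing; it only removes the syntax problem.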
HOW TO: Post a SQL statement tuning request - template posting -
Tuning Issue between 2 queries...
Hello ;
the first query is
select prog, at_pck.y(prog) z from (select distinct prog from x where stat = 'oracle');
execution time is like 0.1 sec
when i modify the query above like that;
select k.prog from (select distinct prog from x where stat = 'oracle') k where at_pck.y(k.prog)=1
execution time is longer than 2 sec
I thought the second query was supposed to be faster than the first, but I was wrong: the second query takes longer than the first.
Can someone explain to me why the second query is slower than the first one?
Or is there any way of tuning the second query?
Hi,
1) please format plans you post using {code} tags
2) when posting plans, please include predicate information
3) if possible, also provide timings and cardinality feedback -- see forum's FAQ how to do that
Having said that, I think that one of two things may be happening here.
1 - both queries are performing equally; it's just that your SQL client is not fetching all rows, thus creating the impression that the 1st query works faster
2 - the first query could, in fact, be faster, if at_pck.y is not a deterministic function. Usage of non-deterministic functions
in the WHERE clause can have undesired side effects, like calling the function more times than necessary.
So I would suggest the following:
- change array_size to 5000 and see if it makes a difference for performance of the 1st query
- check how many times at_pck.y() is called in the 2nd query (e.g. by instrumenting the function's code to log messages to a special log table and commit autonomously)
- post all requested information from above (1-3) plus the code of at_pck here
Best regards,
Nikolay
Edited by: Nikolay Savvinov on Jun 5, 2012 12:25 PM -
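If at_pck.y really does depend only on its input, declaring it DETERMINISTIC lets Oracle cache and reuse results instead of re-calling it per row. A minimal sketch (the function body here is invented for illustration; the real logic of at_pck is not shown in the post):

```sql
CREATE OR REPLACE PACKAGE at_pck AS
  FUNCTION y (p_prog IN VARCHAR2) RETURN NUMBER DETERMINISTIC;
END at_pck;
/

CREATE OR REPLACE PACKAGE BODY at_pck AS
  FUNCTION y (p_prog IN VARCHAR2) RETURN NUMBER DETERMINISTIC IS
  BEGIN
    -- hypothetical body: same input always yields the same output
    RETURN CASE WHEN p_prog LIKE 'A%' THEN 1 ELSE 0 END;
  END y;
END at_pck;
/
```

Only do this if the function truly is deterministic; marking a non-deterministic function DETERMINISTIC can silently return wrong results.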
Hi,
In a data warehousing env, version 11.1.0.7.0 on Red Hat Linux.
We run a query like the one below:
select sum(case when T495255.PAYMENT_TYPE_ID = 3 then T495255.PAYMENT_ADJUSTMENT_AMOUNT end ) as c1,
max(T494961.DATE_VALUE) as c2,
sum(T495343.CURRENT_BALANCE_AMOUNT) as c3,
sum(T495343.FINAL_BILL_AMOUNT) as c4,
T495414.RECORDSTATUSIND_DESC as c5,
T495272.DATE_DESC as c6,
T495414.BTN as c7,
T495414.FINAL_RECEIVED_AMOUNT as c8,
T495414.ACCOUNT_NUMBER as c9,
T495414.DISCONNECT_REASON_DESC as c10,
T495414.LIQUIDITY_SCORE as c11,
T495414.STATE as c12,
T495414.REGION_DESCRIPTION as c13,
T495323.LIQUIDITY_SCORE_DESCRIPTION as c14,
T495360.BATCH_STATUS as c15
from
D_DATE T495294 /* D_DATE_CREATED */ ,
F_BATCH T495360 /* F_BATCH_Lscore */ ,
D_LIQUIDITY_SCORE_REASON T495323 /* D_LIQUIDITY_SCORE_REASON_DETAIL */ ,
F_LIQUIDITY_SCORE T495343 /* F_LIQUIDITY_SCORE_DETAIL */ left outer join (
F_PAYMENT_ADJUSTMENT_FINALS T495255 /* F_PAYMENT_ADJUSTMENT_FINALS_LSCORE */ left outer join D_DATE T494961 /* D_DATE_PAYMENT_ADJUSTMENT */ On T494961.DATE_ID = T495255.PAYMENT_ADJUSTMENT_DATE) On T495255.ACCOUNT_ID = T495343.ACCOUNT_ID,
D_DATE T495272 /* D_DATE_BILL_DATE */ ,
T_ACCOUNT_FINALS_DTL T495414 /* T_ACCOUNT_FINALS_DTL_LSCORE */
where ( T495294.DATE_ID = T495360.START_DATE and T495272.DATE_ID = T495414.FINAL_BILL_DATE and T495323.LIQUIDITY_SCORE_ID = T495343.LIQUIDITY_SCORE_REASON_ID and T495343.ACCOUNT_ID = T495414.ACCOUNT_ID and T495294.YEAR_DESC = 'Year 2010' and T495343.LIQUIDITY_SCORE_IMP_BATCH_ID = T495360.SOURCE_BATCH_ID and T495414.REGION_DESCRIPTION = '8-FinalsRM' )
group by T495272.DATE_DESC, T495323.LIQUIDITY_SCORE_DESCRIPTION, T495360.BATCH_STATUS, T495414.ACCOUNT_NUMBER, T495414.BTN, T495414.DISCONNECT_REASON_DESC, T495414.FINAL_RECEIVED_AMOUNT, T495414.LIQUIDITY_SCORE, T495414.RECORDSTATUSIND_DESC, T495414.REGION_DESCRIPTION, T495414.STATE;
explain is like below,
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 26M| 5784M| | 45903 (1)| 00:09:11 | | | |
| 1 | PX COORDINATOR | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10010 | 26M| 5784M| | 45903 (1)| 00:09:11 | Q1,10 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 26M| 5784M| 6243M| 45903 (1)| 00:09:11 | Q1,10 | PCWP | |
| 4 | PX RECEIVE | | 26M| 5784M| | 45903 (1)| 00:09:11 | Q1,10 | PCWP | |
| 5 | PX SEND HASH | :TQ10009 | 26M| 5784M| | 45903 (1)| 00:09:11 | Q1,09 | P->P | HASH |
| 6 | HASH GROUP BY | | 26M| 5784M| 6243M| 45903 (1)| 00:09:11 | Q1,09 | PCWP | |
|* 7 | HASH JOIN OUTER | | 26M| 5784M| | 25403 (1)| 00:05:05 | Q1,09 | PCWP | |
| 8 | PX RECEIVE | | 3648K| 657M| | 20227 (1)| 00:04:03 | Q1,09 | PCWP | |
| 9 | PX SEND HASH | :TQ10007 | 3648K| 657M| | 20227 (1)| 00:04:03 | Q1,07 | P->P | HASH |
|* 10 | HASH JOIN BUFFERED | | 3648K| 657M| | 20227 (1)| 00:04:03 | Q1,07 | PCWP | |
| 11 | PX RECEIVE | | 34707 | 779K| | 2 (0)| 00:00:01 | Q1,07 | PCWP | |
| 12 | PX SEND BROADCAST | :TQ10003 | 34707 | 779K| | 2 (0)| 00:00:01 | Q1,03 | P->P | BROADCAST |
| 13 | PX BLOCK ITERATOR | | 34707 | 779K| | 2 (0)| 00:00:01 | Q1,03 | PCWC | |
| 14 | TABLE ACCESS FULL | D_DATE | 34707 | 779K| | 2 (0)| 00:00:01 | Q1,03 | PCWP | |
|* 15 | HASH JOIN | | 3648K| 577M| | 20224 (1)| 00:04:03 | Q1,07 | PCWP | |
| 16 | BUFFER SORT | | | | | | | Q1,07 | PCWC | |
| 17 | PX RECEIVE | | 10 | 200 | | 3 (0)| 00:00:01 | Q1,07 | PCWP | |
| 18 | PX SEND BROADCAST | :TQ10001 | 10 | 200 | | 3 (0)| 00:00:01 | | S->P | BROADCAST |
| 19 | TABLE ACCESS FULL | D_LIQUIDITY_SCORE_REASON | 10 | 200 | | 3 (0)| 00:00:01 | | | |
|* 20 | HASH JOIN | | 3648K| 507M| | 20220 (1)| 00:04:03 | Q1,07 | PCWP | |
| 21 | PX RECEIVE | | 3649K| 180M| | 337 (3)| 00:00:05 | Q1,07 | PCWP | |
| 22 | PX SEND HASH | :TQ10004 | 3649K| 180M| | 337 (3)| 00:00:05 | Q1,04 | P->P | HASH |
|* 23 | HASH JOIN | | 3649K| 180M| | 337 (3)| 00:00:05 | Q1,04 | PCWP | |
| 24 | PX RECEIVE | | 360 | 5760 | | 2 (0)| 00:00:01 | Q1,04 | PCWP | |
| 25 | PX SEND BROADCAST | :TQ10002 | 360 | 5760 | | 2 (0)| 00:00:01 | Q1,02 | P->P | BROADCAST |
| 26 | PX BLOCK ITERATOR | | 360 | 5760 | | 2 (0)| 00:00:01 | Q1,02 | PCWC | |
|* 27 | TABLE ACCESS FULL | D_DATE | 360 | 5760 | | 2 (0)| 00:00:01 | Q1,02 | PCWP | |
|* 28 | HASH JOIN | | 9431K| 323M| | 334 (2)| 00:00:05 | Q1,04 | PCWP | |
| 29 | BUFFER SORT | | | | | | | Q1,04 | PCWC | |
| 30 | PX RECEIVE | | 1975 | 25675 | | 8 (0)| 00:00:01 | Q1,04 | PCWP | |
| 31 | PX SEND BROADCAST | :TQ10000 | 1975 | 25675 | | 8 (0)| 00:00:01 | | S->P | BROADCAST |
| 32 | TABLE ACCESS FULL| F_BATCH | 1975 | 25675 | | 8 (0)| 00:00:01 | | | |
| 33 | PX BLOCK ITERATOR | | 9431K| 206M| | 324 (2)| 00:00:04 | Q1,04 | PCWC | |
| 34 | TABLE ACCESS FULL | F_LIQUIDITY_SCORE | 9431K| 206M| | 324 (2)| 00:00:04 | Q1,04 | PCWP | |
| 35 | PX RECEIVE | | 15M| 1346M| | 19880 (1)| 00:03:59 | Q1,07 | PCWP | |
| 36 | PX SEND HASH | :TQ10005 | 15M| 1346M| | 19880 (1)| 00:03:59 | Q1,05 | P->P | HASH |
| 37 | PX BLOCK ITERATOR | | 15M| 1346M| | 19880 (1)| 00:03:59 | Q1,05 | PCWC | |
|* 38 | TABLE ACCESS FULL | T_ACCOUNT_FINALS_DTL | 15M| 1346M| | 19880 (1)| 00:03:59 | Q1,05 | PCWP | |
| 39 | PX RECEIVE | | 113M| 4448M| | 5165 (1)| 00:01:02 | Q1,09 | PCWP | |
| 40 | PX SEND HASH | :TQ10008 | 113M| 4448M| | 5165 (1)| 00:01:02 | Q1,08 | P->P | HASH |
| 41 | VIEW | | 113M| 4448M| | 5165 (1)| 00:01:02 | Q1,08 | PCWP | |
|* 42 | HASH JOIN RIGHT OUTER | | 113M| 3472M| | 5165 (1)| 00:01:02 | Q1,08 | PCWP | |
| 43 | PX RECEIVE | | 34707 | 474K| | 2 (0)| 00:00:01 | Q1,08 | PCWP | |
| 44 | PX SEND BROADCAST | :TQ10006 | 34707 | 474K| | 2 (0)| 00:00:01 | Q1,06 | P->P | BROADCAST |
| 45 | PX BLOCK ITERATOR | | 34707 | 474K| | 2 (0)| 00:00:01 | Q1,06 | PCWC | |
| 46 | TABLE ACCESS FULL | D_DATE | 34707 | 474K| | 2 (0)| 00:00:01 | Q1,06 | PCWP | |
| 47 | PX BLOCK ITERATOR | | 113M| 1953M| | 5151 (1)| 00:01:02 | Q1,08 | PCWC | |
| 48 | TABLE ACCESS FULL | F_PAYMENT_ADJUSTMENT_FINALS | 113M| 1953M| | 5151 (1)| 00:01:02 | Q1,08 | PCWP | |
Predicate Information (identified by operation id):
7 - access("T495255"."ACCOUNT_ID"(+)="T495343"."ACCOUNT_ID")
10 - access("T495272"."DATE_ID"="T495414"."FINAL_BILL_DATE")
15 - access("T495323"."LIQUIDITY_SCORE_ID"="T495343"."LIQUIDITY_SCORE_REASON_ID")
20 - access("T495343"."ACCOUNT_ID"="T495414"."ACCOUNT_ID")
23 - access("T495294"."DATE_ID"="T495360"."START_DATE")
27 - filter("T495294"."YEAR_DESC"='Year 2010')
28 - access("T495343"."LIQUIDITY_SCORE_IMP_BATCH_ID"="T495360"."SOURCE_BATCH_ID")
38 - filter("T495414"."REGION_DESCRIPTION"='8-FinalsRM')
42 - access("T494961"."DATE_ID"(+)="T495255"."PAYMENT_ADJUSTMENT_DATE")
Put hint into query, like
select /*+ star_transformation CACHE(T495294) PARALLEL(T495414 32,2) PARALLEL(T495255 32,2) PARALLEL(T495343 32,2) PARALLEL(T495272 32,2) PARALLEL(T495360 32,2) */ sum(case when T495255.PAYMENT_TYPE_ID = 3 then T495255.PAYMENT_ADJUSTMENT_AMOUNT end ) as c1,
max(T494961.DATE_VALUE) as c2,
sum(T495343.CURRENT_BALANCE_AMOUNT) as c3,
sum(T495343.FINAL_BILL_AMOUNT) as c4,
T495414.RECORDSTATUSIND_DESC as c5,
T495272.DATE_DESC as c6,
T495414.BTN as c7,
T495414.FINAL_RECEIVED_AMOUNT as c8,
T495414.ACCOUNT_NUMBER as c9,
T495414.DISCONNECT_REASON_DESC as c10,
T495414.LIQUIDITY_SCORE as c11,
T495414.STATE as c12,
T495414.REGION_DESCRIPTION as c13,
T495323.LIQUIDITY_SCORE_DESCRIPTION as c14,
T495360.BATCH_STATUS as c15
from
D_DATE T495294 /* D_DATE_CREATED */ ,
F_BATCH T495360 /* F_BATCH_Lscore */ ,
D_LIQUIDITY_SCORE_REASON T495323 /* D_LIQUIDITY_SCORE_REASON_DETAIL */ ,
F_LIQUIDITY_SCORE T495343 /* F_LIQUIDITY_SCORE_DETAIL */ left outer join (
F_PAYMENT_ADJUSTMENT_FINALS T495255 /* F_PAYMENT_ADJUSTMENT_FINALS_LSCORE */ left outer join D_DATE T494961 /* D_DATE_PAYMENT_ADJUSTMENT */ On T494961.DATE_ID = T495255.PAYMENT_ADJUSTMENT_DATE) On T495255.ACCOUNT_ID = T495343.ACCOUNT_ID,
D_DATE T495272 /* D_DATE_BILL_DATE */ ,
T_ACCOUNT_FINALS_DTL T495414 /* T_ACCOUNT_FINALS_DTL_LSCORE */
where ( T495294.DATE_ID = T495360.START_DATE and T495272.DATE_ID = T495414.FINAL_BILL_DATE and T495323.LIQUIDITY_SCORE_ID = T495343.LIQUIDITY_SCORE_REASON_ID and T495343.ACCOUNT_ID = T495414.ACCOUNT_ID and T495294.YEAR_DESC = 'Year 2010' and T495343.LIQUIDITY_SCORE_IMP_BATCH_ID = T495360.SOURCE_BATCH_ID and T495414.REGION_DESCRIPTION = '8-FinalsRM' )
group by T495272.DATE_DESC, T495323.LIQUIDITY_SCORE_DESCRIPTION, T495360.BATCH_STATUS, T495414.ACCOUNT_NUMBER, T495414.BTN, T495414.DISCONNECT_REASON_DESC, T495414.FINAL_RECEIVED_AMOUNT, T495414.LIQUIDITY_SCORE, T495414.RECORDSTATUSIND_DESC, T495414.REGION_DESCRIPTION, T495414.STATE;
explain plan is
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 69M| 14G| | 76473 (1)| 00:15:18 | | | |
| 1 | PX COORDINATOR | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10010 | 69M| 14G| | 76473 (1)| 00:15:18 | Q1,10 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 69M| 14G| 15G| 76473 (1)| 00:15:18 | Q1,10 | PCWP | |
| 4 | PX RECEIVE | | 69M| 14G| | 25397 (1)| 00:05:05 | Q1,10 | PCWP | |
| 5 | PX SEND HASH | :TQ10009 | 69M| 14G| | 25397 (1)| 00:05:05 | Q1,09 | P->P | HASH |
|* 6 | HASH JOIN OUTER BUFFERED | | 69M| 14G| | 25397 (1)| 00:05:05 | Q1,09 | PCWP | |
| 7 | PX RECEIVE | | 3648K| 657M| | 20221 (1)| 00:04:03 | Q1,09 | PCWP | |
| 8 | PX SEND HASH | :TQ10007 | 3648K| 657M| | 20221 (1)| 00:04:03 | Q1,07 | P->P | HASH |
|* 9 | HASH JOIN BUFFERED | | 3648K| 657M| | 20221 (1)| 00:04:03 | Q1,07 | PCWP | |
| 10 | PX RECEIVE | | 34707 | 779K| | 2 (0)| 00:00:01 | Q1,07 | PCWP | |
| 11 | PX SEND BROADCAST | :TQ10003 | 34707 | 779K| | 2 (0)| 00:00:01 | Q1,03 | P->P | BROADCAST |
| 12 | PX BLOCK ITERATOR | | 34707 | 779K| | 2 (0)| 00:00:01 | Q1,03 | PCWC | |
| 13 | TABLE ACCESS FULL | D_DATE | 34707 | 779K| | 2 (0)| 00:00:01 | Q1,03 | PCWP | |
|* 14 | HASH JOIN | | 3648K| 577M| | 20218 (1)| 00:04:03 | Q1,07 | PCWP | |
| 15 | BUFFER SORT | | | | | | | Q1,07 | PCWC | |
| 16 | PX RECEIVE | | 10 | 200 | | 3 (0)| 00:00:01 | Q1,07 | PCWP | |
| 17 | PX SEND BROADCAST | :TQ10000 | 10 | 200 | | 3 (0)| 00:00:01 | | S->P | BROADCAST |
| 18 | TABLE ACCESS FULL | D_LIQUIDITY_SCORE_REASON | 10 | 200 | | 3 (0)| 00:00:01 | | | |
|* 19 | HASH JOIN | | 3648K| 507M| | 20214 (1)| 00:04:03 | Q1,07 | PCWP | |
| 20 | PX RECEIVE | | 3649K| 180M| | 331 (3)| 00:00:04 | Q1,07 | PCWP | |
| 21 | PX SEND HASH | :TQ10004 | 3649K| 180M| | 331 (3)| 00:00:04 | Q1,04 | P->P | HASH |
|* 22 | HASH JOIN | | 3649K| 180M| | 331 (3)| 00:00:04 | Q1,04 | PCWP | |
| 23 | PX RECEIVE | | 360 | 5760 | | 2 (0)| 00:00:01 | Q1,04 | PCWP | |
| 24 | PX SEND BROADCAST | :TQ10001 | 360 | 5760 | | 2 (0)| 00:00:01 | Q1,01 | P->P | BROADCAST |
| 25 | PX BLOCK ITERATOR | | 360 | 5760 | | 2 (0)| 00:00:01 | Q1,01 | PCWC | |
|* 26 | TABLE ACCESS FULL | D_DATE | 360 | 5760 | | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
|* 27 | HASH JOIN | | 9431K| 323M| | 328 (2)| 00:00:04 | Q1,04 | PCWP | |
| 28 | PX RECEIVE | | 1975 | 25675 | | 2 (0)| 00:00:01 | Q1,04 | PCWP | |
| 29 | PX SEND BROADCAST | :TQ10002 | 1975 | 25675 | | 2 (0)| 00:00:01 | Q1,02 | P->P | BROADCAST |
| 30 | PX BLOCK ITERATOR | | 1975 | 25675 | | 2 (0)| 00:00:01 | Q1,02 | PCWC | |
| 31 | TABLE ACCESS FULL| F_BATCH | 1975 | 25675 | | 2 (0)| 00:00:01 | Q1,02 | PCWP | |
| 32 | PX BLOCK ITERATOR | | 9431K| 206M| | 324 (2)| 00:00:04 | Q1,04 | PCWC | |
| 33 | TABLE ACCESS FULL | F_LIQUIDITY_SCORE | 9431K| 206M| | 324 (2)| 00:00:04 | Q1,04 | PCWP | |
| 34 | PX RECEIVE | | 15M| 1346M| | 19880 (1)| 00:03:59 | Q1,07 | PCWP | |
| 35 | PX SEND HASH | :TQ10005 | 15M| 1346M| | 19880 (1)| 00:03:59 | Q1,05 | P->P | HASH |
| 36 | PX BLOCK ITERATOR | | 15M| 1346M| | 19880 (1)| 00:03:59 | Q1,05 | PCWC | |
|* 37 | TABLE ACCESS FULL | T_ACCOUNT_FINALS_DTL | 15M| 1346M| | 19880 (1)| 00:03:59 | Q1,05 | PCWP | |
| 38 | PX RECEIVE | | 113M| 3038M| | 5165 (1)| 00:01:02 | Q1,09 | PCWP | |
| 39 | PX SEND HASH | :TQ10008 | 113M| 3038M| | 5165 (1)| 00:01:02 | Q1,08 | P->P | HASH |
| 40 | VIEW | | 113M| 3038M| | 5165 (1)| 00:01:02 | Q1,08 | PCWP | |
|* 41 | HASH JOIN OUTER | | 113M| 3472M| | 5182 (2)| 00:01:03 | Q1,08 | PCWP | |
| 42 | PX BLOCK ITERATOR | | 113M| 1953M| | 5151 (1)| 00:01:02 | Q1,08 | PCWC | |
| 43 | TABLE ACCESS FULL | F_PAYMENT_ADJUSTMENT_FINALS | 113M| 1953M| | 5151 (1)| 00:01:02 | Q1,08 | PCWP | |
| 44 | BUFFER SORT | | | | | | | Q1,08 | PCWC | |
| 45 | PX RECEIVE | | 34707 | 474K| | 2 (0)| 00:00:01 | Q1,08 | PCWP | |
| 46 | PX SEND BROADCAST | :TQ10006 | 34707 | 474K| | 2 (0)| 00:00:01 | Q1,06 | P->P | BROADCAST |
| 47 | PX BLOCK ITERATOR | | 34707 | 474K| | 2 (0)| 00:00:01 | Q1,06 | PCWC | |
| 48 | TABLE ACCESS FULL | D_DATE | 34707 | 474K| | 2 (0)| 00:00:01 | Q1,06 | PCWP | |
It returns 19000 rows with an elapsed time of 2 minutes (before tuning, it was 3 minutes and more).
(T_ACCOUNT_FINALS_DTL (51G) and F_PAYMENT_ADJUSTMENT_FINALS (13G) are the two huge tables here.)
Is there a better way to tune it?
thanks a lot
Jerry
Edited by: jerrygreat on Jul 13, 2010 2:19 PM
Edited by: jerrygreat on Jul 13, 2010 2:57 PM
Can you please (after 414 posts you should know)
a) use the templates to post a tuning question
b) format your code
c) use the {code} tags to make sure indentation is preserved.
Other than that: why are you not using the templates and the {code} tag?
It is minimal effort on your part, and it would take others hours to format the junk you are posting every time!
Sybrand Bakker
Senior Oracle DBA -
Perf tuning issue with a query involving between clause
Hi all,
I am getting performance issues when I try to execute this query. The query is given below, and it is going for a full table scan. I think the problem is with the BETWEEN clause, but I don't know how to resolve this issue.
SELECT psm.member_id
FROM pre_stg_member psm
WHERE psm.map_tran_agn BETWEEN :start_transaction_agn and :end_transaction_agn
and psm.partition_key = :p_partition_key;
Having composite index on map_tran_agn and partition_key.
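A note on index column order, which often explains a full scan in this pattern: with the range predicate (BETWEEN) on the leading index column, Oracle can use less of the index than when the equality column leads. A sketch (the index name is invented; the table and columns are from the query above):

```sql
-- Equality column first, range column second
CREATE INDEX psm_part_tran_idx
    ON pre_stg_member (partition_key, map_tran_agn);

SELECT psm.member_id
  FROM pre_stg_member psm
 WHERE psm.partition_key = :p_partition_key
   AND psm.map_tran_agn BETWEEN :start_transaction_agn AND :end_transaction_agn;
```

Whether the optimizer actually picks the index still depends on the selectivity of the bind values and on the table's statistics.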
Please help me in this regard.
Thanks,
Swami
Please consider the following when you post a question.
1. New features keep coming in every oracle version so please provide Your Oracle DB Version to get the best possible answer.
You can use the following query and do a copy-paste of the output:
select * from v$version
2. This forum has a very good Search Feature. Please use that before posting your question, because for most of the questions
that are asked, the answer is already there.
3. We don't know your DB structure or how your data is, so you need to let us know. The best way would be to give some sample data like this.
I have the following table called sales
with sales as
( select 1 sales_id, 1 prod_id, 1001 inv_num, 120 qty from dual
  union all
  select 2 sales_id, 1 prod_id, 1002 inv_num, 25 qty from dual )
select *
  from sales
4. Rather than telling what you want in words, it is easier when you give your expected output.
For example in the above sales table, I want to know the total quantity and number of invoice for each product.
The output should look like this
Prod_id sum_qty count_inv
1       145     2
5. Whenever you get an error message, post the entire error message, with the error number, the message and the line number.
6. Next thing is a very important thing to remember. Please post only well formatted code. Unformatted code is very hard to read.
Your code format gets lost when you post it in the Oracle Forum. So in order to preserve it you need to
use the {noformat}{noformat} tags.
The usage of the tag is like this.
{noformat}<place your code here>{noformat}
7. If you are posting a *Performance Related Question*. Please read
{thread:id=501834} and {thread:id=863295}.
Following those guide will be very helpful.
8. Please keep in mind that this is a public forum. Here No question is URGENT.
So use of words like *URGENT* or *ASAP* (As Soon As Possible) is considered to be rude.
-
Hi folks,
I'm having a problem with performance tuning ... Below is a sample query
SELECT /*+ PARALLEL (K 4) */ DISTINCT ltrim(rtrim(ibc_item)), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE ltrim(rtrim(ibc_item)) NOT IN
(select /*+ PARALLEL (II 4) */ DISTINCT ltrim(rtrim(THIRD_MAINKEY)) FROM BBB II
WHERE SECOND_MAINKEY = 3
UNION
SELECT /*+ PARALLEL (III 4) */ DISTINCT ltrim(rtrim(BLN_BUSINESS_LINE_NAME)) FROM CCC III
WHERE BLN_BUSINESS_LINE = 3)
The above query has a cost of 460 million. I tried creating an index, but Oracle is not using it, as a FT scan looks better. (I too feel the FT scan is best, as 90% of the rows in the table are used.)
After using the parallel hint, the cost goes down to 100 million.
Is there any way to decrease the cost?
Thanks in advance for your help!
Be aware too, Nalla, that the PARALLEL hint will rule out the use of an index if Oracle adheres to it.
This is what I would try:
SELECT /*+ PARALLEL (K 4) */ DISTINCT TRIM(ibc_item), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE NOT EXISTS (
SELECT 1
FROM BBB II
WHERE SECOND_MAINKEY = 3
AND TRIM(THIRD_MAINKEY) = TRIM(K.ibc_item))
AND NOT EXISTS (
SELECT 1
FROM CCC III
WHERE BLN_BUSINESS_LINE = 3
AND TRIM(BLN_BUSINESS_LINE_NAME) = TRIM(K.ibc_item))
But I don't like this at all: TRIM(K.ibc_item), and you never need to use DISTINCT with NOT IN or NOT EXISTS.
Try this:
SELECT DISTINCT TRIM(ibc_item), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE NOT EXISTS (
SELECT 1
FROM BBB II
WHERE SECOND_MAINKEY = 3
AND TRIM(THIRD_MAINKEY) = K.ibc_item)
AND NOT EXISTS (
SELECT 1
FROM CCC III
WHERE BLN_BUSINESS_LINE = 3
AND TRIM(BLN_BUSINESS_LINE_NAME) = K.ibc_item)
This may not work though, since you may have whitespaces in K.ibc_item.
-
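If the TRIM() on the filtered columns is unavoidable, a function-based index lets the optimizer use an index despite the function call. A sketch (the index names are invented for the example):

```sql
-- Index the trimmed expression itself
CREATE INDEX bbb_third_mainkey_fbi ON BBB (TRIM(THIRD_MAINKEY));
CREATE INDEX ccc_bln_name_fbi      ON CCC (TRIM(BLN_BUSINESS_LINE_NAME));

-- Gather statistics so the optimizer can cost the new indexes
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'BBB', cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'CCC', cascade => TRUE);
END;
/
```

The predicate in the query must then match the indexed expression exactly (TRIM(THIRD_MAINKEY) = ...), and as noted above, a PARALLEL hint may still push the optimizer toward full scans.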
I am using Oracle 11g, Windows (client), SQL Developer 3.2.09.
I am facing an issue with high cost and buffer gets on SORT (ORDER BY STOPKEY).
How can cost and buffer gets be reduced while sorting on inline views?
Which is easier to read & understand?
How do I ask a question on the forums?
SQL and PL/SQL FAQ
scroll down to #9 to learn about tags!
[code]
SELECT *
FROM (SELECT *
FROM (SELECT *
FROM (SELECT locn_id,
locn_brcd,
area,
zone,
aisle,
bay,
posn,
lvl,
config_priority,
lvl_priority,
locn_pick_seq,
m_sop_config_rule_id,
config_from_locn,
config_to_locn,
config_prod_line,
locn_size,
misc_flag1,
misc_flag2,
config_vendor,
sort_priority,
sort_method,
style_check,
color_check,
size_check
FROM m_sop_vw_rt_test
WHERE config_prod_line = 'CSR'
AND locn_size = 'MIN'
AND ( ( (SELECT Count(1)
FROM m_sop_config_rule
WHERE m_sop_action_type_id = 1
AND is_active = 'Y'
AND vendor_id = '030') > 0
AND config_vendor = '030' )
OR ( (SELECT Count(1)
FROM m_sop_config_rule
WHERE m_sop_action_type_id = 1
AND is_active = 'Y'
AND vendor_id = '030') = 0
AND config_vendor = '*' ) )
AND Nvl(sort_method, 'A') = 'D'
ORDER BY lvl_priority,
sort_priority,
locn_pick_seq DESC)
UNION ALL
SELECT *
FROM (SELECT locn_id,
locn_brcd,
area,
zone,
aisle,
bay,
posn,
lvl,
config_priority,
lvl_priority,
locn_pick_seq,
m_sop_config_rule_id,
config_from_locn,
config_to_locn,
config_prod_line,
locn_size,
misc_flag1,
misc_flag2,
config_vendor,
sort_priority,
sort_method,
style_check,
color_check,
size_check
FROM m_sop_vw_rt_test
WHERE config_prod_line = 'CSR'
AND locn_size = 'MIN'
AND ( ( (SELECT Count(1)
FROM m_sop_config_rule
WHERE m_sop_action_type_id = 1
AND is_active = 'Y'
AND vendor_id = '030') > 0
AND config_vendor = '030' )
OR ( (SELECT Count(1)
FROM m_sop_config_rule
WHERE m_sop_action_type_id = 1
AND is_active = 'Y'
AND vendor_id = '030') = 0
AND config_vendor = '*' ) )
AND Nvl(sort_method, 'A') = 'A'
ORDER BY lvl_priority,
sort_priority,
locn_pick_seq)) elgbl_locn_dtls
WHERE NOT EXISTS (SELECT 1
FROM m_sop_surrounding_locns mssl
inner join pick_locn_dtl pld
ON mssl.locn_id = '002186336'
AND
pld.locn_id = mssl.surround_locn_id
inner join item_wms iw
ON iw.item_id = pld.item_id
WHERE ( elgbl_locn_dtls.style_check = 'Y'
OR elgbl_locn_dtls.color_check = 'Y'
OR elgbl_locn_dtls.size_check = 'Y' )
AND EXISTS (SELECT 1
FROM item_wms iw1
WHERE iw1.item_id = '110'
AND ( (
elgbl_locn_dtls.style_check = 'Y'
AND iw1.spl_instr_1 IS NOT NULL
AND
iw1.spl_instr_1 = iw.spl_instr_1 )
OR (
elgbl_locn_dtls.color_check = 'Y'
AND iw1.store_dept IS NOT NULL
AND iw1.store_dept = iw.store_dept
OR (
elgbl_locn_dtls.size_check = 'Y'
AND iw1.spl_instr_2 IS NOT NULL
AND
iw1.spl_instr_2 = iw.spl_instr_2 ) )
ORDER BY lvl_priority,
sort_priority)
WHERE ROWNUM <= 10;
[/code] -
Performance tuning issues -- please help
Hi Tuning gurus
this query works fine for a small number of rows
eg :--
where ROWNUM <= 10 )
where rnum >=1;
but takes a lot of time as we increase the rownum ...
eg :--
where ROWNUM <= 10000 )
where rnum >=9990;
results are posted below
Please advise me.
oracle version -Oracle Database 10g Enterprise Edition
Release 10.2.0.1.0 - Prod
os version red hat enterprise linux ES release 4
also, statistics differ between the table
and its views
results of view v$mail
[select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from v$mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;]
result :
11 rows selected.
Elapsed: 00:00:03.84
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
294 recursive calls
0 db block gets
8715 consistent gets
8669 physical reads
0 redo size
7060 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from v$mail;
Elapsed: 00:00:00.17
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
8 recursive calls
0 db block gets
2171 consistent gets
2057 physical reads
260 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
results of original table mail
[select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;]
result :
11 rows selected.
Elapsed: 00:00:03.21
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
1 recursive calls
119544 db block gets
8686 consistent gets
8648 physical reads
0 redo size
13510 bytes sent via SQL*Net to client
4084 bytes received via SQL*Net from client
41 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from mail;
Elapsed: 00:00:00.34
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
1 recursive calls
0 db block gets
2183 consistent gets
2062 physical reads
72 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Thanks n regards
PS: sorry, I could not preserve the formatting.
Message was edited by:
Cool_Jr.DBA
Just to answer the OP's fundamental question:
The query starts off quick (rows between 1 and 10)
but gets increasingly slower as the start of the
window increases (eg to row 1000, 10,000, etc).
The original (unsorted) query would get first rows
very quickly, but each time you move the window, it
has to fetch and discard an increasing number of rows
before it finds the first one you want. So the time
taken is proportional to the rownumber you have
reached.
With Charles's correction (which is unavoidable), the
entire query has to be retrieved and sorted
before the rows you want can be returned. That's
horribly inefficient. This technique works for small
sets (eg 10 - 1000 rows) but I can't tell you how
wrong it is to process data in this way especially if
you are expecting lacs (that's 100,000s isn't
it) of rows returned. You are pounding your database
simply to give you the option of being able to go
back as well as forwards in your query results. The
time taken is proportional to the total number of
rows (so the time to get to the end of the entire set
is proportional to the square of the total
number of rows).
If you really need to page back and forth
through large sets, consider one of the following
options:
1) saving the set (eg as a materialised view or in a
temp table - and include "row number" as an indexed
column)
2) retrieve ALL the rowids into an array/collection
in a single pass, then go get 10 rows by rowid for
each page
3) assuming you can sort by a unique identifier, use
that (instead of rownumber) to remember the first row
in each page; use a range scan on the index on that
UID to get back the rows you want quickly (doing this
with a non-unique sort key is quite a bit harder)
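The third option above (remembering the last key seen and range-scanning from it) can be sketched in a portable form. This is only an illustrative sketch: the mail table and its data are made up, and SQLite stands in for Oracle here.

```python
import sqlite3

# Hypothetical mail table; SQLite stands in for Oracle.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mail (mail_id INTEGER PRIMARY KEY, subject TEXT)")
conn.executemany("INSERT INTO mail VALUES (?, ?)",
                 [(i, "msg %d" % i) for i in range(1, 101)])

def next_page(last_seen_id, page_size=10):
    # Range-scan forward from the last key seen instead of counting and
    # discarding rows with ROWNUM: cost is proportional to the page size
    # only, not to how deep into the result set the user has paged.
    return conn.execute(
        "SELECT mail_id, subject FROM mail "
        "WHERE mail_id > ? ORDER BY mail_id LIMIT ?",
        (last_seen_id, page_size)).fetchall()

page1 = next_page(0)              # first 10 rows
page2 = next_page(page1[-1][0])   # next 10 rows, nothing fetched and thrown away
```

The key point is that each page is an index range scan starting at the remembered key, so page 1000 costs the same as page 1.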
Remember also that if someone else inserts into your
table while you are paging around, some of these
methods can give confusing results - because every
time you start a new query, you get a new
read-consistent point.
Anyway, try to redesign so you don't need to page
through lacs of rows....
HTH
Regards, Nigel
You are correct regarding the OP's original SQL statement that:
"the entire query has to be retrieved and sorted before the rows you want can be returned"
However, that is not the case with the SQL statement that I posted. The problem with the SQL statement I posted is that Oracle insists on performing full tablescans on the table. The following is a full test run with 2,000,000 rows in a table, including an analysis of the problem, and a method of working around the problem:
CREATE TABLE T1 (
MAIL_ID NUMBER(10),
USER_ID NUMBER(10),
FOLDER_ID NUMBER(10),
MAIL_DATE DATE,
PRIMARY KEY(MAIL_ID));
<br>
CREATE INDEX T1_USER_FOLDER ON T1(USER_ID,FOLDER_ID);
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID);
<br>
INSERT INTO
T1
SELECT
ROWNUM MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
<br>
INSERT INTO
T1
SELECT
ROWNUM+1000000 MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
<br>
COMMIT;
<br>
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
<br>
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
<br>
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
|* 1 | HASH JOIN | | 1 | 8801 | 10 |00:00:15.62 | 13610 | 1010K| 1010K| 930K (0)|
|* 2 | VIEW | | 1 | 8801 | 10 |00:00:00.34 | 6805 | | | |
|* 3 | WINDOW SORT PUSHED RANK| | 1 | 8801 | 910 |00:00:00.34 | 6805 | 74752 | 74752 |65536 (0)|
|* 4 | TABLE ACCESS FULL | T1 | 1 | 8801 | 8630 |00:00:00.29 | 6805 | | | |
| 5 | TABLE ACCESS FULL | T1 | 1 | 2000K| 2000K|00:00:04.00 | 6805 | | | |
<br>
Predicate Information (identified by operation id):
1 - access("MAIL_ID"="M"."MAIL_ID")
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("MAIL_DATE") DESC )<=909)
4 - filter(("USER_ID"=6 AND "FOLDER_ID"=1))
The above performed two full tablescans of the T1 table and required 15.6 seconds to complete, which was not the desired result. Now, create an index that will be helpful for the query, and provide Oracle an additional hint:
(http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html "Pagination in Getting Rows N Through M" shows a similar approach)
DROP INDEX T1_USER_FOLDER_MAIL;
<br>
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID,MAIL_DATE DESC,MAIL_ID);
<br>
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
<br>
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
<br>
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.01 | 47 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.01 | 7 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 909 |00:00:00.01 | 7 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 910 |00:00:00.01 | 7 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
<br>
Predicate Information (identified by operation id):
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=909)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")
The above made use of both indexes, avoided a sort (WINDOW NOSORT), and completed in 0.01 seconds.
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 8600 AND 8609) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
<br>
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.11 | 81 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.11 | 41 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 8609 |00:00:00.09 | 41 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 8610 |00:00:00.05 | 41 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
<br>
Predicate Information (identified by operation id):
2 - filter(("RN">=8600 AND "RN"<=8609))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=8609)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")
The above made use of both indexes, avoided a sort (WINDOW NOSORT), and completed in 0.11 seconds.
As the above shows, it is possible to efficiently retrieve the desired records very rapidly without having to leave the cursor open.
If this SQL statement will be used in a web browser, it probably does not make sense to leave the cursor open. If the SQL statement will be used in an application that maintains state, and the user is expected to always page from the first row toward the last, then leaving the cursor open and reading rows as needed makes sense.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
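The ROW_NUMBER pagination pattern used in the test case above can be sketched outside Oracle as well. This is a minimal stand-in using SQLite (window functions require SQLite 3.25+), with made-up data rather than the T1 test case:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (mail_id INTEGER PRIMARY KEY, "
             "user_id INTEGER, mail_date TEXT)")
# 200 made-up rows for a single user.
conn.executemany("INSERT INTO t1 VALUES (?, 6, ?)",
                 [(i, "2024-01-%02d" % (i % 28 + 1)) for i in range(1, 201)])

# Number the filtered rows in the inner query, then keep only the
# window of row numbers that makes up the requested page.
rows = conn.execute("""
    SELECT mail_id FROM (
        SELECT mail_id,
               ROW_NUMBER() OVER (ORDER BY mail_date DESC, mail_id DESC) AS rn
        FROM t1
        WHERE user_id = 6)
    WHERE rn BETWEEN 11 AND 20""").fetchall()
```

As in Charles's Oracle example, the real gain comes when an index matches the filter and sort columns, so the database can count rows off the index without sorting.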
Tuning issue with false positive
One of my clients moved two of their email devices to a DMZ. They both produce alerts on the mass-mailing worm signature. Before they were moved to the DMZ, the alert would show both a source and a destination IP. Now it only shows the destination IP address of where the device is sending email. Since MARS does not pick up the devices' new IP addresses, I cannot tune these alerts out as false positives. How would I go about fixing this issue?
When the IDS mistakenly thinks that normal traffic is malicious, false positives happen. To reduce them you have to fine-tune the system by letting it know what normal traffic looks like on your network.
Cisco has provided some great guidance on how to reduce false positives here:
http://www.cisco.com/en/US/products/ps6241/products_user_guide_chapter09186a008072f396.html#wp1030968 -
A query run by the 'tom' user who searches on (xyz criteria) takes less than 5 seconds.
However, when the user 'greg' runs the same query, it takes more than 14 minutes to execute for the same xyz criteria.
Note: both are application users and internally use the DOCD database user. Both queries use the same explain plan; the difference is the disk reads in the long-running case.
My question is: if both queries are the same, use the same explain plan, and return the same rows, why are there disk reads, and why such a huge difference in execution time? Below is the tkprof output for both queries.
QUICK QUERY
===========
SELECT E_NAME, E_CURVER_NUM, E_CURVER_CKO, E_PROTECTED, E_INA01, E_INA26,
E_INA47, V_FILE_NAME, E_ICON_TITLE, E_INA41, E_INA11, E_INA16, E_INA40,
E_INA15, E_INA35, E_ORG_FILENAME, E_COMMENT, E_INA28, E_INA27,
E_CREATE_DATE, E_OWNER, E_LAST_DATE, V_CHECKED_OUT, V_CHECKIN_USER, V_NAME,
V_E_NAME, V_AVAIL_STAT, V_RECLAIM, V_PERMANENT, V_CSI_STATUS, V_CD, E_INA01,
V_INA03, V_INA02, V_INA04, E_INA02, V_INA01, V_CREATE_DATE, E_INA03
FROM
ELEMENT, VERSION WHERE (NLS_UPPER(E_INA01) LIKE NLS_UPPER(:V001) AND
NLS_UPPER(E_INA26) = NLS_UPPER(:V002) AND E_INA02 IS NULL AND (E_INA03 IS
NULL OR E_INA03 = :V003) AND V_BRANCH_CURVER = :V004) AND ELEMENT.E_NAME =
VERSION.V_E_NAME ORDER BY 12, 10 DESC, 1, 26, 25
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 167 0.18 0.18 0 29466 0 1002
total 169 0.20 0.19 0 29466 0 1002
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 41
Rows Row Source Operation
1002 SORT ORDER BY (cr=29466 r=0 w=0 time=179192 us)
8847 NESTED LOOPS (cr=29466 r=0 w=0 time=146638 us)
14191 TABLE ACCESS FULL VERSION (cr=1082 r=0 w=0 time=25337 us)
8847 TABLE ACCESS BY INDEX ROWID ELEMENT (cr=28384 r=0 w=0 time=97209 us)
14191 INDEX UNIQUE SCAN UI256 (cr=14193 r=0 w=0 time=30388 us)(object id 29956)
SLOW QUERY
==========
SELECT E_NAME, E_CURVER_NUM, E_CURVER_CKO, E_PROTECTED, E_INA01, E_INA26,
E_INA47, V_FILE_NAME, E_ICON_TITLE, E_INA41, E_INA11, E_INA16, E_INA40,
E_INA15, E_INA35, E_ORG_FILENAME, E_COMMENT, E_INA28, E_INA27,
E_CREATE_DATE, E_OWNER, E_LAST_DATE, V_CHECKED_OUT, V_CHECKIN_USER, V_NAME,
V_E_NAME, V_AVAIL_STAT, V_RECLAIM, V_PERMANENT, V_CSI_STATUS, V_CD, E_INA01,
V_INA03, V_INA02, V_INA04, E_INA02, V_INA01, V_CREATE_DATE, E_INA03
FROM
ELEMENT, VERSION WHERE (NLS_UPPER(E_INA01) LIKE NLS_UPPER(:V001) AND
NLS_UPPER(E_INA26) = NLS_UPPER(:V002) AND E_INA02 IS NULL AND (E_INA03 IS
NULL OR E_INA03 = :V003) AND V_BRANCH_CURVER = :V004) AND ELEMENT.E_NAME =
VERSION.V_E_NAME ORDER BY 12, 10 DESC, 1, 26, 25
call count cpu elapsed disk query current rows
Parse 1 0.01 0.02 2 2 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 167 0.29 1.18 2389 29466 0 1002
total 169 0.32 1.21 2391 29468 0 1002
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 41
Rows Row Source Operation
1002 SORT ORDER BY (cr=29466 r=2389 w=0 time=1180072 us)
8847 NESTED LOOPS (cr=29466 r=2389 w=0 time=1144811 us)
14191 TABLE ACCESS FULL VERSION (cr=1082 r=1078 w=0 time=134164 us)
8847 TABLE ACCESS BY INDEX ROWID ELEMENT (cr=28384 r=1311 w=0 time=984455 us)
14191 INDEX UNIQUE SCAN UI256 (cr=14193 r=137 w=0 time=127843 us)(object id 29956)
Anuj,
In the future, when posting items to the forum where spacing is critical to proper understanding, please use the { code } tags (without spaces) to retain the spacing.
It appears that there is more to the story than what has been reported, and it is good that you used tkprof, as it shows this. The tkprof output shows that the first (quick) execution completed in 0.19 seconds, while the second (slow) execution completed in 1.21 seconds, even with the 2,391 blocks read from disk. My first question, if I were in the same situation, would be: what would cause the first execution time to jump from 0.19 seconds to 5 seconds, and the second to jump from 1.21 seconds to 840 seconds (14 minutes)?
In both cases, there is a library cache miss on both the parse and the execute calls - there may be some significance to this.
In both cases, there were 167 fetch calls to return 1,002 rows, meaning that on average 6 rows were returned on each fetch.
Let's assume that 'tom', whose query executed quickly, was connected to the database over a T1 connection with a 20ms (0.02 second) ping time. For simplicity in calculation, assume that there were 170 round trips between the server and the client. The network communication between the client and the server would require at least 3.4 seconds, for a total time to send the query and retrieve the results of about 3.6 seconds. This is a little short of the 5 seconds which you reported.
Let's assume that 'greg', whose query executed slowly, was connected to the database over a satellite connection with a 2000ms (2 second) ping time. With the same number of round trips, the network communication would require about 340 seconds, for a total time to send the query and retrieve the results of about 341.21 seconds (5.7 minutes), about 8 minutes short of the target 14 minutes.
Was this the only query executed by the clients, or were there multiple queries?
Were the client computers on the same network segment?
Did you gather a 10046 trace at level 8 or 12 for the sessions? If so, manually review the wait events in the trace files to see if it is possible to determine what was happening during the 14 minutes. Guessing can be fun, but sometimes you come up 8 minutes short by guessing alone.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
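The latency arithmetic in the reply can be written out as a small model. The 170 round trips and the two ping times are the thread's own simplifying assumptions, not measured values:

```python
# Rough model of the estimate above: total time is roughly the server-side
# elapsed time plus one network round trip per fetch.
def estimated_total(server_secs, round_trips, ping_secs):
    return server_secs + round_trips * ping_secs

tom  = estimated_total(0.19, 170, 0.020)  # T1 link, ~20 ms ping  -> ~3.6 s
greg = estimated_total(1.21, 170, 2.000)  # satellite, ~2 s ping  -> ~341 s
```

The model makes the reply's point concrete: with 167 fetch calls returning about 6 rows each, the round-trip count, not the server work, dominates the slow user's experience; a larger array fetch size would shrink both estimates.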
Performance tuning issue of 8.1.7's PL/SQL
My trouble sample code is listed below. I know I could fix this problem easily in 9i, but, you know.
My procedure receives a parameter, data_segmentseqno, whose value may be 'segment1' or 'segment1,segment2,segment3'. In the first case the procedure works and finds what I need, but it fails in the second case.
After checking the session in DBA Studio, I found the statement is parsed as 'SELECT .. FROM .. WHERE E.SEGMENTSEQNO IN ( :1 )'; the Oracle engine sees only one parameter, not three. So what should I do when I get a parameter containing multiple segments?
Can somebody help me, or is the only way to solve this problem in Oracle 8.1.7 to use a cursor instead of BULK COLLECT?
create or replace package body RoundRobin is
procedure dispatchRoundRobin(
data_segmentseqno in varchar2
) is
type Cust_type is table of varchar2(18);
Cust_data Cust_type;
begin
/********** HERE IS MY TROUBLE *********
HOW SHOULD I DO FOR MULTI SEGMENTSEQNO **********/
SELECT rowid BULK COLLECT INTO Cust_data
FROM dispatchedrecord e
where e.segmentseqno in ( data_segmentseqno ) ;
exception
when others then
dbms_output.put_line('Error'||sqlerrm);
end dispatchRoundRobin;
Hello
You are using a single bind variable to represent multiple values. In this case you are asking oracle to see if e.segmentseqno is equal to 'segment1,segment2,segment3', which it isn't. What you need to do is either use separate bind variables for each value you want to test i.e.
WHERE e.segmentseqno IN (data_segmentseqno, data_segmentseqno2, data_segmentseqno3)
Which isn't going to be very useful unless you have a fixed number of values that are always used.
Another alternative would be to use dynamic SQL to form the where clause and put the values into the where clause directly
EXECUTE IMMEDIATE 'SELECT rowid FROM dispatchedrecord e
where e.segmentseqno in ('|| data_segmentseqno||')' BULK COLLECT INTO Cust_data;
But this isn't ideal either, as you really should use bind variables for these values rather than literals.
I'm not sure whether using a collection here for the list of segment values would help or not. I haven't used collections much in SQL statements, maybe someone else will have a better idea...
HTH
David -
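The core problem here — one bind variable holds exactly one value, so 'segment1,segment2,segment3' is compared as a single string — shows up in any database API, not just PL/SQL. A sketch using SQLite as a stand-in (the table name is borrowed from the post; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dispatchedrecord (segmentseqno TEXT)")
conn.executemany("INSERT INTO dispatchedrecord VALUES (?)",
                 [("segment1",), ("segment2",), ("segment3",)])

param = "segment1,segment2,segment3"

# One bind variable holds ONE value: the whole comma-separated string is
# compared as a single literal, so nothing matches.
wrong = conn.execute(
    "SELECT * FROM dispatchedrecord WHERE segmentseqno IN (?)",
    (param,)).fetchall()

# Split the parameter and generate one placeholder per value, so each
# element is still bound individually.
values = param.split(",")
marks = ",".join("?" * len(values))
right = conn.execute(
    "SELECT * FROM dispatchedrecord WHERE segmentseqno IN (%s)" % marks,
    values).fetchall()
```

Only the placeholder list is built dynamically; the values themselves stay bound, which keeps the advantage over concatenating literals that David's reply points out.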
Performance Tuning Issues: UNION and Stored Outlines
Hi,
I have two questions,
Firstly I have read this:
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/sql_1016.htm#i35699
What I understand is that using UNION ALL performs better than UNION.
The ALL in UNION ALL is logically valid because of this exclusivity. It allows the plan to be carried out without an expensive sort to rule out duplicate rows for the two halves of the query.
Can someone explain these sentences to me?
Secondly, my Oracle Database 10g is set to FIRST_ROWS_1; how can stored outlines help in reducing I/O cost and response time in general? Please explain.
Thank you,
Adith
Union ALL and Union
SQL> select 1, 2 from dual
union
select 1, 2 from dual;
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 6 (67)| 00:00:01 |
| 1 | SORT UNIQUE | | 2 | 6 (67)| 00:00:01 |
| 2 | UNION-ALL | | | | |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
| 4 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
11 rows selected.
SQL>select 1, 2 from dual
union all
select 1, 2 from dual;
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 4 (50)| 00:00:01 |
| 1 | UNION-ALL | | | | |
| 2 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
10 rows selected.
Adith -
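The difference the two plans show — UNION adds a SORT UNIQUE step to remove duplicates, while UNION ALL simply concatenates — can be seen directly in any SQL engine. A minimal sketch using SQLite as a stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# UNION deduplicates, which is why the Oracle plan above shows an extra
# SORT UNIQUE step; UNION ALL just appends the second result set.
u  = conn.execute("SELECT 1, 2 UNION SELECT 1, 2").fetchall()
ua = conn.execute("SELECT 1, 2 UNION ALL SELECT 1, 2").fetchall()
```

So UNION ALL is the right choice whenever the two branches cannot overlap (or duplicates are acceptable), because the sort is pure overhead in that case.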
Performance Tuning Issues ( How to Optimize this Code)
_How to Optimize this Code_
FORM MATL_CODE_DESC.
SELECT * FROM VBAK WHERE VKORG EQ SAL_ORG AND
VBELN IN VBELN AND
VTWEG IN DIS_CHN AND
SPART IN DIVISION AND
VKBUR IN SAL_OFF AND
VBTYP EQ 'C' AND
KUNNR IN KUNNR AND
ERDAT BETWEEN DAT_FROM AND DAT_TO.
SELECT * FROM VBAP WHERE VBELN EQ VBAK-VBELN AND
MATNR IN MATNR.
SELECT SINGLE * FROM MAKT WHERE MATNR EQ VBAP-MATNR.
IF SY-SUBRC EQ 0.
IF ( VBAP-NETWR EQ 0 AND VBAP-UEPOS NE 0 ).
IF ( VBAP-UEPVW NE 'B' AND VBAP-UEPVW NE 'C' ).
MOVE VBAP-VBELN TO ITAB1-SAL_ORD_NUM.
MOVE VBAP-POSNR TO ITAB1-POSNR.
MOVE VBAP-MATNR TO ITAB1-FREE_MATL.
MOVE VBAP-KWMENG TO ITAB1-FREE_QTY.
MOVE VBAP-KLMENG TO ITAB1-KLMENG.
MOVE VBAP-VRKME TO ITAB1-FREE_UNIT.
MOVE VBAP-WAVWR TO ITAB1-FREE_VALUE.
MOVE VBAK-VTWEG TO ITAB1-VTWEG.
MOVE VBAP-UEPOS TO ITAB1-UEPOS.
ENDIF.
ELSE.
MOVE VBAK-VBELN TO ITAB1-SAL_ORD_NUM.
MOVE VBAK-VTWEG TO ITAB1-VTWEG.
MOVE VBAK-ERDAT TO ITAB1-SAL_ORD_DATE.
MOVE VBAK-KUNNR TO ITAB1-CUST_NUM.
MOVE VBAK-KNUMV TO ITAB1-KNUMV.
SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
KSTEU = 'C' AND
KHERK EQ 'A' AND
KMPRS = 'X'.
IF SY-SUBRC EQ 0.
ITAB1-REMARKS = 'Manual Price Change'.
ENDIF.
SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
KSTEU = 'C' AND
KHERK IN ('C','D') AND
KMPRS = 'X' AND
KRECH IN ('A','B').
IF SY-SUBRC EQ 0.
IF KONV-KRECH EQ 'A'.
MOVE : KONV-KSCHL TO G_KSCHL.
G_KBETR = ( KONV-KBETR / 10 ).
MOVE G_KBETR TO G_KBETR1.
CONCATENATE G_KSCHL G_KBETR1 '%'
INTO ITAB1-REMARKS SEPARATED BY SPACE.
ELSEIF KONV-KRECH EQ 'B'.
MOVE : KONV-KSCHL TO G_KSCHL.
G_KBETR = KONV-KBETR.
MOVE G_KBETR TO G_KBETR1.
CONCATENATE G_KSCHL G_KBETR1
INTO ITAB1-REMARKS SEPARATED BY SPACE.
ENDIF.
ELSE.
ITAB1-REMARKS = 'Manual Price Change'.
ENDIF.
CLEAR : G_KBETR, G_KSCHL,G_KBETR1.
MOVE VBAP-KWMENG TO ITAB1-QTY.
MOVE VBAP-VRKME TO ITAB1-QTY_UNIT.
IF VBAP-UMVKN NE 0.
ITAB1-KLMENG = ( VBAP-UMVKZ / VBAP-UMVKN ) * VBAP-KWMENG.
ENDIF.
IF ITAB1-KLMENG NE 0.
VBAP-NETWR = ( VBAP-NETWR / VBAP-KWMENG ).
MOVE VBAP-NETWR TO ITAB1-INV_PRICE.
ENDIF.
MOVE VBAP-POSNR TO ITAB1-POSNR.
MOVE VBAP-MATNR TO ITAB1-MATNR.
MOVE MAKT-MAKTX TO ITAB1-MAKTX.
ENDIF.
SELECT SINGLE * FROM VBKD WHERE VBELN EQ VBAK-VBELN AND
BSARK NE 'DFUE'.
IF SY-SUBRC EQ 0.
ITAB1-INV_PRICE = ITAB1-INV_PRICE * VBKD-KURSK.
APPEND ITAB1.
CLEAR ITAB1.
ELSE.
CLEAR ITAB1.
ENDIF.
ENDIF.
ENDSELECT.
ENDSELECT.
ENDFORM. " MATL_CODE_DESC
Hi Vijay,
You could start by using INNER JOINS:
SELECT ......
FROM ( VBAK
INNER JOIN VBAP
ON VBAP~VBELN = VBAK~VBELN
INNER JOIN MAKT
ON MAKT~MATNR = VBAP~MATNR AND
MAKT~SPRAS = SYST-LANGU )
INTO TABLE itab
WHERE VBAK~VBELN IN VBELN
AND VBAK~VTWEG IN DIS_CHN
AND VBAK~SPART IN DIVISION
AND VBAK~VKBUR IN SAL_OFF
AND VBAK~VBTYP EQ 'C'
AND VBAK~KUNNR IN KUNNR
AND VBAK~ERDAT BETWEEN DAT_FROM AND DAT_TO
AND VBAP~NETWR EQ 0
AND VBAP~UEPOS NE 0
Regards,
John.
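John's point — replace nested SELECT ... ENDSELECT loops with a single join — is the classic N+1 query fix. A sketch with simplified, made-up stand-ins for VBAK/VBAP (SQLite in place of the SAP database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vbak (vbeln TEXT PRIMARY KEY, kunnr TEXT);
CREATE TABLE vbap (vbeln TEXT, matnr TEXT);
INSERT INTO vbak VALUES ('0001', 'C1'), ('0002', 'C2');
INSERT INTO vbap VALUES ('0001', 'M1'), ('0001', 'M2'), ('0002', 'M3');
""")

# N+1 pattern: one extra query per header row, which is what the nested
# SELECT ... ENDSELECT loops in the original code do.
slow = []
for vbeln, kunnr in conn.execute("SELECT vbeln, kunnr FROM vbak").fetchall():
    for (matnr,) in conn.execute(
            "SELECT matnr FROM vbap WHERE vbeln = ?", (vbeln,)).fetchall():
        slow.append((vbeln, kunnr, matnr))

# Single join: one statement, same rows, and the database can pick an
# efficient join strategy instead of repeating an index probe per row.
fast = conn.execute("""
    SELECT k.vbeln, k.kunnr, p.matnr
    FROM vbak k JOIN vbap p ON p.vbeln = k.vbeln
    ORDER BY k.vbeln, p.matnr""").fetchall()
```

With thousands of header rows, the per-row round trips in the first version dominate the runtime, which is exactly why the INNER JOIN rewrite in the reply pays off.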