Performance Tuning - remove hash join
Hi everyone,
Can someone help tune the query below? I have a hash join taking around 84% of the plan cost.
SELECT
PlanId
,ReplacementPlanId
FROM
( SELECT
pl.PlanId
,xpl.PlanId ReplacementPlanId
,ROW_NUMBER() OVER(PARTITION BY pl.PlanId ORDER BY xpl.PlanId) RN
FROM [dbo].[Plan] pl
JOIN [dbo].[Plan] xpl
ON pl.PPlanId = xpl.PPlanId
AND pl.Name = xpl.Name
WHERE
pl.SDate > (CONVERT(CHAR(10),DATEADD(M,-12,GETDATE()),120)) AND
xpl.Live = 1
AND pl.TypeId = 7
AND xpl.TypeId = 7
) p
WHERE RN = 1
Thanks in advance
Can you show an execution plan of the query? Is it possible to rewrite the query as below?
Sorry, I cannot test it right now.
SELECT
 pl.PlanId
 ,Der.ReplacementPlanId
FROM [dbo].[Plan] pl
CROSS APPLY
 ( SELECT TOP (1) xpl.PlanId AS ReplacementPlanId
   FROM [dbo].[Plan] xpl
   WHERE pl.PPlanId = xpl.PPlanId
   AND pl.Name = xpl.Name
   AND xpl.Live = 1
   AND xpl.TypeId = 7
   ORDER BY xpl.PlanId
 ) AS Der
WHERE pl.SDate > CONVERT(CHAR(10), DATEADD(M, -12, GETDATE()), 120)
AND pl.TypeId = 7
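If the hash join is still chosen after a rewrite, a covering index on the self-join keys may let the optimizer use a nested loops join instead. This is a sketch only - the index name is made up and the column list is inferred from the join and filter columns in the query above, so adjust it to the real schema:

```sql
-- Hypothetical covering index for the self-join on PPlanId/Name with the
-- TypeId filter column; INCLUDE carries the remaining referenced columns.
CREATE NONCLUSTERED INDEX IX_Plan_PPlanId_Name_TypeId
ON [dbo].[Plan] (PPlanId, Name, TypeId)
INCLUDE (PlanId, SDate, Live);
```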
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
Similar Messages
-
Performance- Hash Joins ?
Hi everyone,
Version :- 10.1.0.5.0
Below is the query; it takes a long time to execute due to hash joins. Is there any other way to change the join approach or rewrite the SQL?
Tables used
rhs_premium :- 830 GB, with partitions
bu_res_line_map :- small table, 4 MB
mgt_unit_res_line_map :- small table, 4 MB
mgt_unit_reserving_lines :- View
bu_reserving_lines :- View
SELECT prem.business_unit,
prem.direct_assumed_ceded_code,
prem.ledger_currency_code,
prem.mcc_code,
prem.management_unit,
prem.product,
prem.financial_product,
nvl(prem.premium_in_ledger_currency,0) premium_in_ledger_currency, -- NVL added Aug 21, 2008
nvl(prem.premium_in_original_currency,0) premium_in_original_currency, -- Added Aug 21, 2008 M4640
nvl(prem.premium_in_us_dollars,0) premium_in_us_dollars, -- NVL added Aug 21, 2008
prem.rcc,
prem.pre_explosion_rcc, -- Issue M1997 Laks 09/15
prem.acc_code,
prem.transaction_code,
prem.accounting_period,
prem.exposure_period,
prem.policy_effective_month,
prem.policy_key,
prem.policy_line_of_business,
prem.producer_code,
prem.underwriting_year,
prem.ledger_key,
prem.account_key,
prem.policy_coverage_key,
nvl(prem.original_currency_code,' ') original_currency_code, -- NVL added Issue M4640 Aug 21, 2008
prem.authorized_unauthorized,
prem.transaction_effective_date,
prem.transaction_expiration_date,
prem.exposure_year,
prem.reinsurance_mask,
prem.rcccategory,
prem.rccclassification,
prem.detail_level_indicator,
prem.bermbase_level_indicator,
prem.padb_level_indicator,
prem.sas_level_indicator, -- Issue 443
prem.cnv_adjust_indicator, -- Issue 413
prem.originating_business_unit,
prem.actuarial_blend_indicator,
ml1.management_line_of_business_id management_lob_id_act,
mu1.reserving_line_id mgt_unit_reserving_lob_id_act,
bu1.reserving_line_id business_unit_lob_id_act,
ml1.reserving_class_id mgt_unit_res_class_lob_id_act, -- added A12
bl1.reserving_class_id business_unit_class_lob_id_act,-- added A12
ml2.management_line_of_business_id management_lob_id_fin,
mu2.reserving_line_id mgt_unit_reserving_lob_id_fin,
bu2.reserving_line_id business_unit_lob_id_fin,
ml2.reserving_class_id mgt_unit_res_class_lob_id_fin, -- added A12
bl2.reserving_class_id business_unit_class_lob_id_fin -- added A12
FROM rhs_premium prem,
bu_res_line_map bu1,
mgt_unit_res_line_map mu1,
mgt_unit_reserving_lines ml1,
bu_reserving_lines bl1, -- added A12
bu_res_line_map bu2,
mgt_unit_res_line_map mu2,
mgt_unit_reserving_lines ml2,
bu_reserving_lines bl2 -- added A12
WHERE prem.business_unit = bu1.business_unit (+)
AND prem.product = bu1.product (+)
AND prem.management_unit = mu1.management_unit (+)
AND prem.product = mu1.product (+)
AND mu1.reserving_line_id = ml1.reserving_line_id (+)
AND bu1.reserving_line_id = bl1.reserving_line_id (+) -- added A12
AND prem.business_unit = bu2.business_unit (+)
AND prem.financial_product = bu2.product (+)
AND prem.management_unit = mu2.management_unit (+)
AND prem.financial_product = mu2.product (+)
AND mu2.reserving_line_id = ml2.reserving_line_id (+)
AND bu2.reserving_line_id = bl2.reserving_line_id (+); -- added A12
PLAN_TABLE_OUTPUT
Plan hash value: 3475794837
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 346M| 124G| 416K (2)| 01:37:13 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10008 | 346M| 124G| 416K (2)| 01:37:13 | | | Q1,08 | P->S | QC (RAND) |
|* 3 | HASH JOIN RIGHT OUTER | | 346M| 124G| 416K (2)| 01:37:13 | | | Q1,08 | PCWP | |
| 4 | BUFFER SORT | | | | | | | | Q1,08 | PCWC | |
| 5 | PX RECEIVE | | 46 | 1196 | 7 (0)| 00:00:01 | | | Q1,08 | PCWP | |
| 6 | PX SEND BROADCAST | :TQ10000 | 46 | 1196 | 7 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 7 | VIEW | BU_RESERVING_LINES | 46 | 1196 | 7 (0)| 00:00:01 | | | | | |
| 8 | NESTED LOOPS OUTER | | 46 | 1748 | 7 (0)| 00:00:01 | | | | | |
|* 9 | TABLE ACCESS FULL | RESERVING_LINES | 46 | 1564 | 7 (0)| 00:00:01 | | | | | |
|* 10 | INDEX UNIQUE SCAN | RESERVING_CLASSES_PK | 1 | 4 | 0 (0)| 00:00:01 | | | | | |
|* 11 | HASH JOIN RIGHT OUTER | | 346M| 116G| 415K (2)| 01:37:04 | | | Q1,08 | PCWP | |
| 12 | BUFFER SORT | | | | | | | | Q1,08 | PCWC | |
| 13 | PX RECEIVE | | 124K| 2182K| 87 (2)| 00:00:02 | | | Q1,08 | PCWP | |
| 14 | PX SEND BROADCAST | :TQ10001 | 124K| 2182K| 87 (2)| 00:00:02 | | | | S->P | BROADCAST |
| 15 | TABLE ACCESS FULL | BU_RES_LINE_MAP | 124K| 2182K| 87 (2)| 00:00:02 | | | | | |
|* 16 | HASH JOIN RIGHT OUTER | | 346M| 110G| 415K (2)| 01:36:54 | | | Q1,08 | PCWP | |
| 17 | BUFFER SORT | | | | | | | | Q1,08 | PCWC | |
| 18 | PX RECEIVE | | 46 | 1196 | 7 (0)| 00:00:01 | | | Q1,08 | PCWP | |
| 19 | PX SEND BROADCAST | :TQ10002 | 46 | 1196 | 7 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 20 | VIEW | BU_RESERVING_LINES | 46 | 1196 | 7 (0)| 00:00:01 | | | | | |
| 21 | NESTED LOOPS OUTER | | 46 | 1748 | 7 (0)| 00:00:01 | | | | | |
|* 22 | TABLE ACCESS FULL | RESERVING_LINES | 46 | 1564 | 7 (0)| 00:00:01 | | | | | |
|* 23 | INDEX UNIQUE SCAN | RESERVING_CLASSES_PK | 1 | 4 | 0 (0)| 00:00:01 | | | | | |
|* 24 | HASH JOIN RIGHT OUTER | | 346M| 101G| 414K (1)| 01:36:45 | | | Q1,08 | PCWP | |
| 25 | BUFFER SORT | | | | | | | | Q1,08 | PCWC | |
| 26 | PX RECEIVE | | 124K| 2182K| 87 (2)| 00:00:02 | | | Q1,08 | PCWP | |
| 27 | PX SEND BROADCAST | :TQ10003 | 124K| 2182K| 87 (2)| 00:00:02 | | | | S->P | BROADCAST |
| 28 | TABLE ACCESS FULL | BU_RES_LINE_MAP | 124K| 2182K| 87 (2)| 00:00:02 | | | | | |
|* 29 | HASH JOIN RIGHT OUTER | | 346M| 96G| 413K (1)| 01:36:35 | | | Q1,08 | PCWP | |
| 30 | BUFFER SORT | | | | | | | | Q1,08 | PCWC | |
| 31 | PX RECEIVE | | 46 | 1794 | 11 (10)| 00:00:01 | | | Q1,08 | PCWP | |
| 32 | PX SEND BROADCAST | :TQ10004 | 46 | 1794 | 11 (10)| 00:00:01 | | | | S->P | BROADCAST |
| 33 | VIEW | MGT_UNIT_RESERVING_LINES | 46 | 1794 | 11 (10)| 00:00:01 | | | | | |
|* 34 | HASH JOIN OUTER | | 46 | 1886 | 11 (10)| 00:00:01 | | | | | |
|* 35 | TABLE ACCESS FULL | RESERVING_LINES | 46 | 1564 | 7 (0)| 00:00:01 | | | | | |
| 36 | TABLE ACCESS FULL | RESERVING_CLASSES | 107 | 749 | 3 (0)| 00:00:01 | | | | | |
|* 37 | HASH JOIN RIGHT OUTER | | 346M| 83G| 413K (1)| 01:36:26 | | | Q1,08 | PCWP | |
| 38 | BUFFER SORT | | | | | | | | Q1,08 | PCWC | |
| 39 | PX RECEIVE | | 93763 | 1648K| 59 (0)| 00:00:01 | | | Q1,08 | PCWP | |
| 40 | PX SEND BROADCAST | :TQ10005 | 93763 | 1648K| 59 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 41 | INDEX FAST FULL SCAN | MGT_UNIT_RES_LINE_MAP_UK | 93763 | 1648K| 59 (0)| 00:00:01 | | | | | |
|* 42 | HASH JOIN RIGHT OUTER | | 346M| 77G| 412K (1)| 01:36:17 | | | Q1,08 | PCWP | |
| 43 | BUFFER SORT | | | | | | | | Q1,08 | PCWC | |
| 44 | PX RECEIVE | | 46 | 1794 | 11 (10)| 00:00:01 | | | Q1,08 | PCWP | |
| 45 | PX SEND BROADCAST | :TQ10006 | 46 | 1794 | 11 (10)| 00:00:01 | | | | S->P | BROADCAST |
| 46 | VIEW | MGT_UNIT_RESERVING_LINES | 46 | 1794 | 11 (10)| 00:00:01 | | | | | |
|* 47 | HASH JOIN OUTER | | 46 | 1886 | 11 (10)| 00:00:01 | | | | | |
|* 48 | TABLE ACCESS FULL | RESERVING_LINES | 46 | 1564 | 7 (0)| 00:00:01 | | | | | |
| 49 | TABLE ACCESS FULL | RESERVING_CLASSES | 107 | 749 | 3 (0)| 00:00:01 | | | | | |
|* 50 | HASH JOIN RIGHT OUTER | | 346M| 65G| 411K (1)| 01:36:08 | | | Q1,08 | PCWP | |
| 51 | BUFFER SORT | | | | | | | | Q1,08 | PCWC | |
| 52 | PX RECEIVE | | 93763 | 1648K| 59 (0)| 00:00:01 | | | Q1,08 | PCWP | |
| 53 | PX SEND BROADCAST | :TQ10007 | 93763 | 1648K| 59 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 54 | INDEX FAST FULL SCAN| MGT_UNIT_RES_LINE_MAP_UK | 93763 | 1648K| 59 (0)| 00:00:01 | | | | | |
| 55 | PX BLOCK ITERATOR | | 346M| 59G| 411K (1)| 01:35:58 | 1 | 385 | Q1,08 | PCWC | |
| 56 | TABLE ACCESS FULL | RHS_PREMIUM_1 | 346M| 59G| 411K (1)| 01:35:58 | 1 | 385 | Q1,08 | PCWP | |
Predicate Information (identified by operation id):
3 - access("BU2"."RESERVING_LINE_ID"="BL2"."RESERVING_LINE_ID"(+))
9 - filter("RL"."RESERVING_LINE_TYPE"='BU')
10 - access("RL"."RESERVING_CLASS_ID"="RC"."RESERVING_CLASS_ID"(+))
11 - access("PREM"."BUSINESS_UNIT"="BU2"."BUSINESS_UNIT"(+) AND "PREM"."FINANCIAL_PRODUCT"="BU2"."PRODUCT"(+))
16 - access("BU1"."RESERVING_LINE_ID"="BL1"."RESERVING_LINE_ID"(+))
22 - filter("RL"."RESERVING_LINE_TYPE"='BU')
23 - access("RL"."RESERVING_CLASS_ID"="RC"."RESERVING_CLASS_ID"(+))
24 - access("PREM"."BUSINESS_UNIT"="BU1"."BUSINESS_UNIT"(+) AND "PREM"."PRODUCT"="BU1"."PRODUCT"(+))
29 - access("MU2"."RESERVING_LINE_ID"="ML2"."RESERVING_LINE_ID"(+))
34 - access("RL"."RESERVING_CLASS_ID"="RC"."RESERVING_CLASS_ID"(+))
35 - filter("RL"."RESERVING_LINE_TYPE"='MCC')
37 - access("PREM"."MANAGEMENT_UNIT"="MU2"."MANAGEMENT_UNIT"(+) AND "PREM"."FINANCIAL_PRODUCT"="MU2"."PRODUCT"(+))
42 - access("MU1"."RESERVING_LINE_ID"="ML1"."RESERVING_LINE_ID"(+))
47 - access("RL"."RESERVING_CLASS_ID"="RC"."RESERVING_CLASS_ID"(+))
48 - filter("RL"."RESERVING_LINE_TYPE"='MCC')
50 - access("PREM"."MANAGEMENT_UNIT"="MU1"."MANAGEMENT_UNIT"(+) AND "PREM"."PRODUCT"="MU1"."PRODUCT"(+))
Please let me know if there is any possibility.
Regards,
Sandy
Big query - 9 tables, outer joins, parallel query option. Reading/joining views - steps 7, 20, etc.?
If you are reading most of the rows in both tables, then hash joins are probably the most efficient way to read the data. The parallel query option may or may not be helping.
Outer joins usually make queries slower. Views can also have the same effect, and will probably affect the execution plan.
If you are reading views, you can eliminate them by querying the base tables directly to see how that affects performance. You can also change the session parameter DB_FILE_MULTIBLOCK_READ_COUNT for more efficient full table scans if it is set (the documentation and blog articles claim that leaving it unset in the database is the most efficient use).
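The session-level experiment mentioned above can be sketched like this (the parameter name is from Oracle's documentation; the value 128 is only an illustrative assumption, not a recommendation):

```sql
-- Try a larger multiblock read size for this session only, then re-run the
-- query and compare elapsed time. Ending the session restores the default.
ALTER SESSION SET db_file_multiblock_read_count = 128;
```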
Edited by: riedelme on Jun 6, 2011 8:26 AM -
Tuning SQL with heavy hash join
Hi all,
My database is 11.1, a 4-node RAC.
The sql is so simple.
create table leon_12345
nologging
as
SELECT *
FROM OOOOOOOOOO a
FULL OUTER JOIN PPPPPPPPPPP b ON a.PRMTN_SKID =
b.PRMTN_SKID
AND a.PROD_SKID =
b.PROD_SKID
AND a.MTH_SKID =
b.MTH_SKID
AND a.PRMTD_CATEG_ID =
b.PRMTD_CATEG_ID
AND a.PRMTN_FACT_ID =
b.PRMTN_FACT_ID
Since these two tables have DOP settings, the execution plan looks like:
| Id | Operation | Name | E-Rows | OMem | 1Mem | Used-Mem |
| 0 | CREATE TABLE STATEMENT | | | | | |
| 1 | LOAD AS SELECT | | | 256K| 256K| |
| 2 | PX COORDINATOR | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10002 | 74M| | | |
| 4 | VIEW | VW_FOJ_0 | 74M| | | |
|* 5 | HASH JOIN FULL OUTER BUFFERED| | 74M| 1926M| 26M| |
| 6 | PX RECEIVE | | 12M| | | |
| 7 | PX SEND HASH | :TQ10000 | 12M| | | |
| 8 | PX BLOCK ITERATOR | | 12M| | | |
|* 9 | TABLE ACCESS FULL | PPPPPPPPPPP | 12M| | | |
| 10 | PX RECEIVE | | 74M| | | |
| 11 | PX SEND HASH | :TQ10001 | 74M| | | |
| 12 | PX BLOCK ITERATOR | | 74M| | | |
|* 13 | TABLE ACCESS FULL | OOOOOOOOOO | 74M| | | |
-----------------------------------------------------------------------------------------------------------
I noticed 8 slave sessions were up for this SQL (9 sessions in total, of course).
Even with parallel execution, the SQL still runs slowly. I want to know whether the parallel execution has a problem here,
because after running for several minutes, I saw four sessions become inactive, leaving only 4 slave sessions active.
Was the workload not distributed evenly, or is it something else? Can I get some ideas on the direction of tuning this SQL?
Best regards,
Leon
Leon,
There is not much to tune about this SQL. There might be something to tune in your database configuration, so I'd move this question over to the Database General forum.
And while doing that, please be sure to include trace information and wait event information.
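To check whether the parallel workload was distributed evenly across the slaves, one option is the V$PQ_TQSTAT view, which records rows handled per table queue and process for the most recent parallel statement in the session. A sketch, to be run in the same session right after the CTAS finishes:

```sql
-- Rows handled by each producer/consumer per table queue; a large spread in
-- NUM_ROWS between processes of the same SERVER_TYPE suggests join-key skew.
SELECT dfo_number, tq_id, server_type, process, num_rows
FROM   v$pq_tqstat
ORDER  BY dfo_number, tq_id, server_type, process;
```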
Regards,
Rob. -
HASH JOIN Issue - query performance issue
Hello friends:
I have a long nested-loop query. When I execute it on Oracle 9i, the results come back in no time (less than a second), but on 10g, with the same query and same schema, it takes 20-30 seconds. Looking at the execution plans (pasted below), Oracle 10g is using a HASH JOIN where Oracle 9i used NESTED LOOPS. The cost on 9i is much lower than on 10g due to the hash join in 10g.
Here's the explain plan. How do I make the 10g plan use the one from 9i, the nested loop?
Oracle 10g:
SORT(AGGREGATE) 1 33
HASH JOIN 25394 2082 68706
MAT_VIEW ACCESS(FULL) GMDB.NODE_ATTRIBUTES ANALYZED 17051 2082 56214
VIEW 8318 1714180 10285080
FILTER
CONNECT BY(WITH FILTERING)
Oracle 9i--
SORT(AGGREGATE) 1 40
NESTED LOOPS 166 65 2600
VIEW 36 65 845
FILTER
Just noticed the "CONNECT BY" in the plan. CONNECT BY queries are special cases, so you should spell that out loud and clear for us next time. Anyway, there are lots of problems in v10 with CONNECT BY:
metalink Note:394358.1
CONNECT BY SQL uses Full Table Scan In 10.2.0.2 - Used Index In earlier versions
solution:
alter session set "_optimizer_connect_by_cost_based"=false;
and bugs related to CONNECT BY were fixed in the 10.2.0.2 release (other than the bug from the note above, which was introduced in that release):
metalink Note:358749.1
and
Note:3956141.8
OR filter in CONNECT BY query may be applied at table level (v 10.1.0.3) -
Dear All,
In our project we are facing a lot of problems with performance; users are complaining about the poor performance of a few reports. We are in the process of fine-tuning the reports by following the methods/suggestions provided by SAP (like removing SELECT queries from loops, FOR ALL ENTRIES, binary search, etc.).
But I still want to know from you what we can check from the BASIS perspective (all the settings) and also the ABAP perspective to improve performance.
I also have one more question: what are "table statistics", and what are they used for?
Please give us your valuable suggestions for improving performance.
Thanks in advance!
Hi
<b>Ways of Performance Tuning</b>
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one Internal table
<b>Selection Criteria</b>
1. Restrict the data in the selection criteria itself, rather than filtering it out in ABAP code using the CHECK statement.
2. Select with selection list.
<b>Points # 1/2</b>
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below, which avoids CHECK and selects with a selection list.
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
<b>Select Statements Select Queries</b>
1. Avoid nested selects
2. Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
4. For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
5. Use Select Single if all primary key fields are supplied in the Where condition .
<b>Point # 1</b>
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
<b>Point # 2</b>
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE.
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
<b>Point # 3</b>
To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come in handy in programs that access data from the finance tables.
<b>Point # 4</b>
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
<b>Point # 5</b>
If all primary key fields are supplied in the Where condition you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
<b>Select Statements contd.. SQL Interface</b>
1. Use column updates instead of single-row updates
to update your database tables.
2. For all frequently used Select statements, try to use an index.
3. Using buffered tables improves the performance considerably.
<b>Point # 1</b>
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
<b>Point # 2</b>
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
<b>Point # 3</b>
Bypassing the buffer increases network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
<b>Select Statements contd Aggregate Functions</b>
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
<b>Select Statements contd For All Entries</b>
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
<u>Points that must be considered for FOR ALL ENTRIES</u>
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
<b>Select Statements contd Select Over more than one Internal table</b>
1. It is better to use a view instead of nested SELECT statements.
2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary keys are available in the WHERE clause for the tables being joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
3. Instead of using nested Select loops it is often better to use subqueries.
<b>Point # 1</b>
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be more optimized by extracting all the data from the view DD01V.
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT
<b>Point # 2</b>
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
<b>Point # 3</b>
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
<b>Internal Tables</b>
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table first.
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
<b>Point # 2</b>
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
This is much faster than using
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
<b>Point # 3</b>
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
This is faster than using
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
<b>Point # 5</b>
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
<b>Point # 6</b>
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably.
8. If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
<b>Point # 7</b>
Modifying selected components only makes the program faster as compared to Modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
<b>Point # 8</b>
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
<b>Point # 9</b>
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
<b>Point # 10</b>
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
<b>Point # 11</b>
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables using ITAB2[] = ITAB1[] is much faster as compared to LOOP-APPEND-ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
<b>Point # 12</b>
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
<b>Point # 13</b>
SORT ITAB BY K. makes the program run faster as compared to SORT ITAB.
<b>Internal Tables contd
Hashed and Sorted tables</b>
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access sorted tables are more optimized as compared to hashed tables
Hashed And Sorted Tables
<b>Point # 1</b>
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
<b>Point # 2</b>
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
<b>Reward if useful</b> -
Query performance tuning need your suggestions
Hi,
Below is the SQL query and explain plan; the query takes 2 hours to execute and sometimes breaks (errors out) due to a memory issue.
I need to improve the performance of this query. Please give your suggestions on how to tweak it so that execution takes less time and consumes less memory.
select a11.DATE_ID DATE_ID,
sum(a11.C_MEASURE) WJXBFS1,
count(a11.PKEY_GUID) WJXBFS2,
count(Case when a11.C_MEASURE <= 10 then a11.PKEY_GUID END) WJXBFS3,
count(Case when a11.STATUS = 'Y' and a11.C_MEASURE > 10 then a11.PKEY_GUID END) WJXBFS4,
count(Case when a11.STATUS = 'N' then a11.PKEY_GUID END) WJXBFS5,
sum(((a11.C_MEASURE ))) WJXBFS6,
a17.DESC_DATE_MM_DD_YYYY DESC_DATE_MM_DD_YYYY,
a11.DNS DNS,
a12.VVALUE VVALUE,
a12.VNAME VNAME,
a13.VVALUE VVALUE0,
a13.VNAME VNAME0,
a14.VVALUE VVALUE1,
a14.VNAME VNAME1,
a15.VVALUE VVALUE2,
a15.VNAME VNAME2,
a16.VVALUE VVALUE3,
a16.VNAME VNAME3,
a11.PKEY_GUID PKEY_GUID,
a11.UPKEY_GUID UPKEY_GUID,
a17.DAY_OF_WEEK DAY_OF_WEEK,
a17.D_WEEK D_WEEK,
a17.MNTH_ID DAY_OF_MONTH,
a17.YEAR_ID YEAR_ID,
a17.DESC_YEAR_FULL DESC_YEAR_FULL,
a17.WEEK_ID WEEK_ID,
a17.WEEK_OF_YEAR WEEK_OF_YEAR
from ACTIVITY_F a11
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 1 ) a12
on (a11.PKEY_GUID = a12.PKEY_GUID and
a11.DATE_ID = a12.DATE_ID and
a11.ORG = a12.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 2) a13
on (a11.PKEY_GUID = a13.PKEY_GUID and
a11.DATE_ID = a13.DATE_ID and
a11.ORG = a13.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 3 ) a14
on (a11.PKEY_GUID = a14.PKEY_GUID and
a11.DATE_ID = a14.DATE_ID and
a11.ORG = a14.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 4) a15
on (a11.PKEY_GUID = a15.PKEY_GUID and
a11.DATE_ID = a15.DATE_ID and
a11.ORG = a15.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 9) a16
on (a11.PKEY_GUID = a16.PKEY_GUID and
a11.DATE_ID = a16.DATE_ID and
A11.ORG = A16.ORG)
join W_DATE_D a17
ON (A11.DATE_ID = A17.ID)
join W_SALES_D a18
on (a11.TASK = a18.ID)
where (a17.TIMSTAMP between To_Date('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS') and To_Date('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and a11.ORG in (12)
and a18.SRC_TASK = 'AX012Z')
group by a11.DATE_ID,
a17.DESC_DATE_MM_DD_YYYY,
a11.DNS,
a12.VVALUE,
a12.VNAME,
a13.VVALUE,
a13.VNAME,
a14.VVALUE,
a14.VNAME,
a15.VVALUE,
a15.VNAME,
a16.VVALUE,
a16.VNAME,
a11.PKEY_GUID,
a11.UPKEY_GUID,
a17.DAY_OF_WEEK,
a17.D_WEEK,
a17.MNTH_ID,
a17.YEAR_ID,
a17.DESC_YEAR_FULL,
a17.WEEK_ID,
a17.WEEK_OF_YEAR;
Explained.
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 1245 | 47 (9)| 00:00:01 |
| 1 | HASH GROUP BY | | 1 | 1245 | 47 (9)| 00:00:01 |
|* 2 | HASH JOIN | | 1 | 1245 | 46 (7)| 00:00:01 |
|* 3 | HASH JOIN | | 1 | 1179 | 41 (5)| 00:00:01 |
|* 4 | HASH JOIN | | 1 | 1113 | 37 (6)| 00:00:01 |
|* 5 | HASH JOIN | | 1 | 1047 | 32 (4)| 00:00:01 |
|* 6 | HASH JOIN | | 1 | 981 | 28 (4)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 915 | 23 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 763 | 20 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 611 | 17 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 1 | 459 | 14 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 307 | 11 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 155 | 7 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | 1 | 72 | 3 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID| W_SALES_D | 1 | 13 | 2 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | CONS_UNQ_W_SALES_D_SRC_ID | 1 | | 1 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID| W_DATE_D | 1 | 59 | 1 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | UIDX_DD_TIMSTAMP | 1 | | 0 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | ACTIVITY_F | 1 | 83 | 4 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | PK_ACTIVITY_F | 1 | | 3 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 4 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 22 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 23 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 24 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 25 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 26 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 27 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 28 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 29 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 30 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 31 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 32 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 33 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 34 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------
Hi,
I'm not a tuning expert, but I can suggest that you post your request according to this template:
Thread: HOW TO: Post a SQL statement tuning request - template posting
Then:
a) You should post code that is easy to read. What about formatting? Your code had to be fixed in a couple of places.
b) You could simplify your code using the WITH clause. This has nothing to do with tuning, but it will help the readability of the query.
Check it below:
WITH tab1 AS (SELECT a.org AS org
, a.date_id AS date_id
, a.time_of_day_id AS time_of_day_id
, a.date_hour_id AS date_hour_id
, a.task AS task
, a.pkey_guid AS pkey_guid
, a.vname AS vname
, a.vvalue AS vvalue
, b.variable_obj
FROM w_org_d a
JOIN
w_person_d b
ON ( a.task = b.task
AND a.org = b.id
AND a.vname = b.vname))
SELECT a11.date_id date_id
, SUM (a11.c_measure) wjxbfs1
, COUNT (a11.pkey_guid) wjxbfs2
, COUNT (CASE WHEN a11.c_measure <= 10 THEN a11.pkey_guid END) wjxbfs3
, COUNT (CASE WHEN a11.status = 'Y' AND a11.c_measure > 10 THEN a11.pkey_guid END) wjxbfs4
, COUNT (CASE WHEN a11.status = 'N' THEN a11.pkey_guid END) wjxbfs5
, SUM ( ( (a11.c_measure))) wjxbfs6
, a17.desc_date_mm_dd_yyyy desc_date_mm_dd_yyyy
, a11.dns dns
, a12.vvalue vvalue
, a12.vname vname
, a13.vvalue vvalue0
, a13.vname vname0
, a14.vvalue vvalue1
, a14.vname vname1
, a15.vvalue vvalue2
, a15.vname vname2
, a16.vvalue vvalue3
, a16.vname vname3
, a11.pkey_guid pkey_guid
, a11.upkey_guid upkey_guid
, a17.day_of_week day_of_week
, a17.d_week d_week
, a17.mnth_id day_of_month
, a17.year_id year_id
, a17.desc_year_full desc_year_full
, a17.week_id week_id
, a17.week_of_year week_of_year
FROM activity_f a11
JOIN tab1 a12
ON ( a11.pkey_guid = a12.pkey_guid
AND a11.date_id = a12.date_id
AND a11.org = a12.org
AND a12.variable_obj = 1)
JOIN tab1 a13
ON ( a11.pkey_guid = a13.pkey_guid
AND a11.date_id = a13.date_id
AND a11.org = a13.org
AND a13.variable_obj = 2)
JOIN tab1 a14
ON ( a11.pkey_guid = a14.pkey_guid
AND a11.date_id = a14.date_id
AND a11.org = a14.org
AND a14.variable_obj = 3)
JOIN tab1 a15
ON ( a11.pkey_guid = a15.pkey_guid
AND a11.date_id = a15.date_id
AND a11.org = a15.org
AND a15.variable_obj = 4)
JOIN tab1 a16
ON ( a11.pkey_guid = a16.pkey_guid
AND a11.date_id = a16.date_id
AND a11.org = a16.org
AND a16.variable_obj = 9)
JOIN w_date_d a17
ON (a11.date_id = a17.id)
JOIN w_sales_d a18
ON (a11.task = a18.id)
WHERE (a17.timstamp BETWEEN TO_DATE ('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND TO_DATE ('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND a11.org IN (12)
AND a18.src_task = 'AX012Z')
GROUP BY a11.date_id, a17.desc_date_mm_dd_yyyy, a11.dns, a12.vvalue
, a12.vname, a13.vvalue, a13.vname, a14.vvalue
, a14.vname, a15.vvalue, a15.vname, a16.vvalue
, a16.vname, a11.pkey_guid, a11.upkey_guid, a17.day_of_week
, a17.d_week, a17.mnth_id, a17.year_id, a17.desc_year_full
, a17.week_id, a17.week_of_year;
I hope I did not miss anything while reformatting the code. I could not test it, since I do not have the proper tables.
As I said before, I'm not a tuning expert, nor do I pretend to be, but I see this:
1) Table W_PERSON_D is read in full scan. Any possibility of using indexes?
2) Tables W_SALES_D, W_DATE_D, ACTIVITY_F and W_ORG_D are accessed with TABLE ACCESS BY INDEX ROWID, which is definitely not fast.
You should provide additional information for tuning your query checking the post I mentioned previously.
Regards.
Al -
Query Performance tuning and scope of improvement
Hi All ,
I am on oracle 10g and on Linux OS.
I have this below query which I am trying to optimise :
SELECT 'COMPANY' AS device_brand, mach_sn AS device_source_id,
'COMPANY' AS device_brand_raw,
CASE
WHEN fdi.serial_number IS NOT NULL THEN
fdi.serial_number
ELSE
mach_sn || model_no
END AS serial_number_raw,
gmd.generic_meter_name AS counter_id,
meter_name AS counter_id_raw,
meter_value AS counter_value,
meter_hist_tstamp AS device_timestamp,
rcvd_tstamp AS server_timestamp
FROM rdw.v_meter_hist vmh
JOIN rdw.generic_meter_def gmd
ON vmh.generic_meter_id = gmd.generic_meter_id
LEFT OUTER JOIN fdr.device_info fdi
ON vmh.mach_sn = fdi.clean_serial_number
WHERE meter_hist_id IN
(SELECT /*+ PUSH_SUBQ */ MAX(meter_hist_id)
FROM rdw.v_meter_hist
WHERE vmh.mach_sn IN
('URR893727')
AND vmh.meter_name IN
('TotalImpressions','TotalBlackImpressions','TotalColorImpressions')
AND vmh.meter_hist_tstamp >=to_date ('04/16/2011', 'mm/dd/yyyy')
AND vmh.meter_hist_tstamp <= to_date ('04/18/2011', 'mm/dd/yyyy')
GROUP BY mach_sn, vmh.meter_def_id)
ORDER BY device_source_id, vmh.meter_def_id, meter_hist_tstamp;
Earlier, it was taking too much time, but it started to work faster when I added the
/*+ PUSH_SUBQ */ hint to the SELECT query.
The explain plan generated for the same is :
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 29M| 3804M| | 15M (4)| 53:14:08 |
| 1 | SORT ORDER BY | | 29M| 3804M| 8272M| 15M (4)| 53:14:08 |
|* 2 | FILTER | | | | | | |
|* 3 | HASH JOIN | | 29M| 3804M| | 8451K (2)| 28:10:19 |
| 4 | TABLE ACCESS FULL | GENERIC_METER_DEF | 11 | 264 | | 3 (0)| 00:00:01 |
|* 5 | HASH JOIN RIGHT OUTER| | 29M| 3137M| 19M| 8451K (2)| 28:10:17 |
| 6 | TABLE ACCESS FULL | DEVICE_INFO | 589K| 12M| | 799 (2)| 00:00:10 |
|* 7 | HASH JOIN | | 29M| 2527M| 2348M| 8307K (2)| 27:41:29 |
|* 8 | HASH JOIN | | 28M| 2016M| | 6331K (2)| 21:06:19 |
|* 9 | TABLE ACCESS FULL | METER_DEF | 33 | 990 | | 4 (0)| 00:00:01 |
| 10 | TABLE ACCESS FULL | METER_HIST | 3440M| 137G| | 6308K (2)| 21:01:44 |
| 11 | TABLE ACCESS FULL | MACH_XFER_HIST | 436M| 7501M| | 1233K (1)| 04:06:41 |
|* 12 | FILTER | | | | | | |
| 13 | HASH GROUP BY | | 1 | 26 | | 6631K (7)| 22:06:15 |
|* 14 | FILTER | | | | | | |
| 15 | TABLE ACCESS FULL | METER_HIST | 3440M| 83G| | 6304K (2)| 21:00:49 |
------------------------------------------------------------------------------------------------------
Is there any other way to optimise it more? Please suggest, since I am new to query tuning.
Thanks and Regards
KK
Hi Dom,
Greetings. Sorry for the delayed response. I have read the How to Post document.
I will provide all the required information here now :
Version : 10.2.0.4
OS : Linux
The SQL query which is facing the performance issue:
SELECT mh.meter_hist_id, mxh.mach_sn, mxh.collectiontag, mxh.rcvd_tstamp,
mxh.mach_xfer_id, md.meter_def_id, md.meter_name, md.meter_type,
md.meter_units, md.meter_desc, mh.meter_value, mh.meter_hist_tstamp,
mh.max_value, md.generic_meter_id
FROM meter_hist mh JOIN mach_xfer_hist mxh
ON mxh.mach_xfer_id = mh.mach_xfer_id
JOIN meter_def md ON md.meter_def_id = mh.meter_def_id;
Explain plan for this query:
Plan hash value: 1878059220
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3424M| 497G| | 17M (1)| 56:42:49 |
|* 1 | HASH JOIN | | 3424M| 497G| | 17M (1)| 56:42:49 |
| 2 | TABLE ACCESS FULL | METER_DEF | 423 | 27918 | | 4 | 00:00:01 |
|* 3 | HASH JOIN | | 3424M| 287G| 26G| 16M (1)| 56:38:16 |
| 4 | TABLE ACCESS FULL| MACH_XFER_HIST | 432M| 21G| | 1233K (1)| 04:06:40 |
| 5 | TABLE ACCESS FULL| METER_HIST | 3438M| 115G| | 6299K (2)| 20:59:54 |
Predicate Information (identified by operation id):
1 - access("MD"."METER_DEF_ID"="MH"."METER_DEF_ID")
3 - access("MH"."MACH_XFER_ID"="MXH"."MACH_XFER_ID")
Parameters :
show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 70
optimizer_index_cost_adj integer 50
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
show parameter db_file_multi
db_file_multiblock_read_count integer 8
show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
select sname , pname , pval1 , pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 07-12-2011 09:22
SYSSTATS_INFO DSTOP 07-12-2011 09:52
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 1153.92254
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 4.398
SYSSTATS_MAIN MREADTIM 3.255
SYSSTATS_MAIN CPUSPEED 180
SYSSTATS_MAIN MBRC 8
SYSSTATS_MAIN MAXTHR 244841472
SYSSTATS_MAIN SLAVETHR 933888
show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
Please let me know if any other information is needed. This query is currently taking almost one hour to run.
Also, we have two indexes on the columns xfer_id and meter_def_id in both the tables, but they are not getting used without any filtering (WHERE clause).
Will adding a hint to the query above be of some help?
Thanks and Regards
KK -
I am looking to compile a list of the major performance tuning techniques that can be implemented in an ABAP program.
Appreciate any feedback
J
Hi,
Check these:
http://www.erpgenie.com/abap/performance.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
Performance tuning for Data Selection Statement
For all entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of
entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the
length of the WHERE clause.
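As a conceptual sketch (this is not the ABAP kernel's actual code, and the blocking factor is hard-coded for the demo), the splitting described above can be pictured like this: the driver table's keys are de-duplicated, then chunked, and each chunk becomes one OR'ed WHERE clause in its own SQL statement:

```python
# Hypothetical illustration of how FOR ALL ENTRIES is split into several
# OR'ed WHERE clauses when the driver table exceeds the blocking factor.
def split_for_all_entries(driver_keys, blocking_factor=3):
    """Return one WHERE clause string per chunk of driver keys."""
    # Duplicates in the driver table only bloat the clause, so drop them first.
    keys = sorted(set(driver_keys))
    clauses = []
    for i in range(0, len(keys), blocking_factor):
        chunk = keys[i:i + blocking_factor]
        clauses.append(" OR ".join("mykey = '%s'" % k for k in chunk))
    return clauses

clauses = split_for_all_entries(["A", "B", "B", "C", "D"])
# Two statements instead of one:
#   "mykey = 'A' OR mykey = 'B' OR mykey = 'C'"  and  "mykey = 'D'"
```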
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Some steps that might make FOR ALL ENTRIES more efficient:
Removing duplicates from the the driver table
Sorting the driver table
If possible, convert the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement:
FOR ALL ENTRIES IN i_tab
WHERE mykey >= i_tab-low and
mykey <= i_tab-high.
Nested selects
The plus:
Small amount of data
Mixing processing and reading of data
Easy to code - and understand
The minus:
Large amount of data
when mixed processing isn't needed
Performance killer no. 1
Select using JOINS
The plus
Very large amount of data
Similar to Nested selects - when the accesses are planned by the programmer
In some cases the fastest
Not so memory critical
The minus
Very difficult to program/understand
Mixing processing and reading of data not possible
Use the selection criteria
SELECT * FROM SBOOK.
CHECK: SBOOK-CARRID = 'LH' AND
SBOOK-CONNID = '0400'.
ENDSELECT.
SELECT * FROM SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
ENDSELECT.
Use the aggregated functions
C4A = '000'.
SELECT * FROM T100
WHERE SPRSL = 'D' AND
ARBGB = '00'.
CHECK: T100-MSGNR > C4A.
C4A = T100-MSGNR.
ENDSELECT.
SELECT MAX( MSGNR ) FROM T100 INTO C4A
WHERE SPRSL = 'D' AND
ARBGB = '00'.
Select with view
SELECT * FROM DD01L
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T
WHERE DOMNAME = DD01L-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
SELECT * FROM DD01V
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
Select with index support
SELECT * FROM T100
WHERE ARBGB = '00'
AND MSGNR = '999'.
ENDSELECT.
SELECT * FROM T002.
SELECT * FROM T100
WHERE SPRSL = T002-SPRAS
AND ARBGB = '00'
AND MSGNR = '999'.
ENDSELECT.
ENDSELECT.
Select Into table
REFRESH X006.
SELECT * FROM T006 INTO X006.
APPEND X006.
ENDSELECT
SELECT * FROM T006 INTO TABLE X006.
Select with selection list
SELECT * FROM DD01L
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
ENDSELECT
SELECT DOMNAME FROM DD01L
INTO DD01L-DOMNAME
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
ENDSELECT
Key access to multiple lines
LOOP AT TAB.
CHECK TAB-K = KVAL.
ENDLOOP.
LOOP AT TAB WHERE K = KVAL.
ENDLOOP.
Copying internal tables
REFRESH TAB_DEST.
LOOP AT TAB_SRC INTO TAB_DEST.
APPEND TAB_DEST.
ENDLOOP.
TAB_DEST[] = TAB_SRC[].
Modifying a set of lines
LOOP AT TAB.
IF TAB-FLAG IS INITIAL.
TAB-FLAG = 'X'.
ENDIF.
MODIFY TAB.
ENDLOOP.
TAB-FLAG = 'X'.
MODIFY TAB TRANSPORTING FLAG
WHERE FLAG IS INITIAL.
Deleting a sequence of lines
DO 101 TIMES.
DELETE TAB_DEST INDEX 450.
ENDDO.
DELETE TAB_DEST FROM 450 TO 550.
Linear search vs. binary
READ TABLE TAB WITH KEY K = 'X'.
READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
Comparison of internal tables
DESCRIBE TABLE: TAB1 LINES L1,
TAB2 LINES L2.
IF L1 <> L2.
TAB_DIFFERENT = 'X'.
ELSE.
TAB_DIFFERENT = SPACE.
LOOP AT TAB1.
READ TABLE TAB2 INDEX SY-TABIX.
IF TAB1 <> TAB2.
TAB_DIFFERENT = 'X'. EXIT.
ENDIF.
ENDLOOP.
ENDIF.
IF TAB_DIFFERENT = SPACE.
ENDIF.
IF TAB1[] = TAB2[].
ENDIF.
Modify selected components
LOOP AT TAB.
TAB-DATE = SY-DATUM.
MODIFY TAB.
ENDLOOP.
WA-DATE = SY-DATUM.
LOOP AT TAB.
MODIFY TAB FROM WA TRANSPORTING DATE.
ENDLOOP.
Appending two internal tables
LOOP AT TAB_SRC.
APPEND TAB_SRC TO TAB_DEST.
ENDLOOP
APPEND LINES OF TAB_SRC TO TAB_DEST.
Deleting a set of lines
LOOP AT TAB_DEST WHERE K = KVAL.
DELETE TAB_DEST.
ENDLOOP
DELETE TAB_DEST WHERE K = KVAL.
Tools available in SAP to pin-point a performance problem
The runtime analysis (SE30)
SQL Trace (ST05)
Tips and Tricks tool
The performance database
Optimizing the load of the database
Using table buffering
Using buffered tables improves the performance considerably. Note that in some cases a statement cannot be used with a buffered table, so when using these statements the buffer will be bypassed. These statements are:
Select DISTINCT
ORDER BY / GROUP BY / HAVING clause
Any WHERE clause that contains a subquery or IS NULL expression
JOIN s
A SELECT... FOR UPDATE
If you want to explicitly bypass the buffer, use the BYPASS BUFFER addition to the SELECT clause.
Use the ABAP SORT Clause Instead of ORDER BY
The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
If you are not sorting by the primary key (e.g. using the ORDER BY PRIMARY KEY statement) but are sorting by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note however that for very large result sets it might not be a feasible solution and you would want to let the database server sort it.
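As a rough illustration of this trade-off (not ABAP, and the table and column names are made up for the demo), here is a Python/sqlite3 sketch showing that sorting on the database server with ORDER BY and fetching unsorted then sorting on the application side produce the same result; only the machine doing the work changes:

```python
import sqlite3

# Demo schema, purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sbook (carrid TEXT, price REAL)")
conn.executemany("INSERT INTO sbook VALUES (?, ?)",
                 [("LH", 300.0), ("AA", 120.0), ("LH", 250.0)])

# Variant 1: sort on the database server (ORDER BY).
db_sorted = conn.execute(
    "SELECT carrid, price FROM sbook ORDER BY price").fetchall()

# Variant 2: fetch unsorted, sort on the application side
# (the equivalent of ABAP's SORT on an internal table).
app_sorted = sorted(conn.execute("SELECT carrid, price FROM sbook"),
                    key=lambda row: row[1])

assert db_sorted == app_sorted  # same result, different server does the work
```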
Avoid the SELECT DISTINCT Statement
As with the ORDER BY clause, it could be better to avoid using SELECT DISTINCT if some of the fields are not part of an index. Instead use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table, to delete duplicate rows.
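The SORT + DELETE ADJACENT DUPLICATES idiom above can be sketched in Python (the row data is made up): after sorting, duplicates sit next to each other, so a single linear pass removes them, which is exactly what DELETE ADJACENT DUPLICATES does on an internal table:

```python
# Sketch of SORT + DELETE ADJACENT DUPLICATES (illustrative, not ABAP).
def sort_and_dedupe(rows):
    out = []
    for row in sorted(rows):           # SORT itab
        if not out or out[-1] != row:  # keep only the first of each run
            out.append(row)            # DELETE ADJACENT DUPLICATES
    return out

rows = [("LH", "0400"), ("AA", "0017"), ("LH", "0400")]
deduped = sort_and_dedupe(rows)  # -> [("AA", "0017"), ("LH", "0400")]
```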
Regds
Anver
If this helped, please mark points. -
Can anyone send a tutorial for performance tuning?
Can anyone send a tutorial for performance tuning? I'd like to check my coding.
1. Unused/Dead code
Avoid leaving unused code in the program. Either comment it out or delete it. Use Program --> Check --> Extended Program Check to find variables that are not used statically.
2. Subroutine Usage
For good modularization, the decision of whether or not to execute a subroutine should be made before the subroutine is called. For example:
This is better:
IF f1 NE 0.
PERFORM sub1.
ENDIF.
FORM sub1.
ENDFORM.
Than this:
PERFORM sub1.
FORM sub1.
IF f1 NE 0.
ENDIF.
ENDFORM.
3. Usage of IF statements
When coding IF tests, nest the testing conditions so that the outer conditions are those which are most likely to fail. For logical expressions with AND , place the mostly likely false first and for the OR, place the mostly likely true first.
Example - nested IF's:
IF (least likely to be true).
IF (less likely to be true).
IF (most likely to be true).
ENDIF.
ENDIF.
ENDIF.
Example - IF...ELSEIF...ENDIF :
IF (most likely to be true).
ELSEIF (less likely to be true).
ELSEIF (least likely to be true).
ENDIF.
Example - AND:
IF (least likely to be true) AND
(most likely to be true).
ENDIF.
Example - OR:
IF (most likely to be true) OR
(least likely to be true).
4. CASE vs. nested Ifs
When testing fields "equal to" something, one can use either the nested IF or the CASE statement. The CASE is better for two reasons. It is easier to read and after about five nested IFs the performance of the CASE is more efficient.
5. MOVE statements
When records a and b have the exact same structure, it is more efficient to MOVE a TO b than to MOVE-CORRESPONDING a TO b.
MOVE BSEG TO *BSEG.
is better than
MOVE-CORRESPONDING BSEG TO *BSEG.
6. SELECT and SELECT SINGLE
When using the SELECT statement, study the key and always provide as much of the left-most part of the key as possible. If the entire key can be qualified, code a SELECT SINGLE not just a SELECT. If you are only interested in the first row or there is only one row to be returned, using SELECT SINGLE can increase performance by up to three times.
7. Small internal tables vs. complete internal tables
In general it is better to minimize the number of fields declared in an internal table. While it may be convenient to declare an internal table using the LIKE command, in most cases, programs will not use all fields in the SAP standard table.
For example:
Instead of this:
data: t_mara like mara occurs 0 with header line.
Use this:
data: begin of t_mara occurs 0,
matnr like mara-matnr,
end of t_mara.
8. Row-level processing and SELECT SINGLE
Similar to the processing of a SELECT-ENDSELECT loop, when calling multiple SELECT-SINGLE commands on a non-buffered table (check Data Dictionary -> Technical Info), you should do the following to improve performance:
o Use the SELECT into <itab> to buffer the necessary rows in an internal table, then
o sort the rows by the key fields, then
o use a READ TABLE WITH KEY ... BINARY SEARCH in place of the SELECT SINGLE command. Note that this only make sense when the table you are buffering is not too large (this decision must be made on a case by case basis).
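The three steps of point 8 can be sketched in Python (the table and key values are invented for the demo): buffer the rows once, sort them by key, then answer each lookup with a binary search, which is what READ TABLE ... BINARY SEARCH does:

```python
import bisect

# Sketch: buffer once, sort, then binary-search per lookup, instead of one
# SELECT SINGLE per row. Data is illustrative, not a real SAP table.
buffered = [(4711, "Bolt"), (4712, "Nut"), (4710, "Screw")]  # one SELECT
buffered.sort(key=lambda r: r[0])                            # SORT by key
keys = [r[0] for r in buffered]

def read_binary(matnr):
    """READ TABLE ... WITH KEY matnr BINARY SEARCH."""
    i = bisect.bisect_left(keys, matnr)
    if i < len(keys) and keys[i] == matnr:
        return buffered[i]
    return None  # the ABAP equivalent of sy-subrc <> 0

assert read_binary(4711) == (4711, "Bolt")
assert read_binary(9999) is None
```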
9. READing single records of internal tables
When reading a single record in an internal table, the READ TABLE WITH KEY is not a direct READ. This means that if the data is not sorted according to the key, the system must sequentially read the table. Therefore, you should:
o SORT the table
o use READ TABLE WITH KEY BINARY SEARCH for better performance.
10. SORTing internal tables
When SORTing internal tables, specify the fields to SORTed.
SORT ITAB BY FLD1 FLD2.
is more efficient than
SORT ITAB.
11. Number of entries in an internal table
To find out how many entries are in an internal table use DESCRIBE.
DESCRIBE TABLE ITAB LINES CNTLNS.
is more efficient than
LOOP AT ITAB.
CNTLNS = CNTLNS + 1.
ENDLOOP.
12. Performance diagnosis
To diagnose performance problems, it is recommended to use the SAP transaction SE30, ABAP/4 Runtime Analysis. The utility allows statistical analysis of transactions and programs.
13. Nested SELECTs versus table views
Since release 4.0, OPEN SQL allows both inner and outer table joins. A nested SELECT loop may be used to accomplish the same concept. However, the performance of nested SELECT loops is very poor in comparison to a join. Hence, to improve performance by a factor of 25x and reduce network load, you should either create a view in the data dictionary then use this view to select data, or code the select using a join.
14. If nested SELECTs must be used
As mentioned previously, performance can be dramatically improved by using views instead of nested SELECTs, however, if this is not possible, then the following example of using an internal table in a nested SELECT can also improve performance by a factor of 5x:
Use this:
form select_good.
data: t_vbak like vbak occurs 0 with header line.
data: t_vbap like vbap occurs 0 with header line.
select * from vbak into table t_vbak up to 200 rows.
select * from vbap
for all entries in t_vbak
where vbeln = t_vbak-vbeln.
endselect.
endform.
Instead of this:
form select_bad.
select * from vbak up to 200 rows.
select * from vbap where vbeln = vbak-vbeln.
endselect.
endselect.
endform.
Although using "SELECT...FOR ALL ENTRIES IN..." is generally very fast, you should be aware of the three pitfalls of using it:
Firstly, SAP automatically removes any duplicates from the rest of the retrieved records. Therefore, if you wish to ensure that no qualifying records are discarded, the field list of the inner SELECT must be designed to ensure the retrieved records will contain no duplicates (normally, this would mean including in the list of retrieved fields all of those fields that comprise that table's primary key).
Secondly, if you were able to code "SELECT ... FROM <database table> FOR ALL ENTRIES IN TABLE <itab>" and the internal table <itab> is empty, then all rows from <database table> will be retrieved.
Thirdly, if the internal table supplying the selection criteria (i.e. internal table <itab> in the example "...FOR ALL ENTRIES IN TABLE <itab> ") contains a large number of entries, performance degradation may occur.
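The first pitfall (implicit de-duplication) can be demonstrated with a DISTINCT-style query in Python/sqlite3 (the schema is made up): when the field list omits the key columns, distinct source rows collapse into one; including the full key keeps them all:

```python
import sqlite3

# Illustrative demo of FOR ALL ENTRIES' implicit duplicate removal,
# modeled here with SELECT DISTINCT on an invented sales-item table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vbap (vbeln TEXT, posnr INTEGER, matnr TEXT)")
conn.executemany("INSERT INTO vbap VALUES (?, ?, ?)",
                 [("0001", 10, "M-01"), ("0001", 20, "M-01")])

# Field list without the key columns: the two items collapse into one row.
no_key = conn.execute("SELECT DISTINCT matnr FROM vbap").fetchall()

# Field list including the full primary key: nothing is lost.
with_key = conn.execute(
    "SELECT DISTINCT vbeln, posnr, matnr FROM vbap").fetchall()

assert len(no_key) == 1 and len(with_key) == 2
```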
15. SELECT * versus SELECTing individual fields
In general, use a SELECT statement specifying a list of fields instead of a SELECT * to reduce network traffic and improve performance. For tables with only a few fields the improvements may be minor, but many SAP tables contain more than 50 fields when the program needs only a few. In the latter case, the performance gains can be substantial. For example:
Use:
select vbeln auart vbtyp from vbak
into (vbak-vbeln, vbak-auart, vbak-vbtyp)
where ...
Instead of using:
select * from vbak where ...
16. Avoid unnecessary statements
There are a few cases where one command is better than two. For example:
Use:
append <tab_wa> to <tab>.
Instead of:
<tab> = <tab_wa>.
append <tab> (modify <tab>).
And also, use:
if not <tab>[] is initial.
Instead of:
describe table <tab> lines <line_counter>.
if <line_counter> > 0.
17. Copying or appending internal tables
Use this:
<tab2>[] = <tab1>[]. (if <tab2> is empty)
Instead of this:
loop at <tab1>.
append <tab1> to <tab2>.
endloop.
However, if <tab2> is not empty and should not be overwritten, then use:
append lines of <tab1> [from index1] [to index2] to <tab2>.
P.S : Please reward if you find this useful.. -
Performance tuning issues -- plz help
Hi Tuning gurus
this query works fine for a small number of rows
eg :--
where ROWNUM <= 10 )
where rnum >=1;
but takes a lot of time as we increase the ROWNUM window,
eg :--
where ROWNUM <= 10000 )
where rnum >=9990;
results are posted below
Please suggest.
oracle version -Oracle Database 10g Enterprise Edition
Release 10.2.0.1.0 - Prod
os version red hat enterprise linux ES release 4
Also, the statistics differ when we use the table and its view.
results of view v$mail
select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from v$mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;
result :
11 rows selected.
Elapsed: 00:00:03.84
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
294 recursive calls
0 db block gets
8715 consistent gets
8669 physical reads
0 redo size
7060 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from v$mail;
Elapsed: 00:00:00.17
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
8 recursive calls
0 db block gets
2171 consistent gets
2057 physical reads
260 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
results of original table mail
select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;
result :
11 rows selected.
Elapsed: 00:00:03.21
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
1 recursive calls
119544 db block gets
8686 consistent gets
8648 physical reads
0 redo size
13510 bytes sent via SQL*Net to client
4084 bytes received via SQL*Net from client
41 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from mail;
Elapsed: 00:00:00.34
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
1 recursive calls
0 db block gets
2183 consistent gets
2062 physical reads
72 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Thanks n regards
PS: Sorry, I could not preserve the format.
Message was edited by:
Cool_Jr.DBA
Cool_Jr.DBA
Just to answer the OP's fundamental question:
The query starts off quick (rows between 1 and 10)
but gets increasingly slower as the start of the
window increases (eg to row 1000, 10,000, etc).
The original (unsorted) query would get first rows
very quickly, but each time you move the window, it
has to fetch and discard an increasing number of rows
before it finds the first one you want. So the time
taken is proportional to the rownumber you have
reached.
With Charles's correction (which is unavoidable), the
entire query has to be retrieved and sorted
before the rows you want can be returned. That's
horribly inefficient. This technique works for small
sets (eg 10 - 1000 rows) but I can't tell you how
wrong it is to process data in this way especially if
you are expecting lacs (that's 100,000s isn't
it) of rows returned. You are pounding your database
simply to give you the option of being able to go
back as well as forwards in your query results. The
time taken is proportional to the total number of
rows (so the time to get to the end of the entire set
is proportional to the square of the total
number of rows.
If you really need to page back and forth
through large sets, consider one of the following
options:
1) saving the set (eg as a materialised view or in a
temp table - and include "row number" as an indexed
column)
2) retrieve ALL the rowids into an array/collection
in a single pass, then go get 10 rows by rowid for
each page
3) assuming you can sort by a unique identifier, use
that (instead of rownumber) to remember the first row
in each page; use a range scan on the index on that
UID to get back the rows you want quickly (doing this
with a non-unique sort key is quite a bit harder)
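Option 3 (keyset pagination) can be sketched with Python/sqlite3 (the mail table here is invented for the demo): instead of a row number, remember the last unique key on the page, so the next page is an indexed range scan rather than fetching and discarding all preceding rows:

```python
import sqlite3

# Keyset-pagination sketch: page by the last seen unique id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mail (mail_id INTEGER PRIMARY KEY, subject TEXT)")
conn.executemany("INSERT INTO mail VALUES (?, ?)",
                 [(i, "msg %d" % i) for i in range(1, 101)])

def next_page(last_id, page_size=10):
    """Fetch the page after last_id via a range scan on the primary key."""
    return conn.execute(
        "SELECT mail_id, subject FROM mail "
        "WHERE mail_id > ? ORDER BY mail_id LIMIT ?",
        (last_id, page_size)).fetchall()

page1 = next_page(0)             # rows 1..10
page2 = next_page(page1[-1][0])  # rows 11..20, nothing fetched and discarded

assert [r[0] for r in page1] == list(range(1, 11))
assert [r[0] for r in page2] == list(range(11, 21))
```

With a non-unique sort key, the same idea needs the unique id appended as a tie-breaker in both the ORDER BY and the WHERE comparison, which is the harder variant mentioned above.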
Remember also that if someone else inserts into your
table while you are paging around, some of these
methods can give confusing results - because every
time you start a new query, you get a new
read-consistent point.
Anyway, try to redesign so you don't need to page
through lacs of rows....
HTH
Regards, Nigel
You are correct regarding the OP's original SQL statement that:
"the entire query has to be retrieved and sorted before the rows you want can be returned"
However, that is not the case with the SQL statement that I posted. The problem with the SQL statement I posted is that Oracle insists on performing full tablescans on the table. The following is a full test run with 2,000,000 rows in a table, including an analysis of the problem, and a method of working around the problem:
CREATE TABLE T1 (
MAIL_ID NUMBER(10),
USER_ID NUMBER(10),
FOLDER_ID NUMBER(10),
MAIL_DATE DATE,
PRIMARY KEY(MAIL_ID));
CREATE INDEX T1_USER_FOLDER ON T1(USER_ID,FOLDER_ID);
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID);
INSERT INTO
T1
SELECT
ROWNUM MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
INSERT INTO
T1
SELECT
ROWNUM+1000000 MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
<br>
COMMIT;
<br>
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
<br>
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
<br>
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
|* 1 | HASH JOIN | | 1 | 8801 | 10 |00:00:15.62 | 13610 | 1010K| 1010K| 930K (0)|
|* 2 | VIEW | | 1 | 8801 | 10 |00:00:00.34 | 6805 | | | |
|* 3 | WINDOW SORT PUSHED RANK| | 1 | 8801 | 910 |00:00:00.34 | 6805 | 74752 | 74752 |65536 (0)|
|* 4 | TABLE ACCESS FULL | T1 | 1 | 8801 | 8630 |00:00:00.29 | 6805 | | | |
| 5 | TABLE ACCESS FULL | T1 | 1 | 2000K| 2000K|00:00:04.00 | 6805 | | | |
<br>
Predicate Information (identified by operation id):
1 - access("MAIL_ID"="M"."MAIL_ID")
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("MAIL_DATE") DESC )<=909)
4 - filter(("USER_ID"=6 AND "FOLDER_ID"=1))
The above performed two tablescans of the T1 table and required 15.6 seconds to complete, which was not the desired result. Now, create an index that will be helpful for the query, and provide Oracle an additional hint:
(http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html "Pagination in Getting Rows N Through M" shows a similar approach)
DROP INDEX T1_USER_FOLDER_MAIL;
<br>
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID,MAIL_DATE DESC,MAIL_ID);
<br>
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
<br>
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
<br>
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.01 | 47 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.01 | 7 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 909 |00:00:00.01 | 7 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 910 |00:00:00.01 | 7 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
<br>
Predicate Information (identified by operation id):
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=909)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")
The above made use of both indexes and completed in 0.01 seconds.
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 8600 AND 8609) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
<br>
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.11 | 81 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.11 | 41 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 8609 |00:00:00.09 | 41 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 8610 |00:00:00.05 | 41 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
<br>
Predicate Information (identified by operation id):
2 - filter(("RN">=8600 AND "RN"<=8609))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=8609)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")
The above made use of both indexes and completed in 0.11 seconds.
As the above shows, it is possible to efficiently retrieve the desired records very rapidly without having to leave the cursor open.
If this SQL statement will be used in a web browser, it probably does not make sense to leave the cursor open. If the SQL statement will be used in an application that maintains state, and the user is expected to always page from the first row toward the last, then leaving the cursor open and reading rows as needed makes sense.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Query Degradation--Hash Join Degraded
Hi All,
I found one query degradation issue. I am on 10.2.0.3.0 (Sun OS) with optimizer_mode=ALL_ROWS.
This is a data warehouse db.
All 3 tables involved are partitioned tables (with daily partitions). Partitions are created in advance and ELT jobs bulk-load data into the daily partitions.
I have checked that the CBO is not using the local indexes created on them, which I believe is appropriate, because when I used an INDEX hint the elapsed time increased.
I checked giving an index hint for each table one by one but didn't get any performance improvement.
Partitions are loaded daily and, after loading, partition-level stats are gathered with dbms_stats.
We are collecting stats at partition level (granularity=>'PARTITION'). Even after collecting global stats, there is no change in the access pattern. The stats gather command is given below.
PROCEDURE gather_table_part_stats(i_owner_name,i_table_name,i_part_name,i_estimate:= DBMS_STATS.AUTO_SAMPLE_SIZE, i_invalidate IN VARCHAR2 := 'Y',i_debug:= 'N')
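For reference, a direct DBMS_STATS call combining partition-level and global statistics might look like the sketch below. The owner/table/partition names are taken from the post; the granularity choice and other parameters are suggestions, not the poster's actual procedure:

```sql
-- Hedged sketch: gather partition-level stats while also keeping
-- global stats current, so the optimizer has usable global figures
-- when the partition key is supplied via a bind variable.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SYSADM',
    tabname          => 'SOT_KEYMAP',
    partname         => 'P20090219',
    granularity      => 'GLOBAL AND PARTITION',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE,
    no_invalidate    => FALSE);
END;
/
```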
Only SOT_KEYMAP.IPK_SOT_KEYMAP is GLOBAL.Rest all indexes are LOCAL.
Earlier, we were having a bind peeking issue, which I fixed by introducing NO_INVALIDATE=>FALSE in the stats gather job.
Here, the partition name (20090219) is being passed through bind variables.
SELECT a.sotrelstg_sot_ud sotcrct_sot_ud,
b.sotkey_ud sotcrct_orig_sot_ud, a.ROWID stage_rowid
FROM (SELECT sotrelstg_sot_ud, sotrelstg_sys_ud,
sotrelstg_orig_sys_ord_id, sotrelstg_orig_sys_ord_vseq
FROM sot_rel_stage
WHERE sotrelstg_trd_date_ymd_part = '20090219'
AND sotrelstg_crct_proc_stat_cd = 'N'
AND sotrelstg_sot_ud NOT IN(
SELECT sotcrct_sot_ud
FROM sot_correct
WHERE sotcrct_trd_date_ymd_part ='20090219')) a,
(SELECT MAX(sotkey_ud) sotkey_ud, sotkey_sys_ud,
sotkey_sys_ord_id, sotkey_sys_ord_vseq,
sotkey_trd_date_ymd_part
FROM sot_keymap
WHERE sotkey_trd_date_ymd_part = '20090219'
AND sotkey_iud_cd = 'I'
--not to select logical deleted rows
GROUP BY sotkey_trd_date_ymd_part,
sotkey_sys_ud,
sotkey_sys_ord_id,
sotkey_sys_ord_vseq) b
WHERE a.sotrelstg_sys_ud = b.sotkey_sys_ud
AND a.sotrelstg_orig_sys_ord_id = b.sotkey_sys_ord_id
AND NVL(a.sotrelstg_orig_sys_ord_vseq, 1) = NVL(b.sotkey_sys_ord_vseq, 1);
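For reference, a sketch of the NO_MERGE variant discussed later in the thread (untested here; it simply pins the GROUP BY inline view so it is aggregated before being joined, rather than letting the optimizer merge it into the join):

```sql
SELECT a.sotrelstg_sot_ud  sotcrct_sot_ud,
       b.sotkey_ud         sotcrct_orig_sot_ud,
       a.ROWID             stage_rowid
FROM   sot_rel_stage a,
       (SELECT /*+ NO_MERGE */   -- keep the GROUP BY from being merged into the join
               MAX(sotkey_ud) sotkey_ud,
               sotkey_sys_ud, sotkey_sys_ord_id, sotkey_sys_ord_vseq
        FROM   sot_keymap
        WHERE  sotkey_trd_date_ymd_part = '20090219'
        AND    sotkey_iud_cd = 'I'
        GROUP BY sotkey_trd_date_ymd_part, sotkey_sys_ud,
                 sotkey_sys_ord_id, sotkey_sys_ord_vseq) b
WHERE  a.sotrelstg_trd_date_ymd_part = '20090219'
AND    a.sotrelstg_crct_proc_stat_cd = 'N'
AND    a.sotrelstg_sot_ud NOT IN (SELECT sotcrct_sot_ud
                                  FROM   sot_correct
                                  WHERE  sotcrct_trd_date_ymd_part = '20090219')
AND    a.sotrelstg_sys_ud = b.sotkey_sys_ud
AND    a.sotrelstg_orig_sys_ord_id = b.sotkey_sys_ord_id
AND    NVL(a.sotrelstg_orig_sys_ord_vseq, 1) = NVL(b.sotkey_sys_ord_vseq, 1);
```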
During normal business hours, I found that the query takes 5-7 min (which is also not acceptable), but during high-load business hours it takes 30-50 min.
I found that most of the time is spent on the HASH JOIN (direct path write temp). We have sufficient RAM (64 GB total / 41 GB available).
Below is the execution plan I got during normal business hours.
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem | Used-Mem | Used-Tmp|
| 1 | HASH GROUP BY | | 1 | 1 | 7844K|00:05:28.78 | 16M| 217K| 35969 | | | | |
|* 2 | HASH JOIN | | 1 | 1 | 9977K|00:04:34.02 | 16M| 202K| 20779 | 580M| 10M| 563M (1)| 650K|
| 3 | NESTED LOOPS ANTI | | 1 | 6 | 7855K|00:01:26.41 | 16M| 1149 | 0 | | | | |
| 4 | PARTITION RANGE SINGLE| | 1 | 258K| 8183K|00:00:16.37 | 25576 | 1149 | 0 | | | | |
|* 5 | TABLE ACCESS FULL | SOT_REL_STAGE | 1 | 258K| 8183K|00:00:16.37 | 25576 | 1149 | 0 | | | | |
| 6 | PARTITION RANGE SINGLE| | 8183K| 326K| 327K|00:01:10.53 | 16M| 0 | 0 | | | | |
|* 7 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 8183K| 326K| 327K|00:00:53.37 | 16M| 0 | 0 | | | | |
| 8 | PARTITION RANGE SINGLE | | 1 | 846K| 14M|00:02:06.36 | 289K| 180K| 0 | | | | |
|* 9 | TABLE ACCESS FULL | SOT_KEYMAP | 1 | 846K| 14M|00:01:52.32 | 289K| 180K| 0 | | | | |
I will attach the same for high-load business hours once the query returns results. It has been executing for the last 50 mins.
INDEX STATS (INDEXES ARE LOCAL INDEXES)
TABLE_NAME INDEX_NAME COLUMN_NAME COLUMN_POSITION NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
SOT_REL_STAGE IDXL_SOTRELSTG_SOT_UD SOTRELSTG_SOT_UD 1 25461560 25461560 184180
SOT_REL_STAGE SOTRELSTG_TRD_DATE 2 25461560 25461560 184180
_YMD_PART
TABLE_NAME INDEX_NAME COLUMN_NAME COLUMN_POSITION NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
SOT_KEYMAP IDXL_SOTKEY_ENTORDSYS_UD SOTKEY_ENTRY_ORD_S 1 1012306940 3 38308680
YS_UD
SOT_KEYMAP IDXL_SOTKEY_HASH SOTKEY_HASH 1 1049582320 1049582320 1049579520
SOT_KEYMAP SOTKEY_TRD_DATE_YM 2 1049582320 1049582320 1049579520
D_PART
SOT_KEYMAP IDXL_SOTKEY_SOM_ORD SOTKEY_SOM_UD 1 1023998560 268949136 559414840
SOT_KEYMAP SOTKEY_SYS_ORD_ID 2 1023998560 268949136 559414840
SOT_KEYMAP IPK_SOT_KEYMAP SOTKEY_UD 1 1030369480 1015378900 24226580
TABLE_NAME INDEX_NAME COLUMN_NAME COLUMN_POSITION NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
SOT_CORRECT IDXL_SOTCRCT_SOT_UD SOTCRCT_SOT_UD 1 412484756 412484756 411710982
SOT_CORRECT SOTCRCT_TRD_DATE_Y 2 412484756 412484756 411710982
MD_PART
INDEX partiton stas (from dba_ind_partitions)
INDEX_NAME PARTITION_NAME STATUS BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE LAST_ANALYZ GLO
IDXL_SOTCRCT_SOT_UD P20090219 USABLE 1 372 327879 216663 327879 327879 20-Feb-2009 YES
IDXL_SOTKEY_ENTORDSYS_UD P20090219 USABLE 2 2910 3 36618 856229 856229 19-Feb-2009 YES
IDXL_SOTKEY_HASH P20090219 USABLE 2 7783 853956 853914 853956 119705 19-Feb-2009 YES
IDXL_SOTKEY_SOM_ORD P20090219 USABLE 2 6411 531492 157147 799758 132610 19-Feb-2009 YES
IDXL_SOTRELSTG_SOT_UD P20090219 USABLE 2 13897 9682052 45867 9682052 794958 20-Feb-2009 YES
Thanks in advance.
Bhavik Desai
Hi Randolf,
Thanks for the time you spent on this issue.I appreciate it.
Please see my comments below:
1. You've mentioned several times that you're passing the partition name as bind variable, but you're obviously testing the statement with literals rather than bind
variables. So your tests obviously don't reflect what is going to happen in case of the actual execution. The cardinality estimates are potentially quite different when
using bind variables for the partition key.
Yes, I intentionally used literals in my tests. I found a couple of times that the plan used by the application and the plan generated by the AUTOTRACE+EXPLAIN PLAN command are the same, and it caused hour-long elapsed times.
As I pointed out earlier, last month we solved a couple of bind peeking issues by introducing NO_INVALIDATE=>FALSE in the stats gather procedure, which we execute just after the data load into such daily partitions and before the start of the jobs which execute this query.
Execution plans from AWR (with parallelism on at table level, DEGREE>1). This plan is the one the CBO used when the degradation occurred; it is the plan used most of the time.
ELAPSED_TIME_DELTA BUFFER_GETS_DELTA DISK_READS_DELTA CURSOR(SELECT*FROMTA
1918506000 46154275 918 CURSOR STATEMENT : 4
CURSOR STATEMENT : 4
PLAN_TABLE_OUTPUT
SQL_ID 39708a3azmks7
SELECT A.SOTRELSTG_SOT_UD SOTCRCT_SOT_UD, B.SOTKEY_UD SOTCRCT_ORIG_SOT_UD, A.ROWID STAGE_ROWID FROM (SELECT SOTRELSTG_SOT_UD,
SOTRELSTG_SYS_UD, SOTRELSTG_ORIG_SYS_ORD_ID, SOTRELSTG_ORIG_SYS_ORD_VSEQ FROM SOT_REL_STAGE WHERE SOTRELSTG_TRD_DATE_YMD_PART = :B1 AND
SOTRELSTG_CRCT_PROC_STAT_CD = 'N' AND SOTRELSTG_SOT_UD NOT IN( SELECT SOTCRCT_SOT_UD FROM SOT_CORRECT WHERE SOTCRCT_TRD_DATE_YMD_PART =
:B1 )) A, (SELECT MAX(SOTKEY_UD) SOTKEY_UD, SOTKEY_SYS_UD, SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ, SOTKEY_TRD_DATE_YMD_PART FROM
SOT_KEYMAP WHERE SOTKEY_TRD_DATE_YMD_PART = :B1 AND SOTKEY_IUD_CD = 'I' GROUP BY SOTKEY_TRD_DATE_YMD_PART, SOTKEY_SYS_UD,
SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ) B WHERE A.SOTRELSTG_SYS_UD = B.SOTKEY_SYS_UD AND A.SOTRELSTG_ORIG_SYS_ORD_ID =
B.SOTKEY_SYS_ORD_ID AND NVL(A.SOTRELSTG_ORIG_SYS_ORD_VSEQ, 1) = NVL(B.SOTKEY_SYS_ORD_VSEQ, 1)
Plan hash value: 1213870831
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | 19655 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,03 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,02 | P->P | HASH |
| 6 | HASH GROUP BY | | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,02 | PCWP | |
| 7 | NESTED LOOPS ANTI | | 1 | 116 | 19654 (1)| 00:05:54 | | | Q1,02 | PCWP | |
| 8 | HASH JOIN | | 1 | 102 | 19654 (1)| 00:05:54 | | | Q1,02 | PCWP | |
| 9 | PX JOIN FILTER CREATE| :BF0000 | 13M| 664M| 2427 (3)| 00:00:44 | | | Q1,02 | PCWP | |
| 10 | PX RECEIVE | | 13M| 664M| 2427 (3)| 00:00:44 | | | Q1,02 | PCWP | |
| 11 | PX SEND HASH | :TQ10000 | 13M| 664M| 2427 (3)| 00:00:44 | | | Q1,00 | P->P | HASH |
| 12 | PX BLOCK ITERATOR | | 13M| 664M| 2427 (3)| 00:00:44 | KEY | KEY | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL| SOT_REL_STAGE | 13M| 664M| 2427 (3)| 00:00:44 | KEY | KEY | Q1,00 | PCWP | |
| 14 | PX RECEIVE | | 27M| 1270M| 17209 (1)| 00:05:10 | | | Q1,02 | PCWP | |
| 15 | PX SEND HASH | :TQ10001 | 27M| 1270M| 17209 (1)| 00:05:10 | | | Q1,01 | P->P | HASH |
| 16 | PX JOIN FILTER USE | :BF0000 | 27M| 1270M| 17209 (1)| 00:05:10 | | | Q1,01 | PCWP | |
| 17 | PX BLOCK ITERATOR | | 27M| 1270M| 17209 (1)| 00:05:10 | KEY | KEY | Q1,01 | PCWC | |
| 18 | TABLE ACCESS FULL| SOT_KEYMAP | 27M| 1270M| 17209 (1)| 00:05:10 | KEY | KEY | Q1,01 | PCWP | |
| 19 | PARTITION RANGE SINGLE| | 16185 | 221K| 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
| 20 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 16185 | 221K| 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
Other Execution plan from AWR
ELAPSED_TIME_DELTA BUFFER_GETS_DELTA DISK_READS_DELTA CURSOR(SELECT*FROMTA
1053251381 0 2925 CURSOR STATEMENT : 4
CURSOR STATEMENT : 4
PLAN_TABLE_OUTPUT
SQL_ID 39708a3azmks7
SELECT A.SOTRELSTG_SOT_UD SOTCRCT_SOT_UD, B.SOTKEY_UD SOTCRCT_ORIG_SOT_UD, A.ROWID STAGE_ROWID FROM (SELECT SOTRELSTG_SOT_UD,
SOTRELSTG_SYS_UD, SOTRELSTG_ORIG_SYS_ORD_ID, SOTRELSTG_ORIG_SYS_ORD_VSEQ FROM SOT_REL_STAGE WHERE SOTRELSTG_TRD_DATE_YMD_PART = :B1 AND
SOTRELSTG_CRCT_PROC_STAT_CD = 'N' AND SOTRELSTG_SOT_UD NOT IN( SELECT SOTCRCT_SOT_UD FROM SOT_CORRECT WHERE SOTCRCT_TRD_DATE_YMD_PART =
:B1 )) A, (SELECT MAX(SOTKEY_UD) SOTKEY_UD, SOTKEY_SYS_UD, SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ, SOTKEY_TRD_DATE_YMD_PART FROM
SOT_KEYMAP WHERE SOTKEY_TRD_DATE_YMD_PART = :B1 AND SOTKEY_IUD_CD = 'I' GROUP BY SOTKEY_TRD_DATE_YMD_PART, SOTKEY_SYS_UD,
SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ) B WHERE A.SOTRELSTG_SYS_UD = B.SOTKEY_SYS_UD AND A.SOTRELSTG_ORIG_SYS_ORD_ID =
B.SOTKEY_SYS_ORD_ID AND NVL(A.SOTRELSTG_ORIG_SYS_ORD_VSEQ, 1) = NVL(B.SOTKEY_SYS_ORD_VSEQ, 1)
Plan hash value: 3434900850
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | 1830 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,03 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,02 | P->P | HASH |
| 6 | HASH GROUP BY | | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,02 | PCWP | |
| 7 | NESTED LOOPS ANTI | | 1 | 131 | 1829 (2)| 00:00:33 | | | Q1,02 | PCWP | |
| 8 | HASH JOIN | | 1 | 117 | 1829 (2)| 00:00:33 | | | Q1,02 | PCWP | |
| 9 | PX JOIN FILTER CREATE| :BF0000 | 1010K| 50M| 694 (1)| 00:00:13 | | | Q1,02 | PCWP | |
| 10 | PX RECEIVE | | 1010K| 50M| 694 (1)| 00:00:13 | | | Q1,02 | PCWP | |
| 11 | PX SEND HASH | :TQ10000 | 1010K| 50M| 694 (1)| 00:00:13 | | | Q1,00 | P->P | HASH |
| 12 | PX BLOCK ITERATOR | | 1010K| 50M| 694 (1)| 00:00:13 | KEY | KEY | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL| SOT_KEYMAP | 1010K| 50M| 694 (1)| 00:00:13 | KEY | KEY | Q1,00 | PCWP | |
| 14 | PX RECEIVE | | 11M| 688M| 1129 (3)| 00:00:21 | | | Q1,02 | PCWP | |
| 15 | PX SEND HASH | :TQ10001 | 11M| 688M| 1129 (3)| 00:00:21 | | | Q1,01 | P->P | HASH |
| 16 | PX JOIN FILTER USE | :BF0000 | 11M| 688M| 1129 (3)| 00:00:21 | | | Q1,01 | PCWP | |
| 17 | PX BLOCK ITERATOR | | 11M| 688M| 1129 (3)| 00:00:21 | KEY | KEY | Q1,01 | PCWC | |
| 18 | TABLE ACCESS FULL| SOT_REL_STAGE | 11M| 688M| 1129 (3)| 00:00:21 | KEY | KEY | Q1,01 | PCWP | |
| 19 | PARTITION RANGE SINGLE| | 5209 | 72926 | 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
| 20 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 5209 | 72926 | 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
EXECUTION PLAN AFTER SETTING DEGREE=1 (It was also degraded)
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 129 | | 42336 (2)| 00:12:43 | | |
| 1 | HASH GROUP BY | | 1 | 129 | | 42336 (2)| 00:12:43 | | |
| 2 | NESTED LOOPS ANTI | | 1 | 129 | | 42335 (2)| 00:12:43 | | |
|* 3 | HASH JOIN | | 1 | 115 | 51M| 42334 (2)| 00:12:43 | | |
| 4 | PARTITION RANGE SINGLE| | 846K| 41M| | 8241 (1)| 00:02:29 | 81 | 81 |
|* 5 | TABLE ACCESS FULL | SOT_KEYMAP | 846K| 41M| | 8241 (1)| 00:02:29 | 81 | 81 |
| 6 | PARTITION RANGE SINGLE| | 8161K| 490M| | 12664 (3)| 00:03:48 | 81 | 81 |
|* 7 | TABLE ACCESS FULL | SOT_REL_STAGE | 8161K| 490M| | 12664 (3)| 00:03:48 | 81 | 81 |
| 8 | PARTITION RANGE SINGLE | | 6525K| 87M| | 1 (0)| 00:00:01 | 81 | 81 |
|* 9 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 6525K| 87M| | 1 (0)| 00:00:01 | 81 | 81 |
Predicate Information (identified by operation id):
3 - access("SOTRELSTG_SYS_UD"="SOTKEY_SYS_UD" AND "SOTRELSTG_ORIG_SYS_ORD_ID"="SOTKEY_SYS_ORD_ID" AND
NVL("SOTRELSTG_ORIG_SYS_ORD_VSEQ",1)=NVL("SOTKEY_SYS_ORD_VSEQ",1))
5 - filter("SOTKEY_TRD_DATE_YMD_PART"=20090219 AND "SOTKEY_IUD_CD"='I')
7 - filter("SOTRELSTG_CRCT_PROC_STAT_CD"='N' AND "SOTRELSTG_TRD_DATE_YMD_PART"=20090219)
9 - access("SOTRELSTG_SOT_UD"="SOTCRCT_SOT_UD" AND "SOTCRCT_TRD_DATE_YMD_PART"=20090219)
2. Why are you passing the partition name as a bind variable? A statement executing 5 mins at best and > 2 hours at worst obviously doesn't suffer from hard parsing issues, and therefore doesn't need to (shouldn't) share execution plans. So I strongly suggest using literals instead of bind variables. This also solves any potential issues caused by bind variable peeking.
This is a custom application which uses bind variables to extract data from the daily partitions, so the extract from the daily partitions runs automatically each day after the load and ELT process.
Here, the value of the bind variable is passed through a procedure parameter. It would be a bit difficult to use literals in such an application.
3. All your posted plans suffer from bad cardinality estimates. The NO_MERGE hint suggested by Timur only provided (significant) damage limitation by reducing the row source size via the GROUP BY operation before joining; the optimizer is still way off. Apart from the obviously wrong join order (larger row set first), the NESTED LOOP operation in particular is causing the main trouble due to excessive logical I/O, as already pointed out by Timur.
Can I ask for alternatives to the NESTED LOOP?
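For reference, one untested possibility is to steer the NOT IN anti-join toward a hash anti-join rather than the nested-loop anti-join shown in the plan, by hinting the subquery (HASH_AJ is a legacy but still-valid hint on 10g; this is a sketch, not something tested in the thread):

```sql
-- Sketch: request a hash anti-join for the NOT IN subquery.
SELECT sotrelstg_sot_ud
FROM   sot_rel_stage
WHERE  sotrelstg_trd_date_ymd_part = '20090219'
AND    sotrelstg_sot_ud NOT IN (
         SELECT /*+ HASH_AJ */ sotcrct_sot_ud
         FROM   sot_correct
         WHERE  sotcrct_trd_date_ymd_part = '20090219');
```

Whether this helps depends on fixing the cardinality estimates first; with estimates this far off, any forced join method is a gamble.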
4. Your PLAN_TABLE seems to be old (you should see a corresponding note at the bottom of the DBMS_XPLAN.DISPLAY output), because none of the operations have filter/access predicate information attached. Since your main issue is the bad cardinality estimates, I strongly suggest dropping any existing PLAN_TABLEs in any non-Oracle-owned schemas, because 10g already provides one in the SYS schema (GTT PLAN_TABLE$) exposed via a public synonym, so that the EXPLAIN PLAN output provides the "Predicate Information" section below the plan covering the filter/access predicates.
Please post a revised explain plan output including this crucial information so that we get a clue why the cardinality estimates are way off.
I have dropped the old plan table and got the execution plan (listed above in the first point) with PREDICATE information.
"As already mentioned the usage of bind variables for the partition name makes this issue potentially worse."
Is there any workaround without replacing the bind variable? I am on 10g, so 11g's features will not help!
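One hedged workaround on 10g, without touching the calling application, is to build the statement with native dynamic SQL inside the procedure so the partition key becomes a literal at parse time. The sketch below uses a trivial query for brevity; the variable names are hypothetical and the same concatenation technique would apply to the full statement:

```sql
-- Hypothetical sketch: turn the bind value into a literal via dynamic
-- SQL, so each partition gets its own hard parse and its own
-- partition-specific cardinality estimates.
DECLARE
  l_part VARCHAR2(8) := '20090219';  -- would come from the procedure parameter
  l_cnt  NUMBER;
BEGIN
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM sot_rel_stage '
    || 'WHERE sotrelstg_trd_date_ymd_part = ''' || l_part || ''''
    INTO l_cnt;
END;
/
```

The trade-off is one hard parse per partition value, which is usually acceptable for a once-a-day batch query.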
How are you gathering the statistics daily, can you post the exact command(s) used?
gather_table_part_stats(i_owner_name,i_table_name,i_part_name,i_estimate:= DBMS_STATS.AUTO_SAMPLE_SIZE, i_invalidate IN VARCHAR2 := 'Y',i_debug:= 'N')
Thanks & Regards,
Bhavik Desai -
Performance tuning in merge command
Hi,
My merge statement takes 45 mins to merge the source table into the target.
Below is the execution plan.
This is the number of rows merged:
906703 rows merged.
Later I need to merge the daily stage table into the target table -- 14701358 rows.
I suspect this will take more than two hrs. How can I improve this?
My merge syntax is as follows:
MERGE /*+ INDEX_FFS(s idx_pis_test1a_branchcode idx_pis_test1a_invoiceno idx_pis_test1a_vouchertype idx_pis_test1a_shpmtno ) */ into scott.pis_test1_A s
using
(select * from scott.pis_test11) t
on
(s.BRANCH_CODE =t.BRANCH_CODE
and nvl(s.shpmt_no,'X')=nvl(t.shpmt_no,'X')
and s.invoice_no=t.invoice_no
and s.VOUCHER_TYPE=t.VOUCHER_TYPE
and s.suffix_invoice=t.suffix_invoice
and TRIM(s.suffix_shpmt)=TRIM(t.suffix_shpmt))
when not matched
then
INSERT
values
commit
Execution Plan
0 MERGE STATEMENT Optimizer=ALL_ROWS (Cost=32310 Card=1115245
Bytes=649072590)
1 0 MERGE OF 'PIS_TEST1_A'
2 1 VIEW
3 2 HASH JOIN (RIGHT OUTER) (Cost=32310 Card=1115245 Bytes
=630113425)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'PIS_TEST1_A' (TABL
E) (Cost=827 Card=658889 Bytes=188442254)
5 4 INDEX (FULL SCAN) OF 'IDX_PIS_TEST1A_BRANCHCODE' (
INDEX) (Cost=26 Card=658889)
6 3 TABLE ACCESS (FULL) OF 'PIS_TEST1' (TABLE) (Cost=674
6 Card=1115245 Bytes=311153355)
Hi Justin,
my answers are below:
1) Are your statistics accurate and up to date?
No, I did not gather stats. I tried with stats on the target table; no improvement.
2) Why do you have to hint the query? What happens if you remove the hint?
If I take out the hint it takes a bit longer.
3) Why are you merging 14 million rows every day? Are you sure that you have to update the existing
rows and that you can't just load each day's records into a new partition with a new effective date?
Yes, I need to load 14 million records every day and merge with the target table every day.
The reason I cannot get one day's records is that the text file comes with appended records (the whole year's data in the text file).
Currently in my merge there is no update, only insert.
Justin
Hi Greg,
my answers are below:
As Justin mentioned, this index hint seems unlikely to help matters
(is this hint even syntactically correct anyway?). Depending on your responses to
Justin's questions and the number of columns you are updating,
you could also consider adding condition(s) to the ON clause that specifically
check that the columns you want to update are actually different. The actual act of updating
can be a lot slower than just checking for differences.
AND s.col1 != t.col1
-- I am not updating at all, only inserting
If you are running this only once a day, you could also consider a /*+ parallel */ hint (or hints) to speed things up.
I cannot use the parallel & append hints because I only use insert in the merge.
If I use the parallel & append hints, the index gets corrupted, because I only use insert in the merge.
rds
ak
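Since the MERGE above has no WHEN MATCHED branch, an insert-only rewrite is also worth testing. This is an untested sketch, assuming the six ON columns identify a target row; it replaces the MERGE with a plain INSERT plus an anti-join, which gives the optimizer a simpler shape to work with:

```sql
-- Hedged alternative to an insert-only MERGE: anti-join the stage
-- table against the target and insert only the missing rows.
INSERT INTO scott.pis_test1_a
SELECT t.*
FROM   scott.pis_test11 t
WHERE  NOT EXISTS (
         SELECT 1
         FROM   scott.pis_test1_a s
         WHERE  s.branch_code            = t.branch_code
         AND    NVL(s.shpmt_no, 'X')     = NVL(t.shpmt_no, 'X')
         AND    s.invoice_no             = t.invoice_no
         AND    s.voucher_type           = t.voucher_type
         AND    s.suffix_invoice         = t.suffix_invoice
         AND    TRIM(s.suffix_shpmt)     = TRIM(t.suffix_shpmt));
COMMIT;
```

Note the NVL and TRIM on the join columns defeat plain index access just as they do in the MERGE; function-based indexes on those expressions (or cleaning the data at load time) would be the next thing to investigate.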
Performance tuning in 11g for said query
Hello,
I have the below query which needs to be tuned:
SELECT DISTINCT a.emplid
, a.upa_fiscal_year
, b.balance_period
, 0
, a.descr
, a.UPA_CURRENT_AMT
, a.UPA_CURRENT_AMT2
, a.UPA_CURRENT_AMT3
, a.UPA_CURRENT_AMT4
FROM ps_upa_eq_incal_vw a
, ps_upa_eq_incal_vw b
WHERE a.emplid = b.emplid
AND a.UPA_FISCAL_YEAR = b.UPA_FISCAL_YEAR
AND a.balance_period = (
SELECT MAX(balance_period)
FROM ps_upa_eq_incal_vw
WHERE emplid = a.emplid
AND upa_fiscal_year = a.upa_fiscal_year
AND descr = a.descr
AND balance_period <= b.balance_period)
AND a.descr NOT IN (
SELECT DISTINCT descr
FROM ps_upa_eq_incal_vw
WHERE emplid = b.emplid
AND upa_fiscal_year = b.upa_fiscal_year
AND balance_period = b.balance_period
AND b.balance_period > a.balance_period )
UNION
SELECT emplid
, upa_fiscal_year
, balance_period
, 1
, 'Total'
, SUM ( UPA_CURRENT_AMT )
, SUM ( UPA_CURRENT_AMT2 )
, SUM ( UPA_CURRENT_AMT3 )
, SUM ( UPA_CURRENT_AMT4 )
FROM (
SELECT DISTINCT a.emplid emplid
, a.upa_fiscal_year upa_fiscal_year
, b.balance_period balance_period
, a.descr
, a.UPA_CURRENT_AMT UPA_CURRENT_AMT
, a.UPA_CURRENT_AMT2 UPA_CURRENT_AMT2
, a.UPA_CURRENT_AMT3 UPA_CURRENT_AMT3
, a.UPA_CURRENT_AMT4 UPA_CURRENT_AMT4
FROM ps_upa_eq_incal_vw a
, ps_upa_eq_incal_vw b
WHERE a.emplid = b.emplid
AND a.UPA_FISCAL_YEAR = b.UPA_FISCAL_YEAR
AND a.BALANCE_PERIOD = (
SELECT MAX(balance_period)
FROM ps_upa_eq_incal_vw
WHERE emplid = a.emplid
AND upa_fiscal_year = a.upa_fiscal_year
AND descr = a.descr
AND balance_period <= b.balance_period)
AND a.descr NOT IN (
SELECT DISTINCT descr
FROM ps_upa_eq_incal_vw
WHERE emplid = b.emplid
AND upa_fiscal_year = b.upa_fiscal_year
AND balance_period = b.balance_period
AND b.balance_period > a.balance_period ) )
GROUP BY emplid, upa_fiscal_year, balance_period;
The EXPLAIN plan for this query is as under:
Plan hash value: 3550380953
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 24 | 2784 | | 429M (53)|999:59:59 |
| 1 | SORT UNIQUE | | 24 | 2784 | | 429M (53)|999:59:59 |
| 2 | UNION-ALL | | | | | | |
|* 3 | FILTER | | | | | | |
| 4 | NESTED LOOPS | | 196M| 29G| | 205M (4)|685:32:54 |
| 5 | VIEW | PS_UPA_EQ_INCAL_VW | 4513K| 146M| | 72281 (2)| 00:14:28 |
| 6 | HASH GROUP BY | | 4513K| 249M| 329M| 72281 (2)| 00:14:28 |
|* 7 | HASH JOIN | | 4513K| 249M| | 8644 (4)| 00:01:44 |
| 8 | TABLE ACCESS FULL | PS_UPA_EQ_TRCD_TBL | 513 | 14364 | | 3 (0)| 00:00:01 |
|* 9 | TABLE ACCESS FULL | PS_UPA_EQ_DC_BAL | 4513K| 129M| | 8592 (4)| 00:01:44 |
| 10 | VIEW PUSHED PREDICATE | PS_UPA_EQ_INCAL_VW | 1 | 127 | | 46 (5)| 00:00:01 |
| 11 | SORT GROUP BY | | 33 | 1914 | | 46 (5)| 00:00:01 |
|* 12 | HASH JOIN | | 33 | 1914 | | 45 (3)| 00:00:01 |
|* 13 | TABLE ACCESS FULL | PS_UPA_EQ_TRCD_TBL | 32 | 896 | | 3 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID | PS_UPA_EQ_DC_BAL | 44 | 1320 | | 41 (0)| 00:00:01 |
|* 15 | INDEX RANGE SCAN | PS_UPA_EQ_DC_BAL | 44 | | | 3 (0)| 00:00:01 |
|* 16 | FILTER | | | | | | |
| 17 | HASH GROUP BY | | 1 | 53 | | 8 (25)| 00:00:01 |
|* 18 | FILTER | | | | | | |
|* 19 | HASH JOIN | | 4 | 212 | | 7 (15)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | PS_UPA_EQ_DC_BAL | 4 | 100 | | 3 (0)| 00:00:01 |
|* 21 | TABLE ACCESS FULL | PS_UPA_EQ_TRCD_TBL | 32 | 896 | | 3 (0)| 00:00:01 |
| 22 | SORT AGGREGATE | | 1 | 91 | | | |
| 23 | VIEW | PS_UPA_EQ_INCAL_VW | 1 | 91 | | 6 (0)| 00:00:01 |
| 24 | SORT GROUP BY | | 1 | 58 | | 6 (0)| 00:00:01 |
| 25 | NESTED LOOPS | | | | | | |
| 26 | NESTED LOOPS | | 1 | 58 | | 6 (0)| 00:00:01 |
| 27 | TABLE ACCESS BY INDEX ROWID | PS_UPA_EQ_DC_BAL | 2 | 60 | | 4 (0)| 00:00:01 |
|* 28 | INDEX RANGE SCAN | PS_UPA_EQ_DC_BAL | 1 | | | 3 (0)| 00:00:01 |
|* 29 | INDEX UNIQUE SCAN | PS_UPA_EQ_TRCD_TBL | 1 | | | 0 (0)| 00:00:01 |
|* 30 | TABLE ACCESS BY INDEX ROWID | PS_UPA_EQ_TRCD_TBL | 1 | 28 | | 1 (0)| 00:00:01 |
| 31 | HASH GROUP BY | | 12 | 852 | | 214M (5)|715:46:12 |
| 32 | VIEW | | 12 | 852 | | 214M (5)|715:46:12 |
| 33 | HASH UNIQUE | | 12 | 1932 | | 214M (5)|715:46:12 |
|* 34 | FILTER | | | | | | |
| 35 | NESTED LOOPS | | 196M| 29G| | 205M (4)|685:32:54 |
| 36 | VIEW | PS_UPA_EQ_INCAL_VW | 4513K| 146M| | 72281 (2)| 00:14:28 |
| 37 | HASH GROUP BY | | 4513K| 249M| 329M| 72281 (2)| 00:14:28 |
|* 38 | HASH JOIN | | 4513K| 249M| | 8644 (4)| 00:01:44 |
| 39 | TABLE ACCESS FULL | PS_UPA_EQ_TRCD_TBL | 513 | 14364 | | 3 (0)| 00:00:01 |
|* 40 | TABLE ACCESS FULL | PS_UPA_EQ_DC_BAL | 4513K| 129M| | 8592 (4)| 00:01:44 |
| 41 | VIEW PUSHED PREDICATE | PS_UPA_EQ_INCAL_VW | 1 | 127 | | 46 (5)| 00:00:01 |
| 42 | SORT GROUP BY | | 33 | 1914 | | 46 (5)| 00:00:01 |
|* 43 | HASH JOIN | | 33 | 1914 | | 45 (3)| 00:00:01 |
|* 44 | TABLE ACCESS FULL | PS_UPA_EQ_TRCD_TBL | 32 | 896 | | 3 (0)| 00:00:01 |
| 45 | TABLE ACCESS BY INDEX ROWID | PS_UPA_EQ_DC_BAL | 44 | 1320 | | 41 (0)| 00:00:01 |
|* 46 | INDEX RANGE SCAN | PS_UPA_EQ_DC_BAL | 44 | | | 3 (0)| 00:00:01 |
|* 47 | FILTER | | | | | | |
| 48 | HASH GROUP BY | | 1 | 53 | | 8 (25)| 00:00:01 |
|* 49 | FILTER | | | | | | |
|* 50 | HASH JOIN | | 4 | 212 | | 7 (15)| 00:00:01 |
|* 51 | INDEX RANGE SCAN | PS_UPA_EQ_DC_BAL | 4 | 100 | | 3 (0)| 00:00:01 |
|* 52 | TABLE ACCESS FULL | PS_UPA_EQ_TRCD_TBL | 32 | 896 | | 3 (0)| 00:00:01 |
| 53 | SORT AGGREGATE | | 1 | 91 | | | |
| 54 | VIEW | PS_UPA_EQ_INCAL_VW | 1 | 91 | | 6 (0)| 00:00:01 |
| 55 | SORT GROUP BY | | 1 | 58 | | 6 (0)| 00:00:01 |
| 56 | NESTED LOOPS | | | | | | |
| 57 | NESTED LOOPS | | 1 | 58 | | 6 (0)| 00:00:01 |
| 58 | TABLE ACCESS BY INDEX ROWID| PS_UPA_EQ_DC_BAL | 2 | 60 | | 4 (0)| 00:00:01 |
|* 59 | INDEX RANGE SCAN | PS_UPA_EQ_DC_BAL | 1 | | | 3 (0)| 00:00:01 |
|* 60 | INDEX UNIQUE SCAN | PS_UPA_EQ_TRCD_TBL | 1 | | | 0 (0)| 00:00:01 |
|* 61 | TABLE ACCESS BY INDEX ROWID | PS_UPA_EQ_TRCD_TBL | 1 | 28 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter( NOT EXISTS (SELECT 0 FROM SYSADM."PS_UPA_EQ_TRCD_TBL" "V",SYSADM."PS_UPA_EQ_DC_BAL" "J" WHERE
:B1>:B2 AND "J"."BALANCE_PERIOD"=:B3 AND "J"."UPA_FISCAL_YEAR"=:B4 AND "J"."EMPLID"=:B5 AND
("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR "J"."UPA_INC_TYPE"='P') AND
"J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR" AND "J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND
"V"."UPA_FISCAL_YEAR"=:B6 GROUP BY "J"."EMPLID","J"."UPA_FISCAL_YEAR","J"."BALANCE_PERIOD","V"."DESCR"
HAVING "V"."DESCR"=:B7) AND "A"."BALANCE_PERIOD"= (SELECT MAX("BALANCE_PERIOD") FROM (SELECT "J"."EMPLID"
"EMPLID","J"."UPA_FISCAL_YEAR" "UPA_FISCAL_YEAR","J"."BALANCE_PERIOD" "BALANCE_PERIOD","V"."DESCR"
"DESCR",SUM(DECODE("J"."UPA_INC_TYPE",'D',"J"."UPA_CURRENT_AMT",0))
"UPA_CURRENT_AMT",SUM(DECODE("J"."UPA_INC_TYPE",'M',"J"."UPA_CURRENT_AMT",0))
"UPA_CURRENT_AMT2",SUM(DECODE("J"."UPA_INC_TYPE",'C',"J"."UPA_CURRENT_AMT",0))
"UPA_CURRENT_AMT3",SUM(DECODE("J"."UPA_INC_TYPE",'P',"J"."UPA_CURRENT_AMT",0)) "UPA_CURRENT_AMT4" FROM
SYSADM."PS_UPA_EQ_TRCD_TBL" "V",SYSADM."PS_UPA_EQ_DC_BAL" "J" WHERE "J"."BALANCE_PERIOD"<=:B8 AND
"J"."UPA_FISCAL_YEAR"=:B9 AND "J"."EMPLID"=:B10 AND ("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR
"J"."UPA_INC_TYPE"='M' OR "J"."UPA_INC_TYPE"='P') AND "V"."UPA_FISCAL_YEAR"=:B11 AND
"J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "V"."DESCR"=:B12 AND "J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR"
GROUP BY "J"."EMPLID","J"."UPA_FISCAL_YEAR","J"."BALANCE_PERIOD","V"."DESCR") "PS_UPA_EQ_INCAL_VW"))
7 - access("J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR")
9 - filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
12 - access("J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR")
13 - filter("V"."UPA_FISCAL_YEAR"="B"."UPA_FISCAL_YEAR")
15 - access("J"."EMPLID"="B"."EMPLID" AND "J"."UPA_FISCAL_YEAR"="B"."UPA_FISCAL_YEAR")
filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
16 - filter("V"."DESCR"=:B1)
18 - filter(:B1>:B2)
19 - access("J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR")
20 - access("J"."EMPLID"=:B1 AND "J"."UPA_FISCAL_YEAR"=:B2 AND "J"."BALANCE_PERIOD"=:B3)
filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
21 - filter("V"."UPA_FISCAL_YEAR"=:B1)
28 - access("J"."EMPLID"=:B1 AND "J"."UPA_FISCAL_YEAR"=:B2 AND "J"."BALANCE_PERIOD"<=:B3)
filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
29 - access("J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "V"."UPA_FISCAL_YEAR"=:B1)
filter("J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR")
30 - filter("V"."DESCR"=:B1)
34 - filter( NOT EXISTS (SELECT 0 FROM SYSADM."PS_UPA_EQ_TRCD_TBL" "V",SYSADM."PS_UPA_EQ_DC_BAL" "J" WHERE
:B1>:B2 AND "J"."BALANCE_PERIOD"=:B3 AND "J"."UPA_FISCAL_YEAR"=:B4 AND "J"."EMPLID"=:B5 AND
("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR "J"."UPA_INC_TYPE"='P') AND
"J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR" AND "J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND
"V"."UPA_FISCAL_YEAR"=:B6 GROUP BY "J"."EMPLID","J"."UPA_FISCAL_YEAR","J"."BALANCE_PERIOD","V"."DESCR"
HAVING "V"."DESCR"=:B7) AND "A"."BALANCE_PERIOD"= (SELECT MAX("BALANCE_PERIOD") FROM (SELECT "J"."EMPLID"
"EMPLID","J"."UPA_FISCAL_YEAR" "UPA_FISCAL_YEAR","J"."BALANCE_PERIOD" "BALANCE_PERIOD","V"."DESCR"
"DESCR",SUM(DECODE("J"."UPA_INC_TYPE",'D',"J"."UPA_CURRENT_AMT",0))
"UPA_CURRENT_AMT",SUM(DECODE("J"."UPA_INC_TYPE",'M',"J"."UPA_CURRENT_AMT",0))
"UPA_CURRENT_AMT2",SUM(DECODE("J"."UPA_INC_TYPE",'C',"J"."UPA_CURRENT_AMT",0))
"UPA_CURRENT_AMT3",SUM(DECODE("J"."UPA_INC_TYPE",'P',"J"."UPA_CURRENT_AMT",0)) "UPA_CURRENT_AMT4" FROM
SYSADM."PS_UPA_EQ_TRCD_TBL" "V",SYSADM."PS_UPA_EQ_DC_BAL" "J" WHERE "J"."BALANCE_PERIOD"<=:B8 AND
"J"."UPA_FISCAL_YEAR"=:B9 AND "J"."EMPLID"=:B10 AND ("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR
"J"."UPA_INC_TYPE"='M' OR "J"."UPA_INC_TYPE"='P') AND "V"."UPA_FISCAL_YEAR"=:B11 AND
"J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "V"."DESCR"=:B12 AND "J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR"
GROUP BY "J"."EMPLID","J"."UPA_FISCAL_YEAR","J"."BALANCE_PERIOD","V"."DESCR") "PS_UPA_EQ_INCAL_VW"))
38 - access("J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR")
40 - filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
43 - access("J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR")
44 - filter("V"."UPA_FISCAL_YEAR"="B"."UPA_FISCAL_YEAR")
46 - access("J"."EMPLID"="B"."EMPLID" AND "J"."UPA_FISCAL_YEAR"="B"."UPA_FISCAL_YEAR")
filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
47 - filter("V"."DESCR"=:B1)
49 - filter(:B1>:B2)
50 - access("J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR")
51 - access("J"."EMPLID"=:B1 AND "J"."UPA_FISCAL_YEAR"=:B2 AND "J"."BALANCE_PERIOD"=:B3)
filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
52 - filter("V"."UPA_FISCAL_YEAR"=:B1)
59 - access("J"."EMPLID"=:B1 AND "J"."UPA_FISCAL_YEAR"=:B2 AND "J"."BALANCE_PERIOD"<=:B3)
filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
60 - access("J"."UPA_EQ_INC_CD"="V"."UPA_EQ_INC_CD" AND "V"."UPA_FISCAL_YEAR"=:B1)
filter("J"."UPA_FISCAL_YEAR"="V"."UPA_FISCAL_YEAR")
61 - filter("V"."DESCR"=:B1)
Any directions as to how to tune this query would be greatly appreciated!
Thanks,
Suddhasatwa
Looks like these two steps hurt the most (causing the nested loops join to take forever, because of the 4.5M rows):
|* 9 | TABLE ACCESS FULL | PS_UPA_EQ_DC_BAL | 4513K| 129M| | 8592 (4)| 00:01:44 |
|* 40 | TABLE ACCESS FULL | PS_UPA_EQ_DC_BAL | 4513K| 129M| | 8592 (4)| 00:01:44 |
9 - filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
40 - filter("J"."UPA_INC_TYPE"='C' OR "J"."UPA_INC_TYPE"='D' OR "J"."UPA_INC_TYPE"='M' OR
"J"."UPA_INC_TYPE"='P')
Do you have an index on the PS_UPA_EQ_DC_BAL.UPA_INC_TYPE column?
Reducing the number of rows returned in this step would be critical to speeding up the nested loops join.
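As one hedged direction (assuming no suitable index already exists; check DBA_INDEXES first, since PeopleSoft tables usually carry delivered indexes), a composite index matching the access and filter predicates on PS_UPA_EQ_DC_BAL could let the optimizer avoid the full scans in steps 9 and 40. The index name and column order below are illustrative only, not the application's actual DDL:

```sql
-- Hypothetical index sketch; name and column order are assumptions.
-- EMPLID, UPA_FISCAL_YEAR and BALANCE_PERIOD are the access predicates
-- in the plan; UPA_INC_TYPE is the repeated filter predicate.
CREATE INDEX ps_upa_eq_dc_bal_tune
    ON sysadm.ps_upa_eq_dc_bal (emplid, upa_fiscal_year, upa_inc_type, balance_period);
```

After creating any new index, regather statistics on the table and confirm with a fresh execution plan that the full scans are actually replaced by index access.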
Performance tuning in oracle 10g
Hi Guys,
I hope all are well. Have a nice day today.
I have a performance tuning issue to discuss.
I recently joined a new project whose goal is to improve the efficiency of an application. The environment uses Oracle PL/SQL, so what steps should I take to improve the application's efficiency, and what is the way to work through the improvement process?
Kindly help me.
Generate Statspack/AWR reports.
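As a sketch of that advice (assuming you are licensed for the Diagnostics Pack; on Standard Edition use Statspack instead), the AWR report script shipped with the database can be run from SQL*Plus as a DBA user:

```sql
-- Generates an AWR report; the script prompts for report type (html/text),
-- the number of days of snapshots to list, and the begin/end snapshot IDs.
@?/rdbms/admin/awrrpt.sql
```

Compare reports taken across a period of poor performance to identify the top SQL and wait events before tuning individual statements.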
HOW To Make TUNING request
https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360003
-
Hello All,
We have created some reports using Interactive Reporting Studio. The volume of data in that Oracle database is huge; some tables in the relational database individually have over 3-4 crore (30-40 million) rows. We created the .oce connection file using the 'Oracle Net' option; the Oracle client version is 10g. We had earlier created pivots, charts and reports in those .bqy files but had to delete them wherever possible to reduce the processing time for generating the reports.
But deleting those from the file and retaining just the Results section (the bare minimum part of the file) has still not fully solved the performance issue. In some reports the system still gives an 'Out of Memory' error while processing. The client PCs from which the reports are generated have 1-1.5 GB of memory. Some reports even take 1-2 hours to save the results after processing, and in some cases the PCs hang during processing. When we extract the SQL of those reports and run it in TOAD/SQL*Plus, it does not take nearly as long as in IR.
Would you please help us with this issue ASAP? Please share your views/tips/suggestions on performance tuning for IR. All replies would be highly appreciated.
Regards,
Raj
SQL*Plus & Toad are tools that send SQL and spool results; IR is a tool that sends a request to the database to run SQL and then fiddles with the results before the user is even told data has been received. You need to minimize the time IR spends manipulating results into objects the user isn't even asking for.
When a request is made to the database, Hyperion will wait until all of the results have been received. Once ALL of the results have been received, IR will make multiple passes to apply any sorts, filters and computed items existing in the Results section. For some unknown reason, those three steps are performed less efficiently there than they would be in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted will IR start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned".
To increase performance, you need to fine-tune your IR services and your BQY docs. Replicate your DAS on your server: a single DAS can only transfer 2 GB before it dies and restarts, leaving your requested document hanging. You can replicate the DAS multiple times, and should do so to make sure there are enough resources available for concurrent users to make requests and have data delivered to them.
To tune your bqy documents...
1) Your Results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items as possible to your database request line and ask the database to make the calculation (either directly or through stored procedures) so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line need to go on your new staging table.
2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
3) Halt any report pagination. Build your reports from their own tables and put a dummy filter on each table that forces 0 rows until the report is invoked. Hyperion will paginate every report BEFORE it even tells the user it has results, so this prevents the user from waiting an hour while thousands of pages are paginated across multiple reports.
4) Halt any object rendering until requested. Same as above: create a system programmatically for the user to tell the bqy what they want, so they are not waiting forever for a pivot and 2 reports to compile and paginate when they want just a chart.
5) Save compressed documents.
6) Unless this document can be run as a job, there should be NO results stored with the document; but if you do save results with the document, store the calculations too, so you at least don't have to wait for them to be recomputed.
7) Remove all duplicate images and keep the image file size small.
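As an illustrative sketch of point 1 (the table and column names are hypothetical, not from the poster's schema), a client-side computed item such as a margin percentage is better expressed on the request line so the database does the work instead of the client PC:

```sql
-- Hypothetical example: push the derived column into the SQL request
-- rather than computing it as an IR computed item on the client.
SELECT order_id,
       revenue,
       cost,
       ROUND(100 * (revenue - cost) / NULLIF(revenue, 0), 2) AS margin_pct
  FROM sales_orders;
```

The same idea applies to date arithmetic, string formatting and conditional buckets: anything the database can compute once per row server-side saves an extra client-side pass over the full result set.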
Hope this helps!
PS: I forgot to mention: aside from Results sections, in documents where the results are NOT saved, additional table sections add very, very little file size, and as long as there are no excessively large images the same is true for Reports, Pivots and Charts. Additionally, file size only matters when the user is requesting the document. It is never an issue while the user is processing the report, because the document has already been delivered to them and cached (in Workspace and in the web client).
Edited by: user10899957 on Feb 10, 2009 6:07 AM