Need help in performance
Hello,
I am running the SELECT below and it takes more than 10 minutes, and I don't know why, because object_key has just 60 entries. How can I solve this problem?
Regards
SELECT * FROM covp APPENDING TABLE *covp
  FOR ALL ENTRIES IN object_key
  WHERE kokrs EQ '1000'
    AND objnr EQ object_key-objnr
    AND budat IN so_budat
    AND stflg EQ space.
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
- Select Queries
- SQL Interface
- Aggregate Functions
- For All Entries
- Select over more than one internal table
Selection Criteria
1. Restrict the data in the selection criteria itself, rather than filtering it out in ABAP code with a CHECK statement.
2. Select with a selection list (a list of fields) instead of SELECT *.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized by the code written below, which avoids CHECK and selects with a selection list:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
Select Statements: Select Queries
1. Avoid nested selects
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
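A back-of-the-envelope model makes the multiplication concrete. This is sketched in Python purely for illustration; the row count is made up:

```python
# Toy model of database access counts, not real database calls.
outer_rows = 100  # e.g. rows returned by the outer SELECT on the header table

# A nested SELECT loop issues the outer query once, then one inner
# query per outer row:
nested_accesses = 1 + outer_rows

# An INNER JOIN is resolved by the database in a single access:
join_accesses = 1

print(nested_accesses, join_accesses)  # 101 1
```

With 100 outer rows the nested form already costs 101 round trips, which is why the note above restricts nested loops to very small outer result sets.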
2. Select all the records in a single shot using the INTO TABLE clause of the SELECT statement, rather than appending rows one by one with APPEND.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized by the code written below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
3. When a base table has multiple indexes, the WHERE clause should follow the field order of an index, either the primary or a secondary index.
To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields appear in the same order. In certain scenarios it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables.
4. For testing existence, use SELECT ... UP TO 1 ROWS instead of a SELECT-ENDSELECT loop with an EXIT.
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
5. Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs at least two.
Select Statements: SQL Interface
1. Use column updates instead of single-row updates to update your database tables.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
2. For all frequently used Select statements, try to use an index.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
3. Using buffered tables improves performance considerably.
Bypassing the buffer increases network traffic considerably:
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements: Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements: For All Entries
FOR ALL ENTRIES creates a WHERE clause in which all the entries of the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
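That splitting can be sketched as follows. Python is used only as neutral pseudocode; the blocking factor of 5 and the column names are made-up values, the real limit comes from rsdb/max_blocking_factor:

```python
def split_for_all_entries(driver_values, blocking_factor):
    """Mimic how FOR ALL ENTRIES is translated: the driver table is
    split into chunks of at most `blocking_factor` entries, and one
    SQL statement with an OR-combined WHERE clause is built per chunk."""
    statements = []
    for i in range(0, len(driver_values), blocking_factor):
        chunk = driver_values[i:i + blocking_factor]
        where = " OR ".join("objnr = '%s'" % v for v in chunk)
        statements.append("SELECT ... WHERE kokrs = '1000' AND (%s)" % where)
    return statements

# 12 driver entries with a (made-up) blocking factor of 5 -> 3 statements
stmts = split_for_all_entries(["OR%05d" % n for n in range(12)], 5)
print(len(stmts))  # 3
```

This also shows why an empty driver table is dangerous: with zero entries there is no restriction left and the database selects everything, hence the IS INITIAL check recommended below.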
The plus:
- Large amounts of data
- Mixing processing and reading of data
- Fast internal reprocessing of data
- Fast
The minus:
- Difficult to program/understand
- Memory could be critical (use FREE or PACKAGE SIZE)
Points that must be considered for FOR ALL ENTRIES:
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following extract:
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above-mentioned code can be optimized by using the following code:
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
Select Statements: Select Over More Than One Internal Table
1. It is better to use a view instead of nested SELECT statements.
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be optimized by extracting all the data from view DD01V:
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferable only if all the primary key fields are available in the WHERE clause for the tables being joined. If the primary keys are not provided, the join itself takes time.
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
3. Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
Internal Tables
1. Table operations should be done using explicit work areas rather than via header lines.
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
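The difference is easy to demonstrate outside ABAP. A Python sketch, with arbitrary table contents:

```python
import bisect

table = list(range(0, 2000, 2))  # a sorted "internal table" with 1000 entries

def linear_read(tab, key):
    """READ TABLE without BINARY SEARCH: scan row by row, O( n )."""
    for i, v in enumerate(tab):
        if v == key:
            return i
    return -1

def binary_read(tab, key):
    """READ TABLE ... BINARY SEARCH on a sorted table, O( log2( n ) )."""
    i = bisect.bisect_left(tab, key)
    return i if i < len(tab) and tab[i] == key else -1

# Both find the same row, but for a key near the end of the table the
# binary version inspects about 10 entries instead of about 1000.
assert linear_read(table, 1998) == binary_read(table, 1998) == 999
```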
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table first.
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime:
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
6. Modifying selected components using MODIFY itab ... TRANSPORTING f1 f2 ... accelerates the task of updating a line of an internal table.
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing table entries directly via LOOP ... ASSIGNING ... considerably accelerates the task of updating a set of lines of an internal table.
Modifying selected components only makes the program faster as compared to modifying all lines completely.
For example:
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
8. If collect semantics are required, it is better to use COLLECT rather than READ ... BINARY SEARCH followed by ADD.
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
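A minimal model of COLLECT's hash-based merging, sketched in Python for illustration:

```python
def collect(rows):
    """Model of ABAP COLLECT: rows with an equal key are merged by
    adding their numeric components; the key lookup uses a hash map,
    so each COLLECT is O(1) on average regardless of table size."""
    totals = {}
    for key, val1, val2 in rows:
        entry = totals.setdefault(key, [0, 0])  # insert on first sight
        entry[0] += val1                        # add VAL1
        entry[1] += val2                        # add VAL2
    return totals

result = collect([("A", 1, 10), ("B", 2, 20), ("A", 3, 30)])
print(result)  # {'A': [4, 40], 'B': [2, 20]}
```

The dict plays the role of the hash administration COLLECT maintains internally, which is also why mixing COLLECT with index-based modifications on the same table is discouraged.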
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
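The adjacent-only semantics can be sketched in Python (illustrative only):

```python
from itertools import groupby

def delete_adjacent_duplicates(rows):
    """Keep only the first row of each run of equal adjacent rows,
    mirroring DELETE ADJACENT DUPLICATES FROM itab COMPARING k."""
    return [next(group) for _, group in groupby(rows)]

# Only *adjacent* duplicates are removed, which is why the table
# should be sorted first if all duplicates must go:
print(delete_adjacent_duplicates([1, 1, 2, 2, 2, 3, 1]))  # [1, 2, 3, 1]
```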
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copy internal tables using ITAB2[] = ITAB1[] rather than LOOP-APPEND-ENDLOOP.
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
SORT ITAB BY K. makes the program run faster than SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access, hashed tables are more optimized than sorted tables.
2. For partial sequential access, sorted tables are more optimized than hashed tables.
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access than the same code against a sorted table:
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly, for partial sequential access STAB runs faster than HTAB:
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
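The same trade-off can be sketched with a Python dict (hashed) and a sorted list (sorted); the values and key ranges below are arbitrary:

```python
import bisect

hashed = {4 * i: "row %d" % (4 * i) for i in range(1, 251)}  # hashed table HTAB
sorted_keys = sorted(hashed)                                 # sorted table STAB

# Single read: the hashed table resolves the key in O(1); the sorted
# table needs an O(log n) binary search.
assert hashed[1000] == "row 1000"
i = bisect.bisect_left(sorted_keys, 1000)
assert sorted_keys[i] == 1000

# Partial sequential access (all keys in a range): the sorted table
# slices out the contiguous block, while the hashed table has no
# order and must be scanned completely.
lo = bisect.bisect_left(sorted_keys, 100)
hi = bisect.bisect_right(sorted_keys, 200)
block = sorted_keys[lo:hi]                      # contiguous, no full scan
scan = sorted(k for k in hashed if 100 <= k <= 200)  # full scan needed
assert block == scan
```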
Similar Messages
-
Need help in Performance tuning for function...
Hi all,
I am using the below function, which implements the Luhn algorithm, to calculate the 15th (check) digit for an IMEI (phone SIM card).
The function takes about 6 minutes for 5 million records. I have 170 million records in a table and want to calculate the Luhn digit for all of them, which might take 4-5 hours. Please help me with performance tuning (a better way or better logic for the Luhn calculation) of the below function.
A wikipedia link is provided for the luhn algorithm below
Create or Replace FUNCTION AddLuhnToIMEI (LuhnPrimitive VARCHAR2)
RETURN VARCHAR2
AS
Index_no NUMBER (2) := LENGTH (LuhnPrimitive);
Multiplier NUMBER (1) := 2;
Total_Sum NUMBER (4) := 0;
Plus NUMBER (2);
ReturnLuhn VARCHAR2 (25);
BEGIN
WHILE Index_no >= 1
LOOP
Plus := Multiplier * (TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1)));
Multiplier := (3 - Multiplier);
Total_Sum := Total_Sum + TO_NUMBER (TRUNC ( (Plus / 10))) + MOD (Plus, 10);
Index_no := Index_no - 1;
END LOOP;
ReturnLuhn := LuhnPrimitive || CASE
WHEN MOD (Total_Sum, 10) = 0 THEN '0'
ELSE TO_CHAR (10 - MOD (Total_Sum, 10))
END;
RETURN ReturnLuhn;
EXCEPTION
WHEN OTHERS
THEN
RETURN (LuhnPrimitive);
END AddLuhnToIMEI;
http://en.wikipedia.org/wiki/Luhn_algorithm
Any sort of help is much appreciated.
Thanks,
Rede

There is an unneeded TO_NUMBER call in it: TRUNC already returns a number.
Also, the MOD function can be avoided in some steps. Since a doubled digit can never be higher than 18, you can speed up the calculation with this.
create or replace
FUNCTION AddLuhnToIMEI_fast (LuhnPrimitive VARCHAR2)
RETURN VARCHAR2
AS
Index_no pls_Integer;
Multiplier pls_Integer := 2;
Total_Sum pls_Integer := 0;
Plus pls_Integer;
rest pls_integer;
ReturnLuhn VARCHAR2 (25);
BEGIN
for Index_no in reverse 1..LENGTH (LuhnPrimitive) LOOP
Plus := Multiplier * TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1));
Multiplier := 3 - Multiplier;
if Plus < 10 then
Total_Sum := Total_Sum + Plus ;
else
Total_Sum := Total_Sum + Plus - 9;
end if;
END LOOP;
rest := MOD (Total_Sum, 10);
ReturnLuhn := LuhnPrimitive || CASE WHEN rest = 0 THEN '0' ELSE TO_CHAR (10 - rest) END;
RETURN ReturnLuhn;
END AddLuhnToIMEI_fast;
/
My tests gave an improvement of about 40%.
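For anyone who wants to sanity-check the shortcut outside the database, here is a Python sketch of the same calculation; the IMEI value is Wikipedia's example, and the "plus - 9" shortcut is compared against the original TRUNC/MOD arithmetic:

```python
def luhn_digit(imei14):
    """Check digit via the 'plus - 9' shortcut from the optimized
    function above (digits are doubled starting from the right)."""
    total = 0
    multiplier = 2
    for ch in reversed(imei14):
        plus = multiplier * int(ch)
        multiplier = 3 - multiplier
        total += plus if plus < 10 else plus - 9
    rest = total % 10
    return "0" if rest == 0 else str(10 - rest)

# The shortcut agrees with the original TRUNC(plus/10) + MOD(plus, 10)
# for every possible doubled digit (the largest is 2 * 9 = 18):
assert all(p - 9 == p // 10 + p % 10 for p in range(10, 19))
print(luhn_digit("49015420323751"))  # 8 (Wikipedia's example IMEI)
```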
The next step to try could be to use native compilation on this function. This can give an additional big boost.
Edited by: Sven W. on Mar 9, 2011 8:11 PM -
Need help writing Performance Management Report
Hi Experts
I need some help retrieving specific Performance Management data for a report.
I have the employee pernr.
1. From this I need to determine which teams the employee belonged to for the period 1 Oct - 31 Sept.
2. What was the total performance score for the TEAM for that same period.
Can someone please help me out. The table data seems to be quite complex.
Thannks in advance
Anton Kruse
Moderator Message: Specs-dumping is not allowed. Please get back if you have a specific question
Edited by: kishan P on Mar 7, 2012 5:10 PM

Hi Arnold,
I think the solution provided by Vadim is the only way and it's working.
Shrikant -
Need help with performance & memory tuning in a data warehousing environment
Dear All,
Good Day.
We had successfully migrated from a 4 node half-rack V2 Exadata to a 2 node quarter rack X4-2 Exadata. However, we are facing some issues with performance only for few loads while others have in fact shown good improvement.
1. The total memory on the OS is 250GB for each node (two compute nodes for a quarter rack).
2. Would be grateful if someone could help me with the best equation to calculate the SGA and PGA ( and also in allocation of shared_pool, large_pool etc) or whether Automatic Memory Management is advisable?
3. We had run exachk report which suggested us to configure huge pages.
4. When we tried to increase the SGA to more than 30GB the system doesn't allow us to do so. We had however set the PGA to 85GB.
5. Also, we had observed that some of the queries involving joins and indexes are taking longer time.
Any advise would be greatly appreciated.
Warm Regards,
Vikram.

Hi Vikram,
There is no formula for SGA and PGA, but the best practice for OLTP environments is: for a given amount of memory (which should not exceed about 80% of the server's total RAM), allocate 80% to SGA and 20% to PGA. For data warehouse environments, the split is more like 60% SGA and 40% PGA, and it can go up to 50%-50%. Also, some docs discourage keeping the database in Automatic Memory Management when you are using a big SGA (> 10G).
As you are using a RAC environment, you should configure Huge Pages. If the system is not allowing you to increase memory, take a look at the semaphore parameters; they are probably set to lower values. And for the poorly performing queries, we need to see explain plans and table structures, and you should also analyze whether smart scan is playing the game.
Regards. -
Need help on performance of this sql
Hi,
I have 2million records in my table, now i need to add one new column and have to update records into this column using sequence.
Ex. update table_name
set new_column = sequ_name.nextval
where col1 = batch_id;
My question is: if I update like this for the batch_id (the same id for 1.5 million records), will it affect performance, or is there a better way to achieve this?
kindly help..
Regards,
Nag

We had a similar project, and in test it took several hours to run. When I upped the sequence cache to 5000 it completed in less than a 10th of the time. It is worth trialling different values for the sequence cache, as I suspect there are diminishing returns as it gets larger (setting the cache size to 1.5 million would probably not be a good idea).
-
Need help for performance tuning
Hello,
I have 16K records returned by a query, and it takes a long time to process; for 7K records it takes 7.5 sec.
Note: I used all seeded tables only.
If possible please help me to tune it.
SELECT msi.inventory_item_id, msi.segment1, msi.primary_uom_code, msi.primary_unit_of_measure
FROM mtl_system_items_b msi, qp_list_lines qpll,qp_pricing_attributes qppr,
mtl_category_sets_tl mcs,mtl_category_sets_b mcsb,
mtl_categories_b mc, mtl_item_categories mcb
WHERE msi.enabled_flag = 'Y'
AND qpll.list_line_id = qppr.list_line_id
AND qppr.product_attr_value = TO_CHAR (msi.inventory_item_id(+))
AND qppr.product_uom_code = msi.primary_uom_code
AND mc.category_id = mcb.category_id
AND msi.inventory_item_id = mcb.inventory_item_id
AND msi.organization_id = mcb.organization_id
AND TRUNC (SYSDATE) BETWEEN NVL (qpll.start_date_active,TRUNC (SYSDATE)) AND NVL (qpll.end_date_active,TRUNC (SYSDATE))
AND mcs.category_set_name = 'LSS SALES CATEGORY'
AND mcs.language = 'US'
AND mcs.category_set_id = mcsb.category_set_id
AND mcsb.structure_id = mc.structure_id
AND msi.organization_id = :p_organization_id
AND qpll.list_header_id = :p_price_list_id
AND mcb.category_id = :p_category_id;
Thanks and regards
Akil.

Thanks Helios,
here are the answers:
Database version
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
PL/SQL Release 11.1.0.7.0
explain plan
| Id  | Operation                        | Name                     | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT                 |                          |     1 |   149 |  9439   (1)|
|   1 |  NESTED LOOPS                    |                          |     1 |   149 |  9439   (1)|
|*  2 |   HASH JOIN OUTER                |                          |     1 |   135 |  9437   (1)|
|*  3 |    HASH JOIN                     |                          |     1 |    71 |  9432   (1)|
|   4 |     NESTED LOOPS                 |                          |     2 |    76 |    53   (0)|
|*  5 |      TABLE ACCESS BY INDEX ROWID | QP_LIST_LINES            |     2 |    44 |    49   (0)|
|*  6 |       INDEX SKIP SCAN            | QP_LIST_LINES_N2         |   702 |       |    20   (0)|
|*  7 |      INDEX RANGE SCAN            | QP_PRICING_ATTRIBUTES_N3 |     1 |    16 |     2   (0)|
|*  8 |     TABLE ACCESS BY INDEX ROWID  | MTL_SYSTEM_ITEMS_B       | 46254 | 1490K |  9378   (1)|
|*  9 |      INDEX RANGE SCAN            | MTL_SYSTEM_ITEMS_B_N9    | 46254 |       |   174   (1)|
|  10 |    TABLE ACCESS FULL             | XX_WEB_ITEM_IMAGE_TBL    |   277 | 17728 |     5   (0)|
|* 11 |   INDEX RANGE SCAN               | MTL_ITEM_CATEGORIES_U1   |     1 |    14 |     2   (0)|
Predicate Information (identified by operation id):
2 - access("XWIIT"."IMAGE_CODE"(+)="MSI"."SEGMENT1")
3 - access("QPPR"."PRODUCT_ATTR_VALUE"=TO_CHAR("MSI"."INVENTORY_ITEM_ID") AND
    "QPPR"."PRODUCT_UOM_CODE"="MSI"."PRIMARY_UOM_CODE")
5 - filter(NVL("QPLL"."START_DATE_ACTIVE",TRUNC(SYSDATE@!))<=TRUNC(SYSDATE@!) AND
    NVL("QPLL"."END_DATE_ACTIVE",TRUNC(SYSDATE@!))>=TRUNC(SYSDATE@!))
6 - access("QPLL"."LIST_HEADER_ID"=TO_NUMBER(:P_PRICE_LIST_ID))
    filter("QPLL"."LIST_HEADER_ID"=TO_NUMBER(:P_PRICE_LIST_ID))
7 - access("QPLL"."LIST_LINE_ID"="QPPR"."LIST_LINE_ID")
    filter("QPPR"."PRODUCT_UOM_CODE" IS NOT NULL)
8 - filter("MSI"."ENABLED_FLAG"='Y')
9 - access("MSI"."ORGANIZATION_ID"=TO_NUMBER(:P_ORGANIZATION_ID))
11 - access("MCB"."ORGANIZATION_ID"=TO_NUMBER(:P_ORGANIZATION_ID) AND
     "MSI"."INVENTORY_ITEM_ID"="MCB"."INVENTORY_ITEM_ID" AND
     "MCB"."CATEGORY_ID"=TO_NUMBER(:P_CATEGORY_ID))
     filter("MCB"."CATEGORY_ID"=TO_NUMBER(:P_CATEGORY_ID))
Note
- 'PLAN_TABLE' is old version
TKprof Plan
TKPROF: Release 11.1.0.7.0 - Production on Fri Nov 15 06:12:26 2013
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Trace file: LSSD_ora_19760.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT msi.inventory_item_id,
msi.segment1,
primary_uom_code,
primary_unit_of_measure,
xwiit.image_url
FROM mtl_system_items_b msi,
qp_list_lines qpll,
qp_pricing_attributes qppr,
mtl_item_categories mcb,
xx_web_item_image_tbl xwiit
WHERE msi.enabled_flag = 'Y'
AND qpll.list_line_id = qppr.list_line_id
AND qppr.product_attr_value = TO_CHAR (msi.inventory_item_id)
AND qppr.product_uom_code = msi.primary_uom_code
AND msi.inventory_item_id = mcb.inventory_item_id
AND msi.organization_id = mcb.organization_id
AND TRUNC (SYSDATE) BETWEEN NVL (qpll.start_date_active,
TRUNC (SYSDATE))
AND NVL (qpll.end_date_active,
TRUNC (SYSDATE))
AND xwiit.image_code(+) = msi.segment1
AND msi.organization_id = :p_organization_id
AND qpll.list_header_id = :p_price_list_id
AND mcb.category_id = :p_category_id
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 3.84 3.85 0 432560 0 1002
total 6 3.84 3.85 0 432560 0 1002
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Rows Row Source Operation
501 NESTED LOOPS (cr=216280 pr=0 pw=0 time=115 us cost=9439 size=149 card=1)
2616 HASH JOIN OUTER (cr=211012 pr=0 pw=0 time=39 us cost=9437 size=135 card=1)
78568 HASH JOIN (cr=210997 pr=0 pw=0 time=3786 us cost=9432 size=71 card=1)
78571 NESTED LOOPS (cr=29229 pr=0 pw=0 time=35533 us cost=53 size=76 card=2)
78571 TABLE ACCESS BY INDEX ROWID QP_LIST_LINES (cr=9943 pr=0 pw=0 time=27533 us cost=49 size=44 card=2)
226733 INDEX SKIP SCAN QP_LIST_LINES_N2 (cr=865 pr=0 pw=0 time=4122 us cost=20 size=0 card=702)(object id 99730)
78571 INDEX RANGE SCAN QP_PRICING_ATTRIBUTES_N3 (cr=19286 pr=0 pw=0 time=0 us cost=2 size=16 card=1)(object id 99733)
128857 TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=181768 pr=0 pw=0 time=9580 us cost=9378 size=1526382 card=46254)
128857 INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_N9 (cr=450 pr=0 pw=0 time=1657 us cost=174 size=0 card=46254)(object id 199728)
277 TABLE ACCESS FULL XX_WEB_ITEM_IMAGE_TBL (cr=15 pr=0 pw=0 time=22 us cost=5 size=17728 card=277)
501 INDEX RANGE SCAN MTL_ITEM_CATEGORIES_U1 (cr=5268 pr=0 pw=0 time=0 us cost=2 size=14 card=1)(object id 99557)
Note: I modified the query and it gives a good result; now it takes 3 to 4 sec for 16,000 records.
If possible, can you please explain what we have to take care of while doing performance tuning? I am a fresher, so I don't have much idea.
Also, thanks Hussein for your reply.
Need help in Performance tuning
Hi All,
I am facing some performance issues in my program. The program is taking a lot of time to execute and sometimes timing out without producing the output. This is a report program with ALV output, handling mainly sales-related data.
The program is fetching a huge volume of data from different tables and processing this bulk data inside the loops several times. In most of the queries I am unable to supply all key fields, because my requirement is like that only. I have many places in my program i am using inner loop and function modules inside loop etc.
Any pointers on this will be a great help.
Regards,
Jijeesh P G

1) Make sure that any READ or LOOP inside an outer LOOP accesses the inner table with an appropriate key (either using BINARY SEARCH when reading a standard table, or using sorted/hashed tables for the inner tables). This helps in most cases.
2) If the table's width is more than approx. 30 bytes, LOOP ... ASSIGNING <wa> may help a bit. Declare <wa> for each table separately using the table's line type (otherwise type casting would cost some additional time).
3) You may use FM SAPGUI_PROGRESS_INDICATOR in the outer loops to inform the user while he is waiting.
4) A COMMIT from time to time will reset the measured time for timeout (and therefore avoids timeouts; do NOT use it without checking step 1)).
5) The program may use virtual memory if a huge amount of data is selected (I have seen dumps due to the fact that no more disk space was available). So, if steps 1)-3) did not help, look at the process rolling in/out.
Need help in performance tuning
Hi, I have one update statement that keeps running for hours; the volume of data is 2.2 million rows.
version:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
below is the code :
DECLARE
Cursor C11
Is
Select Account_Num,Outlet_Num,Product_Number,Ln_Of_Biz,Generateguid('DELIVERED_PRODUCT') As Dp_Guid From (
Select Account_Num,Outlet_Num,Product_Number,Ln_Of_Biz
From transformation_flat
Group By Account_Num, Outlet_Num,Ln_Of_Biz,Product_Number
Having Count(*)> 1);
Type Actno Is Table Of Varchar2(13) Index By Binary_Integer;
Type Outlet Is Table Of NUMBER Index By Binary_Integer;
Type Pno Is Table Of VARCHAR2(255) INDEX BY BINARY_INTEGER;
Type Tn Is Table Of VARCHAR2(10) Index By Binary_Integer;
Type Vdpguid Is Table Of VARCHAR2(20) Index By Binary_Integer;
Type Vcnt Is Table Of Number Index By Binary_Integer;
Type Lob1 Is Table Of Varchar2(255) Index By Binary_Integer;
Type Offer_No Is Table Of Varchar2(255) Index By Binary_Integer;
V_Actno Actno;
V_Outlet Outlet;
V_Pno Pno;
V_Tn Tn;
V_DPGUID VDPGUID;
Vguid Varchar2(20);
V_Cnt Vcnt;
V_Lob Lob1;
V_Offer_No Offer_No;
BEGIN
Open c11;
Loop
Fetch C11 Bulk Collect Into V_Actno,V_Outlet,V_Pno,V_Lob,V_Dpguid;
Exit When C11%Notfound;
End Loop;
close c11;
Forall I In 1..V_Actno.count
Update transformation_flat Set Product_Guid=V_Dpguid(I) Where
Account_Num=V_Actno(I) And
Outlet_Num=V_Outlet(I) And
Product_Number=V_Pno(I) And
ln_of_biz=v_lob(I);
Commit;
END;
for above i do have index on that table on (account_num,outlet_num,product_number,ln_of_biz).
when i checked the memory contents for this sqlid in v$sql , below are the values :
Disk_Read:21640650
Buffer_Gets:22466856
Concurrency_Wait_Time:16923
Cluster_Wait_Time:36313694
user_io_wait_time:3594365433
I need some inputs in which area i can tune the above code..
Thanks

835589 wrote:
hi i am also face same performance issue pls reply me ASAP

Don't use the word ASAP, it is rude and a violation of forum terms and conditions.
http://www.oracle.com/html/terms.html
>
4. Use of Community Services
Community Services are provided as a convenience to users and Oracle is not obligated to provide any technical support for, or participate in, Community Services. While Community Services may include information regarding Oracle products and services, including information from Oracle employees, they are not an official customer support channel for Oracle.
You may use Community Services subject to the following: (a) Community Services may be used solely for your personal, informational, noncommercial purposes; (b) Content provided on or through Community Services may not be redistributed; and (c) personal data about other users may not be stored or collected except where expressly authorized by Oracle
>
These people are for ASAP requests.
http://www.google.com/search?q=oracle+consultant
where do I need to tune the query?
please, I am eagerly waiting for that
Read and understand the links posted above
RPuttagunta wrote:
HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long ... -
SLOW Query ... Need help improving performance
Database: Oracle 8i
Note: I don't have a whole lot of experience with writing queries, so please forgive me for any dumb mistakes I most likely made.
I have a query in which I have a SUM in two levels. I think this is probably the root cause of the very slow performance of the query. However, I just don't see any way around it, and can't come up with any other ways to speed up the query. The query itself only returns one line, the summary line. And, by slow, I mean it can take up to an hour or two. This is a query I need to run multiple times, based on some parameters that I cannot query from a database.
The query basically calculates the current unit cost of a part. It has to sum up the issue cost of the part (cost of material issued to the part's order), the actual dollars put into a part (labor, etc.), and the burden dollars associated with the part. This sum has to be divided by the total quantity of parts completed on the part's order to get the unit cost. I have to account for the possibility that the quantity complete is 0, so that I don't end up dividing by 0.
Below is my query, and sample data for it:
SELECT a.part_nbr
, a.mo_nbr
, a.unit_iss_cost
, CASE
WHEN a.qty_complete_ind ='Nonzero'
THEN SUM(a.act_dlrs/a.qty_complete)
ELSE 0
END AS unit_dlrs
, CASE
WHEN a.qty_complete_ind ='Zero'
THEN SUM(a.act_dlrs)
ELSE 0
END AS qty_0_dlrs
FROM ( SELECT act.part_nbr AS part_nbr
, act.order_nbr || '-' || act.sub_order_nbr AS mo_nbr
, ic.unit_iss_cost AS unit_iss_cost
, SUM (
act.act_dlrs_earned +
act.act_brdn_dls_earned +
act.tool_dlrs_earned +
act.act_fix_brdn_dls_ea
) AS act_dlrs
, ord.qty_complete AS qty_complete
, CASE
WHEN ord.qty_complete <>0
THEN 'Nonzero'
ELSE 'Zero'
END AS qty_complete_ind
FROM ACT act
, ISSUE_COST ic
, ORD ord
WHERE ord.ord_nbr =act.order_nbr AND
ord.sub_ord_nbr =act.sub_order_nbr AND
ord.major_seq_nbr =act.maj_seq_nbr AND
ic.ord_nbr =act.order_nbr AND
ic.sub_ord_nbr =act.sub_order_nbr AND
(act.order_nbr =LPAD(?,10,'0')) AND
(act.sub_order_nbr =LPAD(?,3,'0')) AND
(act.activity_date <=?)
GROUP BY act.part_nbr
, act.order_nbr || '-' || act.sub_order_nbr
, act.maj_seq_nbr
, ord.qty_complete
, ic.unit_iss_cost
) a
GROUP BY a.part_nbr
, a.mo_nbr
, a.unit_iss_cost
, a.qty_complete_ind
CREATE TABLE ACT
creation_date date
, c_id number (5,0)
, part_nbr varchar(25)
, order_nbr varchar(10)
, sub_order_nbr varchar(3)
, maj_seq_nbr varchar(4)
, act_dlrs_earned number (15,2)
, act_brdn_dls_earned number (15,2)
, tool_dlrs_earned number (15,2)
, act_fix_brdn_dls_ea number (15,2)
, activity_date date
CONSTRAINT ACT_PK
PRIMARY KEY (creation_date, c_id)
);
--Please note, issue_cost is actually a view, not a table, but by itself it runs very quickly
CREATE TABLE ISSUE_COST (
unit_iss_cost number(15,2)
, ord_nbr varchar(10)
, sub_ord_nbr varchar(3)
);
--Please note, ord table has a couple of foreign keys that I did not mention
CREATE TABLE ORD (
ord_nbr varchar(10)
, sub_ord_nbr varchar(3)
, major_seq_nbr varchar(4)
, qty_complete number (13,4)
);
Sample tables:
ACT
creation_date c_id part_nbr order_nbr sub_order_nbr maj_seq_nbr act_dlrs_earned act_brdn_dls_earned tool_dlrs_earned act_fix_brdn_dls_ea activity_date
01/02/2000 12345 ABC-123 0000012345 001 0010 10.00 20.00 0.00 0.00 01/01/2000
01/02/2000 12345 XYZ-987 0000054321 001 0030 100.00 175.00 10.00 10.00 01/01/2000
01/03/2000 12347 ABC-123 0000012345 001 0020 25.00 75.00 5.00 1.00 01/02/2000
01/03/2000 12348 ABC-123 0000012345 001 0020 75.00 120.00 25.00 5.00 01/02/2000
01/03/2000 12349 XYZ-987 0000054321 001 0050 50.00 110.00 0.00 0.00 01/02/2000
01/04/2000 12350 ABC-123 0000012345 001 0030 25.00 40.00 0.00 0.00 01/03/2000
ISSUE_COST
unit_iss_cost ord_nbr sub_ord_nbr
125.00 0000012345 001
650.00 0000054321 001
ORD
ord_nbr sub_ord_nbr major_seq_nbr qty_complete
0000012345 001 0010 10
0000012345 001 0020 10
0000012345 001 0030 0
0000054321 001 0030 20
0000054321 001 0050 19
If insert statements are needed for the sample tables, let me know and I'll go re-figure out how to write them. (I only have read-only access to the database I'm querying, so creating tables and inserting values aren't things I ever do.)
Thanks in advance!
For diagnosing where the time of your query is being spent, we don't need CREATE TABLE and INSERT statements. If we execute your query with only a handful of rows, the query will be very fast. What we do need to know is the plan the optimizer takes to compute your result set, and the cardinalities of each step.
Please read When your query takes too long ... carefully and post the full explain plan and tkprof output.
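For reference, both artifacts can be produced in SQL*Plus with standard commands (substitute the slow query and your own trace-file name):

```sql
-- Explain plan for the statement:
EXPLAIN PLAN FOR
SELECT ... ;                              -- the slow query goes here
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- SQL trace, to be formatted with tkprof afterwards:
ALTER SESSION SET SQL_TRACE = TRUE;
-- run the slow query, then find the trace file in user_dump_dest and run:
--   tkprof <tracefile>.trc report.txt sys=no
```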
Regards,
Rob. -
Need help with Performance Tuning
Following query takes 8 secs. Any help will be appreciated
SELECT SUM(FuturesMarketVal) FuturesMarketVal
FROM (SELECT CASE WHEN null IS NULL THEN FuturesMarketVal
ELSE DECODE(FUTURES_NAME, null, FuturesMarketVal, 0)
END FuturesMarketVal
FROM (SELECT SUM( (a.FUTURES_ALLOC * (NVL(b.Futures_EOD_Price,0)/100 - NVL(c.Futures_EOD_Price,0)/100) * a.CONTRACT_SIZE) / DECODE(a.CONTRACT_SIZE,100000,1,1000000,4,3000000,12,1) ) FuturesMarketVal,
a.FUTURES_NAME
FROM cms_futures_trans a,
cms_futures_price b,
cms_futures_price c
Where c.history_date (+) = TO_DATE(fas_pas_pkg.get_weekday(to_date('12/30/2005') - 1),'mm/dd/yyyy')
and a.FUTURES_NAME = b.FUTURES_NAME (+)
AND a.trade_date < TO_DATE('12/30/2005','mm/dd/yyyy')
AND b.history_date (+) = TO_DATE('12/30/2005','mm/dd/yyyy')
AND a.FUTURES_NAME = c.FUTURES_NAME (+)
GROUP BY a.FUTURES_NAME))
/
Eric:
But there are only 5 records in cms_futures_price and 10 in cms_futures_trans :-)
OP:
I'm not sure what you are trying to do here, but a couple of comments.
Since NULL IS NULL will always be true, you don't really need the CASE statement. As it stands, your query will always return FuturesMarketVal. If the results are correct, then you can do without the DECODE as well.
Why are you calling fas_pas_pkg.get_weekday with a constant value? Can you not just use whatever it returns as a constant instead of calling the function?
Are you sure you need all those outer joins? They almost guarantee full scans of the outer joined tables.
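Taking those comments together, a simplified version of the query might look like the sketch below. This is an assumption-laden rewrite (the outer CASE and DECODE are dropped along with the now-unneeded GROUP BY, and the fas_pas_pkg.get_weekday call is replaced by a precomputed date, here assumed to be 12/29/2005); only the poster can confirm it returns the same results:

```sql
-- Hypothetical simplification based on the comments above; table,
-- column, and function names are taken from the original post.
SELECT SUM( (a.FUTURES_ALLOC
             * (NVL(b.Futures_EOD_Price,0)/100 - NVL(c.Futures_EOD_Price,0)/100)
             * a.CONTRACT_SIZE)
            / DECODE(a.CONTRACT_SIZE, 100000,1, 1000000,4, 3000000,12, 1)
          ) AS FuturesMarketVal
FROM   cms_futures_trans a,
       cms_futures_price b,
       cms_futures_price c
WHERE  a.FUTURES_NAME     = b.FUTURES_NAME (+)
AND    b.history_date (+) = TO_DATE('12/30/2005','mm/dd/yyyy')
AND    a.FUTURES_NAME     = c.FUTURES_NAME (+)
AND    c.history_date (+) = TO_DATE('12/29/2005','mm/dd/yyyy')  -- assumed prior weekday
AND    a.trade_date < TO_DATE('12/30/2005','mm/dd/yyyy');
```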
Perhaps if you post some representative data from the two tables and an explanation of what you are trying to accomplish someone may have a better idea.
John -
Need help on performing database queries with a general query
Dear experts,
I have two issues
1)
I have developed a piece of software which connects to an Oracle database
and executes DDL/DML statements.
I am using the Oracle JDBC thin driver.
When I execute a query with a syntax error, or one that uses an invalid column, I get an exception accordingly. However, I want to know if there is any way to find the exact position of the error,
just as an asterisk (*) is shown in SQL*Plus for an invalid column.
2)
Whenever I minimize the software window (made of Swing components)
and then maximize it again, it takes almost 12 seconds to become visible; until then it remains white. Is Swing really that slow?
(1) No.
(2) It's possible to make Swing programs that slow. (I know, I have done it.) But usually the slowness is in application code (your code) and not in Swing itself. -
Need help on performance tip of USE_APPLICATION_VIEW_CACHE
Hi Experts,
Using JDEVADF_11.1.1.5.0
Can we enable application view caching by setting the value of oracle.adf.view.faces.USE_APPLICATION_VIEW_CACHE in web.xml to true for JDEVADF_11.1.1.5.0?
Because I can see a bug report which recommends not setting this option up to JDev 11.1.1.3.0.
Support document 1234050.1.
Applies To:
Oracle JDeveloper - Version 10.1.3.4.0 to 11.1.1.3.0 [Release Oracle10g to Oracle11g]
Information in this document applies to any platform.
Regards,
Dinesh
Hi,
we would need a bug number to tell when this bug has been fixed. Given that the document is not updated for later versions, I assume you can use it.
Frank -
Need help with performance for very very huge tables...
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production.
My DB has many tables and out of which I am interested in getting data from product and sales.
select /*parallel 32*/count(1) from (
select /*parallel 32*/distinct prod_code from product pd, sales s
where pd.prod_opt_cd is NULL
and s.sales_id = pd.sales_id
and s.creation_dts between to_date ('2012-07-01','YYYY-MM-DD') and
to_date ('2012-07-31','YYYY-MM-DD')
)
More information -
Total Rows in sales table - 18001217
Total rows in product table - 411800392
creation_dts doesn't have an index on it.
I started the query in the background, but after 30 hours I saw the error saying -
ORA-01555: snapshot too old: rollback segment number 153 with name
Is there any other way to get the above data in an optimized way?
Formatting your query a bit (and removing the hints), it evaluates to:
SELECT COUNT(1)
FROM (SELECT DISTINCT prod_code
FROM product pd
INNER JOIN sales s
ON s.sales_id = pd.sales_id
WHERE pd.prod_opt_cd is NULL
AND s.creation_dts BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD')
AND TO_DATE('2012-07-31','YYYY-MM-DD')
);
This should be equivalent to:
SELECT COUNT(DISTINCT prod_code)
FROM product pd
INNER JOIN sales s
ON s.sales_id = pd.sales_id
WHERE pd.prod_opt_cd is NULL
AND s.creation_dts BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD')
AND TO_DATE('2012-07-31','YYYY-MM-DD');On the face of it, that's a ridiculously simple query If s.sales_id and pd.sales_id are both indexed, then I don't see why it would take a huge amount of time. Even having to perform a FTS on the sales table because creation_dts isn't indexed shouldn't make it a 30-hour query. If either of those two is not indexed, then it's a much uglier prospect in joining the two tables. However, if you often join the product and sales tables (which seems likely), then not having those fields indexed would be contraindicated. -
I have a query that is taking too long when it should take less than 5 seconds. How can I solve this problem? I think the u_finalresult_user table is the problem, but I am not sure and don't know how to fix it.
Explain
SQL Statement which produced this data:
select * from table(dbms_xplan.display)
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
| 0 | SELECT STATEMENT | | 584 | 3522K| | 4484 |
|* 1 | FILTER | | | | | |
|* 2 | CONNECT BY WITH FILTERING | | | | | |
|* 3 | FILTER | | | | | |
| 4 | COUNT | | | | | |
|* 5 | HASH JOIN | | 584 | 3522K| | 4484 |
| 6 | VIEW | | 74 | 80660 | | 428 |
| 7 | WINDOW SORT | | 74 | 740 | | 428 |
| 8 | SORT GROUP BY | | 74 | 740 | | 428 |
|* 9 | TABLE ACCESS FULL | U_FINALRESULT_USER | 74 | 740 | | 425 |
|* 10 | HASH JOIN OUTER | | 789 | 3918K| 3120K| 4038 |
| 11 | VIEW | | 789 | 3106K| | 3530 |
| 12 | FILTER | | | | | |
| 13 | TABLE ACCESS BY INDEX ROWID | TEST | 772K| 10M| | 3 |
| 14 | NESTED LOOPS | | 789 | 33927 | | 3530 |
| 15 | NESTED LOOPS | | 789 | 22881 | | 1163 |
| 16 | NESTED LOOPS | | 383 | 6894 | | 14 |
| 17 | NESTED LOOPS | | 1 | 10 | | 3 |
| 18 | TABLE ACCESS BY INDEX ROWID| SDG | 1 | 4 | | 2 |
|* 19 | INDEX UNIQUE SCAN | PK_SDG | 865 | | | 1 |
| 20 | TABLE ACCESS BY INDEX ROWID| SDG_USER | 1 | 6 | | 1 |
|* 21 | INDEX UNIQUE SCAN | PK_SDG_USER | 1 | | | |
| 22 | TABLE ACCESS BY INDEX ROWID | SAMPLE | 1 | 8 | | 11 |
|* 23 | INDEX RANGE SCAN | FK_SAMPLE_SDG | 383 | | | 2 |
|* 24 | TABLE ACCESS BY INDEX ROWID | ALIQUOT | 1 | 11 | | 3 |
|* 25 | INDEX RANGE SCAN | FK_ALIQUOT_SAMPLE | 2 | | | 2 |
|* 26 | INDEX RANGE SCAN | FK_TEST_ALIQUOT | 1 | | | 2 |
| 27 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 28 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 29 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 30 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 31 | VIEW | | 37 | 38998 | | 428 |
| 32 | SORT UNIQUE | | 37 | 555 | | 428 |
| 33 | WINDOW SORT | | 37 | 555 | | 428 |
|* 34 | TABLE ACCESS FULL | U_FINALRESULT_USER | 37 | 555 | | 425 |
| 35 | HASH JOIN | | | | | |
| 36 | CONNECT BY PUMP | | | | | |
| 37 | COUNT | | | | | |
|* 38 | HASH JOIN | | 584 | 3522K| | 4484 |
| 39 | VIEW | | 74 | 80660 | | 428 |
| 40 | WINDOW SORT | | 74 | 740 | | 428 |
| 41 | SORT GROUP BY | | 74 | 740 | | 428 |
|* 42 | TABLE ACCESS FULL | U_FINALRESULT_USER | 74 | 740 | | 425 |
|* 43 | HASH JOIN OUTER | | 789 | 3918K| 3120K| 4038 |
| 44 | VIEW | | 789 | 3106K| | 3530 |
| 45 | FILTER | | | | | |
| 46 | TABLE ACCESS BY INDEX ROWID | TEST | 772K| 10M| | 3 |
| 47 | NESTED LOOPS | | 789 | 33927 | | 3530 |
| 48 | NESTED LOOPS | | 789 | 22881 | | 1163 |
| 49 | NESTED LOOPS | | 383 | 6894 | | 14 |
| 50 | NESTED LOOPS | | 1 | 10 | | 3 |
| 51 | TABLE ACCESS BY INDEX ROWID| SDG | 1 | 4 | | 2 |
|* 52 | INDEX UNIQUE SCAN | PK_SDG | 865 | | | 1 |
| 53 | TABLE ACCESS BY INDEX ROWID| SDG_USER | 1 | 6 | | 1 |
|* 54 | INDEX UNIQUE SCAN | PK_SDG_USER | 1 | | | |
| 55 | TABLE ACCESS BY INDEX ROWID | SAMPLE | 1 | 8 | | 11 |
|* 56 | INDEX RANGE SCAN | FK_SAMPLE_SDG | 383 | | | 2 |
|* 57 | TABLE ACCESS BY INDEX ROWID | ALIQUOT | 1 | 11 | | 3 |
|* 58 | INDEX RANGE SCAN | FK_ALIQUOT_SAMPLE | 2 | | | 2 |
|* 59 | INDEX RANGE SCAN | FK_TEST_ALIQUOT | 1 | | | 2 |
| 60 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 61 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 62 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 63 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 64 | VIEW | | 37 | 38998 | | 428 |
| 65 | SORT UNIQUE | | 37 | 555 | | 428 |
| 66 | WINDOW SORT | | 37 | 555 | | 428 |
|* 67 | TABLE ACCESS FULL | U_FINALRESULT_USER | 37 | 555 | | 425 |
Predicate Information (identified by operation id):
1 - filter("FR_PIVOT"."MAXLEVEL"=LEVEL)
2 - filter("FR_PIVOT"."RANK"=1)
3 - filter("FR_PIVOT"."RANK"=1)
5 - access("FR_PIVOT"."REF"=TO_CHAR("FR"."U_SDG_ID")||TO_CHAR("FR"."U_TEST_TEMPLATE_ID"))
9 - filter(NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT","U_FINALRESULT_USER"."U_CALCULATED_RESULT
")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=TO_NUMBER(:Z))
10 - access("SD"."SDG_ID"="FR"."U_SDG_ID"(+) AND "SD"."TEST_TEMPLATE_ID"="FR"."U_TEST_TEMPLATE_ID"(
+))
19 - access("SYS_ALIAS_4"."SDG_ID"=TO_NUMBER(:Z))
21 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
23 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
24 - filter("SYS_ALIAS_2"."STATUS"='C' OR "SYS_ALIAS_2"."STATUS"='P' OR "SYS_ALIAS_2"."STATUS"='V')
25 - access("SYS_ALIAS_2"."SAMPLE_ID"="SYS_ALIAS_3"."SAMPLE_ID")
26 - access("SYS_ALIAS_1"."ALIQUOT_ID"="SYS_ALIAS_2"."ALIQUOT_ID")
34 - filter("U_FINALRESULT_USER"."U_REQUESTED"='T' AND NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT
","U_FINALRESULT_USER"."U_CALCULATED_RESULT")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=
TO_NUMBER(:Z))
38 - access("FR_PIVOT"."REF"=TO_CHAR("FR"."U_SDG_ID")||TO_CHAR("FR"."U_TEST_TEMPLATE_ID"))
42 - filter(NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT","U_FINALRESULT_USER"."U_CALCULATED_RESULT
")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=TO_NUMBER(:Z))
43 - access("SD"."SDG_ID"="FR"."U_SDG_ID"(+) AND "SD"."TEST_TEMPLATE_ID"="FR"."U_TEST_TEMPLATE_ID"(
+))
52 - access("SYS_ALIAS_4"."SDG_ID"=TO_NUMBER(:Z))
54 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
56 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
57 - filter("SYS_ALIAS_2"."STATUS"='C' OR "SYS_ALIAS_2"."STATUS"='P' OR "SYS_ALIAS_2"."STATUS"='V')
58 - access("SYS_ALIAS_2"."SAMPLE_ID"="SYS_ALIAS_3"."SAMPLE_ID")
59 - access("SYS_ALIAS_1"."ALIQUOT_ID"="SYS_ALIAS_2"."ALIQUOT_ID")
67 - filter("U_FINALRESULT_USER"."U_REQUESTED"='T' AND NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT
","U_FINALRESULT_USER"."U_CALCULATED_RESULT")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=
TO_NUMBER(:Z))
Note: cpu costing is off
Tkprof
TKPROF: Release 9.2.0.1.0 - Production on Fri Jul 13 15:03:47 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: d:\oracle\admin\nautdev\udump\nautdev_ora_13020.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
alter session set sql_trace true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
select VALUE
from
nls_session_parameters where PARAMETER='NLS_NUMERIC_CHARACTERS'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select VALUE
from
nls_session_parameters where PARAMETER='NLS_DATE_FORMAT'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select VALUE
from
nls_session_parameters where PARAMETER='NLS_CURRENCY'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select to_char(9,'9C')
from
dual
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
total 3 0.00 0.00 0 3 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
SELECT sd.u_bas_stockseed_code,
sd.u_bas_storage_code,
sd.description as test,
case when fr.resultcount < 8
then null
else case when fr.resultdistinct > 1
then 'spl'
else fr.resultfinal
end
end as result,
case when level >=2
then substr(sys_connect_by_path(valcount,','),2)
end as spl
FROM
SELECT sd.sdg_id,
sa.sample_id,
t.test_template_id,
sdu.u_bas_stockseed_code,
sdu.u_bas_storage_code,
t.description
FROM lims_sys.sdg sd, lims_sys.sdg_user sdu, lims_sys.sample sa, lims_sys.aliquot a,lims_sys.test t
WHERE sd.sdg_id = sdu.sdg_id
AND sd.sdg_id = sa.sdg_id
AND a.sample_id = sa.sample_id
AND t.aliquot_id = a.aliquot_id
AND a.status IN ('V','P','C')
AND sd.sdg_id IN (:SDGID)
) sd,
SELECT distinct fr.u_sdg_id,
fr.u_sample_id,
fr.u_test_template_id,
nvl(fr.u_overruled_result, fr.u_calculated_result) as Resultfinal,
count(distinct nvl(fr.u_overruled_result, fr.u_calculated_result)) over (partition by concat(fr.u_sdg_id, fr.u_test_template_id)) as resultdistinct,
count(nvl(fr.u_overruled_result, fr.u_calculated_result)) over (partition by concat(fr.u_sdg_id, fr.u_test_template_id)) as resultcount
FROM lims_sys.u_finalresult_user fr
WHERE fr.u_requested = 'T'
AND nvl(fr.u_overruled_result,fr.u_calculated_result) != 'X'
AND fr.u_sdg_id IN (:SDGID)
) fr,
SELECT concat(fr.u_sdg_id, fr.u_test_template_id) as ref,
nvl( fr.u_overruled_result, fr.u_calculated_result),
to_char(count(*)) || 'x' || nvl(fr.u_overruled_result, fr.u_calculated_result) as valcount,
row_number() over (partition by concat(fr.u_sdg_id, fr.u_test_template_id) order by count(*) desc, nvl(fr.u_overruled_result, fr.u_calculated_result)) as rank,
count(*) over (partition by concat(fr.u_sdg_id, fr.u_test_template_id)) AS MaxLevel
FROM lims_sys.u_finalresult_user fr
WHERE nvl(fr.u_overruled_result,fr.u_calculated_result) != 'X'
AND fr.u_sdg_id IN (:SDGID)
GROUP BY concat(fr.u_sdg_id, fr.u_test_template_id), nvl(fr.u_overruled_result,fr.u_calculated_result)
) fr_pivot
WHERE sd.sdg_id = fr.u_sdg_id (+)
AND sd.test_template_id = fr.u_test_template_id (+)
AND fr_pivot.ref = concat(fr.u_sdg_id,fr.u_test_template_id)
AND level = maxlevel
start with rank = 1 connect by
prior fr.u_sdg_id = fr.u_sdg_id
and prior fr.u_test_template_id = fr.u_test_template_id
and prior rank = rank - 1
call count cpu elapsed disk query current rows
Parse 1 0.03 0.58 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 1 8344.424154501457.66 15955 140391 178371 500
total 4 8344.454154501458.25 15955 140391 178371 500
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
500 FILTER
507 CONNECT BY WITH FILTERING
24731 FILTER
169667 COUNT
169667 HASH JOIN
34 VIEW
34 WINDOW SORT
34 SORT GROUP BY
312 TABLE ACCESS FULL U_FINALRESULT_USER
24731 HASH JOIN OUTER
546 VIEW
546 FILTER
546 TABLE ACCESS BY INDEX ROWID TEST
1093 NESTED LOOPS
546 NESTED LOOPS
123 NESTED LOOPS
1 NESTED LOOPS
1 TABLE ACCESS BY INDEX ROWID SDG
1 INDEX UNIQUE SCAN PK_SDG (object id 54343)
1 TABLE ACCESS BY INDEX ROWID SDG_USER
1 INDEX UNIQUE SCAN PK_SDG_USER (object id 54368)
123 TABLE ACCESS BY INDEX ROWID SAMPLE
123 INDEX RANGE SCAN FK_SAMPLE_SDG (object id 54262)
546 TABLE ACCESS BY INDEX ROWID ALIQUOT
546 INDEX RANGE SCAN FK_ALIQUOT_SAMPLE (object id 53620)
546 INDEX RANGE SCAN FK_TEST_ALIQUOT (object id 54493)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
291 VIEW
291 SORT UNIQUE
291 WINDOW SORT
291 TABLE ACCESS FULL U_FINALRESULT_USER
1312330604 HASH JOIN
169667 CONNECT BY PUMP
2036004 COUNT
2036004 HASH JOIN
408 VIEW
408 WINDOW SORT
408 SORT GROUP BY
3744 TABLE ACCESS FULL U_FINALRESULT_USER
296772 HASH JOIN OUTER
6552 VIEW
6552 FILTER
6552 TABLE ACCESS BY INDEX ROWID TEST
13116 NESTED LOOPS
6552 NESTED LOOPS
1476 NESTED LOOPS
12 NESTED LOOPS
12 TABLE ACCESS BY INDEX ROWID SDG
12 INDEX UNIQUE SCAN PK_SDG (object id 54343)
12 TABLE ACCESS BY INDEX ROWID SDG_USER
12 INDEX UNIQUE SCAN PK_SDG_USER (object id 54368)
1476 TABLE ACCESS BY INDEX ROWID SAMPLE
1476 INDEX RANGE SCAN FK_SAMPLE_SDG (object id 54262)
6552 TABLE ACCESS BY INDEX ROWID ALIQUOT
6552 INDEX RANGE SCAN FK_ALIQUOT_SAMPLE (object id 53620)
6552 INDEX RANGE SCAN FK_TEST_ALIQUOT (object id 54493)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
3492 VIEW
3492 SORT UNIQUE
3492 WINDOW SORT
3492 TABLE ACCESS FULL U_FINALRESULT_USER
select 'x'
from
dual
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 6 0 2
total 6 0.00 0.00 0 6 0 2
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
begin :id := sys.dbms_transaction.local_transaction_id; end;
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 2
Fetch 0 0.00 0.00 0 0 0 0
total 4 0.00 0.00 0 0 0 2
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.03 0.59 0 0 0 0
Execute 11 0.00 0.00 0 0 0 2
Fetch 7 8344.424154501457.66 15955 140400 178371 506
total 27 8344.454154501458.25 15955 140400 178371 508
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 40 0.00 0.00 0 0 0 0
Execute 40 0.00 0.00 0 0 0 0
Fetch 40 0.00 0.00 0 81 0 40
total 120 0.00 0.00 0 81 0 40
Misses in library cache during parse: 0
10 user SQL statements in session.
40 internal SQL statements in session.
50 SQL statements in session.
Trace file: d:\oracle\admin\nautdev\udump\nautdev_ora_13020.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
40 internal SQL statements in trace file.
50 SQL statements in trace file.
10 unique SQL statements in trace file.
544 lines in trace file.
I altered the query as you suggested and also added an ORDERED hint. It seems that this solved my problem. Thank you.
Explain
SQL Statement which produced this data:
select * from table(dbms_xplan.display)
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
| 0 | SELECT STATEMENT | | 366 | 759K| | 4050 |
| 1 | SORT UNIQUE | | 366 | 759K| 1960K| 4050 |
|* 2 | FILTER | | | | | |
|* 3 | CONNECT BY WITH FILTERING | | | | | |
|* 4 | FILTER | | | | | |
| 5 | COUNT | | | | | |
| 6 | FILTER | | | | | |
|* 7 | HASH JOIN | | 366 | 759K| | 3918 |
| 8 | NESTED LOOPS | | 766 | 32938 | | 3461 |
| 9 | NESTED LOOPS | | 766 | 22214 | | 1163 |
| 10 | NESTED LOOPS | | 383 | 6894 | | 14 |
| 11 | NESTED LOOPS | | 1 | 10 | | 3 |
| 12 | TABLE ACCESS BY INDEX ROWID| SDG | 1 | 4 | | 2 |
|* 13 | INDEX UNIQUE SCAN | PK_SDG | 865 | | | 1 |
| 14 | TABLE ACCESS BY INDEX ROWID| SDG_USER | 1 | 6 | | 1 |
|* 15 | INDEX UNIQUE SCAN | PK_SDG_USER | 1 | | | |
| 16 | TABLE ACCESS BY INDEX ROWID | SAMPLE | 383 | 3064 | | 11 |
|* 17 | INDEX RANGE SCAN | FK_SAMPLE_SDG | 383 | | | 2 |
|* 18 | TABLE ACCESS BY INDEX ROWID | ALIQUOT | 2 | 22 | | 3 |
|* 19 | INDEX RANGE SCAN | FK_ALIQUOT_SAMPLE | 2 | | | 2 |
|* 20 | TABLE ACCESS BY INDEX ROWID | TEST | 1 | 14 | | 3 |
|* 21 | INDEX RANGE SCAN | FK_TEST_ALIQUOT | 1 | | | 2 |
| 22 | VIEW | | 74 | 150K| | 455 |
| 23 | WINDOW SORT | | 74 | 150K| 408K| 455 |
| 24 | VIEW | | 74 | 150K| | 428 |
| 25 | SORT UNIQUE | | 74 | 740 | | 428 |
| 26 | WINDOW SORT | | 74 | 740 | | 428 |
|* 27 | TABLE ACCESS FULL | U_FINALRESULT_USER | 74 | 740 | | 425 |
| 28 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 29 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 30 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 31 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 32 | HASH JOIN | | | | | |
| 33 | CONNECT BY PUMP | | | | | |
| 34 | COUNT | | | | | |
| 35 | FILTER | | | | | |
|* 36 | HASH JOIN | | 366 | 759K| | 3918 |
| 37 | NESTED LOOPS | | 766 | 32938 | | 3461 |
| 38 | NESTED LOOPS | | 766 | 22214 | | 1163 |
| 39 | NESTED LOOPS | | 383 | 6894 | | 14 |
| 40 | NESTED LOOPS | | 1 | 10 | | 3 |
| 41 | TABLE ACCESS BY INDEX ROWID| SDG | 1 | 4 | | 2 |
|* 42 | INDEX UNIQUE SCAN | PK_SDG | 865 | | | 1 |
| 43 | TABLE ACCESS BY INDEX ROWID| SDG_USER | 1 | 6 | | 1 |
|* 44 | INDEX UNIQUE SCAN | PK_SDG_USER | 1 | | | |
| 45 | TABLE ACCESS BY INDEX ROWID | SAMPLE | 383 | 3064 | | 11 |
|* 46 | INDEX RANGE SCAN | FK_SAMPLE_SDG | 383 | | | 2 |
|* 47 | TABLE ACCESS BY INDEX ROWID | ALIQUOT | 2 | 22 | | 3 |
|* 48 | INDEX RANGE SCAN | FK_ALIQUOT_SAMPLE | 2 | | | 2 |
|* 49 | TABLE ACCESS BY INDEX ROWID | TEST | 1 | 14 | | 3 |
|* 50 | INDEX RANGE SCAN | FK_TEST_ALIQUOT | 1 | | | 2 |
| 51 | VIEW | | 74 | 150K| | 455 |
| 52 | WINDOW SORT | | 74 | 150K| 408K| 455 |
| 53 | VIEW | | 74 | 150K| | 428 |
| 54 | SORT UNIQUE | | 74 | 740 | | 428 |
| 55 | WINDOW SORT | | 74 | 740 | | 428 |
|* 56 | TABLE ACCESS FULL | U_FINALRESULT_USER | 74 | 740 | | 425 |
| 57 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 58 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 59 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 60 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
Predicate Information (identified by operation id):
2 - filter("FR"."MAXLEVEL"=LEVEL)
3 - filter("FR"."RANK"=1)
4 - filter("FR"."RANK"=1)
7 - access("SYS_ALIAS_4"."SDG_ID"="FR"."U_SDG_ID" AND "SYS_ALIAS_1"."TEST_TEMPLATE_ID"="FR"."U_T
EST_TEMPLATE_ID")
13 - access("SYS_ALIAS_4"."SDG_ID"=TO_NUMBER(:Z))
15 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
17 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
18 - filter("SYS_ALIAS_2"."STATUS"='V' OR "SYS_ALIAS_2"."STATUS"='P' OR "SYS_ALIAS_2"."STATUS"='C
19 - access("SYS_ALIAS_2"."SAMPLE_ID"="SYS_ALIAS_3"."SAMPLE_ID")
20 - filter("SYS_ALIAS_1"."DESCRIPTION" IS NOT NULL)
21 - access("SYS_ALIAS_1"."ALIQUOT_ID"="SYS_ALIAS_2"."ALIQUOT_ID")
27 - filter(NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT","U_FINALRESULT_USER"."U_CALCULATED_RESU
LT")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=TO_NUMBER(:Z))
36 - access("SYS_ALIAS_4"."SDG_ID"="FR"."U_SDG_ID" AND "SYS_ALIAS_1"."TEST_TEMPLATE_ID"="FR"."U_T
EST_TEMPLATE_ID")
42 - access("SYS_ALIAS_4"."SDG_ID"=TO_NUMBER(:Z))
44 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
46 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
47 - filter("SYS_ALIAS_2"."STATUS"='V' OR "SYS_ALIAS_2"."STATUS"='P' OR "SYS_ALIAS_2"."STATUS"='C
48 - access("SYS_ALIAS_2"."SAMPLE_ID"="SYS_ALIAS_3"."SAMPLE_ID")
49 - filter("SYS_ALIAS_1"."DESCRIPTION" IS NOT NULL)
50 - access("SYS_ALIAS_1"."ALIQUOT_ID"="SYS_ALIAS_2"."ALIQUOT_ID")
56 - filter(NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT","U_FINALRESULT_USER"."U_CALCULATED_RESU
LT")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=TO_NUMBER(:Z))
Note: cpu costing is off
Tkprof
TKPROF: Release 9.2.0.1.0 - Production on Mon Jul 16 11:28:18 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: d:\oracle\admin\nautdev\udump\nautdev_ora_13144.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
alter session set sql_trace true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
select VALUE
from
nls_session_parameters where PARAMETER='NLS_NUMERIC_CHARACTERS'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select VALUE
from
nls_session_parameters where PARAMETER='NLS_DATE_FORMAT'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select VALUE
from
nls_session_parameters where PARAMETER='NLS_CURRENCY'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select to_char(9,'9C')
from
dual
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
total 3 0.00 0.00 0 3 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
SELECT distinct sd.u_bas_stockseed_code,
sd.u_bas_storage_code,
sd.description as test,
case when fr.resultcount < 8
then null
else case when fr.resultdistinct > 1
then 'spl'
else fr.resultfinal
end
end as result,
case when level >=2 and fr.resultcount > 7
then substr(sys_connect_by_path(valcount,','),2)
end as spl
FROM
SELECT /*+ ORDERED */ sd.sdg_id,
sa.sample_id,
t.test_template_id,
sdu.u_bas_stockseed_code,
sdu.u_bas_storage_code,
t.description
FROM lims_sys.sdg sd, lims_sys.sdg_user sdu, lims_sys.sample sa, lims_sys.aliquot a,lims_sys.test t
WHERE sd.sdg_id = sdu.sdg_id
AND sd.sdg_id = sa.sdg_id
AND a.sample_id = sa.sample_id
AND t.aliquot_id = a.aliquot_id
AND a.status IN ('V','P','C')
AND t.description is not null
AND sd.sdg_id IN (:SDGID)
) sd,
SELECT u_sdg_id,
u_test_template_id,
resultfinal,
valcount,
resultcount,
resultdistinct,
row_number() over (partition by u_sdg_id, u_test_template_id order by resultfinal) as rank,
count(*) over (partition by u_sdg_id, u_test_template_id) AS MaxLevel
FROM
SELECT distinct u_sdg_id, u_test_template_id,
nvl( u_overruled_result, u_calculated_result) as resultfinal,
to_char(count(*) over (partition by u_sdg_id, u_test_template_id,nvl(u_overruled_result, u_calculated_result))) || 'x' || nvl(u_overruled_result, u_calculated_result) as valcount,
count(nvl(u_overruled_result, u_calculated_result)) over (partition by u_sdg_id, u_test_template_id) as resultcount,
count(distinct nvl(u_overruled_result, u_calculated_result)) over (partition by u_sdg_id, u_test_template_id) as resultdistinct
FROM lims_sys.u_finalresult_user
WHERE nvl(u_overruled_result,u_calculated_result) != 'X'
AND u_sdg_id IN (:SDGID)
) fr
WHERE sd.sdg_id = fr.u_sdg_id (+)
AND sd.test_template_id = fr.u_test_template_id (+)
AND level = maxlevel
start with rank = 1 connect by
prior fr.u_sdg_id = fr.u_sdg_id
and prior fr.u_test_template_id = fr.u_test_template_id
and prior rank = rank - 1
call count cpu elapsed disk query current rows
Parse 1 0.06 0.64 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 2.26 2.79 2180 29539 0 38
total 3 2.32 3.44 2180 29539 0 38
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
38 SORT UNIQUE
25381 FILTER
27648 CONNECT BY WITH FILTERING
455 FILTER
610 COUNT
610 FILTER
610 HASH JOIN
455 NESTED LOOPS
459 NESTED LOOPS
12 NESTED LOOPS
1 NESTED LOOPS
1 TABLE ACCESS BY INDEX ROWID SDG
1 INDEX UNIQUE SCAN PK_SDG (object id 54343)
1 TABLE ACCESS BY INDEX ROWID SDG_USER
1 INDEX UNIQUE SCAN PK_SDG_USER (object id 54368)
12 TABLE ACCESS BY INDEX ROWID SAMPLE
12 INDEX RANGE SCAN FK_SAMPLE_SDG (object id 54262)
459 TABLE ACCESS BY INDEX ROWID ALIQUOT
460 INDEX RANGE SCAN FK_ALIQUOT_SAMPLE (object id 53620)
455 TABLE ACCESS BY INDEX ROWID TEST
459 INDEX RANGE SCAN FK_TEST_ALIQUOT (object id 54493)
51 VIEW
51 WINDOW SORT
51 VIEW
51 SORT UNIQUE
251 WINDOW SORT
251 TABLE ACCESS FULL U_FINALRESULT_USER
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
1849 HASH JOIN
610 CONNECT BY PUMP
2440 COUNT
2440 FILTER
2440 HASH JOIN
1820 NESTED LOOPS
1836 NESTED LOOPS
48 NESTED LOOPS
4 NESTED LOOPS
4 TABLE ACCESS BY INDEX ROWID SDG
4 INDEX UNIQUE SCAN PK_SDG (object id 54343)
4 TABLE ACCESS BY INDEX ROWID SDG_USER
4 INDEX UNIQUE SCAN PK_SDG_USER (object id 54368)
48 TABLE ACCESS BY INDEX ROWID SAMPLE
48 INDEX RANGE SCAN FK_SAMPLE_SDG (object id 54262)
1836 TABLE ACCESS BY INDEX ROWID ALIQUOT
1840 INDEX RANGE SCAN FK_ALIQUOT_SAMPLE (object id 53620)
1820 TABLE ACCESS BY INDEX ROWID TEST
1836 INDEX RANGE SCAN FK_TEST_ALIQUOT (object id 54493)
204 VIEW
204 WINDOW SORT
204 VIEW
204 SORT UNIQUE
1004 WINDOW SORT
1004 TABLE ACCESS FULL U_FINALRESULT_USER
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
select 'x'
from
dual
call     count    cpu  elapsed   disk  query  current  rows
Parse        2   0.00     0.00      0      0        0     0
Execute      2   0.00     0.00      0      0        0     0
Fetch        2   0.00     0.00      0      6        0     2
total        6   0.00     0.00      0      6        0     2
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
begin :id := sys.dbms_transaction.local_transaction_id; end;
call     count    cpu  elapsed   disk  query  current  rows
Parse        2   0.00     0.00      0      0        0     0
Execute      2   0.00     0.00      0      0        0     2
Fetch        0   0.00     0.00      0      0        0     0
total        4   0.00     0.00      0      0        0     2
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count    cpu  elapsed   disk  query  current  rows
Parse        9   0.06     0.65      0      0        0     0
Execute     10   0.00     0.00      0      0        0     2
Fetch        7   2.26     2.79   2180  29548        0    44
total       26   2.32     3.45   2180  29548        0    46
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call     count    cpu  elapsed   disk  query  current  rows
Parse       61   0.00     0.00      0      0        0     0
Execute     61   0.00     0.00      0      0        0     0
Fetch       61   0.00     0.00      0    124        0    61
total      183   0.00     0.00      0    124        0    61
Misses in library cache during parse: 0
10 user SQL statements in session.
61 internal SQL statements in session.
71 SQL statements in session.
Trace file: d:\oracle\admin\nautdev\udump\nautdev_ora_13144.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
61 internal SQL statements in trace file.
71 SQL statements in trace file.
10 unique SQL statements in trace file.
701 lines in trace file.
Need help with Berkeley XML DB Performance
We need help maximizing the performance of our use of Berkeley XML DB. I am filling in most of the 29-part questionnaire provided by Oracle's BDB team.
Berkeley DB XML Performance Questionnaire
1. Describe the Performance area that you are measuring? What is the
current performance? What are your performance goals you hope to
achieve?
We are measuring the performance while loading a document during
web application startup. It is currently taking 10-12 seconds when
only one user is on the system. We are trying to do some testing to
get the load time when several users are on the system.
We would like the load time to be 5 seconds or less.
2. What Berkeley DB XML Version? Any optional configuration flags
specified? Are you running with any special patches? Please specify?
dbxml 2.4.13. No special patches.
3. What Berkeley DB Version? Any optional configuration flags
specified? Are you running with any special patches? Please Specify.
bdb 4.6.21. No special patches.
4. Processor name, speed and chipset?
Intel Xeon CPU 5150 2.66GHz
5. Operating System and Version?
Red Hat Enterprise Linux Release 4 Update 6
6. Disk Drive Type and speed?
Don't have that information
7. File System Type? (such as EXT2, NTFS, Reiser)
EXT3
8. Physical Memory Available?
4GB
9. Are you using Replication (HA) with Berkeley DB XML? If so, please
describe the network you are using, and the number of Replicas.
No
10. Are you using a Remote Filesystem (NFS) ? If so, for which
Berkeley DB XML/DB files?
No
11. What type of mutexes do you have configured? Did you specify
--with-mutex=? Specify what you find in your config.log; search
for db_cv_mutex.
None. We did not specify --with-mutex during bdb compilation.
12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
Which compiler and version?
Java 1.5
13. If you are using an Application Server or Web Server, please
provide the name and version?
Oracle Application Server 10.1.3.4.0
14. Please provide your exact Environment Configuration Flags (include
anything specified in your DB_CONFIG file)
Default.
15. Please provide your Container Configuration Flags?
final EnvironmentConfig envConf = new EnvironmentConfig();
envConf.setAllowCreate(true);           // If the environment does not exist, create it.
envConf.setInitializeCache(true);       // Turn on the shared memory region.
envConf.setInitializeLocking(true);     // Turn on the locking subsystem.
envConf.setInitializeLogging(true);     // Turn on the logging subsystem.
envConf.setTransactional(true);         // Turn on the transactional subsystem.
envConf.setLockDetectMode(LockDetectMode.MINWRITE);
envConf.setThreaded(true);
envConf.setErrorStream(System.err);
envConf.setCacheSize(1024*1024*64);
envConf.setMaxLockers(2000);
envConf.setMaxLocks(2000);
envConf.setMaxLockObjects(2000);
envConf.setTxnMaxActive(200);
envConf.setTxnWriteNoSync(true);
envConf.setMaxMutexes(40000);
16. How many XML Containers do you have? For each one please specify:
One.
1. The Container Configuration Flags
XmlContainerConfig xmlContainerConfig = new XmlContainerConfig();
xmlContainerConfig.setTransactional(true);
xmlContainerConfig.setIndexNodes(true);
xmlContainerConfig.setReadUncommitted(true);
2. How many documents?
Every time the user logs in, the current XML document is loaded from
an Oracle database table and put into Berkeley XML DB.
The documents get deleted from XML DB when the Oracle application
server container is stopped.
The number of documents starts at zero and grows with every login.
3. What type (node or wholedoc)?
Node
4. Please indicate the minimum, maximum and average size of
documents?
The minimum is about 2MB and the maximum could be 20MB. The average
is mostly about 5MB.
5. Are you using document data? If so please describe how?
We are using document data only to save changes made
to the application data in a web application. The final save goes
to the relational database; Berkeley XML DB is just used to store
temporary data, since going to the relational database for each change
would cause severe performance issues.
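One thing worth a back-of-the-envelope check against the configuration given earlier: the environment cache is set to 64 MB, while the average document is about 5 MB and the go-live target is 25 concurrent users. A sketch of that arithmetic, assuming roughly one in-flight document per logged-in user (an assumption, since the actual working set is not stated):

```python
# Rough cache sizing check for the environment described above.
cache_bytes = 1024 * 1024 * 64       # envConf.setCacheSize(1024*1024*64)
avg_doc_bytes = 5 * 1024 * 1024      # ~5 MB average document
users = 25                           # expected concurrent users at go-live

# Assumed working set: one hot document per active user.
working_set = avg_doc_bytes * users
print(working_set // (1024 * 1024))  # 125 (MB)
print(working_set > cache_bytes)     # True: working set exceeds the 64 MB cache
```

If this assumption is anywhere near right, the cache would be undersized for the 25-user target, and the 10-12 second load times could partly reflect cache churn rather than raw retrieval cost.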
17. Please describe the shape of one of your typical documents? Please
do this by sending us a skeleton XML document.
Due to the sensitive nature of the data, I can provide XML schema instead.
18. What is the rate of document insertion/update required or
expected? Are you doing partial node updates (via XmlModify) or
replacing the document?
The document is inserted during user login. Any change made to the application
data grid or other data components gets saved in Berkeley DB. We also have
an automatic save every two minutes. The final save from the application
gets saved in a relational database.
19. What is the query rate required/expected?
Users will not be entering data rapidly. There will be a lot of think time
before the users enter or modify data in the web application. This is a pilot
project, but when we go live with this application we expect 25 concurrent
users.
20. XQuery -- supply some sample queries
1. Please provide the Query Plan
2. Are you using DBXML_INDEX_NODES?
Yes.
3. Display the indices you have defined for the specific query.
XmlIndexSpecification spec = container.getIndexSpecification();
// ids
spec.addIndex("", "id", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
spec.addIndex("", "idref", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// index to cover AttributeValue/Description
spec.addIndex("", "Description", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_SUBSTRING, XmlValue.STRING);
// cover AttributeValue/@value
spec.addIndex("", "value", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// item attribute values
spec.addIndex("", "type", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// default index
spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// save the spec to the container
XmlUpdateContext uc = xmlManager.createUpdateContext();
container.setIndexSpecification(spec, uc);
4. If this is a large query, please consider sending a smaller
query (and query plan) that demonstrates the problem.
21. Are you running with Transactions? If so please provide any
transactions flags you specify with any API calls.
Yes. READ_UNCOMMITTED in some transactions and READ_COMMITTED in others.
22. If your application is transactional, are your log files stored on
the same disk as your containers/databases?
Yes.
23. Do you use AUTO_COMMIT?
No.
24. Please list any non-transactional operations performed?
No.
25. How many threads of control are running? How many threads in read
only mode? How many threads are updating?
We use Berkeley XML DB within the context of a Struts web application.
Each user logged into the web application runs a bdb transaction
within the context of a Struts action thread.
26. Please include a paragraph describing the performance measurements
you have made. Please specifically list any Berkeley DB operations
where the performance is currently insufficient.
We are clocking 10-12 seconds to load a document from bdb when
five users are on the system.
getContainer().getDocument(documentName);
27. What performance level do you hope to achieve?
We would like to get less than 5 seconds when 25 users are on the system.
28. Please send us the output of the following db_stat utility commands
after your application has been running under "normal" load for some
period of time:
% db_stat -h <database environment> -c
% db_stat -h <database environment> -l
% db_stat -h <database environment> -m
% db_stat -h <database environment> -r
% db_stat -h <database environment> -t
(These commands require the db_stat utility access a shared database
environment. If your application has a private environment, please
remove the DB_PRIVATE flag used when the environment is created, so
you can obtain these measurements. If removing the DB_PRIVATE flag
is not possible, let us know and we can discuss alternatives with
you.)
If your application has periods of "good" and "bad" performance,
please run the above list of commands several times, during both
good and bad periods, and additionally specify the -Z flags (so
the output of each command isn't cumulative).
When possible, please run basic system performance reporting tools
during the time you are measuring the application's performance.
For example, on UNIX systems, the vmstat and iostat utilities are
good choices.
Will give this information soon.
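For collecting the samples question 28 asks for, a small wrapper that runs each db_stat command with -Z (so counters reset between samples) at a fixed interval could look like the sketch below. The environment path is a placeholder, and this assumes db_stat is on the PATH:

```python
import subprocess
import time

ENV_HOME = "/path/to/dbenv"          # placeholder: your shared environment home
FLAGS = ["-c", "-l", "-m", "-r", "-t"]

def dbstat_cmds(env_home):
    """Build the db_stat command lines from question 28, each with -Z appended."""
    return [["db_stat", "-h", env_home, flag, "-Z"] for flag in FLAGS]

def sample(env_home, rounds=3, interval=60):
    """Run all five commands every `interval` seconds, printing their output."""
    for _ in range(rounds):
        for cmd in dbstat_cmds(env_home):
            result = subprocess.run(cmd, capture_output=True, text=True)
            print("$", " ".join(cmd))
            print(result.stdout)
        time.sleep(interval)

# sample(ENV_HOME)  # uncomment to run against a real, non-DB_PRIVATE environment
```

Running this during both a "good" and a "bad" period (alongside vmstat/iostat, as suggested above) gives per-interval rather than cumulative statistics.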
29. Are there any other significant applications running on this
system? Are you using Berkeley DB outside of Berkeley DB XML?
Please describe the application?
No to the first two questions.
The web application is an online review of test questions. The users
login and then review the items one by one. The relational database
holds the data in xml. During application load, the application
retrieves the xml and then saves it to bdb. While the user
is making changes to the data in the application, it writes those
changes to bdb. Finally when the user hits the SAVE button, the data
gets saved to the relational database. We also have an automatic save
every two minutes, which takes the bdb xml data and saves it to the
relational database.
Thanks,
Madhav
[email protected]

Could it be that you simply have not set up indexes to support your query? If so, you could do some basic testing using the dbxml shell:
milu@colinux:~/xpg > dbxml -h ~/dbenv
Joined existing environment
dbxml> setverbose 7 2
dbxml> open tv.dbxml
dbxml> listIndexes
dbxml> query { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
dbxml> queryplan { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }

Verbosity will make the engine display some (rather cryptic) information on index usage. I can't remember where the output is explained; my feeling is that "V(...)" means the index is being used (which is good), but that observation may not be accurate. Note that some details in the setVerbose command could differ, as I'm using 2.4.16 while you're using 2.4.13.
Also, take a look at the query plan. You can post it here and some people will be able to diagnose it.
Michael Ludwig