Query is slow...need tuning
Hi all
I am using an 11g database, and this query is part of a cursor declaration. Is there any scope for optimisation in this query? I appreciate your answers.
WITH
ESN_GENEALOGY AS
(SELECT DISTINCT G.SerialNo FROM GENEALOGY G
INNER JOIN PRODUCT P ON G.PRODUCTID=P.ID
WHERE G.LASTARCHIVEDBY=I_SerialNo AND G.ACTIVE=1 AND P.PRODUCTINVENTORYTYPE=102 UNION SELECT I_SerialNo AS SerialNo FROM DUAL)
SELECT ListParentProductID, ListCompProductNo, ListCompProductID, ListCompQuantity AS SBQty,SERIALNO,WOPRODUCTID,WORKORDERNO
FROM
(SELECT BM.PARENTPRODUCTID ListParentProductID,BM.PRODUCTNO ListCompProductNo,BM.PRODUCTID ListCompProductID,
BM.QUANTITY ListCompQuantity,BM.SERIALNO,SN.PRODUCTID WOPRODUCTID,SN.WORKORDERNO,I_SerialNo SN
FROM
(SELECT PARENTPRODUCTID,PRODUCTNO,PRODUCTID,SUM(QUANTITY) AS QUANTITY,SERIALNO
FROM
(SELECT PARENTPRODUCTID,PRODUCTNO,PRODUCTID,QUANTITY,SERIALNO
FROM
(SELECT
(CASE WHEN PC1.LASTARCHIVEDBY IS NULL THEN P1.ID ELSE P3.ID END) AS PARENTPRODUCTID,
P2.PRODUCTNO AS PRODUCTNO,
P2.ID AS PRODUCTID,
SUM(C1.QUANTITY) AS QUANTITY,
(CASE WHEN PC1.LASTARCHIVEDBY IS NULL THEN I_SerialNo ELSE CG.SERIALNO END) AS SERIALNO
FROM WIP_ORDER WO
INNER JOIN PRODUCT_COMPONENT PC ON (
PC.PRODUCTID = WO.PRODUCTID
AND wo.wiporderno = I_WipOrderNo)
INNER JOIN COMPONENT C on C.ID = PC.COMPONENTID
INNER JOIN PRODUCT P1 on (
P1.ID = C.PRODUCTID
AND C.EFFECTIVEDATE <= WO.SCHEDULEDSTARTDATE
AND (C.DISCONTINUEDATE > WO.SCHEDULEDSTARTDATE or C.DISCONTINUEDATE is null))
INNER JOIN PRODUCT P on WO.PRODUCTID = P.ID
INNER JOIN PRODUCT_COMPONENT PC1 on PC1.PRODUCTID = C.PRODUCTID
INNER JOIN COMPONENT C1 on ( C1.ID = PC1.COMPONENTID
AND C1.EFFECTIVEDATE <= WO.SCHEDULEDSTARTDATE
AND (C1.DISCONTINUEDATE > WO.SCHEDULEDSTARTDATE OR C1.DISCONTINUEDATE is null))
INNER JOIN PRODUCT P2 on P2.ID = C1.PRODUCTID
LEFT JOIN PRODUCT P3 ON PC1.LASTARCHIVEDBY=P3.PRODUCTNO
LEFT JOIN (SELECT DISTINCT G.SERIALNO, G.PRODUCTID FROM GENEALOGY G
INNER JOIN PRODUCT P ON G.PRODUCTID=P.ID
WHERE G.LASTARCHIVEDBY=I_SerialNo AND G.ACTIVE=1 AND P.PRODUCTINVENTORYTYPE=102) CG ON P3.ID=CG.PRODUCTID
WHERE (CASE WHEN PC1.LASTARCHIVEDBY IS NULL THEN I_SerialNo ELSE CG.SERIALNO END) IS NOT NULL
GROUP BY (CASE WHEN PC1.LASTARCHIVEDBY IS NULL THEN P1.ID ELSE P3.ID END) , P2.PRODUCTNO, P2.ID,
(CASE WHEN PC1.LASTARCHIVEDBY IS NULL THEN I_SerialNo ELSE CG.SERIALNO END)
) set1
UNION ALL
SELECT PARENTPRODUCTID,PRODUCTNO,PRODUCTID,QUANTITY*(-1) AS QUANTITY,SERIALNO
FROM
(SELECT A.ParentProductID,
A.ProductNO,
A.ProductID,
SUM(A.Quantity) AS Quantity,
A.SERIALNO
FROM (
SELECT G.PARENTPRODUCTID,
P.PRODUCTNO,
G.PRODUCTID,
G.QUANTITY,
G.ID,
G.LASTARCHIVEDBY AS SERIALNO
FROM GENEALOGY G
INNER JOIN PRODUCT P ON G.PRODUCTID=P.ID
WHERE G.LASTARCHIVEDBY IN(SELECT SERIALNO FROM ESN_GENEALOGY) AND G.ACTIVE=1
UNION
SELECT
ParentPartID PARENTPRODUCTID,
P.PRODUCTNO,
ORIGINALPARTID PRODUCTID,
QUANTITY QUANTITY,
P.ID,
BDH.SERIALNO AS SERIALNO
FROM COB_T_BOM_DEVIATION_HISTORY BDH
INNER JOIN PRODUCT p ON P.id = BDH.ORIGINALPARTID
WHERE BDH.SERIALNO IN(SELECT SERIALNO FROM ESN_GENEALOGY)
UNION
SELECT
ParentPartID PARENTPRODUCTID,
P.PRODUCTNO,
DEVIATIONPARTID PRODUCTID,
-1 * QUANTITY QUANTITY,
P.ID,
BDH.SERIALNO AS SERIALNO
FROM COB_T_BOM_DEVIATION_HISTORY BDH
INNER JOIN PRODUCT p ON P.id = BDH.DEVIATIONPARTID
WHERE BDH.SERIALNO IN(SELECT SERIALNO FROM ESN_GENEALOGY)
union
SELECT ESB.PARENTPRODUCTID PARENTPRODUCTID,
COMPONENTPARTNUMBER PRODUCTNO,
COMPONENTPRODUCTID PRODUCTID,
QUANTITY,
ESB.ID,
ESB.SERIALNO AS SERIALNO
FROM COB_T_ENGINE_SHORT_BUILD ESB
where ESB.SERIALNO IN(SELECT SERIALNO FROM ESN_GENEALOGY)
) A
WHERE QUANTITY <> 0 AND SERIALNO IS NOT NULL
GROUP BY PARENTPRODUCTID, PRODUCTNO, PRODUCTID,SERIALNO
) set2
) GROUP BY PARENTPRODUCTID,PRODUCTNO,PRODUCTID,SERIALNO
) BM
LEFT JOIN COB_T_SERIAL_NO SN ON BM.SERIALNO=SN.SERIALNO
) BOM
WHERE ListCompQuantity <> 0;
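One structural observation on the query above: the DISTINCT GENEALOGY/PRODUCT lookup joined as CG duplicates the logic already in the ESN_GENEALOGY factored subquery. A possible refactor (untested, and assuming that carrying PRODUCTID in the CTE does not change the DISTINCT semantics for the places that only need SerialNo) is to widen the CTE once and reuse it:

```sql
WITH ESN_GENEALOGY AS
 (SELECT DISTINCT G.SerialNo, G.ProductID
    FROM GENEALOGY G
   INNER JOIN PRODUCT P ON G.PRODUCTID = P.ID
   WHERE G.LASTARCHIVEDBY = I_SerialNo
     AND G.ACTIVE = 1
     AND P.PRODUCTINVENTORYTYPE = 102)
-- ... the inline view
--   LEFT JOIN (SELECT DISTINCT G.SERIALNO, G.PRODUCTID FROM GENEALOGY ...) CG
-- could then become simply
--   LEFT JOIN ESN_GENEALOGY CG ON P3.ID = CG.PRODUCTID
-- so GENEALOGY is scanned once instead of twice
```

Whether this actually helps depends on how the optimizer treats the CTE (it may or may not materialize it), so compare the execution plans before and after.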
How to make a tuning request
SQL and PL/SQL FAQ
Similar Messages
-
SELECT query very slow, need suggestion.
Dear all,
The statement below was coded in my report program.
It was taking around 14 seconds in the development system. The range in s_datum was just 1 or 2 days max.
But when the same was transferred to the test system, it takes almost 10 minutes, even though we gave just 1 day.
SELECT * FROM likp INTO TABLE i_likp
WHERE erdat IN s_datum AND wadat_ist > '00000000'
AND lfart <> 'EL'
AND vstel not in s_vstel.
Some of you might suggest making the SELECT with only s_datum, but I tried that in the dev system and it took almost 22 secs with only s_datum. I thought it could be even worse in the test system, so I did not pursue that idea.
Can someone please suggest why this is happening?
Hi,
The difference, as I suppose you know, happens because LIKP probably has many more records in production than in the development system.
You must think what is selective in your WHERE clause:
- erdat in s_datum is selective if you are only using one day
- wadat_ist > '00000000' is not selective (all deliveries with goods issue fulfill that condition)
- lfart NE 'EL' probably is not selective
- vstel not in s_vstel probably is not selective
So in the end only erdat is selective. There is no index in LIKP by ERDAT, so if you really want to make it faster you would need an index by ERDAT.
Still, if you are only making one SELECT (not inside a loop) I wouldn't expect that to take more than 10 minutes.
I would measure the program with SE30 to make sure it is really the SELECT that is taking so much time (post here the results), and if it really is the select post here the explain plan.
By the way, if you need to know all goods issues for the last day I would use change pointers instead.
Hope this helps,
Rui Dantas -
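Expressed as plain DDL, the index Rui suggests would look like the sketch below. In an SAP system you would not run this directly; you define it as a secondary index on LIKP in the ABAP Dictionary (SE11) and let the system create it (the index name and the leading MANDT column here are assumptions, following the usual SAP convention of leading with the client field):

```sql
-- conceptual sketch only: a secondary index supporting the ERDAT predicate
CREATE INDEX likp_erdat_idx ON likp (mandt, erdat);
```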
I have Oracle 9i and SUN OS 5.8
I have a Java application that has a query on the Customer table. This table has 2,000,000 records and I have to show them in pages (20 records per page).
The user queries, for example, the Customers whose Last Name begins with O. Then the application shows the first 20 records matching this condition, ordered by Name.
So I have to create 2 queries:
1)
SELECT id_customer,Name
FROM Customers
WHERE Name like 'O%'
ORDER BY id_customer
But when I tried this query in TOAD it took a long time (around 15 minutes).
I have an index on the NAME field!
Besides, if the user wants to go to the second page, the query is executed again. (The Java programmers told me that.)
What is your recommendation to optimize it? I need to obtain the information in a few seconds.
2)
SELECT count(*) FROM Customers WHERE NAME like 'O%'
I have to do this query because I need to know how many pages (of 20 records) I need to show.
For example, with 5000 records I have 250 pages.
But when I tried this query in TOAD it took a long time (around 30 seconds).
What is your recommendation to optimize it? I need to obtain the information in a few seconds.
Thanks in advance!
This appears to be a duplicate of a post in the "Query very slow!" forum.
Claudio, since the same folks tend to read both forums, it is generally preferred that you post questions in only one forum. That way, multiple people don't spend time writing generally equivalent replies.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
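For the Oracle 9i pagination question above, the usual pattern is a nested ROWNUM filter, so that only one page of rows is ever fetched. A sketch (:page_start and :page_end are placeholder binds, e.g. 20 and 40 for the second page of 20 rows); note it orders by name, so the index on NAME can support both the filter and the sort, whereas filtering on name while ordering by id_customer forces the whole result set to be sorted:

```sql
SELECT id_customer, name
  FROM (SELECT a.*, ROWNUM rn
          FROM (SELECT id_customer, name
                  FROM customers
                 WHERE name LIKE 'O%'
                 ORDER BY name) a
         WHERE ROWNUM <= :page_end)
 WHERE rn > :page_start;
```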
Update query is slow with merge replication
Hello friend,
I have a database with merge replication enabled.
The problem is that the update query is taking more time.
But when I disable the merge triggers, it updates quickly.
I really appreciate your quick response.
Thanks.
Hi Manjula,
According to your description, the update query is slow after configuring merge replication. Here are some proposals for troubleshooting this issue.
1. Perform regular index maintenance (update statistics, re-index) on the following replication system tables.
•MSmerge_contents
•MSmerge_genhistory
•MSmerge_tombstone
•MSmerge_current_partition_mappings
•MSmerge_past_partition_mappings
2. Make sure that the tables involved in the query have suitable indexes. Also re-index and update the statistics for these tables. Additionally, you can use Database Engine Tuning Advisor to tune databases for better query performance.
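As a minimal T-SQL sketch of step 1 (run during a maintenance window, and repeat for each of the MSmerge tables listed above):

```sql
-- rebuild all indexes on one merge system table and refresh its statistics
ALTER INDEX ALL ON dbo.MSmerge_contents REBUILD;
UPDATE STATISTICS dbo.MSmerge_contents WITH FULLSCAN;
```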
Here are some related articles for your reference.
http://blogs.msdn.com/b/chrissk/archive/2010/02/01/sql-server-merge-replication-best-practices.aspx
http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
Thanks,
Lydia Zhang -
Why is the query so slow?
Hi,
I've got a query running fast (3 sec.)
If I try to execute it in the test environment, it takes about 2 minutes (!)
I see that in both environments the explain plan is the same, and so are the indexes used. I've also tried rebuilding the indexes and the tables that looked quite fragmented in test, but the result is always the same. Could it be that our test environment is simply slower, with lower performance? What else could I check? (Oracle version is 8.1.7)
Thanks!
812809 wrote:
steps to follow:
1. whether the candidate columns have an index or not
Sometimes an index can cause a query to slow down rather than speed up, especially if a person has created too many indexes on a table and the optimiser can't figure out the best one to use.
2. go for explain plan and check that the query does not fall under the category of Full Table Scan
Full table scans are not always a bad thing. Sometimes they are faster than using the index. It depends. -
Query performance slow in one instance in RAC
Hi
We have a 3-node RAC. When we test one query, it is slower by 40% on one instance, and physical reads always happen on that instance.
Below are the details. All the parameters are the same. Users complain that the query is sometimes slow.
Thanks in Advance.
From Instance 1 - 9 Sec
=============================================================
Statistics
0 recursive calls
1 db block gets
67209 consistent gets
0 physical reads
0 redo size
23465 bytes sent via SQL*Net to client
10356 bytes received via SQL*Net from client
28 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
13 rows processed
From Instance 2 - 13 Sec
=============================================================
Statistics
0 recursive calls
1 db block gets
67215 consistent gets
67193 physical reads <<------------------------ Only in one instance
0 redo size
23465 bytes sent via SQL*Net to client
10356 bytes received via SQL*Net from client
28 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
13 rows processed
From Instance 3 - 9 Sec
=============================================================
Statistics
0 recursive calls
1 db block gets
67209 consistent gets
0 physical reads
0 redo size
23465 bytes sent via SQL*Net to client
10356 bytes received via SQL*Net from client
28 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
13 rows processed
You can also check global cache statistics. Run this before and after your query:
select name, value from v$mystat s, v$statname n where s.statistic#=n.statistic# and name like '%blocks received'; -
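Since the physical reads only show up on instance 2, it is also worth confirming (a guess, not a diagnosis) that the buffer cache is sized the same on every node; with automatic memory management the components can drift apart between instances:

```sql
-- compare cache-related settings across all RAC instances
SELECT inst_id, name, value
  FROM gv$parameter
 WHERE name IN ('db_cache_size', 'sga_target');
```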
Query Designer slows down after working some time with it
Hi all,
the new BEx Query Designer slows down after you work with it for some time. The longer it remains open, the slower it gets. Formula editing in particular slows down extremely.
Did any of you encounter the same problem? Do you have an idea how to fix this? To me it seems as if the Designer allocates more and more RAM and does not free it up.
My version: BI AddOn 7.X, Support Package 13, Revision 467
Kind regards,
Philipp
I have seen a similar problem on one of my devices, the 'Samsung A-920'. Every time the system would pop up the 'Will you allow Network Access' screen, the input from all keypresses from then on would be strangely delayed. It looked like the problem was connected with the switching between my app and the system dialog form. I tried for many long hours / days to fix this, but just ended up hacking my phone to remove the security questions. After removing the security questions my problem went away.
I don't know if it's an option in your application, but is it possible to do everything using just one Canvas, and not switch between displayables? You may want to do an experiment using a single displayable Canvas, and just change how it draws. I know this will make user input much more complicated, but you may be able to avoid the input delays.
In my case, I think the device wasn't properly releasing / un-registering the input handling from the previous dialogs, so all keypresses still went through the non-current network-security dialog before reaching my app. -
Hi Friends ,
i am using an 11.2.0.3.0 oracle db. We have a query which runs smoothly on Live, and the same query runs slowly on the staging environment. The data is pulled from Live to staging using Golden Gate, and not all columns are refreshed.
Can you please help me tune this query, or let me know what best can be done so that it runs like it does in the Live environment?
Regards,
DBApps
Hi,
This is a general type of question; please be specific. A golden rule of thumb is: don't use '*', use the column names instead. Analyze the table, take an execution plan, and check for index usage.
Please give the problem statement also so that we can help you.
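A minimal sketch of that advice (the owner, table, and column names are placeholders; substitute your own, then compare the staging plan with the Live one):

```sql
-- refresh optimizer statistics on the staging table ...
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'MY_TABLE', cascade => TRUE);

-- ... then re-check the execution plan
EXPLAIN PLAN FOR
SELECT col1, col2 FROM app_owner.my_table WHERE col1 = :b1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```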
SELECT SUM(A.NO_MONTH_CONSUMPTION),SUM(A.BASE_CONSUMPTION),SUM(A.CURRENT_DOC_AMT),SUM(A.CUR_TAX),SUM(B.CURRENT_DOC_AMT)
FROM VW_x A,(SELECT CURRENT_DOC_AMT,DOC_NO
FROM VW_y B
WHERE NVL(B.VOID_STATUS,0)=0 AND B.TR_TYPE_CODE='SW' AND B.BPREF_NO=:B4 AND B.SERVICE_CODE=:B3 AND B.BIZ_PART_CODE=:B2 AND B.CONS_CODE=:B1 ) B
WHERE A.BPREF_NO=:B4 AND A.SERVICE_CODE=:B3 AND A.BIZ_PART_CODE=:B2 AND A.CONS_CODE=:B1 AND A.BILL_MONTH >:B5 AND NVL(A.VOID_STATUS,0)=0 AND NVL(A.AVG_IND,0)= 2 AND A.DOC_NO=B.DOC_NO(+)
The above view "VW_x" has around 40 million records from two tables, and the avg_ind column has only the values 0 and 2. I created a function-based index on both tables, something like CREATE INDEX x1 ON <table> (NVL(avg_ind, 0)).
TRACE OUTPUT
STATISTICS
15 recursive calls
0 db block gets
18 consistent gets
4 physical reads
0 redo size
357 bytes sent via SQL*Net to client
252 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
but still the query is slow... please suggest the best practice to make it fast
thanks
Hi, sorry I was out of office for a while. Please check the execution plan for my query.
Below is the query I am calling in a procedure, passing the parameters.
When I execute the query separately it works fine, but when I call it in a procedure, and the procedure has a loop which checks around 400,000 records, that's where I get the problem.
select sum(a.no_month_consumption),sum(a.base_consumption),sum(a.current_doc_amt),sum(a.cur_tax),sum(b.current_doc_amt)
--into vnomonths,vcons,vconsamt,vtaxamt,vsewage
from bill_View a,(select current_doc_amt,doc_no from dbcr_View b where nvl(b.void_status,0)=0 and b.tr_type_code='SWGDBG' and b.bpref_no='Q12345' and b.service_code='E' and b.biz_part_code='MHEW') b
where a.bpref_no='Q12345' and a.service_code='E' and a.biz_part_code='MHEW'
and a.bill_month >'30-aPR-2011' and nvl(a.void_status,0)=0 and decode(a.avg_ind,null,0,a.avg_ind)= 2
and a.doc_no=b.doc_no(+);
I created a function-based index on the avg_ind column (NVL(avg_ind, 0)).
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=77 Card=1 Bytes=93)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (OUTER) (Cost=77 Card=4 Bytes=372)
3 2 VIEW OF 'VW_IBS_BILL' (VIEW) (Cost=54 Card=3 Bytes=198)
4 3 UNION-ALL
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_S_T_BILL' (TABLE) (Cost=8 Card=1 Bytes=50)
6 5 INDEX (RANGE SCAN) OF 'STBILL_BPREF_NO' (INDEX) (Cost=3 Card=5)
7 4 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_X_T_BILL' (TABLE) (Cost=46 Card=2 Bytes=114)
8 7 INDEX (RANGE SCAN) OF 'XTBILL' (INDEX) (Cost=3 Card=43)
9 2 VIEW OF 'VW_IBS_DBCR' (VIEW) (Cost=22 Card=4 Bytes=108)
10 9 UNION-ALL
11 10 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_T_DBCR' (TABLE) (Cost=2 Card=1 Bytes=54)
12 11 INDEX (RANGE SCAN) OF 'TDBCR_BPREFNO' (INDEX) (Cost=1 Card=1)
13 10 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_S_T_DBCR' (TABLE) (Cost=7 Card=1 Bytes=43)
14 13 INDEX (RANGE SCAN) OF 'STDBCR_BPREFNO' (INDEX) (Cost=3 Card=4)
15 10 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_X_T_DBCR' (TABLE) (Cost=13 Card=2 Bytes=88)
16 15 INDEX (RANGE SCAN) OF 'XTDBCR' (INDEX) (Cost=3 Card=11)
What are the Card and Cost attributes in the above output?
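(On the closing question: Card is the optimizer's estimated row count, i.e. cardinality, for each step, and Cost is its relative cost estimate; both are estimates, not measured values.) A cleaner way to display a plan, with proper headings for these columns, is DBMS_XPLAN:

```sql
-- explain a statement, then pretty-print the plan table
EXPLAIN PLAN FOR
SELECT COUNT(*) FROM dual;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```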
Hi Experts,
Please clarify my doubts.
1. How can we tell that a particular query's performance is slow?
2. How can we define a cell in BEx?
3. An InfoCube is an InfoProvider, but an InfoObject is not an InfoProvider. Why?
Thanks in advance
Hi,
1. How can we tell that a particular query's performance is slow?
When a query takes a long time to run, you can collect statistics to see where the time is going.
For example, select your cube and set the BI Statistics checkbox; after that it will record the statistics data regarding your query:
DB time (database time), front-end time (query), aggregation time, etc. Based on that you go for performance measures: aggregates, compression, indexes, etc.
2. How can we define a cell in BEx?
Cell definition is enabled in your BEx query when you are using two structures. You go for this if you want to create different formulas row by row.
3. An InfoCube is an InfoProvider, but an InfoObject is not an InfoProvider. Why?
An InfoObject can also be an InfoProvider:
an InfoObject can be converted into an InfoProvider using "Convert as data target".
Thanks and Regards,
Venkat.
Edited by: venkatewara reddy on Jul 27, 2011 12:05 PM -
Query runs slower when using variables & faster when using hard coded value
Hi,
My query runs slower when I use variables, but it runs faster when I use hard-coded values. Why is it behaving like this?
My query is in a cursor definition in a procedure. The procedure runs faster when using hard-coded values and slower when using variables.
Can anybody help me out here?
Thanks in advance.
Hi,
Thanks for your reply.
Here is my code with variables:
Procedure populateCountryTrafficDetails(pWeekStartDate IN Date , pCountry IN d_geography.country_code%TYPE) is
startdate date;
AR_OrgId number(10);
Cursor cTraffic is
Select
l.actual_date, nvl(o.city||o.zipcode,'Undefined') Site,
g.country_code,d.customer_name, d.customer_number,t.contrno bcn,
nvl(r.dest_level3,'Undefined'),
Decode(p.Product_code,'820','821','821','821','801') Product_Code ,
Decode(p.Product_code,'820','Colt Voice Connect','821','Colt Voice Connect','Colt Voice Line') DProduct,
sum(f.duration),
sum(f.debamount_eur)
from d_calendar_date l,
d_geography g,
d_customer d, d_contract t, d_subscriber s,
d_retail_dest r, d_product p,
CPS_ORDER_DETAILS o,
f_retail_revenue f
where
l.date_key = f.call_date_key and
g.geography_key = f.geography_key and
r.dest_key = f.dest_key and
p.product_key = f.product_key and
--c.customer_key = f.customer_key and
d.customer_key = f.customer_key and
t.contract_key = f.contract_key and
s.SUBSCRIBER_KEY = f.SUBSCRIBER_KEY and
o.org_id(+) = AR_OrgId and
g.country_code = pCountry and
l.actual_date >= startdate and
l.actual_date <= (startdate + 90) and
o.cli(+) = s.area_subno and
p.product_code in ('800','801','802','804','820','821')
group by
l.actual_date,
o.city||o.zipcode, g.country_code,d.customer_name, d.customer_number,t.contrno,r.dest_level3, p.product_code;
Type CountryTabType is Table of country_traffic_details.Country%Type index by BINARY_INTEGER;
Type CallDateTabType is Table of country_traffic_details.CALL_DATE%Type index by BINARY_INTEGER;
Type CustomerNameTabType is Table of Country_traffic_details.Customer_name%Type index by BINARY_INTEGER;
Type CustomerNumberTabType is Table of Country_traffic_details.Customer_number%Type index by BINARY_INTEGER;
Type BcnTabType is Table of Country_traffic_details.Bcn%Type index by BINARY_INTEGER;
Type DestinationTypeTabType is Table of Country_traffic_details.DESTINATION_TYPE%Type index by BINARY_INTEGER;
Type ProductCodeTabType is Table of Country_traffic_details.Product_Code%Type index by BINARY_INTEGER;
Type ProductTabType is Table of Country_traffic_details.Product%Type index by BINARY_INTEGER;
Type DurationTabType is Table of Country_traffic_details.Duration%Type index by BINARY_INTEGER;
Type DebamounteurTabType is Table of Country_traffic_details.DEBAMOUNTEUR%Type index by BINARY_INTEGER;
Type SiteTabType is Table of Country_traffic_details.Site%Type index by BINARY_INTEGER;
CountryArr CountryTabType;
CallDateArr CallDateTabType;
Customer_NameArr CustomerNameTabType;
CustomerNumberArr CustomerNumberTabType;
BCNArr BCNTabType;
DESTINATION_TYPEArr DESTINATIONTYPETabType;
PRODUCT_CODEArr PRODUCTCODETabType;
PRODUCTArr PRODUCTTabType;
DurationArr DurationTabType;
DebamounteurArr DebamounteurTabType;
SiteArr SiteTabType;
Begin
startdate := (trunc(pWeekStartDate) + 6) - 90;
Exe_Pos := 1;
Execute Immediate 'Truncate table country_traffic_details';
dropIndexes('country_traffic_details');
Exe_Pos := 2;
/* Set org ID's as per AR */
case (pCountry)
when 'FR' then AR_OrgId := 81;
when 'AT' then AR_OrgId := 125;
when 'CH' then AR_OrgId := 126;
when 'DE' then AR_OrgId := 127;
when 'ES' then AR_OrgId := 123;
when 'IT' then AR_OrgId := 122;
when 'PT' then AR_OrgId := 124;
when 'BE' then AR_OrgId := 132;
when 'IE' then AR_OrgId := 128;
when 'DK' then AR_OrgId := 133;
when 'NL' then AR_OrgId := 129;
when 'SE' then AR_OrgId := 130;
when 'UK' then AR_OrgId := 131;
else raise_application_error (-20003, 'No such Country Code Exists.');
end case;
Exe_Pos := 3;
dbms_output.put_line('3: '||to_char(sysdate, 'HH24:MI:SS'));
populateOrderDetails(AR_OrgId);
dbms_output.put_line('4: '||to_char(sysdate, 'HH24:MI:SS'));
Exe_Pos := 4;
Open cTraffic;
Loop
Exe_Pos := 5;
CallDateArr.delete;
FETCH cTraffic BULK COLLECT
INTO CallDateArr, SiteArr, CountryArr, Customer_NameArr,CustomerNumberArr,
BCNArr,DESTINATION_TYPEArr,PRODUCT_CODEArr, PRODUCTArr, DurationArr, DebamounteurArr LIMIT arraySize;
EXIT WHEN CallDateArr.first IS NULL;
Exe_pos := 6;
FORALL i IN 1..callDateArr.last
insert into country_traffic_details
values(CallDateArr(i), CountryArr(i), Customer_NameArr(i),CustomerNumberArr(i),
BCNArr(i),DESTINATION_TYPEArr(i),PRODUCT_CODEArr(i), PRODUCTArr(i), DurationArr(i),
DebamounteurArr(i), SiteArr(i));
Exe_pos := 7;
dbms_output.put_line('7: '||to_char(sysdate, 'HH24:MI:SS'));
EXIT WHEN ctraffic%NOTFOUND;
END LOOP;
commit;
Exe_Pos := 8;
commit;
dbms_output.put_line('8: '||to_char(sysdate, 'HH24:MI:SS'));
lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_CUSTNO ON country_traffic_details (CUSTOMER_NUMBER)';
execDDl(lSql);
lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_BCN ON country_traffic_details (BCN)';
execDDl(lSql);
lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_PRODCD ON country_traffic_details (PRODUCT_CODE)';
execDDl(lSql);
lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_SITE ON country_traffic_details (SITE)';
execDDl(lSql);
lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_DESTYP ON country_traffic_details (DESTINATION_TYPE)';
execDDl(lSql);
Exe_Pos:= 9;
dbms_output.put_line('9: '||to_char(sysdate, 'HH24:MI:SS'));
Exception
When Others then
raise_application_error(-20003, 'Error in populateCountryTrafficDetails at Position: '||Exe_Pos||' The Error is '||SQLERRM);
End populateCountryTrafficDetails;
In the above procedure, if I substitute the variables with hard-coded values, i.e. AR_OrgId = 123 and pCountry = 'Austria', then it runs faster.
Please let me know why that is so.
Thanks in advance. -
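A common cause of this bind-versus-literal gap is bind peeking: with literals the optimizer sees the actual values (and can use any histograms), while with binds it optimizes for the values peeked on the first execution, which may suit other values poorly. One way to see the plan actually used for the bound version (assuming 10g or later, which this thread's version is not stated to be, for DBMS_XPLAN.DISPLAY_CURSOR):

```sql
-- run right after the slow statement in the same session:
-- shows the plan of the last statement executed by this session
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));
```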
Query is slow to return results...
The following query is slow. I have an index on tzk, and the event_dtg column is a DATE column that allows null values.
abc_zone has > 60 million records. Statistics on the table and index are current.
Any idea how to improve the query performance?
select count (*) tz6
from abc_zone
where tzk =6
and event_dtg > to_date('09/05/2009 01:00:00' , 'MM/DD/YYYY HH24:MI:SS')
and event_dtg < to_date('04/04/2010 00:00:00' , 'MM/DD/YYYY HH24:MI:SS')
Oracle 10.2.0.3 on AIX
Thanks in advance.
Sorry, I do have an index on event_dtg...
Here is the EP:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 19 | 148 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 19 | | |
| 2 | TABLE ACCESS BY INDEX ROWID| ABC_ZONE | 16 | 304 | 148 (0)| 00:00:01 |
| 3 | INDEX RANGE SCAN | ABC_ZONE_EVENT_DTG | 3439 | | 1 (0)| 00:00:01 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / ABC_ZONE@SEL$1
3 - SEL$1 / ABC_ZONE@SEL$1
17 rows selected.
I suspect there is some kind of conversion (date to timestamp) that is costly.
Thanks. -
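One thing worth testing (an assumption, not a diagnosis): the plan range-scans the event_dtg index and then visits the table to filter tzk = 6; with 60 million rows and a seven-month date window, that can mean a lot of table visits. A composite index on both predicates would let the count be answered from the index alone:

```sql
-- hypothetical composite index covering both predicates of the COUNT(*)
CREATE INDEX abc_zone_tzk_dtg ON abc_zone (tzk, event_dtg);
```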
Query of query - running slower on 64 bit CF than 32 bit CF
Greetings...
I am seeing behavior where pages that use query-of-query run slower on 64-bit Coldfusion 9.01 than on 32-bit Coldfusion 9.01.
My server specs are: dual-processor virtual machine, 4 GB RAM, Windows 2008 Datacenter Server R2 64-bit, ColdFusion 9.01. Note that the ColdFusion is literally "straight out of the box" and is using all default settings; the only thing I configured in CF is a single datasource.
The script I am using to benchmark this runs a query that returns 20,000 rows with fields id, firstname, lastname, email, city, datecreated. I then loop through all 20,000 records, and for each record, I do a query-of-query (on the same master query) to find any other record where the lastname matches that of the record I'm currently on. Note that I'm only interested in using this process for comparative benchmarking purposes, and I know that the process could be written more efficiently.
Here are my observed execution times for both 64-bit and 32-bit Coldfusion (in seconds) on the same machine.
64 bit CF 9.01: 63,49,52,52,52,48,50,49,54 (avg=52 seconds)
32 bit CF 9.01: 47,45,43,43,45,41,44,42,46 (avg=44 seconds)
It appears from this that 64-bit CF performs worse than 32-bit CF when doing query-of-query operations. Has anyone made similar observations, and is there any way I can tune the environment to improve 64 bit performance?
Thanks for any help you can provide!
By the way, here's the code that is generating these results:
<!--- Allrecs query returns 20000 rows --->
<CFQUERY NAME="ALLRECS" DATASOURCE="MyDsn">
SELECT * FROM MyTBL
</CFQUERY>
<CFLOOP QUERY="ALLRECS">
<CFQUERY NAME="SAMELASTNAME" DBTYPE="QUERY">
SELECT * FROM ALLRECS
WHERE LN=<CFQUERYPARAM VALUE="#ALLRECS.LN#" CFSQLTYPE="CF_SQL_VARCHAR">
AND ID<><CFQUERYPARAM VALUE="#AllRecs.ID#" CFSQLTYPE="CF_SQL_INTEGER">
</CFQUERY>
<CFIF SameLastName.RecordCount GT 20>
#AllRecs.LN#, #AllRecs.FN# : #SameLastName.RecordCount# other records with same lastname<BR>
</CFIF>
</CFLOOP>
BoBear2681 wrote:
..follow-up: ..Thanks for the follow-up. I'll be interested to hear the progress (or otherwise, as the case may be).
As an aside. I got sick of trying to deal with Clip because it could only handle very small Clip sizes. AFAIR it was 1 second of 44.1 KHz stereo. From that point, I developed BigClip.
Unfortunately BigClip as it stands is even less able to fulfil your functional requirement than Clip, in that only one BigClip can be playing at a time. Further, it can be blocked by other sound applications (e.g. VLC Media Player, Flash in a web page..) or vice-versa. -
Complex Query which needs tuning
Hello :
I have a complex query that needs to be tuned. I have little experience in tuning SQL, hence I am asking for your help.
The Query is as given below:
Database version 11g
SELECT DISTINCT P.RESPONSIBILITY, P.PRODUCT_MAJOR, P.PRODUCT_MINOR,
P.PRODUCT_SERIES, P.PRODUCT_CATEGORY AS Category1, SO.REGION_CODE,
SO.STORE_CODE, S.Store_Name, SOL.PRODUCT_CODE, PRI.REPLENISHMENT_TYPE,
PRI.SUPPLIER_CODE,
SOL.SOLD_WITH_NIC, SOL.SUGGESTED_PRICE,
PRI.INVOICE_COST, SOL.FIFO_COST,
SO.ORDER_TYPE_CODE, SOL.DOCUMENT_NUM,
SOS.SLSP_CD, '' AS FNAME, '' AS LNAME,
SOL.PRICE_EXCEPTION_CODE, SOL.AS_IS,
SOL.STATUS_DATE,
Sum(SOL.QUANTITY) AS SumOfQUANTITY,
Sum(SOL.EXTENDED_PRICE) AS SumOfEXTENDED_PRICE
--Format([SALES_ORDER].[STATUS_DATE],"mmm-yy") AS [Month]
FROM PRODUCT P,
PRODUCT_MAJORS PM,
SALES_ORDER_LINE SOL,
STORE S,
SALES_ORDER SO,
SALES_ORDER_SPLITS SOS,
PRODUCT_REGIONAL_INFO PRI,
REGION_MAP R
WHERE P.product_major = PM.PRODUCT_MAJOR
and SOL.PRODUCT_CODE = P.PRODUCT_CODE
and SO.STORE_CODE = S.STORE_CODE
AND SO.REGION_CODE = S.REGION_CODE
AND SOL.REGION_CODE = SO.REGION_CODE
AND SOL.DOCUMENT_NUM = SO.DOCUMENT_NUM
AND SOL.DELIVERY_SEQUENCE_NUM = SO.DELIVERY_SEQUENCE_NUM
AND SOL.STATUS_CODE = SO.STATUS_CODE
AND SOL.STATUS_DATE = SO.STATUS_DATE
AND SO.REGION_CODE = SOS.REGION_CODE
AND SO.DOCUMENT_NUM = SOS.DOCUMENT_NUM
AND SOL.PRODUCT_CODE = PRI.PRODUCT_CODE
AND PRI.REGION_CODE = R.CORP_REGION_CODE
AND SO.REGION_CODE = R.DS_REGION_CODE
AND P.PRODUCT_MAJOR In ('STEREO','TELEVISION','VIDEO')
AND SOL.STATUS_CODE = 'D'
AND SOL.STATUS_DATE BETWEEN '01-JUN-09' AND '30-JUN-09'
AND SO.STORE_CODE NOT IN
('10','20','30','40','70','91','95','93','94','96','97','98','99',
'9V','9W','9X','9Y','9Z','8Z',
'8Y','92','CZ','FR','FS','FT','FZ','FY','FX','FW','FV','GZ','GY','GU','GW','GV','GX')
GROUP BY
P.RESPONSIBILITY, P.PRODUCT_MAJOR, P.PRODUCT_MINOR, P.PRODUCT_SERIES, P.PRODUCT_CATEGORY,
SO.REGION_CODE, SO.STORE_CODE, /*S.Short Name, */
S.Store_Name, SOL.PRODUCT_CODE,
PRI.REPLENISHMENT_TYPE, PRI.SUPPLIER_CODE,
SOL.SOLD_WITH_NIC, SOL.SUGGESTED_PRICE, PRI.INVOICE_COST,
SOL.FIFO_COST, SO.ORDER_TYPE_CODE, SOL.DOCUMENT_NUM,
SOS.SLSP_CD, '', '', SOL.PRICE_EXCEPTION_CODE,
SOL.AS_IS, SOL.STATUS_DATE
Explain Plan:
SELECT STATEMENT, GOAL = ALL_ROWS Cost=583 Cardinality=1 Bytes=253
HASH GROUP BY Cost=583 Cardinality=1 Bytes=253
FILTER
NESTED LOOPS Cost=583 Cardinality=1 Bytes=253
HASH JOIN OUTER Cost=582 Cardinality=1 Bytes=234
NESTED LOOPS
NESTED LOOPS Cost=571 Cardinality=1 Bytes=229
NESTED LOOPS Cost=571 Cardinality=1 Bytes=207
NESTED LOOPS Cost=569 Cardinality=2 Bytes=368
NESTED LOOPS Cost=568 Cardinality=2 Bytes=360
NESTED LOOPS Cost=556 Cardinality=3 Bytes=435
NESTED LOOPS Cost=178 Cardinality=4 Bytes=336
NESTED LOOPS Cost=7 Cardinality=1 Bytes=49
HASH JOIN Cost=7 Cardinality=1 Bytes=39
VIEW Object owner=CORP Object name=index$_join$_015 Cost=2 Cardinality=3 Bytes=57
HASH JOIN
INLIST ITERATOR
INDEX UNIQUE SCAN Object owner=CORP Object name=PRODMJR_PK Cost=0 Cardinality=3 Bytes=57
INDEX FAST FULL SCAN Object owner=CORP Object name=PRDMJR_PR_FK_I Cost=1 Cardinality=3 Bytes=57
VIEW Object owner=CORP Object name=index$_join$_016 Cost=4 Cardinality=37 Bytes=740
HASH JOIN
INLIST ITERATOR
INDEX RANGE SCAN Object owner=CORP Object name=PRDMNR1 Cost=3 Cardinality=37 Bytes=740
INDEX FAST FULL SCAN Object owner=CORP Object name=PRDMNR_PK Cost=4 Cardinality=37 Bytes=740
INDEX UNIQUE SCAN Object owner=CORP Object name=PRODMJR_PK Cost=0 Cardinality=1 Bytes=10
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=PRODUCTS Cost=171 Cardinality=480 Bytes=16800
INDEX RANGE SCAN Object owner=CORP Object name=PRD2 Cost=3 Cardinality=681
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=SALES_ORDER_LINE Cost=556 Cardinality=1 Bytes=145
BITMAP CONVERSION TO ROWIDS
BITMAP INDEX SINGLE VALUE Object owner=DS Object name=SOL2
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=SALES_ORDER Cost=4 Cardinality=1 Bytes=35
INDEX RANGE SCAN Object owner=DS Object name=SO1 Cost=3 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=REGION_MAP Cost=1 Cardinality=1 Bytes=4
INDEX RANGE SCAN Object owner=DS Object name=REGCD Cost=0 Cardinality=1
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=PRODUCT_REGIONAL_INFO Cost=2 Cardinality=1 Bytes=23
INDEX UNIQUE SCAN Object owner=CORP Object name=PRDRI_PK Cost=1 Cardinality=1
INDEX UNIQUE SCAN Object owner=CORP Object name=BI_STORE_INFO_PK Cost=0 Cardinality=1
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=BI_STORE_INFO Cost=1 Cardinality=1 Bytes=22
VIEW Object owner=DS cost=11 Cardinality=342 Bytes=1710
HASH JOIN Cost=11 Cardinality=342 Bytes=7866
MAT_VIEW ACCESS FULL Object owner=CORP Object name=STORE_CORP Cost=5 Cardinality=429 Bytes=3003
NESTED LOOPS Cost=5 Cardinality=478 Bytes=7648
MAT_VIEW ACCESS FULL Object owner=CORP Object name=STORE_GROUP Cost=5 Cardinality=478 Bytes=5258
INDEX UNIQUE SCAN Object owner=CORP Object name=STORE_REGIONAL_INFO_PK Cost=0 Cardinality=1 Bytes=5
INDEX RANGE SCAN Object owner=DS Object name=SOS_PK Cost=2 Cardinality=1 Bytes=19
Regards,
BMP
The first thing I notice in this query is that you are using DISTINCT as well as GROUP BY.
Your GROUP BY will always return distinct rows, so why do you need the DISTINCT?
For example
WITH t AS
(SELECT 'clm1' col1, 'contract1' col2,10 value
FROM DUAL
UNION ALL
SELECT 'clm1' , 'contract1' ,10 value
FROM DUAL
UNION ALL
SELECT 'clm1', 'contract2',10
FROM DUAL
UNION ALL
SELECT 'clm2', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm3', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm4', 'contract2',10
FROM DUAL)
SELECT DISTINCT col1, col2, SUM(value) FROM t
GROUP BY col1, col2
is always the same as
WITH t AS
(SELECT 'clm1' col1, 'contract1' col2,10 value
FROM DUAL
UNION ALL
SELECT 'clm1' , 'contract1' ,10 value
FROM DUAL
UNION ALL
SELECT 'clm1', 'contract2',10
FROM DUAL
UNION ALL
SELECT 'clm2', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm3', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm4', 'contract2',10
FROM DUAL)
SELECT col1, col2, SUM(value) FROM t
GROUP BY col1, col2
And also:
AND SOL.STATUS_DATE BETWEEN '01-JUN-09' AND '30-JUN-09'
It would be best to use TO_DATE when hard-coding your dates.
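To make that advice concrete, here is a minimal sketch of the same predicate with explicit date conversion (this assumes SOL.STATUS_DATE is a DATE column, which the original post does not confirm):

```sql
-- Hedged sketch: assumes SOL.STATUS_DATE is a DATE column.
-- Explicit TO_DATE with a four-digit year avoids any dependence on the
-- session's NLS_DATE_FORMAT setting.
AND SOL.STATUS_DATE BETWEEN TO_DATE('01-JUN-2009', 'DD-MON-YYYY')
                        AND TO_DATE('30-JUN-2009', 'DD-MON-YYYY')
```

With string literals, the comparison only works when the session's implicit conversion format happens to match; the explicit form behaves the same in every session.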
Edited by: user5495111 on Aug 6, 2009 1:32 PM
Query for reporting need to be tuned
Hi,
I am working on Oracle 10.2.0.4 on the Solaris platform with 32 GB of physical memory. This database is used for both daily transactions and reporting. One of the reporting queries is taking a long time...
SQL> set autotrace traceonly
SQL> select substr(tr_ldt,4,6),tr_di,sum(tr_val) from tr_all
2 where tr_ay_bt>='15-APR-11'
3 group by substr(tr_ldt,4,6),tr_di
4 order by substr(tr_ldt,4,6),tr_di;
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1488 | 37200 | 198K|
| 1 | SORT GROUP BY | | 1488 | 37200 | 198K|
| 2 | TABLE ACCESS FULL| TR_ALL | 6575K| 156M| 197K|
Statistics
1030 recursive calls
0 db block gets
721737 consistent gets
624840 physical reads
0 redo size
1682 bytes sent via SQL*Net to client
514 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
23 sorts (memory)
0 sorts (disk)
33 rows processed
I have also tried automatic SQL tuning...
But the result of the tuning task is null. No suggestions.
Can you tell me how I can reduce the execution time?
1. The wait event involved is 'db file scatter read'.
2. I have an index on the tr_ldt column and a bitmap index on the tr_di column.
Please suggest how to tune this query.
See the template postings:
[url https://forums.oracle.com/forums/thread.jspa?threadID=863295]How to post a sql tuning request
[url https://forums.oracle.com/forums/thread.jspa?messageID=1812597]When your query takes too long
"The wait event involved is 'db file scatter read'."
Subject to Oracle version, "db file scattered read" is the expected wait event for a FULL TABLE SCAN where the blocks are not in the buffer cache and physical IO is required.
"where tr_ay_bt>='15-APR-11'"
As an aside, never rely on implicit conversions; use TO_DATE('15-APR-2011','DD-MON-YYYY').
It wouldn't make a difference to your full table scan, unless the relevant column was indexed but an implicit datatype conversion was preventing its usage.
"I have index on tr_ldt column and bitmap index on tr_di column."
But the query restricts by TR_AY_BT.
If that column is unindexed, a full table scan is pretty much inevitable.
In the continued absence of an index, perhaps doing the work in parallel is an option?
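A hedged sketch of both options (the table and column names come from the posted query; the index name and the parallel degree of 4 are assumptions, not recommendations from the thread):

```sql
-- Option 1: index the filtered column so the range predicate can use an
-- index path instead of a full table scan (index name is hypothetical).
CREATE INDEX tr_all_ay_bt_idx ON tr_all (tr_ay_bt);

-- The query rewritten with an explicit TO_DATE, as advised above:
SELECT SUBSTR(tr_ldt, 4, 6), tr_di, SUM(tr_val)
FROM   tr_all
WHERE  tr_ay_bt >= TO_DATE('15-APR-2011', 'DD-MON-YYYY')
GROUP  BY SUBSTR(tr_ldt, 4, 6), tr_di
ORDER  BY SUBSTR(tr_ldt, 4, 6), tr_di;

-- Option 2: if no index can be added, let the full scan run in parallel
-- (degree 4 chosen arbitrarily for illustration).
SELECT /*+ PARALLEL(t 4) */
       SUBSTR(tr_ldt, 4, 6), tr_di, SUM(tr_val)
FROM   tr_all t
WHERE  tr_ay_bt >= TO_DATE('15-APR-2011', 'DD-MON-YYYY')
GROUP  BY SUBSTR(tr_ldt, 4, 6), tr_di
ORDER  BY SUBSTR(tr_ldt, 4, 6), tr_di;
```

Note that an index only helps here if the predicate is selective enough; with 6.5M rows estimated by the plan, the optimizer may still prefer a full scan, which is where the parallel option comes in.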