Performance tuning of query
SELECT *
FROM contracts a
WHERE a.start_date > TO_DATE ('12/01/2009', 'mm/dd/yyyy')
AND a.contracts_type IN ('C1','C2','C3','C4','C5','C6','C7','C8','C9','C10')
The explain plan gives the below details:
Cost: 1,351,083 Bytes: 49,313,352 Cardinality: 2,241,516
How can I tune the query?
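(Editorial note: a cost figure alone says little; the plan itself is needed. A sketch of how to capture it, with the table and column names taken from the post; the index at the end is hypothetical and only worthwhile if the two predicates together are selective:)

```sql
-- Display the optimizer's plan for the statement
EXPLAIN PLAN FOR
SELECT *
FROM contracts a
WHERE a.start_date > TO_DATE('12/01/2009', 'mm/dd/yyyy')
  AND a.contracts_type IN ('C1','C2','C3','C4','C5','C6','C7','C8','C9','C10');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Hypothetical composite index; with ~2.2M rows expected back,
-- a full table scan may in fact be the right plan anyway
CREATE INDEX contracts_type_date_ix ON contracts (contracts_type, start_date);
```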
Please read these.
How to post a tuning request:
HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long:
When your query takes too long ...
Similar Messages
-
Dear All,
My client wants to do performance tuning on a query that has a global structure of 300 CKFs and 300 RKFs.
They don't want to touch the MultiProvider; all they want is some performance tuning through the front end without changing it.
The CKFs are a little tricky: each CKF has 28 RKFs, and each RKF uses at least 2 variables with hierarchy restrictions. The query takes 30-40 minutes to execute. Kindly guide me on how to handle this.
Regards,
Suman Thangadurai.

Hi,
Improving query performance:
- Generate indexes.
- Build the query on the MultiProvider, and use the Constant Selection function to bring InfoSet functionality to the MultiProvider.
- Make your query more dynamic using variables.
- Partition the InfoCube when you have a restriction on 0CALMONTH.
- Use more free characteristics.
- Prefer include over exclude at the BEx level.
- Use the Cache Mode and Read Mode functions.
Regards,
rvc -
Performance tuning or Query tuning
Hi,
I am a PL/SQL programmer and I want to learn query tuning / performance tuning to extend my knowledge.
Can anyone please suggest where to start? I read something about the RBO and CBO and got confused: does Oracle support both? Which one is better? How do I use them? Does it differ from version to version, like 9i/10g?
I appreciate any kind of help.
Thanks and Regards
Mai

Hi,
If you are a PL/SQL programmer then I shall assume that you don't do much database-side work and, at the moment, are building up your database knowledge. In addition to the Oracle docs, I suggest these books, in order, to understand query tuning:
Effective-Oracle-By-Design
Cost-Based-Oracle-Fundamentals
Practical-Oracle8i
I have found these books to be the best for understanding the many very important factors in query tuning, and I use them every day.
HTH
Aman.... -
Hi,
Can anyone assist me in tuning the below query
Thanks in Advance
A.Gopal
SELECT nvl(sum(p.GRP_ACTUALQUANTITY),0)
FROM GOODSRECEIPTS r, GOODSRECEIPTPOSITIONS p
,reportparameters plt,reportparameters spl,reportparameters mat
WHERE r.GRH_NO = p.GRP_GRH_NO
AND TRUNC(r.GRH_ARRIVAL) BETWEEN NVL(TO_DATE('01-jan-09'), TRUNC(r.GRH_ARRIVAL)) AND NVL(TO_DATE('31-jan-09'), TRUNC(r.GRH_ARRIVAL))
AND (p.GRP_PLT_ID = 'VDN' OR 1 =0)
AND (r.GRH_SPL_ID = '15035122' OR 1 =0)
AND (p.GRP_MAT_ID = '1001360' OR 1 =0)
AND ('01.01.2009' = 'TOTAL' or trunc(r.GRH_ARRIVAL,'mm') = to_date('01.01.2009','dd.mm.yyyy'))
and plt.rep_field = 'PLT_ID' AND plt.rep_id = 'AN_EXC_RAT' AND plt.rep_user = 'ANANTGOP'
and spl.rep_field = 'SPL_ID' AND spl.rep_id = 'AN_EXC_RAT' AND spl.rep_user ='ANANTGOP'
and mat.rep_field = 'MAT_ID' AND mat.rep_id = 'AN_EXC_RAT' AND mat.rep_user = 'ANANTGOP'
and r.GRH_SPL_ID = decode(spl.rep_value,'All',r.GRH_SPL_ID,spl.rep_value)
and p.GRp_plt_ID = decode(plt.rep_value,'All',p.GRp_plt_ID,plt.rep_value)
and p.GRp_mat_ID = decode(mat.rep_value,'All',p.GRp_mat_ID,mat.rep_value)
AND NOT EXISTS (SELECT 1 FROM GOODSRECEIPTUSAGESTATUS u
WHERE p.GRP_GRH_NO = u.GRU_GRP_GRH_NO
AND p.GRP_NO =u.GRU_GRP_NO
AND u.GRU_UST_ID = 'CANCEL')

This query is used in my Oracle report. The original query is like below:
SELECT nvl(sum(p.GRP_ACTUALQUANTITY),0)
FROM GOODSRECEIPTS r, GOODSRECEIPTPOSITIONS p
,reportparameters plt,reportparameters spl,reportparameters mat
WHERE r.GRH_NO = p.GRP_GRH_NO
AND TRUNC(r.GRH_ARRIVAL) >= NVL(:l_date_from, TRUNC(r.GRH_ARRIVAL)) AND
TRUNC(r.GRH_ARRIVAL) <=NVL(:l_date_to, TRUNC(r.GRH_ARRIVAL))
AND (p.GRP_PLT_ID = :p_plt_id OR :ip_plt_group =0)
AND (r.GRH_SPL_ID = :p_spl_id OR :ip_spl_group =0)
AND (p.GRP_MAT_ID = :p_mat_id OR :ip_mat_group =0)
AND (:l_flag = 'TOTAL' or trunc(r.GRH_ARRIVAL,'mm') = to_date(:p_month_abc_sp,'dd.mm.yyyy'))
and plt.rep_field = 'PLT_ID' AND plt.rep_id = :ip_rep_id AND plt.rep_user = :ip_rep_user
and spl.rep_field = 'SPL_ID' AND spl.rep_id = :ip_rep_id AND spl.rep_user = :ip_rep_user
and mat.rep_field = 'MAT_ID' AND mat.rep_id = :ip_rep_id AND mat.rep_user = :ip_rep_user
and r.GRH_SPL_ID = decode(spl.rep_value,'All',r.GRH_SPL_ID,spl.rep_value)
and p.GRp_plt_ID = decode(plt.rep_value,'All',p.GRp_plt_ID,plt.rep_value)
and p.GRp_mat_ID = decode(mat.rep_value,'All',p.GRp_mat_ID,mat.rep_value)
AND NOT EXISTS (SELECT 1
FROM GOODSRECEIPTUSAGESTATUS u
WHERE p.GRP_GRH_NO = u.GRU_GRP_GRH_NO
AND p.GRP_NO =u.GRU_GRP_NO
AND u.GRU_UST_ID = 'CANCEL');
:ip_rep_id = 'AN_EXC_RAT'
:ip_rep_user = 'ANANTGOP'
Value of :l_flag = '01.01.2009'
:p_month_abc_sp = '01.01.2009'
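(Editorial note: two patterns in the query above tend to hurt the plan. `col = DECODE(rep_value, 'All', col, rep_value)` puts the column on both sides of the predicate, and `TRUNC(r.GRH_ARRIVAL)` disables any plain index on the arrival date. An untested sketch of index-friendlier equivalents, using the same columns and binds:)

```sql
-- 'All' means "no restriction"; written as an OR, the optimizer can
-- consider OR-expansion instead of probing through DECODE
AND (spl.rep_value = 'All' OR r.GRH_SPL_ID = spl.rep_value)
AND (plt.rep_value = 'All' OR p.GRP_PLT_ID = plt.rep_value)
AND (mat.rep_value = 'All' OR p.GRP_MAT_ID = mat.rep_value)

-- An untruncated date range keeps a plain index on GRH_ARRIVAL usable
AND r.GRH_ARRIVAL >= NVL(:l_date_from, r.GRH_ARRIVAL)
AND r.GRH_ARRIVAL <  NVL(:l_date_to,   r.GRH_ARRIVAL) + 1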
Thanks
A.Gopal -
hi all, I need your help. It's a big moment for me now. I have 9 queries with me and 5 days, before which I will have to come up with how the queries are performing.
I literally have no idea whatsoever of what I should do.
What result should I provide my manager in 5 days?
All I know is to look at the execution plan and see whether queries are not performing well because of indexes, etc.
Please help me to overcome this.
How do I start?
regards
raj

Hi Tim,
I totally understand, and I agree that I asked 110 questions.
But I want to let people know that I definitely go through Google and have the material ready with me.
Honestly, it's not the case that it's easier to get answers here than on Google.
I come here because I get practical solutions, and people help me through what they saw or what they implemented in their real-time projects.
I just want to say that something that is easy for one person might not be easy for others, so please try to understand others, in their shoes.
What I know you may not know, and I could say that you are asking a silly question.
I am not writing this to contradict you. I am writing this just to let people know that I am honestly in need of something, and that's the reason I write to forums: to try to understand the practical solutions in real-time scenarios.
I am not writing this to make you agree.
I just wanted to let you know that 'easy for you may not be easy for others, and vice versa'.
I do not want to get into these arguments, because I have so many things to learn and I do not have time for arguments.
If you wish to answer, you can; otherwise please simply skip the page. Do not make the Oracle forum a forum for this sort of discussion.
thanks and regards
raj -
Performance Tuning Query on Large Tables
Hi All,
I am new to the forums and have a very specific use case which requires performance tuning, but there are some limitations on what changes I am actually able to make to the underlying data. Essentially I have two tables which contain what should be identical data, but for reasons of a less than optimal operational nature, the datasets differ in a number of ways.
Essentially I am querying call detail record data. Table 1 (referred to in my test code as TIME_TEST) is what I want to consider the master data, or the "ultimate truth" if you will. Table 1 contains the CALLED_NUMBER, which is always in a consistent format. It also contains the CALLED_DATE_TIME and DURATION (in seconds).
Table 2 (TIME_TEST_COMPARE) is a reconciliation table taken from a different source, but there are no consistent unique identifiers or PK-FK relations. This table contains a wide array of differing CALLED_NUMBER formats, hugely different from those in the master table. There is also scope for the timestamp to be out by up to 30 seconds; crazy, I know, but that's just the way it is and I have no control over the source of this data. Finally, the duration (in seconds) can be out by up to 5 seconds +/-.
I want to create a join returning all of the master data, matching the master table to the reconciliation table on CALLED_NUMBER / CALL_DATE_TIME / DURATION. I have written a query which works from a logic perspective, but it performs very badly (master table = 200,000 records, rec table = 6,000,000+ records). I am able to add partitions (currently the tables are partitioned by month of CALL_DATE_TIME) and can also apply indexes. I cannot make any changes at this time to the ETL process loading the data into these tables.
I paste below the create table and insert scripts to recreate my scenario, plus the query that I am using. Any practical suggestions for query / table optimisation would be greatly appreciated.
Kind regards
Mike
-------------- NOTE: ALL DATA HAS BEEN DE-SENSITISED
/* --- CODE TO CREATE AND POPULATE TEST TABLES ---- */
--CREATE MAIN "TIME_TEST" TABLE: THIS TABLE HOLDS CALLED NUMBERS IN A SPECIFIED/PRE-DEFINED FORMAT
CREATE TABLE TIME_TEST ( CALLED_NUMBER VARCHAR2(50 BYTE),
CALLED_DATE_TIME DATE, DURATION NUMBER );
COMMIT;
-- CREATE THE COMPARISON TABLE "TIME_TEST_COMPARE": THIS TABLE HOLDS WHAT SHOULD BE (BUT ISN'T) IDENTICAL CALL DATA.
-- THE DATA CONTAINS DIFFERING NUMBER FORMATS, SLIGHTLY DIFFERENT CALL TIMES (ALLOW +/-60 SECONDS - THIS IS FOR A GOOD, ALBEIT UNHELPFUL, REASON)
-- AND DURATIONS (ALLOW +/- 5 SECS)
CREATE TABLE TIME_TEST_COMPARE ( CALLED_NUMBER VARCHAR2(50 BYTE),
CALLED_DATE_TIME DATE, DURATION NUMBER );
COMMIT;
--CREATE INSERT DATA FOR THE MAIN TEST TIME TABLE
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 06:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 202);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 08:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 19);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 07:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 35);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 09:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 30);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 06:18:47 AM', 'MM/DD/YYYY HH:MI:SS AM'), 6);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 06:20:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 20);
COMMIT;
-- CREATE INSERT DATA FOR THE TABLE WHICH NEEDS TO BE COMPARED:
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 06:10:51 AM', 'MM/DD/YYYY HH:MI:SS AM'), 200);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'00447721345675', TO_DATE( '11/09/2011 08:10:59 AM', 'MM/DD/YYYY HH:MI:SS AM'), 21);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'07721345675', TO_DATE( '11/09/2011 07:11:20 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'+447721345675', TO_DATE( '11/09/2011 09:10:01 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'+447721345675#181345', TO_DATE( '11/09/2011 06:18:35 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 6);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'004477213456759777799', TO_DATE( '11/09/2011 06:19:58 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 17);
COMMIT;
/* --- QUERY TO UNDERTAKE MATCHING WHICH REQUIRES OPTIMISATION --------- */
SELECT MAIN.CALLED_NUMBER AS MAIN_CALLED_NUMBER, MAIN.CALLED_DATE_TIME AS MAIN_CALL_DATE_TIME, MAIN.DURATION AS MAIN_DURATION,
COMPARE.CALLED_NUMBER AS COMPARE_CALLED_NUMBER,COMPARE.CALLED_DATE_TIME AS COMPARE_CALLED_DATE_TIME,
COMPARE.DURATION COMPARE_DURATION
FROM
( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
FROM TIME_TEST
) MAIN
LEFT JOIN
( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
FROM TIME_TEST_COMPARE
) COMPARE
ON INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0
AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME-(60/86400) AND COMPARE.CALLED_DATE_TIME+(60/86400)
AND MAIN.DURATION BETWEEN COMPARE.DURATION - 5 AND COMPARE.DURATION + 5; -- DURATION is in seconds, so +/- 5, not 5/86400

What does your execution plan look like?
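(Editorial note: one untested direction for the matching query. It lets the 60-second window drive the join via an index on the reconciliation table, keeps the number match as a filter, and compares durations in plain seconds, since DURATION is a NUMBER of seconds rather than a fraction of a day:)

```sql
-- Index the reconciliation table on the timestamp so the +/-60s window
-- can be resolved by an index range scan per master row
CREATE INDEX TTC_CDT_IX ON TIME_TEST_COMPARE (CALLED_DATE_TIME);

SELECT MAIN.CALLED_NUMBER, MAIN.CALLED_DATE_TIME, MAIN.DURATION,
       COMPARE.CALLED_NUMBER, COMPARE.CALLED_DATE_TIME, COMPARE.DURATION
FROM TIME_TEST MAIN
LEFT JOIN TIME_TEST_COMPARE COMPARE
  ON COMPARE.CALLED_DATE_TIME BETWEEN MAIN.CALLED_DATE_TIME - 60/86400
                                  AND MAIN.CALLED_DATE_TIME + 60/86400
 AND COMPARE.DURATION BETWEEN MAIN.DURATION - 5 AND MAIN.DURATION + 5
 AND INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0;
```

If the number formats can be canonicalised (stripping +44 / 0044 / leading 0 and any trailing route codes such as #181345), an equality join on the normalised number would be far cheaper still; the sample data shows those variants, so that step would need care.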
-
VAL_FIELD selection to determine RSDRI or MDX query: performance tuning
According to one of the how-to guides I am working through on performance tuning, one of the tips is to query base members by using BAS(xxx) in the expansion pane of a BPC report.
I did so and found an interesting issue in one of the COPA reports.
With the income statement, when I choose one node, GROSS_PROFIT, i.e. BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
I checked: DIRECT_INCOME has three members, GROSS_PROFIT, SGA and REV_OTHER, and none of them has any formulas.
Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT),BAS(SGA),BAS(REV_OTHER), and I got an RSDRI query again.
So in summary:
BAS(PARENT) =>MDX query.
BAS(CHILD1)=>RSDRI query.
BAS(CHILD2)=>RSDRI query.
BAS(CHILD3)=>RSDRI query.
BAS(CHILD1),BAS(CHILD2),BAS(CHILD3)=>RSDRI query
I know VAL_FIELD is a SAP reserved name for BPC dimensions. My question is: why does BAS(PARENT) generate an MDX query?
Interestingly, I can repeat this behavior in my system. My intention is to always get an RSDRI query.
George

Ok, it turns out that Crystal Reports disregards BEx Query variables when they are put in the Default Values section of the filter selection.
I had mine there, and even though CR prompted me for the variables AND the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube, as if there were no restriction on Plant, for instance.
I should have paid more attention to the info message I got in the BEx Query Designer. It specifically states that "Variables located in Default Values will be ignored in the MDX Access".
After moving the variables to the Characteristic Restrictions, my report worked as expected. The slow response time is still an issue, but at least it's not compounded by trying to retrieve all records in the cube while I'm expecting fewer than 2k.
Hope this helps someone else -
Reg: Process Chain, query performance tuning steps
Hi All,
I have come across a question like this: there is a process chain of 20 processes. Five processes completed, then at the 6th step an error occurred and it cannot be rectified. I should start the chain again from the 7th step. If I go to a particular step I can run that particular step, but how can I start the entire chain again from step 7? I know that I need to use a function module, but I don't know the name of the FM. Please, somebody, help me out.
Please let me know the steps involved in query performance tuning and aggregate tuning.
Thanks & Regards
Omkar.K

Hi,
Process Chain
Method 1 (when it fails in a step/request)
/people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
How is it possible to restart a process chain at a failed step/request?
Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
You need to set the failed request/step to green in the database as well as you need to raise the event that will force the process chain to run to the end from the next request/step on.
Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
In the opened popup click on the tab 'Chain'.
In a parallel session goto transaction se16 for table rspcprocesslog and display the entries with the following selections:
1. copy the variant from the popup to the variante of table rspcprocesslog
2. copy the instance from the popup to the instance of table rspcprocesslog
3. copy the start date from the popup to the batchdate of table rspcprocesslog
Press F8 to display the entries of table rspcprocesslog.
Now open another session and goto transaction se37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the fm in test mode.
Now copy the entries of table rspcprocesslog to the input parameters of the function module like described as follows:
1. rspcprocesslog-log_id -> i_logid
2. rspcprocesslog-type -> i_type
3. rspcprocesslog-variante -> i_variant
4. rspcprocesslog-instance -> i_instance
5. enter 'G' for parameter i_state (sets the status to green).
Now press F8 to run the fm.
Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
Query performance tuning
General tips:
Use aggregates and compression.
Use fewer, and less complex, cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid too many characteristics in the rows.
Use T-codes ST03 or ST03N:
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table RSDDSTATS to get the statistics.
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This will ensure use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and records transferred to the front end vs. records selected from the DB.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show dimension vs. fact table sizes in percent. If you mean speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. The signs are the valuation of the aggregate design and usage. The more plus signs, the more useful the aggregate and the more queries it satisfies: "++" means its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also not so good (performance is not so good). The greater the number of minus signs, the worse the evaluation of the aggregate.
If it shows "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
By implementing the BW Statistics Business Content: you need to install it, feed it data, and then analyse through the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
Thanks,
JituK -
Oracle query performance tuning
Hi
I am doing Oracle programming. I would like to learn query performance tuning.
Could you guide me on how I could learn this online, and which books to refer to?
Thank you

I would recommend purchasing a copy of Cary Millsap's book now:
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/059600527X/ref=sr_1_1?ie=UTF8&qid=1248985270&sr=8-1
And Jonathan Lewis' when you feel you are at a slightly more advanced level.
http://www.amazon.com/Cost-Based-Oracle-Fundamentals-Experts-Voice/dp/1590596366/ref=pd_sim_b_2
Both belong in everyone's bookcase. -
Query performance tuning need your suggestions
Hi,
Below are the SQL query and explain plan. The query takes 2 hours to execute, and sometimes it errors out due to a memory issue.
Below is the query whose performance I need to improve. Please give your suggestions on how to tweak it so that both execution time and memory consumption are reduced.
select a11.DATE_ID DATE_ID,
sum(a11.C_MEASURE) WJXBFS1,
count(a11.PKEY_GUID) WJXBFS2,
count(Case when a11.C_MEASURE <= 10 then a11.PKEY_GUID END) WJXBFS3,
count(Case when a11.STATUS = 'Y' and a11.C_MEASURE > 10 then a11.PKEY_GUID END) WJXBFS4,
count(Case when a11.STATUS = 'N' then a11.PKEY_GUID END) WJXBFS5,
sum(((a11.C_MEASURE ))) WJXBFS6,
a17.DESC_DATE_MM_DD_YYYY DESC_DATE_MM_DD_YYYY,
a11.DNS DNS,
a12.VVALUE VVALUE,
a12.VNAME VNAME,
a13.VVALUE VVALUE0,
a13.VNAME VNAME0,
a14.VVALUE VVALUE1,
a14.VNAME VNAME1,
a15.VVALUE VVALUE2,
a15.VNAME VNAME2,
a16.VVALUE VVALUE3,
a16.VNAME VNAME3,
a11.PKEY_GUID PKEY_GUID,
a11.UPKEY_GUID UPKEY_GUID,
a17.DAY_OF_WEEK DAY_OF_WEEK,
a17.D_WEEK D_WEEK,
a17.MNTH_ID DAY_OF_MONTH,
a17.YEAR_ID YEAR_ID,
a17.DESC_YEAR_FULL DESC_YEAR_FULL,
a17.WEEK_ID WEEK_ID,
a17.WEEK_OF_YEAR WEEK_OF_YEAR
from ACTIVITY_F a11
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 1 ) a12
on (a11.PKEY_GUID = a12.PKEY_GUID and
a11.DATE_ID = a12.DATE_ID and
a11.ORG = a12.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 2) a13
on (a11.PKEY_GUID = a13.PKEY_GUID and
a11.DATE_ID = a13.DATE_ID and
a11.ORG = a13.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 3 ) a14
on (a11.PKEY_GUID = a14.PKEY_GUID and
a11.DATE_ID = a14.DATE_ID and
a11.ORG = a14.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 4) a15
on (a11.PKEY_GUID = a15.PKEY_GUID and
a11.DATE_ID = a15.DATE_ID and
a11.ORG = a15.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 9) a16
on (a11.PKEY_GUID = a16.PKEY_GUID and
a11.DATE_ID = a16.DATE_ID and
A11.ORG = A16.ORG)
join W_DATE_D a17
ON (A11.DATE_ID = A17.ID)
join W_SALES_D a18
on (a11.TASK = a18.ID)
where (a17.TIMSTAMP between To_Date('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS') and To_Date('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and a11.ORG in (12)
and a18.SRC_TASK = 'AX012Z')
group by a11.DATE_ID,
a17.DESC_DATE_MM_DD_YYYY,
a11.DNS,
a12.VVALUE,
a12.VNAME,
a13.VVALUE,
a13.VNAME,
a14.VVALUE,
a14.VNAME,
a15.VVALUE,
a15.VNAME,
a16.VVALUE,
a16.VNAME,
a11.PKEY_GUID,
a11.UPKEY_GUID,
a17.DAY_OF_WEEK,
a17.D_WEEK,
a17.MNTH_ID,
a17.YEAR_ID,
a17.DESC_YEAR_FULL,
a17.WEEK_ID,
a17.WEEK_OF_YEAR;
Explained.
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 1245 | 47 (9)| 00:00:01 |
| 1 | HASH GROUP BY | | 1 | 1245 | 47 (9)| 00:00:01 |
|* 2 | HASH JOIN | | 1 | 1245 | 46 (7)| 00:00:01 |
|* 3 | HASH JOIN | | 1 | 1179 | 41 (5)| 00:00:01 |
|* 4 | HASH JOIN | | 1 | 1113 | 37 (6)| 00:00:01 |
|* 5 | HASH JOIN | | 1 | 1047 | 32 (4)| 00:00:01 |
|* 6 | HASH JOIN | | 1 | 981 | 28 (4)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 915 | 23 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 763 | 20 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 611 | 17 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 1 | 459 | 14 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 307 | 11 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 155 | 7 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | 1 | 72 | 3 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID| W_SALES_D | 1 | 13 | 2 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | CONS_UNQ_W_SALES_D_SRC_ID | 1 | | 1 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID| W_DATE_D | 1 | 59 | 1 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | UIDX_DD_TIMSTAMP | 1 | | 0 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | ACTIVITY_F | 1 | 83 | 4 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | PK_ACTIVITY_F | 1 | | 3 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 4 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 22 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 23 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 24 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 25 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 26 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 27 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 28 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 29 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 30 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 31 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 32 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 33 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 34 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------

Hi,
I'm not a tuning expert but I can suggest you to post your request according to this template:
Thread: HOW TO: Post a SQL statement tuning request - template posting
HOW TO: Post a SQL statement tuning request - template posting
Then:
a) You should post code which is easy to read. What about formatting? Your code had to be fixed in a couple of places.
b) You could simplify your code using a WITH clause. This has nothing to do with tuning, but it will help the readability of the query.
Check it below:
WITH tab1 AS (SELECT a.org AS org
, a.date_id AS date_id
, a.time_of_day_id AS time_of_day_id
, a.date_hour_id AS date_hour_id
, a.task AS task
, a.pkey_guid AS pkey_guid
, a.vname AS vname
, a.vvalue AS vvalue
, b.variable_obj
FROM w_org_d a
JOIN
w_person_d b
ON ( a.task = b.task
AND a.org = b.id
AND a.vname = b.vname))
SELECT a11.date_id date_id
, SUM (a11.c_measure) wjxbfs1
, COUNT (a11.pkey_guid) wjxbfs2
, COUNT (CASE WHEN a11.c_measure <= 10 THEN a11.pkey_guid END) wjxbfs3
, COUNT (CASE WHEN a11.status = 'Y' AND a11.c_measure > 10 THEN a11.pkey_guid END) wjxbfs4
, COUNT (CASE WHEN a11.status = 'N' THEN a11.pkey_guid END) wjxbfs5
, SUM ( ( (a11.c_measure))) wjxbfs6
, a17.desc_date_mm_dd_yyyy desc_date_mm_dd_yyyy
, a11.dns dns
, a12.vvalue vvalue
, a12.vname vname
, a13.vvalue vvalue0
, a13.vname vname0
, a14.vvalue vvalue1
, a14.vname vname1
, a15.vvalue vvalue2
, a15.vname vname2
, a16.vvalue vvalue3
, a16.vname vname3
, a11.pkey_guid pkey_guid
, a11.upkey_guid upkey_guid
, a17.day_of_week day_of_week
, a17.d_week d_week
, a17.mnth_id day_of_month
, a17.year_id year_id
, a17.desc_year_full desc_year_full
, a17.week_id week_id
, a17.week_of_year week_of_year
FROM activity_f a11
JOIN tab1 a12
ON ( a11.pkey_guid = a12.pkey_guid
AND a11.date_id = a12.date_id
AND a11.org = a12.org
AND a12.variable_obj = 1)
JOIN tab1 a13
ON ( a11.pkey_guid = a13.pkey_guid
AND a11.date_id = a13.date_id
AND a11.org = a13.org
AND a13.variable_obj = 2)
JOIN tab1 a14
ON ( a11.pkey_guid = a14.pkey_guid
AND a11.date_id = a14.date_id
AND a11.org = a14.org
AND a14.variable_obj = 3)
JOIN tab1 a15
ON ( a11.pkey_guid = a15.pkey_guid
AND a11.date_id = a15.date_id
AND a11.org = a15.org
AND a15.variable_obj = 4)
JOIN tab1 a16
ON ( a11.pkey_guid = a16.pkey_guid
AND a11.date_id = a16.date_id
AND a11.org = a16.org
AND a16.variable_obj = 9)
JOIN w_date_d a17
ON (a11.date_id = a17.id)
JOIN w_sales_d a18
ON (a11.task = a18.id)
WHERE (a17.timstamp BETWEEN TO_DATE ('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND TO_DATE ('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND a11.org IN (12)
AND a18.src_task = 'AX012Z')
GROUP BY a11.date_id, a17.desc_date_mm_dd_yyyy, a11.dns, a12.vvalue
, a12.vname, a13.vvalue, a13.vname, a14.vvalue
, a14.vname, a15.vvalue, a15.vname, a16.vvalue
, a16.vname, a11.pkey_guid, a11.upkey_guid, a17.day_of_week
, a17.d_week, a17.mnth_id, a17.year_id, a17.desc_year_full
, a17.week_id, a17.week_of_year;
I hope I did not miss anything while reformatting the code. I could not test it, not having the proper tables.
As I said before, I'm not a tuning expert nor do I pretend to be, but I see this:
1) Table W_PERSON_D is read in full scan. Any possibility of using indexes?
2) Tables W_SALES_D, W_DATE_D, ACTIVITY_F and W_ORG_D have TABLE ACCESS BY INDEX ROWID which definitely is not fast.
You should provide additional information for tuning your query checking the post I mentioned previously.
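(Editorial note: the WITH clause above still joins tab1 five times, reading W_ORG_D and W_PERSON_D five times each. An untested sketch, assuming at most one row per (org, date_id, pkey_guid, variable_obj): conditional aggregation can pivot all five variable sets in a single pass. Shown for the first two; the pairs for 3, 4 and 9 follow the same pattern:)

```sql
WITH tab1 AS (SELECT a.org, a.date_id, a.pkey_guid, a.vname, a.vvalue, b.variable_obj
                FROM w_org_d a
                JOIN w_person_d b
                  ON a.task = b.task AND a.org = b.id AND a.vname = b.vname
               WHERE b.variable_obj IN (1, 2, 3, 4, 9)),
     piv  AS (SELECT org, date_id, pkey_guid,
                     MAX(CASE WHEN variable_obj = 1 THEN vvalue END) AS vvalue,
                     MAX(CASE WHEN variable_obj = 1 THEN vname  END) AS vname,
                     MAX(CASE WHEN variable_obj = 2 THEN vvalue END) AS vvalue0,
                     MAX(CASE WHEN variable_obj = 2 THEN vname  END) AS vname0,
                     -- vvalue1/vname1 (3), vvalue2/vname2 (4), vvalue3/vname3 (9) likewise
                     COUNT(DISTINCT variable_obj) AS n_vars
                FROM tab1
               GROUP BY org, date_id, pkey_guid)
SELECT a11.date_id, piv.vvalue, piv.vname, piv.vvalue0, piv.vname0
  FROM activity_f a11
  JOIN piv
    ON piv.pkey_guid = a11.pkey_guid
   AND piv.date_id   = a11.date_id
   AND piv.org       = a11.org
 WHERE piv.n_vars = 5;  -- preserves inner-join semantics: all five variables must exist
```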
Regards.
Al -
Parameters to be considered for tuning the QUERY Performance.
Hi Guys
I want to identify the query which is taking the most resources through the dynamic performance views, not through OEM, and then tune that query.
Please suggest how to approach this. I also want to know which parameters should be considered when tuning a query, such as query execution time, I/O, library cache hits, etc., and from which performance views these parameters can be taken. I also want to know how to find the size of the library cache, how many queries the library cache can hold, what the retention of queries in the library cache is, and how to retrieve queries from the library cache.
Thanks a lot
Edited by: user448837 on Sep 23, 2008 9:24 PM
Hmm, there is a parameter called makemyquery=super_fast, check that ;-).
Ok, jokes apart, your question is like asking a doctor how to troubleshoot a person's sickness. Does the doctor start giving medicine immediately? No, right? He has to check the patient's symptoms first. The same applies to a query. There is no view as such which would tell you the performance of a query, i.e. whether it is good or bad. Tell me, if a query takes 5 minutes, what would you say, is that good or bad? Or if the query took this much CPU time, would you say it's too much or too little? I am sure your answer would be "it depends". That's the key point.
The most important thing to check is the plan of the query: what kind of plan the optimizer took for the query and whether it was really good or not. There are millions of ways to do something, but the idea is to do it in the best way. That's the idea behind reading the explain plan. I would suggest getting yourself familiar with the explain plan's various access paths and their pros and cons. For the query's characteristics, you can check the V$SQL, V$SQLAREA and V$SQLTEXT views.
As for your other question about the size of the library cache, its contents and its retention: it is all managed internally. You never size the library cache itself, only the shared pool. Similarly, the best bet for a DBA/developer is to check that queries are mostly shareable and that there are no duplicate SQLs; the CURSOR_SHARING parameter can be used for this. We are not given control over the retention or flushing of queries from the cache.
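As a minimal sketch of that approach (the column choices and the top-10 cutoff are arbitrary, and you need SELECT privileges on the V$ views):

```sql
-- Top 10 statements by total buffer gets, a common proxy for resource usage
SELECT * FROM (
  SELECT sql_id, executions, buffer_gets,
         ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
  FROM   v$sqlarea
  ORDER  BY buffer_gets DESC)
WHERE ROWNUM <= 10;

-- Then pull the plan that was actually used for a suspect statement
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id'));
```

DISPLAY_CURSOR shows the plan that really executed, which is usually more trustworthy for tuning than a plain EXPLAIN PLAN.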
HTH
Aman.... -
Analysis and Performance tuning of a query
Hi gurus,
We have a few reports built on a multiprovider (which contains five basic cubes) whose response time is very slow, so I want to do some analysis to find out why they are running so slowly and also do the performance tuning.
So where do I start, and how: with the report or with the multiprovider?
Whether it is the report or the multiprovider, please kindly guide me on what things I need to look for and how to correct them, whether by adding something or changing the data design.
I have four reports:
1. runs on three basic cubes
2. runs on all five cubes
3. & 4. run on 2 cubes
so kindly give your inputs
thanks and regards
Neel
docs on performance are available in
FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
https://service.sap.com/bi
-> performance
effective query on MP can be found
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b03b7f4c-c270-2910-a8b8-91e0f6d77096
for nw2004s
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a9ab011a-0e01-0010-02a1-d496b94c9c0f
modeling on multiprovider
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2f5aa43f-0c01-0010-a990-9641d3d4eef7
hope this helps. -
Performance tuning of a query in Hyperion Interactive Reporting Studio.
Hi All,
Could someone please tell me what the possible ways are to enhance the performance of a query in Hyperion Interactive Reporting Studio?
Thanks & Regards,
Raj
Topic Order Priority... Data Model Options
Use Hints/Directives
Ask DBA to review the SQL Generated by IR --- TopMenu - View - Query Log -
Query About performance tuning
Hi Guru's
Maybe this question is a stupid one, but I think this is the forum where every user gets answers to all their questions.
I want to start learning performance tuning but do not know where to start; can anyone suggest a path?
Thanks in advance
Hi,
performance tuning is a huge area. It involves the following skills:
1. Understanding how Oracle stores data (physical structure of tables and indexes) and how it reads and writes it.
2. Understanding how Oracle processes and executes queries
3. Ability to read execution plans (not just scan them for 'red flags')
4. Familiarity with the most common wait events, knowing and understanding the Oracle Wait Interface and its limitations
5. Ability to understand AWR and ASH data and its limitations
6. Understanding the Oracle optimizer and its inputs, knowing basic formulas for cardinality and selectivity, cardinality feedback tuning
7. Abilty to read trace files (first of all extended SQL trace, 10046, and CBO trace, 10053)
8. Understanding concurrency and read consistency, and the work Oracle does to maintain them.
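For point 7 in the list above, a common way to capture an extended SQL trace for your own session looks roughly like this (the tracefile identifier is an arbitrary example; the resulting trace file is then formatted with tkprof):

```sql
-- Mark the trace file so it is easy to find in the trace directory
ALTER SESSION SET tracefile_identifier = 'tuning_demo';
-- Level 12 = bind values (4) + wait events (8) on top of the basic trace
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... run the statement under investigation here ...
ALTER SESSION SET events '10046 trace name context off';
```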
This is probably not a complete list. Of course you won't be able to learn everything at once. You can start by reading:
1. The official Oracle Performance Tuning Guide
2. T. Kyte's books on Oracle architecture
3. Millsap's and Holt's book "Optimizing Oracle Performance"
4. J. Lewis, "Cost-Based Oracle Fundamentals"
Best regards,
Nikolay -
I am working on fine-tuning the query below.
SELECT SEGMENT1
FROM MTL_SYSTEM_ITEMS
WHERE ORGANIZATION_ID = 100
AND INVENTORY_ITEM_ID = :a
MTL_SYSTEM_ITEMS has 8 million records. This query is in a function that gets called probably a million times a day.
What I am thinking of doing is to create a view on the mtl_system_items table where organization_id = 100. Now if I use the view below, do you think the query will execute faster?
SELECT SEGMENT1
FROM MTL_SYSTEM_ITEMS_VIEW
WHERE INVENTORY_ITEM_ID = :a
Check the number of parses for this particular query after each execution.
e.g,
SQL>SELECT * FROM(
2 SELECT parse_calls*executions Product, parse_calls Parses
3 ,executions Execs, sql_text FROM v$sqlarea ORDER BY 1 DESC)
4 WHERE ROWNUM <= 50
5* ORDER BY parses DESC
SQL> /
PRODUCT PARSES EXECS SQL_TEXT
2608960 155 16832 SELECT apl_dnum ,COUNT(*) "Numbers" ,SUM(q3d_amount) "Amount" ,ddate FRO
7661664 111 69024 SELECT COUNT(*) FROM Q3_LODGMENT_DTL WHERE APL_DNUM = :b1
1412853 51 27703 SELECT ZONE_DESC_SHORT FROM ZONE_STP WHERE ZONE_CODE = :b1
50 rows selected.
If your SESSION_CACHED_CURSORS parameter allows caching more SQL statements per session, then it will reduce re-parsing; bind variables in the SQL code also reduce parsing.
Check that your indexes exist and check your network flow. Your DBA gave you a very nice job, but there is no hard and fast rule of thumb to identify exactly what causes this specific query to degrade.
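One rough way to see whether session cursor caching and bind variables are paying off is to compare parse counts with cache hits for the current session, for example:

```sql
-- Parse and cursor-cache statistics for the current session
SELECT n.name, s.value
FROM   v$statname n
JOIN   v$mystat   s ON s.statistic# = n.statistic#
WHERE  n.name IN ('session cursor cache hits',
                  'parse count (total)',
                  'parse count (hard)');
```

A high hard-parse count relative to executions usually points at literals where bind variables should be used.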
Khurram