Query performance tuning beyond entity caching
Hi,
We have an extremely large read-only dataset stored using BDB-JE DPL. I'm seeking to tune our use of BDB while querying, and I'm wondering what options we have beyond simply attempting to cache more entities. Our expected cache hit rate is low. Can I tune things to keep more of the btree nodes and other internal structures buffered? What kind of configuration parameters should I be looking at?
Thanks,
Brian
No, you don't have to preload the leaf nodes. But if you don't preload the secondary at all, you'll see more I/O when you read by secondary index.
If you don't have enough cache to load leaf nodes, you should not call setLoadLNs(true) for primary or secondary DBs. Instead, try to load the internal nodes for all DBs if possible. You can limit the time taken by preload using PreloadConfig.
I strongly suspect that the primary DB loads faster because it is probably written in key order, while the secondaries are not.
The LRU-only setting is an environment-wide setting and applies to all databases, so that is not a problem. If you are doing random access in general, this is the correct setting.
Please use preload to reduce the amount of I/O that you see in the environment stats. If performance is still not adequate, you may want to look at the I/O subsystem you're using -- do you know what you're getting for seek and read times? Also, you may want to turn on the Java verbose GC option and see if full GCs are occurring -- if so, tuning the Java GC will be necessary.
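If it helps, the cache knobs discussed here can be set in a je.properties file in the environment home (or programmatically via EnvironmentConfig). A minimal sketch, assuming the classic JE parameter names je.maxMemoryPercent and je.evictor.lruOnly; the values are illustrative, not recommendations:

```properties
# Give the JE cache a larger share of the JVM heap so more
# btree internal nodes stay resident.
je.maxMemoryPercent=80

# Evict strictly by recency (LRU-only) rather than by btree level;
# appropriate for random-access read workloads.
je.evictor.lruOnly=true
```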
--mark
Similar Messages
-
Reg: Process Chain, query performance tuning steps
Hi All,
I came across a question like this: there is a process chain of 20 processes; 5 processes completed, and at the 6th step an error occurred that cannot be rectified. I need to start the chain again from the 7th step. If I go to a particular step I can run that individual step, but how can I restart the entire chain from step 7? I know that I need to use a function module, but I don't know the name of the FM. Please, somebody help me out.
Please let me know the steps involved in query performance tuning and aggregate tuning.
Thanks & Regards
Omkar.K

Hi,
Process Chain
Method 1 (when it fails in a step/request)
/people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
How is it possible to restart a process chain at a failed step/request?
Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
You need to set the failed request/step to green in the database, and you also need to raise the event that forces the process chain to run to the end from the next request/step on.
Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
In the opened popup click on the tab 'Chain'.
In a parallel session, go to transaction se16 for table rspcprocesslog and display the entries with the following selections:
1. copy the variant from the popup to the variante of table rspcprocesslog
2. copy the instance from the popup to the instance of table rspcprocesslog
3. copy the start date from the popup to the batchdate of table rspcprocesslog
Press F8 to display the entries of table rspcprocesslog.
Now open another session and go to transaction se37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the FM in test mode.
Now copy the entries of table rspcprocesslog to the input parameters of the function module as follows:
1. rspcprocesslog-log_id -> i_logid
2. rspcprocesslog-type -> i_type
3. rspcprocesslog-variante -> i_variant
4. rspcprocesslog-instance -> i_instance
5. enter 'G' for parameter i_state (sets the status to green).
Now press F8 to run the fm.
Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
Of course, you can also set the state of a specific step in the chain to any other possible value, such as 'R' = ended with errors, 'F' = finished, 'X' = cancelled, and so on.
Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
Query performance tuning
General tips
Using aggregates and compression.
Using fewer and simpler cell definitions where possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs (restricted and calculated key figures).
3. Avoid too many characteristics in rows.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table rsddstats to get the statistics
Using cache memory will decrease the load time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important metrics: the aggregation ratio and the records transferred to the front end versus the records selected from the DB.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show dimension vs. fact table sizes in percent. If you mean speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. The +/- signs are the valuation of the aggregate's design and usage. "++" means its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
Refer to:
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you whether it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Your query performance can also depend on the selection criteria; since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7, statistics need to be activated for ST03 and the BI admin cockpit to work.
Or implement the BW Statistics Business Content: you need to install it, feed it data, and then analyze through the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20, which gives you all the performance-related information, such as:
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
Thanks,
JituK -
Oracle query performance tuning
Hi
I am doing Oracle programming, and I would like to learn query performance tuning.
Could you guide me on how I could learn this online, and which books to refer to?
Thank you

I would recommend purchasing a copy of Cary Millsap's book now:
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/059600527X/ref=sr_1_1?ie=UTF8&qid=1248985270&sr=8-1
And Jonathan Lewis' when you feel you are at a slightly more advanced level.
http://www.amazon.com/Cost-Based-Oracle-Fundamentals-Experts-Voice/dp/1590596366/ref=pd_sim_b_2
Both belong in everyone's bookcase. -
VAL_FIELD selection to determine RSDRI or MDX query: performance tuning
According to one of the HTGs, I am working on performance tuning. One of the tips is to query base members by using BAS(xxx) in the expansion pane of a BPC report.
I did so and found an interesting issue in one of the COPA reports.
With the income statement, when I choose one node, GROSS_PROFIT, i.e. BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
I checked that DIRECT_INCOME has three members: GROSS_PROFIT, SGA, REV_OTHER. None of them has any formulas.
Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT), BAS(SGA), BAS(REV_OTHER), and I got an RSDRI query again.
So in summary:
BAS(PARENT) => MDX query
BAS(CHILD1) => RSDRI query
BAS(CHILD2) => RSDRI query
BAS(CHILD3) => RSDRI query
BAS(CHILD1), BAS(CHILD2), BAS(CHILD3) => RSDRI query
I know VAL_FIELD is a SAP-reserved name for BPC dimensions. My question is: why does BAS(PARENT) produce an MDX query?
Interestingly, I can repeat this behavior in my system. My intention is to always get an RSDRI query.
George

Ok - it turns out that Crystal Reports disregards BEx Query variables when they are put in the Default Values section of the filter selection.
I had mine there, and even though CR prompted me for the variables and the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube, as if there were no restriction on Plant, for instance.
I should have paid more attention to the info message I got in the BEx Query Designer. It specifically states that the "Variable located in Default Values will be ignored in the MDX Access".
After moving the variables to the Characteristic Restrictions my report worked as expected. The slow response time is still an issue but at least it's not compounded by trying to retrieve all records in the cube while I'm expecting less than 2k.
Hope this helps someone else -
Query Performance with and without cache
Hi Experts
I have a query that takes 50 seconds to execute without any caching or precalculation.
Once I have run the query in the Portal, any subsequent execution takes about 8 seconds.
I assumed that this was to do with the cache, so I went into RSRT and deleted the Main Memory Cache and the Blob cache, where my queries seemed to be.
I ran the query again and it took 8 seconds.
Does the query cache somewhere else? Maybe on the portal, or in the user's local cache? Does anyone have any idea why the reports are still fast even though the cache is deleted?
Forum points always awarded for helpful answers!!
Many thanks!
Dave

Hi,
Cached data automatically becomes invalid whenever data in the InfoCube is loaded or purged, and when a query is changed or regenerated. Once cached data becomes invalid, the system reverts to the fact table or associated aggregate to pull data for the query. You can see the cache settings for all queries in your system using transaction SE16 to view table RSRREPDIR. The CACHEMODE field shows the settings of the individual queries. The numbers in this field correspond to the cache mode settings above.
To set the cache mode on the InfoCube, follow the path Business Information Warehouse Implementation Guide (IMG) > Reporting-Relevant Settings > General Reporting Settings > Global Cache Settings, or use transaction SPRO. Setting the cache mode at the InfoCube level establishes a default for each query created from that specific InfoCube. -
Query performance tuning need your suggestions
Hi,
Below are the SQL query and explain plan; the query takes 2 hours to execute and sometimes errors out due to a memory issue.
This is the query whose performance I need to improve. Please give your suggestions for tweaking it so that it executes faster and consumes less memory.
select a11.DATE_ID DATE_ID,
sum(a11.C_MEASURE) WJXBFS1,
count(a11.PKEY_GUID) WJXBFS2,
count(Case when a11.C_MEASURE <= 10 then a11.PKEY_GUID END) WJXBFS3,
count(Case when a11.STATUS = 'Y' and a11.C_MEASURE > 10 then a11.PKEY_GUID END) WJXBFS4,
count(Case when a11.STATUS = 'N' then a11.PKEY_GUID END) WJXBFS5,
sum(((a11.C_MEASURE ))) WJXBFS6,
a17.DESC_DATE_MM_DD_YYYY DESC_DATE_MM_DD_YYYY,
a11.DNS DNS,
a12.VVALUE VVALUE,
a12.VNAME VNAME,
a13.VVALUE VVALUE0,
a13.VNAME VNAME0,
a14.VVALUE VVALUE1,
a14.VNAME VNAME1,
a15.VVALUE VVALUE2,
a15.VNAME VNAME2,
a16.VVALUE VVALUE3,
a16.VNAME VNAME3,
a11.PKEY_GUID PKEY_GUID,
a11.UPKEY_GUID UPKEY_GUID,
a17.DAY_OF_WEEK DAY_OF_WEEK,
a17.D_WEEK D_WEEK,
a17.MNTH_ID DAY_OF_MONTH,
a17.YEAR_ID YEAR_ID,
a17.DESC_YEAR_FULL DESC_YEAR_FULL,
a17.WEEK_ID WEEK_ID,
a17.WEEK_OF_YEAR WEEK_OF_YEAR
from ACTIVITY_F a11
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 1 ) a12
on (a11.PKEY_GUID = a12.PKEY_GUID and
a11.DATE_ID = a12.DATE_ID and
a11.ORG = a12.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 2) a13
on (a11.PKEY_GUID = a13.PKEY_GUID and
a11.DATE_ID = a13.DATE_ID and
a11.ORG = a13.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 3 ) a14
on (a11.PKEY_GUID = a14.PKEY_GUID and
a11.DATE_ID = a14.DATE_ID and
a11.ORG = a14.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 4) a15
on (a11.PKEY_GUID = a15.PKEY_GUID and
a11.DATE_ID = a15.DATE_ID and
a11.ORG = a15.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 9) a16
on (a11.PKEY_GUID = a16.PKEY_GUID and
a11.DATE_ID = a16.DATE_ID and
A11.ORG = A16.ORG)
join W_DATE_D a17
ON (A11.DATE_ID = A17.ID)
join W_SALES_D a18
on (a11.TASK = a18.ID)
where (a17.TIMSTAMP between To_Date('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS') and To_Date('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and a11.ORG in (12)
and a18.SRC_TASK = 'AX012Z')
group by a11.DATE_ID,
a17.DESC_DATE_MM_DD_YYYY,
a11.DNS,
a12.VVALUE,
a12.VNAME,
a13.VVALUE,
a13.VNAME,
a14.VVALUE,
a14.VNAME,
a15.VVALUE,
a15.VNAME,
a16.VVALUE,
a16.VNAME,
a11.PKEY_GUID,
a11.UPKEY_GUID,
a17.DAY_OF_WEEK,
a17.D_WEEK,
a17.MNTH_ID,
a17.YEAR_ID,
a17.DESC_YEAR_FULL,
a17.WEEK_ID,
a17.WEEK_OF_YEAR;
Explained.
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 1245 | 47 (9)| 00:00:01 |
| 1 | HASH GROUP BY | | 1 | 1245 | 47 (9)| 00:00:01 |
|* 2 | HASH JOIN | | 1 | 1245 | 46 (7)| 00:00:01 |
|* 3 | HASH JOIN | | 1 | 1179 | 41 (5)| 00:00:01 |
|* 4 | HASH JOIN | | 1 | 1113 | 37 (6)| 00:00:01 |
|* 5 | HASH JOIN | | 1 | 1047 | 32 (4)| 00:00:01 |
|* 6 | HASH JOIN | | 1 | 981 | 28 (4)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 915 | 23 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 763 | 20 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 611 | 17 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 1 | 459 | 14 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 307 | 11 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 155 | 7 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | 1 | 72 | 3 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID| W_SALES_D | 1 | 13 | 2 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | CONS_UNQ_W_SALES_D_SRC_ID | 1 | | 1 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID| W_DATE_D | 1 | 59 | 1 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | UIDX_DD_TIMSTAMP | 1 | | 0 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | ACTIVITY_F | 1 | 83 | 4 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | PK_ACTIVITY_F | 1 | | 3 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 4 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 22 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 23 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 24 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 25 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 26 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 27 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 28 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 29 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 30 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 31 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 32 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 33 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 34 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------

Hi,
I'm not a tuning expert but I can suggest you to post your request according to this template:
Thread: HOW TO: Post a SQL statement tuning request - template posting
HOW TO: Post a SQL statement tuning request - template posting
Then:
a) You should post code that is easy to read. What about formatting? Your code had to be fixed in a couple of places.
b) You could simplify your code using the WITH clause. This has nothing to do with tuning, but it will help the readability of the query.
Check it below:
WITH tab1 AS (SELECT a.org AS org
, a.date_id AS date_id
, a.time_of_day_id AS time_of_day_id
, a.date_hour_id AS date_hour_id
, a.task AS task
, a.pkey_guid AS pkey_guid
, a.vname AS vname
, a.vvalue AS vvalue
, b.variable_obj
FROM w_org_d a
JOIN
w_person_d b
ON ( a.task = b.task
AND a.org = b.id
AND a.vname = b.vname))
SELECT a11.date_id date_id
, SUM (a11.c_measure) wjxbfs1
, COUNT (a11.pkey_guid) wjxbfs2
, COUNT (CASE WHEN a11.c_measure <= 10 THEN a11.pkey_guid END) wjxbfs3
, COUNT (CASE WHEN a11.status = 'Y' AND a11.c_measure > 10 THEN a11.pkey_guid END) wjxbfs4
, COUNT (CASE WHEN a11.status = 'N' THEN a11.pkey_guid END) wjxbfs5
, SUM ( ( (a11.c_measure))) wjxbfs6
, a17.desc_date_mm_dd_yyyy desc_date_mm_dd_yyyy
, a11.dns dns
, a12.vvalue vvalue
, a12.vname vname
, a13.vvalue vvalue0
, a13.vname vname0
, a14.vvalue vvalue1
, a14.vname vname1
, a15.vvalue vvalue2
, a15.vname vname2
, a16.vvalue vvalue3
, a16.vname vname3
, a11.pkey_guid pkey_guid
, a11.upkey_guid upkey_guid
, a17.day_of_week day_of_week
, a17.d_week d_week
, a17.mnth_id day_of_month
, a17.year_id year_id
, a17.desc_year_full desc_year_full
, a17.week_id week_id
, a17.week_of_year week_of_year
FROM activity_f a11
JOIN tab1 a12
ON ( a11.pkey_guid = a12.pkey_guid
AND a11.date_id = a12.date_id
AND a11.org = a12.org
AND a12.variable_obj = 1)
JOIN tab1 a13
ON ( a11.pkey_guid = a13.pkey_guid
AND a11.date_id = a13.date_id
AND a11.org = a13.org
AND a13.variable_obj = 2)
JOIN tab1 a14
ON ( a11.pkey_guid = a14.pkey_guid
AND a11.date_id = a14.date_id
AND a11.org = a14.org
AND a14.variable_obj = 3)
JOIN tab1 a15
ON ( a11.pkey_guid = a15.pkey_guid
AND a11.date_id = a15.date_id
AND a11.org = a15.org
AND a15.variable_obj = 4)
JOIN tab1 a16
ON ( a11.pkey_guid = a16.pkey_guid
AND a11.date_id = a16.date_id
AND a11.org = a16.org
AND a16.variable_obj = 9)
JOIN w_date_d a17
ON (a11.date_id = a17.id)
JOIN w_sales_d a18
ON (a11.task = a18.id)
WHERE (a17.timstamp BETWEEN TO_DATE ('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND TO_DATE ('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND a11.org IN (12)
AND a18.src_task = 'AX012Z')
GROUP BY a11.date_id, a17.desc_date_mm_dd_yyyy, a11.dns, a12.vvalue
, a12.vname, a13.vvalue, a13.vname, a14.vvalue
, a14.vname, a15.vvalue, a15.vname, a16.vvalue
, a16.vname, a11.pkey_guid, a11.upkey_guid, a17.day_of_week
, a17.d_week, a17.mnth_id, a17.year_id, a17.desc_year_full
, a17.week_id, a17.week_of_year;
I hope I did not miss anything while reformatting the code. I could not test it, since I do not have the proper tables.
As I said before, I'm not a tuning expert, nor do I pretend to be, but I see this:
1) Table W_PERSON_D is read with a full scan. Any possibility of using indexes?
2) Tables W_SALES_D, W_DATE_D, ACTIVITY_F and W_ORG_D have TABLE ACCESS BY INDEX ROWID which definitely is not fast.
You should provide additional information for tuning your query checking the post I mentioned previously.
Regards.
Al -
Query Performance tuning and scope of imporvement
Hi All ,
I am on oracle 10g and on Linux OS.
I have this below query which I am trying to optimise :
SELECT 'COMPANY' AS device_brand, mach_sn AS device_source_id,
'COMPANY' AS device_brand_raw,
CASE
WHEN fdi.serial_number IS NOT NULL THEN
fdi.serial_number
ELSE
mach_sn || model_no
END AS serial_number_raw,
gmd.generic_meter_name AS counter_id,
meter_name AS counter_id_raw,
meter_value AS counter_value,
meter_hist_tstamp AS device_timestamp,
rcvd_tstamp AS server_timestamp
FROM rdw.v_meter_hist vmh
JOIN rdw.generic_meter_def gmd
ON vmh.generic_meter_id = gmd.generic_meter_id
LEFT OUTER JOIN fdr.device_info fdi
ON vmh.mach_sn = fdi.clean_serial_number
WHERE meter_hist_id IN
(SELECT /*+ PUSH_SUBQ */ MAX(meter_hist_id)
FROM rdw.v_meter_hist
WHERE vmh.mach_sn IN
('URR893727')
AND vmh.meter_name IN
('TotalImpressions','TotalBlackImpressions','TotalColorImpressions')
AND vmh.meter_hist_tstamp >=to_date ('04/16/2011', 'mm/dd/yyyy')
AND vmh.meter_hist_tstamp <= to_date ('04/18/2011', 'mm/dd/yyyy')
GROUP BY mach_sn, vmh.meter_def_id)
ORDER BY device_source_id, vmh.meter_def_id, meter_hist_tstamp;

Earlier, it was taking too much time, but it started to work faster when I added the /*+ PUSH_SUBQ */ hint in the select query.
The explain plan generated for the same is :
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 29M| 3804M| | 15M (4)| 53:14:08 |
| 1 | SORT ORDER BY | | 29M| 3804M| 8272M| 15M (4)| 53:14:08 |
|* 2 | FILTER | | | | | | |
|* 3 | HASH JOIN | | 29M| 3804M| | 8451K (2)| 28:10:19 |
| 4 | TABLE ACCESS FULL | GENERIC_METER_DEF | 11 | 264 | | 3 (0)| 00:00:01 |
|* 5 | HASH JOIN RIGHT OUTER| | 29M| 3137M| 19M| 8451K (2)| 28:10:17 |
| 6 | TABLE ACCESS FULL | DEVICE_INFO | 589K| 12M| | 799 (2)| 00:00:10 |
|* 7 | HASH JOIN | | 29M| 2527M| 2348M| 8307K (2)| 27:41:29 |
|* 8 | HASH JOIN | | 28M| 2016M| | 6331K (2)| 21:06:19 |
|* 9 | TABLE ACCESS FULL | METER_DEF | 33 | 990 | | 4 (0)| 00:00:01 |
| 10 | TABLE ACCESS FULL | METER_HIST | 3440M| 137G| | 6308K (2)| 21:01:44 |
| 11 | TABLE ACCESS FULL | MACH_XFER_HIST | 436M| 7501M| | 1233K (1)| 04:06:41 |
|* 12 | FILTER | | | | | | |
| 13 | HASH GROUP BY | | 1 | 26 | | 6631K (7)| 22:06:15 |
|* 14 | FILTER | | | | | | |
| 15 | TABLE ACCESS FULL | METER_HIST | 3440M| 83G| | 6304K (2)| 21:00:49 |
------------------------------------------------------------------------------------------------------

Is there any other way to optimise it more? Please suggest, since I am new to query tuning.
Thanks and Regards
KK

Hi Dom,
Greetings. Sorry for the delayed response. I have read the How to Post document.
I will provide all the required information here now :
Version : 10.2.0.4
OS : Linux

The SQL query which is facing the performance issue:
SELECT mh.meter_hist_id, mxh.mach_sn, mxh.collectiontag, mxh.rcvd_tstamp,
mxh.mach_xfer_id, md.meter_def_id, md.meter_name, md.meter_type,
md.meter_units, md.meter_desc, mh.meter_value, mh.meter_hist_tstamp,
mh.max_value, md.generic_meter_id
FROM meter_hist mh JOIN mach_xfer_hist mxh
ON mxh.mach_xfer_id = mh.mach_xfer_id
JOIN meter_def md ON md.meter_def_id = mh.meter_def_id;

Explain plan for this query:
Plan hash value: 1878059220
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3424M| 497G| | 17M (1)| 56:42:49 |
|* 1 | HASH JOIN | | 3424M| 497G| | 17M (1)| 56:42:49 |
| 2 | TABLE ACCESS FULL | METER_DEF | 423 | 27918 | | 4 | 00:00:01 |
|* 3 | HASH JOIN | | 3424M| 287G| 26G| 16M (1)| 56:38:16 |
| 4 | TABLE ACCESS FULL| MACH_XFER_HIST | 432M| 21G| | 1233K (1)| 04:06:40 |
| 5 | TABLE ACCESS FULL| METER_HIST | 3438M| 115G| | 6299K (2)| 20:59:54 |
Predicate Information (identified by operation id):
1 - access("MD"."METER_DEF_ID"="MH"."METER_DEF_ID")
3 - access("MH"."MACH_XFER_ID"="MXH"."MACH_XFER_ID")Parameters :
show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 70
optimizer_index_cost_adj integer 50
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE

show parameter db_file_multi
db_file_multiblock_read_count integer 8
show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
select sname , pname , pval1 , pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 07-12-2011 09:22
SYSSTATS_INFO DSTOP 07-12-2011 09:52
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 1153.92254
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 4.398
SYSSTATS_MAIN MREADTIM 3.255
SYSSTATS_MAIN CPUSPEED 180
SYSSTATS_MAIN MBRC 8
SYSSTATS_MAIN MAXTHR 244841472
SYSSTATS_MAIN SLAVETHR 933888
show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192

Please let me know if any other information is needed. This query is currently taking almost one hour to run right now.
Also, we have two indexes, on the columns xfer_id and meter_def_id in both the tables, but they are not getting used without any filtering (WHERE clause).
Would adding a hint to the query above be of some help?
Thanks and Regards
KK -
Query Performance Tuning - Help
Hello Experts,
Good Day to all...
TEST@ora10g>select * from v$version;
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
"CORE 10.2.0.4.0 Production"
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SELECT fa.user_id,
fa.notation_type,
MAX(fa.created_date) maxDate,
COUNT(*) bk_count
FROM book_notations fa
WHERE fa.user_id IN
( SELECT user_id
FROM
( SELECT /*+ INDEX(f2,FBK_AN_ID_IDX) */ f2.user_id,
MAX(f2.notatn_id) f2_annotation_id
FROM book_notations f2,
title_relation tdpr
WHERE f2.user_id IN ('100002616221644',
'100002616221645',
'100002616221646',
'100002616221647',
'100002616221648')
AND f2.pack_id=tdpr.pack_id
AND tdpr.title_id =93402
GROUP BY f2.user_id
ORDER BY 2 DESC)
WHERE ROWNUM <= 10)
GROUP BY fa.user_id,
fa.notation_type
ORDER BY 3 DESC;

The cost of the query is too much...
Below is the explain plan of the query
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 29 | 1305 | 52 (10)| 00:00:01 |
| 1 | SORT ORDER BY | | 29 | 1305 | 52 (10)| 00:00:01 |
| 2 | HASH GROUP BY | | 29 | 1305 | 52 (10)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID | book_notations | 11 | 319 | 4 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 53 | 2385 | 50 (6)| 00:00:01 |
| 5 | VIEW | VW_NSO_1 | 5 | 80 | 29 (7)| 00:00:01 |
| 6 | HASH UNIQUE | | 5 | 80 | | |
|* 7 | COUNT STOPKEY | | | | | |
| 8 | VIEW | | 5 | 80 | 29 (7)| 00:00:01 |
|* 9 | SORT ORDER BY STOPKEY | | 5 | 180 | 29 (7)| 00:00:01 |
| 10 | HASH GROUP BY | | 5 | 180 | 29 (7)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID | book_notations | 5356 | 135K| 26 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 6917 | 243K| 27 (0)| 00:00:01 |
| 13 | MAT_VIEW ACCESS BY INDEX ROWID| title_relation | 1 | 10 | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | IDX_TITLE_ID | 1 | | 1 (0)| 00:00:01 |
| 15 | INLIST ITERATOR | | | | | |
|* 16 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 5356 | | 4 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 746 | | 1 (0)| 00:00:01 |
Table Details
SELECT COUNT(*) FROM book_notations; --111367
Columns
user_id -- nullable field - VARCHAR2(50 BYTE)
pack_id -- NOT NULL --NUMBER
notation_type-- VARCHAR2(50 BYTE) -- nullable field
CREATED_DATE - DATE -- nullable field
notatn_id - VARCHAR2(50 BYTE) -- nullable field
Index
FBK_AN_ID_IDX - Non unique - Composite columns --> (user_id and pack_id)
SELECT COUNT(*) FROM title_relation; --12678
Columns
pack_id - not null - number(38) - PK
title_id - not null - number(38)
Index
IDX_TITLE_ID - Non Unique - TITLE_ID
Please help...
Thanks...

Linus wrote:
Thanks Bravid for your reply; highly appreciate that.
So as you say, index creation on the NULL column doesn't have any impact. OK, fine.
What happens to the execution plan, performance and the stats when you remove the index hint?
Find below the Execution Plan and Predicate information
"PLAN_TABLE_OUTPUT"
"Plan hash value: 126058086"
"| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |"
"| 0 | SELECT STATEMENT | | 25 | 1125 | 55 (11)| 00:00:01 |"
"| 1 | SORT ORDER BY | | 25 | 1125 | 55 (11)| 00:00:01 |"
"| 2 | HASH GROUP BY | | 25 | 1125 | 55 (11)| 00:00:01 |"
"| 3 | TABLE ACCESS BY INDEX ROWID | book_notations | 10 | 290 | 4 (0)| 00:00:01 |"
"| 4 | NESTED LOOPS | | 50 | 2250 | 53 (8)| 00:00:01 |"
"| 5 | VIEW | VW_NSO_1 | 5 | 80 | 32 (10)| 00:00:01 |"
"| 6 | HASH UNIQUE | | 5 | 80 | | |"
"|* 7 | COUNT STOPKEY | | | | | |"
"| 8 | VIEW | | 5 | 80 | 32 (10)| 00:00:01 |"
"|* 9 | SORT ORDER BY STOPKEY | | 5 | 180 | 32 (10)| 00:00:01 |"
"| 10 | HASH GROUP BY | | 5 | 180 | 32 (10)| 00:00:01 |"
"| 11 | TABLE ACCESS BY INDEX ROWID | book_notations | 5875 | 149K| 28 (0)| 00:00:01 |"
"| 12 | NESTED LOOPS | | 7587 | 266K| 29 (0)| 00:00:01 |"
"| 13 | MAT_VIEW ACCESS BY INDEX ROWID| title_relation | 1 | 10 | 1 (0)| 00:00:01 |"
"|* 14 | INDEX RANGE SCAN | IDX_TITLE_ID | 1 | | 1 (0)| 00:00:01 |"
"| 15 | INLIST ITERATOR | | | | | |"
"|* 16 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 5875 | | 4 (0)| 00:00:01 |"
"|* 17 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 775 | | 1 (0)| 00:00:01 |"
"Predicate Information (identified by operation id):"
" 7 - filter(ROWNUM<=10)"
" 9 - filter(ROWNUM<=10)"
" 14 - access(""TDPR"".""TITLE_ID""=93402)"
" 16 - access((""F2"".""USER_ID""='100002616221644' OR ""F2"".""USER_ID""='100002616221645' OR "
" ""F2"".""USER_ID""='100002616221646' OR ""F2"".""USER_ID""='100002616221647' OR "
" ""F2"".""USER_ID""='100002616221648') AND ""F2"".""PACK_ID""=""TDPR"".""PACK_ID"")"
" 17 - access(""FA"".""USER_ID""=""$nso_col_1"")"
The cost is the same because the plan is the same. The optimiser chose to use that index anyway. The point is, now that you have removed it, the optimiser is free to choose other indexes or a full table scan if it wants to.
Statistics
BEGIN
DBMS_STATS.GATHER_TABLE_STATS ('TEST', 'BOOK_NOTATIONS');
END;
"COLUMN_NAME" "NUM_DISTINCT" "NUM_BUCKETS" "HISTOGRAM"
"NOTATION_ID" 110269 1 "NONE"
"USER_ID" 213 212 "FREQUENCY"
"PACK_ID" 20 20 "FREQUENCY"
"NOTATION_TYPE" 8 8 "FREQUENCY"
"CREATED_DATE" 87 87 "FREQUENCY"
"CREATED_BY" 1 1 "NONE"
"UPDATED_DATE" 2 1 "NONE"
"UPDATED_BY" 2 1 "NONE"
After removing the hint, the query still shows the same "COST".
Autotrace
recursive calls 1
db block gets 0
consistent gets 34706
physical reads 0
redo size 0
bytes sent via SQL*Net to client 964
bytes received via SQL*Net from client 1638
SQL*Net roundtrips to/from client 2
sorts (memory) 3
sorts (disk) 0
Output of query
"USER_ID" "NOTATION_TYPE" "MAXDATE" "COUNT"
"100002616221647" "WTF" 08-SEP-11 20000
"100002616221645" "LOL" 08-SEP-11 20000
"100002616221644" "OMG" 08-SEP-11 20000
"100002616221648" "ABC" 08-SEP-11 20000
"100002616221646" "MEH" 08-SEP-11 20000
Thanks...I still don't know what we're working towards at the moment. What is the current run time? What is the expected run time?
I can't tell you whether there's a better way to write this query, or indeed another way to write it, because I don't know what it is attempting to achieve.
I can see that you're accessing 100k rows from a 110k row table and it's using an index to look those rows up. That seems like a job for a full table scan rather than index lookups.
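To see why a full scan can beat 100k index lookups, here is a back-of-envelope cost model. All of the numbers below are assumptions for illustration — none come from the thread itself:

```python
# Rough cost model for index access vs. full scan (illustrative assumptions:
# 100 rows per block, branch blocks cached, one block get per row via index).
rows_total = 110_000          # rows in the table
rows_needed = 100_000         # rows the query actually touches
rows_per_block = 100          # assumed packing density

scan_gets = rows_total // rows_per_block   # full scan reads every block once
index_gets = rows_needed                   # ~1 block get per row visited

print(f"full scan ~{scan_gets} block gets, index access ~{index_gets} block gets")
```

Under these assumed numbers the index path does roughly 90x the logical I/O of the scan, which is the kind of effect behind a large "consistent gets" figure like the one in the autotrace output above.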
David -
Outerjoin query performance tuning
Hi,
I have two tables, tab1 (7 million records) and tab2 (50,000 records).
The following query is taking more than 15 minutes to fetch the result from the database.
SELECT a.col11, a.col12, b.col11, b.col12
FROM tab1 a, tab2 b
WHERE a.col1 = b.col1 (+) AND
a.col2 = b.col2 (+) AND
a.col3 = b.col3 (+) AND
a.col4 = b.col4 (+);
Please suggest ways to tune the above query. I am working on Oracle 9i Release 2.
Thanks in advance.
You should probably go through the usual steps to tune the query ... get a plan, check indexes, etc.
Ideally you could eliminate the outer join, but that's not usually an option :(.
One idea that might or might not help is to execute 2 different queries: a normal join UNIONed to one where the join fails. In my experience it stands about a 50% chance of improving performance, but you must be absolutely sure the modified query returns the same results as the original. The correlated subquery below must use indexes for this to help. The idea looks something like
select a.col1, a.col12, b.col11, b.col12
from tab1 a, tab2 b
where a.col1 = b.col1
and a.col2 = b.col2
and a.col3 = b.col3
and a.col4 = b.col4
UNION ALL
select a.col1, a.col12, null, null
from tab1 a
where not exists (
select 0
from tab2 b
where a.col1 = b.col1
and a.col2 = b.col2
and a.col3 = b.col3
and a.col4 = b.col4
) -
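If you want to convince yourself that the two forms return the same rows before deploying such a rewrite, a quick harness helps. This sketch uses Python's sqlite3 with made-up data, and ANSI LEFT OUTER JOIN standing in for Oracle's (+) syntax:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab1 (col1, col2, col3, col4, col11, col12);
CREATE TABLE tab2 (col1, col2, col3, col4, col11, col12);
INSERT INTO tab1 VALUES (1,1,1,1,'a1','a2'), (2,2,2,2,'b1','b2'), (3,3,3,3,'c1','c2');
INSERT INTO tab2 VALUES (1,1,1,1,'x1','x2'), (2,2,2,2,'y1','y2');
""")

join_cond = """a.col1 = b.col1 AND a.col2 = b.col2
           AND a.col3 = b.col3 AND a.col4 = b.col4"""

# The original outer join (ANSI syntax instead of Oracle's (+)).
outer = con.execute(f"""
    SELECT a.col11, a.col12, b.col11, b.col12
    FROM tab1 a LEFT OUTER JOIN tab2 b ON {join_cond}""").fetchall()

# The suggested rewrite: inner join UNION ALL the rows with no match.
rewritten = con.execute(f"""
    SELECT a.col11, a.col12, b.col11, b.col12
    FROM tab1 a JOIN tab2 b ON {join_cond}
    UNION ALL
    SELECT a.col11, a.col12, NULL, NULL
    FROM tab1 a
    WHERE NOT EXISTS (SELECT 0 FROM tab2 b WHERE {join_cond})""").fetchall()

# Row order may differ between the two forms, so compare as multisets.
assert sorted(outer, key=repr) == sorted(rewritten, key=repr)
print("rewrite matches:", len(outer), "rows")
```

This only checks the toy data, of course; the real check has to run against the actual tables, as the reply above warns.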
Query Performance Tuning from a cockpit
When running some queries from within a cockpit we are expreienceing long search times from the cubes. What methods can be used to tune queries and improve search times? Is there a way to get an explain plan (Oracle featrue) of the query? Can we trace the steps and bottle necks that a query runs into?
Ryan, Try to search for the topic posted couple a days back. All you have to do is search for Performance word.
Topics:Performance started by "thisquestion 4u"
Posted: Jul 25, 2005
Let me know if you need help.
Pete -
Hello,
Can someone please tell me if there is any way to improve this query to perform better? I tried using a temp table instead of a CTE; it didn't change anything. I think there is a problem with the latter part of the query. When I execute every select individually they run faster. This query is taking hours to execute and is taking all the CPU.
lucky
Why do you need a FULL JOIN if you're applying a filter against the cte AGE anyway? A LEFT OUTER JOIN on a.PAT_ID = b.PAT_ID would do the same...
Furthermore, I wouldn't place the WHERE clause at the very end of this rather complex query.
Limit the result set of cte AGE by using
WHERE b.DOB < DATEADD(YEAR,-10,@date) AND b.DOB > DATEADD(YEAR,-23,@date)
for each section of the cte.
If you're using DISTINCT all over the place, get rid of the overhead to eliminate duplicates within the cte and use UNION ALL instead.
Normalize your table and therewith avoid the numerous calls to CA.dbo.Trans_VS_ValueSetsToCodes_2014.
Perform the UNION (ALL?) of QI.dbo.SWHP_CLAIMS_MASTER and SWHP_ANALYTICS.dbo.CLAIMS and apply the WHERE clause to the result set instead of to each set separately.
Maybe even store some of the sub-results in a temp table and go from there.
That's all I can see at a first glance.
To summarize: there's huge room for improvement and most probably much more to do than just that single query... And definitely more than can/should be answered on a forum... -
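On the DISTINCT vs UNION ALL point in the reply above: UNION has to deduplicate the combined result, while UNION ALL just concatenates. A small sqlite3 sketch (toy data, assumed single-table schema) makes the difference visible in the row counts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (pat_id INTEGER, code TEXT)")
con.executemany("INSERT INTO claims VALUES (?, ?)",
                ((i % 1000, "c%d" % (i % 50)) for i in range(20000)))

# UNION must sort/hash the combined result to drop duplicates; UNION ALL just
# concatenates. When the branches cannot overlap (or duplicates are harmless),
# UNION ALL gives the same answer without paying for the dedup step.
union_rows = con.execute(
    "SELECT pat_id, code FROM claims UNION SELECT pat_id, code FROM claims").fetchall()
union_all_rows = con.execute(
    "SELECT pat_id, code FROM claims UNION ALL SELECT pat_id, code FROM claims").fetchall()

print(len(union_rows), "distinct rows vs", len(union_all_rows), "concatenated rows")
```

The dedup work UNION does here is the same overhead DISTINCT adds inside each branch of the CTE, which is why the advice is to drop it where duplicates cannot occur.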
Hi all,
I am facing a problem extracting data and inserting it into another table. I need to fetch part numbers from all_parts_already and then, based on the result, extract places from all_parts and insert them into all_parts_already.
Sample data in the tables:
table all_parts_already:
part part_desc technique company place
1 A Engine TVS B1
1 Av Engine TVS B2
1 Ab Engine TVS B3
2 Ah Engine TVS B3
2 Ap Engine TVS B2
table all_parts:
technique company place
Engine TVS B1
Kim TVS B2
Engine TVS B3
Engine TVS B4
XXXXX TVS B5
Engine TVS B6
for c1 in (select distinct parts from all_parts_already where
technique = 'Engine' and
Company = 'TVS' ) loop
for c2 in (select distinct place from all_parts where
technique = 'Engine' and
Company = 'TVS'
minus
select distinct place from all_parts_already where
technique = 'Engine' and
Company = 'TVS' and
parts = c1.parts ) loop
insert into all_parts_already (select c2.place,place_desc,c1.parts,c2.place from place_master where parts=c1.parts and place=c2.place);
end loop;
end loop;
The data I am dealing with is in the millions. One technique may have 1000 parts. One part may have 500 places. So the loop runs that many times, creating the delay.
Please tell me how to move forward. I am getting the output I need, but the time it takes is too much (it goes on for days).
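Rather than two nested cursor loops, this kind of gap-filling can usually be done in a single set-based INSERT ... SELECT. Below is a minimal sketch of the idea using Python's sqlite3 and the sample data from the question. NOT EXISTS stands in for MINUS, and the place_master join and description columns are simplified away — treat it as the shape of the solution, not the finished statement:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE all_parts_already (part, part_desc, technique, company, place);
CREATE TABLE all_parts (technique, company, place);
INSERT INTO all_parts_already VALUES
  (1,'A','Engine','TVS','B1'), (1,'Av','Engine','TVS','B2'),
  (1,'Ab','Engine','TVS','B3'), (2,'Ah','Engine','TVS','B3'),
  (2,'Ap','Engine','TVS','B2');
INSERT INTO all_parts VALUES
  ('Engine','TVS','B1'), ('Kim','TVS','B2'), ('Engine','TVS','B3'),
  ('Engine','TVS','B4'), ('XXXXX','TVS','B5'), ('Engine','TVS','B6');
""")

# One set-based statement instead of two nested cursor loops: build every
# known (part, place) combination, then keep only the ones not yet present.
con.execute("""
INSERT INTO all_parts_already (part, technique, company, place)
SELECT c.part, 'Engine', 'TVS', c.place
FROM (SELECT DISTINCT a.part, p.place
      FROM all_parts_already a
      CROSS JOIN (SELECT DISTINCT place FROM all_parts
                  WHERE technique = 'Engine' AND company = 'TVS') p
      WHERE a.technique = 'Engine' AND a.company = 'TVS') c
WHERE NOT EXISTS (SELECT 1 FROM all_parts_already x
                  WHERE x.part = c.part AND x.place = c.place)
""")

# The newly inserted gap rows have no part_desc yet.
missing = con.execute(
    "SELECT part, place FROM all_parts_already "
    "WHERE part_desc IS NULL ORDER BY part, place").fetchall()
print(missing)
```

In Oracle the same shape becomes one INSERT ... SELECT with MINUS (or NOT EXISTS), letting the database do a single hash anti-join instead of millions of per-row cursor executions.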
Thanks a lot
RESULT TABLE AS BELOW
part place machine country
P2 C3 M1 I1
P2 C4 M1 I1
P4 C1 M1 I1
P4 C2 M1 I1
P4 C4 M1 I1
P3 C1 M1 I1
P3 C2 M1 I1
P3 C3 M1 I1
I don't get the relationship to country.
How do you determine what the missing country is?
You say that on all of the above the I1 is missing, and yet in table 'a' parts can have different countries?
That point aside....
This probably isn't the simplest way to do it, but maybe this is moving in the right direction:
SQL> with b as
       (select 'c1' place, 'm1' machine from dual union all
        select 'c2' place, 'm1' machine from dual union all
        select 'c3' place, 'm1' machine from dual union all
        select 'c4' place, 'm1' machine from dual)
     , a as
       (select 'p1' part, 'c1' place, 'm1' machine, 'i1' country from dual union all
        select 'p1' part, 'c2' place, 'm1' machine, 'i2' country from dual union all
        select 'p1' part, 'c3' place, 'm1' machine, 'i3' country from dual union all
        select 'p1' part, 'c4' place, 'm1' machine, 'i1' country from dual union all
        select 'p2' part, 'c1' place, 'm1' machine, 'i1' country from dual union all
        select 'p2' part, 'c2' place, 'm1' machine, 'i2' country from dual union all
        select 'p4' part, 'c3' place, 'm1' machine, 'i3' country from dual union all
        select 'p3' part, 'c4' place, 'm1' machine, 'i1' country from dual)
     , x as
       (select distinct part, machine
        from a)
     , all_parts as
       (select b.place, x.part, x.machine
        from x, b
        where b.machine = x.machine)
     select *
     from all_parts ap
        , a aa
     where aa.place (+) = ap.place
     and aa.part (+) = ap.part
     and aa.machine (+) = ap.machine
     --and aa.place is null
     ;
PL PA MA PA PL MA CO
c1 p1 m1 p1 c1 m1 i1
c2 p1 m1 p1 c2 m1 i2
c3 p1 m1 p1 c3 m1 i3
c4 p1 m1 p1 c4 m1 i1
c1 p2 m1 p2 c1 m1 i1
c2 p2 m1 p2 c2 m1 i2
c3 p4 m1 p4 c3 m1 i3
c4 p3 m1 p3 c4 m1 i1
c2 p4 m1
c3 p2 m1
c3 p3 m1
c2 p3 m1
c4 p4 m1
c4 p2 m1
c1 p4 m1
c1 p3 m1
16 rows selected.
SQL>
Just uncomment the line "and aa.place is null" to get the missing rows.
Edited by: Dom Brooks on Nov 10, 2011 12:09 PM -
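The pattern in the SQL*Plus session above — outer-join the generated combinations to the real rows, then keep the ones where the join failed — can be replayed with the same toy data in sqlite3, with ANSI LEFT JOIN replacing the (+) notation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE b (place, machine);
INSERT INTO b VALUES ('c1','m1'), ('c2','m1'), ('c3','m1'), ('c4','m1');
CREATE TABLE a (part, place, machine, country);
INSERT INTO a VALUES
  ('p1','c1','m1','i1'), ('p1','c2','m1','i2'), ('p1','c3','m1','i3'),
  ('p1','c4','m1','i1'), ('p2','c1','m1','i1'), ('p2','c2','m1','i2'),
  ('p4','c3','m1','i3'), ('p3','c4','m1','i1');
""")

# Expand every (part, machine) against every place for that machine, then
# keep only the combinations with no matching row in a (the "anti-join",
# i.e. the commented-out "aa.place is null" filter switched on).
missing = con.execute("""
    SELECT ap.part, ap.place
    FROM (SELECT DISTINCT x.part, b.place, x.machine
          FROM (SELECT DISTINCT part, machine FROM a) x
          JOIN b ON b.machine = x.machine) ap
    LEFT JOIN a aa
      ON aa.place = ap.place AND aa.part = ap.part AND aa.machine = ap.machine
    WHERE aa.place IS NULL
    ORDER BY ap.part, ap.place
""").fetchall()
print(missing)
```

Of the 16 generated combinations, 8 match existing rows and 8 come back as missing — the same split the SQL*Plus output shows.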
Hi Experts,
I have a query built on a MultiProvider which has slow performance.
I think the main problem comes from selecting too many records:
it selects about 1.1 million rows to display about 500 rows in the result.
Another point could be the complicated restricted and calculated key figures, which might spend a lot of time in the OLAP processor.
Here are the Statistics of the Query.
OLAP Initialization : 3,186906
Wait Time, User : 56,971169
OLAP: Settings 0,983193
Read Cache 0,015642
Delete Cache 0,019030
Write Cache 0,087655
Data Manager 462,039167
OLAP: Data Selection 0,671566
OLAP: Data Transfer 1,257884.
ST03 Stat:
%OLAP :22,74
%DB :77,18
OLAP Time :29,2
DBTime :99,1
It seems that the maximum time is consuming in the Database
Any suggestion to speed up this Query response time would be great.
Thanks in advance.
BR
Srini.
Hi,
You need to have standard query performance tuning done for the underlying cubes: better design, aggregates, etc.
Improve Performance of Queries/Reports on Multi Cubes
Refer SAP Note Number: 869487
Performance optimization for MultiCubes
How to Create Efficient Multi-Provider Queries
Please see the How to Guide "How to Create Efficient MultiProvider Queries" at http://service.sap.com/bi > SAP NetWeaver 2004 - Release-Specific Information > How-to Guides > Business Intelligence
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/how%20to%20create%20efficient%20multiprovider%20queries.pdf
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
Performance of MultiProviders
Multiprovider performance / aggregate question
Query Performance
Multicube performances
Create Efficient MultiProvider Queries
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b03b7f4c-c270-2910-a8b8-91e0f6d77096
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/751be690-0201-0010-5e80-f4f92fb4e4ab
Also try
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
tuning, short dumps
Performance tuning in BW:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
Also notes
0000903559 MultiProvider optimization is only partially active
0000825396 Performance in reports with many selections
multiprovider explanation i need
Note 629541 - Multiprovider: Parallel Processing
Thanks,
JituK -
Hi,
I am working on an application developed in Forms 10g and Oracle 10g.
I have a few very large transaction tables in the database, and most of the screens in my application are based on these tables.
When a user performs a query (without any filter conditions) the whole table is loaded into memory, which takes a very long time. Further queries on the same screen perform better.
How can I keep these tables in memory (buffered) at all times to reduce the initial query time?
or
Is there any way to share the session buffers with other sessions, so that it does not take a long time in each session?
or
Any query performance tuning suggestions will be appreciated.
Thanks in advance
> Thanks a lot for your posts, very large means around 12 million rows.
Yep, that's a large table.
> I have set the query all records to "No".
Which is good. It means only enough records are fetched to fill the initial block. That's probably about 10 records. All the other records are not fetched from the database, so they're also not kept in memory at the Forms server.
> Even when I try the query in SQL*Plus it is taking a long time.
Sounds like a query performance problem, not a Forms issue. You're probably better off asking in the database or SQL forum. You could at least include the SELECT statement here if you want any help with it. We can't guess why a query is slow if we have no idea what the query is.
> My concern is, when I execute the same query again or in another session (some other user or the same user), can I increase the performance because the tables are already in memory? Is there any possibility for this? Can I set any database parameters to share the data between sessions like that...
The database already does this. If data is retrieved from disk for one user it is cached in the SGA (Shared Global Area). Mind the word Shared. This cached information is shared by all sessions, so other users should benefit from it.
Caching also has its limits. The most obvious one is the size of the SGA of the database server. If the table is 200 megabytes and the server only has 8 megabytes of cache available, then caching is of little use.
> Am I thinking in the right way? Or am I lost somewhere?
Don't know.
There are two approaches:
- try to tune the query or database for better performance. For starters, open SQL*Plus, execute "set timing on", then execute "set autotrace traceonly explain statistics", then execute your query and look at the results. It should give you an idea of how the database is executing the query and what improvements could be made. You could come back here with the SELECT statement and timing and trace results, but the database or SQL forum is probably better
- MORE IMPORTANTLY: think about whether it is necessary for users to perform such time-consuming (and perhaps complex) queries. Do users really need the ability to query all records? Are they ever going to browse through millions of records?
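The "fetch only enough records to fill the block" behavior Forms gives you for free can be reproduced in any client by fetching incrementally instead of materializing the whole result. A small sqlite3 sketch with a synthetic table (sizes assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO big_table VALUES (?, ?)",
                ((i, "row %d" % i) for i in range(100_000)))

# Walking the primary key streams rows in key order; fetchmany() pulls only
# one screenful, so the client never holds the full result set in memory.
cur = con.execute("SELECT id FROM big_table ORDER BY id")
first_page = cur.fetchmany(10)
print(first_page)
```

The database still has to be able to produce the first rows cheaply (here the primary key index makes the ORDER BY free), which is why tuning the query itself remains the first step.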
Thanks -
Hi Gurus,
I am working on performance tuning at the moment and want some tips regarding query performance tuning, if anyone can help me with that.
The thing is, I have got an idea about the system, and now the issues are with DB space, ABAP dumps, problems with free space in tablespaces, no number range buffering, cubes using too many aggregates, large InfoCubes, large ODS, and many others.
So my question is: can anyone tell me how to resolve the issues with the large master data tables and the large ODS? One more important issue is KPIs exceeding their reference values, so any idea how to deal with them?
Waiting for the valuable responses.
Thanks in advance
Regards
Amit
Hi Amit
For query performance issues you can go for:
Aggregates: they will help you a lot in making your query faster, because the query doesn't hit your cube, it hits the aggregates, which have far fewer records compared to your cube.
Secondly, I would suggest you use CKFs in place of formulas, if there are any in the query.
Another thing is to avoid, to the extent possible, the use of navigational attributes. If you want to use them, use them at a minimal level. The reason I say so is that during query execution, whenever there is a navigational attribute it adds an unnecessary join to your master data and thus decreases query performance.
Be specific with rows and columns; if you are not sure of a key figure or a characteristic, better put it in a free characteristic.
Use filters if possible.
If you follow these, I am sure your query performance will increase.
Assign points if applicable
Thanks
puneet