Why am I Observing Poor Geospatial Query Performance?
Our HANA Rev. 72 Amazon instance is performing very poorly when selecting geospatial polygons from tables. I'm hoping someone out there knows why.
Here's one example. The table below has just 51 records, one for each state (including Puerto Rico). The table has a GEOMETRY column that contains a polygon outline of the state.
The query below uses ST_Covers() to find the state at a given latitude, longitude. It takes more than 9 seconds to execute!
SELECT STATE_NAME, STATE_FIPS as STATE_ID, SHAPE.ST_AsGeoJSON() as STATE_POLYGON
from GEO_SHAPES.US_STATES where SHAPE.ST_Covers(new ST_Point('POINT(-105.123 39.456)')) = 1
Statement 'SELECT STATE_NAME, STATE_FIPS as STATE_ID, SHAPE.ST_AsGeoJSON() as STATE_POLYGON from ...' successfully executed in 9.326 seconds (server processing time: 8.629 seconds)
Essentially all the time goes into the SHAPE.ST_Covers() function. The same query without this function runs in 6 ms.
Does anybody have any idea why?
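One thing I still plan to rule out is a spatial reference system mismatch between the probe point and the column. A minimal check, assuming the common WGS84 SRID 4326 (that value is an assumption on my part, not confirmed from our table):
-- what SRID does the SHAPE column actually carry?
SELECT SHAPE.ST_SRID() FROM GEO_SHAPES.US_STATES LIMIT 1;
-- construct the probe point with an explicit SRID so it matches the column
SELECT STATE_NAME
FROM GEO_SHAPES.US_STATES
WHERE SHAPE.ST_Covers(NEW ST_Point('POINT(-105.123 39.456)', 4326)) = 1;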
JeffKasper wrote:
So I have been running activity monitor and what I am seeing is 40 MB of "Free" RAM with 630 MB "Wired", 2.5 GB "Inactive" and about 5 GB of "Active".
Both "Free" and "Inactive" are (supposed to be) available for use by any process that wants it.
When you first start up, most memory is, of course, "Free." As apps or system processes need memory, that's where OSX gets it. When they release it, however, it does not go back to Free, but to "Inactive" and is identified with the last process that used it. This is done to speed up assigning it back to the previous process if it requests it (which of course is quite common).
So as time passes, you'll see less and less Free memory and more and more Inactive memory; this means your Mac is working properly. In fact, after running for a long time, if there's much Free memory left, it is, in a sense, wasted!
The thing to watch for is Paging. If the "Page outs" figure is high, or changing rapidly, then OSX is having to page stuff out because it's out of both Free and Inactive memory.
A better way to monitor page-outs is via a Terminal command. (The Terminal app is in your Applications/Utilities folder.) Enter the following, exactly as shown, at the prompt:
sar -g 60 10
Leave the terminal window open, then try to re-create the unresponsive problem.
This should tell you if you are doing pageouts. You'll see a line in the Terminal window every 60 seconds for 10 minutes (or until you quit Terminal), showing the number of pageouts per second. A few pageouts is normal. If you have large numbers of pageouts, then you have a memory problem.
Similar Messages
-
Unexplained poor table query performance
Hi All
I am really open to any advice as I have hit a kind of brick wall. A developer came to me asking why a procedure was performing so slowly in beta as opposed to dev, and after looking at exactly what it did I identified the offending
select statement.
The query was basically passing some ids into a user-defined table and using those ids to filter.
SELECT gc.id
FROM temperatures AS gcm
LEFT OUTER JOIN gauges AS gc
    ON (gc.id = gcm.id OR gc.id IS NULL)
    AND (gc.countryid = gcm.countryid OR gcm.countryid IS NULL)
WHERE sourceid = 3
So the gauges table has around 90K rows, whereas temperatures has around 3 million.
OK, I test on the development server and the above returns in under 3 seconds, whereas on beta it is just over 1 minute.
The beta server, in terms of processing power, is much faster, and both have the same version of SQL Server 2012 SP1 (11.0.3128, x64).
Having run a quick query on index fragmentation, I find there are a few indexes within the temperatures table that are reasonably high. I then rebuild them and see that they are pretty much back to an acceptable level. Again I try the select a few times and get a range of times.
I then tried a restore from the weekend just to see if there was anything that may have changed, wondering if I was beginning to clutch at straws.
Lo and behold, the restore was not only quick, but from an index fragmentation point of view it was not in as great shape.
I've compared the two tables, which are identical, the only difference being the data, which I copied over to the restored table, and I got the same 2-second result.
Any help on what to do next would be great, as I could replace the table with the restored one, but I would like to know why this is happening.
Many Thanks
Robert
The query is a bit strange with the NULL checks on gc.id and gcm.countryid.
Since temperatures is the retained (outer) table, you can remove the part "or gcm.countryid is null".
Also, if table gauges does not allow NULLs (or does not have NULLs) in column id, you should remove the part "OR gc.id IS NULL".
If the query can be simplified as stated above, then all you need is a compound index on (id, countryid) or on (countryid, id) on both tables.
If the problem still persists, you can check the query plan to see what is different, and that should give you a clue about the issue.
Please note that for performance related queries, it is essential to show the exact query you are using. For example, if you are using a local variable or a parameter instead of "3" in your query, that makes a big difference.
If you need more help, then please post DDL for the tables and indexes that are involved.
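For illustration, the simplified query and indexes could look roughly like this (a sketch based only on the columns as posted; the index names are made up, and it assumes sourceid lives on temperatures):
SELECT gc.id
FROM temperatures AS gcm
LEFT OUTER JOIN gauges AS gc
    ON gc.id = gcm.id
    AND gc.countryid = gcm.countryid
WHERE gcm.sourceid = 3;
-- compound indexes supporting the join, as suggested above
CREATE INDEX IX_temperatures_id_countryid ON temperatures (id, countryid);
CREATE INDEX IX_gauges_id_countryid ON gauges (id, countryid);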
Gert-Jan -
Hi Team, below is the report view which is causing slowness; let me know if you have any suggestions.
CREATE VIEW [REPORT].[View_MachinePerformanceBySlot] AS
SELECT
VA.SITE_NUM,
VA.SLOT_NUMBER,
VA.AREA_NAME,
VA.MANUFACTURER_NAME,
VA.ATYP_ID,
VA.HOLD_PERCENTAGE,
VA.SLOT_DENOM,
VA.SERIAL_NUM,
VA.AREA_ID,
a.THEM_NAME,
a.GAME_NAME,
a.TCAT_LONG_NAME,
a.TGRP_LONG_NAME,
VA.MTYP_NAME,
VA.OWNER_LABEL_KEY,
vsmr.SDS_Bets AS HANDLE,
vsmr.SDS_Plays AS HANDLE_PULL,
vsmr.Days_On AS DAYS_ACTIVE,
VA.GAME_TYPE,
Mtr_NamedAsstID,
Mtr_GameDay AS MVR_GDAY_DATE,
vsmr.PTYP_ID AS MVR_PTYP_ID,
Mtr_PeriodType AS PERIOD_TYPE,
CAST(SDS_Bets AS FLOAT) AS MVR_BETS,
CAST(SLIP_APJP_JACKPOT + SLIP_PROGRESSIVE_JAKPT
+ SLIP_MYST_JACKPOT + SLIP_CC_JACKPOT
+ SLIP_CELEBRATION_JACKPOT + vsmr.CASH_PROG+vsmr.NONCASH_PROG
AS FLOAT)AS JACKPOTS,
SDS_MachinePaidProgressiveWins,
MVR_THEORETICAL_WIN,
(CASE WHEN vsmr.SDS_Bets = 0 THEN 0
ELSE (MVR_THEORETICAL_WIN * 100) / vsmr.SDS_Bets
END) AS ACTUAL_PERCENTAGE,
((CASE WHEN vsmr.SDS_Bets = 0 THEN 0 ELSE (MVR_THEORETICAL_WIN * 100) / vsmr.SDS_Bets END)
- VA.HOLD_PERCENTAGE) AS VAR_HOLD_PERCENT,
Days_On AS MVR_DAYS_ONLINE_VAL,
(vsmr.SDS_1_bills + vsmr.SDS_5_bills + vsmr.SDS_10_bills + vsmr.SDS_20_bills
+ vsmr.SDS_50_bills + vsmr.SDS_100_bills + vsmr.SDS_CoinDrop) AS Bills_Coins,
CAST(vsmr.SDS_1_bills + vsmr.SDS_5_bills + vsmr.SDS_10_bills + vsmr.SDS_20_bills
+ vsmr.SDS_50_bills + vsmr.SDS_100_bills + vsmr.SDS_CoinDrop
+ vsmr.SDS_EFTInCashablePromo + vsmr.SDS_EFTInNonCashable
+ vsmr.SDS_EFTInCashable + vsmr.SDS_TicketInCashable + vsmr.SDS_TicketInNonCashable
+ vsmr.SDS_TicketInPromoCashable
-( vsmr.SLIP_APJP_JACKPOT + vsmr.SLIP_PROG_JAKPT_ST + vsmr.SLIP_CC_JACKPOT +vsmr.SLIP_MYST_JACKPOT
+ vsmr.SLIP_DISPUTE + vsmr.SLIP_FILL+vsmr.SLIP_CELEBRATION_JACKPOT +vsmr.CASH_PROG
+vsmr.NONCASH_PROG - vsmr.SLIP_BLEED)
-( vsmr.SDS_EFTOutCashablePromo+vsmr.SDS_EFTOutNonCashable+vsmr.SDS_EFTOutCashable+
SDS_TicketOutNonCashable+SDS_TicketOutCashable)
AS FLOAT) AS SDS_WIN,
(vsmr.SLIP_APJP_JACKPOT + vsmr.SLIP_PROG_JAKPT_ST + vsmr.SLIP_CC_JACKPOT+vsmr.SLIP_MYST_JACKPOT
+ vsmr.SLIP_DISPUTE + vsmr.SLIP_FILL+vsmr.SLIP_CELEBRATION_JACKPOT - vsmr.SLIP_BLEED
)AS SLIP_EXPENSES,
-- WIN
CAST((vsmr.SDS_1_bills + vsmr.SDS_5_bills + vsmr.SDS_10_bills + vsmr.SDS_20_bills
+ vsmr.SDS_50_bills + vsmr.SDS_100_bills + vsmr.SDS_CoinDrop
+ vsmr.SDS_EFTInCashablePromo + vsmr.SDS_EFTInNonCashable
+ vsmr.SDS_EFTInCashable + vsmr.SDS_TicketInCashable + vsmr.SDS_TicketInNonCashable
+ vsmr.SDS_TicketInPromoCashable
)AS FLOAT) AS WIN,
-- SHORTS
(vsmr.SLIP_APJP_JACKPOT +vsmr.SLIP_PROG_JAKPT_ST + vsmr.SLIP_CC_JACKPOT
+vsmr.SLIP_DISPUTE +vsmr.SLIP_CELEBRATION_JACKPOT
+(vsmr.SLIP_FILL - vsmr.SLIP_BLEED)
+(vsmr.SDS_EFTOutCashablePromo+vsmr.SDS_EFTOutNonCashable+vsmr.SDS_EFTOutCashable
+SDS_TicketOutNonCashable+SDS_TicketOutCashable)
)AS SHORTS,
-- MYSTERY_SHORT
vsmr.SLIP_MYST_JACKPOT + vsmr.CASH_PROG + vsmr.NONCASH_PROG AS MYSTERY_SHORT,
-- SLIP_LINK_PROG_JAKPT
ISNULL(SLIP_LINK_PROG_JAKPT, 0) AS SLIP_LINK_PROG_JAKPT,
-- ACTUAL_WIN
(ISNULL(ActualMtr.ACTUAL_CASH_COUPON_VAL, 0)
+ISNULL(ActualMtr.ACTUAL_NONCASH_COUPON_VAL, 0)
+ISNULL((ActualMtr.ACTUAL_1_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_5_BILLS), 0)
+ISNULL((ActualMtr.ACTUAL_10_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_20_BILLS), 0)
+ISNULL((ActualMtr.ACTUAL_50_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_100_BILLS), 0)
+ISNULL(vamcr.SCALE_AMT, 0)
+ISNULL(ActualMtr.ACTUAL_TKTINCASH, 0)
+ISNULL(ActualMtr.ACTUAL_TKTINNONCASH, 0)
+ISNULL(ActualMtr.ACTUAL_TKTINPROMOCASH, 0)
+(vsmr.SDS_EFTInCashablePromo + vsmr.SDS_EFTInNonCashable + vsmr.SDS_EFTInCashable )
-(vsmr.SLIP_APJP_JACKPOT + vsmr.SLIP_PROG_JAKPT_ST + vsmr.SLIP_CC_JACKPOT+vsmr.SLIP_MYST_JACKPOT
+ vsmr.SLIP_DISPUTE + vsmr.SLIP_FILL +vsmr.SLIP_CELEBRATION_JACKPOT
+vsmr.CASH_PROG+vsmr.NONCASH_PROG - vsmr.SLIP_BLEED)
-(vsmr.SDS_EFTOutCashablePromo+vsmr.SDS_EFTOutNonCashable+vsmr.SDS_EFTOutCashable
+SDS_TicketOutNonCashable+SDS_TicketOutCashable)
-ISNULL(SLIP_LINK_PROG_JAKPT, 0)
) AS ACTUAL_WIN
,(ISNULL(ActualMtr.ACTUAL_CASH_COUPON_VAL, 0)
+ISNULL(ActualMtr.ACTUAL_NONCASH_COUPON_VAL, 0)
+ISNULL((ActualMtr.ACTUAL_1_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_5_BILLS), 0)
+ISNULL((ActualMtr.ACTUAL_10_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_20_BILLS), 0)
+ISNULL((ActualMtr.ACTUAL_50_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_100_BILLS), 0)
+ISNULL(vamcr.SCALE_AMT, 0)
+ISNULL(ActualMtr.ACTUAL_TKTINCASH, 0)
+ISNULL(ActualMtr.ACTUAL_TKTINNONCASH, 0)
+ISNULL(ActualMtr.ACTUAL_TKTINPROMOCASH, 0)
+ISNULL(vsmr.SDS_EFTInCashablePromo,0 )
+ISNULL(vsmr.SDS_EFTInNonCashable, 0)
+ISNULL(vsmr.SDS_EFTInCashable, 0)) AS PHY_WIN
,(vsmr.CASH_PROG_SLOT_CONTRIBUTION + vsmr.CASH_PSR_ARV_AMT_VAL+vsmr.NON_CASH_PROG_SLOT_CONTRIBUTION
+vsmr.NON_CASH_PSR_ARV_AMT_VAL ) AS PROVISION,
ISNULL(a.PTBL_NO_OF_PAYLINES,0) AS PTBL_NO_OF_PAYLINES -- check with ARV
FROM ACCOUNTING.VIEW_SDS_METER_ROLLUP_WITH_PROG AS vsmr
JOIN REPORT.VIEW_ASSET VA
ON VA.NAMED_ASSET_ID = Mtr_NamedAsstID
LEFT JOIN Accounting.VIEW_ACTUAL_METER_PERIODIC_ROLLUP AS ActualMtr
ON (ActualMtr.NAMEDASSTID = vsmr.Mtr_NamedAsstID)
AND (ActualMtr.GAMEDAY = vsmr.Mtr_GameDay)
AND (ActualMtr.PTYP_ID = vsmr.PTYP_ID)
LEFT JOIN ACCOUNTING.VIEW_ACTUAL_METER_COIN_ROLLUP vamcr
ON vamcr.CN_NAMEDASSTID = vsmr.Mtr_NamedAsstID
AND vamcr.CN_GAMEDAY = vsmr.Mtr_GameDay
AND vamcr.CN_PTYP_ID = vsmr.PTYP_ID
LEFT JOIN
(SELECT NAGI_NAST_ID,AT.THEM_NAME,TC.TCAT_LONG_NAME,Tg.TGRP_LONG_NAME ,PTBL.PTBL_NO_OF_PAYLINES,aco.USER_CUSTOM10 AS GAME_NAME FROM
ACCOUNTING.NAMED_ASSET_GAME_INFO
LEFT JOIN ACCOUNTING.GAME_INFO gf
ON gf.GINFO_ID = NAGI_GINFO_ID
AND NAGI_IS_LATEST=1
JOIN ACCOUNTING.PAYTABLE PTBL
ON PTBL.PTBL_ID = gf.GINFO_PTBL_ID
JOIN ASSET.THEME AT
ON AT.THEM_ID= gf.GINFO_ASST_THME_ID
JOIN ASSET.THEME_CATEGORY tc
ON TC.TCAT_ID = AT.THEME_PARENT_ID
JOIN asset.THEME_GROUP tg
on tc.TCAT_TGRP_ID=tg.TGRP_ID
JOIN asset.THEME_TYPE TT
on TT.TTYP_ID=AT.TTYP_ID
JOIN ACCOUNTING.NAMED_ASSET na
on na.NAST_ID = NAGI_NAST_ID
JOIN ASSET.ASSET_CONFIGURATION ac
on ac.ACNF_NUMBER = na.NAST_NAME AND ac.ACNF_DELETED_TS is null
JOIN ASSET.ASSET_CONFIGURATION_OPTION aco
on aco.ACNF_ID = ac.ACNF_ID
) as a
ON vsmr.Mtr_NamedAsstID = a.NAGI_NAST_ID
I would change the part below to a CTE:
LEFT JOIN
(SELECT NAGI_NAST_ID,AT.THEM_NAME,TC.TCAT_LONG_NAME,Tg.TGRP_LONG_NAME ,PTBL.PTBL_NO_OF_PAYLINES,aco.USER_CUSTOM10 AS GAME_NAME FROM
ACCOUNTING.NAMED_ASSET_GAME_INFO
LEFT JOIN ACCOUNTING.GAME_INFO gf
ON gf.GINFO_ID = NAGI_GINFO_ID
AND NAGI_IS_LATEST=1
JOIN ACCOUNTING.PAYTABLE PTBL
ON PTBL.PTBL_ID = gf.GINFO_PTBL_ID
JOIN ASSET.THEME AT
ON AT.THEM_ID= gf.GINFO_ASST_THME_ID
JOIN ASSET.THEME_CATEGORY tc
ON TC.TCAT_ID = AT.THEME_PARENT_ID
JOIN asset.THEME_GROUP tg
on tc.TCAT_TGRP_ID=tg.TGRP_ID
JOIN asset.THEME_TYPE TT
on TT.TTYP_ID=AT.TTYP_ID
JOIN ACCOUNTING.NAMED_ASSET na
on na.NAST_ID = NAGI_NAST_ID
JOIN ASSET.ASSET_CONFIGURATION ac
on ac.ACNF_NUMBER = na.NAST_NAME AND ac.ACNF_DELETED_TS is null
JOIN ASSET.ASSET_CONFIGURATION_OPTION aco
on aco.ACNF_ID = ac.ACNF_ID
) as a
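For example, roughly like this (a sketch; the CTE name GameInfo is made up and the join list is abbreviated, since the full joins and select list appear above):
WITH GameInfo AS
(
    SELECT NAGI_NAST_ID, AT.THEM_NAME, TC.TCAT_LONG_NAME, Tg.TGRP_LONG_NAME,
           PTBL.PTBL_NO_OF_PAYLINES, aco.USER_CUSTOM10 AS GAME_NAME
    FROM ACCOUNTING.NAMED_ASSET_GAME_INFO
    LEFT JOIN ACCOUNTING.GAME_INFO gf
        ON gf.GINFO_ID = NAGI_GINFO_ID AND NAGI_IS_LATEST = 1
    -- ... remaining joins exactly as in the derived table above ...
)
SELECT VA.SITE_NUM, a.THEM_NAME, a.GAME_NAME -- plus the rest of the select list above
FROM ACCOUNTING.VIEW_SDS_METER_ROLLUP_WITH_PROG AS vsmr
JOIN REPORT.VIEW_ASSET VA ON VA.NAMED_ASSET_ID = Mtr_NamedAsstID
LEFT JOIN GameInfo AS a ON vsmr.Mtr_NamedAsstID = a.NAGI_NAST_ID;
Note that inside the view definition the WITH clause goes between AS and the outer SELECT. Performance-wise, SQL Server generally inlines a non-recursive CTE the same way as a derived table, so this is mostly a readability change.
-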
SQL query performance question
So I had this long query that looked like this:
SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
FROM
ideal d, ideal_term a,( select deal_key, deal_term_key, max(createdOn) maxdate from Ideal_term B
where createdOn <= '03-OCT-12 10.03.00 AM' group by deal_key, deal_term_key ) B
WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
and a.end_date >= '19-MAR-09 01.00.00 AM'
and A.deal_key = b.deal_key
and A.deal_term_key = b.deal_term_key
and a.createdOn = b.maxdate
and d.deal_key = a.deal_key
and d.name like 'MVPP1 B'
order by
a.begin_date, a.deal_key, a.deal_term_key;
This performed very poorly for a record in one of the tables that has 43,000+ revisions. It took about 1 minute and 40 seconds. I asked the database guy at my company for help with it and he re-wrote it like so:
SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
FROM ideal d
INNER JOIN (SELECT deal_key,
deal_term_key,
MAX(createdOn) maxdate
FROM Ideal_term B2
WHERE '03-OCT-12 10.03.00 AM' >= createdOn
GROUP BY deal_key, deal_term_key) B1
ON d.deal_key = B1.deal_key
INNER JOIN ideal_term a
ON B1.deal_key = A.deal_key
AND B1.deal_term_key = A.deal_term_key
AND B1.maxdate = a.createdOn
AND d.deal_key = a.deal_key + 0
WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
AND a.end_date >= '19-MAR-09 01.00.00 AM'
AND d.name LIKE 'MVPP1 B'
ORDER BY a.begin_date, a.deal_key, a.deal_term_key
this works much better; it only takes 0.13 seconds. I've been trying to figure out why exactly his version performs so much better. His only explanation was that the "+ 0" in the WHERE clause prevented Oracle from using an index for that column, which created a bad plan initially.
I think there has to be more to it than that, though. Can someone give me a detailed explanation of why the second version of the query performed so much faster?
Thanks.
Edited by: su**** on Oct 10, 2012 1:31 PM
I used Autotrace in SQL Developer. Is that sufficient? Here is the Autotrace and Explain for the slow query:
and for the fast query:
I said that I thought there was more to it because, when my team members and I looked at the reworked query the database guy sent us, our initial thoughts were that in the slow query some of the tables didn't have joins; because of that, the query formed a Cartesian product, and this resulted in a huge 43,000+ row matrix.
In his version all tables had joins properly defined, and in addition he had that +0 which told it to ignore the index on the deal_key attribute of table ideal_term. I spoke with the database guy today and he confirmed our theory.
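For anyone finding this later, here is the +0 trick in isolation (an illustrative fragment using the thread's column names, not a complete statement):
-- ...WHERE d.deal_key = a.deal_key        -- optimizer may use an index on a.deal_key
-- ...WHERE d.deal_key = a.deal_key + 0    -- expression on the column disables that index
-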
Poor query performance in Prod.
I am facing lots of issues with my queries.
The query works fine in Dev, but after I transported it to Prod the query takes too much time to retrieve the result.
Why am I facing this issue?
How can I do performance tuning for the query?
The query is built on a MultiProvider and it also jumps to the ODS for the ODS query.
But the query performance is really low and poor in Production.
And to my surprise, the query works perfectly and faster in Dev.
What do you suggest?
Please send documents for performance tuning, note numbers, etc.
Are data volumes huge in the Prod box? That may be the cause for the slow runtimes.
Look at the performance-improving techniques below:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/aec09790-0201-0010-8eb9-e82df5763455
Business Intelligence Performance Tuning [original link is broken]
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
Note 565725 - Optimizing the performance of ODS objects -
Poor query performance when joining CONTAINS to another table
We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20 million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi-column datastore) and provide a score for each search result so that we can sort the search results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH clause, we find that the query performance degrades significantly.
For example, we can find all the records a user has access to from our base table by the following query:
SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID;
This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
Our search query looks like this:
SELECT score(1), d.*
FROM duns d
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
2 seconds is good, but we should be able to get a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to, we reckon that if the search operation only had to scan a tiny percentage of the TEXT index, we should see faster (and more relevant) results. If we now write the following query:
WITH subset
AS
(SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID)
SELECT score(1), d.*
FROM duns d
JOIN subset s
ON d.duns_loc = s.duns_loc
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of the contributing parts. This query takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records to the user that they don't have access to view.
Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis then let me know and I'll be happy to provide it here.
Thanks!!
Sometimes it can be good to separate the tables into separate sub-query factoring (WITH) clauses or inline views in the FROM clause or an IN clause as a WHERE condition. Although there are some differences, using a sub-query factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication. You should not have the same table in two different places, as in your original query. You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and be periodically rebuilt or dropped and recreated to keep it performing with maximum efficiency. The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and various others. Your original query has nested loops. All of the others have the same plan without the nested loops. You could also add index hints.
SCOTT@orcl_11gR2> -- tables:
SCOTT@orcl_11gR2> CREATE TABLE duns
2 (duns_loc NUMBER,
3 text_key VARCHAR2 (30))
4 /
Table created.
SCOTT@orcl_11gR2> CREATE TABLE primary_contact
2 (duns_loc NUMBER,
3 emp_id NUMBER)
4 /
Table created.
SCOTT@orcl_11gR2> -- data:
SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO duns
2 SELECT object_id, object_name
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact
2 SELECT object_id, namespace
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> -- indexes:
SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
2 ON duns (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
2 ON primary_contact (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
SCOTT@orcl_11gR2> -- as suggested by Roger:
SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
2 ON duns (text_key)
3 INDEXTYPE IS CTXSYS.CONTEXT
4 FILTER BY duns_loc
5 /
Index created.
SCOTT@orcl_11gR2> -- gather statistics:
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- variables:
SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
SCOTT@orcl_11gR2> EXEC :employeeid := 1
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
SCOTT@orcl_11gR2> EXEC :search := 'highway'
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- original query:
SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
SCOTT@orcl_11gR2> WITH
2 subset AS
3 (SELECT d.duns_loc
4 FROM duns d
5 JOIN primary_contact pc
6 ON d.duns_loc = pc.duns_loc
7 AND pc.emp_id = :employeeID)
8 SELECT score(1), d.*
9 FROM duns d
10 JOIN subset s
11 ON d.duns_loc = s.duns_loc
12 WHERE CONTAINS (TEXT_KEY, :search,1) > 0
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 4228563783
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 84 | 121 (4)| 00:00:02 |
| 1 | SORT ORDER BY | | 2 | 84 | 121 (4)| 00:00:02 |
|* 2 | HASH JOIN | | 2 | 84 | 120 (3)| 00:00:02 |
| 3 | NESTED LOOPS | | 38 | 1292 | 50 (2)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 5 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | DUNS_DUNS_LOC_IDX | 1 | 5 | 1 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
SCOTT@orcl_11gR2> WITH
2 subset1 AS
3 (SELECT pc.duns_loc
4 FROM primary_contact pc
5 WHERE pc.emp_id = :employeeID),
6 subset2 AS
7 (SELECT score(1), d.*
8 FROM duns d
9 WHERE CONTAINS (TEXT_KEY, :search,1) > 0)
10 SELECT subset2.*
11 FROM subset1, subset2
12 WHERE subset1.duns_loc = subset2.duns_loc
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
SCOTT@orcl_11gR2> SELECT subset2.*
2 FROM (SELECT pc.duns_loc
3 FROM primary_contact pc
4 WHERE pc.emp_id = :employeeID) subset1,
5 (SELECT score(1), d.*
6 FROM duns d
7 WHERE CONTAINS (TEXT_KEY, :search,1) > 0) subset2
8 WHERE subset1.duns_loc = subset2.duns_loc
9 ORDER BY score(1) DESC
10 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- ansi join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 JOIN primary_contact
4 ON duns.duns_loc = primary_contact.duns_loc
5 WHERE CONTAINS (duns.text_key, :search, 1) > 0
6 AND primary_contact.emp_id = :employeeid
7 ORDER BY SCORE(1) DESC
8 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- old join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns, primary_contact
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc = primary_contact.duns_loc
5 AND primary_contact.emp_id = :employeeid
6 ORDER BY SCORE(1) DESC
7 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- in clause:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc IN
5 (SELECT primary_contact.duns_loc
6 FROM primary_contact
7 WHERE primary_contact.emp_id = :employeeid)
8 ORDER BY SCORE(1) DESC
9 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 3825821668
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN SEMI | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -
Poor query performance only with migrated 7.0 queries
Dear Team,
We are facing a serious query performance issue after migration of queries from 3.5 to 7.0.
I executed a query in 3.5 with some variable values, and it takes a fraction of a second to display the output. But the same migrated query with the same variable entries takes a very long time and gives a time-out error.
We are not using any aggregates at the InfoProvider level.
Both queries are based on the same cube, but the 3.5 query takes less time while the 7.0 query takes a very long time when more selection is done.
I checked for notes but didn't find a specific note for this particular scenario. I found notes only for general query performance improvement.
I want to know the reason why only in 7.0 the same 3.5 query takes a long time and gives a time-out error. And please suggest some notes or suggestions related to this scenario.
Regards,
Chan
Hi,
Queries in BI 7.0 are almost the same as queries in 3.x format.
In order to check whether the problem is in the query runtime (database time) or the Java runtime (probably rendering), you should try running it from RSRT once in Java Web and once in ABAP Web.
If the problem is only with Java Web, then you should take the URL and add &profiling=X at the end.
After the query execution you can use the statistics, which will be shown at the top of the page.
In my experience, the problem is in the rendering phase of the query. One thing that can be done is to limit the number of rows shown on each page; that can be done by changing the 0ANALYSIS web template - it's one of the web template parameters.
Tomer. -
It's 11g R2, and this query is performing very slowly
SELECT OBJSTATE
FROM
SUB_CON_CALL_OFF WHERE SUB_CON_NO = :B2 AND CALL_OFF_SEQ = :B1
call count cpu elapsed disk query current rows
Parse 140 0.00 0.00 0 0 0 0
Execute 798747 8.34 14.01 0 4 0 0
Fetch 798747 22.22 35.54 0 7987470 0 798747
total 1597634 30.56 49.56 0 7987474 0 798747
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 51 (recursive depth: 1)
Rows Row Source Operation
5 FILTER (cr=50 pr=0 pw=0 time=239 us)
5 NESTED LOOPS (cr=40 pr=0 pw=0 time=164 us)
5 NESTED LOOPS (cr=30 pr=0 pw=0 time=117 us)
5 TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
5 INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
5 TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
5 INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
5 INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
5 INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
5 FAST DUAL (cr=0 pr=0 pw=0 time=4 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 1 0.00 0.00
gc cr block 2-way 3 0.00 0.00
gc current block 2-way 1 0.00 0.00
gc cr multi block request 4 0.00 0.00
Edited by: 842638 on Feb 2, 2013 5:52 AM
Hi Mark,
I just have a few basic doubts regarding the query performance below:
call count cpu elapsed disk query current rows
Parse 140 0.00 0.00 0 0 0 0
Execute 798747 8.34 14.01 0 4 0 0
Fetch 798747 22.22 35.54 0 7987470 0 798747
total 1597634 30.56 49.56 0 7987474 0 798747
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 51 (recursive depth: 1)
Rows Row Source Operation
5 FILTER (cr=50 pr=0 pw=0 time=239 us)
5 NESTED LOOPS (cr=40 pr=0 pw=0 time=164 us)
5 NESTED LOOPS (cr=30 pr=0 pw=0 time=117 us)
5 TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
5 INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
5 TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
5 INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
5 INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
5 INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
5 FAST DUAL (cr=0 pr=0 pw=0 time=4 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 1 0.00 0.00
gc cr block 2-way 3 0.00 0.00
gc current block 2-way 1 0.00 0.00
gc cr multi block request 4 0.00 0.00
1] How do you determine that this query performance is ok?
2] What is the actual need of checking the query performance this way?
3] Is this the TKPROF output?
4] How do you know that the query was called 798747 times? The execute shows 0.
Could you please help me with this?
Thanks.
Ranit B. -
How to improve query performance using infoset
I created one InfoSet that includes 4 characteristics and 3 DSOs, which are all time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows up in BEx Analyzer. In that case I have to close BEx Analyzer first and then open it again; after that it shows real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please advise, thanks!
Hi
As an InfoSet itself doesn't hold any data, it improves performance.
Also go through the tips below.
Find the query run-time:
Where to find the query run-time?
Note 557870 - 'FAQ: BW Query Performance'
Note 130696 - Performance trace in BW
This info may be helpful.
General tips
Using aggregates and compression.
Using fewer and less complex cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid many characteristics in the rows.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from left side menu > and there in system load history and distribution for a particular day > check query execution time.
Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
How to read ST03N datasets from DB
Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run reporting agent at night and sending results to email. This will ensure use of OLAP cache. So later report execution will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important parameters: the aggregation ratio, and records transferred to the front end vs. records selected from the DB.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. To check the performance of the aggregates, see the VALUATION and USAGE columns for the aggregates.
Open the Aggregates... and observe the VALUATION and USAGE columns.
The string of "+" or "-" signs is the valuation of the aggregate's design and usage. The more plus signs, the more useful the aggregate and the more queries it satisfies: "+++++" means the aggregate is potentially very useful (good compression ratio and frequent access, so performance is good). The more minus signs, the worse the evaluation: the compression ratio is not so good and access is rare, so "-----" means the aggregate is just overhead and can potentially be deleted.
In the USAGE column, you can see how much the aggregate has been used by queries.
Thus we can check the performance of the aggregate.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, your query performance can depend upon the selection criteria; since you have given selection only on one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it's A, X or H); the advisable read mode is X.
5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
By implementing BW Statistics Business Content: you need to install it, feed data into it, and then analyze through the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
use tool RSDDK_CHECK_AGGREGATE in se38 to check for the corrupt aggregates
If aggregates contain incorrect data, you must regenerate them.
202469 - Using aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with statistics, execute, and come back; you will get a STATUID... copy this and check it in the table...
This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, it's a useless aggregate.
6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up in the table.
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM -
Inventory Ageing query performance
Hi All,
I have created inventory ageing query on our custom cube which is replica of 0IC_C03. We have data from 2003 onwards. the performance of the query is very poor the system almost hangs. I tried to create aggregates to improve performance but its failed. What i should do to improve the performance and why the aggregate filling is failed. Cube have compressed data. Pls guide.
Regards:
Jitendra
In addition to the above posts, check the points below and take action accordingly to increase query performance.
Mainly check whether the cube data is compressed; it will increase the performance of the query.
1)If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2)Check code for all exit variables used in a report.
3)Check the read mode for the query. recommended is H.
4)If Alternative UOM solution is used, turn off query cache.
5)Use Constant Selection instead of SUMCT and SUMGT within formulas.
6)Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
7)Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed.
Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
8)Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
9)If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing.
10)Check the user exits usage involved in OLAP run time?
11)Use Constant Selection instead of SUMCT and SUMGT within formulas.
12) Turn on the BW Statistics: RSA1, choose Tools -> BW statistics for InfoCubes (choose OLAP and WHM for your relevant cubes).
To check the Query Performance problem
Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes.
You need to run ST03N in expert mode to get these values
Based on the analysis and the values taken from the above, check whether an aggregate is suitable, or adjust the OLAP settings, etc.
Edited by: prashanthk on Nov 26, 2010 9:17 AM -
QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES
WHAT ARE QUERY PERFORMANCE ISSUES WE NEED TO TAKE CARE PLEASE EXPLAIN AND LET ME KNOW T CODES...PLZ URGENT
WHAT ARE DATALOADING PERFORMANCE ISSUES WE NEED TO TAKE CARE PLEASE EXPLAIN AND LET ME KNOW T CODES PLZ URGENT
WILL REWARD FULL POINT S
REGARDS
GURUBW Back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
Hope it Helps
Chetan
@CP.. -
System/Query Performance: What to look for in these tcodes
Hi
I have been researching on system/query performance in general in the BW environment.
I have seen tcodes such as
ST02 :Buffer/Table analysis
ST03 :System workload
ST03N:
ST04 : Database monitor
ST05 : SQL trace
ST06 :
ST66:
ST21:
ST22:
SE30: ABAP runtime analysis
RSRT:Query performance
RSRV: Analysis and repair of BW objects
For example, Note 948066 provides descriptions of these tcodes, but what I am not getting are thresholds and their implications. E.g. ST02 gives a tune summary screen with several rows and columns (not sure what they are called) containing several numerical values.
Is there some information on these rows/columns, such as the typical range for each of them, the acceptable figures, and which numbers under which columns suggest what problems?
Basically some type of metric for each of these indicators provided by these performance tcodes.
Something similar to when you are using an operating system: CPU utilization consistently over 70% may suggest the need to upgrade the CPU, while over 90% suggests your system is about to crash, etc.
I will appreciate some guidelines on the use of these tcodes and, from your personal experience, which indicators you pay attention to under each tcode and why.
Thanks
Hi Amanda,
I forgot something... SAP provides the EarlyWatch report; if you have Solution Manager, you can generate it yourself... in the EarlyWatch report there will be red, yellow and green lights for parameters.
http://help.sap.com/saphelp_sm40/helpdata/EN/a4/a0cd16e4bcb3418efdaf4a07f4cdf8/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f35bf3-14a3-2910-abb8-89a7a294cedb
EarlyWatch focuses on the following aspects:
· Server analysis
· Database analysis
· Configuration analysis
· Application analysis
· Workload analysis
EarlyWatch Alert, a free part of your standard maintenance contract with SAP, is a preventive service designed to help you take rapid action before potential problems can lead to actual downtime. In addition to EarlyWatch Alert, you can also decide to have an EarlyWatch session for a more detailed analysis of your system.
Ask your Basis team for an EarlyWatch sample report; the parameters in EarlyWatch should cover what you are looking for, with red, yellow, and green indicators.
Understanding Your EarlyWatch Alert Reports
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4b88cb90-0201-0010-5bb1-a65272a329bf
hope this helps. -
Can an index that is not being used still affect query performance?
Hi, I have a query with a high cost, so I created two indexes, A and B, to improve its performance.
After the creation of the indexes, when I reviewed the execution plan of the query, the cost had been reduced, but I noticed that index B is not being used,
and if I try to force the query to use index B with a HINT the cost increases, so I decided to drop index B.
Once I dropped index B, I checked the execution plan again and noticed that the cost of the query increased; if I recreate index B, the explain plan
shows a lower cost even though it is not being used by the execution plan.
Does anyone know why this is happening?
Can an index that is not being used by the execution plan still affect query performance?
user11173393 wrote:
Hi, I have a query with a high cost, so I created two indexes, A and B, to improve its performance.
After the creation of the indexes, when I reviewed the execution plan of the query, the cost had been reduced, but I noticed that index B is not being used,
and if I try to force the query to use index B with a HINT the cost increases, so I decided to drop index B.
Once I dropped index B, I checked the execution plan again and noticed that the cost of the query increased; if I recreate index B, the explain plan
shows a lower cost even though it is not being used by the execution plan.
Does anyone know why this is happening?
Can an index that is not being used by the execution plan still affect query performance?
You said that is what is happening, & I believe you.
Does having more LTSs in a logical dimension table hurt query performance?
Hi,
We have a logical table with around 19 LTSs. Does having more LTSs in a logical dimension table hurt query performance?
Thanks,
Anilesh
Hi Anilesh,
It's kind of both YES and NO. Here is why...
NO:
LTSs are supposed to give the BI Server optimistic and logical ways to retrieve the data. So, having more optimistic LTSs might help the BI Server with some good options tailored to a variety of analysis requests.
YES:
Many times, we have to bring in multiple physical tables as part of a single LTS (mostly when the physical model is a snowflake), which might cause performance issues. Say there is an LTS with two tables "A" and "B"; for an ad-hoc analysis just on columns in "A", the query would still include the join with table "B" if this LTS is being used. We might want to avoid these kinds of situations by having multiple LTSs, one for each table and one for both of them.
Hope this helps.
Thank you,
Dhar -
Hi,
I am working on an application developed in Forms 10g and Oracle 10g.
I have a few very large transaction tables in the db, and most of the screens in my application are based on these tables.
When a user performs a query (without any filter conditions) the whole table(s) are loaded into memory and it takes a very long time. Further queries on the same screen perform better.
How can I keep these tables in memory (buffer) always, to reduce the initial query time?
or
Is there any way to share the session buffers with other sessions, so that it does not take a long time in each session?
or
Any query performance tuning suggestions will be appreciated.
Thanks in advance
Thanks a lot for your posts, very large means around
12 million rows. Yep, that's a large table
I have set the query all records to "No". Which is good. It means only enough records are fetched to fill the initial block. That's probably about 10 records. All the other records are not fetched from the database, so they're also not kept in memory at the Forms server.
Even when I try the query in SQL PLUS it is taking
long time. Sounds like a query performance problem, not a Forms issue. You're probably better off asking in the database or SQL forum. You could at least include the SELECT statement here if you want any help with it. We can't guess why a query is slow if we have no idea what the query is.
My concern is, when I execute the same query again or
in another session (some other user or same user),
can I increase the performance because the tables are
already in memory. Any possibility for this? Can I
set any database parameters to share the data between
sessions like that... The database already does this. If data is retrieved from disk for one user it is cached in the SGA (Shared Global Area). Mind the word Shared. This caching information is shared by all sessions, so other users should benefit from it.
Caching also has its limits. The most obvious one is the size of the SGA of the database server. If the table is 200 megabytes and the server only has 8 megabytes of cache available, then caching is of little use.
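If you want to see how big the cache actually is on your instance, the V$ views show the SGA component sizes (a sketch; it assumes you have SELECT privileges on the V$ views):
-- SGA component sizes, largest first; "Buffer Cache Size" is the relevant row here
SELECT name, bytes FROM v$sgainfo ORDER BY bytes DESC;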
Am I thinking in the right way? or am I lost somewhere?
Don't know.
There's two approaches:
- try to tune the query or database to have better performance. For starters, open SQL*Plus, execute "set timing on", then execute "set autotrace traceonly explain statistics", then execute your query and look at the results (see the sketch after this list). It should give you an idea of how the database is executing the query and what improvements could be made. You could come back here with the SELECT statement and timing and trace results, but the database or SQL forum is probably better
- MORE IMPORTANTLY: think about whether it is necessary for users to perform such time-consuming (and perhaps complex) queries. Do users really need the ability to query all records? Are they ever going to browse through millions of records?
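For the first approach, the SQL*Plus session would look something like this (a sketch; the table and filter are placeholders for your actual query):
SET TIMING ON
SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
-- now run the slow query; SQL*Plus prints the plan, run statistics and elapsed time
SELECT * FROM your_large_table WHERE your_filter_column = 'some value';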
Thanks
Maybe you are looking for
-
I prepared a GUI user connection application in NetBeans 5.5 accessing a MySQL database on the company server. The application runs very well on the desktop. However, when I post it to the company server web, it gets nothing from the database. Can any one g
-
Why can't I download Free iPod Touch Apps from Apps Store w/Shopping Cart?
I have an 8GB iPod Touch (original model) with the current 2.1 software. I also have iTunes v. 8.0. The behavior I am seeing started about 3 weeks ago and continues with these updates - it was not like this when I first downloaded the 2.0 software an
-
How to assign an approver when the workflow approver is missing
We have defined an approval group on a specific employee so that whenever a user creates their leave, it goes to a specific user for approval. Now the issue is, the approver has been terminated and the dynamic query is getting no record. Due to this neither approve
-
Please explain how apple tv works
If I buy apple tv, what do I get and what does it allow me to do? What else do I have to have, or pay for, to get shows and movies? Is this an alternative to cable? Do I still need to buy content from the itunes store? What exactly is the advantage o
-
Image update problem (bug?)
Within a canvas, which is placed in the center of a BorderLayout, I am importing images via MediaTracker. I have a dialog and within this dialog, when an item is selected from a list, the image within the canvas ought to change when "ok" is clicked i