BW query takes too much time to run
Hi,
We have a BW query that takes 5 minutes to run.
Our user has asked us to reduce the query runtime.
Please let us know if you can help us.
Best Regards
Carmen
Hi,
Use InfoCubes instead of DSOs for reporting.
If you use InfoCubes, create aggregates for your queries.
Also partition the cubes, e.g. by 0CALYEAR.
Here are some useful documents you should read:
[SAP BW Query Performance Tuning with Aggregates|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9]
[Performance Tuning for SAP Business Information Warehouse|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c]
SAP notes:
Note [859456|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_bex/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d383539343536%7d] - Increasing performance using the cache
Note [356732|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_bex/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d333536373332%7d] - Performance Tuning for Queries with Aggregates
Note [1241794|https://websmp230.sap-ag.de/sap(bD1kZSZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1241794] - Query performance with partitioned InfoCubes
Regards
Andreas
Similar Messages
-
hi
The following query is taking too much time (more than 30 minutes), working with 11g.
The table has three columns (rid, ida, geometry) and an index has been created on all columns.
The table has around 540,000 records of point geometries.
Please help me with your suggestions. I want to select duplicate point geometries where ida = 'CORD'.
SQL> select a.rid, b.rid from totalrecords a, totalrecords b where a.ida='CORD' and b.ida='CORD' and
sdo_equal(a.geometry, b.geometry)='TRUE' and a.rid != b.rid order by 1,2;
regards

I have removed some AND conditions that were not necessary. It's just that Oracle can see, for example, in
a.ida = 'CORD' AND
b.ida = 'CORD' AND
a.rid != b.rid AND
sdo_equal(a.geometry, b.geometry) = 'TRUE'
ORDER BY 1,2;
that if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest of the conditions, because it is all AND'ed together and TRUE AND FALSE = FALSE.
So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions), this will give you a small performance benefit. Normally too small to notice, but on 5.4 million records it should be noticeable.
and I have set layer_gtype=POINT.
Good, that will help. I forgot about that one (thanks Luc!).
Now I am facing the problem of DELETING duplicate point geometry. The following query is taking too much time.
What is too much time? Do you need to delete these duplicate points on a daily or hourly basis? Or is this a one-time cleanup action? If it's a one-time cleanup operation, does it really matter if it takes half an hour?
And if this is a daily or even hourly operation, then why don't you prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
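If the cleanup does have to run regularly, here is a minimal sketch of a duplicate-removal statement (my assumption, not from the thread: keep the row with the lowest rid in each duplicate group; adjust to your own keep-rule):
DELETE FROM totalrecords a
 WHERE a.ida = 'CORD'
   AND EXISTS (
         -- a twin row with a smaller rid exists, so row "a" is the duplicate
         SELECT 1
           FROM totalrecords b
          WHERE b.ida = 'CORD'
            AND b.rid < a.rid
            AND sdo_equal(a.geometry, b.geometry) = 'TRUE');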
Lastly: can you post an explain plan for your queries? Those might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
[ c o d e ]
<code/results here>
[ / c o d e ]
that way the original formatting is kept and it makes things much easier to read.
Regards,
Stefan -
Discoverer Query is taking too much time
Hi,
I am having performance problems with some queries in Discoverer (Relational). Discoverer takes around 30 minutes to run the report, but if I run the same query through TOAD it takes only 5 to 6 seconds. Why is that?
Structure of Report:
The report is using crosstab with 3 dimensions on the left side and 3 dimensions on the page items.
Why the performance in the discoverer is slow and how can I improve it ?
Thanks & Kind Regards
Rana

Hi all
Russ' comments are correct. When you use crosstabs or page items, Discoverer has to execute the entire query before it can bring back any results. This is a known factor that should be taken into account when end users create workbooks.
The following conditions will greatly impact performance:
1. Crosstabs with many items on the left axis
2. Multiple Crosstab values
3. Page items with a large set of values
4. Crosstabs or page items that use complex calculations
5. Multiple page items
Thus, users must avoid building worksheets that use too many of the above. As commented previously, this is well documented in the forums and on the Oracle website and should not come as a surprise. If it does, then either suitable training has not been given to the end users or Oracle's own end user documentation has not been read. Section 6 of the Discoverer Plus user guide gives the following advice:
Whether you are using Discoverer Plus Relational to perform ad hoc queries, or to create reports for other end users, you want to minimize the time it takes to run queries and reports. By following a few simple design guidelines, you can maximize Discoverer performance.
Where possible:
use tabular reports rather than cross-tabular reports
minimize the number of page items in reports
avoid wide cross-tabular reports
avoid creating reports that return tens of thousands of rows
provide parameters to reduce the amount of data produced
minimize the number of worksheets in workbooks
remove extraneous worksheets from workbooks (especially if end users frequently use Discoverer’s export option, see Notes below)
Notes:
When end users export data in Discoverer Plus Relational or Discoverer Viewer, they can export either the current worksheet or all the worksheets. In other words, they cannot selectively choose the worksheets to be exported. Remove extraneous worksheets so that extra data is not included when end users export all worksheets.
I hope this helps
Regards
Michael -
Delete query is taking too much time...
Hi All,
The delete query below is taking at least 1 hour to run!
DELETE aux_current_link aux
WHERE EXISTS (
SELECT *
FROM link_trans_cons link2
WHERE aux.tr_leu_id = link2.tr_leu_id
AND aux.kind_of_link = link2.kind_of_link
AND link2.TYPE = 'H');
Table aux_current_link has 284,279 records and 6 normal indexes.
Please help me.
Subir

Not even close to enough information.
Look here to see if you can tune the operation.
When your query takes too long ...
But for a delete you need to understand that the indexes need to be maintained; you have 6 of them, and that requires effort. Also, foreign keys need to be checked to make sure you don't violate any enabled foreign keys (so maybe you need an index on a table where a column from the table you are deleting from is being referenced).
If you are deleting a HIGH percentage of the table, you may be better off doing a create table as select ... query to keep all the rows you want from your table, then dropping your current table, renaming the new one you made, and adding all the indexes to it (see the sketch below).
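A minimal sketch of that approach (the new table name is hypothetical; grants, constraints and the six indexes would all need to be recreated):
-- keep only the rows that should survive the delete
CREATE TABLE aux_current_link_new AS
SELECT aux.*
  FROM aux_current_link aux
 WHERE NOT EXISTS (
         SELECT 1
           FROM link_trans_cons link2
          WHERE aux.tr_leu_id = link2.tr_leu_id
            AND aux.kind_of_link = link2.kind_of_link
            AND link2.TYPE = 'H');

DROP TABLE aux_current_link;
ALTER TABLE aux_current_link_new RENAME TO aux_current_link;
-- recreate the six indexes (and any constraints/grants) here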
Otherwise, once you've tuned the operation (query), assuming it can be tuned, it's going to take as long as it needs to take.
Message was edited by:
Tubby -
Query is taking too much time for inserting into a temp table and for spooling
Hi,
I am working on a query optimization project where I have found a query that takes a very long time to execute.
Temp table is defined as follows:
DECLARE @CastSummary TABLE (CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50), Customer NVARCHAR(MAX), Targets FLOAT)
INSERT INTO @CastSummary (CastID, SalesOrderID, ProductionOrderID, Actual, ProductionOrderNo, SalesOrderNo, Customer, Targets)
SELECT
C.CastID,
SO.SalesOrderID,
PO.ProductionOrderID,
F.CalculatedWeight,
PO.ProductionOrderNo,
SO.SalesOrderNo,
SC.Name,
SO.OrderQty
FROM
CastCast C
JOIN Sales.Production PO ON PO.ProductionOrderID = C.ProductionOrderID
join Sales.ProductionDetail d on d.ProductionOrderID = PO.ProductionOrderID
LEFT JOIN Sales.SalesOrder SO ON d.SalesOrderID = SO.SalesOrderID
LEFT JOIN FinishedGoods.Equipment F ON F.CastID = C.CastID
JOIN Sales.Customer SC ON SC.CustomerID = SO.CustomerID
WHERE
(C.CreatedDate >= @StartDate AND C.CreatedDate < @EndDate)
It takes almost 33% for the Table Insert when I insert the data into the temp table, and then 67% for Spooling. I removed 2 LEFT JOINs from the above query, changing them to JOINs, and tried again. Query execution became a bit faster, but it still needs improvement.
How can I improve it further? Will it be good enough if I create indexes on the columns of the temp table? Or what if I use derived tables? Please suggest.
-Pep

How can I improve further? Will it be good enough if I create indexes on the columns of the temp table? Or what if I use derived tables?
I suggest you start with index tuning. Specifically, make sure columns specified in the WHERE and JOIN clauses are properly indexed (ideally clustered or covering, and unique when possible). Changing outer joins to inner joins is appropriate if you don't need the outer joins in the first place.
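As a starting point, something like the following might fit (the index names and key choices are my assumptions, not from the thread; tailor them to the actual workload):
-- support the date-range filter that drives the query
CREATE INDEX IX_CastCast_CreatedDate
    ON CastCast (CreatedDate) INCLUDE (CastID, ProductionOrderID);

-- support the join from production orders to their detail rows
CREATE INDEX IX_ProductionDetail_POID
    ON Sales.ProductionDetail (ProductionOrderID) INCLUDE (SalesOrderID);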
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
Query Consuming too much time.
Hi,
I am using Release 10.2.0.4.0 of Oracle. I have a query that is taking too much time (~7 minutes) for an indexed read. Please help me understand the reason, and a workaround for the same.
select *
FROM a,
b
WHERE a.xdt_docownerpaypk = b.paypk
AND a.xdt_doctype = 'PURCHASEORDER'
AND b.companypk = 1202829117
AND a.xdt_createdt BETWEEN TO_DATE (
'07/01/2009',
'MM/DD/YYYY')
AND TO_DATE (
'01/01/2010',
'MM/DD/YYYY')
ORDER BY a.xdt_createdt DESC;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | SORT ORDER BY | | 1 | 1 | 907 |00:06:45.83 | 66716 | 60047 | 478K| 448K| 424K (0)|
|* 2 | TABLE ACCESS BY INDEX ROWID | a | 1 | 1 | 907 |00:06:45.82 | 66716 | 60047 | | | |
| 3 | NESTED LOOPS | | 1 | 1 | 6977 |00:06:45.64 | 60045 | 60030 | | | |
| 4 | TABLE ACCESS BY INDEX ROWID| b | 1 | 1 | 1 |00:00:00.01 | 4 | 0 | | | |
|* 5 | INDEX RANGE SCAN | IDX_PAYIDENTITYCOMPANY | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | | | |
|* 6 | INDEX RANGE SCAN | IDX_XDT_N7 | 1 | 3438 | 6975 |00:06:45.64 | 60041 | 60030 | | | |
Predicate Information (identified by operation id):
2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
5 - access("b"."COMPANYPK"=1202829117)
6 - access("XDT_DOCTYPE"='PURCHASEORDER' AND "a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")
filter("a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")
32 rows selected.
index 'idx_xdt_n7' is on (xdt_doctype,action_date,xdt_docownerpaypk).
index idx_xdt_n7 details are as below.
blevel distinct_keys avg_leaf_blocks_per_key avg_data_blocks_per_key clustering_factor num_rows
3 868840 1 47 24020933 69871000
But when I derive the exact value of paypk from table b and apply it to the query, it uses another index (idx_xdt_n4), which is on (month, year, xdt_docownerpaypk, xdt_doctype, action_date),
and completes within ~17 seconds. Below are the query/plan details.
select *
FROM a
WHERE a.xdt_docownerpaypk = 1202829132
AND xdt_doctype = 'PURCHASEORDER'
AND a.xdt_createdt BETWEEN TO_DATE (
'07/01/2009',
'MM/DD/YYYY')
AND TO_DATE (
'01/01/2010',
'MM/DD/YYYY')
ORDER BY xdt_createdt DESC;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | SORT ORDER BY | | 1 | 3224 | 907 |00:00:02.19 | 7001 | 339 | 337K| 337K| 299K (0)|
|* 2 | TABLE ACCESS BY INDEX ROWID| a | 1 | 3224 | 907 |00:00:02.19 | 7001 | 339 | | | |
|* 3 | INDEX SKIP SCAN | IDX_XDT_N4 | 1 | 38329 | 6975 |00:00:02.08 | 330 | 321 | | | |
Predicate Information (identified by operation id):
2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
3 - access("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER')
filter(("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER'))
index idx_xdt_n4 details are as below.
blevel distinct_keys avg_leaf_blocks_per_key avg_data_blocks_per_key clustering_factor num_rows
3 868840 1 47 23942833 70224133
Edited by: 930254 on Apr 26, 2013 5:04 AM

The first query uses the predicate "XDT_DOCTYPE"='PURCHASEORDER' to determine the range of the index IDX_XDT_N7 that has to be scanned and uses the other predicates to filter out most of the index blocks. The second query uses an INDEX SKIP SCAN, ignoring the first column of the index IDX_XDT_N4 and using the predicates on the following columns ("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER') to get much more selective access (reading only 330 blocks instead of > 60K).
I think there are two possible options to improve the performance:
1. If creating a new index is an option you could define an index on table A(xdt_doctype, xdt_docownerpaypk, xdt_createdt)
2. If creating a new index is not an option, you could use an INDEX SKIP SCAN hint (INDEX_SS(A IDX_XDT_N4)) to force the CBO to use the second index (without a hint the CBO tends to ignore the option of using a SKIP SCAN in an NL join). But using hints in production is rarely a good idea... In 11g you could use SQL plan baselines to avoid such hints in the code.
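As a sketch, the two options could look like this (the index name in option 1 is made up for illustration):
-- Option 1: equality columns first, the range column last
CREATE INDEX idx_xdt_new ON a (xdt_doctype, xdt_docownerpaypk, xdt_createdt);

-- Option 2: force the skip scan on the existing index
SELECT /*+ INDEX_SS(a idx_xdt_n4) */ *
  FROM a, b
 WHERE a.xdt_docownerpaypk = b.paypk
   AND a.xdt_doctype = 'PURCHASEORDER'
   AND b.companypk = 1202829117
   AND a.xdt_createdt BETWEEN TO_DATE('07/01/2009', 'MM/DD/YYYY')
                          AND TO_DATE('01/01/2010', 'MM/DD/YYYY')
 ORDER BY a.xdt_createdt DESC;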
Regards
Martin -
Data Dictionary query takes too much time.
Hello,
I am using ORACLE DATABASE 11g.
The query below is taking too much time to execute and give the output. I have tried a few Oracle SQL hints, but they did not work out.
SELECT
distinct B.TABLE_NAME, 'Y'
FROM USER_IND_PARTITIONS A, USER_INDEXES B, USER_IND_SUBPARTITIONS C
WHERE A.INDEX_NAME = B.INDEX_NAME
AND A.PARTITION_NAME = C.PARTITION_NAME
AND C.STATUS = 'UNUSABLE'
OR A.STATUS = 'UNUSABLE'
OR B.STATUS = 'INVALID';
Please guide me on what to do to make this query run faster.
Thanks in advance.

Your query is incorrect. It will return ALL tables if A.STATUS = 'UNUSABLE' or B.STATUS = 'INVALID'. Most likely you meant:
SELECT
distinct B.TABLE_NAME, 'Y'
FROM USER_IND_PARTITIONS A, USER_INDEXES B, USER_IND_SUBPARTITIONS C
WHERE A.INDEX_NAME = B.INDEX_NAME
AND A.PARTITION_NAME = C.PARTITION_NAME
AND (C.STATUS = 'UNUSABLE'
OR A.STATUS = 'UNUSABLE'
OR B.STATUS = 'INVALID');But the above will return subpartitioned tables with invalid/unusable indexes. It will not return non-subpartitioned partitioned tables with invalid/unusable indexes/index partitions same as non-partitioned tables with invalid/unusable indexes. If you want to get any table with invalid/unusable indexes you need to outer join which will hurt performance even more. I suggest you use UNION:
SELECT DISTINCT TABLE_NAME,
'Y'
FROM (
SELECT INDEX_NAME,'Y' FROM USER_INDEXES WHERE STATUS = 'INVALID'
UNION ALL
SELECT INDEX_NAME,'Y' FROM USER_IND_PARTITIONS WHERE STATUS = 'UNUSABLE'
UNION ALL
SELECT INDEX_NAME,'Y' FROM USER_IND_SUBPARTITIONS WHERE STATUS = 'UNUSABLE'
) A,
USER_INDEXES B
WHERE A.INDEX_NAME = B.INDEX_NAME
/
SY. -
Delete query taking too much time
Hi All,
My delete query is taking too much time: around 1 hr 30 min for 1.5 lakh (150,000) records.
Though I have dropped the MV log on the table and disabled all the triggers on it.
Moreover, the deletion is based on the primary key.
delete from table_name where primary_key in (values)
above is dummy format of my query.
Can anyone please tell me what other reasons there could be for the query performing so slowly?
Is there anything to check in the DB other than triggers, MV logs and constraints in order to improve the performance?
Please reply asap.

Delete is the most time-consuming operation, as the whole record's data has to be stored in the undo segments. On the other hand, the part of the query used to select the records to delete, the IN (values) clause, is probably adding extra overhead to the process. It would be nice if you could post another dummy of this (values) clause. I would guess it is a subquery, and that in order to obtain this list you have to run an inefficient query.
You can gather the execution plan so you can see where the heaviest part of the query is. This way a better tuning approach and a more accurate diagnostic can be issued.
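A minimal sketch of gathering the plan (table and column names are placeholders standing in for your real statement):
EXPLAIN PLAN FOR
DELETE FROM table_name
 WHERE primary_key IN (SELECT pk_value FROM driving_table);

-- display the plan that was just explained
SELECT * FROM TABLE(dbms_xplan.display);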
~ Madrid. -
Report execution taking too much time
Hi Experts,
I created a query on one of my data targets and executed it; it takes too much time. Can anyone tell me how I can resolve this issue? Please help me out.
Thanks in advance
David

Check whether the InfoCube (IC) data is compressed. Compression will increase the performance of the query.
Check the statistics info using ST03 and analyse what needs to be done: creating aggregates etc.
Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes.
You need to run ST03N in expert mode to get these values
Based on the analysis and the values taken from the above, check whether an aggregate is suitable, or adjust OLAP settings etc.
Check the read mode for the query; the recommended mode is H.
If the query is built on top of a MultiProvider (MP), then:
- include characteristic 0INFOPROV when you design the query for the MultiProvider, so that you can filter by InfoProvider.
- Do not use more than 10 InfoProviders when you define a MultiProvider. If a number of InfoProviders are being read by a query at the same time, it can take a while for the system to calculate the results. -
SELECT query takes too much time! Y?
Please find my SELECT query below:
select w~mandt
w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
w~kwmeng w~vrkme w~matwa w~charg w~pstyv
w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
w~bedae w~cuobj w~mtvfp
x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
x~edatu
x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
x~ezeit
into table t_vbap
from vbap as w
inner join vbep as x on x~vbeln = w~vbeln and
x~posnr = w~posnr and
x~mandt = w~mandt
where
( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
( ( ( erdat > pre_dat and erdat < p_syndt ) or
( erdat = p_syndt and erzet <= p_syntm ) ) ) and
w~matnr in s_matnr and
w~pstyv in s_itmcat and
w~lfrel in s_lfrel and
w~abgru = ' ' and
w~kwmeng > 0 and
w~mtvfp in w_mtvfp and
x~ettyp in w_ettyp and
x~bdart in s_req_tp and
x~plart in s_pln_tp and
x~etart in s_etart and
x~abart in s_abart and
( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
The problem: it takes too much time to execute this statement.
Could anybody change this statement and help me reduce the DB access time?
Thx

Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one internal table
Selection Criteria
1. Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
2. Select with selection list.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below, which avoids CHECK and selects with a selection list:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
Select Statements Select Queries
1. Avoid nested selects
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
2. Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below, which avoids CHECK, selects with a selection list and fetches the data in one shot using INTO TABLE:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
4. For testing existence, use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit.
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
5. Use Select Single if all primary key fields are supplied in the Where condition .
If all primary key fields are supplied in the Where conditions you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
Select Statements SQL Interface
1. Use column updates instead of single-row updates
to update your database tables.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
2. For all frequently used Select statements, try to use an index.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
3. Using buffered tables improves the performance considerably.
Bypassing the buffer increases network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements For All Entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Points to be must considered FOR ALL ENTRIES
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
Select Statements Select Over more than one Internal table
1. It is better to use a view instead of nested Select statements.
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be more optimized by extracting all the data from the view DD01V:
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
2. To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
3. Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than using
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime:
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than using
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
Modifying selected components only makes the program faster as compared to Modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
8. If collect semantics is required, it is always better to use to COLLECT rather than READ BINARY and then ADD.
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables using ITAB2[] = ITAB1[] is much faster than LOOP-APPEND-ENDLOOP.
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
SORT ITAB BY K. makes the program runs faster as compared to SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access sorted tables are more optimized as compared to hashed tables
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP. -
Parsing the query takes too much time.
Hello.
I am hitting a bug in Oracle XE (parsing some queries takes too much time).
A similar bug was previously found in the commercial release and was successfully fixed (SR Number 3-3301916511).
Please, raise a bug for Oracle XE.
Steps to reproduce the issue:
1. Extract files from testcase_dump.zip and testcase_sql.zip
2. Under username SYSTEM execute script schema.sql
3. Import data from file TESTCASE14.DMP
4. Under username SYSTEM execute script testcase14.sql
SQL text can be downloaded from http://files.mail.ru/DJTTE3
Datapump dump of testcase can be downloaded from http://files.mail.ru/EC1J36
Regards,
Viacheslav.

Bug number? Version fix applies to?
Relevant Note that describes the problem and points out bug/patch availability?
With a little luck some PSEs might be "backported", since 11g XE is not base release e.g. 11.2.0.1. -
Spatial query with sdo_aggregate_union taking too much time
Hello friends,
The following query is taking too much time to execute.
table1 contains around 2000 records.
table2 contains 124 rows.
SELECT
table1.id
, table1.txt
, table1.id2
, table1.acti
, table1.geom as geom
FROM
table1
WHERE
sdo_relate(
table1.geom,
(select sdo_aggr_union(sdoaggrtype(geom, 0.0005)) from table2),
'mask=(ANYINTERACT) querytype=window'
) = 'TRUE'
I am new to Spatial. I am trying to find the list of geometries which fall within the geometry stored in table2.
Thanks

Hi, thanks a lot for your reply.
But it is not required to use the sdo_aggregate function to find out whether a geometry in one table is inside another geometry.
Let me give you a clearer picture...
What I am trying to do is: table1 contains the list of all stations (station information) of a state, and table2 contains the list of areas of a city. So I want to find the stations which belong to the city.
For this I thought to get the aggregate union of the city areas and then check that final aggregation result for any interaction with the station geometries, to check whether each station is in the city or not; a join-based alternative is sketched below.
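A minimal sketch of that alternative (an assumption on my part, not from the thread; it presumes spatial indexes on both geometry columns and reuses the thread's table1/table2 names):
-- test each station directly against every city area,
-- without building the aggregate union first
SELECT DISTINCT s.id, s.txt
  FROM table1 s, table2 c
 WHERE sdo_anyinteract(s.geom, c.geom) = 'TRUE';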
I hope this would help you to understand my query.
Thanks
I appreciate your efforts. -
Query taking too much time with dates??
hello folks,
I am trying to pull some data using a date condition, and for some reason it is taking too much time to return the data.
and trunc(al.activity_date) = TRUNC (SYSDATE, 'DD') - 1 -- if I use this it takes too much time
and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- if I use this it returns the data in a second. Why is that?
How do I get the previous day without using the hardcoded to_date('20101123 000000', 'YYYYMMDD HH24MISS'), if I need to retrieve it faster?

Presumably you've got an index on activity_date.
If you apply a function like TRUNC to activity_date, you can no longer use the index.
Post execution plans to verify.
and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
and al.activity_date < TRUNC (SYSDATE, 'DD') -
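If you really must keep TRUNC on the column, one alternative (a sketch; activity_log is a hypothetical table name, the thread only shows the alias al) is a function-based index matching the TRUNC expression:
-- lets predicates of the form trunc(activity_date) = ... use an index
CREATE INDEX idx_activity_trunc ON activity_log (TRUNC(activity_date));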
Report taking too much time in the portal
Hi freiends,
we have developed a report on the ODS, and we publish it on the portal.
The problem is that when the users execute the report at the same time, it takes too much time; because of this the performance is very poor.
Is there any way to sort out this issue? For example, can we send the report to the individual users' mail IDs so that they do not need to log in to the portal? Or can we create the same report on the cube?
What would be the main difference if the report were made on the cube instead of the ODS?
Please help me.
Thanks in advance
sridath

Hi
Try this to improve the performance of the query.
Find the query run-time. Where do you find the query run-time? See these SAP Notes:
Note 557870 - FAQ: BW Query Performance
Note 130696 - Performance trace in BW
This info may be helpful.
General tips
Using aggregates and compression.
Using fewer and less complex cell definitions if possible.
1. Avoid using too many nav. attr
2. Avoid RKF and CKF
3. Avoid many characteristics in the rows.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from left side menu > and there in system load history and distribution for a particular day > check query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run reporting agent at night and sending results to email. This will ensure use of OLAP cache. So later report execution will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important parameters aggregation ratio and records transferred to F/E to DB selected.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. To check the performance of the aggregates,see the columns valuation and usage in aggregates.
Open the Aggregates...and observe VALUATION and USAGE columns.
"---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
In valuation column,if there are more positive sign it means that the aggregate performance is good and it is useful to have this aggregate.But if it has more negative sign it means we need not better use that aggregate.
In usage column,we will come to know how far the aggregate has been used in query.
Thus we can check the performance of the aggregate.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, your query performance can depend upon the selection criteria, and since you have given a selection on only one InfoProvider... just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
By implementing BW Statistics Business Content - you need to install, feed data and through ready made reports which for analysis.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
use tool RSDDK_CHECK_AGGREGATE in se38 to check for the corrupt aggregates
If aggregates contain incorrect data, you must regenerate them.
202469 - Using aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless through the process of checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with statistics, execute, and come back; you will get a STATUID... copy this and check it in the table.
This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, it's a useless aggregate.
6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up in the table.
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM -
While condition is taking too much time
I have a query that returns around 2100 records (not many!), but when I am processing my result set with a while condition, it's taking too much time (around 30 seconds). Here is the code:
public static GroupHierEntity load(Connection con)
    throws SQLException
{
    internalCustomer = false;
    String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
    // null check must come first, otherwise startsWith() throws a NullPointerException
    if (customerNameOfLogger == null || customerNameOfLogger.equals("")
            || customerNameOfLogger.equals("Unavailable")
            || customerNameOfLogger.startsWith("DPI")
            || customerNameOfLogger.startsWith("DUPONT")) {
        internalCustomer = true;
    }
    // System.out.println(" ***************** customer name of logger " + com.photomask.framework.ControlServlet.CUSTOMER_NAME + " internal customer " + internalCustomer);
    // show all groups to internal customers and only their customer groups for external customers
    if (internalCustomer) {
        stmtLoad = con.prepareStatement(sqlLoad);
        ResultSet rs = stmtLoad.executeQuery();
        return new GroupHierEntity(rs);
    } else {
        stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
        stmtLoadExternal.setString(1, customerNameOfLogger);
        stmtLoadExternal.setString(2, customerNameOfLogger);
        // System.out.println("***** sql " + sqlLoadExternal);
        ResultSet rs = stmtLoadExternal.executeQuery();
        return new GroupHierEntity(rs);
    }
}

GroupHierEntity ge = GroupHierEntity.load(con);
while (ge.next()) {
    lvl = ge.getInt("lvl");
    oid = ge.getLong("oid");
    name = ge.getString("name");
    if (internalCustomer) {
        if (lvl == 2) {
            int i = getAlphaIndex(name);
            super.setAppendRoot(alphaIndex);
        }
    }
    gn = new GroupListDataNode(lvl + 1, oid, name);
    gn.setSelectable(true);
    this.addNode(gn);
    count++;
}
System.out.println("*** count " + count);
ge.close();
========================
Then I removed everything in the while clause and ran it as is; it still takes the same time (30 secs):
while (ge.next())
{ count++; }
Why is the while condition (ge.next()) taking so much time? Is there any other, more efficient way of reading the result set?
Thanks,
bala

I tried all these things. The query itself is not taking much time (1 sec), but resultset.next() is. I counted the time by putting System.out.println at various points to see which part takes how much time.
executeQuery() takes only 1 sec; processing the result set (moving the cursor to the next position) is what takes too much time.
I have similar queries that return some 800 rows, and those only take 1 sec.
I have my doubts about resultset.next(). Any other alternative?