Improving query performance
Dear Friends,
We are using BI 7.00. For one of our reports, we load data to an ODS that contains 578,068 records. When the respective query is executed, it takes a long time to produce the output. I do not have any formulas in the query.
Can experts provide a step-by-step solution to improve the query performance so that the output is fast?
Regards,
M.M
Performance tuning on an ODS is not as easy as on InfoCubes because you don't have aggregates. If you have already defined indexes, check whether the indexes are actually used in your queries. You can either check in RSRT when you do Execute & Debug (set a breakpoint at Show SQL Statement and Show Execution Plan), or trace your ODS with ST01. You should also check the database statistics if you haven't done so.
If these tips are not enough because your ODS is quite huge, create an InfoCube with the identical InfoObjects. With automatic delta upload activated in the ODS you can always keep an exact copy of the ODS data in your cube and build your reports on the cube with aggregates and all other performance tools you need.
There is a great presentation on performance on the Service Marketplace (service.sap.com/bw, register with your OSS user, then click on Performance). In the upper part there is a PDF called 'Performance Workshop' that really goes into detail.
557870: FAQ: BW Query Performance,
572661: OLAP: Performance problem with hierarchies (if you use them)
402469: Additional indexes on Master Data Tables
Note 444287 'Checking the index storage quality'
Also have a look at these tools: BW Statistics, BW Workload Analysis in ST03N (use Export mode!), the content of table RSDDSTAT, and transactions RSRT and RSRTRACE for a single query.
Re: ODS Performance
Create a secondary index on the ODS.
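In plain SQL terms, the effect of such a secondary index can be sketched like this (SQLite via Python; the table and column names are purely illustrative, not the real ODS tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ods_active (doc_no INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
con.executemany("INSERT INTO ods_active VALUES (?, ?, ?)",
                [(i, f"C{i % 100}", float(i)) for i in range(1000)])

# Without a secondary index, a filter on a non-key column forces a full scan.
plan_before = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM ods_active WHERE customer = 'C7'").fetchall()

con.execute("CREATE INDEX idx_customer ON ods_active (customer)")
con.execute("ANALYZE")  # refresh optimizer statistics, as the first reply also advises

plan_after = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM ods_active WHERE customer = 'C7'").fetchall()

print(plan_before[0][3])  # full scan of the table
print(plan_after[0][3])   # index search on idx_customer
```

The tradeoff is load performance: every secondary index must be maintained during activation, so only index columns that queries actually filter on.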
Similar Messages
-
How to improve Query performance on large table in MS SQL Server 2008 R2
I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning the table into filegroups, or splitting it into multiple smaller tables?
Hi bala197164,
First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations. For example, suppose our table has one hundred columns and some of them are not directly related to the table's subject (say a table named userinfo that stores user information but also has address_street, address_zip and address_province columns; we can create a new table named Address and add a foreign key in userinfo referencing it). In that situation, by splitting a large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. The other situation is when the table's records can be grouped easily, for example by a column named year that stores the product release date; then we can partition the table into filegroups to improve query performance. Usually we use both methods together. Additionally, we can add indexes to the table to improve query performance. For more detail, please refer to the following documents:
Partitioning:
http://msdn.microsoft.com/en-us/library/ms178148.aspx
CREATE INDEX (Transact-SQL):
http://msdn.microsoft.com/en-us/library/ms188783.aspx
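The userinfo/Address split described above can be sketched concretely. A minimal illustration (SQLite via Python; only the column names from the example come from the post, the rest is assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Address columns moved out of the wide userinfo table into their own table,
# referenced via a foreign key.
con.executescript("""
CREATE TABLE address (
    address_id       INTEGER PRIMARY KEY,
    address_street   TEXT,
    address_zip      TEXT,
    address_province TEXT
);
CREATE TABLE userinfo (
    user_id    INTEGER PRIMARY KEY,
    user_name  TEXT,
    address_id INTEGER REFERENCES address(address_id)
);
""")
con.execute("INSERT INTO address VALUES (1, '1 Main St', '10115', 'Berlin')")
con.execute("INSERT INTO userinfo VALUES (1, 'bala', 1)")

# Queries that touch only core user data now scan a much narrower table...
core = con.execute("SELECT user_name FROM userinfo WHERE user_id = 1").fetchone()

# ...while the full record is still available through a join when needed.
full = con.execute("""
    SELECT u.user_name, a.address_street
    FROM userinfo u JOIN address a ON a.address_id = u.address_id
    WHERE u.user_id = 1""").fetchone()

print(core)  # ('bala',)
print(full)  # ('bala', '1 Main St')
```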
Allen Li
TechNet Community Support -
How to improve query performance using infoset
I created one InfoSet including 4 characteristics and 3 DSOs, all time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows in the BEx Analyzer at all. In that case I have to close the BEx Analyzer and open it again, after which it shows real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please advise, thanks!
Hi
An InfoSet itself doesn't hold any data, so performance depends on the underlying providers.
Also go through the tips below.
Find the query Run-time
where to find the query Run-time ?
557870 'FAQ BW Query Performance'
130696 - Performance trace in BW
This info may be helpful.
General tips
Using aggregates and compression.
Using fewer and less complex cell definitions where possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid many characteristics in the rows.
By using T-codes ST03 or ST03N
Go to transaction ST03, switch to expert mode, and in the left-side menu, under System Load History and Distribution for a particular day, check the query execution time.
Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
How to read ST03N datasets from DB
Try table RSDDSTAT to get the statistics.
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the records transferred to the frontend versus those selected from the database.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show the dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. To check the performance of the aggregates, see the Valuation and Usage columns.
Open the aggregates and observe the VALUATION and USAGE columns.
"---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
In valuation column,if there are more positive sign it means that the aggregate performance is good and it is useful to have this aggregate.But if it has more negative sign it means we need not better use that aggregate.
In usage column,we will come to know how far the aggregate has been used in query.
Thus we can check the performance of the aggregate.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, your query performance can depend on the selection criteria; since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the generally recommended read mode is H.
5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
Implement the BW Statistics Business Content: you need to install it, feed it data, and then analyze through the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to transaction DB20, which gives you performance-related information like:
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
202469 - Using aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with statistics and execute; you will get a STATUID. Copy this and look it up in the table.
This tells you exactly which InfoObjects the query hits; if any one of the objects is missing from the aggregate, it is a useless aggregate.
6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of each aggregate in the table.
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM -
How many ways can I improve query performance?
Hi All,
Can anybody help me: how many ways can I improve query performance in Oracle?
Thanks,
narasimha
As many as you can think of!
-
Hi,
I am executing one query and it takes 40-45 minutes. Can anybody tell me where the issue is? I have an index on the SUBSCRIPTION table.
The query is spending its time in a nested loop. Can anybody please help to improve the query performance?
Select count(unique individual_id)
from SUBSCRIPTION S ,SOURCE D WHERE S.ORDER_DOCUMENT_KEY_CD=D.FULFILLMENT_KEY_CD AND prod_abbr='TOH'
and to_char(source_start_dt,'YYMM')>='1010' and mke_mag_source_type_cd='D';
select count(*) from source; ----------3,425,131
select count(*) from subscription;---------394,517,271
Below is the explain plan:
Plan
SELECT STATEMENT CHOOSECost: 219 Bytes: 38 Cardinality: 1
13 SORT GROUP BY Bytes: 38 Cardinality: 1
12 PX COORDINATOR
11 PX SEND QC (RANDOM) SYS.:TQ10001 Bytes: 38 Cardinality: 1
10 SORT GROUP BY Bytes: 38 Cardinality: 1
9 PX RECEIVE Bytes: 38 Cardinality: 1
8 PX SEND HASH SYS.:TQ10000 Bytes: 38 Cardinality: 1
7 SORT GROUP BY Bytes: 38 Cardinality: 1
6 TABLE ACCESS BY LOCAL INDEX ROWID TABLE SUBSCRIPTION Cost: 21 Bytes: 3,976 Cardinality: 284
5 NESTED LOOPS Cost: 219 Bytes: 604,276 Cardinality: 15,902
2 PX BLOCK ITERATOR
1 TABLE ACCESS FULL TABLE SOURCE Cost: 72 Bytes: 1,344 Cardinality: 56
4 PARTITION HASH ALL Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16
3 INDEX RANGE SCAN INDEX XAK1SUBSCRIPTION Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16
Please suggest.
Eliminate the hidden conversion from char to number. I don't know the indexes/partitions on your table (do you?), but the following demonstration shows the effect:
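The key fix is to make the predicate sargable: compare the date column directly, e.g. source_start_dt >= TO_DATE('201010','YYYYMM'), instead of wrapping the indexed column in TO_CHAR. A minimal, portable sketch of the effect (SQLite via Python; the table and column names are illustrative, not your actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE source (source_start_dt TEXT)")  # ISO-format dates
con.executemany("INSERT INTO source VALUES (?)",
                [(f"2010-{m:02d}-15",) for m in range(1, 13)])
con.execute("CREATE INDEX idx_dt ON source (source_start_dt)")

# Function applied to the column: the optimizer cannot range-scan the index,
# so every row is read and converted.
plan_fn = con.execute(
    "EXPLAIN QUERY PLAN SELECT count(*) FROM source "
    "WHERE strftime('%Y%m', source_start_dt) >= '201010'").fetchall()

# Direct comparison against a date literal: sargable, index range scan.
plan_range = con.execute(
    "EXPLAIN QUERY PLAN SELECT count(*) FROM source "
    "WHERE source_start_dt >= '2010-10-01'").fetchall()

# Both forms return the same answer (Oct-Dec 2010 = 3 rows).
n_fn = con.execute("SELECT count(*) FROM source "
                   "WHERE strftime('%Y%m', source_start_dt) >= '201010'").fetchone()[0]
n_range = con.execute("SELECT count(*) FROM source "
                      "WHERE source_start_dt >= '2010-10-01'").fetchone()[0]
print(n_fn, n_range)     # 3 3
print(plan_fn[0][3])     # a SCAN: all rows read
print(plan_range[0][3])  # a SEARCH: index range scan
```

The Oracle test case that follows makes the same point with real plan output.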
drop table test;
create table test as select level id, sysdate + level/24/60/60 datum from dual connect by level < 10000;
create index idx1 on test(datum);
analyze table test compute statistics;
explain plan for select count(*) from test where to_char(datum,'YYYYMMDD') > '20120516';
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3467505462
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 7 (15)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TEST | 500 | 3500 | 7 (15)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD')>'20120516')
explain plan for select count(*) from test where datum > trunc(sysdate);
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2330213601
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 7 (15)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | INDEX FAST FULL SCAN| IDX1 | 9999 | 69993 | 7 (15)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("DATUM">TRUNC(SYSDATE@!))
drop index idx1;
create index idx1 on test(to_number(to_char(datum,'YYYYMMDD')));
analyze table test compute statistics;
explain plan for select count(*) from test where to_number(to_char(datum,'YYYYMMDD')) > 20120516;
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 227046122
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 5 | 2 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
|* 2 | INDEX RANGE SCAN| IDX1 | 1 | 5 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access(TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD'))>
20120516)
explain plan for select count(*) from test where datum > trunc(sysdate);
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3467505462
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 7 (15)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TEST | 9999 | 69993 | 7 (15)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("DATUM">TRUNC(SYSDATE@!)) -
How to improve Query Performance
Hi Friends...
I want to improve query performance. I need the following things:
1. What is the process to find out the performance? Are there any transaction codes, and how do I use them?
2. How can I know whether the query is running well or badly, i.e. from a performance perspective?
3. I want to see the values, i.e. how much time it takes to run, and where the defect is.
4. How do I improve the query performance? After I have done the needful to improve performance, I want to see the query execution time, i.e. whether it is running fast or not.
E.g.:
E.g. 1: Need to create aggregates.
Solution: where can I create aggregates? I am in the production system now, so where do I need to create them, i.e. in the Development, Quality or Production system?
Do I need to make any changes in Development, given that I am in Production?
So please tell me the solutions to my questions.
Thanks
Ganga
Please refer to OSS note 557870: Frequently asked questions on query performance.
also refer to
Prakash's weblog
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
performance docs on query
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
These are the OSS note's FAQs on query performance:
1. What kind of tools are available to monitor the overall Query Performance?
1. BW Statistics
2. BW Workload Analysis in ST03N (Use Export Mode!)
3. Content of Table RSDDSTAT
2. Do I have to do something to enable such tools?
Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools are available to analyze a specific query in detail?
1. Transaction RSRT
2. Transaction RSRTRACE
4. Do I have an overall query performance problem?
i. Use ST03N -> BW System Load values to recognize the problem. Use the number given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes.
ii. You need to run ST03N in expert mode to get these values
5. What can I do if the database proportion is high for all queries?
Check:
1. Whether the database statistics strategy is set up properly for your DB platform (above all for the BW-specific tables)
2. Whether the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
3. Whether buffers, I/O, CPU and memory on the database server are exhausted
4. Whether cube compression is used regularly
5. Whether database partitioning is used (not available on all DB platforms)
6. What can I do if the OLAP proportion is high for all queries?
Check:
1. If the CPUs on the application server are exhausted
2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
7. What can I do if the client proportion is high for all queries?
Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
8. Where can I get specific runtime information for one query?
1. Again you can use ST03N -> BW System Load
2. Depending on the time frame you select, you get historical data or current data.
3. To get to a specific query you need to drill down using the InfoCube name
4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments)
1. High Database Runtime
2. High OLAP Runtime
3. High Frontend Runtime
10. What can I do if a query has a high database runtime?
1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
2. Check whether the database statistics are up to date for the cube/aggregate; use transaction RSRV (use the database check for statistics and indexes)
3. Check if the read mode of the query is unfavourable - Recommended (H)
11. What can I do if a query has a high OLAP runtime?
1. Check whether a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells")
2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
3. Check whether a user exit is involved in the OLAP runtime.
4. Check whether large hierarchies are used and whether the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the successor and predecessor columns to see which entry level of the hierarchy is used.
5. Check whether a proper index on the inclusion table exists.
12. What can I do if a query has a high frontend runtime?
1. Check whether a very high number of cells and a lot of formatting are transferred to the frontend (use "All data" to get the value "No. of Cells"); this causes high network and frontend (processing) runtime.
2. Check whether the frontend PCs are within the recommendations (RAM, CPU MHz).
3. Check whether the bandwidth of the WAN connection is sufficient.
REWARDING POINTS IS THE WAY OF SAYING THANKS IN SDN
CHEERS
RAVI -
How to improve query & loading performance.
Hi All,
How to improve query & loading performance.
Thanks in advance.
Rgrds
shoba
Hi Shoba,
There are a lot of things you can do to improve query and loading performance.
Please refer to OSS note 557870: Frequently asked questions on query performance.
also refer to
weblogs:
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
performance docs on query
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
and some threads:
how can i increse query performance other than creating aggregates
How to improve query performance ?
Query performance - bench marking
may be helpful
Regards
C.S.Ramesh
[email protected] -
How to improve query performance built on a ODS
Hi,
I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
Is there any method to improve or optimize the performance of a query built on an ODS?
The ODS holds a huge volume of data, ~300 million records for 2 years.
Thanx in advance,
Guru.
Hi Raj,
Here are a few tips which help in improving your query performance:
Checklist for Query Performance
1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local ones, so as to analyze any filters that can be removed and moved to the global definition of the query. Then you can change the calculated key figure back and utilize the global calculated key figure again if desired.
9. If Alternative UOM solution is used, turn off query cache.
10. Set the read mode of the query based on static or dynamic use. Reading data during navigation minimizes the impact on the R/3 database and application server resources, because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the 'Read data during navigation and when expanding the hierarchy' option, to avoid reading data for hierarchy nodes that are not expanded. Reserve the 'Read all data' mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Review the order of restrictions in formulas. Do as many restrictions as you can before calculations; try to avoid calculations before restrictions.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order. -
Hi there,
I'm relatively new to PL/SQL and to APEX, and I'm developing an application in APEX. I have a report on a page which pulls out the orders according to the type of user. If the type is not 'LOG_FARM', 'MANAGER' or 'MNG_FARM', the code runs in acceptable time, but when the user is one of those three it takes almost 3 minutes to finish the query. What I wanted to know is whether you see any clear change within the code that I could make to improve the speed, or whether I could, for example, bring back only the newest results (although this last option is not good, because I have a search engine above and then I would not have access to older records).
Declare
v_restricao varchar2(100);
v_restricao_dim varchar2(4000);
Begin
if :G_APP_USER_EMP_TYPE = 'LOG_FARM' then
v_restricao := 'and FO.ORDER_LOGISTICS = ''Y''';
else
v_restricao := null;
end if;
if :G_APP_USER_EMP_TYPE IN('MNG_FARM','DIM_FARM') then
v_restricao_dim := '
WHERE pharmacy_id in (SELECT pharm_id
FROM dim_user
WHERE UPPER (emp_login) = UPPER ('''||:app_user||''')) ';
end if;
if :G_APP_USER_EMP_TYPE = 'CALL_FARM' then
v_restricao_dim := '
WHERE pharmacy_id IN (SELECT DISTINCT pharm_id
FROM dim_user
WHERE SEL_PHARM = ''Y'' ) ';
end if;
if :G_APP_USER_EMP_TYPE IN('LOG_FARM','MANAGER','MNG_FARM') then
return 'SELECT fo.pharm_id, fo.order_id, dp.pharmacy_dsc, ds.whs_dsc, to_CHAR(fo.order_dt,''DD/MM/YYYY'') AS ORDER_DT,
fo.order_obs, acronyms (fo.order_stat, ''ORDER.STATUS'') as STATUS,
fo.order_stat
FROM fact_order fo,
dim_pharm dp,
dim_whs ds
WHERE upper(order_id) LIKE NVL(upper('''||:p501_order_id||'''),''%'')
AND fo.pharm_id = dp.pharmacy_id
AND fo.whs_id = ds.whs_id ' || v_restricao || '
order by fo.order_id desc ';
ELSE
return 'SELECT fo.pharm_id, fo.order_id, dp.pharmacy_dsc, ds.whs_dsc, to_CHAR(fo.order_dt,''DD/MM/YYYY'') AS ORDER_DT,
fo.order_obs, acronyms (fo.order_stat, ''ORDER.STATUS'') as STATUS,
fo.order_stat,(select dkap.emp_name from dim_kap dkap where DKAP.PHARM_ID = FO.PHARM_ID AND DKAP.sel_pharm = ''Y'') AS KAP
FROM fact_order fo,
dim_pharm dp,
dim_whs ds ,
(SELECT pharmacy_id
FROM dim_pharm ' || v_restricao_dim ||' ) dim_phar
WHERE upper(order_id) LIKE NVL(upper('''||:p501_order_id||'''),''%'')
AND fo.pharm_id = dp.pharmacy_id
AND fo.whs_id = ds.whs_id
and FO.ORDER_LOGISTICS = ''N''
AND dim_phar.pharmacy_id = fo.pharm_id
order by fo.order_id desc';
END IF;
END;
The two are quite similar only the second has a condition to show only the pharmacys associated with the current user and the first one shows all the pharmacys.
Thanks in advance.
BrunoHi there,
I'm relatively new to Pl/SQL and to APEX and i'm developing a application on APEX. So i have a report on a page which pulls out the orders according to the type of user. If the type is not LOG_FARM,'MANAGER','MNG_FARM' the code runs fine in acceptable time but when the user is one of those three it takes almost 3 minutes to finish the query. What i wanted to know is if within the code you see any clear change that i could do to improve the speed or if i can for example bring only the newest results (although this last option is not good because i have a search engine above and like this i wont have acess to older registers).
Declare
v_restricao varchar2(100);
v_restricao_dim varchar2(4000);
Begin
if :G_APP_USER_EMP_TYPE = 'LOG_FARM' then
v_restricao := 'and FO.ORDER_LOGISTICS = ''Y''';
else
v_restricao := null;
end if;
if :G_APP_USER_EMP_TYPE IN('MNG_FARM','DIM_FARM') then
v_restricao_dim := '
WHERE pharmacy_id in (SELECT pharm_id
FROM dim_user
WHERE UPPER (emp_login) = UPPER ('''||:app_user||''')) ';
end if;
if :G_APP_USER_EMP_TYPE = 'CALL_FARM' then
v_restricao_dim := '
WHERE pharmacy_id IN (SELECT DISTINCT pharm_id
FROM dim_user
WHERE SEL_PHARM = ''Y'' ) ';
end if;
if :G_APP_USER_EMP_TYPE IN('LOG_FARM','MANAGER','MNG_FARM') then
return 'SELECT fo.pharm_id, fo.order_id, dp.pharmacy_dsc, ds.whs_dsc, to_CHAR(fo.order_dt,''DD/MM/YYYY'') AS ORDER_DT,
fo.order_obs, acronyms (fo.order_stat, ''ORDER.STATUS'') as STATUS,
fo.order_stat
FROM fact_order fo,
dim_pharm dp,
dim_whs ds
WHERE upper(order_id) LIKE NVL(upper('''||:p501_order_id||'''),''%'')
AND fo.pharm_id = dp.pharmacy_id
AND fo.whs_id = ds.whs_id ' || v_restricao || '
order by fo.order_id desc ';
ELSE
return 'SELECT fo.pharm_id, fo.order_id, dp.pharmacy_dsc, ds.whs_dsc, to_CHAR(fo.order_dt,''DD/MM/YYYY'') AS ORDER_DT,
fo.order_obs, acronyms (fo.order_stat, ''ORDER.STATUS'') as STATUS,
fo.order_stat,(select dkap.emp_name from dim_kap dkap where DKAP.PHARM_ID = FO.PHARM_ID AND DKAP.sel_pharm = ''Y'') AS KAP
FROM fact_order fo,
dim_pharm dp,
dim_whs ds ,
(SELECT pharmacy_id
FROM dim_pharm ' || v_restricao_dim ||' ) dim_phar
WHERE upper(order_id) LIKE NVL(upper('''||:p501_order_id||'''),''%'')
AND fo.pharm_id = dp.pharmacy_id
AND fo.whs_id = ds.whs_id
and FO.ORDER_LOGISTICS = ''N''
AND dim_phar.pharmacy_id = fo.pharm_id
order by fo.order_id desc';
END IF;
END;
The two queries are quite similar; the second one just adds a condition to show only the pharmacies associated with the current user, while the first one shows all pharmacies.
Thanks in advance.
Bruno -
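One thing worth noting about the report query above: the value of :p501_order_id is concatenated straight into the SQL string, so every distinct search value produces a new statement for the database to hard-parse, and a quote character in the input can break the query. A minimal sketch of the same optional-filter pattern using a bind variable instead, with Python's sqlite3 standing in for Oracle/APEX (table and column names are borrowed from the post; everything else is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_order (order_id TEXT, pharm_id INTEGER)")
conn.executemany("INSERT INTO fact_order VALUES (?, ?)",
                 [("ORD-1", 1), ("ORD-2", 2), ("XYZ-9", 1)])

def find_orders(conn, order_id_filter=None):
    # The filter value stays a bind variable, so the statement text is
    # identical for every search and the database can reuse one parsed plan.
    sql = ("SELECT order_id, pharm_id FROM fact_order "
           "WHERE upper(order_id) LIKE upper(coalesce(?, '%')) "
           "ORDER BY order_id DESC")
    return conn.execute(sql, (order_id_filter,)).fetchall()

all_rows = find_orders(conn)           # no filter: coalesce falls back to '%'
ord_rows = find_orders(conn, "ord-%")  # case-insensitive prefix filter
```

In APEX the same effect is achieved by keeping :P501_ORDER_ID inside the returned query string as a bind reference rather than splicing its value in.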
Is there any way to improve a query that searches XML data in a table?
Hi all,
I have a table with one column, say 'colA', of VARCHAR(MAX) datatype in which I save XML data; the table has other columns too.
Currently I search this table using the LIKE operator, e.g.:
Select * from tablename where colA like '%<tagname>parameterstringvalue</tagname>%'
When I check the execution plan I see a clustered index scan (on the primary key column, not colA) taking 82%.
I added a new nonclustered index with colA as an included column, and the plan then showed a nonclustered index scan with the same estimated I/O cost and estimated operator cost as the clustered index scan.
My questions are:
1. Why didn't a nonclustered index seek appear?
2. In what way can I improve performance in this situation? I have seen a couple of posts suggesting rewriting the query as SELECT * FROM myTable WHERE CONTAINS((myCol1, myCol2), 'myString'). I tried creating a full-text index and found the cost increased compared to the original query.
3. My assumption is that the leading wildcard ('%') causes the performance issue. Is there any option or alternative for such a case?

Hi, I can give a skeleton:
--Table Structure------------
Table1:-
(colA - int(PK),
ColB - Varchar(max),
ColC-uniquieidentifier,
ColD-datetime,
ColE-Bit)
It has a clustered index on ColA
Table2:-
(ColA-int(fk)
colF-int(pk)
colG-varchar(max),
ColH-uniqueidentifier,
colI-int,
colJ-int
ColK-date)
-----------------Query Skeleton-------------------:
select Distinct
s.colA,
s.ColB,
S.colC,
S.colD
from Table1 s with (nolock)
left outer join table2 Q with (nolock) on s.colA=q.ColA
where Q.ColA is null
and s.colB like '%<tag>sometext</tag>%'
and s.colD >='1/1/2010'
and s.colD <='1/1/2014'
I hope this helps to understand it clearly. -
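The asker's assumption in question 3 is correct: a predicate with a leading '%' can never seek a B-tree index, because there is no leading prefix to navigate by; the best the engine can do is scan. Full-text indexing avoids this by indexing tokens rather than the raw string. The effect can be demonstrated in miniature with Python's sqlite3 (SQLite's FTS5 plays the role of SQL Server's full-text index here; this assumes an FTS5-enabled build, which standard CPython distributions include):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, colA TEXT)")
cur.executemany("INSERT INTO docs (colA) VALUES (?)",
                [("<tagname>alpha</tagname>",),
                 ("<tagname>beta</tagname>",),
                 ("<other>alpha</other>",)])
cur.execute("CREATE INDEX idx_cola ON docs (colA)")

# A LIKE with a leading '%' cannot seek the B-tree index on colA,
# even though it exists: the plan is a scan, not a seek.
plan = " ".join(r[3] for r in cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM docs WHERE colA LIKE '%alpha%'"))

# A full-text index answers token searches without examining every row.
cur.execute("CREATE VIRTUAL TABLE docs_fts USING fts5(colA)")
cur.execute("INSERT INTO docs_fts (colA) SELECT colA FROM docs")
hits = cur.execute(
    "SELECT colA FROM docs_fts WHERE docs_fts MATCH 'alpha'").fetchall()
```

Note that full text helps only when the search target is a whole token; if the searches are really structural ("find this tag with this value"), storing the column as the database's XML type and querying it with the XML-aware operators is usually the better long-term fix.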
Hi,
This piece of code is taking a long time and often terminates with the ABAP runtime error SYSTEM_IMODE_TOO_LARGE. (Basically, the performance of the query needs to improve.) I have done an ST05 trace and also ran the query, and I would appreciate any input on improving performance. I have put my comments in brackets after every query.
SELECT vbeln auart audat kunnr qmnum vkbur
FROM vbak
INTO CORRESPONDING FIELDS OF TABLE t_vbak
WHERE vbak~vkorg IN s_vkorg
AND vbak~audat IN s_erdat
AND vbak~kunnr IN s_kunnr.
(The above query is fine. Returns about 80,000 rows)
CHECK sy-subrc EQ 0.
* Select VBAP
SELECT erdat matnr posnr pstyv vbeln arktx werks abgru
FROM vbap
INTO CORRESPONDING FIELDS OF TABLE t_vbap
FOR ALL ENTRIES IN t_vbak
WHERE vbeln = t_vbak-vbeln.
(The above query takes a long time to execute)
t_vbap_ra[] = t_vbap[].
(The above statement takes a long time to execute)
DELETE t_vbap_ra
WHERE pstyv <> 'ZRRA' AND
pstyv <> 'ZWRA'.
(This statement takes a long time to execute)
Thanks
Ram

Hi,
Try this.
1. Let the internal table t_vbak have fields with the same names and in the same order as the fields fetched from the database table, so that you can replace the 'INTO CORRESPONDING FIELDS OF' statement with 'INTO TABLE'.
2. Whenever you use FOR ALL ENTRIES, first check whether the driver table has records in it. You can also fill t_vbap_ra directly with its own SELECT instead of copying t_vbap and deleting rows:
if t_vbak[] is not initial.
SELECT erdat matnr posnr pstyv vbeln arktx werks abgru
FROM vbap
INTO CORRESPONDING FIELDS OF TABLE t_vbap_ra
FOR ALL ENTRIES IN t_vbak
WHERE vbeln = t_vbak-vbeln
AND pstyv IN ('ZRRA', 'ZWRA').
endif.
Sharin. -
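Part of why the FOR ALL ENTRIES select above is slow with an 80,000-row driver table: the database interface executes it as a series of IN-list (or OR'd) queries over chunks of the driver table, so a large t_vbak means many round trips. A rough illustration of that chunking, in Python with sqlite3 (the chunk size, table, and field names are illustrative; the real blocking factor is a profile parameter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vbap (vbeln TEXT, pstyv TEXT)")
conn.executemany("INSERT INTO vbap VALUES (?, ?)",
                 [(f"DOC{i:03d}", "ZRRA" if i % 2 else "NORM")
                  for i in range(100)])

def for_all_entries(conn, keys, chunk=20):
    """Mimic FOR ALL ENTRIES: split the driver keys into chunks, run one
    IN-list query per chunk, and accumulate the deduplicated results."""
    seen = set()
    for i in range(0, len(keys), chunk):
        part = keys[i:i + chunk]
        placeholders = ",".join("?" * len(part))
        sql = (f"SELECT vbeln, pstyv FROM vbap "
               f"WHERE vbeln IN ({placeholders}) "
               f"AND pstyv IN ('ZRRA','ZWRA')")
        seen.update(conn.execute(sql, part).fetchall())
    return sorted(seen)

keys = [f"DOC{i:03d}" for i in range(50)]
rows = for_all_entries(conn, keys)
```

Pushing the pstyv restriction into the SELECT, as in the reply above, shrinks every one of those per-chunk result sets, which is exactly where the time and memory go.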
Improve Query Suggestions Load time
How can I improve the load time of pre-query suggestions on the search home page when a user starts typing?
I noticed it was slow to load when I hit it for the first time in the morning, so I tried to warm it up by adding
http://SiteName/_api/search/suggest?querytext='sharepoint'
to my warm-up script, but even during the day, after a few hits, it is sometimes still slow. Any reason?
Do you think moving the Query Component to a WFE would help here?
Please let me know. Thanks.

Hi,
At a high level, Query Suggestions work like this:
• You issue a query within a Search Center site and get results.
• When you hover over or click a result, this gets stored via a "RecordPageClick" method in the web application's W3WP process.
• Every five minutes (I believe) this W3WP flushes these recorded page clicks and passes them over to the Search proxy.
• The proxy then stores them in a table in the SSA (Search Service Application) Admin DB.
• By default, once a day, a timer job, Prepare Query Suggestions, runs.
• It checks whether the same term has been queried and clicked at least 6 times, and if so moves it to another table in the same DB.
• Once a term has enough successful queries/clicks and has moved to the appropriate table, then when you start to type a word like "share"...
• ...the Search Box fires a "GetQuerySuggestions" method over to the Search proxy servers, which checks that Admin DB for matches.
• If there are matches, they show up in your Search Box as a suggestion, e.g. "SharePoint".
Other components involved with Query Suggestions:
Timer jobs:
• Prepare query suggestions (daily)
• Query Classification Dictionary Update for Search Application "Search Service Application" (minutes)
• Query Logging (minutes)
Database tables (Search Admin DB):
• MSSQLogQuerySuggestion: cleaned up when the timer job runs; this looks like where the hits for suggestions are stored
• MSSQLogQueryString: info on the query string
• MSSQLogSearchCounts: info on the click counts
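The pipeline described above (record clicks, promote terms that pass the click threshold in the daily job, then prefix-match as the user types) can be sketched as follows. The six-click threshold comes from the reply above; the function names and data structures are purely illustrative, not SharePoint APIs:

```python
from collections import Counter

CLICK_THRESHOLD = 6  # per the reply: a term needs >= 6 query+click hits

def promote_suggestions(click_log):
    """Mimic the 'Prepare Query Suggestions' job: count successful
    query clicks per term and keep those that pass the threshold."""
    counts = Counter(term.lower() for term in click_log)
    return {term for term, n in counts.items() if n >= CLICK_THRESHOLD}

def get_query_suggestions(prefix, promoted):
    """Mimic the GetQuerySuggestions call: prefix-match what the user
    has typed so far against the promoted-terms table."""
    p = prefix.lower()
    return sorted(t for t in promoted if t.startswith(p))

log = ["sharepoint"] * 7 + ["share price"] * 3 + ["sharpen"] * 6
promoted = promote_suggestions(log)
suggestions = get_query_suggestions("shar", promoted)
```

This also explains the warm-up observation: a cold hit pays for the suggestion lookup round trip to the Admin DB, and no amount of warm-up helps if the slow link is the SQL connection itself.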
So the issue might be related to a timer job, the database, or the connection between the SharePoint server and the SQL server. There is a similar case caused by DistributedCache.
If you move the query component onto another server, this may improve the Search-related processing; however, it may affect performance due to the extra networking.
Please collect verbose ULS log per steps below:
Enable verbose logging via Central Admin > Monitoring > Reporting > Configure diagnostic logging (you can simply set all the categories to Verbose level and I can filter myself).
Reproduce the issue (try to remove SSRS).
Record the time and get a capture of the error message (including the correlation ID if there is one). Collect the log file, which is by default located in the folder <C:\Program files\common files\Microsoft Shared\web server extensions\15\LOGS>.
Stop verbose logging.
Regards,
Rebecca Tu
TechNet Community Support
Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
[email protected]. -
Need to improve query performance
I need to improve the following query, which takes more than 20 minutes to run.
All variables prefixed with v_ are set within the procedure based on whether the user selected to filter on the field or chose not to.
In my basic search, I only pass in equipment_index_number, a when_discovered_date range, and s_class.
SELECT /*+ ORDERED INDEX (JCN, ad_jcn_trak_seq_wdd_ein_fil) USE_NL (JCN AM SH) */
JCN.JOB_ID as JOB_ID,
JCN.JOB_SEQ as job_seq,
JCN.EQUIPMENT_INDEX_NUMBER as EQUIPMENT_INDEX_NUMBER,
AM.JOB_DATE_OF_LAST_UPDATE as job_date_of_last_update,
SH.UNIT_NAME as unit_name,
JCN.FILTER_KEY as FILTER_KEY,
AM.WHEN_DISCOVERED_CODE as when_discovered_code,
AM.MAN_HOURS_OPENING as man_hours_opening,
AM.TYPE_AVAILABILITY_CODE as type_availability_code,
AM.WHEN_DISCOVERED_DATE as WHEN_DISCOVERED_DATE,
AM.JCN as JCN,
AM.STATUS_CODE as STATUS_CODE,
AM.MAN_HOURS_REMAINING as MAN_HOURS_REMAINING,
AM.PRIORITY_CODE as PRIORITY_CODE,
AM.DATE_CLOSING as DATE_CLOSING,
AM.EQUIPMENT_NOMENCLATURE as EQUIPMENT_NOMENCLATURE,
AM.DEFERRAL_REASON_CODE as DEFERRAL_REASON_CODE,
AM.DUE_DATE as DUE_DATE,
AM.LOCATION as LOCATION,
AM.ACTION_TAKEN_CODE as ACTION_TAKEN_CODE,
AM.SAFETY_CODE as SAFETY_CODE,
AM.ESWBS_OPENING as ESWBS_OPENING,
AM.CSMP_NARRATIVE_SUMMARY as CSMP_NARRATIVE_SUMMARY,
AM.INSURV_NUMBER as INSURV_NUMBER ,
AM.INSURV_MAINTENANCE_INDICATOR as INSURV_MAINTENANCE_INDICATOR,
AM.INSURV_MISSION_DEGRADING_CODE as INSURV_MISSION_DEGRADING_CODE,
AM.PROBLEM_DESCRIPTION as PROBLEM_DESCRIPTION ,
AM.RECOMMEND_SOLUTION as RECOMMEND_SOLUTION,
AM.ACTUAL_SOLUTION as ACTUAL_SOLUTION ,
AM.CLOSING_REMARKS as CLOSING_REMARKS ,
AM.ADDITIONAL_NARRATIVE as ADDITIONAL_NARRATIVE
FROM
AD_TR JCN JOIN
RW_ACT_MAINT AM ON
JCN.JOB_SEQ = AM.JOB_SEQ
LEFT JOIN
SH_UNIT SH ON
JCN.HULL_NUMBER = SH.HULL_NUMBER
WHERE
JCN.JOB_SEQ is NOT null and
JCN.WHEN_DISCOVERED_DATE between p_from_date and p_to_date and
(v_jcn = -1 or AM.TYPE_HULL||AM.WORK_AREA||AM.JOB_SEQUENCE_NUMBER IN (select jcn from temp_hull_wa_jsn)) and
(v_apl = -1 or AM.APL IN (select apl from temp_apl_eic)) and
(v_eic = -1 or AM.EIC IN (select eic from temp_apl_eic)) and
(v_ein = -1 or JCN.EQUIPMENT_INDEX_NUMBER IN (select equipment_index_number from temp_ein)) and
(v_filter = -1 or JCN.FILTER_KEY IN (select filter_key from temp_filter)) and
(v_tmr = -1 or TRIM(AM.ACTION_TAKEN_CODE) = '8') and
(v_s_class = -1 or AM.S_CLASS IN (select s_class from temp_s_class)) and
(v_discovered = -1 or AM.WHEN_DISCOVERED_CODE IN (select when_discov_code from temp_when_discov_code)) and
( (v_action = -1 and v_atc = -1) or
(v_action = 1 and v_atc = 1 and
(AM.ACTION_TAKEN_CODE IS NULL or AM.ACTION_TAKEN_CODE IN (select action_code from temp_action_code))
) or
(v_action = 1 and v_atc = -1 and
AM.ACTION_TAKEN_CODE IN (select action_code from temp_action_code)
) or
(v_action = -1 and v_atc = 1 and
AM.ACTION_TAKEN_CODE IS NULL
) ) and
(v_status = -1 or AM.STATUS_CODE IN(select status_code from temp_status_code)) and
(v_priority = -1 or AM.PRIORITY_CODE IN(select priority_code from temp_priority_code))and
(v_cause = -1 or AM.CAUSE_CODE IN(select cause_code from temp_cause_code))and
(v_deferral = -1 or AM.DEFERRAL_REASON_CODE IN(select deferral_reason_code from temp_deferral_code))and
(v_availability = -1 or AM.TYPE_AVAILABILITY_CODE IN(select type_availability_code from temp_type_avail_code)) and
(v_narrative = -1 or UPPER(AM.CSMP_NARRATIVE_SUMMARY) LIKE v_upper_narrative or
UPPER(AM.PROBLEM_DESCRIPTION) LIKE v_upper_narrative or UPPER(AM.RECOMMEND_SOLUTION) LIKE v_upper_narrative or
UPPER(AM.ACTUAL_SOLUTION) LIKE v_upper_narrative or UPPER(AM.CLOSING_REMARKS) LIKE v_upper_narrative or
UPPER(AM.ADDITIONAL_NARRATIVE) LIKE v_upper_narrative or UPPER(AM.SLR_NARRATIVE) LIKE v_upper_narrative) and
(v_niin = -1 or EXISTS
(select 1 from rw_issues pi
where pi.job_seq=am.job_seq and
pi.niin in (select niin from temp_niin))
or EXISTS
(select 1 from rw_demands pd
where pd.job_seq=am.job_seq and
pd.niin in (select niin from temp_niin)))
ORDER BY JCN.H_NUMBER, JCN.WORK_AREA, JCN.JSN, JCN.JCN_SUFFIX;
The AD_TR table has 5 million rows, RW_ACT_MAINT has 15 million, and SH_UNIT has only 1000.
I have indexes on the AD_TR table for job_seq, when_discovered_date, equipment_index_number, and filter_key, which are the most frequent search selections the user makes.
Any suggestions?

SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL>
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL>
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL>
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL>
SQL> column sname format a20
SQL> column pname format a20
SQL> column pva12 format a20
SQL>
SQL> select sname
2 , pname
3 , pval1
4 , pval2
5 from
6 sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 12-05-2007 14:40
SYSSTATS_INFO DSTOP 12-05-2007 14:40
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 1227.03273
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SNAME PNAME PVAL1 PVAL2
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
SQL>
SQL> explain plan for
Execution Plan
Plan hash value: 2373934626
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 113 | 13221 | 319 (1)| 00:00:04 | | |
| 1 | SORT ORDER BY | | 113 | 13221 | 319 (1)| 00:00:04 | | |
|* 2 | HASH JOIN OUTER -
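The main obstacle in the query above is the `(v_flag = -1 OR col IN (subquery))` pattern: the optimizer must produce one plan that is safe for every flag combination, so it can rarely use the selective indexes. A common fix is to build the statement dynamically so that disabled filters never appear in the SQL at all, and the optimizer only plans the predicates that are actually active. A sketch of the idea, with Python assembling the SQL text (the predicate strings are taken from the query above, but the mechanism is generic):

```python
def build_where(filters):
    """Build a WHERE clause containing only the active filters.
    `filters` maps a predicate string to a boolean 'active' flag,
    the equivalent of the v_* = -1 checks in the original query."""
    active = [pred for pred, on in filters.items() if on]
    # '1 = 1' keeps the statement valid when every filter is disabled.
    return " AND ".join(active) if active else "1 = 1"

filters = {
    "JCN.WHEN_DISCOVERED_DATE BETWEEN :p_from_date AND :p_to_date": True,
    "AM.S_CLASS IN (SELECT s_class FROM temp_s_class)": True,
    "AM.EIC IN (SELECT eic FROM temp_apl_eic)": False,   # v_eic = -1
    "TRIM(AM.ACTION_TAKEN_CODE) = '8'": False,           # v_tmr = -1
}
where = build_where(filters)
```

In PL/SQL the same thing is done by concatenating only the active predicate fragments (keeping the actual values as bind variables) and running the result with EXECUTE IMMEDIATE or a ref cursor; each distinct filter combination then gets its own, much simpler plan.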
How to improve query performance of an ODS with 320 million records
<b>Issue:</b>
The reports are giving time-outs while execution.
<b>Scenario</b>:
We have an ODS having approximately 320 millions of records in it.
The reports are based on
The ODS and
InfoSets based on this ODS.
These reports are giving time-outs while execution.
<b>Few facts about this ODS:</b>
There are around 75 restricted and calculated key figures used in the query definition.
We can't replace this ODS with a cube, as there is a requirement for an InfoSet on it.
This is in a BW 3.5 environment.
<b>Few things we tried:</b>
Secondary indices were created on the fields that appear in the selection screen of the reports. This did not help.
The restriction/calculation logic in the query definition could be moved to the back end. Will that make a difference?
Question:
Can you suggest ways to improve the query performance of this ODS?
Your immediate response is highly appreciated. Thanks in advance.

Hey!
I think Oliver's questions are good. 320 million records is too much for an ODS. If you can get rid of the InfoSet, that would be helpful. Why exactly do you need it? If you don't, you could partition your ODS by a characteristic and report over a MultiProvider.
Is there a way to delete some data from the ODS?
Maybe you will upgrade to 7.0 sometime soon? There you can use InfoSets on InfoCubes.
You could also try precalculation, as Sam says. This is possible with the Reporting Agent or Information Broadcasting; then you have the result in your cache. Make sure your cache is large enough.
Do you just need one or a few special reports at a specific time? Maybe you can run an update into another ODS, writing just the result into it. For this you can use update rules, or maybe the Analysis Process Designer (transaction RSANWB) is the better way.
Maybe it is also possible to increase the parameter for your dialog runtime, rdisp/max_wprun_time (if you don't know it, your Basis team should; otherwise look here https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ab254cf2-0c01-0010-c28d-b26d04627e61).
Best regards,
Peter -
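The partitioning advice above (split the ODS by a characteristic and report over a MultiProvider) works because of pruning: a query restricted on the partitioning characteristic only has to read the matching partition instead of all 320 million records. An illustrative sketch, with plain Python dictionaries standing in for partitioned InfoProviders (the "year" characteristic and the figures are invented for the example):

```python
from collections import defaultdict

# Route each record into a partition keyed by a characteristic (here: year),
# the role the update rules would play when splitting the ODS.
partitions = defaultdict(list)
records = [{"year": 2006, "amount": 10.0},
           {"year": 2007, "amount": 20.0},
           {"year": 2007, "amount": 5.0}]
for rec in records:
    partitions[rec["year"]].append(rec)

def query_multiprovider(partitions, year=None):
    """A MultiProvider-style union: if the query restricts on the
    partitioning characteristic, read only the matching partition."""
    selected = [partitions[year]] if year is not None else list(partitions.values())
    return sum(r["amount"] for part in selected for r in part)

total_2007 = query_multiprovider(partitions, year=2007)  # reads one partition
grand_total = query_multiprovider(partitions)            # reads all partitions
```

For this to pay off, the reports' selection screens must actually restrict on the partitioning characteristic; otherwise every partition is read and nothing is gained.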
How to improve query performance when reporting on ods object?
Hi,
Can anybody tell me how to improve query performance when reporting on an ODS object?
Thanks in advance,
Ravi Alakuntla.

Hi Ravi,
Check these links, which may cater to your requirement:
Re: performance issues of ODS
Which criteria to follow to pick InfoObj. as secondary index of ODS?
PDF on BW performance tuning,
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Regards,
Mani.
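The recurring advice across these threads, secondary indexes on the ODS fields used in the report selection screens, can be seen in miniature with any relational database. Here is a sketch with Python's sqlite3 (the table and field names, such as vkorg, are illustrative; a BW ODS active table behaves the same way at the database level):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ods_data (doc_id INTEGER PRIMARY KEY, vkorg TEXT, amount REAL)")
conn.executemany("INSERT INTO ods_data (vkorg, amount) VALUES (?, ?)",
                 [(f"S{i % 10}", float(i)) for i in range(1000)])

def plan_for(conn, sql):
    # EXPLAIN QUERY PLAN shows whether the query scans the whole table
    # or resolves the restriction through an index.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM ods_data WHERE vkorg = 'S3'"
before = plan_for(conn, query)   # no index on vkorg: full table scan
conn.execute("CREATE INDEX idx_ods_vkorg ON ods_data (vkorg)")
after = plan_for(conn, query)    # restriction now served by the secondary index
```

The index only helps selections on the indexed fields, which is why the threads stress indexing exactly the fields that appear on the report selection screens, and why statistics must be current for the optimizer to choose the index at all.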