How to increase query performance

Hi, I'm using CTEs as defined below and inserting their columns into a temp table. When I execute this logic in my stored procedure, it takes 7 minutes for about 300,000 (3 lakh) records. Can anyone please help me improve the query performance?
The code I'm using is:
CREATE TABLE #TempTable (cols);  /* pseudo: the column list is elided in the original post */

WITH Datematrix (AllocationDate)  /* recursive CTE: one row per date from @StartDate to @EndDate */
AS
(
    SELECT @StartDate AS AllocationDate
    UNION ALL
    SELECT DATEADD(D, 1, AllocationDate) AS AllocationDate
    FROM Datematrix
    WHERE AllocationDate < @EndDate
),
/* CTE: task/resource allocation details */
Allocation (Division, DivisionID, ResourceName, ResourceEmailID, ResourceID, Project,
            ProjectID, Scope, ScopeID, WorkItem, TaskStartDate, TaskEndDate,
            ProgramID, Program, PortfolioID, Portfolio, StatusID, Status, TaskID, EstimateHrs, ScopeEstimateHrs)
AS
(
    SELECT
         DIV.Division
        ,RES.DivisionID
        ,RES.ResourceName
        ,ResourceEmailID = STUFF((
             SELECT COALESCE(', ' + CONVERT(VARCHAR, RES.Email1), '')
             FROM dbo.TasksResource TSKRES WITH (NOLOCK)
             LEFT OUTER JOIN dbo.tb_Resource RES WITH (NOLOCK) ON RES.UID = TSKRES.ResourceID
             WHERE TSKRES.TaskID = TSK.TaskID
             FOR XML PATH('')), 1, 1, '')
        ,RES.UID ResourceID
        ,PRJ.Project + ' (' + CONVERT(VARCHAR(15), PRJ.StartDate, 101) + ' - ' + CONVERT(VARCHAR(15), PRJ.EndDate, 101) + ')' AS Project
        ,PRJ.UID ProjectID
        ,SCP.Title Scope
        ,SCP.ScopeID
        ,TSK.Title WorkItem
        ,TSK.StartDate TaskStartDate
        ,TSK.EndDate TaskEndDate
        ,PRJ.ProgramID
        ,PR.Program
        ,PR.PortfolioID
        ,PF.Portfolio
        ,TSK.StatusID
        ,ST.Status
        ,TSK.TaskID
        ,TSK.EstimateHrs
        ,(ISNULL(SCP.EstimateARCH, 0) + ISNULL(SCP.EstimateBA, 0) + ISNULL(SCP.EstimateDev, 0)
          + ISNULL(SCP.EstimatePM, 0) + ISNULL(SCP.EstimateQA, 0) + ISNULL(SCP.EstimateRM, 0)) AS ScopeEstimateHrs
        -- SCP.EstimateARCH + SCP.EstimateBA + SCP.EstimateDev + SCP.EstimatePM + SCP.EstimateQA + SCP.EstimateRM AS ScopeEstimateHrs
    FROM Tasks TSK WITH (NOLOCK)
    INNER JOIN dbo.Scope SCP WITH (NOLOCK) ON TSK.ScopeID = SCP.ScopeID
    INNER JOIN dbo.tb_Project PRJ WITH (NOLOCK) ON TSK.ProjectID = PRJ.UID
    INNER JOIN dbo.tb_Program PR WITH (NOLOCK) ON PR.UID = PRJ.ProgramID
    INNER JOIN dbo.tb_Portfolio PF WITH (NOLOCK) ON PF.UID = PR.PortfolioID
    LEFT OUTER JOIN dbo.TasksResource TSKRES WITH (NOLOCK) ON TSKRES.TaskID = TSK.TaskID
    LEFT OUTER JOIN dbo.tb_Resource RES WITH (NOLOCK) ON RES.UID = TSKRES.ResourceID
    LEFT JOIN dbo.tb_Division DIV WITH (NOLOCK) ON RES.DivisionID = DIV.UID
    LEFT JOIN dbo.tb_Status ST WITH (NOLOCK) ON TSK.StatusID = ST.UID  /* relating to the high-level work items */
    WHERE (PRJ.UID = @Project OR @Project = -1)
      AND (PRJ.ProgramID = @Program OR @Program = -1)
      AND (PRJ.PortfolioID = @Portfolio OR @Portfolio = -1)
),
/* columns from the two CTEs above are combined in MainData */
MainData (AllocationDate, Division, DivisionID, ResourceName, ResourceEmailID, ResourceID, Project, ProjectID,
          Scope, ScopeID, WorkItem, TaskStartDate, TaskEndDate,
          ProgramID, Program, PortfolioID, Portfolio, StatusID, Status, TaskID, EstimateHrs, ScopeEstimateHrs, Allocated)
AS
(
    SELECT
         Datematrix.*
        ,Allocation.*
        ,CASE WHEN ISDATE(TaskStartDate) = 1 THEN 1 ELSE 0 END AS Allocated
    FROM Datematrix
    FULL OUTER JOIN Allocation
        ON  Datematrix.AllocationDate >= Allocation.TaskStartDate
        AND Datematrix.AllocationDate <= Allocation.TaskEndDate
)
INSERT INTO #TempTable
SELECT * FROM MainData
OPTION (MAXRECURSION 0);

That is how the code goes. Please help; I need this query tuned. Thanks in advance.
lucky

When asking performance-related questions, it is usually a bad idea to post pseudo code. In this case, I am referring to the fact that your temp table declaration is not valid as written (the column list is elided), and to the fact that it is unclear where the local variables and/or parameters originate from.
In your query, you are using the "optional parameter" pattern for the local variables and/or parameters @Project, @Program and @Portfolio. I therefore assume that we are talking about a stored procedure here.
In stored procedures, optional parameters often have a negative effect on performance. You can cancel some of that effect by adding the OPTION (RECOMPILE) hint (assuming the statement is part of the stored procedure, and that these are parameters, not local variables).
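For illustration only (the procedure, table and column names below are hypothetical, not taken from the original post), the hint goes at the end of the statement that uses the optional parameters; in the posted code it would be combined with the existing hint as OPTION (MAXRECURSION 0, RECOMPILE):

-- Minimal sketch of the "optional parameter" pattern with a statement-level recompile,
-- so the plan is compiled for the actual parameter values of each call and the
-- "-1 means all" branches can be optimized away. Names are illustrative only.
CREATE PROCEDURE dbo.usp_GetTasks
    @Project   INT = -1,
    @Program   INT = -1,
    @Portfolio INT = -1
AS
BEGIN
    SELECT T.TaskID, T.Title, T.StartDate, T.EndDate
    FROM dbo.Tasks AS T
    WHERE (T.ProjectID   = @Project   OR @Project   = -1)
      AND (T.ProgramID   = @Program   OR @Program   = -1)
      AND (T.PortfolioID = @Portfolio OR @Portfolio = -1)
    OPTION (RECOMPILE);
END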
Also, you could consider using a calendar table instead of the Datematrix CTE. The reason is that the join with Allocation is probably difficult as it stands, because the optimizer probably hasn't established that Datematrix is a unique range of dates, and the CTE result has no index. A calendar table with a unique index on the date can help.
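A minimal sketch of such a calendar table (the names and the date range are illustrative, not from the original post):

-- Calendar table with a unique clustered index on the date, filled once.
CREATE TABLE dbo.Calendar
(
    CalendarDate DATE NOT NULL,
    CONSTRAINT PK_Calendar PRIMARY KEY CLUSTERED (CalendarDate)
);

-- One-time fill, e.g. 2000-01-01 through 2049-12-31.
WITH Dates AS
(
    SELECT CAST('20000101' AS DATE) AS CalendarDate
    UNION ALL
    SELECT DATEADD(DAY, 1, CalendarDate)
    FROM Dates
    WHERE CalendarDate < '20491231'
)
INSERT INTO dbo.Calendar (CalendarDate)
SELECT CalendarDate
FROM Dates
OPTION (MAXRECURSION 0);

-- The report query could then join the indexed table instead of the Datematrix CTE, e.g.
-- FROM dbo.Calendar C ... ON C.CalendarDate BETWEEN A.TaskStartDate AND A.TaskEndDate
-- WHERE C.CalendarDate BETWEEN @StartDate AND @EndDate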
The rest is probably up to the indexes on the base tables. Since you did not post any DDL, I can't comment on that. Lack of proper indexes is usually the biggest cause of SELECT performance problems.
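Purely as an illustration of the kind of index that might help (the column choice is a guess based on the joins and filters in the posted query, since no DDL or execution plan was posted):

-- Hypothetical example only; verify against the actual execution plan before creating it.
CREATE NONCLUSTERED INDEX IX_Tasks_ProjectID_ScopeID
    ON dbo.Tasks (ProjectID, ScopeID)
    INCLUDE (StatusID, TaskID, Title, StartDate, EndDate, EstimateHrs);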
Gert-Jan

Similar Messages

  • Report burst: to increase query performance in Xcelsius

    Is there any way to increase query performance in Xcelsius by using report bursting?

    Fremlin,
    Report bursting is only for distributing your reports to your end users.
    You can improve performance only by following the [Best practices|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac] in xcelsius.
    -Anil

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the relevant transaction codes.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    REGARDS
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • How to improve the performance of a query built on an ODS

    Hi,
    I've built a report on an FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data, ~300 million records for 2 years.
    Thanks in advance,
    Guru.

    Hi Raj,
    Here are a few tips which will help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the "Read data during navigation and when expanding the hierarchy" option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Review the order of restrictions in formulas. Do as many restrictions as you can before calculations; try to avoid calculations before restrictions.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

  • Query Performance - Query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable; it's quite a large hierarchy. The problem is the query takes ages to run, nearly ten minutes. It's built on a DSO, so I can't aggregate it. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the "Read data during navigation and when expanding the hierarchy" option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

  • Query Performance Issues on a cube sized 64GB.

    Hi,
    We have a non-time-based cube whose size is 64 GB. Effectively, I can't use the time dimension for partitioning. The transaction table has ~850 million records. We have 20+ dimensions, among which 2 of the dimensions have 50 million records.
    I have equally distributed the fact table records among 60 partitions. Each partition size is around 900 MB.
    The processing of the cube is not an issue as it completes in 3.5 hours. The issue is with the query performance of the cube.
    When an MDX query is submitted, unfortunately, in the majority of cases the storage engine has to scan all the partitions (as our cube is not time dependent and we can't find a suitable dimension on which to partition the measure group).
    I'm aware of the cache warming and usage-based aggregation (UBO) techniques.
    However, the cube is available for users to perform ad hoc queries, and hence the benefits of cache warming and UBO may cease to contribute to the performance gain, as there is a high probability that each user may look at the data from a different perspective (especially when we have 20+ dimensions) as the days progress.
    Also, we have 15+ average calculations (calculated measures) in the cube. So the storage engine sends all the granular data that the formula engine might have requested (could be millions of rows), and the average calculation is then performed.
    A look at the profiler suggested that considerable amount of time has been spent by storage engine to gather the records (from 60 partitions).
    FYI - Our server has RAM 32 GB and 8 cores  and it is exclusive to Analysis Services.
    I would appreciate comments from anyone who has worked on a large cube that is not time dependent and the steps they took to improve the adhoc query performance for the users.
    Thanks
    CoolP

    Hello CoolP,
    Here is a good article regarding how to tune query performance in SSAS, please see:
    Analysis Services Query Performance Top 10 Best Practices:
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    Hope you can find some helpful clues there for tuning your SSAS server query performance. Moreover, there are two ways to improve the query response time for an increasing number of end users:
    Adding more power to the existing server (scale up)
    Distributing the load among several small servers (scale out)
    For detail information, please see:
    http://technet.microsoft.com/en-us/library/cc966449.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • Can an index that is not being used still affect query performance?

    Hi, I have a query with a high cost, so I created two indexes, A and B, to improve its performance.
    After creating the indexes, I reviewed the execution plan of the query and the cost had been reduced, but I noticed that index B is not being used,
    and if I try to force the query to use index B with a HINT, the cost increases, so I decided to drop index B.
    Once I dropped index B, I checked the execution plan again and noticed that the cost of the query had increased; if I recreate index B, the explain plan
    shows a lower cost even though it is not being used by the execution plan.
    Does anyone know why this is happening?
    Can an index that is not being used by the execution plan still affect query performance?

    user11173393 wrote:
    Hi, I have a query with a high cost, so I created two indexes, A and B, to improve its performance.
    After creating the indexes, I reviewed the execution plan of the query and the cost had been reduced, but I noticed that index B is not being used,
    and if I try to force the query to use index B with a HINT, the cost increases, so I decided to drop index B.
    Once I dropped index B, I checked the execution plan again and noticed that the cost of the query had increased; if I recreate index B, the explain plan
    shows a lower cost even though it is not being used by the execution plan.
    Does anyone know why this is happening?
    Can an index that is not being used by the execution plan still affect query performance? You said that is what is happening, and I believe you.

  • How do index fragmentation and statistics affect SQL query performance?

    Hi,
    How do index fragmentation and statistics affect SQL query performance?
    Thanks
    Shashikala

    How do index fragmentation and statistics affect SQL query performance?
    Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn require more resources, and this impacts performance. If an index is fragmented (mainly the clustered index, although it holds true for nonclustered indexes as well), the time spent finding a value will be higher, because the query has to search the fragmented index to find the data, and the additional space increases the search time.
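    A minimal sketch of how both can be checked on SQL Server (the table name is a placeholder, and the thresholds are common rules of thumb, not requirements):
    -- Check fragmentation of all indexes on one table, then maintain them and refresh statistics.
    SELECT i.name AS index_name,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

    ALTER INDEX ALL ON dbo.YourTable REORGANIZE;    -- often used for roughly 5-30% fragmentation
    -- ALTER INDEX ALL ON dbo.YourTable REBUILD;    -- often used for more than 30% fragmentation

    UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;  -- refresh outdated statistics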

  • Forms - Query - Performance

    Hi,
    I am working on an application developed in Forms 10g and Oracle 10g.
    I have a few very large transaction tables in the database, and most of the screens in my application are based on these tables.
    When a user performs a query (without any filter conditions), the whole table is loaded into memory, which takes a very long time. Further queries on the same screen perform better.
    How can I keep these tables in memory (buffered) at all times, to reduce the initial query time?
    or
    Is there any way to share the session buffers with other sessions, so that it does not take a long time in each session?
    or
    Any query performance tuning suggestions will be appreciated.
    Thanks in advance

    Thanks a lot for your posts, very large means around 12 million rows. Yep, that's a large table.
    I have set Query All Records to "No", which is good. It means only enough records are fetched to fill the initial block, probably about 10 records. All the other records are not fetched from the database, so they're also not kept in memory at the Forms server.
    Even when I try the query in SQL*Plus it is taking a long time. That sounds like a query performance problem, not a Forms issue. You're probably better off asking in the database or SQL forum. You could at least include the SELECT statement here if you want any help with it; we can't guess why a query is slow if we have no idea what the query is.
    My concern is: when I execute the same query again, or in another session (some other user or the same user), can I increase the performance because the tables are already in memory? Is there any possibility for this? Can I set any database parameters to share the data between sessions like that? The database already does this. If data is retrieved from disk for one user, it is cached in the SGA (Shared Global Area). Mind the word Shared: this cached information is shared by all sessions, so other users should benefit from it.
    Caching also has its limits. The most obvious one is the size of the SGA of the database server. If the table is 200 megabytes and the server only has 8 megabytes of cache available, then caching is of little use.
    Am I thinking in the right way, or am I lost somewhere? Don't know.
    There's two approaches:
    - try to tune the query or database for better performance. For starters, open SQL*Plus, execute "set timing on", then execute "set autotrace traceonly explain statistics", then execute your query and look at the results (see the sketch after this list). It should give you an idea of how the database is executing the query and what improvements could be made. You could come back here with the SELECT statement and the timing and trace results, but the database or SQL forum is probably better
    - MORE IMPORTANTLY: consider whether it is necessary for users to perform such time-consuming (and perhaps complex) queries. Do users really need the ability to query all records? Are they ever going to browse through millions of records?
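    A minimal SQL*Plus session following the first approach (the query below is just a placeholder for the real SELECT used by the Forms block):
    -- Show timing plus the execution plan and run-time statistics instead of the rows.
    SET TIMING ON
    SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS

    SELECT *
    FROM   your_large_table
    WHERE  your_filter_column = :some_value;

    SET AUTOTRACE OFF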
    Thanks

  • Inventory Ageing query performance

    Hi All,
    I have created an inventory ageing query on our custom cube, which is a replica of 0IC_C03. We have data from 2003 onwards. The performance of the query is very poor; the system almost hangs. I tried to create aggregates to improve performance, but that failed. What should I do to improve the performance, and why did the aggregate filling fail? The cube has compressed data. Please guide.
    Regards:
    Jitendra

    In addition to the above posts, check the points below and take action accordingly to increase the query performance.
    Mainly check whether the cube data is compressed; compression will increase the performance of the query.
    1)If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2)Check code for all exit variables used in a report.
    3)Check the read mode for the query; the recommended setting is H.
    4)If Alternative UOM solution is used, turn off query cache.
    5)Use Constant Selection instead of SUMCT and SUMGT within formulas.
    6)Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    7)Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed.
    Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    8)Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    9)If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing.
    10)Check the usage of user exits involved in OLAP run time.
    11)Use Constant Selection instead of SUMCT and SUMGT within formulas.
    12)Turn on the BW Statistics: in RSA1, choose Tools -> BW Statistics for InfoCubes (choose OLAP and WHM for your relevant cubes).
    To check the Query Performance problem
    Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes.
    You need to run ST03N in expert mode to get these values
    Based on the analysis and the values obtained above, check whether an aggregate is suitable, or adjust the OLAP settings, etc.

  • Query Performance & Turning off graph

    Since we upgraded from BW 3.5 to BI 7.0, query performance has deteriorated. The same query now runs longer in BI 7 than in BW 3.5, especially for queries that return large sets of data.
    For example, one query took 4 minutes 30 seconds in BW 3.5, but takes 10 minutes in BI 7.0.
    File size also increased from 6,500 KB in BW 3.5 to 16,000 KB in BI 7.0.
    I believe it takes longer to format the data after retrieval, and it was trying to plot 17,000 records on the embedded chart. After we unhid and deleted the chart, the file size dropped by almost half, to about 7,600 KB.
    I believe the embedded graph is nice to have, but it all depends on how the data is plotted whether the graph is meaningful, which in most cases it is not for us.
    Does anyone know if there is a way to turn off / disable the automatic pre-plotting of the graph in BI 7?
    I believe that this should improve query performance if it is disabled.
    Many thanks for your help.
    Lawrence

    Just to add: for smaller queries, the time and size differences are not significant, but large queries take significantly longer (and produce larger files) in BI 7.

  • Input ready query performance

    Hi Experts,
    We are working on input-ready queries, but the input-ready reports are taking a lot of time, around 10 to 15 minutes, to display the results, and hence planning functions like Save are also taking a lot of time.
    We can't use the OLAP cache, as these reports are developed on an aggregation level.
    Could somebody guide me on how to improve the performance of input-ready queries?
    Thanks in Advance,
    Raj

    Hi
    You can do repartitioning and reclustering.
    Repartitioning lets you partition the cube even after data has been loaded into it; it is similar to partitioning, but you do it after the cube is loaded.
    Reclustering enables related data to be stored in the same extent in the database and increases query performance; it is similar to clustering, but you do it after the cube is loaded.
    There is a heap of material available in the BI section of SDN which you can make use of.
    Regards
    N Ganesh

  • How can I increase query performance other than by creating aggregates

    Hi,
    I created a query and it takes a long time to produce the report. I created aggregates, but there was no benefit; it still takes the same amount of time. Please give me options for increasing the query performance.
    bye .
    sandhya.

    Hi SANDHYA,
    You can increase the query performance by using indices too. You can search a table for data records that satisfy certain search criteria faster using an index.
    An index can be considered a copy of a database table that has been reduced to certain fields. This copy is always in sorted form. Sorting provides faster access to the data records of the table, for example using a binary search. The index also contains a pointer to the corresponding record of the actual table so that the fields not contained in the index can also be read.
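    For illustration only (in SAP BW you would create this through the ABAP Dictionary or the InfoProvider maintenance screens rather than by hand, and the table and column names here are hypothetical), a secondary index on the selection fields boils down to:
    -- A non-unique secondary index on the fields used in the selection,
    -- so lookups on those fields no longer require a full table scan.
    CREATE INDEX idx_sales_docdate_customer
        ON sales_data (doc_date, customer_id);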
    Please refer to the following links for more about indices:
    http://help.sap.com/saphelp_nw04/helpdata/en/9b/c743f5b40711d194f900a0c929b3c3/frameset.htm
    Re: Bitmap vs BTree
    How to create B-Tree and Bitmap index in SAP
    Re: Cardinality
    Line Item Dimesnion
    I have some good documents regarding indices; I can send them if I know your mail ID.
    regards,
    RV.

  • Query performance on a MultiProvider (remote cube)

    Hi All,
    I have to increase the query performance of a report which is built on a MultiProvider.
    This MultiProvider is designed from several remote cubes, but for this report the data is brought through one remote cube from R/3.
    In the filter I have one remote cube, which brings the data from R/3.
    Now in ST03 the statistics are:
    %Init time - 0, %DB time - 0, %OLAP time - 16.67, %Frontend - 83.33.
    Now I have to improve the %Frontend elapsed time.
    Could you please guide me.
    Thanks
    Srinivas

    Hi Srinivas,
    Please see this document
    https://websmp105.sap-ag.de/~sapidb/011000358700001394912002
    And this Discussion Thread
    Re: Deactivate Hierarchy symbols in excel
    See whether this is helpful in case of Remote Cubes.
    Thanks
    CK

  • Query performance when multiple values are selected for a single variable

    Hi Gurus,
    I have a sales analysis report that I run with some complex selection criteria. One of the variables is the Sales Organisation.
    If I run the report for individual sales organisations, the performance is excellent; results are displayed in a matter of seconds. However, if I specify more than one sales organisation, the query will run and run and run until it eventually times out.
    For example:
    I run the report for SALEORG1 and the results are displayed in less than 1 minute. 
    I then run the report for SALEORG2 and again the results are displayed in less than 1 minute.
    If I try to run the query for both SALEORG1 and SALEORG2, the report does not display any results but continues until it times out.
    Anybody got any ideas on why this would be happening?
    Any advise, gratefully accepted.
    Regards,
    David

    While compression is generally something that you should be doing, I don't think it is a factor here, since query performance is OK when using just a single value.
    I would do two things. First, make sure the DB stats for the cube are current. You might even consider increasing the sample rate for the stats collection if you are using one. You don't mention the DB you use, or what type of index is on Salesorg, which could play a role. Does the query run against a multiprovider on top of multiple cubes, or is there just one cube involved?
    If you still have problems after refreshing the stats, then I think the next step is to get a DB execution plan for the query by running it from RSRT debugging mode once with just one value and again with multiple values, to see if the DB is doing something different. Seek out your DBA if you are not familiar with execution plans.
