Report bursting: to increase query performance in Xcelsius

Is there any way to increase query performance in Xcelsius by using report bursting?

Fremlin,
Report bursting is only for distributing your reports to your end users.
You can improve performance only by following the [Best practices|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac] in Xcelsius.
-Anil

Similar Messages

  • Increase Query Designer Performance?

    Hi together,
    Is it possible to increase the performance of the Query Designer? I have a query based on a MultiProvider with several InfoCubes. If I work with the Query Designer for a few minutes, it becomes slower and slower; it then takes from one up to ten seconds for a digit I have just typed to appear in my formula. This is very awful! At least half of the time I work with the Query Designer, I have to wait for the tool to stop calculating something or displaying the hourglass. Very inefficient.
    Thanks for hints in advance!

    >
    Ricardo Rosa wrote:
    > Hi Timo,
    >
    > Usually this kind of issue is solved by a frontend patch upgrade. Have you tried to reproduce this with the latest FEP?
    >
    > Another suggestion that should help is to search for RSZ table inconsistencies with report ANALYZE_RSZ_TABLES. It can find inconsistencies that degrade performance in the query definition and also suggest a fix for them.
    >
    > Kind regards,
    >
    > Ricardo
    Hi Ricardo,
    Yes, I've already upgraded to the latest FEP version.
    But thank you for this report. It looks very helpful!
    >
    Arun Varadarajan wrote:
    > Shikha,
    > The question was regarding Query designer performance and not query performance...
    >
    > We faced a similar issue earlier - even opening the Query Designer took a huge amount of time - but after upgrading my system to 1 GB of RAM most of the performance issues went away... check the RAM usage in the Task Manager and the CPU utilization. This might make the Query Designer faster.
    Hi Arun,
    At this point there is no more hope for me, as my system already has 2 GB of RAM and a 2.2 GHz dual-core CPU.

  • Urgent: regarding increasing the performance of a report

    Hi,
    I have a report which displays the correct data, but when I execute it on the PRD server it hits a request timeout. So I want to increase its performance. Please help me out with this.
    REPORT  ZWIP_STOCK NO STANDARD PAGE HEADING LINE-SIZE 150.
    TABLES: AFPO, AFRU, MARA, MAKT.
    DATA: BEGIN OF ITAB OCCURS 0,
          AUFNR LIKE AFPO-AUFNR,
          MATNR LIKE AFPO-MATNR,
          LGORT LIKE AFPO-LGORT,
          MEINS LIKE MARA-MEINS,
          NTGEW LIKE MARA-NTGEW,
          MTART LIKE MARA-MTART,
          STOCK TYPE P LENGTH 10 DECIMALS 3,
          END OF ITAB.
    DATA : ITAB2 LIKE ITAB OCCURS 0 WITH HEADER LINE.
    DATA : DESC LIKE MAKT-MAKTX.
    SELECT-OPTIONS : MAT_TYPE FOR MARA-MTART.
    SELECT-OPTIONS : P_MATNR FOR AFPO-MATNR.
    DATA : V_MINOPR LIKE AFRU-VORNR,
           V_MAXOPR LIKE AFRU-VORNR,
           V_QTYMIN LIKE AFRU-GMNGA,
           V_QTYMAX LIKE AFRU-GMNGA,
           V_QTY TYPE P LENGTH 10 DECIMALS 3.
            SELECT A~AUFNR A~MATNR A~LGORT B~MEINS B~NTGEW B~MTART FROM AFPO AS A
              INNER JOIN MARA AS B ON A~MATNR = B~MATNR
                INTO TABLE ITAB WHERE A~ELIKZ <> 'X' AND B~MTART IN MAT_TYPE AND A~MATNR IN P_MATNR.
        ITAB2[] = ITAB[].
        SORT ITAB2 BY MATNR MEINS MTART NTGEW.
        DELETE ADJACENT DUPLICATES FROM ITAB2 COMPARING MATNR MEINS MTART NTGEW.
       LOOP AT ITAB2.
        V_QTY = 0.
          LOOP AT ITAB WHERE MATNR = ITAB2-MATNR.
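            " these four SELECTs run once per production order inside
            " nested loops - the main cause of the timeout (see the reply below)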
            SELECT MIN( VORNR ) INTO V_MINOPR FROM AFRU WHERE AUFNR = ITAB-AUFNR.
            SELECT MAX( VORNR ) INTO V_MAXOPR FROM AFRU WHERE AUFNR = ITAB-AUFNR.
            SELECT SUM( GMNGA ) INTO V_QTYMIN FROM AFRU WHERE AUFNR = ITAB-AUFNR AND VORNR =  V_MINOPR.
            SELECT SUM( GMNGA ) INTO V_QTYMAX FROM AFRU WHERE AUFNR = ITAB-AUFNR AND VORNR =  V_MAXOPR.
            V_QTY = V_QTY + V_QTYMIN - V_QTYMAX.
          ENDLOOP.
          ITAB2-STOCK = V_QTY.
          MODIFY ITAB2.
        ENDLOOP.
        LOOP AT ITAB2.
              WRITE:/ ITAB2-MATNR,ITAB2-STOCK.
        ENDLOOP.

    Instead of the code from itab2[] = itab[] down to the last ENDLOOP, try the code given below:
    data : begin of minopr occurs 0,
           aufnr type afru-aufnr,
           vornr type afru-vornr,
           end of minopr.
    data : begin of maxopr occurs 0,
           aufnr type afru-aufnr,
           vornr type afru-vornr,
           end of maxopr.
    data : begin of qtymin occurs 0,
           aufnr type afru-aufnr,
           vornr type afru-vornr,
           gmnga type afru-gmnga,
           end of qtymin.
    data : begin of qtymax occurs 0,
           aufnr type afru-aufnr,
           vornr type afru-vornr,
           gmnga type afru-gmnga,
           end of qtymax.
    select aufnr vornr into table minopr from afru for all entries in itab where aufnr = itab-aufnr.
    maxopr[] = minopr[].
    sort minopr by aufnr vornr ascending.
    sort maxopr by aufnr vornr descending.
    delete adjacent duplicates from minopr comparing aufnr.
    delete adjacent duplicates from maxopr comparing aufnr.
    SELECT aufnr vornr GMNGA INTO TABLE QTYMIN FROM AFRU for all entries in minopr WHERE AUFNR = minopr-AUFNR AND VORNR = MINOPR-vornr.
    SELECT aufnr vornr GMNGA INTO TABLE QTYMAX FROM AFRU for all entries in maxopr WHERE AUFNR = maxopr-AUFNR AND VORNR = maxopr-vornr.
    sort qtymin by aufnr.
    sort qtymax by aufnr.
    sort itab by matnr MEINS MTART NTGEW.
    data : v_prev_matnr like afpo-matnr.
    LOOP AT ITAB.
    " when the material changes, append the total for the previous one
    if sy-tabix <> 1 and itab-matnr <> v_prev_matnr.
    clear itab2.
    itab2-matnr = v_prev_matnr.
    itab2-stock = v_qty.
    append itab2.
    v_qty = 0.
    endif.
    v_prev_matnr = itab-matnr.
    v_qtymin = 0.
    v_qtymax = 0.
    read table qtymin with key aufnr = itab-aufnr binary search.
    if sy-subrc = 0.
    loop at qtymin from sy-tabix.
    if qtymin-aufnr = itab-aufnr.
    v_qtymin = v_qtymin + qtymin-gmnga.
    else.
    exit.
    endif.
    endloop.
    endif.
    read table qtymax with key aufnr = itab-aufnr binary search.
    if sy-subrc = 0.
    loop at qtymax from sy-tabix.
    if qtymax-aufnr = itab-aufnr.
    v_qtymax = v_qtymax + qtymax-gmnga.
    else.
    exit.
    endif.
    endloop.
    endif.
    V_QTY = V_QTY + V_QTYMIN - V_QTYMAX.
    ENDLOOP.
    " append the total for the last material
    if sy-subrc = 0.
    clear itab2.
    itab2-matnr = v_prev_matnr.
    itab2-stock = v_qty.
    append itab2.
    endif.
    LOOP AT ITAB2.
    WRITE:/ ITAB2-MATNR,ITAB2-STOCK.
    ENDLOOP.
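
    A caveat worth adding to the suggested code (a standard FOR ALL ENTRIES pitfall, not mentioned in the reply): if the driver table is empty, the WHERE clause is ignored and the SELECT reads all of AFRU. A minimal guard, as a sketch:
    " FOR ALL ENTRIES returns the whole table when ITAB is empty,
    " so only run the SELECT when there are orders to look up.
    if not itab[] is initial.
      select aufnr vornr into table minopr from afru
        for all entries in itab where aufnr = itab-aufnr.
    endif.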

  • How to improve query performance at the report level and designer level

    How can I improve query performance at the report level and at the designer level?
    Please explain in detail.

    First, it all depends on the design of the database, the universe, and the report.
    At the universe level, check your contexts carefully to get the optimal performance out of the universe, and also check your joins; keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on), and when you create a parameter, try to match it with the key fields in the database.
    Good luck
    Amr

  • How to increase the performance of a report

    Can anybody tell me how to increase the performance of a report?
    I have prepared a report to show expense details, using the BSIS and BSAS tables.
    But whenever I execute it, it hits a runtime error (TIME_OUT).
    Moderator Message: Duplicate Post. Read my comments in your previous thread.
    Edited by: kishan P on Nov 25, 2010 1:38 PM

    Please SEARCH in SCN before posting.
    Also post performance-related issues here.

  • Increase the Performance of Search In Interactive Report

    Hi,
    I created a report which has about 10,000+ records. Loading the report does not take much time, but when I make a search in the interactive report it takes a lot of time. Please suggest how I can increase the performance of the interactive search.
    I am using Oracle apex 3.2 and Oracle 10g XE database.
    Please suggest me.
    Thanks
    Sudhir

    Hi,
    1) I am using Row Ranges Pagination from X to Y
    2) Executing it takes about 1.15 seconds
    3) This is the function I am using to make the call:
    FUNCTION  FUNC_ORACLE_CONTRACT(P_SERIAL_NUMBER IN VARCHAR2,P_FLAG IN VARCHAR2) 
    RETURN VARCHAR2
    AS
    L_LOCATION_ID    NUMBER;
    L_SYSTEM_ID      VARCHAR2(200);
    L_ENTITLEMENT_ID VARCHAR2(200);
    L_CREATED_DATE   DATE;
    L_COMPANY_NAME   VARCHAR2(500);
    L_LEGAL_NAME     VARCHAR2(500);
    CURSOR C1 IS
           SELECT DISTINCT LOCATION_ID, SYSTEM_ID, ENTITLEMENT_ID, CREATED_DATE
           FROM CUSTOMER_LICENSES
           WHERE PRODUCT_SERIAL_NUMBER = P_SERIAL_NUMBER OR
                 ENTITLEMENT_ID = P_SERIAL_NUMBER        OR
                 ACTIVATION_ID = P_SERIAL_NUMBER ;
    BEGIN
      OPEN C1;
      FETCH C1 INTO L_LOCATION_ID, L_SYSTEM_ID, L_ENTITLEMENT_ID, L_CREATED_DATE;
      CLOSE C1;
        IF P_FLAG = 'COMPANY_ID' THEN
           SELECT COMPANY_NAME || ' (C)' INTO L_COMPANY_NAME
           FROM CUSTOMER_LOCATIONS
           WHERE LOCATION_ID = L_LOCATION_ID;
             IF L_COMPANY_NAME IS NULL THEN
                 SELECT LEGAL_NAME || ' (P)' INTO L_LEGAL_NAME
                 FROM PARTNER_LOCATIONS
                 WHERE ID = L_LOCATION_ID;
               RETURN L_LEGAL_NAME  ;
             ELSE
                RETURN L_COMPANY_NAME ;        
              END IF;
        ELSIF  P_FLAG = 'SYSTEM_ID' THEN
          RETURN L_SYSTEM_ID;
        ELSIF  P_FLAG = 'ENTITLEMENT_ID' THEN
          RETURN L_ENTITLEMENT_ID;
        ELSIF  P_FLAG = 'LOCATION_ID' THEN
          RETURN L_LOCATION_ID;   
       END IF;  
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    RETURN NULL;
    WHEN OTHERS THEN
    RETURN NULL;
    END FUNC_ORACLE_CONTRACT;
    Edited by: Sudhir_Meru on Apr 9, 2013 4:11 PM
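
    A note on the likely bottleneck: the cursor's WHERE clause ORs three different columns, so without supporting indexes CUSTOMER_LICENSES is scanned once per row of the interactive report. A hedged sketch (index names are invented; verify against your actual schema):
    -- One index per OR branch lets the optimizer expand the OR into
    -- index probes instead of a full scan on every function call.
    CREATE INDEX cust_lic_serial_idx ON customer_licenses (product_serial_number);
    CREATE INDEX cust_lic_entitle_idx ON customer_licenses (entitlement_id);
    CREATE INDEX cust_lic_activ_idx ON customer_licenses (activation_id);
    Calling a PL/SQL function per row also defeats set-based optimization, so joining CUSTOMER_LICENSES and the location tables directly in the report query is worth trying as well.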

  • How to increase the performance of an insert query?

    Hi,
    I am using an Oracle database for our application, where we execute an insert statement and need a throughput of 300 inserts per second.
    As of now we are getting only 30 per second, and we do not know how to tune the database to reach that rate.
    We are doing all of this through C programs on both Linux and Solaris.
    Can you guide me on this issue?
    Regards,
    vamsi krishna

    Performance tuning is not something you can get a straightforward answer for.
    You need to look at various aspects of the system to get close to, or reach, the performance you want. Tuning just the database might not give you what you want.
    So, you need to look at the application code, memory, disk IO, etc. Have a look at the Performance Tuning Guide in the Oracle Documentation Library for your Oracle release and operating system:
    http://www.oracle.com/technology/documentation/index.html#previous
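
    One pattern that commonly lifts insert throughput from tens to hundreds per second is batching: bind many rows per round trip and commit per batch rather than per row. A sketch in PL/SQL for illustration (table and column names are invented):
    DECLARE
      TYPE t_val_tab IS TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;
      l_vals t_val_tab;
    BEGIN
      -- stage a batch of rows in memory
      FOR i IN 1 .. 1000 LOOP
        l_vals(i) := 'row ' || i;
      END LOOP;
      -- one bulk INSERT instead of 1000 single-row round trips
      FORALL i IN 1 .. l_vals.COUNT
        INSERT INTO my_table (val) VALUES (l_vals(i));
      COMMIT;  -- commit once per batch, not once per row
    END;
    /
    In C the equivalent is using bind variables (so the statement is parsed only once) and binding arrays to the INSERT so a single OCIStmtExecute call processes many iterations.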

  • How to improve query performance when reporting on an ODS object?

    Hi,
    Can anybody tell me how to improve query performance when reporting on an ODS object?
    Thanks in advance,
    Ravi Alakuntla.

    Hi Ravi,
    Check these links, which may address your requirement:
    Re: performance issues of ODS
    Which criteria to follow to pick InfoObj. as secondary index of ODS?
    PDF on BW performance tuning,
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    Regards,
    Mani.
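
    To make the secondary-index suggestion from the links concrete: in BW you define secondary indexes in the ODS object maintenance on the fields you filter on most, which at the database level amounts to an index on the ODS active table (/BIC/A<ODS name>00). A sketch with invented names:
    -- Hypothetical ODS ZFIGL: index its active table on the most-used
    -- selection fields so reporting SELECTs avoid full table scans.
    CREATE INDEX "/BIC/AZFIGL00~Z01"
      ON "/BIC/AZFIGL00" ("COMP_CODE", "FISCPER");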

  • Query performance to increase

    Hi, I'm using CTEs as defined below, and I take the columns from a temp table. When I execute this logic in the SP, it takes 7 minutes for 3 lakh (300,000) records. Can anyone please help me improve my query performance?
    The code I'm using is:
    create temtable as(cols);
    WITH Datematrix(AllocationDate) /*cte*/
    As
    (
    SELECT @StartDate AS AllocationDate
    UNION ALL
    SELECT DATEADD(D,1,AllocationDate) AS AllocationDate
    FROM Datematrix WHERE AllocationDate<@EndDate
    ),
    /*cte*/ Allocation (Division,DivisionID,ResourceName,ResourceEmailID,ResourceID,Project
    ,ProjectID,Scope,ScopeID,WorkItem,TaskStartDate,TaskEndDate
    ,ProgramID ,Program,PortfolioID ,Portfolio,StatusID,Status,TaskID,EstimateHrs,ScopeEstimateHrs)
    AS
    (
    SELECT
    DIV.Division
    ,RES.DivisionID
    ,RES.ResourceName
    ,ResourceEmailID = STUFF((
    SELECT COALESCE( ', ' + CONVERT(VARCHAR,RES.Email1), '')
    FROM dbo.TasksResource TSKRES WITH(NOLOCK) LEFT OUTER JOIN
    dbo.tb_Resource RES WITH(NOLOCK) ON RES.UID = TSKRES.ResourceID
    WHERE TSKRES.TaskID = TSK.TaskID
    FOR XML PATH('')), 1, 1, '')
    ,RES.UID ResourceID
    ,PRJ.Project + ' (' + CONVERT(VARCHAR(15),PRJ.StartDate,101) +' - ' + CONVERT(VARCHAR(15),PRJ.EndDate,101) + ')' as Project
    ,PRJ.UID ProjectID
    ,SCP.Title Scope
    ,SCP.ScopeID
    ,TSK.Title WorkItem
    ,TSK.StartDate TaskStartDate
    ,TSK.EndDate TaskEndDate
    ,PRJ.ProgramID
    ,PR.Program
    ,PR.PortfolioID
    ,PF.Portfolio
    ,TSK.StatusID
    ,ST.Status
    ,TSK.TaskID
    ,TSK.EstimateHrs
    ,(isnull(SCP.EstimateARCH,0) + isnull(SCP.EstimateBA,0) + isnull(SCP.EstimateDev,0) + isnull(SCP.EstimatePM,0) + isnull(SCP.EstimateQA,0) + isnull(SCP.EstimateRM,0)) as ScopeEstimateHrs
    --SCP.EstimateARCH + SCP.EstimateBA +SCP.EstimateDev +SCP.EstimatePM +SCP.EstimateQA +SCP.EstimateRM as ScopeEstimateHrs
    FROM Tasks TSK WITH(NOLOCK)
    INNER JOIN dbo.Scope SCP WITH(NOLOCK) ON TSK.ScopeID = SCP.ScopeID
    INNER JOIN dbo.tb_Project PRJ WITH(NOLOCK)ON TSK.ProjectID = PRJ.UID
    INNER JOIN dbo.tb_Program PR WITH(NOLOCK) ON PR.UID=PRJ.ProgramID
    INNER JOIN dbo.tb_Portfolio PF WITH(NOLOCK)ON PF.UID=PR.PortfolioID
    LEFT OUTER JOIN dbo.TasksResource TSKRES WITH(NOLOCK)ON TSKRES.TaskID = TSK.TaskID
    LEFT OUTER JOIN dbo.tb_Resource RES WITH(NOLOCK) ON RES.UID = TSKRES.ResourceID
    LEFT JOIN dbo.tb_Division DIV WITH(NOLOCK) ON RES.DivisionID = DIV.UID
    LEFT JOIN dbo.tb_Status ST WITH(NOLOCK) ON TSK.StatusID=ST.UID /*relating with the high level work items */
    WHERE (PRJ.UID = @Project OR @Project = -1)
    AND (PRJ.ProgramID = @Program OR @Program = -1)
    AND (PRJ.PortfolioID =@Portfolio OR @Portfolio = -1)
    ),
    /*columns used in 2 cte's are taken in below maindata*/
    MainData (AllocationDate,Division,DivisionID,ResourceName,ResourceEmailID,ResourceID,Project,ProjectID
    ,Scope,ScopeID,WorkItem,TaskStartDate,TaskEndDate
    ,ProgramID ,Program,PortfolioID ,Portfolio,StatusID,Status,TaskID,EstimateHrs,ScopeEstimateHrs,Allocated)
    AS
    ( SELECT
    Datematrix.*
    ,Allocation.*
    ,CASE WHEN ISDATE(TaskStartDate)=1 THEN 1 ELSE 0 END AS Allocated
    FROM Datematrix FULL OUTER JOIN Allocation
    ON ( Datematrix.AllocationDate>= Allocation.TaskStartDate
    AND Datematrix.AllocationDate<=Allocation.TaskEndDate
    ))
    INSERT INTO #TempTable
    SELECT * FROM MainData
    OPTION (MAXRECURSION 0);
    This is the way the code goes. Please help, I need my query tuned! Thanks in advance.
    lucky

    When asking performance-related questions, it is usually a bad idea to use pseudo code. In this case, I am referring to the fact that your "temtable" declaration is invalid, and that it is unclear where the local variables and/or parameters originate from.
    In your query, you are using the "optional parameter" pattern for the local variables and/or parameters @Project, @Program and @Portfolio. I therefore assume that we are talking about a stored procedure here.
    In stored procedures, optional parameters often have a negative effect on performance. You can cancel some of that effect by adding the OPTION(RECOMPILE) hint (assuming it is part of the stored procedure, and we are talking about parameters, not local variables). A sketch of both suggestions follows below.
    Also, you could consider using a calendar table instead of the Datematrix CTE. The reason is that the join with Allocation is probably expensive, because the optimizer has not established that Datematrix is a unique range of dates, and it has no index. A calendar table with a unique index on the date can help.
    The rest is probably up to the indexes on the base tables. Since you did not post any DDL, I can't comment on that. Lack of proper indexes is usually the biggest cause of SELECT performance problems.
    Gert-Jan
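
    To make those two suggestions concrete, here is a sketch (only the shape matters; it assumes the code runs inside a stored procedure, and dbo.Calendar is a hypothetical table with a unique index on CalendarDate):
    -- 1) Let the optimizer see the actual parameter values, so the
    --    '@Project = -1' style branches are resolved per execution.
    INSERT INTO #TempTable
    SELECT * FROM MainData
    OPTION (MAXRECURSION 0, RECOMPILE);
    -- 2) Replace the recursive Datematrix CTE with a calendar table:
    -- SELECT c.CalendarDate AS AllocationDate
    -- FROM dbo.Calendar c
    -- WHERE c.CalendarDate BETWEEN @StartDate AND @EndDate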

  • Query Performance for OLE DB OLAP Reporting

    Hi Experts,
    what are the advantages of enhancing query performance by
    A) building Aggregates or
    B) using Information Broadcaster Query Precalculation?
    Since the settings in Information Broadcaster can be made by any user - will the precalculated version be used only for this user, or for all users executing the query?
    Are these settings also used if the query is executed via a 3rd party Frontend tool?
    Thanks,
    Angie

    Hi Angie,
    Which is the third party tool that's accessing the query? Is it BO? If so there's a lot of information available.

  • How to improve query performance built on an ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data, ~300 million records for 2 years.
    Thanx in advance,
    Guru.

    Hi Raj,
    Here are a few tips to help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Review the order of restrictions in formulas. Apply as many restrictions as you can before calculations, and try to avoid calculations before restrictions.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

  • Query performance - query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO, so I can't aggregate it. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Review the order of restrictions in formulas. Apply as many restrictions as you can before calculations, and try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

  • Inventory Ageing query performance

    Hi All,
    I have created an inventory ageing query on our custom cube, which is a replica of 0IC_C03. We have data from 2003 onwards. The performance of the query is very poor; the system almost hangs. I tried to create aggregates to improve performance, but the aggregate filling failed. What should I do to improve the performance, and why did the aggregate filling fail? The cube has compressed data. Please guide.
    Regards:
    Jitendra

    In addition to the above posts, check the points below and take action accordingly to increase the query performance.
    Mainly check whether the cube data is compressed; compression will increase the performance of the query.
    1)If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2)Check code for all exit variables used in a report.
    3)Check the read mode for the query; the recommended mode is H.
    4)If Alternative UOM solution is used, turn off query cache.
    5)Use Constant Selection instead of SUMCT and SUMGT within formulas.
    6)Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    7)Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed.
    Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    8)Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    9)If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing.
    10)Check the usage of user exits involved at OLAP run time.
    11)Use Constant Selection instead of SUMCT and SUMGT within formulas.
    12)Turn on the BW Statistics: RSA1, choose Tools -> BW statistics for InfoCubes (choose OLAP and WHM for your relevant cubes).
    To check the query performance problem:
    Use ST03N -> BW System load values to recognize the problem. Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes.
    You need to run ST03N in expert mode to get these values.
    Based on the analysis and the values taken from the above, check whether an aggregate is suitable, or adjust the OLAP settings, etc.
    Edited by: prashanthk on Nov 26, 2010 9:17 AM

  • Input ready query performance

    Hi Experts,
    We are working on input-ready queries, but the input-ready reports are taking a lot of time, around 10 to 15 minutes, to display the results, and hence the planning functions such as save are also taking a lot of time.
    We can't use the OLAP cache, as these reports are developed on an aggregation level.
    Could somebody guide me on how to improve the performance of input-ready queries?
    Thanks in Advance,
    Raj

    Hi
    You can do repartitioning and reclustering.
    Repartitioning helps you partition the cube even after data has been loaded into it; it is similar to partitioning, but done after the cube load.
    Reclustering enables related data to be stored in the same extent in the database and increases query performance; it is similar to clustering, but done after the cube load.
    There is a heap of material available in the BI section of SDN which you can make use of.
    Regards
    N Ganesh

  • How can I increase query performance other than by creating aggregates

    Hi,
    I created a query and it takes a long time to produce the report. I created aggregates, but to no avail; it still takes the same time. So please give me an option to increase the query performance.
    Bye,
    Sandhya.

    Hi SANDHYA,
    You can increase the query performance by using indices too. You can search a table for data records that satisfy certain search criteria faster using an index.
    An index can be considered a copy of a database table that has been reduced to certain fields. This copy is always in sorted form. Sorting provides faster access to the data records of the table, for example using a binary search. The index also contains a pointer to the corresponding record of the actual table so that the fields not contained in the index can also be read.
    Please refer to the following links for more about indices:
    http://help.sap.com/saphelp_nw04/helpdata/en/9b/c743f5b40711d194f900a0c929b3c3/frameset.htm
    Re: Bitmap vs BTree
    How to create B-Tree and Bitmap index in SAP
    Re: Cardinality
    Line Item Dimesnion
    I have some good documents regarding indices; I would send them if I knew your mail ID.
    regards,
    RV.
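
    As a small illustration of the idea (a sketch, not from the original thread): the sorted-copy-plus-binary-search behaviour of a database index is the same trick you use on a sorted internal table in ABAP.
    DATA: BEGIN OF ls_mat,
            matnr(18) TYPE c,
            maktx(40) TYPE c,
          END OF ls_mat.
    DATA: lt_mat LIKE STANDARD TABLE OF ls_mat.
    " sorting once is the 'index build' ...
    SORT lt_mat BY matnr.
    " ... and BINARY SEARCH is the fast keyed access it enables
    READ TABLE lt_mat INTO ls_mat
         WITH KEY matnr = 'MAT-0001' BINARY SEARCH.
    IF sy-subrc = 0.
      WRITE: / ls_mat-matnr, ls_mat-maktx.
    ENDIF.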
