Performance query

10gR2 (10.2.0.2)
When I increase the N value in the following query, it takes much longer to run, although the explain plan doesn't change.
Can someone help?
select * from (
SELECT Event.ID,
       Event.CATEGORY,
       Event.DDOMAIN,
       Event.NETWORK,
       Event.NODE,
       Event.ENTITY,
       Event.HELPURL,
       Event.GROUPNAME,
       Event.OWNERNAME,
       Event.WEBNMS,
       Event.SEVERITY,
       Event.SOURCE,
       Event.TEXT,
       Event.TTIME,
       null PROPNAME,
       null PROPVAL,
       ROW_NUMBER() OVER (ORDER BY Event.ttime DESC) N
FROM bbnnms_user.Event
WHERE Event.CATEGORY = 'Alarms'
)
where N BETWEEN 10000175 AND 10000224

Oracle processes this in two steps.
First it finds all the rows that satisfy N <= the maximum of your BETWEEN (10000224). That rowset is then passed on to the next step, which returns just the rows >= your minimum (10000175).
[ Oracle pushes *just* the predicate N <= 10000224 into the nested view ]
When the maximum is small, Oracle has very few rows to hold internally and process/sort, so it's quick; when you increase this value it has a lot of rows to process (10000224 rows), so it becomes slow.
Even if the rows you require are always at the top end, around 10000175, reordering the query to
select * from (select * from (select * from .... ROW_NUMBER...) where N >= 10000175) where N <= 10000224
still would not help.
Oracle would still push the "N <= 10000224" predicate into the inner nested view, which still returns 10000224 rows.
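The usual workaround for deep pages is keyset ("seek") pagination: instead of numbering rows, remember the sort key where the previous page ended and seek past it, so the cost tracks the page size rather than the page number. A minimal sketch of the idea, shown here in SQLite (3.25+ for window functions) against a hypothetical one-column `event` table rather than the Oracle table above:

```python
import sqlite3

# Hypothetical table standing in for bbnnms_user.Event: 1000 rows of ttime.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (ttime INTEGER)")
conn.executemany("INSERT INTO event VALUES (?)", [(i,) for i in range(1000)])

# ROW_NUMBER pagination: the engine must materialize and number every
# row up to the upper bound of the BETWEEN before discarding the rest.
rownum_page = conn.execute("""
    SELECT ttime FROM (
        SELECT ttime, ROW_NUMBER() OVER (ORDER BY ttime DESC) AS n
        FROM event
    ) WHERE n BETWEEN 101 AND 110
""").fetchall()

# Keyset pagination: remember the sort key of the last row of the
# previous page (ttime = 900 here) and seek past it.  With an index on
# ttime, the work done tracks the page size, not the page number.
keyset_page = conn.execute("""
    SELECT ttime FROM event
    WHERE ttime < ?
    ORDER BY ttime DESC
    LIMIT 10
""", (900,)).fetchall()

assert rownum_page == keyset_page  # both pages are ttime 899 .. 890
```

The trade-off is that keyset pagination needs the previous page's last sort-key value (and a deterministic sort order), so it suits "next page" navigation rather than jumping to an arbitrary page number.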

Similar Messages

  • Frm-40505:ORACLE error: unable to perform query in oracle forms 10g

    Hi,
I get the error FRM-40505: ORACLE error: unable to perform query on an Oracle form in a 10g environment, but the same form works properly in 6i.
Please let me know what I need to do to correct this problem.
    Regards,
    Priya

    Hi everyone,
I have a block created on view V_LE_USID_1L (which gives the error FRM-40505). We don't need any updates on this block, so the 'Update Allowed' property is set to 'No'.
To fix this error I modified the 'Key Mode' property, setting it to 'Updateable' from 'Automatic'. This change solved the FRM-40505 problem, but it leads to another one.
The data block V_LE_USID_1L now allows the user to enter text (i.e. update the fields); when the data is saved, no message is shown, and when the data is refreshed on the screen, the change made previously on the block is not seen (because 'Update Allowed' is set to 'No'). How do we stop the fields of the block from being editable?
We don't want to go ahead with this solution, as we might find several similar screens and it's difficult to modify each one of them individually. If they work properly in 6i, why don't they in 10g? Does it require any registry setting?
    Regards,
    Priya

  • ? Mail [12721] Error 1 performing query: WHERE clause too complex...

Console keeps showing this about a zillion times in a row, a zillion times a day: "Mail [12721] Error 1 performing query: WHERE clause too complex no more than 100 terms allowed"
I can't find any search results anywhere online about this.
Lots of stalls and freezes in Mail, Finder/OS X, and Safari, and frequent failures to maintain a broadband connection (multiple times every day).
All apps are slow and cranky, with interminable beach balls getting worse all the time.
Anyone know what the heck is going on?

    Try rebuilding the mailbox to see if that helps.
    Also, how much disk space is available on your boot drive?

  • Is this the best performed query?

    Hi Guys,
Is this the best-performing query, or can I still improve it?
I am new to SQL performance tuning; please help me get the best performance out of this query.
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'ASH'
FOR
SELECT /*+ FIRST_ROWS(30) */ PSP.PatientNumber, PSP.IntakeID, U.OperationCenterCode OpCenterProcessed,
PSP.ServiceCode, PSP.UOMcode, PSP.StartDt, PSP.ProvID, PSP.ExpDt, NVL(PSP.Units, 0) Units,
PAS.Descript, PAS.ServiceCatID, PSP.CreatedBy AuthCreatedBy, PSP.CreatedDateTime AuthCreatedDateTime,
PSP.AuthorizationID, PSP.ExtracontractReasonCode, PAS.ServiceTypeCode,
NVL(PSP.ProvNotToExceedRate, 0) ProvOverrideRate,
prov.ShortName ProvShortName, PSP.OverrideReasonCode, PAS.ContractProdClassId
,prov.ProvParentID ProvParentID, prov.ProvTypeCd ProvTypeCd
FROM tblPatServProv psp, tblProductsAndSvcs pas, tblProv prov, tblUser u, tblGlMonthlyClose GLMC
WHERE GLMC.AUTHORIZATIONID >= 239
AND GLMC.AUTHORIZATIONID < 11039696
AND PSP.AuthorizationID = GLMC.AUTHORIZATIONID
AND PSP.Authorizationid < 11039696
AND (PSP.ExpDt >= to_date('01/03/2000','MM/DD/YYYY') OR PSP.ExpDt IS NULL)
AND PSP.ServiceCode = PAS.ServiceCode(+)
AND prov.ProvID(+) = PSP.ProvID
AND U.UserId(+) = PSP.CreatedBy
/
    Explained.
    Elapsed: 00:00:00.46
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    Plan hash value: 3602678330
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 8503K| 3073M| 91 (2)| 00:00:02 |
    |* 1 | HASH JOIN RIGHT OUTER | | 8503K| 3073M| 91 (2)| 00:00:02 |
    | 2 | TABLE ACCESS FULL | TBLPRODUCTSANDSVCS | 4051 | 209K| 16 (0)| 00:00:01 |
    | 3 | NESTED LOOPS | | 31 | 6200 | 75 (2)| 00:00:01 |
    | 4 | NESTED LOOPS OUTER | | 30 | 5820 | 45 (3)| 00:00:01 |
    |* 5 | HASH JOIN RIGHT OUTER | | 30 | 4950 | 15 (7)| 00:00:01 |
    | 6 | TABLE ACCESS FULL | TBLUSER | 3444 | 58548 | 12 (0)| 00:00:01 |
    |* 7 | TABLE ACCESS FULL | TBLPATSERVPROV | 8301K| 585M| 2 (0)| 00:00:01 |
    | 8 | TABLE ACCESS BY INDEX ROWID| TBLPROV | 1 | 29 | 1 (0)| 00:00:01 |
    |* 9 | INDEX UNIQUE SCAN | PK_TBLPROV | 1 | | 0 (0)| 00:00:01 |
    |* 10 | INDEX UNIQUE SCAN | PK_W_GLMONTHLYCLOSE | 1 | 6 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - access("PSP"."SERVICECODE"="PAS"."SERVICECODE"(+))
    5 - access("U"."USERID"(+)="PSP"."CREATEDBY")
    7 - filter(("PSP"."EXPDT">=TO_DATE('2000-01-03 00:00:00', 'yyyy-mm-dd hh24:mi:ss') OR
    "PSP"."EXPDT" IS NULL) AND "PSP"."AUTHORIZATIONID">=239 AND "PSP"."AUTHORIZATIONID"<11039696)
    9 - access("PROV"."PROVID"(+)="PSP"."PROVID")
    10 - access("PSP"."AUTHORIZATIONID"="GLMC"."AUTHORIZATIONID")
    filter("GLMC"."AUTHORIZATIONID">=239 AND "GLMC"."AUTHORIZATIONID"<11039696)
    28 rows selected.
    Elapsed: 00:00:00.42

    Thanks a lot for your reply.
    Here are the indexes on those tables.
    table --> TBLPATSERVPROV ---> index PK_TBLPATSERVPROV ---> column AUTHORIZATIONID
    table --> TBLPRODUCTSANDSVCS ---> index PK_TBLPRODUCTSANDSVCS ---> column SERVICECODE
    table --> TBLUSER ---> index PK_TBLUSER ---> column USERID

  • FRM-40505  Oracle Error: Unable to perform query(URGENT)

Hi, I developed a form with a control_block and a table_block (based on a table) on the same canvas.
Based on the values in control_block, pressing the Find button queries the detail block.
control_block ->
text item "payment_type", char type
text item "class_code", char type
push button "find"
Base table: payment_terms (termid, payment_type, class_code, other columns)
table_block is based on the above table.
Now I have written a when-button-pressed trigger on the Find button:
declare
  l_search varchar2(100);
BEGIN
  l_search := 'payment_type='|| :control_block.payment_type ||' AND class_code='|| :control_block.class_code;
  SET_BLOCK_PROPERTY('table_block', DEFAULT_WHERE, l_search);
  go_block('table_block');
  EXECUTE_QUERY;
EXCEPTION
  when others then
    null;
END;
    I am getting
    FRM-40505 Oracle Error: Unable to perform query
    please help..

    You don't need to build the default_where at run time. Just hard-code the WHERE Clause property as:
        column_x = :PARAMETER.X
    But, if for some compelling reason, you MUST do it at run time this should work:
        Set_block_property('MYBLOCK',Default_where,
            'COLUMN_X=:PARAMETER.X');
    Note that there are NO quotes except for first and last. If you get some sort of error when you query, you should actually see :Parameter.X replaced with :1 when you do Help, Display Error.
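To see why the trigger as posted fails: it concatenates character values into the WHERE clause without surrounding quotes, which is the likely cause of the FRM-40505. A sketch of the resulting predicate strings (in Python, purely for illustration; the block values are hypothetical):

```python
payment_type, class_code = "CASH", "A1"  # hypothetical block values

# What the when-button-pressed trigger builds: bare character values,
# which is not valid SQL once Forms appends it to the block's query.
broken = "payment_type=" + payment_type + " AND class_code=" + class_code
print(broken)   # payment_type=CASH AND class_code=A1

# What it would need: quoted literals -- or, better, the bind-style
# 'COLUMN_X=:PARAMETER.X' form from the reply above.
fixed = "payment_type='" + payment_type + "' AND class_code='" + class_code + "'"
print(fixed)    # payment_type='CASH' AND class_code='A1'
```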

  • Performance query in sql

      
This query takes 1 minute to execute. Please suggest if we can improve the performance further:
      SELECT P.ProgramId
                      ,P.Tier4Id
                    ,7 AS MetricId
                    ,(convert(DATETIME, CONVERT(VARCHAR, Month(s.DATE)) + '/1/' + CONVERT(VARCHAR, Year(s.DATE)))) AS DATE
                    ,CASE
                        WHEN SUM(ISNULL(S.ITD_BCWS_PMB, 0)) <> 0
                            THEN ROUND(SUM(ISNULL(S.ITD_BCWP_PMB, 0)) / SUM(ISNULL(S.ITD_BCWS_PMB, 0)), 2)
                        ELSE NULL
                        END AS Value
                    ,SUM(ISNULL(S.ITD_BCWP_PMB, 0)) AS BCWP
                FROM Staging.SPI_CPI_WBS S
                INNER JOIN Core.Projects PJ ON S.ProjectId = PJ.ProjectId AND PJ.IsActive = 1
                INNER JOIN Core.ProgramFinancials PF ON PF.ProgramId = S.ProgramId AND Year(PF.DATE) = Year(S.DATE) AND PF.IsYearly = 1
                INNER JOIN Core.v_WBS_Tier_Structure_SPICPI P ON S.WBSId = P.WBSId AND P.ProgramId = S.ProgramId
                WHERE S.LSKINdicator = 1 AND S.ProjectId IS NOT NULL
                    /*AP3-820 Check for BCWP > 0*/
                    AND ISNULL(S.ITD_BCWP_PMB, 0) > 0 AND (P.ProgramStatus = 1 OR isnull(PF.Spend, 0) != 0)
                GROUP BY P.ProgramId
                    ,s.DATE
                    ,P.Tier4Id
                HAVING SUM(ISNULL(S.ITD_BCWP_PMB, 0)) > 0
    Smash126

Hi, I am posting the entire query; please see if any more optimization can be done:
     UPDATE DW.ProgramScores_T4_P    
     SET     
      SPI=null    
      ,SPI_H=null    
      ,CPI=null    
      ,CPI_h=null    
      , M_SPI = null    
      ,M_SPI_H = null    
      ,M_CPI = null    
      ,M_CPI_H = null    
     where Date  between dateadd(mm,-12,getdate()) and getdate()       
    -- SPI CPI - Programscores    
    MERGE DW.ProgramScores_T4_P DW    
USING (
    SELECT   isnull(SPI.ProgramId, CPI.ProgramId) AS ProgramId    
      ,isnull(SPI.DATE, CPI.DATE) AS DATE    
         ,isnull(SPI.Tier4Id,CPI.Tier4Id) as Tier4Id     
      ,SPI    
      ,SPI_H    
      ,CPI    
      ,CPI_H    
      ,CASE     
       WHEN isnull(SPI.DATE, CPI.DATE) BETWEEN dateadd(mm, - 12, getdate()) AND getdate()    
        THEN 1    
       ELSE 0    
       END AS Datediff    
      ,Month(isnull(SPI.DATE, CPI.DATE)) AS Month    
      ,Year(isnull(SPI.DATE, CPI.DATE)) AS Year    
    FROM (    
     SELECT SPI.ProgramId    
      ,SPI.Metricid    
      ,Tier4Id    
      ,DATE    
      ,SPI    
      ,SPI_H    
     FROM (    
      SELECT ProgramId    
       ,Tier4Id    
       ,A.MetricId    
       ,DATE    
   ,Value AS SPI
    ,(CASE
         WHEN [Value] IS NULL OR [IsGoalOriented] = (0) OR [IsGoalOriented] IS NULL /*OR MBM.fn_CheckGoalsAvailability(A.MetricId) = 0 */    
          THEN (5)    
         ELSE CASE     
           WHEN [Value] >= [LCLG] AND [Value] <= [UCLG] OR [Value] >= [LCLG] AND [UCLG] IS NULL    
            THEN (2)    
           ELSE CASE     
             WHEN [Value] >= [LCLY] AND [Value] <= [UCLY]    
              THEN (3)    
             ELSE (4)    
             END    
           END    
         END    
        ) AS SPI_H    
      FROM (    
       SELECT P.ProgramId    
          ,P.Tier4Id    
        ,7 AS MetricId    
        ,(convert(DATETIME, CONVERT(VARCHAR, Month(s.DATE)) + '/1/' + CONVERT(VARCHAR, Year(s.DATE)))) AS DATE    
        ,CASE     
         WHEN SUM(ISNULL(S.ITD_BCWS_PMB, 0)) <> 0    
          THEN ROUND(SUM(ISNULL(S.ITD_BCWP_PMB, 0)) / SUM(ISNULL(S.ITD_BCWS_PMB, 0)), 2)    
         ELSE NULL    
         END AS Value    
        ,SUM(ISNULL(S.ITD_BCWP_PMB, 0)) AS BCWP    
       FROM Staging.SPI_CPI_WBS S    
       INNER JOIN Core.Projects PJ ON S.ProjectId = PJ.ProjectId AND PJ.IsActive = 1    
       INNER JOIN Core.ProgramFinancials PF ON PF.ProgramId = S.ProgramId AND Year(PF.DATE) = Year(S.DATE) AND PF.IsYearly = 1    
       INNER JOIN Core.v_WBS_Tier_Structure_SPICPI P ON S.WBSId = P.WBSId AND P.ProgramId = S.ProgramId    
       WHERE S.LSKINdicator = 1 AND S.ProjectId IS NOT NULL    
        /*AP3-820 Check for BCWP > 0*/    
        AND ISNULL(S.ITD_BCWP_PMB, 0) > 0 AND (P.ProgramStatus = 1 OR isnull(PF.Spend, 0) != 0)    
       GROUP BY P.ProgramId    
        ,s.DATE    
        ,P.Tier4Id    
       HAVING SUM(ISNULL(S.ITD_BCWP_PMB, 0)) > 0    
       ) A    
      INNER JOIN MBM.Metrics M ON M.MetricId = A.MetricId AND M.IsActive = 1    
      ) SPI    
     ) SPI    
    LEFT JOIN (    
     SELECT CPI.ProgramId    
           --,WBSId    
      ,Tier4Id    
      --,Tier3Id    
      --,Tier2Id    
      ,CPI.Metricid    
      ,DATE    
      ,CPI    
      ,CPI_H    
     FROM (    
      SELECT ProgramId    
          --,WBSId    
        ,Tier4Id    
        --,Tier3Id    
        --,Tier2Id    
       ,A.MetricId    
       ,DATE    
   ,Value AS CPI
    ,(CASE
         WHEN [Value] IS NULL OR [IsGoalOriented] = (0) OR [IsGoalOriented] IS NULL /*OR MBM.fn_CheckGoalsAvailability(A.MetricId) = 0 */    
          THEN (5)    
         ELSE CASE     
           WHEN [Value] >= [LCLG] AND [Value] <= [UCLG] OR [Value] >= [LCLG] AND [UCLG] IS NULL    
            THEN (2)    
           ELSE CASE     
             WHEN [Value] >= [LCLY] AND [Value] <= [UCLY]              THEN (3)    
             ELSE (4)    
             END    
           END    
         END    
        ) AS CPI_H    
      FROM (    
       SELECT P.ProgramId    
       --,S.WBSId    
        ,P.Tier4Id    
        --,WBS.Tier3Id    
        --,WBS.Tier2Id    
        ,8 AS MetricId    
        ,(convert(DATETIME, CONVERT(VARCHAR, Month(s.DATE)) + '/1/' + CONVERT(VARCHAR, Year(s.DATE)))) AS DATE    
        ,CASE     
         WHEN SUM(ISNULL(S.ITD_ACWP, 0)) <> 0    
          THEN ROUND(SUM(ISNULL(S.ITD_BCWP_PMB, 0)) / SUM(ISNULL(S.ITD_ACWP, 0)), 2)    
         ELSE NULL    
         END AS Value    
        ,SUM(ISNULL(S.ITD_BCWP_PMB, 0)) AS BCWP    
       FROM Staging.SPI_CPI_WBS S    
       INNER JOIN Core.Projects PJ ON S.ProjectId = PJ.ProjectId AND PJ.IsActive = 1    
       --INNER JOIN Core.Programs P ON P.ProgramId = S.ProgramId and P.Isactive=1    
       INNER JOIN Core.ProgramFinancials PF ON PF.ProgramId = S.ProgramId AND Year(PF.DATE) = Year(S.DATE) AND PF.IsYearly = 1    
       --inner join Core.WBS_CAM_TIERS_Structure WBS ON S.WBSId = WBS.WBSId    
       INNER JOIN Core.v_WBS_Tier_Structure_SPICPI P ON S.WBSId = P.WBSId AND P.ProgramId = S.ProgramId    
       WHERE S.LSKINdicator = 1 AND S.ProjectId IS NOT NULL    
        /*AP3-820 Check for BCWP > 0*/    
        AND ISNULL(S.ITD_BCWP_PMB, 0) > 0 AND (P.ProgramStatus = 1 OR isnull(PF.Spend, 0) != 0)    
       GROUP BY P.ProgramId    
        ,s.DATE    
        --,S.WBSId    
        ,P.Tier4Id    
           --,WBS.Tier3Id    
           --,WBS.Tier2Id    
       HAVING SUM(ISNULL(S.ITD_BCWP_PMB, 0)) > 0 /*AP3-688 & AP3-702 Check for BCWP > 0 */    
       ) A    
      INNER JOIN MBM.Metrics M ON M.MetricId = A.MetricId AND M.IsActive = 1    
      ) CPI    
     WHERE CPI IS NOT NULL    
     ) CPI     
      ON CPI.ProgramId = SPI.ProgramId AND CPI.DATE = SPI.DATE AND     
         CPI.Tier4Id = SPI.Tier4Id --AND  CPI.Tier3Id = SPI.Tier3Id AND  CPI.Tier2Id = SPI.Tier2Id    
    )SPICPI     
      on SPICPI.ProgramId = DW.ProgramId and Datediff(dd,SPICPI.Date,DW.Date) = 0  and     
         SPICPI.Tier4Id = DW.Tier4Id --and SPICPI.Tier3Id = DW.Tier3Id and SPICPI.Tier2Id = DW.Tier2Id    
    WHEN MATCHED AND (SPICPI.Datediff = 1) THEN    
    update SET DW.SPI = SPICPI.SPI,    
         DW.SPI_H = SPICPI.SPI_H,    
         DW.CPI = SPICPI.CPI,    
         DW.CPI_H = SPICPI.CPI_H,    
         DW.UpdatedDate = getdate()    
    WHEN NOT MATCHED THEN    
    insert (ProgramId,Tier4Id, Date,SPI,SPI_H,CPI,CPI_H,CreatedDate)    
    values(SPICPI.ProgramId,SPICPI.Tier4Id,SPICPI.Date,SPICPI.SPI,SPICPI.SPI_H, SPICPI.CPI,SPICPI.CPI_H,getdate());    
    Smash126

  • ORA-01722: invalid number when performing query

    Hi,
    I am running SQL Developer on a laptop (XP Pro) accessing Oracle Applications
    Product Version     11.5.10.2
    Platform     IBM SP AIX
    OS Version     AIX
    Database     9.2.0.7.0
    Support ID     14460365
    If I run the following query it works fine -
    select
    mtrh.request_number
    ,to_number(mtrh.request_number)
    from
    mtl_txn_request_headers mtrh
    where
    to_number(mtrh.request_number) = 135060
    and mtrh.request_number = '135060' -- works with this line in!!!!
    however if I comment out the last line I get
    An error was encountered performing the requested operation :
    ORA-01722: invalid number
    The field request_number is defined as varchar2(30)
    It seems that there is something strange about the way it handles to_number in where clauses.
    Thanks
    Mick Sulley

You have an invalid number in request_number. If you add "and mtrh.request_number = '135060'", the result set is reduced to only those rows which have 135060 in the column, and the to_number() works. Without that predicate, it does to_number(request_number) for all rows in order to identify the one you want. When it comes across a request_number which contains an invalid number, it reports an error.
<preach>
If request_number is a number then it should be stored in a number column. If it isn't, don't try to convert it to a number.
</preach>
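The failure mode can be mimicked outside the database. A rough analogy in Python, with hypothetical data (and keeping in mind that Oracle itself decides predicate evaluation order, so the extra predicate helps by shrinking the row set, not by guaranteeing order):

```python
# Hypothetical data: one request_number value is not a valid number
# (it ends in the letter O, not the digit zero).
requests = ["135059", "135060", "13506O"]

# With the extra equality predicate, the conversion only ever sees
# matching rows, so it succeeds.
safe = [int(r) for r in requests if r == "135060"]

# Without it, the conversion runs over every row and fails on the bad
# value, just as TO_NUMBER() does inside the WHERE clause.
try:
    [int(r) for r in requests]
    failed = False
except ValueError:
    failed = True

print(safe, failed)  # [135060] True
```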

  • Performance query on 0IC_C03 inventory cube

    Hello,
I am currently facing performance problems on this cube. The query is on material groups, so the number of rows returned is not too high.
The cube is compressed. Could aggregates be a solution, or does that not work well on this cube because of the non-cumulative key figure?
Does anyone have any hints on speeding this cube up? (The only tip I see in the collective note is to always compress.)
    Best regards
    Jørgen

    Hi Ruud,
Once compression with marker update is done, the latest balances are created automatically for inventory cube 0IC_C03.
Historic movements are only required to show the stock status for a historic date (e.g. 02-01-2008).
If the user is not interested in checking 3-year-old stock status, old data can be deleted selectively from the cube using selective deletion.
Go through the doc: [How To… Handle Inventory Management Scenarios in BW|https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328?overridelayout=true]
    Srini

  • Query Performance - Query very slow to run

I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable; it's quite a large hierarchy. The problem is the query takes ages to run, nearly ten minutes. It's built on a DSO, so I can't use aggregates. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the "Read data during navigation and when expanding the hierarchy" option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

  • Oracle performance query

    Hi folks,
    A question about Oracle performance.
    Which query would be faster....Is it the join of the tables or Is it the sub query of the tables.
    ex : select A.* from A a, B b where a.col1 = b.col1;
    (OR)
    select * from A where col1 in (select col1 from B);
    Thanks
    Shekar.

The queries are not equivalent!
    SQL> select * from dept where deptno in (select deptno from emp)
        DEPTNO DNAME          LOC
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            10 ACCOUNTING     NEW YORK
    SQL> select dept.* from dept,emp where dept.deptno=emp.deptno;
        DEPTNO DNAME          LOC
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            30 SALES          CHICAGO
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            30 SALES          CHICAGO
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            10 ACCOUNTING     NEW YORK
            30 SALES          CHICAGO
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            20 RESEARCH       DALLAS
            10 ACCOUNTING     NEW YORK
    14 rows selected.
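The demonstration above can be re-created anywhere; a minimal sketch in SQLite with made-up dept/emp data (the row counts differ from the 14-row example, but the effect is the same):

```python
import sqlite3

# Toy versions of the classic DEPT/EMP tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (deptno INTEGER, dname TEXT);
    CREATE TABLE emp  (empno INTEGER, deptno INTEGER);
    INSERT INTO dept VALUES (10,'ACCOUNTING'),(20,'RESEARCH'),(30,'SALES'),(40,'OPERATIONS');
    INSERT INTO emp  VALUES (1,10),(2,20),(3,20),(4,30),(5,30),(6,30);
""")

# IN (a semi-join): each department with at least one employee appears once.
semi = conn.execute(
    "SELECT * FROM dept WHERE deptno IN (SELECT deptno FROM emp)").fetchall()

# Plain join: each department repeats once per matching employee.
join = conn.execute(
    "SELECT dept.* FROM dept, emp WHERE dept.deptno = emp.deptno").fetchall()

print(len(semi), len(join))  # 3 6
```

So the join can return duplicates (and a different row count) whenever the inner table has more than one match per key, which is why the two forms can also perform differently: they ask different questions.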

  • Query Performance - Query not using proper plan

    Hello,
I am experiencing a performance issue with queries that span multiple partitions/tablespaces. Specifically, a query run from a stored procedure does not use the indexes; instead, full table scans are done, resulting in the query taking 30+ minutes to complete. The same query, when run outside the SP, returns results in milliseconds.
In an attempt to correct the issue, table stats were updated, and the stored procedure was re-compiled, as well as packages that may have been affected by the table stats update. In addition, the database was bounced (shut down, restarted), but no noticeable performance increase was achieved.
    I'm looking for any insight on how to correct this issue.
    I can provide additional information if required.
    Thanks,
    Scott.

Post the query, the stored procedure, and the table structure. My first guess here is that the stored procedure is binding an incorrect datatype, but I need to see the requested info to be certain.

  • Performance :Query is taking Longtime

    Hi,
A query on the cube jumps to a query on the ODS, and the query on the ODS takes a very long time. How can we optimize/improve it?
    Rgds,
    C.V.
    Message was edited by:
            C.V. P

    Hi,
well, I am sure you are aware that DataStores are not optimized for reporting.
The DataStore active table can become very large, and thus reporting on that table means reporting on a HUGE amount of data.
The common solution is the creation of additional indexes on the ODS table to speed up reporting performance. This can be done in 3.x from the ODS maintenance.
Also make sure the DB statistics are active (check the ODS active table in DB20).
    Look at this thread for the options you have:
    ODS Performance?
    Please assign points if useful,
    Gili

  • Delivery performance query

    Hi
We have the SD_C04 cube and created a query (Delivery Performance). The cube extracted the data from FLAT FILES (2lis_12_vcitm and 2lis_12_vcscl). While running the query I noticed the following issues:
1.  If multiple line items exist for a delivery document, the sales document and item come up as "#" for the additional line items.
2.  The part numbers can be different within the same delivery with multiple line items, and the delivery quantity appears blank for the additional line items.
3.  If there are multiple line items in a delivery with the same part number but different goods issue dates, the query only reports a quantity for one line item, yet calculates a date for both line items.
4.  The delivery quantity column does not seem to be calculating correctly. For example, the CSV file shows three lines for the same part but different goods issue dates for lines 1 & 2. Lines 1 & 2 were scheduled on 5/28, but the delivery quantity is only represented with the information from one line item. Line 3's delivery appears in the query on a second line with a blank quantity.
Analysis: I have checked the transfer rules (one-to-one) and the update rules. Can you tell me what I have missed?
    Thanks for your advice

    Raj,
    One solution is to insert the query onto four sheets - sheet 1 looking at Q1, sheet 2 looking at Q2 etc. etc.
    You make the selections on Quarter in the Free Characteristic part rather than at query run time - that way you can make different selections on the different sheet.
    Ensure the Return to Global Refresh is off on all queries.
    Save your workbook.
    Regards
    Gill

  • Increase performance query more than 10 millions records significantly

    The story is :
Every day, there are more than 10 million records, with the data in text-file format (.csv (comma-separated values) or other).
    Example textfiles name is transaction.csv
    Phone_Number
    6281381789999
    658889999888
    618887897
    etc .. more than 10 million rows
    From transaction.csv then split to 3 RAM (memory) tables :
    1st. table nation (nation_id, nation_desc)
    2nd. table operator(operator_id, operator_desc)
    3rd. table area(area_id, area_desc)
    Then query this 3 RAM tables to result physical EXT_TRANSACTION (in harddisk)
    Given physical External Oracle table name EXT_TRANSACTION with column result is :
    Phone_Number Nation_Desc Operator_Desc Area_Desc
    ======================================
    6281381789999 INA SMP SBY
    So : Textfiles (transaction.csv) --> RAM tables --> Oracle tables (EXT_TRANSACTION)
    The first 2 digits is nation_id, next 4 digits is operator_id, and next 2 digits is area_id.
I have heard that, to increase performance significantly, there is a technique to create tables in memory (RAM) rather than on disk.
    Any advice would be very appreciate.
    Thanks.
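As an aside, the digit split described above is trivial to express in code. A minimal sketch (the layout is the one stated in the question: first 2 digits = nation_id, next 4 = operator_id, next 2 = area_id; the function name is made up):

```python
def split_msisdn(phone: str) -> tuple:
    """Split a phone number into (nation_id, operator_id, area_id)."""
    return phone[:2], phone[2:6], phone[6:8]

# The example number from the question.
nation_id, operator_id, area_id = split_msisdn("6281381789999")
print(nation_id, operator_id, area_id)  # 62 8138 17
```

The three IDs would then be looked up against the nation/operator/area tables to produce the descriptions (INA, SMP, SBY in the example).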

    Oracle uses sophisticated algorithms for various memory caches, including buffering data in memory. It is described in Oracle® Database Concepts.
    You can tell Oracle via the CACHE table clause to keep blocks for that table in the buffer cache (refer to the URL for the technical details of how this is done).
However, this means there is now less of the buffer cache available to cache other data that is often used. So this approach could make accessing one table a bit faster at the expense of making access to other tables slower.
This is a balancing act: how much can one "interfere" with the cache before affecting and downgrading performance? Oracle also recommends that this type of "forced" caching is used for small lookup tables. It is not a good idea to use it on large tables.
As for your problem - why do you assume that keeping data in memory will make processing faster? That is a very limited approach. Memory is a resource that is in high demand. It is a very finite resource. It needs to be carefully spent to get the best and optimal performance.
The buffer cache is designed to cache "hot" (often accessed) data blocks. So in all likelihood, telling Oracle to cache a table you use a lot is not going to make it faster. Oracle is already caching the hot data blocks as best it can.
You also need to consider what the actual performance problem is. If your process needs to crunch tons of data, it is going to be slow. Throwing more memory at it will be treating the symptom - not the actual problem that tons of data are being processed.
So you need to define the actual problem. Perhaps it is not slow I/O - there could be a user-defined PL/SQL function used as part of the ELT process that causes the problem. Parallel processing could be used to do more I/O at the same time (assuming the I/O subsystem has the capacity). The process can perhaps be designed better - instead of multiple passes through a data set, crunching the same data (but different columns) again and again, do it in a single pass.
10 million rows are nothing compared to what Oracle can process on even a small server today. I have dual-CPU AMD servers doing over 2,000 inserts per second in a single process, and a Perl program making up to 1,000 PL/SQL procedure calls per second. Oracle is extremely capable - as is today's hardware and software. But that needs a sound software engineering approach. And that approach says that we first need to fully understand the problem before we can solve it, treating the cause and not the symptom.

  • BW Performance Query

    Hi Friends,
    I need to create a customised query to see the query average run time and the user usage by hour/day/week/month.
    I tried to create it in BI 7.0 but was not able to find it among the BI Content delivered queries, and when I tried to create one on my own I could not find the required key figures and characteristics.
    My requirements are:-
    User selection (filter drop down menu) to aggregate the time by hour, day, week or month. Default should show the past month with performance statistics aggregated by day
    User selection (filter drop down menu) to choose a single MultiProvider. Default should be to report on all InfoProviders
    User selection (filter drop down menu) to choose a report duration. Possible durations would be: last week, last month, last quarter, last year. Default would be last month
    User selection (filter drop down menu) to choose a single reporting user. Default should be for all users
    The two measures are:
    1.Average execution time for all reports
    2.Total number of report executions
    Thanks in advance,

    Hi Sruthi,
    Your requirement seems to be more or less met by the query 0TCT_MC01_Q0200.
    This query specifies:
    OLAP Time
    Data Manager Time
    Deviation in Times in Percent per BI Application
    To start from scratch you need to install the MultiProvider 0TCT_MC01 from the Business Content (underlying cubes being 0TCT_C01 & 0TCT_VC01) along with its workflow.
    You can subsequently make changes to this report to include user input variables and other key figures or characteristics as per your requirement.
    For your information one such Key figure 0TCTQUCOUNT determines the count of the number of times a query is run by a user.
    For more information on the objects and their usage refer to the link
    http://help.sap.com/saphelp_nw70/helpdata/EN/44/08a75d19e32d2fe10000000a11466f/frameset.htm
    Regards
    Shalabh Jain

  • Important Performance Query

    Hi,
    I have just started using Toplink ORM 10.1.3.3.
    When I log the finest level of logs, I see some entries which could be big performance issues when I put the app in production. Has someone else noticed the same thing, or am I doing something terribly wrong?
    [TopLink Fine]: *2008.08.24 08:35:47.231* --SELECT SEQ.NEXTVAL FROM DUAL
    [TopLink Finest]: *2008.08.24 08:35:49.394* --sequencing preallocation for SEQ: objects: 50 , first: 4,602, last: 4,651
    [TopLink Fine]: *2008.08.24 08:35:49.805* --SELECT SYSTIMESTAMP FROM DUAL
    [TopLink Finest]: *2008.08.24 08:35:52.018* --Assign return row DatabaseRecord(     MY_TABLE.MODIFIED_DATE => 2008-08-25 06:05:50.404)
    The above logs correspond to a single record insert in my application. One query is for the sequence and another fetches the timestamp for optimistic locking. A simple operation like this is taking 2 seconds (just on the server!) to perform, and this will be involved in every query (at least the timestamp, because of optimistic locking). So I am a little worried about the performance.
    Has anyone faced similar issue ?
    Regards,
    Ani
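    The preallocation visible in the log above ("objects: 50, first: 4,602, last: 4,651") amortizes one sequence round trip over 50 inserts. A rough language-neutral sketch of the mechanism (the class and the fake fetch function are illustrative, not TopLink's actual implementation):

```python
# Sequence preallocation: fetch a block of IDs in one round trip and hand
# them out locally, instead of one SELECT SEQ.NEXTVAL per insert.
class PreallocatingSequence:
    def __init__(self, fetch_block, size=50):
        self.fetch_block = fetch_block   # callable returning the next block's first ID
        self.size = size
        self.next_id = None
        self.last_id = -1

    def next(self):
        if self.next_id is None or self.next_id > self.last_id:
            first = self.fetch_block()   # the single round trip per block
            self.next_id, self.last_id = first, first + self.size - 1
        value = self.next_id
        self.next_id += 1
        return value

calls = []
def fake_db_fetch():                      # stands in for SELECT SEQ.NEXTVAL FROM DUAL
    calls.append(1)
    return (len(calls) - 1) * 50 + 1

seq = PreallocatingSequence(fake_db_fetch, size=50)
ids = [seq.next() for _ in range(120)]
print(ids[0], ids[-1], len(calls))        # -> 1 120 3  (120 IDs, only 3 round trips)
```

    This is why the cost per insert drops as the batch grows: the round trip happens once per block, not once per row.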

    Thanks for the reply.
    As rightly mentioned by you, the time taken for preallocation does not bother me that much, as it is amortized across the batch size.
    I need to have a timestamp field in my tables for auditing purposes and I am trying to reuse the same for optimistic locking.
    In the non-ORM world, the system time is typically obtained using DB functions. But I guess the ORM tool has to make a call to get the current timestamp from the DB, set it in the object, and only then persist the object.
    i.e. I was expecting that, to insert an entity, SQL like insert into mytable(col1,col2) values(123,sysdate) would be formed. But instead of using sysdate or something similar, the timestamp is taken from the DB first, set into the object and then persisted.
    The reason for this behavior could be that the timestamp must be set in the object copy without having to perform a read after the save.
    I have not tried to run any performance test on my usage. As mentioned earlier, we have just started development and I was trying to explore the optimal way to use TopLink right from the beginning.
    Regards,
    Ani
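    The optimistic-locking pattern discussed in this thread reduces to one idea: include the previously read version/timestamp in the UPDATE's WHERE clause, and treat an update count of 0 as a concurrent modification. A minimal sketch using SQLite and an integer version column (table and column names are made up for illustration; TopLink uses its own locking policies):

```python
import sqlite3

# Version-based optimistic locking, reduced to its core: the UPDATE only
# succeeds if the row still carries the version we originally read.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, val TEXT, version INTEGER)")
conn.execute("INSERT INTO my_table VALUES (1, 'old', 1)")

# Read the row, remembering the version we saw.
val, version = conn.execute(
    "SELECT val, version FROM my_table WHERE id = 1"
).fetchone()

# Optimistic update: bump the version, but only if nobody changed it meanwhile.
cur = conn.execute(
    "UPDATE my_table SET val = ?, version = version + 1 "
    "WHERE id = ? AND version = ?",
    ("new", 1, version),
)
print(cur.rowcount)  # -> 1 (success; 0 would mean a concurrent modification)

# A second writer still holding the stale version now fails harmlessly.
stale = conn.execute(
    "UPDATE my_table SET val = ?, version = version + 1 "
    "WHERE id = ? AND version = ?",
    ("other", 1, version),
)
print(stale.rowcount)  # -> 0 (lost the race, must re-read and retry)
```

    A timestamp column works the same way as the integer version; the only difference is where the new value comes from, which is exactly the round trip Ani observed in the log.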
