RMAN performance query

Hi experts,
Could you please recommend some information on improving RMAN performance?
What I know so far is:
1. using the DURATION parameter
2. sizing the large pool
3. I/O slaves
Please mention any other points I have missed.
regards,
shaan

Check Tuning Backup and Recovery.
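The three points listed above can be sketched as follows. This is a minimal illustration, not a recipe; the sizes and slave counts are placeholder values you would need to tune for your own system.

```sql
-- Size the large pool so RMAN I/O buffers do not fall back to the shared pool
-- (64M is an arbitrary example value):
ALTER SYSTEM SET large_pool_size = 64M SCOPE = BOTH;

-- I/O slaves: mainly useful when the platform lacks native asynchronous I/O
ALTER SYSTEM SET backup_tape_io_slaves = TRUE SCOPE = SPFILE;
ALTER SYSTEM SET dbwr_io_slaves = 4 SCOPE = SPFILE;
```

On the RMAN side, the DURATION clause (10g and later) throttles a backup to limit its impact, e.g. `BACKUP DURATION 4:00 MINIMIZE LOAD DATABASE;`.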

Similar Messages

  • Frm-40505:ORACLE error: unable to perform query in oracle forms 10g

    Hi,
I get the error frm-40505: ORACLE error: unable to perform query on an Oracle form in a 10g environment, but the same form works properly in 6i.
Please let me know what I need to do to correct this problem.
    Regards,
    Priya

    Hi everyone,
I have a block created on the view V_LE_USID_1L (which gives the error frm-40505). We don't need any updates on this block, so the 'Update Allowed' property is set to 'No'.
To fix this error I modified the 'Key Mode' property, changing it from 'Automatic' to 'Updateable'. This change solved the frm-40505 problem, but it leads to another one.
The data block v_le_usid_1l now allows the user to enter text (i.e. update the fields). When the data is saved, no message is shown, and when the data is refreshed on the screen, the change made previously in the block is not visible (because the block's 'Update Allowed' is set to 'No'). How do we stop the fields of the block from being editable?
We don't want to go ahead with this solution, as we may find several similar screens and it is difficult to modify each one of them individually. If they work properly in 6i, why don't they in 10g? Does it require any registry setting?
    Regards,
    Priya

  • Mail [12721] Error 1 performing query: WHERE clause too complex...

    Console keeps showing this about a zillion times in a row, a zillion times a day: "Mail [12721] Error 1 performing query: WHERE clause too complex no more than 100 terms allowed"
    I can't find any search results anywhere online about this.
    Lots of stalls and freezes in Mail, Finder/OS X, and Safari -- frequent failures to maintain a broadband connection (multiple times every day).
    All apps are slow and cranky, with interminable beach balls getting worse all the time.
    anyone know what the heck is going on?

    Try rebuilding the mailbox to see if that helps.
    Also, how much disk space is available on your boot drive?

  • Is this the best performed query?

    Hi Guys,
    Is this the best-performing query, or can I still improve it?
    I am new to SQL performance tuning; please help me get the best performance out of this query.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'ASH'
    2 FOR
    3 SELECT /*+ FIRST_ROWS(30) */ PSP.PatientNumber, PSP.IntakeID, U.OperationCenterCode OpCenterProcessed,
    4 PSP.ServiceCode, PSP.UOMcode, PSP.StartDt, PSP.ProvID, PSP.ExpDt, NVL(PSP.Units, 0) Units,
    5 PAS.Descript, PAS.ServiceCatID, PSP.CreatedBy AuthCreatedBy, PSP.CreatedDateTime AuthCreatedDateTime,
    6 PSP.AuthorizationID, PSP.ExtracontractReasonCode, PAS.ServiceTypeCode,
    7 NVL(PSP.ProvNotToExceedRate, 0) ProvOverrideRate,
    8 prov.ShortName ProvShortName, PSP.OverrideReasonCode, PAS.ContractProdClassId
    9 ,prov.ProvParentID ProvParentID, prov.ProvTypeCd ProvTypeCd
    10 FROM tblPatServProv psp, tblProductsAndSvcs pas, tblProv prov, tblUser u, tblGlMonthlyClose GLMC
    11 WHERE GLMC.AUTHORIZATIONID >= 239
    12 AND GLMC.AUTHORIZATIONID < 11039696
    13 AND PSP.AuthorizationID = GLMC.AUTHORIZATIONID
    14 AND PSP.Authorizationid < 11039696
    15 AND (PSP.ExpDt >= to_date('01/03/2000','MM/DD/YYYY') OR PSP.ExpDt IS NULL)
    16 AND PSP.ServiceCode = PAS.ServiceCode(+)
    17 AND prov.ProvID(+) = PSP.ProvID
    18* AND U.UserId(+) = PSP.CreatedBy
    19 /
    Explained.
    Elapsed: 00:00:00.46
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    Plan hash value: 3602678330
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 8503K| 3073M| 91 (2)| 00:00:02 |
    |* 1 | HASH JOIN RIGHT OUTER | | 8503K| 3073M| 91 (2)| 00:00:02 |
    | 2 | TABLE ACCESS FULL | TBLPRODUCTSANDSVCS | 4051 | 209K| 16 (0)| 00:00:01 |
    | 3 | NESTED LOOPS | | 31 | 6200 | 75 (2)| 00:00:01 |
    | 4 | NESTED LOOPS OUTER | | 30 | 5820 | 45 (3)| 00:00:01 |
    |* 5 | HASH JOIN RIGHT OUTER | | 30 | 4950 | 15 (7)| 00:00:01 |
    | 6 | TABLE ACCESS FULL | TBLUSER | 3444 | 58548 | 12 (0)| 00:00:01 |
    |* 7 | TABLE ACCESS FULL | TBLPATSERVPROV | 8301K| 585M| 2 (0)| 00:00:01 |
    | 8 | TABLE ACCESS BY INDEX ROWID| TBLPROV | 1 | 29 | 1 (0)| 00:00:01 |
    |* 9 | INDEX UNIQUE SCAN | PK_TBLPROV | 1 | | 0 (0)| 00:00:01 |
    |* 10 | INDEX UNIQUE SCAN | PK_W_GLMONTHLYCLOSE | 1 | 6 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - access("PSP"."SERVICECODE"="PAS"."SERVICECODE"(+))
    5 - access("U"."USERID"(+)="PSP"."CREATEDBY")
    7 - filter(("PSP"."EXPDT">=TO_DATE('2000-01-03 00:00:00', 'yyyy-mm-dd hh24:mi:ss') OR
    "PSP"."EXPDT" IS NULL) AND "PSP"."AUTHORIZATIONID">=239 AND "PSP"."AUTHORIZATIONID"<11039696)
    9 - access("PROV"."PROVID"(+)="PSP"."PROVID")
    10 - access("PSP"."AUTHORIZATIONID"="GLMC"."AUTHORIZATIONID")
    filter("GLMC"."AUTHORIZATIONID">=239 AND "GLMC"."AUTHORIZATIONID"<11039696)
    28 rows selected.
    Elapsed: 00:00:00.42

    Thanks a lot for your reply.
    Here are the indexes on those tables.
    table --> TBLPATSERVPROV ---> index PK_TBLPATSERVPROV ---> column AUTHORIZATIONID
    table --> TBLPRODUCTSANDSVCS ---> index PK_TBLPRODUCTSANDSVCS ---> column SERVICECODE
    table --> TBLUSER ---> index PK_TBLUSER ---> column USERID
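One low-risk check suggested by the plan above (an assumption, since optimizer-statistics freshness isn't shown in the thread): the full scan of TBLPATSERVPROV is estimated at 8301K rows yet costed at only 2, which often indicates stale statistics. A sketch, where the schema name MYSCHEMA is a placeholder:

```sql
-- Refresh statistics on the driving table, then re-examine the plan:
EXEC DBMS_STATS.GATHER_TABLE_STATS('MYSCHEMA', 'TBLPATSERVPROV', cascade => TRUE);

EXPLAIN PLAN SET STATEMENT_ID = 'ASH2' FOR
SELECT /*+ FIRST_ROWS(30) */ psp.PatientNumber
FROM   tblPatServProv psp
WHERE  psp.AuthorizationID >= 239 AND psp.AuthorizationID < 11039696;

SELECT * FROM TABLE(dbms_xplan.display);
```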

  • FRM-40505  Oracle Error: Unable to perform query(URGENT)

    Hi, I developed a form with a control_block and a table_block (based on a table) on the same canvas.
    Based on the values in control_block, pressing the Find button queries the detail block.
    Control_block ->
    textitem name "payment_type" char type
    text item name "class_code " char type
    push button "find"
    base table: --> payment_terms(termid,payment_type,class_code,other colums)
    table_block is based on above table
    Now I have written a When-Button-Pressed trigger on the Find button:
    declare
        l_search varchar2(200);
    BEGIN
        -- Character values must be wrapped in quotes inside the WHERE string
        l_search := 'payment_type = '''
                 || :control_block.payment_type
                 || ''' AND class_code = '''
                 || :control_block.class_code
                 || '''';
        SET_BLOCK_PROPERTY('table_block', DEFAULT_WHERE, l_search);
        go_block('table_block');
        EXECUTE_QUERY;
    END;
    I am getting
    FRM-40505 Oracle Error: Unable to perform query
    please help..

    You don't need to build the default_where at run time. Just hard-code the WHERE Clause property as:
        column_x = :PARAMETER.X
    But, if for some compelling reason, you MUST do it at run time this should work:
        Set_block_property('MYBLOCK',Default_where,
            'COLUMN_X=:PARAMETER.X');
    Note that there are NO quotes except for first and last. If you get some sort of error when you query, you should actually see :Parameter.X replaced with :1 when you do Help, Display Error.

  • Performance query in sql

      
    This query takes 1 minute to execute. Please suggest whether we can improve its performance further.
      SELECT P.ProgramId
                      ,P.Tier4Id
                    ,7 AS MetricId
                    ,(convert(DATETIME, CONVERT(VARCHAR, Month(s.DATE)) + '/1/' + CONVERT(VARCHAR, Year(s.DATE)))) AS DATE
                    ,CASE
                        WHEN SUM(ISNULL(S.ITD_BCWS_PMB, 0)) <> 0
                            THEN ROUND(SUM(ISNULL(S.ITD_BCWP_PMB, 0)) / SUM(ISNULL(S.ITD_BCWS_PMB, 0)), 2)
                        ELSE NULL
                        END AS Value
                    ,SUM(ISNULL(S.ITD_BCWP_PMB, 0)) AS BCWP
                FROM Staging.SPI_CPI_WBS S
                INNER JOIN Core.Projects PJ ON S.ProjectId = PJ.ProjectId AND PJ.IsActive = 1
                INNER JOIN Core.ProgramFinancials PF ON PF.ProgramId = S.ProgramId AND Year(PF.DATE) = Year(S.DATE) AND PF.IsYearly = 1
                INNER JOIN Core.v_WBS_Tier_Structure_SPICPI P ON S.WBSId = P.WBSId AND P.ProgramId = S.ProgramId
                WHERE S.LSKINdicator = 1 AND S.ProjectId IS NOT NULL
                    /*AP3-820 Check for BCWP > 0*/
                    AND ISNULL(S.ITD_BCWP_PMB, 0) > 0 AND (P.ProgramStatus = 1 OR isnull(PF.Spend, 0) != 0)
                GROUP BY P.ProgramId
                    ,s.DATE
                    ,P.Tier4Id
                HAVING SUM(ISNULL(S.ITD_BCWP_PMB, 0)) > 0
    Smash126

    Hi, I am posting the entire query here; please see if any more optimization can be done.
     UPDATE DW.ProgramScores_T4_P    
     SET     
      SPI=null    
      ,SPI_H=null    
      ,CPI=null    
      ,CPI_h=null    
      , M_SPI = null    
      ,M_SPI_H = null    
      ,M_CPI = null    
      ,M_CPI_H = null    
     where Date  between dateadd(mm,-12,getdate()) and getdate()       
    -- SPI CPI - Programscores    
    MERGE DW.ProgramScores_T4_P DW    
    USING (
    SELECT   isnull(SPI.ProgramId, CPI.ProgramId) AS ProgramId    
      ,isnull(SPI.DATE, CPI.DATE) AS DATE    
         ,isnull(SPI.Tier4Id,CPI.Tier4Id) as Tier4Id     
      ,SPI    
      ,SPI_H    
      ,CPI    
      ,CPI_H    
      ,CASE     
       WHEN isnull(SPI.DATE, CPI.DATE) BETWEEN dateadd(mm, - 12, getdate()) AND getdate()    
        THEN 1    
       ELSE 0    
       END AS Datediff    
      ,Month(isnull(SPI.DATE, CPI.DATE)) AS Month    
      ,Year(isnull(SPI.DATE, CPI.DATE)) AS Year    
    FROM (    
     SELECT SPI.ProgramId    
      ,SPI.Metricid    
      ,Tier4Id    
      ,DATE    
      ,SPI    
      ,SPI_H    
     FROM (    
      SELECT ProgramId    
       ,Tier4Id    
       ,A.MetricId    
       ,DATE    
        ,Value AS SPI
        ,(CASE
         WHEN [Value] IS NULL OR [IsGoalOriented] = (0) OR [IsGoalOriented] IS NULL /*OR MBM.fn_CheckGoalsAvailability(A.MetricId) = 0 */    
          THEN (5)    
         ELSE CASE     
           WHEN [Value] >= [LCLG] AND [Value] <= [UCLG] OR [Value] >= [LCLG] AND [UCLG] IS NULL    
            THEN (2)    
           ELSE CASE     
             WHEN [Value] >= [LCLY] AND [Value] <= [UCLY]    
              THEN (3)    
             ELSE (4)    
             END    
           END    
         END    
        ) AS SPI_H    
      FROM (    
       SELECT P.ProgramId    
          ,P.Tier4Id    
        ,7 AS MetricId    
        ,(convert(DATETIME, CONVERT(VARCHAR, Month(s.DATE)) + '/1/' + CONVERT(VARCHAR, Year(s.DATE)))) AS DATE    
        ,CASE     
         WHEN SUM(ISNULL(S.ITD_BCWS_PMB, 0)) <> 0    
          THEN ROUND(SUM(ISNULL(S.ITD_BCWP_PMB, 0)) / SUM(ISNULL(S.ITD_BCWS_PMB, 0)), 2)    
         ELSE NULL    
         END AS Value    
        ,SUM(ISNULL(S.ITD_BCWP_PMB, 0)) AS BCWP    
       FROM Staging.SPI_CPI_WBS S    
       INNER JOIN Core.Projects PJ ON S.ProjectId = PJ.ProjectId AND PJ.IsActive = 1    
       INNER JOIN Core.ProgramFinancials PF ON PF.ProgramId = S.ProgramId AND Year(PF.DATE) = Year(S.DATE) AND PF.IsYearly = 1    
       INNER JOIN Core.v_WBS_Tier_Structure_SPICPI P ON S.WBSId = P.WBSId AND P.ProgramId = S.ProgramId    
       WHERE S.LSKINdicator = 1 AND S.ProjectId IS NOT NULL    
        /*AP3-820 Check for BCWP > 0*/    
        AND ISNULL(S.ITD_BCWP_PMB, 0) > 0 AND (P.ProgramStatus = 1 OR isnull(PF.Spend, 0) != 0)    
       GROUP BY P.ProgramId    
        ,s.DATE    
        ,P.Tier4Id    
       HAVING SUM(ISNULL(S.ITD_BCWP_PMB, 0)) > 0    
       ) A    
      INNER JOIN MBM.Metrics M ON M.MetricId = A.MetricId AND M.IsActive = 1    
      ) SPI    
     ) SPI    
    LEFT JOIN (    
     SELECT CPI.ProgramId    
           --,WBSId    
      ,Tier4Id    
      --,Tier3Id    
      --,Tier2Id    
      ,CPI.Metricid    
      ,DATE    
      ,CPI    
      ,CPI_H    
     FROM (    
      SELECT ProgramId    
          --,WBSId    
        ,Tier4Id    
        --,Tier3Id    
        --,Tier2Id    
       ,A.MetricId    
       ,DATE    
        ,Value AS CPI
        ,(CASE
         WHEN [Value] IS NULL OR [IsGoalOriented] = (0) OR [IsGoalOriented] IS NULL /*OR MBM.fn_CheckGoalsAvailability(A.MetricId) = 0 */    
          THEN (5)    
         ELSE CASE     
           WHEN [Value] >= [LCLG] AND [Value] <= [UCLG] OR [Value] >= [LCLG] AND [UCLG] IS NULL    
            THEN (2)    
           ELSE CASE     
             WHEN [Value] >= [LCLY] AND [Value] <= [UCLY]              THEN (3)    
             ELSE (4)    
             END    
           END    
         END    
        ) AS CPI_H    
      FROM (    
       SELECT P.ProgramId    
       --,S.WBSId    
        ,P.Tier4Id    
        --,WBS.Tier3Id    
        --,WBS.Tier2Id    
        ,8 AS MetricId    
        ,(convert(DATETIME, CONVERT(VARCHAR, Month(s.DATE)) + '/1/' + CONVERT(VARCHAR, Year(s.DATE)))) AS DATE    
        ,CASE     
         WHEN SUM(ISNULL(S.ITD_ACWP, 0)) <> 0    
          THEN ROUND(SUM(ISNULL(S.ITD_BCWP_PMB, 0)) / SUM(ISNULL(S.ITD_ACWP, 0)), 2)    
         ELSE NULL    
         END AS Value    
        ,SUM(ISNULL(S.ITD_BCWP_PMB, 0)) AS BCWP    
       FROM Staging.SPI_CPI_WBS S    
       INNER JOIN Core.Projects PJ ON S.ProjectId = PJ.ProjectId AND PJ.IsActive = 1    
       --INNER JOIN Core.Programs P ON P.ProgramId = S.ProgramId and P.Isactive=1    
       INNER JOIN Core.ProgramFinancials PF ON PF.ProgramId = S.ProgramId AND Year(PF.DATE) = Year(S.DATE) AND PF.IsYearly = 1    
       --inner join Core.WBS_CAM_TIERS_Structure WBS ON S.WBSId = WBS.WBSId    
       INNER JOIN Core.v_WBS_Tier_Structure_SPICPI P ON S.WBSId = P.WBSId AND P.ProgramId = S.ProgramId    
       WHERE S.LSKINdicator = 1 AND S.ProjectId IS NOT NULL    
        /*AP3-820 Check for BCWP > 0*/    
        AND ISNULL(S.ITD_BCWP_PMB, 0) > 0 AND (P.ProgramStatus = 1 OR isnull(PF.Spend, 0) != 0)    
       GROUP BY P.ProgramId    
        ,s.DATE    
        --,S.WBSId    
        ,P.Tier4Id    
           --,WBS.Tier3Id    
           --,WBS.Tier2Id    
       HAVING SUM(ISNULL(S.ITD_BCWP_PMB, 0)) > 0 /*AP3-688 & AP3-702 Check for BCWP > 0 */    
       ) A    
      INNER JOIN MBM.Metrics M ON M.MetricId = A.MetricId AND M.IsActive = 1    
      ) CPI    
     WHERE CPI IS NOT NULL    
     ) CPI     
      ON CPI.ProgramId = SPI.ProgramId AND CPI.DATE = SPI.DATE AND     
         CPI.Tier4Id = SPI.Tier4Id --AND  CPI.Tier3Id = SPI.Tier3Id AND  CPI.Tier2Id = SPI.Tier2Id    
    )SPICPI     
      on SPICPI.ProgramId = DW.ProgramId and Datediff(dd,SPICPI.Date,DW.Date) = 0  and     
         SPICPI.Tier4Id = DW.Tier4Id --and SPICPI.Tier3Id = DW.Tier3Id and SPICPI.Tier2Id = DW.Tier2Id    
    WHEN MATCHED AND (SPICPI.Datediff = 1) THEN    
    update SET DW.SPI = SPICPI.SPI,    
         DW.SPI_H = SPICPI.SPI_H,    
         DW.CPI = SPICPI.CPI,    
         DW.CPI_H = SPICPI.CPI_H,    
         DW.UpdatedDate = getdate()    
    WHEN NOT MATCHED THEN    
    insert (ProgramId,Tier4Id, Date,SPI,SPI_H,CPI,CPI_H,CreatedDate)    
    values(SPICPI.ProgramId,SPICPI.Tier4Id,SPICPI.Date,SPICPI.SPI,SPICPI.SPI_H, SPICPI.CPI,SPICPI.CPI_H,getdate());    
    Smash126
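    The month-flooring used above, `CONVERT(VARCHAR, Month(s.DATE)) + '/1/' + CONVERT(VARCHAR, Year(s.DATE))`, builds the date through string concatenation. A common alternative (a sketch, assuming a SQL Server version without DATEFROMPARTS) avoids the string round-trip and is cheaper inside a GROUP BY:

    ```sql
    -- Floor a date to the first day of its month using date arithmetic only:
    SELECT DATEADD(month, DATEDIFF(month, 0, s.DATE), 0) AS MonthStart
    FROM   Staging.SPI_CPI_WBS s;
    ```

    Likewise, join conditions such as `Year(PF.DATE) = Year(S.DATE)` and `Datediff(dd, SPICPI.Date, DW.Date) = 0` wrap columns in functions, which prevents index seeks; rewriting them as predicates on the bare columns is usually the bigger win.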

  • ORA-01722: invalid number when performing query

    Hi,
    I am running SQL Developer on a laptop (XP Pro) accessing Oracle Applications
    Product Version     11.5.10.2
    Platform     IBM SP AIX
    OS Version     AIX
    Database     9.2.0.7.0
    Support ID     14460365
    If I run the following query it works fine -
    select
    mtrh.request_number
    ,to_number(mtrh.request_number)
    from
    mtl_txn_request_headers mtrh
    where
    to_number(mtrh.request_number) = 135060
    and mtrh.request_number = '135060' -- works with this line in!!!!
    however if I comment out the last line I get
    An error was encountered performing the requested operation :
    ORA-01722: invalid number
    The field request_number is defined as varchar2(30)
    It seems that there is something strange about the way it handles to_number in where clauses.
    Thanks
    Mick Sulley

    You have an invalid number in request_number. If you add "and mtrh.request_number = '135060'", the result set is reduced to only those rows which have 135060 in the column, and the to_number() will work. Without that predicate, it applies to_number(request_number) to all rows in order to identify the one you want. When it comes across a request_number value which is not a valid number, it reports an error.
    <preach>
    If request_number is a number then it should be stored in a number column. If it isn't, don't try to convert it to a number.
    </preach>
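    The behaviour is easy to reproduce. A minimal sketch (the table and values are hypothetical):

    ```sql
    CREATE TABLE t_demo (request_number VARCHAR2(30));
    INSERT INTO t_demo VALUES ('135060');
    INSERT INTO t_demo VALUES ('ABC');  -- the poison row

    -- May raise ORA-01722: Oracle is free to apply TO_NUMBER to 'ABC'
    -- before any other predicate filters that row out:
    SELECT * FROM t_demo WHERE TO_NUMBER(request_number) = 135060;

    -- Safe: compare in the column's own datatype
    SELECT * FROM t_demo WHERE request_number = '135060';
    ```

    Note that whether the error actually fires depends on predicate evaluation order, which the optimizer may change between plans; that is exactly why the extra string predicate "fixes" it.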

  • Performance query on 0IC_C03 inventory cube

    Hello,
    I am currently facing performance problems with this cube. The query is on material groups, so the number of rows returned is not too high.
    The cube is compressed. Could aggregates be a solution, or do they not work well on this cube because of the non-cumulative key figure?
    Does anyone have any hints on speeding this cube up? (The only tip I see in the collective note is to always compress.)
    Best regards
    Jørgen

    Hi Ruud,
    Once compression with marker update is done, the latest balances are created automatically for inventory cube 0IC_C03.
    Historic movements are only required to show the stock status for a historic date (e.g. 02-01-2008).
    If users are not interested in checking the stock status from 3 years ago, old data can be deleted selectively from the cube using selective deletion.
    Go through the doc: [How To… Handle Inventory Management Scenarios in BW|https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328?overridelayout=true]
    Srini

  • Query Performance - Query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO so I can't aggregate it. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries - for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

  • Oracle performance query

    Hi folks,
    A question about Oracle performance.
    Which query would be faster: the join of the tables, or the subquery?
    ex : select A.* from A a, B b where a.col1 = b.col1;
    (OR)
    select * from A where col1 in (select col1 from B);
    Thanks
    Shekar.

    The queries are not equivalent!
    SQL> select * from dept where deptno in (select deptno from emp)
        DEPTNO DNAME          LOC
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            10 ACCOUNTING     NEW YORK
    SQL> select dept.* from dept,emp where dept.deptno=emp.deptno;
        DEPTNO DNAME          LOC
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            30 SALES          CHICAGO
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            30 SALES          CHICAGO
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            10 ACCOUNTING     NEW YORK
            30 SALES          CHICAGO
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            20 RESEARCH       DALLAS
            10 ACCOUNTING     NEW YORK
    14 rows selected.
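    As the output above shows, the join repeats each department once per matching employee, while the IN form returns each department once. If the intent is the semi-join ("departments that have at least one employee"), two equivalent forms, sketched against the standard DEPT/EMP demo tables:

    ```sql
    -- Semi-join: each matching department exactly once
    SELECT d.*
    FROM   dept d
    WHERE  EXISTS (SELECT 1 FROM emp e WHERE e.deptno = d.deptno);

    -- Join form made equivalent by deduplicating
    SELECT DISTINCT d.*
    FROM   dept d JOIN emp e ON e.deptno = d.deptno;
    ```

    Modern optimizers usually transform IN and EXISTS into the same semi-join plan, so correctness, not speed, is the main reason to choose between these forms.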

  • Query Performance - Query not using proper plan

    Hello,
    I am experiencing a performance issue with queries that span multiple partitions/tablespaces. Specifically, a query run via a stored procedure does not use the indexes; instead, full table scans are done, resulting in the query taking 30+ minutes to complete. The same query, when removed from the SP, returns results in milliseconds.
    In an attempt to correct the issue, table stats were updated, and the stored procedure was re-compiled, as were packages that may have been affected by the table stats update. In addition, the database was bounced (shut down, restarted), but no noticeable performance increase was achieved.
    I'm looking for any insight on how to correct this issue.
    I can provide additional information if required.
    Thanks,
    Scott.

    Post the query, the stored procedure, and the table structure. My first guess here is that the stored procedure is binding an incorrect datatype, but I need to see the requested info to be certain.
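    The datatype-binding guess refers to implicit conversion: when a bind variable's type does not match the column's, Oracle converts the column, which disables any index on it. A hypothetical illustration (the table and bind names are made up):

    ```sql
    -- order_ref is VARCHAR2 but :n is bound as NUMBER; Oracle rewrites this as
    -- TO_NUMBER(order_ref) = :n, so the index on order_ref cannot be used:
    SELECT * FROM orders WHERE order_ref = :n;

    -- Convert the bind instead of the column and the index seek comes back:
    SELECT * FROM orders WHERE order_ref = TO_CHAR(:n);
    ```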

  • Performance :Query is taking Longtime

    Hi,
    A query on the cube jumps to a query on the ODS, and the query on the ODS takes a very long time. How can we optimize/improve this?
    Rgds,
    C.V.
    Message was edited by:
            C.V. P

    Hi,
    Well, I am sure you are aware that Data Stores are not optimized for reporting.
    The Data Store active table can become very large, and thus reporting on that table means reporting on a HUGE amount of data.
    The common solution is to create additional indexes on the ODS table to speed up reporting performance. This can be done in 3.x from the ODS maintenance screen.
    Also make sure the DB statistics are active (put the ODS active table in DB20).
    Look at this thread for the options you have:
    ODS Performance?
    Please assign points if useful,
    Gili

  • Delivery performance query

    Hi
    We have the SD_C04 cube and created a query on it (Delivery Performance). The cube extracted the data from flat files (2lis_12_vcitm and 2lis_12_vcscl). While running the query I noticed the following issues:
    1.   If multiple line items for a delivery document exist, the sales document and item comes up "#" for the additional line items.
    2.   The part numbers can be different within the same delivery with multiple line items, and the delivery quantity will appear blank for the additional line items.
    3.  If multiple line items in a delivery with the same part number, but different goods issue data - the query only reports a quantity for 1 line item, yet calculates a date for both line items.
    4.  The Delivery quantity column does not seem to be calculating correctly. For example, the CSV file shows three lines for the same part but different goods issue dates for lines 1 & 2. Lines 1 & 2 were scheduled on 5/28, but the delivery quantity is only represented with the information from 1 line item. Line 3's delivery appears in the query on a second line with a blank quantity.
    Analysis: I have checked the transfer rules (one-to-one) and the update rules. Could you tell me where I have gone wrong?
    Thanks for your advice

    Raj,
    One solution is to insert the query onto four sheets - sheet 1 looking at Q1, sheet 2 looking at Q2 etc. etc.
    You make the selections on Quarter in the Free Characteristic part rather than at query run time - that way you can make different selections on the different sheet.
    Ensure the Return to Global Refresh is off on all queries.
    Save your workbook.
    Regards
    Gill

  • Increase performance query more than 10 millions records significantly

    The story is:
    Every day there are more than 10 million records, with the data in text-file format (.csv (comma separated value) extension, or other formats).
    An example text file name is transaction.csv
    Phone_Number
    6281381789999
    658889999888
    618887897
    etc... more than 10 million rows
    From transaction.csv then split to 3 RAM (memory) tables :
    1st. table nation (nation_id, nation_desc)
    2nd. table operator(operator_id, operator_desc)
    3rd. table area(area_id, area_desc)
    These 3 RAM tables are then queried to produce the physical table EXT_TRANSACTION (on hard disk).
    The resulting physical external Oracle table, EXT_TRANSACTION, has these columns:
    Phone_Number Nation_Desc Operator_Desc Area_Desc
    ======================================
    6281381789999 INA SMP SBY
    So : Textfiles (transaction.csv) --> RAM tables --> Oracle tables (EXT_TRANSACTION)
    The first 2 digits is nation_id, next 4 digits is operator_id, and next 2 digits is area_id.
    I have heard that, to increase performance significantly, there is a technique to create tables in memory (RAM) rather than on the hard disk.
    Any advice would be much appreciated.
    Thanks.
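    The digit-position rule above (2-digit nation, 4-digit operator, 2-digit area) can be expressed directly in SQL, so no intermediate in-memory tables are needed for the split itself. A sketch, assuming transaction.csv is exposed through an Oracle external table with a hypothetical name ext_transaction_src (lookup-table joins omitted):

    ```sql
    SELECT phone_number,
           SUBSTR(phone_number, 1, 2) AS nation_id,    -- first 2 digits
           SUBSTR(phone_number, 3, 4) AS operator_id,  -- next 4 digits
           SUBSTR(phone_number, 7, 2) AS area_id       -- next 2 digits
    FROM   ext_transaction_src;
    ```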

    Oracle uses sophisticated algorithms for various memory caches, including buffering data in memory. It is described in Oracle® Database Concepts.
    You can tell Oracle via the CACHE table clause to keep blocks for that table in the buffer cache (refer to the URL for the technical details of how this is done).
    However, this means there is now less of the buffer cache available to cache other frequently used data. So this approach could make accessing one table a bit faster at the expense of making access to other tables slower.
    This is a balancing act - how much can one "interfere" with the cache before affecting and degrading performance? Oracle also recommends that this type of "forced" caching be used only for small lookup tables. It is not a good idea to use it on large tables.
    As for your problem - why do you assume that keeping data in memory will make processing faster? That is a very limited approach. Memory is a resource that is in high demand, and it is very finite. It needs to be spent carefully to get optimal performance.
    The buffer cache is designed to cache "hot" (often accessed) data blocks. So in all likelihood, telling Oracle to cache a table you use a lot is not going to make it faster. Oracle is already caching the hot data blocks as best possible.
    You also need to consider what the actual performance problem is. If your process needs to crunch tons of data, it is going to be slow. Throwing more memory will be treating the symptom - not the actual problem that tons of data are being processed.
    So you need to define the actual problem. Perhaps it is not slow I/O - there could be a user-defined PL/SQL function used as part of the ELT process that causes the problem. Parallel processing could be used to do more I/O at the same time (assuming the I/O subsystem has the capacity). The process can perhaps be designed better - instead of making multiple passes through a data set, crunching the same data (but different columns) again and again, do it in a single pass.
    10 million rows are nothing in terms of what Oracle can process on even a small server today. I have dual-CPU AMD servers doing over 2,000 inserts per second in a single process, and a Perl program making up to 1,000 PL/SQL procedure calls per second. Oracle is extremely capable - as is today's hardware and software. But that needs a sound software engineering approach. And that approach says that we first need to fully understand the problem before we can solve it, treating the cause and not the symptom.
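    The CACHE clause mentioned above is a one-line table attribute. A minimal sketch for the three small lookup tables, which is exactly the case the advice says it suits:

    ```sql
    -- Keep the blocks of these small lookup tables at the most-recently-used
    -- end of the buffer cache when they are read via full table scans:
    ALTER TABLE nation   CACHE;
    ALTER TABLE operator CACHE;
    ALTER TABLE area     CACHE;
    ```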

  • RMAN clone query

    Hello DBA's
    I'm using a 9i Oracle Solaris box.
    I have a task to clone a TEST environment into an existing TRAIN environment using RMAN.
    I edited the TEST spfile and ran the RMAN duplicate command - 'duplicate target database to train;' - without deleting the existing datafiles on TRAIN.
    RMAN automatically refreshed the datafiles with TEST's datafiles.
    My questions are:
    1) Is this acceptable, and will I land in problems later on, OR should I have dropped the TRAIN database completely before doing a new clone?
    2) In 9i (Solaris) there is no easy way to drop a database; should I have done a shutdown and dropped TRAIN's datafiles manually?
    Your advice/suggestions would be much appreciated!

    It is always better to remove the "TRAIN" database before cloning "TEST" over to "TRAIN".
    Consider the scenario :
    Day 1: "TEST" has 5 datafiles in /u01. /u01 has filesystem space of 10GB and the 5 datafiles take up 6GB
    Day 2: "TEST" is cloned to "TRAIN" for the first time. "TRAIN" is on /u02, 10GB, 5 files taking 6GB
    Day 6: "TRAIN" is very busily used with a lot more data inserted. The 5 files in /u02 grow to 9GB. You add a 6th datafile of 1GB in /u03
    Day 12: Your training is over and you need to clone "TEST" to "TRAIN" again for the next set of trainees
    Day 13: "TEST" is cloned to "TRAIN".
    As has been pointed out, the clone would actually overwrite files. However, you'll find 5 files of 9GB being replaced by 5 files of 6GB. And a 6th file of 1GB "hanging loose" not being owned by anybody.
    Wouldn't that cause confusion on, say, Day 19 ?
    Here's another scenario :
    Day 1: "TEST" has 5 datafiles in /u01. /u01 has filesystem space of 10GB and the 5 datafiles take up 6GB
    Day 2: "TEST" is cloned to "TRAIN" for the first time. "TRAIN" is on /u02, 10GB, 5 files taking 6GB
    Day 6: "TRAIN" is used for DBA training. The trainee DBAs "reorganise" the database, using export/import, down to 4 datafiles taking only 4GB (because data has been deleted)
    Day 12 : training is over
    Day 13 : "TEST" is cloned to "TRAIN"
    What do you see happening now ?
    Here's a third scenario :
    What about DBAs renaming datafiles (not just locations, but file names) during training?
    Here's a fourth scenario :
    Between Day 2 and Day 13, the "TEST" server storage is expanded and /u01 is 20GB. Also, "TEST" grows to 16GB in size. But the "TRAIN" server's /u02 is still 10GB.
    What happens when you re-clone on Day 13 ?
    At each refresh clone, you must consider that
    a. file numbers may have changed
    b. file names may have changed
    c. file sizes may have changed
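    Since 9i lacks a DROP DATABASE command, the cleanup before each re-clone is manual. A sketch of one common sequence (confirm the file list from TRAIN's own dictionary before deleting anything at the OS level):

    ```sql
    -- On TRAIN, while it is still open, capture the files it actually owns:
    SELECT name   FROM v$datafile
    UNION ALL
    SELECT member FROM v$logfile
    UNION ALL
    SELECT name   FROM v$controlfile;

    SHUTDOWN IMMEDIATE
    -- Then remove exactly those files at the OS level before running
    -- RMAN> DUPLICATE TARGET DATABASE TO train;
    ```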
