Improve a query's performance

Hello
I need a query that returns one value when there are rows that meet a certain condition and another value when there are rows that meet a different condition. The simplest way I could think of was to use two queries, like this:
1. select count(*)
   from T1
   where (A = a OR B = b) and C > 0

2. select count(*)
   from T2
   LEFT JOIN T1 ON T1.X = T2.X
   where (T1.A = a OR T1.B = b) and T2.A is not null
However, this approach turns out to be very slow when called numerous times from the application, so I thought of using the following single query instead:
select distinct
       case when (select count(*)
                  from T1
                  where (A = a or B = b) and C > 0) > 0
            then 1
            else 0
       end ARE_THERE_ROWS_THAT_MEET_CONDITION_A,
       case when (select count(*)
                  from T2
                  LEFT JOIN T1 ON T1.X = T2.X
                  where (T1.A = a OR T1.B = b) and T2.A is not null) > 0
            then 1
            else 0
       end ARE_THERE_ROWS_THAT_MEET_CONDITION_B
from T1;
This way I can connect to the DB only once (instead of twice), but running it this way did not improve the performance dramatically.
Is there a better, more efficient, and simpler way (a single query preferred) to perform this query?
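For what it's worth, a minimal sketch of a common alternative, assuming a and b arrive as bind variables: EXISTS lets the database stop at the first matching row, and selecting from dual avoids the DISTINCT over all of T1, so both flags come back in a single round trip:

-- Hedged sketch, not the original code; :a and :b are assumed bind variables.
SELECT CASE WHEN EXISTS (SELECT 1 FROM T1
                         WHERE (A = :a OR B = :b) AND C > 0)
            THEN 1 ELSE 0 END AS are_there_rows_that_meet_condition_a,
       CASE WHEN EXISTS (SELECT 1 FROM T2
                         LEFT JOIN T1 ON T1.X = T2.X
                         WHERE (T1.A = :a OR T1.B = :b) AND T2.A IS NOT NULL)
            THEN 1 ELSE 0 END AS are_there_rows_that_meet_condition_b
FROM dual;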

Hi,
You said:
>
1. select count(*)
from T1
where (A = a OR B = b) and C > 0
However, this approach turns out to be very slow when called numerous times from the application, so I thought of using the following single query instead:
>
It's slow because:
1) you probably don't read just one row per call (you probably read the FULL table) -> post the explain plan.
2) you call this query "numerous times".
What I suggest as a starting point (I guess you're calling a function that takes a and b as parameters):
- Give us the cardinality of the columns A, B and C (how many distinct values do you have versus the complete volume of T1)?
- Tell us which indexes you have on T1.
You can run:
SELECT table_name, num_rows, blocks, empty_blocks, avg_space, avg_row_len,
       last_analyzed
  FROM user_tables
 WHERE table_name IN ('T1', 'T2');

SELECT index_name, index_type, table_name, uniqueness, blevel, leaf_blocks,
       distinct_keys, clustering_factor, last_analyzed
  FROM user_indexes
 WHERE table_name IN ('T1', 'T2');
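To produce the execution plan asked for above, something along these lines works from SQL*Plus (a sketch; :a and :b stand in for the real parameter values):

EXPLAIN PLAN FOR
SELECT COUNT(*) FROM T1 WHERE (A = :a OR B = :b) AND C > 0;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);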

Similar Messages

  • Improve the query's performance

    Hi Guys,
I'm an Oracle beginner and I'm running into the following issue:
I have to run a query with a join between two tables, let's say A and B.
Table A has 10 million records (800 MB) and is indexed on two columns (a1 and a2).
Table B has 100,000 records (64 MB) and no index.
    The query is:
select A.a1,
       A.a2,
       B.b1
from   A, B
where  A.a1 >= B.b1 and A.a2 <= B.b1
The problem is that it is really slow (more than 1 day), although the execution plan uses NESTED LOOPS.
How can I improve the performance of this query?
    Please help me,
    GF

    Hi Guys,
First of all, thank you very much for your prompt response.
devmiral, your advice could help me. I have just realized that the statistics have never been updated.
I'll try updating them and let you know.
Eric, here is a quick example:
Table A            Table B
a2   a1   a3       b1
10   14   1        10
15   16   2        11
18   19   3        15
20   21   4
    The query is:
select A.a2,
       A.a1,
       A.a3,
       B.b1
from   A, B
where  A.a1 >= B.b1 and A.a2 <= B.b1
The expected output should be:
a2   a1   a3   b1
10   14   1    10
10   14   1    11
15   16   2    15
    Thank you
    GF
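Since the statistics have apparently never been gathered, refreshing them is the natural first step; a minimal sketch, assuming the tables live in the current schema:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'A');
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'B');
END;
/

With fresh statistics the optimizer can make a sounder choice of join method and access path for this non-equijoin.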

How to improve query performance at the report level and designer level

How can I improve query performance at the report level and at the designer level?
Please let me know in detail.

First, it all depends on the design of the database, the universe and the report.
At the universe level, you have to check your contexts very carefully to get the optimal performance out of the universe, and keep your joins on key fields; that will give you the best performance.
At the report level, try to make the reports as dynamic as you can (parameters and so on),
and when you create a parameter, try to match it with a key field in the database.
good luck
    Amr

  • Improving a query performance

    Hello gurus,
We use RSRT and ST03 (expert mode) to check whether a query is running slowly. What exactly do we check there, and how?
Can anyone please explain how it's done?
    Thanks in advance
    S N

    Hi SN,
You need to check the DB time taken by the queries and the summarization ratio of the queries. As a rule of thumb, if the summarization ratio is > 10 and the database time is > 30%, then creating an aggregate will improve the query's performance.
By summarization ratio I mean the ratio between the number of records read from the cube and the number of records displayed in the report; for example, reading 1,000,000 records to display 50,000 gives a ratio of 20.
    Thx,
    Soumya

  • How to improve the query performance

    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
;WITH Dates AS
(
SELECT (@StartDate) DateValue
UNION ALL
SELECT DateValue + 1
FROM Dates
WHERE DateValue + 1 <= @EndDate
)
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    -------WHERE TSK.ProjectID = @Project-----
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
hi..
My SP is as above. I connected this SP to a dataset in an SSRS report. As per my logic, a Portfolio contains many Programs and a Program contains many Projects.
When I selected the ALL value for the Program and Project parameters, I was unable to get any output,
but when I select values for all 3 parameters I do get output. I set default values for the parameters as well.
So I commented out the where condition in the SP, as shown above:
--------where TSK.ProjectID=@Project-------------
Now I get output when selecting the ALL value for the parameters.
But the issue now is performance: it takes 10 seconds to retrieve a single project when I execute the SP.
How can I create an index on a temp table in this SP, and how can I improve the query performance?
Please help.
Thanks in advance.
lucky
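On the indexing question specifically: temp tables can be indexed like permanent tables once they exist; a sketch, assuming (as the joins in the SP suggest) seeks on WorkDate and on (WorkDate, ResourceID):

-- Hypothetical: build the indexes right after the temp tables are populated.
CREATE CLUSTERED INDEX IDX_Dates ON #Dates (WorkDate);
CREATE NONCLUSTERED INDEX IDX_DailyTasks ON #DailyTasks (WorkDate, ResourceID);

Catch-all predicates such as (@Project = -1 OR PRJ.UID = @Project) also tend to block index seeks; appending OPTION (RECOMPILE) to the final SELECT, so the optimizer sees the runtime parameter values, is a common mitigation.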

Didn't I provide you with a solution in the other thread?
    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
;WITH Dates AS
(
SELECT (@StartDate) DateValue
UNION ALL
SELECT DateValue + 1
FROM Dates
WHERE DateValue + 1 <= @EndDate
)
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    WHERE (TSK.ProjectID = @Project OR @Project = -1)
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
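A quick way to compare the two behaviours is to call the procedure once with the -1 catch-alls and once with explicit values (parameter values hypothetical):

EXEC [SPNAME] @Portfolio = -1, @Program = -1, @Project = 42;
EXEC [SPNAME] @Portfolio = 1, @Program = 2, @Project = 42;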

  • How to improve query & loading performance.

    Hi All,
    How to improve query & loading performance.
    Thanks in advance.
    Rgrds
    shoba

    Hi Shoba
There are a lot of things you can do to improve query and loading performance.
Please refer to OSS note 557870: "Frequently asked questions on query performance".
Also refer to these weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
These are the OSS note's FAQs on query performance:
    1. What kind of tools are available to monitor the overall Query Performance?
    1. BW Statistics
    2. BW Workload Analysis in ST03N (Use Export Mode!)
    3. Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
    RSA1, choose Tools -> BW statistics for InfoCubes
    (Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools are available to analyze a specific query in detail?
    1. Transaction RSRT
    2. Transaction RSRTRACE
    4. Do I have an overall query performance problem?
i. Use ST03N -> BW System load values to recognize the problem. Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a high number for all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values
    5. What can I do if the database proportion is high for all queries?
    Check:
1. If the database statistics strategy is set up properly for your DB platform (above all for the BW-specific tables)
    2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
    3. If Buffers, I/O, CPU, memory on the database server are exhausted?
    4. If Cube compression is used regularly
    5. If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1. If the CPUs on the application server are exhausted
    2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1. Again you can use ST03N -> BW System Load
    2. Depending on the time frame you select, you get historical data or current data.
    3. To get to a specific query you need to drill down using the InfoCube name
    4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N
    values for a specific query?
    (Use Details to get the runtime segments)
    1. High Database Runtime
    2. High OLAP Runtime
    3. High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
2. Check if the database statistics are up to date for the Cube/Aggregate; use TX RSRV (use the database check for statistics and indexes)
    3. Check if the read mode of the query is unfavourable - Recommended (H)
    11. What can I do if a query has a high OLAP runtime?
    1. Check if a high number of Cells transferred to the OLAP (use "All data" to get value "No. of Cells")
    2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
    3. Check if a user exit Usage is involved in the OLAP runtime?
    4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    5. Check if a proper index on the inclusion table exist
    12. What can I do if a query has a high frontend runtime?
    1. Check if a very high number of cells and formatting are transferred to the Frontend (use "All data" to get value "No. of Cells") which cause high network and frontend (processing) runtime.
    2. Check if frontend PC are within the recommendation (RAM, CPU MHz)
    3. Check if the bandwidth for WAN connection is sufficient
    and the some threads:
    how can i increse query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    may be helpful
    Regards
    C.S.Ramesh
    [email protected]

Performance improvement of a query

Hi gurus,
Can anyone help me improve the performance of this query?
    OPEN CURSOR WITH HOLD s_cursor FOR
            SELECT a~vbeln
                   a~posnr
                   b~vdatu
                   b~vsnmr_v
                   a~abgru
                   a~kwmeng
                   a~matnr
                   a~bedae
                   a~lgort
                   a~sernr
                   a~vrkme
                   a~werks
                   c~edatu
                   a~aedat
                   a~erdat
        FROM  ( vbap AS a
                INNER JOIN vbak AS b
                  ON  a~vbeln = b~vbeln
                  AND b~vbtyp = 'W'                      "C0006-R2
                INNER JOIN vbep AS c
                  ON  a~vbeln = c~vbeln
                  AND a~posnr = c~posnr
                  AND c~etenr = '1' )
            WHERE  a~vbeln IN
                 ( SELECT objectid
                   FROM   cdhdr
                   WHERE  objectclas = 'VERKBELEG'
                  AND    tcode      = 'VA02'
                   AND    udate     IN l_r_aedat )
            OR     a~erdat IN l_r_aedat.
          ENDIF.                          "Full or Delta ?
        ENDIF.                             "First data package ?
* Fetch records into e_t_data
          FETCH NEXT CURSOR s_cursor
                     INTO CORRESPONDING FIELDS
                     OF TABLE l_t_data
                     PACKAGE SIZE s_s_if-maxsize.
        IF sy-subrc <> 0.
          CLOSE CURSOR s_cursor.
Can I make use of FOR ALL ENTRIES, and will it improve my query's performance? This query is used to extract data to a DataSource in BW.
Thanks in advance.
rahul.

    >FROM cdhdr
    >WHERE objectclas = 'VERKBELEG'
    >* AND tcode = 'VA02'
    >AND udate IN l_r_aedat )
    >OR a~erdat IN l_r_aedat.
I guess now that table cdhdr is out of the cluster it will be easier...
We used to calculate the start date stamp as year - 1990 (take the last 2 digits) + timestamp + 000000 as a start position ("161121145259" as date/time + 0000 for fractions of seconds). That gave us the document type and a time span to go looking for document numbers.
This would then give us access to either the document itself or cdpos with the primary key minus the last field...
Maybe this technique can still be of help to you.
You should at least have the full key for the sales document. You should also be able to tell from cdpos whether the scheduling (VBEP) has changed.
If it helps, in your FROM clause I would use the following order:
    1) VBEP
    2) VBAP
    3) VBAK
    Your where clause looks fine
    Enjoy

  • Important!! Improve the life and performance of the battery.

    Reduce the operating temperature and increase battery life
    The battery in your notebook PC is designed to provide the necessary amount of energy for the processor while maintaining HP high safety standards. As a result, the battery may not charge or may stop providing power to the notebook when the battery temperature exceeds the specified, design safety level.
If the battery life appears shorter than normal, the battery stops charging before it is 99%-100% full, and the battery appears warmer than usual, then the battery has most likely reached its designed "no charge" safety state. The battery will not charge again until the temperature condition is corrected.
    Try one of the following methods to correct the battery temperature:
When charging the battery, do not use applications that require large amounts of system resources, such as graphics- or memory-intensive applications, or heavy, extended hard drive usage.
    Turn off your notebook and remove the battery to allow it to return to a safe operating temperature.
    Make sure the notebook PC is operating on a hard surface. Using the Notebook PC on a bed or sofa may block the vents causing the notebook PC to heat up and shut down.
    By taking these steps, the battery will return to its normal operating temperature range and continue to charge and discharge as designed.
    Calibrating the battery while PC not in use
Recalibrating the battery requires one cycle of a complete charge and a complete discharge. To recalibrate the battery while the PC is not in use, complete the following steps.
    The recalibration may take 1-5 hours depending on the age of the battery and the configuration of the notebook PC you own. The PC should not be used while you perform the following steps. Completing all the following steps will also calibrate the battery so that the power meter readings are accurate.
    Shut down the notebook PC
    Connect the AC Adapter to the notebook PC and to an electrical socket.
    Charge the Notebook PC until the Battery Charge light is Green. This indicates the battery is completely charged.
    Press and release the Power Button to start the computer.
    Press the F8 key several times when the HP Logo displays.
    When the Windows Advanced Startup Menu displays, select the Startup in Safe Mode option.
    Remove the AC power adapter from the notebook PC.
    Allow the battery to discharge completely until the notebook PC turns off.
    The battery is now calibrated and the battery level reading on the power meter is now accurate.
If you are not using the notebook regularly, unplug the AC adapter and shut down the notebook. Following these practices will improve the life and performance of the battery. Here is a quick list of do's and don'ts for the care of your Li-Ion batteries:
    Do's
    When you receive a new Notebook or Tablet PC, leave the battery to fully charge overnight.
    Condition a new battery by using it until it is fully discharged, and then re-charge it fully. Doing this once a month will help to accurately calibrate your battery.
    Always ensure the battery is recharged as soon as possible after it becomes fully discharged. A battery will be permanently damaged if left for an extended length of time in a fully discharged state.
    Remember that a Lithium-Ion battery will slowly deteriorate; a new battery will always perform better than one that is 6-months old.
    Remember that the battery half-life is rated for a certain total number of charge/discharge cycles (see your User Manual or Quick Start Guide for the rating). For example, a battery that is rated for 3 hours and 500 charge/discharge cycles, will still be considered as within specification, even if it only lasts for 1 hour 45 minutes after 500 charge/discharge cycles.
    Heat is the worst enemy of a battery. Allow plenty of air to circulate around the Notebook/Tablet PC, so that the battery is kept as cool as possible when charging and also when in use. If provided, use the integrated 'legs' under the Notebook to raise the notebook and improve air circulation.
    Remove the battery if storing for several months (the battery should be at approximately 50% charge or higher).
    If you use a NoteBus or if charging your Notebooks or Tablet PCs in a confined space, allow for adequate ventilation in order to keep the batteries as cool as possible.
    Don'ts
    Do Not - Expose the battery to excessive heat or cold (i.e. outside the range of 10-35 degrees Centigrade ambient).
    Do Not - Store the battery in a fully charged state (store batteries with about 50% charge).
    Do Not - Allow a nearly flat battery to be unused for more than a month or so. The battery will slowly discharge until it becomes fully discharged and this will permanently damage the battery cells.
    Do Not - Charge your Notebook/Tablet PC inside a carry case - the battery may overheat.
    Do Not - Charge your Notebook/Tablet PC when stacked on top of each other - the battery may overheat.
    Remember: Your battery is slowly degrading all the time, even if it is not used. Keeping your battery as cool as possible will slow down this degradation considerably.
    For more information please visit the following links:
    How to Improve the Performance of the Battery
    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01297640&cc=us&lc=en&dlc=en
    10 Tips to make your Laptop Battery last longer
    http://labnol.blogspot.com/2006/03/10-tips-to-make-your-laptop-battery.html
    Disclaimer: By clicking on the link above, you will be leaving HP.com to visit a web site that is not maintained by HP and where the HP privacy policy does not apply. This link is provided to you for convenience and does not serve as an endorsement by HP of any information or contacts that you may find on this non-HP site.

    I hope the above article will help you guys..

  • Asset query execution performance after upgrade from 4.6C to ECC 6.0+EHP4

    Hi,guys
I have encountered a weird problem with asset query execution performance after upgrading to ECC 6.0.
Our client migrated their SAP system from 4.6C to ECC 6.0. We tested all transaction codes and the related standard reports and queries.
Everything works normally except this asset depreciation query report. It is built on the ANLP, ANLZ, ANLA, ANLB and ANLC tables; there is also some ABAP code for an additional field.
This report took about 6 minutes to execute in the 4.6C system; however, it takes 25 minutes in ECC 6.0 with the same selection parameters.
At first I tried to find differences in the table indexes and structures between 4.6C and ECC 6.0, but there are none.
I am wondering why the other query reports run normally but this one runs so long (even producing timeout dump messages), although we did not make any changes to it.
    your reply is very appreciated
    Regards
    Brian

    Thanks for your replies.
I checked these notes; unfortunately they describe a situation different from ours.
Our situation is that all standard asset reports and queries (SQ01) run normally except this query report.
I executed SE30 for this query (SQ01) on both 4.6C and ECC 6.0.
I found differences in the select sequence logic, even though it is the same query without any changes.
I list them here for your reference.
    4.6C
    AQA0FI==========S2============
    Open Cursor ANLP                                    38,702  39,329,356  = 39,329,356      34.6     AQA0FI==========S2============   DB     Opens
    Fetch ANLP                                         292,177  30,378,351  = 30,378,351      26.7    26.7  AQA0FI==========S2============   DB     OpenS
    Select Single ANLC                                  15,012  19,965,172  = 19,965,172      17.5    17.5  AQA0FI==========S2============   DB     OpenS
    Select Single ANLA                                  13,721  11,754,305  = 11,754,305      10.3    10.3  AQA0FI==========S2============   DB     OpenS
    Select Single ANLZ                                   3,753   3,259,308  =  3,259,308       2.9     2.9  AQA0FI==========S2============   DB     OpenS
    Select Single ANLB                                   3,753   3,069,119  =  3,069,119       2.7     2.7  AQA0FI==========S2============   DB     OpenS
    ECC 6.0
    Perform FUNKTION_AUSFUEHREN     2     358,620,931          355
    Perform COMMAND_QSUB     1     358,620,062          68
    Call Func. RSAQ_SUBMIT_QUERY_REPORT     1     358,569,656          88
    Program AQIWFI==========S2============     2     358,558,488          1,350
    Select Single ANLA     160,306     75,576,052     =     75,576,052
    Open Cursor ANLP     71,136     42,096,314     =     42,096,314
    Select Single ANLC     71,134     38,799,393     =     38,799,393
    Select Single ANLB     61,888     26,007,721     =     26,007,721
    Select Single ANLZ     61,888     24,072,111     =     24,072,111
    Fetch ANLP     234,524     13,510,646     =     13,510,646
    Close Cursor ANLP     71,136     2,017,654     =     2,017,654
We can see that 4.6C first opens the cursor on ANLP and fetches ANLP, then selects from ANLC, ANLA, ANLZ and ANLB.
But ECC 6.0 first selects ANLA, then opens the cursor on ANLP, then selects ANLC, ANLB and ANLZ, and only at the end fetches ANLP.
Probably this is the real reason why it runs so long in ECC 6.0.
Were there any changes to the query selection logic (table join handling) in ECC 6.0?

  • HT1651 how can i improve my macbook's performance without installing memory

    how can i improve my macbook's performance without installing memory

More RAM and a bigger, faster hard drive will help, and maybe a better graphics card as well, since 10.5 uses the video hardware much harder.
Go to the Apple icon at top left > About This Mac.
Then click More Info > Hardware and report everything up to *but not including the serial number*...
    Hardware Overview:
    Machine Name: Power Mac G5 Quad
    Machine Model: PowerMac11,2
    CPU Type: PowerPC G5 (1.1)
    Number Of CPUs: 4
    CPU Speed: 2.5 GHz
    L2 Cache (per CPU): 1 MB
    Memory: 10 GB
    Bus Speed: 1.25 GHz
    Boot ROM Version: 5.2.7f1
    Then click on More Info>Hardware>Graphics/Displays and report like this...
    NVIDIA GeForce 7800GT:
      Chipset Model:          GeForce 7800GT
      Type:          Display
      Bus:          PCI
      Slot:          SLOT-1
      VRAM (Total):          256 MB
      Vendor:          nVIDIA (0x10de)
      Device ID:          0x0092
      Revision ID:          0x00a1
      ROM Revision:          2152.2
      Displays:
    VGA Display:
      Resolution:          1920 x 1080 @ 60 Hz
      Depth:          32-bit Color
      Core Image:          Supported
      Main Display:          Yes
      Mirror:          Off
      Online:          Yes
      Quartz Extreme:          Supported
    Display:
      Status:          No display connected

  • How can i improve this query.

Hi guys, I am a beginner and just wanted some info: how can I improve this query?
select *
from tableA A, viewB B
where A.key = B.key
and a.criteria1 = '111'
and a.criteria2 = some_function(a.key)
One more thing: should the function be on the left side of the equals sign?
Will a join make it better, or is something more than that needed?

952936 wrote:
> Hi guys, I am a beginner and just wanted some info: how can I improve this query?
> select *
> from tableA A, viewB B
> where A.key = B.key
> and a.criteria1 = '111'
> and a.criteria2 = some_function(a.key)
> One more thing: should the function be on the left side of the equals sign?
> Will a join make it better, or is something more than that needed?
If you are a beginner, try to learn the ANSI syntax. This will help you a lot in writing better queries.
Your select would look like this in ANSI:
select *
from tableA A
JOIN viewB B ON A.key = B.key
WHERE a.criteria1 = '111'
and a.criteria2 = some_function(a.key);
The good thing here is that this separates the typical joining part of the select from the typical filter criteria.
The other syntax very often lets you forget a join, just because there are so many tables and so many filters that you can no longer tell what was a join and what was not.
If you notice that the result is not what you expect, you can easily modify the query and compare the results.
Example A
Remove view B from the query (temporarily comment it out):
select *
from tableA A
--JOIN viewB B ON A.key = B.key
WHERE a.criteria1 = '111'
and a.criteria2 = some_function(a.key)
Example B
You notice that values from A are missing, maybe because there is no matching key in viewB? Then change the join to an outer join:
select *
from tableA A
LEFT OUTER JOIN viewB B ON A.key = B.key
WHERE a.criteria1 = '111'
and a.criteria2 = some_function(a.key)
(The OUTER keyword is optional; LEFT JOIN would be enough.)

  • How to improve this query speed ?....help me

How can I improve the query's speed? Any hints or corrections to the query are welcome. Here I am using sample tables for testing purposes; when I try it with my real data, this type of query takes a long time to run.
select ename, sal, comm from emp where (comm is null and &status='ok') or (comm is not null and &status='failed');
    Thanx in advance
    prasanth a.s.

    What about
    select ename,sal,comm from emp where comm is null and &status='ok'
    union all
    select ename,sal,comm from emp where comm is not null and &status='failed';
    Regards
    Vaishnavi

  • Urgent query regarding performance

hi
I have one query regarding performance. I am using Interactive Reporting and Workspace.
I have the license server, Shared Services, the BI services and UI services, and Oracle 9i (which holds the metadata) all installed on one system (one server), and the database that stores the relational data (DB2) on another system (i.e. 2 systems in total).
In order to increase performance I made some adjustments:
I installed the Hyperion BI server services, UI services, license server and Shared Services so that all web applications (which use WebSphere 5.1), i.e. Shared Services and UI services, run on server 1 (computer 1), the remaining license and BI server services run on computer 2, and the database (DB2) is on computer 3 (i.e. 3 systems in total).
My query: where should Oracle 9i, which holds the metadata, be installed (computer 1 or computer 2) to get the best performance?
For any queries please reply by mail:
    [email protected]
    9930120470

You should know that executing a query is always slower the first time. Oracle can then optimise your query and store it temporarily for further executions. But going from 3 minutes to 3 seconds... maybe your original query is really, really slow. Most of the time I only gain a few milliseconds. If Oracle is only able to optimize it down to 3 seconds, you must clearly rewrite your query.
Things you should know to improve your execution time: try to reduce the number of nested loops; nested loops give you an exponential execution time, which is really slow:
for rec1 in (select a from b) loop
  for rec2 in (select c from d) loop
  end loop;
end loop;
Anything like that is bad.
Try to avoid Cartesian products by writing the best where clause possible:
select a.a,
       b.b
from   a,
       b
where  b.b > 1
This is bad and slow.
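For instance, a single joined statement usually replaces such nested cursor loops; a sketch, assuming hypothetical tables b and d share a key column k:

-- Hypothetical rewrite of the nested loops above as one join:
select b.a, d.c
from   b
join   d on d.k = b.k;  -- the explicit join condition also avoids a Cartesian product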

  • Increase Query Designer Performance?

    Hi together,
Is it possible to increase the performance of the Query Designer? I have a query based on a MultiProvider with several InfoCubes. If I work with the Query Designer for a few minutes, it becomes slower and slower. Then it takes from one to ten seconds for a digit I have just typed to appear in my formula. This is very awful! For at least half of the time I work with the Query Designer, I have to wait for the tool to stop calculating something or displaying the hourglass. Very inefficient.
    Thanks for hints in advance!

    >
    Ricardo Rosa wrote:
    > Hi Timo,
    >
> Usually this kind of issue is solved by a frontend patch upgrade; have you tried to reproduce it with the latest FEP?
    >
> Another suggestion which should help is to search for RSZ table inconsistencies with report ANALYZE_RSZ_TABLES; it can find inconsistencies that decrease performance in the query definition and also suggest fixes.
    >
    > Kind regards,
    >
    > Ricardo
    Hi Ricardo,
Yes, I've already upgraded to the latest FEP version.
But thank you for this report. It looks very helpful!
    >
Arun Varadarajan wrote:
> Shikha,
> The question was regarding Query Designer performance, not query performance...
>
> We faced a similar issue earlier - even opening the Query Designer took a huge amount of time - but after upgrading my system to 1 GB RAM, most of the performance issues went away... check the RAM usage in the Task Manager and the CPU utilization. This might make the Query Designer faster.
Hi Arun,
At this point there is no more hope for me, as my system already has 2 GB of RAM and a dual-core CPU at 2.2 GHz.

  • Improving redo log writer performance

I have a database on RAC (2 nodes)
Oracle 10g
Linux 3
2 PowerEdge 2850 servers
I'm tuning my database with Spotlight. I already have this alert:
"The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
The servers are not on RAID 5.
How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer
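Before moving the log files anywhere, it is worth confirming that redo writes really dominate the waits; a sketch against the wait interface (standard 10g event names):

SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');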

Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with Flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage medium. You may decide to discount my advice because I work for one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
> Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting http://en.wikipedia.org/wiki/Solid_state_disk:
# Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
# Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher storage density with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
# Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
Comment: If you lose a hard drive holding your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
# Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges, compared to normal HDDs (which store the data inside a Faraday cage).
Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
# Slower than conventional disks on sequential I/O
Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also affect flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share with you some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
# Limited write cycles - Typical flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
> Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
> .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disks to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out, so that you can do it at no cost from us.
