Optimizer - Query relationship

Hi all,
I have a query that runs fast (about 50 seconds) when I hint a certain index, even though EXPLAIN PLAN reports a cost of 18205.
However, when I force certain other indexes, the cost drops to 18, yet the query takes 2 minutes to return results.
Aren't query performance and cost supposed to be inversely proportional?

Before you tune a query you need to be clear about what you are looking for, i.e. optimizing the query for best performance resource-wise, or best performance time-wise.
There is no strict inverse relationship between cost and performance (time-wise, which I understand is what you meant). The cost is the optimizer's estimate of resource usage, not a prediction of elapsed time.
If your optimization goal is minimum resources, go with the minimum-cost plan even if it takes more time; but if the goal is best retrieval time, go with the fastest plan even if it means a higher cost.
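The practical takeaway is to treat the cost figure as an estimate and the stopwatch as ground truth: benchmark each candidate plan, confirm they return the same answer, and keep the faster one. A minimal sketch of that workflow in Python with sqlite3 standing in for the database (the table and data are invented for illustration; the same principle applies to Oracle):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders (region, amount) VALUES (?, ?)",
                 [("R%d" % (i % 50), i * 1.5) for i in range(20000)])
conn.execute("CREATE INDEX orders_region ON orders (region)")

def run_and_time(sql):
    """Return (rows, elapsed seconds) for one execution of sql."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return rows, time.perf_counter() - start

# Two logically equivalent formulations of the same question; the planner may
# choose different plans for them. Compare measured elapsed time, not the
# planner's cost estimate, and verify both return the same answer.
rows_a, secs_a = run_and_time("SELECT COUNT(*) FROM orders WHERE region = 'R7'")
rows_b, secs_b = run_and_time("SELECT COUNT(*) FROM orders WHERE region IN ('R7')")
assert rows_a == rows_b  # same result set; measured time decides the winner
```

Whichever formulation wins on the stopwatch (at representative data volumes) is the one to keep, regardless of which one the cost column favors.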

Similar Messages

  • How to optimize query in oracle??

    hello all,
I want to ask this forum: how can I optimize a SQL query in Oracle when my table has more than 100,000 records? This is my query:
    select avg(
    select /*+ NO_USE_NL(B) */ (SUM(A.QTY_INSTORE) + SUM(A.QTY_DELIVERED)) TOTAL_DUMMY
                from daily_distribution_pop a,
                     ref_toko b
               where substr(a.svm_id, 1, 10) = b.id
                 and b.segment_type = 'A'
                 and b.enable_flag = 'Y'
                 and substr(a.pom_id, 1, 2) in ('PO')
                 and substr(a.svm_id, 1, 4) = '1203'
                 and to_char(a.tgl_visit, 'iw') = '09'
                 AND A.TGL_VISIT = (SELECT MAX(A.TGL_VISIT)
                                      FROM daily_distribution_pop A
                                     WHERE SUBSTR(SVM_ID, 1, 10) = B.ID
                                       and to_char(A.tgl_visit, 'iw') = '09')
                 AND (A.QTY_INSTORE <> 0
                       OR A.QTY_DELIVERED <> 0)
    select COUNT(DISTINCT B.ID)
      from daily_distribution_pop A1,
           ref_toko b
    where b.id = substr(A1.svm_id, 1, 10)
       AND b.segment_type = 'A'
       AND b.enable_flag  = 'Y'
       AND substr(A1.pom_id, 1, 2) in ('PO')
       AND '1203' = substr(A1.svm_id, 1, 4)
       AND '09' = to_char(A1.tgl_visit, 'iw')
       AND A1.TGL_VISIT = (SELECT /*+ NO_PUSH_SUBQ */ MAX(A2.TGL_VISIT)
                              FROM daily_distribution_pop A2
                             WHERE B.ID = SUBSTR(SVM_ID, 1, 10)
                               AND to_char(A2.tgl_visit, 'iw') = to_char(A1.tgl_visit, 'iw'))
       AND (0 <> A1.QTY_INSTORE
OR 0 <> A1.QTY_DELIVERED)) total from dual;
This query works, but when I run it from my application (ASP.NET) over the internet it is very slow; when I try it locally it is not slow. Maybe in this forum I can find an answer to my problem. I'm stuck. Thanks for your answers.
    Edited by: xoops on Mar 26, 2010 2:07 AM

    xoops wrote:
sorry, I do not understand what you mean. Sorry, I am a newbie in Oracle. Thanks.
Alex was asking (I believe) why you have the hints on the SQL statements. Hints are typically a last resort, and if you are a newbie as you say, they are very likely NOT the route you should be investigating.
I notice you are having to do A LOT of data manipulation in order to join your tables, which suggests a bad data model (for example, having to join one table to another based on the first 4 characters of a string). Is there any chance you can change that? If not, you can at least open up index access paths by using things like LIKE instead of SUBSTR.
I've made a couple of changes to your first query: removing the double access to the daily_distribution_pop table, and replacing the SUBSTRs with LIKE conditions (I have no idea what indexes, if any, you have on these tables).
If this is any better, you should be able to make the same sort of changes to your second query. If it's no better than what you have now, you'll have to follow the links provided by Alex and give us a lot more information. And my suggestion would be to fix the data model as a starting point.
select
   avg(the_sums)
from
   (
    select
       (sum(a.qty_instore) + sum(a.qty_delivered)) as the_sums
    from
       (
        select
           a.qty_instore,
           a.qty_delivered,
           a.tgl_visit,
           max(a.tgl_visit) over (partition by b.id) as max_tgl_visit
        from
           daily_distribution_pop  a,
           ref_toko                b
        where substr(a.svm_id, 1, 10) = b.id
        and   b.segment_type          = 'A'
        and   b.enable_flag           = 'Y'
        --and   substr(a.pom_id, 1, 2) in ('PO')
        and   a.pom_id like 'PO%'
        --and   substr(a.svm_id, 1, 4)     = '1203'
        and   a.svm_id like '1203%'
        and   to_char(a.tgl_visit, 'iw') = '09'
        and   (a.qty_instore <> 0 or a.qty_delivered <> 0)
       )
    where max_tgl_visit = tgl_visit
   );
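As an aside, the LIKE-vs-SUBSTR point generalizes: wrapping an indexed column in a function hides it from a plain index, while a constant-prefix LIKE can be rewritten into a range scan on the index. A quick illustration using Python's sqlite3 as a stand-in database (the names mirror the query above but the data is invented; note SQLite only applies its LIKE prefix optimization with case_sensitive_like turned on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA case_sensitive_like = ON")  # needed for LIKE to use a plain index in SQLite
conn.execute("CREATE TABLE daily_distribution_pop (svm_id TEXT, qty INTEGER)")
conn.executemany("INSERT INTO daily_distribution_pop VALUES (?, ?)",
                 [("%04d-store-%d" % (i, i), i) for i in range(5000)])
conn.execute("CREATE INDEX ddp_svm ON daily_distribution_pop (svm_id)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry a human-readable detail string in the last column
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Wrapping the column in a function hides it from the index -> full table scan
substr_plan = plan("SELECT * FROM daily_distribution_pop WHERE substr(svm_id, 1, 4) = '1203'")
# A constant-prefix LIKE is rewritten into a range probe on the index
like_plan = plan("SELECT * FROM daily_distribution_pop WHERE svm_id LIKE '1203%'")

print(substr_plan)  # e.g. a SCAN over the table
print(like_plan)    # e.g. SEARCH ... USING INDEX ddp_svm
```

The exact plan wording differs by engine and version, but the shape is the same in Oracle: SUBSTR(col, ...) = 'x' forces a scan unless you build a function-based index, while col LIKE 'x%' can use the ordinary index.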

  • Optimize query

    Hi All,
Could you please tell me why this query takes so much more time when it is defined inside an item-level procedure in a package? My package is an outbound that takes data from Oracle into a staging table. Without this code it executes fast, loading 3 lakh+ records into the staging table in about 40 minutes; but when I include the code below, it takes more than 12 hours to fetch the records from Oracle into the staging table. Please help me optimize this query; I have already created indexes on all the tables.
    SELECT distinct substr(TRIM(fds.short_text),instr(fds.short_text,'Rev:',1)+5,instr(fds.short_text,'UOM:',1)-instr(fds.short_text,'Rev:',1)-5)
    INTO v_Drawing_Rev
    FROM fnd_documents_tl fdt
    ,FND_DOCUMENTS_SHORT_TEXT fds
    ,fnd_attached_documents fad
    WHERE 1=1
    --AND (fdt.MEDIA_ID=14173223)
    AND fdt.language = 'US'
    AND fdt.MEDIA_ID = fds.MEDIA_ID
    AND fad.pk2_value = r_get_data(i).inventory_item_id --to_char(msi.inventory_item_id)
    AND fad.pk1_value = (SELECT to_char(organization_id)
    FROM org_organization_definitions
    WHERE organization_code = 'FLS')
    AND fad.document_id = fdt.document_id
    AND 'MTL_SYSTEM_ITEMS' = fad.ENTITY_NAME
    AND ROWNUM =1;
    EXCEPTION
    WHEN OTHERS THEN
    v_Drawing_Rev := NULL;
    END;
    Edited by: user605933 on Sep 23, 2010 7:12 AM

Please see these links on how to post a tuning request:
    HOW TO: Post a SQL statement tuning request - template posting
    When your query takes too long ...
    HTH
    Srini
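One pattern worth checking here (a generic observation, since we don't have the full package): the SELECT runs once per item inside the procedure's loop, and the org_organization_definitions scalar subquery re-resolves the same constant on every iteration. Hoisting per-row lookups into one set-based join is usually the big win. A toy illustration of the idea in Python with sqlite3 (schemas invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (item_id INTEGER PRIMARY KEY);
CREATE TABLE docs (item_id INTEGER, short_text TEXT);
""")
conn.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(1, 101)])
conn.executemany("INSERT INTO docs VALUES (?, ?)",
                 [(i, "Rev: A%02d UOM:EA" % i) for i in range(1, 101)])

# Slow shape: one query per loop iteration (what the package is doing)
per_row = {}
for (item_id,) in conn.execute("SELECT item_id FROM items"):
    row = conn.execute("SELECT short_text FROM docs WHERE item_id = ?", (item_id,)).fetchone()
    per_row[item_id] = row[0]

# Set-based shape: a single join fetches everything in one statement
joined = dict(conn.execute(
    "SELECT i.item_id, d.short_text FROM items i JOIN docs d ON d.item_id = i.item_id"))

assert per_row == joined  # identical answers; the join avoids N round trips
```

In the PL/SQL case the equivalent move is to join fnd_attached_documents and friends directly into the main outbound cursor (and resolve the 'FLS' organization_id once into a variable) instead of firing this SELECT per inventory item.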

  • Optimize query results(timing) returned from custom search query

    Casey can you send the following to Oracle Support as soon as you can, we need their input on optimizing returning search results.
    To Oracle:
    We have noticed in the IdocScript reference guide that there is a way to optimize the query results returning to the user.
Here is the scenario: we have created a custom web query where we send a query to UCM and return a result using a custom template via URL, like so:
    http://dnrucm.dnr.state.la.us/ucm/idcplg?IdcService=GET_SEARCH_RESULTS&QueryText=dDocTitle <matches> `AGREEMENT` <AND> xStateLeaseNum<matches>`00340`&ResultCount=30&SortOrder=Desc&SortField=dInDate&urlTemplate=/ucm/groups/public/documents/oos/hcst-search.hcst
    This works fine. The problem is that when a query is broader like
    http://dnrucm.dnr.state.la.us/ucm/idcplg?IdcService=GET_SEARCH_RESULTS&QueryText= xStateLeaseNum<matches>`00340`&ResultCount=30&SortOrder=Desc&SortField=dInDate&urlTemplate=/ucm/groups/public/documents/oos/hcst-search.hcst
    The query takes an extremely long time to execute and the results template sometimes never comes back, which seems like a timeout.
    Is there something else that we can do to optimize the query results?

    Hi John,
    What version of xMII are you using?
    Some things I would try:
    1. Clear the Java Web Start Cache - see if that makes any difference.
    2. If not, what happens if you save the query under a different name, and then try using it in a new transaction?
3. Please feel free to enter your case into xMII Support via the SAP Support Portal if you are unable to make this work.
    Kind Regards,
    Diana Hoppe

  • Optimize query on table with 10 million transactions

Is there a better way than the one shown below to retrieve the transactions from a table that contains more than 9 million rows?
SELECT * FROM TAB_COMM WHERE CARR_CD = 'ABC' AND POL_NUM LIKE 'HPA%';
How can I optimize this query to show results for policies starting with 'HPA'? It returns more than 1 million rows. There are already many indexes on this table, and I am not sure whether adding an index will help with the LIKE operator.

    Thanks for all the responses. So much to learn from you all!
I understand that we may need to add another index to this table. Before adding one, please look at the existing ones:
CREATE TABLE TAB_COMM (
      CARR_COMM_KY_ID     NUMBER                    NOT NULL,
      PRU_WK_CD           NUMBER                    NOT NULL,
      CARR_CD             VARCHAR2(5 BYTE)          NOT NULL,
      SSN_TIN_NUM         VARCHAR2(9 BYTE),
      POL_NUM             VARCHAR2(20 BYTE),
      GRP_POL_NUM         VARCHAR2(20 BYTE),
      REC_TYP_CD          VARCHAR2(1 BYTE),
      EXTRCT_DT           DATE,
      OUT_BRKRG_ORG_CD    VARCHAR2(3 BYTE),
      SSN_TIN_CD          VARCHAR2(1 BYTE),
      CARR_AGT_CNTR_NUM   VARCHAR2(20 BYTE),
      SP_PCT_RT           NUMBER(5,4),
      INS_FRST_NAME       VARCHAR2(20 BYTE),
      INS_MID_NAME        VARCHAR2(25 BYTE),
      INS_LST_NAME        VARCHAR2(50 BYTE),
      INS_SFX_NAME        VARCHAR2(20 BYTE),
      ISS_DT              DATE,
      POL_DT              DATE,
      YR_INFRC_NUM        DATE,
      PROD_CD             VARCHAR2(20 BYTE)         NOT NULL,
      APPL_DT             DATE,
      IPP_DT              DATE,
      PREM_DUE_DT         DATE,
      ISS_ST_CD           VARCHAR2(2 BYTE),
      RES_ST_CD           VARCHAR2(2 BYTE),
      FRST_RNWL_CD        VARCHAR2(4 BYTE)          NOT NULL,
      TRANS_TYP_CD        VARCHAR2(4 BYTE)          NOT NULL,
      GRS_PREM_AMT        NUMBER(11,2),
      COMM_PREM_AMT       NUMBER(11,2),
      COMM_RT             NUMBER(5,4),
      PREM_MODE_CD        VARCHAR2(2 BYTE),
      PRU_REVN_AMT        NUMBER(11,2),
      AGNT_EARN_COMM_AMT  NUMBER(11,2),
      CK_NUM              VARCHAR2(8 BYTE),
      CK_DT               DATE,
      ISS_AGE_NUM         VARCHAR2(2 BYTE),
      CNTR_NUM            VARCHAR2(6 BYTE),
      SM_CNTR_NUM         VARCHAR2(6 BYTE),
      SM_OVRD_AMT         NUMBER(11,2),
      PDI_REVN_AMT        NUMBER(11,2),
      OVRD_STAT_CD        VARCHAR2(2 BYTE),
      MOD_BY_ID           VARCHAR2(50 BYTE),
      MOD_DT              DATE,
      CRT_BY_ID           VARCHAR2(50 BYTE),
      CRT_DT              DATE,
      BAT_ID              VARCHAR2(20 BYTE),
      ST_SRC_CD           VARCHAR2(1 BYTE),
      PRODTN_STAT_CD      VARCHAR2(2 BYTE),
      CHANL_CD            VARCHAR2(4 BYTE)          DEFAULT '00',
      COMP_PLN_CD         VARCHAR2(2 BYTE),
      GA_CD               VARCHAR2(12 BYTE)         NOT NULL,
      CARR_PROD_CD        VARCHAR2(25 BYTE)         NOT NULL,
      AGNT_SHR_RT         NUMBER(5,4),
      SRC_FILE_NAME       VARCHAR2(100 BYTE),
      GA_OVRD_AMT         NUMBER(11,2),
      PDI_OVRD_RT         NUMBER(5,4),
      DAMIS_WK_CD         NUMBER,
      RECNCL_STAT_CD      VARCHAR2(2 BYTE),
      REC_ID              NUMBER(10),
      TERM_DUR            NUMBER(4),
      SALES_CD            CHAR(1 BYTE),
      INS_ADDR            VARCHAR2(50 BYTE),
      INS_CITY_NAME       VARCHAR2(30 BYTE),
      INS_ZIP_CD          VARCHAR2(5 BYTE),
      TRKG_CD             VARCHAR2(12 BYTE),
      SUPL_KIND_CD        CHAR(1 BYTE)
)
This huge table has the following indexes:
Index1 columns:
POL_NUM, CNTR_NUM, POL_DT, ISS_DT, TRANS_TYP_DT, COMM_PREM_AMT, PRU_REVN_AMT
Index2 columns:
SRC_FILE_NAME, REC_ID
Index3, 4, 5, 6, 7, 8, 9, 10 - single-column indexes on:
CARR_CD
SP_PCT_RT
CNTR_NUM
POL_NUM
INS_FRST_NAME
INS_LST_NAME
INS_MID_NAME
CARR_COMM_KY_ID <- pk
Index 6 columns:
POL_NUM, CNTR_NUM, "INS_FRST_NAME"||"INS_MID_NAME"||"INS_LST_NAME"||"INS_SFX_NAME", TRUNC("EXTRCT_DT"), SRC_FILE_NAME, PROD_CD, TRANS_TYP_CD, FRST_RNWL_CD, PRODTN_STAT_CD, GA_CD
Index 11 columns:
POL_NUM, CNTR_NUM, "INS_FRST_NAME"||"INS_MID_NAME"||"INS_LST_NAME"||"INS_SFX_NAME", TRUNC("EXTRCT_DT"), SRC_FILE_NAME, PROD_CD, TRANS_TYP_CD, FRST_RNWL_CD, PRODTN_STAT_CD, GA_CD
I should add an index only if it makes the query faster; I don't want to add yet another index to this huge list. Please advise.
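For the predicate CARR_CD = 'ABC' AND POL_NUM LIKE 'HPA%', none of the listed indexes leads with both columns together; a composite index on (CARR_CD, POL_NUM) lets the equality pick the CARR_CD slice and the constant prefix range-scan POL_NUM within it. A small sketch of that access path using Python's sqlite3 as a stand-in engine (table trimmed to the two relevant columns; SQLite needs case_sensitive_like for its LIKE optimization):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA case_sensitive_like = ON")
conn.execute("CREATE TABLE tab_comm (carr_cd TEXT, pol_num TEXT)")
conn.executemany("INSERT INTO tab_comm VALUES (?, ?)",
                 [("ABC" if i % 3 == 0 else "XYZ",
                   ("HPA%06d" if i % 2 == 0 else "HPB%06d") % i)
                  for i in range(6000)])
# Composite index: the equality column first, then the LIKE-prefix column
conn.execute("CREATE INDEX tc_carr_pol ON tab_comm (carr_cd, pol_num)")

plan_rows = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tab_comm "
    "WHERE carr_cd = 'ABC' AND pol_num LIKE 'HPA%'").fetchall()
detail = " ".join(r[-1] for r in plan_rows)
print(detail)  # e.g. SEARCH ... USING ... INDEX tc_carr_pol

matching = conn.execute(
    "SELECT COUNT(*) FROM tab_comm "
    "WHERE carr_cd = 'ABC' AND pol_num LIKE 'HPA%'").fetchone()[0]
```

That said, with over a million matching rows the dominant cost may simply be fetching the result set; if only a few columns are needed, adding them to the index (making it covering) avoids the table visits entirely. This is a sketch of the principle, not a recommendation to add the index blindly to an already index-heavy table.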

  • Urgent! How to use push_pred to optimize query with UNION in 10g?

    Hi,
We are facing slow query performance in a 10g database.
We would appreciate it if anyone could advise on how to optimize the performance by using push_pred, or any other approach.
    Thanks in advance.
    Cheers,
    SC

Don't post duplicate posts.

  • Not able to optimize query

    Hello,
I have been working on one query for 2-3 days but have not been able to optimize it.
    Can anyone please help?
    SELECT x7_0.keypowermarketentity,
    x1_0.plantmonthlyasofyear,
    x1_0.plantmonthlyasofmonth,
    x0_0.plantownerperiodasof,
    x0_0.unitcapacitysummernet,
    x0_0.unitcapacitywinternet,
    x3_0.mappingoverlap,
    x2_0.keyunitoperatingext,
    x2_0.powerplantunitserviceyear,
    x2_0.powerplantunitservicemonth,
    x2_0.powerplantunitretireyear,
    x2_0.powerplantunitretiremonth,
    x4_0.unitcapacityasof,
    x5_0.unitcapacityasof,
    x4_0.unitcapacitysummernet,
    x4_0.unitcapacitywinternet,
    x1_0.futuresyear,
    x1_0.keyfuelcontracttype,
    x1_0.fuelcontractcoal,
    x1_0.fuelcontractcoalprice,
    x1_0.coalburnedheat,
    x1_0.fuelcontractgas,
    x1_0.fuelcontractgasprice,
    x1_0.gasburnedheat,
    x1_0.fuelcontractoil,
    x1_0.fuelcontractoilprice,
    x1_0.oilburnedheat,
    x9_0.keycoalproducingregionparent,
    x2_0.keypowerplantunit
    FROM Adw.dbo.powerplantannual x0_0
    INNER JOIN energy.dbo.electricfuelcontract x1_0
    ON ( x0_0.annualization = 12
    AND ( x1_0.plantmonthlyasofmonth = (
    Month(x0_0.plantownerperiodasof) )
    AND ( x1_0.plantmonthlyasofyear = (
    Year(x0_0.plantownerperiodasof) )
    AND x0_0.keypowerplant =
    x1_0.keypowerplant ) ) )
    INNER JOIN Adw.dbo.powerplantunit x2_0
    ON x1_0.keypowerplant = x2_0.keypowerplant
    INNER JOIN Adw.dbo.unitnercsubregion x3_0
    ON x2_0.keypowerplantunit = x3_0.keypowerplantunit
    INNER JOIN Adw.dbo.unitcapacitystatus x4_0
    ON x2_0.keypowerplantunit = x4_0.keypowerplantunit
    LEFT JOIN Adw.dbo.unitcapacitystatus x5_0
    ON ( x5_0.keyunitcapacitychangestatus = 1
    AND x4_0.keyunitcapacitystatus =
    x5_0.keyunitcapacitystatusprior )
    AND x5_0.updoperation < 2
    INNER JOIN lookup.dbo.nercsubregion x6_0
    ON x3_0.keynercsubregion = x6_0.keynercsubregion
    INNER JOIN calcs.dbo.powermarketentity x7_0
    ON x6_0.keynercregion = x7_0.keynercregion
    LEFT JOIN Adw.dbo.mine x8_0
    ON x1_0.keymine = x8_0.keymine
    AND x8_0.updoperation < 2
    LEFT JOIN lookup.dbo.coalproducingregion x9_0
    ON x9_0.keycoalproducingregion = (
    Isnull(x8_0.keycoalproducingregionprimary, 1) )
    AND x9_0.updoperation < 2
WHERE x2_0.keyunitoperatingext IN ( 1, 3, 4, 9 )
AND ( x3_0.mappingended IS NULL
OR x3_0.mappingended > x0_0.plantownerperiodasof )
AND x3_0.mappingbegun <= x0_0.plantownerperiodasof
AND x7_0.keypowermarketentity = 0
AND x1_0.plantmonthlyasofyear = 2010
AND x4_0.keyunitcapacitychangestatus = 1
AND x3_0.keynercsubregion NOT IN (
114, 115, 117, 112,
111, 116, 110, 109,
118, 83, 91 )
AND x7_0.keynercregion IS NOT NULL
AND x0_0.updoperation < 2
AND x1_0.updoperation < 2
AND x1_0.appstatus = 2
AND x2_0.updoperation < 2
AND x3_0.updoperation < 2
AND x4_0.updoperation < 2
AND x6_0.updoperation < 2
AND x7_0.updoperation < 2
    ORDER BY x0_0.keypowerplant,
    x1_0.plantmonthlyasofyear,
    x1_0.plantmonthlyasofmonth,
    x1_0.keyfuelcontracttype,
    x9_0.keycoalproducingregionparent
    SQL Server parse and compile time: 
       CPU time = 0 ms, elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    SQL Server parse and compile time: 
       CPU time = 1716 ms, elapsed time = 1746 ms.
    (180795 row(s) affected)
    Table 'CoalProducingRegion'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Mine'. Scan count 5, logical reads 295, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'PowerMarketEntity'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'UnitNERCSubRegion'. Scan count 2, logical reads 39, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'NERCSubRegion'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'PowerPlantUnit'. Scan count 5, logical reads 454, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'ElectricFuelContract'. Scan count 4755, logical reads 20758, physical reads 0, read-ahead reads 11, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'UnitCapacityStatus'. Scan count 10, logical reads 2156, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'PowerPlantAnnual'. Scan count 180795, logical reads 639534, physical reads 0, read-ahead reads 12, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 40376 ms,  elapsed time = 80588 ms.
    SQL Server parse and compile time: 
       CPU time = 0 ms, elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.

There are some lousy nested loops on one of the joined tables slowing the process down (ElectricFuelContract). Is parallelism enabled? If not, enable it; it should change the physical algorithm to a hash match.
Someone is going to ask you for the DDL statements to create all the data structures required to run your query, which you'll provide in an hour or so. This will considerably improve the chances of having someone rewrite your query.
A few hours later, just before this thread has an answer marked, some ancient legend is going to come around and give you a few pieces of advice on naming conventions he'll probably just copy-paste from one of his books. Do what you will with it.
Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that, in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs all adding different colours to the sky, this is undocumented behavior and should not be relied upon.

  • Need help to optimize query

    Hi
I need help optimizing the query below... it takes too long to execute.
SELECT vbak~waerk
       vbap~vbeln vbap~posnr
       vbap~netwr vbap~brgew vbap~erdat
  INTO CORRESPONDING FIELDS OF TABLE tmp_orders
  FROM vbak INNER JOIN vbap ON vbak~vbeln = vbap~vbeln
  WHERE vbap~matnr EQ t_mchb-matnr
  AND   vbap~netwr GT 0
  AND   vbap~brgew GT 1
  AND   vbak~auart IN ('ZCAO', 'ZDDO').
    Please help...
    Edited by: Alvaro Tejada Galindo on Mar 18, 2008 6:02 PM

    Hi,
    try this logic and see:
data: begin of itab_for_vbak occurs 0,
        waerk type waerk,
        vbeln type vbeln,
      end of itab_for_vbak,
      wa_for_vbak like itab_for_vbak.
data: begin of itab_for_vbap occurs 0,
        vbeln like vbap-vbeln,
        posnr like vbap-posnr,
        netwr like vbap-netwr,
        brgew like vbap-brgew,
        erdat like vbap-erdat,
      end of itab_for_vbap,
      wa_for_vbap like itab_for_vbap.

select waerk
       vbeln
  from vbak
  into table itab_for_vbak
  where auart in ('ZCAO', 'ZDDO').

select vbeln
       posnr
       netwr
       brgew
       erdat
  from vbap
  into table itab_for_vbap
  where matnr eq t_mchb-matnr
  and   netwr gt 0
  and   brgew gt 1.

loop at itab_for_vbak into wa_for_vbak.
  read table itab_for_vbap into wa_for_vbap
       with key vbeln = wa_for_vbak-vbeln.
  if sy-subrc eq 0.
    tmp_orders-waerk = wa_for_vbak-waerk.
    tmp_orders-vbeln = wa_for_vbap-vbeln.
    tmp_orders-posnr = wa_for_vbap-posnr.
    tmp_orders-netwr = wa_for_vbap-netwr.
    tmp_orders-brgew = wa_for_vbap-brgew.
    tmp_orders-erdat = wa_for_vbap-erdat.
    append tmp_orders.
  endif.
endloop.
    Please let me know how it has improved your performance or still if you have issues on this.
    Thanks,
    Vishnu.

  • How to optimize query that returns data based on one matching and one missing field joining two tables

    Hi all,
Here is what I am trying to do. I have 2 tables, A and B. Both have a fiscal year column and a school ID column. I want to return all data from table B where the school IDs match but the fiscal year from A is not in B. I wrote the 2 queries below, but each took 2 minutes to process 30,000 records in table B. I need to optimize this query.
    1) select 1 from table A inner join table B
    on A.SchoolID=B.SchoolID where A.Year not in (select distinct Year from table B)
2) select distinct Year from table A where School ID in (select distinct School ID from table B)
and Year not in (select distinct Year from table B)

    Faraz81,
query execution time will depend not only on your data volume and structure but also on your system resources.
You should post your execution plans and the DDL to generate the data structures so we can take a better look, but one thing you could try right away is to store the result of the subquery in a table variable and use that instead.
You'll also benefit from the creation of:
1. An index on the B.SchoolID column.
2. Statistics on the Year column in table B.
You can also try to change the physical algorithm used to join A to B by using query hints (HASH, MERGE, LOOP) and see how they perform. For example:
select 1 from table A inner HASH join table B
on A.SchoolID=B.SchoolID where A.Year not in (select distinct Year from table B)
As the query optimizer generally chooses the best plan, this might not be a good idea, but then again, without further information it's going to be hard to help you.
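One more caution on the NOT IN (subquery) shape, beyond speed: if the subquery ever returns a NULL, NOT IN returns no rows at all, whereas a correlated NOT EXISTS both dodges that trap and typically plans as an efficient anti-join. A sketch in Python with sqlite3 (tables invented to mirror A and B; the semantics are the same in SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (schoolid INTEGER, year INTEGER);
CREATE TABLE b (schoolid INTEGER, year INTEGER);
INSERT INTO a VALUES (1, 2019), (1, 2020), (2, 2021);
INSERT INTO b VALUES (1, 2019), (2, 2021), (2, NULL);  -- note the NULL year
""")

# NOT IN against a subquery that yields a NULL: three-valued logic
# makes every comparison UNKNOWN, so the whole result silently empties.
not_in = conn.execute("""
    SELECT a.schoolid, a.year FROM a
    WHERE a.year NOT IN (SELECT year FROM b)
""").fetchall()

# NOT EXISTS is NULL-safe and expresses the same anti-join intent.
not_exists = conn.execute("""
    SELECT a.schoolid, a.year FROM a
    WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.year = a.year)
""").fetchall()

print(not_in)      # [] -- the NULL in b.year empties the result
print(not_exists)  # [(1, 2020)] -- the row actually wanted
```

So if Year is nullable in table B, rewriting the NOT IN as NOT EXISTS is both a correctness and a performance fix.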

  • Help to optimize query

    Hi pal,
Please help me optimize this query; it is too slow (execution time for about 1 million records is over 5 hours):
    IF EXISTS(SELECT [name] FROM tempdb.sys.tables WHERE [name] like '#Temp%') BEGIN
    DROP TABLE #Temp;
    END;
    Select *,CAST(0 AS int) PreviusRowId,(CAST(Doc_Date AS VARCHAR)+CAST(Doc_Time AS VARCHAR)) AS FullTime INTO #Temp From vwStoragePricingData vw ORDER BY vw.Stock,vw.Part_No,vw.Doc_Date,vw.Doc_Time,vw.Ie_Code,vw.Doc_No
    BEGIN TRANSACTION
    SET QUOTED_IDENTIFIER ON
    SET ARITHABORT ON
    SET NUMERIC_ROUNDABORT OFF
    SET CONCAT_NULL_YIELDS_NULL ON
    SET ANSI_NULLS ON
    SET ANSI_PADDING ON
    SET ANSI_WARNINGS ON
    COMMIT
    BEGIN TRANSACTION
    GO
    ALTER TABLE #Temp ADD
    AutoAmount AS CASE WHEN RemainQty<=0 THEN 0 ELSE RemainPrice/RemainQty END
    GO
    ALTER TABLE #Temp SET (LOCK_ESCALATION = TABLE)
    GO
    COMMIT
    UPDATE #Temp
    SET PreviusRowId = (SELECT TOP 1 RowId FROM #Temp preTemp WHERE
    preTemp.Stock=#Temp.Stock AND preTemp.Part_No=#Temp.Part_No
    AND preTemp.FullTime<=#Temp.FullTime AND preTemp.Doc_No<#Temp.Doc_No ORDER BY RowId DESC)
    UPDATE #Temp
    SET RemainQty = FirstQty+InputQty-ExportQty,
    RemainPrice = FirstPrice+InputQty-ExportPrice,
    IsCalculated=1,
    IsFinished=1
    WHERE Doc_No='0'
    UPDATE
    #Temp
    SET
    #Temp.FirstPrice= Head.AutoAmount * #Temp.FirstQty ,
    #Temp.InputPrice= Head.InputQty ,
    #Temp.ExportPrice= Head.ExportQty ,
    #Temp.IsCalculated=1
    FROM
    #Temp
    INNER JOIN
    #Temp Head
    ON #Temp.RefrenceTransactId=Head.TransactId AND head.TypeIsSelfReference=1 AND #Temp.IsCalculated=0
    IF EXISTS(SELECT [name] FROM tempdb.sys.tables WHERE [name] like '#TempDate%') BEGIN
    DROP TABLE #TempDate;
    END;
    SELECT DISTINCT Doc_Date INTO #TempDate FROM #Temp
    SELECT * FROM #TempDate
    DECLARE @CurrentDate AS VARCHAR(10)
    DECLARE Date_cursor CURSOR FOR
    SELECT Doc_Date
    FROM #TempDate ORDER BY Doc_Date
    OPEN Date_cursor
    FETCH NEXT FROM Date_cursor
    INTO @CurrentDate
    WHILE @@FETCH_STATUS = 0
    BEGIN
    DECLARE @ProccessStock AS VARCHAR(3)
    DECLARE @ProccessPartNo AS VARCHAR(50)
    DECLARE Calculate_cursor CURSOR FOR
    SELECT Stock,Part_No FROM #Temp WHERE Doc_Date=@CurrentDate
    GROUP BY Doc_Date,Doc_Time,Stock,Part_No
    ORDER BY Doc_Date,Doc_Time,COUNT(RefrenceTransactId)
    OPEN Calculate_cursor
    FETCH NEXT FROM Calculate_cursor
    INTO @ProccessStock ,@ProccessPartNo
    WHILE @@FETCH_STATUS = 0
    BEGIN
    UPDATE #Temp
    SET RemainQty += (SELECT RemainQty FROM #Temp preTemp WHERE
    RowId=#Temp.PreviusRowId),
    RemainPrice += (SELECT RemainPrice FROM #Temp preTemp WHERE
    RowId=#Temp.PreviusRowId),
    IsCalculated=1,
    IsFinished=1
    WHERE Stock=@ProccessStock AND Part_No=@ProccessPartNo AND Doc_Date=@CurrentDate AND IsFinished=0
    UPDATE #Temp
    SET
    #Temp.FirstPrice= Head.AutoAmount * #Temp.FirstQty ,
    #Temp.InputPrice= Head.AutoAmount * #Temp.InputQty ,
    #Temp.ExportPrice= Head.AutoAmount * #Temp.ExportQty ,
    #Temp.IsCalculated=1
    FROM
    #Temp
    INNER JOIN
    #Temp Head
    ON #Temp.RefrenceTransactId=Head.TransactId
    WHERE
    Head.Stock=@ProccessStock AND Head.Part_No=@ProccessPartNo AND Head.Doc_Date=@CurrentDate
    -- Get the next Calculate.
    FETCH NEXT FROM Calculate_cursor
    INTO @ProccessStock ,@ProccessPartNo
    END
    CLOSE Calculate_cursor;
    DEALLOCATE Calculate_cursor;
    -- Get the next Stock.
    FETCH NEXT FROM Date_cursor
    INTO @CurrentDate
    END
    CLOSE Date_cursor;
    DEALLOCATE Date_cursor;
    SELECT * FROM #Temp
    Thank you
    Rambod Taati

Good day Rambod,
1. We cannot reproduce your situation in order to optimize the query, since we don't have the vwStoragePricingData table, for example. Therefore we can only comment on theoretical "golden rules".
It is highly recommended to post DDL+DML for every relevant element, to give us the tools to reproduce your situation as closely as we can.
2. We cannot check the query plan without having the tables on our server, therefore we cannot see what is really going on there :-(
3. Olaf mentioned the most important issues... I can add some general points which might improve your query.
I see the temp table being updated several times.
>> First update: it might use a massive table scan!
   * An index on the temp table might do great.
      One of the most valuable assets of a temp table (#temp) is the ability to add either a clustered or nonclustered index.
      Additionally, #temp tables allow auto-generated statistics to be created against them.
   * Moreover, we have no information on your server version.
      If you are using SQL 2012 or newer, you might get a better result using the LEAD/LAG functions.
>> Second update: you should try to combine it with the first one. If I noticed correctly
(again, we could not check or see anything, since we don't have the information to reproduce), you can do this in a single update. Probably the third update as well (but this might need a more complex query, since you use the result of the first update for this update).
>> The same comment goes for the second temp table...
>> As Olaf mentioned, you should try to work with SETs, not row by row using a cursor or any other type of looping.
      Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]
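The LEAD/LAG point above is the key one for the PreviusRowId update: a single window-function pass replaces the correlated TOP 1 subquery fired once per row. A sketch of the equivalence in Python with sqlite3 (requires SQLite 3.25+ for window functions; columns trimmed to the ordering key for the example, whereas the real UPDATE also orders by FullTime and Doc_No):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_rows "
             "(row_id INTEGER PRIMARY KEY, stock TEXT, part_no TEXT, full_time TEXT)")
conn.executemany("INSERT INTO temp_rows VALUES (?, ?, ?, ?)", [
    (1, "S1", "P1", "20240101"),
    (2, "S1", "P1", "20240102"),
    (3, "S1", "P2", "20240101"),
    (4, "S1", "P2", "20240103"),
    (5, "S2", "P1", "20240101"),
])

# Correlated-subquery shape: one probe per row (like the existing UPDATE)
correlated = conn.execute("""
    SELECT row_id,
           (SELECT MAX(p.row_id) FROM temp_rows p
             WHERE p.stock = t.stock AND p.part_no = t.part_no
               AND p.row_id < t.row_id) AS prev_row_id
    FROM temp_rows t ORDER BY row_id
""").fetchall()

# Window-function shape: one ordered pass per (stock, part_no) group
windowed = conn.execute("""
    SELECT row_id,
           LAG(row_id) OVER (PARTITION BY stock, part_no ORDER BY row_id) AS prev_row_id
    FROM temp_rows ORDER BY row_id
""").fetchall()

assert correlated == windowed  # same previous-row mapping, computed in one pass
```

In T-SQL the same LAG(...) OVER (PARTITION BY Stock, Part_No ORDER BY FullTime, Doc_No) expression, materialized once, would replace both the per-row subquery and much of the cursor loop.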

  • Query optimization - Query is taking long time even there is no table scan in execution plan

    Hi All,
The query below takes a very long time to execute even though all the required indexes are present,
and the execution plan shows no table scan. I did a lot of research but I am unable to find a solution.
Please help; this is required very urgently. Thanks in advance. :)
WITH cte
AS (
SELECT Acc_ex1_3
FROM Acc_ex1
INNER JOIN Acc_ex5 ON (
Acc_ex1.Acc_ex1_Id = Acc_ex5.Acc_ex5_Id
AND Acc_ex1.OwnerID = Acc_ex5.OwnerID
)
WHERE (
cast(Acc_ex5.Acc_ex5_92 AS DATETIME) >= '12/31/2010 18:30:00'
AND cast(Acc_ex5.Acc_ex5_92 AS DATETIME) < '01/31/2014 18:30:00'
)
)
SELECT DISTINCT R.ReportsTo AS directReportingUserId
,UC.UserName AS EmpName
,UC.EmployeeCode AS EmpCode
,UEx1.Use_ex1_1 AS PortfolioCode
,(
SELECT TOP 1 TerritoryName
FROM UserTerritoryLevelView
WHERE displayOrder = 6
AND UserId = R.ReportsTo
) AS BranchName
,GroupsNotContacted AS groupLastContact
,GroupCount AS groupTotal
FROM ReportingMembers R
INNER JOIN TeamMembers T ON (
T.OwnerID = R.OwnerID
AND T.MemberID = R.ReportsTo
AND T.ReportsTo = 1
)
INNER JOIN UserContact UC ON (
UC.CompanyID = R.OwnerID
AND UC.UserID = R.ReportsTo
)
INNER JOIN Use_ex1 UEx1 ON (
UEx1.OwnerId = R.OwnerID
AND UEx1.Use_ex1_Id = R.ReportsTo
)
INNER JOIN (
SELECT Accounts.AssignedTo
,count(DISTINCT Acc_ex1_3) AS GroupCount
FROM Accounts
INNER JOIN Acc_ex1 ON (
Accounts.AccountID = Acc_ex1.Acc_ex1_Id
AND Acc_ex1.Acc_ex1_3 > '0'
AND Accounts.OwnerID = 109
)
GROUP BY Accounts.AssignedTo
) TotalGroups ON (TotalGroups.AssignedTo = R.ReportsTo)
INNER JOIN (
SELECT Accounts.AssignedTo
,count(DISTINCT Acc_ex1_3) AS GroupsNotContacted
FROM Accounts
INNER JOIN Acc_ex1 ON (
Accounts.AccountID = Acc_ex1.Acc_ex1_Id
AND Acc_ex1.OwnerID = Accounts.OwnerID
AND Acc_ex1.Acc_ex1_3 > '0'
)
INNER JOIN Acc_ex5 ON (
Accounts.AccountID = Acc_ex5.Acc_ex5_Id
AND Acc_ex5.OwnerID = Accounts.OwnerID
)
WHERE Accounts.OwnerID = 109
AND Acc_ex1.Acc_ex1_3 NOT IN (
SELECT Acc_ex1_3
FROM cte
)
GROUP BY Accounts.AssignedTo
) TotalGroupsNotContacted ON (TotalGroupsNotContacted.AssignedTo = R.ReportsTo)
WHERE R.OwnerID = 109
    Please mark it as an answer/helpful if you find it as useful. Thanks, Satya Prakash Jugran

    Hi All,
    Thanks for the replies.
    I have optimized the query so that it now runs in a few seconds.
    Here is my final query.
    select ReportsTo as directReportingUserId, 
    UserName AS EmpName, 
    EmployeeCode AS EmpCode,
    Use_ex1_1 AS PortfolioCode,
    BranchName,
    GroupInfo.groupTotal,
    GroupInfo.groupLastContact,
    case when exists
    (select 1 from ReportingMembers RM
    where RM.ReportsTo =  UserInfo.ReportsTo
    and RM.MemberID <> UserInfo.ReportsTo
    ) then 0  else UserInfo.ReportsTo end as memberid1,
    (select code from Regions where ownerid=109 and  name=UserInfo.BranchName) as BranchCode,
    ROW_NUMBER() OVER (ORDER BY directReportingUserId) AS ROWNUMBER
    FROM 
    (select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
    (select top 1 TerritoryName 
    from UserTerritoryLevelView
    where displayOrder = 6
    and UserId = R.ReportsTo) as BranchName,
    Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
    from ReportingMembers R
    INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
    inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo )
    inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
    inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
    union
    select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
    (select top 1 TerritoryName 
    from UserTerritoryLevelView
    where displayOrder = 6
    and UserId = R.ReportsTo) as BranchName,
    Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
    from ReportingMembers R
    --INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
    inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo)
    inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
    inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
    where R.MemberID = 1
    ) UserInfo
    inner join (
    select directReportingUserId, sum(Groups) as groupTotal, SUM(GroupsNotContacted) as groupLastContact
    from (
    select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
    case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
    FROM ReportingMembers R
    INNER JOIN TeamMembers T 
    ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
    inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
    inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
    inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
    --where TerritoryID in ( select ChildRegionID  RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
    union 
    select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
    case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
    FROM ReportingMembers R
    INNER JOIN TeamMembers T 
    ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
    inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
    inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
    inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
    --where TerritoryID in ( select ChildRegionID  RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
    where R.MemberID = 1
    ) GroupWiseInfo
    group by directReportingUserId
    ) GroupInfo
    on UserInfo.ReportsTo = GroupInfo.directReportingUserId
    Thanks, Satya Prakash Jugran
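The key idea in the rewrite above is to compute the per-user group counts once in a derived table and join it, instead of evaluating a correlated subquery for every outer row. A minimal sketch of that pattern (SQLite via Python; table and column names are simplified stand-ins, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (account_id INTEGER, assigned_to INTEGER, grp TEXT);
INSERT INTO accounts VALUES
  (1, 10, 'A'), (2, 10, 'A'), (3, 10, 'B'),
  (4, 20, 'C'), (5, 20, 'D'), (6, 20, 'D');
""")

# Pre-aggregate once in a derived table, then join it -- instead of a
# correlated subquery that is re-evaluated for every outer row.
rows = conn.execute("""
SELECT u.assigned_to, g.group_total
FROM (SELECT DISTINCT assigned_to FROM accounts) u
JOIN (SELECT assigned_to, COUNT(DISTINCT grp) AS group_total
      FROM accounts
      GROUP BY assigned_to) g
  ON g.assigned_to = u.assigned_to
ORDER BY u.assigned_to
""").fetchall()
print(rows)  # [(10, 2), (20, 2)]
```

The derived table is scanned and grouped once, so the cost no longer grows with (outer rows) x (subquery cost).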

  • Performance optimization: query taking 7 minutes

    Hi All,
    Requirement: I need to improve the performance of a custom program (it runs for more than 7 minutes and then terminates with a dump). I checked the runtime analysis, and the queries marked below take the most time.
    Please let me know an approach to minimize the query time.
    TYPES: BEGIN OF lty_dberchz1,
               belnr    TYPE dberchz1-belnr,
               belzeile TYPE dberchz1-belzeile,
               belzart  TYPE dberchz1-belzart,
               buchrel  TYPE dberchz1-buchrel,
               tariftyp TYPE dberchz1-tariftyp,
               tarifnr  TYPE dberchz1-tarifnr,
               v_zahl1  TYPE dberchz1-v_zahl1,
               n_zahl1  TYPE dberchz1-n_zahl1,
               v_zahl3  TYPE dberchz1-v_zahl3,
               n_zahl3  TYPE dberchz1-n_zahl3,
               nettobtr TYPE dberchz3-nettobtr,
               twaers   TYPE dberchz3-twaers,
             END   OF lty_dberchz1.
      DATA: lt_dberchz1 TYPE SORTED TABLE OF lty_dberchz1
            WITH NON-UNIQUE KEY belnr belzeile
            INITIAL SIZE 0 WITH HEADER LINE.
    DATA: lt_dberchz1a LIKE TABLE OF lt_dberchz1 WITH HEADER LINE.
    *** ***********************************Taking more time*************************************************
    *Individual line items
        SELECT dberchz1~belnr dberchz1~belzeile
               belzart buchrel tariftyp tarifnr
               v_zahl1 n_zahl1 v_zahl3 n_zahl3
               nettobtr twaers
          INTO TABLE lt_dberchz1
          FROM dberchz1 JOIN dberchz3
          ON dberchz1~belnr = dberchz3~belnr
          AND dberchz1~belzeile = dberchz3~belzeile
          WHERE buchrel  EQ 'X'.
        DELETE lt_dberchz1 WHERE belzart NOT IN r_belzart.     
        LOOP AT lt_dberchz1.
          READ TABLE lt_dberdlb BINARY SEARCH
          WITH KEY billdoc = lt_dberchz1-belnr.
          IF sy-subrc NE 0.
            DELETE lt_dberchz1.
          ENDIF.
        ENDLOOP.
        lt_dberchz1a[] = lt_dberchz1[].
        DELETE lt_dberchz1 WHERE belzart EQ 'ZUTAX1'
                              OR belzart EQ 'ZUTAX2'
                              OR belzart EQ 'ZUTAX3'.
        DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
                              AND belzart NE 'ZUTAX2'
                              AND belzart NE 'ZUTAX3'.
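The copy-then-delete-twice sequence above splits one result set into complementary halves: lt_dberchz1 keeps the non-ZUTAX lines and lt_dberchz1a keeps only the ZUTAX lines. The same split can be done in a single pass without the intermediate copy; a sketch of the idea (Python, invented row tuples):

```python
# Rows are (belnr, belzart) pairs. ZUTAX* lines go to one result,
# everything else to the other -- one pass instead of copy + two DELETEs.
TAX_TYPES = {"ZUTAX1", "ZUTAX2", "ZUTAX3"}

rows = [(100, "ZUTAX1"), (100, "BASE"), (101, "ZUTAX3"), (102, "OTHER")]

non_tax, tax_only = [], []
for row in rows:
    (tax_only if row[1] in TAX_TYPES else non_tax).append(row)

print(non_tax)   # [(100, 'BASE'), (102, 'OTHER')]
print(tax_only)  # [(100, 'ZUTAX1'), (101, 'ZUTAX3')]
```

In ABAP the same effect is a single LOOP that APPENDs each line to one of two target tables, avoiding the full-table copy and the two full-table DELETE passes.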
    ***************************second query************************************
    *  SELECT opbel budat vkont partner sto_opbel
        INTO CORRESPONDING FIELDS OF TABLE lt_erdk
        FROM erdk
        WHERE budat IN r_budat
          AND druckdat   NE '00000000'
          AND stokz      EQ space
          AND intopbel   EQ space
          AND total_amnt GT 40000.
    **************************taking more time*********************************
      SORT lt_erdk BY opbel.
      IF lt_erdk[] IS NOT INITIAL.
        SELECT DISTINCT printdoc billdoc vertrag
          INTO CORRESPONDING FIELDS OF TABLE lt_dberdlb
          FROM dberdlb
    * begin of code change by vishal
          FOR ALL ENTRIES IN lt_erdk
          WHERE printdoc = lt_erdk-opbel.
        IF lt_dberdlb[] IS NOT INITIAL.
          SELECT belnr belzart ab bis aus01
                 v_zahl1 n_zahl1 v_zahl3 n_zahl3
            INTO CORRESPONDING FIELDS OF TABLE lt_dberchz1
            FROM dberchz1
            FOR ALL ENTRIES IN lt_dberdlb
            WHERE belnr   EQ lt_dberdlb-billdoc
              AND belzart IN ('ZUTAX1', 'ZUTAX2', 'ZUTAX3').
        ENDIF. "lt_dberdlb
       endif.
    Regards
    Rahul
    Edited by: Matt on Mar 17, 2009 4:17 PM - Added  tags and moved to correct forum

    Run the SQL Trace and tell us where the time is spent,
    see here how to use it:
    SELECT dberchz1~belnr dberchz1~belzeile
               belzart buchrel tariftyp tarifnr
               v_zahl1 n_zahl1 v_zahl3 n_zahl3
               nettobtr twaers
          INTO TABLE lt_dberchz1
          FROM dberchz1 JOIN dberchz3
          ON dberchz1~belnr = dberchz3~belnr
          AND dberchz1~belzeile = dberchz3~belzeile
          WHERE buchrel  EQ 'X'.
    I assume it is this SELECT, but without runtime data that is only a guess.
    How large are the two tables dberchz1 and dberchz3?
    What are the key fields?
    Is there an index on buchrel?
    Please use aliases, e.g. dberchz1 AS a
                                 INNER JOIN dberchz3 AS b
    To which table does buchrel belong?
    I don't know your tables, but buchrel EQ 'X' does not look selective, so a lot of data
    may be selected.
    lt_dberchz1 TYPE SORTED TABLE OF lty_dberchz1
            WITH NON-UNIQUE KEY belnr belzeile
            INITIAL SIZE 0 WITH HEADER LINE.
        DELETE lt_dberchz1 WHERE belzart NOT IN r_belzart.     
        LOOP AT lt_dberchz1.
          READ TABLE lt_dberdlb BINARY SEARCH
          WITH KEY billdoc = lt_dberchz1-belnr.
          IF sy-subrc NE 0.
            DELETE lt_dberchz1.
          ENDIF.
        ENDLOOP.
        lt_dberchz1a[] = lt_dberchz1[].
        DELETE lt_dberchz1 WHERE belzart EQ 'ZUTAX1'
                              OR belzart EQ 'ZUTAX2'
                              OR belzart EQ 'ZUTAX3'.
        DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
                              AND belzart NE 'ZUTAX2'
                              AND belzart NE 'ZUTAX3'.
    This is really poor coding: the table is defined as SORTED, but then a completely different
    key is needed and used, so the sorted key is useless.
    The loop that follows processes the whole table anyway, so no sort helps there either.
    And where is the SORT of lt_dberdlb that your BINARY SEARCH relies on?
    Then the tables are processed completely again:
        DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
                              AND belzart NE 'ZUTAX2'
                              AND belzart NE 'ZUTAX3'.
    What is that? Are you sure that anything survives this delete?
    Siegfried
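Siegfried's point about BINARY SEARCH deserves emphasis: READ TABLE ... BINARY SEARCH only returns correct results if the internal table is sorted by the lookup key, and no SORT of lt_dberdlb is visible in the posted code. The failure mode can be sketched outside ABAP (Python's bisect plays the role of BINARY SEARCH; the values are invented):

```python
import bisect

# Internal-table analogue: billdoc keys in arbitrary order.
billdocs = [4711, 1001, 3020, 2500]

def contains(sorted_list, key):
    """Binary search; the result is only reliable if the list is sorted."""
    i = bisect.bisect_left(sorted_list, key)
    return i < len(sorted_list) and sorted_list[i] == key

missed = contains(billdocs, 2500)  # binary search on UNSORTED data: misses the key
billdocs.sort()                    # the equivalent of SORT lt_dberdlb BY billdoc
found = contains(billdocs, 2500)   # now the key is found
print(missed, found)  # False True
```

On unsorted data the search silently returns wrong answers rather than failing, which in the posted program would mean lines being deleted that should have been kept.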

  • To optimize query run-time

    Hi,
    I have an SQL query that accesses a single table containing approx. 10 million records. The table has 4 columns (no indexes are defined): 'new_id', 'time', 'access_code' and 'graph_qty', with time in 'dd-mm-yyyy hh:mm:ss' format. The idea is to scan the table, retrieve the records whose 'time' and 'access_code' match the conditions in the WHERE clause, and then de-normalize them on 'access_code', so that the result set has one column per 'access_code'.
    The query runs for about 30 minutes.
    /* '&current_date' is a pre-defined substitution variable containing a date (assume sysdate) */
    SELECT
    new_id
    , trunc(time) AS time
    , SUM(CASE WHEN access_code = '100' THEN graph_qty END ) AS graph_1
    , SUM(CASE WHEN access_code = '200' THEN graph_qty END ) AS graph_2
    , SUM(CASE WHEN access_code = '300' THEN graph_qty END ) AS graph_3
    , SUM(CASE WHEN access_code = '400' THEN graph_qty END ) AS graph_4
    , SUM(CASE WHEN access_code = '500' THEN graph_qty END ) AS graph_5
    , SUM(CASE WHEN access_code = '600' THEN graph_qty END ) AS graph_6
    , SUM(CASE WHEN access_code = '700' THEN graph_qty END ) AS graph_7
    , SUM(CASE WHEN access_code = '800' THEN graph_qty END ) AS graph_8
    , SUM(CASE WHEN access_code = '900' THEN graph_qty END ) AS graph_9
    , SUM(CASE WHEN access_code = '1000' THEN graph_qty END ) AS graph_10
    FROM
    dummy_table
    WHERE trunc(time) IN ( '&current_date'
    , ADD_MONTHS('&current_date',-1)
    , ADD_MONTHS('&current_date',-3)
    , ADD_MONTHS('&current_date',-6)
    , ADD_MONTHS('&current_date',-12)
    , ADD_MONTHS('&current_date',-13)
    , ADD_MONTHS('&current_date',-15)
    , ADD_MONTHS('&current_date',-18)
    , ADD_MONTHS('&current_date',-24)
    )
    AND access_code IN ('100','200','300','400','500','600','700','800','900','1000')
    GROUP BY
    new_id
    , trunc(time)
    Please suggest ways to reduce the query run-time.
    Thanks,
    kartik
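The SUM(CASE ...) pivot itself is standard conditional aggregation; the cost here comes from scanning 10M rows and applying trunc() to every row, not from the pivot. A toy reproduction of the shape of the query (SQLite via Python, invented data; date(time) plays the role of Oracle's trunc(time)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dummy_table (new_id INTEGER, time TEXT, access_code TEXT, graph_qty INTEGER);
INSERT INTO dummy_table VALUES
  (1, '2024-01-01 10:00:00', '100', 5),
  (1, '2024-01-01 11:00:00', '100', 3),
  (1, '2024-01-01 12:00:00', '200', 7);
""")

# Conditional aggregation: one output column per access_code,
# grouped by the day so all timestamps of a day collapse to one row.
rows = conn.execute("""
SELECT new_id,
       date(time) AS day,
       SUM(CASE WHEN access_code = '100' THEN graph_qty END) AS graph_1,
       SUM(CASE WHEN access_code = '200' THEN graph_qty END) AS graph_2
FROM dummy_table
GROUP BY new_id, date(time)
""").fetchall()
print(rows)  # [(1, '2024-01-01', 8, 7)]
```

On the real table, note that trunc(time) in the WHERE clause prevents a plain index on time from being used; a function-based index on trunc(time), or rewriting each target day as a range predicate (time >= :d AND time < :d + 1), lets the optimizer use an ordinary index instead of a full scan.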

    Does this 30 min include the time needed to display the data?
    How long does it take to execute the following queries?
    select /*+ full(t) */ count(*) from dummy_table t;
    select count(*) from (
    SELECT
    new_id
    , trunc(time) AS time
    FROM
    dummy_table
    WHERE trunc(time) IN ( '&current_date'
    , ADD_MONTHS('&current_date',-1)
    , ADD_MONTHS('&current_date',-3)
    , ADD_MONTHS('&current_date',-6)
    , ADD_MONTHS('&current_date',-12)
    , ADD_MONTHS('&current_date',-13)
    , ADD_MONTHS('&current_date',-15)
    , ADD_MONTHS('&current_date',-18)
    , ADD_MONTHS('&current_date',-24)
    )
    AND access_code IN ('100','200','300','400','500','600','700','800','900','1000')
    )
    select count(*) from (
    SELECT
    new_id
    , trunc(time) AS time
    , SUM(CASE WHEN access_code = '100' THEN graph_qty END ) AS graph_1
    , SUM(CASE WHEN access_code = '200' THEN graph_qty END ) AS graph_2
    , SUM(CASE WHEN access_code = '300' THEN graph_qty END ) AS graph_3
    , SUM(CASE WHEN access_code = '400' THEN graph_qty END ) AS graph_4
    , SUM(CASE WHEN access_code = '500' THEN graph_qty END ) AS graph_5
    , SUM(CASE WHEN access_code = '600' THEN graph_qty END ) AS graph_6
    , SUM(CASE WHEN access_code = '700' THEN graph_qty END ) AS graph_7
    , SUM(CASE WHEN access_code = '800' THEN graph_qty END ) AS graph_8
    , SUM(CASE WHEN access_code = '900' THEN graph_qty END ) AS graph_9
    , SUM(CASE WHEN access_code = '1000' THEN graph_qty END ) AS graph_10
    FROM
    dummy_table
    WHERE trunc(time) IN ( '&current_date'
    , ADD_MONTHS('&current_date',-1)
    , ADD_MONTHS('&current_date',-3)
    , ADD_MONTHS('&current_date',-6)
    , ADD_MONTHS('&current_date',-12)
    , ADD_MONTHS('&current_date',-13)
    , ADD_MONTHS('&current_date',-15)
    , ADD_MONTHS('&current_date',-18)
    , ADD_MONTHS('&current_date',-24)
    )
    AND access_code IN ('100','200','300','400','500','600','700','800','900','1000')
    GROUP BY
    new_id
    , trunc(time)
    )

  • Request for help to Optimize query

    Hi all,
    The query below returns the data I need, but it is quite sluggish. Any suggestions to make it faster? Maybe without the UNION ALL, so only one full-table access is needed? (explain plan included below)
    select invoice_id,'Nee' distribution_approved
    from ap_invoices_all aia
    where exists
    (select invoice_id
    from ap_invoice_distributions_all aid
    where aid.invoice_id = aia.invoice_id
    and match_status_flag != 'A')
    union all
    select invoice_id,'Ja' distribution_approved
    from ap_invoices_all aia
    where not exists
    (select invoice_id
    from ap_invoice_distributions_all aid
    where aid.invoice_id = aia.invoice_id
    and match_status_flag != 'A')
    SELECT STATEMENT Hint=RULE                                        
    UNION-ALL                                        
    FILTER                                        
    TABLE ACCESS FULL     AP_INVOICES_ALL                                   
    TABLE ACCESS BY INDEX ROWID     AP_INVOICE_DISTRIBUTIONS_ALL                                   
    INDEX RANGE SCAN     AP_INVOICE_DISTRIBUTIONS_U1                                   
    FILTER                                        
    TABLE ACCESS FULL     AP_INVOICES_ALL                                   
    TABLE ACCESS BY INDEX ROWID     AP_INVOICE_DISTRIBUTIONS_ALL                                   
    INDEX RANGE SCAN     AP_INVOICE_DISTRIBUTIONS_U1                                   
         

    I tend to be skeptical about Garcia's solution as it does not have a GROUP BY
    clause in it. However, here is an alternative which should save you about 50% of the time:
    SELECT aia.invoice_id, 'Nee' dist_approved
    FROM   ap_invoices_all aia,
           ap_invoice_distributions_all aid
    WHERE  aid.match_status_flag != 'A'
    AND    aid.invoice_id = aia.invoice_id
    UNION ALL
    (SELECT aia.invoice_id, 'Ja' dist_approved
    FROM ap_invoices_all aia
    MINUS
    SELECT aid.invoice_id, 'Ja' dist_approved
    FROM   ap_invoice_distributions_all aid
    WHERE  match_status_flag != 'A')
    /Thx,
    SriDHAR
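A different way to avoid scanning ap_invoices_all twice is a single outer join with conditional aggregation: one pass decides 'Ja'/'Nee' per invoice, and invoices without any unapproved line (including those with no distributions at all) come out as 'Ja'. A sketch with tiny invented tables (SQLite via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ap_invoices_all (invoice_id INTEGER);
CREATE TABLE ap_invoice_distributions_all (invoice_id INTEGER, match_status_flag TEXT);
INSERT INTO ap_invoices_all VALUES (1), (2), (3);
INSERT INTO ap_invoice_distributions_all VALUES (1, 'A'), (1, 'N'), (2, 'A');
""")

# One pass: an invoice is 'Nee' if any distribution line is not approved,
# otherwise 'Ja' (invoice 3 has no lines at all and is still 'Ja').
rows = conn.execute("""
SELECT aia.invoice_id,
       CASE WHEN SUM(CASE WHEN aid.match_status_flag <> 'A' THEN 1 ELSE 0 END) > 0
            THEN 'Nee' ELSE 'Ja' END AS distribution_approved
FROM ap_invoices_all aia
LEFT JOIN ap_invoice_distributions_all aid
  ON aid.invoice_id = aia.invoice_id
GROUP BY aia.invoice_id
ORDER BY aia.invoice_id
""").fetchall()
print(rows)  # [(1, 'Nee'), (2, 'Ja'), (3, 'Ja')]
```

This matches the EXISTS / NOT EXISTS semantics of the original UNION ALL while reading each table only once; whether it beats the MINUS variant on the real data is something only the actual plans can tell.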

  • Optimize Query with DECODE function

    When I use this query, its cost is around 13455.
    SELECT
    X.col1,
    X.col2
    FROM A X,
    B Y,
    C Z
    WHERE
    X.col1 = Y.col3
    AND Z.col4 = Y.col5
    AND Z.col5 <> 'ORDER_SUBMITTED'
    AND Z.col6 <> 'INACTIVE'
    AND X.col7 = 'Y'
    --AND decode(X.col,'Y','YES')= 'YES'
    AND Y.col8 >= TRUNC(SYSDATE)
    ORDER BY X.col1 ;
    But when I modify the query and just add the DECODE function, the cost drops to around 2089.
    The new query is:
    SELECT
    X.col1,
    X.col2
    FROM A X,
    B Y,
    C Z
    WHERE
    X.col1 = Y.col3
    AND Z.col4 = Y.col5
    AND Z.col5 <> 'ORDER_SUBMITTED'
    AND Z.col6 <> 'INACTIVE'
    --AND X.col7 = 'Y'
    AND decode(X.col7,'Y','YES')= 'YES'
    AND Y.col8 >= TRUNC(SYSDATE)
    ORDER BY X.col1 ;
    I don't know the exact reason behind this. Please help if you know the answer.

    If not, it is most likely because Oracle has to evaluate the DECODE expression first, which means it has to read all the data in table A. It therefore uses A as the driving table before joining it to B and C, and the volume of data it accesses is smaller than in the previous plan, where either B or C was the driving table.
    No, there is no function-based index. But it is correct that the index on table B is now used; with the first query the index on B was not used. That is exactly what I want to know: why did that happen?
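The behaviour generalises: wrapping a column in a function (DECODE, UPPER, TRUNC, ...) hides it from a plain index on that column, which can flip which table the optimizer picks as the driving table and which indexes it uses. SQLite has no DECODE, so this sketch uses upper() as a stand-in and EXPLAIN QUERY PLAN instead of Oracle's EXPLAIN PLAN:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (col1 INTEGER, col7 TEXT);
CREATE INDEX a_col7 ON a (col7);
INSERT INTO a VALUES (1, 'Y'), (2, 'N');
""")

def plan(sql):
    # Column 3 of each EXPLAIN QUERY PLAN row is the human-readable detail.
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Plain predicate on the column: the index on col7 is usable.
p1 = plan("SELECT col1 FROM a WHERE col7 = 'Y'")
# Function wrapped around the column: the plain index cannot be used.
p2 = plan("SELECT col1 FROM a WHERE upper(col7) = 'Y'")
print(p1)  # e.g. "SEARCH a USING INDEX a_col7 (col7=?)"
print(p2)  # e.g. "SCAN a"
```

A lower estimated cost for the DECODE version therefore does not mean it is the better query; it means the optimizer modelled a different access path, and only the actual run times (or a trace) tell you which plan wins.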
