Improve the speed of a query

I have to select the records that match users' criteria from a table with about 5 million records,
and I also need to join with other tables. The query takes too long to execute. How can I speed it up?

I have 3 tables, and their relationship is as follows.
Table A:
seqA number (PK)
seqB number
colA varchar
Table B:
seqB number (PK)
rowno number (PK)
colB varchar
Table C:
seqA number
rowno number
colC varchar
Tables A and B have more than 5 million records; Table C may have no records at all, but I still need to outer join to Table C.
My SQL is like this:
select a.colA,b.colB, c.colC
from a, b, c
where a.seqA = b.seqB
and a.seqA = c.seqA(+)
and b.rowno = c.rowno(+)
But with the (+) syntax, Table C can only be outer joined to one other table. How can I make it work and improve the speed?
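
One way to express this is ANSI join syntax, which, unlike the (+) operator, allows C to be outer joined on a condition that references both A and B. A rough sketch using the tables and columns above (untested against your data):
select a.colA, b.colB, c.colC
from a
join b
  on a.seqA = b.seqB
left outer join c
  on c.seqA = a.seqA
 and c.rowno = b.rowno;
For speed, also make sure the join columns (A.seqA, B.seqB, C.seqA, C.rowno) are indexed and that optimizer statistics on all three tables are current.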

Similar Messages

  • Bind variables improve the performance of a query??

    Hi,
    Do bind variables improve the performance of a program or SQL query?
    Select empno,ename,sal from emp where empno=:eno;
    select sal into :vno from emp;
    Based on these queries, I am asking about the performance impact of bind variables, to learn more from you.
    Regards,
    Venkat.
    Edited by: Venkata2 on Sep 16, 2008 6:40 PM

    Karthick_Arp wrote:
    Any variable inside a stored procedure is a bind variable. Using bind variables can help you avoid hard parses, which leads to a performance improvement.

    Well, you mentioned the pros, so I will mention the cons. Since a bind variable's value is not known until execution time, the optimizer cannot always select the best plan. So Oracle introduced bind variable peeking, where building the plan is deferred until the first execution. With bind variable peeking, the optimizer constructs the plan using the bind variable values seen at the first execution. However, if the first execution has "not typical" bind variable values, we end up with a sub-optimal plan.
    SY.
    P.S. Please do not consider my reply as "we should not use bind variables".
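
    To make the hard parse point concrete, here is a minimal SQL*Plus sketch (the EMP table and its columns come from the post above; the literal values are only illustrative):
    -- each distinct literal is a new statement and forces a hard parse
    select ename from emp where empno = 7369;
    select ename from emp where empno = 7499;
    -- with a bind variable the same cursor is reused (soft parse)
    variable eno number
    exec :eno := 7369
    select ename from emp where empno = :eno;
    exec :eno := 7499
    select ename from emp where empno = :eno;
    This cursor reuse is where the parse-time saving comes from, and it is also exactly the situation where bind variable peeking, described above, can backfire.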

  • How to improve the query performance at the report level and designer level

    How can I improve query performance at the report level and at the designer level?
    Please let me know in detail.

    First, it all depends on the design of the database, the universe, and the report.
    At the universe level, check your contexts carefully to get the best performance out of the universe, and keep your joins on key fields; that will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on),
    and when you create a parameter, try to match it with a key field in the database.
    Good luck,
    Amr

  • How to improve the performance of one SELECT query in a program

    Hi,
    I am facing a performance issue in one program. I have included part of the program's code below.
    It is taking a lot of time in the SELECT query below. How can I improve the performance?
    A quick response is highly appreciated.
    Program code:
    DATA: BEGIN OF t_dels_tvpod OCCURS 100,
            vbeln      LIKE tvpod-vbeln,
            posnr      LIKE tvpod-posnr,
            lfimg_diff LIKE tvpod-lfimg_diff,
            calcu      LIKE tvpod-calcu,
            podmg      LIKE tvpod-podmg,
            uecha      LIKE lips-uecha,
            pstyv      LIKE lips-pstyv,
            xchar      LIKE lips-xchar,
            grund      LIKE tvpod-grund,
          END OF t_dels_tvpod.
    DATA: l_tabix       LIKE sy-tabix,
          lt_dels_tvpod LIKE t_dels_tvpod OCCURS 10 WITH HEADER LINE,
          ls_dels_tvpod LIKE t_dels_tvpod.
    SELECT vbeln INTO TABLE lt_dels_tvpod FROM likp
      FOR ALL ENTRIES IN t_dels_tvpod
      WHERE vbeln = t_dels_tvpod-vbeln
        AND erdat IN s_erdat
        AND bldat IN s_bldat
        AND podat IN s_podat
        AND ernam IN s_ernam
        AND kunnr IN s_kunnr
        AND vkorg IN s_vkorg
        AND vstel IN s_vstel
        AND lfart NOT IN r_del_types_exclude.
    Waiting for quick response.
    Best regards,
    BDP

    Bansidhar,
    1) You need to add a check to make sure that the internal table t_dels_tvpod (used in the FOR ALL ENTRIES clause) is not blank. If it is blank, skip the SELECT statement.
    2) Check the performance with and without the clause 'AND lfart NOT IN r_del_types_exclude'. Sometimes NOT causes the SELECT statement not to use the index. Instead of 'lfart NOT IN r_del_types_exclude', use 'lfart IN r_del_types_exclude' and build r_del_types_exclude using r_del_types_exclude-sign = 'E' instead of 'I'.
    3) Make sure that the table used in the FOR ALL ENTRIES clause has unique delivery numbers.
    Try doing something like this.
    TYPES: BEGIN OF ty_del_types_exclude,
             sign(1)   TYPE c,
             option(2) TYPE c,
             low       TYPE likp-lfart,
             high      TYPE likp-lfart,
           END OF ty_del_types_exclude.
    DATA: w_del_types_exclude TYPE          ty_del_types_exclude,
          t_del_types_exclude TYPE TABLE OF ty_del_types_exclude,
          t_dels_tvpod_tmp    LIKE TABLE OF t_dels_tvpod        .
    IF NOT t_dels_tvpod[] IS INITIAL.
    * Assuming that I would like to exclude delivery types 'LP' and 'LPP'
      CLEAR w_del_types_exclude.
      REFRESH t_del_types_exclude.
      w_del_types_exclude-sign = 'E'.
      w_del_types_exclude-option = 'EQ'.
      w_del_types_exclude-low = 'LP'.
      APPEND w_del_types_exclude TO t_del_types_exclude.
      w_del_types_exclude-low = 'LPP'.
      APPEND w_del_types_exclude TO t_del_types_exclude.
      t_dels_tvpod_tmp[] = t_dels_tvpod[].
      SORT t_dels_tvpod_tmp BY vbeln.
      DELETE ADJACENT DUPLICATES FROM t_dels_tvpod_tmp
        COMPARING
          vbeln.
      SELECT vbeln
        FROM likp
        INTO TABLE lt_dels_tvpod
        FOR ALL ENTRIES IN t_dels_tvpod_tmp
        WHERE vbeln EQ t_dels_tvpod_tmp-vbeln
        AND erdat IN s_erdat
        AND bldat IN s_bldat
        AND podat IN s_podat
        AND ernam IN s_ernam
        AND kunnr IN s_kunnr
        AND vkorg IN s_vkorg
        AND vstel IN s_vstel
        AND lfart IN t_del_types_exclude.
    ENDIF.

  • Need help in improving the performance for the sql query

    Thanks in advance for helping me.
    I was trying to improve the performance of the query below. I tried the following methods: used MERGE instead of UPDATE, used BULK COLLECT / FORALL update, used the ORDERED hint, and created a temp table and updated the target table from it. The methods I used did not improve performance at all. The data count which is updated in the target table is 2 million records and the target table has 15 million records.
    Any suggestions or solutions for improving performance are appreciated.
    SQL query:
    update targettable tt
    set mnop = 'G'
    where (x, y, z) in
          (select a.x, a.y, a.z
           from table1 a
           where (a.x, a.y, a.z) not in
                 (select b.x, b.y, b.z
                  from table2 b
                  where 'O' = b.defg))
    and mnop = 'P'
    and hijkl = 'UVW';

    987981 wrote:
    I was trying to improve the performance of the query below. I tried the following methods: used MERGE instead of UPDATE, used BULK COLLECT / FORALL update, used the ORDERED hint, and created a temp table and updated the target table from it. The methods I used did not improve performance at all.

    And that meant what? Surely if you spend all that time and effort trying various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)

    The data count which is updated in the target table is 2 million records and the target table has 15 million records.

    Tables have rows, by the way, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
    The failure to find a single faster method with the approaches you tried points to the fact that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
    The very first step in dealing with any software engineering problem is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
    Part of identifying the performance problem is understanding the workload. Just what does the code ask the database to do?
    From your comments, it needs to find 2 million rows out of 15 million rows, change those rows, and then write 2 million rows back to disk.
    That is not a small workload. Simple example: say the 2 million row find costs 1 ms/row and the 2 million row write also costs 1 ms/row. That is roughly a 66 minute workload. Due to the number of rows, any increase in time per row, on either side, has a potential 2-million-fold impact.
    So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to be fired and indexes maintained)? Both?
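
    One way to see which half of the workload hurts, sketched with the names from the posted update (an assumption, since the real schema is not shown), is to time the row-finding part on its own and compare it with the full update:
    -- how long does just finding the candidate rows take?
    select count(*)
    from targettable tt
    where (x, y, z) in
          (select a.x, a.y, a.z
           from table1 a
           where (a.x, a.y, a.z) not in
                 (select b.x, b.y, b.z
                  from table2 b
                  where 'O' = b.defg))
    and mnop = 'P'
    and hijkl = 'UVW';
    -- if this SELECT is already slow, tune the read side (indexes, NOT IN vs NOT EXISTS);
    -- if it is fast, the time is going into writing 2 million rows (undo/redo,
    -- index maintenance, triggers).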

  • How to improve the performance of the query

    Hi,
    Please give me some tips on how to improve the performance of a query. Can I post the query?
    Suresh

    Below is the formatted query, and it is no wonder it is taking a lot of time. I will give you a list of issues after analyzing it further; until then, try to spot the pitfalls yourself in this formatted query.
    SELECT rt.awb_number,
           ar.activity_id as task_id,
           t.assignee_org_unit_id,
           t.task_type_code,
           ar.request_id
    FROM activity_task ar,
         request_task rt,
         task t
    WHERE ar.activity_id = t.task_id
    AND ar.request_id = rt.request_id
    AND ar.complete_status != 'act.stat.closed'
    AND t.assignee_org_unit_id in (SELECT org_unit_id
                                   FROM org_unit
                                   WHERE org_unit_id in (SELECT oo.org_unit_id
                                                         FROM org_unit oo
                                                         WHERE oo.org_unit_id = '3'
                                                         OR oo.parent_id = '3')
                                   OR (parent_id in (SELECT oo.org_unit_id
                                                     FROM org_unit oo
                                                     WHERE oo.org_unit_id = '3'
                                                     OR oo.parent_id = '3')
                                       AND has_queue = 1))
    AND ar.parent_task_id not in (SELECT tt.task_id
                                  FROM task tt
                                  WHERE tt.assignee_org_unit_id in (SELECT org_unit_id
                                                                    FROM org_unit
                                                                    WHERE org_unit_id in (SELECT oo.org_unit_id
                                                                                          FROM org_unit oo
                                                                                          WHERE oo.org_unit_id = '3'
                                                                                          OR oo.parent_id = '3')
                                                                    OR (parent_id in (SELECT oo.org_unit_id
                                                                                      FROM org_unit oo
                                                                                      WHERE oo.org_unit_id = '3'
                                                                                      OR oo.parent_id = '3')
                                                                        AND has_queue = 1)))
    AND rt.awb_number is not null
    ORDER BY rt.awb_number
    Cheers,
    Sarma.
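
    One restructuring that is often suggested for a query like this (it is not something from Sarma's reply, just a sketch): the org_unit subquery is repeated four times, so it can be factored into a WITH clause and written once. The AND/OR grouping below follows the reconstruction above and may need adjusting to match the original intent:
    WITH root_units AS (
      SELECT oo.org_unit_id
      FROM org_unit oo
      WHERE oo.org_unit_id = '3'
      OR oo.parent_id = '3'
    ),
    unit_tree AS (
      SELECT org_unit_id
      FROM org_unit
      WHERE org_unit_id IN (SELECT org_unit_id FROM root_units)
      OR (parent_id IN (SELECT org_unit_id FROM root_units) AND has_queue = 1)
    )
    SELECT rt.awb_number,
           ar.activity_id AS task_id,
           t.assignee_org_unit_id,
           t.task_type_code,
           ar.request_id
    FROM activity_task ar,
         request_task rt,
         task t
    WHERE ar.activity_id = t.task_id
    AND ar.request_id = rt.request_id
    AND ar.complete_status != 'act.stat.closed'
    AND t.assignee_org_unit_id IN (SELECT org_unit_id FROM unit_tree)
    AND ar.parent_task_id NOT IN (SELECT tt.task_id
                                  FROM task tt
                                  WHERE tt.assignee_org_unit_id IN (SELECT org_unit_id FROM unit_tree))
    AND rt.awb_number IS NOT NULL
    ORDER BY rt.awb_number;
    Also note that NOT IN behaves unexpectedly when the subquery can return NULLs; NOT EXISTS avoids that pitfall.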

  • Hi All, how to improve the performance of the given query?

    Hi All,
    How can I improve the performance of the given query?
    The query is:
    PARAMETERS: p_vbeln TYPE lips-vbeln.
    DATA: par_charg TYPE lips-charg,
          par_werks TYPE lips-werks,
          par_mblnr TYPE mseg-mblnr.
    SELECT SINGLE charg
                  werks
      INTO (par_charg, par_werks)
      FROM lips
      WHERE vbeln = p_vbeln.
    IF par_charg IS NOT INITIAL.
      SELECT MAX( mblnr )
        INTO par_mblnr
        FROM mseg
        WHERE bwart EQ '101'
          AND werks EQ par_werks   "index on werks only
          AND charg EQ par_charg.
    ENDIF.
    Regards
    Steve

    Hi Steve,
    Can't you use the material in your query (and not only the batch)?
    I am assuming your system has an index MSEG~M on MANDT + MATNR + WERKS (+ other fields). Depending on your system (how many different materials you have), this will probably speed up the query considerably.
    Anyway, in our system we ended up creating an index on CHARG, but leave that as a last option, only if selecting by MATNR and WERKS is not good enough for your scenario.
    Hope this helps,
    Rui Dantas

  • Inner Join. How to improve the performance of inner join query

    How can I improve the performance of this inner join query?
    The query is:
    select f1~ablbelnr
             f1~gernr
             f1~equnr
             f1~zwnummer
             f1~adat
             f1~atim
             f1~v_zwstand
             f1~n_zwstand
             f1~aktiv
             f1~adatsoll
             f1~pruefzahl
             f1~ablstat
             f1~pruefpkt
             f1~popcode
             f1~erdat
             f1~istablart
             f2~anlage
             f2~ablesgr
             f2~abrdats
             f2~ableinh
                from eabl as f1
                inner join eablg as f2
                on f1~ablbelnr = f2~ablbelnr
                into corresponding fields of table it_list
                where f1~ablstat in s_mrstat
                %_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
    I want to modify the query, since it is taking a lot of time to load the data.
    Please suggest.
    Treat this as very urgent.

    Hi Shyamal,
    In your program you are using "INTO CORRESPONDING FIELDS OF".
    Try not to use this addition in your SELECT query.
    Instead, just use "INTO TABLE it_list".
    As an example:
    Write a normal query using "INTO CORRESPONDING FIELDS OF" in a program, then go to SE30 (Runtime Analysis), give the program name and execute it.
    If you then click the Analyze button, you can see the analysis of the query. The line shown in red tells you that you need to look for an alternative method.
    On the other hand, if you use "INTO TABLE itab", it will give you an entirely different analysis.
    So try not to use "INTO CORRESPONDING FIELDS" in your query.
    Regards,
    SP.

  • Please help me improve the performance of this query further.

    Hi All,
    Please help me improve the performance of this query further.
    Thanks.

    Hi,
    this is not your first SQL tuning request in this community -- you really should learn how to obtain performance diagnostics.
    The information you posted is not nearly enough to even start troubleshooting the query -- you haven't specified elapsed time, I/O, or the actual number of rows the query returns.
    The only piece of information we have is saying that your query executes within a second. If we believe this, then your query doesn't need tuning. If we don't, then we throw it away
    and we're left with nothing.
    Start by reading this blog post: Kyle Hailey » Power of DISPLAY_CURSOR
    and applying this knowledge to your case.
    Best regards,
      Nikolay
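
    As a concrete starting point along the lines Nikolay suggests, here is a minimal sketch of pulling row-source statistics for the last statement run in a session (the GATHER_PLAN_STATISTICS hint and DBMS_XPLAN.DISPLAY_CURSOR are standard Oracle features; the query itself was not posted, so a dummy statement stands in for it):
    -- run the real statement once with row-source statistics enabled
    select /*+ gather_plan_statistics */ count(*) from dual;
    -- then, in the same session, show actual rows, buffers and elapsed time per plan step
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    The A-Rows and Buffers columns in that output are exactly the kind of diagnostics the blog post above walks through.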

  • Ways to improve the performance of my query?

    Hi all,
    I have created a MultiProvider which fetches data from 3 ODS objects, and each ODS contains a huge amount of data. As a result my query performance is very slow.
    Apart from creating indexes on the ODS objects, is there anything else that can be done to improve the performance of my query, given that all 3 InfoProviders are ODS objects?
    Thanks,
    Haritha

    Haritha,
    If you still need more info, have a look below. There are a few ways your queries can be improved:
    1. The data volume in your InfoProviders.
    2. The dimension tables, i.e. how you have arranged your objects in the dimension tables.
    3. Whether the query runs from the MultiProvider or from the cube itself; when running from a MultiProvider, the system has to create more tables at execution time, so query performance is affected.
    4. Aggregates on the cube, and how well those aggregates are designed.
    5. The OLAP cache.
    6. Calculation formulas.
    etc.
    You can also go through the links below:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    Hope this helps you.
    Gattu

  • To improve the query???

    Example:
    select money, sum(total), sum(amount), sum(credit), sum(total_im), sum(total_am)
    from account group by money;
    The problem is the SUM aggregates: they cause a "TABLE ACCESS FULL" of the account table.
    How can I improve the query without creating an index for each summed column?
    Regards.

    user9331221 wrote:
    How to improve the query?

    There is not much room for improvement. The select statement has to compute those five aggregates, and it cannot do that without reading the entire table, so a TABLE ACCESS FULL is almost necessary. The only option (besides Keith's materialized view suggestion) is to create an index on (money, total, amount, credit, total_im, total_am), in which case this statement can do a full index scan. But this is only advantageous for your query when the table contains lots of other columns that make the table relatively big compared to the index. And the obvious disadvantage is that you now have an extra index to maintain during DML.
    Regards,
    Rob.
    Edited by: Rob van Wijk on 9-sep-2010 13:34
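
    If the covering-index option Rob describes is worth testing, a minimal sketch (the column names come from the posted query; whether a full index scan actually beats the full table scan depends on how wide the remaining columns of ACCOUNT are):
    create index account_sum_cols_ix
      on account (money, total, amount, credit, total_im, total_am);
    The index name is just illustrative; remember Rob's caveat that this is one more index to maintain during DML.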

  • Please guide me on improving the performance of a query that reads an XML column.

    Hi Experts,
    The query below is taking 45 seconds to return 170 records,
    due to an XMLTYPE column being selected in the query.
    select *
    from RANGE where WSNO = 3
    order by PREFERENCE desc;
    The total number of records in the table is 1060.
    Even though it is a very small table, why is it taking 45 seconds?
    Can anybody please help me with how to get the output in 2 to 3 seconds?
    I want all the columns from the table.
    The problem is only with the REST column, which is of type XMLTYPE.
    If I do not select this column, I get the output in 1 to 2 seconds.
    I am posting the execution plan and the DDL for the table and indexes.
    PLAN_TABLE_OUTPUT
    Plan hash value: 3593186720
    | Id  | Operation                     | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |            |    31 | 23281 |    21   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |            |    31 | 23281 |    21   (5)| 00:00:01 |
    |   2 |   TABLE ACCESS BY INDEX ROWID | RANGE      |    31 | 23281 |    20   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN           | INDX_WSNO  |    31 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("WSNO"=3)
    CREATE TABLE RANGE
    (
      SNO                              NUMBER,
      BUSNO                            NUMBER,
      EMPNAME                          NVARCHAR2(64),
      PREFERENCE                       NUMBER,
      TSNO                             NUMBER,
      MEMBER                           CHAR(1 CHAR),
      EQU                              CHAR(1 CHAR),
      REMAIL                           CHAR(1 CHAR),
      SSR                              CHAR(1 CHAR),
      SUB                              CHAR(1 CHAR),
      SPN                              CHAR(1 CHAR),
      SEMPNAME                         NVARCHAR2(128),
      FVL                              NUMBER(32),
      TVL                              NUMBER(32),
      ISD                              CHAR(1 CHAR),
      CHANGED                          NVARCHAR2(64),
      CDATE                            TIMESTAMP(6),
      UDBY                             NVARCHAR2(64),
      UDATE                            TIMESTAMP(6),
      LSTU                             CLOB,
      WSNO                             NUMBER,
      ASTN                             CHAR(1 CHAR),
      ASTNPL                           CHAR(1 CHAR),
      AVAF                             CHAR(1 CHAR),
      REST                             SYS.XMLTYPE
    )
    TABLESPACE USERS
    PCTUSED    0
    PCTFREE    10
    INITRANS   11
    MAXTRANS   255
    STORAGE    (
                INITIAL          64K
                NEXT             1M
                MINEXTENTS       1
                MAXEXTENTS       UNLIMITED
                PCTINCREASE      0
                BUFFER_POOL      DEFAULT
               )
    XMLTYPE REST STORE AS CLOB
          ( TABLESPACE  USERS
            ENABLE      STORAGE IN ROW
            CHUNK       8192
            PCTVERSION  10
            NOCACHE
            INDEX       (
              TABLESPACE USERS
              STORAGE    (
                          INITIAL          64K
                          NEXT             1
                          MINEXTENTS       1
                          MAXEXTENTS       UNLIMITED
                          PCTINCREASE      0
                          BUFFER_POOL      DEFAULT
                         )
                        )
            STORAGE    (
                        INITIAL          64K
                        NEXT             1M
                        MINEXTENTS       1
                        MAXEXTENTS       UNLIMITED
                        PCTINCREASE      0
                        BUFFER_POOL      DEFAULT
                       )
          )
    LOB (LSTU) STORE AS
          ( TABLESPACE  USERS
            ENABLE      STORAGE IN ROW
            CHUNK       8192
            RETENTION
            NOCACHE
            INDEX       (
              TABLESPACE USERS
              STORAGE    (
                          INITIAL          64K
                          NEXT             1
                          MINEXTENTS       1
                          MAXEXTENTS       UNLIMITED
                          PCTINCREASE      0
                          BUFFER_POOL      DEFAULT
                         )
                        )
            STORAGE    (
                        INITIAL          64K
                        NEXT             1M
                        MINEXTENTS       1
                        MAXEXTENTS       UNLIMITED
                        PCTINCREASE      0
                        BUFFER_POOL      DEFAULT
                       )
          )
    NOCACHE
    NOPARALLEL
    MONITORING;
    CREATE INDEX INDX_WSNO ON RANGE(WSNO);
    CREATE UNIQUE INDEX RULE_EMPNAME ON RANGE(BUSNO, EMPNAME);
    Please help me improve the performance of this query.
    Thanks.

    Can you try something like this and check if it is faster? (This is just my guess, as I don't have your data.)
    SELECT SNO,
           BUSNO,
           EMPNAME,
           PREFERENCE,
           TSNO,
           MEMBER,
           EQU,
           REMAIL,
           SSR,
           SUB,
           SPN,
           SEMPNAME,
           FVL,
           TVL,
           ISD,
           CHANGED,
           CDATE,
           UDBY,
           UDATE,
           LSTU,
           WSNO,
           ASTN,
           ASTNPL,
           AVAF,
           xmltype.getclobval (rest) rest
      FROM RANGE
    WHERE wsno = 3
    order by PREFERENCE desc;
    Cheers,
    Manik.

  • How to improve the query performance

    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
    ;WITH Dates AS
    (
    SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
    WHERE DateValue + 1 <= @EndDate
    )
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    -------WHERE TSK.ProjectID = @Project-----
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
    END
    Hi,
    My SP is as above.
    I connected this SP to a dataset in an SSRS report. As per my logic, a Portfolio contains many Programs and a Program contains many Projects.
    When I selected the ALL value for the parameters Program and Project, I was unable to get output,
    but when I select values for all 3 parameters I get output. I also tried default values for the parameters.
    So I commented out the WHERE condition in the SP, as shown above:
    --WHERE TSK.ProjectID = @Project
    Now I get output when selecting the ALL value for the parameters,
    but the remaining issue is performance: it takes 10 seconds to retrieve data for a single project when I execute the SP.
    How can I create an index on a temp table in this SP, and how can I improve the query performance? (See the index sketch after the reply below.)
    Please help.
    Thanks in advance,
    lucky

    Didn't I provide you with a solution in the other thread?
    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
    ;WITH Dates AS
    (
    SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
    WHERE DateValue + 1 <= @EndDate
    )
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    WHERE (TSK.ProjectID = @Project OR @Project = -1)
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
    END
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
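
    On the temp table index question from the original post: the commented-out CREATE INDEX line already shows one way to do it; a variation, as a rough sketch, is to build the index only after #Dates has been populated (the index name comes from the post, the CLUSTERED choice is just an illustrative assumption):
    CREATE CLUSTERED INDEX IDX_Dates ON #Dates (WorkDate);
    A similar nonclustered index on #DailyTasks (WorkDate, ResourceID) could be worth testing for the OUTER APPLY, but measure before and after, since small temp tables often gain little from indexing.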

  • Improve the query's performance

    Hi Guys,
    I'm an Oracle beginner and I'm running into the following issue:
    I have to run a query with a join between two tables, let's say A and B.
    Table A has 10 million records (800 MB) and is indexed on two columns (a1 and a2).
    Table B has 100000 records (64 MB) and has no index.
    The query is:
    select
    A.a1,
    A.a2,
    B.b1
    from A, B
    where A.a1 >= B.b1 and A.a2 <= B.b1
    The problem is that it is really slow (more than 1 day), although the execution plan uses NESTED LOOPS.
    How can I improve the performance of this query?
    Please help me,
    GF

    Hi Guys,
    First of all, thank you very much for your prompt responses.
    devmiral, your advice could help me. I have just realized that the statistics have never been updated.
    I'll try updating them and let you know (see the sketch at the end of this post).
    Eric, here is a quick example:
    Table A             Table B
    a2   a1   a3        b1
    10   14   1         10
    15   16   2         11
    18   19   3         15
    20   21   4
    The query is:
    select
    A.a2,
    A.a1,
    A.a3,
    B.b1
    from A,B
    where A.a1 >= B.b1 and A.a2<=B.b1
    The output expected should be:
    a2   a1   a3   b1
    10   14   1    10
    10   14   1    11
    15   16   2    15
    Thank you
    GF
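
    For the statistics refresh mentioned above, a minimal sketch (it assumes tables A and B live in the current schema; DBMS_STATS is the standard package for this):
    exec dbms_stats.gather_table_stats(ownname => user, tabname => 'A');
    exec dbms_stats.gather_table_stats(ownname => user, tabname => 'B');
    With current statistics the optimizer can judge whether NESTED LOOPS is really the best plan for this range join.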

  • Improve the query performance

    Hi, I need help improving query performance.
    I have a query in which I am joining 2 tables, with a join after some aggregation. Both tables have more than 50 million records.
    There is no index created on these tables; both tables are loaded after truncation. So is it required to create an index on these tables before joining? The query status was showing 'suspended' since it was running for a long time. As a temporary workaround, I just executed
    the query multiple times, changing the month filter each time.
    How can I improve this instead of adding a month filter and running it multiple times?

    Hi Nikkred,
    According to your description, you are joining 2 tables which contain more than 50 million records, and what you want is to improve query performance, right?
    Query tuning is not an easy task. Basically it depends on three factors: your degree of knowledge, the query itself, and the amount of optimization required. So in your scenario, please post the detailed query so that you can get more targeted help. Besides, you
    can create an index on your tables, which can improve the performance (a sketch follows at the end of this reply). Here are some links with performance tuning tips for your reference.
    http://www.mssqltips.com/sql-server-tip-category/9/performance-tuning/
    http://www.infoworld.com/d/data-management/7-performance-tips-faster-sql-queries-262
    Regards,
    Charlie Liao
    TechNet Community Support
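
    If you do try the index suggestion, a sketch of the usual pattern after a truncate-and-load follows. All object and column names here are hypothetical, since the thread does not show the real query; the point is to index the join (and month filter) columns after the load completes:
    -- hypothetical names: replace with the real tables and join/filter columns
    CREATE INDEX IX_BigTable1_JoinKey ON dbo.BigTable1 (JoinKey, MonthKey);
    CREATE INDEX IX_BigTable2_JoinKey ON dbo.BigTable2 (JoinKey);
    Creating the indexes after the bulk load is usually faster than maintaining them during the load.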
