Performance Issue with the Query (urgent)

Is there any way to get the data for rejected material with movement types 122 and 123 other than from the MSEG table? My report is very slow...
My query is as below:
SELECT SUM( a~dmbtr ) INTO value1
    FROM mseg AS a INNER JOIN mkpf AS b
         ON a~mblnr = b~mblnr
        AND a~mjahr = b~mjahr
      WHERE a~lifnr = p_lifnr
        AND a~bwart IN ('122')
        AND b~budat IN s_budat
      GROUP BY a~lifnr.
  ENDSELECT.
abhishek suppal

Hi Abhi,
Try like this ....
SELECT SUM( a~dmbtr ) INTO value1
FROM mseg AS a INNER JOIN mkpf AS b
ON a~mblnr = b~mblnr
AND a~mjahr = b~mjahr
WHERE a~lifnr = p_lifnr
AND a~bwart IN ('122','123')
AND b~budat IN s_budat
GROUP BY a~lifnr.
ENDSELECT.
or ...
define ranges...like
ranges: r_bwart for XXXX-bwart.
r_bwart-sign = 'I'.
r_bwart-option = 'EQ'.
r_bwart-low = '122'.
append r_bwart.
r_bwart-low = '123'.
append r_bwart.
now...
In the SELECT statement you just add
AND a~bwart IN r_bwart
Thanks
Eswar
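Putting the two suggestions together, a minimal sketch of the complete statement. The range is declared against MSEG-BWART here, which is an assumption about the XXXX placeholder above; VALUE1, P_LIFNR and S_BUDAT are taken from the original post.
RANGES: r_bwart FOR mseg-bwart.

r_bwart-sign   = 'I'.
r_bwart-option = 'EQ'.
r_bwart-low    = '122'.
APPEND r_bwart.
r_bwart-low    = '123'.
APPEND r_bwart.

SELECT SUM( a~dmbtr ) INTO value1
  FROM mseg AS a INNER JOIN mkpf AS b
    ON a~mblnr = b~mblnr
   AND a~mjahr = b~mjahr
  WHERE a~lifnr = p_lifnr
    AND a~bwart IN r_bwart
    AND b~budat IN s_budat
  GROUP BY a~lifnr.
  " one row is returned per group, so the loop body runs here
ENDSELECT.
Note that this reads MSEG/MKPF only once for both movement types; it does not avoid MSEG itself, so the runtime still depends on a suitable index for LIFNR/BWART.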

Similar Messages

  • Performance Issue with the query

    Hi Experts,
    While working on PeopleSoft today I was stuck with a problem: one of the queries is performing really badly. Even with up-to-date statistics the query does not perform well. On one of the tables the query spends a lot of time doing IO (db file sequential read waits). If I delete the stats for that table and let dynamic sampling take over, the query runs fine and there is no IO wait on the table (see the sketch at the end of this message).
    Here is the query
    SELECT A.BUSINESS_UNIT_PC, A.PROJECT_ID,  E.DESCR,  A.EMPLID,  D.NAME,  C.DESCR,  A.TRC,
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( A.EST_GROSS),
      DECODE( A.TRC, 'ROVA1', 0, 'ROVA2', 0, ( SUM( A.EST_GROSS)/100) * 9.75),
      '2012-07-01',
      '2012-07-31',
      SUM(     DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
      DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
      DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
      DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'),
          TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'),
          TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY')  || ' / '  || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
      C.TRC,  TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
      E.BUSINESS_UNIT, 
      E.PROJECT_ID
    FROM
         PS_TL_PAYABLE_TIME A, 
         PS_TL_TRC_TBL C,
         PS_PROJECT E, 
         PS_PERSONAL_DATA D,
           PS_PERALL_SEC_QRY D1, 
         PS_JOB F, 
         PS_EMPLMT_SRCH_QRY F1
    WHERE
         D.EMPLID = D1.EMPLID
    AND      D1.OPRID   = 'TMANI'
    AND      F.EMPLID   = F1.EMPLID
    AND      F.EMPL_RCD = F1.EMPL_RCD
    AND      F1.OPRID   = 'TMANI'
    AND      A.DUR BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD') AND TO_DATE('2012-07-31','YYYY-MM-DD')
    AND      C.TRC   = A.TRC
    AND      C.EFFDT =  (SELECT
                   MAX(C_ED.EFFDT) 
                  FROM
                   PS_TL_TRC_TBL C_ED
                     WHERE
                   C.TRC = C_ED.TRC 
                   AND C_ED.EFFDT <= SYSDATE)
    AND      E.BUSINESS_UNIT = A.BUSINESS_UNIT_PC
    AND      E.PROJECT_ID    = A.PROJECT_ID
    AND      A.EMPLID        = D.EMPLID
    AND      A.EMPLID        = F.EMPLID
    AND      A.EMPL_RCD      = F.EMPL_RCD
    AND      F.EFFDT         =  (SELECT
                        MAX(F_ED.EFFDT) 
                      FROM
                        PS_JOB F_ED
                        WHERE
                        F.EMPLID = F_ED.EMPLID 
                      AND      F.EMPL_RCD = F_ED.EMPL_RCD
                         AND      F_ED.EFFDT <= SYSDATE)
    AND      F.EFFSEQ =  (SELECT
                   MAX(F_ES.EFFSEQ) 
                   FROM
                   PS_JOB F_ES
                     WHERE
                   F.EMPLID = F_ES.EMPLID 
                   AND F.EMPL_RCD = F_ES.EMPL_RCD
                      AND F.EFFDT  = F_ES.EFFDT)
    AND      F.GP_PAYGROUP  = DECODE(' ', ' ', F.GP_PAYGROUP, ' ')
    AND      A.CURRENCY_CD  = DECODE(' ', ' ', A.CURRENCY_CD, ' ')
    AND      DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')) = DECODE(' ', ' ', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')), 'L', 'L', 'E', 'E', 'A', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')))
    AND ( A.EMPLID, A.EMPL_RCD) IN  (SELECT
                             B.EMPLID,
                             B.EMPL_RCD 
                        FROM
                             PS_TL_GROUP_DTL B
                                    WHERE
                              B.TL_GROUP_ID = DECODE('ER012', ' ', B.TL_GROUP_ID, 'ER012'))
    AND      E.PROJECT_USER1   = DECODE(' ', ' ', E.PROJECT_USER1, ' ')
    AND      A.PROJECT_ID      = DECODE(' ', ' ', A.PROJECT_ID, ' ')
    AND      A.PAYABLE_STATUS <>
      CASE
        WHEN to_number(TO_CHAR(sysdate, 'DD')) < 15
        THEN
          CASE
            WHEN A.DUR > last_day(add_months(sysdate, -2))
            THEN ' '
            ELSE 'NA'
          END
        ELSE
          CASE
            WHEN A.DUR > last_day(add_months(sysdate, -1))
            THEN ' '
            ELSE 'NA'
          END
      END
    AND      A.EMPLID = DECODE(' ', ' ', A.EMPLID, ' ')
    GROUP BY A.BUSINESS_UNIT_PC,
           A.PROJECT_ID,
           E.DESCR,
         A.EMPLID,
           D.NAME,
           C.DESCR,
           A.TRC,
           '2012-07-01',
           '2012-07-31',
           DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
           DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
           DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-31',      'YYYY-MM-DD'), 'Mon ')
           || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
           || TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY')  || ' / '
           || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')  || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
           C.TRC,  TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
           E.BUSINESS_UNIT,  E.PROJECT_ID
    HAVING SUM( A.EST_GROSS) <> 0
    ORDER BY 1,
            2,
             5,
              6;
    Here is the screenshot for the IO wait:
    [https://lh4.googleusercontent.com/-6PFW2FSK3yE/UCrwUbZ0pvI/AAAAAAAAAPA/eHM48AOC0Uo]

    Here is the execution plan with all the statistics present
    PLAN_TABLE_OUTPUT
    Plan hash value: 1575300420
    | Id  | Operation                                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                            |                    |     1 |   237 |  2703   (1)| 00:00:33 |
    |*  1 |  FILTER                                     |                    |       |       |            |          |
    |   2 |   SORT GROUP BY                             |                    |     1 |   237 |            |          |
    |   3 |    CONCATENATION                            |                    |       |       |            |          |
    |*  4 |     FILTER                                  |                    |       |       |            |          |
    |*  5 |      FILTER                                 |                    |       |       |            |          |
    |*  6 |       HASH JOIN                             |                    |     1 |   237 |  1695   (1)| 00:00:21 |
    |*  7 |        HASH JOIN                            |                    |     1 |   204 |  1689   (1)| 00:00:21 |
    |*  8 |         HASH JOIN SEMI                      |                    |     1 |   193 |  1352   (1)| 00:00:17 |
    |   9 |          NESTED LOOPS                       |                    |       |       |            |          |
    |  10 |           NESTED LOOPS                      |                    |     1 |   175 |  1305   (1)| 00:00:16 |
    |* 11 |            HASH JOIN                        |                    |     1 |   148 |  1304   (1)| 00:00:16 |
    |  12 |             JOIN FILTER CREATE              | :BF0000            |       |       |            |          |
    |  13 |              NESTED LOOPS                   |                    |       |       |            |          |
    |  14 |               NESTED LOOPS                  |                    |     1 |   134 |   967   (1)| 00:00:12 |
    |  15 |                NESTED LOOPS                 |                    |     1 |   103 |   964   (1)| 00:00:12 |
    |* 16 |                 TABLE ACCESS FULL           | PS_PROJECT         |   197 |  9062 |   278   (1)| 00:00:04 |
    |* 17 |                 TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME |     1 |    57 |     7   (0)| 00:00:01 |
    |* 18 |                  INDEX RANGE SCAN           | IDX$$_C44D0007     |    16 |       |     3   (0)| 00:00:01 |
    |* 19 |                INDEX RANGE SCAN             | IDX$$_3F450003     |     1 |       |     2   (0)| 00:00:01 |
    |* 20 |               TABLE ACCESS BY INDEX ROWID   | PS_JOB             |     1 |    31 |     3   (0)| 00:00:01 |
    |  21 |             VIEW                            | PS_EMPLMT_SRCH_QRY |  5428 | 75992 |   336   (2)| 00:00:05 |
    PLAN_TABLE_OUTPUT
    |  22 |              SORT UNIQUE                    |                    |  5428 |   275K|   336   (2)| 00:00:05 |
    |* 23 |               FILTER                        |                    |       |       |            |          |
    |  24 |                JOIN FILTER USE              | :BF0000            | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  25 |                 NESTED LOOPS                |                    | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  26 |                  TABLE ACCESS BY INDEX ROWID| PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 27 |                   INDEX UNIQUE SCAN         | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |* 28 |                  TABLE ACCESS FULL          | PS_SJT_PERSON      | 55671 |  1739K|   333   (1)| 00:00:04 |
    |  29 |                CONCATENATION                |                    |       |       |            |          |
    |  30 |                 NESTED LOOPS                |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |* 31 |                  INDEX FAST FULL SCAN       | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |* 32 |                  INDEX UNIQUE SCAN          | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  33 |                 NESTED LOOPS                |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 34 |                  INDEX UNIQUE SCAN          | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 35 |                  INDEX UNIQUE SCAN          | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  36 |                NESTED LOOPS                 |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 37 |                 INDEX UNIQUE SCAN           | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 38 |                 INDEX UNIQUE SCAN           | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |* 39 |            INDEX UNIQUE SCAN                | PS_PERSONAL_DATA   |     1 |       |     0   (0)| 00:00:01 |
    |  40 |           TABLE ACCESS BY INDEX ROWID       | PS_PERSONAL_DATA   |     1 |    27 |     1   (0)| 00:00:01 |
    |* 41 |          INDEX FAST FULL SCAN               | PS_TL_GROUP_DTL    |   323 |  5814 |    47   (3)| 00:00:01 |
    |  42 |         VIEW                                | PS_PERALL_SEC_QRY  |  7940 | 87340 |   336   (2)| 00:00:05 |
    |  43 |          SORT UNIQUE                        |                    |  7940 |   379K|   336   (2)| 00:00:05 |
    |* 44 |           FILTER                            |                    |       |       |            |          |
    |* 45 |            FILTER                           |                    |       |       |            |          |
    |  46 |             NESTED LOOPS                    |                    | 55671 |  2663K|   335   (1)| 00:00:05 |
    |  47 |              TABLE ACCESS BY INDEX ROWID    | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 48 |               INDEX UNIQUE SCAN             | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |* 49 |              TABLE ACCESS FULL              | PS_SJT_PERSON      | 55671 |  1576K|   333   (1)| 00:00:04 |
    |  50 |            CONCATENATION                    |                    |       |       |            |          |
    |  51 |             NESTED LOOPS                    |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |* 52 |              INDEX FAST FULL SCAN           | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |* 53 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  54 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 55 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 56 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  57 |            CONCATENATION                    |                    |       |       |            |          |
    |  58 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 59 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 60 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  61 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 62 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 63 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  64 |            NESTED LOOPS                     |                    |     1 |    63 |     3   (0)| 00:00:01 |
    |  65 |             INLIST ITERATOR                 |                    |       |       |            |          |
    |* 66 |              INDEX RANGE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     2   (0)| 00:00:01 |
    |* 67 |             INDEX RANGE SCAN                | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |  68 |        TABLE ACCESS FULL                    | PS_TL_TRC_TBL      |   922 | 30426 |     6   (0)| 00:00:01 |
    |  69 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |* 70 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    |  71 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |* 72 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    |  73 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |* 74 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    |* 75 |     FILTER                                  |                    |       |       |            |          |
    PLAN_TABLE_OUTPUT
    |* 76 |      FILTER                                 |                    |       |       |            |          |
    |* 77 |       HASH JOIN                             |                    |     1 |   237 |   974   (2)| 00:00:12 |
    |* 78 |        HASH JOIN                            |                    |     1 |   226 |   637   (1)| 00:00:08 |
    |* 79 |         HASH JOIN                           |                    |     1 |   193 |   631   (1)| 00:00:08 |
    |  80 |          NESTED LOOPS                       |                    |       |       |            |          |
    |  81 |           NESTED LOOPS                      |                    |     1 |   179 |   294   (1)| 00:00:04 |
    |  82 |            NESTED LOOPS                     |                    |     1 |   152 |   293   (1)| 00:00:04 |
    |  83 |             NESTED LOOPS                    |                    |     1 |   121 |   290   (1)| 00:00:04 |
    |* 84 |              HASH JOIN SEMI                 |                    |     1 |    75 |   289   (1)| 00:00:04 |
    |* 85 |               TABLE ACCESS BY INDEX ROWID   | PS_TL_PAYABLE_TIME |     1 |    57 |   242   (0)| 00:00:03 |
    |* 86 |                INDEX SKIP SCAN              | IDX$$_C44D0007     |   587 |       |   110   (0)| 00:00:02 |
    |* 87 |               INDEX FAST FULL SCAN          | PS_TL_GROUP_DTL    |   323 |  5814 |    47   (3)| 00:00:01 |
    |* 88 |              TABLE ACCESS BY INDEX ROWID    | PS_PROJECT         |     1 |    46 |     1   (0)| 00:00:01 |
    |* 89 |               INDEX UNIQUE SCAN             | PS_PROJECT         |     1 |       |     0   (0)| 00:00:01 |
    |* 90 |             TABLE ACCESS BY INDEX ROWID     | PS_JOB             |     1 |    31 |     3   (0)| 00:00:01 |
    |* 91 |              INDEX RANGE SCAN               | IDX$$_3F450003     |     1 |       |     2   (0)| 00:00:01 |
    |* 92 |            INDEX UNIQUE SCAN                | PS_PERSONAL_DATA   |     1 |       |     0   (0)| 00:00:01 |
    |  93 |           TABLE ACCESS BY INDEX ROWID       | PS_PERSONAL_DATA   |     1 |    27 |     1   (0)| 00:00:01 |
    |  94 |          VIEW                               | PS_EMPLMT_SRCH_QRY |  5428 | 75992 |   336   (2)| 00:00:05 |
    |  95 |           SORT UNIQUE                       |                    |  5428 |   275K|   336   (2)| 00:00:05 |
    |* 96 |            FILTER                           |                    |       |       |            |          |
    |  97 |             NESTED LOOPS                    |                    | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  98 |              TABLE ACCESS BY INDEX ROWID    | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 99 |               INDEX UNIQUE SCAN             | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |*100 |              TABLE ACCESS FULL              | PS_SJT_PERSON      | 55671 |  1739K|   333   (1)| 00:00:04 |
    | 101 |             CONCATENATION                   |                    |       |       |            |          |
    | 102 |              NESTED LOOPS                   |                    |     1 |    63 |     2   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |*103 |               INDEX FAST FULL SCAN          | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |*104 |               INDEX UNIQUE SCAN             | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 105 |              NESTED LOOPS                   |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*106 |               INDEX UNIQUE SCAN             | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*107 |               INDEX UNIQUE SCAN             | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 108 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*109 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*110 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 111 |         TABLE ACCESS FULL                   | PS_TL_TRC_TBL      |   922 | 30426 |     6   (0)| 00:00:01 |
    | 112 |        VIEW                                 | PS_PERALL_SEC_QRY  |  7940 | 87340 |   336   (2)| 00:00:05 |
    | 113 |         SORT UNIQUE                         |                    |  7940 |   379K|   336   (2)| 00:00:05 |
    |*114 |          FILTER                             |                    |       |       |            |          |
    |*115 |           FILTER                            |                    |       |       |            |          |
    | 116 |            NESTED LOOPS                     |                    | 55671 |  2663K|   335   (1)| 00:00:05 |
    | 117 |             TABLE ACCESS BY INDEX ROWID     | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |*118 |              INDEX UNIQUE SCAN              | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |*119 |             TABLE ACCESS FULL               | PS_SJT_PERSON      | 55671 |  1576K|   333   (1)| 00:00:04 |
    | 120 |           CONCATENATION                     |                    |       |       |            |          |
    | 121 |            NESTED LOOPS                     |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |*122 |             INDEX FAST FULL SCAN            | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |*123 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 124 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*125 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*126 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 127 |           CONCATENATION                     |                    |       |       |            |          |
    | 128 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*129 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |*130 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 131 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*132 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*133 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 134 |           NESTED LOOPS                      |                    |     1 |    63 |     3   (0)| 00:00:01 |
    | 135 |            INLIST ITERATOR                  |                    |       |       |            |          |
    |*136 |             INDEX RANGE SCAN                | PSASJT_CLASS_ALL   |     1 |    39 |     2   (0)| 00:00:01 |
    |*137 |            INDEX RANGE SCAN                 | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    | 138 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |*139 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    | 140 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |*141 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    | 142 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |*143 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    | 144 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |*145 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    | 146 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |*147 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    | 148 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |*149 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------------------------------
    Most of the IO wait occurs at step 17, though the explain plan alone does not show this. But when we run this query from the PeopleSoft application (online page), it is visible through Grid Control SQL Monitoring.
    Second part of the Question continues....
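    A minimal sketch of the workaround described at the top of this message: drop (and optionally lock) the statistics on the problem table so the optimizer falls back to dynamic sampling. The SYSADM owner is an assumption; adjust it to the actual schema.
    BEGIN
      -- drop the stored stats on the table where the IO wait occurs (step 17 above)
      DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_TL_PAYABLE_TIME');
      -- optional: lock the (now absent) stats so a scheduled stats job does not regather them
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_TL_PAYABLE_TIME');
    END;
    /
    The alternative, keeping the stats and adding a /*+ dynamic_sampling(A 4) */ hint for the A alias (the sampling level is illustrative), requires the query text to be editable, which is usually not the case for tool-generated SQL.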

  • Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Thanks Kelly,
    The answers would be the following:
    1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
    Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
    The cells will be numeric, free text, drop downs, and some calculated numeric.
    Are we reaching the limits for UI performance?
    Thanks

  • Performance issues with the Vouchers index build in SES

    Hi All,
    We are currently performing an upgrade from PS FSCM 9.1 to PS FSCM 9.2.
    As a part of the upgrade, the client wants Oracle SES to be deployed for some modules including Purchasing and Payables (Vouchers).
    We are facing severe performance issues with the Vouchers index build. (Volume of data = approx. 8.5 million rows of data)
    The index creation process runs for over 5 days.
    Can you please share any information or issues that you may have faced on your project and how they were addressed?

    Check the following logs for errors:
    1.  The message log from the process scheduler
    2.  search_server1-diagnostic.log  in /search_server1/logs directory
    If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES.

  • Performance issues with the Tuxedo MQ Adapter

    We are experiencing some performance issues with the MQ Adapter. For example, we are seeing that the MQ Adapter takes from 10 to 100 ms to read a single message from the queue and send it to the Tuxedo service. The Tuxedo service takes 80 ms in its execution, so there is a considerable waste of time in the MQ adapter that we cannot explain.
    Also, we have seen a lot of rollback transactions on the MQ adapter; for example, we got 980 rollback transactions for 15736 transactions sent, and only the MQ adapter is involved in the rollback. However, the operations are executed properly. The error we got is:
    135027.122.hqtux101!MQI_QMTESX01.7636.1.0: gtrid x0 x4ec1491f x25b59: LIBTUX_CAT:376: ERROR: tpabort: xa_rollback returned XA_RBROLLBACK.
    I have been looking for information on the Oracle site, but I have not found anything. Could you or someone from your team help me?

    Hi Todd,
    We have 6 MQI adapters reading from 5 different queues, but in this case we are writing in only one queue.
    Someone from Oracle told us that the XA_RBROLLBACK occurs because we have 6 MQ adapters reading from the same queues, and when one adapter finds a message and tries to get it, another MQ adapter may get it first. In that case, the MQ adapter rolls back the transaction. Even when we get some XA_RBROLLBACK errors, we don't lose messages. Also, I read that when XA sends an xa_end call to the MQ adapter, it actually does the rollback, so when the MQ adapter receives the xa_rollback call, it answers with XA_RBROLLBACK. Is that true?
    However, I am more worried about the performance. We are putting a request message in an MQ queue and waiting for the reply. In some cases it takes 150 ms and in other cases it takes much longer (more than 400 ms); the average is 300 ms. The MQ adapter calls a service (txgralms0) which takes 110 ms on average.
    This is our configuration:
    "MQI_QMTESX01" SRVGRP="g03000" SRVID=3000
    CLOPT="-- -C /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg"
    RQPERM=0600 REPLYQ=N RPPERM=0600 MIN=6 MAX=6 CONV=N
    SYSTEM_ACCESS=FASTPATH
    MAXGEN=1 GRACE=86400 RESTART=N
    MINDISPATCHTHREADS=0 MAXDISPATCHTHREADS=1 THREADSTACKSIZE=0
    SICACHEENTRIESMAX="500"
    /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg:
    *SERVER
    MINMSGLEVEL=0
    MAXMSGLEVEL=0
    DEFMAXMSGLEN=4096
    TPESVCFAILDATA=Y
    *QUEUE_MANAGER
    LQMID=QMTESX01
    NAME=QMTESX01
    *SERVICE
    NAME=txgralms0
    FORMAT=MQSTR
    TRAN=N
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KGCRQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KGCPQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KPSAQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KPINQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KDECQ01
    Thanks in advance,
    Marling

  • Performance issue with the ABAP statements

    Hello,
    Can someone please help me with the statements below, where I am getting a performance problem?
    SELECT * FROM /BIC/ASALHDR0100 into Table CHDATE.
    SORT CHDATE by DOC_NUMBER.
    SORT SOURCE_PACKAGE by DOC_NUMBER.
    LOOP AT CHDATE INTO WA_CHDATE.
       READ TABLE SOURCE_PACKAGE INTO WA_CIDATE WITH KEY DOC_NUMBER =
       WA_CHDATE-DOC_NUMBER BINARY SEARCH.
       MOVE WA_CHDATE-CREATEDON  to WA_CIDATE-CREATEDON.
    APPEND WA_CIDATE to CIDATE.
    ENDLOOP.
    I wrote the above code for the following requirement.
    1. I have 2 tables from which I am getting the data.
    2. Both tables have a common field named CREATEDON (a date), and both tables have values in it.
    3. While accessing the 2 tables and copying to a third table I have to modify this field.
    I am getting performance issues with the above statements.
    Thanks

    Hello,
    try a select like the following one instead of your code.
    SELECT field field2 ...
    INTO TABLE it_table
    FROM table1 AS t1 INNER JOIN table2 AS t2
    ON t1~doc_number = t2~doc_number.
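    If the two sources cannot be joined on the database (SOURCE_PACKAGE is an internal table of the transformation in the original post), a sketch that keeps the original approach but restricts the SELECT to the needed documents and checks the READ result; field names are taken from the post, and CIDATE is assumed to have the same line type as WA_CIDATE:
    IF NOT SOURCE_PACKAGE IS INITIAL.
      SELECT DOC_NUMBER CREATEDON                  "only the fields used later
        FROM /BIC/ASALHDR0100
        INTO CORRESPONDING FIELDS OF TABLE CHDATE
        FOR ALL ENTRIES IN SOURCE_PACKAGE
        WHERE DOC_NUMBER = SOURCE_PACKAGE-DOC_NUMBER.
    ENDIF.
    SORT SOURCE_PACKAGE BY DOC_NUMBER.
    LOOP AT CHDATE INTO WA_CHDATE.
      READ TABLE SOURCE_PACKAGE INTO WA_CIDATE
           WITH KEY DOC_NUMBER = WA_CHDATE-DOC_NUMBER BINARY SEARCH.
      IF SY-SUBRC = 0.                             "append only when a match exists
        WA_CIDATE-CREATEDON = WA_CHDATE-CREATEDON.
        APPEND WA_CIDATE TO CIDATE.
      ENDIF.
    ENDLOOP.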

  • Performance issue with the Select query

    Hi,
    I have a performance issue with a select query.
    In table AFRU, AUFNR is not a key field.
    So I selected the low and high values into S_RUECK and used it in the WHERE condition.
    Still I have a performance issue.
    SELECT RUECK
    RMZHL
    IEDD
    AUFNR
    STOKZ
    STZHL
    FROM AFRU INTO table t_AFRU
    FOR ALL ENTRIES IN T_ZSCPRT100
    WHERE RUECK IN S_RUECK AND
    AUFNR = T_ZSCPRT100-AUFNR AND
    STOKZ = SPACE AND
    STZHL = 0.
    I have also checked by creating an index for AUFNR on table AFRU... it does not help.
    Is there any way we can declare a key field while declaring the internal table?
    Any suggestions to fix the performance issue are appreciated!
    Regards,
    Kittu

    Hi,
    Thank you for your quick response!
    Rui dantas, I have a little confusion... this is my code below:
    data : t_zscprt type standard table of ty_zscprt,
           wa_zscprt type ty_zscprt.
    types : BEGIN OF ty_zscprt100,
            aufnr type zscprt100-aufnr,
            posnr  type zscprt100-posnr,
            ezclose type zscprt100-ezclose,
            serialnr type zscprt100-serialnr,
            lgort type zscprt100-lgort,
          END OF ty_zscprt100.
    data : t_zscprt100 type standard table of ty_zscprt100,
           wa_zscprt100 type ty_zscprt100.
    Types: begin of ty_afru,
                rueck type CO_RUECK,
                rmzhl type CO_RMZHL,
                iedd  type RU_IEDD,
                aufnr type AUFNR,
                stokz type CO_STOKZ,
                stzhl type CO_STZHL,
             end of ty_afru.
    data : t_afru type STANDARD TABLE OF ty_afru,
            WA_AFRU TYPE TY_AFRU.
    SELECT AUFNR
            POSNR
            EZCLOSE
            SERIALNR
            LGORT
            FROM ZSCPRT100 INTO TABLE T_ZSCPRT100
            FOR ALL ENTRIES IN T_ZSCPRT
            WHERE   AUFNR = T_ZSCPRT-PRTNUM
            AND   SERIALNR IN S_SERIAL
            AND    LGORT   IN S_LGORT.
        IF sy-subrc <> 0.
           MESSAGE ID 'Z2' TYPE 'I' NUMBER '41'. "BDCG87
           stop."BDCG87
        ENDIF.
      ENDIF.
    SELECT    RUECK
                  RMZHL
                  IEDD
                  AUFNR
                  STOKZ
                  STZHL
                  FROM AFRU INTO TABLE T_AFRU
                  FOR ALL ENTRIES IN T_ZSCPRT100
                  WHERE RUECK IN S_RUECK     AND
                        AUFNR = T_ZSCPRT100-AUFNR AND
                        STOKZ = SPACE AND
                        STZHL = 0.
    Using AUFNR, get AUFPL from AFKO
    Using AUFPL, get RUECK from AFVC
    Using RUECK, read AFRU
    In other words, one select joining AFKO <-> AFVC <-> AFRU should get what you want.
    This is my select query; would you want me to write another select query to meet these criteria?
    From AUFNR I will get AUFPL from AFKO; based on AUFPL I will get RUECK; based on RUECK I need to read AFRU. But I need to select a few fields from AFRU based on AUFNR...
    Any suggestions will be appreciated!
    Regards
    Kittu
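    A sketch of the single SELECT suggested above, joining AFKO, AFVC and AFRU so that AFRU is reached through its key field RUECK instead of being scanned by AUFNR. The link fields (AFKO-AUFPL to AFVC-AUFPL, AFVC-RUECK to AFRU-RUECK) are the standard ones; verify them on your release.
    IF NOT t_zscprt100 IS INITIAL.
      SELECT r~rueck r~rmzhl r~iedd r~aufnr r~stokz r~stzhl
        FROM afko AS k
        INNER JOIN afvc AS v ON v~aufpl = k~aufpl   "order header -> operations
        INNER JOIN afru AS r ON r~rueck = v~rueck   "operations -> confirmations
        INTO TABLE t_afru
        FOR ALL ENTRIES IN t_zscprt100
        WHERE k~aufnr = t_zscprt100-aufnr
          AND r~rueck IN s_rueck
          AND r~stokz = space
          AND r~stzhl = 0.
    ENDIF.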

  • Performance Issue with sql query

    Hi,
    My db is 10.2.0.5 with RAC on ASM, Clusterware version 10.2.0.5.
    With bsoa table as
    SQL> desc bsoa;
    Name                                      Null?    Type
    ID                                        NOT NULL NUMBER
    LOGIN_TIME                                         DATE
    LOGOUT_TIME                                        DATE
    SUCCESSFUL_IND                                     VARCHAR2(1)
    WORK_STATION_NAME                                  VARCHAR2(80)
    OS_USER                                            VARCHAR2(30)
    USER_NAME                                 NOT NULL VARCHAR2(30)
    FORM_ID                                            NUMBER
    AUDIT_TRAIL_NO                                     NUMBER
    CREATED_BY                                         VARCHAR2(30)
    CREATION_DATE                                      DATE
    LAST_UPDATED_BY                                    VARCHAR2(30)
    LAST_UPDATE_DATE                                   DATE
    SITE_NO                                            NUMBER
    SESSION_ID                                         NUMBER(8)
    The query
    UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID
    is taking a lot of time to execute, and in AWR reports it is also at the top in:
    1. SQL Order by elapsed time
    2. SQL order by reads
    3. SQL order by gets
    So I am trying to find a way to solve the performance issue, as the application is slow especially during login and logout.
    I understand that the function in the WHERE condition causes a full table scan, but I cannot think of what other parts to look at.
    Also:
    SQL> SELECT COUNT(1) FROM BSOA;
      COUNT(1)
       7800373
    The explain plan for  "UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID" is
    {code}
    PLAN_TABLE_OUTPUT
    Plan hash value: 1184960901
    | Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT   |                    |     1 |    26 | 18748   (3)| 00:03:45 |
    |   1 |  UPDATE            | BSOA |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL| BSOA |     1 |    26 | 18748   (3)| 00:03:45 |
    Predicate Information (identified by operation id):
       2 - filter("SESSION_ID"=TO_NUMBER(SYS_CONTEXT('USERENV','SESSIONID')))
    {code}

    Hi,
    There are also triggers before update and AUDITS on this table.
    CREATE OR REPLACE TRIGGER B2.TRIGGER1
    BEFORE UPDATE
    ON B2.BSOA  REFERENCING OLD AS OLD NEW AS NEW
    FOR EACH ROW
    BEGIN
    :NEW.LAST_UPDATED_BY   := USER    ;
    :NEW.LAST_UPDATE_DATE  := SYSDATE ;
    END;
    CREATE OR REPLACE TRIGGER B2.TRIGGER2
    BEFORE INSERT
    ON B2.BSOA  REFERENCING OLD AS OLD NEW AS NEW
    FOR EACH ROW
    BEGIN
    :NEW.CREATED_BY        := USER ;
    :NEW.CREATION_DATE     := SYSDATE ;
    :NEW.LAST_UPDATED_BY   := USER    ;
    :NEW.LAST_UPDATE_DATE  := SYSDATE ;
    END;
    And also there is an audit on this table
    AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER SUCCESSFUL;
    AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER NOT SUCCESSFUL;
    And the sessionid column in BSOA has height balanced histogram.
    When I create an index I get the following error. As I am on 10g I can't use DDL_LOCK_TIMEOUT; I may have to wait for the next downtime.
    SQL> CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS;
    CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS
    ERROR at line 1:
    ORA-00054: resource busy and acquire with NOWAIT specified
    Thanks
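    If the index cannot wait for a downtime window, an ONLINE build (an Enterprise Edition feature) is worth trying; it keeps DML running for most of the build, although on 10g it still needs brief locks at the start and end, so it can still fail on a very busy table. A sketch with the same index definition as above:
    -- a plain index is enough here: TO_NUMBER() is applied to the SYS_CONTEXT value,
    -- not to SESSION_ID, so the existing predicate can use this index as-is
    CREATE INDEX b2.bsoa_sessid_i ON b2.bsoa (session_id) ONLINE;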

  • Issue with the query

    Hi Friends,
    I am having a problem with this query. The query is to fetch all the elements for the employees. The elements that need to be fetched are set up in the flex sets.
    In the table fnd_flex_values there are columns start_date_active and end_date_active fields and these are null. In the below query when I use the condition
    and trunc(sysdate) between NVL(ffv.start_Date_active,TO_DATE('01-JAN-1951','DD-MON-YYYY')) and NVL(ffv.end_Date_active,TO_DATE('31-DEC-4712','DD-MON-YYYY'))
    the query fetches the results in 5 seconds,
    but when I replace the same query with the statement
    and to_date(to_char(ppa.date_earned,'DD-MON-YYYY'),'DD-MON-YYYY') between NVL(ffv.start_Date_active,TO_DATE('01-JAN-1951','DD-MON-YYYY')) and NVL(ffv.end_Date_active,TO_DATE('31-DEC-4712','DD-MON-YYYY'))
    the query takes a lot of time; in fact it gets timed out.
    The start_date_active and end_date_active columns are of DATE type. Can anyone say what the issue with this query is?
    The query below uses sysdate instead of the ppa.date_earned date:
    select papf.person_id, papf.full_name, papf.employee_number
    ,petf.element_name, pivf.name,
    prrv.result_value, ppa.date_earned,ppa.effective_date, to_char(ppa.date_earned,'DD-MON-YYYY') date_earned, paaf.assignment_id, petf.effective_start_date
    ,petf.effective_end_date, ffv.*
    from per_all_people_f papf
    ,per_all_assignments_f paaf
    ,pay_payroll_actions ppa
    ,pay_assignment_actions paa
    ,pay_element_types_f petf
    ,pay_input_values_f pivf
    ,pay_run_results prr
    ,pay_run_result_values prrv
    ,per_person_type_usages_f pptuf
    ,per_person_types ppt
    ,fnd_flex_values ffv
    ,fnd_flex_value_sets ffvs
    where 1=1
    and papf.person_id = paaf.person_id
    and trunc(ppa.date_earned) between trunc(papf.effective_start_date) and trunc(papf.effective_end_date)
    and trunc(ppa.date_earned) between trunc(paaf.effective_start_date) and trunc(paaf.effective_end_date)
    and paa.assignment_id = paaf.assignment_id
    and paa.action_status='C'
    and ppa.payroll_id = 61
    --and ppa.consolidation_set_id = 108
    and ppa.payroll_action_id = paa.payroll_action_id
    and prr.assignment_action_id = paa.assignment_action_id
    and prr.element_type_id = petf.element_type_id
    -- and trunc(ppa.date_earned) between trunc(petf.effective_start_date) and trunc(petf.effective_end_date)
    and trunc(sysdate) between petf.effective_start_date and petf.effective_end_date
    and prrv.run_result_id = prr.run_result_id
    and prrv.input_value_id = pivf.input_value_id
    and     prr.status in ('P','PA')
    and pivf.name ='Pay Value'
    and ppa.date_earned between pivf.effective_start_date and pivf.effective_end_date
    and ppa.time_period_id=145
    and paaf.person_id = pptuf.person_id
    and pptuf.person_type_id = ppt.person_type_id
    and ppt.user_person_type = nvl('Employee',ppt.user_person_type)
    and trunc(ppa.date_earned) between pptuf.effective_start_date and pptuf.effective_end_date
    and petf.element_name = ffv.flex_value
    and ffv.flex_value_set_id = ffvs.flex_value_set_id
    and ffv.enabled_flag ='Y'
    and ffvs.flex_value_set_name='GROUP_ELEMENTS'
    and trunc(sysdate) between NVL(ffv.start_Date_active,TO_DATE('01-JAN-1951','DD-MON-YYYY')) and NVL(ffv.end_Date_active,TO_DATE('31-DEC-4712','DD-MON-YYYY'))
    -- and trunc(ppa.effective_date) between nvl(ffv.start_Date_active,trunc(ppa.effective_date)) and NVL(ffv.end_Date_active,trunc(ppa.effective_date))
    -- and to_date(to_char(ppa.date_earned,'DD-MON-YYYY'),'DD-MON-YYYY') between NVL(ffv.start_Date_active,TO_DATE('01-JAN-1951','DD-MON-YYYY')) and
    order by ffv.parent_flex_value_low, papf.employee_number;
    Thanks

    ʃʃp wrote:
    /* Formatted on 2012/06/11 13:34 (Formatter Plus v4.8.8) */
    SELECT DISTINCT fr_ir_code, fr_fc_code1, product1, param_rep_prd, no_of_links_start_ir, no_of_supps_start_ir
               FROM comm_exst_rp_comit_aggr_irview
              WHERE fr_ir_code = 'AS01'
                AND srta_period = 2
                AND srta_year = 2011
                AND param_rep_prd = '2011-2'
                AND DECODE (:bind_variable, 'All', 1, 0) = DECODE (:bind_variable, 'All', 1, 1)
           GROUP BY fr_ir_code, fr_fc_code1, product1, param_rep_prd, no_of_links_start_ir, no_of_supps_start_ir
    UNION
    SELECT DISTINCT fr_ir_code, fr_fc_code1, product1, param_rep_prd, no_of_links_start_irda AS no_of_links_start_ir,
                    no_of_supps_start__irda AS no_of_supps_start_ir
               FROM comm_exst_rp_comit_aggr_irview
              WHERE fr_ir_code = 'AS01'
                AND srta_period = 2
                AND srta_year = 2011
                AND param_rep_prd = '2011-2'
                AND da_status = 'N'
                AND DECODE (:bind_variable, 'All', 1, 0) = DECODE (:bind_variable, 'All', 0, 0)
           GROUP BY fr_ir_code, fr_fc_code1, product1, param_rep_prd, no_of_links_start_irda, no_of_supps_start__irda
    Yes, Union is one of the best solutions for handling two queries.
    As per my understanding DECODE function use is:
    DECODE (value,<if this value>,<return this value>,
    <if this value>,<return this value>,
    <otherwise this value>)
    But I am a little bit confused about the two lines below in the query:
    1)
    AND DECODE (:bind_variable, 'All', 1, 0) = DECODE (:bind_variable, 'All', 1, 1)
    2)
    AND DECODE (:bind_variable, 'All', 1, 0) = DECODE (:bind_variable, 'All', 0, 0)
    Can you please tell me how the comparison is done using the DECODE function?
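    For reference, a small illustration (not part of the original report) of how those two DECODE comparisons evaluate, which is what makes the two UNION branches mutually exclusive:
    -- :bind_variable = 'All'          -> first branch: 1 = 1 (rows kept),  second branch: 1 = 0 (no rows)
    -- :bind_variable = anything else  -> first branch: 0 = 1 (no rows),    second branch: 0 = 0 (rows kept)
    SELECT DECODE(:bind_variable, 'All', 1, 0) AS lhs,
           DECODE(:bind_variable, 'All', 1, 1) AS rhs_first_branch,
           DECODE(:bind_variable, 'All', 0, 0) AS rhs_second_branch
      FROM dual;
    So with 'All' the report runs the first SELECT (no da_status filter), and with any other value it runs the second SELECT, which adds DA_STATUS = 'N'.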

  • Performance issue with insert query !

    Hi ,
    I am using dbxml-2.4.16, my node-storage container is loaded with a large document ( 54MB xml ).
    My document basically contains around 65k records in the same table (65k child nodes for one parent node). I need to insert more records into my DB; my insert XQuery is consuming a lot of time (~23 sec) to insert one entry through the command line and around 50 sec through code.
    My container is indexed with "node-attribute-equality-string". The insert query I used:
    insert nodes <NS:sampleEntry mySSIAddress='70011' modifier = 'create'><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
    If I modify my query with
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:sampleTable/NS:sampleEntry[@mySSIAddress='1']
    instead of
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
    The time taken is reduced by only 8 secs.
    I have also tried to use insert "after", "before", "as first", "as last" , but there is no difference in performance.
    Is anything wrong with my query? What should be the expected time to insert one record in a DB of 65k records?
    Does anybody have any idea regarding this performance issue?
    Kindly help me out.
    Thanks,
    Kapil.

    Hi George,
    Thanks for your reply.
    Here is the info you requested,
    dbxml> listIndexes
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: node-attribute-equality-string for node {}:mySSIAddress
    2 indexes found.
    dbxml> info
    Version: Oracle: Berkeley DB XML 2.4.16: (October 21, 2008)
    Berkeley DB 4.6.21: (September 27, 2007)
    Default container name: n_b_i_f_c_a_z.dbxml
    Type of default container: NodeContainer
    Index Nodes: on
    Shell and XmlManager state:
    Not transactional
    Verbose: on
    Query context state: LiveValues,Eager
    The insert query with the update takes ~32 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS';insert nodes <NS:sampleEntry mySSIAddress='70000' modifier = 'create' ><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 32.5002
    and the query without the update part takes ~14 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 13.7289
    The query :
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//PMB:sampleTable/PMB:sampleEntry[@mySSIAddress='1000']"
    Time in seconds for command 'query': 0.005375
    is very fast.
    The update of the document seems to consume much of the time.
    Regards,
    Kapil.

  • Performance issue of the query

    Hi All,
    Please help me to avoid sequential access on the table VBAK.
    as I am using this query on VBAK, VBUK, VBPA and VBFA.
    The query is as follows.
    How can I improve the performance of this query? It's very urgent; please give me a sample query if possible.
    SELECT a~vbeln
    a~erdat
    a~ernam
    a~vkorg
    a~vkbur
    a~kunnr
    b~kunnr AS kunnr_shp
    b~adrnr
    c~gbstk
    c~fkstk
    d~vbeln AS vbeln_i
    INTO CORRESPONDING FIELDS OF TABLE t_z92sales
    FROM vbak AS a INNER JOIN vbuk AS c ON a~vbeln = c~vbeln
    INNER JOIN vbpa AS b ON a~vbeln = b~vbeln
    AND b~posnr = 0 AND b~parvw = 'WE'
    LEFT OUTER JOIN vbfa AS d ON a~vbeln = d~vbelv AND d~vbtyp_n = 'M'
    WHERE ( a~vbeln BETWEEN fvbeln AND tvbeln )
    AND a~vbtyp = 'C'
    AND a~vkorg IN vkorg_s
    AND c~gbstk IN gbstk_s.

    Hi Ramada,
    Try using the following code.
    RANGES: r_vbeln FOR vbak-vbeln.
    DATA: w_index TYPE sy-tabix,
          BEGIN OF t_vbak OCCURS 0,
            vbeln  TYPE vbak-vbeln,
            erdat  TYPE vbak-erdat,
            ernam  TYPE vbak-ernam,
            vkorg  TYPE vbak-vkorg,
            vkbur  TYPE vbak-vkbur,
            kunnr  TYPE vbak-kunnr,
            del(1) TYPE c         ,
          END OF t_vbak,
          BEGIN OF t_vbuk OCCURS 0,
            vbeln TYPE vbuk-vbeln,
            gbstk TYPE vbuk-gbstk,
            fkstk TYPE vbuk-fkstk,
          END OF t_vbuk,
          BEGIN OF t_vbpa OCCURS 0,
            vbeln TYPE vbpa-vbeln,
            kunnr TYPE vbpa-kunnr,
            adrnr TYPE vbpa-adrnr,
          END OF t_vbpa,
          BEGIN OF t_vbfa OCCURS 0,
            vbelv TYPE vbfa-vbelv,
            vbeln TYPE vbfa-vbeln,
          END OF t_vbfa,
          BEGIN OF t_z92sales OCCURS 0,
            vbeln     TYPE vbak-vbeln,
            erdat     TYPE vbak-erdat,
            ernam     TYPE vbak-ernam,
            vkorg     TYPE vbak-vkorg,
            vkbur     TYPE vbak-vkbur,
            kunnr     TYPE vbak-kunnr,
            gbstk     TYPE vbuk-gbstk,
            fkstk     TYPE vbuk-fkstk,
            kunnr_shp TYPE vbpa-kunnr,
            adrnr     TYPE vbpa-adrnr,
            vbeln_i   TYPE vbfa-vbeln,
          END OF t_z92sales.
    IF NOT fvbeln IS INITIAL
    OR NOT tvbeln IS INITIAL.
      REFRESH r_vbeln.
      r_vbeln-sign   = 'I'.
      IF fvbeln IS INITIAL.
        r_vbeln-option = 'EQ'.
        r_vbeln-low    = tvbeln.
      ELSEIF tvbeln IS INITIAL.
        r_vbeln-option = 'EQ'.
        r_vbeln-low    = fvbeln.
      ELSE.
        r_vbeln-option = 'BT'.
        r_vbeln-low    = fvbeln.
        r_vbeln-high   = tvbeln.
      ENDIF.
      APPEND r_vbeln.
      CLEAR r_vbeln.
      SELECT vbeln
             erdat
             ernam
             vkorg
             vkbur
             kunnr
        FROM vbak
        INTO TABLE t_vbak
        WHERE vbeln IN r_vbeln
        AND   vbtyp EQ 'C'
        AND   vkorg IN vkorg_s.
      IF sy-subrc EQ 0.
        SORT t_vbak BY vbeln.
        SELECT vbeln
               gbstk
               fkstk
        FROM vbuk
        INTO TABLE t_vbuk
        FOR ALL ENTRIES IN t_vbak
        WHERE vbeln EQ t_vbak-vbeln
        AND   gbstk IN gbstk_s.
        IF sy-subrc EQ 0.
          SORT t_vbuk BY vbeln.
          LOOP AT t_vbak.
            w_index = sy-tabix.
            READ TABLE t_vbuk WITH KEY vbeln = t_vbak-vbeln
                                       BINARY SEARCH
                                       TRANSPORTING NO FIELDS.
            IF sy-subrc NE 0.
              t_vbak-del = 'X'.
              MODIFY t_vbak INDEX w_index TRANSPORTING del.
            ENDIF.
          ENDLOOP.
          DELETE t_vbak WHERE del EQ 'X'.
        ENDIF.
      ENDIF.
    ELSE.
      SELECT vbeln
             gbstk
             fkstk
      FROM vbuk
      INTO TABLE t_vbuk
      WHERE vbtyp EQ 'C'
      AND   gbstk IN gbstk_s.
      IF sy-subrc EQ 0.
        SORT t_vbuk BY vbeln.
        SELECT vbeln
               erdat
               ernam
               vkorg
               vkbur
               kunnr
          FROM vbak
          INTO TABLE t_vbak
          FOR ALL ENTRIES IN t_vbuk
          WHERE vbeln EQ t_vbuk-vbeln
          AND   vkorg IN vkorg_s.
        IF sy-subrc EQ 0.
          SORT t_vbak BY vbeln.
        ENDIF.
      ENDIF.
    ENDIF.
    IF NOT t_vbak[] IS INITIAL.
      SELECT vbeln
             kunnr
             adrnr
        FROM vbpa
        INTO TABLE t_vbpa
        FOR ALL ENTRIES IN t_vbak
        WHERE vbeln EQ t_vbak-vbeln
        AND   posnr EQ '000000'
        AND   parvw EQ 'WE'.
      IF sy-subrc EQ 0.
        SORT t_vbpa BY vbeln.
        SELECT vbelv
               vbeln
          FROM vbfa
          INTO TABLE t_vbfa
          FOR ALL ENTRIES IN t_vbpa
          WHERE vbelv EQ t_vbpa-vbeln
          AND   vbtyp_n EQ 'M'.
        IF sy-subrc EQ 0.
          SORT t_vbfa BY vbelv.
        ENDIF.
      ENDIF.
    ENDIF.
    REFRESH t_z92sales.
    LOOP AT t_vbak.
      READ TABLE t_vbuk WITH KEY vbeln = t_vbak-vbeln
                                 BINARY SEARCH
                                 TRANSPORTING
                                   gbstk
                                   fkstk.
      IF sy-subrc EQ 0.
        READ TABLE t_vbpa WITH KEY vbeln = t_vbak-vbeln
                                           BINARY SEARCH
                                           TRANSPORTING
                                             kunnr
                                             adrnr.
        IF sy-subrc EQ 0.
          " VBFA was joined with LEFT OUTER JOIN in the original query, so the
          " row is kept even when no subsequent document of type 'M' exists;
          " clearing the header line first avoids carrying over a stale VBELN.
          CLEAR t_vbfa.
          READ TABLE t_vbfa WITH KEY vbelv = t_vbak-vbeln
                                     BINARY SEARCH
                                     TRANSPORTING
                                       vbeln.
          t_z92sales-vbeln     = t_vbak-vbeln.
          t_z92sales-erdat     = t_vbak-erdat.
          t_z92sales-ernam     = t_vbak-ernam.
          t_z92sales-vkorg     = t_vbak-vkorg.
          t_z92sales-vkbur     = t_vbak-vkbur.
          t_z92sales-kunnr     = t_vbak-kunnr.
          t_z92sales-gbstk     = t_vbuk-gbstk.
          t_z92sales-fkstk     = t_vbuk-fkstk.
          t_z92sales-kunnr_shp = t_vbpa-kunnr.
          t_z92sales-adrnr     = t_vbpa-adrnr.
          t_z92sales-vbeln_i   = t_vbfa-vbeln.
          APPEND t_z92sales.
          CLEAR  t_z92sales.
        ENDIF.
      ENDIF.
    ENDLOOP.

  • Performance issue with infoset query

    Dear Experts,
    When I try to run a query based on an InfoSet, the execution time of the query is very high. When I open the query in the Query Designer, I get the following warning...
    Warning Message :
    Diagnosis
    InfoProvider XXXX does not contain characteristic 0CALYEAR. The exception aggregation for key figure XXXX can therefore not be applied.
    System Response
    The key figure will therefore be aggregated using all characteristics of XXXXX with the generic aggregation SUM.
    Could anyone guide me on what to check at the InfoSet level to resolve this issue?
    Thanks,
    Mannu!

    Hi Mannu.
    Go to change mode in your InfoSet and identify the Char 0CALYEAR. Check whether it is selected (click the box beside 0CALYEAR).
    This needs to be done, because the exception aggregation can occur only for the fields available in the BEx Query!
    Since it is not available, the system tries to aggregate based on all available fields selected in the InfoSet. This may be the reason for your performance issue.
    Make this setting and I hope your issue will get resolved. Do post the outcome.
    Cheers,
    VA
    Edited by: Vishwa  Anand on Sep 6, 2010 5:15 PM

  • Performance issue with select query

    Hi friends ,
    This is my select query, which is taking a long time to retrieve the records. Can anyone help me solve this performance issue?
    *- Get the Goods receipts  mainly selected per period (=> MKPF secondary
      SELECT mseg~ebeln mseg~ebelp mseg~werks
             ekko~bukrs ekko~lifnr ekko~zterm ekko~ekorg ekko~ekgrp
             ekko~inco1 ekko~exnum
             lfa1~name1 lfa1~land1 lfa1~ktokk lfa1~stceg
             mkpf~mblnr mkpf~mjahr mseg~zeile mkpf~bldat mkpf~budat
             mseg~bwart
    *Start of changes for CIP 6203752 by PGOX02
             mseg~smbln
    *End of changes for CIP 6203752 by PGOX02
             ekpo~matnr ekpo~txz01 ekpo~menge ekpo~meins
             ekbe~menge ekbe~dmbtr ekbe~wrbtr ekbe~waers
             ekpo~lgort ekpo~matkl ekpo~webaz ekpo~konnr ekpo~ktpnr
             ekpo~plifz ekpo~bstae
             INTO TABLE it_temp
        FROM mkpf JOIN mseg ON mseg~mblnr EQ mkpf~mblnr
                           AND mseg~mjahr EQ mkpf~mjahr
                  JOIN ekbe ON ekbe~ebeln EQ mseg~ebeln
                           AND ekbe~ebelp EQ mseg~ebelp
                           AND ekbe~zekkn EQ '00'
                           AND ekbe~vgabe EQ '1'
                           AND ekbe~gjahr EQ mseg~mjahr
                           AND ekbe~belnr EQ mseg~mblnr
                           AND ekbe~buzei EQ mseg~zeile
                  JOIN ekpo ON ekpo~ebeln EQ ekbe~ebeln
                           AND ekpo~ebelp EQ ekbe~ebelp
                  JOIN ekko ON ekko~ebeln EQ ekpo~ebeln
                  JOIN lfa1 ON lfa1~lifnr EQ ekko~lifnr
        WHERE mkpf~budat IN so_budat
          AND mkpf~bldat IN so_bldat
          AND mkpf~vgart EQ 'WE'
          AND mseg~bwart IN so_bwart
          AND mseg~matnr IN so_matnr
          AND mseg~werks IN so_werks
          AND mseg~lifnr IN so_lifnr
          AND mseg~ebeln IN so_ebeln
          AND ekko~ekgrp IN so_ekgrp
          AND ekko~bukrs IN so_bukrs
          AND ekpo~matkl IN so_matkl
          AND ekko~bstyp IN so_bstyp
          AND ekpo~loekz EQ space
          AND ekpo~plifz IN so_plifz.
    Thanks & Regards,
    Manoj Kumar .Thatha
    Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting and please use code tags when posting code - post locked
    Edited by: Rob Burbank on Feb 4, 2010 9:03 AM


  • Help with the query---urgent

    PROCEDURE Mktg_Obj_Cmpgn(p_attr_tbl_nm IN attr.attr_tbl_nm%TYPE,p_actn_cd IN chg_log.actn_cd%TYPE) IS
    v_tbl_nm varchar2(100);
    v_actn_cd varchar2(4);
    BEGIN
    v_tbl_nm := p_attr_tbl_nm;
    v_actn_cd := p_actn_cd;
    --dbms_output.put_line(v_tbl_nm);
    IF v_actn_cd = 'CRET' THEN
    SELECT to_clob(XMLELEMENT("requestmessage",XMLATTRIBUTES(xmlforest(
    seq_pblsh_rqst.nextval AS "publish_id")),
    XMLAGG(XMLELEMENT("campaign",XMLATTRIBUTES(XMLFOREST(chg.actn_cd AS "ACTION",
    pgm.program_idseq AS "parententityvalue",
    ent_sbsrb.ENT_ID AS "entid",
    pgm.campaign_idseq AS "campaignid",
    bus_unt.portfolio_subtype_id AS "campaignsegment",
    pgm_typ.pgm_type_nm AS "campaigntype",
    LKP.lookup_key_desc AS "campaignclass",
    pgm.editor_userid AS "campaignlastmodifiedby",
    pgm.edited_dtm AS "campaignlastmodifieddate",
    pgm.mkt_initv_proc_tx AS "campaignlink",
    ofr.offer_idseq AS "offerlink")))),
    XMLAGG(XMLELEMENT("PROCESSCODE", XMLATTRIBUTES(XMLFOREST(action as "link",PROGRAM.program_idseq AS "parententityvalue",
    ENT_SBSRB_ATTR.ent_id AS "entid"))))))AS "v_clob" into v_clob1
    FROM dual,
    chg_log chg,
    program pgm,
    ent_sbsrb_attr ent_sbsrb,
    business_unit bus_unt,
    program_type pgm_typ,
    offer ofr,
    (SELECT lookup_keyword,lookup_key_desc,lookup_type
    FROM lookup_key
    WHERE lookup_type = 'PRTFL_CMPGN_ID')LKP
    WHERE chg.attr_id = ent_sbsrb.attr_id
    and chg.parnt_ent_val_tx = to_char(pgm.program_idseq)
    and pgm.business_unit_idseq = bus_unt.business_unit_idseq
    and pgm.pgm_type_cd = pgm_typ.pgm_type_cd
    and ofr.program_idseq = pgm.program_idseq
    and pgm.pgm_cls_cd(+) = LKP.lookup_keyword;
    END IF;
    end Mktg_Obj_Cmpgn;
    I have this procedure within a package. v_clob1 is declared in the package body as a CLOB variable. When I try to compile it, I get the error given below. Can anyone please help me?
    (1): PL/SQL: ORA-19208: parameter 1 of function XMLELEMENT must be aliased
    (2): PL/SQL: SQL Statement ignored
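    For what it is worth, ORA-19208 is raised when an expression handed to one of the SQL/XML generation functions is not given an AS alias; in the code above, the XMLFOREST calls nested inside XMLATTRIBUTES are a likely trigger, since the result of XMLFOREST reaches XMLATTRIBUTES without an alias. A minimal sketch of the aliasing rule, runnable against DUAL only and not against the original tables:
    -- Raises ORA-19208: the computed expression inside XMLATTRIBUTES has no alias.
    -- SELECT XMLELEMENT("requestmessage", XMLATTRIBUTES('PUB-' || 42)) FROM dual;
    -- Compiles cleanly: the expression is aliased directly inside XMLATTRIBUTES
    -- instead of being wrapped in a nested XMLFOREST call.
    SELECT XMLELEMENT("requestmessage",
             XMLATTRIBUTES('PUB-' || 42 AS "publish_id"),
             XMLELEMENT("campaign", 'payload')) AS request_xml
      FROM dual;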

    hi sir,
    I want to ask you guys one query.
    Let's say I have a parent table account and a child table service:
    account
    id  name  status
    1   john  0
    2   ki    1
    3   kdf   2
    service
    id  trans_id  name  status
    1   1         et    9999
    1   2         eee   2222
    2   3         ere   999
    2   4         wew   0
    I plan to use something like this:
    delete account from account t1,service t2 where t1id=t2.id and t1.status<>0;
    but the SQL command is not correct, so please help me.
    Now I need a query to delete records from the service table only, where
    account.status <> 0 and account.id = service.id.
    I don't want to delete any records from the account table,
    so can you help me with this query?
    thanks
    Message was edited by:
    mani_um
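    A minimal sketch of the kind of correlated delete being asked for here, assuming the account and service columns shown above; it removes only the service rows whose parent account has a non-zero status and leaves the account table untouched:
    -- Delete child rows whose parent account has status <> 0.
    DELETE FROM service s
     WHERE EXISTS (SELECT 1
                     FROM account a
                    WHERE a.id     = s.id
                      AND a.status <> 0);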

  • Performance issue with Spatial query

    Need inputs to optimize a spatial query that takes 9 seconds to return 34410 rows. All tables have been analyzed. The plan shows the indexed columns being used. Details below:
    SELECT count(*) FROM (select ANT_DIR_GEOLOC , decode(D_UL_SQE_LOW,0,0,D_UL_SQE_LOW/D_UL_SQE_CHECK) as D_UL_SQE_BAD_SECTORS
    from dae_admin.VIEW_D_UL_SQE_BAD_SECTOR G , SECTORS S Where G.SECTOR_ID = S.SECTOR_ID and WEEK_NUMBER=32 and
    WEEK_NUMBER_YEAR=2007 and WEEKEND = 'N')
    WHERE MDSYS.SDO_FILTER(ANT_DIR_GEOLOC, MDSYS.SDO_GEOMETRY(2003, 99900001,NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003,
    3),MDSYS.SDO_ORDINATE_ARRAY(-1.06449263071068E7,3111428.11503326,-8140237.76425813,5605276.61542001)), 'querytype=WINDOW') ='TRUE'
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 146 | | 26699 (1)| 00:05:21 |
    | 1 | SORT AGGREGATE | | 1 | 146 | | | |
    |* 2 | HASH JOIN | | 1123 | 160K| | 26699 (1)| 00:05:21 |
    | 3 | PARTITION RANGE ALL | | 1123 | 152K| | 603 (1)| 00:00:08 |
    | 4 | TABLE ACCESS BY LOCAL INDEX ROWID | SECTORS | 1123 | 152K| | 603 (1)| 00:00:
    |* 5 | DOMAIN INDEX | SECTORS_ANTDIR_SX | | | | | |
    | 6 | VIEW | VIEW_D_UL_SQE_BAD_SECTOR | 20510 | 140K| | 26095 (1)| 00:05:14 |
    | 7 | HASH GROUP BY | | 20510 | 901K| | 26095 (1)| 00:05:14 |
    | 8 | VIEW | VIEW_D_UL_SQE_BAD_CARRIER | 984K| 42M| | 26095 (1)| 00:05:14 |
    | 9 | HASH GROUP BY | | 984K| 73M| 180M| 26095 (1)| 00:05:14 |
    | 10 | TABLE ACCESS BY LOCAL INDEX ROWID| FACT_DAILY_CHANNEL_STATS | 395K| 12M| | 603
    | 11 | NESTED LOOPS | | 984K| 73M| | 8179 (1)| 00:01:39 |
    | 12 | MERGE JOIN CARTESIAN | | 2 | 90 | | 4 (0)| 00:00:01 |
    | 13 | TABLE ACCESS BY INDEX ROWID | NETWORK_PARAMETERS | 1 | 19 | | 2 (0)| 00:00
    |* 14 | INDEX RANGE SCAN | NP_NAME_IDX | 1 | | | 1 (0)| 00:00:01 |
    | 15 | BUFFER SORT | | 5 | 130 | | 2 (0)| 00:00:01 |
    | 16 | TABLE ACCESS BY INDEX ROWID | DIM_TIME | 5 | 130 | | 2 (0)| 00:00:01 |
    |* 17 | INDEX RANGE SCAN | DIM_TIME_WEEK_3_IX | 5 | | | 1 (0)| 00:00:01 |
    | 18 | PARTITION RANGE ITERATOR | | 395K| | | 2139 (1)| 00:00:26 |
    |* 19 | INDEX RANGE SCAN | FA_DCC_EVENT_DATE_NP_ID | 395K| | | 2139 (1)| 00:00:26 |
    CREATE OR REPLACE VIEW VIEW_D_UL_SQE_BAD_SECTOR
    (SECTOR_ID, WEEK_NUMBER_YEAR, WEEK_NUMBER, WEEKEND, D_UL_SQE_LOW,
    D_UL_SQE_CHECK)
    AS
    SELECT sector_id,
    week_number_year,
    week_number,
    weekend,
    SUM(d_ul_sqe_low) d_ul_sqe_low,
    SUM(d_ul_sqe_check) d_ul_sqe_check
    FROM view_d_ul_sqe_bad_carrier
    GROUP BY sector_id, week_number_year, week_number, weekend
    view_d_ul_sqe_bad_carrier:
    SELECT /*+ index(f) */
    f.sector_id,
    f.channel,
    f.br_id,
    t.week_number_year,
    t.week_number,
    t.weekend,
    SUM(f.np_value_numerator) d_ul_sqe_low,
    SUM(f.np_value_denominator) d_ul_sqe_check
    FROM fact_daily_channel_stats f, network_parameters p, dim_time t
    WHERE f.np_id = p.np_id
    AND f.event_date = t.event_date
    AND p.name = 'DL_UL_SQE_LOW_RATE'
    GROUP BY f.sector_id, f.channel, f.br_id, t.week_number_year, t.week_number, t.weekend
    fact_daily_channel_stats is a table that is partitioned on event_date.

    Ivan - Yes, all the columns in the filter are indexed.
    Row Count:
    ==========
    fact_daily_channel_stats - 101263303
    network_parameters - 168
    SECTORS - 105216
    dim_time - 7306
    Any tip/ clue on re-writing the query would be of great help.
    Rich - Below is the response from our Spatial Team :
    1. Projection:
    We are working in Mercator projection. Since Oracle has out of the box Mercator with datum NAD 27 (SRID=49155), and we needed Mercator with NAD 83, we defined our own projection with SRID = 99900001, which you are seeing in our query. Projection is defined as:
    insert into sdo_coord_ref_system
    SELECT 99900001, a.coord_ref_sys_name, a.coord_ref_sys_kind,
    a.coord_sys_id, a.datum_id, 10076,
    2000023, a.projection_conv_id, a.cmpd_horiz_srid,
    a.cmpd_vert_srid, a.information_source, a.data_source,
    a.is_legacy, a.legacy_code, 'PROJCS["Mercator", GEOGCS [ "NAD 83", DATUM ["NAD 83", SPHEROID ["GRS 80", 6378137, 298.257222101]], PRIMEM [ "Greenwich", 0.000000 ], UNIT ["Decimal Degree", 0.01745329251994330]], PROJECTION ["Mercator"], UNIT ["Meter", 1.000000000000]]', a.legacy_cs_bounds, a.is_valid, a.supports_sdo_geometry
    FROM mdsys.sdo_coord_ref_sys a where srid=49155;
    2. Metadata for SECTORS table is:
    INSERT INTO USER_SDO_GEOM_METADATA (TABLE_NAME, COLUMN_NAME, DIMINFO, SRID)
    VALUES('SECTORS', 'GEOLOC',SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -20037566, 20037566, 1),
    SDO_DIM_ELEMENT('Y', -71957574, 71957574, 1)),99900001);
    which means tolerance is 1 meter.
    3. Precision - I don't think we need to supply coordinates to MDSYS.SDO_FILTER with such great precision. Maybe rounding them (dropping the fractional part entirely; 1 meter should be enough) would help. If not, then maybe we need to round the coordinates we generate when converting from longitude/latitude to Mercator and store in the SECTORS table. On the other hand, the SECTORS table defines a tolerance of 1 meter in its metadata, and the spatial index should use that tolerance.
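    Following up on point 3, a rough sketch of what rounding the query-window ordinates to whole meters could look like, assuming ANT_DIR_GEOLOC is the indexed spatial column on SECTORS (as the SECTORS_ANTDIR_SX domain index in the plan suggests); the values are simply the original window coordinates rounded to integers:
    -- Same window filter with the ordinates rounded to 1 m, which the declared
    -- 1-meter tolerance in USER_SDO_GEOM_METADATA should comfortably absorb.
    SELECT COUNT(*)
      FROM sectors s
     WHERE MDSYS.SDO_FILTER(
             s.ant_dir_geoloc,
             MDSYS.SDO_GEOMETRY(2003, 99900001, NULL,
               MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 3),
               MDSYS.SDO_ORDINATE_ARRAY(-10644926, 3111428, -8140238, 5605277)),
             'querytype=WINDOW') = 'TRUE';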
