Performance issue with select query

Hi friends,
This is my select query, which is taking a long time to retrieve the records. Can anyone help me solve this performance issue?
*- Get the Goods receipts  mainly selected per period (=> MKPF secondary
  SELECT mseg~ebeln mseg~ebelp mseg~werks
         ekko~bukrs ekko~lifnr ekko~zterm ekko~ekorg ekko~ekgrp
         ekko~inco1 ekko~exnum
         lfa1~name1 lfa1~land1 lfa1~ktokk lfa1~stceg
         mkpf~mblnr mkpf~mjahr mseg~zeile mkpf~bldat mkpf~budat
         mseg~bwart
*Start of changes for CIP 6203752 by PGOX02
         mseg~smbln
*End of changes for CIP 6203752 by PGOX02
         ekpo~matnr ekpo~txz01 ekpo~menge ekpo~meins
         ekbe~menge ekbe~dmbtr ekbe~wrbtr ekbe~waers
         ekpo~lgort ekpo~matkl ekpo~webaz ekpo~konnr ekpo~ktpnr
         ekpo~plifz ekpo~bstae
         INTO TABLE it_temp
    FROM mkpf JOIN mseg ON mseg~mblnr EQ mkpf~mblnr
                       AND mseg~mjahr EQ mkpf~mjahr
              JOIN ekbe ON ekbe~ebeln EQ mseg~ebeln
                       AND ekbe~ebelp EQ mseg~ebelp
                       AND ekbe~zekkn EQ '00'
                       AND ekbe~vgabe EQ '1'
                       AND ekbe~gjahr EQ mseg~mjahr
                       AND ekbe~belnr EQ mseg~mblnr
                       AND ekbe~buzei EQ mseg~zeile
              JOIN ekpo ON ekpo~ebeln EQ ekbe~ebeln
                       AND ekpo~ebelp EQ ekbe~ebelp
              JOIN ekko ON ekko~ebeln EQ ekpo~ebeln
              JOIN lfa1 ON lfa1~lifnr EQ ekko~lifnr
    WHERE mkpf~budat IN so_budat
      AND mkpf~bldat IN so_bldat
      AND mkpf~vgart EQ 'WE'
      AND mseg~bwart IN so_bwart
      AND mseg~matnr IN so_matnr
      AND mseg~werks IN so_werks
      AND mseg~lifnr IN so_lifnr
      AND mseg~ebeln IN so_ebeln
      AND ekko~ekgrp IN so_ekgrp
      AND ekko~bukrs IN so_bukrs
      AND ekpo~matkl IN so_matkl
      AND ekko~bstyp IN so_bstyp
      AND ekpo~loekz EQ space
      AND ekpo~plifz IN so_plifz.
Thanks & Regards,
Manoj Kumar .Thatha
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting and please use code tags when posting code - post locked
Edited by: Rob Burbank on Feb 4, 2010 9:03 AM

Similar Messages

  • Performance issue with select query and for all entries.

    hi,
    I have a report to be performance-tuned.
    The database table has around 20 million entries and 25 fields.
    The report first fetches the distinct values of two fields using one select query.
    That first select query fetches around 150 entries from the table for the 2 fields.
    It then applies some logic, eliminates some entries, and ends up with around 80-90 entries.
    It then runs a select query on the same table again, using FOR ALL ENTRIES on the internal table with those 80-90 entries.
    In short, it accesses the same database table twice.
    I tried to read the database table into an internal table, apply the logic there, and delete the unwanted entries, but it gave me a memory dump; it won't take that huge amount of data into ABAP memory.
    Are 80-90 entries too many for "for all entries"?
    The logic that eliminates the entries from the internal table is too long, and hence cannot be converted into a WHERE clause to turn this into a single select.
    I really can't find a way out.
    Please help.

    chinmay kulkarni wrote:Chinmay,
    Even though you tried to ask the question with detailed explanation, unfortunately it is still not clear.
    It is perfectly fine to access the same database twice. If that is working for you, I don't think there is any need to change the logic. As Rob mentioned, 80 or 8000 records is not a problem in "for all entries" clause.
    >
    > so, i tried to get the database table in internal table and apply the logic on internal table and delete the unwanted entries.. but it gave me memory dump, and it wont take that huge amount of data into abap memory...
    >
    It is not clear what you tried to do here. Did you try to bring all 20 million records into an internal table? That will certainly cause the program to short dump with memory shortage.
    > the logic that is applied to eliminate the entries from internal table is too long, and hence cannot be converted into where clause to convert it into single select..
    >
    That is fine. Actually, it is better (performance wise) to do much of the work in ABAP than writing a complex WHERE clause that might bog down the database.
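    To illustrate the reply, here is a minimal sketch of the two-pass pattern being discussed, assuming a hypothetical database table ZTAB and placeholder fields FIELD1/FIELD2 (not the poster's actual objects). The empty-table check matters because FOR ALL ENTRIES with an empty driver table would read the entire database table.
    TYPES: BEGIN OF ty_key,
             field1 TYPE c LENGTH 10,
             field2 TYPE c LENGTH 10,
           END OF ty_key.
    DATA: lt_keys TYPE STANDARD TABLE OF ty_key,
          lt_data TYPE STANDARD TABLE OF ztab.
    * Pass 1: fetch only the distinct values of the two fields (small result set).
    SELECT DISTINCT field1 field2
      FROM ztab
      INTO TABLE lt_keys.
    * Apply the elimination logic in ABAP (placeholder condition only).
    DELETE lt_keys WHERE field1 IS INITIAL.
    * Pass 2: read the detail rows only for the surviving 80-90 key combinations.
    IF lt_keys IS NOT INITIAL.
      SELECT * FROM ztab
        INTO TABLE lt_data
        FOR ALL ENTRIES IN lt_keys
        WHERE field1 = lt_keys-field1
          AND field2 = lt_keys-field2.
    ENDIF.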

  • Performance Issue with Selection Screen Values

    Hi,
    I am facing a performance issue(seems like a performance issue ) in my project.
    I have a query with some RKFs and sales area in filters (single value variable which is optional).
    Query is by default  restricted by current month.
    The Cube on which the query operates has around 400,000 records for a month.
    The Cube gets loaded every three hours
    When I run the query with no filters I get the output within 10~15 secs.
    The issue I am facing is that when I enter a sales area in my selection screen, the query gets stuck in the data selection step. In fact, we are facing the same problem if we use one or two other characteristics in our selection screen.
    We have aggregates/indexes etc on our cube.
    Has any one faced a similar situation?
    Does any one have any comments on this ?
    Your help will be appreciated. Thanks

    Hi A R,
    Go to RSRT --> give your query name --> Execute + Debug.
    A pop-up will come with many check boxes; select the "Display Aggregates found" option, then enter your selections on the variable screen. First it will give the names of the already existing aggregates; continue. After displaying all the aggregates it lists the objects related to the cube; copy these objects into a notepad. Now go through your drill-downs; you will again get the existing aggregates for each drill-down and their list of objects; copy them to the notepad as well. Then sort all the objects related to one cube and delete the duplicates in the notepad. Go to that InfoCube --> context menu --> Maintain Aggregates --> create an aggregate on the objects you copied into the notepad.
    Now try to execute the report; it should work properly, without delays, for those selections.
    I hope it helps you...
    Regards,
    Ramki.

  • Performance Issue with sql query

    Hi,
    My db is 10.2.0.5 with RAC on ASM, Clusterware version 10.2.0.5.
    With bsoa table as
    SQL> desc bsoa;
    Name                                      Null?    Type
    ID                                        NOT NULL NUMBER
    LOGIN_TIME                                         DATE
    LOGOUT_TIME                                        DATE
    SUCCESSFUL_IND                                     VARCHAR2(1)
    WORK_STATION_NAME                                  VARCHAR2(80)
    OS_USER                                            VARCHAR2(30)
    USER_NAME                                 NOT NULL VARCHAR2(30)
    FORM_ID                                            NUMBER
    AUDIT_TRAIL_NO                                     NUMBER
    CREATED_BY                                         VARCHAR2(30)
    CREATION_DATE                                      DATE
    LAST_UPDATED_BY                                    VARCHAR2(30)
    LAST_UPDATE_DATE                                   DATE
    SITE_NO                                            NUMBER
    SESSION_ID                                         NUMBER(8)
    The query
    UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID
    Is taking a lot of time to execute and in AWR reports also it is on top in
    1. SQL Order by elapsed time
    2. SQL order by reads
    3. SQL order by gets
    So I am trying to find a way to solve the performance issue, as the application is slow especially during login and logout.
    I understand that the function in the WHERE condition causes a full table scan, but I cannot think of what other parts to look at.
    Also:
    SQL> SELECT COUNT(1) FROM BSOA;
      COUNT(1)
       7800373
    The explain plan for  "UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID" is
    {code}
    PLAN_TABLE_OUTPUT
    Plan hash value: 1184960901
    | Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT   |                    |     1 |    26 | 18748   (3)| 00:03:45 |
    |   1 |  UPDATE            | BSOA |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL| BSOA |     1 |    26 | 18748   (3)| 00:03:45 |
    Predicate Information (identified by operation id):
       2 - filter("SESSION_ID"=TO_NUMBER(SYS_CONTEXT('USERENV','SESSIONID')))
    {code}

    Hi,
    There are also triggers before update and AUDITS on this table.
    CREATE OR REPLACE TRIGGER B2.TRIGGER1
    BEFORE UPDATE
    ON B2.BSOA  REFERENCING OLD AS OLD NEW AS NEW
    FOR EACH ROW
    BEGIN
      :NEW.LAST_UPDATED_BY   := USER    ;
      :NEW.LAST_UPDATE_DATE  := SYSDATE ;
    END;
    CREATE OR REPLACE TRIGGER B2.TRIGGER2
    BEFORE INSERT
    ON B2.BSOA  REFERENCING OLD AS OLD NEW AS NEW
    FOR EACH ROW
    BEGIN
      :NEW.CREATED_BY        := USER ;
      :NEW.CREATION_DATE     := SYSDATE ;
      :NEW.LAST_UPDATED_BY   := USER    ;
      :NEW.LAST_UPDATE_DATE  := SYSDATE ;
    END;
    And also there is an audit on this table
    AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER SUCCESSFUL;
    AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER NOT SUCCESSFUL;
    And the sessionid column in BSOA has height balanced histogram.
    When I create an index I get the following error. As I am on 10g I can't use DDL_LOCK_TIMEOUT; I may have to wait for the next downtime.
    SQL> CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS;
    CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS
    ERROR at line 1:
    ORA-00054: resource busy and acquire with NOWAIT specified
    Thanks

  • Issue with Select Query in the Delivery userexit USEREXIT_SAVE_DOCUMENT

    Hi All,
    I am facing a strange issue with a delivery user exit.
    1) I have a delivery user exit MV50AFZ1 - USEREXIT_SAVE_DOCUMENT.
    2) In this user exit. I have written a select query as shown below
    *Get the already delivered data
        SELECT vbeln
               vgbel
               vgpos
               erdat
               erzet
               lfimg
          FROM lips
          INTO TABLE t_lips
           FOR ALL ENTRIES IN t_xlips_reference
         WHERE vgbel EQ t_xlips_reference-vgbel
           AND vgpos EQ t_xlips_reference-vgpos.
        IF sy-subrc EQ 0.
        ENDIF.
    3) The purpose of the above select query is to find out whether any delivery has already been created for the reference sales order for which the current delivery is being created.
    4) The issue here is that for the FIRST delivery this select query should fail, because no delivery has been created earlier and the LIPS table would not have data. But I am seeing some data getting populated in the internal table. The data that I am seeing in the internal table is the data of XLIPS, which is nothing but the data that is about to be saved to the database after this user exit finishes.
    5) The STRANGE point is that this works fine if I create the delivery using transaction VL01N. But if I create the delivery using the VL10A program, I am facing this issue.
    << Removed >>
    Thanks,
    Babu Kilari
    Edited by: Rob Burbank on Jun 16, 2010 4:22 PM

    Then why don't you add
    AND vbeln NE likp-vbeln
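    As a minimal sketch of that suggestion (assuming LIKP-VBELN holds the number of the delivery currently being saved at this point in the user exit; the exact work area may differ in your system):
    *Get the already delivered data, excluding the delivery being created right now
        SELECT vbeln
               vgbel
               vgpos
               erdat
               erzet
               lfimg
          FROM lips
          INTO TABLE t_lips
           FOR ALL ENTRIES IN t_xlips_reference
         WHERE vgbel EQ t_xlips_reference-vgbel
           AND vgpos EQ t_xlips_reference-vgpos
           AND vbeln NE likp-vbeln.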

  • Performance issue with insert query !

    Hi ,
    I am using dbxml-2.4.16, my node-storage container is loaded with a large document ( 54MB xml ).
    My document basically contains around 65k records in the same table (65k child nodes for one parent node). I need to insert more records into my DB; my insert XQuery is consuming a lot of time (~23 sec) to insert one entry through the command line and around 50 sec through code.
    My container is indexed with "node-attribute-equality-string". The insert query I used:
    insert nodes <NS:sampleEntry mySSIAddress='70011' modifier = 'create'><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
    If I modify my query with
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:sampleTable/NS:sampleEntry[@mySSIAddress='1']
    instead of
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
    Time taken reduces only by 8 secs.
    I have also tried to use insert "after", "before", "as first", "as last" , but there is no difference in performance.
    Is anything wrong with my query? What should be the expected time to insert one record into a DB of 65k records?
    Has anybody got any idea regarding this performance issue?
    Kindly help me out.
    Thanks,
    Kapil.

    Hi George,
    Thanks for your reply.
    Here is the info you requested,
    dbxml> listIndexes
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: node-attribute-equality-string for node {}:mySSIAddress
    2 indexes found.
    dbxml> info
    Version: Oracle: Berkeley DB XML 2.4.16: (October 21, 2008)
    Berkeley DB 4.6.21: (September 27, 2007)
    Default container name: n_b_i_f_c_a_z.dbxml
    Type of default container: NodeContainer
    Index Nodes: on
    Shell and XmlManager state:
    Not transactional
    Verbose: on
    Query context state: LiveValues,Eager
    The insert query with the update takes ~32 sec (shown below)
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS';insert nodes <NS:sampleEntry mySSIAddress='70000' modifier = 'create' ><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 32.5002
    and the query without the update part takes ~14 sec (shown below)
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 13.7289
    The query :
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//PMB:sampleTable/PMB:sampleEntry[@mySSIAddress='1000']"
    Time in seconds for command 'query': 0.005375
    is very fast.
    The update of the document seems to consume much of the time.
    Regards,
    Kapil.

  • Issue with select query for secondary index

    Hi all,
    I have created a secondary index A on mara table with fields Mandt and Packaging Material Type VHART.
    Now I am trying to write a report:
    Tables : mara.
    data : begin of itab occurs 0.
    include structure mara.
    data  : end of itab.
    select * from mara into table itab
    CLIENT SPECIFIED where
      MANDT = SY-MANDT and
      VHART = 'WER'.
    I'm getting an error
    Unable to interpret "CLIENT". Possible causes of error: Incorrect spelling or comma error.          
    If I change my select query to
    select * from mara into table itab
      where
      MANDT = SY-MANDT and
      VHART = 'WER'.
    I'm getting an error
    Without the addition "CLIENT SPECIFIED", you cannot specify the client     field "MANDT" in the WHERE condition.
    Let me know if iam wrong and we are at 4.6c
    Thanks

    Like I already said, even if you have added the mandt field in the secondary index, there is no need to use it in the select statement.
    Let me elaborate on my reply before. If you have created a UNIQUE index, which I don't think you have, then you should include CLIENT in the index. A unique index for a client-dependent table must contain the client field.
    Additional info:
    The accessing speed does not depend on whether or not an index is defined as a unique index. A unique index is simply a means of defining that certain field combinations of data records in a table are unique.
    Even if you have defined a secondary index, this does not automatically mean that this index is used. This also depends on the database optimizer. The optimizer will determine which index is best and use it. So before transporting this index, you should make sure that the index is used. To check this, have a look at the link:
    [check if index is used|http://help.sap.com/saphelp_nw70/helpdata/EN/cf/21eb3a446011d189700000e8322d00/content.htm]
    Edited by: Micky Oestreich on May 13, 2008 10:09 PM
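    Following that advice, a minimal sketch of the statement without the explicit client handling (Open SQL adds the client condition implicitly, so neither CLIENT SPECIFIED nor MANDT is needed, and the secondary index on MANDT/VHART can still be considered by the optimizer):
    select * from mara into table itab
      where vhart = 'WER'.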

  • Issue with select query join....

    Hello,
    I am facing a problem with a select query. I get a message that itab_data is not long enough.
    DATA: itab_data like ptrv_head occurs 0 with header line.
    SELECT *    from PTRV_HEAD
                as a inner join PA0002 as b
                on a~pernr = b~pernr
                inner join PA0001 as c
                on a~pernr = c~pernr
                INTO TABLE itab_data
           where a~PERNR in S_PERNR
           and   a~REINR in S_TRIP
           and   a~ZLAND in S_LAND
           and   a~DATV1 in S_BEGIN
           and   a~DATB1 in S_END
           and   b~NACHN in S_FIRST
           and   b~VORNA in S_LAST
           and   c~BUKRS in S_BUKRS
           and   c~KOSTL in S_KOSTL
           and   c~PERSG in S_EMPGP
           and   c~PERSK in S_SUBGP
           and   c~begda in ldbbegda
           and   c~endda in ldbendda.
    Regards,
    Jainam.
    Edited by: Jainam Shah on Mar 27, 2009 4:13 PM
    Edited by: Jainam Shah on Mar 27, 2009 4:13 PM

    Hi,
    Try this..
    DATA: t_dfies  TYPE STANDARD TABLE OF dfies.
    DATA: t_fields TYPE STANDARD TABLE OF char40.
    DATA: s_dfies  TYPE dfies,
          s_fields TYPE char40.
    * Get the fields
    CALL FUNCTION 'DDIF_FIELDINFO_GET'
      EXPORTING
        tabname        = 'PTRV_HEAD'
      TABLES
        dfies_tab      = t_dfies[]
      EXCEPTIONS
        not_found      = 1
        internal_error = 2
        OTHERS         = 3.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    * Build the fields to be selected.
    LOOP AT t_dfies INTO s_dfies.
      CONCATENATE 'A~' s_dfies-fieldname INTO s_fields.
      APPEND s_fields TO t_fields.
      CLEAR: s_fields.
    ENDLOOP.
    * Select.
    SELECT (t_fields)    from PTRV_HEAD
    Thanks
    Naren
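    To complete the idea, a hedged sketch of how the dynamic field list might be used (the joins and selection options are taken from the original post; since t_fields lists every PTRV_HEAD field in dictionary order, the result fits itab_data, which is declared LIKE ptrv_head):
    * Sketch only: select just the A~ (PTRV_HEAD) columns of the join.
    SELECT (t_fields) FROM ptrv_head
                 AS a INNER JOIN pa0002 AS b
                 ON a~pernr = b~pernr
                 INNER JOIN pa0001 AS c
                 ON a~pernr = c~pernr
                 INTO TABLE itab_data
            WHERE a~pernr IN s_pernr
            AND   a~reinr IN s_trip
            AND   a~zland IN s_land
            AND   a~datv1 IN s_begin
            AND   a~datb1 IN s_end
            AND   b~nachn IN s_first
            AND   b~vorna IN s_last
            AND   c~bukrs IN s_bukrs
            AND   c~kostl IN s_kostl
            AND   c~persg IN s_empgp
            AND   c~persk IN s_subgp
            AND   c~begda IN ldbbegda
            AND   c~endda IN ldbendda.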

  • Performance issue with infoset query

    Dear Experts,
    When I try to run a query based on an InfoSet, the execution time of the query is very high. When I open the query in the Query Designer I get the following warning...
    Warning Message :
    Diagnosis
    InfoProvider XXXX does not contain characteristic 0CALYEAR. The exception aggregation for key figure XXXX can therefore not be applied.
    System Response
    The key figure will therefore be aggregated using all characteristics of XXXXX with the generic aggregation SUM.
    Could anyone guide me on what to check at the InfoSet level to resolve this issue?
    Thanks,
    Mannu!

    Hi Mannu.
    Go to change mode in your InfoSet and identify the Char 0CALYEAR. Check whether it is selected (click the box beside 0CALYEAR).
    This needs to be done, because the exception aggregation can occur only for the fields available in the BEx Query!
    Since it is not available, the system is aggregating based on all available fields selected in the InfoSet. This may be the reason for your performance issue.
    Make this setting and I hope your issue will get resolved. Do post the outcome.
    Cheers,
    VA
    Edited by: Vishwa  Anand on Sep 6, 2010 5:15 PM

  • Performance issue with this query.

    Hi Experts,
    This query is fetching 500 records.
    SELECT
    RECIPIENT_ID ,FAX_STATUS
    FROM
    FAX_STAGE WHERE LOWER(FAX_STATUS) like 'moved to%'
    Execution Plan
    | Id  | Operation                   | Name                | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT            |                     |   159K|    10M|  2170   (1)|
    |   1 |  TABLE ACCESS BY INDEX ROWID| FAX_STAGE           |   159K|    10M|  2170   (1)|
    |   2 |   INDEX RANGE SCAN          | INDX_FAX_STATUS_RAM | 28786 |       |   123   (0)|
    Note
       - 'PLAN_TABLE' is old version
    Statistics
              1  recursive calls
              0  db block gets
             21  consistent gets
              0  physical reads
              0  redo size
            937  bytes sent via SQL*Net to client
            375  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             19  rows processed
    Total number of records in the table.
    SELECT COUNT(*) FROM FAX_STAGE--3679418
    There are only a few distinct values for this column.
    SELECT DISTINCT FAX_STATUS FROM FAX_STAGE;
    Completed
    BROKEN
    Broken - New
    moved to - America
    MOVED to - Australia
    Moved to Canada and australia
    There is a function-based index on FAX_STAGE(LOWER(FAX_STATUS)).
    Stats are up to date.
    Still, the cost is high.
    How to improve the performance of this query.
    Please help me.
    Thanks in advance.

    With no heavy activity on your fax_stage table a bitmap index might do better - see  CREATE INDEX
    I would try FTS (Full Table Scan) first as 6 vs. 3679418 is low cardinality for sure so using an index is not very helpful in this case (maybe too much Exadata oriented)
    There are a lot of web pages where you can read that full table scans are not always evil and indexes are not always good, or vice versa - see Ask Tom "How to avoid the full table scan".
    Regards
    Etbin

  • Performance Issue with the query

    Hi Experts,
    While working on PeopleSoft, today I was stuck with a problem where one of the queries is performing really badly. Even with all the statistics updated, the query does not perform well. On one of the tables, the query spends a lot of time doing IO (db file sequential read wait). And if I delete the stats for that table and let dynamic sampling take place, the query just works fine and there is no IO wait on the table.
    Here is the query
    SELECT A.BUSINESS_UNIT_PC, A.PROJECT_ID,  E.DESCR,  A.EMPLID,  D.NAME,  C.DESCR,  A.TRC,
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( A.EST_GROSS),
      DECODE( A.TRC, 'ROVA1', 0, 'ROVA2', 0, ( SUM( A.EST_GROSS)/100) * 9.75),
      '2012-07-01',
      '2012-07-31',
      SUM(     DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
      DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
      DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
      DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'),
          TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'),
          TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY')  || ' / '  || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
      C.TRC,  TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
      E.BUSINESS_UNIT, 
      E.PROJECT_ID
    FROM
         PS_TL_PAYABLE_TIME A, 
         PS_TL_TRC_TBL C,
         PS_PROJECT E, 
         PS_PERSONAL_DATA D,
           PS_PERALL_SEC_QRY D1, 
         PS_JOB F, 
         PS_EMPLMT_SRCH_QRY F1
    WHERE
         D.EMPLID = D1.EMPLID
    AND      D1.OPRID   = 'TMANI'
    AND      F.EMPLID   = F1.EMPLID
    AND      F.EMPL_RCD = F1.EMPL_RCD
    AND      F1.OPRID   = 'TMANI'
    AND      A.DUR BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD') AND TO_DATE('2012-07-31','YYYY-MM-DD')
    AND      C.TRC   = A.TRC
    AND      C.EFFDT =  (SELECT
                   MAX(C_ED.EFFDT) 
                  FROM
                   PS_TL_TRC_TBL C_ED
                     WHERE
                   C.TRC = C_ED.TRC 
                    AND C_ED.EFFDT <= SYSDATE)
    AND      E.BUSINESS_UNIT = A.BUSINESS_UNIT_PC
    AND      E.PROJECT_ID    = A.PROJECT_ID
    AND      A.EMPLID        = D.EMPLID
    AND      A.EMPLID        = F.EMPLID
    AND      A.EMPL_RCD      = F.EMPL_RCD
    AND      F.EFFDT         =  (SELECT
                        MAX(F_ED.EFFDT) 
                      FROM
                        PS_JOB F_ED
                        WHERE
                        F.EMPLID = F_ED.EMPLID 
                      AND      F.EMPL_RCD = F_ED.EMPL_RCD
                         AND      F_ED.EFFDT <= SYSDATE)
    AND      F.EFFSEQ =  (SELECT
                   MAX(F_ES.EFFSEQ) 
                   FROM
                   PS_JOB F_ES
                     WHERE
                   F.EMPLID = F_ES.EMPLID 
                   AND F.EMPL_RCD = F_ES.EMPL_RCD
                      AND F.EFFDT  = F_ES.EFFDT)
    AND      F.GP_PAYGROUP  = DECODE(' ', ' ', F.GP_PAYGROUP, ' ')
    AND      A.CURRENCY_CD  = DECODE(' ', ' ', A.CURRENCY_CD, ' ')
    AND      DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')) = DECODE(' ', ' ', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')), 'L', 'L', 'E', 'E', 'A', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')))
    AND ( A.EMPLID, A.EMPL_RCD) IN  (SELECT
                             B.EMPLID,
                             B.EMPL_RCD 
                        FROM
                             PS_TL_GROUP_DTL B
                                    WHERE
                              B.TL_GROUP_ID = DECODE('ER012', ' ', B.TL_GROUP_ID, 'ER012'))
    AND      E.PROJECT_USER1   = DECODE(' ', ' ', E.PROJECT_USER1, ' ')
    AND      A.PROJECT_ID      = DECODE(' ', ' ', A.PROJECT_ID, ' ')
    AND      A.PAYABLE_STATUS <>
      CASE
        WHEN to_number(TO_CHAR(sysdate, 'DD')) < 15
        THEN
          CASE
            WHEN A.DUR > last_day(add_months(sysdate, -2))
            THEN ' '
            ELSE 'NA'
          END
        ELSE
          CASE
            WHEN A.DUR > last_day(add_months(sysdate, -1))
            THEN ' '
            ELSE 'NA'
          END
      END
    AND      A.EMPLID = DECODE(' ', ' ', A.EMPLID, ' ')
    GROUP BY A.BUSINESS_UNIT_PC,
           A.PROJECT_ID,
           E.DESCR,
         A.EMPLID,
           D.NAME,
           C.DESCR,
           A.TRC,
           '2012-07-01',
           '2012-07-31',
           DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
           DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
           DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-31',      'YYYY-MM-DD'), 'Mon ')
           || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
           || TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY')  || ' / '
           || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')  || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
           C.TRC,  TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
           E.BUSINESS_UNIT,  E.PROJECT_ID
    HAVING SUM( A.EST_GROSS) <> 0
    ORDER BY 1,
            2,
             5,
              6;
    Here is the screenshot for the IO wait:
    [https://lh4.googleusercontent.com/-6PFW2FSK3yE/UCrwUbZ0pvI/AAAAAAAAAPA/eHM48AOC0Uo]
    Edited by: Oceaner on Aug 14, 2012 5:38 PM

    Here is the execution plan with all the statistics present
    PLAN_TABLE_OUTPUT
    Plan hash value: 1575300420
    | Id  | Operation                                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                            |                    |     1 |   237 |  2703   (1)| 00:00:33 |
    |*  1 |  FILTER                                     |                    |       |       |            |          |
    |   2 |   SORT GROUP BY                             |                    |     1 |   237 |            |          |
    |   3 |    CONCATENATION                            |                    |       |       |            |          |
    |*  4 |     FILTER                                  |                    |       |       |            |          |
    |*  5 |      FILTER                                 |                    |       |       |            |          |
    |*  6 |       HASH JOIN                             |                    |     1 |   237 |  1695   (1)| 00:00:21 |
    |*  7 |        HASH JOIN                            |                    |     1 |   204 |  1689   (1)| 00:00:21 |
    |*  8 |         HASH JOIN SEMI                      |                    |     1 |   193 |  1352   (1)| 00:00:17 |
    |   9 |          NESTED LOOPS                       |                    |       |       |            |          |
    |  10 |           NESTED LOOPS                      |                    |     1 |   175 |  1305   (1)| 00:00:16 |
    |* 11 |            HASH JOIN                        |                    |     1 |   148 |  1304   (1)| 00:00:16 |
    |  12 |             JOIN FILTER CREATE              | :BF0000            |       |       |            |          |
    |  13 |              NESTED LOOPS                   |                    |       |       |            |          |
    |  14 |               NESTED LOOPS                  |                    |     1 |   134 |   967   (1)| 00:00:12 |
    |  15 |                NESTED LOOPS                 |                    |     1 |   103 |   964   (1)| 00:00:12 |
    |* 16 |                 TABLE ACCESS FULL           | PS_PROJECT         |   197 |  9062 |   278   (1)| 00:00:04 |
    |* 17 |                 TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME |     1 |    57 |     7   (0)| 00:00:01 |
    |* 18 |                  INDEX RANGE SCAN           | IDX$$_C44D0007     |    16 |       |     3   (0)| 00:00:01 |
    |* 19 |                INDEX RANGE SCAN             | IDX$$_3F450003     |     1 |       |     2   (0)| 00:00:01 |
    |* 20 |               TABLE ACCESS BY INDEX ROWID   | PS_JOB             |     1 |    31 |     3   (0)| 00:00:01 |
    |  21 |             VIEW                            | PS_EMPLMT_SRCH_QRY |  5428 | 75992 |   336   (2)| 00:00:05 |
    PLAN_TABLE_OUTPUT
    |  22 |              SORT UNIQUE                    |                    |  5428 |   275K|   336   (2)| 00:00:05 |
    |* 23 |               FILTER                        |                    |       |       |            |          |
    |  24 |                JOIN FILTER USE              | :BF0000            | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  25 |                 NESTED LOOPS                |                    | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  26 |                  TABLE ACCESS BY INDEX ROWID| PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 27 |                   INDEX UNIQUE SCAN         | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |* 28 |                  TABLE ACCESS FULL          | PS_SJT_PERSON      | 55671 |  1739K|   333   (1)| 00:00:04 |
    |  29 |                CONCATENATION                |                    |       |       |            |          |
    |  30 |                 NESTED LOOPS                |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |* 31 |                  INDEX FAST FULL SCAN       | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |* 32 |                  INDEX UNIQUE SCAN          | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  33 |                 NESTED LOOPS                |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 34 |                  INDEX UNIQUE SCAN          | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 35 |                  INDEX UNIQUE SCAN          | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  36 |                NESTED LOOPS                 |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 37 |                 INDEX UNIQUE SCAN           | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 38 |                 INDEX UNIQUE SCAN           | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |* 39 |            INDEX UNIQUE SCAN                | PS_PERSONAL_DATA   |     1 |       |     0   (0)| 00:00:01 |
    |  40 |           TABLE ACCESS BY INDEX ROWID       | PS_PERSONAL_DATA   |     1 |    27 |     1   (0)| 00:00:01 |
    |* 41 |          INDEX FAST FULL SCAN               | PS_TL_GROUP_DTL    |   323 |  5814 |    47   (3)| 00:00:01 |
    |  42 |         VIEW                                | PS_PERALL_SEC_QRY  |  7940 | 87340 |   336   (2)| 00:00:05 |
    |  43 |          SORT UNIQUE                        |                    |  7940 |   379K|   336   (2)| 00:00:05 |
    |* 44 |           FILTER                            |                    |       |       |            |          |
    |* 45 |            FILTER                           |                    |       |       |            |          |
    |  46 |             NESTED LOOPS                    |                    | 55671 |  2663K|   335   (1)| 00:00:05 |
    |  47 |              TABLE ACCESS BY INDEX ROWID    | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 48 |               INDEX UNIQUE SCAN             | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |* 49 |              TABLE ACCESS FULL              | PS_SJT_PERSON      | 55671 |  1576K|   333   (1)| 00:00:04 |
    |  50 |            CONCATENATION                    |                    |       |       |            |          |
    |  51 |             NESTED LOOPS                    |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |* 52 |              INDEX FAST FULL SCAN           | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |* 53 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  54 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 55 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 56 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  57 |            CONCATENATION                    |                    |       |       |            |          |
    |  58 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 59 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 60 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  61 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 62 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 63 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  64 |            NESTED LOOPS                     |                    |     1 |    63 |     3   (0)| 00:00:01 |
    |  65 |             INLIST ITERATOR                 |                    |       |       |            |          |
    |* 66 |              INDEX RANGE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     2   (0)| 00:00:01 |
    |* 67 |             INDEX RANGE SCAN                | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |  68 |        TABLE ACCESS FULL                    | PS_TL_TRC_TBL      |   922 | 30426 |     6   (0)| 00:00:01 |
    |  69 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |* 70 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    |  71 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |* 72 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    |  73 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |* 74 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    |* 75 |     FILTER                                  |                    |       |       |            |          |
    PLAN_TABLE_OUTPUT
    |* 76 |      FILTER                                 |                    |       |       |            |          |
    |* 77 |       HASH JOIN                             |                    |     1 |   237 |   974   (2)| 00:00:12 |
    |* 78 |        HASH JOIN                            |                    |     1 |   226 |   637   (1)| 00:00:08 |
    |* 79 |         HASH JOIN                           |                    |     1 |   193 |   631   (1)| 00:00:08 |
    |  80 |          NESTED LOOPS                       |                    |       |       |            |          |
    |  81 |           NESTED LOOPS                      |                    |     1 |   179 |   294   (1)| 00:00:04 |
    |  82 |            NESTED LOOPS                     |                    |     1 |   152 |   293   (1)| 00:00:04 |
    |  83 |             NESTED LOOPS                    |                    |     1 |   121 |   290   (1)| 00:00:04 |
    |* 84 |              HASH JOIN SEMI                 |                    |     1 |    75 |   289   (1)| 00:00:04 |
    |* 85 |               TABLE ACCESS BY INDEX ROWID   | PS_TL_PAYABLE_TIME |     1 |    57 |   242   (0)| 00:00:03 |
    |* 86 |                INDEX SKIP SCAN              | IDX$$_C44D0007     |   587 |       |   110   (0)| 00:00:02 |
    |* 87 |               INDEX FAST FULL SCAN          | PS_TL_GROUP_DTL    |   323 |  5814 |    47   (3)| 00:00:01 |
    |* 88 |              TABLE ACCESS BY INDEX ROWID    | PS_PROJECT         |     1 |    46 |     1   (0)| 00:00:01 |
    |* 89 |               INDEX UNIQUE SCAN             | PS_PROJECT         |     1 |       |     0   (0)| 00:00:01 |
    |* 90 |             TABLE ACCESS BY INDEX ROWID     | PS_JOB             |     1 |    31 |     3   (0)| 00:00:01 |
    |* 91 |              INDEX RANGE SCAN               | IDX$$_3F450003     |     1 |       |     2   (0)| 00:00:01 |
    |* 92 |            INDEX UNIQUE SCAN                | PS_PERSONAL_DATA   |     1 |       |     0   (0)| 00:00:01 |
    |  93 |           TABLE ACCESS BY INDEX ROWID       | PS_PERSONAL_DATA   |     1 |    27 |     1   (0)| 00:00:01 |
    |  94 |          VIEW                               | PS_EMPLMT_SRCH_QRY |  5428 | 75992 |   336   (2)| 00:00:05 |
    |  95 |           SORT UNIQUE                       |                    |  5428 |   275K|   336   (2)| 00:00:05 |
    |* 96 |            FILTER                           |                    |       |       |            |          |
    |  97 |             NESTED LOOPS                    |                    | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  98 |              TABLE ACCESS BY INDEX ROWID    | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 99 |               INDEX UNIQUE SCAN             | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |*100 |              TABLE ACCESS FULL              | PS_SJT_PERSON      | 55671 |  1739K|   333   (1)| 00:00:04 |
    | 101 |             CONCATENATION                   |                    |       |       |            |          |
    | 102 |              NESTED LOOPS                   |                    |     1 |    63 |     2   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |*103 |               INDEX FAST FULL SCAN          | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |*104 |               INDEX UNIQUE SCAN             | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 105 |              NESTED LOOPS                   |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*106 |               INDEX UNIQUE SCAN             | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*107 |               INDEX UNIQUE SCAN             | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 108 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*109 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*110 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 111 |         TABLE ACCESS FULL                   | PS_TL_TRC_TBL      |   922 | 30426 |     6   (0)| 00:00:01 |
    | 112 |        VIEW                                 | PS_PERALL_SEC_QRY  |  7940 | 87340 |   336   (2)| 00:00:05 |
    | 113 |         SORT UNIQUE                         |                    |  7940 |   379K|   336   (2)| 00:00:05 |
    |*114 |          FILTER                             |                    |       |       |            |          |
    |*115 |           FILTER                            |                    |       |       |            |          |
    | 116 |            NESTED LOOPS                     |                    | 55671 |  2663K|   335   (1)| 00:00:05 |
    | 117 |             TABLE ACCESS BY INDEX ROWID     | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |*118 |              INDEX UNIQUE SCAN              | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |*119 |             TABLE ACCESS FULL               | PS_SJT_PERSON      | 55671 |  1576K|   333   (1)| 00:00:04 |
    | 120 |           CONCATENATION                     |                    |       |       |            |          |
    | 121 |            NESTED LOOPS                     |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |*122 |             INDEX FAST FULL SCAN            | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |*123 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 124 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*125 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*126 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 127 |           CONCATENATION                     |                    |       |       |            |          |
    | 128 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*129 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |*130 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 131 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*132 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*133 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 134 |           NESTED LOOPS                      |                    |     1 |    63 |     3   (0)| 00:00:01 |
    | 135 |            INLIST ITERATOR                  |                    |       |       |            |          |
    |*136 |             INDEX RANGE SCAN                | PSASJT_CLASS_ALL   |     1 |    39 |     2   (0)| 00:00:01 |
    |*137 |            INDEX RANGE SCAN                 | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    | 138 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |*139 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    | 140 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |*141 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    | 142 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |*143 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    | 144 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |*145 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    | 146 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |*147 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    | 148 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |*149 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------------------------------
    Most of the I/O wait occurs at step 17. The explain plan alone does not show this, but when we run this query from the PeopleSoft application (online page), it is visible through Grid Control SQL Monitoring.
    The second part of the question continues below.

  • Performance issue with sql query. Please explain

    I have a sql query
    A.column1 and B.column1 are indexed.
    Query1: select A.column1,A.column3, B.column2 from tableA , tableB where A.column1=B.column1;
    query2: select A.column1,A.column3,B.column2,B.column4 from tableA , tableB where A.column1=B.column1;
    1. Do both queries take the same time? If not, why?
    Both queries access the same rows, just with a different number of columns, and since the complete data block is loaded into the database buffer cache, there should be no extra time taken up to that point.
    Please tell me if I am wrong.

    For me, apart from the reads themselves, the excessive bytes sent via SQL*Net to the client (and the bytes received from it) make for a chatty network, which degrades performance:
    SQL> COLUMN plan_plus_exp FORMAT A100
    SQL> SET LINESIZE 1000
    SQL> SET AUTOTRACE TRACEONLY
    SQL> SELECT *
      2    FROM emp
      3  /
    14 rows selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=616)
       1    0   TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=616)
    Statistics
              1  recursive calls
              0  db block gets
              8  consistent gets
              0  physical reads
              0  redo size
           1631  bytes sent via SQL*Net to client
            423  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             14  rows processed
    SQL> SELECT ename
      2    FROM emp
      3  /
    14 rows selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=154)
       1    0   TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=154)
    Statistics
              1  recursive calls
              0  db block gets
              8  consistent gets
              0  physical reads
              0  redo size
            456  bytes sent via SQL*Net to client
            423  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             14  rows processed
    Khurram

  • Performance issue in select query

    Moderator message: do not post the same question in two forums.  Duplicate (together with answers) deleted.
    SELECT a~grant_nbr
           a~zzdonorfy
           a~company_code
           b~language
           b~short_desc
      INTO TABLE i_gmgr_text
      FROM gmgr AS a
      INNER JOIN gmgrtexts AS b ON a~grant_nbr = b~grant_nbr
      WHERE a~grant_nbr  IN s_grant
        AND a~zzdonorfy  IN s_dono
        AND b~language   EQ sy-langu
        AND b~short_desc IN s_cont.
    How can FOR ALL ENTRIES be used instead of the above inner join for better performance? (See the sketch after this thread.)
    then....
      IF sy-subrc EQ 0.
        SORT i_gmgr_text BY grant_nbr.
      ENDIF.
      IF i_gmgr_text[] IS NOT INITIAL.
    * Actual Line Item Table
        SELECT rgrant_nbr
               gl_sirid
               rbukrs
               rsponsored_class
               refdocnr
               refdocln
        FROM gmia
        INTO TABLE i_gmia
        FOR ALL ENTRIES IN i_gmgr_text
        WHERE rgrant_nbr        = i_gmgr_text-grant_nbr
        AND   rbukrs            = i_gmgr_text-company_code
        AND   rsponsored_class IN s_spon.
        IF sy-subrc EQ 0.
          SORT i_gmia BY refdocnr refdocln.
        ENDIF.
    Edited by: Matt on Dec 17, 2008 1:40 PM

    > How can FOR ALL ENTRIES be used instead of the above inner join for better performance?
    My best Christmas recommendation for performance: simply ignore such recommendations.
    And check the performance of your join!
    Is the performance really low? If it is, then there is probably no index support. Without indexes, FOR ALL ENTRIES will be even slower.
    Siegfried
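
    As a reference for the question above, here is a minimal sketch of what the FOR ALL ENTRIES variant of that join could look like. Table and field names are taken from the original SELECT; i_gmgr and i_text are hypothetical work tables introduced only for this example. As Siegfried points out, this only pays off if the access is index-supported; without an index on GRANT_NBR in GMGRTEXTS, FOR ALL ENTRIES will not be faster than the join.
    * Sketch only: the inner join split into two SELECTs with FOR ALL ENTRIES.
    * i_gmgr and i_text are hypothetical internal tables for this example.
      SELECT grant_nbr zzdonorfy company_code
        FROM gmgr
        INTO TABLE i_gmgr
        WHERE grant_nbr IN s_grant
          AND zzdonorfy IN s_dono.

      IF i_gmgr[] IS NOT INITIAL.
        SELECT grant_nbr language short_desc
          FROM gmgrtexts
          INTO TABLE i_text
          FOR ALL ENTRIES IN i_gmgr
          WHERE grant_nbr  = i_gmgr-grant_nbr
            AND language   = sy-langu
            AND short_desc IN s_cont.
      ENDIF.
    The two result tables then still have to be merged in ABAP (for example with READ TABLE ... BINARY SEARCH inside a LOOP), which is exactly the extra work that makes a well-indexed join the simpler choice.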

  • Performance issue in Select Query on ERCH

    Dear Friends,
    I have to execute a query on ERCH in a production system with around 20 lakh (2 million) records, and the table keeps growing: "SELECT belnr vertrag adatsoll FROM erch WHERE bcreason = '03'". The expected volume of data the query will return is approx. 1,400 records.
    Since the WHERE clause includes a field which is not a key field, please suggest how to improve the performance of the query so that it does not time out.
    Regards,
    Amit Srivastava

    Hello Amit Srivastava,
    You can add the contract account number (VKONT) and business partner number (GPARTNER) to your query and create a custom index on the contract account number (we have this index created in our system).
    This will improve the performance effectively.
    Hope this answers your question.
    Thanks,
    Greetson
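
    Along the lines of Greetson's suggestion, a rough sketch is below. s_vkont is a hypothetical select-option for the contract account, and the custom secondary index on BCREASON / VKONT is an assumption about your system, not a standard delivered index. Reading with PACKAGE SIZE keeps memory use bounded if the result ever grows well beyond the expected 1,400 records.
    * Sketch only: restrict by contract account and process in packages.
    * s_vkont and lt_erch are hypothetical names for this example.
      SELECT belnr vertrag adatsoll
        FROM erch
        INTO TABLE lt_erch
        PACKAGE SIZE 1000
        WHERE bcreason = '03'
          AND vkont    IN s_vkont.
    *   ... process the current package in lt_erch here ...
      ENDSELECT.
    Whether this helps depends mainly on the index actually being created and used; without it, the database still scans the whole table no matter how the SELECT is written.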

  • Performance Issue with the Query urgent

    Is there any way to get the data for material rejected with movement types 122 & 123 other than from the MSEG table, as my report is very slow...
    my query is as below : -
    SELECT SUM( a~dmbtr ) INTO value1
        FROM mseg AS a INNER JOIN mkpf AS b
             ON a~mblnr = b~mblnr
            AND a~mjahr = b~mjahr
          WHERE a~lifnr = p_lifnr
            AND a~bwart IN ('122')
            AND b~budat IN s_budat
          GROUP BY lifnr.
      ENDSELECT.
    abhishek suppal

    Hi Abhi,
    Try like this ....
    SELECT SUM( a~dmbtr ) INTO value1
    FROM mseg AS a INNER JOIN mkpf AS b
    ON a~mblnr = b~mblnr
    AND a~mjahr = b~mjahr
    WHERE a~lifnr = p_lifnr
    AND a~bwart IN ('122','123')
    AND b~budat IN s_budat
    GROUP BY lifnr.
    ENDSELECT.
    or ...
    define a range, like:
    ranges: r_bwart for XXXX-bwart.
    r_bwart-sign = 'I'.
    r_bwart-option = 'EQ'.
    r_bwart-low = '122'.
    append r_bwart.
    r_bwart-low = '123'.
    append r_bwart.
    Now, in the SELECT statement you just add:
    AND a~bwart IN r_bwart
    Thanks
    Eswar
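
    Building on Eswar's range table, the whole thing can also be written as a single aggregate SELECT without the SELECT ... ENDSELECT loop, since the vendor is already fixed by p_lifnr and the GROUP BY is therefore not needed. A sketch, using the names from the posts above:
    * Sketch only: both movement types via the range table, no ENDSELECT loop.
      SELECT SUM( a~dmbtr ) INTO value1
        FROM mseg AS a INNER JOIN mkpf AS b
          ON  a~mblnr = b~mblnr
          AND a~mjahr = b~mjahr
        WHERE a~lifnr = p_lifnr
          AND a~bwart IN r_bwart
          AND b~budat IN s_budat.
    Note that this still reads MSEG; if it stays slow, the limiting factor is usually the access path on MSEG (LIFNR is not part of the primary key), not the syntax of the SELECT.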
