Performance issue in the CRM

Dear Experts
We are facing a performance issue in the CRM WebGUI. All the standard and custom views are slow; they take a long time to open.
However, from the ABAP side everything is OK; the response time is less than one second.
Is there any specific operation I have to perform in terms of CRM?
Regards
Haroon

Hello,
I guess you started the WebUI for the first time, right?
The first time it is very slow, because the WebUI views are generated on first access; SGEN does not really help here.
It should be much better when you call the same screens a second and third time.
See this link:
CRM Webclient UI - Generation of views after system restart / upgrade
Kind regards
Manfred

Similar Messages

  • Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Thanks Kelly,
    The answers would be the following:
    1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
    Assuming all will be populated, this would apply to all final material specs in the system, which could be ~25% of all material specs.
    The cells will be numeric, free text, drop downs, and some calculated numeric.
    Are we reaching the limits for UI performance?
    Thanks

  • Performance issues with the Vouchers index build in SES

    Hi All,
    We are currently performing an upgrade from PS FSCM 9.1 to PS FSCM 9.2.
    As a part of the upgrade, the client wants Oracle SES to be deployed for some modules, including Purchasing and Payables (Vouchers).
    We are facing severe performance issues with the Vouchers index build (volume of data: approx. 8.5 million rows).
    The index creation process runs for over 5 days.
    Can you please share any information or issues that you may have faced on your project and how they were addressed?

    Check the following logs for errors:
    1.  The message log from the process scheduler
    2.  search_server1-diagnostic.log  in /search_server1/logs directory
    If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES.

  • Performance issues with the Tuxedo MQ Adapter

    We are experiencing some performance issues with the MQ Adapter. For example, we are seeing that the MQ Adapter takes from 10 to 100 ms to read a single message from the queue and send it to the Tuxedo service. The Tuxedo service takes 80 ms in its execution, so there is a considerable amount of time spent in the MQ adapter that we cannot explain.
    Also, we have seen a lot of rollback transactions on the MQ adapter; for example, we got 980 rollback transactions for 15736 transactions sent, and only the MQ adapter is involved in the rollback. However, the operations are executed properly. The error we got is:
    135027.122.hqtux101!MQI_QMTESX01.7636.1.0: gtrid x0 x4ec1491f x25b59: LIBTUX_CAT:376: ERROR: tpabort: xa_rollback returned XA_RBROLLBACK.
    I am looking for information on the Oracle site, but I have not found anything. Could you or someone from your team help me?

    Hi Todd,
    We have 6 MQI adapters reading from 5 different queues, but in this case we are writing to only one queue.
    Someone from Oracle told us that the XA_RBROLLBACK occurs because we have 6 MQ adapters reading from the same queues, and when one adapter finds a message and tries to get it, it can happen that another MQ Adapter gets it first. In that case, the MQ adapter rolls back the transaction. Even though we get some XA_RBROLLBACK errors, we don't lose messages. Also, I read that when XA sends an xa_end call to the MQ adapter, it actually does the rollback, so when the MQ adapter receives the xa_rollback call, it answers with XA_RBROLLBACK. Is that true?
    However, I am more worried about the performance. We are putting a request message in an MQ queue and waiting for the reply. In some cases it takes 150 ms, and in other cases it takes much longer (more than 400 ms). The average is 300 ms. The MQ adapter calls a service (txgralms0) which takes 110 ms on average.
    This is our configuration:
    "MQI_QMTESX01" SRVGRP="g03000" SRVID=3000
    CLOPT="-- -C /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg"
    RQPERM=0600 REPLYQ=N RPPERM=0600 MIN=6 MAX=6 CONV=N
    SYSTEM_ACCESS=FASTPATH
    MAXGEN=1 GRACE=86400 RESTART=N
    MINDISPATCHTHREADS=0 MAXDISPATCHTHREADS=1 THREADSTACKSIZE=0
    SICACHEENTRIESMAX="500"
    /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg:
    *SERVER
    MINMSGLEVEL=0
    MAXMSGLEVEL=0
    DEFMAXMSGLEN=4096
    TPESVCFAILDATA=Y
    *QUEUE_MANAGER
    LQMID=QMTESX01
    NAME=QMTESX01
    *SERVICE
    NAME=txgralms0
    FORMAT=MQSTR
    TRAN=N
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KGCRQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KGCPQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KPSAQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KPINQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KDECQ01
    Thanks in advance,
    Marling

  • Performance issue in the Incremental ETL

    Hi Experts,
    we have a performance issue: the task "Custom_SIL_OvertimeFact" is taking more than 55 minutes.
    This task reads the .csv file coming from the business every 2 weeks, with a few rows of data updated and appended.
    Every day this task runs and scans the whole set of 70,000 records, when in reality we have only 4-5 records updated/appended.
    Is there a way to shrink the time taken by this task?
    When the records in the .csv file come only once a month, why does the mapping have to scan this many records on the daily incremental load?
    Please provide me some suggestions.
    best regards,
    Kam

    The reason you have to scan all the records is that you do not have an identifier to find what has been updated.
    Because it is an insert/update job, it would be using a normal load.
    Truncating the table and reloading it would help you in terms of performance, since then you would be able to use a bulk load.
    If there is any reason you can't do that, then you can also look at the update strategy and make sure that all the fields used to determine whether a particular transaction is an insert/update/existing record have an index defined on them. Apart from this, you should try to minimize the number of indexes on this table, or make sure that you drop them while the load is running and recreate them once it has finished.
    Also check your commit point and see if you can bump it up.

  • Performance issue with the ABAP statements

    Hello,
    Can someone please help me with the statements below, where I am getting a performance problem?
    SELECT * FROM /BIC/ASALHDR0100 into Table CHDATE.
    SORT CHDATE by DOC_NUMBER.
    SORT SOURCE_PACKAGE by DOC_NUMBER.
    LOOP AT CHDATE INTO WA_CHDATE.
       READ TABLE SOURCE_PACKAGE INTO WA_CIDATE WITH KEY DOC_NUMBER =
       WA_CHDATE-DOC_NUMBER BINARY SEARCH.
       MOVE WA_CHDATE-CREATEDON  to WA_CIDATE-CREATEDON.
    APPEND WA_CIDATE to CIDATE.
    ENDLOOP.
    I wrote the above code for the following requirement:
    1. I have 2 tables from which I am getting the data.
    2. I have a common field in both tables, named CREATEDON (a date). In both tables I have values for it.
    3. While accessing the 2 tables and copying to the third table, I have to modify this field.
    I am getting performance issues with the above statements.
    Thanks

    Hello,
    try a SELECT like the following one instead of your code:
    SELECT t1~field1 t2~field2 ...
      INTO TABLE it_table
      FROM table1 AS t1 INNER JOIN table2 AS t2
        ON t1~doc_number = t2~doc_number.
  • Performance issue on the sys.dba_audit_session

    I have the following query which is taking a long time and has a performance issue.
    SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS
    failed_count
    FROM sys.dba_audit_session
    WHERE returncode != 0
    AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')
    call     count       cpu    elapsed       disk      query    current       rows
    Parse        1      0.01       0.04          0          0          0          0
    Execute      1      0.00       0.00          0          0          0          0
    Fetch        2     68.42     216.08    3943789    3960058          0          1
    total        4     68.43     216.13    3943789    3960058          0          1
    The view dba_audit_session is a select from the view dba_audit_trail. If you
    look at the definition of dba_audit_trail, it does a CAST on the ntimestamp#
    column, which disables index access because there is no function-based
    index on ntimestamp#. I am not even sure a function-based index would
    work to match what the view does:
    cast ( /* TIMESTAMP */
    (from_tz(ntimestamp#,'00:00') at local) as date),
    To get index access the metric would have to avoid the use of the view. I have changed the query like this:
    SELECT /*+ INDEX(a I_AUD3) */ TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD
    HH24:MI:SS TZD') AS curr_timestamp, COUNT(userid) AS failed_count
    FROM sys.aud$ a
    WHERE returncode != 0
    and action# between 100 and 102
    AND ntimestamp# >= systimestamp at time zone 'GMT' - 30/1440
    Is this the correct way to do it?
    Could you comment on this?

    The query is run by Grid Control (or DB Console) to count the metric related to audit sessions, which is ON by default in 11g. To decrease the impact of this query you should purge the aud$ table regularly.
    The best way is to use DBMS_AUDIT_MGMT to periodically purge the data older than "whatever date". If you don't need the audit info, you can simply truncate aud$.

  • Performance issue in the table

    Hi All
    I have one table which is based on a VO. This VO has 150 attributes, and I am facing a performance issue.
    The first question is: I need to display only 15 attributes in the table, but the other attributes are also required.
    So I have two options:
    1. make the others form values
    or
    2. make them messageTextInput and set rendered to false.
    I just wanted to know which will perform better.
    Please help.

    These attributes have some default values.
    I need to pass these values to the API on some action.
    If I don't keep them on my page,
    will row.getAttribute("XYZ") still carry its value?

  • Performance issue with the Select query

    Hi,
    I have an issue with the performance of a select query.
    In table AFRU, AUFNR is not a key field.
    So I selected the low and high values into s_rueck and used it in the WHERE condition.
    Still I have an issue with the performance.
    SELECT RUECK
    RMZHL
    IEDD
    AUFNR
    STOKZ
    STZHL
    FROM AFRU INTO table t_AFRU
    FOR ALL ENTRIES IN T_ZSCPRT100
    WHERE RUECK IN S_RUECK AND
    AUFNR = T_ZSCPRT100-AUFNR AND
    STOKZ = SPACE AND
    STZHL = 0.
    I had also checked by creating an index for AUFNR in the table AFRU... it does not help.
    Is there any way that we can declare a key field while declaring the internal table?
    Any suggestions to fix the performance issue are appreciated!
    Regards,
    Kittu

    Hi,
    Thank you for your quick response!
    Rui Dantas, I have a little confusion... this is my code below:
    data : t_zscprt type standard table of ty_zscprt,
           wa_zscprt type ty_zscprt.
    types : BEGIN OF ty_zscprt100,
            aufnr type zscprt100-aufnr,
            posnr  type zscprt100-posnr,
            ezclose type zscprt100-ezclose,
            serialnr type zscprt100-serialnr,
            lgort type zscprt100-lgort,
          END OF ty_zscprt100.
    data : t_zscprt100 type standard table of ty_zscprt100,
           wa_zscprt100 type ty_zscprt100.
    Types: begin of ty_afru,
                rueck type CO_RUECK,
                rmzhl type CO_RMZHL,
                iedd  type RU_IEDD,
                aufnr type AUFNR,
                stokz type CO_STOKZ,
                stzhl type CO_STZHL,
             end of ty_afru.
    data : t_afru type STANDARD TABLE OF ty_afru,
            WA_AFRU TYPE TY_AFRU.
    SELECT AUFNR
            POSNR
            EZCLOSE
            SERIALNR
            LGORT
            FROM ZSCPRT100 INTO TABLE T_ZSCPRT100
            FOR ALL ENTRIES IN T_ZSCPRT
            WHERE   AUFNR = T_ZSCPRT-PRTNUM
            AND   SERIALNR IN S_SERIAL
            AND    LGORT   IN S_LGORT.
        IF sy-subrc <> 0.
           MESSAGE ID 'Z2' TYPE 'I' NUMBER '41'. "BDCG87
           stop."BDCG87
        ENDIF.
      ENDIF.
    SELECT    RUECK
                  RMZHL
                  IEDD
                  AUFNR
                  STOKZ
                  STZHL
                  FROM AFRU INTO TABLE T_AFRU
                  FOR ALL ENTRIES IN T_ZSCPRT100
                  WHERE RUECK IN S_RUECK     AND
                        AUFNR = T_ZSCPRT100-AUFNR AND
                        STOKZ = SPACE AND
                        STZHL = 0.
    Using AUFNR, get AUFPL from AFKO.
    Using AUFPL, get RUECK from AFVC.
    Using RUECK, read AFRU.
    In other words, one select joining AFKO <-> AFVC <-> AFRU should get what you want (a sketch follows at the end of this message).
    This is my select query; would you want me to write another select query to meet this criteria?
    From AUFNR I will get AUFPL from AFKO; based on AUFPL I will get RUECK; based on RUECK I need to read AFRU... but I need to select a few fields from AFRU based on AUFNR.
    Any suggestions will be appreciated!
    Regards
    Kittu
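    Following the AFKO -> AFVC -> AFRU path suggested above, a minimal sketch of such a join, assuming the standard links AFKO-AUFPL = AFVC-AUFPL and AFVC-RUECK = AFRU-RUECK (check the exact key fields in your system):
    IF NOT t_zscprt100[] IS INITIAL.
      SELECT r~rueck r~rmzhl r~iedd r~aufnr r~stokz r~stzhl
             INTO TABLE t_afru
             FROM afko AS k
             INNER JOIN afvc AS v ON v~aufpl = k~aufpl
             INNER JOIN afru AS r ON r~rueck = v~rueck
             FOR ALL ENTRIES IN t_zscprt100
             WHERE k~aufnr = t_zscprt100-aufnr
             AND   r~rueck IN s_rueck
             AND   r~stokz = space
             AND   r~stzhl = 0.
    ENDIF.
    One order can have several operations in AFVC, so this may return more than one confirmation per AUFNR, just like the original FOR ALL ENTRIES select.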

  • Performance issue with the table use vrkpa

    Hi.
    Here is the selection criteria that I am using. The table VRKPA is only used to map the tables KNA1 and VBRK; VBRK and KNA1 do not have a direct primary key relationship.
    Please check and let me know why this VRKPA access is taking time and how I can improve the performance: from KNA1 I am fetching data very easily, while I am fetching nothing from VRKPA, and I am fetching FKDAT from VBRK.
    The idea behind using these tables is just, for one KUNNR (from KNA1), to get the relevant entries based on FKDAT (a selection screen input field). Please suggest.
        SELECT kunnr
               name1
               land1
               regio
               ktokd
               FROM kna1
               INTO TABLE it_kna1
               FOR ALL ENTRIES IN it_knb1
               WHERE kunnr = it_knb1-kunnr
               AND ktokd = '0003'.
        IF sy-subrc = 0.
          SORT it_kna1 BY kunnr.
          DELETE ADJACENT DUPLICATES FROM it_kna1 COMPARING kunnr.
        ENDIF.
      ENDIF.
      IF NOT it_kna1[] IS INITIAL.
        SELECT kunnr
               vbeln
               FROM vrkpa
               INTO TABLE it_vrkpa
               FOR ALL ENTRIES IN it_kna1
               WHERE kunnr = it_kna1-kunnr.
        IF sy-subrc = 0.
          SORT it_vrkpa BY kunnr vbeln.
        ENDIF.
      ENDIF.
      IF NOT it_vrkpa[] IS INITIAL.
        SELECT vbeln
               kunrg
               fkdat
              kkber
               bukrs
               FROM vbrk
               INTO TABLE it_vbrk
               FOR ALL ENTRIES IN it_vrkpa
               WHERE vbeln = it_vrkpa-vbeln.
        IF sy-subrc = 0.
          DELETE it_vbrk WHERE fkdat NOT IN s_indate.
          DELETE it_vbrk WHERE fkdat NOT IN s_chdate.
          DELETE it_vbrk WHERE bukrs NOT IN s_ccode.
          SORT it_vbrk DESCENDING BY vbeln fkdat.
        ENDIF.
      ENDIF.

    Hi,
    Transaction SE11
    Table VRKPA => Display (not Change)
    Click on "Indexes"
    Click on "Create" (if your system is Basis 7.00, then click on the "Create" drop-down icon and choose "Create extension index")
    Choose a name (up to 3 characters, starting with Z)
    Enter a description for the index
    Enter the field names of the index
    Choose "Save" (prompts for transport request)
    Choose "Activate"
    If after "Activate" the status shows "Index exists in database system <...>", then you have nothing more to do. If the table is very large, the activation will not create the index in the database and the status remains "Index does not exist". In that case:
    - Transaction SE14
    - Table VRKPA -> Edit
    - Choose "Indexes" and select your new index
    - Choose "Create database index"; mark the option "Background"
    - Wait until the job is finished and check in SE11 that the index now exists in the DB
    You don't have to do anything to your program because Oracle should choose the new index automatically. Run an SQL Trace to make sure.
    Rgds,
    Mark

  • Performance issue of the query

    Hi All,
    Please help me to avoid sequential access on the table VBAK,
    as I am using this query on VBAK, VBUK, VBPA and VBFA.
    The query is as follows.
    How can I improve the performance of this query? It is very urgent; please give me a sample query if possible.
    SELECT a~vbeln
    a~erdat
    a~ernam
    a~vkorg
    a~vkbur
    a~kunnr
    b~kunnr AS kunnr_shp
    b~adrnr
    c~gbstk
    c~fkstk
    d~vbeln AS vbeln_i
    INTO CORRESPONDING FIELDS OF TABLE t_z92sales
    FROM vbak AS a INNER JOIN vbuk AS c ON a~vbeln = c~vbeln
    INNER JOIN vbpa AS b ON a~vbeln = b~vbeln
    AND b~posnr = 0 AND b~parvw = 'WE'
    LEFT OUTER JOIN vbfa AS d ON a~vbeln = d~vbelv AND d~vbtyp_n = 'M'
    WHERE ( a~vbeln BETWEEN fvbeln AND tvbeln )
    AND a~vbtyp = 'C'
    AND a~vkorg IN vkorg_s
    AND c~gbstk IN gbstk_s.

    Hi Ramada,
    Try using the following code.
    RANGES: r_vbeln FOR vbka-vbeln.
    DATA: w_index TYPE sy-tabix,
          BEGIN OF t_vbak OCCURS 0,
            vbeln  TYPE vbak-vbeln,
            erdat  TYPE vbak-erdat,
            ernam  TYPE vbak-ernam,
            vkorg  TYPE vbak-vkorg,
            vkbur  TYPE vbak-vkbur,
            kunnr  TYPE vbak-kunnr,
            del(1) TYPE c         ,
          END OF t_vbak,
          BEGIN OF t_vbuk OCCURS 0,
            vbeln TYPE vbuk-vbeln,
            gbstk TYPE vbuk-gbstk,
            fkstk TYPE vbuk-fkstk,
          END OF t_vbuk,
          BEGIN OF t_vbpa OCCURS 0,
            vbeln TYPE vbpa-vbeln,
            kunnr TYPE vbpa-kunnr,
            adrnr TYPE vbpa-adrnr,
          END OF t_vbpa,
          BEGIN OF t_vbfa OCCURS 0,
            vbelv TYPE vbfa-vbelv,
            vbeln TYPE vbfa-vbeln,
          END OF t_vbfa,
          BEGIN OF t_z92sales OCCURS 0,
            vbeln     TYPE vbak-vbeln,
            erdat     TYPE vbak-erdat,
            ernam     TYPE vbak-ernam,
            vkorg     TYPE vbak-vkorg,
            vkbur     TYPE vbak-vkbur,
            kunnr     TYPE vbak-kunnr,
            gbstk     TYPE vbuk-gbstk,
            fkstk     TYPE vbuk-fkstk,
            kunnr_shp TYPE vbpa-kunnr,
            adrnr     TYPE vbpa-adrnr,
            vbeln_i   TYPE vbfa-vbeln,
          END OF t_z92sales.
    IF NOT fvbeln IS INITIAL
    OR NOT tvbeln IS INITIAL.
      REFRESH r_vbeln.
      r_vbeln-sign   = 'I'.
      IF fvbeln IS INITIAL.
        r_vbeln-option = 'EQ'.
        r_vbeln-low    = tvbeln.
      ELSEIF tvbeln IS INITIAL.
        r_vbeln-option = 'EQ'.
        r_vbeln-low    = fvbeln.
      ELSE.
        r_vbeln-option = 'BT'.
        r_vbeln-low    = fvbeln.
        r_vbeln-high   = tvbeln.
      ENDIF.
      APPEND r_vbeln.
      CLEAR r_vbeln.
      SELECT vbeln
             erdat
             ernam
             vkorg
             vkbur
             kunnr
        FROM vbak
        INTO TABLE t_vbak
        WHERE vbeln IN r_vbeln
        AND   vbtyp EQ 'C'
        AND   vkorg IN vkorg_s.
      IF sy-subrc EQ 0.
        SORT t_vbak BY vbeln.
        SELECT vbeln
               gbstk
               fkstk
        FROM vbuk
        INTO TABLE t_vbuk
        FOR ALL ENTRIES IN t_vbak
        WHERE vbeln EQ t_vbak-vbeln
        AND   gbstk IN gbstk_s.
        IF sy-subrc EQ 0.
          SORT t_vbuk BY vbeln.
          LOOP AT t_vbak.
            w_index = sy-tabix.
            READ TABLE t_vbuk WITH KEY vbeln = t_vbak-vbeln
                                       BINARY SEARCH
                                       TRANSPORTING NO FIELDS.
            IF sy-subrc NE 0.
              t_vbak-del = 'X'.
              MODIFY t_vbak INDEX w_index TRANSPORTING del.
            ENDIF.
          ENDLOOP.
          DELETE t_vbak WHERE del EQ 'X'.
        ENDIF.
      ENDIF.
    ELSE.
      SELECT vbeln
             gbstk
             fkstk
      FROM vbuk
      INTO TABLE t_vbuk
      WHERE vbtyp EQ 'C'
      AND   gbstk IN gbstk_s.
      IF sy-subrc EQ 0.
        SORT t_vbuk BY vbeln.
        SELECT vbeln
               erdat
               ernam
               vkorg
               vkbur
               kunnr
          FROM vbak
          INTO TABLE t_vbak
          FOR ALL ENTRIES IN t_vbuk
          WHERE vbeln EQ t_vbuk-vbeln
          AND   vkorg IN vkorg_s.
        IF sy-subrc EQ 0.
          SORT t_vbak BY vbeln.
        ENDIF.
      ENDIF.
    ENDIF.
    IF NOT t_vbak[] IS INITIAL.
      SELECT vbeln
             kunnr
             adrnr
        FROM vbpa
        INTO TABLE t_vbpa
        FOR ALL ENTRIES IN t_vbak
        WHERE vbeln EQ t_vbak-vbeln
        AND   posnr EQ '000000'
        AND   parvw EQ 'WE'.
      IF sy-subrc EQ 0.
        SORT t_vbpa BY vbeln.
        SELECT vbelv
               vbeln
          FROM vbfa
          INTO TABLE t_vbfa
          FOR ALL ENTRIES IN t_vbpa
          WHERE vbelv EQ t_vbpa-vbeln
          AND   vbtyp_n EQ 'M'.
        IF sy-subrc EQ 0.
          SORT t_vbfa BY vbelv.
        ENDIF.
      ENDIF.
    ENDIF.
    REFRESH t_z92sales.
    LOOP AT t_vbak.
      READ TABLE t_vbuk WITH KEY vbeln = t_vbak-vbeln
                                 BINARY SEARCH
                                 TRANSPORTING
                                   gbstk
                                   fkstk.
      IF sy-subrc EQ 0.
        READ TABLE t_vbpa WITH KEY vbeln = t_vbak-vbeln
                                           BINARY SEARCH
                                           TRANSPORTING
                                             kunnr
                                             adrnr.
        IF sy-subrc EQ 0.
          READ TABLE t_vbfa WITH KEY vbelv = t_vbak-vbeln
                                     BINARY SEARCH
                                     TRANSPORTING
                                       vbeln.
          IF sy-subrc EQ 0.
            t_z92sales-vbeln     = t_vbak-vbeln.
            t_z92sales-erdat     = t_vbak-erdat.
            t_z92sales-ernam     = t_vbak-ernam.
            t_z92sales-vkorg     = t_vbak-vkorg.
            t_z92sales-vkbur     = t_vbak-vkbur.
            t_z92sales-kunnr     = t_vbak-kunnr.
            t_z92sales-gbstk     = t_vbuk-gbstk.
            t_z92sales-fkstk     = t_vbuk-fkstk.
            t_z92sales-kunnr_shp = t_vbpa-kunnr.
            t_z92sales-adrnr     = t_vbpa-adrnr.
            t_z92sales-vbeln_i   = t_vbfa-vbeln.
            APPEND t_z92sales.
            CLEAR  t_z92sales.
          ENDIF.
        ENDIF.
      ENDIF.
    ENDLOOP.

  • Performance issue with the code

    hi,
    I have the code below. For printing, it is taking a lot of time.
    In what way can I improve the performance of the piece of code below, and what changes should I make to improve it?
    Kindly help me.
    form get_komgd.
      tables: kotd994.                     "kondd
      data : tfill_auswahl type i.
      data: begin of auswahl occurs 10,
               kappl like kotd994-kappl ,
               kschl like kotd994-kschl ,
               vkorg like kotd994-vkorg ,
               vtweg like kotd994-vtweg ,
               spart like kotd994-spart ,
               kvgr1 like kotd994-kvgr1 ,
               matwa like kotd994-matwa ,
               datbi like kotd994-datbi ,
               datab like kotd994-datab ,
               knumh like kotd994-knumh ,
               smatn like kondd-smatn,
               meins like kondd-meins,
                 sugrd like kondd-sugrd,
         end of auswahl.
      tables: kondd.
      select * from  kondd
             where  smatn       = ltap-matnr.
        select * from  kotd994
               where  kappl       = 'V'
             and    kschl       = vbak-kschl
               and    vkorg       = vbak-vkorg
               and    vtweg       = vbak-vtweg
         and    spart       = vbak-spart
               and    kvgr1       = vbak-kvgr1
         and    matwa       = vbak-matwa
               and    datbi       >= sy-datum
               and    datab       <= sy-datum
               and    knumh       = kondd-knumh    .
        endselect.
        if sy-subrc = 0
        and kotd994-kvgr1(1) = 'Z'.
          move-corresponding kotd994 to auswahl.
          move-corresponding kondd to auswahl.
          append auswahl   .
        endif.                             " sy-subrc = 0.
    write: / auswahl.
      endselect.
      describe table auswahl lines tfill_auswahl.
      if tfill_auswahl = 1.
        komgd-matwa = auswahl-matwa.
      else.
        clear komgd-matwa.
      endif.                               " tfill_auswahl = 1.
    endform.                               " ZHX_GET_COSTUMER_NR

    Hi,
    Using two nested SELECT ... ENDSELECT loops will take more time.
    Rather, use something like this sample code:
    SELECT * FROM kondd
           INTO CORRESPONDING FIELDS OF TABLE auswahl
           WHERE smatn = ltap-matnr.
    IF NOT auswahl[] IS INITIAL.
      SELECT * FROM kotd994
             APPENDING CORRESPONDING FIELDS OF TABLE auswahl
             FOR ALL ENTRIES IN auswahl
             WHERE kappl = 'V'
             AND   vkorg = vbak-vkorg
             AND   vtweg = vbak-vtweg
             AND   kvgr1 = vbak-kvgr1
             AND   datbi >= sy-datum
             AND   datab <= sy-datum
             AND   knumh = auswahl-knumh.
    ENDIF.
    However, keep the field KNUMH in the structure of auswahl as well, so that we can pass the values to the second select directly.
    This will improve the performance very well.
    If possible, try to use the primary key combinations in the WHERE clause while extracting the data.

  • Performance Issue with the query

    Hi Experts,
    While working on PeopleSoft, today I was stuck with a problem where one of the queries is performing really badly. Even with all the statistics updated, the query is not performing well. On one of the tables, the query spends a lot of time doing IO (db file sequential read wait). And if I delete the stats for that table and let dynamic sampling take place, the query just works fine and there is no IO wait on the table.
    Here is the query
    SELECT A.BUSINESS_UNIT_PC, A.PROJECT_ID,  E.DESCR,  A.EMPLID,  D.NAME,  C.DESCR,  A.TRC,
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( A.EST_GROSS),
      DECODE( A.TRC, 'ROVA1', 0, 'ROVA2', 0, ( SUM( A.EST_GROSS)/100) * 9.75),
      '2012-07-01',
      '2012-07-31',
      SUM(     DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
      DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
      DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
      DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'),
          TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'),
          TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY')  || ' / '  || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
      C.TRC,  TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
      E.BUSINESS_UNIT, 
      E.PROJECT_ID
    FROM
         PS_TL_PAYABLE_TIME A, 
         PS_TL_TRC_TBL C,
         PS_PROJECT E, 
         PS_PERSONAL_DATA D,
           PS_PERALL_SEC_QRY D1, 
         PS_JOB F, 
         PS_EMPLMT_SRCH_QRY F1
    WHERE
         D.EMPLID = D1.EMPLID
    AND      D1.OPRID   = 'TMANI'
    AND      F.EMPLID   = F1.EMPLID
    AND      F.EMPL_RCD = F1.EMPL_RCD
    AND      F1.OPRID   = 'TMANI'
    AND      A.DUR BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD') AND TO_DATE('2012-07-31','YYYY-MM-DD')
    AND      C.TRC   = A.TRC
    AND      C.EFFDT =  (SELECT
                   MAX(C_ED.EFFDT) 
                  FROM
                   PS_TL_TRC_TBL C_ED
                     WHERE
                   C.TRC = C_ED.TRC 
                    AND C_ED.EFFDT <= SYSDATE)
    AND      E.BUSINESS_UNIT = A.BUSINESS_UNIT_PC
    AND      E.PROJECT_ID    = A.PROJECT_ID
    AND      A.EMPLID        = D.EMPLID
    AND      A.EMPLID        = F.EMPLID
    AND      A.EMPL_RCD      = F.EMPL_RCD
    AND      F.EFFDT         =  (SELECT
                        MAX(F_ED.EFFDT) 
                      FROM
                        PS_JOB F_ED
                        WHERE
                        F.EMPLID = F_ED.EMPLID 
                      AND      F.EMPL_RCD = F_ED.EMPL_RCD
                        AND      F_ED.EFFDT <= SYSDATE)
    AND      F.EFFSEQ =  (SELECT
                   MAX(F_ES.EFFSEQ) 
                   FROM
                   PS_JOB F_ES
                     WHERE
                   F.EMPLID = F_ES.EMPLID 
                   AND F.EMPL_RCD = F_ES.EMPL_RCD
                     AND F.EFFDT  = F_ES.EFFDT)
    AND      F.GP_PAYGROUP  = DECODE(' ', ' ', F.GP_PAYGROUP, ' ')
    AND      A.CURRENCY_CD  = DECODE(' ', ' ', A.CURRENCY_CD, ' ')
    AND      DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')) = DECODE(' ', ' ', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')), 'L', 'L', 'E', 'E', 'A', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')))
    AND ( A.EMPLID, A.EMPL_RCD) IN  (SELECT
                             B.EMPLID,
                             B.EMPL_RCD 
                        FROM
                             PS_TL_GROUP_DTL B
                                    WHERE
                         B.TL_GROUP_ID = DECODE('ER012', ' ', B.TL_GROUP_ID, 'ER012'))
    AND      E.PROJECT_USER1   = DECODE(' ', ' ', E.PROJECT_USER1, ' ')
    AND      A.PROJECT_ID      = DECODE(' ', ' ', A.PROJECT_ID, ' ')
    AND      A.PAYABLE_STATUS <>
      CASE
        WHEN to_number(TO_CHAR(sysdate, 'DD')) < 15
        THEN
          CASE
            WHEN A.DUR > last_day(add_months(sysdate, -2))
            THEN ' '
            ELSE 'NA'
          END
        ELSE
          CASE
            WHEN A.DUR > last_day(add_months(sysdate, -1))
            THEN ' '
            ELSE 'NA'
          END
      END
    AND      A.EMPLID = DECODE(' ', ' ', A.EMPLID, ' ')
    GROUP BY A.BUSINESS_UNIT_PC,
           A.PROJECT_ID,
           E.DESCR,
         A.EMPLID,
           D.NAME,
           C.DESCR,
           A.TRC,
           '2012-07-01',
           '2012-07-31',
           DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
           DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
           DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-31',      'YYYY-MM-DD'), 'Mon ')
           || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
           || TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY')  || ' / '
           || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')  || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
           C.TRC,  TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
           E.BUSINESS_UNIT,  E.PROJECT_ID
    HAVING SUM( A.EST_GROSS) <> 0
    ORDER BY 1,
            2,
             5,
              6;
    Here is the screenshot for IO wait:
    [https://lh4.googleusercontent.com/-6PFW2FSK3yE/UCrwUbZ0pvI/AAAAAAAAAPA/eHM48AOC0Uo]

    Here is the execution plan with all the statistics present
    PLAN_TABLE_OUTPUT
    Plan hash value: 1575300420
    | Id  | Operation                                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                            |                    |     1 |   237 |  2703   (1)| 00:00:33 |
    |*  1 |  FILTER                                     |                    |       |       |            |          |
    |   2 |   SORT GROUP BY                             |                    |     1 |   237 |            |          |
    |   3 |    CONCATENATION                            |                    |       |       |            |          |
    |*  4 |     FILTER                                  |                    |       |       |            |          |
    |*  5 |      FILTER                                 |                    |       |       |            |          |
    |*  6 |       HASH JOIN                             |                    |     1 |   237 |  1695   (1)| 00:00:21 |
    |*  7 |        HASH JOIN                            |                    |     1 |   204 |  1689   (1)| 00:00:21 |
    |*  8 |         HASH JOIN SEMI                      |                    |     1 |   193 |  1352   (1)| 00:00:17 |
    |   9 |          NESTED LOOPS                       |                    |       |       |            |          |
    |  10 |           NESTED LOOPS                      |                    |     1 |   175 |  1305   (1)| 00:00:16 |
    |* 11 |            HASH JOIN                        |                    |     1 |   148 |  1304   (1)| 00:00:16 |
    |  12 |             JOIN FILTER CREATE              | :BF0000            |       |       |            |          |
    |  13 |              NESTED LOOPS                   |                    |       |       |            |          |
    |  14 |               NESTED LOOPS                  |                    |     1 |   134 |   967   (1)| 00:00:12 |
    |  15 |                NESTED LOOPS                 |                    |     1 |   103 |   964   (1)| 00:00:12 |
    |* 16 |                 TABLE ACCESS FULL           | PS_PROJECT         |   197 |  9062 |   278   (1)| 00:00:04 |
    |* 17 |                 TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME |     1 |    57 |     7   (0)| 00:00:01 |
    |* 18 |                  INDEX RANGE SCAN           | IDX$$_C44D0007     |    16 |       |     3   (0)| 00:00:01 |
    |* 19 |                INDEX RANGE SCAN             | IDX$$_3F450003     |     1 |       |     2   (0)| 00:00:01 |
    |* 20 |               TABLE ACCESS BY INDEX ROWID   | PS_JOB             |     1 |    31 |     3   (0)| 00:00:01 |
    |  21 |             VIEW                            | PS_EMPLMT_SRCH_QRY |  5428 | 75992 |   336   (2)| 00:00:05 |
    PLAN_TABLE_OUTPUT
    |  22 |              SORT UNIQUE                    |                    |  5428 |   275K|   336   (2)| 00:00:05 |
    |* 23 |               FILTER                        |                    |       |       |            |          |
    |  24 |                JOIN FILTER USE              | :BF0000            | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  25 |                 NESTED LOOPS                |                    | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  26 |                  TABLE ACCESS BY INDEX ROWID| PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 27 |                   INDEX UNIQUE SCAN         | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |* 28 |                  TABLE ACCESS FULL          | PS_SJT_PERSON      | 55671 |  1739K|   333   (1)| 00:00:04 |
    |  29 |                CONCATENATION                |                    |       |       |            |          |
    |  30 |                 NESTED LOOPS                |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |* 31 |                  INDEX FAST FULL SCAN       | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |* 32 |                  INDEX UNIQUE SCAN          | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  33 |                 NESTED LOOPS                |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 34 |                  INDEX UNIQUE SCAN          | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 35 |                  INDEX UNIQUE SCAN          | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  36 |                NESTED LOOPS                 |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 37 |                 INDEX UNIQUE SCAN           | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 38 |                 INDEX UNIQUE SCAN           | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |* 39 |            INDEX UNIQUE SCAN                | PS_PERSONAL_DATA   |     1 |       |     0   (0)| 00:00:01 |
    |  40 |           TABLE ACCESS BY INDEX ROWID       | PS_PERSONAL_DATA   |     1 |    27 |     1   (0)| 00:00:01 |
    |* 41 |          INDEX FAST FULL SCAN               | PS_TL_GROUP_DTL    |   323 |  5814 |    47   (3)| 00:00:01 |
    |  42 |         VIEW                                | PS_PERALL_SEC_QRY  |  7940 | 87340 |   336   (2)| 00:00:05 |
    |  43 |          SORT UNIQUE                        |                    |  7940 |   379K|   336   (2)| 00:00:05 |
    |* 44 |           FILTER                            |                    |       |       |            |          |
    |* 45 |            FILTER                           |                    |       |       |            |          |
    |  46 |             NESTED LOOPS                    |                    | 55671 |  2663K|   335   (1)| 00:00:05 |
    |  47 |              TABLE ACCESS BY INDEX ROWID    | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 48 |               INDEX UNIQUE SCAN             | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |* 49 |              TABLE ACCESS FULL              | PS_SJT_PERSON      | 55671 |  1576K|   333   (1)| 00:00:04 |
    |  50 |            CONCATENATION                    |                    |       |       |            |          |
    |  51 |             NESTED LOOPS                    |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |* 52 |              INDEX FAST FULL SCAN           | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |* 53 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  54 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 55 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 56 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  57 |            CONCATENATION                    |                    |       |       |            |          |
    |  58 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 59 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 60 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  61 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 62 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 63 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  64 |            NESTED LOOPS                     |                    |     1 |    63 |     3   (0)| 00:00:01 |
    |  65 |             INLIST ITERATOR                 |                    |       |       |            |          |
    |* 66 |              INDEX RANGE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     2   (0)| 00:00:01 |
    |* 67 |             INDEX RANGE SCAN                | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |  68 |        TABLE ACCESS FULL                    | PS_TL_TRC_TBL      |   922 | 30426 |     6   (0)| 00:00:01 |
    |  69 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |* 70 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    |  71 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |* 72 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    |  73 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |* 74 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    |* 75 |     FILTER                                  |                    |       |       |            |          |
    PLAN_TABLE_OUTPUT
    |* 76 |      FILTER                                 |                    |       |       |            |          |
    |* 77 |       HASH JOIN                             |                    |     1 |   237 |   974   (2)| 00:00:12 |
    |* 78 |        HASH JOIN                            |                    |     1 |   226 |   637   (1)| 00:00:08 |
    |* 79 |         HASH JOIN                           |                    |     1 |   193 |   631   (1)| 00:00:08 |
    |  80 |          NESTED LOOPS                       |                    |       |       |            |          |
    |  81 |           NESTED LOOPS                      |                    |     1 |   179 |   294   (1)| 00:00:04 |
    |  82 |            NESTED LOOPS                     |                    |     1 |   152 |   293   (1)| 00:00:04 |
    |  83 |             NESTED LOOPS                    |                    |     1 |   121 |   290   (1)| 00:00:04 |
    |* 84 |              HASH JOIN SEMI                 |                    |     1 |    75 |   289   (1)| 00:00:04 |
    |* 85 |               TABLE ACCESS BY INDEX ROWID   | PS_TL_PAYABLE_TIME |     1 |    57 |   242   (0)| 00:00:03 |
    |* 86 |                INDEX SKIP SCAN              | IDX$$_C44D0007     |   587 |       |   110   (0)| 00:00:02 |
    |* 87 |               INDEX FAST FULL SCAN          | PS_TL_GROUP_DTL    |   323 |  5814 |    47   (3)| 00:00:01 |
    |* 88 |              TABLE ACCESS BY INDEX ROWID    | PS_PROJECT         |     1 |    46 |     1   (0)| 00:00:01 |
    |* 89 |               INDEX UNIQUE SCAN             | PS_PROJECT         |     1 |       |     0   (0)| 00:00:01 |
    |* 90 |             TABLE ACCESS BY INDEX ROWID     | PS_JOB             |     1 |    31 |     3   (0)| 00:00:01 |
    |* 91 |              INDEX RANGE SCAN               | IDX$$_3F450003     |     1 |       |     2   (0)| 00:00:01 |
    |* 92 |            INDEX UNIQUE SCAN                | PS_PERSONAL_DATA   |     1 |       |     0   (0)| 00:00:01 |
    |  93 |           TABLE ACCESS BY INDEX ROWID       | PS_PERSONAL_DATA   |     1 |    27 |     1   (0)| 00:00:01 |
    |  94 |          VIEW                               | PS_EMPLMT_SRCH_QRY |  5428 | 75992 |   336   (2)| 00:00:05 |
    |  95 |           SORT UNIQUE                       |                    |  5428 |   275K|   336   (2)| 00:00:05 |
    |* 96 |            FILTER                           |                    |       |       |            |          |
    |  97 |             NESTED LOOPS                    |                    | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  98 |              TABLE ACCESS BY INDEX ROWID    | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 99 |               INDEX UNIQUE SCAN             | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |*100 |              TABLE ACCESS FULL              | PS_SJT_PERSON      | 55671 |  1739K|   333   (1)| 00:00:04 |
    | 101 |             CONCATENATION                   |                    |       |       |            |          |
    | 102 |              NESTED LOOPS                   |                    |     1 |    63 |     2   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |*103 |               INDEX FAST FULL SCAN          | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |*104 |               INDEX UNIQUE SCAN             | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 105 |              NESTED LOOPS                   |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*106 |               INDEX UNIQUE SCAN             | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*107 |               INDEX UNIQUE SCAN             | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 108 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*109 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*110 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 111 |         TABLE ACCESS FULL                   | PS_TL_TRC_TBL      |   922 | 30426 |     6   (0)| 00:00:01 |
    | 112 |        VIEW                                 | PS_PERALL_SEC_QRY  |  7940 | 87340 |   336   (2)| 00:00:05 |
    | 113 |         SORT UNIQUE                         |                    |  7940 |   379K|   336   (2)| 00:00:05 |
    |*114 |          FILTER                             |                    |       |       |            |          |
    |*115 |           FILTER                            |                    |       |       |            |          |
    | 116 |            NESTED LOOPS                     |                    | 55671 |  2663K|   335   (1)| 00:00:05 |
    | 117 |             TABLE ACCESS BY INDEX ROWID     | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |*118 |              INDEX UNIQUE SCAN              | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |*119 |             TABLE ACCESS FULL               | PS_SJT_PERSON      | 55671 |  1576K|   333   (1)| 00:00:04 |
    | 120 |           CONCATENATION                     |                    |       |       |            |          |
    | 121 |            NESTED LOOPS                     |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |*122 |             INDEX FAST FULL SCAN            | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |*123 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 124 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*125 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*126 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 127 |           CONCATENATION                     |                    |       |       |            |          |
    | 128 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*129 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |*130 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 131 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*132 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*133 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 134 |           NESTED LOOPS                      |                    |     1 |    63 |     3   (0)| 00:00:01 |
    | 135 |            INLIST ITERATOR                  |                    |       |       |            |          |
    |*136 |             INDEX RANGE SCAN                | PSASJT_CLASS_ALL   |     1 |    39 |     2   (0)| 00:00:01 |
    |*137 |            INDEX RANGE SCAN                 | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    | 138 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |*139 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    | 140 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |*141 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    | 142 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |*143 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    | 144 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |*145 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    | 146 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |*147 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    | 148 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |*149 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------------------------------
    Most of the IO wait occurs at step 17, though the explain plan simply does not show this. But when we run this query from the PeopleSoft application (online page), it is visible through Grid Control SQL Monitoring.
    Second part of the Question continues....

  • Performance issue with the FM FKK_CLEARING_PROPOSAL_GEN_0110.

    Hi Experts,
    I am passing some 60,000 transactions to the FM FKK_CLEARING_PROPOSAL_GEN_0110 and it takes a lot of time to process them.
    Please let me know if there is any way, or any SAP Notes to be implemented, so that I can reduce the time this FM takes to process the same number of transactions.
    Regards,
    Karthik

    Hi,
    To determine the root cause you can use the ST05 Performance Trace, which records database accesses, locking activities, and remote calls of reports and transactions in a trace file and displays the performance log as a list.
    Once you are done with this, you can check whether any modifications are required for poorly performing code (like a PERFORM commit routine ON COMMIT), and whether all internal tables from which data was updated are initialized again at the end of the commit routine, to prevent a duplicate update in the next call.
    You can then adjust this module for event 110 (Automatic Clearing Proposal) using transaction FQEVENTS,
    or at Financial Accounting -> Contract Accounts Receivable and Payable -> Program Enhancements -> Define Customer-Specific Function Modules for event 110.
    You can also activate the following fields in table T_FKKCL for the clearing amounts to be assigned:
    XAKTP 'X' (marked) Item will be used
    XAKTS 'X' (marked) Specified cash discount will be used
    AUGBW Assigned amount (gross)
    ASKTW Assigned cash discount amount
    SKTPA Accepted cash discount percentage (which is similar to SKTPZ)
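    As an illustration only, a hypothetical piece of event 110 logic that sets these fields on the proposal items could look like the sketch below; T_FKKCL stands for the item table handed to the event module, and using BETRW as the amount source is an assumption, not something taken from this thread:
    " Hypothetical sketch only - not the standard event 110 module.
    DATA: t_fkkcl  TYPE STANDARD TABLE OF fkkcl,  " proposal items (normally the event parameter)
          wa_fkkcl TYPE fkkcl.
    LOOP AT t_fkkcl INTO wa_fkkcl.
      wa_fkkcl-xaktp = 'X'.               " item will be used
      wa_fkkcl-xakts = 'X'.               " specified cash discount will be used
      wa_fkkcl-augbw = wa_fkkcl-betrw.    " assigned gross amount (BETRW assumed to hold the open amount)
      MODIFY t_fkkcl FROM wa_fkkcl.
    ENDLOOP.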
    Alternatively, check the clearing control rules at
    Contract Accounts Receivable and Payable -> Basic Functions -> Open Item Management -> Clearing Control.
    By assigning a clearing variant to the clearing type of the clearing process, you can make use of clearing control to assign open items automatically by:
    1. Selection of open items for clearing
    2. Grouping of open items for joint clearing
    3. Sorting of open items for the order of processing for individual item groups or items within the group
    4. Split of payment amount according to different methods for partial clearing
    Thanks,
    Sagar

  • Performance issue with the ztable.

    I have created a Z table with 5 fields and generated the table maintenance. My client complains that the performance is very slow while updating or adding new entries in the table. They are doing manual entry.
    Is there any measure that could be taken to improve the performance?

    hi,
    how many entries?
    You can maintain the Z table using SM30 with "restrict data range";
    I think that will improve performance.
    btw @Shiva,
    I think only one person can maintain a table with SM30 at a time, not several persons at the same time.
    A.
