Performance issue with MSEG table

Hi all,
I need to fetch materials (MATNR) from MSEG based on the service order number (AUFNR) entered on the selection screen, but there is a performance issue with this. How can I overcome it?
Regards,
Amit

Hi,
There could be various reasons for a performance issue with MSEG:
1) Database statistics of tables and indexes are not up to date. Because of this, the wrong index is chosen during execution.
2) Improper indexes: there is no index containing the fields in the WHERE clause of the statement. Because of this, the CBO may have chosen the wrong index and done a range scan.
3) An optimizer bug in Oracle.
4) The table is very large; consider archiving.
Better to switch on an ST05 trace before you run the statement; it will show in detail where the time is being spent during execution.
Hope this helps
dileep
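
For reference, a minimal sketch of the selection Amit describes, in the spirit of dileep's advice. The select-option S_AUFNR and the names below are hypothetical, and whether an index supports this access path depends on the release and on customer indexes, which is exactly what the ST05 trace should confirm:

    " Hypothetical sketch: fetch materials for service orders from MSEG.
    TABLES mseg.
    SELECT-OPTIONS s_aufnr FOR mseg-aufnr.    " service order number

    TYPES: BEGIN OF ty_mat,
             matnr TYPE mseg-matnr,
             mblnr TYPE mseg-mblnr,
             mjahr TYPE mseg-mjahr,
             zeile TYPE mseg-zeile,
           END OF ty_mat.
    DATA lt_mat TYPE STANDARD TABLE OF ty_mat.

    SELECT matnr mblnr mjahr zeile
      FROM mseg
      INTO TABLE lt_mat
      WHERE aufnr IN s_aufnr.
    " MSEG has no standard index on AUFNR in many releases; table AUFM
    " (goods movements for order) is often the better entry point for
    " order-based access.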

Similar Messages

  • Performance issue with MSEG table in Production

    Hi,
    I have written a report with 4 select queries.
    First I select data from VBRK into i_vbrk. Then, for all entries in i_vbrk, I fetch records from VBRP into i_vbrp. Then, for all entries in i_vbrp, records are fetched from MKPF into i_mkpf, and finally, for all entries in i_mkpf, records are fetched from MSEG into i_mseg.
    Performance of this report is good in the Quality system, but it is very poor in Production: it takes more than 20 minutes to execute, and the MSEG query takes most of the time.
    I have done indexing and packet sizing on the MSEG table, but the performance issue persists. Can you please let me know if there is any way the performance of the program can be improved?
    Please help.
    Thanks,
    Archana

    Hi Archana,
    I was having the same issue with MKPF and MSEG; I am using an INNER JOIN condition.
    SELECT
    mkpf~mblnr
    mkpf~mjahr
    mkpf~budat
    mkpf~usnam
    mkpf~bktxt
    mseg~zeile
    mseg~bwart
    mseg~prctr
    mseg~matnr
    mseg~werks
    mseg~lgort
    mseg~menge
    mseg~meins
    mseg~ebeln
    mseg~sgtxt
    mseg~shkzg
    mseg~dmbtr
    mseg~waers
    mseg~sobkz
    mkpf~xblnr
    mkpf~frbnr
    mseg~lifnr
    INTO TABLE xmseg
    FROM mkpf
    INNER JOIN mseg
    ON mkpf~mandt EQ mseg~mandt AND
    mkpf~mblnr EQ mseg~mblnr AND
      mkpf~mjahr EQ mseg~mjahr
    WHERE mkpf~vgart IN se_vgart
    AND   mkpf~budat IN se_budat
    AND   mkpf~usnam IN se_usnam
    AND   mkpf~bktxt IN se_bktxt
    AND   mseg~bwart IN se_bwart
    AND   mseg~matnr IN se_matnr
    AND   mseg~werks IN se_werks
    AND   mseg~lgort IN se_lgort
    AND   mseg~sobkz IN se_sobkz
    AND   mseg~lifnr IN se_lifnr
    %_HINTS ORACLE '&SUBSTITUTE VALUES&'.
    But I still have a performance issue. Can anybody give some suggestions, please?
    Regards,
    Shiv
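
    A two-step alternative that is often suggested in these MKPF/MSEG threads: drop the join and the hint, select the header keys from MKPF first (BUDAT is usually the most selective criterion), then read MSEG with FOR ALL ENTRIES on the document key. A sketch under that assumption, reusing Shiv's select-options; LT_MKPF and LT_MSEG are hypothetical:
    " Hypothetical sketch: two-step access instead of the join.
    DATA: lt_mkpf TYPE STANDARD TABLE OF mkpf,
          lt_mseg TYPE STANDARD TABLE OF mseg.
    SELECT * FROM mkpf INTO TABLE lt_mkpf
      WHERE vgart IN se_vgart
        AND budat IN se_budat
        AND usnam IN se_usnam
        AND bktxt IN se_bktxt.
    IF lt_mkpf[] IS NOT INITIAL.
      " MBLNR/MJAHR are the leading fields of the MSEG primary key
      SELECT * FROM mseg INTO TABLE lt_mseg
        FOR ALL ENTRIES IN lt_mkpf
        WHERE mblnr = lt_mkpf-mblnr
          AND mjahr = lt_mkpf-mjahr
          AND bwart IN se_bwart
          AND matnr IN se_matnr
          AND werks IN se_werks
          AND lgort IN se_lgort
          AND sobkz IN se_sobkz
          AND lifnr IN se_lifnr.
    ENDIF.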

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
    Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                      SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM TABLE (processor (base_query (argA, argB),100)) t
             GROUP BY colC
         ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                      SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
    ALTER PACKAGE pipeline_example COMPILE;

    Earthlink wrote:
    "Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?"
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Performance issue with COEP table in ECC 6

    Hi,
    Any idea how to resolve a performance issue with the COEP table in ECC 6.0?
    We are not using the COEP table right now. This table occupies 100 GB of the 900 GB in the PRD system.
    Can I directly archive/delete the table?
    Regards
    Siva

    Hi Siva,
    You cannot archive the COEP table alone; it must be archived along with the respective archiving object. Just deleting the table is not a good idea at all.
    To find the appropriate archiving objects contributing to the entries in COEP, you need to perform a CO table analysis using programs RARCCOA1 and RARCCOA2. For further information, refer to SAP Note 138688.
    Hope this helps,
    Naveen

  • Performance Issue with MSEG

    Hi,
    I have a report with the logic below. It is taking a long time to fetch the data from the database.
        SELECT
         KT~MBLNR  "NUMBER OF MATERIAL DOCUMENT
         KT~MJAHR  "MATERIAL DOCUMENT YEAR
         KT~BUDAT  "POSTING DATE IN THE DOCUMENT
         ST~ZEILE  "ITEM IN MATERIAL DOCUMENT
         ST~BWART  "MOVEMENT TYPE (INVENTORY MANAGEMENT)
         ST~MATNR  "MATERIAL NUMBER
         ST~WERKS  "PLANT
         ST~LGORT  "STORAGE LOCATION
         ST~CHARG  "BATCH NUMBER
         ST~SHKZG  "DEBIT/CREDIT INDICATOR
         ST~MENGE  "QUANTITY
         ST~BUKRS  "COMPANY CODE
         ST~OIVBELN " DELIVERY NO
         ST~OIPOSNR
    INTO CORRESPONDING FIELDS OF TABLE GIT_MKPFMSEG_P
    FROM
        ( ( ( MKPF AS KI INNER JOIN
        MSEG AS SI ON
        KI~MANDT = SI~MANDT AND
        KI~MBLNR = SI~MBLNR AND
        KI~MJAHR = SI~MJAHR ) INNER JOIN
        MKPF AS KT ON
        KI~MANDT = KT~MANDT AND
        KI~MBLNR = KT~MBLNR AND
        KI~MJAHR = KT~MJAHR ) INNER JOIN
        MSEG AS ST ON
        SI~MANDT = ST~MANDT AND
        SI~MBLNR = ST~MBLNR AND
        SI~MJAHR = ST~MJAHR AND
        SI~ZEILE = ST~ZEILE )
        WHERE
             KI~BUDAT IN S_DATE
        AND  SI~MATNR EQ GWA_MARD-MATNR
        AND  SI~WERKS EQ GWA_MARD-WERKS
        AND  SI~LGORT EQ GWA_MARD-LGORT
        %_HINTS ORACLE 'INDEX("MSEG" "MSEG~M")'.
    I have gone through almost all SDN forum posts related to the MSEG slowness issue and followed everything in the code. For me, no extra index is required, as the scan internally uses the standard SAP index M; along with that, I have also forced it via the hint for some hope. The above JOIN is as per SAP Note 1293807.
    From the BASIS end, they have updated the statistics for the MSEG table, and they have updated the Oracle DB profile parameters as per SAP's latest release.
    Even after that, performance is very slow and the report times out. MSEG has 8,200,000 entries.
    Please advise.

    Hi all,
    even after using the code below, there is no significant change in the time consumed for the report execution. The user wants at most 1-2 minutes, whereas with MSEG at 8,200,000 entries it takes around 30-40 minutes and sometimes ends in a timeout dump.
    SELECT
         KT~MBLNR  "NUMBER OF MATERIAL DOCUMENT
         KT~MJAHR  "MATERIAL DOCUMENT YEAR
         KT~BUDAT  "POSTING DATE IN THE DOCUMENT
         ST~ZEILE  "ITEM IN MATERIAL DOCUMENT
         ST~BWART  "MOVEMENT TYPE (INVENTORY MANAGEMENT)
         ST~MATNR  "MATERIAL NUMBER
         ST~WERKS  "PLANT
         ST~LGORT  "STORAGE LOCATION
         ST~CHARG  "BATCH NUMBER
         ST~SHKZG  "DEBIT/CREDIT INDICATOR
         ST~MENGE  "QUANTITY
         ST~BUKRS  "COMPANY CODE
         ST~OIVBELN " DELIVERY NO
         ST~OIPOSNR
    INTO TABLE GIT_MKPFMSEG_P
    FROM
        ( ( ( MKPF AS KI INNER JOIN
        MSEG AS SI ON
        KI~MBLNR = SI~MBLNR AND
        KI~MJAHR = SI~MJAHR ) INNER JOIN
        MKPF AS KT ON
        KI~MBLNR = KT~MBLNR AND
        KI~MJAHR = KT~MJAHR ) INNER JOIN
        MSEG AS ST ON
        SI~MBLNR = ST~MBLNR AND
        SI~MJAHR = ST~MJAHR AND
        SI~ZEILE = ST~ZEILE )
        WHERE
             KI~BUDAT IN S_DATE
        AND  SI~MATNR EQ GWA_MARD-MATNR
        AND  SI~WERKS EQ GWA_MARD-WERKS
        AND  SI~LGORT EQ GWA_MARD-LGORT
        %_HINTS ORACLE 'INDEX("MSEG" "MSEG~M")'.
    I had already used a simplified version of the above join previously, but it was not useful.
    P.N.: I have tried many different logics to minimize the load on the database server while fetching data from the table. Even without the join, the program stops at the MSEG line.
    As per the requirement, I don't want to restrict the WHERE clause by BWART.
    The logic of joining MSEG twice is described in the above-mentioned SAP Note (1293807).
    Now I am thinking of an MKPF statistics update as per SAP Note 821722, though that note is for MS SQL Server. BASIS has done this for MSEG, but it was not very helpful.
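
    As an alternative to the double join from the note, the join can be decomposed: the inner MSEG access is fully qualified by MATNR/WERKS/LGORT (the access path the forced index MSEG~M serves), so fetch those item keys first, keep only the documents whose BUDAT lies in S_DATE via MKPF, and then read all items of the remaining documents. A sketch with hypothetical table names, shortened field lists, and the post's variables GWA_MARD and S_DATE:
    " Hypothetical decomposition of the triple join into three steps.
    DATA: lt_si   TYPE STANDARD TABLE OF mseg,
          lt_mkpf TYPE STANDARD TABLE OF mkpf,
          lt_st   TYPE STANDARD TABLE OF mseg.
    SELECT * FROM mseg INTO TABLE lt_si        " qualified for index MSEG~M
      WHERE matnr = gwa_mard-matnr
        AND werks = gwa_mard-werks
        AND lgort = gwa_mard-lgort.
    IF lt_si[] IS NOT INITIAL.
      SELECT * FROM mkpf INTO TABLE lt_mkpf    " keep only documents in S_DATE
        FOR ALL ENTRIES IN lt_si
        WHERE mblnr = lt_si-mblnr
          AND mjahr = lt_si-mjahr
          AND budat IN s_date.
    ENDIF.
    IF lt_mkpf[] IS NOT INITIAL.
      SELECT * FROM mseg INTO TABLE lt_st      " all items of those documents
        FOR ALL ENTRIES IN lt_mkpf
        WHERE mblnr = lt_mkpf-mblnr
          AND mjahr = lt_mkpf-mjahr.
    ENDIF.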

  • Insert performance issue with Partitioned Table.....

    Hi All,
    I have a performance issue with a table which is partitioned. Without the table being partitioned,
    the insert ran in less time, but after partitioning it took more than double.
    1) The table was created initially without any partition, and the insert below took only 27 minutes.
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:27:35.20
    2) Now I re-created the table with partitioning (range, yearly, as below) and the same insert took 59 minutes.
    Is there any way I can achieve better performance during the insert on this partitioned table?
    [Similarly, I have another table with 50 million records where the insert took 10 hours without partitioning;
    with partitioning, it took 18 hours.]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
    Open C1;
    Loop
       Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
       Forall I In 1..C_Rectype.Count
          -- record fields of C_Rectype assumed to be val1..val3
          Insert Into test
               (col1, col2, col3)
          Values
               (C_Rectype(I).val1, C_Rectype(I).val2, C_Rectype(I).val3);
       V_Rec := V_Rec + Nvl(C_Rectype.Count, 0);
       Commit;
       Exit When C_Rectype.Count = 0;
       C_Rectype.delete;
    End Loop;
    End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know which partition the data is going into beforehand you can save a little bit of processing by specifying the partition (which may not be a scalable long-term solution) in the insert - I'm not 100% sure you can do this on inserts but I know you can on selects.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective and should help you is if you can do the insert in one query - insert into/select from. If you are using the loop to avoid filling up undo/rollback you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
    insert /*+ hints */ into ...
    Select
         A.Ing_Acct_Nbr, currency_Symbol,
         Balance_Date,     Company_No,
         Substr(Account_No,1,8) Account_No,
         Substr(Account_No,9,1) Typ_Cd ,
         Substr(Account_No,10,1) Chk_Cd,
         Td_Balance,     Sd_Balance,
         Sysdate,     'Sisadmin'
    From Ideaal_Cons.Tb_Account_Master_Base A,
         Ideaal_Staging.Tb_Sisadmin_Balance B
    Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
       And A.Vendor_Cd = b.company_no
        ;

  • Performance issue with JEST table

    Hi all,
    I have a report which has a performance issue.
    It calls the function module STATUS_READ, which in turn hits the table JEST.
    The select query is:
    SELECT SINGLE * FROM JSTO CLIENT SPECIFIED
    WHERE MANDT = MANDT
    AND   OBJNR = OBJNR.
    I know we should not use CLIENT SPECIFIED, but this is SAP standard code.
    Since this query is hit many times, it results in a TIME_OUT error.
    I observed that the table JEST has 133,523,962 entries in production, and in the technical settings the size category is 3 (data records expected: 280,000 to 1,100,000).
    Since the data size is exceeded here, would changing the size category to 4 improve the performance?
    Or should I ask the client to archive this table? If yes, please guide me on how to go about it. I have heard there are archiving objects; please specify which objects should be considered for archiving.
    I can only think of the above two solutions; please let me know if there is any other workaround.
    thanks!

    Hi,
    I'm not sure of the exact archiving object for this; here are some archiving objects related to table JEST:
    MM_EBAN
    MM_EKKO
    MM_MATNR
    PP_ORDER
    PR_ORDER
    PM_NET
    Please go through them using transaction SARA.
    Thanks,
    Mahesh
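
    If archiving is not feasible in the short term, another workaround sometimes used is to read the statuses for all object numbers in one array select before the processing loop, instead of letting STATUS_READ fire one SELECT SINGLE per object. A sketch, assuming an internal table LT_OBJNR with a component OBJNR holding the object numbers (all names hypothetical):
    " Hypothetical sketch: buffer JEST once instead of per-object reads.
    DATA lt_jest TYPE SORTED TABLE OF jest
                 WITH NON-UNIQUE KEY objnr stat.
    IF lt_objnr[] IS NOT INITIAL.
      SELECT * FROM jest INTO TABLE lt_jest
        FOR ALL ENTRIES IN lt_objnr
        WHERE objnr = lt_objnr-objnr
          AND inact = space.                   " active statuses only
    ENDIF.
    " In the processing loop, READ TABLE lt_jest WITH KEY objnr = ...
    " (binary search on the sorted key) replaces the repeated SELECT SINGLE.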

  • Performance issue with XLA tables and GL tables R12

    Hi all,
    I have one SQL that joins all the XLA tables with the GL tables to get invoice-related encumbrance data.
    My problem is that, for some reason, the SQL is going to GL_LE_LINES first (from the explain plan). As a result, my SQL takes some 25 minutes to finish.
    I am pretty sure that if I can manage to force the SQL to use the XLA tables first, it will finish in a couple of minutes. I even tried the LEADING hint, but it didn't work.
    Can someone help me?
    SELECT poh.segment1,
                        tmp.closed_code,
                        gcc.segment1,
                        gcc.segment2,
                        gcc.segment3,
                        gcc.segment4,
                        gcc.segment5,
                        gcc.segment6,
                        gcc.segment7,
                        SUM (NVL (gjl.entered_dr, 0) - NVL (gjl.entered_cr, 0))
                   FROM apps.up_po_encumb_relief_tmp_nb TMP,
                        apps.po_headers_all POH,
                        apps.po_distributions_all pod,
                        apps.ap_invoice_distributions_all APID,
                        xla.xla_transaction_entities XTE,
                        xla_events XE,
                        apps.xla_ae_headers XAH,
                        apps.xla_ae_lines XAL,
                        apps.gl_import_references GIR, -- DOUBLE CHECK JOIN CONDITIONS ON THIS TO INCLUDE OTHER COLS
                        apps.gl_je_lines GJL,
                        apps.gl_je_headers GJH,
                        apps.gl_code_combinations GCC
                  WHERE     POH.segment1 = TMP.PO_NUMBER
                        AND POH.PO_HEADER_ID = POD.PO_HEADER_ID
                        AND POD.Po_distribution_id = APID.po_distribution_id
                        AND XTE.APPLICATION_ID = 200                           -- Payables
                        AND XTE.SOURCE_ID_INT_1 = APID.INVOICE_ID       --POH.po_header_id
                        AND XTE.ENTITY_ID = XE.ENTITY_ID
                        AND XTE.APPLICATION_ID = XE.APPLICATION_ID
                        AND XAH.ENTITY_ID = XE.ENTity_ID
                        AND XAH.EVENT_ID = XE.EVENT_ID
                        AND XAH.APPLICATION_ID = XE.APPLICATION_ID
                        AND XAL.AE_HEADER_ID = XAH.AE_HEADER_ID
                        AND XAL.APPLICATION_ID = XAH.APPLICATION_ID
                        AND GIR.gl_sl_link_table = XAL.gl_sl_link_table
                        AND GIR.gl_sl_link_id = XAL.gl_sl_link_id
                        AND GJL.je_header_id = GIR.je_header_id
                        AND GJL.je_line_num = GIR.je_line_num
                        AND GJH.je_header_id = GJL.je_header_id
                        AND GJH.status = 'P'
                        AND POD.code_combination_id = GJL.code_combination_id
                        AND GJL.code_combination_id = GCC.code_combination_id
                        AND GCC.enabled_flag = 'Y'
                        AND GJH.je_source = 'Payables'
                        AND GJH.je_category = 'Purchase Invoices'
                        AND GJH.encumbrance_type_id IN (1001, 1002)
                        AND GJH.actual_flag = 'E'
                        AND GJH.status = 'P'
                        AND (NVL (GJL.entered_dr, 0) != 0 OR NVL (GJL.entered_cr, 0) != 0)
               GROUP BY poh.segment1,
                        tmp.closed_code,
                        gcc.segment1,
                        gcc.segment2,
                        gcc.segment3,
                        gcc.segment4,
                        gcc.segment5,
                        gcc.segment6,
                        gcc.segment7;

    Hi,
    did you
    - check table statistics (have the affected tables been analyzed recently)?
    - check the explain plan for full table scans? You are using NVL on gjl.entered_dr
      and gjl.entered_cr, which may lead to a full table scan; as far as I know, there
      is no (standard) function-based index on either column.
    Regards

  • Performance issue with the table use vrkpa

    Hi.
    here are the selection criteria I am using. The table VRKPA is only used to map the tables KNA1 and VBRK, since VBRK and KNA1 do not have a direct primary-key relationship.
    Please check and let me know why this VRKPA access takes time and how I can improve the performance: I am fetching data from KNA1 very easily, while fetching nothing from VRKPA, and fetching FKDAT from VBRK.
    The idea behind using these tables is, for one KUNNR (from KNA1), to get the relevant entries based on FKDAT (a selection-screen input field). Please suggest.
        SELECT kunnr
               name1
               land1
               regio
               ktokd
               FROM kna1
               INTO TABLE it_kna1
               FOR ALL ENTRIES IN it_knb1
               WHERE kunnr = it_knb1-kunnr
               AND ktokd = '0003'.
        IF sy-subrc = 0.
          SORT it_kna1 BY kunnr.
          DELETE ADJACENT DUPLICATES FROM it_kna1 COMPARING kunnr.
        ENDIF.
      ENDIF.
      IF NOT it_kna1[] IS INITIAL.
        SELECT kunnr
               vbeln
               FROM vrkpa
               INTO TABLE it_vrkpa
               FOR ALL ENTRIES IN it_kna1
               WHERE kunnr = it_kna1-kunnr.
        IF sy-subrc = 0.
          SORT it_vrkpa BY kunnr vbeln.
        ENDIF.
      ENDIF.
      IF NOT it_vrkpa[] IS INITIAL.
        SELECT vbeln
               kunrg
               fkdat
              kkber
               bukrs
               FROM vbrk
               INTO TABLE it_vbrk
               FOR ALL ENTRIES IN it_vrkpa
               WHERE vbeln = it_vrkpa-vbeln.
        IF sy-subrc = 0.
          DELETE it_vbrk WHERE fkdat NOT IN s_indate.
          DELETE it_vbrk WHERE fkdat NOT IN s_chdate.
          DELETE it_vbrk WHERE bukrs NOT IN s_ccode.
          SORT it_vbrk DESCENDING BY vbeln fkdat.
        ENDIF.
      ENDIF.

    Hi,
    Transaction SE11
    Table VRKPA => Display (not Change)
    Click on "Indexes"
    Click on "Create" (if your system is Basis 7.00, then click on the "Create" drop-down icon and choose "Create extension index")
    Choose a name (up to 3 characters, starting with Z)
    Enter a description for the index
    Enter the field names of the index
    Choose "Save" (prompts for transport request)
    Choose "Activate"
    If after "Activate" the status shows "Index exists in database system <...>", then you have nothing more to do. If the table is very large, the activation will not create the index in the database and the status remains "Index does not exist". In that case:
    - Transaction SE14
    - Table VRKPA -> Edit
    - Choose "Indexes" and select your new index
    - Choose "Create database index"; mark the option "Background"
    - Wait until the job is finished and check in SE11 that the index now exists in the DB
    You don't have to change anything in your program, because Oracle should choose the new index automatically. Run a SQL trace to make sure.
    Rgds,
    Mark
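
    Independent of the new index, the date and company-code restrictions can also be pushed into the VBRK select itself instead of deleting rows afterwards, so the database returns less data. A sketch based on the names in the post (assuming s_indate, s_chdate and s_ccode are the selection-screen select-options):
    " Hypothetical variant: filter in the WHERE clause instead of DELETE afterwards.
    IF NOT it_vrkpa[] IS INITIAL.
      SELECT vbeln kunrg fkdat kkber bukrs
             FROM vbrk
             INTO TABLE it_vbrk
             FOR ALL ENTRIES IN it_vrkpa
             WHERE vbeln = it_vrkpa-vbeln
               AND fkdat IN s_indate
               AND fkdat IN s_chdate
               AND bukrs IN s_ccode.
      SORT it_vbrk DESCENDING BY vbeln fkdat.
    ENDIF.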

  • Performance issue with RESB table

    Hi,
    A user wants to improve the performance of the standard program RLLL07SE, in which RESB table data is fetched and takes much of the time.
    The select query for RESB is:
        SELECT * FROM RESB WHERE
               MATNR = MATNR AND
               WERKS = WERKS AND
           XLOEK = SPACE AND                    "Deletion indicator
           KZEAR = SPACE AND                    "Finally issued
         XWAOK = CON_X AND                    "Goods issue allowed
               LGNUM = LGNUM AND
               LGTYP = LGTYP AND
               LGPLA = LGPLA.
    whereas the table index is created on following fields of RESB,
    MATNR
    WERKS
    XLOEK
    KZEAR
    BDTER
    What can be done here, given that the program is a standard one? I guess we can only change the table index, or is there something else?
    Can we add LGNUM, LGTYP and LGPLA to this index in addition to the existing fields?

    Hi,
    Instead of creating the index, get the data from RESB with a WHERE clause containing the entire key of the existing index, then loop over the internal table and delete the unwanted entries, as shown below.
    loop at itab.
      if itab-lgnum = lgnum and
         itab-lgtyp = lgtyp and
         itab-lgpla = lgpla.
      else.
         delete itab index sy-tabix.
      endif.
    endloop.
    As you are getting the data with all the index fields, performance should improve. Also avoid SELECT * and retrieve only the fields you require.
    As you do not have a value for the field BDTER, you can pass a range or select-option for this field that is left empty.
    Regards,
    Satya

  • Performance issue with BSAS table

    Hi,
    I am considering 100,000 G/L accounts for the BSAS selection. It gives the runtime error DBIF_RSQL_INVALID_RSQL with the exception CX_SY_OPEN_SQL_DB.
    To overcome this issue I used the following code:
        DO.
          PERFORM f_make_index USING sy-index.
          REFRESH lr_hkont.
          CLEAR   lr_hkont.
          APPEND LINES OF gr_hkont FROM gv_from TO gv_to TO lr_hkont.
          IF lr_hkont[] IS INITIAL.
            EXIT.
          ENDIF.
          SELECT bukrs hkont gjahr belnr buzei budat augbl augdt waers wrbtr
                                         dmbtr dmbe2 shkzg blart FROM bsas
                                         APPENDING CORRESPONDING FIELDS OF TABLE
                                         gt_bsas
                                         FOR ALL ENTRIES IN gt_bsis
                                         WHERE bukrs = gt_bsis-bukrs
                                         AND   hkont IN lr_hkont
                                         AND   gjahr = gt_bsis-gjahr
                                         AND   augbl = gt_bsis-belnr
                                         and   budat = gt_bsis-budat.
    enddo.
    I am passing 500 G/L accounts for each BSAS selection and appending the result to the internal table GT_BSAS. This code takes 50 hours to fetch the data.
    Please suggest how to improve the performance of the report.
    Thanks,

    Hi,
    1. Check whether the SELECT inside the DO loop is required; this should be the culprit.
    2. In the SELECT query, avoid the APPENDING CORRESPONDING FIELDS OF TABLE clause; populate a separate internal table directly and append it afterwards (see the sketch below).
    3. Check that the internal table gt_bsis is not initial before using FOR ALL ENTRIES in the select query, and check whether it has duplicate entries; if it does, delete the duplicates.
    regards,
    madhu
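
    A sketch of points 2 and 3, assuming gt_bsas and the hypothetical work table lt_bsas have exactly the fields of the select list (so CORRESPONDING FIELDS is not needed), and reusing lr_hkont from the post:
    " Hypothetical sketch of points 2 and 3.
    DATA lt_bsas LIKE gt_bsas.
    SORT gt_bsis BY bukrs gjahr belnr budat.
    DELETE ADJACENT DUPLICATES FROM gt_bsis
           COMPARING bukrs gjahr belnr budat.
    IF gt_bsis[] IS NOT INITIAL.
      SELECT bukrs hkont gjahr belnr buzei budat augbl augdt waers wrbtr
             dmbtr dmbe2 shkzg blart
             FROM bsas
             INTO TABLE lt_bsas               " no CORRESPONDING FIELDS
             FOR ALL ENTRIES IN gt_bsis
             WHERE bukrs = gt_bsis-bukrs
               AND hkont IN lr_hkont
               AND gjahr = gt_bsis-gjahr
               AND augbl = gt_bsis-belnr
               AND budat = gt_bsis-budat.
      APPEND LINES OF lt_bsas TO gt_bsas.
    ENDIF.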

  • Performance issue with mard table

    Hello All
      I am using the following two queries:
    1.
      SELECT mara~matnr werks xchar mtart matkl meins trame
                  umlmc  mara~lvorm as lvorm_mara
                  marc~lvorm as lvorm_marc
                  INTO CORRESPONDING FIELDS OF TABLE t_mat
                   FROM mara INNER JOIN marc
               ON mara~matnr = marc~matnr
                  WHERE mara~matnr IN matnr
                  AND werks IN werks
                 AND mtart IN matart
                 AND matkl IN matkla
                 AND ekgrp IN ekgrup
                 and spart in spart.
    if  t_mat[] is not initial.
    2.
       SELECT matnr werks lgort
               labst umlme insme einme speme retme lvorm
               INTO (collector-matnr, collector-werks, collector-lgort,
                     collector-labst, collector-umlme, collector-insme,
                     collector-einme, collector-speme, collector-retme,
                     collector-lvorm)
               FROM mard
               FOR ALL ENTRIES IN t_mat
               WHERE matnr = t_mat-matnr
                 AND werks = t_mat-werks
                 AND lgort = lgort-LOW
                 AND LABST <> 0
                 AND UMLME <> 0.
    endif.
    Now, there are about 180,000 records in the table t_mat. I am using FOR ALL ENTRIES with respect to t_mat, and the WHERE clause uses the full primary key.
    The performance of the program is poor; it runs into a timeout error.
    Can someone suggest a solution?
    Regards
    Swati namdev

    A few suggestions:
    1. First, avoid the "CORRESPONDING FIELDS OF" addition; use "INTO TABLE" instead.
    2. Use an open cursor to fetch the data:
    DATA: s_cursor TYPE cursor.
    OPEN CURSOR WITH HOLD s_cursor FOR
    SELECT mara~matnr werks xchar mtart matkl meins trame
    umlmc mara~lvorm as lvorm_mara
    marc~lvorm as lvorm_marc
    FROM mara INNER JOIN marc
    ON mara~matnr = marc~matnr
    WHERE mara~matnr IN matnr
    AND werks IN werks
    AND mtart IN matart
    AND matkl IN matkla
    AND ekgrp IN ekgrup
    and spart in spart.
    DO.
      FETCH NEXT CURSOR s_cursor
                 APPENDING
                 TABLE t_mat
                 PACKAGE SIZE 2000.
      IF sy-subrc <> 0.
        CLOSE CURSOR s_cursor.
        EXIT.
      ENDIF.
    ENDDO.

  • SAP PERFORMANCE PROBLEM with MSEG table

    Dear All,
    The following query has been running for a long time in our system (seen in SM50):
    SELECT T_00."MATNR" AS c,T_00."WERKS" AS c,T_00."LGORT" AS c,T_00."BWART" AS c,
    T_00."ZEILE" AS c,T_00."LIFNR" AS c,T_00."EBELN" AS c,T_00."MBLNR" AS c,
    T_00."MJAHR" AS c,T_00."SOBKZ" AS c,T_00."ERFMG" AS c,T_00."SHKZG" AS c,
    T_00."ERFME" AS c,T_00."BUKRS" AS c,T_00."MENGE" AS c,T_00."MEINS" AS c,
    T_00."WAERS" AS c,T_00."XAUTO" AS c,T_00."ANLN1" AS c,T_00."ANLN2" AS c,
    T_00."APLZL" AS c,T_00."AUFNR" AS c,T_00."AUFPL" AS c,T_00."BPMNG" AS c,
    T_00."BPRME" AS c,T_00."BSTME" AS c,T_00."BSTMG" AS c,T_00."BWTAR" AS c,
    T_00."CHARG" AS c,T_00."DMBTR" AS c,T_00."EBELP" AS c,T_00."EXBWR" AS c,
    T_00."EXVKW" AS c,T_00."GRUND" AS c,T_00."KDAUF" AS c,T_00."KDEIN" AS c,
    T_00."KDPOS" AS c,T_00."KOSTL" AS c,T_00."KUNNR" AS c,T_00."KZBEW" AS c,
    T_00."KZVBR" AS c,T_00."NPLNR" AS c,T_00."RSNUM" AS c,T_00."RSPOS" AS c,
    T_00."VKWRT" AS c,T_00."WEMPF" AS c
    FROM "MSEG" T_00 INNER
    JOIN "MKPF" T_01 ON T_01."MANDT" = @P1 AND T_00."MBLNR" = T_01."MBLNR" AND
    T_00."MJAHR" = T_01."MJAHR"
    WHERE T_00."MANDT" = @P2 AND T_00."WERKS" = @P3 AND T_00."LGORT" = @P4 AND
    T_00."LIFNR" = @P5 AND T_00."BWART" = @P6 AND T_01."BUDAT" BETWEEN @P7 AND @P8
    /* R3:ZEXCISE:201 T:MSEG 66882061616*/
    With regards,
    V.shunmga Sundaram

    Is this a question or do you just want to share this with us....?

  • Performance issue with MKPF table

    Hi All,
    My requirement is to get the material documents based on process orders. For this I used XBLNR (Reference Document Number) in the WHERE condition of a SELECT on the MKPF table, but this takes a long time. I then tried a secondary index on this field, but unfortunately it had a negative impact on other programs.
    Could you please suggest how I should proceed to get the material documents with reference to the process orders?
    Thanks in Advance,
    Vempati

    [Search results here|https://forums.sdn.sap.com/search.jspa?q=mkpf%20performance&dateRange=last90days&searchID=23205030&rankBy=10001&start=0]
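
    For order-based access, a route often suggested instead of MKPF-XBLNR is table AUFM (goods movements for order), which can be read by order number and already carries the material document keys. A sketch, assuming a select-option s_aufnr for the process orders (names hypothetical):
    " Hypothetical sketch: material documents via AUFM instead of MKPF-XBLNR.
    DATA lt_aufm TYPE STANDARD TABLE OF aufm.
    SELECT * FROM aufm INTO TABLE lt_aufm
      WHERE aufnr IN s_aufnr.
    " lt_aufm-mblnr / lt_aufm-mjahr identify the material documents; MKPF can
    " then be read FOR ALL ENTRIES on its key if header data is needed.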

  • Performance issue with temporary table

    Hello Oracle community,
    Oracle 11.1
    I have a problem with a global temporary table (IMPO.REPCUSTOMERSLUCK24). I insert about 600,000 records into the table, run some UPDATE statements on it, and at the end a MERGE statement to fill another table. I think the problem is that the optimizer does not know how many records are in the temp table (cardinality 1), but I cannot use DBMS_STATS.GATHER_TABLE_STATS to analyze the temp table (I would lose the records if I did). Maybe I could analyze it with the "preserve on commit" option, but I would like to avoid that. Here is the
    Plan
    UPDATE STATEMENT ALL_ROWSCost: 1 Bytes: 1.171 Cardinality: 1                                              
         15 UPDATE IMPO.REPCUSTOMERSLUCK24                                         
              14 FILTER                                    
                   2 TABLE ACCESS BY INDEX ROWID TABLE (TEMP) IMPO.REPCUSTOMERSLUCK24 Cost: 1 Bytes: 1.171 Cardinality: 1                               
                        1 INDEX RANGE SCAN INDEX IMPO.FK_1883_REPCUSTOMERSLUCK24 Cost: 1 Cardinality: 1                          
                   13 FILTER                               
                        12 SORT GROUP BY NOSORT Cost: 0 Bytes: 2.212 Cardinality: 1                          
                             11 NESTED LOOPS                     
                                  9 NESTED LOOPS Cost: 0 Bytes: 2.212 Cardinality: 1                
                                       7 NESTED LOOPS Cost: 0 Bytes: 1.685 Cardinality: 1           
                                            4 TABLE ACCESS BY INDEX ROWID TABLE (TEMP) IMPO.REPCONTRACTSLUCK24 Cost: 0 Bytes: 1.158 Cardinality: 1      
                                                 3 INDEX FULL SCAN INDEX IMPO.FK_1875_REPCONTRACTSLUCK24 Cost: 0 Cardinality: 1
                                            6 TABLE ACCESS BY INDEX ROWID TABLE CRM2.MEDIACODE Cost: 0 Bytes: 527 Cardinality: 1      
                                                 5 INDEX UNIQUE SCAN INDEX (UNIQUE) CRM2.AK_1970_MEDIACODE Cost: 0 Cardinality: 1
                                       8 INDEX UNIQUE SCAN INDEX (UNIQUE) CRM2.PK_1955_PARTNER Cost: 0 Cardinality: 1           
                                  10 TABLE ACCESS BY INDEX ROWID TABLE CRM2.PARTNER Cost: 0 Bytes: 527 Cardinality: 1                
    any suggestions to my problem ?
    Ikrischer

    Hi,
    dynamic sampling reads only a part of the table to make an estimation (generally to count the number of rows, or to get an average, if the sample is large enough for the result to be reliable).
    So in your case you could estimate the number of rows like this (the explain plans show that the estimated cost is proportional to the size of the sample read, whether expressed in number of rows or blocks):
    SQL*Plus: Release 10.2.0.2.0 - Production on Thu Jun 17 15:32:43 2010
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining options
    SQL> CREATE GLOBAL TEMPORARY TABLE XTEST
      2  (
      3    NUM1  NUMBER                                  NOT NULL
      4  )
      5  ON COMMIT PRESERVE ROWS
      6  NOCACHE
      7  /
    Table created.
    SQL> INSERT INTO xtest
      2     SELECT     ROWNUM
      3     FROM       DUAL
      4     CONNECT BY ROWNUM <= 100000;
    100000 rows created.
    SQL> commit;
    Commit complete.
    SQL> EXEC dbms_stats.gather_table_stats(ownname=>user,tabname=>'XTEST');
    PL/SQL procedure successfully completed.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st1' FOR SELECT COUNT(*)*10 FROM xtest SAMPLE(10);
    Explained.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st2' FOR SELECT COUNT(*)*1.1 FROM xtest SAMPLE(90);
    Explained.
    SQL> set linesize 120;
    SQL> SELECT PLAN_TABLE_OUTPUT FROM   TABLE(DBMS_XPLAN.DISPLAY(NULL,'st1','TYPICAL'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 2221487120
    | Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |       |     1 |     4 |    31  (26)| 00:00:01 |
    |   1 |  SORT AGGREGATE      |       |     1 |     4 |            |          |
    |   2 |   TABLE ACCESS SAMPLE| XTEST | 10077 | 40308 |    31  (26)| 00:00:01 |
    9 rows selected.
    SQL> SELECT PLAN_TABLE_OUTPUT FROM   TABLE(DBMS_XPLAN.DISPLAY(NULL,'st2','TYPICAL'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 2221487120
    | Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |       |     1 |     4 |    32  (29)| 00:00:01 |
    |   1 |  SORT AGGREGATE      |       |     1 |     4 |            |          |
    |   2 |   TABLE ACCESS SAMPLE| XTEST | 90693 |   354K|    32  (29)| 00:00:01 |
    9 rows selected.
    SQL>
    Note the difference in rows/bytes between the two samples, but be careful, because the explain plan only gives you an estimation.
    REM: If you sample by blocks, you get less I/O (physical or not): select count(*)*1.5 from mytable sample block (50) costs less than select count(*)*1.5 from mytable sample (50).
