Performance issues with pipelined table functions

I am testing pipelined table functions so that I can re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs six times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from [improving performance with pipelined table functions|http://www.oracle-developer.net/display.php?id=429].
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
   TYPE resultset_typ IS REF CURSOR;
   TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
   TYPE table_typ IS TABLE OF row_typ;
   FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
      RETURN resultset_typ;
   c_default_limit   CONSTANT PLS_INTEGER := 100;  
   FUNCTION processor (
      p_source_data   IN resultset_typ,
      p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
      RETURN table_typ
      PIPELINED
      PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
   PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                            argB          IN     VARCHAR2,
                            o_resultset      OUT resultset_typ);
   PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                          argB          IN     VARCHAR2,
                          o_resultset      OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
   FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
      RETURN resultset_typ
   IS
      o_resultset   resultset_typ;
   BEGIN
      OPEN o_resultset FOR
         SELECT colC, colD, colE
           FROM some_table
          WHERE colA = ArgA AND colB = argB;
      RETURN o_resultset;
   END base_query;
   FUNCTION processor (
      p_source_data   IN resultset_typ,
      p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
      RETURN table_typ
      PIPELINED
      PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
   IS
      aa_source_data   table_typ;-- := table_typ ();
   BEGIN
      LOOP
         FETCH p_source_data
         BULK COLLECT INTO aa_source_data
         LIMIT p_limit_size;
         EXIT WHEN aa_source_data.COUNT = 0;
         /* Process the batch of (p_limit_size) records... */
         FOR i IN 1 .. aa_source_data.COUNT
         LOOP
            PIPE ROW (aa_source_data (i));
         END LOOP;
      END LOOP;
      CLOSE p_source_data;
      RETURN;
   END processor;
   PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                            argB          IN     VARCHAR2,
                            o_resultset      OUT resultset_typ)
   IS
   BEGIN
      OPEN o_resultset FOR
           SELECT /*+ PARALLEL(t, 5) */ colC,
               SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
               SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
               SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
               SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
             FROM TABLE (processor (base_query (argA, argB),100)) t
         GROUP BY colC
      ORDER BY colC;
   END with_pipeline;
   PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                          argB          IN     VARCHAR2,
                          o_resultset      OUT resultset_typ)
   IS
   BEGIN
      OPEN o_resultset FOR
           SELECT colC,
               SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
               SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
               SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
               SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
             FROM (SELECT colC, colD, colE
                     FROM some_table
                    WHERE colA = ArgA AND colB = argB)
         GROUP BY colC
         ORDER BY colC;
   END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
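For reference, a minimal SQL*Plus harness for exercising and timing either procedure (a sketch; the argument values and the bind variable name are placeholders, not the original test):
VARIABLE rc REFCURSOR
SET TIMING ON
EXEC pipeline_example.with_pipeline('someA', 'someB', :rc)
PRINT rc
EXEC pipeline_example.no_pipeline('someA', 'someB', :rc)
PRINT rc
Note that PRINT is what actually fetches the rows, so the elapsed time of the PRINT steps is what matters.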

Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs six times slower than the legacy no_pipeline procedure. Am I missing something?

Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
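For example, extended SQL trace with wait events (event 10046 at level 8) can be switched on for the test session like this (a sketch; format the resulting trace file with tkprof afterwards):
ALTER SESSION SET tracefile_identifier = 'pipeline_test';
ALTER SESSION SET events '10046 trace name context forever, level 8';
-- run the slow call here
ALTER SESSION SET events '10046 trace name context off';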

Similar Messages

  • Query performance improvement using pipelined table function

Hi,
I have got two select queries. One is a plain select:
select * from table
The other uses a pipelined table function:
select *
from table(pipelined_function(cursor(select * from table)))
Which query will return the result set faster?
Please suggest methods for retrieving a dataset faster (using a pipelined table function) than with a normal select query.
rgds
somy

    Compare the performance between these solutions:
    create table big as select * from all_objects;
    First test the performance of a normal select statement:
    begin
      for r in (select * from big) loop
       null;
      end loop;
    end;
/
Second, a pipelined function:
    create type rc_vars as object 
    (OWNER  VARCHAR2(30)
    ,OBJECT_NAME     VARCHAR2(30));
    create or replace type rc_vars_table as table of  rc_vars ;
    create or replace
    function rc_get_vars
    return rc_vars_table
    pipelined
    as
      cursor c_aobj
             is
             select owner, object_name
             from   big;
      l_aobj c_aobj%rowtype;
    begin
      for r_aobj in c_aobj loop
        pipe row(rc_vars(r_aobj.owner,r_aobj.object_name));
      end loop;
      return;
    end;
/
Test the performance of the pipelined function:
    begin
      for r in (select * from table(rc_get_vars)) loop
       null;
      end loop;
    end;
/
On my system the simple select statement is 20 times faster.
    Correction: It is 10 times faster, not 20.
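To put numbers on the comparison, each block can be wrapped in a simple timer (a minimal sketch; DBMS_UTILITY.GET_TIME returns hundredths of a second, and SET SERVEROUTPUT ON is needed to see the result):
declare
  l_start pls_integer;
begin
  l_start := dbms_utility.get_time;
  for r in (select * from big) loop
    null;
  end loop;
  dbms_output.put_line('elapsed: ' || (dbms_utility.get_time - l_start) / 100 || ' s');
end;
/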

  • Performance issue with using MAX function in pl/sql

    Hello All,
We are having a performance issue with the logic below, wherein MAX is used to get the latest instance/record for a given input variable (p_in_header_id). The item_key has the format:
p_in_header_id - <number generated from a sequence>
This query takes around 1 minute 30 seconds to fetch even one record. Could someone please help if there is a better way to write this logic and improve performance?
We want the latest record for the item_key (obtained via MAX(begin_date)) for a given p_in_header_id value.
    Query 1 :
    SELECT item_key FROM wf_items WHERE item_type = 'xxxxzzzz'
    AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) =p_in_header_id
    AND root_activity ='START_REQUESTS'
    AND begin_date =
    (SELECT MAX (begin_date) FROM wf_items WHERE item_type = 'xxxxzzzz'
    AND root_activity ='START_REQUESTS'
    AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) =p_in_header_id);
    Could someone please help us with this performance issue..we are really stuck because of this
    regards

    First of all Thanks to all gentlemen who replied ..many thanks ...
Tried the ROW_NUMBER() option, but it is still taking time; I have given the output for the query and the tkprof results as well. Even when it doesn't fetch any record (a valid case, because the input header id doesn't have any workflow request submitted and hence no entry in the wf_items table), see the time it has taken.
Looked at the RANK and DENSE_RANK options which were suggested, but they still take time.
Any further suggestions or ideas as to how this could be resolved?
    SELECT 'Y', 'Y', ITEM_KEY
    FROM
    ( SELECT ITEM_KEY, ROW_NUMBER() OVER(ORDER BY BEGIN_DATE DESC) RN FROM
    WF_ITEMS WHERE ITEM_TYPE = 'xxxxzzzz' AND ROOT_ACTIVITY = 'START_REQUESTS'
    AND SUBSTR(ITEM_KEY,1,INSTR(ITEM_KEY,'-') - 1) = :B1
    ) T WHERE RN <= 1
    call count cpu elapsed disk query current rows
    Parse 0 0.00 0.00 0 0 0 0
    Execute 1 0.00 1.57 0 0 0 0
    Fetch 1 8700.00 544968.73 8180 8185 0 0
    total 2 8700.00 544970.30 8180 8185 0 0
    many thanks
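One avenue worth testing, since the filter is on SUBSTR(item_key, ...) and therefore cannot use a normal index on item_key: a function-based index matching that exact expression (a sketch; the index name is made up, and it assumes a custom index on the Workflow table is acceptable in your environment):
CREATE INDEX xx_wf_items_hdr
   ON wf_items (SUBSTR(item_key, 1, INSTR(item_key, '-') - 1), begin_date);
The WHERE clause must then repeat the indexed expression verbatim for the optimizer to be able to use it.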

  • Performance issue with COEP table in ECC 6

Hi,
Any idea how to resolve a performance issue on the COEP table in ECC 6.0?
We are not using the COEP table right now. This table occupies 100 GB of the 900 GB in the PRD system.
Can I directly archive/delete the table?
    Regards
    Siva

    Hi Siva,
You cannot archive the COEP table alone. It should be archived along with the respective archive objects. Just deleting the table is not at all a good idea.
To find the appropriate archive objects contributing to the entries in COEP, you need to perform a CO table analysis using programs RARCCOA1 and RARCCOA2. For further information refer to SAP Note 138688.
    Hope this helps,
    Naveen

  • Performance issue with MSEG table

Hi all,
I need to fetch materials (MATNR) based on the service order number (AUFNR) on the selection screen, but there is a performance issue with this. How can I overcome it?
Regards,
Amit

Hi,
There could be various reasons for a performance issue with MSEG:
1) Database statistics of tables and indexes are not up to date; because of this the wrong index is chosen during execution.
2) Improper indexes: there is no index with the fields mentioned in the WHERE clause of the statement. For this reason the CBO may have chosen the wrong index and done a range scan.
3) Optimizer bug in Oracle.
4) The table is very large; consider archiving.
Better to switch on an ST05 trace before you run these statements; it will give more detailed information about where exactly the time is being spent during execution.
Hope this helps
dileep
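To verify point 1 from the database side, a quick check of when the table was last analyzed (a sketch; requires access to the DBA views):
SELECT owner, table_name, num_rows, last_analyzed
  FROM dba_tables
 WHERE table_name = 'MSEG';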

  • Clob datatype with pipelined table function.

hi
I made two functions, both pipelined: one uses the VARCHAR2 data type and the other uses the CLOB data type. I made a different object type for each: a VARCHAR2 object type for the first and a CLOB object type for the second. I am giving parameters to both of them; the first (VARCHAR2, pipelined) works fine, but the second does not.
My first function is like this:
    TYPE "CSVOBJECTFORMAT"  AS OBJECT ( "S"   VARCHAR2(500));
    TYPE "CSVTABLETYPE" AS TABLE OF CSVOBJECTFORMAT;
CREATE OR REPLACE FUNCTION "FN_PARSECSVSTRING" (p_list  VARCHAR2,
                                                p_delim VARCHAR2 := ' ')
   RETURN CsvTableType PIPELINED
IS
   l_idx   PLS_INTEGER;
   l_list  VARCHAR2(32767) := p_list;
   l_value VARCHAR2(32767);
BEGIN
   LOOP
      l_idx := INSTR(l_list, p_delim);
      IF l_idx > 0 THEN
         PIPE ROW(CsvObjectFormat(SUBSTR(l_list, 1, l_idx - 1)));
         l_list := SUBSTR(l_list, l_idx + LENGTH(p_delim));
      ELSE
         PIPE ROW(CsvObjectFormat(l_list));
         EXIT;
      END IF;
   END LOOP;
   RETURN;
END fn_ParseCSVString;
The output for this function is as follows, which is correct:
    SQL>   SELECT s  FROM  TABLE( CAST( fn_ParseCSVString('+588675,1~#588675^1^99^~2~16~115~99~SP5601~~~~~0~~', '~') as CsvTableType)) ;
    S
    +588675,1
    #588675^1^99^
    2
    16
    115
    99
    SP5601
    S
    0
    14 rows selected.
    SQL>
My second function is like this:
    TYPE "CSVOBJECTFORMAT1"  AS OBJECT ( "S"   clob);
    TYPE "CSVTABLETYPE1" AS TABLE OF CSVOBJECTFORMAT1;
CREATE OR REPLACE FUNCTION "FN_PARSECSVSTRING1" (p_list  CLOB,
                                                 p_delim VARCHAR2 := ' ')
   RETURN CsvTableType1 PIPELINED
IS
   l_idx   PLS_INTEGER;
   l_list  CLOB := p_list;
   l_value VARCHAR2(32767);
BEGIN
   dbms_output.put_line('hello');
   LOOP
      l_idx := INSTR(l_list, p_delim);
      IF l_idx > 0 THEN
         PIPE ROW(CsvObjectFormat1(SUBSTR(l_list, 1, l_idx - 1)));
         l_list := dbms_lob.substr(l_list, l_idx + LENGTH(p_delim));
      ELSE
         PIPE ROW(CsvObjectFormat1(l_list));
         EXIT;
      END IF;
   END LOOP;
   RETURN;
END fn_ParseCSVString1;
    SQL>   SELECT s  FROM  TABLE( CAST( fn_ParseCSVString1('+588675,1~#588675^1^99^~2~16~115~99~SP5601~~~~~0~~', '~') as CsvTableType1)) ;
    S
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
and it goes on until I use Ctrl+C to break it.
Actually, I want to make a function which can accept large values, so I am trying to change the first function. Thanks.

    RTFM DBMS_LOB.SUBSTR. Unlike built-in function SUBSTR, second parameter in DBMS_LOB.SUBSTR is length, not position. Also, PL/SQL fully supports CLOBs, so there is no need to use DBMS_LOB:
    SQL> CREATE OR REPLACE
      2  FUNCTION FN_PARSECSVSTRING1(p_list clob,
      3                              p_delim VARCHAR2:=' '
      4                             )
      5    RETURN CsvTableType1
      6    PIPELINED
      7    IS
      8        l_pos   PLS_INTEGER := 1;
      9        l_idx   PLS_INTEGER;
    10        l_value clob;
    11    BEGIN
    12        LOOP
    13          l_idx := INSTR(p_list, p_delim,l_pos);
    14          IF l_idx > 0
    15            THEN
    16              PIPE ROW(CsvObjectFormat1(substr(p_list,l_pos,l_idx-l_pos)));
    17              l_pos := l_idx+LENGTH(p_delim);
    18            ELSE
    19              PIPE ROW(CsvObjectFormat1(substr(p_list,l_pos)));
    20              RETURN;
    21          END IF;
    22        END LOOP;
    23        RETURN;
    24  END fn_ParseCSVString1;
    25  /
    Function created.
    SQL> SELECT rownum,s  FROM  TABLE( CAST( fn_ParseCSVString1('+588675,1~#588675^1^99^~2~16~115~99~SP5
    601~~~~~0~~', '~') as CsvTableType1)) ;
        ROWNUM S
             1 +588675,1
             2 #588675^1^99^
             3 2
             4 16
             5 115
             6 99
             7 SP5601
             8
             9
            10
            11
        ROWNUM S
            12 0
            13
            14
    14 rows selected.
SQL>
SY.
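The parameter-order difference is easy to see in isolation (a minimal sketch):
SELECT SUBSTR('abcdef', 2, 3) s1,                  -- position 2, length 3 -> 'bcd'
       DBMS_LOB.SUBSTR(TO_CLOB('abcdef'), 2, 3) s2 -- amount 2, offset 3   -> 'cd'
  FROM dual;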

  • Insert performance issue with Partitioned Table.....

    Hi All,
I have a performance issue with an insert into a table which is partitioned. Without the table being partitioned
it ran in less time, but after partitioning it took more than double.
1) The table was created initially without any partition and the insert below took only 27 minutes.
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:27:35.20
2) Now I re-created the table with partitioning (range, yearly - below) and the same insert took 59 minutes.
Is there any way I can achieve better performance during the insert on this partitioned table?
[Similarly, I have another table with 50 million records where the insert took 10 hours without partitioning;
with the table partitioned, it took 18 hours...]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
Open C1;
Loop
   Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
   Forall I In 1 .. C_Rectype.Count
      Insert Into test
             (col1, col2, col3)
      Values (C_Rectype(I).val1, C_Rectype(I).val2, C_Rectype(I).val3);
   V_Rec := V_Rec + Nvl(C_Rectype.Count, 0);
   Commit;
   Exit When C_Rectype.Count = 0;
   C_Rectype.delete;
End Loop;
End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know which partition the data is going into beforehand you can save a little bit of processing by specifying the partition (which may not be a scalable long-term solution) in the insert - I'm not 100% sure you can do this on inserts but I know you can on selects.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective and should help you is if you can do the insert in one query - insert into/select from. If you are using the loop to avoid filling up undo/rollback you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
    insert /*+ hints */ into ...
    Select
         A.Ing_Acct_Nbr, currency_Symbol,
         Balance_Date,     Company_No,
         Substr(Account_No,1,8) Account_No,
         Substr(Account_No,9,1) Typ_Cd ,
         Substr(Account_No,10,1) Chk_Cd,
         Td_Balance,     Sd_Balance,
         Sysdate,     'Sisadmin'
    From Ideaal_Cons.Tb_Account_Master_Base A,
         Ideaal_Staging.Tb_Sisadmin_Balance B
    Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
       And A.Vendor_Cd = b.company_no
     ;

  • Performance issue with JEST table

    Hi all,
I have a report which is giving a performance issue.
It hits the function module "status_read", which in turn hits the table JEST.
    The select query is:
    SELECT SINGLE * FROM JSTO CLIENT SPECIFIED
    WHERE MANDT = MANDT
    AND   OBJNR = OBJNR.
I know we should not use CLIENT SPECIFIED, but this is SAP standard code.
Since this query is hit many times, it results in a TIME_OUT error.
I observed that the table JEST has 133,523,962 entries in production, and in the technical settings the size category is mentioned as 3 (data records expected: 280,000 to 1,100,000).
Since the data size is exceeded here, would changing the size category to 4 improve performance?
Or should I ask the client to archive this table? If yes, please guide me how to go about it. I have heard there are archiving objects; please specify which objects should be considered for archiving.
I could only think of the above two solutions; please let me know if there is any other workaround.
    thanks!

Hi,
I'm not sure of the exact archiving object for this; here are some archiving objects related to table JEST:
    MM_EBAN
    MM_EKKO
    MM_MATNR
    PP_ORDER
    PR_ORDER
    PM_NET
Please go through them using transaction SARA.
Thanks,
    Mahesh

  • Performance issue with XLA tables and GL tables R12

Hi all,
I have one SQL that joins all the XLA tables with GL tables to get invoice-related encumbrance data.
My problem is that for some reason the SQL is going to GL_JE_LINES first (from the explain plan). As
a result my SQL takes some 25 minutes to finish.
I am pretty sure that if I can manage to force the SQL to use an XLA table first, it will finish in a couple of
minutes. I even tried the LEADING hint, but it didn't work.
Can someone help me?
    SELECT poh.segment1,
                        tmp.closed_code,
                        gcc.segment1,
                        gcc.segment2,
                        gcc.segment3,
                        gcc.segment4,
                        gcc.segment5,
                        gcc.segment6,
                        gcc.segment7,
                        SUM (NVL (gjl.entered_dr, 0) - NVL (gjl.entered_cr, 0))
                   FROM apps.up_po_encumb_relief_tmp_nb TMP,
                        apps.po_headers_all POH,
                        apps.po_distributions_all pod,
                        apps.ap_invoice_distributions_all APID,
                        xla.xla_transaction_entities XTE,
                        xla_events XE,
                        apps.xla_ae_headers XAH,
                        apps.xla_ae_lines XAL,
                        apps.gl_import_references GIR, -- DOUBLE CHECK JOIN CONDITIONS ON THIS TO INCLUDE OTHER COLS
                        apps.gl_je_lines GJL,
                        apps.gl_je_headers GJH,
                        apps.gl_code_combinations GCC
                  WHERE     POH.segment1 = TMP.PO_NUMBER
                        AND POH.PO_HEADER_ID = POD.PO_HEADER_ID
                        AND POD.Po_distribution_id = APID.po_distribution_id
                        AND XTE.APPLICATION_ID = 200                           -- Payables
                        AND XTE.SOURCE_ID_INT_1 = APID.INVOICE_ID       --POH.po_header_id
                        AND XTE.ENTITY_ID = XE.ENTITY_ID
                        AND XTE.APPLICATION_ID = XE.APPLICATION_ID
                        AND XAH.ENTITY_ID = XE.ENTity_ID
                        AND XAH.EVENT_ID = XE.EVENT_ID
                        AND XAH.APPLICATION_ID = XE.APPLICATION_ID
                        AND XAL.AE_HEADER_ID = XAH.AE_HEADER_ID
                        AND XAL.APPLICATION_ID = XAH.APPLICATION_ID
                        AND GIR.gl_sl_link_table = XAL.gl_sl_link_table
                        AND GIR.gl_sl_link_id = XAL.gl_sl_link_id
                        AND GJL.je_header_id = GIR.je_header_id
                        AND GJL.je_line_num = GIR.je_line_num
                        AND GJH.je_header_id = GJL.je_header_id
                        AND GJH.status = 'P'
                        AND POD.code_combination_id = GJL.code_combination_id
                        AND GJL.code_combination_id = GCC.code_combination_id
                        AND GCC.enabled_flag = 'Y'
                        AND GJH.je_source = 'Payables'
                        AND GJH.je_category = 'Purchase Invoices'
                        AND GJH.encumbrance_type_id IN (1001, 1002)
                        AND GJH.actual_flag = 'E'
                        AND GJH.status = 'P'
                        AND (NVL (GJL.entered_dr, 0) != 0 OR NVL (GJL.entered_cr, 0) != 0)
               GROUP BY poh.segment1,
                        tmp.closed_code,
                        gcc.segment1,
                        gcc.segment2,
                        gcc.segment3,
                        gcc.segment4,
                        gcc.segment5,
                        gcc.segment6,
                        gcc.segment7;

Hi,
did you
- check table statistics (have the affected tables been analyzed recently)?
- check the explain plan for full table scans? You are using NVL on gjl.entered_dr and gjl.entered_cr, which may lead to a full table scan; as far as I know, there is no (standard) function-based index on those columns.
Regards
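If the statistics do turn out to be stale, they can be refreshed with DBMS_STATS (a sketch; 'GL' is the usual E-Business Suite owner of the GL tables - adjust to your environment, and note that on EBS the supported wrapper is FND_STATS):
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'GL', tabname => 'GL_JE_LINES', cascade => TRUE);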

  • Performance issue with MSEG table in Production

Hi,
I have written a report with 4 select queries.
First I select data from the VBRK table into i_vbrk. Then, for all entries in i_vbrk, I fetch records from VBRP into the i_vbrp table. Then, for all entries in i_vbrp, records are fetched from MKPF into i_mkpf. Finally, for all entries in i_mkpf, records are fetched from MSEG into the i_mseg table.
Performance of this report is good in the Quality system, but it is very poor in Production: it takes more than 20 minutes to execute, and the MSEG query takes most of the time.
I have done indexing and packet sizing on the MSEG table, but the performance issue persists. Can you please let me know if there is any way the program's performance can be improved?
Please help.
Thanks,
Archana

Hi Archana,
I was having the same issue with MKPF and MSEG; I am using an INNER JOIN condition.
    SELECT
    mkpf~mblnr
    mkpf~mjahr
    mkpf~budat
    mkpf~usnam
    mkpf~bktxt
    mseg~zeile
    mseg~bwart
    mseg~prctr
    mseg~matnr
    mseg~werks
    mseg~lgort
    mseg~menge
    mseg~meins
    mseg~ebeln
    mseg~sgtxt
    mseg~shkzg
    mseg~dmbtr
    mseg~waers
    mseg~sobkz
    mkpf~xblnr
    mkpf~frbnr
    mseg~lifnr
    INTO TABLE xmseg
    FROM mkpf
    INNER JOIN mseg
ON mkpf~mandt EQ mseg~mandt AND
   mkpf~mblnr EQ mseg~mblnr AND
   mkpf~mjahr EQ mseg~mjahr
    WHERE mkpf~vgart IN se_vgart
    AND   mkpf~budat IN se_budat
    AND   mkpf~usnam IN se_usnam
    AND   mkpf~bktxt IN se_bktxt
    AND   mseg~bwart IN se_bwart
    AND   mseg~matnr IN se_matnr
    AND   mseg~werks IN se_werks
    AND   mseg~lgort IN se_lgort
    AND   mseg~sobkz IN se_sobkz
    AND   mseg~lifnr IN se_lifnr
    %_HINTS ORACLE '&SUBSTITUTE VALUES&'.
But I still have a performance issue. Can anybody give some suggestions, please?
    Regards,
    Shiv

  • Performance issue with the table use vrkpa

Hi.
Here are the selection criteria I am using. The table VRKPA is only used to map between KNA1 and VBRK, since VBRK and KNA1 do not have a direct primary key relationship.
Please check and let me know why this VRKPA access is taking time and how I can improve the performance: from KNA1 I fetch data very easily, while the fetch from VRKPA (and then FKDAT from VBRK) is the slow part.
The idea behind using these tables is simply, for one KUNNR (from KNA1), to get the relevant entries based on FKDAT (a selection screen input field). Please suggest.
        SELECT kunnr
               name1
               land1
               regio
               ktokd
               FROM kna1
               INTO TABLE it_kna1
               FOR ALL ENTRIES IN it_knb1
               WHERE kunnr = it_knb1-kunnr
               AND ktokd = '0003'.
        IF sy-subrc = 0.
          SORT it_kna1 BY kunnr.
          DELETE ADJACENT DUPLICATES FROM it_kna1 COMPARING kunnr.
        ENDIF.
      ENDIF.
      IF NOT it_kna1[] IS INITIAL.
        SELECT kunnr
               vbeln
               FROM vrkpa
               INTO TABLE it_vrkpa
               FOR ALL ENTRIES IN it_kna1
               WHERE kunnr = it_kna1-kunnr.
        IF sy-subrc = 0.
          SORT it_vrkpa BY kunnr vbeln.
        ENDIF.
      ENDIF.
      IF NOT it_vrkpa[] IS INITIAL.
        SELECT vbeln
               kunrg
               fkdat
              kkber
               bukrs
               FROM vbrk
               INTO TABLE it_vbrk
               FOR ALL ENTRIES IN it_vrkpa
               WHERE vbeln = it_vrkpa-vbeln.
        IF sy-subrc = 0.
          DELETE it_vbrk WHERE fkdat NOT IN s_indate.
          DELETE it_vbrk WHERE fkdat NOT IN s_chdate.
          DELETE it_vbrk WHERE bukrs NOT IN s_ccode.
          SORT it_vbrk DESCENDING BY vbeln fkdat.
        ENDIF.
      ENDIF.

    Hi,
    Transaction SE11
    Table VRKPA => Display (not Change)
    Click on "Indexes"
    Click on "Create" (if your system is Basis 7.00, then click on the "Create" drop-down icon and choose "Create extension index")
Choose a name (up to 3 characters, starting with Z)
    Enter a description for the index
    Enter the field names of the index
    Choose "Save" (prompts for transport request)
    Choose "Activate"
    If after "Activate' the status shows "Index exists in database system <...>", then you have nothing more to dotable is very large the activation will not create the index in the database and the status remains "Index does nor exist". In that case:
    - Transaction SE14
    - Table VRKPA -> Edit
    - Choose "Indexes" and select your new index
    - Choose "Create database index"; mark the option "Background"
    - Wait until the job is finished and check in SE11 that the index now exists in the DB
You don't have to do anything to your program because Oracle should choose the new index automatically. Run an SQL trace to make sure.
    Rgds,
    Mark

  • Performance issue with RESB table

Hi,
A user wants to improve the performance of the standard program RLLL07SE, in which data is fetched from the RESB table, and this fetch takes much of the time.
The select query for RESB is:
        SELECT * FROM RESB WHERE
               MATNR = MATNR AND
               WERKS = WERKS AND
           XLOEK = SPACE AND                    "deletion indicator
           KZEAR = SPACE AND                    "finally issued
           XWAOK = CON_X AND                    "goods issue allowed
               LGNUM = LGNUM AND
               LGTYP = LGTYP AND
               LGPLA = LGPLA.
whereas the table index is created on the following fields of RESB:
    MATNR
    WERKS
    XLOEK
    KZEAR
    BDTER
What can possibly be done in this respect? As the program is a standard one, I guess we can only change the table index - or what else can be done?
Can we add LGNUM, LGTYP and LGPLA to the particular index apart from the existing fields?

    Hi,
Instead of creating the index, get the data from RESB with a WHERE clause covering the entire key of the existing index, and then loop over the internal table and delete the unwanted entries as shown below.
loop at itab.
  if itab-lgnum = lgnum and
     itab-lgtyp = lgtyp and
     itab-lgpla = lgpla.
  else.
    delete itab index sy-tabix.
  endif.
endloop.
As you are getting data with all the index fields, performance will surely increase. Also avoid SELECT * and retrieve only the fields you require.
Since you don't have a value for the field BDTER, you can pass an empty range/select-option for this field.
    Regards,
    Satya

  • Performance issue with BSAS table

    Hi,
I am considering 100,000 (1 lakh) G/L accounts for the BSAS selection. It gives the runtime error DBIF_RSQL_INVALID_RSQL with exception CX_SY_OPEN_SQL_DB.
To overcome this issue I used the following code:
        DO.
          PERFORM f_make_index USING sy-index.
          REFRESH lr_hkont.
          CLEAR   lr_hkont.
          APPEND LINES OF gr_hkont FROM gv_from TO gv_to TO lr_hkont.
          IF lr_hkont[] IS INITIAL.
            EXIT.
          ENDIF.
          SELECT bukrs hkont gjahr belnr buzei budat augbl augdt waers wrbtr
                                         dmbtr dmbe2 shkzg blart FROM bsas
                                         APPENDING CORRESPONDING FIELDS OF TABLE
                                         gt_bsas
                                         FOR ALL ENTRIES IN gt_bsis
                                         WHERE bukrs = gt_bsis-bukrs
                                         AND   hkont IN lr_hkont
                                         AND   gjahr = gt_bsis-gjahr
                                         AND   augbl = gt_bsis-belnr
                                         and   budat = gt_bsis-budat.
    enddo.
I am passing 500 G/L accounts for each BSAS selection and appending to the GT_BSAS internal table. This code takes 50 hours to fetch the data.
    Please suggest me how to improve the performance of the report.
    Thanks,

Hi,
1. Check whether the SELECT inside the DO statement is required; this should be the culprit.
2. In the SELECT query, avoid the APPENDING CORRESPONDING FIELDS clause; directly populate another internal table and append it later.
3. Check whether the internal table gt_bsis is initial before using FOR ALL ENTRIES in the select query. Check whether it has duplicate entries too; if it does, delete the duplicates.
regards,
madhu

  • Performance issue with mard table

Hello All,
I am using the following two queries.
    1.
      SELECT mara~matnr werks xchar mtart matkl meins trame
                  umlmc  mara~lvorm as lvorm_mara
                  marc~lvorm as lvorm_marc
                  INTO CORRESPONDING FIELDS OF TABLE t_mat
                   FROM mara INNER JOIN marc
               ON mara~matnr = marc~matnr
                  WHERE mara~matnr IN matnr
                  AND werks IN werks
                 AND mtart IN matart
                 AND matkl IN matkla
                 AND ekgrp IN ekgrup
                 and spart in spart.
    if  t_mat[] is not initial.
    2.
       SELECT matnr werks lgort
               labst umlme insme einme speme retme lvorm
               INTO (collector-matnr, collector-werks, collector-lgort,
                     collector-labst, collector-umlme, collector-insme,
                     collector-einme, collector-speme, collector-retme,
                     collector-lvorm)
               FROM mard
               FOR ALL ENTRIES IN t_mat
               WHERE matnr = t_mat-matnr
                 AND werks = t_mat-werks
                 AND lgort = lgort-LOW
                 AND LABST <> 0
                 AND UMLME <> 0.
    endif.
Now the table t_mat has about 180,000 records. I am using FOR ALL ENTRIES with respect to t_mat and using the complete primary key, yet
the performance of the program is poor: it gives a TIME_OUT error.
Can someone suggest a solution?
Regards,
Swati Namdev

A few suggestions:
1. First avoid the "CORRESPONDING FIELDS OF" addition; use "INTO TABLE" instead.
2. Use an open cursor to fetch the data:
    DATA: s_cursor TYPE cursor.
    OPEN CURSOR WITH HOLD s_cursor FOR
    SELECT mara~matnr werks xchar mtart matkl meins trame
    umlmc mara~lvorm as lvorm_mara
    marc~lvorm as lvorm_marc
    FROM mara INNER JOIN marc
ON mara~matnr = marc~matnr
    WHERE mara~matnr IN matnr
    AND werks IN werks
    AND mtart IN matart
    AND matkl IN matkla
    AND ekgrp IN ekgrup
    and spart in spart.
    if t_mat[] is not initial.
    DO.
      FETCH NEXT CURSOR s_cursor
                 APPENDING
                 TABLE t_mat
             PACKAGE SIZE 2000.
      IF sy-subrc <> 0.
        CLOSE CURSOR s_cursor.
        EXIT.
      ENDIF.
    ENDDO.

  • Performance issue with MKPF table

Hi All,
My requirement is to get the material documents based on process orders. For this I used XBLNR (Reference Document Number) in the WHERE condition of a SELECT on the MKPF table. But this takes a long time to access the table. I then tried a secondary index on this field, but unfortunately it had a negative impact on other programs.
Could you please suggest how to proceed to get the material documents with reference to process orders?
Thanks in advance,
Vempati

    [Search results here|https://forums.sdn.sap.com/search.jspa?q=mkpf%20performance&dateRange=last90days&searchID=23205030&rankBy=10001&start=0]
