Possible performance issue with DB table kmc_dbrm_contract

Hello,
We've just completed load tests for a large portal.
EP 6.40 SP20
During these tests, the DB team identified contention on this particular table, which contains only three records.
The "offending" query seems rather fast and simply increments a counter. The contention arises because we had a very large number of such queries, resulting in multiple locks that could last up to 1.5 seconds.
This is not related to our project developments, so I assume it is standard portal behavior.
Can you help me find out what this table is for, and whether there is anything standard we can do to explain and/or prevent this delay?
I'm not a technology expert (just a PM), so I would appreciate a rather detailed response.
Thank you,
Luis C Leme

Hello,
This is known behavior (see http://help.sap.com/saphelp_nw70ehp3/helpdata/en/62/468698a8e611d5993600508b6b8b11/frameset.htm) when an FSDB repository is used and its option "Enable FSDB Content Tracking" is ticked.
The relevant part of the official documentation says:
The database synchronization of content access might have a negative impact on performance. Every read or write content request to an FSDB resource waits to obtain a write lock on the lock record in the database. Therefore, the accumulated waiting time for obtaining the write lock in the database might increase and the waiting threads might consume a considerable amount of the available threads in the thread pool.
In short: if content tracking is not actually required for your scenario, unticking "Enable FSDB Content Tracking" on that repository should remove this lock contention; please verify this against the documentation for your release before changing a production setting.
Best Regards,
Georgi

Similar Messages

  • Performance issue with COEP table in ECC 6

    Hi,,
Any idea how to resolve a performance issue on the COEP table in ECC 6.0?
We are not using the COEP table right now. This table occupies 100 GB of the 900 GB in our PRD system.
Can I directly archive/delete the table?
    Regards
    Siva

    Hi Siva,
You cannot archive the COEP table alone. It must be archived along with the respective archiving object. Just deleting the table is not a good idea at all.
To find out the appropriate archiving object contributing to the entries in COEP, you need to perform a CO table analysis using programs RARCCOA1 and RARCCOA2. For further information, refer to SAP note 138688.
    Hope this helps,
    Naveen

  • Performance issues with pipelined table functions

I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                  SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                  SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                  SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                  SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
             FROM TABLE (processor (base_query (argA, argB), 100)) t
         GROUP BY colC
         ORDER BY colC;
   END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                  SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                  SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                  SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
                  SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;

Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
If you want to find out what's going on, use a TRACE with wait events.
All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Insert performance issue with Partitioned Table.....

    Hi All,
I have a performance issue with a table which is partitioned: without the table being partitioned the insert ran in less time, but after partitioning it took more than double.
1) The table was created initially without any partition, and the below insert took only 27 minutes.
Total Rec Inserted :- 2424233
PL/SQL procedure successfully completed.
Elapsed: 00:27:35.20
2) Then I re-created the table with a partition (range, yearly - below), and the same insert took 59 minutes.
Is there any way I can achieve better performance during inserts on this partitioned table?
[ Similarly, I have another table with 50 million records where the insert took 10 hours without partitioning;
with the table partitioned, it took 18 hours... ]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
Open C1;
Loop
   Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
   Forall I In 1 .. C_Rectype.Count
      Insert Into test
           (col1, col2, col3)
      Values
           (C_Rectype(I).val1, C_Rectype(I).val2, C_Rectype(I).val3);
   V_Rec := V_Rec + Nvl(C_Rectype.Count, 0);
   Commit;
   Exit When C_Rectype.Count = 0;
   C_Rectype.delete;
End Loop;
End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know which partition the data is going into beforehand you can save a little bit of processing by specifying the partition (which may not be a scalable long-term solution) in the insert - I'm not 100% sure you can do this on inserts but I know you can on selects.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective and should help you is if you can do the insert in one query - insert into/select from. If you are using the loop to avoid filling up undo/rollback you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
    insert /*+ hints */ into ...
    Select
         A.Ing_Acct_Nbr, currency_Symbol,
         Balance_Date,     Company_No,
         Substr(Account_No,1,8) Account_No,
         Substr(Account_No,9,1) Typ_Cd ,
         Substr(Account_No,10,1) Chk_Cd,
         Td_Balance,     Sd_Balance,
         Sysdate,     'Sisadmin'
    From Ideaal_Cons.Tb_Account_Master_Base A,
         Ideaal_Staging.Tb_Sisadmin_Balance B
    Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
       And A.Vendor_Cd = b.company_no
     ;

  • Performance issue in the table

    Hi All
I have one table which is based on a VO. This VO has 150 attributes, and I am facing a performance issue.
My first question is: I need to display only 15 attributes in the table, but the other attributes are also required.
So I have two options:
1. make the others form values
or
2. make them messageTextInput items and set rendered to false.
I just wanted to know which will perform better.
Please help.

These attributes have some default values.
I need to pass these values to the API on some action.
If I don't keep them in my page, will row.getAttribute("XYZ") still carry its value?

  • Performance issue with JEST table

    Hi all,
I have a report which is giving a performance issue.
It hits the function module STATUS_READ, which in turn hits the table JEST.
    The select query is:
    SELECT SINGLE * FROM JSTO CLIENT SPECIFIED
    WHERE MANDT = MANDT
    AND   OBJNR = OBJNR.
I know we should not use CLIENT SPECIFIED, but this is SAP standard code.
Since this query is hit many times, it results in a TIME_OUT error.
I observed that the table JEST has 133,523,962 entries in production, and in the technical settings the size category is mentioned as 3 (data records expected: 280,000 to 1,100,000).
Since the data size is exceeded here, would changing the size category to 4 improve the performance?
Or should I ask the client to archive this table? If yes, please guide me on how to go about it. I have heard there are archiving objects; please specify which objects should be considered for archiving.
I could only think of the above two solutions; please let me know if there is any other workaround.
    thanks!

    Hi,
I'm not sure of the exact archiving object for this, but here are some archiving objects related to table JEST:
MM_EBAN
MM_EKKO
MM_MATNR
PP_ORDER
PR_ORDER
PM_NET
Please go through them using transaction SARA.
Thanks,
Mahesh
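Apart from archiving, a common code-side mitigation (a hedged sketch, not from this thread, and only applicable where the calling code can be changed) is to avoid calling STATUS_READ once per object and instead read the statuses for all needed objects in a single SELECT on JEST, which can use its primary key on OBJNR:

TYPES: BEGIN OF ty_obj,
         objnr TYPE jest-objnr,
       END OF ty_obj,
       BEGIN OF ty_jest,
         objnr TYPE jest-objnr,
         stat  TYPE jest-stat,
         inact TYPE jest-inact,
       END OF ty_jest.

DATA: lt_objects TYPE STANDARD TABLE OF ty_obj,  " OBJNR values collected by the report
      lt_jest    TYPE STANDARD TABLE OF ty_jest.

IF lt_objects[] IS NOT INITIAL.
  " One database access for all objects instead of one STATUS_READ call each
  SELECT objnr stat inact
    FROM jest
    INTO TABLE lt_jest
    FOR ALL ENTRIES IN lt_objects
    WHERE objnr = lt_objects-objnr
      AND inact = space.
ENDIF.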

  • Performance issue with MSEG table

    Hi all,
I need to fetch materials (MATNR) based on the service order number (AUFNR) on the selection screen, but there is a performance issue with this. How can I overcome it?
    Regards ,
    Amit

    Hi,
There could be various reasons for a performance issue with MSEG:
1) Database statistics of the tables and indexes are not up to date;
because of this, a wrong index is chosen during execution.
2) Improper indexes, i.e. there is no index with the fields mentioned in the WHERE clause of the statement. Because of this, the CBO may have chosen a wrong index and done a range scan.
3) An optimizer bug in Oracle.
4) The table is very large; consider archiving.
It is better to switch on an ST05 trace before you run the statement; it will give detailed information about where exactly the time is spent during execution.
Hope this helps,
Dileep
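For the original question (fetching materials by service order), one approach worth checking is to read the order-related goods movements from table AUFM, which is keyed by order number, instead of scanning MSEG directly. A minimal sketch, not from this thread; verify in SE11 that AUFM exists and is populated on your release:

REPORT z_matnr_by_aufnr_sketch.

TABLES: aufm.

SELECT-OPTIONS: s_aufnr FOR aufm-aufnr.

TYPES: BEGIN OF ty_mat,
         mblnr TYPE aufm-mblnr,
         mjahr TYPE aufm-mjahr,
         zeile TYPE aufm-zeile,
         matnr TYPE aufm-matnr,
       END OF ty_mat.

DATA: lt_mat TYPE STANDARD TABLE OF ty_mat.

START-OF-SELECTION.
  IF s_aufnr[] IS NOT INITIAL.
    " AUFM stores the goods movements per order, so this access is
    " selective by AUFNR, unlike a full scan of MSEG
    SELECT mblnr mjahr zeile matnr
      FROM aufm
      INTO TABLE lt_mat
      WHERE aufnr IN s_aufnr.
  ENDIF.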

  • Performance issue about using JDBC?

Since no one has replied, I am posting again. :(
I just ran into a big performance problem, and I have tried all the possible ways but still can't fix it. Could you help me out or give me more suggestions?
Oracle 8i on Solaris 2.6
A web application whose back end is an Oracle database, developed in Java using a JDBC driver. It also uses servlets. Reports are generated in the browser using dynamic SQL.
When I click a link to generate a report in the browser, it runs the corresponding SQL script and then returns the result to the browser. The problem is that it takes a very long time to get the result: for a simple query, around 2-3 minutes. But if I run the same SQL script in
SQL*Plus, it takes only 4-5 seconds, or even less. So I think the index for this query is fine. (I also rebuilt all indexes, with the same result.) All the hit ratios in the SGA are also OK. When the browser generates reports, I don't see high CPU usage or I/O activity.
I really have no idea why this happens, but I think the Oracle DB is fine, because the query runs normally in SQL*Plus. The problem may be related to the JDBC driver or the JDBC connection. The developers also have no clue. When the Java app runs the query, does it access the tables and indexes the same way as SQL*Plus?
    Any idea or suggestions?
    Thanks a lot and have a good day!

Thanks, all.
So do you have any suggestions on the following code?
    DESCRIBE TABLE gt_vbeln LINES l_lines.
      IF l_lines = 0.
    ***>>Links20060411
    *  ELSEIF l_lines GT c_1000.
    *    SELECT vbelv posnv vbeln posnn vbtyp_n rfmng
    *        APPENDING TABLE gt_vbfa_all PACKAGE SIZE c_1000
    *        FROM vbfa
    *        FOR   ALL ENTRIES IN gt_vbeln
    *        WHERE vbelv   EQ  gt_vbeln-vbelv
    *          AND posnv   EQ gt_vbeln-posnv
    *          AND vbtyp_n IN ('T', 'J', 'R', 'h').
    *    ENDSELECT.
    *  ELSE.
    *    SELECT vbelv posnv vbeln posnn vbtyp_n rfmng
    *        INTO TABLE gt_vbfa_all FROM vbfa
    *        FOR   ALL ENTRIES IN gt_vbeln
    *        WHERE vbelv   EQ  gt_vbeln-vbelv
    *          AND posnv   EQ  gt_vbeln-posnv
    *          AND vbtyp_n IN ('T', 'J', 'R', 'h').
      ELSEIF l_lines > c_1000.
        SELECT vbelv posnv vbeln posnn vbtyp_n rfmng
            APPENDING TABLE gt_vbfa PACKAGE SIZE c_1000
            FROM vbfa
            FOR   ALL ENTRIES IN gt_vbeln
            WHERE vbelv   =  gt_vbeln-vbelv
              AND posnv   = gt_vbeln-posnv
              AND vbtyp_n IN ('T', 'J').
        ENDSELECT.
      ELSE.
        SELECT vbelv posnv vbeln posnn vbtyp_n rfmng
            INTO TABLE gt_vbfa FROM vbfa
            FOR   ALL ENTRIES IN gt_vbeln
            WHERE vbelv   =  gt_vbeln-vbelv
              AND posnv   =  gt_vbeln-posnv
              AND vbtyp_n IN ('T', 'J').
      ENDIF.
Currently it times out, as l_lines is very large.
I think maybe we can change the package size, but what is the best package size for performance?
    Thanks..
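As a general remark on this pattern (a hedged sketch reusing the names from this thread): before a FOR ALL ENTRIES select, the driver table should be checked for emptiness and de-duplicated, because an empty driver table silently drops the WHERE conditions, and duplicate keys cause repeated identical database accesses. Note also that FOR ALL ENTRIES already splits the driver table into blocks internally, so the explicit PACKAGE SIZE branch may not buy much:

IF gt_vbeln[] IS NOT INITIAL.
  " Query each document/item combination only once
  SORT gt_vbeln BY vbelv posnv.
  DELETE ADJACENT DUPLICATES FROM gt_vbeln COMPARING vbelv posnv.

  SELECT vbelv posnv vbeln posnn vbtyp_n rfmng
      INTO TABLE gt_vbfa
      FROM vbfa
      FOR ALL ENTRIES IN gt_vbeln
      WHERE vbelv   = gt_vbeln-vbelv
        AND posnv   = gt_vbeln-posnv
        AND vbtyp_n IN ('T', 'J').
ENDIF.

Only an ST05 trace of both variants can tell which block size is best on your system.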

  • Performance issue with MSEG table in Production

    Hi,
    I have written a report with 4 select queries.
First I select data from the VBRK table into i_vbrk. Then, for all entries in i_vbrk, I fetch records from VBRP into the i_vbrp table. Then, for all entries in i_vbrp, records are fetched from MKPF into i_mkpf. Finally, for all entries in i_mkpf, records are fetched from MSEG into the i_mseg table.
The performance of this report is good in the Quality system, but it is very poor in Production, where it takes more than 20 minutes to execute. The MSEG query takes most of the time.
I have done indexing and packet sizing on the MSEG table, but the performance issue persists. So can you please let me know if there is any way the performance of the program can be improved?
    Please help.
    Thanks,
    Archana

    Hi Archana,
I was having the same issue with MKPF and MSEG; I am using an INNER JOIN condition.
    SELECT
    mkpf~mblnr
    mkpf~mjahr
    mkpf~budat
    mkpf~usnam
    mkpf~bktxt
    mseg~zeile
    mseg~bwart
    mseg~prctr
    mseg~matnr
    mseg~werks
    mseg~lgort
    mseg~menge
    mseg~meins
    mseg~ebeln
    mseg~sgtxt
    mseg~shkzg
    mseg~dmbtr
    mseg~waers
    mseg~sobkz
    mkpf~xblnr
    mkpf~frbnr
    mseg~lifnr
    INTO TABLE xmseg
    FROM mkpf
    INNER JOIN mseg
ON mkpf~mandt EQ mseg~mandt AND
   mkpf~mblnr EQ mseg~mblnr AND
   mkpf~mjahr EQ mseg~mjahr
    WHERE mkpf~vgart IN se_vgart
    AND   mkpf~budat IN se_budat
    AND   mkpf~usnam IN se_usnam
    AND   mkpf~bktxt IN se_bktxt
    AND   mseg~bwart IN se_bwart
    AND   mseg~matnr IN se_matnr
    AND   mseg~werks IN se_werks
    AND   mseg~lgort IN se_lgort
    AND   mseg~sobkz IN se_sobkz
    AND   mseg~lifnr IN se_lifnr
    %_HINTS ORACLE '&SUBSTITUTE VALUES&'.
But I still have a performance issue. Can anybody give some suggestions, please?
    Regards,
    Shiv
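One option worth testing here (a hedged sketch with assumed names, not code from this thread) is to split the join into two steps, so the selective header criteria are applied to MKPF first and MSEG is then read via its primary key fields MBLNR/MJAHR:

DATA: BEGIN OF ls_mkpf,
        mblnr TYPE mkpf-mblnr,
        mjahr TYPE mkpf-mjahr,
        budat TYPE mkpf-budat,
        usnam TYPE mkpf-usnam,
      END OF ls_mkpf,
      lt_mkpf LIKE STANDARD TABLE OF ls_mkpf.

DATA: BEGIN OF ls_mseg,
        mblnr TYPE mseg-mblnr,
        mjahr TYPE mseg-mjahr,
        zeile TYPE mseg-zeile,
        bwart TYPE mseg-bwart,
        matnr TYPE mseg-matnr,
        werks TYPE mseg-werks,
        lgort TYPE mseg-lgort,
      END OF ls_mseg,
      lt_mseg LIKE STANDARD TABLE OF ls_mseg.

* Step 1: headers only, restricted by the selective header criteria
SELECT mblnr mjahr budat usnam
  FROM mkpf
  INTO TABLE lt_mkpf
  WHERE vgart IN se_vgart
    AND budat IN se_budat
    AND usnam IN se_usnam.

* Step 2: items via the MSEG primary key (MBLNR, MJAHR), filtering the rest
IF lt_mkpf[] IS NOT INITIAL.
  SELECT mblnr mjahr zeile bwart matnr werks lgort
    FROM mseg
    INTO TABLE lt_mseg
    FOR ALL ENTRIES IN lt_mkpf
    WHERE mblnr = lt_mkpf-mblnr
      AND mjahr = lt_mkpf-mjahr
      AND bwart IN se_bwart
      AND matnr IN se_matnr.
ENDIF.

Whether this beats the join depends on how selective the header criteria are; an ST05 trace of both variants is the only reliable way to decide.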

MKPF & MSEG performance issue: join on tables MKPF & MSEG taking too much time

    Hello Experts,
I have an issue where we are executing a custom report in which I used an inner join on tables MKPF & MSEG. Sometimes the join statement takes 9-10 minutes to execute, and sometimes it executes within 1-2 minutes with the same test data.
I am not able to understand what is actually happening.
    please help.
    code :
       SELECT f~mblnr f~mjahr f~usnam f~bktxt  p~bukrs
        INTO TABLE itab
        FROM mkpf AS f INNER JOIN mseg AS p
            ON f~mblnr = p~mblnr AND f~mjahr = p~mjahr
         WHERE f~vgart = 'WE'
           AND f~budat IN p_budat
           AND f~usnam IN p_sgtxt
           AND p~bwart IN ('101','105')
           AND p~werks IN p_werks
           AND p~lgort IN p_lgort.
    Regards,
    Dipendra Panwar.

    Hi Dipendra,
If you run a report twice in a row with the same test data for the data selection, the second run should be faster, because some of the data remains in memory and does not need to be fetched from the database. The same holds for the third and further runs, until the data in the SAP buffers is displaced by other programs.
For performance traces you should therefore always measure with a first run.
    Regards,
    Klaus

  • Performance issue with RESB table

    Hi,
The user wants to improve the performance of the standard program RLLL07SE, in which data is fetched from the RESB table and takes a lot of time.
The select query for RESB is:
    SELECT * FROM RESB WHERE
           MATNR = MATNR AND
           WERKS = WERKS AND
           XLOEK = SPACE AND                    "deletion indicator
           KZEAR = SPACE AND                    "final issue
           XWAOK = CON_X AND                    "goods issue allowed
           LGNUM = LGNUM AND
           LGTYP = LGTYP AND
           LGPLA = LGPLA.
    whereas the table index is created on following fields of RESB,
    MATNR
    WERKS
    XLOEK
    KZEAR
    BDTER
What can possibly be done here? As the program is a standard one, I guess we can only change the table index. Or what else can be done?
Can we add LGNUM, LGTYP, and LGPLA to the particular index, in addition to the existing fields?

    Hi,
Instead of creating the index, get the data from RESB with a WHERE clause containing the entire key of the existing index, and then loop over the internal table and delete the unwanted entries, as shown below:
loop at itab.
  if itab-lgnum = lgnum and
     itab-lgtyp = lgtyp and
     itab-lgpla = lgpla.
  else.
    delete itab.
  endif.
endloop.
As you are getting the data with all the index fields, the performance will surely increase. Also avoid SELECT * and retrieve only the fields you require.
As you do not have a value for the field BDTER, you can pass a range or select-option for this field that is left empty.
    Regards,
    Satya

  • Performance issue with BSAS table

    Hi,
I am considering 100,000 (1 lakh) G/L accounts for the BSAS selection. It gives the runtime error DBIF_RSQL_INVALID_RSQL with exception CX_SY_OPEN_SQL_DB.
To overcome this issue I used the following code:
        DO.
          PERFORM f_make_index USING sy-index.
          REFRESH lr_hkont.
          CLEAR   lr_hkont.
          APPEND LINES OF gr_hkont FROM gv_from TO gv_to TO lr_hkont.
          IF lr_hkont[] IS INITIAL.
            EXIT.
          ENDIF.
          SELECT bukrs hkont gjahr belnr buzei budat augbl augdt waers wrbtr
                                         dmbtr dmbe2 shkzg blart FROM bsas
                                         APPENDING CORRESPONDING FIELDS OF TABLE
                                         gt_bsas
                                         FOR ALL ENTRIES IN gt_bsis
                                         WHERE bukrs = gt_bsis-bukrs
                                         AND   hkont IN lr_hkont
                                         AND   gjahr = gt_bsis-gjahr
                                         AND   augbl = gt_bsis-belnr
                                         and   budat = gt_bsis-budat.
    enddo.
I am passing 500 G/L accounts for each BSAS selection and appending the result to the GT_BSAS internal table. This code is taking 50 hours to fetch the data.
Please suggest how I can improve the performance of the report.
    Thanks,

    Hi,
1. Check whether the SELECT inside the DO statement is required; this should be the culprit.
2. In the SELECT query, avoid the APPENDING CORRESPONDING FIELDS OF TABLE clause. Populate another internal table directly and append it later.
3. Check whether the internal table gt_bsis is initial before using FOR ALL ENTRIES in the select query, and check whether it has duplicate entries; if it has, delete the duplicates.
    regards,
    madhu
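A minimal sketch of points 2 and 3, reusing the names from this thread (the DO loop and the lr_hkont ranges stay as in the original; only the select and append change):

DATA: lt_bsas LIKE gt_bsas.

IF gt_bsis[] IS NOT INITIAL.
  " Point 3: no empty driver table, no duplicate keys
  SORT gt_bsis BY bukrs gjahr belnr budat.
  DELETE ADJACENT DUPLICATES FROM gt_bsis
         COMPARING bukrs gjahr belnr budat.

  " Point 2: select into a local table, append once afterwards
  SELECT bukrs hkont gjahr belnr buzei budat augbl augdt waers wrbtr
         dmbtr dmbe2 shkzg blart
    FROM bsas
    INTO CORRESPONDING FIELDS OF TABLE lt_bsas
    FOR ALL ENTRIES IN gt_bsis
    WHERE bukrs = gt_bsis-bukrs
      AND hkont IN lr_hkont
      AND gjahr = gt_bsis-gjahr
      AND augbl = gt_bsis-belnr
      AND budat = gt_bsis-budat.

  APPEND LINES OF lt_bsas TO gt_bsas.
ENDIF.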

  • Disco Plus 10g Performance Issue When Moving Table Headings to Page Items

    Hello,
We are experiencing a performance anomaly in Discoverer Plus (the latest release: 10.1.2.48.18) and are wondering if anyone else out there has noticed similar behavior. We're having trouble identifying the cause and have a TAR opened with Oracle support, but they are not able to reproduce our issue and have been slow to offer suggestions or help.
The issue happens when a user drags a table heading up into the page items area in a worksheet. Moving a heading up to the page items works quickly one time, but every time after that it takes anywhere from a few minutes up to HOURS to move any additional headings up to the page items area.
The problem occurs in approximately 3 out of every 5 of our workbooks. We've tried different sizes of workbooks, with large (several million records) and small (a few thousand records) datasets, and none of this seems to affect the issue at hand. The problem only occurs in Discoverer Plus, not Desktop. We've also spent some time researching caching and memory configuration, and believe that we have set up all of the recommended options for maximum performance on our systems.
I would just like to know if anyone else out there in the community has experienced the same issue, and if anyone has any advice for us.
    Thank you,
    -Scott

    Hi,
I found the following in "Best Practices of Oracle Discoverer 10g" by
    Mike Donohue - Product Management - Oracle Business Intelligence
    Performance – Parameters and Page Items
    Page Items provide very responsive, interactive manipulation of data
    At a cost:
    Forces retrieval of all Detail values
    Incremental increases of memory as indices are built
    Parameters reduce result set, improve performance
    Use Page Items only when needed – 2 to 3 with < 12 values each
    Performance – Parameters and Page Items - example
    Loc(3), Dept(10), ProdType(50), Prod(1,000), Date(365)
    547,500,000 potential rows/combinations (3*10*50*1000*365)
    Use parameters for Loc, Dept, ProdType, and 90 day date range
    90,000 potential rows/combinations (1000*90)
    Reduce data retrieved by ~ 6000 X
    Improve performance by several orders of magnitude
    I hope this will help you!

  • Performance issue on LIPS table

    Hi Experts,
I need to know the deliveries for particular batches and materials, hence I am using the below select query in my program:
    SELECT vbeln
                 posnr
                 matnr
                 werks
                 lgort
                 charg
                 lfimg
                 meins FROM lips
                INTO TABLE int_lips
    FOR ALL ENTRIES IN int_mchb
                         WHERE vbeln IN s_vbeln
                             AND pstyv IN s_pstyv
                             AND matnr EQ int_mchb-matnr
                             AND werks EQ int_mchb-werks
                            AND lgort EQ int_mchb-lgort
                            AND charg EQ int_mchb-charg.
My program is fine when a delivery is given on the selection screen, but it takes a lot of time when no delivery is entered.
Please guide me on how I can improve my program's performance. Is there any need to create a secondary index?
    Thanks in advance.
    Regards,
    Kavya

    Using
    vbeln IN s_vbeln
slows down your query, as the cost of the IN operator is high. If the select-option is empty, then all records are processed. Since VBELN is the leftmost key column of the table, an empty restriction hurts twice, because the set of records cannot be restricted to a smaller group before the next fields are compared.
    The best would be
    select-options s_vbeln ... obligatory.
    "or
    if s_vbeln[] is not initial.
      select ....
    endif.
    Regards
    Marcin
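A hedged sketch combining this advice with another pitfall in the original query (names taken from this thread): FOR ALL ENTRIES IN int_mchb silently drops all the int_mchb-related conditions when int_mchb is empty, so both the driver table and s_vbeln deserve a guard:

IF int_mchb[] IS NOT INITIAL AND s_vbeln[] IS NOT INITIAL.
  SELECT vbeln posnr matnr werks lgort charg lfimg meins
    FROM lips
    INTO TABLE int_lips
    FOR ALL ENTRIES IN int_mchb
    WHERE vbeln IN s_vbeln
      AND pstyv IN s_pstyv
      AND matnr EQ int_mchb-matnr
      AND werks EQ int_mchb-werks
      AND lgort EQ int_mchb-lgort
      AND charg EQ int_mchb-charg.
ENDIF.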

  • Performance issue in internal tables

    Hi,
I have an internal table with a large volume of records (around 1,000,000, i.e. 10 lakh). It has been sorted by the required fields, and the same fields are used in the WHERE condition of the loop, but it still takes a lot of time to read the records while looping over the internal table.
Could you please suggest the best way to read the internal table so that the read time is reduced.
Points will be assigned for the better solutions.
    Thanks  in Advance,
    Chandra Mohan Vempati

    Hi..
When you execute a LOOP using a WHERE condition, it actually processes all the rows from the first row to the last row.
To avoid this, we can use LOOP AT IT_VBAK FROM <row>.
Try this code; it should improve the performance.
For example, IT_VBAK is my internal table with 1,000,000 (10 lakh) records,
but I want to process only the records with KUNNR = 1000.
    DATA: V_START TYPE I.
    DATA : V_KUNNR TYPE VBAK-KUNNR.
    SORT IT_VBAK BY KUNNR.
READ TABLE IT_VBAK INTO WA_VBAK
   WITH KEY KUNNR = '0000001000'
   BINARY SEARCH.
IF SY-SUBRC = 0.
  V_START = SY-TABIX.  "Capture the row position of the first matching row
  V_KUNNR = WA_VBAK-KUNNR.
  LOOP AT IT_VBAK INTO WA_VBAK FROM V_START.
**Terminate the loop once the records for this KUNNR are processed
    IF WA_VBAK-KUNNR <> V_KUNNR.
      EXIT.
    ENDIF.
**Process the records here
    WRITE:/ WA_VBAK-VBELN,
            WA_VBAK-VKORG.
  ENDLOOP.
ENDIF.
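A hedged alternative to the manual binary-search bookkeeping (assuming the table declaration can be changed): declare the internal table as a SORTED table, so that LOOP AT ... WHERE on leading key fields is optimized to a partial-key access instead of a full scan:

DATA: it_vbak TYPE SORTED TABLE OF vbak
              WITH NON-UNIQUE KEY kunnr,
      wa_vbak TYPE vbak.

" The kernel locates the first matching row via the table key;
" no READ ... BINARY SEARCH or manual EXIT logic is needed.
LOOP AT it_vbak INTO wa_vbak WHERE kunnr = '0000001000'.
  WRITE: / wa_vbak-vbeln,
           wa_vbak-vkorg.
ENDLOOP.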
