Performance issue with sys.user_history$ table

Hi,
I am investigating performance on one of my client's databases (9.2.0.8), which has been experiencing intermittent poor response times. In the Top SQL section of the Statspack report I can see that the data dictionary table SYS.USER_HISTORY$ is being accessed very frequently. I understand this is a result of password limits being set in users' profiles, but each access appears to incur a full table scan costing over 7,500 gets. The buffer gets and physical reads against this table account for a high percentage of the totals, and since the top waits are buffer- and I/O-related, this must be a major factor. Here is an extract from the Statspack report:
Buffer Gets     Executions  Gets per Exec   %Total  CPU Time (s)  Elapsed Time (s)  Hash Value
2,327,138       316         7,364.4         7.8     190.22        4889.61           3236020785
  select password_date from user_history$ where user# = :1 order by password_date desc
2,320,524       313         7,413.8         7.8     199.41        4278.44           3584552880
  delete from user_history$ where password_date < :1 and user# = :2
2,272,260       308         7,377.5         7.6     169.36        3453.12           822812381
  select 1 from dual where exists (select password from user_history$ where password = :1 and user# = :2)

Physical Reads  Executions  Reads per Exec  %Total  CPU Time (s)  Elapsed Time (s)  Hash Value
1,448,689       316         4,584.5         20.6    190.22        4889.61           3236020785
  select password_date from user_history$ where user# = :1 order by password_date desc
1,269,172       313         4,054.9         18.1    199.41        4278.44           3584552880
  delete from user_history$ where password_date < :1 and user# = :2
1,206,906       308         3,918.5         17.2    169.36        3453.12           822812381
  select 1 from dual where exists (select password from user_history$ where password = :1 and user# = :2)
Is there any way to improve access to this table? Since it's a data dictionary table, I presume it would not be acceptable to add an index to it, but would it, for example, be acceptable to assign it to a suitably sized KEEP buffer pool, which should at least reduce the physical I/O incurred?
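For example, something like this (a sketch only; it assumes a KEEP pool has already been configured and sized, e.g. via BUFFER_POOL_KEEP or DB_KEEP_CACHE_SIZE, to hold the whole ~60 MB segment):

-- Sketch: assign the table to the KEEP pool (requires a configured KEEP pool
-- large enough for the segment; changing a SYS object may need support approval)
ALTER TABLE sys.user_history$ STORAGE (BUFFER_POOL KEEP);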
Any ideas would be appreciated.
Regards,
Ian Brennan

Hi,
Here is the remaining information which I have now gathered:-
select count(*) from dba_users;
24681
select count(*) from sys.user_history$;
1258133
select profile, limit from dba_profiles where resource_name = 'PASSWORD_REUSE_TIME';
PROFILE                LIMIT
DEFAULT                UNLIMITED
PRS2_DEFAULT_PROFILE   365
select bytes from dba_segments where SEGMENT_NAME='USER_HISTORY$';
61865984
(61,865,984 bytes is about 59 MB, i.e. roughly 7,552 blocks assuming an 8K block size, which matches the ~7,400 gets per execution and confirms that each execution scans the whole table.)
explain plan for
select password_date from user_history$ where user# = :1 order by password_date desc;

SELECT STATEMENT CHOOSE            Cost: 647   Bytes: 913   Cardinality: 83
  2 SORT ORDER BY                  Cost: 647   Bytes: 913   Cardinality: 83
    1 TABLE ACCESS FULL SYS.USER_HISTORY$
                                   Cost: 638   Bytes: 913   Cardinality: 83
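Incidentally, 1,258,133 rows across 24,681 users averages only about 51 rows per user, yet each execution reads the whole table. A quick way to check whether a few users account for most of the rows (a sketch, to be run as a suitably privileged user):

-- How are the password-history rows distributed across users?
SELECT COUNT(*) users, SUM(cnt) total_rows, MAX(cnt) max_rows_per_user
  FROM (SELECT user#, COUNT(*) cnt
          FROM sys.user_history$
         GROUP BY user#);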
Any further thoughts?

Similar Messages

  • Performance issue with joins on table VBAK, VBEP, VBKD and VBAP

Hi all,
I have a report with a join on all four tables: VBAK, VBEP, VBKD and VBAP.
The report has performance issues because of this join.
All the key fields are used in the join, but some non-key fields like VBAP-VSTEL, VBAP-ABGRU and VBEP-WADAT are also part of the SELECT query and are being filled.
Because of these, there is a performance issue.
Is there any way I can improve the performance of the join SELECT query?
I am trying the "FOR ALL ENTRIES" clause.
Kindly provide any alternative if possible.
Thanks.

    Hi,
Please perform some of the steps below, as applicable, for performance improvement:
a) Remove the join on all the tables and join only header and item (VBAK & VBAP).
b) The code should have separate SELECTs for VBEP and VBKD.
c) Remove the non-key fields from the WHERE clause. Once you retrieve data from the database into the internal table, sort the table and delete the entries that do not match the non-key fields like VSTEL, ABGRU and WADAT.
d) As a last option, you can create indexes on the VBAP & VBEP tables for the fields VSTEL, ABGRU & WADAT (not advisable).
e) Buffering on the database tables is also possible.
f) Select only the fields into the internal table that are needed for the processing logic, and list the fields in the SELECT in the same order as in the database table.
    Hope this helps.
    Regards
    JLN

  • View objects performance issue with oracle seeded tables

When I write a view object on Oracle seeded tables like MTL_PARAMETERS, it takes a long time to show on the OAF page. I am trying to display all of these view object columns in the detail disclosure of an advanced table. The application takes more than two minutes to display the view columns of a query which returns just 200 rows. Please help me improve performance when my query uses seeded tables.
This issue happens only with R12 view objects and advanced tables.

    Hi All,
Here is the architecture of my application:
A Java application creates XML from the screen values and inserts that XML into a framework (separate DB schema) table. Java then calls a stored procedure in the same framework DB, and in the SP we have the following steps:
1. It fetches the XML from the XMLType table and inserts it into a screen-specific XMLType table in the framework DB schema. This table has a trigger which parses the XML and inserts the values into GTTs created in separate product schemas.
2. It calls the product SP, which contains the business logic. The product SP does the execution and inserts the response into a response GTT.
3. The response XML is created using an XML generation function and the response GTT.
I hope you understand my architecture this time. Now let me know whether GTTs are good in this scenario or not. Also please note that I need the data in the GTTs only during execution and not after that; I don't want to do explicit deletes, which I would have to do if I used normal tables.
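Note: if the GTTs are created with ON COMMIT DELETE ROWS, the rows disappear automatically at commit, so no explicit delete is needed. A minimal sketch (table and column names are illustrative, not the real framework ones):

-- Rows exist only for the duration of the transaction
CREATE GLOBAL TEMPORARY TABLE response_gtt (
   response_id   NUMBER,
   response_doc  CLOB
) ON COMMIT DELETE ROWS;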
    Regards,
    Vikas Kumar

  • Performance issues with version enable partitioned tables?

    Hi all,
Are there any known performance issues with version-enabled partitioned tables?
I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the OCB optimiser is choosing very expensive plans during merge operations.
Thanks in advance,
    Vitor
    Example:
Operation                            Object Name                      Rows  Bytes  Cost  PStart  PStop
UPDATE STATEMENT (Optimizer Mode=CHOOSE)                                 1         249
 UPDATE                              SIG.SIG_QUA_IMG_LT
  NESTED LOOPS SEMI                                                      1    266   249
   PARTITION RANGE ALL                                                                     1      9
    TABLE ACCESS FULL                SIG.SIG_QUA_IMG_LT                  1    259     2    1      9
   VIEW                              SYS.VW_NSO_1                        1      7   247
    NESTED LOOPS                                                         1    739   247
     NESTED LOOPS                                                        1    677   247
      NESTED LOOPS                                                       1    412   246
       NESTED LOOPS                                                      1    114   244
        INDEX RANGE SCAN             WMSYS.MODIFIED_TABLES_PK            1     62     2
        INDEX RANGE SCAN             SIG.QIM_PK                          1     52   243
       TABLE ACCESS BY GLOBAL INDEX ROWID
                                     SIG.SIG_QUA_IMG_LT                  1    298     2    ROWID  ROWID
        INDEX RANGE SCAN             SIG.SIG_QUA_IMG_PKI$                1            1
      INDEX RANGE SCAN               WMSYS.WM$NEXTVER_TABLE_NV_INDX      1    265     1
     INDEX UNIQUE SCAN               WMSYS.MODIFIED_TABLES_PK            1     62
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
UPDATE /*+ USE_NL(Z1) ROWID(Z1) */ sig.sig_qua_img_lt z1
   SET z1.nextver =
          SYS.ltutil.subsversion
             (z1.nextver,
              SYS.ltutil.getcontainedverinrange (z1.nextver,
                                                 'SIG.SIG_QUA_IMG',
                                                 'NpCyPCX3dkOAHSuBMjGioQ==',
                                                 4574,
                                                 4575),
              4574)
 WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
There are currently no known performance issues with version-enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table, depending on the data that needs to be moved or copied from the child to the parent. This is the reason for the PARTITION RANGE ALL step in the plan you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has given us the best performance in the past for this particular statement. If that is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
One suggestion would be to make sure that the table has been analyzed recently, so that the optimizer has current statistics about the table.
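For example (a sketch, using the owner and table from your plan; CASCADE => TRUE also gathers the index statistics):

BEGIN
   DBMS_STATS.GATHER_TABLE_STATS (ownname => 'SIG',
                                  tabname => 'SIG_QUA_IMG_LT',
                                  cascade => TRUE);
END;
/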
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben

  • Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Thanks Kelly,
    The answers would be the following:
1200 cells per custom section (new count), and up to 30 custom sections per spec, i.e. up to 36,000 cells per spec.
Assuming all will be populated; this would apply to all final material specs in the system, which could be ~25% of all material specs.
The cells will be numeric, free text, drop-downs, and some calculated numeric.
Are we reaching the limits of UI performance?
    Thanks

  • Performance issues with pipelined table functions

I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
      OPEN o_resultset FOR
           SELECT /*+ PARALLEL(t, 5) */ colC,
                  SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
                  SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
                  SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
                  SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
             FROM TABLE (processor (base_query (argA, argB), 100)) t
         GROUP BY colC
         ORDER BY colC;
   END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
           SELECT colC,
                  SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
                  SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
                  SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
                  SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;

Earthlink wrote:
"Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?"
Well, we're missing a lot here. Like:
- a database version
- how you tested
- what data you have, how it is distributed and indexed
and so on.
If you want to find out what's going on, then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
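In short, the classic approach looks like this (a sketch; event 10046 at level 8 includes wait events, and the resulting trace file can be formatted with tkprof):

ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- run the with_pipeline and no_pipeline tests here
ALTER SESSION SET EVENTS '10046 trace name context off';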
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Performance issue with MSEG table

    Hi all,
I need to fetch materials (MATNR) based on the service order number (AUFNR) in the selection screen, but there is a performance issue with this. How can I overcome it?
    Regards ,
    Amit

    Hi,
There could be various reasons for a performance issue with MSEG:
1) The database statistics of tables and indexes are not up to date; because of this, the wrong index is chosen during execution.
2) Improper indexes: there is no index with the fields mentioned in the WHERE clause of the statement. For this reason the CBO may choose a wrong index and do a range scan.
3) An optimizer bug in Oracle.
4) The table is very large; consider archiving.
Better to switch on an ST05 trace before you run these statements; it will give detailed information about where exactly the time is being spent during execution.
    Hope this helps
    dileep

  • Performance issues with Homesharing?

I have a Time Capsule as the base station for my wireless network, two AirPort Expresses set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing else is taking bandwidth. Here is the list of issues:
1) With nothing else running, when playing a movie stored on my iMac via Home Sharing on an iPad 2, it stops and I have to keep pressing the play button over and over again. The iPad usually downloads part of the movie first and then starts playing so that it can cope with the bandwidth, but in many cases it doesn't.
2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV, I can see my computer's library, but when I go into any of the menus it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
3) When watching a Netflix movie on my iPad, I send the sound via AirPlay to some speakers through an AirPort Express. At times I lose the connection to the speakers.
I've complained about Wi-Fi's instability, but here I tried to keep everything within Apple's products to avoid any compatibility issues and stay within N wireless technology, which I understood was much more stable.
Does anyone have any suggestions?


  • Performance Issues with large XML (1-1.5MB) files

    Hi,
I'm using XML Schema based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I'm having serious performance issues with XPath queries.
When I do an XPath query against an element of SQL type VARCHAR2, I get good performance. But when I do a similar XPath query against an element of SQL type collection (VARRAY of VARCHAR2), performance is very poor.
I have also created indexes on extract() and analyzed my XMLType table and indexes, but there is no performance gain. I have also tried all the storage options available for collections, i.e. VARRAYs, nested tables, IOTs, LOBs, inline, etc., and all of them gave me the same bad performance.
I even tried creating XMLType views based on XPath queries, but the performance didn't improve much.
I guess I'm running out of options and patience as well. ;)
    I would appreciate any ideas/suggestions, please help.....
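For reference, this is the kind of extract()-based index I mean (a sketch; the table name and XPath are illustrative, not my real ones). It works for singleton scalar elements, but not for the collection elements, which is exactly where the problem is:

CREATE INDEX doc_id_idx ON xml_documents x
   (extractValue (VALUE (x), '/Document/Header/DocID'));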
    Thanks;
    Ramakrishna Chinta


  • Performance issues with 0CO_OM_WBS_1

We use BW 3.5 & R/3 4.7 and encounter huge performance issues with 0CO_OM_WBS_1: we always have to do a full load involving approx. 15M records, even though on average there are only 100k new records since the previous load. This takes a long time.
Is there a way to delta-enable this datasource?

    Hi,
This DS is not delta-enabled, so you can only do a full load. For a delta-enabled one, you need to use 0CO_OM_WBS_6. This works like the other Financials extractors, in that it has a safety delta (configurable, default 2 hours, in table BWOM_SETTINGS).
What you could do is use WBS_6 as a delta and extract full loads for WBS_1 only for shorter durations.
As you must have an ODS for WBS_1 at the first stage, I would suggest doing a full load only for posting periods that are open. This will reduce the data load.
You may also look at creating your own generic datasource with delta, if you are clear on the tables and logic used.
    cheers...

  • Performance issues with data warehouse loads

We have performance issues with our data warehouse ETL load process. I have run ANALYZE and DBMS_STATS and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
You should analyze the DB after you have loaded the tables.
Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
If yes:
make sure your sequences use caching (ALTER SEQUENCE s CACHE 10000);
drop all unneeded indexes while loading, and disable triggers if possible.
How big is your redo log buffer? When loading a large amount of data, it may be an option to enlarge this buffer.
Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
Is it possible to use a direct load? Or do you already load direct-path?
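On the direct-load point, a sketch (names are illustrative; the APPEND hint requests a direct-path insert above the high-water mark, bypassing the buffer cache):

ALTER TABLE target_fact NOLOGGING;   -- optional: less redo, but take a backup afterwards
INSERT /*+ APPEND */ INTO target_fact
   SELECT * FROM staging_fact;
COMMIT;                              -- direct-path rows become visible at commit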
    Dim

  • Performance issue with HRALXSYNC report..

Hi,
I'm facing a performance issue with the HRALXSYNC report. As this is a standard report, can anybody suggest how to optimize it?
    Thanks in advance.
    Saleem Javed
    Moderator message: Please Read before Posting in the Performance and Tuning Forum, also look for existing SAP notes and/or send a support message to SAP.


  • Performance Issue with VL06O report

    Hi,
We are having a performance issue with the VL06O report when it is run with a forwarding agent: it takes about an hour. The issue is with the VBPA table, and we found one OSS note, but it is for old versions; ours is ECC 5.0. Does anybody know the solution? If you need more information, please ask me.
    Thanks,
    Surya

Sreedhar,
Thanks for your quick response. Indexes were not created for the VBPA table. The Basis people tested by creating indexes and reported that it takes more time with the indexes than with the regular query optimizer. This is happening in the function forward_ag_selection:
select vbeln lifnr from vbpa
  appending corresponding fields of table lt_select
  where vbeln in ct_vbeln
    and posnr eq posnr_initial
    and parvw eq 'SP'
    and lifnr in it_spdnr.
I don't see any issue with this query. I will give more info later.

  • Performance Issue with BSIS(open accounting items)

    Hey All,
I am having a serious performance issue with an accrual report which fetches all open G/L items, and I need some tips for optimization.
The main issue is that I am accessing large tables like BSIS, BSEG and BSAS without proper indexes, and I am dealing with huge amounts of data.
The select itself takes a long time, and since I have so much data, the overall execution is slow too.
    The select which concerns me the most is:
      SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
                 INTO TABLE i_bsis
                  FROM bsis
                  WHERE bukrs = '1000'
                  AND hkont in r_hkont   
                  AND budat <= p_lcdate
                  AND augdt = 0
                  AND augbl = space
                  AND gsber = c_ZRL1   
                  AND gjahr BETWEEN l_gjahr2 AND l_gjahr
                  AND ( blart = c_re      "Invoice
                  OR    blart = c_we      "Goods receipt
                  OR    blart = c_zc      "Invoice Cancels
                  OR    blart = c_kp ).   "Accounting offset
I have seen other related threads, but they were not that helpful.
We already have a secondary index on BUKRS, HKONT and BUDAT, and I have checked in ST05 that it is used. But in spite of that, the report takes more than 15 hours to complete (maybe because of the huge data volume).
    Any Input is highly appreciated.
    Thanks

Thank you Thomas for your inputs.
You said that R_HKONT contains several ranges of account numbers, and that if these ranges cover a significant portion of the overall existing account numbers, then no really quick access is possible via the BSIS primary key. Unfortunately, R_HKONT contains all account numbers.
As Rob said, the index on HKONT and BUDAT does not help much, since I am selecting "<=" on BUDAT. You asked whether there is any chance of narrowing down that range: I will look into this.
On GSBER: you asked whether the value in c_ZRL1 provides a rather small subset of the overall values, in which case an index on BUKRS and GSBER might be helpful. ZRL1 does provide a decent selection, but I don't know if one more index is a good idea for overall system performance.
You assumed that the four document types are not very selective, so it probably does not pay off to investigate selecting on BKPF (there is an index involving BLART) and joining BSIS for the additional information. I did try to investigate this option too. Based on other threads related to BSIS and Rob's suggestion in those threads, I tried this:
SELECT bukrs belnr gjahr blart budat
  FROM bkpf INTO TABLE bkpf_l
  WHERE bukrs = c_pepsico
    AND bstat IN (' ', 'A', 'B', 'D', 'M', 'S', 'V', 'W', 'Z')
    AND blart IN ('RE', 'WE', 'ZC', 'KP')
    AND gjahr BETWEEN l_gjahr2 AND l_gjahr
    AND budat <= p_lcdate.

SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
  FROM bsis INTO TABLE i_bsis FOR ALL ENTRIES IN bkpf_l
  WHERE bukrs = bkpf_l-bukrs
    AND hkont IN r_hkont
    AND budat = bkpf_l-budat
    AND augdt = 0
    AND augbl = space
    AND gjahr = bkpf_l-gjahr
    AND belnr = bkpf_l-belnr
    AND blart = bkpf_l-blart
    AND gsber = c_zrl1.
This improves the select on BSIS a lot, but the first select on BKPF kills it, so I am not sure whether it helps overall performance.
Also, I was wondering whether it would help to refresh the table statistics through DB20. The last refresh was done 7 months ago. How frequently should we do this? Will it help?

  • Performance issue with using MAX function in pl/sql

    Hello All,
We are having a performance issue with the logic below, wherein MAX is used to get the latest instance/record for a given input variable (p_in_header_id). The item_key has the format:
p_in_header_id - <number generated from a sequence>
This query takes around 1 minute 30 seconds to fetch even one record. Could someone please help if there is a better way to form this logic and improve performance?
We want the latest record for the item_key (which we get using MAX(begin_date)) for a given p_in_header_id value.
Query 1:
SELECT item_key
  FROM wf_items
 WHERE item_type = 'xxxxzzzz'
   AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) = p_in_header_id
   AND root_activity = 'START_REQUESTS'
   AND begin_date =
          (SELECT MAX (begin_date)
             FROM wf_items
            WHERE item_type = 'xxxxzzzz'
              AND root_activity = 'START_REQUESTS'
              AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) = p_in_header_id);
Could someone please help us with this performance issue? We are really stuck because of this.
    regards

First of all, thanks to all the gentlemen who replied. Many thanks.
I tried the ROW_NUMBER() option, but it is still taking time; the query and the tkprof results are given below. Even when it doesn't fetch any record (a valid case, because the input header id has no workflow request submitted and hence no entry in the wf_items table), look at the time it has taken.
I also looked at the RANK and DENSE_RANK options which were suggested, but they still take time.
Any further suggestions or ideas as to how this could be resolved?
SELECT 'Y', 'Y', item_key
  FROM (SELECT item_key,
               ROW_NUMBER () OVER (ORDER BY begin_date DESC) rn
          FROM wf_items
         WHERE item_type = 'xxxxzzzz'
           AND root_activity = 'START_REQUESTS'
           AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) = :B1) t
 WHERE rn <= 1;

call     count      cpu    elapsed   disk  query  current  rows
Parse        0     0.00       0.00      0      0        0     0
Execute      1     0.00       1.57      0      0        0     0
Fetch        1  8700.00  544968.73   8180   8185        0     0
total        2  8700.00  544970.30   8180   8185        0     0
    many thanks
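One more rewrite that may be worth testing: the SUBSTR() on ITEM_KEY prevents any index on that column from being used, whereas a LIKE prefix predicate is index-friendly. A sketch (assuming an index that leads on ITEM_TYPE, ITEM_KEY exists; names and values as in the posts above):

SELECT item_key
  FROM (SELECT item_key
          FROM wf_items
         WHERE item_type = 'xxxxzzzz'
           AND root_activity = 'START_REQUESTS'
           AND item_key LIKE :b1 || '-%'
         ORDER BY begin_date DESC)
 WHERE ROWNUM <= 1;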
