Performance issue with BSIS table

Hi all,
I am encountering a performance issue when selecting from the BSIS table.
The requirement: I have internal orders extracted from a master table; based on these internal orders and the company code, I need to select additional fields from BSIS. The SELECT on BSIS with these conditions performs very badly. Is there a better way to handle this?
Thanks in advance.
GK

Hi Gunjan,
To improve the performance of the BSIS fetch, first check the fields in your WHERE condition.
Then go to SE11 and display the BSIS table. In the application toolbar there is a push button named 'Indexes'; click on it and check whether an existing index already contains all the fields in your WHERE clause.
If not, ask your senior whether you may create a secondary index on the table, and if approved, go ahead.
Hope this helps.
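For reference, a secondary index created in SE11 ends up as an ordinary database index. A minimal sketch of the Oracle DDL it corresponds to; the index name BSIS~Z01 and the choice of ZUONR as the selective field are assumptions for illustration only, and in practice you create the index in SE11 so it is registered in the ABAP Dictionary:

    -- Illustrative only: the index name and field list are assumptions,
    -- not taken from this thread. SAP secondary indexes should lead
    -- with MANDT (the client field).
    CREATE INDEX "BSIS~Z01" ON bsis (mandt, bukrs, zuonr);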

Similar Messages

  • Hello Gurus... issue with child table update

    I have an issue with a child table update.
    I have created a GTC with one parent table and two child tables. I'm able to update the parent table and the values are found in the DB, but the issue is that the child table values are not updating in the DB.
    Please give me a solution.
    Regards
    Srikanth

    If you are keeping referential integrity in the database, not in the application, it is easy to find the child and parent tables. Here is a quick and dirty query. You can join this to dba_cons_columns to find out on which columns the referential constraints are defined. This lists all child-parent table including SYS and SYSTEM users. You can run this for specific users of course.
    select cons1.owner child_owner,cons1.table_name child_table,
    cons2.owner parent_owner,cons2.table_name parent_table
    from dba_constraints cons1,dba_constraints cons2
    where cons1.constraint_type='R'
    and cons1.r_constraint_name=cons2.constraint_name;
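    As suggested above, a hedged sketch of the join to dba_cons_columns to also list the constrained columns (standard dictionary views; the ordering is just for readability):

    select cons.owner, cons.table_name, cols.column_name, cols.position
    from dba_constraints cons, dba_cons_columns cols
    where cons.constraint_type='R'
    and cols.owner=cons.owner
    and cols.constraint_name=cons.constraint_name
    order by cons.owner, cons.table_name, cols.position;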

  • Issues with Advanced Table: Add Row / new row does not work in some scenarios

    Hi,
    Wondering if there's any issue with Advanced Tables where they do not create any rows. I don't know if anyone has tried this or not. I have one OA page with an Advanced Table and a button that, when clicked, opens a new OA page in a pop-up window. The pop-up page contains one textbox where you enter data, which gets saved in one of the VO's transient attributes. Now, on the base page, if you don't click the button to open the pop-up page, you can add new rows in the Advanced Table by clicking the Add Row button. But as soon as you open a pop-up window and close it, the Add New Rows button doesn't work and no new rows are created. Basically the page stops working. Both the pop-up and the base page share the same AM but have different controllers.
    The pop-up page is a custom page that I open by giving the Destination URI value in the button item and target frame _blank.
    I even tried creating rows programmatically for the Advanced Table, but this too doesn't work once you open a pop-up. Also, I have used pageContext.putTransactionValue in the pop-up page and am checking and removing this in the base page.
    Any help is appreciated.
    Thanks

    anyone

  • Performance issue with BSIS (open accounting items)

    Hey All,
    I am having a serious performance issue with an accrual report which gets all open G/L items, and I need some tips for optimization.
    The main issue is that I am accessing large tables like BSIS, BSEG and BSAS without proper indexes, and I am dealing with huge amounts of data.
    The select itself takes a long time, and since I have so much data, the overall execution is slow too.
    The select which concerns me the most is:
      SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
                 INTO TABLE i_bsis
                  FROM bsis
                  WHERE bukrs = '1000'
                  AND hkont in r_hkont   
                  AND budat <= p_lcdate
                  AND augdt = 0
                  AND augbl = space
                  AND gsber = c_ZRL1   
                  AND gjahr BETWEEN l_gjahr2 AND l_gjahr
                  AND ( blart = c_re      "Invoice
                  OR    blart = c_we      "Goods receipt
                  OR    blart = c_zc      "Invoice Cancels
                  OR    blart = c_kp ).   "Accounting offset
    I have seen other related threads, but they were not that helpful.
    We already have a secondary index on BUKRS, HKONT and BUDAT, and I have checked in ST05 that it does get used. But in spite of that, the report takes more than 15 hrs to complete (maybe because of the huge data volume).
    Any input is highly appreciated.
    Thanks

    Thank you Thomas for your inputs.
    You said that R_HKONT contains several ranges of account numbers, and that if these ranges cover a significant portion of the overall existing account numbers, then no really quick access is possible via the BSIS primary key.
    Unfortunately, R_HKONT contains all account numbers.
    As Rob said, your index on HKONT and BUDAT does not help much, since you are selecting "<=" on BUDAT. No chance of narrowing down that range?
    Will look into this.
    What about GSBER? Does the value in c_ZRL1 provide a rather small subset of the overall values? Then an index on BUKRS and GSBER might be helpful.
    ZRL1 does provide a decent selection. But I don't know whether one more index is a good idea for overall system performance.
    I assume that the four document types are not very selective, so it probably does not pay off to investigate selecting on BKPF (there is an index involving BLART) and joining BSIS for the additional information. You still might want to look into it though.
    I did try to investigate this option too. Based on other threads related to BSIS and Rob's suggestion in those threads, I tried this:
    SELECT bukrs belnr gjahr blart budat
      FROM bkpf INTO TABLE bkpf_l
            WHERE bukrs = c_pepsico
            AND bstat IN (' ', 'A', 'B', 'D', 'M', 'S', 'V', 'W', 'Z')
            AND blart IN ('RE', 'WE', 'ZC', 'KP')
            AND gjahr BETWEEN l_gjahr2 AND l_gjahr
            AND budat <= p_lcdate.
    SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
               FROM bsis INTO TABLE i_bsis FOR ALL ENTRIES IN bkpf_l
                         WHERE bukrs = bkpf_l-bukrs
                          AND  hkont IN r_hkont
                          AND  budat = bkpf_l-budat
                          AND  augdt = 0
                          AND  augbl = space
                          AND  gjahr = bkpf_l-gjahr
                          AND  belnr = bkpf_l-belnr
                          AND  blart = bkpf_l-blart
                          AND  gsber = c_zrl1.
    This improves the select on BSIS a lot, but the first select on BKPF kills it. Not sure whether this would improve overall performance.
    Also, I was wondering whether it would help to refresh the table statistics through DB20. The last refresh was done 7 months ago. How frequently should we do this? Will it help?
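    On the statistics question: DB20 maintains the database optimizer statistics, which on Oracle can also be gathered directly. A hedged sketch (the schema owner SAPR3 is an assumption; in practice let DB20/BRCONNECT do this so the statistics stay consistent with SAP's control tables):

      -- Gather fresh optimizer statistics for BSIS and its indexes (Oracle).
      -- 'SAPR3' is an assumed schema owner; adjust to your system.
      BEGIN
         DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => 'SAPR3',
            tabname          => 'BSIS',
            cascade          => TRUE,   -- include the indexes
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
      END;
      /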

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
    Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                       SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                       SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                       SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                  FROM TABLE (processor (base_query (argA, argB), 100)) t
              GROUP BY colC
              ORDER BY colC;
        END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                       SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
     ALTER PACKAGE pipeline_example COMPILE;

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on, then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
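    For reference, a minimal sketch of enabling such a trace with wait events in the test session (standard Oracle 10046 event syntax; the tracefile identifier is just a label):

      ALTER SESSION SET tracefile_identifier = 'pipeline_test';
      ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
      -- ... run the with_pipeline / no_pipeline comparison here ...
      ALTER SESSION SET EVENTS '10046 trace name context off';
      -- then format the raw trace file with tkprof to see waits per statement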

  • Issue with Multiple Tables in Report

    Post Author: dwessell
    CA Forum: General
    Hi,
    I'm using Crystal Reports 2k8.
    I'm doing a report with three tables, CQ_HEADER, SO_HEADER and SALESPERSON. Both the CQ_HEADER and the SO_HEADER tables link to the SALESPERSON table via a SPN_AUTO_KEY field.
    However, I always receive duplicates in my result set due to the joins, and I don't receive results that are valid in one table but empty in the other (such that it only counts a CQ if there is an SO associated with it). Here's the query produced by CR:
      SELECT "CQ_HEADER"."CQ_NUMBER", "CQ_HEADER"."ENTRY_DATE", "CQ_HEADER"."TOTAL_PRICE", "SALESPERSON"."SALESPERSON_NAME", "SO_HEADER"."ENTRY_DATE", "SO_HEADER"."TOTAL_PRICE"
    FROM   "CQ_HEADER" "CQ_HEADER" INNER JOIN ("SO_HEADER" "SO_HEADER" INNER JOIN "SALESPERSON" "SALESPERSON" ON "SO_HEADER"."SPN_AUTO_KEY"="SALESPERSON"."SPN_AUTO_KEY") ON "CQ_HEADER"."SPN_AUTO_KEY"="SALESPERSON"."SPN_AUTO_KEY"
    WHERE  ("CQ_HEADER"."ENTRY_DATE">={ts '2007-12-01 00:00:00'} AND "CQ_HEADER"."ENTRY_DATE"<{ts '2007-12-18 00:00:00'}) AND ("SO_HEADER"."ENTRY_DATE">={ts '2007-12-01 00:00:00'} AND "SO_HEADER"."ENTRY_DATE"<{ts '2007-12-18 00:00:00'})
    ORDER BY "SALESPERSON"."SALESPERSON_NAME"
    There is no link between the SO_HEADER and the CQ_HEADER. Can anyone suggest how I could structure this so that it doesn't return duplicate values?
    Thanks
    David     


  • Issue with Temp tables in SSIS 2012 with RetainSameConnection=true

    Hello,
    We have a few packages written in 2008 that are being upgraded to 2012. Our packages mostly use temp tables during processing. During the initial migration, we faced an issue with handling temp tables in the OLE DB destination provider and found a solution for the same under
    "Usage of Temp tables in SSIS 2012".
    Most of our packages execute fine now.
    We came across a different issue recently. For one of our packages, which merges 3 feeds into a temp table and then executes a stored procedure for processing, the package fails intermittently.
    Below are the properties of the SSIS package and its components which you might be interested in:
    * RetainSameConnection for the OLE DB connection manager set to True
    * Properties of the OLE DB destination:
    AccessMode : SQL Command
    CommandTimeOut : 0
    SQLCommand : Select * from #tmp
    * Using SSIS 2012 and the SQL Server Native Client 11 OLE DB provider (Provider=SQLNCLI11.1)
    * One of the feeds is 10MB
    During investigation using Profiler, I found that though I use RetainSameConnection, I often see more than one SPID used during the scope of the SSIS execution, and whenever this happens the package fails with the below error messages:
    An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80040E14 Description: "Statement(s) could not be prepared.".
    An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80040E14 Description: "Invalid object name '#tmp'."
    Now, why does SSIS use a different SPID during its execution when RetainSameConnection is set to True (note: I have only one OLE DB connection in that package)?
    To simulate the issue, instead of the 10MB file I used a 500KB file, executed the package twice, and all went fine.
    Is it because the 10MB file takes a long time to process, causing a timeout on that OLE DB destination and forcing SSIS to open another connection? But remember, CommandTimeout is set to infinite (0) for that OLE DB destination.
    Your response is much appreciated.

    Hey,
    I understand you set the RetainSameConnection property to true for all the OLE DB connections used in the package; if not, make sure it is set for all connections, including file connections as well.
    Additionally, you can try setting the DelayValidation property to true for all the data flows and control flows in the package and try running it with the 10MB file.
    I hope this fixes the intermittent failure issue you are facing with SSIS.
    (Please mark solved if I've answered your question, and vote for it as helpful to help other users find a solution quicker.)
    Thanks,
    Atul Gaikwad.
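    One common workaround (a sketch, not from this thread): create the temp table explicitly in an Execute SQL Task on the same RetainSameConnection=True connection before the data flow runs, so the object exists whichever task validates first. The column definitions are placeholders:

      -- Run in an Execute SQL Task on the same RetainSameConnection=True
      -- connection manager that the OLE DB destination uses.
      IF OBJECT_ID('tempdb..#tmp') IS NOT NULL
          DROP TABLE #tmp;
      CREATE TABLE #tmp
      (
          feed_id INT,           -- placeholder columns; use the real
          payload VARCHAR(4000)  -- layout of the merged feeds
      );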

  • Rebate-related issue with database tables VKDFS & VBAK

    Hi everybody,
    I am facing a problem with the tables VKDFS and VBAK.
    In my program, the report has to display the details of the agreement numbers concerning the sales or billing documents, and later on it has to create a credit memo for that particular customer.
    In the very first step, the program fetches all sales documents from VKDFS as per the selections, like the following:
      select        * from  vkdfs into table ivkdfs
             where  fktyp  in r_fktyp
             and    vkorg  in s_vkorg
             and    fkdat  in s_fkdat
             and    kunnr  in s_kunnr
             and    fkart  in s_fkart
             and    vbeln  in s_vbeln
             and    faksk  in s_faksk
             and    vtweg  in s_vtweg
             and    spart  in s_spart
             and    netwr  in s_netwr
             and    waerk  in s_waerk.
    After this, for all the sales orders fetched above, it fetches again from the VBAK table as follows:
    SVBAK[] = IVKDFS[]
    select * from vbak into table ivbak
      for all entries in svbak
      where vbeln = svbak-vbeln
      and   knuma in s_knuma
      and   auart in s_auart
      and   submi in s_submi
      and  (vbak_wtab).
    So, it is filtering from VBAK.
    But the exact issue is that there is one sales order which is available in VBAK but not available in the VKDFS table.
    So, my program fails to display the report for that agreement number.
    As per my analysis, I came to know that there are no entries in the VKDFS table for the sales orders in VBAK concerning those agreement numbers.
    VKDFS is the SD index: billing initiator table.
    I want to know how this VKDFS table gets updated from VBAK and, if possible, how to make the missing entry in VKDFS for the values in VBAK without affecting other tables.
    Please let me know the solution if you have any.
    It's an urgent, sev 1 ticket; eagerly waiting for a solution or some information.
    Thanks & Regards,
    J.


  • SQL query: ORDER BY issue with huge table

    Hello friends,
    I have been facing a terrible issue with ORDER BY. I would appreciate your help; please let me know your input for my case:
    => If I run the select query, it returns results quickly, in some milliseconds (SQL Developer fetches 50 rows at a time).
    => If I run the select query with a WHERE condition whose column (say A) is indexed, and with an ORDER BY whose column (say B) is also indexed, here is the issue:
    1. If the WHERE condition filters down to a small result set, then the ORDER BY works fine: 1-5 sec, which is good.
    2. If the WHERE condition yields a large result set, say more than 50,000 rows, then with the ORDER BY the wait time grows enormously; I have waited 10+ minutes to get the result back for 120,000 records.
    Does ORDER BY really take that long for 100K records? I think something else is wrong; your pointers will really be helpful. I am very new to SQL and even newer to large-table cases.
    I am using SQL Developer version 2.1.1.64
    and Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production.
    Thank you so much.

    Yes, you are correct, but my concentration was on the ORDER BY (since it will do a full scan of the table, that's why I mentioned it), and I was also wondering whether millions of records in a table should even be an issue.
    Anyway, for the explain plan: when just a value in the WHERE clause changes there is a huge difference, which I want to point out below:
    SELECT *
    FROM EES_EVT EES_EVT
    WHERE APLC_EVT_CD = 'ABC'
    ORDER BY CRE_DTTM DESC
    execution time : 0.047 sec
    Plan hash value: 290548126
    | Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |             |    27 | 14688 |    25   (4)| 00:00:01 |
    |   1 |  SORT ORDER BY               |             |    27 | 14688 |    25   (4)| 00:00:01 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| EES_EVT     |    27 | 14688 |    24   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN          | XIE1EES_EVT |    27 |       |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("APLC_EVT_CD"='ABC')
    Note
       - SQL plan baseline "SYS_SQL_PLAN_6d41e6b91925c463" used for this statement
    =============================================================================================
    SELECT *
    FROM EES_EVT EES_EVT
    WHERE APLC_EVT_CD = 'XYZ'
    ORDER BY CRE_DTTM DESC
    execution time : 898.672 sec.
    Plan hash value: 290548126
    | Id  | Operation                    | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |             |   121K|    62M|       |   102K  (1)| 00:11:02 |
    |   1 |  SORT ORDER BY               |             |   121K|    62M|    72M|   102K  (1)| 00:11:02 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| EES_EVT     |   121K|    62M|       | 88028   (1)| 00:09:27 |
    |*  3 |    INDEX RANGE SCAN          | XIE1EES_EVT |   121K|       |       |   689   (1)| 00:00:05 |
    Predicate Information (identified by operation id):
       3 - access("APLC_EVT_CD"='XYZ')
    Note
        - SQL plan baseline "SYS_SQL_PLAN_ef5709641925c463" used for this statement
    Also note that this table contains 74,328 MB of data.
    Thanks
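    One option worth testing (a sketch, not from this thread): a composite index leading on the filter column and including the sort column lets Oracle read the rows already in order and skip the expensive SORT ORDER BY step for large result sets. The index name is illustrative:

      -- Illustrative name; DESC matches the ORDER BY CRE_DTTM DESC.
      CREATE INDEX ees_evt_evtcd_credttm_ix
          ON ees_evt (aplc_evt_cd, cre_dttm DESC);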

  • Strange issue with ADF table in chrome browser

    I have an ADF table which should display 23 rows, but only 20 rows are visible in the Chrome browser; other browsers like IE and Firefox display all 23 rows correctly. I have used a default ADF table with drag & drop behaviour. All 23 rows are exported correctly to Excel with the 'export to Excel' behaviour, and inspecting the page source also shows all the rows in Chrome, but the display in ADF is the only problem, and only in Chrome. We're having a production issue with this; any ideas are appreciated.
    Thanks,
    Surya

    Hi All,
    Is this issue fixed yet? There are a couple of threads reporting this issue and the original thread has been archived. It is a real issue, and it remains an issue. The Chrome browser cuts off the last row of a table in the display; IE displays the row correctly. I am working with JDev 12.1.2 and I am building an application using ADF tables. Without exception, on every page that has one, the last row of the table is cut off from display in a very ugly way and you cannot scroll down to display the full row.
    I have tried wrapping the table in a Panel Collection - same result. I have tried setting the height of the table - same result. I have tried surrounding the table with a PanelGroupLayout component (layout set to scroll) - same result. I have even tried surrounding the table with a PanelHeader component, with Type set to both default and Stretch - yes, you guessed it, same result! I've even put the table in the middle of a PanelStretchLayout component - but the last row is always cut off.
    This should be easy for you to reproduce: just drop a data control on an ADF page and select a table. View the page in the Chrome browser and you will see what I'm talking about. I'm using Google Chrome version 31.0.1650.63 m.
    I have experimented with AFStretchWidth and AutoHeightRows (as suggested by previous threads), nothing seems to work.
    Here's another suggestion, if the forum would allow you to insert an image, I could actually show you what I'm talking about. Food for thought perhaps?
    Best regards,
    Nigel
    "Life's too short not to use ADF"

  • Insert performance issue with Partitioned Table.....

    Hi All,
    I have a performance issue with a table which is partitioned. Without the table being partitioned, the insert ran in less time; after partitioning it took more than double.
    1) The table was created initially without any partitions and the below insert took only 27 minutes.
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:27:35.20
    2) Now I re-created the table with partitioning (range, yearly - below) and the same insert took 59 minutes.
    Is there any way I can achieve better insert performance on this partitioned table?
    [ Similarly, I have another table with 50 million records; the insert took 10 hrs without partitioning and 18 hours with partitioning... ]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
    Open C1;
    Loop
      Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
      Forall I In 1 .. C_Rectype.Count
        Insert Into test
             (col1, col2, col3)
        Values
             -- record fields val1..val3 assumed from the original placeholders
             (C_Rectype(I).val1, C_Rectype(I).val2, C_Rectype(I).val3);
      V_Rec := V_Rec + Nvl(C_Rectype.Count, 0);
      Commit;
      Exit When C_Rectype.Count = 0;
      C_Rectype.delete;
    End Loop;
    End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know which partition the data is going into beforehand you can save a little bit of processing by specifying the partition (which may not be a scalable long-term solution) in the insert - I'm not 100% sure you can do this on inserts but I know you can on selects.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert causes it to be ignored. Where it is effective and should help you is if you can do the insert in one statement: insert into / select from. If you are using the loop to avoid filling up undo/rollback, you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to, because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
    insert /*+ hints */ into ...
    Select
         A.Ing_Acct_Nbr, currency_Symbol,
         Balance_Date,     Company_No,
         Substr(Account_No,1,8) Account_No,
         Substr(Account_No,9,1) Typ_Cd ,
         Substr(Account_No,10,1) Chk_Cd,
         Td_Balance,     Sd_Balance,
         Sysdate,     'Sisadmin'
    From Ideaal_Cons.Tb_Account_Master_Base A,
         Ideaal_Staging.Tb_Sisadmin_Balance B
    Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
       And A.Vendor_Cd = b.company_no
        ;
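    On the partition question above: Oracle does accept partition-extended table names on INSERT as well as SELECT. A minimal sketch (the partition, column, and staging-table names are illustrative):

      -- Partition-extended direct-path insert; names are illustrative.
      INSERT /*+ APPEND */ INTO test PARTITION (p_2010)
      SELECT col1, col2, col3
        FROM staging_table
       WHERE balance_date >= DATE '2010-01-01'
         AND balance_date <  DATE '2011-01-01';
      COMMIT;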

  • Performance issue with COEP table in ECC 6

    Hi,
    Any idea how to resolve a performance issue on the COEP table in ECC 6.0?
    We are not using the COEP table right now; this table occupies 100 GB of the 900 GB in the PRD system.
    Can I directly archive/delete the table?
    Regards
    Siva

    Hi Siva,
    You cannot archive the COEP table alone; it should be archived along with the respective archiving object. Just deleting the table is not at all a good idea.
    To find out the appropriate archiving objects contributing to the entries in COEP, you need to perform a CO table analysis using programs RARCCOA1 and RARCCOA2. For further information refer to SAP Note 138688.
    Hope this helps,
    Naveen

  • Performance issue with MSEG table

    Hi all,
    I need to fetch materials (MATNR) based on the service order number (AUFNR) in the selection screen, but there is a performance issue with this. How can I overcome it?
    Regards ,
    Amit

    Hi,
    There could be various reasons for a performance issue with MSEG:
    1) The database statistics of tables and indexes are not up to date; because of this the wrong index is chosen during execution.
    2) Improper indexes: there is no index with the fields mentioned in the WHERE clause of the statement. Because of this, the CBO would have chosen a wrong index and done a range scan.
    3) An optimizer bug in Oracle.
    4) The table is very large; consider archiving.
    Better to switch on an ST05 trace before you run the statement; it will give more detailed information on where exactly the time is being spent during execution.
    Hope this helps
    dileep
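    To check point 2 concretely, one way (a sketch using the standard Oracle dictionary view dba_ind_columns) is to list which MSEG indexes cover which columns and compare them against the fields in your WHERE clause:

      SELECT index_name, column_position, column_name
        FROM dba_ind_columns
       WHERE table_owner = 'SAPR3'   -- assumed SAP schema owner; adjust
         AND table_name  = 'MSEG'
       ORDER BY index_name, column_position;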

  • Performance issue with XLA tables and GL tables R12

    Hi all,
    I have one SQL that joins all the XLA tables with the GL tables to get invoice-related encumbrance data.
    My problem is that for some reason the SQL is going to GL_JE_LINES first (from the explain plan). As a result my SQL takes some 25 min to finish.
    I am pretty sure that if I can manage to force the SQL to visit the XLA tables first, it will finish in a couple of minutes. I even tried the LEADING hint, but it didn't work.
    Can someone help me?
    SELECT poh.segment1,
                        tmp.closed_code,
                        gcc.segment1,
                        gcc.segment2,
                        gcc.segment3,
                        gcc.segment4,
                        gcc.segment5,
                        gcc.segment6,
                        gcc.segment7,
                        SUM (NVL (gjl.entered_dr, 0) - NVL (gjl.entered_cr, 0))
                   FROM apps.up_po_encumb_relief_tmp_nb TMP,
                        apps.po_headers_all POH,
                        apps.po_distributions_all pod,
                        apps.ap_invoice_distributions_all APID,
                        xla.xla_transaction_entities XTE,
                        xla_events XE,
                        apps.xla_ae_headers XAH,
                        apps.xla_ae_lines XAL,
                        apps.gl_import_references GIR, -- DOUBLE CHECK JOIN CONDITIONS ON THIS TO INCLUDE OTHER COLS
                        apps.gl_je_lines GJL,
                        apps.gl_je_headers GJH,
                        apps.gl_code_combinations GCC
                  WHERE     POH.segment1 = TMP.PO_NUMBER
                        AND POH.PO_HEADER_ID = POD.PO_HEADER_ID
                        AND POD.Po_distribution_id = APID.po_distribution_id
                        AND XTE.APPLICATION_ID = 200                           -- Payables
                        AND XTE.SOURCE_ID_INT_1 = APID.INVOICE_ID       --POH.po_header_id
                        AND XTE.ENTITY_ID = XE.ENTITY_ID
                        AND XTE.APPLICATION_ID = XE.APPLICATION_ID
                        AND XAH.ENTITY_ID = XE.ENTity_ID
                        AND XAH.EVENT_ID = XE.EVENT_ID
                        AND XAH.APPLICATION_ID = XE.APPLICATION_ID
                        AND XAL.AE_HEADER_ID = XAH.AE_HEADER_ID
                        AND XAL.APPLICATION_ID = XAH.APPLICATION_ID
                        AND GIR.gl_sl_link_table = XAL.gl_sl_link_table
                        AND GIR.gl_sl_link_id = XAL.gl_sl_link_id
                        AND GJL.je_header_id = GIR.je_header_id
                        AND GJL.je_line_num = GIR.je_line_num
                        AND GJH.je_header_id = GJL.je_header_id
                        AND GJH.status = 'P'
                        AND POD.code_combination_id = GJL.code_combination_id
                        AND GJL.code_combination_id = GCC.code_combination_id
                        AND GCC.enabled_flag = 'Y'
                        AND GJH.je_source = 'Payables'
                        AND GJH.je_category = 'Purchase Invoices'
                        AND GJH.encumbrance_type_id IN (1001, 1002)
                        AND GJH.actual_flag = 'E'
                        AND GJH.status = 'P'
                        AND (NVL (GJL.entered_dr, 0) != 0 OR NVL (GJL.entered_cr, 0) != 0)
               GROUP BY poh.segment1,
                        tmp.closed_code,
                        gcc.segment1,
                        gcc.segment2,
                        gcc.segment3,
                        gcc.segment4,
                        gcc.segment5,
                        gcc.segment6,
                        gcc.segment7;

    Hi,
    did you
    - check table statistics (have the affected tables been analyzed recently)?
    - check the explain plan for full table scans? You are using NVL on gjl.entered_dr and gjl.entered_cr, which may lead to a full table scan; as far as I know, there is no (standard) function-based index on either column.
    Regards
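    Two things worth sketching (assumptions, not from this thread): the NVL predicate can be rewritten without wrapping the columns in a function, since rows where both amounts are NULL fail a plain != 0 comparison anyway, so the result set is unchanged; and a LEADING hint is silently ignored unless it names the exact aliases used in the FROM clause (e.g. LEADING(TMP POH POD APID)).

      -- The NVL-free form of the amount filter, shown standalone;
      -- rows with NULL in both columns are still excluded because
      -- NULL != 0 does not evaluate to true.
      SELECT gjl.je_header_id, gjl.je_line_num
        FROM apps.gl_je_lines gjl
       WHERE gjl.entered_dr != 0
          OR gjl.entered_cr != 0;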

  • Performance issue with SharePoint list

    Current System:
    Our application (for submitting and publishing articles) is using WSS 3.0 to store content and host workflows.
    Article content like PDFs, images and videos is stored in a SharePoint document library.
    There is also a list in which we have configured a custom state-machine workflow. When a list item is created from the UI, an instance of the configured workflow is triggered and starts monitoring.
    Each article belongs to a journal. We have TBs of data stored in content databases. Initially there was only one site collection (“http://<myweb:8080>/ArtcleLibrary”) and the entire article data was stored in the same document library.
    Later we decided to split the document library by journal.
    To achieve better scalability, we have now changed the application architecture by creating a site collection for each journal and keeping article content (there are many articles in a journal) in the corresponding site collection. We have added more content databases, and site collections are equally distributed among these DBs.
    Now it is like (“http://<myweb:8080>/<journalID>/ArtcleLibrary”).
    Actual Problem:
    We have achieved scalability for content management by splitting document libraries among different site collections and content databases.
    But the SP list for managing workflows is still in the initial site collection (“http://<myweb>:8080/WorkflowTasks”). This single list now contains 60,000 items (= 60,000 workflow instances), even though we deleted completed list items older than 3 months.
    We have performance issues with the current workflow list. It takes time (avg 20 s) to create/read a list item, and sometimes items are not getting created in the list at all.
    From MSDN I came to know that there are recommended limits for the item count in an SP list (2,000 or 5,000 items).
    What is the best solution to the above performance issue with the SharePoint list?
    Would our document library approach work for the SP list as well (i.e. split the current SP list and keep one in each journal site collection)?
    Please advise.

    Hi Andrew,
    Thanks for your post.
    I still have some doubts.
    If we go like this, will OWSTIMER be able to handle the workflows (the custom workflow is attached to the SP list) of this many site collections? We now have more than 600 site collections.
