Performance issue in query on reguh

Hi,
The following query on table REGUH is taking a very long time:
    SELECT laufd laufi xvorl
           zbukr lifnr kunnr
           empfg vblnr znme1
           znme2 rbetr
        FROM reguh
          INTO TABLE i_reguh_vendor
             FOR ALL ENTRIES IN i_lfa1
               WHERE laufd IN s_laufd AND
                     zbukr EQ i_lfa1-bukrs AND
                     lifnr EQ i_lfa1-lifnr AND
                     rzawe IN s_rzawe AND
                     uzawe IN s_uzawe AND
                     xvorl IN s_xvorl.
The company code is available on the selection screen as a select-option.
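Independent of the index question below, two cheap safeguards are worth checking first (a hedged sketch, names taken from the post): with FOR ALL ENTRIES, an empty i_lfa1 causes the entire WHERE clause to be ignored so all of REGUH is read, and duplicate keys in i_lfa1 only add database work.
    " Hedged sketch: guard the FOR ALL ENTRIES driver table and
    " remove duplicate keys before the SELECT on REGUH.
    IF NOT i_lfa1[] IS INITIAL.
      SORT i_lfa1 BY bukrs lifnr.
      DELETE ADJACENT DUPLICATES FROM i_lfa1 COMPARING bukrs lifnr.
      SELECT laufd laufi xvorl zbukr lifnr kunnr
             empfg vblnr znme1 znme2 rbetr
        FROM reguh
        INTO TABLE i_reguh_vendor
        FOR ALL ENTRIES IN i_lfa1
        WHERE laufd IN s_laufd
          AND zbukr EQ i_lfa1-bukrs
          AND lifnr EQ i_lfa1-lifnr
          AND rzawe IN s_rzawe
          AND uzawe IN s_uzawe
          AND xvorl IN s_xvorl.
    ENDIF.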

Check out OSS Notes 597984 (Create Index) and 145647.
Note 597984 (Create Index):
Solution
Solve the performance problem by importing Support Packages for Release 4.6C and 4.70. You can implement an advance correction. For this purpose implement the attached program corrections and create the following index in table REGUH:
Transaction SE11
Display Database table REGUH
Goto -> Indexes...
In the standard system no indexes have been defined yet, so answer the displayed question "Do you want to create an index?" with "Yes". If customer-specific indexes already exist, choose "Create" (F5) in the dialog box to create a new index.
Index Name:     PMW
Short description: Logical database FPMF of Payment Medium Workbench
Index fields:     MANDT
                 FPM_KEY
                 GRPNO
                 SRTF1
Index -> Activate
The index then exists in the ABAP Dictionary. Finally, the index must also be activated on the database.
If you cannot activate the index via "Utilities -> Database utility -> Create database index", let your database administrator create the index.
Note that, depending on the database, follow-up actions may be required before the new index is used.
For example, with DB2/390 a RUNSTATS must be carried out on table REGUH using Transaction DB13.
Hope this'll help you.
Good luck!
Thanks
Message was edited by: Saquib Khan

Similar Messages

  • Performance issue on query. Help needed.

    This is mainly a performance issue. I hope someone can help me on this.
Basically I have four tables: Master (150000 records), Child1 (100000+ records), Child2 (50 million records!), Child3 (10000+ records)
    (please pardon the aliases).
    Now every record in master has more than one corresponding record in each of the child tables (one to many).
    Also there may not be any record in any or all of the tables for a particular master record.
Now, I need to fetch the max of last_updated_date for every master record in each of the 3 child tables, and then find the maximum of the three values obtained from the 3 tables.
    eg: for Master ID 100, I need to query Child1 for all the records of Master ID 100 and get the max last_updated_date.
    Same for the other 2 tables and then get the maximum of these three values.
    (I also need to take care of cases where no record may be found in a child table for a Master ID)
Writing a procedure that uses cursors to fetch the value from each of the child tables hits performance badly. And the thing is, I need to find the last_updated_date for every Master record (all 150000 of them). It'll probably take days to do this.
    SELECT MAX (C1.LAST_UPDATED_DATE)
    ,MAX (C2.LAST_UPDATED_DATE)
    ,MAX (C3.LAST_UPDATED_DATE)
    FROM CHILD1 C1
    ,CHILD2 C2
    ,CHILD3 C3
    WHERE C1.MASTER_ID = 100
    OR C2.MASTER_ID = 100
    OR C3.MASTER_ID = 100
    I tried the above but I got a temp tablespace error. I don't think the query is good enough at all.
    (The OR clause is to take care of no records in any child table. If there's an AND, then the join and hence select will
    fail even if there is no record in one child table but valid values in the other 2 tables).
    Thanks a lot.
    Edited by: user773489 on Dec 16, 2008 11:49 AM

    Not sure I understand the problem. The max you are getting from the above is already the greatest out of the three - that's why we do the UNION ALL.
    Here's sample code without output, maybe this will clear it up:
    with a as (
    select 10 MASTER_ID, to_date('12/15/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
    select 20 MASTER_ID, to_date('12/01/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
    select 30 MASTER_ID, to_date('12/02/2008', 'MM/DD/YYYY') LAST_DTE from dual
    ),
    b as (
    select 10 MASTER_ID, to_date('12/14/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
    select 20 MASTER_ID, to_date('12/02/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
    select 40 MASTER_ID, to_date('11/15/2008', 'MM/DD/YYYY') LAST_DTE from dual
    ),
    c as (
    select 10 MASTER_ID, to_date('12/07/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
    select 30 MASTER_ID, to_date('11/29/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
    select 40 MASTER_ID, to_date('12/13/2008', 'MM/DD/YYYY') LAST_DTE from dual
    )
    select MASTER_ID, MAX(LAST_DTE)
    FROM
    (select MASTER_ID, LAST_DTE from a UNION ALL
    select MASTER_ID, LAST_DTE from b UNION ALL
    select MASTER_ID, LAST_DTE from c)
    group by MASTER_ID;
    MASTER_ID              MAX(LAST_DTE)            
    30                     02-DEC-08                
    40                     13-DEC-08                
    20                     02-DEC-08                
    10                     15-DEC-08                
    4 rows selected.
    Edited by: tk-7381344 on Dec 16, 2008 12:38 PM
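    If masters with no child rows at all must also appear (as the original post requires), a hedged variation of the same UNION ALL idea, assuming a MASTER table with a MASTER_ID column: outer-join the master list to the combined child rows so missing masters show a NULL max.
    -- Sketch: every master appears, even those with no child rows.
    select m.MASTER_ID, max(u.LAST_UPDATED_DATE) last_upd
    from MASTER m
    left join
      (select MASTER_ID, LAST_UPDATED_DATE from CHILD1 union all
       select MASTER_ID, LAST_UPDATED_DATE from CHILD2 union all
       select MASTER_ID, LAST_UPDATED_DATE from CHILD3) u
      on u.MASTER_ID = m.MASTER_ID
    group by m.MASTER_ID;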

  • Performance issues while querying data from a table having large records

    Hi all,
    I have performance issues with queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is below:
    SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
    SELECT SUM (B.BASE_TRANSACTION_VALUE)
    FROM
    MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A  
    WHERE A.ORGANIZATION_ID =    B.ORGANIZATION_ID 
    AND A.ORGANIZATION_ID =  :b1 
    AND B.REFERENCE_ACCOUNT =    A.MATERIAL_ACCOUNT 
    AND B.TRANSACTION_DATE <=  LAST_DAY (TO_DATE (:b2 ,   'MON-YY' )  )  
    AND B.ACCOUNTING_LINE_TYPE !=  15  
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.02       0.05          0          0          0           0
    Fetch        3    134.74     722.82     847951    1003824          0           2
    total        7    134.76     722.87     847951    1003824          0           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Optimizer mode: ALL_ROWS
    Parsing user id: 193  (APPS)
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             1          1          1  SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
        788242     788242     788242   NESTED LOOPS  (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
             1          1          1    TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
             1          1          1     INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
        788242     788242     788242    TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
       8704356    8704356    8704356     INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
    788242    NESTED LOOPS
          1     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_PARAMETERS' (TABLE)
          1      INDEX   MODE: ANALYZED (UNIQUE SCAN) OF
                     'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
    788242     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_TRANSACTION_ACCOUNTS' (TABLE)
    8704356      INDEX   MODE: ANALYZED (RANGE SCAN) OF
                     'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      row cache lock                                 29        0.00          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file sequential read                    847951        0.40        581.90
      latch: object queue header operation            3        0.00          0.00
      latch: gc element                              14        0.00          0.00
      gc cr grant 2-way                               3        0.00          0.00
      latch: gcs resource hash                        1        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      gc current block 3-way                          1        0.00          0.00
    ********************************************************************************
    On a 5-node RAC environment the program completes in 15 hours, whereas on a single-node environment it completes in 2 hours.
    Is there any way I can improve the performance of this query?
    Regards
    Edited by: mhosur on Dec 10, 2012 2:41 AM
    Edited by: mhosur on Dec 10, 2012 2:59 AM
    Edited by: mhosur on Dec 11, 2012 10:32 PM

    CREATE INDEX mtl_transaction_accounts_n0
      ON mtl_transaction_accounts (
                                   transaction_date
                                 , organization_id
                                 , reference_account
                                 , accounting_line_type
                                 )
    /

  • Performance Issue in Query using >= and <=

    Hi,
    I have a performance issue in using condition like:
    SELECT * FROM A WHERE ITEM_NO>='M-1130' AND ITEM_NO<='M-9999'.
    Item_No is a varchar2 field and the field contains numeric as well as string values.
    Can anyone help to solve the issue?
    Thanks and Regards

    How can you say it is a performance issue with the condition? Do you have an execution plan? If yes, post it between [pre] and [/pre] tags, like this:
    [pre]SQL> explain plan for
      2  select sysdate
      3  from dual
      4  /
    Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 1546270724
    | Id  | Operation        | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |      |     1 |     2   (0)| 00:00:01 |
    |   1 |  FAST DUAL       |      |     1 |     2   (0)| 00:00:01 |
    8 rows selected.
    SQL>
    [/pre]

  • Performance issue with query when generated from an ODS

    I am generating a query from an ODS. The run time is very high. How do I improve the performance of the query?

    Hi Baruah,
    Steps:
    1. Build a secondary index.
    2. Divide the data into two ODS objects (historical and current data), build a MultiProvider on top of them, and build the query on the MultiProvider.
    3. Build indexes at the table level (ODS table level).
    We cannot get much faster performance from ODS objects, especially with huge data...
    The above are just a few of the options...
    Hope you understood.
    Regards,
    Ravi Kanth

  • Performance issue with Query

    11g
    Hi there experts,
    I have a performance issue with a simple SQL statement which I thought could not be tuned, but I just wanted to check with the experts here. We are running a query to get a person's ID based on their logged-in email address from a Parties table which is huge (millions of records). The query takes about 30 seconds to return a value. I was wondering whether there is a way to optimize this.
    The query is
    {code}
    select party.party_id
    from parties party, users users
    where
    lower(party.email_address) = lower(:USER_EMAIL)
    and party.system_reference = to_char(users.person_id)
    and users.active_flag ='Yes';
    {code}
    The emails are stored in both upper and lower case, hence the LOWER functions.
    Is creating a function-based index the only way?
    Thanks,
    Ryan
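    For reference, a function-based index sketch using the names from the post (the index name is invented, and on a seeded eBS table such as HZ_PARTIES you should first confirm a custom index is permitted):
    -- Sketch: index the exact expression used in the WHERE clause so
    -- lower(party.email_address) = lower(:USER_EMAIL) can use a range scan.
    create index parties_email_lower_fbi
      on parties (lower(email_address));
    -- Regather statistics so the optimizer considers the new index.
    exec dbms_stats.gather_table_stats(user, 'PARTIES', cascade => true);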

    Hi Everyone.
    Thanks and apologies, first post on tuning as such. Here is the explain plan generated through SQL Developer. It showed the output in XML.
    By the way, it looks like the {code} tag does not work?
    {code}
      SELECT STATEMENT
      84903
         HASH JOIN
      84903
         Access Predicates
         PARTY.ORIG_SYSTEM_REFERENCE=TO_CHAR(PERSON_ID)
         TABLE ACCESS
         PER_USERS
      STORAGE FULL
      1059
         Access Predicates
         AND
         ACTIVE_FLAG='Y'
         OR
         OR
         BUSINESS_GROUP_ID=0
         BUSINESS_GROUP_ID=1
         BUSINESS_GROUP_ID=DECODE(SYS_CONTEXT('FND_VPD_CTX','FND_ENTERPRISE_ID'),NULL,BUSINESS_GROUP_ID,TO_NUMBER(SYS_CONTEXT('FND_VPD_CTX','FND_ENTERPRISE_ID')))
         Filter Predicates
         AND
         ACTIVE_FLAG='Y'
         OR
         OR
         BUSINESS_GROUP_ID=0
         BUSINESS_GROUP_ID=1
         BUSINESS_GROUP_ID=DECODE(SYS_CONTEXT('FND_VPD_CTX','FND_ENTERPRISE_ID'),NULL,BUSINESS_GROUP_ID,TO_NUMBER(SYS_CONTEXT('FND_VPD_CTX','FND_ENTERPRISE_ID')))
         TABLE ACCESS
         HZ_PARTIES
      STORAGE FULL
      83843
         Access Predicates
         LOWER(PARTY.EMAIL_ADDRESS)='[email protected]'
         Filter Predicates
         LOWER(PARTY.EMAIL_ADDRESS)='[email protected]'
    {code}
    Purvesh, around 50% are 'Yes'
    Thanks!

  • Performance issues with query input variable selection in ODS

    Hi everyone
    We've upgraded from BW 3.0B to NW04s BI using SP12.
    There is a problem encountered with input variable selection. This happens regardless of using BEx (new or old 3.x) or using RSRT. When using the F4 search help (or "Select from list" in BEx context) to list possible values, this takes forever for large ODS (containing millions of records).
    Using ST01 and SM50 to trace the code in the same query, we see a difference here:
    NW04s BI SQL command:
    SELECT                                                                               
    "P0000"."COMP_CODE" AS "0000000032" ,"T0000"."TXTMD" AS "0000000032_TXTMD"                             
    FROM                                                                               
    ( "/BI0/PCOMP_CODE" "P0000" ) LEFT OUTER JOIN "/BI0/TCOMP_CODE" "T0000" ON  "P0000"."COMP_CODE" = "T0000
      "."COMP_CODE"                                                                               
    WHERE                                                                               
    "P0000"."OBJVERS" = 'A' AND "P0000"."COMP_CODE" IN ( SELECT "O"."COMP_CODE" AS "KEY" FROM              
      "/BI0/APY_PP_C100" "O" )                                                                               
    ORDER BY                                                                               
    "P0000"."COMP_CODE" ASC#                                                                               
    BW 3.0B SQL command:
    SELECT ROWNUM < 500 ....
    In 3.0B, ROWNUM is limited to 500, and this results in a speedy, though limited, query. In the new NW04s BI, this renders the selection screen unusable, as ABAP dumps for timing out occur first due to the large data volume searched with sequential reads.
    It will not be feasible to create indexes for every single query selection parameter (issues with performance when loading, space required, etc.). Is there a reason why SAP seems to have fallen back on less effective code for this?
    I have tried to change the number of selected rows to <500 in the BEx settings, but one must reach a responsive screen in order to get to that setting, and it is not always possible or saved for the next run.
    Anyone with similar experience or can provide help on this?

    Here is the reason why the F4 help on the ODS was faster in BW 3.x:
    In BW 3.x the ODS did not support the read mode "Only values in
    InfoProvider". So, comparing the different SQL statements, I propose
    changing the F4 mode in the InfoProvider-specific properties to
    "About master data". This is the fastest F4 mode.
    As an alternative, you can define indexes on your ODS to speed up F4.
    You would need a non-unique index on InfoObject 0COMP_CODE in your ODS.
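    For illustration only (in practice the index is created as a secondary index on the ODS via the Dictionary; the table name below is taken from the traced SQL above), the database-level equivalent would look roughly like:
    -- Hedged sketch of the suggested non-unique index on the ODS
    -- active table, keyed on COMP_CODE for the F4 value lookup.
    CREATE INDEX "/BI0/APY_PP_C100~Z01" ON "/BI0/APY_PP_C100" ("COMP_CODE");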
    Check below for insights
    https://forums.sdn.sap.com/click.jspa?searchID=6224682&messageID=2841493
    Hope it Helps
    Chetan
    @CP..

  • Performance issue in query

    Hi,
    I have this query in my program and it's taking a long time to execute. Can you please look at this query and tell me what's wrong with it? ITAB, which is used in FOR ALL ENTRIES, has around 3000 records.
        SELECT * FROM regup
                 INTO CORRESPONDING FIELDS OF TABLE itab1
                  FOR ALL ENTRIES IN itab
                 WHERE bukrs = itab-bukrs
                   AND  belnr = itab-belnr
                   AND  lifnr = itab-lifnr
                   AND  gjahr = itab-gjahr
                   AND  vblnr NE space.
    Can you tell me how I can optimize this?
    Your answers will be rewarded.
    Thanks

    In addition to sorting the itab as Rich suggested, you should then delete adjacent duplicates from itab comparing bukrs belnr lifnr gjahr (see the sketch below). This will create the smallest possible itab for selecting. If you need this table for other processing, create another work table and use it instead.
    But since you are not using an index, it will still take time.
    Rob
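    A hedged sketch of Rob's suggestion (itab_keys is a made-up work table; field names are taken from the query above):
    " Sketch only: drive the SELECT from a deduplicated copy so the
    " original itab stays available for later processing.
    DATA: itab_keys LIKE itab OCCURS 0 WITH HEADER LINE.
    itab_keys[] = itab[].
    SORT itab_keys BY bukrs belnr lifnr gjahr.
    DELETE ADJACENT DUPLICATES FROM itab_keys
           COMPARING bukrs belnr lifnr gjahr.
    IF NOT itab_keys[] IS INITIAL.
      SELECT * FROM regup
        INTO CORRESPONDING FIELDS OF TABLE itab1
        FOR ALL ENTRIES IN itab_keys
        WHERE bukrs = itab_keys-bukrs
          AND belnr = itab_keys-belnr
          AND lifnr = itab_keys-lifnr
          AND gjahr = itab_keys-gjahr
          AND vblnr NE space.
    ENDIF.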

  • Performance issue: View query is calculating a column value before filtering

    Hi,
    I have a view like this (my view is more complex than this; I just tried to imitate the situation):
    Select emp_id, emp_name, calculate_salary(emp_id) salary
      From employees
    Where nvl(department, '@@##$$') = nvl(get_global.department, '@@##$$')
    When I execute the query
    select emp_name from my_view where salary = 10000 (after setting a value for get_global.department), I found that it first calls calculate_salary instead of applying the filter on departments. Somehow the cost-based optimizer thinks that calling the function is cheaper than applying the filter. I wonder whether there is a way to direct the CBO to apply the filters before calculating column values.
    Any suggestions ?
    Thanks

    I don't think it's possible.
    Before a WHERE clause can be applied, the data has to be collected for the row, so Oracle, I would imagine, isn't going to differentiate between what is a database column and what is a function, fetch just the plain columns, do the filtering, and then go back and calculate the rest of the columns; it's just going to get all the columns, including the calculated ones, and then apply the filters.
    The only option would be to do...
    select emp_id, emp_name, calculate_salary(emp_id) salary
    from (select emp_id, emp_name
          from employees
          where nvl(department, '@@##$$') = nvl(get_global.department, '@@##$$')
         )
    ...as your view.
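    If the view is then filtered on salary, the optimizer may merge the inline view and reorder again; a rownum predicate is one common way (an assumption here, not something from the thread) to force the inner filter to run first:
    -- Sketch: rownum > 0 prevents view merging, so the department
    -- filter is applied before calculate_salary is evaluated.
    select emp_name
    from (select emp_id, emp_name, calculate_salary(emp_id) salary
          from (select emp_id, emp_name
                from employees
                where nvl(department, '@@##$$') = nvl(get_global.department, '@@##$$')
                  and rownum > 0)
         )
    where salary = 10000;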

  • Query giving performance issues.

    Hi All,
    The following query is giving performance issues. Query execution takes around 11 minutes. Our client's requirement is to get the data for the query in 30 seconds to 1 minute. Please suggest.
    SELECT
    DISTINCT r.rptid, p.product_id, p.vnd_dscr, p.vendor_cd, p.Firm_Cd, a.is_mm_sweep IsMoneyMarket
    FROM
    dev.ac_cube a,
    dev.rpt_master_rep_list_view r,
    dev.cur_ps_cube p
    WHERE a.repid = r.rep_id
    AND a.accountkey = p.accountkey
    AND p.product_id IS NOT NULL
    AND p.product_id != 'CASH'
    AND r.RPTID = '42817'
    Number of distinct records fetched by above query : 34445
    Number of records in dev.ac_cube table : 13.6 million
    Number of records in dev.cur_ps_cube table : 7 million
    Number of records in dev.rpt_master_rep_list_view for RPTID '42817' : 118354
    repid and product_id are indexed columns in the dev.ac_cube table; accountkey is the primary key of dev.ac_cube.
    rep_id is an indexed column in the base table on which the dev.rpt_master_rep_list_view view is created.

    The output is not complete as I expected. Have you followed the steps I mentioned in my earlier post?
    Also, when posting output, enclose it between two {*code*} ... {*code*} tags (remove the * within the braces). This displays everything between the tags as properly formatted output; otherwise it's really hard to read and understand.
    See the sample below. However, you need to get the session id, sql id and display_cursor output exactly in the same way as mentioned in my previous post.
    SQL> set arraysize 200 timing on serveroutput on linesize 200;
    SQL> set autotrace traceonly explain statistics;
    SQL> select level from dual  connect by level <= 10000;
    10000 rows selected.
    Elapsed: 00:00:00.09
    Execution Plan
    Plan hash value: 1236776825
    | Id  | Operation                    | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |      |     1 |     2   (0)| 00:00:01 |
    |   1 |  CONNECT BY WITHOUT FILTERING|      |       |            |          |
    |   2 |   FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
          57406  bytes sent via SQL*Net to client
            919  bytes received via SQL*Net from client
             51  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    SQL>

  • Oracle Forms6i Query Performance issue - Urgent

    Hi All,
    I'm using Oracle Forms 6i and Oracle DB 9i.
    I'm facing a performance issue in query forms.
    The detail block of the form takes a long time to load the data.
    The form contains 2 non-database blocks:
    1. HDR - 3 input parameters
    2. DETAILS - grid with the details
    HDR input fields:
    1. Company Code
    2. Company Account No
    3. Customer Name
    The DETAILS grid displays the detail rows.
    There are 2 tables involved:
    1. Table1 - 1 crore (10 million) records
    2. Table2 - 4 crore (40 million) records
    In a form procedure, a cursor is built and fetched directly, and the values are assigned to the form block fields.
    Below I've pasted the query:
    SELECT
    t1.entry_dt,
    t2.authoriser_code,
    t1.company_code,
    t1.company_ac_no,
    initcap(t1.customer_name) cust_name,
    t2.agreement_no,
    t1.customer_id
    FROM
    table1 t1,
    table2 t2
    WHERE
    (t2.trans_no = t1.trans_no or t2.temp_trans_no = t1.trans_no)
    AND t1.company_code = nvl(:hdr.l_company_code,t1.company_code)
    AND t1.company_ac_no = nvl(:hdr.l_company_ac_no,t1.company_ac_no)
    AND lower(t1.customer_name) LIKE lower(nvl('%'||:hdr.l_customer_name||'%',t1.customer_name))
    GROUP BY
    t1.entry_dt,
    t2.authoriser_code,
    t1.company_code,
    t1.company_ac_no,
    t1.customer_name,
    t2.agreement_no,
    t1.customer_id;
    Where clause analysis:
    1. Condition 1 uses an OR operator (two different columns of table2 are compared with one column of table1)
    2. LIKE operator
    3. All the columns have indexes, but they are not used properly; there is always a full table scan
    4. NVL check
    5. If I run the query directly on the backend it comes back a little faster; from the front end it is very slow
    Input parameters - query retrieval limits:
    Only company code: record count will be 50 - 500 records
    Company code and company account number: record count will be 1 - 5
    Company code, company account number and customer name: record count will be 1 - 5 records
    I have tried the following ways:
    1. Split the query using UNION (OR clause separated; see the sketch after this post) - nested loops COST 850, nested loops COST 750 - index by rowid - cost 160, index by rowid - cost 152, full table access...
    2. Dynamic SQL build - DBMS_SQL.DEFINE_COLUMN ...
    3. Given only one input parameter - nested loops COST 780, nested loops COST 780 - index by rowid - cost 148, index by rowid - cost 152, full table access...
    Still I'm facing the same issue.
    Please help me out on this.
    Thanks and Regards,
    Oracle1001
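    For reference, the OR-split mentioned in attempt 1 would look roughly like this (a hedged sketch against the posted query; each branch can then use its own index on trans_no or temp_trans_no):
    -- Sketch: split the OR join into two branches so each can use the
    -- matching index; UNION also removes duplicate rows, which makes
    -- the GROUP BY (used above only for deduplication) redundant.
    SELECT t1.entry_dt, t2.authoriser_code, t1.company_code, t1.company_ac_no,
           initcap(t1.customer_name) cust_name, t2.agreement_no, t1.customer_id
    FROM table1 t1, table2 t2
    WHERE t2.trans_no = t1.trans_no
      AND t1.company_code = nvl(:hdr.l_company_code, t1.company_code)
      AND t1.company_ac_no = nvl(:hdr.l_company_ac_no, t1.company_ac_no)
      AND lower(t1.customer_name) LIKE lower(nvl('%'||:hdr.l_customer_name||'%', t1.customer_name))
    UNION
    SELECT t1.entry_dt, t2.authoriser_code, t1.company_code, t1.company_ac_no,
           initcap(t1.customer_name) cust_name, t2.agreement_no, t1.customer_id
    FROM table1 t1, table2 t2
    WHERE t2.temp_trans_no = t1.trans_no
      AND t1.company_code = nvl(:hdr.l_company_code, t1.company_code)
      AND t1.company_ac_no = nvl(:hdr.l_company_ac_no, t1.company_ac_no)
      AND lower(t1.customer_name) LIKE lower(nvl('%'||:hdr.l_customer_name||'%', t1.customer_name));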

    Sudhakar P wrote:
    the below query takes more than one minute while updating the records through Pro*C.
    Execute 562238 161.03 174.15 7 3932677 2274833 562238
    Hi Sudhakar,
    If the database is capable of executing 562,238 update statements in one minute, then that's pretty good, don't you think?
    Your real problem is in the application code, which probably looks something like this in pseudocode:
    for i in (some set containing 562,238 rows)
    loop
      <your update statement with all the bind variables>
    end loop;
    If you transform your code to do a single update statement, you'll gain a lot of seconds.
    Regards,
    Rob.
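    A generic before-and-after sketch of Rob's point (table and column names are invented for illustration):
    -- Row-by-row (what the loop above does): 562,238 separate updates.
    --   UPDATE target SET val = :new_val WHERE id = :id;  -- per iteration
    -- Set-based: one statement covers all rows in a single pass.
    UPDATE target t
       SET t.val = (SELECT s.new_val FROM staging s WHERE s.id = t.id)
     WHERE EXISTS (SELECT 1 FROM staging s WHERE s.id = t.id);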

  • Performance issues with schema registration, select & insert query

    I am facing performance issues with schema registration and with select and insert queries on version 10.2.0.3. It is taking around 45 minutes to register a schema, and around 5 minutes to insert a single document into XML DB, whereas it took less than a minute to insert a single document into XML DB on version 9.2.0.6. I would like to know the cause and a solution to resolve this issue. Please help me out on this, as it is very urgent for me.

    Since it appears that this is an XML DB specific question, you're probably better off posting in the XML DB forum. The folks over there have much more experience with the ins and outs of that particular product.
    Justin

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    Hope you can help; it involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
    Please see the attached SQL Inspector plans:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 mins. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I've been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
    I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
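    To make the DECODE issue concrete, a hedged illustration using the status codes above (not a tested Discoverer fix):
    -- The Discoverer condition is effectively:
    --   WHERE DECODE(JOURNAL_BATCH1.STATUS, ..., 'P', 'Posted', ...) = 'Posted'
    -- No index on STATUS can be used for that predicate, hence the
    -- TABLE ACCESS FULL GL.GL_JE_BATCHES in the second plan.
    -- Filtering on the raw code is index-friendly:
    SELECT b.*
      FROM gl.gl_je_batches b
     WHERE b.status = 'P';  -- 'P' is the code that decodes to 'Posted'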

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes. It's urgent.
    What are the data-loading performance issues we need to take care of? Please explain and let me know the transaction codes. It's urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Performance Issue Executing a BEx Query in Crystal Report E 4.0

    Dear Forum
    I'm working for a customer with a big performance issue executing a BEx query in Crystal via a transient universe.
    When the query is executed directly against BW via RSRT, it returns results in under 2 seconds.
    When executed in Crystal, without the use of subreports, multiple executions (calls to BICS_GET_RESULTS) are seen. Runtimes are as long as 60 seconds.
    The BEx query is based on a MultiProvider without ODS.
    The RFC trace shows BICS connection problems, as BICS_PROV_GET_INITIAL_STATE takes a lot of time.
    I checked note 1399816 (Task name - prefix - RSDRP_EXECUTE_AT_QUERY_DISP), and it's not applicable because the customer is on BI 7.01 SP 8 and already has:
                domain RSDR0_TASKNAME_LONG in package RSDRC with the
                description 'BW Data Manager: Task name - 32 characters', data
                type CHAR, no. of characters 32, decimal digits 0;
                data element RSDR0_TASKNAME_LONG in package RSDRC with the
                description 'BW Data Manager: Task name - 32 characters' and the
                previously created domain,
    as described in the note.
    Could you suggest something for me to check, please?
    Thanks in advance
    Regards
    Rosa

    Hi,
    It would be great if you would quote the ADAPT number and tell the audience when it is targeted for a fix.
    Generally speaking, CR for Enterprise isn't as performant as WebI, because uptake was rather slow, so I'm of the opinion that there are improvements to be gained. So please work with Support via OSS.
    My only recommendations can be:
    - Patch up to Patch 2.12 in BI 4.0
    - Define more default values on the BEx query variables.
    - Implement note 1593802 (Performance optimization when loading query views) in the BW system.
    Regards,
    H
