Performance issues while querying data from a table with a large number of rows

Hi all,
I have performance issues with queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is below:
SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
SELECT SUM (B.BASE_TRANSACTION_VALUE)
  FROM MTL_TRANSACTION_ACCOUNTS B, MTL_PARAMETERS A
 WHERE A.ORGANIZATION_ID = B.ORGANIZATION_ID
   AND A.ORGANIZATION_ID = :b1
   AND B.REFERENCE_ACCOUNT = A.MATERIAL_ACCOUNT
   AND B.TRANSACTION_DATE <= LAST_DAY (TO_DATE (:b2, 'MON-YY'))
   AND B.ACCOUNTING_LINE_TYPE != 15
call     count       cpu    elapsed       disk      query    current        rows
Parse        1      0.00       0.00          0          0          0           0
Execute      3      0.02       0.05          0          0          0           0
Fetch        3    134.74     722.82     847951    1003824          0           2
total        7    134.76     722.87     847951    1003824          0           2
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer mode: ALL_ROWS
Parsing user id: 193  (APPS)
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
         1          1          1  SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
    788242     788242     788242   NESTED LOOPS  (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
         1          1          1    TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
         1          1          1     INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
    788242     788242     788242    TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
   8704356    8704356    8704356     INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
Rows     Execution Plan
      0  SELECT STATEMENT   MODE: ALL_ROWS
      1   SORT (AGGREGATE)
788242    NESTED LOOPS
      1     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                'MTL_PARAMETERS' (TABLE)
      1      INDEX   MODE: ANALYZED (UNIQUE SCAN) OF
                 'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
788242     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                'MTL_TRANSACTION_ACCOUNTS' (TABLE)
8704356      INDEX   MODE: ANALYZED (RANGE SCAN) OF
                 'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  row cache lock                                 29        0.00          0.02
  SQL*Net message to client                       2        0.00          0.00
  db file sequential read                    847951        0.40        581.90
  latch: object queue header operation            3        0.00          0.00
  latch: gc element                              14        0.00          0.00
  gc cr grant 2-way                               3        0.00          0.00
  latch: gcs resource hash                        1        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00
  gc current block 3-way                          1        0.00          0.00
********************************************************************************

On a 5-node RAC environment the program completes in 15 hours, whereas on a single-node environment it completes in 2 hours.
Is there any way I can improve the performance of this query?
Regards

CREATE INDEX mtl_transaction_accounts_n0
  ON mtl_transaction_accounts (
                               transaction_date
                             , organization_id
                             , reference_account
                             , accounting_line_type
                             )
/
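For what it's worth, the row-source statistics above show where the time goes: the MTL_TRANSACTION_ACCOUNTS_N3 range scan returns 8,704,356 rowids, the table access then discards all but 788,242 rows, and almost all of the 722 elapsed seconds is db file sequential read against the table blocks. A covering index that leads with the equality predicates and carries the summed column would let the query avoid the table visit entirely. A minimal sketch, not a verified fix (the index name is illustrative; the column list is taken from the query above):

-- Hypothetical covering index: equality columns first, then the
-- date range column, then the filter column and the summed value,
-- so the SUM can be computed from the index alone.
CREATE INDEX mtl_transaction_accounts_nx
  ON mtl_transaction_accounts (
                                organization_id
                              , reference_account
                              , transaction_date
                              , accounting_line_type
                              , base_transaction_value
                              )
/

On a 48-million-row table this is a large index, so test the build time and space cost on a non-production copy first.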

Similar Messages

  • Performance issue while extracting data from non-APPS schema tables.

    Hi,
    I have developed an ODBC application to extract data from Oracle E-Business Suite 12 tables.
    e.g. I am trying to extract data from the HZ_IMP_PARTIES_INT table of the Receivables application (the table is in the "AR" database schema) using this ODBC application.
    The performance of extraction (i.e. the number of rows extracted per second) is very low if the table belongs to a non-APPS schema (e.g. HZ_IMP_PARTIES_INT belongs to the "AR" schema).
    Now if I create the same table (HZ_IMP_PARTIES_INT) with the same data in the "APPS" schema, the performance of extraction improves a lot (i.e. the number of rows extracted per second increases a lot) with the same ODBC application.
    The "APPS" user is used to connect to the database in both scenarios.
    Also note that my ODBC application creates multiple threads and each thread creates its own connection to the database. Each thread extracts different data using a SQL filter condition.
    So, my question is:
    Is APPS schema optimized for any data extraction?
    I will really appreciate any pointer on this.
    Thanks,
    Rohit.

    Hi,
    Is APPS schema optimized for any data extraction? I would say NO, as data extraction performance should be the same for all schemas. Do you run the "Gather Schema Statistics" concurrent program for ALL schemas on a regular basis?
    Regards,
    Hussein
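    If statistics turn out to be stale, a minimal sketch of the underlying call at the plain database level (DBMS_STATS is the core Oracle API; in E-Business Suite the supported route is the "Gather Schema Statistics" concurrent program, which wraps FND_STATS, so treat this as illustrative only):

    -- Gather optimizer statistics for the AR schema (illustrative;
    -- in EBS prefer the concurrent program instead).
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS (ownname => 'AR');
    END;
    /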

  • Performance Issue - Fetching latest date from a507 table

    Hi All,
    I am fetching data from table A507 for a material and batch combination, and I want the latest record based on the value of field DATBI. I have written the code as follows, but the SELECT query is taking too much time. I don't want to add a condition on DATBI to the WHERE clause because I have already tried that option.
    SELECT kschl
               matnr
               charg
               datbi
               knumh
        FROM a507
        INTO TABLE it_a507
        FOR ALL ENTRIES IN lit_mch1
        WHERE kschl = 'ZMRP'
        AND   matnr = lit_mch1-matnr
        AND   charg = lit_mch1-charg.
    SORT it_a507 BY kschl matnr charg datbi DESCENDING.
      DELETE ADJACENT DUPLICATES FROM it_a507 COMPARING kschl matnr charg.

    Hi,
    These kinds of tables store large volumes of data, so when selecting from them it is important to use as many primary key fields as possible in the WHERE condition. Here you can try specifying KAPPL, since it is specific to the requirement; if it is for purchasing, use 'M' and try.
    if not lit_mch1[] is initial.
    SELECT kschl
    matnr
    charg
    datbi
    knumh
    FROM a507
    INTO TABLE it_a507
    FOR ALL ENTRIES IN lit_mch1
    WHERE kappl = 'M'
    AND kschl = 'ZMRP'
    AND matnr = lit_mch1-matnr
    AND charg = lit_mch1-charg.
    endif.
    SORT it_a507 BY kschl matnr charg datbi DESCENDING.
    DELETE ADJACENT DUPLICATES FROM it_a507 COMPARING kschl matnr charg.
    This should considerably improve the performance.
    Regards,
    Vik

  • Performance issue while transferring data from one itab to another itab

    hi experts,
    I have stored all the general material details in one internal table; the description and valuation details of the material are stored in another internal table, which is a standard table. Now I need to transfer all the data from these two internal tables into one final internal table, but it is taking a lot of time as it has to transfer lakhs of records.
    i have declared the output table as shown below
    DATA:
      t_output TYPE standard TABLE
               OF type_output
               INITIAL SIZE 0
               WITH HEADER LINE.
    (according to the standard i have to declare like this and the two internal tables are declared similar to the above one)
    Could somebody suggest how I should proceed with this?
    thanks in advance....
    Regards,
    Deepu

    Have a look at the following article which you may find useful:
      Improving performance in nested loops: http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/40729734-3668-2910-deba-fa0e95e2c541
    good luck, damian

  • How to show data from a table having a large number of columns

    Hi ,
    I have a report with a single row having a large number of columns, and I have to use a scroll bar to see all of them.
    Is it possible to design the report in the format below (half the columns on one side of the page, half on the other)?
    Column1   Data     Column11   Data
    Column2   Data     Column12   Data
    Column3   Data     Column13   Data
    Column4   Data     Column14   Data
    Column5   Data     Column15   Data
    Column6   Data     Column16   Data
    Column7   Data     Column17   Data
    Column8   Data     Column18   Data
    Column9   Data     Column19   Data
    Column10  Data     Column20   Data
    I am using APEX version 4.2.3 on Oracle 11g XE.

    Please update your forum profile with a real handle instead of "user2602680".
    Yes, this can be achieved using a custom named column report template.

  • How can we improve performance while fetching data from the RESB table?

    Hi All,
    Can anybody suggest the right way to improve performance while fetching data from the RESB table? Below is the SELECT statement.
    SELECT aufnr posnr roms1 roanz
        INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
        FROM resb
        WHERE kdauf  = p_vbeln
        AND   ablad  = itab-sposnr+2.
    Here I am using KDAUF and ABLAD in the condition. Can we use a secondary index to improve the performance in this case?
    Regards,
    Himanshu

    Hi ,
    Declare an internal table with only those four fields
    and try the code below:
    SELECT aufnr posnr roms1 roanz
    INTO  table itab
    FROM resb
    WHERE kdauf = p_vbeln
    AND ablad = itab-sposnr+2.
    Yes, you can also use a secondary index to improve the performance in this case.
    Regards,
    Anand .
    Reward if it is useful....

  • How to use the FOR ALL ENTRIES clause while fetching data from archived tables

    How can I use the FOR ALL ENTRIES clause while fetching data from archived tables using the FM
    '/PBS/SELECT_INTO_TABLE'?
    I need to fetch data from an Archived table for all the entries in an internal table.
    Kindly provide some inputs for the same.
    thanks n Regards
    Ramesh

    Hi Ramesh,
    I have a query regarding accessing archived data through PBS.
    I have archived SAP FI data ( Object FI_DOCUMNT) using SAP standard process through TCODE : SARA.
    Now please tell me: can I access this archived data through the PBS add-on FM '/PBS/SELECT_INTO_TABLE'?
    Do I need to do something else to access data archived through the SAP standard process or not? If yes, then please tell me, as I am not able to get the data using the above FM.
    The call to the above FM is as follows :
    CALL FUNCTION '/PBS/SELECT_INTO_TABLE'
      EXPORTING
        archiv           = 'CFI'
        option           = ''
        tabname          = 'BKPF'
        schl1_name       = 'BELNR'
        schl1_von        = belnr-low
        schl1_bis        = belnr-low
        schl2_name       = 'GJAHR'
        schl2_von        = gjahr-low
        schl2_bis        = gjahr-low
        schl3_name       = 'BUKRS'
        schl3_von        = bukrs-low
        schl3_bis        = bukrs-low
    *   schl4_name       =
    *   schl4_von        =
    *   schl4_bis        =
        clr_itab         = 'X'
    *   max_zahl         =
      TABLES
        i_tabelle        = t_bkpf
    *   schl1_in         =
    *   schl2_in         =
    *   schl3_in         =
    *   schl4_in         =
      EXCEPTIONS
        eof              = 1
        OTHERS           = 2.
    It gives me the following error :
    Index for table not supported ! BKPF BELNR.
    Please help ASAP.
    Thanks and Regards
    Gurpreet Singh

  • Special character issue while loading data from SAP HR through VDS

    Hello,
    We have a special character issue, while loading data from SAP HR to IdM, using a VDS and following the standard documentation: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e09fa547-f7c9-2b10-3d9e-da93fd15dca1?quicklink=index&overridelayout=true
    French accented characters (é, à, è, ù) are loaded correctly, but Turkish special ones (like Ş, İ, ł) are transformed into “#” in IdM.
    The question is: does anyone know of a special setting in the VDS or in IdM for special characters that would solve this issue?
    Our SAP HR version is ECC6.0 (ABA/BASIS7.0 SP21, SAP_HR6.0 SP54) and we are using a VDS 7.1 SP5 and SAP NW IdM 7.1 SP5 Patch1 on oracle 10.2.
    Thanks

    We are importing directly to the HR staging area, using the transactions/programs "HRLDAP_MAP", "LDAP" and "/RPLDAP_EXTRACT", then we have a job which extracts data from the staging area to a CSV file.
    So before the import, the character appears correctly in SAP HR, but by the time it comes through the VDS to the IDM's temporary table, it becomes "#".
    Yes, our data is coming from a Unicode system.
    So, could it be a Java parameter to change or add in the VDS?
    Regards.

  • Error while selecting data from an external table

    Hello all,
    I am getting the following error while selecting data from an external table. Any idea why?
    SQL> CREATE TABLE SE2_EXT (SE_REF_NO VARCHAR2(255),
      2        SE_CUST_ID NUMBER(38),
      3        SE_TRAN_AMT_LCY FLOAT(126),
      4        SE_REVERSAL_MARKER VARCHAR2(255))
      5  ORGANIZATION EXTERNAL (
      6    TYPE ORACLE_LOADER
      7    DEFAULT DIRECTORY ext_tables
      8    ACCESS PARAMETERS (
      9      RECORDS DELIMITED BY NEWLINE
    10      FIELDS TERMINATED BY ','
    11      MISSING FIELD VALUES ARE NULL
    12      (
    13        country_code      CHAR(5),
    14        country_name      CHAR(50),
    15        country_language  CHAR(50)
    16      )
    17    )
    18    LOCATION ('SE2.csv')
    19  )
    20  PARALLEL 5
    21  REJECT LIMIT UNLIMITED;
    Table created.
    SQL> select * from se2_ext;
    SQL> select count(*) from se2_ext;
    select count(*) from se2_ext
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-04043: table column not found in external source: SE_REF_NO
    ORA-06512: at "SYS.ORACLE_LOADER", line 19

    It would appear that your external table definition and the external data file do not match up: the field list in the ACCESS PARAMETERS clause names country_* fields, while the table columns are the SE_* fields, which is why the loader cannot find SE_REF_NO in the external source. Post a few input records so someone can duplicate the problem and determine the fix.
    HTH -- Mark D Powell --
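    For reference, a minimal corrected definition, assuming SE2.csv really does contain the four SE_* fields in that order (the datatypes in the field list are illustrative; the point is that the field names must match the table columns):

    -- The original field list named country_* fields, which do not
    -- exist in the table, hence KUP-04043 on SE_REF_NO.
    CREATE TABLE se2_ext (
      se_ref_no          VARCHAR2(255),
      se_cust_id         NUMBER(38),
      se_tran_amt_lcy    FLOAT(126),
      se_reversal_marker VARCHAR2(255)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_tables
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
        (
          se_ref_no          CHAR(255),
          se_cust_id         CHAR(40),
          se_tran_amt_lcy    CHAR(50),
          se_reversal_marker CHAR(255)
        )
      )
      LOCATION ('SE2.csv')
    )
    REJECT LIMIT UNLIMITED;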

  • Error while activating data from new table of DSO to active table

    HI,
    While activating data from the new table of a DSO to the active table, I am getting the
    error message "Error occurred while deciding partition number".
    Any idea how to resolve this one?
    thanks & regards
    KPS MOORTHY

    Hi
    You are trying to update/upload records which are already in the DSO active data table, which has the partition.
    Try to see which record numbers have already been activated and update/upload selectively, if possible.
    You can trace the changes at Change log table for the same.
    Hope it helps

  • Select max date from a table with multiple records

    I need help writing an SQL to select max date from a table with multiple records.
    Here's the scenario. There are multiple SA_IDs repeated with various EFFDT (dates). I want to retrieve the most recent effective date so that the SA_ID is unique. Looks simple, but I can't figure this out. Please help.
    SA_ID CHAR_TYPE_CD EFFDT CHAR_VAL
    0000651005 BASE 15-AUG-07 YES
    0000651005 BASE 13-NOV-09 NO
    0010973671 BASE 20-MAR-08 YES
    0010973671 BASE 18-JUN-10 NO

    Hi,
    Welcome to the forum!
    Whenever you have a question, post a little sample data in a form that people can use to re-create the problem and test their ideas.
    For example:
    CREATE TABLE table_x
    (     sa_id         NUMBER (10)
    ,     char_type     VARCHAR2 (10)
    ,     effdt         DATE
    ,     char_val      VARCHAR2 (10)
    );
    INSERT INTO table_x (sa_id, char_type, effdt, char_val)
         VALUES (0000651005, 'BASE', TO_DATE ('15-AUG-2007', 'DD-MON-YYYY'), 'YES');
    INSERT INTO table_x (sa_id, char_type, effdt, char_val)
         VALUES (0000651005, 'BASE', TO_DATE ('13-NOV-2009', 'DD-MON-YYYY'), 'NO');
    INSERT INTO table_x (sa_id, char_type, effdt, char_val)
         VALUES (0010973671, 'BASE', TO_DATE ('20-MAR-2008', 'DD-MON-YYYY'), 'YES');
    INSERT INTO table_x (sa_id, char_type, effdt, char_val)
         VALUES (0010973671, 'BASE', TO_DATE ('18-JUN-2010', 'DD-MON-YYYY'), 'NO');
    COMMIT;
    Also, post the results that you want from that data. I'm not certain, but I think you want these results:
         SA_ID LAST_EFFDT
        651005 13-NOV-09
      10973671 18-JUN-10
    That is, the latest effdt for each distinct sa_id.
    Here's how to get those results:
    SELECT    sa_id
    ,         MAX (effdt)    AS last_effdt
    FROM      table_x
    GROUP BY  sa_id
    ;
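    If you also need other columns (such as char_val) from the latest row, a plain GROUP BY on sa_id cannot return them directly. A hedged alternative using the analytic ROW_NUMBER function on the sample table above:

    -- Keep the whole latest row per sa_id, not just the date.
    SELECT sa_id, effdt AS last_effdt, char_val
    FROM  (
            SELECT t.*
            ,      ROW_NUMBER () OVER (PARTITION BY sa_id
                                       ORDER BY effdt DESC) AS rn
            FROM   table_x t
          )
    WHERE  rn = 1;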

  • Deadlock issue while sending data from PI to JDBC !!!

    Hi All,
    We are getting below error while sending data from PI to JDBC (Database).
    Error - JDBC message processing failed; reason Error processing request in sax parser: Error when executing statement for table/stored proc. 'store_order_details' (structure 'Order_details_Statement'): com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 61) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
    This is happening when more than one PI message is trying to deliver data at the same time to database.
    Can anyone let us know the root cause of this issue and how we can rectify it?
    Is there some setting we can use to do parallel processing? Delivering one message at a time to the database is causing performance issues and taking a lot of time.
    Thanks
    Neha Verma

    Hello Neha,
    Strange, but can you please get the information below?
    Please check with the DB admin whether the user is getting locked, or whether there are any hanging threads related to the user.
    Also confirm with the DB admin whether the exclusive lock is on the table or on the row when you try inserting or updating information.
    You can share the user from the receiver channel.
    Regards,
    Hiren A.
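    As an illustration of the row-versus-table lock check (a diagnostic sketch for SQL Server; sys.dm_tran_locks is the standard DMV, 'YourDatabase' is a placeholder, and VIEW SERVER STATE permission is required):

    -- Show current lock requests while the PI messages are inserting;
    -- resource_type tells you whether contention is at KEY, PAGE,
    -- or OBJECT (whole table) granularity.
    SELECT request_session_id, resource_type, request_mode, request_status
    FROM   sys.dm_tran_locks
    WHERE  resource_database_id = DB_ID('YourDatabase');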

  • Error While Viewing Data from MARA Table

    Hi All,
    After importing SAP tables (e.g. MARA) and viewing data, I am getting "Error calling RFC function to get table data: DATA_BUFFER_EXCEEDED". After decreasing the selection or applying filters on fields, I am still not able to get data.
    I am able to extract data from the T001 table.
    I need to pull data in volume, so how do I tackle this problem?
    Thanks,
    Ravindra

    Hi Ravindra,
      You can check OSS Note 1186277 for the resolution of this issue.
    Note 1186277 is a SAP knowledge article.
    You can access it from here  - [Note - 1186277|https://css.wdf.sap.corp/sap(bD1lbiZjPTAwMQ==)/bc/bsp/sno/ui_entry/entry.htm?param=69765F6D6F64653D3030312669765F7361706E6F7465735F6E756D6265723D3131383632373726]
    Regards,
    Lokesh

  • SQL SELECT performance - approaches to fetch data from a big table

    Hi
    I just wanted to get views on different approaches to fetch data from a table which has 40 billion records and which is joined to another table which has 1 million records.
    e.g.
    I have two tables TableA and TableB
    TableA has 40 billion records and 6 columns
         TableA is partitioned on date
         TableA has the required indexes
    TableB has 1 million records and 10 columns
         TableB has indexes
    Now I have written a query like:
    select distinct TableA.column1,TableA.column2,TableB.columnA
    FROM TableA join TableB
    ON TableA.Column1=TableB.Column2
    WHERE TableA.DateColumn between StartDate and EndDate
    For a given date range it will fetch 5 billion records, which takes around 40 minutes.
    I just wanted to know what tuning approaches I can follow, and what would be the best approach to make record retrieval faster in such a scenario.
    Just wanted to know your views/experience in such a scenario.

    A sufficiently large array fetch size,
    and,
    possibly using parallel query,
    pop into my mind.
    I would be interested, though, in the business requirement that asks you to write a program that gets 5 billion (!) rows out of the database...
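    To illustrate both suggestions in SQL*Plus terms (ARRAYSIZE controls rows fetched per round trip; the PARALLEL hint is only a request and depends on the instance's parallel configuration; the bind names are placeholders):

    -- Fetch more rows per round trip (other clients expose an
    -- equivalent fetch-size / prefetch setting).
    SET ARRAYSIZE 5000

    -- Ask for a parallel scan of the large partitioned table.
    SELECT /*+ PARALLEL(a 8) */
           DISTINCT a.column1, a.column2, b.columnA
    FROM   TableA a
    JOIN   TableB b ON a.Column1 = b.Column2
    WHERE  a.DateColumn BETWEEN :start_date AND :end_date;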

  • Access issues while inserting data in a table in the same schema

    Hi All.
    I have a script that first creates and then populates a table. My script used to run fine in the production environment until a few hours ago, but all of a sudden it is throwing an error while inserting data into the table.
    Error message: "Insufficient privileges".
    Please suggest what may be the reasons for this kind of error.
    Thanks in advance

    Sonika wrote:
    ... all of a sudden, it is throwing an error while inserting data into the table. Error message: "Insufficient privileges".
    1) something changed
    2) you are hitting a bug
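    To see which of the two it is, a quick check against the standard dictionary views (run as the user executing the script; 'YOUR_TABLE' is a placeholder for the target table name):

    -- Privileges currently active in this session.
    SELECT * FROM session_privs ORDER BY privilege;

    -- Roles granted to the current user; a revoked role, or a
    -- privilege revoked from a role, is a common cause.
    SELECT * FROM user_role_privs;

    -- Object grants received on the target table, if it belongs
    -- to another schema.
    SELECT grantor, privilege
    FROM   user_tab_privs_recd
    WHERE  table_name = 'YOUR_TABLE';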
