Select query taking too much time to fetch data from pool table a005

Dear all,
I am using two pool tables, A005 and A006, in my program, and I am using a SELECT query to fetch data from these tables. An example is below.
select * from a005 into table t_a005
  for all entries in it_itab
  where vkorg in s_vkorg
    and matnr in s_matnr
    and aplp  in s_aplp
    and kmunh = it_itab-kmunh.
I can't create an index either, as these are pool tables. If there is any solution, please help me with this.
Thanks,

It would be helpful to know what other fields are in the internal table you are using for the FOR ALL ENTRIES.
In general, you should code the fields in your WHERE clause in the same order as they appear in the database table. If you do not supply the top key field, the entire table is read; if the table is large, that is going to take a lot of time. The more key fields from the beginning of the key that you can supply, the faster the retrieval.
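As an illustration, a minimal sketch along those lines, assuming KAPPL and KSCHL are the leading key fields of A005 in your system (check the field order in SE11); the 'V' literal and the s_kschl select-option are placeholders, and the empty-table check guards the FOR ALL ENTRIES:

if not it_itab[] is initial.
  select * from a005 into table t_a005
    for all entries in it_itab
    where kappl = 'V'              " leading key field (placeholder value)
      and kschl in s_kschl         " hypothetical select-option for the condition type
      and vkorg in s_vkorg
      and matnr in s_matnr
      and aplp  in s_aplp
      and kmunh = it_itab-kmunh.
endif.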
Regards,
Brent

Similar Messages

  • SELECT is taking a lot of time to fetch data from cluster table BSET

    Hi experts,
    I want to fetch data for some fields from the BSET table, but it is taking a lot of time because BSET is a cluster table.
    Can you please suggest another way to fetch data from a cluster table? I am currently using native SQL to fetch the data.
    Regards,
    SURYA

    Hi Subhas,
    As per your suggestion I am now using a normal Open SQL statement to select data from BSET, but it is still taking a long time.
    My SQL statement is :
    SELECT BELNR GJAHR BUZEI MWSKZ HWBAS KSCHL KNUMH
      FROM BSET INTO CORRESPONDING FIELDS OF TABLE IT_BSET
      FOR ALL ENTRIES IN IT_BKPF
      WHERE BELNR = IT_BKPF-BELNR
        AND BUKRS = IT_BKPF-BUKRS.
    Can you suggest anything more?
    Regards,
    SURYA

  • Delete query taking too much time

    Hi All,
    My delete query is taking too much time: around 1 hr 30 min for 1.5 lakh (150,000) records.
    I have already dropped the MV log on the table and disabled all the triggers on it.
    Moreover, the deletion is based on the primary key.
    delete from table_name where primary_key in (values)
    The above is a dummy format of my query.
    Can anyone please tell me what other reason could make the query perform that slowly?
    Is there anything to check in the DB other than triggers, MV logs and constraints in order to improve the performance?
    Please reply asap.

    DELETE is the most time-consuming DML operation, as the whole record has to be stored in the undo segments. On the other hand, the part of the query used to select the records to delete, the IN (values) clause, is probably adding extra overhead to the process. It would help if you posted a sample of this (values) clause; I would guess it is a subquery, and that obtaining this list requires running an inefficient query.
    You can gather the execution plan so you can see where the heaviest part of the query is. That way a better tuning approach and a more accurate diagnosis can be made.
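    For instance, a minimal sketch of gathering the plan (table_name and primary_key are from your dummy query; driver_table and id stand in for whatever produces the values list), assuming your version has DBMS_XPLAN:
    EXPLAIN PLAN FOR
      DELETE FROM table_name
       WHERE primary_key IN (SELECT id FROM driver_table);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);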
    ~ Madrid.

  • Query taking too much time with dates??

    hello folks,
    I am trying to pull some data using a date condition, and for some reason it takes too much time to return the data:
       and trunc(al.activity_date) = TRUNC (SYSDATE, 'DD') - 1     -- if I use this it takes too much time
       and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
       and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- if I use this it returns the data in a second. Why is that?
    How do I get the previous day without hardcoding to_date('20101123 000000', 'YYYYMMDD HH24MISS'), so that the data is still retrieved quickly?

    Presumably you've got an index on activity_date.
    If you apply a function like TRUNC to activity_date, you can no longer use the index.
    Post execution plans to verify.
    and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
    and al.activity_date < TRUNC (SYSDATE, 'DD')
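    Put together, a sketch of the index-friendly previous-day filter (al and activity_date come from your post; the table name is assumed):
    SELECT al.*
      FROM activity_log al
     WHERE al.activity_date >= TRUNC(SYSDATE, 'DD') - 1
       AND al.activity_date <  TRUNC(SYSDATE, 'DD');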

  • Why is query taking too much time ?

    Hi gurus,
    I have a table named test which has 100,000 records in it. Now the question I would like to ask is:
    when I query select * from test there is no problem with the response time, but when I fire the same query the next day it takes about three times as long. I would also like to tell you that everything is fine in respect of tuning: the DB is properly tuned and the network is tuned properly. What could be the hurting factor here?
    take care
    All expertise.

    Here is a small test on my windows PC.
    oracle 9i Rel1.
    Table : emp_test
    number of records : 42k
    set autot trace exp stat
    15:29:13 jaffar@PRIMEDB> select * from emp_test;
    41665 rows selected.
    Elapsed: 00:00:02.06 ==> response time.
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
    1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
    Statistics
    0 recursive calls
    0 db block gets
    2951 consistent gets
    178 physical reads
    0 redo size
    1268062 bytes sent via SQL*Net to client
    31050 bytes received via SQL*Net from client
    2779 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    41665 rows processed
    15:29:40 jaffar@PRIMEDB> delete from emp_test where deptno = 10;
    24998 rows deleted.
    Elapsed: 00:00:10.06
    15:31:19 jaffar@PRIMEDB> select * from emp_test;
    16667 rows selected.
    Elapsed: 00:00:00.09 ==> response time
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
    1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
    Statistics
    0 recursive calls
    0 db block gets
    1289 consistent gets
    0 physical reads
    0 redo size
    218615 bytes sent via SQL*Net to client
    12724 bytes received via SQL*Net from client
    1113 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    16667 rows processed

  • SELECT query takes too much time! Y?

    Please find my SELECT query below:
    select w~mandt
           w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
           w~kwmeng w~vrkme w~matwa w~charg w~pstyv
           w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
           w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
           w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
           w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
           w~bedae w~cuobj w~mtvfp
           x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
           x~edatu
           x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
           x~ezeit
      into table t_vbap
      from vbap as w
      inner join vbep as x on x~vbeln = w~vbeln and
                              x~posnr = w~posnr and
                              x~mandt = w~mandt
      where
        ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
        ( ( ( w~erdat > pre_dat and w~erdat < p_syndt ) or
            ( w~erdat = p_syndt and w~erzet <= p_syntm ) ) ) and
        w~matnr in s_matnr and
        w~pstyv in s_itmcat and
        w~lfrel in s_lfrel and
        w~abgru = ' ' and
        w~kwmeng > 0 and
        w~mtvfp in w_mtvfp and
        x~ettyp in w_ettyp and
        x~bdart in s_req_tp and
        x~plart in s_pln_tp and
        x~etart in s_etart and
        x~abart in s_abart and
        ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    The problem: It takes too much time while executing this statement.
    Could anybody change this statement and help me out to reduce the DB Access time?
    Thx

    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
           •     Select Queries
           •     SQL Interface
           •     Aggregate Functions
           •     For All Entries
           •     Select over more than one internal table
    Selection Criteria
    1.     Restrict the data to the selection criteria itself, rather than filtering it out afterwards in the ABAP code with a CHECK statement.
    2.     Select with selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements: Select Queries
    1.     Avoid nested selects
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2.     Select all the records in a single shot using the INTO TABLE clause of the SELECT statement rather than appending them one by one with APPEND statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
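    For example, a sketch against SBOOK, whose primary index (after MANDT) begins with CARRID, CONNID and FLDATE; the WHERE clause supplies the fields in that order so the optimizer can match the index (the literal values are placeholders):
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'
        AND CONNID = '0400'
        AND FLDATE = '20110629'.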
    4.     For testing existence, use SELECT ... UP TO 1 ROWS instead of a SELECT ... ENDSELECT loop with an EXIT.
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5.     Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
    SELECT SINGLE requires one communication with the database system, whereas SELECT ... ENDSELECT needs two.
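    A sketch with the full key supplied, assuming the standard flight-model key of SBOOK (CARRID, CONNID, FLDATE, BOOKID, CUSTOMID after MANDT); the values are placeholders:
    SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
      WHERE CARRID   = 'LH'
        AND CONNID   = '0400'
        AND FLDATE   = '20110629'
        AND BOOKID   = '00000001'
        AND CUSTOMID = '00000001'.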
    Select Statements: SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used Select statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3.     Using buffered tables improves the performance considerably.
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements: Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
    Maxno = 0.
    Select * from zflight where airln = 'LF' and cntry = 'IN'.
      Check zflight-fligh > maxno.
      Maxno = zflight-fligh.
    Endselect.
    The above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements: For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points that must be considered for FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
              Loop at int_cntry.
      Select single * from zfligh into int_fligh
      where cntry = int_cntry-cntry.
      Append int_fligh.
                          Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements: Select over more than one internal table
    1.     It's better to use a view instead of nested SELECT statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V.
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    2.     To read data from several logically connected tables use a join instead of nested SELECT statements. Joins are preferred only if all the primary keys are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using “ITAB2[ ] = ITAB1[ ]” as compared to “LOOP-APPEND-ENDLOOP”.
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    “SORT ITAB BY K.” makes the program runs faster as compared to “SORT ITAB.”
    Internal Tables (contd.)
    Hashed and Sorted Tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Query takes too much time in fetching last records.

    Hi,
    I am using Oracle 8.1 and trying to execute a SQL statement; it takes a few minutes and displays the records.
    When fetching all the records, it is fast up to some point and then takes much longer to fetch the last records.
    Ex: if the total number of records = 16336, it fetches records quickly up to 16300 and then takes approx. 500 sec to fetch the last 36 records.
    Could you kindly let me know the reason?
    I have copied the explain plan below for your reference. Please let me know if anything else is required.
    SELECT STATEMENT, GOAL = RULE               4046     8     4048
    NESTED LOOPS OUTER               4046     8     4048
      NESTED LOOPS OUTER               4030     8     2952
       FILTER                         
        NESTED LOOPS OUTER                         
         NESTED LOOPS OUTER               4014     8     1728
          NESTED LOOPS               3998     8     936
           TABLE ACCESS BY INDEX ROWID     IFSAPP     CUSTOMER_ORDER_TAB     3966     8     440
            INDEX RANGE SCAN     IFSAPP     CUSTOMER_ORDER_1_IX     108     8     
           TABLE ACCESS BY INDEX ROWID     IFSAPP     CUSTOMER_ORDER_LINE_TAB     4     30667     1901354
            INDEX RANGE SCAN     IFSAPP     CUSTOMER_ORDER_LINE_PK     3     30667     
          TABLE ACCESS BY INDEX ROWID     IFSAPP     PWR_CONS_PARCEL_CONTENT_TAB     2     2000     198000
           INDEX RANGE SCAN     IFSAPP     PWR_CONS_PARCEL_CONTENT_1_IDX     1     2000     
         TABLE ACCESS BY INDEX ROWID     IFSAPP     PWR_CONS_PARCEL_TAB     1     2000     222000
          INDEX UNIQUE SCAN     IFSAPP     PWR_CONS_PARCEL_PK          2000     
       TABLE ACCESS BY INDEX ROWID     IFSAPP     CONSIGNMENT_PARCEL_TAB     1     2000     84000
        INDEX UNIQUE SCAN     IFSAPP     CONSIGNMENT_PARCEL_PK          2000     
      TABLE ACCESS BY INDEX ROWID     IFSAPP     PWR_OBJECT_CONNECTION_TAB     2     20     2740
        INDEX RANGE SCAN     IFSAPP     PWR_OBJECT_CONNECTION_IX1     1     20
    Thanks.

    We are using the PL/SQL Developer tool. The time we mentioned in the post is an approximate time.
    Apologies for not mentioning these details in the previous thread.
    Let it be an approximate time, but how did you arrive at that time? When a query fetches records, how did you derive that one portion is fetched in x and the remainder in y?
    I would suggest this could be some issue with PL/SQL Developer (I have never used this tool myself), but for performance testing I would suggest you use SQL*Plus. That is the best tool to test performance.
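    For end-to-end timing, a minimal SQL*Plus sketch (the query shown is a placeholder taken from your plan; substitute the real slow statement, and skip the AUTOTRACE line if the PLUSTRACE role is not set up):
    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    SET ARRAYSIZE 500
    -- replace this query with the real slow statement
    SELECT * FROM ifsapp.customer_order_tab;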

  • Insert Query taking too much time (5 hrs)

    Hi experts,
    We are using "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit" on our server, and we are trying to execute a SQL script which inserts data into a table from 5 different tables.
    Note: The SQL query is passed directly to the server.
    Need your expertise as to what is causing the script to run so slowly (it takes around 5 hrs to execute).
    The script is as follows:
    Insert /*+ append */ into TABLE1 NOLOGGING
    SELECT DISTINCT /*+ PARALLEL(TABLE1, 4) */ ROW_NUMBER() OVER
    (ORDER BY COL1, COL2, COL3, COL4, COL5, COL6, COL7, COL8, COL9) AS PK_ID, TABLE1.COL1 AS PRODUCT_ID, TABLE1.M_NBR AS M_NBR, TABLE1.P_NBR AS P_NBR, TABLE1.C_ID AS C_ID, TABLE1.TABLE1_SERV_DT AS TABLE1_SERV_DT, TABLE1.C_KEY AS C_KEY, TABLE1.C_DEN AS C_DEN, TABLE1.A_S_DT AS A_S_DT,
    TABLE1.F_ID AS F_ID, TABLE1.L_DT AS L_DT, TABLE1.A_DT AS A_DT, TABLE1.D_DT AS D_DT, TABLE1.C_TYPE AS C_TYPE,
    TABLE1.S_COL AS S_COL, TABLE1.S_TBL AS S_TBL, TABLE4.ME AS ME, TABLE1.ER_YN AS ER_YN
    FROM
    TABLE1,
    TABLE2,
    TABLE3,
    TABLE4,
    TABLE5
    WHERE
    TABLE1.COL1 = TABLE2.COL1
    AND
    TABLE1.COL2 = TABLE3.COL2
    AND
    TABLE1.COL3 = TABLE4.COL3
    AND
    (TABLE1.SERV_DT BETWEEN RY_START AND RY_END)
    AND
    (TABLE1.CLAIM = :"SYS_B_0")
    AND
    (TABLE2.INACTIVATED = :"SYS_B_1")
    AND
    (TABLE2.TABLE1 = :"SYS_B_2")
    AND
    (TABLE2.PRELIM = :"SYS_B_3")
    AND
    (TABLE3.MEASURE = :"SYS_B_4")
    AND
    (TABLE3.EVENTS = :"SYS_B_5");

    In addition to APC, here's some more on the subject:
    EXACT - (Default) Only statements with an exact text match will share the same SQL area.
    SIMILAR - Oracle will substitute bind variable for all literals, thereby increasing the chances of a text match. Oracle will force similar statements to share the SQL area without deteriorating execution plans.
    FORCE - The same as SIMILAR except that execution plans may deteriorate. This option should only be used if the risk of suboptimal plans is outweighed by the increase in cursor sharing .
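    As an illustration, a hedged sketch of checking and changing the setting (agree the scope, session or system, with your DBA first):
    SHOW PARAMETER cursor_sharing
    ALTER SESSION SET cursor_sharing = EXACT;
    -- or instance-wide: ALTER SYSTEM SET cursor_sharing = EXACT SCOPE = BOTH;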
    Summary from:
    http://www.oracle-base.com/articles/9i/performance-enhancements-9i.php#CursorSharing
    In-depth:
    http://www.oracle.com/technetwork/issue-archive/2006/06-jan/o16asktom-101983.html
    and you can search the docs @ http://www.oracle.com/pls/db112/homepage for more.
    Hope your DBA and you can fix it.
    Keep us posted.

  • Query taking too much time!!!!!

    Sorry for posting without the format. I will post another question using your format blog.

    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting

  • Crystal Report Query taking too much time

    Hi,
    We are developing a report based on SQL Server 2008 in Crystal Reports. There are around 50,000 valid combinations in the database. Based on dynamic filters we need to bring only a few records into the report. Since these filters are at report level, and Crystal Reports uses a microcube, it takes more than 15 mins to execute.
    Is there any option to fetch records based on the filters applied at report level?
    Regards
    Baby

    HI,
    First of all, thank you very much.
    Since we have cascading prompts, we never thought of it this way.
    Details:
    For our report we have 4 prompts.
    1. category -> family -> brand (cascading, mandatory)
    2. season (mandatory)
    3. collection (mandatory)
    4. owner (not mandatory)
    Previously we set all these filters at record level.
    Now we set season and collection at query level and brand and owner at report level. The report now queries only the selected season and collection.
    Thanks once again.
    Regards
    Baby

  • Partition Table Query taking Too Much Time

    I have created a partition table and created a local partition index on a column whose datatype is DATE.
    Now when I query the table and use the index column in the WHERE clause, it scans the whole table (full scan). The query is:
    Select * From mytable
    where to_char(transaction_date, 'DD-MON-YY') = '01-Aug-07';
    I have to use the TO_CHAR function rather than TO_DATE due to a front-end application problem.

    Before we go too far with this, if you manually query with TO_DATE on the variable instead of TO_CHAR on the column, does the query actually use the index?
    The TO_CHAR on the column will definitely stop Oracle from using any index on the column. If the query will use the index if you TO_DATE the variable, as I see it, you have three options. First, fix the application problem that won't let you use TO_DATE from the application. Second, change the application to call a function returning a ref cursor, get the date string as a parameter to the function, and do the TO_DATE in the function.
    Third, you could consider creating a function-based index on TO_CHAR(transaction_date, 'dd-Mon-yy'). This would be the least desirable option, particularly if you would also be selecting records based on a range of transaction_dates, since it loses a lot of information that the optimizer could use in devising an efficient query plan. It could also change your results for a range scan.
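    For illustration, a minimal sketch of the two shapes described above (mytable and transaction_date come from the thread; the index name is made up):
    -- index-friendly range on the raw column, so an index on transaction_date can be used
    SELECT *
      FROM mytable
     WHERE transaction_date >= TO_DATE('01-AUG-07', 'DD-MON-YY')
       AND transaction_date <  TO_DATE('01-AUG-07', 'DD-MON-YY') + 1;
    -- least desirable: a function-based index matching the TO_CHAR expression exactly
    CREATE INDEX mytable_txn_dt_fbi ON mytable (TO_CHAR(transaction_date, 'DD-MON-YY'));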
    John

  • Query is taking too much time for inserting into a temp table and for spooling

    Hi,
    I am working on a query optimization project where I have found a query which takes a very long time to execute.
    Temp table is defined as follows:
    DECLARE @CastSummary TABLE (CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
    ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50), Customer NVARCHAR(MAX), Targets FLOAT)
    SELECT
    C.CastID,
    SO.SalesOrderID,
    PO.ProductionOrderID,
    F.CalculatedWeight,
    PO.ProductionOrderNo,
    SO.SalesOrderNo,
    SC.Name,
    SO.OrderQty
    FROM
    CastCast C
    JOIN Sales.Production PO ON PO.ProductionOrderID = C.ProductionOrderID
    join Sales.ProductionDetail d on d.ProductionOrderID = PO.ProductionOrderID
    LEFT JOIN Sales.SalesOrder SO ON d.SalesOrderID = SO.SalesOrderID
    LEFT JOIN FinishedGoods.Equipment F ON F.CastID = C.CastID
    JOIN Sales.Customer SC ON SC.CustomerID = SO.CustomerID
    WHERE
    (C.CreatedDate >= @StartDate AND C.CreatedDate < @EndDate)
    It takes almost 33% for the Table Insert when I insert the data into a temp table and then 67% for Spooling. I removed 2 LEFT JOINs from the above query, made them plain JOINs and tried again. Query execution became a bit faster, but it still needs improvement.
    How can I improve it further? Will it be good enough if I create indexes on the columns of the temp table, or what if I use derived tables? Please suggest.
    -Pep

    How can I improve it further? Will it be good enough if I create indexes on the columns of the temp table, or what if I use derived tables?
    I suggest you start with index tuning.  Specifically, make sure columns specified in the WHERE and JOIN columns are properly indexed (ideally clustered or covering, and unique when possible).  Changing outer joins to inner joins is appropriate
    if you don't need outer joins in the first place.
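    A hedged sketch of that indexing suggestion: switch the table variable to a temporary table (the column list is taken from the post above) and give it a clustered index on the main insert/join key; the index name and key choice are assumptions to adapt:
    CREATE TABLE #CastSummary (
        CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
        ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50),
        Customer NVARCHAR(MAX), Targets FLOAT);
    CREATE CLUSTERED INDEX IX_CastSummary_CastID ON #CastSummary (CastID);
    -- then: INSERT INTO #CastSummary (...) SELECT ... (the SELECT from the original post)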
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Query taking too much time when running through discover

    Hi
    I have created a report with a SQL query by creating a custom folder in Oracle Discoverer Desktop. The query uses a parameter with SYS_CONTEXT. When the report is executed from Discoverer it takes more than 5 minutes, while the same query executes in 30 seconds when run against the database through TOAD.
    Please let me know what could be the reason for this.
    Thanks

    Hi,
    The first thing to check is whether the query is running to completion in TOAD. By default, TOAD just selects the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
    Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare this SQL and its explain plan to the SQL you ran in TOAD.
    Thirdly, check that the session context is the same in both cases. So check that any custom contexts and the USER_ENV context are the same, and that any security packages or VPD policies used in the SQL have been initialised the same way.
    If you still cannot determine the difference then trace both sessions.
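    A hedged sketch of enabling a trace on the Discoverer session (10g or later; the 123/456 values are placeholders for the SID and SERIAL# you look up in V$SESSION):
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);
    -- reproduce the slow report, then:
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);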
    Rod West

  • It is taking too much time run the page from Jdeveloper (R12)

    Hi Gurus
    When I am running a page in R12 from JDeveloper, it takes about 15-20 minutes to render the page. It is happening for the first time with me; I am working on JDeveloper 10g. Would someone please give some hints? Am I missing some setting in JDeveloper?
    Thanx in advance
    Pratap

    Pratap
    The first thing that causes pages to load slowly is the location of your database server. If you are working locally and your DB server is located far away, your page will load slowly. You can also try connecting to another server instance to test whether the issue is with only that instance. Moreover, if you run the page in debug mode, the page will also load slowly.
    Please update the thread if you find any other issue, so that it may help others too.
    Thanks
    AJ

  • It is taking too much time run the page from Jdeveloper

    Hi Gurus
    When I am running a page from JDeveloper, it takes about 20-25 minutes to render the page. It is happening for the first time with me; I am working on JDeveloper 10g. Would someone please give some hints?
    Am I missing some setting in JDeveloper?
    Thanx in advance
    Pratap

    Thank you, Shay, for the reply.
    I am sorry, I feel I have not given complete information.
    I am using JDeveloper for Oracle EBS R12 and downloaded the patch p8431482_R12_GENERIC from Metalink, and I am running the same exercises which are shipped with JDeveloper.
    Thanx
    Pratap
