How to optimize this select statement? It's a simple select...

How can I optimize this select statement? The table has about 1 million records,
and this simple select statement is not finishing; it is taking a lot of time:
  SELECT guid
         stcts
    INTO TABLE gt_corcts
    FROM /sapsll/corcts
    FOR ALL ENTRIES IN gt_mege
    WHERE stcts = gt_mege-ctsex
      AND guid_pobj = gt_sagmeld-guid_pobj.
regards
Arora

Hi Arora,
Using PACKAGE SIZE is very simple, and it lets you avoid the timeout as well as the memory problem. Sometimes, if you have too many records in the internal table, you will get a short dump called TSV_TNEW_PAGE_ALLOC_FAILED.
Below is the sample code.
DATA p_size TYPE i VALUE 50000.

SELECT field1 field2 field3
  INTO TABLE itab1 PACKAGE SIZE p_size
  FROM dtab
  WHERE <condition>.
  " Other logic or processing on the internal table itab1 goes here
  FREE itab1.
ENDSELECT.
The only catch is that you have to close the loop with ENDSELECT.
How it works:
On the first pass, the SELECT puts 50000 records (or whatever p_size you gave) into the internal table itab1.
On the next pass, it clears the 50000 records already there and fills the table with the next 50000 records from the database table.
So care should be taken to do all the logic or processing within the SELECT ... ENDSELECT loop.
Some ABAP standards may not allow you to use SELECT ... ENDSELECT, but this is the best way to handle huge volumes of data without short dumps and memory-related problems.
I am using this approach myself, and my data is much larger than yours: on average at least 5 million records per select.
Good luck, and I hope this helps you.
Regards,
Kasthuri Rangan Srinivasan

Similar Messages

  • How to optimize this SQL. Help needed.

    Hi All,
    Can you please help with this SQL:
    SELECT /*+ INDEX(zl1 zipcode_lat1) */
    zl2.zipcode as zipcode,l.location_id as location_id,
    sqrt(POWER((69.1 * ((zl2.latitude*57.295779513082320876798154814105) - (zl1.latitude*57.295779513082320876798154814105))),2) + POWER((69.1 * ((zl2.longitude*57.295779513082320876798154814105) - (zl1.longitude*57.295779513082320876798154814105)) * cos((zl1.latitude*57.295779513082320876798154814105)/57.3)),2)) as distance
    FROM location_atao l, zipcode_atao zl1, client c, zipcode_atao zl2
    WHERE zl1.zipcode = l.zipcode
    AND l.client_id = c.client_id
    AND c.client_id = 306363
    And l.appType = 'HOURLY'
    and c.milessearchzipcode >= sqrt(POWER((69.1 * ((zl2.latitude*57.295779513082320876798154814105) - (zl1.latitude*57.295779513082320876798154814105))),2) + POWER((69.1 * ((zl2.longitude*57.295779513082320876798154814105) - (zl1.longitude*57.295779513082320876798154814105)) * cos((zl1.latitude*57.295779513082320876798154814105)/57.3)),2))
    I tried to optimize it by adding a country column to the zipcode_atao table, so that we can limit the search in zipcode_atao by country.
    Any other suggestions?
    Thanks

    Welcome to the forum.
    Please follow the instructions given in this thread:
    How to post a SQL statement tuning request
    HOW TO: Post a SQL statement tuning request - template posting
    and add the necessary details we need to your thread.
    Depending on your database version (the result of: select * from v$version; ):
    Have you tried running the query without the index-hint?
    Are your table (and index) statistics up-to-date?

  • Need help on how to code this SQL statement! (one key has leading zeros)

    Good day, everyone!
    First of all, I apologize if this isn't the best forum.  I thought of putting it in the SAP Oracle database forum, but the messages there seemed to be geared outside of ABAP SELECTs and programming.  Here's my question:
    I would like to join the tables FMIFIIT and AUFK.  The INNER JOIN will be done between FMIFIIT's MEASURE (Funded Program) field, which is char(24), and AUFK's AUFNR (Order Number) field, which is char(12).
    The problem I'm having is this:  All of the values in AUFNR are preceded by two zeros.  For example, if I have a MEASURE value of '5200000017', the corresponding value in AUFNR is '005200000017'.  Because I have my SQL statement coded to just match the two fields, I obviously get no records returned because, I assume, of those leading zeros.
    Unfortunately, I don't have a lot of experience coding SQL, so I'm not sure how to resolve this.
    Please help!  As always, I will award points to ALL helpful responses!
    Thanks!!
    Dave

    >
    Dave Packard wrote:
    > Good day, everyone!
    > I would like to join the tables FMIFIIT and AUFK.  The INNER JOIN will be done between FMIFIIT's MEASURE (Funded Program) field, which is char(24), and AUFK's AUFNR (Order Number) field, which is char(12).
    >
    > The problem I'm having is this:  All of the values in AUFNR are preceded by two zeros.  For example, if I have a MEASURE value of '5200000017', the corresponding value in AUFNR is '005200000017'.  Because I have my SQL statement coded to just match the two fields, I obviously get no records returned because, I assume, of those leading zeros.
    > Dave
    You can't do a join like this in SAP's Open SQL.  You could do it in native SQL (EXEC SQL ... ENDEXEC) by using SUBSTR to strip off the leading zeros from AUFNR, but this would not be a good idea, because a) modifying a column in the WHERE clause will stop any index on that column being used, and b) using native SQL rather than Open SQL is really not something that should be encouraged, for database portability reasons etc.
    Forget about a database join and do it in two stages: get your AUFK data into an internal table, strip off the leading zeros, and then use FOR ALL ENTRIES (FAE) to get the FMIFIIT data (or do it the other way round); a sketch follows below.
    I do hope you've got an index on your FMIFIIT MEASURE field (we don't have one here); otherwise your SELECT could be slow if the table holds a lot of data.
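    For illustration, a minimal ABAP sketch of that two-stage approach (untested; FMIFIIT-MEASURE and AUFK-AUFNR are as described in the thread, while the internal table and work-area names are illustrative assumptions):
    DATA: BEGIN OF ls_aufk,
            aufnr   TYPE aufk-aufnr,
            measure TYPE fmifiit-measure,
          END OF ls_aufk,
          lt_aufk    LIKE TABLE OF ls_aufk,
          lt_fmifiit TYPE TABLE OF fmifiit.
    " Stage 1: read the orders and build the key without leading zeros
    SELECT aufnr FROM aufk INTO CORRESPONDING FIELDS OF TABLE lt_aufk.
    LOOP AT lt_aufk INTO ls_aufk.
      ls_aufk-measure = ls_aufk-aufnr.
      SHIFT ls_aufk-measure LEFT DELETING LEADING '0'.
      MODIFY lt_aufk FROM ls_aufk.
    ENDLOOP.
    " Stage 2: FOR ALL ENTRIES against FMIFIIT (driver table must not be empty)
    IF lt_aufk IS NOT INITIAL.
      SELECT * FROM fmifiit INTO TABLE lt_fmifiit
        FOR ALL ENTRIES IN lt_aufk
        WHERE measure = lt_aufk-measure.
    ENDIF.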

  • How to find out whether the select statement has written all selected values into a text file

    Hi,
    I am using Forms 6i. I am selecting a set of values from database tables, based on some user parameters, and writing them into a text file using Text_io.put_line. How can we make sure that the text file contains all the data selected by the select statement? Somebody told me there is a chance of data being lost while writing into the text file. Is there any way to find out whether all the selected fields and the corresponding output have been written to the .txt file?
    Please suggest me.

    > somebody told that there might be chances of aborting of data while writing into the text file
    What kind of "chance" does that somebody refer to?
    If you want to verify that the number of records (lines) in the file matches the number of records from the cursor, you could re-open the written file in read mode and count the lines by reading them one by one, then compare that count with the number of records from the cursor.
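    For illustration, a minimal Forms 6i sketch of that check (untested; the file name, table, and message handling are assumptions, and the COUNT(*) must use the same WHERE clause as the export query):
    DECLARE
      v_file  TEXT_IO.FILE_TYPE;
      v_line  VARCHAR2(32767);
      v_cnt   PLS_INTEGER := 0;
      v_rows  PLS_INTEGER;
    BEGIN
      -- Row count the export query returns (same WHERE clause as the export)
      SELECT COUNT(*) INTO v_rows FROM emp;
      -- Re-open the file written with TEXT_IO.PUT_LINE, this time in read mode
      v_file := TEXT_IO.FOPEN('c:\export\emp.txt', 'R');
      LOOP
        TEXT_IO.GET_LINE(v_file, v_line);  -- raises NO_DATA_FOUND at end of file
        v_cnt := v_cnt + 1;
      END LOOP;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        TEXT_IO.FCLOSE(v_file);
        IF v_cnt = v_rows THEN
          MESSAGE('File complete: ' || v_cnt || ' lines.');
        ELSE
          MESSAGE('Mismatch: ' || v_cnt || ' lines in file, ' || v_rows || ' rows selected.');
        END IF;
    END;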

  • How to optimize this??

    hello,
    I wrote this to extract the 9th highest salary from the emp table:
    SQL> select min(sal) from
    2 (select distinct sal from emp
    3 where sal is not null
    4 order by sal desc)
    5 where rownum<=9;
    MIN(SAL)
    1250
    But it has a limitation: when we ask for a rownum greater than the number of existing rows, it returns the minimum sal (the one in the last position) rather than raising an error or signalling the situation in some other way.
    The following run shows the query not behaving properly:
    SQL> select min(sal) from
    2 (select distinct sal from emp
    3 where sal is not null
    4 order by sal desc)
    5 where rownum<=40
    6 /
    MIN(SAL)
    800
    where 800 is in the last position of the 14-row table.
    Please optimize the query and solve my problem.

    user8710598 wrote:
    > only optimize or edit posted query...i knw the rank is given correct ans.
    That's a little demanding and rude of you. People are trying to help by understanding what you are actually trying to achieve.
    Anyway...
    SQL> ed
    Wrote file afiedt.buf
      1  select sal
      2  from (
      3    select sal, rownum as rn
      4    from (
      5      select distinct sal
      6      from emp
      7      order by sal desc
      8      )
      9    )
    10* where rn = 9
    SQL> /
           SAL
          1250
    SQL> ed
    Wrote file afiedt.buf
      1  select sal
      2  from (
      3    select sal, rownum as rn
      4    from (
      5      select distinct sal
      6      from emp
      7      order by sal desc
      8      )
      9    )
    10* where rn = 40
    SQL> /
    no rows selected
    SQL>
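    For reference, a common alternative (a sketch against the standard emp table, not run here) uses the DENSE_RANK analytic function; like the query above, it returns no rows when fewer than N distinct salaries exist:
    SELECT sal
    FROM  (SELECT DISTINCT sal,
                  DENSE_RANK() OVER (ORDER BY sal DESC) AS rnk
           FROM   emp
           WHERE  sal IS NOT NULL)
    WHERE  rnk = 9;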

  • How to optimize this sql by writing MINUS function.

    Hi all,
    how can this SQL be optimized by rewriting it with MINUS?
    These are my tables:
    1. CREATE TABLE POSTPAID
       (
         RECORD VARCHAR2(2000 BYTE),
         FLAG   NUMBER
       )
       Record format: the mobile number is a 10-character field within RECORD.
    2. CREATE TABLE SUBSCRIBER
       (
         PHONE_NO VARCHAR2(10 BYTE)
       )
    My requirement is that the following SQL needs to be rewritten using 'minus', as this one is very slow:
    select record from POSTPAID where substr(record,9,10) NOT in (select PHONE_NO from SUBSCRIBER)
    Thanks

    Why are you so particular about using "MINUS"? You can optimize the SQL by using "NOT EXISTS" instead of "NOT IN", as below:
    SELECT RECORD FROM POSTPAID A WHERE NOT EXISTS (SELECT 1 FROM SUBSCRIBER B WHERE SUBSTR(A.RECORD,9,10) = B.PHONE_NO)
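    Note, too, that NOT IN and NOT EXISTS behave differently when SUBSCRIBER.PHONE_NO can be NULL: a single NULL makes the NOT IN form return no rows at all, so the NOT EXISTS form is also the safer one.
    A supporting index is worth checking as well; a sketch, assuming no such index exists yet (the index name is illustrative). It gives the optimizer a cheap probe side for the anti-join:
    CREATE INDEX subscriber_phone_idx ON subscriber (phone_no);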

  • How to optimize this query?

    Hi,
    I have this query:
    UPDATE t1
    SET update_date = SYSDATE
    WHERE (col1, col2, col3, col4, col5, col6)
    IN (Some SELECT statement that uses GROUP BY)
    My issue is that table t1 gets a full scan. It is big and does not have an index on the update_date column. Is there any way to accelerate this query?
    Thanks!

    It is 10g, and I am not concerned with what is happening in the IN clause.
    Plan
    UPDATE STATEMENT ALL_ROWSCost: 15,604 Bytes: 216 Cardinality: 1                               
         8 UPDATE t1                         
              7 HASH JOIN RIGHT SEMI Cost: 15,604 Bytes: 216 Cardinality: 1                     
                   5 VIEW VIEW SYS.VW_NSO_1 Cost: 4,940 Bytes: 167 Cardinality: 1                
                        4 SORT GROUP BY Cost: 4,940 Bytes: 212 Cardinality: 1           
                             3 HASH JOIN Cost: 4,939 Bytes: 212 Cardinality: 1      
                                  1 TABLE ACCESS FULL TABLE t2 Cost: 3 Bytes: 171 Cardinality: 1
                                  2 INDEX FAST FULL SCAN INDEX (UNIQUE) XPKt1 Cost: 4,918 Bytes: 118,869,250 Cardinality: 2,899,250
                   6 TABLE ACCESS FULL TABLE t1 Cost: 10,646 Bytes: 142,063,250 Cardinality: 2,899,250
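    One option worth benchmarking is rewriting the statement as a MERGE (a sketch, not tested against the real tables: t2 stands in for the original GROUP BY subquery, and the GROUP BY guarantees the six join columns identify at most one source row per target row, as MERGE requires):
    MERGE INTO t1
    USING (SELECT col1, col2, col3, col4, col5, col6
           FROM   t2
           GROUP  BY col1, col2, col3, col4, col5, col6) src
    ON (    t1.col1 = src.col1 AND t1.col2 = src.col2 AND t1.col3 = src.col3
        AND t1.col4 = src.col4 AND t1.col5 = src.col5 AND t1.col6 = src.col6)
    WHEN MATCHED THEN UPDATE SET t1.update_date = SYSDATE;
    Whether this beats the existing HASH JOIN RIGHT SEMI plan depends on the data, so compare the execution plans before switching. The full scan of t1 itself is often the right access path here, since the filtering predicate is not on an indexed column.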

  • How to automate this update statement

    Hello:
    I need to convert this update statement to a PL/SQL procedure so that I can update one mk_product_id at a time, because the primary update table has 80 million rows. If I do this for one mk_product_id it is very fast; I need to update close to 2 million rows for month_id = 55.
    update processor a set a.mkrptqty =
    (select b.mkqty*p.rpt_conv_factor
    from
    processor b,
    product p
    where a.mk_record_id = b.mk_record_id
    and a.mk_line_nbr = b.mk_line_nbr
    and b.mk_product_id = p.part_code
    and a.mk_product_id = '480'
    and b.month_id = 55)
    where
    a.month_id = 55
    and a.mk_product_id = '480'
    Thanks,
    Mohan

    PL/SQL is slower than SQL.
    Keep your update as a single large update statement, but better correlate your inner select with your outer table:
    UPDATE processor a
    SET a.mkrptqty =
      (SELECT b.mkqty*p.rpt_conv_factor
         FROM processor b,
              product p
        WHERE a.mk_record_id  = b.mk_record_id
          AND a.mk_line_nbr   = b.mk_line_nbr
          AND b.mk_product_id = p.part_code
          AND a.month_id      = b.month_id)
    WHERE a.month_id      = 55
      AND a.mk_product_id = '480';
    As you can see in the above code, I correlated processor b completely with processor a from the outer update statement. But in the process I noticed that you could probably dispose of processor b completely, as noted below:
    UPDATE processor a
    SET a.mkrptqty =
      (SELECT a.mkqty*p.rpt_conv_factor
         FROM product p
        WHERE a.mk_product_id = p.part_code)
    WHERE a.month_id      = 55
      AND a.mk_product_id = '480';
    Please note that neither of these pieces of code has been tested, as I don't have the relevant tables at hand to test on.
    To update many mk_product_ids at a time, just use the IN operator, providing it with either a select list or a list of values:
    UPDATE processor a
    SET a.mkrptqty =
      (SELECT a.mkqty*p.rpt_conv_factor
         FROM product p
        WHERE a.mk_product_id = p.part_code)
    WHERE a.month_id      = 55
      AND a.mk_product_id in ('480','481');
    or
      WHERE (a.month_id, a.mk_product_id)
         in ((55,'480'), (65,'481'));
    or
      WHERE (a.month_id, a.mk_product_id)
         in (select month_id, mk_product_id from some_table where some_conditions = 'are met');
    Edited by: Sentinel on Sep 17, 2008 2:25 PM

  • Performance Tuning Issues  ( How to Optimize this Code)

    _How to Optimize this Code_
    FORM MATL_CODE_DESC.
      SELECT * FROM VBAK WHERE VKORG EQ SAL_ORG AND
                               VBELN IN VBELN AND
                               VTWEG IN DIS_CHN AND
                               SPART IN DIVISION AND
                               VKBUR IN SAL_OFF AND
                               VBTYP EQ 'C' AND
                               KUNNR IN KUNNR AND
                               ERDAT BETWEEN DAT_FROM AND DAT_TO.
        SELECT * FROM VBAP WHERE VBELN EQ VBAK-VBELN AND
                                 MATNR IN MATNR.
          SELECT SINGLE * FROM MAKT WHERE MATNR EQ VBAP-MATNR.
          IF SY-SUBRC EQ 0.
            IF ( VBAP-NETWR EQ 0 AND VBAP-UEPOS NE 0 ).
              IF ( VBAP-UEPVW NE 'B' AND VBAP-UEPVW NE 'C' ).
                MOVE VBAP-VBELN TO ITAB1-SAL_ORD_NUM.
                MOVE VBAP-POSNR TO ITAB1-POSNR.
                MOVE VBAP-MATNR TO ITAB1-FREE_MATL.
                MOVE VBAP-KWMENG TO ITAB1-FREE_QTY.
                MOVE VBAP-KLMENG TO ITAB1-KLMENG.
                MOVE VBAP-VRKME TO ITAB1-FREE_UNIT.
                MOVE VBAP-WAVWR TO ITAB1-FREE_VALUE.
                MOVE VBAK-VTWEG TO ITAB1-VTWEG.
                MOVE VBAP-UEPOS TO ITAB1-UEPOS.
              ENDIF.
            ELSE.
              MOVE VBAK-VBELN TO ITAB1-SAL_ORD_NUM.
              MOVE VBAK-VTWEG TO ITAB1-VTWEG.
              MOVE VBAK-ERDAT TO ITAB1-SAL_ORD_DATE.
              MOVE VBAK-KUNNR TO ITAB1-CUST_NUM.
              MOVE VBAK-KNUMV TO ITAB1-KNUMV.
             SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
                                             KSTEU = 'C' AND
                                             KHERK EQ 'A' AND
                                             KMPRS = 'X'.
             IF SY-SUBRC EQ 0.
               ITAB1-REMARKS = 'Manual Price Change'.
             ENDIF.
              SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
                                              KSTEU = 'C' AND
                                              KHERK IN ('C','D') AND
                                              KMPRS = 'X' AND
                                              KRECH IN ('A','B').
              IF SY-SUBRC EQ 0.
                IF KONV-KRECH EQ 'A'.
                  MOVE : KONV-KSCHL TO G_KSCHL.
                  G_KBETR = ( KONV-KBETR / 10 ).
                  MOVE G_KBETR TO G_KBETR1.
                  CONCATENATE G_KSCHL G_KBETR1 '%'
                              INTO ITAB1-REMARKS SEPARATED BY SPACE.
                ELSEIF KONV-KRECH EQ 'B'.
                  MOVE : KONV-KSCHL TO G_KSCHL.
                  G_KBETR = KONV-KBETR.
                  MOVE G_KBETR TO G_KBETR1.
                  CONCATENATE G_KSCHL G_KBETR1
                              INTO ITAB1-REMARKS SEPARATED BY SPACE.
                ENDIF.
              ELSE.
                ITAB1-REMARKS = 'Manual Price Change'.
              ENDIF.
              CLEAR : G_KBETR, G_KSCHL,G_KBETR1.
              MOVE VBAP-KWMENG TO ITAB1-QTY.
              MOVE VBAP-VRKME TO ITAB1-QTY_UNIT.
              IF VBAP-UMVKN NE 0.
                ITAB1-KLMENG = ( VBAP-UMVKZ / VBAP-UMVKN ) * VBAP-KWMENG.
              ENDIF.
              IF ITAB1-KLMENG NE 0.
                VBAP-NETWR = ( VBAP-NETWR / VBAP-KWMENG ).
                MOVE VBAP-NETWR TO ITAB1-INV_PRICE.
              ENDIF.
              MOVE VBAP-POSNR TO ITAB1-POSNR.
              MOVE VBAP-MATNR TO ITAB1-MATNR.
              MOVE MAKT-MAKTX TO ITAB1-MAKTX.
            ENDIF.
            SELECT SINGLE * FROM VBKD WHERE VBELN EQ VBAK-VBELN AND
                                            BSARK NE 'DFUE'.
            IF SY-SUBRC EQ 0.
              ITAB1-INV_PRICE = ITAB1-INV_PRICE * VBKD-KURSK.
              APPEND ITAB1.
              CLEAR ITAB1.
            ELSE.
              CLEAR ITAB1.
            ENDIF.
          ENDIF.
        ENDSELECT.
      ENDSELECT.
    ENDFORM.                               " MATL_CODE_DESC

    Hi Vijay,
    You could start by using INNER JOINS:
    SELECT ...
      FROM ( VBAK
             INNER JOIN VBAP
                     ON VBAP~VBELN = VBAK~VBELN
             INNER JOIN MAKT
                     ON MAKT~MATNR = VBAP~MATNR
                    AND MAKT~SPRAS = SYST-LANGU )
      INTO TABLE itab
      WHERE VBAK~VBELN IN VBELN
        AND VBAK~VTWEG IN DIS_CHN
        AND VBAK~SPART IN DIVISION
        AND VBAK~VKBUR IN SAL_OFF
        AND VBAK~VBTYP EQ 'C'
        AND VBAK~KUNNR IN KUNNR
        AND VBAK~ERDAT BETWEEN DAT_FROM AND DAT_TO
        AND VBAP~NETWR EQ 0
        AND VBAP~UEPOS NE 0.
    Regards,
    John.

  • How to optimize this SQL?

    Hi All!
    For example, I have two records from a table:
    Value1 Value2 Day_ID
    000001 000002 20031211
    000001 000002 20031219
    You can see that Value1 and Value2 are the same in both records; only the Day_IDs are different. So I need to find the record with max(Day_ID). This is my SQL:
    select *
    from MyTable
    where day_id = (select max(day_id) from MyTable)
    Is there any way to get the same result without a sub-select?
    Any help will be appreciated.
    With best regards,
    Andrej Litowka.

    > ORA-00937: not a single-group group function
    Then you must have a non-aggregated column in your SELECT clause that is not in your GROUP BY clause.
    Let's be careful out there!
    APC
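    For reference, one genuinely subquery-free pattern for "the row with the latest Day_ID" uses Oracle's FIRST/LAST aggregates; a sketch against the poster's MyTable (untested, and assuming a single row per max(day_id)):
    SELECT MAX(value1) KEEP (DENSE_RANK LAST ORDER BY day_id) AS value1,
           MAX(value2) KEEP (DENSE_RANK LAST ORDER BY day_id) AS value2,
           MAX(day_id)                                        AS day_id
    FROM   MyTable;
    This reads the table once and returns the column values from the row with the highest day_id.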

  • How to optimize this query? Please help

    I have one table (argus) with 80,000 rows and another table (p0f) with 30,000 rows, and I have to join the two tables on a time field. The query is as follows:
    select distinct(start_time),res.port, res.dst_port from (select * from argus where argus.start_time between '2007-06-13 19:00:00' and '2007-06-22 20:00:00') res left outer join p0f on res.start_time=p0f.p0f_timestamp ;
    The query is taking a very long time. I have created indexes on start_time and p0f_timestamp, which improved the performance, but not by much. My date ranges vary every time I execute a new query.
    Please tell me: is there another way to write such a query so that it outputs the same results?
    Please help me, as my records are increasing day by day.
    Thanks
    Shaveta

    From my small testcase it seems that both queries are absolutely identical and don't actually take too much time:
    SQL> create table argus as (select created start_time, object_id port, object_id dst_port from all_objects union all
      2                         select created start_time, object_id port, object_id dst_port from all_objects)
      3  /
    Table created.
    SQL> create table p0f as select created p0f_timestamp, object_id p0f_port, object_id p0f_dst_port from all_objects
      2  /
    Table created.
    SQL> create index argus_idx on argus (start_time)
      2  /
    Index created.
    SQL> create index p0f_idx on p0f (p0f_timestamp)
      2  /
    Index created.
    SQL>
    SQL> begin
      2   dbms_stats.gather_table_stats(user,'argus',cascade=>true);
      3   dbms_stats.gather_table_stats(user,'p0f',cascade=>true);
      4  end;
      5  /
    PL/SQL procedure successfully completed.
    SQL>
    SQL> select count(*) from argus
      2  /
      COUNT(*)
         94880
    SQL> select count(*) from p0f
      2  /
      COUNT(*)
         47441
    SQL>
    SQL> set timing on
    SQL> set autotrace traceonly explain statistics
    SQL>
    SQL> select distinct (start_time), res.port, res.dst_port
      2             from (select *
      3                     from argus
      4                    where argus.start_time between to_date('2007-06-13 19:00:00','RRRR-MM-DD HH24:MI:SS')
      5                                               and to_date('2007-06-22 20:00:00','RRRR-MM-DD HH24:MI:SS')) res
      6                  left outer join
      7                  p0f on res.start_time = p0f.p0f_timestamp
      8                  ;
    246 rows selected.
    Elapsed: 00:00:02.51
    Execution Plan
    Plan hash value: 1442901002
    | Id  | Operation               | Name    | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT        |         | 21313 |   520K|       |   250   (6)| 00:00:04 |
    |   1 |  HASH UNIQUE            |         | 21313 |   520K|  1352K|   250   (6)| 00:00:04 |
    |*  2 |   FILTER                |         |       |       |       |            |          |
    |*  3 |    HASH JOIN RIGHT OUTER|         | 21313 |   520K|       |    91  (11)| 00:00:02 |
    |*  4 |     INDEX RANGE SCAN    | P0F_IDX |  3661 | 29288 |       |    11   (0)| 00:00:01 |
    |*  5 |     TABLE ACCESS FULL   | ARGUS   |  7325 |   121K|       |    79  (12)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter(TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
                  HH24:MI:SS')<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD HH24:MI:SS'))
       3 - access("ARGUS"."START_TIME"="P0F"."P0F_TIMESTAMP"(+))
       4 - access("P0F"."P0F_TIMESTAMP"(+)>=TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
                  HH24:MI:SS') AND "P0F"."P0F_TIMESTAMP"(+)<=TO_DATE('2007-06-22
                  20:00:00','RRRR-MM-DD HH24:MI:SS'))
       5 - filter("ARGUS"."START_TIME">=TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
                  HH24:MI:SS') AND "ARGUS"."START_TIME"<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD
                  HH24:MI:SS'))
    Statistics
              1  recursive calls
              0  db block gets
            304  consistent gets
              0  physical reads
              0  redo size
           7354  bytes sent via SQL*Net to client
            557  bytes received via SQL*Net from client
             18  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
            246  rows processed
    SQL>
    SQL> select distinct start_time, port, dst_port
      2             from argus left outer join p0f on start_time = p0f_timestamp
      3            where start_time between to_date ('2007-06-13 19:00:00','RRRR-MM-DD HH24:MI:SS')
      4                                       and to_date ('2007-06-22 20:00:00','RRRR-MM-DD HH24:MI:SS')
      5  /
    246 rows selected.
    Elapsed: 00:00:02.47
    Execution Plan
    Plan hash value: 1442901002
    | Id  | Operation               | Name    | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT        |         | 21313 |   520K|       |   250   (6)| 00:00:04 |
    |   1 |  HASH UNIQUE            |         | 21313 |   520K|  1352K|   250   (6)| 00:00:04 |
    |*  2 |   FILTER                |         |       |       |       |            |          |
    |*  3 |    HASH JOIN RIGHT OUTER|         | 21313 |   520K|       |    91  (11)| 00:00:02 |
    |*  4 |     INDEX RANGE SCAN    | P0F_IDX |  3661 | 29288 |       |    11   (0)| 00:00:01 |
    |*  5 |     TABLE ACCESS FULL   | ARGUS   |  7325 |   121K|       |    79  (12)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter(TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
                  HH24:MI:SS')<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD HH24:MI:SS'))
       3 - access("START_TIME"="P0F_TIMESTAMP"(+))
       4 - access("P0F_TIMESTAMP"(+)>=TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
                  HH24:MI:SS') AND "P0F_TIMESTAMP"(+)<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD
                  HH24:MI:SS'))
       5 - filter("ARGUS"."START_TIME">=TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
                  HH24:MI:SS') AND "ARGUS"."START_TIME"<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD
                  HH24:MI:SS'))
    Statistics
              1  recursive calls
              0  db block gets
            304  consistent gets
              0  physical reads
              0  redo size
           7354  bytes sent via SQL*Net to client
            557  bytes received via SQL*Net from client
             18  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             246  rows processed
    Can you show us a similar testcase with explain plan and statistics?

  • All SELECT statements work except SELECT *

    Hi,
    I have one table with 2 rows in Oracle 10.1.0.2.0
    All SELECT statements work (e.g. SELECT COUNT(*) FROM table, SELECT columnname FROM table, etc.).
    But when I execute SELECT * FROM table, it never shows any response in SQL*Plus. I can view the data with Enterprise Manager Console, using View/Edit Content.
    What can cause this?
    Thanks for reply.

    Hi,
    I don't get any error. I enter the SELECT * FROM statement, run it, and the cursor jumps to the next line and blinks forever...
    So I get no error response from the database.
    SELECT columnname FROM table works normally.
    As I wrote, the machine hosting the database is probably being used beyond its limits. I'm not a DBA, but the SGA is set up to use about 90% of memory (from other forums I learnt the recommendation for Solaris/Sun is about 40%).
    Unfortunately I have just 2 rows in my table. The table holds metadata that is needed to get information about other tables in the database.
    So I am wondering why the database works normally otherwise but does not respond to this statement. I can run queries on other tables in the database with millions of records, but I am not able to run this one.
    Can the table somehow be corrupted? It is strange...
    Regards,
    Petr

  • How to optimize this delete statement.

    Hi,
    Is it possible to optimize the following delete?
    DELETE
    FROM subraw
    WHERE (((PERIOD_START_TIME >= to_date('20130918120000', 'yyyymmddhh24miss'))
            AND (PERIOD_START_TIME < to_date('20130918140000', 'yyyymmddhh24miss'))
            AND ((rn_id IN (4000000813562))))
           OR ((PERIOD_START_TIME >= to_date('20130918150000', 'yyyymmddhh24miss'))
               AND (PERIOD_START_TIME < to_date('20130919120000', 'yyyymmddhh24miss')))
           OR ((PERIOD_START_TIME >= to_date('20130919220000', 'yyyymmddhh24miss'))
               AND (PERIOD_START_TIME < to_date('20130919230000', 'yyyymmddhh24miss'))))
      AND (subraw.rn_id = 0
           OR EXISTS
             (SELECT *
              FROM CO_OBJECTS
              WHERE CO_ID = 4
                AND CO_OBJECTS.CO_GI = subraw.rn_id));
    The subraw table has 987792728 (approx. 990 million) rows.
    The CO_OBJECTS table has 3010749 (approx. 3 million) rows.
    The subraw_PK index is on the (RN_ID, PERIOD_START_TIME) columns.
    The CO_OBJECTS_CO_GID index is on the (CO_ID, CO_GI) columns.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2274303221
    | Id  | Operation                   | Name                         | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | DELETE STATEMENT            |                              |  1606K|   870M|  1116  (93)| 00:00:01 |       |       |
    |   1 |  DELETE                     | subraw                       |       |       |            |          |       |       |
    |   2 |   CONCATENATION             |                              |       |       |            |          |       |       |
    |*  3 |    FILTER                   |                              |       |       |            |          |       |       |
    |   4 |     PARTITION RANGE SINGLE  |                              |   891 |   494K|     3  (34)| 00:00:01 |    31 |    31 |
    |*  5 |      INDEX RANGE SCAN       | subraw_PK                    |   891 |   494K|     3  (34)| 00:00:01 |    31 |    31 |
    |*  6 |     INDEX RANGE SCAN        | CO_OBJECTS_CO_GID            |   753 | 19578 |     1   (0)| 00:00:01 |       |       |
    |*  7 |    FILTER                   |                              |       |       |            |          |       |       |
    |   8 |     PARTITION RANGE SINGLE  |                              |  1713K|   927M|   368  (93)| 00:00:01 |    32 |    32 |
    |*  9 |      INDEX FULL SCAN        | subraw_PK                   |  1713K|   927M|   368  (93)| 00:00:01 |    32 |    32 |
    |* 10 |     INDEX RANGE SCAN        | CO_OBJECTS_CO_GID            |   753 | 19578 |     1   (0)| 00:00:01 |       |       |
    |* 11 |    FILTER                   |                              |       |       |            |          |       |       |
    |  12 |     PARTITION RANGE ITERATOR|                              |    25M|    13G|   744  (94)| 00:00:01 |    31 |    32 |
    |* 13 |      INDEX FULL SCAN        | subraw_PK                    |    25M|    13G|   744  (94)| 00:00:01 |    31 |    32 |
    |* 14 |     INDEX RANGE SCAN        | CO_OBJECTS_CO_GID            |   753 | 19578 |     1   (0)| 00:00:01 |       |       |
    BR,
    Jorge

    Hi,
    Please find below the info requested:
    SQL> select count(*) from subraw partition(PM_20130919);
    COUNT(*)
      33109835
    All partitions of the table above have approx. 30 million rows each.
    Predicate Information (identified by operation id):
       3 - filter("RN_ID"=0 OR  EXISTS (SELECT 0 FROM "UMA"."CO_OBJECTS" "CO_OBJECTS" WHERE
                  "CO_OBJECTS"."CO_GI"=:B1 AND "CO_ID"=4))
       5 - access("RN_ID"=4000000813562 AND "PERIOD_START_TIME">=TO_DATE(' 2013-09-18 12:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "PERIOD_START_TIME"<TO_DATE(' 2013-09-18 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
           filter("PERIOD_START_TIME">=TO_DATE(' 2013-09-18 12:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "PERIOD_START_TIME"<TO_DATE(' 2013-09-18 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       6 - access("CO_ID"=4 AND "CO_OBJECTS"."CO_GI"=:B1)
       7 - filter("RN_ID"=0 OR  EXISTS (SELECT 0 FROM "UMA"."CO_OBJECTS" "CO_OBJECTS" WHERE
                  "CO_OBJECTS"."CO_GI"=:B1 AND "CO_ID"=4))
       9 - access("PERIOD_START_TIME">=TO_DATE(' 2013-09-19 22:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "PERIOD_START_TIME"<TO_DATE(' 2013-09-19 23:00:00', 'syyyy-mm-dd hh24:mi:ss'))
           filter("PERIOD_START_TIME">=TO_DATE(' 2013-09-19 22:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "PERIOD_START_TIME"<TO_DATE(' 2013-09-19 23:00:00', 'syyyy-mm-dd hh24:mi:ss') AND (LNNVL("RN_ID"=4000000813562) OR
                  LNNVL("PERIOD_START_TIME">=TO_DATE(' 2013-09-18 12:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR
                  LNNVL("PERIOD_START_TIME"<TO_DATE(' 2013-09-18 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
      10 - access("CO_ID"=4 AND "CO_OBJECTS"."CO_GI"=:B1)
      11 - filter("RN_ID"=0 OR  EXISTS (SELECT 0 FROM "UMA"."CO_OBJECTS" "CO_OBJECTS" WHERE
                  "CO_OBJECTS"."CO_GI"=:B1 AND "CO_ID"=4))
      13 - access("PERIOD_START_TIME">=TO_DATE(' 2013-09-18 15:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "PERIOD_START_TIME"<TO_DATE(' 2013-09-19 12:00:00', 'syyyy-mm-dd hh24:mi:ss'))
           filter("PERIOD_START_TIME">=TO_DATE(' 2013-09-18 15:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "PERIOD_START_TIME"<TO_DATE(' 2013-09-19 12:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  (LNNVL("PERIOD_START_TIME">=TO_DATE(' 2013-09-19 22:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR
                  LNNVL("PERIOD_START_TIME"<TO_DATE(' 2013-09-19 23:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
                  (LNNVL("RN_ID"=4000000813562) OR LNNVL("PERIOD_START_TIME">=TO_DATE(' 2013-09-18 12:00:00', 'syyyy-mm-dd
                  hh24:mi:ss')) OR LNNVL("PERIOD_START_TIME"<TO_DATE(' 2013-09-18 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
      14 - access("CO_ID"=4 AND "CO_OBJECTS"."CO_GI"=:B1)
    Note
       - dynamic sampling used for this statement (level=4)
    BR,
    Jorge
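    One pattern worth considering at this scale is deleting in limited batches, so that each transaction keeps its undo small (a sketch, untested; the batch size is an illustrative assumption, and only one of the three time windows is shown: the full compound predicate from the original statement would replace this WHERE clause):
    BEGIN
      LOOP
        DELETE FROM subraw
         WHERE period_start_time >= TO_DATE('20130918150000', 'yyyymmddhh24miss')
           AND period_start_time <  TO_DATE('20130919120000', 'yyyymmddhh24miss')
           AND (subraw.rn_id = 0
                OR EXISTS (SELECT 1 FROM co_objects c
                            WHERE c.co_id = 4
                              AND c.co_gi = subraw.rn_id))
           AND ROWNUM <= 100000;   -- illustrative batch size
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;                    -- one commit per batch
      END LOOP;
    END;
    /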

  • How to tune this update statement?

    Hello,
    I have to solve the following task:
    Update every row in table A which has an appropriate row in table B, and log what you have done in a log table.
    It is possible that there is more than one fitting row in table A for a row in table B.
    My first approach was looping over table B and doing an update of table A for every entry in table B.
    This works and looks like this:
    Table A:
    PK number (This is the primary key of this table)
    KEY number
    Table B:
    KEY_OLD number
    KEY_NEW number
    Log table:
    PK number
    KEY_OLD number
    KEY_NEW number
    declare
      TYPE PK_TAB_TYPE IS TABLE OF number INDEX BY BINARY_INTEGER;
      v_tab_PK       PK_TAB_TYPE;
      v_empty_tab_PK PK_TAB_TYPE;
    begin
      for v_rec in (select * from table_B) loop
        v_tab_PK := v_empty_tab_PK;  /* clearing the array */
        update table_A
        set    KEY = v_rec.KEY_NEW
        where (KEY = v_rec.KEY_OLD)
        returning PK bulk collect into v_tab_PK;
        if (v_tab_PK.count > 0) then
          for i in v_tab_PK.first..v_tab_PK.last loop
            insert into TMP_TAB_LOG(PK, KEY_OLD, KEY_NEW)
              values (v_tab_PK(i), v_rec.KEY_OLD, v_rec.KEY_NEW);
          end loop;
        end if;
      end loop;
    end;
    Because table B can have up to 500,000 entries (and table A has even more), this solution will execute a great many update statements.
    So I am looking for a solution which has better performance.
    My second approach uses a correlated update and looks like this:
    declare
      TYPE PK_TAB_TYPE IS TABLE OF number INDEX BY BINARY_INTEGER;
      v_tab_PK            PK_TAB_TYPE;
      v_empty_tab_PK PK_TAB_TYPE;
      v_tab_NewKey    PK_TAB_TYPE;
    begin
      v_tab_PK         := v_empty_tab_PK;  /* clear the arrays */
      v_tab_NewKey := v_empty_tab_PK;
      update table_A a
      set    KEY = (select KEY_NEW from table_B where (KEY_OLD = a.KEY))
      where exists (select 'x' as OK
                      from  table_B
                      where (KEY_OLD = a.KEY))
      returning PK, KEY bulk collect into v_tab_PK, v_tab_NewKey;
      if (v_tab_PK.count > 0) then
        for i in v_tab_PK.first..v_tab_PK.last loop
          insert into TMP_TAB_LOG_DUB(PK, KEY_OLD, KEY_NEW)
            values (v_tab_PK(i), null, v_tab_NewKey(i));
        end loop;
      end if;
    end;
    Now I have only one update statement.
    The only thing missing in this second approach is the old KEY (the value before the update) in the log table.
    But I have no idea how to get that old value.
    Is there a way to modify this second approach so that I can get the old value of KEY before the update and write it to the log table?
    And now I need your help:
    What is the best way to get a performant solution for my task?
    Any help is appreciated.
    Regards Hartmut

    Below is a script you can run in a separate testing schema to do the update with logging. I have created the tables (A and B) with primary key constraints defined:
    create table table_a(pk number primary key
    , key number);
    create table table_b(key_old number primary key
    , key_new number);
    create table TMP_TAB_LOG_DUB(pk number primary key
    , key_old number
    , key_new number);
    ---------insert test data
    insert into table_a values(1,2);
    insert into table_a values(2,2);
    insert into table_a values(3,2);
    insert into table_a values(11,1);
    insert into table_a values(12,1);
    insert into table_a values(13,1);
    insert into table_a values(21,4);
    insert into table_a values(22,4);
    insert into table_a values(23,4);
    commit;
    insert into table_b values(1,3);
    insert into table_b values(4,2);
    commit;
    ----- insert to log
    insert into TMP_TAB_LOG_DUB(PK, KEY_OLD, KEY_NEW)
    select a.pk
    , a.key as key_old
    , b.key_new as key_new
    from table_a a
    join table_b b on a.key = b.key_old;
    ----- update table_a
    update(select a.pk
    , a.key as key_old
    , b.key_new as key_new
    from table_a a
    join table_b b on a.key = b.key_old)
    set key_old = key_new;
    commit;

  • How to tune this SQL statement

    hi my friend:
    SELECT ie.ImportEntityAutoID,
           ie.ImportRawDataAutoID,
           ie.Amount,
           s.NAME AS StatusName,
           ir.SourceLocation AS SourceLocation,
           ie.ErrorMessage AS ErrorMessage,
           famis.PROJECT_CODE,
           famis.PROJECT_TITLE
    FROM   dbo.ImportEntity ie
           INNER JOIN tblBatch B ON ie.BatchGuid = B.BatchGuid
           INNER JOIN dbo.ImportRawData ir ON ie.ImportRawDataAutoID = ir.ImportRawDataAutoID
           INNER JOIN dbo.ImportRawData_Famis famis ON ie.ImportRawDataAutoID = famis.ImportRawDataAutoID
           LEFT OUTER JOIN dbo.Status s ON ( ie.StatusID = s.StatusID
                                             AND s.StatusTypeID = 35 ) -- Import Entity Status
    WHERE  ie.EntityType <> 'Unknown'
           AND ie.[StatusID] IN ( 3, 5, 6 ) -- 3: Pending; 5: Failed; 6: Cancel
           AND ( 0 = 0 OR ie.StatusID = 0 )
           AND ie.Amount >= -99999999999999.9999
           AND ie.Amount <= 99999999999999.9999;
    The ImportEntity table has 5 million records; more than 3 million of them have EntityType = 'Unknown', and about 2 million have StatusID in (3, 5, 6). I have the following indexes:
    table ImportEntity:
       clustered index: ImportRawDataAutoID
       non-clustered index IX_batchguid ON BatchGuid
       non-clustered index IX_ImportRawDataAutoID ON ImportRawDataAutoID
       non-clustered index IX_StatusID ON StatusID
       non-clustered index IX_EntityType ON EntityType
    tblBatch has only two records.
    ImportRawData_Famis has 5 million records, and ImportRawDataAutoID is its clustered index column.
    ImportRawData has 5 million records, and ImportRawDataAutoID is its clustered index column.
    The query takes 180 seconds. How can I tune this statement?

    hi Visakh16,
    I did just as you said, but I don't understand the execution plan. Can you give me some advice about procedure tuning: how to read the execution plan, clear the execution plan cache, and so on?
    Thank you very much!
    ming
