TUNING TIP

Product : ORACLE SERVER
Date written : 2002-05-31
TUNING TIP
===========
PURPOSE
A tip for getting the optimizer to use an INDEX RANGE SCAN
Explanation
The ways to avoid an index full scan or fast full index scan (FFIS) and get an
index range scan instead are as follows.
First, the reason the optimizer chose an index full scan or FFIS over an index
range scan is that the cost of the branch level (BLEVEL), or the cost of the
clustering factor (CLUFAC), is greater than the cost of reading the entire
leaf block count (LEAFCNT) of the index.
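To see the figures the optimizer is comparing, you can query the index statistics
directly (a small sketch; MY_INDEX is a placeholder name, and these columns are only
populated after the index has been analyzed):
SELECT index_name, blevel, leaf_blocks, clustering_factor
  FROM user_indexes
 WHERE index_name = 'MY_INDEX';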
For a fundamental fix, increase db_block_size to reduce the height of the index,
or reorganize the index. To discourage the FFIS or index full scan only temporarily,
go the other way and raise their cost by adjusting the parameters that determine
the index cost:
-fast_full_scan_enabled = false
-db_file_multiblock_read_count
-sort_area_size
Set the latter two to smaller values.
Example
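A minimal sketch of the temporary workaround above, assuming a release in which these
parameters are session-modifiable (fast_full_scan_enabled is an instance parameter in
8.0 and becomes the hidden parameter _fast_full_scan_enabled in later releases, so
check your version first):
ALTER SESSION SET db_file_multiblock_read_count = 8;  -- smaller value raises the FFIS / full scan cost
ALTER SESSION SET sort_area_size = 65536;             -- smaller sort area
-- fast_full_scan_enabled = false is set in the init.ora / spfile at instance level.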
Reference Document
------------------

>>
"Try to avoid large data retrievals, since if the data being retrieved from a table is more than 10%-20% of the total data, Oracle will most likely do the FST."
>>
There are some myths floating around.
Remember, an FTS is not always bad, nor is an index scan always good.
It is not the percentage of data that matters; it depends on the number of blocks the query has to touch to fetch the data.
FTS is a Full Table Scan - a multiblock read - a physical read.
If the optimizer thinks a multiblock read will achieve the result faster, it produces an FTS plan to retrieve the data.
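One quick way to see how many blocks a query actually touches is SQL*Plus autotrace
(a sketch; it needs the PLUSTRACE role and a plan table, and MY_TABLE is a placeholder):
SET AUTOTRACE TRACEONLY STATISTICS
SELECT COUNT(*) FROM my_table WHERE owner = 'SCOTT';   -- the statement being examined
-- Compare "consistent gets" and "physical reads" between the FTS and the index plan.
SET AUTOTRACE OFF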
Jaffar

Similar Messages

  • Tuning tips for sql

    Can someone tell me some tuning tips for SQL statements, please?
    Thanks
    Ajwat

    Yes, get EP (explain plan) going and try adding a /*+ RULE */ hint to your SIUD commands (Select/Insert/Update/Delete). This changes the optimizer mode from CHOOSE to RULE; I find RULE uses indexes more often than CHOOSE (see example below).
    select /*+ RULE */ c1,c2,c3 from t1 where n1=123
    --[EP 1 results]
    SELECT STATEMENT Optimizer=HINT: RULE
    TABLE ACCESS (BY INDEX ROWID) OF T1
    INDEX (RANGE SCAN) OF I_NU_T1_N1 (NON-UNIQUE)
    select c1,c2,c3 from t1 where n1=123
    --[EP 2 results]
    SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=12)
    TABLE ACCESS (FULL) OF T1 (Cost=1 Card=1 Bytes=12)
    Look for any FULL TABLE SCAN entries in your EP results and try to get rid of them. As the above example shows, switching to RULE uses the index on table T1.
    There is a whole pile of other HINTS listed as well (other than just RULE) which are at ...
    http://download-west.oracle.com/otndoc/oracle9i/901_doc/server.901/a87503/hintsref.htm#4894
    There is a whole section on Oracle Performance at the following address...
    http://download-west.oracle.com/otndoc/oracle9i/901_doc/server.901/a87503/toc.htm
    the Hints section is Chapter 5
    There are many more tuning tips, but by far, getting your SQL to use indexes is the primary one, and you'll have to get EP results to see which indexes are being used (the FREE Toad program shows EP results nicely).
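    If you are not using Toad, a plan can also be pulled straight from SQL*Plus (a
    minimal sketch; it assumes a PLAN_TABLE exists, e.g. created by utlxplan.sql, and
    DBMS_XPLAN needs 9iR2; on 9.0.1 run @?/rdbms/admin/utlxpls instead of the last query):
    EXPLAIN PLAN FOR
      SELECT c1, c2, c3 FROM t1 WHERE n1 = 123;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);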
    Hope this helps,
    Tyler

  • Need SQL tuning tips in oracle 10g.

    From time to time I come across some slow running queries used in SQR. I want to know the tuning techniques in Oracle 10g, as I believe things are different now because of the CBO.
    As I am not an Oracle person, I can only try to tune the queries if there are some guidelines. Also, while optimizing a query, what are the important things that need to be considered?
    Do you want the first rows back quickly (typically during online processing) or is the total time for the query (typically a batch process) more important?
    Are the tables properly indexed to take advantage of the various operations available?
    How large are the tables? Joining smaller tables first is usually more efficient.
    How selective are the indexes? Indexes on fields that have only a few values don't really help.
    How is sorting done? Are sorting and grouping operations necessary?
    Any help is greatly appreciated.

    user5846372 wrote:
    As I am not an oracle man to tune the query I can try if there are any guidelines. Also while optimizing a query what are the important things need to be considered:
    Some things to consider about tuning:
    Re: Explain  "Explain Plan"...
    >
    Do you want the first rows back quickly (typically during online processing) or is the total time for the query (typically a batch process) more important?
    Are the tables properly indexed to take advantage of the various operations available? - These are important considerations.
    How large are the tables? Joining smaller tables first is usually more efficient. - The optimizer usually makes this decision.
    How selective are the indexes? Indexes on fields that have only a few values don't really help. - But they can still be useful if the data can be read from the index instead of the table.
    How is sorting done? Are sorting and grouping operations necessary? - This is a business requirement; if you need to sort, you need to sort.
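    On the first point (first rows back quickly versus total query time), the optimizer
    goal can be steered per statement with the standard hints; a small sketch with
    made-up table and bind names:
    -- favour returning the first rows quickly (online screens)
    SELECT /*+ FIRST_ROWS(10) */ * FROM orders WHERE customer_id = :cust;
    -- favour the lowest total cost (batch processing)
    SELECT /*+ ALL_ROWS */ * FROM orders WHERE customer_id = :cust;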

  • Performance Tuning Tips

    Dear All,
    In our project we are facing a lot of problems with performance; users are complaining about the poor performance of a few reports, and we are in the process of fine tuning the reports by following all the methods/suggestions provided by SAP (like removing select queries from loops, FOR ALL ENTRIES, binary search, etc.).
    But I still want to know from you people what we can check from the BASIS perspective (all the settings) and also from the ABAP perspective to improve performance.
    I also have one more question: what are "table statistics", and what are they used for?
    Please give your valuable suggestions for improving the performance.
    Thanks in advance!

    Hi
    <b>Ways of Performance Tuning</b>
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    •     Select Over more than one Internal table
    <b>Selection Criteria</b>
    1.     Restrict the data to the selection criteria itself, rather than filtering it out using the ABAP code using CHECK statement. 
    2.     Select with selection list.
    <b>Points # 1/2</b>
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    <b>Select Statements   Select Queries</b>
    1.     Avoid nested selects
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    4.     For testing existence , use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    <b>Point # 1</b>
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    <b>Point # 2</b>
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    <b>Point # 3</b>
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    <b>Point # 4</b>
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    <b>Point # 5</b>
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    <b>Select Statements           contd..  SQL Interface</b>
    1.     Use column updates instead of single-row updates
    to update your database tables.
    2.     For all frequently used Select statements, try to use an index.
    3.     Using buffered tables improves the performance considerably.
    <b>Point # 1</b>
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    <b>Point # 2</b>
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    <b>Point # 3</b>
    Bypassing the buffer increases network traffic considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    <b>Select Statements       contd…           Aggregate Functions</b>
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                  Check zflight-fligh > maxno.
                  Maxno = zflight-fligh.
                Endselect.
    The above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    <b>Select Statements    contd…For All Entries</b>
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    <u>Points that must be considered for FOR ALL ENTRIES</u>
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
           Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
    Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    <b>Select Statements    contd…  Select Over more than one Internal table</b>
    1.     It's better to use a view instead of nested Select statements.
    2.     To read data from several logically connected tables, use a join instead of nested Select statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    <b>Point # 1</b>
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    <b>Point # 2</b>
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    <b>Point # 3</b>
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    <b>Internal Tables</b>
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    <b>Point # 2</b>
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    <b>Point # 3</b>
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    <b>Point # 5</b>
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    <b>Point # 6</b>
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    <b>Point # 7</b>
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    <b>Point # 8</b>
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    <b>Point # 9</b>
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    <b>Point # 10</b>
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    <b>Point # 11</b>
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables using "ITAB2[] = ITAB1[]" is much faster than "LOOP-APPEND-ENDLOOP".
    13.   Specify the sort key as restrictively as possible to run the program faster.
    <b>Point # 12</b>
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    <b>Point # 13</b>
    "SORT ITAB BY K." makes the program run faster as compared to "SORT ITAB."
    <b>Internal Tables         contd…
    Hashed and Sorted tables</b>
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    <b>Point # 1</b>
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    <b>Point # 2</b>
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    <b>Reward if useful</b>

  • SQL-Tuning Tips

    Hi there,
    I need some general advice (I can't supply you with an execution plan) on tuning this statement, running in 9i CBO mode:
    SELECT
          TAB_C1.CLIENT_NR, TAB_C1.DD_NR, last_day(trunc(TAB_C1.BELEGDAT)),
          0, TAB_C2.VALUE_DIM ,
          nvl(SUM(DECODE(TAB_A.FOLGBU || TAB_A.ZUABNEUT,'0A',0,'1A',0,'0Z',1,'1Z',0) * TAB_C2.VALUE),0) ,
          nvl(SUM(DECODE(TAB_A.FOLGBU || TAB_A.ZUABNEUT,'0A',1,'1A',0,'0Z',0,'1Z',0) * TAB_C2.VALUE),0) ,
          nvl(SUM(DECODE(TAB_A.FOLGBU || TAB_A.ZUABNEUT,'0A',0,'1A',0,'0Z',0,'1Z',1) * TAB_C2.VALUE),0) ,
          nvl(SUM(DECODE(TAB_A.FOLGBU || TAB_A.ZUABNEUT,'0A',0,'1A',1,'0Z',0,'1Z',0) * TAB_C2.VALUE),0) ,
          0 ,max(TAB_C1.ROWSEQ * 1000) + TAB_C2.BLENDTYPE * 100 + TAB_C2.VALUE_DIM
      FROM TAB_C1,
           TAB_A,
           TAB_C2,
           TAB_C3,
           TAB_B
    WHERE TAB_C1.ACC_TYPE = 'N'
           AND TAB_C1.CANCEL = 0
           AND TAB_A.CLIENT_NR = TAB_C1.CLIENT_NR
           AND TAB_A.ACC_MODE = TAB_C1.ACC_MODE
           AND TAB_C3.CLIENT_NR = TAB_C1.CLIENT_NR
           AND TAB_C3.ACC_TYPE = TAB_C1.ACC_TYPE
           AND TAB_C3.ACC_NR = TAB_C1.ACC_NR
           AND TAB_C3.POS = TAB_C1.POS
           AND TAB_C3.ACC_DAT = TAB_C1.ACC_DAT
           AND TAB_C2.CLIENT_NR = TAB_C1.CLIENT_NR
           AND TAB_C2.ACC_TYPE = TAB_C1.ACC_TYPE
           AND TAB_C2.ACC_NR = TAB_C1.ACC_NR
           AND TAB_C2.POS = TAB_C1.POS
           AND TAB_C2.ACC_DAT = TAB_C1.ACC_DAT
           AND TAB_B.CLIENT_NR = TAB_C2.CLIENT_NR
           AND TAB_B.AMOUNT_KEY = TAB_C2.VALUE_DIM
           AND TAB_B.AMOUNT_TYPE = 0
    GROUP BY TAB_C1.CLIENT_NR,
              releasetype,
              releaseid,
              releasegrp,
              releaseseq,
              last_day(trunc(TAB_C1.BELEGDAT)),
              blendtype,
              VALUE_DIM;
    Table      Rows       PK
    TAB_A      300        CLIENT_NR, ACC_MODE
    TAB_B      600        CLIENT_NR, AMOUNT_KEY
    TAB_C1     200,000    CLIENT_NR, ACC_TYPE, ACC_NR, POS, ACC_DAT
    TAB_C2     1,000,000  CLIENT_NR, ACC_TYPE, ACC_NR, POS, ACC_DAT, COL_BP, VALUE_DIM, COL_BT
    TAB_C3     350,000    CLIENT_NR, ACC_TYPE, ACC_NR, POS, ACC_DAT, COL_RI, COL_RT, COL_RG, COL_RS, COL_RL
    In my opinion I should try to avoid the function calls (nvl(sum...)), but I don't have an idea how to realize this in the best way. Maybe you can give me some advice on solving this or point me to some other options to improve performance. The view is used to fill a table.
    Thank you in advance
    Kind regards
    Matthias

    I would be really surprised if changing anything in the select list would improve performance much. The Oracle built-in functions are for the most part about as fast as selecting the column without using a function.
    I would concentrate my efforts on the from and where parts, which is the part that causes all of the I/O, which is generally the most expensive and time consuming part of the query. Without looking too closely at your query, and without knowing anything about your data, I notice that tab_b and tab_c3 are not used in your select list, and only tab_b has a selective predicate against it. Are you sure you need these two tables?
    Can you add more selective predicates against some or all of the tables without changing the results of the query? For example, does the tab_b.amount_type = 0 imply anything about perhaps a date range or account range in the other tables.
    You say that the restrictions:
    TAB_C1.CANCEL = 0
    TAB_B.AMOUNT_TYPE = 0
    have very low selectivity, so that indexing one of these columns does not seem very helpful. But you also have a predicate on TAB_C1.ACC_TYPE = 'N' AND TAB_C1.CANCEL = 0. Is that combination more selective?
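    A quick way to check that combined selectivity (just a sketch; the table and column
    names are taken from your query):
    SELECT COUNT(*) AS total_rows,
           SUM(CASE WHEN acc_type = 'N' AND cancel = 0 THEN 1 ELSE 0 END) AS matching_rows
      FROM tab_c1;
    -- If matching_rows is a small fraction of total_rows, a composite index on
    -- (ACC_TYPE, CANCEL), or a wider index led by those columns, may be worth testing.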
    The list goes on, but as others have said, without more information we are only guessing.
    John

  • Pl/sql Package Tuning Tips

    Hi Friends,
    I have a requirement of tuning a PL/SQL package.
    Kindly share some tips on tuning the package.
    Thanks & Regards
    Dilip Nomula

    Ramya,
    Profiler is a PL/SQL API that is very effective for finding out where the time is spent in your PL/SQL program, and on which line. This would otherwise be very tough to do, as PL/SQL programs are long and it's hard to find the time spent on each line. Using the profiler, a line-by-line breakdown can be shown to you. Here is a link on how to use it,
    http://www.oracle-base.com/articles/9i/DBMS_PROFILER.php
    In addition, you can also combine it with dbms_trace. Combining this with the Profiler, you can get complete information about the PL/SQL package.
    http://www.oracle-base.com/articles/9i/DBMS_TRACE.php
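    A minimal sketch of one profiling run from SQL*Plus (it assumes the profiler tables
    have been created with proftab.sql as the article above describes, and my_pkg.my_proc
    stands in for the package code being tuned):
    EXEC DBMS_PROFILER.START_PROFILER('package tuning run')
    EXEC my_pkg.my_proc
    EXEC DBMS_PROFILER.STOP_PROFILER
    SELECT u.unit_name, d.line#, d.total_occur, d.total_time
      FROM plsql_profiler_units u, plsql_profiler_data d
     WHERE u.runid = d.runid
       AND u.unit_number = d.unit_number
     ORDER BY d.total_time DESC;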
    HTH
    Aman....

  • Tuning tips

    Hi
    I am trying to retrieve data from multiple tables. The query is something like this:
    select a.col, b.col, c.col, d.col, e.col
    from table1 a, table2 b, table3 c, table4 d, table5 e
    where a.col=b.col and a.col=c.col and d.col=e.col
    This query is not efficient. Any tips on how to tune this query?
    Thanks.

    Use explain plan/tkprof to find out, if there are any problem areas and where they are.
    One thing:
    where a.col=b.col and a.col=c.col and d.col=e.col
    I don't see any connections from a, b, c to d, e ... is that on purpose?
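    If the two groups really are related, adding the missing join predicate (hypothetical
    column names, adjust to the real relationship) avoids the Cartesian product between
    (a, b, c) and (d, e):
    select a.col, b.col, c.col, d.col, e.col
    from table1 a, table2 b, table3 c, table4 d, table5 e
    where a.col = b.col
    and a.col = c.col
    and d.col = e.col
    and a.col = d.col;   -- assumed linking condition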
    C.
    Message was edited by:
    cd

  • Help in tuning the Query

    Please see this attached query. This is hitting a table which has around 110 million records, and it takes around 60 minutes to execute. Is there any way to tune it better? I am assuming that, because this uses the 'WITH tmp AS' clauses, it might be using the indexes.
    Can anyone suggest some tuning tips?
    Satish
    WITH all_edbc_pending_edg
         AS (SELECT
                    a.day_sk,
                    a.case_num,
                    a.edg_num,
                    a.edg_trace_id,
                    a.application_dt, 
                    a.current_elig_ind,
                    a.payment_begin_dt payment_beg_dt
             FROM   fct_eligibility_2 a,
                    dim_edg_activity_type b
             WHERE  a.current_elig_ind IN ('P','S','A')
    and mod(case_num,2)=0
                    AND b.eff_end_dt IS NULL
                    AND b.activity_type_cd IN ('IN','RE','IR','PR','OG')
                    AND a.activity_type_sk=b.activity_type_sk)
    , Pick_Latest_Prior_Med as
    (SELECT  t3.case_num,
            t3.edg_num,
            t3.current_elig_ind,
            t3.application_dt,
            t3.day_sk,
            t3.payment_beg_dt,
            t3.edg_trace_id
    FROM   (SELECT t2.*,
                   Row_number()
                     OVER(PARTITION BY t2.case_num,t2.edg_num, t2.application_dt ORDER BY t2.day_sk DESC, t2.payment_beg_dt DESC, t2.edg_trace_id DESC) AS rn
            FROM   all_edbc_pending_edg t2) t3
    WHERE  t3.rn = 1)
    Select
           case_num,
           edg_num,
           application_dt,
           payment_beg_dt,
           edg_trace_id,
           current_elig_ind
    FROM
            (SELECT  t1.case_num,
                   t1.edg_num, 
                   t1.application_dt, 
                   t1.payment_beg_dt,
                   t1.edg_trace_id,
                   t1.current_elig_ind
            FROM   (SELECT  t.*,
                           Row_number()
                             OVER(PARTITION BY t.case_num,t.edg_num ORDER BY t.day_sk DESC, t.payment_beg_dt DESC, t.edg_trace_id DESC) AS rn
                    FROM   Pick_Latest_Prior_Med  t) t1
            WHERE  t1.rn = 1
            ) p
    WHERE
           current_elig_ind IN ('P','S')

    Check this link and post what is required as mentioned in the link.
    SQL and PL/SQL FAQ
    >
    I am assuming that because this uses the ' With tmp as' clauses, it might be using the Indexes.
    >
    You don't need to assume. You can find out what's exactly happening if you follow the steps described in the above mentioned link.
    Regards
    Raj

  • Tuning 12c Cloud Control (Grid).

    Hello All.
    I've installed the latest version of 12c Cloud Control, and since I was previously using 11gR2 Grid, I can tell you it's amazingly improved!
    It still has the system moving window, but I like the way things are laid out and categorized now.
    I've got it running on patchset 11.2.0.3, and the database is humming right along after adding larger redo logs and pumping up the SGA/PGA, etc.
    But when I start the OMS, it literally sucks up all the RAM on the server.
    It's ridiculous.
    Is there a way to actually tune the OMS itself?
    Thanks,
    Xevv.

    Hi,
    Would recommend to have the below patches in place for your setup
    Applying Enterprise Manager 12c Recommended Patches (Doc ID 1664074.1)
    Please note : the doc also covers patch recommendation for repos DB which should also help
    Also you can refer to below doc which covers some important tuning tips
    12c EM: Steps to Tune the Cloud Control 12c Agent Performance When Monitoring a Large Number of Targets (Doc ID 1349887.1)
    Regards,
    Rahul

  • Tuning: Memory Abusers

    Hello,
    I'm working through the book Oracle Database 10g Performance Tuning Tips & Techniques by R. Niemiec. In chapter 15 there is the topic - Top 10 "Memory Abusers" as a Percent of All Statements - which I do not understand correctly.
    Maybe someone has this book and can help me with this?
    I executed the SQL statement given in this book and it gives me about 66 percent for the top ten statements. The example has 44 percent.
    I don't understand the ranking given for the example; for the example the result is 60 points. Why?
    I don't know if I'm allowed to paste the rate-part here.
    Greets,
    Hannibal

    I customized the query on 10g a little;
    SELECT *
      FROM (SELECT rank() over(ORDER BY buffer_gets DESC) AS rank_bufgets,
                   (100 * ratio_to_report(buffer_gets) over()) pct_bufgets,
                   sql_text,
                   executions,
                   disk_reads,
                   cpu_time,
                   elapsed_time
              FROM v$sqlarea)
    WHERE rank_bufgets < 11;
    In my opinion the above query helps to find and prioritize the problematic queries that have run since the last database startup, so the magic number and the conditions are not so important :)
    If you are on 10g and have the extra-cost option, an AWR report of a problematic period (or, before 10g, a STATSPACK report) will assist you for this purpose. Also, after 10gR2, v$sqlstats is very useful; search and check them according to your version here: http://tahiti.oracle.com
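    For example, a similar top 10 by buffer gets against v$sqlstats (a sketch; 10gR2 or later):
    SELECT *
      FROM (SELECT sql_id, buffer_gets, executions, cpu_time, elapsed_time
              FROM v$sqlstats
             ORDER BY buffer_gets DESC)
     WHERE ROWNUM <= 10;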

  • Performance Tuning for OBIEE Reports

    Hi Experts,
    I had a requirement for which I ended up building a snowflake model in the Physical layer, i.e. one dimension table with three snowflake tables (materialized views).
    The key point is that the dimension table is used in most of the OOTB reports,
    so all the reports use the other three snowflake tables in the join conditions, due to which the reports take longer than ever, like 10 minutes.
    Can anyone suggest good performance tuning tips to tune the reports?
    I created some indexes on the materialized view columns and on the dimension table columns.
    I created the materialized views with cache enabled and refreshing only once in 24 hours, etc.
    Is there anything else I can do to improve performance, or do I have to consider re-designing the Physical layer without the snowflake?
    Please Provide valuable suggestions and comments
    Thank You
    Kumar

    Kumar,
    Most of the performance tuning should be done at the back end, so calculate all the aggregates in the Repository itself and create a fast refresh for the MVs. You can also do one more thing: schedule an iBot to run the report every hour or so, so that the report data is cached and, when the user runs the report, the BI Server extracts the data from the cache.
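    As a rough sketch of the fast-refresh idea (assuming an Oracle back end; table and
    column names are made up, and fast refresh of an aggregate MV also needs a
    materialized view log covering the referenced columns):
    CREATE MATERIALIZED VIEW LOG ON sales
      WITH SEQUENCE, ROWID (prod_id, amount) INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW mv_sales_agg
      REFRESH FAST ON COMMIT   -- or ON DEMAND if a scheduled refresh is preferred
      AS SELECT prod_id, SUM(amount) AS total_amount, COUNT(amount) AS cnt_amount, COUNT(*) AS cnt_all
           FROM sales
          GROUP BY prod_id;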
    Hope that helps
    ~Srix

  • What Book for the new 1Z0-117 '11gR2 SQL Tuning' ? No OPC available yet.

    Since 1Z0-117 '11gR2 SQL Tuning' is a new exam still in beta, apart from Mathew Morris's summary, is there a comprehensive exam reference guide like the OPC Oracle Experts exam guides?
    Is the "Oracle Press Database 11g Release 2 Performance Tuning Tips & Techniques" sufficient?
    Or can I just look at the covered topics and study them from it and other resources?
    Thank you

    Hmm, understood. I will have to map the source topics to the exam topics rather than just being handed "served food" like an ordinary exam; that's fair enough.
    It's just that the concepts involved are many, and many are complex, so specifically here I would need a specially fitted OPC book like the one for SQL Expert.
    Luckily I'd ordered yours at least.
    As for SQL Expert, I found the topics quite manageable and I advance quickly, about 1-2 chapters plus a summed revision per day, and without questions or errors.
    I guess because of
    1. the OPC book
    2. my current internship being in SQL analysis
    3. my thesis, which was creating an essential DBMS in C++ with an embedded SQL parser and a PHP ODBC driver from scratch.
    So I may try to start having a look at SQL Tuning as well, though I don't think I could really prepare for both in parallel while also doing the internship.
    Thank you very much once again.
    Thank you very much once again.

  • Can anyone plz tell me the steps for performance tuning.

    hello friends
    What is performance tuning?
    Can anyone please tell me the steps for performance tuning?

    Hi Kishore, this will help you.
    Following are the different tools provided by SAP for performance analysis of an ABAP object
    Run time analysis transaction SE30
    This transaction gives all the analysis of an ABAP program with respect to the database and the non-database processing.
    SQL Trace transaction ST05
    The trace list has many lines that are not related to the SELECT statement in the ABAP program. This is because the execution of any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter introducing the trace list.
    The trace list contains different SQL statements simultaneously related to the one SELECT statement in the ABAP program. This is because the R/3 Database Interface - a sophisticated component of the R/3 Application Server - maps every Open SQL statement to one or a series of physical database calls and brings it to execution. This mapping, crucial to R/3s performance, depends on the particular call and database system. For example, the SELECT-ENDSELECT loop on the SPFLI table in our test program is mapped to a sequence PREPARE-OPEN-FETCH of physical calls in an Oracle environment.
    The WHERE clause in the trace list's SQL statement is different from the WHERE clause in the ABAP statement. This is because in an R/3 system, a client is a self-contained unit with separate master records and its own set of table data (in commercial, organizational, and technical terms). With ABAP, every Open SQL statement automatically executes within the correct client environment. For this reason, a condition with the actual client code is added to every WHERE clause if a client field is a component of the searched table.
    To see a statement's execution plan, just position the cursor on the PREPARE statement and choose Explain SQL. A detailed explanation of the execution plan depends on the database system in use.
    Need for performance tuning
    In this world of SAP programming, ABAP is the universal language. In most of the projects, the focus is on getting a team of ABAP programmers as soon as possible, handing over the technical specifications to them and asking them to churn out the ABAP programs within the “given deadlines”.
    Often due to this pressure of schedules and deliveries, the main focus of making an efficient program takes a back seat. An efficient ABAP program is one which delivers the required output to the user in a finite time as per the complexity of the program, rather than hearing the comment "I put the program to run, have my lunch and come back to check the results".
    Leaving aside the hyperbole, a performance optimized ABAP program saves the time of the end user, thus increasing the productivity of the user, and in turn keeping the user and the management happy.
    This tutorial focuses on presenting various performance tuning tips and tricks to make the ABAP programs efficient in doing their work. This tutorial also assumes that the reader is well versed in all the concepts and syntax of ABAP programming.
    Use of selection criteria
    Instead of selecting all the data and doing the processing during the selection, it is advisable to restrict the data to the selection criteria itself, rather than filtering it out using the ABAP code.
    Not recommended
    Select * from zflight.
    Check : zflight-airln = 'LF' and zflight-fligh = 'BW222'.
    Endselect.
    Recommended
    Select * from zflight where airln = 'LF' and fligh = '222'.
    Endselect.
    One more point to be noted here is the use of SELECT *. Often this is a lazy coding practice. When a programmer uses SELECT * even if only one or two fields are needed, this can significantly slow the program and put unnecessary load on the entire system: the application server sends this request to the database server, and the database server has to pass the entire structure for each row back to the application server. This consumes both CPU and networking resources, especially for large structures.
    Thus it is advisable to select only those fields that are needed, so that the database server passes only a small amount of data back.
    Also it is advisable to avoid selecting the data fields into local variables as this also puts unnecessary load on the server. Instead attempt must be made to select the fields into an internal table.
    Use of aggregate functions
    Use the already provided aggregate functions, instead of finding out the minimum/maximum values using ABAP code.
    Not recommended
    Maxnu = 0.
    Select * from zflight where airln = 'LF' and cntry = 'IN'.
    Check zflight-fligh > maxnu.
    Maxnu = zflight-fligh.
    Endselect.
    Recommended
    Select max( fligh ) from zflight into maxnu where airln = 'LF' and cntry = 'IN'.
    The other aggregate functions that can be used are min (to find the minimum value), avg (to find the average of a Data interval), sum (to add up a data interval) and count (counting the lines in a data selection).
    Use of Views instead of base tables
    Many times ABAP programmers deal with base tables and nested selects. Instead it is always advisable to see whether there is any view provided by SAP on those base tables, so that the data can be filtered out directly, rather than specially coding for it.
    Not recommended
    Select * from zcntry where cntry like 'IN%'.
    Select single * from zflight where cntry = zcntry-cntry and airln = 'LF'.
    Endselect.
    Recommended
    Select * from zcnfl where cntry like 'IN%' and airln = 'LF'.
    Endselect.
    Check this links
    http://www.sapdevelopment.co.uk/perform/performhome.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    kindly reward if found helpful.
    cheers,
    Hema.

  • IFS performance tuning

    I have Oracle 9iFS set up on a Windows box. It worked well initially; after I batch loaded about 100,000 documents into it, it has become quite slow. Opening a folder through an SMB client takes tens of seconds. Does anyone have some performance tuning tips? Or has anyone had a similar situation with a large number of documents?
    Thanks

    I have oracle 9iFS setting on a windows box. It works well initially, after I batch loaded about 100,000 documents into it.It becomes quite slow. Open a folder through SMB client takes tens of seconds. Does anyone have some performance tuning tips?
    Or anyone has similiar situation with large amount of documents?
    Thanks
    If running analyze doesn't help the problem, please post a new thread, and we'll try to help you. When you do, please answer these questions:
    - How many documents per folder?
    - Can you detect whether your iFS Java processes or your Oracle processes are the bottleneck?
    + If the bottleneck is an 9iFS Java process, then go into the Enterprise Manager tool (configured with iFS), and bring up the Node Performance Dialog (see page 2-25 of the 9iFS Setup and Administration Guide).
    + For your server, bring up the information described in Figure 2-20 of the 9iFS Setup and Admin Guide. "Service Details: Committed Data Cache".
    - Adjust these settings higher and see if your performance improves.
    Alan

  • Advance Tuning Books suggestion.

    Hi Guru's,
    Please suggest me some books for Advance Tuning.
    Upto my understanding I feel I know some of the basic concepts of tuning.
    Please advice me books on RAC also.
    Thanks in advance

    Hi Hans,
    > but not for the publisher of the other books you reference in which you appear to have an interest
    I'm only the Series Editor at Rampant (I'm not allowed to handle money!), and it's only my job to snag the best authors that I can find, then ride them hard to produce great Oracle books.
    I'm proud to be helping these authors with their work, so OK, I guess that they deserve a shameless plug too.
    Dr. Tim Hall (Oracle ACE of the year) - A FANTASTIC PL/SQL tuning book:
    http://www.rampant-books.com/book_2006_1_plsql_tune.htm
    Alexey Danchenkov and I created this 950-page monster tuning book which took us two years to complete:
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm
    Kent Crotty has a good HTML-DB (Apex) book that shows application tuning:
    http://www.rampant-books.com/book_2005_2_html_db.htm
    Mike Ault has a collection of Oracle tuning scripts:
    http://www.oracle-tuning.com
    But let's not forget some of the other great Oracle tuning authors. Here are some of my personal favs, but not all:
    Chris Lawson: A GREAT Oracle tuning book:
    http://www.amazon.com/Art-Science-Oracle-Performance-Tuning/dp/1590591992
    Guy Harrison of Quest - Dated, but the best SQL tuning book around:
    http://www.amazon.com/Oracle-SQL-High-Performance-Tuning-2nd/dp/0130123811
    For application server tuning, don't miss Col. John Garmany's 10g AS book:
    http://www.amazon.com/Oracle-Application-Administration-Handbook-Osborne/dp/0072229586/sr=1-5/qid=1167774708/ref=sr_1_5/104-1093814-8623900?ie=UTF8&s=books
    Jonathan Gennick has a Regular Expressions book which really helps for Oracle tuning:
    http://www.amazon.com/Oracle-Regular-Expressions-Pocket-Reference/dp/0596006012/sr=1-1/qid=1167774799/ref=sr_1_1/104-1093814-8623900?ie=UTF8&s=books
    Dave Kreines has an outstanding DBA book with tuning tips, very nice:
    http://www.amazon.com/Oracle-Pocket-Guide-David-Kreines/dp/0596100493/sr=1-5/qid=1167774919/ref=sr_1_5/104-1093814-8623900?ie=UTF8&s=books
