SE30, Performance, nested loop

Hi,
I looked at SE30 and found something very useful for my current report:
the section on nested loops and how to do them better.
The documentation says:
The above parallel cursor algorithm assumes that ITAB2 contains only
entries also contained in ITAB1.
If this assumption does not hold, the parallel cursor algorithm
gets slightly more complicated, but its performance characteristics
remain the same.
I would like to know the more complicated way, because I am comparing two tables that contain different entries.
Would anybody be so kind as to help me?

Check:
<a href="/people/rob.burbank/blog/2006/02/07/performance-of-nested-loops">Performance of Nested Loops</a>
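The blog spells out the details. For reference, a minimal sketch of the generalized parallel cursor, assuming both tables are sorted by the key field and the key is unique within each table (table and field names here are only illustrative):
DATA: BEGIN OF wa1, k TYPE i, END OF wa1,
      BEGIN OF wa2, k TYPE i, END OF wa2,
      itab1 LIKE STANDARD TABLE OF wa1,
      itab2 LIKE STANDARD TABLE OF wa2,
      w_tabix TYPE sy-tabix VALUE 1.
SORT: itab1 BY k, itab2 BY k.
LOOP AT itab1 INTO wa1.
  LOOP AT itab2 INTO wa2 FROM w_tabix.
    IF wa2-k < wa1-k.
*     ITAB2 entry with no partner in ITAB1: skip it for good
      w_tabix = sy-tabix + 1.
    ELSEIF wa2-k > wa1-k.
*     no partner in ITAB2 for the current ITAB1 line: resume here next time
      w_tabix = sy-tabix.
      EXIT.
    ELSE.
*     matching pair: process wa1 / wa2 here
      w_tabix = sy-tabix + 1.
    ENDIF.
  ENDLOOP.
ENDLOOP.
Entries that exist only in ITAB2 are simply stepped over, and ITAB1 entries without a partner just fall through the inner loop, so the runtime stays roughly linear in the sizes of the two tables.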
Rob

Similar Messages

  • Why does the optimizer prefer a nested loop over a hash join?

    What do I look for if I want to find out why the server prefers a nested loop over hash join?
    The server is 10.2.0.4.0.
    The query is:
    SELECT p.*
        FROM t1 p, t2 d
        WHERE d.emplid = p.id_psoft
          AND p.flag_processed = 'N'
          AND p.desc_pool = :b1
          AND NOT d.name LIKE '%DUPLICATE%'
      AND ROWNUM < 2
    tkprof output is:
    Production
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          4           0
    Execute      1      0.00       0.01          0          4          0           0
    Fetch        1    228.83     223.48          0    4264533          0           1
    total        3    228.84     223.50          0    4264537          4           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 108  (SANJEEV)
    Rows     Row Source Operation
          1  COUNT STOPKEY (cr=4264533 pr=0 pw=0 time=223484076 us)
          1   NESTED LOOPS  (cr=4264533 pr=0 pw=0 time=223484031 us)
      10401    TABLE ACCESS FULL T1 (cr=192 pr=0 pw=0 time=228969 us)
      1    TABLE ACCESS FULL T2 (cr=4264341 pr=0 pw=0 time=223182508 us)
    Development
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          4          0           0
    Fetch        1      0.05       0.03          0        512          0           1
    total        3      0.06       0.06          0        516          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 113  (SANJEEV)
    Rows     Row Source Operation
          1  COUNT STOPKEY (cr=512 pr=0 pw=0 time=38876 us)
          1   HASH JOIN  (cr=512 pr=0 pw=0 time=38846 us)
         51    TABLE ACCESS FULL T2 (cr=492 pr=0 pw=0 time=30230 us)
        861    TABLE ACCESS FULL T1 (cr=20 pr=0 pw=0 time=2746 us)

    sanjeevchauhan wrote:
    What do I look for if I want to find out why the server prefers a nested loop over hash join?
    The server is 10.2.0.4.0.
    The query is:
    SELECT p.*
    FROM t1 p, t2 d
    WHERE d.emplid = p.id_psoft
    AND p.flag_processed = 'N'
    AND p.desc_pool = :b1
    AND NOT d.name LIKE '%DUPLICATE%'
    AND ROWNUM < 2
You've already got some suggestions, but the most straightforward way is to run the unhinted statement in both environments and then force the join and access methods you would like to see using hints. In your case, probably "USE_HASH(P D)" in your production environment and "FULL(P) FULL(D) USE_NL(P D)" in your development environment should be sufficient to see the costs and estimates returned by the optimizer when using the alternate access and join patterns.
This gives you a first indication of why the optimizer thinks that the chosen access path is cheaper than the obviously less efficient plan selected in production.
    As already mentioned by Hemant using bind variables complicates things a bit since EXPLAIN PLAN is not reliable due to bind variable peeking performed when executing the statement, but not when explaining.
    Since you're already on 10g you can get the actual execution plan used for all four variants using DBMS_XPLAN.DISPLAY_CURSOR which tells you more than the TKPROF output in the "Row Source Operation" section regarding the estimates and costs assigned.
    Of course the result of your whole exercise might be highly dependent on the actual bind variable value used.
By the way, your statement is questionable in principle since you're querying for the first row of a nondeterministic result set. It's not deterministic since you've defined no particular order, so depending on the way Oracle executes the statement and the physical storage of your data, this query might return different results on different runs.
This is either an indication of a bad design (if the query is supposed to return exactly one row, then you don't need the ROWNUM restriction) or an incorrect attempt at a Top 1 query, which requires you to somehow specify an order, either by adding an ORDER BY to the statement and wrapping it in an inline view, or e.g. by using analytic functions that allow you to specify a RANK by a defined ORDER.
This is an example of what a deterministic Top N query could look like:
SELECT *
  FROM (
        SELECT p.*
          FROM t1 p, t2 d
         WHERE d.emplid = p.id_psoft
           AND p.flag_processed = 'N'
           AND p.desc_pool = :b1
           AND NOT d.name LIKE '%DUPLICATE%'
         ORDER BY <order_criteria>
       )
 WHERE ROWNUM <= 1;
Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Explain Plan shows Nested Loops, Is it good or bad?

    Hi All,
I have a doubt about the explain plan: will the Nested Loops degrade the query performance?
Note: I have pasted only a few lines of output taken from the explain plan.
    Do let me know if there is any article I could read to get clear understanding about the same.
    17 NESTED LOOPS ANTI Cost: 125 Bytes: 186 Cardinality: 1                                                                  
    15 NESTED LOOPS ANTI Cost: 124 Bytes: 166 Cardinality: 1                                                             
    12 NESTED LOOPS Cost: 122 Bytes: 140 Cardinality: 1                                                        
         9 NESTED LOOPS Cost: 121 Bytes: 117 Cardinality: 1           
    Thanks

    Hi,
    there is absolutely nothing wrong about nested loops (NL). It's a very efficient way of combining data from two rowsources. It works pretty much like a regular loop: it takes all rows from one rowsource (the "outer" one) and for each of them it looks up a row matching the join condition in the other rowsource (the "inner" one).
    Note that there are not so many alternatives in Oracle: there are only 3 ways to join data in Oracle, and one of them is used in rather special circumstances (merge join). So normally the choice is between a NL and a hash join. Hash join (HJ) takes the smaller dataset and builds an in-memory lookup table using a hash function on join column(s). Then it goes through the other dataset and as it goes, it applies the hashing function to join column(s) and picks the matching rows from the smaller dataset.
Actually, hash joins and nested loops are not all that different. The basic mechanism is the same: you go through one data source and, as you go, you pick matching rows from the other and do the join. The main difference is that a HJ requires some preparation work (it costs resources to build the in-memory table), and thus HJs are typically associated with less selective queries and full table scans.
In your particular case it's not possible to tell whether or not a NL is appropriate based on just a few rows from the explain plan. We need to know the structure of your tables (the DDL), what kind of data they hold (optimizer stats) and what query you are running to be able to tell.
    Best regards,
    Nikolay

  • Hi, how to avoid nested loops in this program to improve the performance

Hi all,
How to avoid the nested loops in this program? What is the replacement for the nested loops in this code?
    LOOP AT itb_ekpo.
        READ TABLE itb_marc WITH KEY
           matnr = itb_ekpo-matnr
           werks = itb_ekpo-werks BINARY SEARCH.
        CHECK sy-subrc = 0.
* FAE 26446: end of replacement
        itb_pca-ebeln = itb_ekpo-ebeln.
        itb_pca-ebelp = itb_ekpo-ebelp.
*   itb_pca-lifnr = itb_ekko-lifnr.   "-FAE26446
        itb_pca-lifnr = itb_ekpo-lifnr.                         "+FAE26446
        itb_pca-ekgrp = itb_ekpo-ekgrp.                         "+FAE26446
        itb_pca-dispo = itb_ekpo-dispo.                         "+FAE26446
        itb_pca-matnr = itb_ekpo-matnr.
        itb_pca-werks = itb_ekpo-werks.
*   Look up the material description
        READ TABLE itb_makt
                   WITH KEY matnr = itb_ekpo-matnr
                            spras = text-fra
                   BINARY SEARCH.
        IF sy-subrc = 0.
          itb_pca-maktx = itb_makt-maktx.
        ELSE.
          READ TABLE itb_makt
                    WITH KEY matnr = itb_ekpo-matnr
                             spras = text-ang
                    BINARY SEARCH.
          IF sy-subrc = 0.
            itb_pca-maktx = itb_makt-maktx.
          ENDIF.
        ENDIF.
        IF NOT itb_ekpo-bpumn IS INITIAL.
          itb_pca-menge = itb_ekpo-menge * itb_ekpo-bpumz /
                                           itb_ekpo-bpumn.
        ENDIF.
*   Select from table EKES the delivery dates and the in-transit quantities
        CLEAR w_temoin_ar.
        CLEAR w_etens.
        LOOP AT itb_ekes
                FROM w_index_ekes.
          IF  itb_ekes-ebeln = itb_ekpo-ebeln
          AND itb_ekes-ebelp = itb_ekpo-ebelp.
            IF itb_ekes-ebtyp = text-arn.
              itb_pca-eindt = itb_ekes-eindt.
              w_temoin_ar = 'X'.
            ELSE.
*       If it is an in-transit quantity, retrieve the quantity and the date.
              IF itb_ekes-dabmg < itb_ekes-menge.
                itb_pca-qtran = itb_pca-qtran + itb_ekes-menge -
                                itb_ekes-dabmg.
              ENDIF.
              IF itb_ekes-etens > w_etens.
                w_etens = itb_ekes-etens.
                itb_pca-dtran = itb_ekes-eindt.
              ENDIF.
            ENDIF.
          ELSEIF itb_ekes-ebeln > itb_ekpo-ebeln
          OR ( itb_ekes-ebeln = itb_ekpo-ebeln
          AND itb_ekes-ebelp > itb_ekpo-ebelp ).
            w_index_ekes = sy-tabix.
            EXIT.
          ENDIF.
        ENDLOOP.
*   If there is no order acknowledgement (AR), get the delivery date from EKET.
        LOOP AT itb_eket
                FROM w_index_eket.
          IF  itb_eket-ebeln = itb_ekpo-ebeln
          AND itb_eket-ebelp = itb_ekpo-ebelp.
            IF w_temoin_ar IS INITIAL.
              itb_pca-eindt = itb_eket-eindt.
            ENDIF.
            itb_pca-slfdt = itb_eket-slfdt.
*     Calculate the vendor backlog from the ordered quantity and the received quantity
            itb_pca-attdu = itb_pca-attdu + itb_eket-menge -
                            itb_eket-wemng.
*     Calculate the item amount
            itb_pca-netpr = itb_ekpo-netpr * itb_pca-attdu.
            IF itb_ekpo-peinh NE 0.
              itb_pca-netpr = itb_pca-netpr / itb_ekpo-peinh.
            ENDIF.
*     Calculate the received quantity
            itb_pca-wemng = itb_pca-wemng + itb_eket-wemng.
*     Calculate the delay in calendar days; the calculation must not include the delivery day
            ADD 1 TO itb_eket-eindt.
            IF NOT itb_pca-attdu  IS INITIAL
            AND    itb_eket-eindt LT sy-datum.
*       Calculate the delay in working days
              CLEAR w_retard.
              CALL FUNCTION 'Z_00_BC_WORKDAYS_PER_PERIOD'
                   EXPORTING
                        date_deb = itb_eket-eindt
                        date_fin = sy-datum
                   IMPORTING
                        jours    = w_retard.
              itb_pca-rtard = itb_pca-rtard + w_retard .
            ENDIF.
          ELSEIF itb_eket-ebeln > itb_ekpo-ebeln
          OR (   itb_eket-ebeln = itb_ekpo-ebeln
          AND    itb_eket-ebelp > itb_ekpo-ebelp ).
            w_index_eket = sy-tabix.
            EXIT.
          ENDIF.
        ENDLOOP.
*   Find the last delivery date
        LOOP AT itb_mseg
                FROM w_index_mseg.
          IF  itb_mseg-ebeln = itb_ekpo-ebeln
          AND itb_mseg-ebelp = itb_ekpo-ebelp.
            READ TABLE itb_mkpf
                       WITH KEY mblnr = itb_mseg-mblnr
                                mjahr = itb_mseg-mjahr
                       BINARY SEARCH.
            IF sy-subrc = 0.
              IF itb_mkpf-bldat > itb_pca-bldat.
                itb_pca-bldat = itb_mkpf-bldat.
              ENDIF.
            ENDIF.
          ELSEIF itb_mseg-ebeln > itb_ekpo-ebeln
          OR (   itb_mseg-ebeln = itb_ekpo-ebeln
          AND    itb_mseg-ebelp > itb_ekpo-ebelp ).
            w_index_mseg = sy-tabix.
            EXIT.
          ENDIF.
        ENDLOOP.
        APPEND itb_pca.
        CLEAR itb_pca.
* FAE26446: removal of the following paragraph
*    ELSEIF itb_ekpo-ebeln > itb_ekko-ebeln.
*      w_index_ekpo = sy-tabix.
*      EXIT.
*    ENDIF.
*  ENDLOOP.
* End FAE26446
      ENDLOOP.
    Thanks in advance for all.....

    Hi
These are some performance tips.
Instead of using nested SELECT loops, it is often better to use subqueries:
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
The above code can be optimized even further by using a subquery instead of FOR ALL ENTRIES:
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
Internal Tables
1.     Table operations should be done using explicit work areas rather than via header lines (see the sketch after this list).
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
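Point # 1
A minimal sketch (not part of the original list; ITAB, WA and the SFLIGHT structure are only illustrative names):
DATA: ITAB TYPE STANDARD TABLE OF SFLIGHT,
      WA   TYPE SFLIGHT.
LOOP AT ITAB INTO WA.
* work with the explicit work area WA rather than a header line
  WA-PRICE = WA-PRICE * 2.
  MODIFY ITAB FROM WA.
ENDLOOP.
The same loop written against a table declared WITH HEADER LINE is obsolete and makes it easy to confuse the header line with the table body.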
    Point # 2
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    Point # 3
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
8.    If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    Point # 7
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
12.   Copying internal tables by using “ITAB2[ ] = ITAB1[ ]” is faster than copying via “LOOP-APPEND-ENDLOOP”.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
“SORT ITAB BY K.” makes the program run faster as compared to “SORT ITAB.”
    Internal Tables         contd…
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
Reward if useful

  • Partition pruning, Nested loops

    Hi,
    I am having a problem with getting partition pruning in a query. I've managed to dumb down the problem to two tables and a minimal query (see below).
Basically, I have a partitioned table "Yearly Facts" and a helper table "Current Year" that always contains one row. The sole purpose of this one row is to tell what the current year is, what the previous year was and what the next year will be. (In the real problem there are non-standard time periods, so one cannot just calculate previous and next by adding/subtracting 1 as would be possible in this example.)
The following query is executed the way I want:
it performs a scan on current_year and then a nested loop into the facts, and partition pruning was happening.
    select sum(decode(a.year_key, b.curr_year, some_measure)) as curr_year_measure
      from yearly_fact_t a
          ,current_year  b
    where a.year_key = b.curr_year;
    | Id  | Operation                  | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT           |               |     1 |    39 |     4   (0)| 00:00:01 |       |    |
    |   1 |  SORT AGGREGATE            |               |     1 |    39 |            |          |       |    |
    |   2 |   NESTED LOOPS             |               |     1 |    39 |     4   (0)| 00:00:01 |       |    |
    |   3 |    INDEX FAST FULL SCAN    | SYS_C00247890 |     1 |    13 |     2   (0)| 00:00:01 |       |    |
    |   4 |    PARTITION RANGE ITERATOR|               |     1 |    26 |     2   (0)| 00:00:01 |   KEY |   KEY |
    |*  5 |     TABLE ACCESS FULL      | YEARLY_FACT_T |     1 |    26 |     2   (0)| 00:00:01 |   KEY |   KEY |
    Predicate Information (identified by operation id):
   5 - filter("A"."YEAR_KEY"="B"."CURR_YEAR")
The following query is where I have my problem.
The basic xplan is the same, but for some reason the CBO gave up and decided to scan all partitions, which was not very scalable on production data :)
    I would have thought that the plan would be the same.
    select sum(decode(a.year_key, b.curr_year, some_measure)) as curr_year_measure
          ,sum(decode(a.year_key, b.prev_year, some_measure)) as prev_year_measure
      from yearly_fact_t a
          ,current_year  b
    where a.year_key = b.curr_year
        or a.year_key = b.prev_year;
    | Id  | Operation             | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT      |               |     1 |    52 |    13   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE       |               |     1 |    52 |            |          |       |       |
    |   2 |   NESTED LOOPS        |               |     4 |   208 |    13   (0)| 00:00:01 |       |       |
    |   3 |    TABLE ACCESS FULL  | CURRENT_YEAR  |     1 |    26 |     4   (0)| 00:00:01 |       |       |
    |   4 |    PARTITION RANGE ALL|               |     4 |   104 |     9   (0)| 00:00:01 |     1 |     6 |
    |*  5 |     TABLE ACCESS FULL | YEARLY_FACT_T |     4 |   104 |     9   (0)| 00:00:01 |     1 |     6 |
    Predicate Information (identified by operation id):
       5 - filter("A"."YEAR_KEY"="B"."CURR_YEAR" OR "A"."YEAR_KEY"="B"."PREV_YEAR")
    -- drop table yearly_fact_t purge;
    -- drop table current_year  purge;
    create table current_year(
       curr_year number(4) not null
      ,prev_year number(4) not null
      ,next_year number(4) not null
      ,primary key(curr_year)
      ,unique(prev_year)
  ,unique(next_year)
);
insert into current_year(curr_year, prev_year, next_year) values(2010, 2009, 2011);
    commit;
    create table yearly_fact_t(
       year_key     number(4) not null
      ,some_dim_key number    not null
  ,some_measure number    not null
)
partition by range(year_key)(
       partition p2007 values less than(2008)
      ,partition p2008 values less than(2009)
      ,partition p2009 values less than(2010)
      ,partition p2010 values less than(2011)
      ,partition p2011 values less than(2012) 
  ,partition pmax  values less than(maxvalue)
);
insert into yearly_fact_t(year_key, some_dim_key, some_measure) values(2007,1, 10);
    insert into yearly_fact_t(year_key, some_dim_key, some_measure) values(2008,1, 20);
    insert into yearly_fact_t(year_key, some_dim_key, some_measure) values(2009,1, 30);
    insert into yearly_fact_t(year_key, some_dim_key, some_measure) values(2010,1, 40);
commit;
What can I do to get partition pruning to happen in the second query?
    Or even better, what is it in my query that prevents it from happening?
    We're running Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit.
    Best regards
    Ronnie

    Hi Ronnie,
    Check whether your statistics are up to date.
    SQL> set autotrace traceonly exp
    SQL> select sum(decode(a.year_key, b.curr_year, some_measure)) as curr_year_measure
      2        ,sum(decode(a.year_key, b.prev_year, some_measure)) as prev_year_measure
      3    from yearly_fact_t a
      4        ,current_year  b
      5   where a.year_key = b.curr_year
      6      or a.year_key = b.prev_year;
    Execution Plan
    Plan hash value: 229094315
    | Id  | Operation             | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT      |               |     1 |    52 |    10   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE       |               |     1 |    52 |            |          |       |       |
    |   2 |   NESTED LOOPS        |               |     4 |   208 |    10   (0)| 00:00:01 |       |       |
    |   3 |    TABLE ACCESS FULL  | CURRENT_YEAR  |     1 |    26 |     3   (0)| 00:00:01 |       |       |
    |   4 |    PARTITION RANGE ALL|               |     4 |   104 |     7   (0)| 00:00:01 |     1 |     6 |
    |*  5 |     TABLE ACCESS FULL | YEARLY_FACT_T |     4 |   104 |     7   (0)| 00:00:01 |     1 |     6 |
    Predicate Information (identified by operation id):
       5 - filter("A"."YEAR_KEY"="B"."CURR_YEAR" OR "A"."YEAR_KEY"="B"."PREV_YEAR")
    Note
       - dynamic sampling used for this statement
    SQL> set autotrace off
SQL> exec dbms_stats.gather_table_stats(user, 'current_year', estimate_percent=>100, cascade=>true, method_opt=>'for all columns size 1');
    PL/SQL procedure successfully completed.
    SQL>
SQL> exec dbms_stats.gather_table_stats(user, 'yearly_fact_t', estimate_percent=>100, cascade=>true, method_opt=>'for all columns size 1');
    PL/SQL procedure successfully completed.
    SQL> set autotrace traceonly exp
    SQL> select sum(decode(a.year_key, b.curr_year, some_measure)) as curr_year_measure
      2        ,sum(decode(a.year_key, b.prev_year, some_measure)) as prev_year_measure
      3    from yearly_fact_t a
      4        ,current_year  b
      5   where a.year_key = b.curr_year
      6      or a.year_key = b.prev_year;
    Execution Plan
    Plan hash value: 2253546831
    | Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT            |               |     1 |    15 |     8   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE             |               |     1 |    15 |            |          |       |       |
    |   2 |   CONCATENATION             |               |       |       |            |          |       |       |
    |   3 |    NESTED LOOPS             |               |     1 |    15 |     4   (0)| 00:00:01 |       |       |
    |   4 |     TABLE ACCESS FULL       | CURRENT_YEAR  |     1 |     8 |     3   (0)| 00:00:01 |       |       |
    |   5 |     PARTITION RANGE ITERATOR|               |     1 |     7 |     1   (0)| 00:00:01 |   KEY |   KEY |
    |*  6 |      TABLE ACCESS FULL      | YEARLY_FACT_T |     1 |     7 |     1   (0)| 00:00:01 |   KEY |   KEY |
    |   7 |    NESTED LOOPS             |               |     1 |    15 |     4   (0)| 00:00:01 |       |       |
    |   8 |     TABLE ACCESS FULL       | CURRENT_YEAR  |     1 |     8 |     3   (0)| 00:00:01 |       |       |
    |   9 |     PARTITION RANGE ITERATOR|               |     1 |     7 |     1   (0)| 00:00:01 |   KEY |   KEY |
    |* 10 |      TABLE ACCESS FULL      | YEARLY_FACT_T |     1 |     7 |     1   (0)| 00:00:01 |   KEY |   KEY |
    Predicate Information (identified by operation id):
       6 - filter("A"."YEAR_KEY"="B"."PREV_YEAR")
      10 - filter("A"."YEAR_KEY"="B"."CURR_YEAR" AND LNNVL("A"."YEAR_KEY"="B"."PREV_YEAR"))
    SQL> set autotrace off
SQL>
Asif Momen
    http://momendba.blogspot.com

  • Nested loop join v/s Sort merge

I have seen that nested loops are better if the inner table is indexed, because for each outer table row we are looking for a match in the inner table. But is there any case where the optimizer still goes for a nested loop even if there is no index on the inner table? That is my first question.
My second question: when doing a sort merge join, Oracle has to sort both result sets and then merge them. Oracle says that if both row sets are already sorted, it is definitely better for performance. Yes, that's obvious. But back to my first question: when there is no index on the inner table, is that the situation where Oracle goes for a sort merge join?

My response should really have examples, but since I do not have any handy I will just address your first question. If there is no index available from table A to table B, yes, it is possible a nested loop join may still be used and table B read via full table scan within a nested loop. If table B is very small and consists of only a block or two, this may be a relatively efficient plan. It is more likely you would see table B full scanned and the result fed into a hash join, but I have seen the plan you mention.
    Back before hash joins were introduced with 7.3 (if my memory is correct) you would see sort/merge joins used more often than you do now. Generally speaking no index on the join conditions would exist for this option to be chosen.
If you really want to know why, and sometimes what, the optimizer is going to do, buy Jonathan Lewis's book Cost-Based Oracle Fundamentals. It explains the optimizer in more depth than any other source I know of.
    HTH -- Mark D Powell --

  • Nested Loop Help

    Hello,
I have generated a simple VI to perform multiplication and division operations. The following operation is performed with the inputs below.
    A1 has a value range from 1 -10 fixed
    A2 has an input range of 1 - 5
    A3,A4 are constants.
so Result = (A1*A3) / (A4*A2) * 100.
I want to plot "Result vs. A1" with the current value of A2, increment A2 and repeat the procedure, so I generate 5 plots and display them in one graph. I need to write a nested loop to perform this operation; however, I am able to do it only once.
I am trying to nest one for loop structure inside another but haven't gotten ahead with this. Any help will be appreciated. Thanks in advance.

    altenbach wrote:
    Dravi99 wrote:
    The Problem is that I want to plot the division result vs. the # of array out elements and i am unsuccessful at it. May be i am missing on the waveform graph properties.
    You should make the current values default before saving so we have some typical data to play with.
    Currently, your code makes very little sense, because the inner loop has no purpose.
    If you plot the division result versus array index (I assume that's what you want, I don't understand what you mean by "the # of array out elements "), you only have one plot. What should be on the other plots???
    Please explain or even attach an image of the desired output. Thanks!
    Thanks Altenbach for the inputs.
I have made the values default. Sorry for the previous one.
Now the inner loop was placed so that I can create a 2D array.
I need to plot the division result vs. the value of the element in the array out, i.e. currently my array out has 0.0...,0.3 and I want that to be the X axis.
The graph plotted is for the 1st value of "in"; this value will change, e.g. from 4.5 to 4.7 to 4.9, not necessarily in steps.
Thus my one graph should have multiple plots of "in", with the above-mentioned axis.
I hope this time I was clear in my message. I have attached the modified VI.
    Attachments:
    For_loop_prob.vi ‏17 KB

  • Avoid nested loops

I have an internal table (itab) with a field having a value in the form shown below:
0001-0003;0005;0007-0009 (in one row)
This means it has the values 0001 to 0003, 0005, 0007 to 0009.
I have to compare these values with another table itab3, which has these values singly, i.e. 0001, 0002, 0003, etc. (all in different rows). So I have used nested loops, because I need to expand all the values to compare them with itab3, something like this:
loop at itab.
  split itab-field1 at ';' into table itab2.
  loop at itab2.
    split itab2-field at '-' into var1 var2.
    while var1 le var2.
      append var1 to itab_new.   " the new table
      var1 = var1 + 1.
    endwhile.
  endloop.
endloop.
This way I get the values in the new table as:
    0001
    0002
    0003
    0005
    0007
    0008
    0009
So please tell me, is there some other way by which I can avoid the nested loops?

But still it avoids 2 splits.
Yes Rainer, you are right, but still my code improves performance. I checked it.
    data:start type i,
         end type i,
         res type i.
    data:begin of itab1 occurs 0,
         str type char255,
         end of itab1.
    data:begin of itab2 occurs 0,
         str type char255,
         end of itab2.
    data:var1 type char04,
         var2 type char04.
    get RUN TIME FIELD start.
    itab1-str = '0001-0003;0005;0007-0009'.
    append itab1.
    loop at itab1.
        split itab1-str at ';' into table itab2.
      loop at itab2.
         split itab2-str at '-' into var1 var2.
            while var1 le var2.
               var1 = var1 + 1.
           endwhile.
      endloop.
    endloop.
    get RUN TIME FIELD end.
    res = end - start.
    write :'code1-', res.
    skip 1.
    DATA:lv_str1 TYPE char255,
         lv_low(04) TYPE n,
         lv_high(04) TYPE n.
    TYPES:BEGIN OF ty_itab,
          line TYPE char255,
          END OF ty_itab.
    DATA:itab TYPE TABLE OF ty_itab,
         wa TYPE ty_itab.
    clear:start,end,res.
    get RUN TIME FIELD start.
    wa-line = '0001-0003;0005;0007-0009'.
    APPEND wa TO itab.
    LOOP AT itab INTO wa.
      DO.
        IF wa-line CA ';'.
          lv_str1 = wa-line+0(sy-fdpos).
          sy-fdpos = sy-fdpos + 1.
          wa-line+0(sy-fdpos) = ' '.
          CONDENSE wa-line.
          IF lv_str1 CA '-'.
            lv_low = lv_str1+0(sy-fdpos).
            lv_high = lv_str1+sy-fdpos(*).
            WHILE NOT lv_low GT lv_high.
               ADD 1 TO lv_low.
            ENDWHILE.
          ELSE.
          ENDIF.
        ELSE.
          IF wa-line CA '-'.
            lv_low = wa-line+0(sy-fdpos).
            lv_high = wa-line+sy-fdpos(*).
            WHILE NOT lv_low GT lv_high.
              ADD 1 TO lv_low.
            ENDWHILE.
          ELSE.
          ENDIF.
          EXIT.
        ENDIF.
      ENDDO.
    ENDLOOP.
    skip 1.
    get RUN TIME FIELD end.
    res = end - start.
    write: 'code2-',res.

  • NESTED LOOP or SORT MERGE

    Hi All,
How can we judge which one is best to use for the query:
- NESTED LOOP
- SORT MERGE
- HASH JOIN
Thanx.. Ratan

    Hi...
    For Nested Loop Joins:
    Nested loop joins are useful when small subsets of data are being joined and if the join condition is an efficient way of accessing the second table.
    It is very important to ensure that the inner table is driven from (dependent on) the outer table. If the inner table's access path is independent of the outer table, then the same rows are retrieved for every iteration of the outer loop, degrading performance considerably. In such cases, hash joins joining the two independent row sources perform better.
    For Hash Join:
    Hash joins are used for joining large data sets. The optimizer uses the smaller of two tables or data sources to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find the joined rows.
    For Sort Merge Joins:
    Sort merge joins can be used to join rows from two independent sources. Hash joins generally perform better than sort merge joins. On the other hand, sort merge joins can perform better than hash joins if both of the following conditions exist:
    The row sources are sorted already.
    A sort operation does not have to be done.
    However, if a sort merge join involves choosing a slower access method (an index scan as opposed to a full table scan), then the benefit of using a sort merge might be lost.
    Sort merge joins are useful when the join condition between two tables is an inequality condition (but not a nonequality) like <, <=, >, or >=. Sort merge joins perform better than nested loop joins for large data sets. You cannot use hash joins unless there is an equality condition.
    In a merge join, there is no concept of a driving table. The join consists of two steps:
    Sort join operation: Both the inputs are sorted on the join key.
    Merge join operation: The sorted lists are merged together.
    If the input is already sorted by the join column, then a sort join operation is not performed for that row source.

  • Nested loop, merge join and hash join

Can anyone tell me the difference/relationship between nested loop, hash join and merge join? Thanks

    Check Oracle Performance Tuning Guide
    13.6 Understanding Joins
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#i51523

  • Generally when does optimizer use nested loop and Hash joins  ?

    Version: 11.2.0.3, 10.2
    Lets say I have a table called ORDER and ORDER_DETAIL.
    ORDER_DETAIL is the child table of ORDERS .
    This is what I understand about Nested Loop:
    When we join ORDER AND ORDER_DETAIL tables oracle will form a 'nested loop' in which for each order_ID in ORDER table (outer loop), oracle will look for corresponding multiple ORDER_IDs in the ORDER_DETAIL table.
    Is nested loop used when the driving table (ORDER in this case) is smaller than the child table (ORDER_DETAIL) ?
    Is nested loop more likely to use Indexes in general ?
    How will these two tables be joined using Hash joins ?
    When is it ideal to use hash joins  ?

    Your description of a nested loop is correct.
The overall rule is that Oracle will use the plan that it calculates to be, in general, fastest. That mostly means fewest I/Os, but there are various factors that adjust its calculation, e.g. it expects index blocks to be cached, multiple reads of entries in an index may reference the same block, and full scans get multiple blocks per I/O. It also has CPU cost calculations, but they generally only become significant with function / package calls or domain indexes (spatial, text, ...).
    Nested loop with an index will require one indexed read of the child table per outer table row, so its I/O cost is roughly twice the number of rows expected to match the where clause conditions on the outer table.
A hash join reads the smaller table into a hash table, then matches the rows from the larger table against the hash table, so its I/O cost is the cost of a full scan of each table (unless the smaller table is too big to fit in a single in-memory hash table). Hash joins generally don't use indexes: they don't use an index to look up each result. They can use an index as a data source, as a narrow version of the table, or as a way to identify the rows satisfying the other where clause conditions.
    If you are processing the whole of both tables, Oracle is likely to use a hash join, and be very fast and efficient about it.
    If your where clause restricts it to a just few rows from the parent table and a few corresponding rows from the child table, and you have an index Oracle is likely to use a nested loops solution.
    If the tables are very small, either plan is efficient - you may be surprised by its choice.
Don't worry about plans with full scans and hash joins; they are usually very fast and efficient. Often bad performance comes from having to do nested loop lookups for lots of rows.

  • REMOVE NESTED LOOP

    HI SAP GURUS
    I AM USING THE NESTED LOOP
      LOOP AT IBKPF into IBKPF_str.
        loop at IBSEG into IBSEG_Str.
          if IBSEG_Str-BUKRS = IBKPF_Str-BUKRS and IBSEG_Str-BELNR = IBKPF_Str-BELNR and IBSEG_Str-GJAHR = IBKPF_Str-GJAHR.
            Move-Corresponding ibkpf_Str to idata_Str.
            Move-corresponding ibseg_Str to idata_Str.
            Append idata_Str to IDATA.
            clear : IDATA_Str , IBSEG_Str.
          ENDIF.
          Clear : IBSEG_Str.
        ENDLOOP.
        Clear :IBKPF_Str , IBSEG_Str.
      ENDLOOP.
    NOW TO INCREACE THE PERFORMANCE
    I AM USING
    Loop at IBSEG into IBSEg_str.
    *Read table IBKPF with key BELNR = IBSEG_str-BELNR into IBKPF_str Binary Search.
        If sy-subrc = 0 and IBSEG_Str-BUKRS = IBKPF_Str-BUKRS and IBSEG_Str-GJAHR = IBKPF_Str-GJAHR.
           Move-Corresponding ibkpf_Str to idata_Str.
           Move-corresponding ibseg_Str to idata_Str.
           Append idata_Str to IDATA.
           clear : IDATA_Str , IBKPF_Str.
        Endif.
       clear : IBSEG_Str , IBKPF_Str.
    *ENDLOOP.
BUT NOW IT IS NOT PICKING ALL THE RECORDS (MULTIPLE COMPANY CODES).
    CAN ANY ONE HELP

    Check this:
    <a href="/people/rob.burbank/blog/2006/02/07/performance-of-nested-loops">Performance of Nested Loops</a>
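The blog explains the technique in detail. For the immediate symptom (records missed when there are multiple company codes), here is a minimal sketch of the single loop with the full key in the READ TABLE, assuming IBKPF is sorted by BUKRS, BELNR and GJAHR (table and structure names taken from your post):
SORT ibkpf BY bukrs belnr gjahr.
LOOP AT ibseg INTO ibseg_str.
  READ TABLE ibkpf INTO ibkpf_str
       WITH KEY bukrs = ibseg_str-bukrs
                belnr = ibseg_str-belnr
                gjahr = ibseg_str-gjahr
       BINARY SEARCH.
  IF sy-subrc = 0.
    MOVE-CORRESPONDING ibkpf_str TO idata_str.
    MOVE-CORRESPONDING ibseg_str TO idata_str.
    APPEND idata_str TO idata.
  ENDIF.
  CLEAR: idata_str, ibkpf_str, ibseg_str.
ENDLOOP.
Reading with only BELNR in the key returns the first document with that number regardless of company code and fiscal year, which is why records go missing.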
    Rob

  • Help tuning NESTED LOOPS OUTER joins

    Hello,
I have inherited this nasty query (below) that is taking an awful time to complete (more than 2 hrs a day).
The worst bit is that I need to outer join my fact table so many times, as I need bits and pieces from other tables/mviews.
When I look at the explain plan I see that this situation means the CBO is doing several NESTED LOOPS OUTER join operations. I understand that these nested loops mean going through every row in my primary table to see if there is a match in the (much smaller) secondary table, which makes it extremely inefficient. Is this right?
    The stats on the tables are all refreshed daily.
    Any ideas on how I can improve the performance here?
    Thanks in advance!
    The query:
    explain plan for
    SELECT x.user_id AS user_id,
    x.login_name AS login_name,
    c.date_of_birth AS date_of_birth,
    x.registration_site AS registration_site,
    x.organisation AS organisation,
    c.user_title AS user_title,
    c.first_name AS first_name,
    c.last_name AS last_name,
    x.email_address AS email_address,
    x.user_status AS user_status,
    x.user_privilege AS user_access_privilege,
    x.date_registration AS date_registration,
    x.affiliate_id AS affiliate_id,
    x.mobile_number AS mobile_number,
    x.optional_parameter AS vt_number,
    gud.display_name AS chat_name,
    REPLACE (s4.address_line_1, ',', '') AS address_line_1,
    REPLACE (s4.address_line_2, ',', '') AS address_line_2,
    REPLACE (s4.town, ',', '') AS town,
    REPLACE (s4.county, ',', '') AS county,
    REPLACE (s4.postcode, ',', '') AS postcode,
    s4.country AS country,
    s3.last_login AS last_login_date,
    x.email_send_newsletter AS email_send_newsletter,
    x.email_give_details_thirdparty AS email_give_details_thirdparty,
    NVL (ia.cash_balance, 0) AS current_cash_balance,
    NVL (ia.bonus_balance, 0) AS current_bonus_balance,
    x.external_affiliate_id AS external_affiliate_id,
    r.currency_code AS currency,
    NVL (ia.points_balance, 0) AS current_loyalty_points_balance,
    p.status AS buyer_status,
    NVL (ia.bi_bonus_balance, 0) AS current_bi_bonus_balance,
    NVL (ia.pending_balance, 0) AS current_pending_balance,
    l.level_name AS current_loyalty_level,
    l.date_level_achieved AS date_level_achieved,
    NVL (l.current_period_loyalty_points, 0) AS current_period_loyalty_points,
    r.region AS user_region,
    x.registration_platform AS registration_platform,
    x.external_user_name AS external_user_name,
    c.home_number AS home_number,
    pr.code AS reg_promo_code,
    g.date_first_buy AS date_first_buy
    FROM gl_user_registrations x,
    gl_region r,
    MVW_USER_BALANCES ia,
    gl_customers c,
    gl_user_display_names gud,
    gl_user_last_login s3,
    (SELECT z.user_id AS user_id,
    z.address_line_1 AS address_line_1,
    z.address_line_2 AS address_line_2,
    z.town AS town,
    z.county AS county,
    z.postcode AS postcode,
    z.country AS country
    FROM gl_user_addresses z
    WHERE z.is_current = 1) s4,
    gl_user_buyer_mapping upm,
    gl_buyer p,
    mvw_user_loyalty_points l,
    MVW_USER_PROMO_CODE_REG pr,
    MVW_USER_FIRST_BUY_DATE g
    WHERE x.base_region = r.region
    AND x.user_id = ia.user_id (+)
    AND x.customer_id = c.customer_id(+)
    AND x.user_id = gud.user_id (+)
    AND x.user_id = s4.user_id (+)
    AND x.user_id = s3.user_id (+)
    AND x.user_id = upm.user_id (+)
    AND upm.buyer_id = p.buyer_id
    AND x.user_id = l.user_id (+)
    AND x.user_id = pr.user_id (+)
    AND x.user_id = g.user_id (+);
    select * from table(dbms_xplan.display);
    Plan hash value: 2158171613
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 100 | 63100 | 135 (1)| 00:00:01 |
    | 1 | NESTED LOOPS OUTER | | 100 | 63100 | 135 (1)| 00:00:01 |
    | 2 | NESTED LOOPS OUTER | | 100 | 60600 | 120 (1)| 00:00:01 |
    | 3 | NESTED LOOPS OUTER | | 100 | 57100 | 105 (1)| 00:00:01 |
    | 4 | NESTED LOOPS OUTER | | 100 | 55400 | 90 (2)| 00:00:01 |
    | 5 | NESTED LOOPS OUTER | | 100 | 53600 | 70 (2)| 00:00:01 |
    |* 6 | HASH JOIN | | 100 | 47000 | 55 (2)| 00:00:01 |
    | 7 | TABLE ACCESS FULL | GL_REGION | 18 | 252 | 2 (0)| 00:00:01 |
    | 8 | NESTED LOOPS OUTER | | 100 | 22800 | 52 (0)| 00:00:01 |
    | 9 | NESTED LOOPS OUTER | | 100 | 19700 | 47 (0)| 00:00:01 |
    | 10 | NESTED LOOPS OUTER | | 100 | 17600 | 37 (0)| 00:00:01 |
    | 11 | NESTED LOOPS | | 100 | 15800 | 27 (0)| 00:00:01 |
    | 12 | NESTED LOOPS | | 102 | 2754 | 17 (0)| 00:00:01 |
    | 13 | TABLE ACCESS FULL | GL_BUYER | 6143K| 64M| 2 (0)| 00:00:01 |
    | 14 | TABLE ACCESS BY INDEX ROWID| GL_USER_BUYER_MAPPING | 1 | 16 | 1 (0)| 00:00:01 |
    |* 15 | INDEX RANGE SCAN | GL_USER_BUYER_MAPPPING_IX | 1 | | 1 (0)| 00:00:01 |
    | 16 | TABLE ACCESS BY INDEX ROWID | GL_USER_REGISTRATIONS | 1 | 131 | 1 (0)| 00:00:01 |
    |* 17 | INDEX UNIQUE SCAN | PK_GL_USER_REGISTRATIONS | 1 | | 1 (0)| 00:00:01 |
    | 18 | TABLE ACCESS BY INDEX ROWID | GL_USER_LAST_LOGIN | 1 | 18 | 1 (0)| 00:00:01 |
    |* 19 | INDEX UNIQUE SCAN | GL_USER_LAST_LOGIN_PK | 1 | | 1 (0)| 00:00:01 |
    | 20 | TABLE ACCESS BY INDEX ROWID | GL_USER_DISPLAY_NAMES | 1 | 21 | 1 (0)| 00:00:01 |
    |* 21 | INDEX UNIQUE SCAN | PK_GL_USER_DISPLAY_NAMES | 1 | | 1 (0)| 00:00:01 |
    | 22 | TABLE ACCESS BY INDEX ROWID | GL_CUSTOMERS | 1 | 31 | 1 (0)| 00:00:01 |
    |* 23 | INDEX UNIQUE SCAN | PK_GL_CUSTOMERS | 1 | | 1 (0)| 00:00:01 |
    |* 24 | TABLE ACCESS BY INDEX ROWID | GL_USER_ADDRESSES | 1 | 66 | 1 (0)| 00:00:01 |
    |* 25 | INDEX RANGE SCAN | IX_GL_USER_ADDRESSES1 | 1 | | 1 (0)| 00:00:01 |
    | 26 | MAT_VIEW ACCESS BY INDEX ROWID | MVW_USER_FIRST_BUY_DATE | 1 | 18 | 1 (0)| 00:00:01 |
    |* 27 | INDEX RANGE SCAN | MVW_USER_FS_DATE_IDX | 1 | | 1 (0)| 00:00:01 |
    | 28 | MAT_VIEW ACCESS BY INDEX ROWID | MVW_USER_PROMO_CODE_REG | 1 | 17 | 1 (0)| 00:00:01 |
    |* 29 | INDEX RANGE SCAN | MVW_USER_PROMO_CODE_IDX | 1 | | 1 (0)| 00:00:01 |
    | 30 | MAT_VIEW ACCESS BY INDEX ROWID | MVW_USER_LOYALTY_POINTS | 1 | 35 | 1 (0)| 00:00:01 |
    |* 31 | INDEX RANGE SCAN | MVW_USER_LYP_IDX | 1 | | 1 (0)| 00:00:01 |
    | 32 | MAT_VIEW ACCESS BY INDEX ROWID | MVW_USER_BALANCES | 1 | 25 | 1 (0)| 00:00:01 |
    |* 33 | INDEX RANGE SCAN | MVW_USER_BALANCES_IDX | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    6 - access("X"."BASE_REGION"="R"."REGION")
    15 - access("UPM"."BUYER_ID"="P"."BUYER_ID")
    17 - access("X"."USER_ID"="UPM"."USER_ID")
    19 - access("X"."USER_ID"="S3"."USER_ID"(+))
    21 - access("X"."USER_ID"="GUD"."USER_ID"(+))
    23 - access("X"."CUSTOMER_ID"="C"."CUSTOMER_ID"(+))
    24 - filter("Z"."IS_CURRENT"(+)=1)
    25 - access("X"."USER_ID"="Z"."USER_ID"(+))
    27 - access("X"."USER_ID"="G"."USER_ID"(+))
    29 - access("X"."USER_ID"="PR"."USER_ID"(+))
    31 - access("X"."USER_ID"="L"."USER_ID"(+))
    33 - access("X"."USER_ID"="IA"."USER_ID"(+))

    Hi,
    1) What you are saying about nested loops is true about any join (except, of course, cartesian joins): you are taking rows from rowsource A and find matching rows from rowsource B. This doesn't make a join method efficient or inefficient.
    2) The plan you posted does not indicate any performance problem whatsoever. I know you have one, but it's not possible to address it without having any information about it. Trace it, get dbms_xplan.display_cursor dump with rowsource stats, or real-time SQL monitoring report (if your version and license allow it) and post the results here, then we'd be able to help
    3) One efficient way to perform queries of your type (big fact table joined to a bunch of small dimension tables) is star transformation, but there are certain pre-requisites for that (like bitmap indexes on FK constraints) -- please read the documentation on star queries/transformations and see if that is an option for you
    Best regards,
    Nikolay

  • Building Tree hierarchy Using nested loops and class cl_gui_column_tree

    Hello gurus,
I want to create a tree report using a custom container and class cl_gui_column_tree. I have read and understood the standard demo report which SAP provides, i.e. SAPCOLUMN_TREE_CONTROL_DEMO, but in this report all the level nodes are created as constants and hardcoded. I want to create the hierarchy using nested loops. For this I took the example of a hierarchy VBAK-VBELN -> VBAP-POSNR, i.e. one sales order has many line items, and each line item can have a number of lines in its billing plan.
    I have done some coding for it.
    FORM build_tree USING node_table TYPE treev_ntab
                                           item_table TYPE zitem_table.              " i created the zitem_table table type of mtreeitm in SE11.
      DATA: node TYPE treev_node,
                 item TYPE mtreeitm.
      node-node_key = root.
      CLEAR node-relatkey.
      CLEAR node-relatship.
      node-hidden = ' '.
      node-disabled = ' '.
      CLEAR node-n_image.
      CLEAR node-exp_image.
      node-isfolder = 'X'.
      node-expander = 'X'.
      APPEND node TO node_table.
      item-node_key = root.
      item-item_name = colm1.
      item-class = cl_gui_column_tree=>item_class_text.
      item-text = 'Root'.
      APPEND item TO item_table.
      item-node_key = root.
      item-item_name = colm2.
      item-class = cl_gui_column_tree=>item_class_text.
      item-text = 'Amount'.
      APPEND item TO item_table.
      LOOP AT it_vbeln INTO wa_vbeln.
        node-node_key = wa_vbeln-vbeln.
        node-relatkey = root.
        node-relatship = cl_gui_column_tree=>relat_last_child.
        node-hidden = ' '.
        node-disabled = ' '.
        CLEAR node-n_image.
        CLEAR node-exp_image.
        node-isfolder = 'X'.
        node-expander = 'X'.
        APPEND node TO node_table.
        item-node_key = wa_vbeln-vbeln.
        item-item_name = colm1.
        item-class = cl_gui_column_tree=>item_class_text.
        item-text = wa_vbeln-vbeln.
        APPEND item TO item_table.
        item-node_key = wa_vbeln-vbeln.
        item-item_name = colm2.
        item-class = cl_gui_column_tree=>item_class_text.
        item-text = wa_vbeln-netwr.
        APPEND item TO item_table.
        LOOP AT it_posnr INTO wa_posnr.
        node-node_key = wa_posnr-posnr.
        node-relatkey = wa_vbeln-vbeln.
        node-relatship = cl_gui_column_tree=>relat_last_child.
        node-hidden = ' '.
        node-disabled = ' '.
        CLEAR node-n_image.
        CLEAR node-exp_image.
        node-isfolder = ' '.
        node-expander = ' '.
        APPEND node TO node_table.
        item-node_key = wa_posnr-posnr.
        item-item_name = colm1.
        item-class = cl_gui_column_tree=>item_class_text.
        item-text = wa_posnr-posnr.
        APPEND item TO item_table.
        item-node_key = wa_posnr-posnr.
        item-item_name = colm2.
        item-class = cl_gui_column_tree=>item_class_text.
        item-text = wa_posnr-netpr.
        APPEND item TO item_table.
        ENDLOOP.
      ENDLOOP.
    ENDFORM.
Now this program compiles fine and runs as long as there is only one level, that is root->vbeln. But when I add one more loop over it_posnr it gives me a runtime error of message type 'X'. The problem I found was the uniqueness of item-item_name, as all the sales orders have posnr = 0010. What could be done? I tried giving item_name a unique hierarchy level using counters, just like the stufe field in prps, e.g. 10.10.10, 10.10.20, 10.20.10, 10.20.20, 20.10.10 etc., but I still get a runtime error when I add one more hierarchy level using a nested loop. Please guide.

    Hello all,
Thanks, the issue is solved: the node key was not unique. I resolved the issue by generating a unique identifier for each level. Thanks all.
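For anyone hitting the same dump, a minimal sketch of one way to do this with a running counter (tv_nodekey is the node key type of TREEV_NODE; the other names come from the code above):
DATA: w_count(8) TYPE n,
      w_node_key TYPE tv_nodekey.
LOOP AT it_posnr INTO wa_posnr.
  ADD 1 TO w_count.
  w_node_key = w_count.                      " '00000001', '00000002', ...
  node-node_key  = w_node_key.               " unique across the whole tree
  node-relatkey  = wa_vbeln-vbeln.           " parent stays the sales order node
  node-relatship = cl_gui_column_tree=>relat_last_child.
  APPEND node TO node_table.
  item-node_key  = w_node_key.
  item-item_name = colm1.
  item-class     = cl_gui_column_tree=>item_class_text.
  item-text      = wa_posnr-posnr.
  APPEND item TO item_table.
ENDLOOP.
Because every node key (and therefore every node_key/item_name pair) is now unique, the tree control no longer dumps when the nodes and items are added.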
    Regards
    Yayati Ekbote

  • Problem with Nested loop in Fox-Formula

    Dear Experts,
Let's share the scenario:
MaterialGroups have keys such as AAAA, BBBB, CCCC, ... MaterialGroups are selected in the first input-ready query, which is assigned to DataProvider DP_1 in a web template.
Every MaterialGroup has several Materials, for instance:
MaterialGroup AAAA has the following Materials: AAAA10, AAAA11, AAAA12, AAAA13...
Materials are selected in a second input-ready query, which is assigned to a second DataProvider DP_2 in the same web template as query 1.
Both MaterialGroup and Material are based on the same aggregation level and the same real-time cube.
I want to copy the input values for every MaterialGroup (1st query, DP_1) only to its own Materials (2nd query, DP_2).
To resolve this issue I wrote the following FOX formula code with a nested loop; however, it does not work properly. When I debug the code, I can see that the second loop is ignored, but when I replace the second loop (nested loop) with a fixed value it seems to work properly.
DATA MG1 TYPE MATG. <------ MaterialGroup
DATA MT1 TYPE MAT.  <------ Material
    DATA S1 TYPE STRING.
    DATA S2 TYPE STRING.
DATA S3 TYPE STRING.
    DATA K TYPE F.
    DATA Z TYPE F.
    FOREACH MG1.
    To check Materialgroup in debugger
    S1= MG1.
    BREAK-POINT.
    K = {KEYfIG, #, MG1}.
    BREAK-POINT.
    FOREACH  MT1.   <----- if i set MT1 to a fixed value like AAAA11 then S3 get the wished value namely AAAA
    S2 = MT1.
    BREAK-POINT.
    S3 =  SUBSTR (MT1, 0, 4).  
    BREAK-POINT.
    IF S1 = S3.
    {KEYFIG, MT1, #} = K.
    following Statement is only used To check in debugger if Material has become the same Materialgroup value
    Z = {KEYFIG, MT1, #}.
    BREAK-POINT.
    ENDIF.
    ENDFOR.
    ENDFOR.
    Thakns for any help
    Frank

    Hi,
    Please try this way.
DATA MG1 TYPE MATG. <------ MaterialGroup
DATA MT1 TYPE MAT.  <------ Material
    DATA S1 TYPE STRING.
    DATA S2 TYPE STRING.
DATA S3 TYPE STRING.
    DATA K TYPE F.
    DATA Z TYPE F.
    FOREACH MT1.
    FOREACH MG1.
    To check Materialgroup in debugger
    S1= MG1.
    BREAK-POINT.
    K = {KEYfIG, #, MG1}.
    BREAK-POINT.
    FOREACH MT1. <----- if i set MT1 to a fixed value like AAAA11 then S3 get the wished value namely AAAA
    S2 = MT1.
    BREAK-POINT.
    S3 = SUBSTR (MT1, 0, 4).
    BREAK-POINT.
    IF S1 = S3.
    {KEYFIG, MT1, #} = K.
    following Statement is only used To check in debugger if Material has become the same Materialgroup value
    Z = {KEYFIG, MT1, #}.
    BREAK-POINT.
    ENDIF.
    ENDFOR.
    ENDFOR.
    ENDFOR.
    Thanks.
    With regards,
    Anand Kumar
