How to force Oracle to use HASH JOIN

Hi,
I have a performance issue with our SQL queries. If I use the hint /*+ ordered use_hash(table_name) */ the queries run very fast, as they use HASH JOIN instead of NESTED LOOPS. I have several thousand such queries and cannot modify them all to force Oracle to use HASH JOIN.
I have tried changing parameters in init.ora and re-analyzing the database, but the queries still use NESTED LOOPS.
Here are some parameters in my database:
hash_area_size=256M
pga_aggregate_target=6G
_pga_max_size=1953125K
My servers have 16G of RAM each. We are using RAC with 2 nodes.
Please advise.
Thx

I do not have an answer (apart from the obvious one of changing each and every query to add the hint), but I have one question.
A per-session hash area size of 256M? Is it really required? It could be a data warehouse, but 256M is a bit too much for me (maybe I have not yet worked on such a big environment).
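One thing worth checking before anything else: with workarea_size_policy = auto (the default once pga_aggregate_target is set), hash_area_size is ignored entirely, so setting it to 256M changes nothing. A minimal sketch of session-level settings that can bias the optimizer toward hash joins without editing the queries - the values are illustrative assumptions, not recommendations, so test first:

-- hash_area_size only takes effect if you deliberately switch to manual work areas:
alter session set workarea_size_policy = manual;
alter session set hash_area_size = 104857600;   -- bytes per hash work area
-- or keep automatic PGA management and make indexed nested-loop access
-- look more expensive, which tilts the costing toward full scans + hash joins:
alter session set optimizer_index_cost_adj = 400;

Settings like these could go in an AFTER LOGON trigger so the thousands of queries pick them up unchanged.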

Similar Messages

  • Force "in-memory" hash join ?

    Hi everyone,
    can somebody point out to me how to force Oracle to allocate a certain amount of memory to a certain hash join?
    I'm using an Oracle 10g instance with automatic memory management - I assume that setting "pga_aggregate_target" higher while keeping the number of sessions low will result in a greater amount of memory being allocated to the "remaining" sessions - is this correct?
    I am under the impression that Oracle still reserves quite a bit of memory and chooses to rather do a "ONE PASS" execution of a hash join and keep memory available than invest memory in an "OPTIMAL" execution, while risking that other sessions (which are being repressed in my current scenario) might not get their share.
    Would be really glad to hear a solution for this.
    With regards,
    M. Wellbrock

    It's always possible to do:
    alter session set workarea_size_policy = manual;
    alter session set hash_area_size = XXXXXX;
    run query
    alter session set workarea_size_policy = auto;
    Be careful - because the automatic memory mechanism will see that your session has taken the memory, and the memory you use may be larger than the value you specify if your query includes more than one hash join.
    Regards
    Jonathan Lewis
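    To check how much memory the hash join actually got after these settings, a quick sketch against V$SQL_WORKAREA_ACTIVE (run from another session while the query executes):

    select sid, operation_type,
           round(actual_mem_used/1024/1024) mem_mb,
           round(max_mem_used/1024/1024)    max_mb,
           number_passes
      from v$sql_workarea_active
     where operation_type like 'HASH%';
    -- number_passes = 0 means the join is running "OPTIMAL", entirely in memory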

  • How to force Oracle AS 10gR2 installer to use IP and hostname on 2nd NIC

    My target machine for Oracle AS 10gR2 10.1.2.0.2 has two NICs, for example:
    NIC1: 10.0.0.1 hostname1
    NIC2: 10.0.0.2 hostname2
    The hostname of this machine is hostname1
    The Oracle AS installer will use hostname1 and IP address 10.0.0.1 for the installation by default. Does anyone know how I can force the installer to use hostname2 and 10.0.0.2 instead? Thank you.

    Were you able to force the installer to use a specific NIC interface when installing? I would like to have two mid tiers on the same box, each with a different host and IP.

  • How to access Oracle using DOS-based FoxPro (2.5, 2.6)

    Hello Sir,
    I need some clarification regarding the following question.
    Q. How can I access an Oracle database using DOS-based FoxPro, version 2.5 or 2.6?
    Thanking you,
    Kishore.P

    Hi Alex,
    the host and port depends on your network setup of your VM.
    Do an ifconfig -a and see what IP address your guest has.
    With this IP address you should be able to access EM from outside your VM (but on the VM host, not from outside the network) with the same port.
    Regards
    Sebastian

  • How to decode using a hash value

    Hi Guys,
    I have created a function Enc_Password on my form. Following is the code:
    FUNCTION Enc_Password ( P_String IN VARCHAR2 ) RETURN VARCHAR2
    IS
    BEGIN
    RETURN DBMS_UTILITY.Get_Hash_Value( P_String, 9, 1001000200 );
    END Enc_Password;
    Now I call this function in the button-press event of my form with this code:
    Dec_Pass := Enc_Password (:login_blk.password);
    The password is encrypted.
    How can I decrypt it again? Is there any reverse?
    Is there any reverse using the hash value?
    Please help,
    Imran Baig

    Hi,
    Thanks for the reply.
    Actually I have already used this in one of my forms. Just now I have found that in some cases I have to decrypt/decode the password as well.
    Does it mean there is no way to decode the data, and that a hash value is just used for encoding?
    Please help,
    Imran Baig
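    GET_HASH_VALUE is a one-way hash, not encryption, so there is no reverse. The usual workaround is never to decrypt at all: store the hash once, then at login hash what the user typed and compare. A minimal sketch, assuming a hypothetical APP_USERS table with a PASS_HASH column (the :login_blk items are from your form):

    DECLARE
      l_stored  NUMBER;   -- hash saved when the password was set
      l_entered NUMBER;
    BEGIN
      l_entered := DBMS_UTILITY.GET_HASH_VALUE(:login_blk.password, 9, 1001000200);
      SELECT pass_hash INTO l_stored
        FROM app_users
       WHERE username = :login_blk.username;
      IF l_entered <> l_stored THEN
        RAISE_APPLICATION_ERROR(-20001, 'Invalid password');
      END IF;   -- if we get here, the password matched
    END;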

  • Going Crazy! How to force startup using firewire drive on G3

    Okay, I think I'm going crazy. I have searched last night and today for my answers and still no solution.
    I have an old G3 I'm trying to install Tiger on. I don't have a DVD drive in the G3, so I installed onto a FireWire drive from my G4. I booted from the FireWire drive and manually copied all the files over to the internal drive on the G3. Some things wouldn't copy. Obviously, this is part of my problem. Unfortunately, I did not run Disk Utility, etc. before restarting after doing this. Now, I get a kernel panic on a grey screen with a frozen beach ball cursor. When I reboot from my TechTool 4 Pro disk, I can access the program, but it's all fuzzy grey on the screen. Plus, that didn't work. I have tried to reboot from my older OS 10.1 install disk, but I can't do anything with it, and it won't install over the newer "incomplete" system. If I restart pushing the X or C or Option key, nothing happens. The G3 has two internal drives, each with a system on it. Neither will start up. The FireWire drive is hooked up, and I can't reboot from that.
    Can anyone tell me the shortcut to reboot from the external firewire drive? What do I push? Or is

    G3s cannot boot from FireWire drives:
    http://docs.info.apple.com/article.html?artnum=58238#faq10

  • Generally, when does the optimizer use nested loops and hash joins?

    Version: 11.2.0.3, 10.2
    Let's say I have two tables, ORDER and ORDER_DETAIL.
    ORDER_DETAIL is the child table of ORDER.
    This is what I understand about Nested Loop:
    When we join the ORDER and ORDER_DETAIL tables, Oracle will form a 'nested loop' in which, for each order_ID in the ORDER table (outer loop), Oracle will look for the corresponding ORDER_IDs in the ORDER_DETAIL table.
    Is a nested loop used when the driving table (ORDER in this case) is smaller than the child table (ORDER_DETAIL)?
    Is a nested loop more likely to use indexes in general?
    How will these two tables be joined using hash joins?
    When is it ideal to use hash joins?

    Your description of a nested loop is correct.
    The overall rule is that Oracle will use the plan that it calculates to be, in general, fastest. That mostly means fewest I/Os, but there are various factors that adjust its calculation - e.g. it expects index blocks to be cached, multiple entries read from an index may reference the same block, and full scans get multiple blocks per I/O. It also has CPU cost calculations, but they generally only become significant with function / package calls or domain indexes (spatial, text, ...).
    Nested loop with an index will require one indexed read of the child table per outer table row, so its I/O cost is roughly twice the number of rows expected to match the where clause conditions on the outer table.
    A hash join reads the smaller table into an in-memory hash table, then matches the rows from the larger table against it, so its I/O cost is the cost of a full scan of each table (unless the smaller table is too big to fit in a single in-memory hash table). Hash joins generally don't use indexes to look up each result, but an index can serve as a data source - as a narrow version of the table, or as a way to identify the rows satisfying the other where clause conditions.
    If you are processing the whole of both tables, Oracle is likely to use a hash join, and be very fast and efficient about it.
    If your where clause restricts it to just a few rows from the parent table and a few corresponding rows from the child table, and you have a suitable index, Oracle is likely to use a nested loops solution.
    If the tables are very small, either plan is efficient - you may be surprised by its choice.
    Don't worry about plans with full scans and hash joins - they are usually very fast and efficient. Bad performance often comes from having to do nested loop lookups for lots of rows.
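    An easy way to see both behaviours on the same pair of tables is to hint each method and compare the plans. A sketch, assuming the tables from the question with an ORDER_ID join column (ORDER is a reserved word, so ORDERS is used here):

    select /*+ leading(o) use_nl(d) */ *
      from orders o, order_detail d
     where d.order_id = o.order_id and o.order_id = 42;
    select /*+ leading(o) use_hash(d) */ *
      from orders o, order_detail d
     where d.order_id = o.order_id;
    -- after running each one:
    select * from table(dbms_xplan.display_cursor(null, null, 'BASIC'));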

  • Forcing / reuse plan hash value ...

    friends....
    db: 11gr2
    os: linux
    for quite some time I have been trying to tune this query but somehow can't get the runtime below 35 seconds...
    I tried to get recent plan details from the sql_id but saw the database was using "Cardinality feedback used for this statement" and was showing me many different plans with different hash values....
    some of the plans are really good and finish in 5 seconds, and some took 55 seconds.
    my question is
    1. how can I force Oracle to run the query with the historic plan hash value that had the minimal runtime? Is that possible?
    2. can we see what hints or changes Oracle applied to the query for that particular plan hash value?
    I can see the explain plan but fail to map the id/operation details back to the actual query.
    my current explain plan slows at the operation ids below:
    0. SELECT STATEMENT         12 rows     00:00:35
    1.  SORT ORDER BY           12 rows     00:00:35
    2.   NESTED LOOPS OUTER     12 rows     00:00:35
    3.    NESTED LOOPS          12 rows     00:00:35
    4.     NESTED LOOPS         12K rows    00:00:01
    5.      SORT UNIQUE          4 rows     00:00:01
    ...but the hash value with the good plan uses "Hash Join Right Semi" for step 3 and "access table by index" for step 4...

    using "Cardinality feedback used for this statement" and was showing me many different plan with different hash value ....
    some of the plans are really good and finishes in 5 seconds and some took 55 seconds.A not unusual experience with cardinality feedback.
    If you don't like the feature - and my experiences have been very mixed - turn it off.
    Either via system wide parameter optimizeruse_feedback or on a statement basis using the parameter via opt_param hint or alter session.
    "1. how can I force oracle to use/run query as per historic hash value with minimal runtime? is it possible?"
    a. SQL Plan Baselines
    b. SQL Profile via COE_XFR_SQL_PROFILE.sql, see Oracle Support doc id 215187.1.
    c. Hint the plan you want using the outline hints from the plans above - see DBMS_XPLAN.DISPLAY_AWR and the format parameter '+OUTLINE'
    "2. can we see what hint or changes done in query by oracle for that particular hash value?"
    See DBMS_XPLAN.DISPLAY_CURSOR / DISPLAY_AWR and the format parameter '+OUTLINE'
    "my current explain plan slows at below operation id ... but hash value with good plan using "Hash Join Right Semi" for step-3 and "access table by index" for step 4..."
    You've got the template tuning threads, which detail the information required to comment on this.
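    A sketch of option (c) combined with question 2 - pull the outline hints of the known-good plan out of AWR (substitute your own sql_id and plan hash value):

    select * from table(
      dbms_xplan.display_awr('&sql_id', &good_plan_hash_value, null, '+OUTLINE'));
    -- the OUTLINE DATA section lists the complete hint set between
    -- /*+ BEGIN_OUTLINE_DATA ... END_OUTLINE_DATA */; pasting that into the
    -- statement pins the plan by hand if you don't want a baseline or profile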

  • Nested loop vs Hash Join

    Hi,
    Both queries return the same results, but my first query uses a hash join and my second a nested loop. How? Please explain.
    select *
    from emp a,dept b
    where a.deptno=b.deptno and b.deptno>20;
    6 rows
    Plan hash value: 4102772462
    | Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |         |     6 |   348 |     6  (17)| 00:00:01 |
    |*  1 |  HASH JOIN                   |         |     6 |   348 |     6  (17)| 00:00:01 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| DEPT    |     3 |    60 |     2   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN          | PK_DEPT |     3 |       |     1   (0)| 00:00:01 |
    |*  4 |   TABLE ACCESS FULL          | EMP     |     7 |   266 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("A"."DEPTNO"="B"."DEPTNO")
       3 - access("B"."DEPTNO">20)
       4 - filter("A"."DEPTNO">20)
    select *
    from emp a,dept b
    where a.deptno=b.deptno and b.deptno=30;  
    6 rows
    Plan hash value: 568005898
    | Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |         |     5 |   290 |     4   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS                |         |     5 |   290 |     4   (0)| 00:00:01 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| DEPT    |     1 |    20 |     1   (0)| 00:00:01 |
    |*  3 |    INDEX UNIQUE SCAN         | PK_DEPT |     1 |       |     0   (0)| 00:00:01 |
    |*  4 |   TABLE ACCESS FULL          | EMP     |     5 |   190 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("B"."DEPTNO"=30)
       4 - filter("A"."DEPTNO"=30)

    Hi,
    Unless specifically requested, Oracle picks the best execution plan based on estimates of table sizes, column selectivity and many other variables. Even though Oracle does its best to make the estimates as accurate as possible, they are frequently different, and in some cases quite different, from the actual values.
    In the first query, Oracle estimated that the predicate "b.deptno>20" would limit the number of records to 6, and based on that it decided to use a Hash Join.
    In the second query, Oracle estimated that the predicate "b.deptno=30" would limit the number of records to 5, and based on that it decided to use a Nested Loops Join.
    The fact that the actual number of records is the same is irrelevant, because Oracle used the estimates, rather than the actual number of records, to pick the best plan.
    HTH,
    Iordan
    Iotzov
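    A convenient way to see Iordan's point directly is to compare estimated and actual row counts step by step. A sketch (needs the gather_plan_statistics hint, or statistics_level = all):

    select /*+ gather_plan_statistics */ *
      from emp a, dept b
     where a.deptno = b.deptno and b.deptno > 20;
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    -- E-Rows is the optimizer's estimate, A-Rows what actually came back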

  • HASH JOIN Issue - query performance issue

    Hello friends:
    I have a long nested-loop query. When I execute it on Oracle 9i, the results come back in no time (less than a second), but on Oracle 10g, with the same query and the same schema, it takes over 20-30 secs. Looking at the execution plans (pasted below), Oracle 10g is using a HASH JOIN compared to NESTED LOOPS on Oracle 9i. The cost on 9i is much lower than on 10g due to the hash join on 10g.
    Here are the explain plans. How do I make the 10g plan use the one from 9i - the nested loop?
    Oracle 10g:
    SORT (AGGREGATE)                                                      1        33
    HASH JOIN                                               25394      2082     68706
    MAT_VIEW ACCESS (FULL) GMDB.NODE_ATTRIBUTES  ANALYZED   17051      2082     56214
    VIEW                                                     8318   1714180  10285080
    FILTER
    CONNECT BY (WITH FILTERING)
    Oracle 9i:
    SORT (AGGREGATE)                                                      1        40
    NESTED LOOPS                                              166        65      2600
    VIEW                                                       36        65       845
    FILTER

    just noticed the "CONNECT BY" in the plan. connect by queries are special cases, so you should spell that out loud and clear for us next time. anyway, lots of problems in v10 with connect by:
    metalink Note:394358.1
    CONNECT BY SQL uses Full Table Scan In 10.2.0.2 - Used Index In earlier versions
    solution:
    alter session set "_optimizer_connect_by_cost_based"=false;
    and bugs related to connect by were fixed in the 10.2.0.2 release (other than the bug from the note above, which was introduced in that release):
    metalink Note:358749.1
    and
    Note:3956141.8
    OR filter in CONNECT BY query may be applied at table level (v 10.1.0.3)
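    If a session- or system-wide change is too broad, the same setting can be scoped to a single statement with the opt_param hint. A sketch on the classic EMP table just to show the hint placement (hidden parameters are best changed only under Oracle Support's guidance):

    select /*+ opt_param('_optimizer_connect_by_cost_based', 'false') */
           empno, ename, level
      from emp
     start with mgr is null
    connect by prior empno = mgr;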

  • Forcing Index use with UPPER or LOWER in the WHERE clause.

    Does anyone know how to force Oracle to use an index/Key when using UPPER or LOWER in the WHERE clause?

    You have to create a function-based index. Check your documentation on it.
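    A minimal sketch on the classic EMP table (on old releases, 8i/9i, you may also need query_rewrite_enabled = true for the optimizer to consider it):

    create index emp_upper_ename_ix on emp (upper(ename));
    -- the predicate must use the same expression as the index
    select * from emp where upper(ename) = 'SMITH';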

  • Merge joins turned into expensive hash joins!?

    Hello.....
    working with two tables....
    Table A is where I am bringing data into.....
    Table B is where the data resides.
    Table B has 32 million rows...
    I was using just a quick.....select b.whatever from table.a, table.b where primary key and b.table=a.table.....
    I could grab a few rows at a time and it would use a merge join and take 2-3 seconds.
    Now (after NO changes) it hangs. In production it is still fast and uses merge joins, but I compared the activity to that of test and it's using hash joins! The hidden parameter _use merge join will force a merge, but then none of my other hash joins (where I want them) will work.
    Optimizer dorked up? What's the scoop?

    hmm, well your replies are just as ingenious... here's a cookie.
    10.2.0.4.0, and I put the relevant sql in there, enough to ask why (not that it matters) this behavior over a dblink would all of a sudden change.
    Again, a simple matter of creating a nested loop over a dblink to grab a few rows from table b and insert them into table a. It used to merge join, now it's hash joining. Just wondering :-)
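    If the goal is just this one statement, hinting the join method is less drastic than a hidden parameter. A sketch with made-up names, since the full SQL wasn't posted:

    select /*+ leading(a) use_merge(b) */ b.whatever
      from table_a a, table_b@remote_db b
     where b.pk_col = a.pk_col;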

  • Hash Joins

    I was wondering if someone can give me an example of how a hash join works in Java... some code would be really helpful.
    How do I apply a hashing function? Does it mean just getting the hashcode of the object?

    I have 2 different arrays of ints, say int one[] and int two[], and int two[] has unique values. I want to get the ints common to both arrays by using a hash join.
    I converted int one[] to Integer One[] and
    int two[] to Integer Two[] to be able to get hashcodes, since only objects have hashcodes...
    Once I get both hashcodes I am comparing them...
    When two objects have the same value then they should have the same hashcode... I was under this impression until I got the result below:
    int Number in 1 Integer object is =121234
    Hashcode of 1 = 121234
    int Number in 2nd Integer =143906
    Hashcode of 2nd = 121234
    So I am not able to retrieve the common Integer based on the hashcode... how should I proceed? I think I am totally going in the wrong direction.

  • Seeking advice on a heavy hash join

    We have to self-join a 190 million row table 160 times. On our production 9i database we give the "RULE" hint and it finishes in about an hour using nested loop joins. On our test 10g database, since "RULE" is not available any more, the optimizer chooses to use hash joins (160 levels), and the query never finishes in a reasonable time. We have gradually increased hash_area_size from 8M to 512M, thinking that would help, but apparently it does not. Can anyone provide suggestions? Thanks.

    There is an approach using Analytics Functions that may be useful here. The idea is to get all the data values required with 1 reference to Tab2 rather than 160, and let Analytics do the work of building the result set.
    The goal here is to 'never do multiple references to the same table where 1 reference can suffice (usually with the aid of Analytics)'.
    There are 2 variations depending on whether the ID values here (0,10,20,30,...) always follow an ascending pattern or if they don't. The second example is more generic and will also cover the ascending case.
    name   ID  val  ID  val  ID  val  ID  val  ID  val
    name1   0    1  10    1  20    1  30    1  40    1
    -- Performance Summary: this demo used a max value of 1600 vs 15000, for a net number of rows of 2.5 million versus 225 million.
    Any of the test views (replacing 5, 10, or 160 tab2 references) using the Analytics function used at most 4099 consistent gets. The original approach used 20553 consistent gets for 5 tab2 references and 41016 consistent gets for 10 tab2 references. This was in 10g, doing the hash join. I did try alter session set optimizer_mode = rule (just for test purposes) and that resulted in 46462 consistent gets for the 10 tab2 reference view while it did a merge join operation.
    Autotrace for the Analytics version to replace the original 160 table sql.
    JT(147)@JTDB10G>select * from analytics_2_joins;
    1600 rows selected.
    Statistics
    0 recursive calls
    0 db block gets
    4099 consistent gets
    3710 physical reads
    0 redo size
    1112904 bytes sent via SQL*Net to client
    1250 bytes received via SQL*Net from client
    81 SQL*Net roundtrips to/from client
    2 sorts (memory)
    0 sorts (disk)
    1600 rows processed
    -- Minimal examples:
    --As always, test thoroughly before using in production.
    select name,
         id0, value value0,
         id1, value value1,
         id2, value value2,
         id3, value value3,
         id4, value value4
    from ( select name, id, value,
         lead(id,0 ) over(partition by name, value order by rn ) id0 ,
         lead(id,1 ) over(partition by name, value order by rn ) id1 ,
         lead(id,2 ) over(partition by name, value order by rn ) id2 ,
         lead(id,3 ) over(partition by name, value order by rn ) id3 ,
         lead(id,4 ) over(partition by name, value order by rn ) id4 ,
    rn
    from( select tab1.name, i0.id, i0.tab1value value,
    (row_number() over (partition by tab1value order by i0.id )) -1 rn
    from tab1, tab2 i0 where tab1.name='name1' and i0.tab1value=tab1.value
    and i0.id in (0,10,20,30,40)
    )) where rn = 0;
    -- execute a version of the smaller analytics approach with bind variables
    -- referencing the binds within the numbered in-line view is needed only if
    -- the id0, id1, id2 values do not follow the ascending pattern shown in the
    -- example. This will handle cases where id0 = 30, id2 = 20, id4 = 40, etc.
    variable l_id0 number;
    variable l_id1 number;
    variable l_id2 number;
    variable l_id3 number;
    variable l_id4 number;
    exec :l_id1 := 0;
    exec :l_id3 := 10;
    exec :l_id2 := 20;
    exec :l_id0 := 30;
    exec :l_id4 := 40;
    select name, bind_rn,
         id0, value value0,
         id1, value value1,
         id2, value value2,
         id3, value value3,
         id4, value value4
    from ( select name, id, value,
         lead(id,0 ) over(partition by name, value order by bind_rn ) id0 ,
         lead(id,1 ) over(partition by name, value order by bind_rn ) id1 ,
         lead(id,2 ) over(partition by name, value order by bind_rn ) id2 ,
         lead(id,3 ) over(partition by name, value order by bind_rn ) id3 ,
         lead(id,4 ) over(partition by name, value order by bind_rn ) id4 ,
         bind_rn
    from( select tab1.name, i0.id, i0.tab1value value, bind_rn
         from tab1, tab2 i0,
              (select 0 bind_rn, :l_id0 arg_value from dual union
              select 1 , :l_id1 from dual union
              select 2 , :l_id2 from dual union
              select 3 , :l_id3 from dual union
              select 4 , :l_id4 from dual ) table_of_args
         where tab1.name='name1' and i0.tab1value=tab1.value
    -- and i0.id in (0,10,20,30,40)
    and i0.id = table_of_args.arg_value
    )) where bind_rn = 0;
    -- Full Test Case
    -- table setup
    drop table tab1;
    drop table tab2;
    create table tab1(name varchar2(100), value number) pctfree 0;
    create table tab2(id number, tab1value number) pctfree 0;
    begin
    for x in 0 .. 1600 loop
    for y in 1 .. 1600 loop
         insert into tab1 values ('name' || x, y);
    end loop;
    end loop;
    end;
    /
    -- 15000 results in 225,000,000
    -- 1600 results in 2,560,000
    begin
    for x in 0 .. 1600 loop
    for y in 1 .. 1600 loop
         insert into tab2 values (x, y);
    end loop;
    end loop;
    end;
    /
    commit;
    CREATE BITMAP INDEX NAME_BITMAP ON TAB1(NAME);
    EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'JTOMMANEY',TABNAME => 'TAB1', -
         estimate_percent => 20,     CASCADE => TRUE);
    EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'JTOMMANEY',TABNAME => 'TAB2', -
         estimate_percent => 20,     CASCADE => TRUE);
    alter session set optimizer_mode = 'RULE';
    -- set up some views for both the original approach and the analytics approach
    create view original_5_tab2_tables_join as
    select tab1.name name,
         i0.id id0, i0.tab1value value0,
         i1.id id1, i1.tab1value value1,
         i2.id id2, i2.tab1value value2,
         i3.id id3, i3.tab1value value3,
         i4.id id4, i4.tab1value value4
    from tab1,
    tab2 i0, tab2 i1, tab2 i2, tab2 i3, tab2 i4
    where tab1.name='name1'
    and (i0.id=0 and i0.tab1value=tab1.value)
    and (i1.id=10 and i1.tab1value=tab1.value)
    and (i2.id=20 and i2.tab1value=tab1.value)
    and (i3.id=30 and i3.tab1value=tab1.value)
    and (i4.id=40 and i4.tab1value=tab1.value);
    create view replace_5_tab2_joins as
    select name,
         id0, value value0,
         id1, value value1,
         id2, value value2,
         id3, value value3,
         id4, value value4
    from ( select name, id, value,
         lead(id,0 ) over(partition by name, value order by rn ) id0 ,
         lead(id,1 ) over(partition by name, value order by rn ) id1 ,
         lead(id,2 ) over(partition by name, value order by rn ) id2 ,
         lead(id,3 ) over(partition by name, value order by rn ) id3 ,
         lead(id,4 ) over(partition by name, value order by rn ) id4 ,
    rn from( select tab1.name, i0.id, i0.tab1value value,
    (row_number() over (partition by tab1value order by i0.id )) -1 rn
    from tab1, tab2 i0 where tab1.name='name1' and i0.tab1value=tab1.value
    and i0.id in (0,10,20,30,40)
    )) where rn = 0;
    create view original_10_tab2_tables_join as
    select tab1.name name,
         i0.id id0, i0.tab1value value0,
         i1.id id1, i1.tab1value value1,
         i2.id id2, i2.tab1value value2,
         i3.id id3, i3.tab1value value3,
         i4.id id4, i4.tab1value value4,
         i5.id id5, i5.tab1value value5,
         i6.id id6, i6.tab1value value6,
         i7.id id7, i7.tab1value value7,
         i8.id id8, i8.tab1value value8,
         i9.id id9, i9.tab1value value9
    from tab1,
    tab2 i0, tab2 i1, tab2 i2, tab2 i3, tab2 i4,
    tab2 i5, tab2 i6, tab2 i7, tab2 i8, tab2 i9
    where tab1.name='name1'
    and (i0.id=0 and i0.tab1value=tab1.value)
    and (i1.id=10 and i1.tab1value=tab1.value)
    and (i2.id=20 and i2.tab1value=tab1.value)
    and (i3.id=30 and i3.tab1value=tab1.value)
    and (i4.id=40 and i4.tab1value=tab1.value)
    and (i5.id=50 and i5.tab1value=tab1.value)
    and (i6.id=60 and i6.tab1value=tab1.value)
    and (i7.id=70 and i7.tab1value=tab1.value)
    and (i8.id=80 and i8.tab1value=tab1.value)
    and (i9.id=90 and i9.tab1value=tab1.value);
    create view replace_10_tab2_joins as
    select name,
         id0, value value0,
         id1, value value1,
         id2, value value2,
         id3, value value3,
         id4, value value4,
         id5, value value5,
         id6, value value6,
         id7, value value7,
         id8, value value8,
         id9, value value9
    from ( select name, id, value,
         lead(id,0 ) over(partition by name, value order by rn ) id0 ,
         lead(id,1 ) over(partition by name, value order by rn ) id1 ,
         lead(id,2 ) over(partition by name, value order by rn ) id2 ,
         lead(id,3 ) over(partition by name, value order by rn ) id3 ,
         lead(id,4 ) over(partition by name, value order by rn ) id4 ,
         lead(id,5 ) over(partition by name, value order by rn ) id5 ,
         lead(id,6 ) over(partition by name, value order by rn ) id6 ,
         lead(id,7 ) over(partition by name, value order by rn ) id7 ,
         lead(id,8 ) over(partition by name, value order by rn ) id8 ,
         lead(id,9 ) over(partition by name, value order by rn ) id9 ,
    rn from( select tab1.name, i0.id, i0.tab1value value,
    (row_number() over (partition by tab1value order by i0.id )) -1 rn
    from tab1, tab2 i0 where tab1.name='name1' and i0.tab1value=tab1.value
    and i0.id in (0,10,20,30,40,50,60,70,80,90)
    )) where rn = 0;
    -- set up some views for both the original approach and the analytics approach
    set serveroutput on
    spool cr_v1.sql
    -- may need to clean up the heading and linefeeds in the created file
    begin
    dbms_output.put_line('create or replace view original_160_joins as select /*+ rule */ tab1.name ');
    for x in 0 .. 160 loop
    dbms_output.put_line( ',i' || x || '.id id' || x || ' ,i' || x || '.tab1value value' || x ) ;
    end loop;
    dbms_output.put_line('from tab1' );
    for x in 0 .. 160 loop
    dbms_output.put_line( ',tab2 i' || x ) ;
    end loop;
    dbms_output.put_line(' where tab1.name = ''name1''' );
    for x in 0 .. 160 loop
    dbms_output.put_line( ' and i' || x || '.id=' || (x * 10) || ' and i' || x || '.tab1value=tab1.value ' ) ;
    end loop;
    dbms_output.put_line( ' ;');
    end;
    /
    --spool off
    --@cr_v1.sql
    spool cr_v2.sql
    -- may need to clean up the heading and linefeeds in the created file
    begin
    dbms_output.put_line('create or replace view analytics_2_joins as select name ');
    for x in 0 .. 160 loop
    dbms_output.put_line( ',id' || x || ', value value' || x ) ;
    end loop;
    dbms_output.put_line('from ( select name, id, value ' );
    for x in 0 .. 160 loop
    dbms_output.put_line( ',lead(id,' || x || ') over(partition by name, value order by rn ) id' || x ) ;
    end loop;
    dbms_output.put_line(' , rn from( select tab1.name, i0.id, i0.tab1value value, ');
    dbms_output.put_line(' (row_number() over (partition by tab1value order by i0.id )) -1 rn ');
    dbms_output.put_line(' from tab1, tab2 i0 where tab1.name=''name1'' and i0.tab1value=tab1.value and i0.id in ( ');
    for x in 0 .. 159 loop
    dbms_output.put_line( (x * 10) || ',' ) ;
    end loop;
    dbms_output.put_line( ' 1600))) where rn = 0;');
    end;
    /
    --spool off
    --@cr_v2.sql
    -- We now have 6 views established:
    --                Original Approach               Analytics Approach w/ 1 tab2 reference
    --   5 tab2s:     original_5_tab2_tables_join     replace_5_tab2_joins
    --  10 tab2s:     original_10_tab2_tables_join    replace_10_tab2_joins
    -- 160 tab2s:     original_160_joins              analytics_2_joins
    -- plus we will call the version with bind variables, but not from a view.
    -- Data validation:
    select 'orig_minus_new: ' || count(*) from
    ( select * from original_5_tab2_tables_join minus select * from replace_5_tab2_joins ) union
    select 'new_minus_orig: ' || count(*) from
    ( select * from replace_5_tab2_joins minus select * from original_5_tab2_tables_join );
    select 'orig_minus_new: ' || count(*) from
    ( select * from original_10_tab2_tables_join minus select * from replace_10_tab2_joins ) union
    select 'new_minus_orig: ' || count(*) from
    ( select * from replace_10_tab2_joins minus select * from original_10_tab2_tables_join );
    select 'orig_minus_new: ' || count(*) from
    ( select * from original_160_joins minus select * from analytics_2_joins );
    select 'new_minus_orig: ' || count(*) from
    ( select * from analytics_2_joins minus select * from original_160_joins );
    -- Performance test
    alter session set workarea_size_policy=manual ;
    alter session set sort_area_size = 64000000;
    alter session set hash_area_size = 64000000;
    set autotrace traceonly stat
    select * from original_5_tab2_tables_join;
    select * from replace_5_tab2_joins;
    select * from original_10_tab2_tables_join;      
    select * from replace_10_tab2_joins;
    select * from analytics_2_joins;
    --select * from original_160_joins;          
    -- execute a version of the smaller analytics approach with bind variables
    -- referencing the binds within the numbered in-line view is needed only if
    -- the id0, id1, id2 values do not follow the ascending pattern shown in the
    -- example. This will handle cases where id0 = 30, id2 = 20, id4 = 40, etc.
    variable l_id0 number;
    variable l_id1 number;
    variable l_id2 number;
    variable l_id3 number;
    variable l_id4 number;
    exec :l_id1 := 0;
    exec :l_id3 := 10;
    exec :l_id2 := 20;
    exec :l_id0 := 30;
    exec :l_id4 := 40;
    select name, bind_rn,
         id0, value value0,
         id1, value value1,
         id2, value value2,
         id3, value value3,
         id4, value value4
    from ( select name, id, value,
         lead(id,0 ) over(partition by name, value order by bind_rn ) id0 ,
         lead(id,1 ) over(partition by name, value order by bind_rn ) id1 ,
         lead(id,2 ) over(partition by name, value order by bind_rn ) id2 ,
         lead(id,3 ) over(partition by name, value order by bind_rn ) id3 ,
         lead(id,4 ) over(partition by name, value order by bind_rn ) id4 ,
         bind_rn
    from( select tab1.name, i0.id, i0.tab1value value, bind_rn
         from tab1, tab2 i0,
              (select 0 bind_rn, :l_id0 arg_value from dual union
              select 1 , :l_id1 from dual union
              select 2 , :l_id2 from dual union
              select 3 , :l_id3 from dual union
              select 4 , :l_id4 from dual ) table_of_args
         where tab1.name='name1' and i0.tab1value=tab1.value
    -- and i0.id in (0,10,20,30,40)
    and i0.id = table_of_args.arg_value
    )) where bind_rn = 0;
    JT(147)@JTDB10G>select * from original_5_tab2_tables_join;
    1600 rows selected.
    Statistics
    8 recursive calls
    2 db block gets
    20553 consistent gets
    18555 physical reads
    0 redo size
    52052 bytes sent via SQL*Net to client
    1250 bytes received via SQL*Net from client
    81 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1600 rows processed
    JT(147)@JTDB10G>select * from replace_5_tab2_joins;
    1600 rows selected.
    Statistics
    8 recursive calls
    2 db block gets
    4101 consistent gets
    3710 physical reads
    0 redo size
    52052 bytes sent via SQL*Net to client
    1250 bytes received via SQL*Net from client
    81 SQL*Net roundtrips to/from client
    2 sorts (memory)
    0 sorts (disk)
    1600 rows processed
    JT(147)@JTDB10G>select * from original_10_tab2_tables_join;
    1600 rows selected.
    Statistics
    0 recursive calls
    0 db block gets
    41016 consistent gets
    37115 physical reads
    0 redo size
    85636 bytes sent via SQL*Net to client
    1250 bytes received via SQL*Net from client
    81 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1600 rows processed
    JT(147)@JTDB10G>select * from replace_10_tab2_joins;
    1600 rows selected.
    Statistics
    0 recursive calls
    0 db block gets
    4099 consistent gets
    3710 physical reads
    0 redo size
    85636 bytes sent via SQL*Net to client
    1250 bytes received via SQL*Net from client
    81 SQL*Net roundtrips to/from client
    2 sorts (memory)
    0 sorts (disk)
    1600 rows processed
    JT(147)@JTDB10G>select * from analytics_2_joins;
    1600 rows selected.
    Statistics
    0 recursive calls
    0 db block gets
    4099 consistent gets
    3710 physical reads
    0 redo size
    1112904 bytes sent via SQL*Net to client
    1250 bytes received via SQL*Net from client
    81 SQL*Net roundtrips to/from client
    2 sorts (memory)
    0 sorts (disk)
    1600 rows processed
    JT(147)@JTDB10G>--select * from original_160_joins;
    JT(147)@JTDB10G>
    JT(147)@JTDB10G>----------------------------------------------------------------------------------------
    JT(147)@JTDB10G>-- execute a version of the smaller analytics approach with bind variables
    JT(147)@JTDB10G>-- referencing the binds within the numbered in-line view is needed only if
    JT(147)@JTDB10G>-- the id0, id1, id2 values do not follow the ascending pattern shown in the
    JT(147)@JTDB10G>-- example. This will handle cases where id0 = 30, id2 = 20, id4 = 40, etc.
    JT(147)@JTDB10G>----------------------------------------------------------------------------------------
    JT(147)@JTDB10G>
    JT(147)@JTDB10G>variable l_id0 number;
    JT(147)@JTDB10G>variable l_id1 number;
    JT(147)@JTDB10G>variable l_id2 number;
    JT(147)@JTDB10G>variable l_id3 number;
    JT(147)@JTDB10G>variable l_id4 number;
    JT(147)@JTDB10G>
    JT(147)@JTDB10G>exec :l_id1 := 0;
    PL/SQL procedure successfully completed.
    JT(147)@JTDB10G>exec :l_id3 := 10;
    PL/SQL procedure successfully completed.
    JT(147)@JTDB10G>exec :l_id2 := 20;
    PL/SQL procedure successfully completed.
    JT(147)@JTDB10G>exec :l_id0 := 30;
    PL/SQL procedure successfully completed.
    JT(147)@JTDB10G>exec :l_id4 := 40;
    PL/SQL procedure successfully completed.
    JT(147)@JTDB10G>
    JT(147)@JTDB10G>select name, bind_rn,
    2      id0, value value0,
    3      id1, value value1,
    4      id2, value value2,
    5      id3, value value3,
    6      id4, value value4
    7 from ( select name, id, value,
    8      lead(id,0 ) over(partition by name, value order by bind_rn ) id0 ,
    9      lead(id,1 ) over(partition by name, value order by bind_rn ) id1 ,
    10      lead(id,2 ) over(partition by name, value order by bind_rn ) id2 ,
    11      lead(id,3 ) over(partition by name, value order by bind_rn ) id3 ,
    12      lead(id,4 ) over(partition by name, value order by bind_rn ) id4 ,
    13      bind_rn
    14 from( select tab1.name, i0.id, i0.tab1value value, bind_rn
    15      from tab1, tab2 i0,
    16           (select 0 bind_rn, :l_id0 arg_value from dual union
    17           select 1 , :l_id1 from dual union
    18           select 2 , :l_id2 from dual union
    19           select 3 , :l_id3 from dual union
    20           select 4 , :l_id4 from dual ) table_of_args
    21      where tab1.name='name1' and i0.tab1value=tab1.value
    22 -- and i0.id in (0,10,20,30,40)
    23 and i0.id = table_of_args.arg_value
    24 )) where bind_rn = 0;
    1600 rows selected.
    Statistics
    1 recursive calls
    0 db block gets
    4099 consistent gets
    3707 physical reads
    0 redo size
    52111 bytes sent via SQL*Net to client
    1250 bytes received via SQL*Net from client
    81 SQL*Net roundtrips to/from client
    2 sorts (memory)
    0 sorts (disk)
    1600 rows processed

  • Hash join: how to know how much PGA it is using?

    Good Day everybody...
    Is there a way to know if I have enough PGA for a given hash join?
    so far I have PGA_AGGREGATE_TARGET = 200MB
    WORKAREA_SIZE_POLICY = AUTO
    The advisor from EM looks good, but I know that the query I'm interested in is expensive.
    so, reworking the question...
    how can I get the proper amount of bytes needed by a given hash join?

    It's complicated - there are pages of stuff in the manual: histograms, forecast histograms.
    But I think you can get a long way with:
    SELECT name profile, cnt, decode(total, 0, 0, round(cnt*100/total)) percentage
    FROM (SELECT name, value cnt, (sum(value) over ()) total
    FROM V$SYSSTAT
    WHERE name like 'workarea exec%');
    The output of this query might look like the following:
    PROFILE                             CNT  PERCENTAGE
    workarea executions - optimal      5395          95
    workarea executions - onepass       284           5
    workarea executions - multipass       0           0
    If all executions are optimal, who can ask for more?
    Also:
    V$PGA_TARGET_ADVICE
    This view predicts how the cache hit percentage and over-allocation count statistics in V$PGASTAT will be impacted if you change the value of the initialization parameter PGA_AGGREGATE_TARGET. Example 14-8 shows a typical query of this view:
    Example 14-8 Querying V$PGA_TARGET_ADVICE
    SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
    ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
    ESTD_OVERALLOC_COUNT
    FROM V$PGA_TARGET_ADVICE;
    The output of this query might look like the following:
    TARGET_MB  CACHE_HIT_PERC  ESTD_OVERALLOC_COUNT
           63              23                   367
          125              24                    30
          250              30                     3
          375              39                     0
          500              58                     0
          600              59                     0
          700              59                     0
          800              60                     0
          900              60                     0
         1000              61                     0
         1500              67                     0
         2000              76                     0
         3000              83                     0
         4000              85                     0
    As far as I understand it, if the over-allocation count is nonzero, Oracle will override your pga_aggregate_target anyway. (Having said that, don't forget: this parameter is a target, not a hard & fast limit.)
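    To answer the original question for one specific statement, V$SQL_WORKAREA records what the optimizer thinks each work area needs. A sketch (the sql_id column exists from 10g on; on 9i join via address/hash_value instead):

    select operation_type,
           round(estimated_optimal_size/1024/1024) est_optimal_mb,  -- to run fully in memory
           round(estimated_onepass_size/1024/1024) est_onepass_mb,  -- to get away with one pass
           round(last_memory_used/1024/1024)       last_used_mb,
           optimal_executions, onepass_executions, multipasses_executions
      from v$sql_workarea
     where sql_id = '&sql_id'
       and operation_type like 'HASH%';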
