CBO vs RBO

These might seem like very silly questions.
1) Can you still be using the CBO when the execution plan does not show a COST?
2) If the DBA/developer never analyzes the tables or collects statistics, will Oracle generate the information the CBO needs on its own?
I feel confused. :(

Hi,
Your explain plan shows Optimizer=CHOOSE; under CHOOSE, Oracle uses the cost-based optimizer when statistics are available.
You can use the /*+ RULE */ hint in a SQL statement to force the rule-based optimizer.
Using SET AUTOTRACE ON, you can see the cost information the cost-based optimizer produces. (There is no COST hint; the Cost, Card and Bytes figures appear whenever the CBO builds the plan.)
E.g.
select empno from emp;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=17 Bytes=51)
1 0 INDEX (FULL SCAN) OF 'EMP_PRIMARY_KEY' (UNIQUE) (Cost=1 Card=17 Bytes=51)
Still, I am not quite getting your point. Can you explain in detail?
Adinath Kamode
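One quick way to check question 1 yourself is to EXPLAIN PLAN the statement and look at the OPTIMIZER and COST columns of PLAN_TABLE; a NULL cost under CHOOSE generally means the rule-based optimizer built the plan. A minimal sketch, assuming the standard PLAN_TABLE is installed:
EXPLAIN PLAN FOR
select empno from emp;
-- COST stays NULL when the RBO built the plan
select id, operation, options, optimizer, cost
from plan_table
order by id;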

Similar Messages

  • CBO or RBO

    Hi folks
I am using Oracle 9i (which was Oracle 8 before; a patch was applied, etc.).
How can I determine if I am using the CBO or the RBO?

This is what CHOOSE does:
    SQL> create table test( a number);
    Table created.
    SQL> alter session set optimizer_mode=choose;
    Session altered.
    SQL> set autot trace exp
    SQL> select * from test;
    Execution Plan
    Plan hash value: 1357081020
    | Id  | Operation         | Name |
    |   0 | SELECT STATEMENT  |      |
    |   1 |  TABLE ACCESS FULL| TEST |
    Note
       - rule based optimizer used (consider using cbo)
SQL> exec dbms_stats.gather_table_stats(user,'TEST');
    PL/SQL procedure successfully completed.
    SQL> select * from test;
    Execution Plan
    Plan hash value: 1357081020
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |     1 |    13 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| TEST |     1 |    13 |     2   (0)| 00:00:01 |
SQL>
If there are stats on the table, CHOOSE picks the CBO; otherwise the RBO.
There are many books and notes online that discuss the difference between the two and explain why the CBO is better, so search for them.
    HTH
    Aman.....
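The demo also works in reverse: delete the statistics and, under CHOOSE, the plan falls back to the RBO (no Rows/Bytes/Cost columns). A hedged sketch against the same TEST table:
SQL> exec dbms_stats.delete_table_stats(user,'TEST');
SQL> select * from test;
-- the plan should again carry the note "rule based optimizer used (consider using cbo)"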

  • What will be my optimizer after importing 9iR2 to 10gR2 CBO or RBO?

    Friends,
    OS: RHEL AS 3
What will my optimizer be after importing from 9iR2 to 10gR2?
Will it be RBO or CBO?
What precautionary steps should I follow for the optimizer part?
    thanks

The default optimizer, as mentioned already, will be the CBO; Oracle uses ALL_ROWS as the default optimizer_mode.
The RBO is still there, even in 11g, but Oracle won't accept bug reports or supply fixes for it. So if your stats are okay it will be the CBO; otherwise the RBO will kick in.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, Oracle Label Security, OLAP, Data Mining,
    Oracle Database Vault and Real Application Testing options
SQL> set autot trace exp
    SQL> alter session set optimizer_mode=choose;
    Session altered.
    SQL> create table t( a char);
    Table created.
    SQL> select * from t;
    Execution Plan
    Plan hash value: 1601196873
    | Id  | Operation         | Name |
    |   0 | SELECT STATEMENT  |      |
    |   1 |  TABLE ACCESS FULL| T    |
    Note
       - rule based optimizer used (consider using cbo)
    SQL> create table iot(a number primary key) organization index;
    Table created.
    SQL> select * from iot;
    Execution Plan
    Plan hash value: 3063468998
| Id  | Operation            | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT     |                   |     1 |    13 |     2   (0)| 00:00:01 |
|   1 |  INDEX FAST FULL SCAN| SYS_IOT_TOP_71833 |     1 |    13 |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement
    SQL> select * from t,iot;
    Execution Plan
    Plan hash value: 3684863450
| Id  | Operation              | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT       |                   |     1 |    16 |     4   (0)| 00:00:01 |
|   1 |  MERGE JOIN CARTESIAN  |                   |     1 |    16 |     4   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL    | T                 |     1 |     3 |     2   (0)| 00:00:01 |
|   3 |   BUFFER SORT          |                   |     1 |    13 |     2   (0)| 00:00:01 |
|   4 |    INDEX FAST FULL SCAN| SYS_IOT_TOP_71833 |     1 |    13 |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement
SQL>
You can see that with a simple table T without stats, the RBO is used, but with the IOT, which is an object the RBO does not know about, the CBO kicks in. When both are combined, the CBO is used as well.
If you analyzed table T, the CBO would take over for it too.
After the upgrade you should keep a keen eye on queries that use the RULE hint and rely on RBO index scans; they might change behaviour after the upgrade.
    HTH
    Aman....
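A hedged way to spot statements still running under the RBO after the upgrade is to scan the cursor cache (requires access to V$SQL):
select sql_id, optimizer_mode, sql_text
from v$sql
where optimizer_mode = 'RULE';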

• How to tune a query, and the difference between CBO and RBO. Which is good?

    Hello Friends,
Here are some questions I have; please reply with a complete description and a URL if any.
1) How did you tune a query?
2) What approach do you take to tune a query? Do you use hints?
3) Where did you tune the query, and what were the issues with the query?
4) What is the difference between the RBO and the CBO? Where do you use each?
5) Give some information about hash joins.
6) Using an explain plan, how do you know where the bottleneck in a query is? How do you identify the bottleneck from the explain plan?
    thanks/Kumar

    Hi,
kumar73 wrote:
1) How did you tune a query?
Use EXPLAIN PLAN to see exactly where it is spending its time, and address those areas.
See the forum FAQ
SQL and PL/SQL FAQ
"3. How to improve the performance of my query?"
2) What approach do you take to tune a query? Do you use hints?
Hints can help.
Even more helpful is writing the SQL efficiently (avoiding multiple scans of the same table, filtering early, using built-in rather than user-defined functions, ...), creating and using indexes, and, for large tables, partitioning.
Table design can have a big impact on performance.
Look for ways to do part of what you need before the query. This includes denormalizing (when appropriate), the kind of pre-digesting that often takes place in data warehouses, function-based indexes, and, starting in Oracle 11, virtual columns.
3) Where did you tune the query, and what were the issues with the query?
Either this question is a vague summary of the entire thread, or I don't understand it. Can you re-phrase this part?
4) What is the difference between the RBO and the CBO? Where do you use each?
Basically, use the RBO only if you have Oracle 7 or earlier.
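For question 6, a minimal way to read a plan is EXPLAIN PLAN plus DBMS_XPLAN; the steps with the largest Cost and Rows figures are the first places to look. A sketch (table and predicate hypothetical):
explain plan for
select * from emp where deptno = 10;
select * from table(dbms_xplan.display);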

  • What is the default optimization(CBO/RBO) method choose in Oracle 9i DB?

What is the default optimization (CBO/RBO) method chosen in Oracle 9i?
Note: assume we set the OPTIMIZER_MODE to CHOOSE.

1) So you mean to say that if we are maintaining statistics for all the tables, by default it will go for the CBO, else the RBO, right?
Yes, unless you specify a RULE hint in your queries. If at least one table in a join of two or more tables has statistics, Oracle uses the CBO and estimates the missing statistics from the number of blocks and other internal calculations.
Also, be careful: stale statistics can lead the optimizer to choose a very poor plan that hurts query performance.
2) If we are using the CBO, what will be the order of evaluation in the WHERE clause: top to bottom, or the reverse?
For a single table with a WHERE clause, Oracle chooses either a full table scan or an index range scan, depending on the columns mentioned in the WHERE clause and the available indexes.
If the query contains multiple tables, Oracle also has to decide between join methods such as nested loops, hash join, and sort-merge join.
3) And if we are using the RBO, what will be the order of evaluation in the WHERE clause: top to bottom, or the reverse?
The RBO uses its fixed internal ranking to produce the execution plan; it generally prefers an index if a column in the WHERE clause has one.
    Jaffar
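A hedged way to see which of your tables would steer CHOOSE toward the CBO is to check when (or whether) they were last analyzed:
select table_name, num_rows, last_analyzed
from user_tables
order by table_name;
-- a NULL last_analyzed means no statistics, so CHOOSE falls back to the RBO for that table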

  • Cbo plan

Please tell me how the execution plan is built under the CBO.
Under the RBO it is right to left.
I am running one query under both the RBO and the CBO; the plans are different,
and the sequence of table access is different.
    Thanks
    Reena

    Hi,
Which version of Oracle are you running?
Under the CBO, the optimizer determines the driving table based on cost, which in turn is determined by the indexes, the table statistics, and other parameters.
Under the RBO, the rightmost table in the FROM clause is the driving table.
    thanks
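If you want to take the join order out of the optimizer's hands while comparing the two, the ORDERED hint forces joins in FROM-clause order. A sketch with the usual demo tables:
select /*+ ORDERED */ e.ename, d.dname
from dept d, emp e        -- dept becomes the driving table because it is listed first
where e.deptno = d.deptno;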

  • BITMAP Index in RBO

    Hello friends,
    Can we use BITMAP Index in RBO (Rule Based Optimizer) ?
    I tested on my development database, the result always FULL TABLE SCAN.
    Please look at this:
    SQL> show parameter optimizer_mode;
NAME            TYPE    VALUE
optimizer_mode  string  RULE
    SQL> SET AUTOTRACE ON
    SQL> SELECT optn1 FROM C46010.BOXINVTRY WHERE BEQUIP = 'M';
    261164 rows selected.
    Execution Plan
    0 SELECT STATEMENT Optimizer=RULE
    1 0 TABLE ACCESS (FULL) OF 'BOXINVTRY'
    Statistics
    0 recursive calls
    0 db block gets
    42759 consistent gets
    466 physical reads
    0 redo size
    4558130 bytes sent via SQL*Net to client
    192162 bytes received via SQL*Net from client
    17412 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    261164 rows processed
I created a BITMAP index on column BEQUIP of that table because it has low cardinality.
Please advise...
    Thanks,
    Bayu.

An ordinary B-tree index should still do the trick, especially as the general strategy of the RBO is 'if there is any index, use it' ;)
(even when the index is detrimental to performance). The RBO predates bitmap indexes and will never use them, which is why you keep getting a full table scan.
My experience is that in 9i the CBO outperforms the RBO on many counts.
I would recommend (if you can spare the resources) setting up a test 9i database before you migrate, and testing the CBO there.
I definitely don't recommend upgrading to 11g (where the RBO is desupported, as in 10g) without testing before going production.
    Sybrand Bakker
    Senior Oracle DBA
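To have the bitmap index considered at all, the CBO must be involved, since the RBO never looks at bitmap indexes. A hedged sketch (index name hypothetical; gathering stats lets CHOOSE pick the CBO):
create bitmap index boxinvtry_bequip_bix on c46010.boxinvtry (bequip);
exec dbms_stats.gather_table_stats('C46010','BOXINVTRY', cascade => TRUE);
select optn1 from c46010.boxinvtry where bequip = 'M';
-- note: with 261,164 matching rows the CBO may still rightly prefer the full table scan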


  • SQL Query Optimization

I have a query about how the CBO and RBO actually work.
I understand that Oracle uses only the CBO in 10g and was using both the RBO and the CBO from version 7 to 9i.
Let's say I have a query:
Select * from X where A = &A and B = &B and C = &C
An index is created on A and B.
Please correct me if I am wrong: internally Oracle maintains a hash for the fields A and B, and when the above query is executed, Oracle needs to search in that hash.
If I change the query to
Select * from X where B = &B and A = &A and C = &C, will the order of columns in the WHERE clause make any difference to Oracle's search performance?
On similar lines, let's say I have a query (no index this time):
select * from X where A = &A and B = &B
Let's assume the first condition A=&A greatly reduces the result set, and the second condition B=&B would almost always match a subset of the first condition's results (if executed separately).
As a developer I know this about the data, so will it help to write the query this way, or will select * from X where B = &B and A = &A give the same performance?
Please advise.

Oracle does not maintain a hash internally; it uses B-tree indexes. Essentially, the actual value of each indexed column plus the rowid (a pointer to the physical location of the row) is kept for each index entry. As long as you have the leading columns of the index in the WHERE clause, in any order, Oracle will be able to use an index on those columns. Under the RBO, it would certainly use the index. Under the CBO, Oracle may or may not use the index; the choice depends on a number of factors such as the statistics on the table and index, and various initialization parameters.
    In your second example with no indexes, Oracle would do a full table scan examining every record. There is no documented order to the evaluation of predicates (at least none that I have ever seen), and I doubt very much whether it would have much of an effect on the performance of a full table scan if the predicates were evaluated in optimal order versus worst possible order.
    TTFN
    John
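A hedged way to convince yourself that predicate order does not matter is to compare the two plans directly (composite index name hypothetical):
create index x_ab_idx on x (a, b);
explain plan for select * from x where a = :a and b = :b and c = :c;
select * from table(dbms_xplan.display);
explain plan for select * from x where b = :b and a = :a and c = :c;
select * from table(dbms_xplan.display);
-- both runs should show the same access path, e.g. an INDEX RANGE SCAN on X_AB_IDX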

  • Server is too slow

I have an Oracle 9i database on a Sun Solaris OS
IBM server.
2 GB RAM
4 hard disks
45 users log in to the system.
It is almost dead slow, and sometimes users get thrown out of the system. It is almost in a hung situation.
Whenever we restart the server, all the processes clear and it works properly. After some time it starts to slow down and performance deteriorates.
It gradually gets slower over time until at one point it is dead slow and takes 30 minutes to commit a transaction that takes 20 seconds at normal times. When I restart the server it works normally, but gradually it gets slow again.
Please suggest some procedures to check.
The database configuration is standard.

You should use Statspack to check what the main waits are.
Some indicators to check:
- too many FTS / excessive I/O => check SQL statements (missing index, wrong WHERE clause)
- explain plan for the most important queries: using CBO or RBO? If CBO, statistics should be up to date. If RBO, check the access path.
- excessive logfile switches (> 5 per hour): increase the logfiles or disable logging
- undo waits => not enough rollback segments (if you don't use AUM)
- data waits => alter INITRANS, PCTFREE, PCTUSED
- too many chained rows => rebuild the affected data or rebuild the table
- too many levels in indexes => rebuild the index
- excessive parsing: use bind variables or alter the parameter cursor_sharing
- too many sorts on disk => increase sort_area_size and create other temporary tablespaces on separate disks
- too many block reads for a row => db_block_size too small or too many chained rows
- too much LRU contention => increase latches
- OS swapping/paging?
To improve performance:
- alter and tune some parameters: optimizer_mode, sort_area_size, shared_pool_size, optimizer_index_cost_adj, db_file_multiblock_read_count...
- keep the most useful packages in memory
- gather statistics regularly (if using the CBO)
How do your users access the db?
    Jean-François Léguillier
    Consultant DBA
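A minimal Statspack workflow, assuming the PERFSTAT schema is already installed via spcreate.sql, is two snapshots around the slow period and a report between them:
-- connected as PERFSTAT
exec statspack.snap;
-- ... wait through / reproduce the slow period ...
exec statspack.snap;
-- then generate the report, choosing the two snapshot ids when prompted
@?/rdbms/admin/spreport.sql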

• What commands or different ways can I code this? It's slow

    select claim_1.acct_nbr,
    trim('LOC: '||nvl(osha.level_1,acct_level1.level_1)||' '||
              nvl(osha.level_2,acct_level2.level_2)||' '||
              nvl(osha.level_3,acct_level3.level_3)||' '||
              nvl(osha.level_4,acct_level4.level_4)||' '||
              nvl(osha.level_5,acct_level5.level_5))locations,
    claim_1.clm_nbr case_nbr,
    (nvl(osha.clmt_lst_name,claim_1.clmt_lst_name)) || ','|| (nvl(osha.clmt_first_name,claim_1.clmt_first_name)) name,
    nvl(osha.occup,claim_1.occup)occup,
    to_char(nvl(osha.dt_inj,claim_1.dt_inj),'mm/dd/yyyy')dt_inj,
    nvl(osha.incident_location,incident.incident_location)incident_location,
    nvl(osha.natr_of_inj,claim_1.natr_of_inj)natr_of_inj,
    ijpb_desc.tbl_desc,
    ijca_desc.tbl_desc,
    to_char(claim_1.dt_death,'mm/dd/yyyy')dt_death,
    decode(osha.injury_result_code,1,'X')death,
    decode(osha.injury_result_code,2,'X')daysaway,
    decode(osha.injury_result_code,3,'X')transfer,
    decode(osha.injury_result_code,4,'X')other,
    nvl(osha.actl_ldys_m1,absence.actl_ldys_m1)actl_ldys_m1,
    osha.days_on_job_transfer,
    decode(osha.injury_illness_code,1,'X')injury,
    decode(osha.injury_illness_code,2,'X')skin,
    decode(osha.injury_illness_code,3,'X')resp,
    decode(osha.injury_illness_code,4,'X')pois,
    decode(osha.injury_illness_code,5,'X')hearing,
    decode(osha.injury_illness_code,6,'X')other
    from group_security,acct_level1,acct_level2,acct_level3,acct_level4,acct_level5,
    claim_1,absence,incident,osha_claim_user_log osha,ijpb_desc,ijca_desc
    where
    acct_level1.level1_seq_id(+) = claim_1.level1_seq_id_fk
    and acct_level2.level2_seq_id(+) = claim_1.level2_seq_id_fk
    and acct_level3.level3_seq_id(+) = claim_1.level3_seq_id_fk
    and acct_level4.level4_seq_id(+) = claim_1.level4_seq_id_fk
    and acct_level5.level5_seq_id(+) = claim_1.level5_seq_id_fk
    and nvl(acct_level1.level_1,' ') =
    decode(group_security.level1,null,nvl(acct_level1.level_1,' '),
    nvl(group_security.level1,' '))
    and nvl(acct_level2.level_2,' ') =
    decode(group_security.level2,null,nvl(acct_level2.level_2,' '),
    nvl(group_security.level2,' '))
    and nvl(acct_level3.level_3,' ') =
    decode(group_security.level3,null,nvl(acct_level3.level_3,' '),
    nvl(group_security.level3,' '))
    and nvl(acct_level4.level_4,' ') =
    decode(group_security.level4,null,nvl(acct_level4.level_4,' '),
    nvl(group_security.level4,' '))
    and nvl(acct_level5.level_5,' ') =
    decode(group_security.level5,null,nvl(acct_level5.level_5,' '),
    nvl(group_security.level5,' '))
    and group_security.acct_nbr = claim_1.acct_nbr
    and claim_1.clm_seq_id = absence.clm_seq_id_fk(+)
    and
    (absence.dt_abs = ( select max(a.dt_abs)from absence a where a.clm_seq_id_fk = claim_1.clm_seq_id)
    or
    (not exists
    (select * from absence a
    where a.clm_seq_id_fk = claim_1.clm_seq_id)))
    and claim_1.inc_seq_id_fk = incident.INC_SEQ_ID
    and claim_1.clm_seq_id = osha.CLM_SEQ_ID(+)
    and trim(ijpb_desc.tbl_cd) = trim(nvl(osha.part_of_body,claim_1.part_of_body))
    and trim(ijca_desc.tbl_cd) = trim(nvl(osha.inj_cause,claim_1.inj_cause))
    and claim_1.loi in ('SI','IN')
    and nvl(osha.osha_recordable,'Y') = 'Y'
    and group_security.group_id = 'DEMO'
    and (claim_1.dt_inj between to_date('01/01/2003','MM/DD/YYYY') and to_date('12/31/2003','MM/DD/YYYY'))
This is very slow. The claim_1.loi condition near the bottom is where I think it is getting slow, but there isn't a way to avoid it.
Any ideas?

Justin, what do CBO and RBO stand for? What do they do?
Thanks for the help. We will have to see if we can get a plan together here. Do you feel I will have major problems with Discoverer running on rule? I think I should be able to do it, but it would be better if he changed to cost.
I'm running into the DBA not wanting to give me tablespace for my materialized views, so I have to fight for that. Then he wants to know what size they are going to be, and I had so many issues running the size estimate because I didn't have the rights. I now have the rights, but the code comes back with bytes blank and num blank. I don't know where to go from here with the DBA. I have run the code on several different tables, and I either get "identifier too long" or I get the bytes with no byte count and the num with no number.
The DBA seems to never want to do anything for anyone. I have heard nothing but complaints.
Here is what I did, and I get nothing; I have run it against many tables. Any suggestions?
    Thanks
    variable num_rows number
    variable num_bytes number
    set autoprint on
    set serveroutput on
    BEGIN
    dbms_olap.estimate_summary_size(stmt_id => '1',
    select_clause =>
    'select acct_nbr from csc.claim_1',
    num_rows => :num_rows,
    num_bytes => :num_bytes);
END;
/
    PL/SQL procedure successfully completed.
    NUM_BYTES
    NUM_ROWS

  • RULE BASED OPTIMIZER

    hi,
My database is 10.2.0.1; by default optimizer_mode=ALL_ROWS.
For some sessions I need the rule-based optimizer.
So can I use
alter session set optimizer_mode=rule;
Will it affect that session only, or the entire database?
And the following as well; I want to set them at session level:
ALTER SESSION SET "_HASH_JOIN_ENABLED" = FALSE;
ALTER SESSION SET "_OPTIMIZER_SORTMERGE_JOIN_ENABLED" = FALSE;
ALTER SESSION SET "_OPTIMIZER_JOIN_SEL_SANITY_CHECK" = TRUE;
Will those affect only the session, or the entire database? Please suggest.

< CBO outperforms RBO ALWAYS! >
I disagree, mildly. When I tune SQL, the first thing I try is a RULE hint, and in very simple databases the RBO still does a good job.
Of course, you should not use RULE hints in production (that's Oracle's job).
When Oracle eBusiness Suite migrated to the CBO, they placed gobs of RULE hints into their own SQL!
Anyway, always adjust your CBO stats to replicate an RBO execution plan . . .
< specifically CAST() conversions from collections and pipelined functions. >
Interesting. Have you tried dynamic sampling for that?
    Hope this helps. . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
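As to the original question: ALTER SESSION changes apply only to the current session; other sessions keep the instance-wide setting. A sketch:
alter session set optimizer_mode = rule;   -- this session only
-- per-statement alternative that leaves the session mode untouched:
select /*+ RULE */ empno from emp;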

  • Weird SQL/View Problem

I have the following query, which runs quickly...
    select a.col1
    ,b.col2
    ,b.col3
    from table1 a
    ,table2 b
    where a.key = b.key
    and a.col1 = 77902
    order by a.col3;
Now if I create a view based on the above, minus the order by and one of the conditions, i.e.:
    create or replace view v_test as
    select a.col1
    ,b.col2
    ,b.col3
    from table1 a
    ,table2 b
    where a.key = b.key;
    and now do the following query:
    select col1
    ,col2
    ,col3
    from v_test
    where col1 = 77902
    order by col3;
The above query takes ages! I cannot understand why it should behave any differently from my original query. All I have done is push the joins into a view and query the view with the same condition and order by.
However, if I do the following, it is quick:
    select *
    from v_test
    where col1 = 77902
    order by col3;
    Any suggestions will be appreciated.
    Thanks

    What is the database version?
    Are the tables analyzed - CBO vs RBO?
    What is the tkprof output for each of the three runs?
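To get that tkprof output, one hedged approach is to trace the session around the three runs and format the trace file at the OS prompt:
alter session set sql_trace = true;
-- run each of the three queries here
alter session set sql_trace = false;
-- then on the server (trace file name hypothetical):
-- tkprof ora_12345.trc out.txt sys=no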

  • Not In Too Slow

    Hello. I have a simple performance problem.
It's a common configuration error to have a user account but no extended information about it. To help administrators identify this, I've created two views. Anyone who is in view 1 but not view 2 is a potential problem, so I'd like to show them to administrators in a report.
The problem is that the statement:
select user_name from view1 where user_name not in (select user_name from view2);
is a real dud. It rarely returns in a reasonable amount of time. View1 has about 6000 entries and view2 has about 500 entries.
How should I be doing this?

    Kevin:
    You raise a good point. If the queries underlying the views are slow, then optimizing this query is probably not going to help much, if at all.
    Sri:
    Which is better, depends on a number of factors. I think that some of the most important factors are:
1. Absolute sizes of the two tables
    2. The relative sizes of the two tables
    3. CBO versus RBO
    4. Version of Oracle
    5. Which columns are being compared.
    Number 5 may need a little clarification. If the question is find all PK's in table1 which do not exist in table2, then I would more likely use NOT EXISTS or perhaps OUTER JOIN. However if the question is find entire rows in table1 that do not have a corresponding entire row in table2 then I would be more likely to use MINUS.
If we assume that there are no criteria used against either table (which might affect the choice of indexes), then all methods will require at least one full table scan (at best a fast full index scan), and sometimes two. So, the chief factor affecting performance is the method Oracle uses to get the rows in table1 that do not exist in table2. On a conceptual basis, this is how I understand the various approaches operate. This is almost certainly not the exact algorithm that Oracle uses.
    The NOT IN will do a full scan of table2 to get a list of values, then for each row of table1 scan this LOV to see if it finds a match (FILTER operation). If not, add the record to the result set. If table two is absolutely small (e.g. state_code table with 50 records) then this may well be the fastest approach since scanning the LOV should be fast in memory.
    MINUS does a full scan of both tables, sorts each, then scans the two lists together to identify mismatches (MINUS operation). If the two tables are relatively similar in size, then this is likely to be one of the faster methods. The optimizer may choose to use index scans if the columns being compared are all indexed. If you are trying to compare all columns, or a large (unindexed) subset, then this is likely to be the fastest method. It may also be faster if you expect a relatively large number of mis-matched records.
In a NOT EXISTS (or EXISTS), for each row in table1 the correlated sub-query is executed against table2. If all of the identity columns in the sub-query are indexed, then this is pretty quick. The relative sizes of the tables do not matter much, but if both are large, this may be a little slower. Also, it is not as efficient when you need to look at values in several columns. If table2 is not appropriately indexed and it is large, then this could be incredibly slow, since it would require a full scan of table2 for each row in table1.
The OUTER JOIN approach uses the expected SORT JOIN - MERGE JOIN or SORT JOIN - NESTED LOOP approach (depending on the version and optimizer mode). This is pretty efficient, particularly if the two tables are large and relatively equally sized, and you expect a relatively small number of mismatches.
    There is also a fifth approach.
    In sqlplus run $ORACLE_HOME/rdbms/admin/utlexcpt.sql then
    ALTER TABLE table1
    ADD constraint t1_t2_fk
    FOREIGN KEY (column_list) REFERENCES table2 (column list)
EXCEPTIONS INTO exceptions
Then deal with the error rows you get in the EXCEPTIONS table. This requires a unique index on column_list in table2.
Probably not useful for a one-off exercise, but it will prevent the problem recurring on an ongoing basis.
    John
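For the original report, a hedged rewrite using NOT EXISTS (assuming both views expose a user_name column) would be:
select v1.user_name
from view1 v1
where not exists (select 1
                  from view2 v2
                  where v2.user_name = v1.user_name);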

  • Loading ASCII files - too slow

I have a script for loading ASCII files into the database.
First I use SQL*Loader to load the data into temporary tables. After that I start a procedure which does some linking of the loaded data, which is necessary.
After that I move the data from the temp tables to the main tables.
There are 7 main tables; two of them have 30,000,000 rows, the others are smaller. The tablespace has 12 GB and is locally managed.
The problem is that a few days ago the load of one ASCII file with 10,000 rows took 10 seconds to finish, but now it takes 90 seconds.
The problem is not SQL*Loader, but the procedure for linking and copying.
But why is it so slow now?
Nothing has changed; we started a large load (some 6000 ASCII files). When we started it was fast (10 sec/file); now it's slow (90 sec/file).
I also noticed that the database process uses 40% of the CPU. When it was working fine it took 10-15% of CPU time.
Also, when everything was working fine the used space in the datafiles was growing, but now it seems to stay at the same level all the time.
PCTFREE for the tables is 10 and PCTUSED is 60.
Can anybody help, please?

Is this CBO or RBO? I am curious: are you building stats during the data loads? You might want to review your strategy either way.
Are there any concurrent events happening on either the host or the database during the most recent (slower) loads that might not have been occurring before? E.g. heavy queries against the tables occurring during data load events?
Is your 12 GB tablespace split over drives? Can't help but wonder what kind of fragmentation is occurring on these tables and the underlying storage. 30,000,000 rows per table is a lot of rows.
If you are practicing data warehouse forced reload (ETL) of the tables, then I would be inclined to truncate the target tables first (including storage) to allow Oracle to use some 'smarts' as it re-allocates extents for your many millions of rows.
Generally speaking, update degradation such as you are experiencing can be attributed to bottlenecks in tablespaces and datafiles that are choking from write activity. I have resolved similar issues through partitioning and truncating.
Hope this helps, RP.
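A sketch of the truncate-first idea for a forced reload (table name hypothetical):
-- keep the allocated extents for re-use by the next load:
truncate table main_table reuse storage;
-- or release the space and let Oracle re-allocate extents from scratch:
truncate table main_table drop storage;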
