Dynamic Sampling

Hi all,
I have a question about dynamic sampling and hope you can help me.
With OPTIMIZER_DYNAMIC_SAMPLING=2, Oracle reads 32 random blocks from the object when doing dynamic sampling. What does Oracle do after that? Does it look at the data distribution of the object, the number of distinct values, and so on, and then generate the execution plan?
Please explain.
Best Regards

As per Dr. Tim
Dynamic sampling enables the server to improve performance by:
* Estimating single-table predicate selectivities where available statistics are missing or may lead to bad estimates.
* Estimating statistics for tables and indexes with missing statistics.
* Estimating statistics for tables and indexes whose statistics are out of date.
Dynamic sampling is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter, which accepts values from "0" (off) to "10" (aggressive sampling) with a default value of "2". At compile time Oracle determines whether dynamic sampling would improve query performance. If so, it issues recursive statements to estimate the necessary statistics. Dynamic sampling can be beneficial when (a sketch of the feature in action follows this list):
* The sample time is small compared to the overall query execution time.
* Dynamic sampling results in a better performing query.
* The query may be executed multiple times.
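
A minimal sketch of the feature in action (the table name is made up for illustration; this assumes no statistics have been gathered on the table):

create table ds_demo as select * from all_objects;

explain plan for
  select /*+ dynamic_sampling(t 4) */ * from ds_demo t where object_id < 100;

select * from table(dbms_xplan.display);
-- the Note section of the output should include a line similar to:
--   - dynamic sampling used for this statement (level=4)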

Similar Messages

  • Optimizer Dynamic Sampling doesn't work in our DB

    Hi,
    I'm trying to explore the Optimizer Dynamic Sampling functionality, but it seems the database uses this feature only if I add a /*+ dynamic_sampling */ hint.
    optimizer_dynamic_sampling=2 is set at system level (init.ora) and I have also set the parameter at session level, but dynamic sampling fires only with the hint. I would like to use this feature transparently, without needing any hints.
    Could it be some kind of bug, or am I doing something wrong?
    Thanks. Filip
    See example below.
    DB version: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    OS version: AIX-Based Systems (64-bit)
    /* Check parameter setting*/
    select name,value,isdefault from v$parameter p
    where p.NAME='optimizer_dynamic_sampling';
    NAME VALUE ISDEFAULT
    optimizer_dynamic_sampling 2 TRUE
    /* Create table without STATS*/
    create table test_sampling as select * from all_objects;
    /* Create index */
    create index ix_tstsam on test_sampling (object_name, owner);
    /* Check if statistics exists*/
    select table_name,num_rows,last_analyzed from all_tables a where a.table_name= 'TEST_SAMPLING';
    TABLE_NAME NUM_ROWS LAST_ANALYZED
    TEST_SAMPLING NULL NULL
    /* Setting dynamic sampling on session level */
    Alter session set optimizer_dynamic_sampling=2;
    /************ Explain plan - Select without hint ************/
    explain plan set statement_id='TST_NOHINT' for
    select sa.object_name from test_sampling sa where sa.owner = 'X';
    PLAN_TABLE_OUTPUT
    Plan hash value: 2834243581
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    |* 1 | TABLE ACCESS FULL| TEST_SAMPLING |
    Note
    - rule based optimizer used (consider using cbo)
    /************ Explain plan - Select WITH hint ************/
    explain plan set statement_id='TST_HINT' for
    select /*+ dynamic_sampling(2) */ sa.object_name from test_sampling sa where sa.owner = 'X';
    PLAN_TABLE_OUTPUT
    Plan hash value: 3916830885
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
    | 0 | SELECT STATEMENT | | 8 | 272 | 86 (2)| 00:00:02
    |* 1 | INDEX FAST FULL SCAN| IX_TSTSAM | 8 | 272 | 86 (2)| 00:00:02
    Note
    - dynamic sampling used for this statement (level=2)

    Yes, this was the cause: OPTIMIZER_MODE was set to CHOOSE, so with no statistics on the table the rule-based optimizer was used, and dynamic sampling only applies to the cost-based optimizer. If I change it to ALL_ROWS, Optimizer Dynamic Sampling works.
    Thank you!
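
    For anyone hitting the same symptom, a quick check along these lines may help (assumes access to v$parameter):

    select name, value from v$parameter where name = 'optimizer_mode';
    alter session set optimizer_mode = all_rows;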

  • Dynamic sampling coming in the  NOTE section of the explain plan.

    Hi, I am using Oracle 10.2.0.4.0.
    While looking at the autotrace plan of one of my SQL statements, I get the note '- dynamic sampling used for this statement'. I wanted to know whether this will affect the path of execution.
    Below is the plan.
    | Id  | Operation                                 | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |
    |   1 |  SORT ORDER BY                            |                            |      1 |      1 |      8 |00:00:03.77 |    8919 |    188 |  4096 |  4096 |
    |   2 |   NESTED LOOPS                            |                            |      1 |      1 |      8 |00:00:00.07 |    8919 |    188 |       |       |
    |   3 |    NESTED LOOPS                           |                            |      1 |      1 |      8 |00:00:00.06 |    8885 |    188 |       |       |
    |   4 |     NESTED LOOPS OUTER                    |                            |      1 |      1 |      8 |00:00:00.06 |    8859 |    188 |       |       |
    |   5 |      NESTED LOOPS OUTER                   |                            |      1 |      1 |      8 |00:00:00.06 |    8841 |    188 |       |       |
    |   6 |       NESTED LOOPS                        |                            |      1 |      1 |      8 |00:00:00.06 |    8839 |    188 |       |       |
    |   7 |        NESTED LOOPS                       |                            |      1 |     42 |      8 |00:00:00.01 |      51 |      6 |       |       |
    |   8 |         PARTITION LIST ALL                |                            |      1 |   2716 |      8 |00:00:00.01 |      25 |      6 |       |       |
    |   9 |          TABLE ACCESS BY LOCAL INDEX ROWID| a                          |     12 |   2716 |      8 |00:00:00.10 |      25 |      6 |       |       |
    |* 10 |           INDEX RANGE SCAN                | IDX_SPI_GRP                |     12 |   2716 |      8 |00:00:00.10 |      24 |      6 |       |       |
    |* 11 |         TABLE ACCESS BY INDEX ROWID       | b                          |      8 |      1 |      8 |00:00:00.01 |      26 |      0 |       |       |
    |* 12 |          INDEX RANGE SCAN                 | IDX_XMLONLINEVIEW_VENDORID |      8 |      2 |      8 |00:00:00.01 |      18 |      0 |       |       |
    |  13 |        PARTITION RANGE ALL                |                            |      8 |      1 |      8 |00:00:03.67 |    8788 |    182 |       |       |
    |  14 |         PARTITION HASH ALL                |                            |    424 |      1 |      8 |00:00:03.67 |    8788 |    182 |       |       |
    |* 15 |          TABLE ACCESS BY LOCAL INDEX ROWID| c                          |   3392 |      1 |      8 |00:00:03.66 |    8788 |    182 |       |       |
    |* 16 |           INDEX RANGE SCAN                | IDX_UP_INVNUM_DISB         |   3392 |      3 |      8 |00:00:03.55 |    8772 |    177 |       |       |
    |* 17 |       INDEX UNIQUE SCAN                   | SF_INVDISCOUNT_DISB_P1     |      8 |      1 |      0 |00:00:00.01 |       2 |      0 |       |       |
    |* 18 |      INDEX UNIQUE SCAN                    | P_CALCDISCNTPYMNT          |      8 |      1 |      0 |00:00:00.01 |      18 |      0 |       |       |
    |  19 |     TABLE ACCESS BY INDEX ROWID           | d                          |      8 |      1 |      8 |00:00:00.01 |      26 |      0 |       |       |
    |* 20 |      INDEX UNIQUE SCAN                    | PAYIDENTITY_BASE_PKI       |      8 |      1 |      8 |00:00:00.01 |      18 |      0 |       |       |
    |  21 |    TABLE ACCESS BY INDEX ROWID            | e                          |      8 |      1 |      8 |00:00:00.01 |      34 |      0 |       |       |
    |* 22 |     INDEX UNIQUE SCAN                     | P_INVOICESUMMARY           |      8 |      1 |      8 |00:00:00.01 |      26 |      0 |       |       |
    Note
       - dynamic sampling used for this statement

    930254 wrote:
    Hi, I am using Oracle 10.2.0.4.0.
    While looking at the autotrace plan of one of my SQL statements, I get the note '- dynamic sampling used for this statement'. I wanted to know whether this will affect the path of execution.
    Note
    - dynamic sampling used for this statement
    It could do - you might get a different sample, which could result in different cardinality estimates, which could mean a different plan, every time you optimize this query.
    On the other hand, the sampling may result in cardinality estimates that always look the same and always produce the same plan - and could even match the plan you would get after gathering statistics at 100% with histograms everywhere.
    Regards
    Jonathan Lewis

  • ORA-10173: Dynamic Sampling time-out error

    Hi,
    I got the following error in the alert log:
    ORA-10173: Dynamic Sampling time-out error
    Can anyone tell me in which situations this error occurs?
    Thanks

    I would suggest you get in touch with Oracle Support and get an answer for this.
    There is no documentation, or any other information in Metalink, for this error number.

  • Dynamic Sampling of query results

    Is there a way to implement dynamic sampling of the expected result set, invoked as a percentage of the expected results?
    We are using Web Intelligence XIR3SP2.

    I have tested Oracle only. Teradata has a "sample" command and I would hope that Web Intelligence would use that. The Oracle method involves the dbms_random package to assign each row a random number, then the query is sorted by that number and a limit is placed for the sample size.
    I have not written about this on my blog yet, but it could be an interesting topic. Thanks for the suggestion.
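
    A rough sketch of the dbms_random approach described above (the table name and sample size are made up):

    select *
      from (select t.*
              from some_table t
             order by dbms_random.value)
     where rownum <= 100;  -- random sample of 100 rows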

  • Optimizer dynamic sampling issues

    Hi gurus,
    emp_par had 100K rows; I deleted many of them.
    I am experimenting with the dynamic sampling feature...
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    SQL> select name,value from v$parameter where name like 'optimizer_dynamic%';
    NAME
    VALUE
    optimizer_dynamic_sampling
    2
    SQL> select count(*) from emp_par;
      COUNT(*)
          4999
    SQL> set autotrace traceonly explain
    SQL> select * from emp_par;
    Execution Plan
    Plan hash value: 3159588169
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         |   100K|   878K|    63   (2)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| EMP_PAR |   100K|   878K|    63   (2)| 00:00:01 |
    -----------------------------------------------------------------------------
    Since dynamic sampling is enabled, I expected the Rows value to be close to 4999.
    I tried to force dynamic sampling with a hint:
    SQL> select /*+ dynamic_sampling(t 2) */ * from emp_par t;
    Execution Plan
    Plan hash value: 3159588169
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         |   100K|   878K|    63   (2)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| EMP_PAR |   100K|   878K|    63   (2)| 00:00:01 |
    -----------------------------------------------------------------------------
    Though dynamic sampling is enabled, the optimizer plan still shows the row count as 100K.
    Am I missing something?
    Thanks,
    Charles

    user570138 wrote:
    hi gurus,
    SQL> select /*+ dynamic_sampling(t 2) */ * from emp_par t;
    Execution Plan
    Plan hash value: 3159588169
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         |   100K|   878K|    63   (2)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| EMP_PAR |   100K|   878K|    63   (2)| 00:00:01 |
    -----------------------------------------------------------------------------
    Though dynamic sampling is enabled, the optimizer plan still shows the row count as 100K.
    Check the 10053 trace - the hint would have made the optimizer take a dynamic sample, but it has then rejected the sample as irrelevant.
    If you want to change your test, try this:
    * Create your table with 100,000 rows.
    * Create the stats.
    * Insert another 10%.
    * Do the massive delete.
    Then see if you get a difference between hinted and unhinted query plans (a rough sketch of this test appears after the link below).
    (NB See also: http://jonathanlewis.wordpress.com/2010/02/23/dynamic-sampling/ )
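
    A rough sketch of that test, assuming a simple two-column table (all names are made up):

    create table emp_par as
      select rownum as empno, rpad('x',10,'x') as padding
        from dual connect by level <= 100000;

    exec dbms_stats.gather_table_stats(user, 'EMP_PAR')

    insert into emp_par
      select 100000 + rownum, rpad('x',10,'x')
        from dual connect by level <= 10000;    -- another 10%

    delete from emp_par where empno > 5000;     -- the massive delete
    commit;

    explain plan for select * from emp_par;                                  -- unhinted
    explain plan for select /*+ dynamic_sampling(t 2) */ * from emp_par t;   -- hinted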
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Dynamic sampling and statistics collection

    Hi all,
    I'm a newbie and I have a question; please correct me if I'm wrong.
    The documentation I studied says that dynamic sampling collects statistics on objects, but I'm not sure whether that happens during query execution or in an automatic job.
    I also read that starting with 10g, Oracle collects stats for objects in a job (using the DBMS_STATS package); the job is scheduled for new or stale objects and runs between 10 pm and 6 am.
    If Oracle already runs a job for collecting stats, what is dynamic sampling good for?
    Please fill me in. Can somebody also explain the query optimizer components?
    Thanks in advance.
    Dev

    Assume stats are collected every day at 02:00. Beginning at 08:00, users start making changes. By 11:45 the stats may no longer bear a close relationship to what was collected that morning; dynamic sampling lets the optimizer compensate at parse time. (A quick way to check for staleness is sketched below.)
    I thought the explanation in the docs was clear. Which doc are you reading (please post the link)?
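
    One way to see whether Oracle itself considers the stats stale (10g and later; the table name is made up):

    select table_name, num_rows, last_analyzed, stale_stats
      from user_tab_statistics
     where table_name = 'EMP';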

  • Pausing with dynamic streaming causes multiple streams to playback

    I have the streaming server 3.5.3. The sample player page for the dynamic streaming is here: http://fms.acpe.org ....click on the dynamic sample. pause and wait a few minutes for disconnect from the server. When you hit play, it starts from that spot and also starts from the beginning in the background audio while showing the buffering orange circle the whole time.
    Is there an updated player? Is there a fix for this? My users are students watching long lectures; they pause all the time. When they come back, it is a mess.
    Thanx to whoever can help...:)
    - Marcus

    I took a look at this, and what I can infer from the loading time and constant rebuffering is that your FMS doesn't have enough bandwidth to serve the stream without consistent rebuffering.  This could be for a number of reasons, but this sample works well under full-flow scenarios.  Is it possible for you to evaluate the bandwidth capacity of your FMS installation and ensure that you're not exceeding your available channel?  Consider also debugging the scenario with Wireshark or a similar tool to make sure that traffic is flowing in the path and protocol that you expect.
    Asa

  • How to wrap a view in oracle

    Does anyone know how to wrap a view in Oracle? I know it is not possible yet. Is there any third-party software to wrap the logic in the view?
    Thanks,
    Sanjay

    Your best bet is to write a view that queries the source tables and contains any necessary business logic
    CREATE VIEW VBASE AS SELECT A.COLUMN_A FROM TABLE_1 A, TABLE_2 B, TABLE_3 C WHERE A.ID = B.ID AND B.ID = C.ID;
    create a view for exposure to the user that queries the base view.
    CREATE VIEW VSECURE AS SELECT COLUMN_B FROM VBASE;
    and grant privileges to VSECURE.
    GRANT SELECT ON VSECURE TO SECURE_USER;
    This will allow the user to see, query, and describe VSECURE without seeing the definition for VBASE.
    The advantage of this approach is that the query engine can still push predicates down into the base view to optimize the performance of the query, whereas this is limited with the pipelined function and can become a tuning headache.
    eg.
    SQL> -----------------------------------------
    SQL> -- create some tables
    SQL> -----------------------------------------
    SQL> CREATE TABLE table_1(ID NUMBER, MESSAGE VARCHAR2(100))
    Table created.
    SQL> CREATE TABLE table_2(ID NUMBER, message2 VARCHAR2(100))
    Table created.
    SQL> CREATE TABLE table_3(ID NUMBER, message3 VARCHAR2(100))
    Table created.
    SQL> -----------------------------------------
    SQL> -- populate tables with some data
    SQL> -----------------------------------------
    SQL> INSERT INTO table_1
    SELECT ROWNUM,
    CASE
    WHEN MOD ( ROWNUM, 50 ) = 0 THEN 'HELLO there joe'
    ELSE 'goodbye joe'
    END
    FROM DUAL
    CONNECT BY LEVEL < 1000000
    999999 rows created.
    SQL> INSERT INTO table_2
    SELECT ROWNUM,
    CASE
    WHEN MOD ( ROWNUM, 50 ) = 0 THEN 'how are you joe'
    ELSE 'good to see you joe'
    END
    FROM DUAL
    CONNECT BY LEVEL < 1000000
    999999 rows created.
    SQL> INSERT INTO table_3
    SELECT ROWNUM,
    CASE
    WHEN MOD ( ROWNUM, 50 ) = 0 THEN 'just some data'
    ELSE 'other stuff'
    END
    FROM DUAL
    CONNECT BY LEVEL < 1000000
    999999 rows created.
    SQL> -----------------------------------------
    SQL> --create base view
    SQL> -----------------------------------------
    SQL> CREATE OR REPLACE VIEW vbase AS
    SELECT a.MESSAGE,
    c.message3
    FROM table_1 a,
    table_2 b,
    table_3 c
    WHERE a.ID = b.ID
    AND b.ID = c.ID
    View created.
    SQL> -----------------------------------------
    SQL> --create secure view using base view
    SQL> -----------------------------------------
    SQL> CREATE OR REPLACE VIEW vsecure AS
    SELECT MESSAGE,
    message3
    FROM vbase
    View created.
    SQL> -----------------------------------------
    SQL> -- create row type for pipeline function
    SQL> -----------------------------------------
    SQL> CREATE OR REPLACE TYPE vbase_row
    AS OBJECT
    (
    message varchar2(100),
    message3 varchar2(100)
    );
    Type created.
    SQL> -----------------------------------------
    SQL> -- create table type for pipeline function
    SQL> -----------------------------------------
    SQL> CREATE OR REPLACE TYPE vbase_table
    AS TABLE OF vbase_row;
    Type created.
    SQL> -----------------------------------------
    SQL> -- create package
    SQL> -----------------------------------------
    SQL> CREATE OR REPLACE PACKAGE pkg_getdata AS
    FUNCTION f_get_vbase
    RETURN vbase_table PIPELINED;
    END;
    Package created.
    SQL> -----------------------------------------
    SQL> -- create package body with pipeline function using same query as vbase
    SQL> -----------------------------------------
    SQL> CREATE OR REPLACE PACKAGE BODY pkg_getdata AS
    FUNCTION f_get_vbase
    RETURN vbase_table PIPELINED IS
    CURSOR cur IS
    SELECT a.MESSAGE,
    c.message3
    FROM table_1 a,
    table_2 b,
    table_3 c
    WHERE a.ID = b.ID
    AND b.ID = c.ID;
    BEGIN
    FOR rec IN cur
    LOOP
    PIPE ROW ( vbase_row ( rec.MESSAGE, rec.message3 ) );
    END LOOP;
    RETURN;  -- a pipelined function should end with a bare RETURN
    END;
    END pkg_getdata;
    Package body created.
    SQL> -----------------------------------------
    SQL> -- create secure view using pipeline function
    SQL> -----------------------------------------
    SQL> CREATE or replace VIEW vsecure_with_pipe AS
    SELECT *
    FROM TABLE ( pkg_getdata.f_get_vbase ( ) )
    View created.
    SQL> -----------------------------------------
    SQL> -- this would grant select on the 2 views, one with nested view, one with nested pipeline function
    SQL> -----------------------------------------
    SQL> GRANT SELECT ON vsecure TO test_user
    Grant complete.
    SQL> GRANT SELECT ON vsecure_with_pipe TO test_user
    Grant complete.
    SQL> explain plan for
    SELECT *
    FROM vsecure
    WHERE MESSAGE LIKE 'HELLO%'
    Explain complete.
    SQL> SELECT *
    FROM TABLE ( DBMS_XPLAN.display ( ) )
    PLAN_TABLE_OUTPUT
    Plan hash value: 3905984671
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 16939 | 2365K| | 3098 (3)| 00:00:54 |
    |* 1 | HASH JOIN | | 16939 | 2365K| 2120K| 3098 (3)| 00:00:54 |
    |* 2 | HASH JOIN | | 24103 | 1835K| | 993 (5)| 00:00:18 |
    |* 3 | TABLE ACCESS FULL| TABLE_1 | 24102 | 1529K| | 426 (5)| 00:00:08 |
    | 4 | TABLE ACCESS FULL| TABLE_2 | 1175K| 14M| | 559 (3)| 00:00:10 |
    | 5 | TABLE ACCESS FULL | TABLE_3 | 826K| 51M| | 415 (3)| 00:00:08 |
    Predicate Information (identified by operation id):
    1 - access("B"."ID"="C"."ID")
    2 - access("A"."ID"="B"."ID")
    3 - filter("A"."MESSAGE" LIKE 'HELLO%')
    Note
    PLAN_TABLE_OUTPUT
    - dynamic sampling used for this statement
    23 rows selected.
    SQL> -----------------------------------------
    SQL> -- note that the explain plan shows the predicate pushed down into the base view.
    SQL> -----------------------------------------
    SQL> explain plan for
    SELECT count(*)
    FROM vsecure_with_pipe
    WHERE MESSAGE LIKE 'HELLO%'
    Explain complete.
    SQL> SELECT *
    FROM TABLE ( DBMS_XPLAN.display ( ) )
    PLAN_TABLE_OUTPUT
    Plan hash value: 19045890
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 2 | 15 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 2 | | |
    |* 2 | COLLECTION ITERATOR PICKLER FETCH| F_GET_VBASE | | | | |
    Predicate Information (identified by operation id):
    2 - filter(VALUE(KOKBF$) LIKE 'HELLO%')
    14 rows selected.
    SQL> -----------------------------------------
    SQL> -- note that the filter is applied on the results of the pipeline function
    SQL> -----------------------------------------
    SQL> set timing on
    SQL> SELECT count(*)
    FROM vsecure
    WHERE MESSAGE LIKE 'HELLO%'
    COUNT(*)
    19999
    1 row selected.
    Elapsed: 00:00:01.42
    SQL> SELECT count(*)
    FROM vsecure_with_pipe
    WHERE MESSAGE LIKE 'HELLO%'
    COUNT(*)
    19999
    1 row selected.
    Elapsed: 00:00:04.11
    SQL> -----------------------------------------
    SQL> -- note the difference in the execution times.
    SQL> -----------------------------------------

  • Question on optimum choice of index - whether to use CTXCAT or CONTEXT

    Hi,
    I have a situation in which short texts are to be searched for diacritical characters, and for that I implemented a CTXCAT type of index. The solution works fine except for left-side wildcard searches; for those I suggested that the developers use the query template feature. That is the background for my question, and the following example demonstrates it:
    CREATE TABLE TEST_USER
      (FIRST_NAME  VARCHAR2(64 CHAR)                 NOT NULL,
       LAST_NAME   VARCHAR2(64 CHAR)                 NOT NULL);
    CREATE INDEX TEST_USER_IDX3 ON TEST_USER
    (FIRST_NAME)
    INDEXTYPE IS CTXSYS.CTXCAT
    PARAMETERS('LEXER cust_lexer');
    CREATE INDEX TEST_USER_IDX4 ON TEST_USER
    (LAST_NAME)
    INDEXTYPE IS CTXSYS.CTXCAT
    PARAMETERS('LEXER cust_lexer');
    Don't worry about cust_lexer; it is for diacritical search and not relevant to this question, so I am not copying the code for the preference I created, etc.
    Now I have a row of data in the table with the first_name column set to 'Supervisor'. If I run the SQL below, it returns:
    SELECT *
      FROM test_user
    WHERE catsearch (first_name, 'Supervisor', NULL) > 0;
    FIRST_NAME                     LAST_NAME
    Supervisor                     upervisor
    --even the SQL below, with a wildcard (*) at the end, works fine...
    SQL> SELECT *
      2    FROM test_user
      3   WHERE catsearch (first_name, 'Super*', NULL) > 0;
    FIRST_NAME                     LAST_NAME
    Supervisor                     upervisor
    However, the SQL queries below don't return any output, though they should return the same row as above!
    SQL> SELECT *
      2    FROM test_user
      3   WHERE catsearch (first_name, '*visor', NULL) > 0;
    no rows selected
    SQL> SELECT *
      2    FROM test_user
      3   WHERE catsearch (first_name, '*vis*', NULL) > 0;
    no rows selected
    --Using a query template as below solves the issue:
    select * from test_user
    where catsearch(first_name,
    '<query>
      <textquery grammar="context">
         %viso%
      </textquery>
    </query>','')>0
    FIRST_NAME                     LAST_NAME
    Supervisor                     upervisor
    Note that I verified the query execution plan and it uses the index and there is no Full Table Scan:
    | Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                |       |       |     9 (100)|          |
    |   1 |  TABLE ACCESS BY INDEX ROWID| TEST_USER      |   376 |    99K|     9   (0)| 00:00:01 |
    |*  2 |   DOMAIN INDEX              | TEST_USER_IDX3 |       |       |            |          |
    ----------------------------------------------------------------------------------------------
    All details up to this point were by way of background... now the question: is this the right choice? I am using context grammar via a query template. There is another thread on this forum where an expert (Barbara) said:
    "It should be better to use a context index than a ctxcat index with a query template that uses context grammar." - this was said in this thread: Re: Wildcard Search
    So I am in doubt. However, I have good data here showing that the query doesn't do a full table scan - is it still a bad choice? Note that there are several issues with CONTEXT type indexes (as per my limited understanding): because they are not transactional in nature, we have to take extra steps/measures to keep the indexes updated, which seems like a major pain area to me.
    My doubt is: did I do the right thing by using a query template, or should I use a CONTEXT type of index instead of a CTXCAT type of index?
    Thanks,
    Nirav

    I would just like to add a few comments.
    Although it is documented that the ctxcat index and catsearch do not support a wildcard in front of the term, a workaround is to use two asterisks on the left side of the term, as demonstrated below. I provide this only for clarification and as interesting trivia; I would still use a context index, for various reasons.
    SCOTT@orcl_11gR2> CREATE TABLE TEST_USER
      2    (FIRST_NAME  VARCHAR2(64 CHAR))
      3  /
    Table created.
    SCOTT@orcl_11gR2> INSERT INTO test_user VALUES ('Supervisor')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> CREATE INDEX TEST_USER_IDX
      2  ON TEST_USER (FIRST_NAME)
      3  INDEXTYPE IS CTXSYS.CTXCAT
      4  /
    Index created.
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> SELECT * FROM test_user
      2  WHERE  catsearch (first_name, '**vis*', NULL) > 0
      3  /
    FIRST_NAME
    Supervisor
    1 row selected.
    Execution Plan
    Plan hash value: 4046491764
    | Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |               |     1 |   142 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| TEST_USER     |     1 |   142 |     3   (0)| 00:00:01 |
    |*  2 |   DOMAIN INDEX              | TEST_USER_IDX |       |       |            |          |
    Predicate Information (identified by operation id):
       2 - access("CTXSYS"."CATSEARCH"("FIRST_NAME",'**vis*',NULL)>0)
    Note
       - dynamic sampling used for this statement (level=2)
    SCOTT@orcl_11gR2>
    The only time that I am aware of a conflict between a wordlist and a lexer is when you specify stemming in both. If you are using stemming, you can still use both a wordlist and a lexer, but only set the stemmer attribute in the wordlist, not the index_stems attribute in the lexer.

  • Is there a way to specify columns to be excluded from the NLSSORT function

    Hi, in order to enable case-insensitive searches in oracle without making significant app changes, we've added a login trigger that enables linguistic sorting by setting the session params:
    alter session set nls_comp=LINGUISTIC;
    alter session set nls_sort=BINARY_CI;
    While this gives us exactly the desired behavior for all our actual linguistic data, 90% of our primary key fields are typed as varchar2 fields that contain GUIDs. This is a problem because the primary key index is a binary index, but when we query for a row by its primary key we get table scans, because the query is translated to use the NLSSORT function as a result of our session setup.
    Here's a specific example of what we're seeing:
    SQL> create table t1 (c1 varchar2(255) not null, c2 varchar2(255) null, primary key (c1));
    Table created.
    SQL> set autotrace traceonly explain;
    SQL> select * from t1 where c1='t';
    Execution Plan
    Plan hash value: 3617692013
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 258 | 2 (0)| 00:00:01 |
    |* 1 | TABLE ACCESS FULL| T1 | 1 | 258 | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter(NLSSORT("C1",'nls_sort=''BINARY_CI''')=HEXTORAW('7400') )
    Note
    - dynamic sampling used for this statement
    I understand why this is occurring, and I know that if I add an NLSSORT-based index to the primary key columns of each of our tables (in addition to the existing unique index used by the PK constraint), that second index will cause my query to perform an index range scan. However, for various reasons this is something of a daunting task in our app, and we'd like to avoid the double indexes for performance reasons as well. So what I'd prefer is a way to tell the session NOT to apply the NLSSORT function when it sees these columns in incoming SQL, but I haven't read anything that leads me to believe such blacklisting is a feature of the NLS libraries. If anyone knows whether this is possible, or has an alternate approach, I'd really appreciate your feedback.
    Thanks!

    Unfortunately, there is no way to avoid this problem globally. We are aware of this deficiency and we will look for a solution for some future release. In the meantime, you will need, unfortunately, to code the workaround you mentioned.
    -- Sergiusz
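
    For reference, the workaround mentioned above is a linguistic functional index on each affected key column, along these lines (using the t1 example from the question):

    create index t1_c1_ci on t1 (nlssort(c1, 'NLS_SORT=BINARY_CI'));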

  • Index usage error

    I have a table in my schema named provider_rate_history:
    PROVIDER_RATE_HISTORY_ID      NOT NULL      NUMBER
    WORK_ORDER_HISTORY_ID      NOT NULL      NUMBER
    PROVIDER_RATE_TYPE_ID      NOT NULL      NUMBER
    CREATED_BY      NOT NULL      NUMBER
    DATE_CREATED      NOT NULL      DATE
    MODIFIED_BY           NUMBER
    DATE_MODIFIED           DATE
    I created an index named provider_rate_type_idx on the provider_rate_type_id column of the table. When I run the following query the index is used, but if I replace count(*) in the select list with any column of the table, a full table scan is performed. What could be the reason for this, and how do I correct it?
    select count(*) from provider_rate_history
    where rate_type_id=7;
    The index is used. If we replace count(*) with any column name of the table, the index is not used:
    select work_order_history_id from provider_rate_history
    where rate_type_id=7;
    The index is not used.

    Hi,
    Why would count(*) lead to a full table scan?
    APC has already clearly explained the reason the index is not used: count(*) can be answered entirely from the index (every row appears in it, because the indexed column is NOT NULL), while selecting any other column forces a visit to the table, at which point the optimizer may decide a full scan is cheaper.
    I hope this illustration helps to clarify the situation.
    SQL> create table test111 as select * from all_objects where rownum < 1001;
    Table created.
    SQL> desc test111
    Name Null? Type
    OWNER NOT NULL VARCHAR2(30)
    OBJECT_NAME NOT NULL VARCHAR2(30)
    SUBOBJECT_NAME VARCHAR2(30)
    OBJECT_ID NOT NULL NUMBER
    DATA_OBJECT_ID NUMBER
    OBJECT_TYPE VARCHAR2(19)
    CREATED NOT NULL DATE
    LAST_DDL_TIME NOT NULL DATE
    TIMESTAMP VARCHAR2(19)
    STATUS VARCHAR2(7)
    TEMPORARY VARCHAR2(1)
    GENERATED VARCHAR2(1)
    SECONDARY VARCHAR2(1)
    SQL> create index test111_indx1 on test111 (object_id);
    Index created.
    SQL>
    SQL> set autotrace on
    SQL> select count(1) from test111;
    COUNT(1)
    1000
    Execution Plan
    Plan hash value: 1326770390
    | Id | Operation | Name | Rows | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | | |
    | 2 | INDEX FAST FULL SCAN| TEST111_INDX1 | 1000 | 3 (0)| 00:00:01 |
    Note
    - dynamic sampling used for this statement
    Statistics
    5 recursive calls
    0 db block gets
    23 consistent gets
    3 physical reads
    0 redo size
    206 bytes sent via SQL*Net to client
    239 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    SQL>
    SQL> select count(distinct owner) from test111;
    COUNT(DISTINCTOWNER)
    3
    Execution Plan
    Plan hash value: 991123090
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 17 | 5 (0)| 00:00:01 |
    | 1 | SORT GROUP BY | | 1 | 17 | | |
    | 2 | TABLE ACCESS FULL| TEST111 | 1000 | 17000 | 5 (0)| 00:00:01 |
    Note
    - dynamic sampling used for this statement
    Statistics
    27 recursive calls
    0 db block gets
    33 consistent gets
    0 physical reads
    0 redo size
    233 bytes sent via SQL*Net to client
    239 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    1 rows processed
    SQL>
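
    If the second query matters, one option consistent with the explanation above is a composite index that covers it, so the visit to the table becomes unnecessary (column names taken from the original question; a sketch, not a recommendation):

    create index provider_rate_cover_idx
      on provider_rate_history (provider_rate_type_id, work_order_history_id);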

  • Index on date field

    Please see my table:
    create table DATE_TEST
      (ID         VARCHAR2(6) not null,
       ID_DESC    VARCHAR2(250),
       START_DATE DATE,
       END_DATE   DATE);
    -- Create/Recreate primary, unique and foreign key constraints
    alter table DATE_TEST
      add constraint DATE_TEST_PK primary key (ID);
    -- Create/Recreate indexes
    create index DATE_TEST_IDX1 on DATE_TEST (END_DATE);
    create index DATE_TEST_IDX2 on DATE_TEST (START_DATE);
    Now I added some data into the table. My problem is that it is not using an index for the BETWEEN query.
    EXPLAIN PLAN FOR SELECT * FROM       DATE_TEST WHERE           start_date between to_date('01/01/2012','DD/MM/YYYY') AND to_date('31/12/2012','DD/MM/YYYY')           OR  end_date between to_date('01/01/2012','DD/MM/YYYY') AND to_date('31/12/2012','DD/MM/YYYY')
      2  /
    Explained
    SQL>      SELECT * FROM  TABLE( DBMS_XPLAN.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4189439861
    | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |           |   954 |   139K|     7   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DATE_TEST |   954 |   139K|     7   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("START_DATE">=TO_DATE(' 2012-01-01 00:00:00',
                  'syyyy-mm-dd hh24:mi:ss') AND "START_DATE"<=TO_DATE(' 2012-12-31
                  00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "END_DATE">=TO_DATE('
                  2012-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "END_DATE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:s
    Note
    PLAN_TABLE_OUTPUT
       - dynamic sampling used for this statement (level=2)
    21 rows selected
    Why is it not using the index, and what are other solutions for this?

    A full table scan is not evil.
    Are your table stats up to date?
    The number of rows in the table is shown as 954, so an index scan won't be much use.
    How many records are expected in the output? If it is a major percentage of the total records, an index scan will be more costly. (A quick stats check is sketched below.)
    See the related AskTom discussions on this topic.
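
    Before anything else, it may be worth ruling out stale statistics (table name from the example):

    exec dbms_stats.gather_table_stats(user, 'DATE_TEST')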

  • 20 Index Restriction on Remote Tables (i.e. using Database Links)

    The Oracle Database Administrator's Guides for 10g and 11g document a performance restriction that "No more than 20 indexes are considered for a remote table." If I go back to the 8i documentation it says "In cost-based optimization, no more than 20 indexes per remote table are considered when generating query plans. The order of the indexes varies; if the 20-index limitation is exceeded, random variation in query plans may result."
    Does anyone have more details on this performance restriction? In particular I am trying to answer these questions:
    1) Are the 20 indexes which are considered by the CBO still random in 10g?
    2) Can I influence which indexes are considered with index hints or will my hints only be considered if they are for one of the "random" 20 indexes which are being considered by the CBO?
    3) Are there any other approaches or work-arounds to this restriction assuming you need to select from a large remote table with more than 20 indexes (and need to perform the selection using 1 of those indexes to get adequate performance) or do we need to abandon database links for this table?
    Thanks in advance for your input.

    So, here's my simple test.
    SQL> create table gurnish.indexes20plus ( n1 number, n2 number, n3 number, n4 number, n5 number, n6 number, n7 number,
      2  n8 number, n9 number, n10 number, n11 number, n12 number, n13 number, n14 number, n15 number, n16 number,
      3  n17 number, n18 number, n19 number, n20 number, n21 number, n22 number, n23 number, n24 number,
      4  n25 number, n26 number, n28 number);
    Table created.
    SQL> create index xin1 on indexes20plus (n1);
    Index created.
    SQL> create index xin2 on indexes20plus (n2);
    Index created.
    SQL> create index xin3 on indexes20plus (n3);
    Index created.
    SQL> create index xin4 on indexes20plus (n4);
    Index created.
    SQL> create index xin5 on indexes20plus (n5);
    Index created.
    SQL> create index xin6 on indexes20plus (n6);
    Index created.
    SQL> create index xin7 on indexes20plus (n7);
    Index created.
    SQL> create index xin8 on indexes20plus (n8);
    Index created.
    SQL> create index xin9 on indexes20plus (n9);
    Index created.
    SQL> create index xin10 on indexes20plus (n10);
    Index created.
    SQL> create index xin11 on indexes20plus (n11);
    Index created.
    SQL> create index xin12 on indexes20plus (n12);
    Index created.
    SQL> create index xin13 on indexes20plus (n13);
    Index created.
    SQL> create index xin14 on indexes20plus (n14);
    Index created.
    SQL> create index xin15 on indexes20plus (n15);
    Index created.
    SQL> create index xin16 on indexes20plus (n16);
    Index created.
    SQL> create index xin17 on indexes20plus (n17);
    Index created.
    SQL> create index xin18 on indexes20plus (n18);
    Index created.
    SQL> create index xin19 on indexes20plus (n19);
    Index created.
    SQL> create index xin20 on indexes20plus (n20);
    Index created.
    SQL> create index xin21 on indexes20plus (n21);
    Index created.
    declare
    i number;
    begin
    for i in 1..100
    loop
    dbms_random.seed(i+100);
    insert into indexes20plus values (dbms_random.value(1,5),dbms_random.value(1,21),dbms_random.RANDOM, dbms_random.RANDOM,dbms_random.value(1,20),
    dbms_random.value(1,4),dbms_random.value(1,6), dbms_random.value(1,7),dbms_random.value(1,9),dbms_random.value(1,10),
    dbms_random.value(1,11),dbms_random.value(1,12),dbms_random.value(1,13),dbms_random.value(1,14),dbms_random.value(1,1),
    dbms_random.value(1,1),dbms_random.value(1,19),dbms_random.value(1,122),dbms_random.value(1,20),dbms_random.value(1,20)
    ,dbms_random.value(4,20),dbms_random.value(1,20),dbms_random.value(1,20),dbms_random.value(1,20),dbms_random.value(1,20)
    ,dbms_random.value(4,20),dbms_random.value(4,20));
    end loop;
    commit;
    end;
    /
    SQL> set autotrace traceonly
    SQL> l
    1* select * from gurnish.indexes20plus@lvoprds where n1 = 4
    SQL> /
    no rows selected
    Execution Plan
    Plan hash value: 441368878
    | Id | Operation                   | Name          | Rows | Bytes | Cost (%CPU)| Time     | Inst  |
    |  0 | SELECT STATEMENT REMOTE     |               |    1 |   351 |     1   (0)| 00:00:01 |       |
    |  1 |  TABLE ACCESS BY INDEX ROWID| INDEXES20PLUS |    1 |   351 |     1   (0)| 00:00:01 | LVPRD |
    |* 2 |   INDEX RANGE SCAN          | XIN1          |    1 |       |     1   (0)| 00:00:01 | LVPRD |
    Predicate Information (identified by operation id):
    2 - access("A1"."N1"=4)
    Note
    - fully remote statement
    - dynamic sampling used for this statement
    Statistics
    0 recursive calls
    0 db block gets
    0 consistent gets
    0 physical reads
    0 redo size
    1897 bytes sent via SQL*Net to client
    481 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    0 rows processed
    SQL> select * from gurnish.indexes20plus@lvoprds where n21 = 4;
    no rows selected
    Execution Plan
    Plan hash value: 2929530649
    | Id | Operation                   | Name          | Rows | Bytes | Cost (%CPU)| Time     | Inst  |
    |  0 | SELECT STATEMENT REMOTE     |               |    1 |   351 |     1   (0)| 00:00:01 |       |
    |  1 |  TABLE ACCESS BY INDEX ROWID| INDEXES20PLUS |    1 |   351 |     1   (0)| 00:00:01 | LVPRD |
    |* 2 |   INDEX RANGE SCAN          | XIN21         |    1 |       |     1   (0)| 00:00:01 | LVPRD |
    Predicate Information (identified by operation id):
    2 - access("A1"."N21"=4)
    Note
    - fully remote statement
    - dynamic sampling used for this statement
    Statistics
    1 recursive calls
    0 db block gets
    0 consistent gets
    0 physical reads
    0 redo size
    1897 bytes sent via SQL*Net to client
    481 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    0 rows processed
    SQL>

  • Issue found with Oracle 11.2.0.2 left outer joins.

    When performing a left outer join that returns no data, we are seeing an issue with 11.2.0.2 where it does not return any rows at all, when it should in fact return one row with null data. We have thoroughly tested this against 11.2.0.1 as well as 10g and found no issues, but for some reason 11.2.0.2 does not behave as expected.
    The following queries demonstrate what we're experiencing, and the subsequent DDL / DML will expose the issue on an 11.2.0.2 Oracle DB.
    -- QUERIES --
    --Query that exposes the LOJ issue (should return one row with 4 null columns)
    -- RETURNS: NO ROWS ARE RETURNED
    SELECT lt.*
    FROM Attr attr
    JOIN Obj obj ON attr.id = obj.id
    LEFT OUTER JOIN Loc_Txt lt ON attr.id = lt.objectid AND lt.typeid = 851 AND 1 = 2
    WHERE attr.id = 225;
    --PLEASE NOTE, the 'AND 1 = 2' is necessary for our particular use case.  While this doesn't make much logical sense here, it is necessary to expose this issue.
    --Query - shows the expected behavior by simply adding a column that is not null.
    --RETURNS: ONE ROW RETURNED with first 4 columns null, and '225' for obj.id.
    SELECT lt.*, obj.id
    FROM Attr attr
    JOIN Obj obj ON attr.id = obj.id
    LEFT OUTER JOIN Loc_Txt lt ON attr.id = lt.objectid AND lt.typeid = 851 AND 1 = 2
    WHERE attr.id = 225;
    --Query - shows that the expected behavior also resumes by swapping the LoJ against obj.id instead of attr.id.  Since these tables are joined together by id, this should be ARBITRARY!
    -- RETURNS: ONE ROW RETURNED with first 4 columns null, and '225' for obj.id
    SELECT lt.*
    FROM Attr attr
    JOIN Obj obj ON attr.id = obj.id
    LEFT OUTER JOIN Loc_Txt lt ON obj.id = lt.objectid AND lt.typeid = 851 AND 1 = 2
    WHERE attr.id = 225;
    -- DDL --
    -- OBJ TABLE --
    CREATE TABLE "TESTDB"."OBJ"
         "ID" NUMBER(10,0) NOT NULL ENABLE
         ,"TYPEID" NUMBER(10,0) NOT NULL ENABLE
         ,CONSTRAINT "OBJ_PK" PRIMARY KEY ("ID") RELY ENABLE
    commit;
    -- LOC_TXT TABLE --
    CREATE TABLE "TESTDB"."LOC_TXT"
         "ID" NUMBER(10,0) NOT NULL ENABLE,
         "OBJECTID" NUMBER(10,0) NOT NULL ENABLE,
    "TYPEID" NUMBER(10,0) NOT NULL ENABLE,
         "VALUE" NVARCHAR2(400) NOT NULL ENABLE,
         CONSTRAINT "LOC_TXT_PK" PRIMARY KEY ("ID") RELY ENABLE,
         CONSTRAINT "LOC_TXT1_FK" FOREIGN KEY ("OBJECTID") REFERENCES "TESTDB"."OBJ" ("ID") RELY ENABLE
    commit;
    -- ATTR TABLE --
    CREATE TABLE "TESTDB"."ATTR"
         "ID" NUMBER(10,0) NOT NULL ENABLE,
         "ATTRIBUTEVALUE" NVARCHAR2(255),
         CONSTRAINT "ATTR_PK" PRIMARY KEY ("ID") RELY ENABLE,
         CONSTRAINT "ATTR_FK" FOREIGN KEY ("ID") REFERENCES "TESTDB"."OBJ" ("ID") RELY ENABLE
    commit;
    -- DATA --
    insert into obj (id, typeid) values(225, 174);
    insert into attr (id, attributevalue) values(225, 'abc');
    insert into obj (id, typeid) values(2274, 846);
    insert into loc_txt(id, objectid, typeid, value) values(540, 2274, 851, 'Core Type');
    commit;
    -- DROP TABLES --
    --DROP TABLE "TESTDB"."ATTR";
    --DROP TABLE "TESTDB"."LOC_TXT";
    --DROP TABLE "TESTDB"."OBJ";
    --commit;

    Whilst I agree that your test cases do show some anomalies, the statement logic does smell a little odd.
    In general, I'd be wary of trying to reproduce issues using hardcoded conditions such as "AND 1=2": because such a predicate can be evaluated at parse time, you can get plans that are not representative of run-time behaviour.
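    A less fragile variation - just a sketch on my part, not part of the original test case - is to push the always-false condition into a bind variable so it cannot be folded away at parse time. The ":n" bind below is hypothetical, introduced purely for illustration:

    variable n number
    exec :n := 2

    SELECT lt.*
    FROM Attr attr
    JOIN Obj obj ON attr.id = obj.id
    LEFT OUTER JOIN Loc_Txt lt ON attr.id = lt.objectid
                               AND lt.typeid = 851
                               AND 1 = :n  -- still false at run time, but not a parse-time constant
    WHERE attr.id = 225;

    Bind peeking may still influence the estimates, but the predicate should no longer collapse into a constant FILTER at parse time.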
    However....
    The trouble with ANSI - trouble as in it seems to have a lot of bugs/issues - is that it is always transformed to orthodox Oracle syntax prior to execution; you can see this in a 10053 trace file. This is possibly not helped by the distinction ANSI makes between join predicates and filter predicates, something that Oracle syntax does not really have without rewriting the SQL significantly.
    For more information on a similar sounding ANSI problem, see http://jonathanlewis.wordpress.com/2011/08/03/trouble-shooting-4/
    Yours might not even be ANSI related.
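    For reference, the transformed "orthodox" statement can be seen by tracing the hard parse with the standard 10053 event (the tracefile identifier below is just a convenience I've added; in 11g the rewritten text appears under a heading like "Final query after transformations"):

    ALTER SESSION SET tracefile_identifier = 'ansi_10053';
    ALTER SESSION SET events '10053 trace name context forever, level 1';
    -- hard-parse the statement of interest here, e.g. via EXPLAIN PLAN FOR ...
    ALTER SESSION SET events '10053 trace name context off';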
    If you check the execution plan - particularly the predicates section, then you can see some of the issues/symptoms.
    See the "NULL IS NOT NULL" FILTER operation at Id 1.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    SQL> SELECT lt.*
      2  FROM Attr attr
      3  JOIN Obj obj ON attr.id = obj.id
      4  LEFT OUTER JOIN Loc_Txt lt ON attr.id = lt.objectid AND lt.typeid = 851 AND 1 = 2
      5  WHERE attr.id = 225;
    no rows selected
    SQL> explain plan for
      2  SELECT lt.*
      3  FROM Attr attr
      4  JOIN Obj obj ON attr.id = obj.id
      5  LEFT OUTER JOIN Loc_Txt lt ON attr.id = lt.objectid AND lt.typeid = 851 AND 1 = 2
      6  WHERE attr.id = 225;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 1392151118
    | Id  | Operation            | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |         |     1 |   454 |     0   (0)|          |
    |*  1 |  FILTER              |         |       |       |            |          |
    |   2 |   NESTED LOOPS OUTER |         |     1 |   454 |     4   (0)| 00:00:01 |
    |*  3 |    INDEX UNIQUE SCAN | ATTR_PK |     1 |    13 |     1   (0)| 00:00:01 |
    |   4 |    VIEW              |         |     1 |   441 |     3   (0)| 00:00:01 |
    |*  5 |     TABLE ACCESS FULL| LOC_TXT |     1 |   441 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(NULL IS NOT NULL)
       3 - access("ATTR"."ID"=225)
       5 - filter("ATTR"."ID"="LT"."OBJECTID" AND "LT"."TYPEID"=851)
    Note
       - dynamic sampling used for this statement (level=4)
    23 rows selected.
    SQL>
    Whereas in the next example, the FILTER operation has moved further down:
    SQL> SELECT lt.*, obj.id
      2  FROM Attr attr
      3  JOIN Obj obj ON attr.id = obj.id
      4  LEFT OUTER JOIN Loc_Txt lt ON attr.id = lt.objectid AND lt.typeid = 851 AND 1 = 2
      5  WHERE attr.id = 225;
            ID   OBJECTID     TYPEID
    VALUE
            ID
           225
    1 row selected.
    SQL> explain plan for
      2  SELECT lt.*, obj.id
      3  FROM Attr attr
      4  JOIN Obj obj ON attr.id = obj.id
      5  LEFT OUTER JOIN Loc_Txt lt ON attr.id = lt.objectid AND lt.typeid = 851 AND 1 = 2
      6  WHERE attr.id = 225;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2816285829
    | Id  | Operation            | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |         |     1 |   454 |     1   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS OUTER  |         |     1 |   454 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX UNIQUE SCAN  | ATTR_PK |     1 |    13 |     1   (0)| 00:00:01 |
    |   3 |   VIEW               |         |     1 |   441 |            |          |
    |*  4 |    FILTER            |         |       |       |            |          |
    |*  5 |     TABLE ACCESS FULL| LOC_TXT |     1 |   441 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("ATTR"."ID"=225)
       4 - filter(NULL IS NOT NULL)
       5 - filter("ATTR"."ID"="LT"."OBJECTID" AND "LT"."TYPEID"=851)
    Note
       - dynamic sampling used for this statement (level=4)
    23 rows selected.
    SQL>
    However, you might also have noticed that OBJ is not referenced at all in the execution plans. This may indicate where the issue lies, perhaps in conjunction with ANSI, or it may be nothing to do with ANSI at all.
    The foreign key constraint means that any reference to OBJ can be rewritten and eliminated.
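    As a minimal illustration of that kind of join elimination - hypothetical PARENT/CHILD tables, not the original poster's schema:

    CREATE TABLE parent (id NUMBER PRIMARY KEY);
    CREATE TABLE child  (id NUMBER PRIMARY KEY,
                         CONSTRAINT child_fk FOREIGN KEY (id) REFERENCES parent (id));

    -- CHILD.ID is NOT NULL (it's the PK) and guaranteed to exist in PARENT,
    -- and no PARENT columns are selected, so the join can be dropped:
    EXPLAIN PLAN FOR
    SELECT c.id
    FROM   child c
    JOIN   parent p ON c.id = p.id;

    SELECT * FROM table(dbms_xplan.display);
    -- Expect the plan to reference only CHILD's PK index; PARENT is eliminated.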
    So, if we remove the foreign key constraint, we get the expected one row with all null values returned:
    SQL> alter table attr drop constraint attr_fk;
    Table altered.
    SQL> SELECT lt.*
      2  FROM Attr attr
      3  JOIN Obj obj ON attr.id = obj.id
      4  LEFT OUTER JOIN Loc_Txt lt ON attr.id = lt.objectid AND lt.typeid = 851 AND 1 = 2
      5  WHERE attr.id = 225;
            ID   OBJECTID     TYPEID
    VALUE
    1 row selected.
    SQL> explain plan for
      2  SELECT lt.*
      3  FROM Attr attr
      4  JOIN Obj obj ON attr.id = obj.id
      5  LEFT OUTER JOIN Loc_Txt lt ON attr.id = lt.objectid AND lt.typeid = 851 AND 1 = 2
      6  WHERE attr.id = 225;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 1136995246
    | Id  | Operation            | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |         |     1 |   467 |     1   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS OUTER  |         |     1 |   467 |     1   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS       |         |     1 |    26 |     1   (0)| 00:00:01 |
    |*  3 |    INDEX UNIQUE SCAN | OBJ_PK  |     1 |    13 |     1   (0)| 00:00:01 |
    |*  4 |    INDEX UNIQUE SCAN | ATTR_PK |     1 |    13 |     0   (0)| 00:00:01 |
    |   5 |   VIEW               |         |     1 |   441 |            |          |
    |*  6 |    FILTER            |         |       |       |            |          |
    |*  7 |     TABLE ACCESS FULL| LOC_TXT |     1 |   441 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("OBJ"."ID"=225)
       4 - access("ATTR"."ID"=225)
       6 - filter(NULL IS NOT NULL)
       7 - filter("ATTR"."ID"="LT"."OBJECTID" AND "LT"."TYPEID"=851)
    Note
       - dynamic sampling used for this statement (level=4)
    26 rows selected.
    Note also that, with the constraint in place, I did not get a row returned regardless of whether the outer join was to attr.id or obj.id.
