Subqueries Vs Joins

Hi Friends,
Is there any restriction such that I can use only a specific set of joins inside a subquery, or vice versa? If anything like that exists, please share, or a link would help me go through the relevant documents.
Regards,
Manoj Chakravarthy

No restrictions that I can think of off the top of my head.
Ch9 SQL Queries & Subqueries from SQL Reference manual
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/queries.htm#i2068094
would be where you can find definitive information.
Scott

Similar Messages

  • Scalar subqueries vs. joins

    Hi,
    is someone able to tell me something about the impact on performance of scalar subqueries in select list?
    I found that scalar subqueries are processed faster than joins but don't understand why.
    E.g. first statement:
    select e.ename, e.deptno,
    (select dname from dept d where d.deptno=e.deptno) dname from emp e
    where e.deptno =10;
    Second statement:
    select e.ename, d.deptno, d.dname
    from emp e, dept d
    where e.deptno=d.deptno and d.deptno=10;
    The optimizer executes the first statement with a full table scan on emp, and the second with a nested loop join.
    The first statement is executed faster. I also found that the first statement is optimized for throughput, the second for response time.
    This is the behavior not only when there are thousands of rows in emp but also in real-life applications.
    Regards Frank

    The relative performance of scalar subqueries and joins will largely depend on the relative sizes of the tables, and the indexes available.
    Essentially, the scalar subquery works in the same fashion as this PL/SQL construct:
    FOR r IN (SELECT ename, deptno FROM emp) LOOP
       SELECT deptno, dname
       INTO v1, v2
       FROM dept
       WHERE deptno = r.deptno;
       DBMS_OUTPUT.Put_Line(r.ename||' '||v1||' '||v2);
    END LOOP;
    If dept is indexed on deptno, then this query will begin to return rows quickly, since it is only a little more complex than SELECT * FROM emp.
    The join, on the other hand, has to read all qualifying rows from both tables before it can begin doing the join. That is why it takes longer to return the first rows.
    The disadvantage of a scalar subquery is that it will usually require much more I/O than a join. For every row in the outer table, you will require at least one I/O to get the value from the table in the subquery, and in your example, you will require two I/O's (one to read the index to get rowid, and another to read the row).
    In addition, the two queries are not equivalent. Consider:
    SQL> SELECT * FROM emp;
        EMP_ID ENAME          DEPTNO
             1 JOHN               10
    SQL> SELECT * FROM dept;
        DEPTNO DNAME
            20 SIMS
    SQL> SELECT e.ename, e.deptno,
                (SELECT dname FROM dept d WHERE d.deptno=e.deptno) dname
         FROM emp e
         WHERE e.deptno =10;
    ENAME          DEPTNO DNAME
    JOHN               10
    SQL> SELECT e.ename, d.deptno, d.dname
         FROM emp e, dept d
         WHERE e.deptno=d.deptno and
               d.deptno=10;
    no rows selected
    SQL> SELECT e.ename, d.deptno, d.dname
         FROM emp e, dept d
         WHERE e.deptno=d.deptno and
               e.deptno=10;
    no rows selected
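    For completeness, the outer-join form is the one that actually matches the scalar subquery's output - an unmatched row survives with a NULL dname instead of being dropped. A sketch (untested) against the same tables, which would return JOHN, 10, and a null dname here:
    SELECT e.ename, e.deptno, d.dname
    FROM emp e, dept d
    WHERE e.deptno = d.deptno(+) and
          e.deptno = 10;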
    TTFN
    John

  • Correlated Subqueries Vs Cursor

    Hi all,
    For good performance, which is better: correlated subqueries or cursors? (Both tables have millions of records.)
    Thanks
    Kalinga

    Blind rule: if something can be done in SQL alone, then it performs better than doing it in PL/SQL.
    So using subqueries or JOINs is always better than using cursors (I think you meant nested cursors here).
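    As an illustration (a sketch on the standard emp/dept tables; the same idea applies to your million-row tables):
    -- Nested cursors: emp is re-queried once per department row
    begin
      for d in (select deptno, dname from dept) loop
        for e in (select ename from emp where deptno = d.deptno) loop
          dbms_output.put_line(d.dname || ': ' || e.ename);
        end loop;
      end loop;
    end;
    /
    -- One SQL join produces the same rows in a single statement
    select d.dname, e.ename
    from dept d, emp e
    where e.deptno = d.deptno;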
    Cheers
    Sarma.

  • Creating data in a many-to-many-relationship

    Hello,
    we really have problems implementing a JClient dialog based on BC4J for creating data in a many-to-many relationship - especially with cascade delete on both sides.
    Simplified, our tables look like:
    create table A_TABLE (
      A_ID VARCHAR2(5) not null,
      A_NAME VARCHAR2(30) not null,
      constraint PK_A_TABLE primary key (A_ID),
      constraint UK_A_TABLE unique (A_NAME)
    );
    create table B_TABLE (
      B_ID VARCHAR2(5) not null,
      B_NAME VARCHAR2(30) not null,
      constraint PK_B_TABLE primary key (B_ID),
      constraint UK_B_TABLE unique (B_NAME)
    );
    create table AB_TABLE (
      A_ID VARCHAR2(5) not null,
      B_ID VARCHAR2(5) not null,
      constraint PK_AB_TABLE primary key (A_ID, B_ID),
      constraint FK_AB_A foreign key (A_ID) references A_TABLE (A_ID) on delete cascade,
      constraint FK_AB_B foreign key (B_ID) references B_TABLE (B_ID) on delete cascade
    );
    Could JDev Team please provide a BC4J/JClient sample that performs the following task:
    The dialog should use A_TABLE as master and AB_TABLE as detail. The detail displays the names associated with the IDs. Next to AB_TABLE should be a view of B_TABLE which only displays rows that are currently not in AB_TABLE. Two buttons are used for adding and removing rows in AB_TABLE. After adding or removing rows in the intersection the B_TABLE view should be updated. The whole thing should work in the middle and client tier. This means no database round trips after each add/remove, no posts for AB_TABLE and no query reexecution for B_TABLE until commit/rollback.
    This is a very common scenario: for an item group (A_TABLE) one can select and deselect items (AB_TABLE) from a list of available items (B_TABLE). Most of JDeveloper's wizards use this. They can handle multi/single selections, selections from complex structures like trees, and so on. OK, the wizards are not based on BC4J - or are they? How can we do it with BC4J?
    Our main problems are:
    1. Updating the view of B_TABLE after add/remove reflecting the current selection
    2. A good strategy for displaying the names instead of the IDs (subqueries or joining the three tables)
    3. A JBO-27101 DeadEntityAccessException when removing an existing row from AB_TABLE and adding it again
    Other problems:
    4. We get a JBO-25030 InvalidOwnerException when creating a row in AB_TABLE. This is caused by the composition. We work around this using createAndInitRow(AttributeList) on the view object (instead of create()). This is our add action:
    ViewObject abVO = panelBinding.getApplicationModule().findViewObject("ABView");
    ViewObject bVO = panelBinding.getApplicationModule().findViewObject("BView");
    Row bRow = bVO.getCurrentRow();
    NameValuePairs attribList = new NameValuePairs(
    new String[]{"BId"}, new Object[]{bRow.getAttribute("BId")});
    Row newRow = abVO.createAndInitRow(attribList);
    abVO.insertRow(newRow);
    5. After inserting the new row the NavigationBar has enabled commit/rollback buttons and AB_TABLE displays the row. But performing a commit does nothing. With the following statement after insertRow(newRow) the new row is created in the database:
    newRow.setAttribute("BId", bRow.getAttribute("BId"));
    Please give us some help on this subject.
    Best regards
    Michael Thal

    <Another attempt to post a reply.>
    > 1. Updating the view of B_TABLE after add/remove reflecting the current selection
    You should be able to use insertRow() to insert the row into the proper collection.
    However, to remove a row only from the collection, you need to add a method on the VO subclasses (and perhaps export this method so that the client side can see it) to unlink a row from a collection (but not remove the associated entities from the cache).
    This new method should use the ViewRowSetImpl.removeRowAt() method to remove the row entry at the given index from its collection. Note that this is an absolute index and not a range index in the collection.
    > 2. A good strategy for displaying the names instead of the IDs (subqueries or joining the three tables)
    You should join the three tables by using reference (and perhaps read-only) entities.
    > 3. A JBO-27101 DeadEntityAccessException when removing an existing row from AB_TABLE and adding it again
    This happens because the remove() method on the Row marks the row as removed; attempts to add this row into another collection will then throw a DeadEntityAccessException.
    You may remove the row from its collection, then call Row.refresh on it to revert the entity back to the undeleted state.
    > 4. We get a JBO-25030 InvalidOwnerException when creating a row in AB_TABLE. This is caused by the composition. We work around this using createAndInitRow(AttributeList) on the view object (instead of create()). This is our add action:
    > ViewObject abVO = panelBinding.getApplicationModule().findViewObject("ABView");
    > ViewObject bVO = panelBinding.getApplicationModule().findViewObject("BView");
    > Row bRow = bVO.getCurrentRow();
    > NameValuePairs attribList = new NameValuePairs(
    >     new String[]{"BId"}, new Object[]{bRow.getAttribute("BId")});
    > Row newRow = abVO.createAndInitRow(attribList);
    > abVO.insertRow(newRow);
    This is a handy approach. Note that the BC4J framework does not support dual composition, where the same detail can be owned by two or more masters. In those cases you also need to implement post ordering, to post the masters before the detail (and the reverse ordering for deletes).
    > 5. After inserting the new row the NavigationBar has enabled commit/rollback buttons and AB_TABLE displays the row. But performing a commit does nothing. With the following statement after insertRow(newRow) the new row is created in the database:
    > newRow.setAttribute("BId", bRow.getAttribute("BId"));
    This bug in JDev 9.0.3 was fixed, and a patch set (9.0.3.1) is (I believe) available now via MetaLink.

  • Newbie starting a database app need direction/suggestions

    I've been mucking about with C# and SQL databases for some time now and I've decided it's time to try a serious project. I want to create an application for managing on-the-job training at work. I have played around the last week with SQL CE, making little experimental databases to learn more about relational databases. I have a rudimentary understanding of RDBMSs (wouldn't go so far as to say basic); I know how to do simple queries, subqueries, and basic joins, and I've even played with triggers a bit (in MSSSEE).
    I could probably make a go of this project and make something work, but since it's pretty basic (to an expert anyway) I figured I'd be better off going to the community for direction and advice (not to get people to code for me). I've partially planned out how to set up the database, and I'd like to describe what I've come up with so more knowledgeable folks can tell me whether it's a good or bad path to follow and perhaps point me in a better direction for the bad parts. I'd rather understand what I'm doing and learn the proper ways than simply say "can someone write this for me?".
    Anyway, I thought a lot about what this app needs to do and did a few experimental projects where I realized even more things that it needs to do. I think I have the requirements just about covered.
    Some training happens only once, some (by law or policy) have to be repeated periodically, some need a subset redone due to movement of an employee from one job or facility to another where some, but not all, of the previous training needs to be redone (machinery
    operator moving to a different plant with the same function but different systems would have to relearn the system specifics). Some subjects apply to all employees and others to very few.
    The most basic function is to let supervisors assign training tasks to employees and track their progress but the application also needs to alert a supervisor of upcoming due dates for assigned training and upcoming re-do dates for the periodic training. In
    some cases, like with mandatory periodic training, it could (should?) initiate assignment rather than wait for a supervisor to remember to assign the training manually.
    The application could be set up to have every bit of training for a job class organized under that job class but I think it would be more flexible to organize around a specific skill or area of duty.
    For example, an ice rink worker needs to know how to operate the refrigeration plant, operate the ice resurfacer (generically called the Zamboni), perform emergency duties (fire, earthquake, chemical leak, etc.), and several other areas of knowledge. One area/skill would be operating the refrigeration plant; another would be emergency response. Each of these would be a training "topic". Many topics would be applicable to multiple job classes, which is why they are in a separate table from books.
    Each topic would be broken down into focused steps - small bite-sized chunks that the employee would learn and then be tested on. As each step is completed, the examiner (supervisor or "expert" in that field) marks the step as completed and records that in the application (might be a weekly or monthly hand-in of books to record progress).
    Each job would have several topics to know, so the topics are organized in what I've ingeniously decided to call a "book" (talk about imaginative!). We have areas where a person in a job class actually has a different knowledge requirement than the same job class in another area. There are commonalities of course, but I don't think it would be good to have every person in a job class expected to complete training in something they aren't doing and in many cases will never be required to do in their entire career.
    So a book would be something like Arena Operation, Boiler Operation, WHMIS, or Emergency Response. A job class would be assigned several "books" to cover all the areas of knowledge they require.
    Now to kick off thinking about actual database design.
    The training definitions would be in three tables: Books, Topics, and Objectives. A book has multiple topics and a topic has multiple objectives.
    The book table would have an int primary key, a title field, a longer description field (explaining the aim of the book in some way), a time-to-complete field (added to the assignment date to get a due date), a repeatability field (the values here would indicate one-shot, can repeat, or must repeat), and a repeat frequency field (how long before it has to be repeated - might be a date or a datediff, haven't figured that out yet).
    The topics table would have an ID primary key, a title, and another longer description field for information specific to that topic.
    The objectives table would have an ID primary key, a Topic foreign key, a short description, a long description, a "given" field outlining what the trainee gets, and a "denied" field outlining what the trainee can't have (no drawings allowed when you're describing a system).
    Later on I want the program to be able to print shirt pocket sized booklets for trainees that contains short descriptions and larger OJT manuals that have all the details that trainees can refer to.
    The next table needed would be the employees table. I think the obvious ID field, first and last name fields - possibly an initials field (in case of identical first and last names), and possibly a "deleted" field for employees that leave (don't want to actually delete them in case they come back at some later date). If not a deleted field, then a "former employee" table that an employee is copied to when they leave.
    Several relations tables would be created when a book is assigned to an employee. I haven't quite figured out exactly what to use which is why I chose now to post this. Some direction from here could save a lot of headache later. The relations tables would
    contain only "current" books - ones that are still in progress.
    I was thinking of EmployeeBook, BookTopic, and TopicObjective.
    EmployeeBook would have EmployeeID and BookID, a date field for the assigned date and another date field for the due date. I had thought of just due date or just assigned date and a time allowed but I think it might be simpler to have both dates specified.
    I'd also thought of making a unique key on EmployeeID and BookID but quickly realized some books may have to be assigned many times (WHMIS etc).
    BookTopics would have a primary key of BookID and TopicID - can't think of any more fields here.
    TopicObjective would have TopicID and ObjectiveID as well as a status field (three values - not complete, complete, not applicable) and an examiner ID field (for who signed off on the objective).
    Once a book is completed I was thinking of deleting it from the relational tables and creating an entry in an archive table - no relations, everything would be the full text so if the training data is changed it won't change the completed training records.
    The indexes would be whole names and any reporting would have to be grouped by book name, topic name, etc.
    So that's the general idea. I'd dearly love any suggestions or advice people have. What sounds reasonable to me might be recognized as a fool's errand by someone with actual competence in database programming. I'll work on specifics once I know I'm not running full tilt without a flashlight down a blind alley in the dark.
    Thanks for getting through the entire post 8)

    Hi Ghidera,
    From my point of view, there is no need to design the BookTopic table. You can create an EmployeeID field in the TopicObjective table, then create a foreign key on the column EmployeeID referencing the column EmployeeID in the EmployeeBook table. You also need to create foreign key relations from the EmployeeBook table to the Employees table and the Books table.
    Besides, you should create a foreign key on a column in the Topics table referencing the column ID in the Books table.
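    A rough T-SQL sketch of those last two suggestions (table and column names assumed from the post; the EmployeeID link would additionally need a unique key on EmployeeBook to reference):
    ALTER TABLE EmployeeBook ADD CONSTRAINT FK_EmployeeBook_Employees
        FOREIGN KEY (EmployeeID) REFERENCES Employees(ID);
    ALTER TABLE EmployeeBook ADD CONSTRAINT FK_EmployeeBook_Books
        FOREIGN KEY (BookID) REFERENCES Books(ID);
    -- Topics gains a BookID column so each topic belongs to a book
    ALTER TABLE Topics ADD BookID int NOT NULL;
    ALTER TABLE Topics ADD CONSTRAINT FK_Topics_Books
        FOREIGN KEY (BookID) REFERENCES Books(ID);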
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Correlated Cursors

    Hello guys!
    How can I build a mapping using correlated cursors, as in PL/SQL?
    Ex.:
    cursor c1 is select id from t1;
    cursor c2 (pkey in number) is
    select sum(value) from t2 where id = pkey;
    What operators should I use?
    Thank you
    Marcelo

    Blind rule: if something can be done in SQL alone, then it performs better than doing it in PL/SQL.
    So using subqueries or JOINs is always better than using cursors (I think you meant nested cursors here).
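    In plain SQL, the two cursors collapse into a single outer-joined aggregate, something like this (a sketch using your t1/t2 names):
    select t1.id, sum(t2.value) as total
    from t1, t2
    where t2.id (+) = t1.id
    group by t1.id;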
    Cheers
    Sarma.

  • Query by using Not Exists

    Hi,
    What would be the alternative for the query below, using NOT EXISTS?
    select cust_name from cust_info where cust_name not in (select distinct cust_name from cust_details where status = 'CURRENT')
    Thanks

    > it gives you all possible alternatives and ways to optimize the query
    Is it? I've actually seen a couple of tools that do that - Quest and Leccotech come to mind. They would actually rewrite a query hundreds, if not thousands, of different ways - using every hint, even ones that had no reason to be used (and using different hint combinations), using different join orders (with the ORDERED hint), rewriting subqueries to joins, to inline views, to scalar subqueries in the select list, etc., etc. (possibly even giving the MINUS rewrite). But the tools had no way to know which rewrite was optimal, so they would then just execute each and every one, which could take several days (considering that some of the rewrites were terrible).
    So yeah, I think I'll hold onto that tuning guide for just a while longer ;)
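    For the record, the NOT EXISTS form of the original query would look something like this (untested; note that NOT IN and NOT EXISTS behave differently when cust_details.cust_name can be NULL):
    select ci.cust_name
    from cust_info ci
    where not exists (select null
                      from cust_details cd
                      where cd.cust_name = ci.cust_name
                      and   cd.status    = 'CURRENT');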

  • FBI and Histograms

    Hi,
    Database configuration:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    I have bad performance in a production database with the following query:
    SELECT X.NUBIX,X.NUFDP,X.COMAR,X.COINT,X.NUCPT,X.CDCOF,NVL(S.RGCOD,R.RGCOD),NVL(S.RGCID,R.RGCID),TO_DATE('16112009','DDMMYYYY'),
    X.COINI,X.CTFLU,X.NUPRO,X.NCCOF,X.CTCOF,X.MHCOF,X.MGCOF,X.MHPND,X.MGPND,X.TXCLO
    FROM VCOAT3 X,TYPSCR S,RGCCAL R
    WHERE X.COMAR IN ('ICE') AND S.CTCOF=X.CTCOF AND S.COLAN='A' AND R.COMAR(+)=X.COMAR AND R.COINT(+)=X.COINT AND NVL(R.NUCPT(+),'#*#')=NVL(X.NUCPT,'#*#')
    VCOAT3 is a view defined by the script below:
    SELECT /*+ OPT_PARAM('optimizer_index_cost_adj' 100) OPT_PARAM('optimizer_index_caching' 0) */
    L.NUBIX,L.NUFDP,
    L.COMAR,
    DECODE(X.NCCOF,5,L.COINT,6,L.COINT,7,L.COINT,8,L.COINI,9,L.COINI,10,L.COINI,11,NULL   ,12,D.COINA,13,D.COINV,14,L.COINT),
    DECODE(X.NCCOF,5,L.NUCPT,6,L.NUCPT,7,L.NUCPT,8,L.NUCPI,9,L.NUCPI,10,L.NUCPI,11,NULL   ,12,NULL   ,13,NULL   ,14,L.NUCPT),
    DECODE(X.NCCOF,5,L.COINI,6,L.COINI,7,L.COINI,8,NULL   ,9,NULL   ,10,NULL   ,11,NULL   ,12,NULL   ,13,NULL   ,14,L.COINI),
    DECODE(X.NCCOF,5 ,'D',
                   6 ,'D',
                   7 ,'D',
                   8 ,DECODE(I.CTINT,'I','S','T'),
                   9 ,DECODE(I.CTINT,'I','S','T'),
                   10,DECODE(I.CTINT,'I','S','T'),
                   11,'N',
                   12,'A',
                   13,'O',
                   14,'L'),
    L.DANEG,L.CSENS,L.QTCCP,L.MTULP,L.CSOPT,L.CNACT,L.CMECH,L.CAECH,L.MTSNA,L.NUCON,
    X.NUPRO,X.NCCOF,X.CTCOF,X.CDCOF,
    X.MHCOF,
    X.MGCOF,
    X.MHPND,
    X.MHPND*(1+X.TTCOF/100),
    X.TXCLO
    FROM
    LIGPOR L,
    HISCRD X,
    IHSDEP D,
    INTERV I
    WHERE
    X.NUBIX=L.NUBIX    AND
    X.NUFDP=L.NUFDP    AND
    SIGN(X.MHPND) IN (-1,1) AND
    D.NUBIX=L.NUBIX    AND
    D.NUFDP=L.NUFDP    AND
    I.COINT=L.COINT;
    Here is the execution plan:
    | Id  | Operation                    | Name   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT             |        |    49M|  9979M|       |   418K  (6)| 00:26:15 |       |       |
    |*  1 |  HASH JOIN                   |        |    49M|  9979M|       |   418K  (6)| 00:26:15 |       |       |
    |*  2 |   TABLE ACCESS FULL          | TYPSCR |    31 |   682 |       |     3   (0)| 00:00:01 |       |       |
    |*  3 |   HASH JOIN RIGHT OUTER      |        |    34M|  6350M|       |   417K  (6)| 00:26:11 |       |       |
    |   4 |    TABLE ACCESS FULL         | RGCCAL |  1826 | 60258 |       |     5   (0)| 00:00:01 |       |       |
    |   5 |    VIEW                      | VCOAT3 |    34M|  5253M|       |   416K  (6)| 00:26:09 |       |       |
    |*  6 |     HASH JOIN                |        |    34M|  3890M|    56M|   416K  (6)| 00:26:09 |       |       |
    |*  7 |      HASH JOIN               |        |   678K|    49M|       |   153K  (4)| 00:09:37 |       |       |
    |   8 |       TABLE ACCESS FULL      | INTERV |  1254 | 12540 |       |    13   (0)| 00:00:01 |       |       |
    |*  9 |       HASH JOIN              |        |   678K|    42M|    36M|   153K  (4)| 00:09:37 |       |       |
    |  10 |        PARTITION RANGE SINGLE|        |   689K|    28M|       | 10554   (3)| 00:00:40 |    16 |    16 |
    |* 11 |         TABLE ACCESS FULL    | LIGPOR |   689K|    28M|       | 10554   (3)| 00:00:40 |    16 |    16 |
    |  12 |        TABLE ACCESS FULL     | IHSDEP |    15M|   325M|       |   125K  (4)| 00:07:53 |       |       |
    |* 13 |      TABLE ACCESS FULL       | HISCRD |    54M|  2150M|       |   174K  (8)| 00:10:58 |       |       |
    Predicate Information (identified by operation id):
       1 - access("S"."CTCOF"="X"."CTCOF")
       2 - filter("S"."COLAN"='A')
       3 - access("R"."COMAR"(+)="X"."COMAR" AND "R"."COINT"(+)="X"."COINT" AND
                  NVL("R"."NUCPT"(+),'#*#')=NVL("X"."NUCPT",'#*#'))
       6 - access("X"."NUBIX"="L"."NUBIX" AND "X"."NUFDP"="L"."NUFDP")
       7 - access("I"."COINT"="L"."COINT")
       9 - access("D"."NUBIX"="L"."NUBIX" AND "D"."NUFDP"="L"."NUFDP")
      11 - filter("L"."COMAR"='ICE')
      13 - filter(SIGN("X"."MHPND")=(-1) OR SIGN("X"."MHPND")=1)
    Note
       - 'PLAN_TABLE' is old version
       - dynamic sampling used for this statement
    The problem is the FTS performed on the HISCRD table.
    The HISCRD table contains 100M rows, and 99.88% of the table has MHPND=0.
    That's why I have used the following filter predicate:
    AND SIGN(X.MHPND) IN (-1,1)
    I have created an FBI for this column:
    CREATE INDEX HISCRD2 ON HISCRD (SIGN("MHPND"));
    Statistics exist for this column and a histogram has been calculated:
    SQL> select INDEX_NAME,INDEX_TYPE,BLEVEL,CLUSTERING_FACTOR,STATUS,NUM_ROWS,LAST_ANALYZED
      2  from user_indexes
      3  where TABLE_NAME='HISCRD';
    INDEX_NAME  INDEX_TYPE            BLEVEL CLUSTERING_FACTOR STATUS    NUM_ROWS LAST_ANALYZED
    HISCRD2     FUNCTION-BASED NORMAL      2           1032922 VALID    115850590 15/11/2009 00:30:12
    HISCRD1     NORMAL                     3          43339640 VALID    108514554 15/11/2009 00:30:14
    SQL> select density,NUM_DISTINCT,num_nulls, num_buckets, histogram
      2  from user_tab_columns
      3  where table_name = 'HISCRD' and column_name in ('MHPND');
       DENSITY NUM_DISTINCT  NUM_NULLS NUM_BUCKETS HISTOGRAM
    4.5427E-09            5          0           5 FREQUENCY
    SQL> select num_rows, sample_size, blocks from user_tables where table_name = 'HISCRD';
      NUM_ROWS SAMPLE_SIZE     BLOCKS
    109981839     5987368     732097
    How does the CBO estimate that there are going to be 54M rows returned from the HISCRD table after it has applied the filter predicate SIGN(X.MHPND) IN (-1,1)?
    SQL> select count(1) from hiscrd where SIGN(MHPND) IN (1);
      COUNT(1)
        127451
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  80whqnuycnc5x, child number 0
    select count(:"SYS_B_0") from hiscrd where SIGN(MHPND) IN (:"SYS_B_1")
    Plan hash value: 3957584767
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         |       |       | 10743 (100)|          |
    |   1 |  SORT AGGREGATE   |         |     1 |     3 |            |          |
    |*  2 |   INDEX RANGE SCAN| HISCRD2 |    54M|   157M| 10743   (3)| 00:00:41 |
    Predicate Information (identified by operation id):
       2 - access("HISCRD"."SYS_NC00017$"=:SYS_B_1)
    19 rows selected.
    SQL> select count(1) from hiscrd where SIGN(MHPND) IN (0);
      COUNT(1)
    110380609
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  80whqnuycnc5x, child number 0
    select count(:"SYS_B_0") from hiscrd where SIGN(MHPND) IN (:"SYS_B_1")
    Plan hash value: 3957584767
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         |       |       | 10743 (100)|          |
    |   1 |  SORT AGGREGATE   |         |     1 |     3 |            |          |
    |*  2 |   INDEX RANGE SCAN| HISCRD2 |    54M|   157M| 10743   (3)| 00:00:41 |
    Predicate Information (identified by operation id):
       2 - access("HISCRD"."SYS_NC00017$"=:SYS_B_1)
    Why doesn't the CBO consider the histogram on the MHPND column?
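    (A hedged guess from the plan above: the filter is applied to the hidden virtual column SYS_NC00017$ that the function-based index created, and that column has no histogram of its own - the FREQUENCY histogram shown is on MHPND itself. Gathering statistics on the hidden columns, e.g. as below, would be the thing to test.)
    EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'HISCRD', method_opt => 'FOR ALL HIDDEN COLUMNS SIZE 254');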

    Timur Akhmadeev wrote:
    > Another oddity is why the CBO decided not to merge the view's subquery - even though it is a costed transformation in 10g, usually the CBO prefers to merge subqueries.
    The join is an outer join, and the view being joined is a join view. This combination is one of the restrictions listed as blocking view merging. (It is a case where a push_pred() hint could be used to force the optimizer into using "join predicate pushdown".)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • Trying to do the right queries..

    I have a blog source I'm working on that uses the following tables:
    TABLE posts:
    +---------+------------------+------+-----+---------+----------------+
    | Field   | Type             | Null | Key | Default | Extra          |
    +---------+------------------+------+-----+---------+----------------+
    | id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
    | date    | int(11)          | NO   |     | NULL    |                |
    | title   | varchar(80)      | NO   |     | NULL    |                |
    | author  | int(11)          | NO   |     | NULL    |                |
    | content | longtext         | NO   |     | NULL    |                |
    +---------+------------------+------+-----+---------+----------------+
    5 rows in set (0.00 sec)
    TABLE tags:
    +-------+------------------+------+-----+---------+----------------+
    | Field | Type             | Null | Key | Default | Extra          |
    +-------+------------------+------+-----+---------+----------------+
    | id    | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
    | name  | varchar(20)      | NO   |     | NULL    |                |
    | nav   | tinyint(1)       | NO   |     | NULL    |                |
    | icon  | varchar(40)      | YES  |     | NULL    |                |
    +-------+------------------+------+-----+---------+----------------+
    4 rows in set (0.01 sec)
    TABLE tagpost:
    +--------+------------------+------+-----+---------+-------+
    | Field  | Type             | Null | Key | Default | Extra |
    +--------+------------------+------+-----+---------+-------+
    | tagid  | int(10) unsigned | NO   |     | NULL    |       |
    | postid | int(10) unsigned | NO   |     | NULL    |       |
    +--------+------------------+------+-----+---------+-------+
    tags and posts are pretty obvious; tagpost will be filled with multiple duplicates for normalization, basically. The issue I'm having is figuring out which sort of query I need to display posts properly. I'll need to fetch information from posts, tags, and users (for usernames).
    I'm not sure where I need to start with this. I think maybe subqueries or joining is what I need, but I can't figure it out for myself. Any help?
    I can post some of the code I'm using, but it's sort of hackish PHP.
    Edit: Here's a graphical representation of my problem: http://bb.xieke.com/files/mysql-tables.png
    Last edited by xelados (2008-10-15 19:19:44)

    e_tank wrote:
    select posts.id, posts.title, posts.author, \
    users.name as username, \
    tagpost.tagid, \
    tags.name \
    from posts inner join users on posts.author = users.id \
    inner join tagpost on posts.id = tagpost.postid \
    inner join tags on tagpost.tagid = tags.id;
    Unless you require that all posts have at least one tag, you should replace the last two INNER JOINs (or at least the second-to-last one -- I'm not certain what would happen in that case) with LEFT OUTER JOINs; as it stands, posts with no tags would be excluded from the results. (I assume an author is required, in which case the first INNER JOIN shouldn't cause a problem.)
    Edit: I just verified in MySQL that -- indeed -- both of the last two joins should be LEFT OUTER JOINs (or, equivalently, LEFT JOINs).
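    That is, the posted query re-written with the outer joins swapped in (untested):
    select posts.id, posts.title, posts.author,
           users.name as username,
           tagpost.tagid,
           tags.name
    from posts
    inner join users on posts.author = users.id
    left join tagpost on posts.id = tagpost.postid
    left join tags on tagpost.tagid = tags.id;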
    Last edited by ssjlegendx (2008-10-16 01:04:32)

  • SQLX performance

    Hi all.
    I'm trying to produce an XML extract from the database - 9i r2 - using SQLX functions.
    There are several nested queries involved and it doesn't seem to scale properly.
    Running for 10,000 records on DEV takes about 20 minutes and 20,000 takes 40 minutes, but running for 200,000 records won't complete in 12 hours - and for the live extract we need to do 900,000+.
    The extract is in the form
    select '<?xml version="1.0" encoding="iso-8859-1" ?>'
    ,xmlelement("tns:Customers" ,
    xmlattributes ( 'http://www.aaa.bbb.gov.uk/xxxxxxxx' as "xmlns:tns"
    ,'http://www.w3.org/2001/XMLSchema-instance' as "xmlns:xsi"
    ,'http://www.aaa.gov.uk/xxxxxxxxx D:\yyyyyy.xsd' as "xsi:schemaLocation")
    ,xmlagg(
    xmlelement("Customer" ,
    xmlelement("PersonDetails" ,
    xmlelement("PersonType" ,
    xmlforest(forename "Forename"
    ,surname "Surname"
    ,dob "DateOfBirth"
    ,xmlelement("UniqueId" , addressee_id)
    ,xmlelement("RecordTypeIndicator" , main.recordtype)
    ,(select xmlagg(
    xmlelement("Address" ,
    xmlelement("AddressLine1", adrhv.Address1)
    ,xmlelement("AddressLine2", adrhv.Address2)
    ,xmlelement("PostCode" , adrhv.Postcode)
    ) as "X"
    from adrh_vw adrhv
    where adrhv.customer_no = main.customer_no ) as "AddressData"
    ,xmlelement("Entitlements" ,
    xmlelement("CategoryData" ,
    (select xmlagg(
    xmlelement("CategoryDetail" ,
    xmlelement("CategoryCD" , de.discount_category )
    ,xmlelement("ValidFrom" , to_char(de.start_date,'YYYY-MM-DD')
    ,xmlelement("ValidTo" , to_char(de.end_date,'YYYY-MM-DD'))
    ,xmlelement("Status" , decode(de.entitlement,'1','PENDING','APPROVED'))
    ) as "X"
    from discounts di
    ,discount_entitlement de
    where de.discount_id = di.discount_id
    and di.customer_no = main.customer_no
    ))).getclobval()
    from cpc_details_vw main
    where recordtype != 'X';
    (with a couple of other sub-queries as well in a similar form)
    Everything appears to be using the indexes I would expect, and it's driven by a full table scan of the "candidate" records put into the underlying table for cpc_details_vw.
    The explain plan (in SQL*Developer) shows a number of nodes at the same level starting with "SORT" - I hope it's not attempting to select all the data and then merge it. Obviously, what I want is for the execution path to read each customer from the view and then use the indexed selects on the other tables.
    Has anyone any suggestions as to how this can be improved?
    Thanks
    Malcolm
    Edited by: user3483842 on Sep 1, 2008 8:18 AM

    Malcolm,
    looking closer at the execution plan posted I see at least three potentially time-consuming issues:
    1. You're using scalar subqueries to obtain the XML expressions in the SELECT list, which means that potentially each of these queries gets executed for each row produced by the main query driven by CPC_DETAILS_VW.
    Oracle has some powerful built-in run-time optimizations for subqueries like that, but their efficiency depends on the data pattern and on whether they actually kick in or not.
    In case they're used, in summary what Oracle does is the following: keep an in-memory hash table of the input values used to execute the subquery, and if the same value is re-used, don't run the subquery but immediately use the corresponding output value stored along with the input value. Because the size of the in-memory table is quite small (in 9i I think 256 entries) and input values that generate a hash collision are discarded, the effectiveness of this optimization depends on the number and the ordering of the input values. If there are many input values and they are processed in a largely random fashion then the optimization won't help much, but if there are not too many and/or they are processed in sorted fashion then this can lead to significant time savings.
    By the way, that's the reason why there are actually three parts at the same level at the beginning of your execution plan: the upper two represent the two subqueries potentially executed for each row of the bottom main query.
    I assume that in your case (due to the object type returned by the XMLAGG() function) this optimization might not be used at all.
    So from a performance perspective it might be beneficial to unnest the scalar subqueries by joining "adrh_vw", "discounts" and "discount_entitlement" to "cpc_details_vw" in the main query, but I guess content-wise it won't be possible to get the same XML output then.
    2. It looks like the views used contain a lot of calls to PL/SQL package functions. I could imagine that it's not only the execution of the subqueries that consumes a lot of time; in addition, calling the functions could account for a significant amount of time. Note that for the subqueries mentioned this could mean that those functions are going to be executed for each row of the main query, which already executes some functions as part of the view definition for each row.
    My suggestion here would be to write a query that attempts to select the same data but without using the XML functions, in order to find out how much time is spent in the function calls. You could use something similar to this:
    (Note: this is untested code)
    select forename "Forename"
          ,surname "Surname"
          ,dob "DateOfBirth"
          ,addressee_id
          ,main.recordtype
          ,(select max(adrhv.Postcode)
            from adrh_vw adrhv
            where adrhv.customer_no = main.customer_no) as "AddressData"
          -- concatenated into a single value, since a scalar subquery
          -- can return only one column
          ,(select max(de.discount_category)
                   || ' ' || max(to_char(de.start_date,'YYYY-MM-DD'))
                   || ' ' || max(to_char(de.end_date,'YYYY-MM-DD'))
                   || ' ' || max(decode(de.entitlement,'1','PENDING','APPROVED'))
            from discounts di
                ,discount_entitlement de
            where de.discount_id = di.discount_id
            and di.customer_no = main.customer_no) as "DiscountData"
    from cpc_details_vw main
    where recordtype != 'X';
    If this query takes a similar amount of time, then you first need to look at what these PL/SQL functions actually do and whether there's a way to do it faster (optimally without using the PL/SQL functions, because that will be the fastest way). Once this is sorted out, you could again try to run the XML version to find out if the XML stuff adds another overhead that needs to be looked at.
    3. Since you're using rule-based optimization, the index ADR_PK on ADDRESSEE is used in the main query. A hash join or sort-merge join could be more efficient in this case, but that could be determined by the cost-based optimizer if you had reasonable statistics gathered.
    Finally, you might just be unlucky in that generating such a single, very large XMLTYPE or CLOB becomes more and more inefficient the larger your input set gets.
    If this is the case you should think about alternative ways to generate that large XML, e.g. generate a CSV export from the database and build the XML in a third-party tool, or generate the XML step-wise and merge it afterwards in some other tool.
    It might be that the XML functions are more efficient in 10g, but that probably won't help since you're still on 9i.
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Using full outer join of subqueries named using with clause

    Hi,
    I am trying to create a view which has two subqueries, vol1 and vol2, defined with the WITH clause. I join those two subqueries in the main query with a FULL OUTER JOIN.
    When I compile the view in a tool like PL/SQL Developer, it compiles successfully.
    But when I run the view creation script from the SQL command prompt, it throws this error:
    from vol1 FULL JOIN vol2 o ON (vol1.ct_reference = vol2.ct_reference and vol1.table_name = vol2.table_name
    ERROR at line 29:
    ORA-00942: table or view does not exist
    Kindly advise what's going wrong.

    that's line 29. Maybe you'd get a better idea if you stripped your operation of all the unnecessary elements until it works.
    There are some known bugs with subquery factoring (aka the WITH clause) and also with ANSI join syntax, but it is hard to tell what happens here based on your description. One thing is strange, though - if it is not a result of formatting: I would expect the asterisk beneath the unknown table and not beneath the keyword FULL.
    P.S.: my editor makes me think it's rather a proportional font thing. Have I already said that I don't like proportional font for SQL code examples?
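    A stripped-down test case along those lines (a sketch; column names taken from the error text) can show whether WITH plus an ANSI full join alone reproduces the ORA-00942 on your version:
    with vol1 as (select 'R1' as ct_reference, 'T1' as table_name from dual),
         vol2 as (select 'R1' as ct_reference, 'T2' as table_name from dual)
    select vol1.ct_reference, vol2.ct_reference
    from vol1 full join vol2
         on (vol1.ct_reference = vol2.ct_reference
         and vol1.table_name   = vol2.table_name);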

  • Correlated Subqueries, NOT EXISTS & Anti Joins - Clarification

    I am a bit confused now regarding correlated subqueries and the NOT EXISTS operator (I had originally thought I understood, but am all mixed up now!)
    I was reading around and have gathered that if one were to use EXISTS, this isn't the preferred method, and that the query should actually be re-written as a join (I'm guessing INNER JOIN) to improve performance.
    NOT EXISTS is also not a recommended method, from what I read.
    Correlated subqueries in general are not recommended for performance reasons, since the subquery needs to be executed once for every row returned by the outer query.
    I was reading up on anti-joins, and found that a lot of people referred to anti-joins with the use of NOT EXISTS and a correlated subquery, which, if my understanding above is correct, is super bad for performance, as it combines two things that people don't recommend.
    I was wondering, for anti-joins, is there any other way to write them besides a NOT EXISTS with a correlated subquery?
    Essentially, what would be the most efficient way of writing an anti-join? Or, when I'm trying to find all the rows that are NOT a common match between two tables.

    Hi,
    chillychin wrote:
    > I am a bit confused now regarding correlated subqueries and the NOT EXISTS operator (I had originally thought I understood but am all mixed up now!)
    That's easy to understand! This is a topic that does not lend itself to easy, general solutions. So much depends on the circumstances of a particular query that you can never honestly say anything like "EXISTS is bad".
    > I was reading around and have gathered that if one were to use EXISTS that this isn't the preferred method, and that the query should actually be re-written as a join (I'm guessing INNER JOIN) to improve performance.
    It depends. EXISTS and joins do different things. For example, when you have a one-to-many relationship, joining can increase the number of rows. Even if the join is faster than EXISTS, you may have the additional cost of doing a GROUP BY or SELECT DISTINCT to get just one copy of each row.
    > NOT EXISTS is also not a recommended method from what I read as well.
    > Correlated subqueries in general are not recommended for performance issues, since the subquery needs to be executed once for every row returned by the outer query.
    There's a lot of truth in that. However, results of correlated queries can be cached. That is, the system may remember the value of a correlation variable and the value it returned, so the next time it needs to run the sub-query for the same value, it will just return the cached result, and not actually run the query again.
    Remember that performance is only one consideration. Sometimes performance is extremely important, but sometimes it is not important at all. Whether a query is easy to understand and maintain is another consideration that is sometimes more important than performance.
    The optimizer may re-write your code in any case. When performance really is an issue, there's no substitute for doing an EXPLAIN PLAN, finding out what's making the query slow, and addressing those issues.
    > I was reading up on anti joins, and found that a lot of people referred to anti joins with the use of NOT EXISTS and a correlated subquery, which if my above understanding is correct is super bad in performance as it combines two things that people don't recommend.
    It's actually only one thing that the people who don't recommend it don't recommend. EXISTS sub-queries are always correlated. (Well, almost always. In over 20 years of writing Oracle queries, I only remember seeing one uncorrelated EXISTS sub-query that wasn't a mistake, and that was in a forum question that might have been hypothetical.) Nobody worth listening to objects to EXISTS because it is EXISTS, or to a correlated sub-query because it is correlated. They object to things because they are slow (or confusing, or fragile, but you seem to be concerned with performance, so let's just say slow for now). If someone tries to avoid an EXISTS sub-query, it is precisely because the sub-query is correlated, and that is only an objection because they suspect the correlation will make it slow.
    > I was wondering for anti joins, is there any other way to write them besides a NOT EXISTS with a correlated subquery?
    As the name implies, you can use a join.
    SELECT d.*
    FROM scott.dept d
    LEFT OUTER JOIN scott.emp e ON d.deptno = e.deptno
    WHERE e.deptno IS NULL;
    is an anti-join, showing details about the departments that have no employees.
    > Essentially what would be the most efficient way of writing an anti join? Or when I'm trying to find all the rows that are NOT a common match between two tables.
    Anytime you can use EXISTS, you can also use IN, or a join. Personally, I tend to use them in the reverse of that order: I'll generally try it with a join first. If that looks like a problem, then I'll try IN. The query above can also be done like this:
    SELECT *
    FROM scott.dept
    WHERE deptno NOT IN (
                          SELECT deptno
                          FROM   scott.emp
                          WHERE  deptno IS NOT NULL   -- If needed
                        );
    Failing that, I'll try EXISTS.
    Sometimes other methods are useful, too. For example, if we only need to know the deptnos that exist in scott.dept but not scott.emp, and no other information about those departments, then we can say:
    SELECT  deptno
    FROM    scott.dept
        MINUS
    SELECT  deptno
    FROM    scott.emp
    ;

  • Joins, subqueries, set operators

    Hi All,
    I am new to Oracle SQL. Could anybody tell me the differences between joins, subqueries, and set operators? What are the benefits or advantages and disadvantages comparing the above three?
    Thanks in advance
    Mahesh Ragineni

    Not using that syntax you won't - in PL/SQL it's just wrong.
    As you're querying multiple rows, you'd have to either loop or collect it into a collection, e.g.
    SQL> declare
      2    type aNums is table of number;
      3    vNums aNums;
      4  Begin
      5    select n
      6      bulk collect into vNums
      7    from
      8      (
      9      select 1 as n from dual
    10      union
    11      select 2 from dual
    12      );
    13  end;
    14  /
    PL/SQL procedure successfully completed.
    But yes, you can use any SQL statement in PL/SQL that you can use in SQL.
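    Back to the original question: the three constructs overlap a good deal. For example, each of these lists the departments that have at least one employee (scott schema; a sketch):
    -- join: can also return emp columns, but may repeat dept rows
    select distinct d.deptno from dept d, emp e where e.deptno = d.deptno;
    -- subquery: filters dept by emp; only dept columns are available
    select deptno from dept where deptno in (select deptno from emp);
    -- set operator: both operands must select compatible column lists
    select deptno from dept intersect select deptno from emp;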

  • Query to count no.of orders using joins and subqueries

    I have 3 tables. I want to show the number of orders where the order-placed date is equal to the create date (user registered date).
    The order table contains the following columns:
    [OrderId]
          ,[UserId]
          ,[OrderPlaced]
          ,[Paid]
          ,[DatePaid]
          ,[PaymentMethod]
          ,[PaymentRef]
          ,[BillTo]
          ,[AddressLine1]
          ,[AddressLine2]
          ,[City]
          ,[County]
          ,[CountryId]
          ,[PostCode]
          ,[Shipped]
          ,[DateShipped]
          ,[Packing]
          ,[ShipTo]
          ,[ShippingAddressLine1]
          ,[ShippingAddressLine2]
          ,[ShippingCity]
          ,[ShippingCounty]
          ,[ShippingCountryId]
          ,[ShippingPostCode]
          ,[ShippingCost]
          ,[ShippingOptionId]
          ,[AllocatedPoint]
          ,[PointValue]
          ,[PromoCode]
          ,[DiscountValue]
          ,[InvoiceNumber]
          ,[IPAddress]
          ,[StatusCode]
          ,[IssueCode]
          ,[USStateId]
          ,[ShippingUSStateId]
          ,[CurrencyCode]
          ,[ExchangeRate]
          ,[LastActivityDate]
          ,[AwardedPoint]
          ,[Archived]
          ,[LastAlertDate]
    aspnet_Membership table contains
    [Password]
          ,[PasswordFormat]
          ,[PasswordSalt]
          ,[MobilePIN]
          ,[Email]
          ,[LoweredEmail]
          ,[PasswordQuestion]
          ,[PasswordAnswer]
          ,[IsApproved]
          ,[IsLockedOut]
          ,[CreateDate]
          ,[LastLoginDate]
          ,[LastPasswordChangedDate]
          ,[LastLockoutDate]
          ,[FailedPasswordAttemptCount]
          ,[FailedPasswordAttemptWindowStart]
          ,[FailedPasswordAnswerAttemptCount]
          ,[FailedPasswordAnswerAttemptWindowStart]
          ,[Comment]
    Account table contains 
    [UserId]
          ,[FirstName]
          ,[LastName]
          ,[Email]
          ,[ContactNumber]
          ,[DOB]
          ,[Note]
    my code is
    SELECT a.UserId, m.Email, o.OrderPlaced, o.OrderId, m.CreateDate
    FROM Account a, aspnet_Membership m, Orders o
    where a.UserId=o.UserId and a.Email=m.Email ORDER BY a.UserId 
    I have displayed the required details using joins, but now I want the number of orders where the order-placed date is equal to the create date (user registered date).
    Can anyone help me? I am a fresher.
    Thanks in advance.

    You can't use a subquery in ORDER BY like this.
    You need to do it like this:
    SELECT a.UserId, m.Email, o.OrderPlaced, o.OrderId, m.CreateDate
    FROM Account a
    INNER JOIN aspnet_Membership m
    ON a.Email=m.Email
    INNER JOIN (SELECT *, COUNT(OrderId) OVER (PARTITION BY OrderPlaced) AS Cnt FROM Orders) o
    on a.UserId=o.UserId
    ORDER BY a.UserId,o.Cnt
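    If the goal is specifically the count of orders placed on the user's registration date, a sketch along these lines might be closer (assuming OrderPlaced and CreateDate are datetime columns compared by day):
    SELECT a.UserId, m.Email, COUNT(o.OrderId) AS OrderCount
    FROM Account a
    INNER JOIN aspnet_Membership m ON a.Email = m.Email
    INNER JOIN Orders o ON o.UserId = a.UserId
    WHERE CAST(o.OrderPlaced AS date) = CAST(m.CreateDate AS date)
    GROUP BY a.UserId, m.Email;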
    Please Mark This As Answer if it helps to solve the issue
    Visakh
    http://visakhm.blogspot.com/
    https://www.facebook.com/VmBlogs

  • Understanding the Types of JOINs

    Product: ORACLE SERVER
    Date written: 2002-04-08
    PURPOSE
    To understand the types of joins and how to use them.
    EXPLANATION
    (1) Overview
    Joins fall broadly into:
    - outer joins
    - semi joins
    - anti joins
    and each of these is covered in turn below.
    (2) Anti Join
    - 1. General description
    An anti-join has the opposite logic of a join.
    When executed, it does not return the rows where the left side matches the right side; instead it returns the rows on the left that do not match the right side.
    This execution applies when the right side is a subquery with NOT IN.
    Anti-joins use sort-merge or hash joins to execute the NOT IN.
    For example, suppose the subquery has the following form:
    (colA1, colA2, ... colAn) NOT IN (SELECT colB1, colB2, ..., colBn
    FROM ...).
    For the subquery to be transformed into a hash or sort-merge anti-join, the following conditions must hold:
    1> The column references to table A must all be simple references.
    Likewise, the references to B must be simple or, if the subquery contains a GROUP BY, aggregate functions (MIN, MAX, SUM, COUNT, or AVG). No other expressions are allowed.
    2> All column references must be known to be not null.
    3> The subquery must not have any correlation predicates, that is, predicates referencing anything in surrounding query blocks.
    4> The WHERE clause of the surrounding query must not have ORs at the top-most logical level.
    5> Anti-joins are available only through the cost-based approach.
    - 2. How to use anti-joins
    Oracle transforms NOT IN subqueries into sort-merge or hash anti-joins.
    The transformation takes place if all of the conditions above are met and it is enabled by a hint or an init.ora parameter.
    For a specific query, the MERGE_AJ or HASH_AJ hint is applied to the NOT IN subquery: MERGE_AJ uses a sort-merge anti-join, HASH_AJ a hash anti-join.
    Example:
    SELECT * FROM emp
    WHERE ename LIKE 'J%' AND
    deptno IS NOT NULL AND
    deptno NOT IN (SELECT /*+ HASH_AJ */ deptno FROM dept
    WHERE deptno IS NOT NULL AND
    loc = 'DALLAS');
    To make the anti-join transformation always use a particular join method, set ALWAYS_ANTI_JOIN to MERGE or HASH in init.ora.
    (3) Semi Join
    - A join that returns a result as soon as the first matching value is found, for example when the EXISTS function is used.
    (4) Outer Join
    To understand outer joins, first read another bulletin covering outer-join basics, then come back to this material.
    - Attach (+) to the side that is short of data.
    - In the execution plan, the table on the side without (+) is driven first, even if the (+) side has an index.
    - Attach (+) to every column of the (+) table in the predicates.
    - in, between, like, and or cannot be used on an outer-joined column; violating this raises ORA-01719.
