Join or Subquery (In terms of performance)

Hi All,
Could anyone please help me determine the better-performing SQL of the two shown below?
Both SQL statements have the same elapsed time.
Oracle Version 10.2.0.4
Thanks.
Edited by: Yasu on Nov 26, 2010 9:46 PM
Added elapsed time information.
Edited by: Yasu on Nov 27, 2010 10:20 AM
Removed sensitive data

Yasu wrote:
We are in the process of adding a new module to the existing application, so I think we can go with EXISTS, as we are not rewriting any SQL. Also, does the high estimated row count in the EXISTS case (about 19103 rows) affect anything?
I noticed that cardinality problem prior to posting my earlier response, but failed to mention it. While the cardinality estimate problem did not cause trouble with your relatively simple SQL statement, it could cause changes in the join order if the same estimate problem exists in other queries.
One more doubt:
Why does an index fast full scan occur instead of an index range scan for the LIKE operator?
| Id  | Operation                           | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
|   1 |  NESTED LOOPS SEMI                  |                    |      1 |    684 |     64 |00:00:00.02 |   17148 |
|*  2 |   INDEX FAST FULL SCAN              | PK_ORG_ALIAS_NAMES |      1 |    684 |    180 |00:00:01.29 |   16713 |
|*  3 |   TABLE ACCESS BY GLOBAL INDEX ROWID| ORGANIZATIONS      |    170 |  19103 |     59 |00:00:00.01 |     435 |
|*  4 |    INDEX UNIQUE SCAN                | PK_ORGANIZATIONS   |    170 |      1 |    170 |00:00:00.01 |     346 |
Predicate Information (identified by operation id):
2 - filter((LOWER("ORG_ALIAS_NAME") LIKE 'technology%' AND
"F_GETFIRSTWORD"("ORG_ALIAS_NAME")='TECHNOLOGY'))
3 - filter((COALESCE("M"."ORG_COUNTRY_OF_DOMICILE","M"."ORG_COUNTRY_OF_INCORPORATION")='US' AND
"ORG_DATA_PROVIDER"<100))
4 - access("M"."ORGID"="AN"."ORGID")
DDL for index PK_ORG_ALIAS_NAMES:
CREATE UNIQUE INDEX "PK_ORG_ALIAS_NAMES" ON "OA"."ORG_ALIAS_NAMES" ("ORGID", "ORG_ALIAS_NAME", "ORG_ALIAS_EFF_FROM_DATE")
A range scan is not possible for the index for a couple of reasons:
1. Based on the supplied index definition, the ORG_ALIAS_NAME column is not the leading (first) column in the index definition. So, at best an index skip scan might happen, rather than an index range scan.
2. The LOWER() function is applied to the ORG_ALIAS_NAME indexed column.
To see an index range scan you would likely need a function-based index constructed like this:
CREATE UNIQUE INDEX PK_ORG_ALIAS_NAMES ON OA.ORG_ALIAS_NAMES (LOWER(ORG_ALIAS_NAME), ORGID, ORG_ALIAS_EFF_FROM_DATE)
However, changing the index definition as shown above might have a significant negative performance impact on other queries that are currently using the PK_ORG_ALIAS_NAMES index.
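A quick stand-in demonstration of the idea, in Python with SQLite rather than Oracle (the table contents here are invented): with LOWER() as the leading indexed expression, a case-insensitive prefix search becomes a sargable range predicate that an index range scan can answer.

```python
import sqlite3

# SQLite stand-in for the function-based index discussion above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE org_alias_names (orgid INTEGER, org_alias_name TEXT)")
cur.executemany(
    "INSERT INTO org_alias_names VALUES (?, ?)",
    [(1, "Technology Corp"), (2, "TECHNOLOGY LTD"), (3, "Acme Inc")],
)

# Expression index with LOWER() as the leading key, mirroring the
# suggested (LOWER(ORG_ALIAS_NAME), ...) definition.
cur.execute("CREATE INDEX ix_lower_alias ON org_alias_names (LOWER(org_alias_name))")

# LOWER(col) LIKE 'technology%' is equivalent to this range predicate,
# which the expression index can satisfy with a range scan:
rows = cur.execute(
    """
    SELECT orgid FROM org_alias_names
     WHERE LOWER(org_alias_name) >= 'technology'
       AND LOWER(org_alias_name) <  'technologz'
     ORDER BY orgid
    """
).fetchall()
print(rows)  # only the two 'Technology...' aliases match
```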
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.

Similar Messages

  • Difference between JOIN and Subquery

    HI all,
    What is the difference between JOIN and Subquery?
    Regards,
    - Sri

A JOIN combines two data sources.
A subquery is the result set of one or more data sources, which can be joined to or compared against other data sources in your main query.
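A minimal sketch of the distinction, using Python with SQLite purely for illustration (the dept/emp schema and data are invented):

```python
import sqlite3

# Tiny in-memory schema to contrast a JOIN with a subquery filter.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT);
    CREATE TABLE emp  (empno INTEGER PRIMARY KEY, ename TEXT, deptno INTEGER);
    INSERT INTO dept VALUES (10, 'SALES'), (20, 'HR');
    INSERT INTO emp  VALUES (1, 'ANN', 10), (2, 'BOB', 10), (3, 'CAT', 20);
""")

# JOIN: rows from both sources are combined, so columns of both are selectable.
join_rows = cur.execute("""
    SELECT e.ename, d.dname
      FROM emp e JOIN dept d ON d.deptno = e.deptno
     WHERE d.dname = 'SALES'
     ORDER BY e.ename
""").fetchall()

# Subquery: the inner result set only filters the outer query;
# dept columns are not visible in the outer SELECT list.
sub_rows = cur.execute("""
    SELECT e.ename
      FROM emp e
     WHERE e.deptno IN (SELECT deptno FROM dept WHERE dname = 'SALES')
     ORDER BY e.ename
""").fetchall()
```

Both return the same employees; only the join makes the department columns available alongside them.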

  • Code comparission which is better in terms of performance

    In my pl/sql procedure the following code exist
    SELECT count(*) INTO L_exists
              FROM txt
         WHERE templ_id in (ISecurTempl,ESecurTempl);
    Now i want to replace the query with
SELECT count(*) INTO L_exists
  FROM txt
 WHERE templ_id IN (SELECT INT_SEC_TMP FROM table(cast(parse_table AS sec_template_tbl))
                    UNION
                    SELECT EXT_SEC_TMP FROM table(cast(parse_table AS sec_template_tbl)));
    Can you please tell me which part of code is better in terms of performance
    and why?
    Thanks in advance

I want to know which is better
There's not enough information for anyone here to be able to tell you.
    Obviously the first one "looks" faster, but without knowing the tables, structure, data, indexes, platform etc. etc. etc. we won't have a clue.

  • Creating table using joins in subquery

Can we create a table by using joins in a subquery?
Like this:
    create table emp as select * from employees e,departments d
    where d.department_id=e.department_id
    can we ??

    What happens when you try it?
    It worked for me. See below:
    drop table test_table;
    create table test_table as
    select e.employee_id, e.first_name || ' ' || e.last_name ename, e.email, e.salary, d.department_name
      from employees e
      join departments d
        on (e.department_id = d.department_id)
    where e.salary = 10000;
    select *
      from test_Table;
    EMPLOYEE_ID ENAME                                          EMAIL                       SALARY DEPARTMENT_NAME             
            150 Peter Tucker                                   PTUCKER                      10000 Sales                         
            156 Janette King                                   JKING                        10000 Sales                         
            169 Harrison Bloom                                 HBLOOM                       10000 Sales                         
            204 Hermann Baer                                   HBAER                        10000 Public Relations

  • Inner Join. How to improve the performance of inner join query

    Query is :
    select f1~ablbelnr
             f1~gernr
             f1~equnr
             f1~zwnummer
             f1~adat
             f1~atim
             f1~v_zwstand
             f1~n_zwstand
             f1~aktiv
             f1~adatsoll
             f1~pruefzahl
             f1~ablstat
             f1~pruefpkt
             f1~popcode
             f1~erdat
             f1~istablart
             f2~anlage
             f2~ablesgr
             f2~abrdats
             f2~ableinh
                from eabl as f1
                inner join eablg as f2
                on f1~ablbelnr = f2~ablbelnr
                into corresponding fields of table it_list
                where f1~ablstat in s_mrstat
                %_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
    I want to modify the query, since it is taking a lot of time to load the data.
    Please suggest.
    Please treat this as very urgent.

    Hi Shyamal,
    In your program , you are using "into corresponding fields of ".
    Try not to use this addition in your select query.
    Instead, just use "into table it_list".
    As an example,
    Write a normal query using "into corresponding fields of" in a program, then go to SE30 (Runtime Analysis), enter the program name, and execute it.
    Now if you click the Analyze button, you can see the analysis for the query. A line shown in red tells you that you need to look for alternative methods.
    On the other hand, if you use "into table itab", it will give you an entirely different analysis.
    So try not to use "into corresponding fields of" in your query.
    Regards,
    SP.

  • INNER JOIN with FOR ALL ENTRIES IN Performance ?

    I am using the following SELECT with an inner join and FOR ALL ENTRIES IN.
          SELECT k~ebeln k~ebelp k~vbeln k~vbelp
            FROM ekkn AS k INNER JOIN ekbe AS b ON k~ebeln = b~ebeln
                                               AND k~ebelp = b~ebelp
            INTO TABLE gi_purchase
             FOR ALL ENTRIES
             IN gi_sales
          WHERE k~mandt EQ sy-mandt
            AND k~vbeln EQ gi_sales-vbeln
            AND k~vbelp EQ gi_sales-posnr
            AND b~budat EQ p_date.
    If I do not use the inner join, I will have to do 2 SELECTs with FOR ALL ENTRIES IN on the ekkn and ekbe tables and then compare them.
    I want to know which one has better performance:
    an inner join with FOR ALL ENTRIES IN, or
    2 SELECTs with FOR ALL ENTRIES IN.

    The join is almost always faster:
    <a href="/people/rob.burbank/blog/2007/03/19/joins-vs-for-all-entries--which-performs-better">JOINS vs. FOR ALL ENTRIES - Which Performs Better?</a>
    <a href="http://blogs.ittoolbox.com/sap/db2/archives/for-all-entries-vs-db2-join-8912">FOR ALL ENTRIES vs DB2 JOIN</a>
    Rob

  • Diff between paper layout and weblayout in terms of performance

    Hi All,
    1. Can anyone tell me the difference between paper layout and web layout in terms of performance?
    2. Can I save my RDF as a JSP? Will there be any performance difference at run time?
    Regards
    Srinivas

    Hi Rainer,
    Thanks for your reply.
    1. You said paper layout is not a good choice for HTML output. Can you give some more information on this? If you have any documents or links supporting this, please share them with me.
    2. I designed my reports using the web layout. My requirement now is to print a page header on every page of the report. I am unable to do this, as I didn't find any HTML tag for it. Kindly suggest a solution.
    Regards
    Srinivas

  • Am I using my drives in the best way in terms of performance?

    I want to make sure I'm using my iMac and drives in the way that will give me the best FCPX performance.
    I have a mid-2010 iMac 2.93 GHz i7 core with 32 GB of RAM running OS 10.7.5.  It has an ATI Radeon HD 5750 1024 MB graphics card.  My iMac has two internal drives:  a 256 GB flash drive (my "Macintosh HD"), and a 2 TB standard drive (my "Macintosh HD2").  I also have an external 2 TB drive from OWC/Macsales connected with Firewire 800.  I can't find the specifics on this OWC external drive, but I know I bought it as a video editing drive (it is not just for backing up).  Though Thunderbolt was introduced soon after my iMac, mine does not have a Thunderbolt port.
    As things stand now, my OS and FCPX are on the main internal hard drive (it's my understanding that that is where they are automatically), which is the 256 GB flash drive labeled "Macintosh HD."  And I have my FCP event libraries and my projects on the second internal drive, which is the 2 TB internal drive labeled "Macintosh HD2."  Right now I'm only using the OWC 2 TB external firewire 800 drive for storing the video I upload from my cameras.
    Does this sound like the best way to go in terms of performance?  I mean, is it better to use the internal 2TB drive for my events and projects?  Or would they be better on the OWC 2 TB external firewire 800 drive?  Or should the projects be on one and the events on the other?

    A T'bolt SSD or T'bolt RAID will be faster than your internal spinning disk drive, for sure.
    Larger drives are mechanically faster than smaller drives, but the data throughput is the same. The differences are so small that no one can notice in real-world work, but you get many advantages with larger drives.
    See, a hard drive enclosure has more than one platter in it. A single 4 TB drive can have 4 platters in it, so it accesses data "mechanically" faster than a smaller drive with only 1 or 2 platters. BUT when that data hits the controller card and the interface protocol (FW, USB, SATA, etc.), it all slows down to the same speed. The platters and read/write heads are physically MUCH faster than what the interface and controller card can handle. So all that matters is how the drive gets connected. USB 2 is poor, FW is minimal for video, eSATA and SATA are pretty darned good, and 6 Gb SSD is awesome, as is T'bolt.
    Let me put it this way: spinning disks are slower than SSDs in terms of internal "data access" (read/write) speed. Both have to deal with the same "bandwidth" issues when they hit an interface such as USB, FW, or T'bolt. They are two mechanically different things, but they work together to give you the total sum of a drive's ultimate real-world performance.
    Now, a single internal SATA drive will be very slightly slower than a SATA drive in a T'bolt enclosure, because T'bolt has greater bandwidth than SATA. But the differences are so small that in daily real-world operations no one would notice, and the T'bolt SATA enclosure would just be expensive.
    That is why you see T'bolt enclosures with SATA drives always as RAIDs of 2 or more drives. That's where SATA drives and T'bolt really shine. T'bolt really shines with SSD RAIDs too, but they're super expensive and of very limited capacity.
    But directly to your question: hard drives are the opposite of automobiles; bigger is always faster.

  • HT4972 in terms of performance: do you recommend to update iPhone 3gs to os5 or os6?

    in terms of performance: do you recommend to update iPhone 3gs to os5 or os6?

    If you update your 3GS, iOS 6.0.1 will be installed on it...you don't have a choice, as that is the only iOS currently signed by Apple, that is available for your phone.

  • Performance : join X subquery

    Hi folks,
    I have this view, developed by someone else, that has 8 SELECTs concatenated with UNION ALL, and each of these SELECTs joins 7 tables.
    This view takes 2 hours to run, and the table sizes are:
    TABLE_NAME | SIZE(BYTES) | QTT_ROWS | SIZE/QTT_ROWS
    ------------------------------ | ----------- | ---------- | -------------
    CC_LANCAMENTO | 278672 | 3076347 | .09058536
    BC_PESSOA | 9880 | 31614 | .31251977
    CC_CC | 3520 | 12238 | .287628697
    CC_CONTA_GARANTIDA | 240 | 263 | .912547529
    CC_HISTORICO | 80 | 472 | .169491525
    CC_CONFIGURACAO | 16 | 1 | 16
    CC_TIPO_CC | 16 | 45 | .355555556
    Don't SQL joins make a Cartesian product of the joined tables and only after that extract the rows that are needed?
    Instead of using all these joins, if I changed some of them to subqueries, would I get a performance boost?
    The tables have indexes, and in the plan I could see that in many cases they are being used. None of these tables is partitioned.
    Thank you.

    I'll agree with Scott that I would generally prefer joins, but that's more of a readability issue. Particularly at the introductory level, it's hard enough to learn how to think about sets with straight joins. I've found that adding subqueries tends to muddy the waters further.
    My rule of thumb is to write the query the way it makes sense to you and the way it makes sense to others. 90+% of the time, Oracle is going to be smart enough to do the right thing, merge the subqueries, and execute the statement as if you had written the joins. If a query doesn't perform acceptably, tune it, but starting out trying to write the most efficient statement possible tends to be a productivity killer. I'd much rather have 20 queries that are easy to understand and maintain and know that I'll have to tune 1 query that confuses Oracle than to try to figure out the fastest possible way to write a query and end up with half as much code written for an imperceptible (to the user) performance gain.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • LEFT OUTER JOIN with SubQuery

    I have two tables and I want to return all data from the first table with a piece of information from the second based on an effective date being at least a certain value. The tables I am working with contain billions of rows so I need a solution that will also scale.
    Sample tables:
    create table pp ( x number, y number);
    create table ec ( x number, y number, effdate date);
    insert into pp values (1,2);
    insert into pp values (2,3);
    insert into pp values (1,3);
    insert into ec values (1,2,sysdate);
    insert into ec values (2,3,sysdate);
    insert into ec values (1,3,sysdate+365);
    commit;
    select * from pp, ec where pp.x = ec.x(+) and pp.y=ec.y(+)
    and (effdate = ( select max(effdate) from ec ecc where ecc.y=ec.y and ecc.x = ec.x and effdate < sysdate) or effdate is null);
    The above query (and the one below) returns two rows. It does not return rows where the date does not meet the criteria.
    select * from pp LEFT OUTER JOIN ec ON pp.x = ec.x and pp.y=ec.y
    WHERE (effdate = ( select max(effdate) from ec ecc where ecc.y=ec.y and ecc.x = ec.x and effdate < sysdate) or effdate is null);
    This returns the three rows BUT IS VERY SLOW when run against the billion+ row table (because we cannot correlate the subquery results.)
    select * from pp LEFT OUTER JOIN (SELECT x, y, effdate
    FROM ec WHERE effDate = (SELECT MAX(EFFdate) from ec ecc where ecc.y=ec.y and ecc.x = ec.x and effdate < sysdate)) c ON c.x = pp.x and c.y = pp.y;

    It would help quite a bit to know
    1) your Oracle version
    2) your indexes and data volumes (do BOTH tables have billions of rows?)
    But here's something that may perform faster.
    WITH Maximizied AS
    (
       SELECT
          MAX(EffDate),
          x,
          y
       FROM ec
       WHERE EffDate < SYSDATE
       GROUP BY x, y
    )
    SELECT *
    FROM pp p, Maximizied m
    WHERE p.x = m.x (+)
    AND   p.y = m.y (+);
                     X                  Y MAX(EFFDAT                  X                  Y
                     2                  3 05-29-2008                  2                  3
                     1                  2 05-29-2008                  1                  2
                     1                  3
    3 rows selected.
    Elapsed: 00:00:00.03
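The rewrite above can be sketched with a small stand-in in Python/SQLite (made-up dates; SQLite's LEFT JOIN plays the role of the (+) outer join): pre-aggregating MAX(effdate) per (x, y) once avoids re-running a correlated subquery for every row of the big table.

```python
import sqlite3

# Stand-in for the pp/ec example; one ec row is dated far in the future
# so it is excluded by the effdate filter, just like in the thread.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE pp (x INTEGER, y INTEGER);
    CREATE TABLE ec (x INTEGER, y INTEGER, effdate TEXT);
    INSERT INTO pp VALUES (1, 2), (2, 3), (1, 3);
    INSERT INTO ec VALUES (1, 2, '2020-01-01'),
                          (2, 3, '2020-06-01'),
                          (1, 3, '2999-01-01');
""")

# Pre-aggregate the max effective date per (x, y), then outer-join once.
rows = cur.execute("""
    WITH maximized AS (
        SELECT x, y, MAX(effdate) AS max_eff
          FROM ec
         WHERE effdate <= DATE('now')
         GROUP BY x, y
    )
    SELECT p.x, p.y, m.max_eff
      FROM pp p LEFT JOIN maximized m ON p.x = m.x AND p.y = m.y
     ORDER BY p.x, p.y
""").fetchall()
print(rows)  # (1, 3) keeps its row with a NULL date, like the (+) version
```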

  • In terms of performance which is better?

    Hi I have a query ...
    I have table t1 which has 1.5 million records .
    now ,
    which will be a better option ?
    Pls note I need to do it at run time from a procedure.
    create procedure p1
    is
    begin
    execute immediate 'create table tidup nologging parallel as select * from t1';
    end;
    Or having the structure and do inserts in append mode ?
    say ,
    create procedure p1
    is
    begin
    insert /*+append*/ into  tidup select * from t1;
    Commit;
    end;
    I know that a DDL is far faster than a DML.
    Kindly let me know if I am correct, or is inserting in append mode faster?
    Regards
    SHUBH

    Hi I have a query ...
    have table t1 which has 1.5 million records .
    now ,
    which will be a better option ?
    The one that runs the fastest, obviously.
    Pls note I need to do it at run time from a
    procedure.
    create procedure p1
    is
    begin
    execute immediate 'create table tidup
    nologging parallel as select * from t1';
    end;
    Creating tables at runtime is a bad idea. If you need temporary tables, use Global Temporary Tables.
    Or having the structure and do inserts in append mode
    say ,
    create procedure p1
    is
    begin
    insert /*+append*/ into tidup select * from t1;
    Commit;
    This would be better than the first option in terms of sensible coding.
    I knew that a DDL is far faster than a DML.
    Oh really? Kindly provide proof of this.
    Kindly let me know if I am correct, or is inserting in append mode faster?
    Logically, think about it.
    1 = Create Table and Insert Records
    2 = Insert Records
    Which do you think may take a smidgen longer to perform?
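The two options can be sketched side by side in Python with SQLite (a stand-in only; the Oracle-specific NOLOGGING and direct-path APPEND behavior has no SQLite equivalent, so this shows just that both approaches end with identical data):

```python
import sqlite3

# Populate a source table with 1000 sample rows.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO t1 VALUES (?, ?)",
                [(i, f"row{i}") for i in range(1000)])

# Option 1: create-table-as-select builds structure and loads data in one step.
cur.execute("CREATE TABLE tidup_ctas AS SELECT * FROM t1")

# Option 2: pre-create an empty clone of the structure, then bulk-insert.
cur.execute("CREATE TABLE tidup_ins AS SELECT * FROM t1 WHERE 0")
cur.execute("INSERT INTO tidup_ins SELECT * FROM t1")

ctas_count = cur.execute("SELECT COUNT(*) FROM tidup_ctas").fetchone()[0]
ins_count = cur.execute("SELECT COUNT(*) FROM tidup_ins").fetchone()[0]
```

Either way the copy holds all 1000 rows; in Oracle, the performance difference comes from logging and direct-path load settings, not from DDL versus DML as such.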

  • Equi JOINS vs SUBQUERY

    May I know the difference in processing time or execution time between using an equi-join and using a subquery in the search criteria?
    Please help: the query batch (137 queries) we have written contains subqueries and, fetching thousands of records, takes 10 minutes of processing time.
    We also rewrote the query batch with equi-joins, and it takes only 5 minutes. So can we say that equi-joins are much faster than subqueries?
    Do Reply.
    Thank in Advance
    Vishal
    (Database Developer)

    In theory, it shouldn't matter. If you are able to figure out how to unnest a subquery, the optimizer's query-transformation engine would likely be able to do that as well.

  • Inner join Vs for all entries for performance

    hi,
        I need to fetch data from 5 tables that share the common key vbeln. Is it advisable to write a single SELECT with an inner join, or to write an inner join for 2 tables with more fields and use FOR ALL ENTRIES for the remaining ones? Please suggest how I can increase the performance. All points will be rewarded.
    Thanks a lot.

    Is this a dialog program or a data extract? Rob is right in that the difference is negligible IF the number of records involved are only a few.
    On the other hand, if you are extracting a large number of records, then the performance depends on a number of things and is generally unpredictable.
    The way I approach it is by first developing the extract program with a join because it is easier to code. If the program run time is within the acceptable range, I would let it be and migrate to production. If the performance is of high priority and if the join appears to take long time, then I will comment out the code and try the FAE approach. If the run time with FAE is not markedly better, then I would go back to join.

  • Is high speed daq done on a compactrio comparable in terms of performance specifications to that done on a pci6071e?

    I would like to perform continuous data aquisition, that I currently use a PCI6071E for, on Compact RIO. What are the similarities and differences in terms of DAQ capabilities and performance (resolution, settling times, ground reference issues, max input voltage range, channel gain, pre-/post gain errors, system noise, input impedance etc)?

    Any comparison will depend on which CompactRIO module you choose. Currently for analog inputs you only have the 9215 module as a choice though.
    The manual for the 6071 and 9215 will give you all the detailed specs and their differences, but here is a rough overview.
    6071 manual
    64SE/32Diff multiplexed input channels running at a sum sampling rate of ~1.25 MHz, 12 bit resolution, with a variety of gain settings to increase resolution in smaller voltage ranges.
    9215 (one module) manual
    4 differential simultaneous input channels, running at a maximum of 100 kHz (each channel), with 16-bit resolution across a +-10 V range, no gains. You can place up to 8 modules in one CompactRIO chassis to increase the number of channels.
    One big difference will be how you process the data. With the 6071, data is transferred to the main processor using DMA. Using CompactRIO you can process the data directly on the FPGA, or you can transfer it back to the RT processor on the CompactRIO controller to process the data in LabVIEW RT. The bandwidth of the data transfer from the FPGA to the RT processor is lower than a DMA transfer from the 6071, so you need to make sure that you can transfer the data fast enough for your application.
    Christian L
    Christian Loew, CLA
    Principal Systems Engineer, National Instruments
    Please tip your answer providers with kudos.
    Any attached Code is provided As Is. It has not been tested or validated as a product, for use in a deployed application or system,
    or for use in hazardous environments. You assume all risks for use of the Code and use of the Code is subject
    to the Sample Code License Terms which can be found at: http://ni.com/samplecodelicense
