Scalar subqueries vs. joins

Hi,
can someone tell me something about the performance impact of scalar subqueries in the select list?
I have found that scalar subqueries are processed faster than joins, but I don't understand why.
E.g. first statement:
select e.ename, e.deptno,
(select dname from dept d where d.deptno=e.deptno) dname from emp e
where e.deptno =10;
Second statement:
select e.ename, d.deptno, d.dname
from emp e, dept d
where e.deptno=d.deptno and d.deptno=10;
The optimizer executes the first statement with a full table scan on emp, and the second with a nested loop join.
The first statement runs faster. I also found that the first statement is optimized for throughput, the second for response time.
I see this behavior not only when there are thousands of rows in emp but also in real-life applications.
Regards Frank

The relative performance of scalar subqueries and joins will largely depend on the relative sizes of the tables, and the indexes available.
Essentially, the scalar subquery works in the same fashion as this PL/SQL construct:
FOR r IN (SELECT ename, deptno FROM emp) LOOP
   SELECT deptno, dname
   INTO v1, v2
   FROM dept
   WHERE deptno = r.deptno;
   DBMS_OUTPUT.Put_Line(r.ename||' '||v1||' '||v2);
END LOOP;
If dept is indexed on deptno, then this query will begin to return rows quickly, since it is only a little more complex than SELECT * FROM emp.
The join on the other hand, has to read all qualifying rows from both tables before it can begin doing the join. That is why it takes longer to return the first rows.
The disadvantage of a scalar subquery is that it will usually require much more I/O than a join. For every row in the outer table, you will require at least one I/O to get the value from the table in the subquery, and in your example, you will require two I/Os (one to read the index to get the rowid, and another to read the row).
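The per-row execution described above can be observed directly. This sketch uses SQLite through Python's sqlite3 (not Oracle, but the evaluation model is the same); probe() is a hypothetical pass-through function registered purely to count how often the scalar subquery expression is evaluated:

```python
# Sketch, not Oracle: count evaluations of a correlated scalar subquery.
# probe() is a hypothetical helper that just counts and passes through.
import sqlite3

conn = sqlite3.connect(":memory:")
calls = {"n": 0}

def probe(x):
    calls["n"] += 1   # incremented once per evaluation of the expression
    return x

conn.create_function("probe", 1, probe)
conn.executescript("""
    CREATE TABLE emp (ename TEXT, deptno INT);
    CREATE TABLE dept (deptno INT, dname TEXT);
    INSERT INTO emp VALUES ('A', 10), ('B', 10), ('C', 20);
    INSERT INTO dept VALUES (10, 'SALES'), (20, 'OPS');
""")

rows = conn.execute("""
    SELECT e.ename,
           probe((SELECT dname FROM dept d WHERE d.deptno = e.deptno))
    FROM emp e
""").fetchall()
print(calls["n"])  # 3 -- the subquery expression is evaluated once per emp row
```

Three outer rows, three evaluations: each one costs its own lookup into dept, which is the extra I/O John describes.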
In addition, the two queries are not equivalent. Consider:
SQL> SELECT * FROM emp;
    EMP_ID ENAME          DEPTNO
         1 JOHN               10
SQL> SELECT * FROM dept;
    DEPTNO DNAME
        20 SIMS
SQL> SELECT e.ename, e.deptno,
            (SELECT dname FROM dept d WHERE d.deptno=e.deptno) dname
     FROM emp e
     WHERE e.deptno =10;
ENAME          DEPTNO DNAME
JOHN               10
SQL> SELECT e.ename, d.deptno, d.dname
     FROM emp e, dept d
     WHERE e.deptno=d.deptno and
           d.deptno=10;
no rows selected
SQL> SELECT e.ename, d.deptno, d.dname
     FROM emp e, dept d
     WHERE e.deptno=d.deptno and
           e.deptno=10;
no rows selected
TTFN
John
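John's non-equivalence demonstration is easy to reproduce outside Oracle as well; a minimal version in SQLite via Python's sqlite3 (the semantics match: the scalar subquery keeps the unmatched emp row with a NULL dname, the join drops it):

```python
# Sketch in SQLite, not Oracle: scalar subquery vs. join when dept has
# no matching row for the emp row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (emp_id INT, ename TEXT, deptno INT);
    CREATE TABLE dept (deptno INT, dname TEXT);
    INSERT INTO emp  VALUES (1, 'JOHN', 10);
    INSERT INTO dept VALUES (20, 'SIMS');
""")

scalar = conn.execute("""
    SELECT e.ename, e.deptno,
           (SELECT dname FROM dept d WHERE d.deptno = e.deptno) AS dname
    FROM emp e
    WHERE e.deptno = 10
""").fetchall()

join = conn.execute("""
    SELECT e.ename, d.deptno, d.dname
    FROM emp e JOIN dept d ON e.deptno = d.deptno
    WHERE e.deptno = 10
""").fetchall()

print(scalar)  # [('JOHN', 10, None)] -- row survives, dname is NULL
print(join)    # []                   -- no rows selected
```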

Similar Messages

  • HTMLDB_ITEM with scalar subqueries

    How does HTMLDB_ITEM work with scalar subqueries?
    Suppose I want to do something like
    select
    c1,c2,c3,
    (select htmldb_item.checkbox(1,c4)
    from sometable where ....)
    from ...
    If the scalar subquery doesn't return a row, I don't get my checkbox.
    How else can I do this? Thanks

    Sort of. Let me try to explain what I am really trying to do.
    create table class (
      class_id int primary key,
      class_name varchar2(25),
      class_start_date date,
      class_end_date date
    );
    create table students (
      student_id int primary key,
      sname varchar2(25),
      address varchar2(25)
    );
    create table attendance (
      student_id int,
      class_id int,
      registered varchar2(1) not null check (registered in ('Y','N')),
      reg_date date
    );
    Given a list of students and a list of classes, I want to put up an updatable form on the attendance table. If a record exists in the attendance table, I should be able to update it. If it doesn't exist, the fields (registered and reg_date) should still be shown, and if I enter a value in them, the row should be created (otherwise don't create the row!)
    How can I do this? Thanks
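One common SQL-side building block for such a form (a sketch only, not the HTML DB wiring) is to outer-join attendance to every student/class combination, so a row appears even when no attendance record exists yet. Shown here in SQLite via Python's sqlite3; table and column names follow the DDL above:

```python
# Sketch: students x classes outer-joined to attendance, so missing
# attendance rows still show up as NULLs the form can fill in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE class    (class_id INT PRIMARY KEY, class_name TEXT);
    CREATE TABLE students (student_id INT PRIMARY KEY, sname TEXT);
    CREATE TABLE attendance (student_id INT, class_id INT,
                             registered TEXT, reg_date TEXT);
    INSERT INTO class    VALUES (1, 'SQL');
    INSERT INTO students VALUES (100, 'ANN'), (101, 'BOB');
    INSERT INTO attendance VALUES (100, 1, 'Y', '2005-01-01');
""")

rows = conn.execute("""
    SELECT s.sname, c.class_name, a.registered, a.reg_date
    FROM students s
    CROSS JOIN class c
    LEFT JOIN attendance a
      ON a.student_id = s.student_id AND a.class_id = c.class_id
    ORDER BY s.sname
""").fetchall()
print(rows)
# ANN has a real attendance row; BOB gets NULLs to be filled in
```

Inserting a row only when the user enters values would then be handled by the form's processing logic.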

  • Subqueries Vs Joins

    Hi Friends,
    Is there any restriction, such as being able to use only a specific set of joins inside a subquery, or vice versa? If there is, please share, or a link to documentation covering this would help.
    Regards,
    Manoj Chakravarthy

    No restrictions that I can think of off the top of my head.
    Ch9 SQL Queries & Subqueries from SQL Reference manual
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/queries.htm#i2068094
    would be where you can find definitive information.
    Scott

  • SQLX performance

    Hi all.
    I'm trying to produce an XML extract from the database - 9i r2 - using SQLX functions.
    There are several nested queries involved and it doesn't seem to scale properly.
    Running for 10,000 records on DEV takes about 20 minutes and 20,000 takes 40 minutes, but running for 200,000 records won't complete in 12 hours - and for the live extract we need to do 900,000+.
    The extract is in the form
    select '<?xml version="1.0" encoding="iso-8859-1" ?>'
    ,xmlelement("tns:Customers" ,
    xmlattributes ( 'http://www.aaa.bbb.gov.uk/xxxxxxxx' as "xmlns:tns"
    ,'http://www.w3.org/2001/XMLSchema-instance' as "xmlns:xsi"
    ,'http://www.aaa.gov.uk/xxxxxxxxx D:\yyyyyy.xsd' as "xsi:schemaLocation")
    ,xmlagg(
    xmlelement("Customer" ,
    xmlelement("PersonDetails" ,
    xmlelement("PersonType" ,
    xmlforest(forename "Forename"
    ,surname "Surname"
    ,dob "DateOfBirth"
    ,xmlelement("UniqueId" , addressee_id)
    ,xmlelement("RecordTypeIndicator" , main.recordtype)
    ,(select xmlagg(
    xmlelement("Address" ,
    xmlelement("AddressLine1", adrhv.Address1)
    ,xmlelement("AddressLine2", adrhv.Address2)
    ,xmlelement("PostCode" , adrhv.Postcode)
    ) as "X"
    from adrh_vw adrhv
    where adrhv.customer_no = main.customer_no ) as "AddressData"
    ,xmlelement("Entitlements" ,
    xmlelement("CategoryData" ,
    (select xmlagg(
    xmlelement("CategoryDetail" ,
    xmlelement("CategoryCD" , de.discount_category )
    ,xmlelement("ValidFrom" , to_char(de.start_date,'YYYY-MM-DD'))
    ,xmlelement("ValidTo" , to_char(de.end_date,'YYYY-MM-DD'))
    ,xmlelement("Status" , decode(de.entitlement,'1','PENDING','APPROVED'))
    ) as "X"
    from discounts di
    ,discount_entitlement de
    where de.discount_id = di.discount_id
    and di.customer_no = main.customer_no
    ))).getclobval()
    from cpc_details_vw main
    where recordtype != 'X';
    (with a couple of other sub-queries as well in a similar form)
    Everything appears to be using the indexes I would expect and it's driven by a full table scan of the "candidate" records put into the underlying table for cpc_details_vw.
    The explain plan (in SQL*Developer) shows a number of nodes at the same level starting with "SORT" - I hope it's not attempting to select all the data and then merge it - obviously, what I want is for the execution path to read each customer from the view and then use the indexed selects on the other tables.
    Has anyone any suggestions as to how this can be improved?
    Thanks
    Malcolm
    Edited by: user3483842 on Sep 1, 2008 8:18 AM

    Malcolm,
    looking closer at the execution plan posted I see at least three potentially time consuming issues:
    1. You're using scalar subqueries to obtain the XML expressions in the SELECT list, which means that potentially
    each of these queries gets executed for each row produced by the main query driven by CPC_DETAILS_VW.
    Oracle has some powerful built-in run-time optimizations for subqueries like that, but their efficiency depends
    on the data pattern and on whether they actually kick in.
    When they do kick in, what Oracle does, in summary, is the following: it keeps an in-memory hash table of the
    input values used to execute the subquery, and if the same value is re-used, it doesn't run the subquery but
    immediately returns the corresponding output value stored along with the input value. Because the in-memory
    table is quite small (in 9i I think 256 entries) and input values that generate a hash collision are discarded,
    the effectiveness of this optimization depends on the number and the ordering of the input values. If there
    are many input values and they are processed in a largely random order, then the optimization won't help much;
    but if there are not too many and/or they are processed in sorted order, then this can lead to significant time
    savings.
    By the way, that's the reason why there are actually three parts at the same level at the beginning of your
    execution plan: the upper two represent the two subqueries potentially executed for each row of the bottom main
    query.
    I assume that in your case (due to the object type returned by the XMLAGG() function) this optimization might
    not be used at all.
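The caching behaviour described above can be sketched as a small fixed-size memo table. The 256-slot size and discard-on-collision rule follow the 9i description in the text; the "subquery" here is just a stand-in Python function, not real Oracle internals:

```python
# Sketch of scalar subquery caching: a bounded input -> output memo table.
# Slots are addressed by hash; colliding new entries are discarded, as
# described for 9i. The subquery body is a stand-in function.
CACHE_SLOTS = 256

def run_with_cache(subquery, inputs):
    cache = {}          # slot -> (input value, subquery output)
    executions = 0
    results = []
    for v in inputs:
        slot = hash(v) % CACHE_SLOTS
        hit = cache.get(slot)
        if hit is not None and hit[0] == v:
            results.append(hit[1])      # reuse stored output, no execution
            continue
        executions += 1
        out = subquery(v)
        if hit is None:                 # collision: discard, don't replace
            cache[slot] = (v, out)
        results.append(out)
    return results, executions

# Few distinct, repeated inputs: far fewer executions than rows.
results, execs = run_with_cache(lambda v: v * 10, [1, 2, 1, 1, 2, 3, 1])
print(execs)   # 3 executions for 7 rows
```

With many distinct or randomly ordered inputs the slots fill up and collide, and the execution count approaches the row count, which is exactly the sensitivity to data pattern described above.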
    So from a performance perspective it might be beneficial to unnest the scalar subqueries by joining
    "adrh_vw", "discounts" and "discount_entitlement" to "cpc_details_vw" in the main query, but I guess
    content-wise it won't be possible to get the same XML output then.
    2. It looks like the views used contain a lot of calls to PL/SQL package functions. I could imagine that
    it's not only the execution of the subqueries that consumes a lot of time; the function calls could also
    account for a significant amount of time. Note that for the subqueries mentioned, this means those functions
    are executed for each row of a main query that already executes some functions for each row as part of the
    view definition.
    My suggestion here would be to write a query that attempts to select the same data but without using the XML
    functions, in order to find out how much time is spent in the function calls. You could use something similar
    to this (note: this is untested code):
    select forename "Forename"
    ,surname "Surname"
    ,dob "DateOfBirth"
    ,addressee_id
    ,main.recordtype
    ,(select max(adrhv.Postcode)
    from adrh_vw adrhv
    where adrhv.customer_no = main.customer_no ) as "AddressData"
    ,(select max(de.discount_category)
             || max(to_char(de.start_date,'YYYY-MM-DD'))
             || max(to_char(de.end_date,'YYYY-MM-DD'))
             || max(decode(de.entitlement,'1','PENDING','APPROVED'))
      from discounts di
          ,discount_entitlement de
      where de.discount_id = di.discount_id
      and di.customer_no = main.customer_no ) as "CategoryData"
    from cpc_details_vw main
    where recordtype != 'X';
    If this query takes a similar amount of time, then you first need to have a look at what these PL/SQL functions
    actually do and if there's a way to do this faster (optimally without using the PL/SQL functions because this
    will be the fastest way). Once this is sorted out you could again try to run the XML version to find out if the
    XML stuff adds another overhead that needs to be looked at.
    3. Since you're using rule-based optimization, the index ADR_PK on ADDRESSEE is used in the main query.
    A hash join or sort merge join could be more efficient in this case, but this could be determined by the
    cost-based optimizer if you have reasonable statistics gathered.
    Finally, you might just be unlucky in that generating such a single, very large XMLTYPE or CLOB simply becomes
    more and more inefficient the larger your input set gets.
    If this is the case, you should think about alternative ways to generate that large XML, e.g. generate a CSV
    export from the database and build the XML in a third-party tool, or generate the XML step-wise and merge it
    afterwards in some other tool.
    It might be that in 10g the XML functions are more efficient but that probably won't help since you're still on 9i.
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Inner Selects vs. Outer Joins

    We have a report where we are joining many tables (25 tables) together. Initially, we created the report with outer joins and noticed that the cost in the explain plan was 2600 and the performance was adequate. We then started to remove the outer joins, replacing them with inner selects, and both the cost and the performance of the report improved. Currently, we have replaced 13 of the outer joins with inner selects and the cost is down to 650.
    Typically, I have always used joins or outer joins because I thought this was best practice. However, I am now questioning this.
    Is it better to use inner selects when you are only returning one column from a table?
    Other than tables that require multiple columns to be returned, are there issues we should be concerned with?
    Below are examples:
    Thanks,
    Brian
    -- Using Outer Joins
    Select q.quote_id, q.quote_name, l.location_name, sr.sales_rep_name
    from quote q, location l, sales_rep sr
    where q.location_id = l.location_id (+)
    and q.sales_rep_id = sr.sales_rep_id (+)
    -- Using Inner Selects
    Select q.quote_id, q.quote_name,
    (select location_name from location where location_id = q.location_id) location_name,
    (select sales_rep_name from sales_rep where sales_rep_id = q.sales_rep_id) sales_rep_name
    from quote q

    A lower cost for a query does not mean better performance. The scalar subqueries that you use when rewriting the query show up in your explain plan with no cost at all, and that is just not true (a query must "cost" something, right?).
    Instead of starting to rewrite your queries using scalar subqueries, it's better to investigate why a query performs badly.
    If using a scalar subquery always yielded better performance, Oracle would probably rewrite all outer join queries to use scalar subqueries.

  • Correlated Subqueries Vs Cursor

    Hi all,
    For good performance, which is best among correlated subqueries and cursors (if both tables have millions of records)?
    Thanks
    Kalinga

    Blind rule: if something can be done in SQL alone, then it performs better than doing it using PL/SQL.
    So using subqueries or JOINs is always better than using cursors (I think you meant nested cursors here).
    Cheers
    Sarma.
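Sarma's rule of thumb can be illustrated with a toy comparison (SQLite via Python's sqlite3, standing in for Oracle): the same update done as one set-based statement versus a row-by-row loop, the loop playing the role of a nested-cursor approach:

```python
# Toy comparison: one set-based UPDATE vs. a row-by-row loop (cursor-style).
# Both produce identical data; the single statement is the idiomatic form
# and avoids per-row round trips.
import sqlite3

def setup():
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE emp (emp_id INT PRIMARY KEY, sal INT);
        INSERT INTO emp VALUES (1, 100), (2, 200), (3, 300);
    """)
    return conn

# Set-based: one statement, the engine iterates internally.
set_based = setup()
set_based.execute("UPDATE emp SET sal = sal * 2")

# Cursor-style: fetch every row, then issue one UPDATE per row.
cursor_style = setup()
for emp_id, sal in cursor_style.execute("SELECT emp_id, sal FROM emp").fetchall():
    cursor_style.execute("UPDATE emp SET sal = ? WHERE emp_id = ?",
                         (sal * 2, emp_id))

a = set_based.execute("SELECT * FROM emp ORDER BY emp_id").fetchall()
b = cursor_style.execute("SELECT * FROM emp ORDER BY emp_id").fetchall()
print(a == b)  # identical results either way
```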

  • Order of operations

    Hi guys
    I'm a noob in this field of using hints
    I cannot figure out how to produce the exact execution plan Oracle suggests:
    GENERAL INFORMATION SECTION
    Tuning Task Name   : sql_id_tuning_task_SD
    Tuning Task Owner  : APMIG
    Workload Type      : Single SQL Statement
    Scope              : COMPREHENSIVE
    Time Limit(seconds): 120
    Completion Status  : COMPLETED
    Started at         : 07/26/2011 09:05:28
    Completed at       : 07/26/2011 09:05:30
    Schema Name: APMIG
    SQL ID     : 26g474qmntzds
    SQL Text   : insert /*+ append */ into STG_AR_PAYMENTS_CHEQUE_DETAILS
                 select  /*+ leading(dp,p,ep,pdc) use_hash(ep) full(ep)*/
                 apswn.ar_payments_cheque_details_s.nextval
                 --AR_PAYMENT_CHEQUE_DETAIL_ID
                 ,null --CHEQUE_DATE
                 ,null --CHEQUE_NUMBER
                 ,null --ACCOUNT_NUMBER
                 ,null --CHEQUE_PAYER_NAME
                 ,null --CHEQUE_BANK_CODE
                 ,pdc.deposit_id --CHEQUE_DEP_SLIP_NR
                 ,'N' --CHEQUE_NEGOTIATION_IND for PS
                 ,ep.entry_date--CHEQUE_SENT_NEGOTIATION_DATE
                 ,decode(ep.negotiation_status,'Y',
                 ep.cntl_timestamp)--CHEQUE_NEGOTIATION_COMPL_DATE
                 ,ep.negotiated_amt--CHEQUE_NEGOTIATION_AMOUNT
                 ,ep.negotiated_currency_code--CHEQUE_NEGOTIATION_CURRENCY
             ,decode(ep.returned_ind,'Y', ep.cntl_timestamp)--CHEQUE_RETURN_DATE
                 ,(select cd.cheque_deposit_id from stg_ar_cheque_deposits cd
                 where cd.cheque_dep_slip_nr = pdc.deposit_id and cd.deposit_bu =
                 pdc.deposit_bu)--CHEQUE_DEPOSIT_ID
                 ,(select sp.ar_payment_id from stg_ar_payments sp where
                 sp.deposit_id = p.deposit_id and sp.deposit_bu = p.deposit_bu )
                 --AR_PAYMENT_ID
                 ,ep.customer_reference--CHEQUE_CUSTOMER_REF
                 from ps_deposit_control pdc
                 join ps_payment p on p.deposit_id = pdc.deposit_id and
                 p.deposit_bu = pdc.deposit_bu and p.payment_method = 'CHK' and (
                 p.deposit_id not like '%TRA%'
                 and p.deposit_id not like '%TRN%'
             and p.deposit_id not like '%TRTANS%' )
                 left join deposit_payments dp on p.deposit_id = dp.deposit_id
                 and p.payment_seq_num = dp.payment_sequence and dp.payment_type
                 = 'CHK'
                 join entered_payments ep on ep.payment_type = dp.payment_type
                 and ep.payment_currency_code = dp.payment_currency_code
                 and ep.payment_id = dp.payment_id
                 and ep.entry_date = dp.entry_date
                 and ep.payment_sequence = dp.payment_sequence
                 log errors into E_STG_AR_PAYMENTS_CHEQUE_DS('cr8') reject limit
                 unlimited
    FINDINGS SECTION (1 finding)
    1- SQL Profile Finding (see explain plans section below)
      A potentially better execution plan was found for this statement.
      Recommendation (estimated benefit=10%)
      - Consider accepting the recommended SQL profile.
        execute dbms_sqltune.accept_sql_profile(task_name =>
                'sql_id_tuning_task_SD', task_owner => 'APMIG', replace => TRUE);
    EXPLAIN PLANS SECTION
    1- Original With Adjusted Cost
    Plan hash value: 721680283
    | Id  | Operation               | Name                           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT        |                                | 89336 |    11M|       | 11473   (1)| 00:02:18 |
    |   1 |  LOAD AS SELECT         | STG_AR_PAYMENTS_CHEQUE_DETAILS |       |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL     | STG_AR_CHEQUE_DEPOSITS         |     1 |    22 |       |    49   (0)| 00:00:01 |
    |*  3 |   TABLE ACCESS FULL     | STG_AR_PAYMENTS                |     2 |    22 |       |  5259   (2)| 00:01:04 |
    |   4 |   ERROR LOGGING FULL    | STG_AR_PAYMENTS                |       |       |       |  5259   (2)| 00:01:04 |
    |   5 |    SEQUENCE             | AR_PAYMENTS_CHEQUE_DETAILS_S   |       |       |       |            |          |
    |   6 |     NESTED LOOPS        |                                | 89336 |    11M|       | 11473   (1)| 00:02:18 |
    |*  7 |      HASH JOIN          |                                | 89336 |    10M|  6728K| 11464   (1)| 00:02:18 |
    |*  8 |       HASH JOIN         |                                | 89399 |  5674K|  4768K|  9504   (1)| 00:01:55 |
    |*  9 |        TABLE ACCESS FULL| DEPOSIT_PAYMENTS               | 91985 |  3682K|       |  1166   (1)| 00:00:14 |
    |* 10 |        TABLE ACCESS FULL| PS_PAYMENT                     |   524K|    12M|       |  7207   (1)| 00:01:27 |
    |* 11 |       TABLE ACCESS FULL | ENTERED_PAYMENTS               | 91920 |  5116K|       |  1332   (1)| 00:00:16 |
    |* 12 |      INDEX UNIQUE SCAN  | PS_DEPOSIT_CONTROL             |     1 |    17 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("CD"."CHEQUE_DEP_SLIP_NR"=:B1 AND "CD"."DEPOSIT_BU"=:B2)
       3 - filter("SP"."DEPOSIT_ID"=:B1 AND "SP"."DEPOSIT_BU"=:B2)
       7 - access("EP"."PAYMENT_TYPE"="DP"."PAYMENT_TYPE" AND
                  "EP"."PAYMENT_CURRENCY_CODE"="DP"."PAYMENT_CURRENCY_CODE" AND "EP"."PAYMENT_ID"="DP"."PAYMENT_ID" AND
                  "EP"."ENTRY_DATE"="DP"."ENTRY_DATE" AND "EP"."PAYMENT_SEQUENCE"="DP"."PAYMENT_SEQUENCE")
       8 - access("P"."PAYMENT_SEQ_NUM"="DP"."PAYMENT_SEQUENCE" AND "P"."DEPOSIT_ID"="DP"."DEPOSIT_ID")
       9 - filter("DP"."PAYMENT_TYPE"='CHK')
      10 - filter("P"."DEPOSIT_ID" NOT LIKE '%TRA%' AND "P"."DEPOSIT_ID" NOT LIKE '%TRN%' AND
                  "P"."DEPOSIT_ID" NOT LIKE '%TRTANS%' AND "P"."PAYMENT_METHOD"='CHK')
      11 - filter("EP"."PAYMENT_TYPE"='CHK')
      12 - access("P"."DEPOSIT_BU"="PDC"."DEPOSIT_BU" AND "P"."DEPOSIT_ID"="PDC"."DEPOSIT_ID")
    3- Using SQL Profile
    Plan hash value: 2436199625
    | Id  | Operation               | Name                           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT        |                                | 89336 |    11M|       | 11473   (1)| 00:02:18 |
    |   1 |  LOAD AS SELECT         | STG_AR_PAYMENTS_CHEQUE_DETAILS |       |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL     | STG_AR_CHEQUE_DEPOSITS         |     1 |    22 |       |    49   (0)| 00:00:01 |
    |*  3 |   TABLE ACCESS FULL     | STG_AR_PAYMENTS                |     2 |    22 |       |  5259   (2)| 00:01:04 |
    |   4 |   ERROR LOGGING FULL    | STG_AR_PAYMENTS                |       |       |       |  5259   (2)| 00:01:04 |
    |   5 |    SEQUENCE             | AR_PAYMENTS_CHEQUE_DETAILS_S   |       |       |       |            |          |
    |   6 |     NESTED LOOPS        |                                | 89336 |    11M|       | 11473   (1)| 00:02:18 |
    |*  7 |      HASH JOIN          |                                | 89336 |    10M|  6200K| 11464   (1)| 00:02:18 |
    |*  8 |       TABLE ACCESS FULL | ENTERED_PAYMENTS               | 91920 |  5116K|       |  1332   (1)| 00:00:16 |
    |*  9 |       HASH JOIN         |                                | 89399 |  5674K|  4768K|  9504   (1)| 00:01:55 |
    |* 10 |        TABLE ACCESS FULL| DEPOSIT_PAYMENTS               | 91985 |  3682K|       |  1166   (1)| 00:00:14 |
    |* 11 |        TABLE ACCESS FULL| PS_PAYMENT                     |   524K|    12M|       |  7207   (1)| 00:01:27 |
    |* 12 |      INDEX UNIQUE SCAN  | PS_DEPOSIT_CONTROL             |     1 |    17 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("CD"."CHEQUE_DEP_SLIP_NR"=:B1 AND "CD"."DEPOSIT_BU"=:B2)
       3 - filter("SP"."DEPOSIT_ID"=:B1 AND "SP"."DEPOSIT_BU"=:B2)
       7 - access("EP"."PAYMENT_TYPE"="DP"."PAYMENT_TYPE" AND
                  "EP"."PAYMENT_CURRENCY_CODE"="DP"."PAYMENT_CURRENCY_CODE" AND "EP"."PAYMENT_ID"="DP"."PAYMENT_ID" AND
                  "EP"."ENTRY_DATE"="DP"."ENTRY_DATE" AND "EP"."PAYMENT_SEQUENCE"="DP"."PAYMENT_SEQUENCE")
       8 - filter("EP"."PAYMENT_TYPE"='CHK')
       9 - access("P"."PAYMENT_SEQ_NUM"="DP"."PAYMENT_SEQUENCE" AND "P"."DEPOSIT_ID"="DP"."DEPOSIT_ID")
      10 - filter("DP"."PAYMENT_TYPE"='CHK')
      11 - filter("P"."DEPOSIT_ID" NOT LIKE '%TRA%' AND "P"."DEPOSIT_ID" NOT LIKE '%TRN%' AND
                  "P"."DEPOSIT_ID" NOT LIKE '%TRTANS%' AND "P"."PAYMENT_METHOD"='CHK')
      12 - access("P"."DEPOSIT_BU"="PDC"."DEPOSIT_BU" AND "P"."DEPOSIT_ID"="PDC"."DEPOSIT_ID")
    -------------------------------------------------------------------------------
    The difference is that it suggests something slightly different for steps 7 and 8.
    Although the cost is a bit higher than in my initial plan (and I don't know about the Time column, but this runs far longer).
    Any idea how to produce the exact plan, and whether this will help me?
    BR
    Florin POP

    without hints
    Plan hash value: 2884505198                                                                                                                                                                                                                                                                                 
    | Id  | Operation                         | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |                                                                                                                                                                                        
    |   0 | INSERT STATEMENT                  |                                |    61 |  8479 |  7420   (1)| 00:01:30 |                                                                                                                                                                                        
    |   1 |  LOAD AS SELECT                   | STG_AR_PAYMENTS_CHEQUE_DETAILS |       |       |            |          |                                                                                                                                                                                        
    |*  2 |   TABLE ACCESS FULL               | STG_AR_CHEQUE_DEPOSITS         |     1 |    22 |    49   (0)| 00:00:01 |                                                                                                                                                                                        
    |*  3 |   TABLE ACCESS FULL               | STG_AR_PAYMENTS                |     2 |    22 |  5259   (2)| 00:01:04 |                                                                                                                                                                                        
    |   4 |   ERROR LOGGING FULL              | STG_AR_PAYMENTS                |       |       |  5259   (2)| 00:01:04 |                                                                                                                                                                                        
    |   5 |    SEQUENCE                       | AR_PAYMENTS_CHEQUE_DETAILS_S   |       |       |            |          |                                                                                                                                                                                        
    |   6 |     NESTED LOOPS                  |                                |       |       |            |          |                                                                                                                                                                                        
    |   7 |      NESTED LOOPS                 |                                |    61 |  8479 |  7420   (1)| 00:01:30 |                                                                                                                                                                                        
    |   8 |       NESTED LOOPS                |                                |    61 |  5002 |  7332   (1)| 00:01:28 |                                                                                                                                                                                        
    |   9 |        NESTED LOOPS               |                                |    61 |  2501 |  7207   (1)| 00:01:27 |                                                                                                                                                                                        
    |* 10 |         TABLE ACCESS FULL         | PS_PAYMENT                     |    61 |  1464 |  7207   (1)| 00:01:27 |                                                                                                                                                                                        
    |* 11 |         INDEX UNIQUE SCAN         | PS_DEPOSIT_CONTROL             |     1 |    17 |     0   (0)| 00:00:01 |                                                                                                                                                                                        
    |  12 |        TABLE ACCESS BY INDEX ROWID| DEPOSIT_PAYMENTS               |     1 |    41 |     3   (0)| 00:00:01 |                                                                                                                                                                                        
    |* 13 |         INDEX RANGE SCAN          | DP_IX                          |     1 |       |     2   (0)| 00:00:01 |                                                                                                                                                                                        
    |* 14 |       INDEX UNIQUE SCAN           | ENPA_PK                        |     1 |       |     1   (0)| 00:00:01 |                                                                                                                                                                                        
    |  15 |      TABLE ACCESS BY INDEX ROWID  | ENTERED_PAYMENTS               |     1 |    57 |     2   (0)| 00:00:01 |                                                                                                                                                                                        
    with hint /*+ full(ep) full(dp)*/
    Plan hash value: 1147295939                                                                                                                                                                                                                                                                                 
    | Id  | Operation               | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |                                                                                                                                                                                                  
    |   0 | INSERT STATEMENT        |                                |    61 |  8479 |  9706   (1)| 00:01:57 |                                                                                                                                                                                                  
    |   1 |  LOAD AS SELECT         | STG_AR_PAYMENTS_CHEQUE_DETAILS |       |       |            |          |                                                                                                                                                                                                  
    |*  2 |   TABLE ACCESS FULL     | STG_AR_CHEQUE_DEPOSITS         |     1 |    22 |    49   (0)| 00:00:01 |                                                                                                                                                                                                  
    |*  3 |   TABLE ACCESS FULL     | STG_AR_PAYMENTS                |     2 |    22 |  5259   (2)| 00:01:04 |                                                                                                                                                                                                  
    |   4 |   ERROR LOGGING FULL    | STG_AR_PAYMENTS                |       |       |  5259   (2)| 00:01:04 |                                                                                                                                                                                                  
    |   5 |    SEQUENCE             | AR_PAYMENTS_CHEQUE_DETAILS_S   |       |       |            |          |                                                                                                                                                                                                  
    |*  6 |     HASH JOIN           |                                |    61 |  8479 |  9706   (1)| 00:01:57 |                                                                                                                                                                                                  
    |*  7 |      HASH JOIN          |                                |    61 |  5002 |  8374   (1)| 00:01:41 |                                                                                                                                                                                                  
    |   8 |       NESTED LOOPS      |                                |    61 |  2501 |  7207   (1)| 00:01:27 |                                                                                                                                                                                                  
    |*  9 |        TABLE ACCESS FULL| PS_PAYMENT                     |    61 |  1464 |  7207   (1)| 00:01:27 |                                                                                                                                                                                                  
    |* 10 |        INDEX UNIQUE SCAN| PS_DEPOSIT_CONTROL             |     1 |    17 |     0   (0)| 00:00:01 |                                                                                                                                                                                                  
    |* 11 |       TABLE ACCESS FULL | DEPOSIT_PAYMENTS               | 90911 |  3639K|  1166   (1)| 00:00:14 |                                                                                                                                                                                                  
    |* 12 |      TABLE ACCESS FULL  | ENTERED_PAYMENTS               | 91920 |  5116K|  1332   (1)| 00:00:16 |                                                                                                                                                                                                  
    ----------------------------------------------------------------------------------------------------------
    Finally, you have 2 scalar subqueries ... this was the problem (if I had written them as outer joins I would have noticed a full scan where there should have been an index scan).
    I greatly appreciate your help, you made me think in another way ... (also I marked a post of yours as correct)
    Thanks

  • IR Report found 1 million record with blob files performance is too slow!

    we are using
    oracle apex 4.2.x
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
    mod_plsql with Apache
    Hardware: HP proliant ML350P
    OS: WINDOWS 2008 R2
    Customized content management system developed in APEX. When the IR report is opened it finds 1 million rows, and each row has a BLOB (<5MB, a pdf/tiff/bmp/jpg); the row count will keep rising. But the search performance is very slow!
    How can we increase the performance?
    How can we show a progress status to the user while the search is running, on the IR report itself?
    Thanx,
    Ram

    It's impossible to make definitive recommendations on performance improvement based on the limited information provided (in particular the absence of APEX debug traces and SQL execution plans), and without knowledge of the application requirements or access to real data.
    As noted above, this is mainly a matter of data model and application design rather than a problem with APEX.
    Based on what has been made available on apex.oracle.com, taking action on the following points may improve performance.
    I have concerns about the data model. The multiple DMS_TOPMGT_MASTER.NWM_DOC_LVL_0x_COD_NUM columns are indications of incomplete normalization, and the use of the DMS_TOPMGT_DETAILS table hints at an EAV model. Look at normalizing the model so that the WM_DOC_LVL_0x_COD_NUM relationship data can be retrieved using a single join rather than multiple scalar subqueries. Store 1:1 document attributes as column values in DMS_TOPMGT_MASTER rather than rows in DMS_TOPMGT_DETAILS.
    There are no statistics on any of the application tables. Make sure statistics are gathered and kept up to date to enable the optimizer to determine correct execution plans.
    There are no indexes on any of the FK columns or search columns. Create indexes on FK columns to improve join performance, and on searched columns to improve search performance.
    More than 50% of the columns in the report query are hidden and not apparently used anywhere in the report. Why is this? A number of these columns are retrieved using scalar subqueries, which will adversely impact performance in a query processing 1 million+ rows. Remove any unnecessary columns from the report query.
    A number of functions are applied to columns in the report query. These will incur processing time for the functions themselves and context switching overhead in the case of the non-kernel dbms_lob.get_length calls. Remove these function calls from the query and replace them with alternative processing that will not impact query performance, particularly the use of APEX column attributes that will only apply transformations to values that are actually displayed, rather than to all rows processed in the query.
    Remove to_char calls from date columns and format them using date format masks in column attributes.
    Remove decode/case switches. Replace this logic using Display as Text (based on LOV, escape special characters) display types based on appropriate LOVs.
    Remove the dbms_lob.get_length calls. Instead add a file length column to the table, compute the file size when files are added/modified using your application or a trigger, and use this as the BLOB column in the query.
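    The trigger-based approach above can be sketched outside Oracle. A minimal illustration using SQLite rather than Oracle (table and column names are invented for the demo, and SQLite's built-in length() stands in for dbms_lob.getlength in the trigger body):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
      CREATE TABLE docs (id INTEGER PRIMARY KEY, file BLOB, file_length INTEGER);
      -- Maintain the length whenever a file is inserted, so report queries
      -- never have to inspect the BLOB itself.
      CREATE TRIGGER docs_len AFTER INSERT ON docs
      BEGIN
        UPDATE docs SET file_length = length(NEW.file) WHERE id = NEW.id;
      END;
    """)

    con.execute("INSERT INTO docs (id, file) VALUES (1, ?)", (b"\x00" * 1024,))
    size, = con.execute("SELECT file_length FROM docs WHERE id = 1").fetchone()
    print(size)  # 1024
    ```

    The report query then reads the precomputed file_length column and never touches the BLOB at query time.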
    Searching using the Search Field text box in the APEX interactive report Search Bar generates a query like:
    select
    from
      (select
      from
        (...your report query...)
      ) r
      where ((instr(upper("NWM_DOC_REF_NO"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("NWM_DOC_DESC"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("SECTION_NAME"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("CODE_TYPE"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("REF_NUMBER_INDEX"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("DATE_INDEX"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("SUBJECT_INDEX"), upper(:apxws_search_string_1)) > 0
      or instr(upper("NWM_DOC_SERIEL"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("NWM_DOC_DESCRIPTION"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("NWM_DOC_STATUS"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("MIME_TYPE"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("NWM_DOC_FILE_BINARY"), upper(:APXWS_SEARCH_STRING_1)) > 0 ))
      ) r
    where
      rownum <= to_number(:APXWS_MAX_ROW_CNT)
    This will clearly never make use of any available indexes on your table. If you only want users to be able to search using values from 3 columns then remove the Search Field from the Search Bar and only allow users to create explicit filters on those columns. It may then be possible for the optimizer to push the resulting simple predicates down into the inlined report query to make use of indexes on the searched column.
    I have created a copy of your search page on page 33 of your app and created an After Regions page process that will create Debug entries containing the complete IR query and bind variables used so they can be extracted for easier performance analysis and tuning outside of APEX. You can copy this to your local app and modify the page and region ID parameters as required.
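    The point that instr(upper(...)) predicates can never use an index holds in any SQL engine, and is easy to see in plan output. A small sketch using SQLite's EXPLAIN QUERY PLAN (SQLite, not Oracle; the docs table and doc_ref column are invented names for the demo):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE docs (doc_ref TEXT)")
    con.execute("CREATE INDEX docs_ref_ix ON docs (doc_ref)")

    def plan(sql):
        # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
        return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

    # Plain equality predicate: reported as a SEARCH using the index.
    indexed = plan("SELECT doc_ref FROM docs WHERE doc_ref = 'X'")
    # Function-wrapped predicate: reported as a SCAN of every row.
    scanned = plan("SELECT doc_ref FROM docs WHERE instr(upper(doc_ref), 'X') > 0")

    print(indexed)
    print(scanned)
    ```

    The equality predicate is a SEARCH via the index; the instr/upper predicate forces a row-by-row SCAN, which is exactly what happens to the APEX-generated search query above.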

  • Case statement in a multiple query

    Hi everyone,
    This is my first time using a case statement in a query with multiple subqueries. I have tried to implement it but had no luck. Please see below:
    set define off
    SELECT g.GROUP_NAME as Market
    ,t.NAME as "Template Name"
    ,t.TEMPLATE_ID as "Template ID"
    ,(SELECT created
    FROM material
    where template_id = t.template_id) as "Date Created"
    ,(SELECT DESTINATION_FOLDER_ID,
    CASE DESTINATION_FOLDER_ID
    WHEN NULL THEN 'Upload'
    ELSE 'HQ'
    END
    from log_material_copy
    where destination_material_id in (select material_id
    from material
    where template_id = t.template_id ))as "Origin"
    ,(select material_id
    from log_material_copy
    where destination_material_id in (select material_id
    from material
    where template_id = t.template_id)) as "HQ/Upload ID"
    ,(SELECT COUNT (mse.ID)
    FROM MATERIAL_SEND_EVENT mse, material m, creative c
    WHERE mse.MATERIAL_ID = m.MATERIAL_ID
    AND mse.MATERIAL_TYPE_ID = m.MATERIAL_TYPE_ID
    AND m.ASSET_ID = c.id
    AND c.TEMPLATE_ID = t.TEMPLATE_ID) as Sent
    ,(SELECT COUNT (de.ID)
    FROM download_event de, material m, creative c
    WHERE de.MATERIAL_ID = m.MATERIAL_ID
    AND de.MATERIAL_TYPE_ID = m.MATERIAL_TYPE_ID
    AND m.ASSET_ID = c.id
    AND c.TEMPLATE_ID = t.TEMPLATE_ID) as Download
    ,(SELECT 'https://main.test.com/bm/servlet/' || 'UArchiveServlet?action=materialInfo&materialId=' || DESTINATION_MATERIAL_ID || '&materialFolderId=' || DESTINATION_FOLDER_ID
    from log_material_copy
    where destination_material_id in (select material_id
    from material
    where template_id = t.template_id)) as "URL to template on MPC layer"
    --, t.AVAILABLE_FOR_TRANSFER as "Available for transfer"
    FROM template t, layout l, groups g
    WHERE t.LAYOUT_ID = l.LAYOUT_ID
    AND l.ORGANIZATION_ID = g.IP_GROUPID
    AND g.IP_GROUPID in ( 1089, 903, 323, 30, 96, 80, 544, 1169, 584, 785, 827, 31, 10, 503, 1025 )
    ORDER BY g.GROUP_NAME ASC;
    The subquery selecting DESTINATION_FOLDER_ID with the CASE is my case statement. Please let me know what is wrong with it.
    Regards,
    Jas

    I think you're getting the idea, but:
    You're still selecting 2 columns in the (scalar) subquery. Did you read the link I posted for you?
    "a) scalar subqueries - a single row, single column query that you use in place of a "column", it looks like a column or function."
    You must move that query outside and join it to template.
    Something like:
    NOT TESTED FOR OBVIOUS REASONS SO YOU'LL PROBABLY NEED TO TWEAK IT A BIT
    select g.group_name as market,
           t.name as "Template Name",
           t.template_id as "Template ID",
           m.created  as "Date Created",
           lmc.destination_folder_id,
           case
             when lmc.destination_folder_id is null then 'Upload'
             else 'HQ'
           end as "Origin",
           (select material_id
              from log_material_copy
             where destination_material_id in
                   (select material_id
                      from material
                     where template_id = t.template_id)) as "HQ/Upload ID",
           (select count(mse.id)
              from material_send_event mse, material m, creative c
             where mse.material_id = m.material_id
               and mse.material_type_id = m.material_type_id
               and m.asset_id = c.id
               and c.template_id = t.template_id) as sent,
           (select count(de.id)
              from download_event de, material m, creative c
             where de.material_id = m.material_id
               and de.material_type_id = m.material_type_id
               and m.asset_id = c.id
               and c.template_id = t.template_id) as download,
           (select 'https://main.test.com/bm/servlet/' ||
                   'UArchiveServlet?action=materialInfo&materialId=' ||
                   destination_material_id || '&materialFolderId=' ||
                   destination_folder_id
              from log_material_copy
             where destination_material_id in
                   (select material_id
                      from material
                     where template_id = t.template_id)) as "URL to template on MPC layer"
    --, t.AVAILABLE_FOR_TRANSFER as "Available for transfer"
      from template t
      ,    layout l
      ,    groups g
      ,    MATERIAL M
      ,    LOG_MATERIAL_COPY LMC
    where t.layout_id = l.layout_id
       and l.organization_id = g.ip_groupid
       and M.TEMPLATE_ID = t.template_id
       and LMC.destination_material_id in ( select material_id
                                            from   material
                                             where  template_id = t.template_id )
       and g.ip_groupid in (1089,
                            903,
                            323,
                            30,
                            96,
                            80,
                            544,
                            1169,
                            584,
                            785,
                            827,
                            31,
                            10,
                            503,
                            1025)
    order by g.group_name asc;
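    One detail worth demonstrating from the rewrite: the original `CASE DESTINATION_FOLDER_ID WHEN NULL THEN ...` form never takes the NULL branch, because a simple CASE compares with =, and NULL = NULL is not true; the searched form with IS NULL is what the "Origin" column needs. A quick illustration (run here in SQLite for convenience; the NULL semantics are the same in Oracle):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    simple, searched = con.execute("""
        SELECT CASE NULL WHEN NULL THEN 'Upload' ELSE 'HQ' END,
               CASE WHEN NULL IS NULL THEN 'Upload' ELSE 'HQ' END
    """).fetchone()

    print(simple)    # 'HQ'  -- the WHEN NULL branch never matches
    print(searched)  # 'Upload'
    ```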

  • Why same query runs on isqlplus but not in Forms/Reports trigger

    Hi,
    I have a query in which I extract one column via a join to a parent table. If I run the same query at the iSQL*Plus prompt it works, but if I run it in a Forms/Reports trigger it says: found "select" where something else was expected.
    below is an example .
    select em1.mreading, em1.grid_code,
    (select em.mreading from energy_mreading em where em.grid_code=em1.grid_code and em.transformer_code=em1.transformer_code
    and em.bus_bar=em1.bus_bar and to_date(to_char(em.r_date,'dd/mm/yyyy'),'dd/mm/yy') = to_date('02/01/07' ,'dd/mm/yy') - 1)
    as Yreading
    from energy_mreading em1
    where to_date(to_char(em1.r_date,'dd/mm/yyyy'),'dd/mm/yy')= to_date('02/01/07' ,'dd/mm/yy')
    Can anyone help me? Are there any restrictions/limitations in Forms/Reports triggers?
    Thanks, Khawar.

    In Forms and in cursors you cannot use scalar subqueries the way you do. You have to use joins!
    select
         em1.mreading
        ,em1.grid_code
        ,em.mreading as Yreading
    from
        energy_mreading em1
       ,energy_mreading em
    where 1=1
        and trunc(em1.r_date) = to_date('02/01/2007','dd/mm/rrrr')
        and em.grid_code = em1.grid_code
        and em.transformer_code = em1.transformer_code
        and em.bus_bar = em1.bus_bar
        and trunc(em.r_date) = trunc(em1.r_date) - 1
    Try this, hope it works.
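    The rewrite above can be checked on toy data. A minimal sketch in SQLite (not Oracle: dates are held as ISO strings and date(..., '-1 day') stands in for Oracle date arithmetic; the sample values are invented):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE energy_mreading (grid_code TEXT, r_date TEXT, mreading REAL)")
    con.executemany("INSERT INTO energy_mreading VALUES (?,?,?)",
                    [("G1", "2007-01-01", 100.0), ("G1", "2007-01-02", 140.0)])

    # Scalar subquery form, as in the original question.
    scalar = con.execute("""
        SELECT em1.mreading,
               (SELECT em.mreading FROM energy_mreading em
                 WHERE em.grid_code = em1.grid_code
                   AND em.r_date = date(em1.r_date, '-1 day')) AS yreading
          FROM energy_mreading em1
         WHERE em1.r_date = '2007-01-02'
    """).fetchall()

    # Self-join form, as in the suggested rewrite.
    joined = con.execute("""
        SELECT em1.mreading, em.mreading AS yreading
          FROM energy_mreading em1
          JOIN energy_mreading em
            ON em.grid_code = em1.grid_code
           AND em.r_date = date(em1.r_date, '-1 day')
         WHERE em1.r_date = '2007-01-02'
    """).fetchall()

    print(scalar, joined)  # same rows: [(140.0, 100.0)]
    ```

    One caveat: the inner join drops rows that have no previous-day reading, while the scalar subquery returns them with a NULL Yreading; use an outer join if those rows are needed.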

  • Cost of using subquery vs using same table twice in query

    Hi all,
    In a current project, I was asked by my supervisor what is the cost difference between the following two methods. First method is using a subquery to get the name field from table2. A subquery is needed because it requires the field sa_id from table1. The second method is using table2 again under a different alias to obtain table2.name. The two table2 are not self-joined. The outcome of these two queries are the same.
    Using subquery:
    select a.sa_id R1, b.other_field R2,
    (select b.name from b
    where b.b_id = a.sa_id) R3
    from table1 a, table2 b
    where ...
    Using same table twice (table2 under 2 different aliases):
    select a.sa_id R1, b.other_field R2, c.name R3
    from table1 a, table2 b, table2 c
    where
    c.b_id = a.sa_id
    and ....
    Can anyone tell me which version is better and why? (or under what circumstances, which version is better). And what are the costs involved? Many thanks.

    pl/sql novice wrote:
    Hi all,
    In a current project, I was asked by my supervisor what is the cost difference between the following two methods. First method is using a subquery to get the name field from table2. A subquery is needed because it requires the field sa_id from table1. The second method is using table2 again under a different alias to obtain table2.name. The two table2 are not self-joined. The outcome of these two queries are the same.
    Using subquery:
    Using same table twice (table2 under 2 different aliases)
    Can anyone tell me which version is better and why? (or under what circumstances, which version is better). And what are the costs involved? Many thanks.
    In theory, if you use the scalar "subquery" approach, the correlated subquery needs to be executed for each row of your result set. Depending on how efficiently the subquery is performed, this could require significant resources, since that recursive SQL needs to be executed for each row.
    The "join" approach needs to read the table only twice, and maybe it can even use an indexed access path. So in theory the join approach should perform better in most cases.
    Now the Oracle runtime engine (since Version 8) provides a feature called "filter optimization" that also applies to correlated scalar subqueries. Basically it works like an in-memory hash table that caches the (hashed) input values to the (deterministic) correlated subquery and the corresponding output values. The number of entries in the hash table is fixed up to 9i (256 entries), whereas in 10g it is controlled by an internal parameter that determines the size of the table (which can therefore hold a different number of entries depending on the size of each element).
    If the input value of the next row is the same as the previous row's, this optimization immediately returns the corresponding output value without any further action. Otherwise, if the input value is found in the hash table, the corresponding output value is returned; if not, the subquery is executed and an attempt is made to store the new input/output combination in the hash table, but if a hash collision occurs the combination is discarded.
    So the effectiveness of this clever optimization largely depends on three different factors: The order of the input values (because as long as the input value doesn't change the corresponding output value will be returned immediately without any further action required), the number of distinct input values and finally the rate of hash collisions that might occur when attempting to store a combination in the in-memory hash table.
    In summary, unfortunately you can't really tell in advance how well this optimization is going to work at runtime, and it therefore can't be properly reflected in the execution plan.
    You need to test both approaches individually, because in the optimal case the optimization of the scalar subquery will be superior to the join approach, but it could equally well be the other way around, depending on the factors mentioned.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
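    The caching behavior Randolf describes can be modeled in a few lines. A toy sketch of the idea only (the slot count, hash function, and collision handling here are simplifications for illustration, not Oracle's actual implementation):

    ```python
    SLOTS = 256  # fixed table size, as described for releases up to 9i

    def run_with_cache(inputs, subquery):
        """Evaluate a correlated scalar 'subquery' once per distinct cached input."""
        cache = {}          # slot -> (input, output); collisions are discarded
        executions = 0
        last = None         # (input, output) of the previous row
        results = []
        for x in inputs:
            if last is not None and last[0] == x:
                results.append(last[1])          # same input as previous row
                continue
            slot = hash(x) % SLOTS
            hit = cache.get(slot)
            if hit is not None and hit[0] == x:
                results.append(hit[1])           # found in the hash table
                last = hit
                continue
            y = subquery(x)                      # actually execute the subquery
            executions += 1
            if hit is None:
                cache[slot] = (x, y)             # store unless the slot is taken
            last = (x, y)
            results.append(y)
        return results, executions

    # Ordered input with few distinct values: very few real executions.
    res, n = run_with_cache([1, 1, 1, 2, 2, 3], lambda d: d * 10)
    print(res, n)  # [10, 10, 10, 20, 20, 30] 3
    ```

    This makes the three factors visible: input ordering (the `last` check), the number of distinct inputs, and collision rate (a taken slot means the combination is never cached).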

  • Can someone  see why im getting error in this query ?

    I had 2 queries; instead of using a left join I put them together. Now I get an error. Can someone take a look to see if the syntax is wrong somewhere?
    select * from
    select i.ips,
    a.ips,
    a.question_type,
    sum(a.score) score,
    p.project_name,
    p.project_segment,p.location,p.project_exec_model,
    p.project_exec_model||' - '||p.project_config pmodel,
    one.score schedule,two.score cost,three.score execution,four.score commercial,
    nvl(one.score,0)+nvl(two.score,0)+nvl(three.score,0)+nvl(four.score,0) as total,
    (select sum(prev_score) prev from XT_RISK_PAST2 where ips = i.ips) prev_score,
    (select max(createdt) from tbl_risk_answer where (ips,sample_num) in
    (select ips,max(sample_num) from VW_RISK_SCORE group by ips) and ips=i.ips) last_dt
    from
    (select v.project_id,v.ips,v.sample_num,v.question_id,v.header_desc,v.section_area,v.score,
    decode(bi_recurse(q.active_question,1,2),2,'OTR','-')||decode(bi_recurse(q.active_question,1,1),1,'ITO','-') question_type
    from VW_RISK_SCORE v left join tbl_risk_question q on v.question_id=q.question_id
    where (v.project_id,v.sample_num) in
    (select project_id,max(sample_num) sample_num from VW_RISK_SCORE group by project_id)
    ) a,
    (select distinct ips from VW_RISK_SCORE) i,
    (select ips, sum(score) score from VW_RISK_SCORE where section_area=1 group by ips) one,
    (select ips, sum(score) score from VW_RISK_SCORE where section_area=2 group by ips) two,
    (select ips, sum(score) score from VW_RISK_SCORE where section_area=3 group by ips) three,
    (select ips, sum(score) score from VW_RISK_SCORE where section_area=4 group by ips) four,
    tbl_risk_project p
    where i.ips=one.ips(+) and i.ips=two.ips(+) and i.ips=three.ips(+) and i.ips=four.ips(+) and ito on scores.ips=ito.ips
    and i.ips=p.ips and  a.question_type='-ITO' group by  i.ips,a.ips, a.question_type, p.project_name, p.project_segment, p.location, p.project_exec_model, p.project_exec_model||' - '||p.project_config, one.score, two.score, three.score, four.score, nvl(one.score,0)+nvl(two.score,0)+nvl(three.score,0)+nvl(four.score,0), (select sum(prev_score) prev from XT_RISK_PAST2 where ips = i.ips), (select max(createdt) from tbl_risk_answer where (ips,sample_num) in
    (select ips,max(sample_num) from VW_RISK_SCORE group by ips) and ips=i.ips)
    ) scores
    Here is the error I get:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at line 12
    ORA-00920: invalid relational operator
    00604. 00000 - "error occurred at recursive SQL level %s"
    *Cause:    An error occurred while processing a recursive SQL statement
    (a statement applying to internal dictionary tables).
    *Action:   If the situation described in the next error on the stack
    can be corrected, do so; otherwise contact Oracle Support.
    Error at Line: 30 Column: 4

    You would move them to the from-clause, just like one, two, three and four.
    Something like:
    untested for obvious reasons
    select *
      from (select i.ips,
                   a.ips,
                   a.question_type,
                   sum(a.score) score,
                   p.project_name,
                   p.project_segment,
                   p.location,
                   p.project_exec_model,
                   p.project_exec_model || ' - ' || p.project_config pmodel,
                   one.score schedule,
                   two.score cost,
                   three.score execution,
                   four.score commercial,
                   nvl(one.score, 0) + nvl(two.score, 0) + nvl(three.score, 0) +
                   nvl(four.score, 0) as total,
                   (select sum(prev_score) prev
                      from xt_risk_past2
                     where ips = i.ips) prev_score,
                   (select max(createdt)
                      from tbl_risk_answer
                     where (ips, sample_num) in
                           (select ips, max(sample_num)
                              from vw_risk_score
                             group by ips)
                       and ips = i.ips) last_dt
              from (select v.project_id,
                           v.ips,
                           v.sample_num,
                           v.question_id,
                           v.header_desc,
                           v.section_area,
                           v.score,
                           decode(bi_recurse(q.active_question, 1, 2),
                                  2,
                                  'OTR',
                                  '-') ||
                           decode(bi_recurse(q.active_question, 1, 1),
                                  1,
                                  'ITO',
                                  '-') question_type
                      from vw_risk_score v
                      left join tbl_risk_question q
                        on v.question_id = q.question_id
                     where (v.project_id, v.sample_num) in
                           (select project_id, max(sample_num) sample_num
                              from vw_risk_score
                             group by project_id)) a,
                   (select distinct ips from vw_risk_score) i,
                   (select ips, sum(score) score
                      from vw_risk_score
                     where section_area = 1
                     group by ips) one,
                   (select ips, sum(score) score
                      from vw_risk_score
                     where section_area = 2
                     group by ips) two,
                   (select ips, sum(score) score
                      from vw_risk_score
                     where section_area = 3
                     group by ips) three,
                   (select ips, sum(score) score
                      from vw_risk_score
                     where section_area = 4
                     group by ips) four,
                   tbl_risk_project p,
                   -- moved part I
                   (select ips,
                           sum(prev_score) prev
                      from xt_risk_past2
                     group by ips) five, --or whatever
                   -- moved part II
                  (select ips,
                     max(createdt) maxcreatedt
                    from tbl_risk_answer
                   where (ips, sample_num) in  (select ips, max(sample_num)
                                                  from vw_risk_score
                                              group by ips)
                   group by ips) six -- or whatever              
             where i.ips = one.ips(+)
               and i.ips = two.ips(+)
               and i.ips = three.ips(+)
               and i.ips = four.ips(+)
               and i.ips = five.ips -- outerjoin if needed
               and i.ips = six.ips -- outerjoin if needed
               and i.ips = p.ips
               and a.question_type = '-ITO'
             group by i.ips,
                      a.ips,
                      a.question_type,
                      p.project_name,
                      p.project_segment,
                      p.location,
                      p.project_exec_model,
                      p.project_exec_model || ' - ' || p.project_config,
                      one.score,
                      two.score,
                      three.score,
                      four.score,
                      nvl(one.score, 0) + nvl(two.score, 0) +
                      nvl(three.score, 0) + nvl(four.score, 0),
                      five.prev,
                      six.maxcreatedt
           ) scores
    I wonder how all this is going to perform, by the way... all those scalar subqueries and outer joins are expensive.
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1594885400346999596
    Read up on Subquery Factoring/WITH-clause, and try to rewrite parts of your query.
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4423923392083
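    As a sketch of the subquery-factoring suggestion, here is a cut-down version of the repeated one/two/three/four aggregates expressed once in a WITH clause (SQLite used for illustration; the table and the data are invented stand-ins for VW_RISK_SCORE):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE vw_risk_score (ips TEXT, section_area INT, score INT)")
    con.executemany("INSERT INTO vw_risk_score VALUES (?,?,?)",
                    [("A", 1, 5), ("A", 2, 7), ("B", 1, 3)])

    # One factored aggregate replaces the four near-identical inline views.
    rows = con.execute("""
        WITH by_area AS (
          SELECT ips, section_area, SUM(score) AS score
            FROM vw_risk_score
           GROUP BY ips, section_area
        )
        SELECT i.ips, one.score AS schedule, two.score AS cost
          FROM (SELECT DISTINCT ips FROM vw_risk_score) i
          LEFT JOIN by_area one ON one.ips = i.ips AND one.section_area = 1
          LEFT JOIN by_area two ON two.ips = i.ips AND two.section_area = 2
         ORDER BY i.ips
    """).fetchall()

    print(rows)  # [('A', 5, 7), ('B', 3, None)]
    ```

    The view is scanned once for the factored aggregate instead of once per section_area, and the outer joins replace the (+) notation.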

  • Fetching un-matching data

    Hi,
    From the below code snippet, I want to fetch the non-matching data from table "T1".
    DROP TABLE T1;
    DROP TABLE T2;
    CREATE TABLE T1
    (
    F1 INTEGER,
    F2 VARCHAR2(100 BYTE),
    F3 CHAR(1)
    );
    CREATE TABLE T2
    (
    FLD1 INTEGER,
    FLD2 VARCHAR2(100 BYTE)
    );
    begin
         insert into t1
         values(1, 'A1', 'F');
         insert into t1
         values(1, 'A2', 'F');
         insert into t1
         values(1, 'A3', 'T');
         insert into t1
         values(2, 'A1', 'T');
         insert into t1
         values(3, 'A1', 'F');
         insert into t2
         values(1, 'T1');
         insert into t2
         values(2, 'T2');
    end;
    select * from t1;
    select * from t2;
    Query A)
    select t1.*, nvl(t2.fld1, -1), nvl(t2.fld2, 'EMPTY')
    from t1, t2
    where t1.f1 = t2.fld1 (+);
    Query B)
    select t1.*, t2.*
    from t1, t2
    where t1.f3 = 'T' and t1.f1 = t2.fld1;
    Up to this point it is fine. Now I want to fetch the results of "Query A" using something like the approach given below;
    without applying an outer join this would look silly, but unfortunately I cannot avoid it.
    I am just trying to fetch the unmatching data from T1, depending on the F3 field:
    when F3 = 'T', matching data using "t1.f1 = t2.fld1" should be fetched; otherwise
    no join condition should be applied.
    I just tried using the below SQL, it gives "ORA-00905: missing keyword" error, I understand it is not correct.
    select t1.*, t2.*
    from t1, t2
    where case when t1.f3 = 'T' then t1.f1 = t2.fld1 else NULL end;
    I cannot use scalar subqueries for this requirement, as it is quite expensive, please give me other alternatives
    for this, that should be helpful to me, thank you.

    I'm kind of confused about your requirements.
    Do you want to return ALL rows from T1 and only the values from T2 where T1.F3 = 'T'?
    Something like this maybe?
    SELECT  T1.*
    ,       (
            CASE
                    WHEN T1.F3 = 'T'
                    THEN NVL(T2.FLD1,-1)
            END
            )       AS FLD1
    ,       (
            CASE
                    WHEN T1.F3 = 'T'
                    THEN NVL(T2.FLD2,'EMPTY')
            END
            )       AS FLD2
    FROM    T1
    ,       T2
    WHERE   T1.F1 = T2.FLD1(+)
    Sample Results:
    SQL > /
    F1 F2  F FLD1 FLD2
      1 A3  T    1 T1
      1 A2  F
      1 A1  F
      2 A1  T    2 T2
      3 A1  F
    Can you please post the expected output as well?
    We all appreciate that you posted sample data and the DDL to go with it. Can you post the following too?
    1. Oracle version (e.g. 10.2.0.4)
    2. Sample data, and expected output in \ tags (see FAQ for more information).
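    The approach above can be reproduced on the posted sample data. A sketch in SQLite (IFNULL standing in for NVL, and an explicit LEFT JOIN replacing the (+) outer-join notation):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
      CREATE TABLE t1 (f1 INTEGER, f2 TEXT, f3 TEXT);
      CREATE TABLE t2 (fld1 INTEGER, fld2 TEXT);
      INSERT INTO t1 VALUES (1,'A1','F'),(1,'A2','F'),(1,'A3','T'),
                            (2,'A1','T'),(3,'A1','F');
      INSERT INTO t2 VALUES (1,'T1'),(2,'T2');
    """)

    # Outer join T1 to T2, then expose T2 values only where F3 = 'T'
    # via CASE expressions in the select list.
    rows = con.execute("""
        SELECT t1.f1, t1.f2, t1.f3,
               CASE WHEN t1.f3 = 'T' THEN IFNULL(t2.fld1, -1) END AS fld1,
               CASE WHEN t1.f3 = 'T' THEN IFNULL(t2.fld2, 'EMPTY') END AS fld2
          FROM t1 LEFT JOIN t2 ON t1.f1 = t2.fld1
         ORDER BY t1.f1, t1.f2
    """).fetchall()

    for r in rows:
        print(r)
    ```

    All five T1 rows come back; the T2 values appear only on the F3 = 'T' rows, matching the sample results shown above.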

  • Query using views

    Since the query is too big, I have removed the query from the post.
    I would like to know whether using views in SQL queries degrades their performance.
    When views are used in SQL queries, the operation 'FILTER' is displayed in the explain plan, but the cost doesn't seem to be high. If the views can be replaced by the base tables, is it better to do so?
    Edited by: user642116 on Nov 8, 2008 11:13 PM

    user642116 wrote:
    I have a main table called NATURAL_PERSON. There are several child tables based on this table, for e.g. PERSONAL_DETAILS, NATIONALITY_PERSON, CIVIL_STATUS etc. All these child tables have a foreign key NPN_ID which is joined with the ID of NATURAL_PERSON.
    I need to obtain data from these child tables and present in them xmlformat.
    A part of the query used is as below
    SELECT npn.ID npn_id,
    CONVERT(xmlelement("uwvb:NaturalPerson",
              XMLForest(LPAD(npn.nat_nummer,9,0) AS "uwvb:NatNr"),
              (XMLForest(LPAD(per.a_nummer, 10, 0) AS "uwvb:ANr"
              (SELECT XMLFOREST
                        (code_status AS "uwvb:ResidenceStatus")
                        FROM ebv_v_nep nep
                        WHERE npn_id = npn.ID
                        AND nep.nem_code = 'VBT'
                        AND nep.transactid =
                        (SELECT MAX (nep_i.transactid)
                             FROM ebv_v_nep nep_i
                             WHERE nep.npn_id = nep_i.npn_id
                             AND nep_i.nem_code = 'VBT'))
              entityelement),'WE8MSWIN1252', 'UTF8')
    FROM ebv_v_npn npn, ebv_v_per per
    WHERE npn.ID = per.npn_id
    As seen in the above query, views have been defined for all the tables. For e.g. the view ebv_v_npn is based on NATURAL_PERSON, ebv_v_per is based on PERSONAL_DETAILS, ebv_v_nep is based on RESIDENCE STATUS. All these views are independent of each other and do not contain common tables in their definition.
    The views can be replaced by the base tables as I don't see any advantage of using the views. I would like to know whether replacing the views with the base tables would also help to improve the performance.

    Replacing the views with the base tables might help: Oracle is not always able to merge views, so certain access paths may not be available when working with views compared to accessing the base tables directly.
    You can see this in the execution plan if there are separate lines called "VIEW". In this case a view wasn't merged.
    The particular query that you've posted joins two views in the main query and (potentially) executes a scalar subquery that contains another correlated subquery for each row of the result set. "Potentially" due to the cunning "Filter optimization" feature of the Oracle runtime engine that basically attempts to cache the results of scalar subqueries to minimize the number of executions.
    If the statement doesn't perform as expected you need to find out which of the two operations is the main contributor to the statement's runtime.
    You can use DBMS_XPLAN.DISPLAY to find out what the FILTER operation you mentioned is actually performing (check the "Predicate Information" section below the plan output), and you can use SQL tracing to find out which row source generates how many rows. The following document explains how to enable SQL tracing and run the "tkprof" utility on the generated trace file: When your query takes too long ...
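    As a minimal sketch of the DBMS_XPLAN approach (the table and column names here are illustrative placeholders, not taken from your post):

    ```sql
    -- Explain the statement first; the filter predicates then show up
    -- in the "Predicate Information" section of the formatted plan.
    EXPLAIN PLAN FOR
    SELECT e.ename,
           (SELECT d.dname FROM dept d WHERE d.deptno = e.deptno) AS dname
    FROM   emp e;

    -- Format the most recent plan, including the predicate section
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```

    If the statement has already been executed, DBMS_XPLAN.DISPLAY_CURSOR can show the actual plan from the shared pool instead of the estimated one.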
    The correlated subquery of the scalar subquery that is used to determine the maximum "transactid" may be replaced with a version of the statement that uses an analytic function to avoid the second access to the view (note: untested):
    SELECT npn.ID npn_id,
      CONVERT(xmlelement("uwvb:NaturalPerson",
              XMLForest(LPAD(npn.nat_nummer,9,0) AS "uwvb:NatNr"),
              (XMLForest(LPAD(per.a_nummer, 10, 0) AS "uwvb:ANr"
              (SELECT XMLForest(code_status AS "uwvb:ResidenceStatus")
               FROM (SELECT npn_id,
                            code_status,
                            RANK() OVER (PARTITION BY npn_id ORDER BY transactid DESC) AS rnk
                     FROM ebv_v_nep nep
                     WHERE nep.nem_code = 'VBT')
               WHERE rnk = 1
               AND npn_id = npn.ID)
              entityelement),'WE8MSWIN1252', 'UTF8')
    FROM ebv_v_npn npn, ebv_v_per per
    WHERE npn.ID = per.npn_id
    Note that the correlation predicate (npn_id = npn.ID) has to sit outside the inline view, since a correlated column is only visible one subquery level deep.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
    Edited by: Randolf Geist on Nov 10, 2008 9:27 AM
    Added the rewrite suggestion

  • FSG - Need to find beginning balance value from current year

    Hi,
    Could anyone help me, please?
    I designed an FSG report, and the issue is that we need to find the beginning balance from the current year,
    and I do not know which amount type I should use in the column set.
    Is there any way or workaround for this issue?
    Example:
    when I run the report with parameter JUN-12, XXX (beginning balance of the current year) = 5000,
    and when I run the report with parameter SEP-12, XXX (beginning balance of the current year) is still 5000.
    thanks
    Lim Johny

    This is some of the worst SQL I have seen. The data element names change from table to table, and they violate ISO-11179 rules. We seldom use OUTER JOINs in a properly designed schema, and we seldom need to worry about NULLs; we use COALESCE(), not ISNULL(); we use CURRENT_TIMESTAMP, not getdate(); etc.
    Did you know that nesting scalar subqueries will screw any hope of optimization? The changes in the formatting of the program text imply that many different, inexperienced younger programmers, writing in many different non-SQL languages, worked on it. You even posted in colors, like grade school!
    My guess is that everyone wrote a query without any planning, and then they were all thrown together in one pile.
    It seems that this nightmare has three tables and we have no DDL or other specs:
     Stock 
     Stock_Lead_Times 
     Traces
    Want to follow Netiquette and post DDL with some specs? 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
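    As a small illustration of the portable idioms mentioned above (the table and column names are made up, since no DDL was posted):

    ```sql
    -- Standard SQL: COALESCE() instead of the proprietary ISNULL(),
    -- CURRENT_TIMESTAMP instead of getdate().
    SELECT stock_nbr,
           COALESCE(lead_time_days, 0) AS lead_time_days,  -- default NULL to zero
           CURRENT_TIMESTAMP AS queried_at
      FROM Stock_Lead_Times;
    ```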
