HELP! Query Using View to denormalize data

Help!!!
Below are two logical records of data that have been stored so that each column value is represented as a separate row in the database:
RECORD NUMBER, COL POSITION, VALUE
1, 1, John
1, 2, Doe
1, 3, 123 Nowhere Lake
1, 4, Tallahassee
1, 5, FL
2, 1, Mary
2, 2, Jane
2, 3, 21 Shelby Lane
2, 4, Indianapolis
2, 5, IN
I need to write a view to return the data values in this format:
John, Doe, 123 Nowhere Lake, Tallahassee, FL
Mary, Jane, 21 Shelby Lane, Indianapolis, IN
I REALLY need this as soon as possible! Someone PLEASE come to my rescue!!!

Assuming that the other table has one record per record_num in the first table, you could do something like:
SQL> SELECT * FROM t1;
RECORD_NUM DATE_COL
         1 17-MAR-05
         2 16-MAR-05
SQL> SELECT a.record_num, col1, col2, col3, col4, col5, t1.date_col
  2  FROM (SELECT record_num,
  3               MAX(DECODE(col_position, 1, value)) Col1,
  4               MAX(DECODE(col_position, 2, value)) Col2,
  5               MAX(DECODE(col_position, 3, value)) Col3,
  6               MAX(DECODE(col_position, 4, value)) Col4,
  7               MAX(DECODE(col_position, 5, value)) Col5
  8        FROM t
  9        GROUP BY record_num) a, t1
10  WHERE a.record_num = t1.record_num;
RECORD_NUM COL1  COL2  COL3              COL4            COL5  DATE_COL
         1 John  Doe   123 Nowhere Lake  Tallahassee     FL    17-MAR-05
         2 Mary  Jane  21 Shelby Lane    Indianapolis    IN    16-MAR-05
If your second table is structured like the first table, then something more like:
SELECT record_num,
       MAX(DECODE(source,'Tab1',DECODE(col_position, 1, value))) Col1,
       MAX(DECODE(source,'Tab1',DECODE(col_position, 2, value))) Col2,
       MAX(DECODE(source,'Tab1',DECODE(col_position, 3, value))) Col3,
       MAX(DECODE(source,'Tab1',DECODE(col_position, 4, value))) Col4,
       MAX(DECODE(source,'Tab1',DECODE(col_position, 5, value))) Col5,
       MAX(DECODE(source,'Tab2',DECODE(col_position, 1, value))) T2_Col1,
       MAX(DECODE(source,'Tab2',DECODE(col_position, 2, value))) T2_Col2
FROM (SELECT 'Tab1' source, record_num, col_position, value
      FROM t
      UNION ALL
      SELECT 'Tab2' source, record_num, col_position, value
      FROM t1)
GROUP BY record_num
By the way, I can't say I am enamoured of your data model.
John
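On Oracle 11g and later the same cross-tab can be written more compactly with the PIVOT clause. A minimal, untested sketch against the same table t:
SELECT record_num, col1, col2, col3, col4, col5
FROM t
PIVOT (MAX(value)
       FOR col_position IN (1 AS col1, 2 AS col2, 3 AS col3,
                            4 AS col4, 5 AS col5));
-- MAX(value) collapses the single row per (record_num, col_position) pair,
-- exactly as the MAX(DECODE(...)) version above does.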

Similar Messages

  • Need SQL query using View - Please help

    Hi,
    I have similar requirement like below.
I have two tables DEPT and EMP, and some departments may not have employees. I have created the view below, which displays all DEPT records even when a department has no employees.
CREATE OR REPLACE VIEW dept_emp_vw AS
SELECT d.deptno, e.empid, 0 AS selected
FROM dept d, emp e
WHERE d.deptno = e.deptno (+);
    Ex.
DEPTNO  EMPID   SELECTED
10      101     0
10      102     0
20      103     0
30      103     0
40      104     0
50      <null>  0
    Application will pass "empid" to the view (for ex. empid = 103) and I want result like below.
    Ex.
DEPTNO  EMPID   SELECTED
10      101     0
10      102     0
20      103     1
30      103     1
40      104     0
50      <null>  0
    Can you please let me know the query using "dept_emp_vw" view. We have Oracle 11g Release 2.
    Thanks a lot for the help.

Not possible using normal SQL - SQL is not a procedural language and does not support variable declaration and use (e.g. passing empid as a variable and using it both as a predicate and as a condition in the SQL projection).
That said, SQL can be "parameterised". It is an approach that is ugly and contrary to the basic design and use of SQL, but it can support the (very weird) view definition of yours.
    E.g.
    SQL> create or replace procedure SetVariable( name varchar2, value varchar2 ) is
      2  begin
      3          DBMS_SESSION.set_context( 'MyVariables', name, value );
      4  end;
      5  /
    Procedure created.
    SQL>
    SQL>
    SQL> create or replace context MyVariables using SetVariable;
    Context created.
    SQL>
    SQL> create or replace view my_funky_weird_view as
      2  select
      3          e.empno,
      4          e.ename,
      5          e.job,
      6          case e.empno
      7                  when to_number(sys_context( 'MyVariables', 'empid' )) then
      8                          0
      9                  else
    10                          1
    11          end     as "SELECTED"
    12  from       emp e
    13  /
    View created.
    SQL>
    SQL> exec SetVariable( 'empid', 7499 )
    PL/SQL procedure successfully completed.
    SQL>
    SQL> select * from session_context where namespace = 'MYVARIABLES';
    NAMESPACE            ATTRIBUTE            VALUE
    MYVARIABLES          EMPID                7499
    SQL>
    SQL> select * from my_funky_weird_view order by selected;
         EMPNO ENAME      JOB               SELECTED
          7499 ALLEN      SALESMAN                 0
          7521 WARD       SALESMAN                 1
          7566 JONES      MANAGER                  1
          7654 MARTIN     SALESMAN                 1
          7698 BLAKE      MANAGER                  1
          7934 MILLER     CLERK                    1
          7788 SCOTT      ANALYST                  1
          7839 KING       PRESIDENT                1
          7844 TURNER     SALESMAN                 1
          7876 ADAMS      CLERK                    1
          7900 JAMES      CLERK                    1
          7902 FORD       ANALYST                  1
          7369 SMITH      CLERK                    1
          7782 CLARK      MANAGER                  1
    14 rows selected.
SQL>
But I will NOT recommend doing it this way. It is not natural SQL, as PL/SQL is needed to "inject" name-value pairs into the context for the SQL view to use. It is ugly. It is not standard. It cannot scale. It is complex to use. Etc.
Yes, there are instances when this approach is exactly what one needs - for example, when dealing with a trusted context and using its contents to implement a security layer. But in the above case I would rather see the actual business requirement first, as I think you're barking up the wrong tree with the view solution you have imagined.
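For completeness: if the application can supply the empid at query time rather than reading it from a fixed view, a plain bind variable avoids the context machinery entirely. A minimal sketch, reusing the tables from the original post:
SELECT d.deptno,
       e.empid,
       CASE WHEN e.empid = :empid THEN 1 ELSE 0 END AS selected
FROM   dept d, emp e
WHERE  d.deptno = e.deptno (+);
-- :empid is bound by the caller, e.g. 103 flags the rows for departments 20 and 30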

  • Slow query using view on cube

    I have created a cube using Analytic workspace manager (oracle 10G R2) which is to be used (via a view) in OBIEE.
    One of the dimensions (event_dim) is extremely sparse (it has been marked as sparse and is at the appropriate level in the cube's dimensional hierarchy).
In general, when I query the cube (via the view) at high levels of aggregation, the performance is good, but when I query the cube at the lowest level of granularity for the event_dim dimension, the performance is extremely poor (more than a minute to return).
    The problem seems to be that the view is returning data for all possible rows in the cube even if most of the measures are NA (i.e null since there is no data present).
    For example if I run a query against the cube with no filter on the measures I get around 20,000 rows returned - obviously this takes a while. If I then put a 'my_measure > 0' clause on the query I get 2 rows back (which is correct). However this still takes more than a minute to return - I assume that this is because the query is having to process the 20,000 rows to find the two that actually have data.
    Is there any way to control this - I never need to see the NA data so would like to be able to disable this in either the cube or the view - and hence improve performance.
    Note: I cannot use the compression option since I need to be able to override the default aggregation plan for certain dimension/measure combinations and it appears that compression and overriding the plan are incompatible (AWM gives the error "Default Aggregation Plan for Cube is required when creating cube with the Compression option").
    Thanks,
    Chris

I have seen this in some examples/mails. I haven't tried it out myself :)
Try using an OLAP_CONDITION filter with the appropriate entry point option (1) on the OLAP_TABLE based query to restrict the output to values with meas > 0. This condition can be added as part of a specific query or as part of the OLAP_TABLE view definition (in which case it applies to all queries). Hopefully this way there is no need to customize the limitmap variable to suit cube implementation internals like compression, partitioning, presummarization, global composites etc.
NOTE1: The olap_condition entry point 1 pushes the measure-based dimension filter into the cube before results are fetched, which should help speed up retrieval. This works well when the restriction applies across a single dimension - Time or Product alone - in which case one olap_condition is sufficient.
    SELECT ...
    FROM <olap_table_based_view>
    where ...
    and olap_condition(olap_calc, ' limit time KEEP sales_sales > 0', 1)=1
    --and olap_condition(olap_calc, ' limit time KEEP any(sales_sales, product) > 0', 1)=1
NOTE2: Where both time and product (or more dimensions) need to be restricted, two olap_conditions can be used to restrict the data to the set of times and products where some data exists, but you could still end up with a specific row (a cross combination of product and time) holding a zero value. You may want to bolster the pre-fetch filtering done by olap_condition with a regular SQL filter on the external measure column (and sales_view_col > 0), which is applied to the results after they are fetched from the cube.
    E.g:
    SELECT ...
    FROM <olap_table_based_view>
    where ...
    and olap_condition(olap_calc, ' limit product KEEP any(sales_sales, time) > 0', 1)=1
    and olap_condition(olap_calc, ' limit time KEEP any(sales_sales, product) > 0', 1)=1
    and sales_view_col >0
    HTH
    Shankar

  • Query using views

Since the query is too big, I have removed it from the post.
I would like to know whether using views in SQL queries degrades their performance.
When views are used in SQL queries, the operation 'FILTER' is displayed in the explain plan, although the cost doesn't seem to be high. If the views can be replaced by the base tables, is it better to do so?
    Edited by: user642116 on Nov 8, 2008 11:13 PM

    user642116 wrote:
    I have a main table called NATURAL_PERSON. There are several child tables based on this table, for e.g. PERSONAL_DETAILS, NATIONALITY_PERSON, CIVIL_STATUS etc. All these child tables have a foreign key NPN_ID which is joined with the ID of NATURAL_PERSON.
I need to obtain data from these child tables and present them in XML format.
    A part of the query used is as below
    SELECT npn.ID npn_id,
    CONVERT(xmlelement("uwvb:NaturalPerson",
              XMLForest(LPAD(npn.nat_nummer,9,0) AS "uwvb:NatNr"),
              (XMLForest(LPAD(per.a_nummer, 10, 0) AS "uwvb:ANr"
              (SELECT XMLFOREST
                        (code_status AS "uwvb:ResidenceStatus")
                        FROM ebv_v_nep nep
                        WHERE npn_id = npn.ID
                        AND nep.nem_code = 'VBT'
                        AND nep.transactid =
                        (SELECT MAX (nep_i.transactid)
                             FROM ebv_v_nep nep_i
                             WHERE nep.npn_id = nep_i.npn_id
                             AND nep_i.nem_code = 'VBT'))
              entityelement),'WE8MSWIN1252', 'UTF8')
    FROM ebv_v_npn npn, ebv_v_per per
    WHERE npn.ID = per.npn_id
    As seen in the above query, views have been defined for all the tables. For e.g. the view ebv_v_npn is based on NATURAL_PERSON, ebv_v_per is based on PERSONAL_DETAILS, ebv_v_nep is based on RESIDENCE STATUS. All these views are independent of each other and do not contain common tables in their definition.
The views can be replaced by the base tables, as I don't see any advantage in using the views. I would like to know whether replacing the views with the base tables would also help to improve the performance.
Replacing the views with the base tables might help, since Oracle is not always able to merge views, so certain access paths are sometimes unavailable when working with views compared to accessing the base tables directly.
You can see this in the execution plan: if there is a separate line called "VIEW", the view wasn't merged.
    The particular query that you've posted joins two views in the main query and (potentially) executes a scalar subquery that contains another correlated subquery for each row of the result set. "Potentially" due to the cunning "Filter optimization" feature of the Oracle runtime engine that basically attempts to cache the results of scalar subqueries to minimize the number of executions.
    If the statement doesn't perform as expected you need to find out which of the two operations is the main contributor to the statement's runtime.
You can use DBMS_XPLAN.DISPLAY to find out what the FILTER operation you mentioned is actually performing (check the "Predicate Information" section below the plan output), and you can use SQL tracing to find out which row source generates how many rows. The following document explains how to enable SQL tracing and run the "tkprof" utility on the generated trace file: When your query takes too long ...
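For reference, the usual pattern for getting the plan together with its predicate section is (a minimal sketch):
EXPLAIN PLAN FOR <your query>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);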
The correlated subquery of the scalar subquery that is used to determine the maximum "transactid" may be replaced with a version of the statement that uses an analytic function to avoid the second access to the view (note: untested):
SELECT npn.ID npn_id,
  CONVERT(xmlelement("uwvb:NaturalPerson",
          XMLForest(LPAD(npn.nat_nummer,9,0) AS "uwvb:NatNr"),
          (XMLForest(LPAD(per.a_nummer, 10, 0) AS "uwvb:ANr"
          (SELECT XMLFOREST
                    (code_status AS "uwvb:ResidenceStatus")
           FROM (
             SELECT code_status,
                    RANK() OVER (PARTITION BY npn_id ORDER BY transactid DESC) AS rnk
             FROM ebv_v_nep nep
             WHERE nep.npn_id = npn.ID
             AND nep.nem_code = 'VBT'
           )
           WHERE rnk = 1)
          entityelement),'WE8MSWIN1252', 'UTF8')
FROM ebv_v_npn npn, ebv_v_per per
WHERE npn.ID = per.npn_id
Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
    Edited by: Randolf Geist on Nov 10, 2008 9:27 AM
    Added the rewrite suggestion

NEED HELP IN USING ALL_TAB_COLUMNS FOR RETRIEVING DATA???

A table, say T1, contains columns Emp_id and Code, and there are several codes like C1, C2, C3. Another table, say T2, contains columns Emp_id, C1, C2, C3. That is, the values of the Code column in T1 are now columns of T2, and the amount for each code in T1 equals the corresponding column value in T2.
Now I want to retrieve data from T2 like:
C1 200
C2 300
C3 140
I cannot retrieve data like this using all_tab_columns. I can only get the column_name, not its value.
PLEASE HELP ME...
    Edited by: user630863 on Apr 8, 2009 11:37 AM

Table T1:
emp_id | code
001    | C1
001    | C2
005    | C3
005    | C1
002    | C2
002    | C3
Table T2:
emp_id | C1 | C2 | C3
001    | 10 | 15 |
002    |    | 7  | 12
005    | 45 |    | 94
    I have written a query
    select column_name from all_tab_columns a,T1 b
    where a.column_name=b.code
    and table_name='T2'
    OUTPUT:
    C1
    C2
    C3
    But I Need data for each employee like
    001 C1 10
    001 C2 15
    002 C2 7
    002 C3 12
    005 C1 45
    005 C3 94
    Edited by: user630863 on Apr 8, 2009 1:28 PM
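One way to get that output is a DECODE-based unpivot: cross-join T2 with the list of codes and pick the matching column for each code. A minimal sketch, assuming the fixed column list C1..C3 (a fully dynamic version would need PL/SQL driven by all_tab_columns):
SELECT t2.emp_id,
       c.code,
       DECODE(c.code, 'C1', t2.c1, 'C2', t2.c2, 'C3', t2.c3) AS value
FROM   t2,
       (SELECT DISTINCT code FROM t1) c
-- keep only the combinations that actually hold a value
WHERE  DECODE(c.code, 'C1', t2.c1, 'C2', t2.c2, 'C3', t2.c3) IS NOT NULL
ORDER  BY t2.emp_id, c.code;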

  • Using views to show data in discoverer desktop

    Dear ALL,
    I am facing an issue in development of Relational reports in Oracle Discoverer Desktop.
    Following are the details of my warehouse (Data Mart) database:
There are 2 dimensions and 1 cube in the Data Mart. The two dimensions and the cube are based on the tables Currency, Accounts and Fact_Data respectively.
The fact table contains 2 facts (CLOSING_BALANCE, MAX_BALANCE), whose values are stored in USD.
The fact table contains 100,000 rows, all values/facts in USD currency.
    Details of EUL(End User Layer):
I have derived both dimensions and the cube from OWB, which generates all the "Business Definitions" and "Business Areas".
    Issue:
    I need to show the end user a report (from discoverer) which shows all the values/facts in PKR currency.
    What is the best way to get this report?
I have tried many methods, including creating a materialized view (MV) on the fact table containing the values converted to PKR, and then a view (the union of the fact table and the MV) which gives me 200,000 rows. But with this solution I cannot join the view to the other dimensions in Discoverer, because the view contains dimension keys (the same as the fact table) while the dimension tables contain surrogate IDs.
    Thanks in advance for your help and suggestions.
    Best Regards

    Hi,
    Then you should be able to import your exchange rate table as a new folder in the EUL, join this folder to your fact table using date and create a calculation for the converted currency.
    Rod West
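At the SQL level, Rod's suggestion amounts to something like the following sketch (the exchange rate table and all column names here are purely illustrative, not from the original post):
SELECT f.account_key,
       f.closing_balance              AS closing_balance_usd,
       f.closing_balance * x.pkr_rate AS closing_balance_pkr
FROM   fact_data f,
       exchange_rate x
WHERE  f.date_key = x.date_key;  -- join the fact to the rate valid on that date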

Help! using FM for formatting date with to_char()

    Hi,
Basically my requirement is to format the date in a specific format (i.e. DD MM YYYY HH24:MI) while avoiding the padding zeros.
Ex: 03-Jun-2009 10:01 should be displayed as 3 Jun 2009 10:01.
    I was using the below query
    to_char(sr.recievedat, 'FMDD Mon yyyy HH24:MI')...
But it removes the padding zeros from the minutes as well, which I don't want.
    For example,
Ex1: 03-Dec-2009 18:10 gives 3 Dec 2009 18:10
Ex2: 04-Dec-2009 19:01 gives 4 Dec 2009 19:1 - it should be 4 Dec 2009 19:01
Can somebody tell me how to control FM so that it suppresses the padding only in some places?
    Thanks
    Phani

Hi, try this:
to_char(sr.recievedat, 'FMDD Mon yyyy fmHH24:MI')
FM is a toggle, so the second fm switches fill mode back on and HH24:MI keeps its leading zeros. Example:
    SQL> select to_char(to_date('01-jan-2009 7:01:06','dd-mon-rrrr HH:MI:SS'), 'FMDD Mon yyyy fmHH24:MI'
    ) from dual;
    TO_CHAR(TO_DATE('01-JAN
    1 Jan 2009 07:01

  • Writing a query using a multisource-enabled data foundation

I know there is an easy way to do this but I'm suffering from a mind block late on a Friday afternoon.
What I have is a multisource-enabled data foundation that reads data from a connection to multi-cube MC07 in our DEV system and to multi-cube MC07 in our QAS system. The two multi-cubes are joined in the data foundation on create date and contract number.
    Here is what I have done so far:
-     Created two relational multisource-enabled connections: one to multi-cube MC07 in the DEV system and one to multi-cube MC07 in the QA system.
-     Created a multi-source data foundation on the connections with joins on create date and contract number.
-     Created a relational data source business layer using my data foundation.
-     In the business layer I click on the Queries tab and then the "+" to bring up the query panel.
    I want to write a query that combines the data from DEV and QA and list all the contract numbers, their create date and a gross quantity field from both systems
    How do I write the query?
    Appreciate any help
    Mike...

Whenever we create a data foundation with multi-source enabled, the data sources are not enabled; for a single source it works fine. We are doing this with SAP BO 4.0 SP4 client tools. How do we resolve this issue?

HR Query using two different master data attribute values

I have a requirement to calculate headcount based on a certain condition.
Example: The condition is as follows:
For each "Performance Rating" (ROW)
Count headcount for "Below Min" (COLUMN)
- Below Min = "Annual Salary" less than "Pay Grade Level Min"
- "Annual Salary" and "Pay Grade Level" are attributes of the "Employee" master data and are of type CURR
- "Pay Grade Level Min" is in turn an attribute of the "Pay Grade Level" master data.
I am trying to build this query on the standard InfoSet 0ECM_IS02, which contains an ODS (Compensation Process), Employee and Person.
Any help on the required approach is appreciated, e.g. creating a restricted KF using ROWCOUNT (Number of Records) and implementing the logic for "Below Min".
    Thanks
    Raj

Hi
Have you tried creating a variable with an exit for this KF? I think it is possible here.
Assign points if useful
Regards
N Ganesh

  • SQL Query using a Variable in Data Flow Task

I have a Data Flow Task that I created. The source query is in the file "LPSreason.sql", which is stored on a shared drive such as
\\servername\scripts\LPSreason.sql
How can I use this .sql file as a SOURCE in my Data Flow Task? I guess I can use SQL Command as the Access Mode, but I'm not sure how to do that.

    Hi Desigal59,
You can use a Flat File Source adapter to get the query statement from the .sql file. When creating the Flat File Connection Manager, set the row delimiter to a character that won't appear in the SQL statement, such as the vertical bar {|}. In this way, the Flat File Source outputs only one row with one column. If necessary, you can set the data type of the column from DT_STR to DT_TEXT so that the Flat File Source can handle SQL statements of more than 8000 characters.
    After that, connect the Flat File Source to a Recordset Destination, so that we store the column to a SSIS object variable (supposing the variable name is varQuery).
    In the Control Flow, we can use one of the following two methods to pass the value of the Object type variable varQuery to a String type variable QueryStr which can be used in an OLE DB Source directly.
Method 1: via Script Task
1. Add a Script Task under the Data Flow Task and connect them.
2. Add User::varQuery as ReadOnlyVariables and User::QueryStr as ReadWriteVariables.
3. Edit the script as follows:
public void Main()
{
    // Load the ADO recordset held in the Object variable into a DataTable
    System.Data.OleDb.OleDbDataAdapter da = new System.Data.OleDb.OleDbDataAdapter();
    DataTable dt = new DataTable();
    da.Fill(dt, Dts.Variables["User::varQuery"].Value);
    // The single row/column holds the query text; copy it to the String variable
    Dts.Variables["User::QueryStr"].Value = dt.Rows[0].ItemArray[0].ToString();
    Dts.TaskResult = (int)ScriptResults.Success;
}
4. Add another Data Flow Task under the Script Task, and join them. In the Data Flow Task, add an OLE DB Source, set its Data access mode to "SQL command from variable", and select the variable User::QueryStr.
Method 2: via Foreach Loop Container
1. Add a Foreach Loop Container under the Data Flow Task, and join them.
2. Set the enumerator of the Foreach Loop Container to Foreach ADO Enumerator, and select User::varQuery as the ADO object source variable.
3. In the Variable Mappings tab, map User::QueryStr to the collection value with Index 0.
4. Inside the Foreach Loop Container, add a Data Flow Task like step 4 in Method 1.
    Regards,
    Mike Yin
    TechNet Community Support

  • UPDATE SQL query using WHERE and a date/time data type... Multiple changes...

I'm using the LabVIEW Database Connectivity Toolset and am using the following query...
UPDATE IndexStation
SET Signal_Size=200
WHERE 'StartTime=12:05:23'
Now the problem is that this command seems to update all rows in the table IndexStation, not just the row where StartTime=12:05:23.
I have tried all sorts of {} [] / ' " around certain characters and column names, but it always updates all rows...
I've begun to use the SQL query tab in Access to try to narrow down why this happens, but no luck!
    Any ideas!?
    Thanks,
    Chris.

    Chris Walter wrote:
    I completely agree about the Microsoft issue.
    But it seems no SQL based manual states that { } will provide a Date/Time constant.
    Is this an NI only implementation? Because I can't seem to get it to function correctly within LabView or in any SQL query.
    Chris.
There is nothing about the Database Toolkit in terms of SQL syntax that would be NI-specific. The toolkit simply interfaces to MS ADO/DAO, and the actual SQL syntax is usually implemented in the database driver or the database itself, although I wouldn't be surprised if ADO/DAO at times munges that a bit too.
The Database Toolkit definitely does not, so this might indeed be a documentation error. My understanding of SQL syntax is in fact rather limited, so I'm not sure which databases use which delimiters for date/time values. I know that SQL Server is rather tricky, thanks to MS catering for the local date/time format in all their tools, and the so-called universal date/time format has borked on me on several occasions.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
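For what it's worth, quoting the whole condition ('StartTime=12:05:23') makes the WHERE clause a single string literal rather than a comparison, which is why every row gets updated. Against an Access/Jet backend the usual form is to unquote the condition and delimit the date/time literal with # characters; a sketch, assuming StartTime is a Date/Time column:
UPDATE IndexStation
SET Signal_Size = 200
WHERE StartTime = #12:05:23#
Other databases generally expect a quoted string or an explicit conversion instead, e.g. WHERE StartTime = '12:05:23'.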

  • Performance of query using view - what is happening here

    Hi,
I can't explain the difference in performance between two queries.
For a data warehouse I have 3 tables from 3 different sources, named source1, source2 and source3. They all have identical columns:
client_key
,client_revenue
source1 has 90,000,000 rows, source2 1,000,000 rows and source3 50,000 rows.
I also made a view, say all_clients, which is the union of the 3 tables plus a constant column called 'source' which corresponds to the table name.
If I run a query which shows the number of records, it takes 15-20 minutes:
select source, count(*)
from all_clients
group by source;
If I run the following query, it takes about 5 minutes!
select 'source1', count(*)
from source1
union
select 'source2', count(*)
from source2
union
select 'source3', count(*)
from source3;
    What makes the difference?

    Hmmm... Interesting. In my small example things seem pretty similar. Have you done the explain plans?
An observation is that you are using a UNION rather than a UNION ALL, which would be better as you may be incurring an unnecessary SORT UNIQUE.
    create table tab1 as(select object_id, object_type from all_objects);
    create table tab2 as(select object_id, object_type from all_objects);
    create table tab3 as(select object_id, object_type from all_objects);
    analyze table tab1 estimate statistics;
    analyze table tab2 estimate statistics;
    analyze table tab3 estimate statistics;
    create view v_tab123 as(select 'source1' source,count(*) cnt
    from tab1
    union
    select 'source2',count(*)
    from tab2
    union
    select 'source3',count(*)
    from tab3);
    select 'source1' source,count(*) cnt
    from tab1
    union
    select 'source2',count(*)
    from tab2
    union
    select 'source3',count(*)
    from tab3;
Operation                    Object Name   Rows   Bytes   Cost
SELECT STATEMENT Hint=CHOOSE                  3            180
  SORT UNIQUE                                 3            180
    UNION-ALL
      SORT AGGREGATE                          1     60
        TABLE ACCESS FULL    TAB1          38 K             10
      SORT AGGREGATE                          1     60
        TABLE ACCESS FULL    TAB2          38 K             10
      SORT AGGREGATE                          1     60
        TABLE ACCESS FULL    TAB3          38 K             10
    -- Union
    select source, cnt from(
    select 'source1' source,count(*) cnt
    from tab1
    union
    select 'source2',count(*)
    from tab2
    union
    select 'source3',count(*)
    from tab3)
Operation                    Object Name   Rows   Bytes   Cost
SELECT STATEMENT Hint=CHOOSE                  3            180
  VIEW                                        3     54     180
    SORT UNIQUE                               3            180
      UNION-ALL
        SORT AGGREGATE                        1     60
          TABLE ACCESS FULL  TAB1          38 K             10
        SORT AGGREGATE                        1     60
          TABLE ACCESS FULL  TAB2          38 K             10
        SORT AGGREGATE                        1     60
          TABLE ACCESS FULL  TAB3          38 K             10
    -- Union ALL
    select source, cnt from(
    select 'source1' source,count(*) cnt
    from tab1
    union ALL
    select 'source2',count(*)
    from tab2
    union ALL
    select 'source3',count(*)
    from tab3)
Operation                    Object Name   Rows   Bytes   Cost
SELECT STATEMENT Hint=CHOOSE                  3            180
  VIEW                                        3     54     180
    SORT UNIQUE                               3            180   <<<<============== Unnecessary
      UNION-ALL
        SORT AGGREGATE                        1     60
          TABLE ACCESS FULL  TAB1          38 K             10
        SORT AGGREGATE                        1     60
          TABLE ACCESS FULL  TAB2          38 K             10
        SORT AGGREGATE                        1     60
          TABLE ACCESS FULL  TAB3          38 K             10
    analyze table tab1 delete statistics;
    analyze table tab2 delete statistics;
    analyze table tab3 delete statistics;
    Now with RBO - the SORT UNIQUE goes away for the above query.
Operation                    Object Name
SELECT STATEMENT Hint=CHOOSE
  VIEW
    UNION-ALL
      SORT AGGREGATE
        TABLE ACCESS FULL    TAB1
      SORT AGGREGATE
        TABLE ACCESS FULL    TAB2
      SORT AGGREGATE
        TABLE ACCESS FULL    TAB3

  • Bug in 10.2.0.5 for query using index on trunc(date)

    Hi,
    We recently upgraded from 10.2.0.3 to 10.2.0.5 (enterprise edition, 64bit linux). This resulted in some strange behaviour, which I think could be the result of a bug.
    In 10.2.0.5, after running the script below the final select statement will give different results for TRUNC(b) and TRUNC (b + 0). Running the same script in 10.2.0.3 the select statement returns correct results.
    BTW: it is related to index usage, because skipping the "CREATE INDEX" statement leads to correct results for the select.
    Can somebody please confirm this bug?
    Regards,
    Henk Enting
-- test script:
DROP TABLE test_table;
CREATE TABLE test_table(a INTEGER, b DATE);
CREATE INDEX test_trunc_ind ON test_table(trunc(b));
BEGIN
  FOR i IN 1..100 LOOP
    INSERT INTO test_table(a, b) VALUES (i, sysdate - i);
  END LOOP;
END;
/
SELECT *
FROM (
  SELECT DISTINCT
         trunc(b)
       , trunc(b + 0)
  FROM test_table
);
    Results on 10.2.0.3:
    TRUNC(B) TRUNC(B+0)
    05-08-2010 05-08-2010
    04-08-2010 04-08-2010
    01-08-2010 01-08-2010
    30-07-2010 30-07-2010
    28-07-2010 28-07-2010
    27-07-2010 27-07-2010
    23-07-2010 23-07-2010
    22-07-2010 22-07-2010
    17-07-2010 17-07-2010
    03-07-2010 03-07-2010
    26-06-2010 26-06-2010
    etc.
    Results on 10.2.0.5:
    TRUNC(B) TRUNC(B+0)
    04-05-2010 03-08-2010
    04-05-2010 31-07-2010
    04-05-2010 24-07-2010
    04-05-2010 06-07-2010
    04-05-2010 05-07-2010
    04-05-2010 01-07-2010
    04-05-2010 16-06-2010
    04-05-2010 14-06-2010
    04-05-2010 08-06-2010
    04-05-2010 07-06-2010
    04-05-2010 30-05-2010
    etc.

Thanks for your reply.
I already looked at the Metalink doc. It lists 4 bugs introduced in 10.2.0.5, but none of them seems related to my problem. Did I overlook something?
    Regards,
    Henk

In DBI, how to find out the Source Query used for the Report

    Hi All,
How do I find out the source query used to display the data in DBI reports or dashboards? In the Apps front end we can go to Help > Record History, but DBI runs in Internet Explorer, so I don't know how to get the source (SELECT) query used.
In IE we have View --> Source, but that does not help, since it gives the HTML code and not the SELECT query used.
If anyone has ever worked on this, please help me find it.
    Thanks,
    Neeraj Shrivastava

Hi Neeraj,
You can see the query used to display the reports. Follow these steps to get the query:
1) Log in to Oracle Apps.
2) Select the "Daily Business Intelligence Administrator" responsibility.
3) Click on "Enable/Disable Debugging" (debugging is now enabled).
4) Open the report whose query you want to see.
5) In View Source the query is displayed along with the bind variables.
Feel free to ping me if you have any doubts
    thanks
    kittu

  • Using Views in Real time mappings?

    Hello fellow OW Builders!
I have been investigating OWB 11gR2 as a solution for data warehousing. The requirement I have is to produce a real-time system that propagates any changes in the source data to the targets as quickly as possible. The problem is that I only have access to the source data via views on the database. I can create 'regular' mappings using views as a data source as part of a batch load process; however, I cannot set up Queue Operators (required for real-time mappings) using views as the data source.
I am hoping there are some OWB gurus out there who might be able to suggest a suitable approach to real-time data warehousing using views as a data source. If it helps with your creativity, it doesn't have to be an OWB solution (although I think that would be preferable). The database version for both source and targets is 10gR2, should that be of interest.
    Thanks in advance for your time.
    Edited by: user13130528 on 17-Aug-2010 06:49

    Has this completely foxed everyone?! Even if the answer is 'no way, Jose', it would be good to know, so I can ditch OWB and concentrate on finding another solution.
    Thanks!
