SQL challenge

Alright, I hate doing this, but I need some help...
Given the following data in table 1:
ID
1
2
3
4
Now, ID 1 is connected with ID 2, and ID 3 is connected with ID 4. This result comes from the following table, which shows the relationship cartesian-style:
ID1 ID2 Connected
1 2 Y
1 3 N
1 4 N
2 3 N
2 4 N
3 4 Y
The challenge: I need a query to show the resulting groups. In this case: 2 groups: 1 and 2 vs 3 and 4. The query needs to be built so that if we had an ID 5 and 6 that don't have any connection, they would be taken into two new separate groups...
I don't think the "CONNECT BY" statement would do the trick here, since all numbers are connected, not just the linked ones. It's just that the "Y" or "N" indicates their relationship...
Hope you guys can push me in the right direction... I've tried a lot :)
Oh, by the way: it would be nice to solve this with SQL and not PL/SQL because of performance issues.

What are "normalized cartesian relations"?
Laurent's solution may be correct if the OP's data are transitively closed. If they are not, then you need to build the transitively closed relation first, and the only way to accomplish that is by leveraging a "connect by" query.
For the mathematically inclined, here is the essence of the problem: given a binary relation R(x,y), first build an equivalence relation out of it, then identify each equivalence class with a distinct number. This subject is covered in chapter 6 of my book (http://www.bookpool.com/sm/0977671542). Here is an extract:
======================================================
With proper graph terminology the question can be formulated in just one line:
Find the number of connected components in a graph.
(The problem in the book counts the connected components, rather than identifies them).
A connected component of a graph is a set of nodes reachable from each other. A node is reachable from another node if there is an undirected path between them.
Figure 6.4: A graph with two connected components.
Reachability is an equivalence relation: it is reflexive, symmetric, and transitive. Given a graph, we formally obtain the reachability relation by closing the Edges relation to become reflexive, symmetric, and transitive (fig. 6.5).
Figure 6.5: Reachability as an equivalence relation: graph from fig. 6.4 symmetrically and transitively closed.
Returning to the problem of finding the number of connected components, let's assume that we have already calculated the reachability relation EquivalentNodes somehow. Then we just select the smallest node from each component. Informally:
Select node(s) such that there is no node with a smaller label reachable from it. Count them.
Formally:
select count(distinct tail) from EquivalentNodes e
where not exists (
  select * from EquivalentNodes ee
  where ee.head < e.tail and e.tail = ee.tail
);
--------------------Soapbox----------------------
Equivalence Relation and Group By (cont)
In one of the chapter 1 sidebars we have attributed the incredible efficiency of the group by operator to its proximity to one of the most fundamental mathematical constructions – the equivalence relation. There are two ways to define an equivalence relation. The first one is leveraging the existing equality operator on a domain of values. The second way is defining an equivalence relation explicitly, as a set of pairs. The standard group by operator is not able to understand an equivalence relation defined explicitly – this is the essence of the problem, which we just solved.
Being able to query the number of connected components earned us an unexpected bonus: we can redefine a connected graph as a graph that has a single connected component. Next, a connected graph with N nodes and N-1 edges must be a tree. Thus, counting nodes and edges together with transitive closure is another opportunity to enforce the tree constraint.
Now that we established some important graph closure properties, we can move on to transitive closure implementations. Unfortunately, our story has to branch here, since database vendors approached hierarchical query differently.
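For concreteness, here is a minimal sketch of the "smallest reachable label" idea in Oracle SQL. It is not from the book extract above; the table names t1(id) for the node list and links(id1, id2, connected) for the pairwise table are assumptions, and CONNECT_BY_ROOT / NOCYCLE require Oracle 10g or later.
-- Symmetrize the 'Y' edges, walk them with CONNECT BY to get reachability,
-- then label every node's group with the smallest node it can reach.
with edges as (
  select id1, id2 from links where connected = 'Y'
  union all
  select id2, id1 from links where connected = 'Y'
),
reach as (
  select distinct connect_by_root id1 as node, id2 as reachable
  from edges
  connect by nocycle prior id2 = id1
)
select t1.id,
       least(t1.id, nvl(min(r.reachable), t1.id)) as group_id
from t1 left join reach r on r.node = t1.id
group by t1.id
order by group_id, id;
-- For the sample data this labels 1 and 2 with group 1 and 3 and 4 with group 3;
-- an unconnected ID 5 would get its own group 5.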

Similar Messages

  • SQL challenge: avoid this self-join!!!

    Here's something of a challenging SQL problem. I'm trying to persist an arbitrary number of attributes for an object. I am trying to do this in a regular relational table both for performance and to make future upgrades easier.
    The problem is that I don't know what SQL cleverness I can use to only scan the ATTR table once.
    Does Oracle (or for that matter the SQL standard) have some way to help me? Here's a simplified example:
    Consider a table ATTR with columns OID, ATTR_ID, ATTR_VAL. Unique key is OID, ATTR_ID. Assume any other indexes that you want, but be aware that ATTR_VAL is modestly dynamic.
    I can easily look for a OID for any one ATTR_ID, ATTR_VAL pair:
    SELECT oid FROM attr
    WHERE attr_id = 1 AND attr_val = :b1
    I can also easily do this looking at multiple attributes when I only need one condition to be met with an OR, as:
    SELECT DISTINCT oid FROM attr
    WHERE (attr_id = 1 AND attr_val = :b1)
    OR (attr_id = 31 AND attr_val = :b2)
    But how to handle the condition where I want to have the two ATTR_ID, ATTR_VAL pairs "and-ed" together? I know that I can do this:
    SELECT oid FROM
    (SELECT oid FROM attr WHERE attr_id = 1 AND attr_val = :b1)
    UNION
    (SELECT oid FROM attr WHERE attr_id = 31 AND attr_val = :b2)
    But this will necessitate looking at ATTR twice. This is maybe okay if there are only two conditions, but what about when there might be 10 or even 50? At some point this technique becomes unacceptable.
    Clearly:
    SELECT DISTINCT oid FROM attr
    WHERE (attr_id = 1 AND attr_val = :b1)
    AND (attr_id = 31 AND attr_val = :b2)
    won't work (each row has but one ATTR_ID).
    The following will end up doing the same basic thing as the UNION (it avoids a sort so is preferable):
    SELECT oid FROM attr a1, attr a2
    WHERE a1.oid = a2.oid
    AND (a1.attr_id = 1 AND a1.attr_val = :b1)
    AND (a2.attr_id = 31 AND a2.attr_val = :b2)
    but the fundamental problem of scanning ATTR twice remains.
    What cleverness can I apply here to only scan ATTR once?
    Thanks,
    :-Phil
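    For the record, one common way to keep this to a single scan of ATTR (not the approach in the reply below) is conditional aggregation with a HAVING count; a minimal sketch against the ATTR table described above:
    -- Each OR branch can match at most one row per OID (OID, ATTR_ID is unique),
    -- so an OID satisfying both conditions contributes exactly two rows.
    SELECT oid
    FROM   attr
    WHERE  (attr_id = 1  AND attr_val = :b1)
       OR  (attr_id = 31 AND attr_val = :b2)
    GROUP  BY oid
    HAVING COUNT(DISTINCT attr_id) = 2;
    This generalizes to N pairs by adding OR branches and comparing the count to N.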

    Another way of building a dynamic in-list from a single string is shown by AskTom at this link http://asktom.oracle.com/pls/ask/f?p=4950:8:2019864::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:210612357425,%7Bvarying%7D%20and%20%7Belements%7D%20and%20%7Bin%7D%20and%20%7Bin%7D%20and%20%7Blist%7D
    A modified version for two columns:
    Create or replace type in_list as object (col1 varchar2(20), col2 varchar2(30));
    Create or replace type in_list_tab as table of in_list;
    Create or replace function fn_in_list( p_string in varchar2 ) return in_list_tab
    as
      l_string long default p_string || ',';
      l_data   in_list_tab := in_list_tab();
      pos      number;
    begin
      loop
        exit when l_string is null;
        -- first value of the pair goes into col1
        pos := instr( l_string, ',' );
        l_data.extend;
        l_data(l_data.count) := in_list('','');
        l_data(l_data.count).col1 := ltrim(rtrim(substr(l_string, 1, pos - 1)));
        l_string := substr( l_string, pos + 1 );
        if l_string is null
        then
          -- odd number of elements: drop the incomplete pair
          l_data.trim;
          exit;
        end if;
        -- second value of the pair goes into col2
        pos := instr( l_string, ',' );
        l_data(l_data.count).col2 := ltrim(rtrim(substr(l_string, 1, pos - 1)));
        l_string := substr( l_string, pos + 1 );
      end loop;
      return l_data;
    end;
    /
    create table testII (cola varchar2(10), colb varchar2(30));
    insert into testII values ('abc',1);
    insert into testII values ('abc',2);
    insert into testII values ('def',1);
    insert into testII values ('def',2);
    commit;
    var b1 varchar2(200);
    exec :b1:='abc,1,def,2';
    select * from testII where (cola,colb) in
    (select col1, col2 from THE ( select cast(fn_in_list(:b1) as in_list_tab) from dual));
    To handle cases like
    attr_id = 41 and attr_val > :b3, I would say dynamic SQL.

  • SQL Challenge - Returning count=0 for non-existing values

    Hello there,
    I have a question about our requirement and an SQL query. I have posted this to some email groups but got no answer yet.
    Here is the test case:
    SQL> conn ...
    Connected.
    -- create the pattern table and populate
    SQL> create table pattern(id number, keydescription varchar2(50));
    Table created.
    SQL> insert into pattern values(1,'hata1');
    1 row created.
    SQL> insert into pattern values(2,'hata2');
    1 row created.
    SQL> insert into pattern values(3,'hata3');
    1 row created.
    SQL> insert into pattern values(4,'hata4');
    1 row created.
    SQL> insert into pattern values(5,'hata5');
    1 row created.
    SQL> select * from pattern;
    ID KEYDESCRIPTION
    1 hata1
    2 hata2
    3 hata3
    4 hata4
    5 hata5
    SQL> commit;
    Commit complete.
    -- create the messagetrack and populate
    SQL> create table messagetrack(pattern_id number, realdate date);
    Table created.
    SQL> insert into messagetrack values(1,to_date('26/08/2007 13:00:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(1,to_date('26/08/2007 13:05:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(2,to_date('26/08/2007 13:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(3,to_date('26/08/2007 14:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(4,to_date('26/08/2007 15:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(1,to_date('26/08/2007 15:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from messagetrack;
    PATTERN_ID REALDATE
    1 26-AUG-07
    1 26-AUG-07
    2 26-AUG-07
    3 26-AUG-07
    4 26-AUG-07
    1 26-AUG-07
    6 rows selected.
    Now, we have this simple query:
    SQL> select p.KeyDescription as rptBase , to_char( mt.realdate,'dd') as P1 , to_char(mt.realdate,'HH24') as P2, count(*) as countX
    2 from messageTrack mt, Pattern p
    3 Where mt.realDate >= to_date('26/08/2007 13:00:00','dd/MM/yyyy hh24:MI:ss')
    4 and mt.realDate <= to_date('27/08/2007 20:00:00','dd/MM/yyyy hh24:MI:ss')
    5 and mt.pattern_id=p.id
    6 group by p.KeyDescription, to_char(mt.realdate,'dd'), to_char( mt.realdate,'HH24')
    7 order by p.KeyDescription, to_char(mt.realdate,'dd'), to_char(mt.realdate,'HH24');
    RPTBASE P1 P2 COUNTX
    hata1 26 13 2
    hata1 26 15 1
    hata2 26 13 1
    hata3 26 14 1
    hata4 26 15 1
    But the result we need should contain the pattern values (hata1, hata2, hata3 and hata4) for each time interval (hour), although there might be no records for some patterns in some hours.
    The result for our test case should look like this:
    RPTBASE P1 P2 COUNTX
    hata1 26 13 2
    hata1 26 14 0
    hata1 26 15 0
    hata2 26 13 1
    hata2 26 14 0
    hata2 26 15 0
    hata3 26 13 0
    hata3 26 14 1
    hata3 26 15 0
    hata4 26 13 0
    hata4 26 14 0
    hata4 26 15 1
    Our version is 10.2.0.2
    In my discussions some said the MODEL clause may be used, but I don't know the MODEL clause well and can't imagine how to use it.
    You can download the test case code above to reproduce from:
    http://www.bhatipoglu.com/files/query1.txt
    You can see the output above more clearly(monospace font) on:
    http://www.bhatipoglu.com/files/query1_output.txt
    Additionally, I want to state that, in the resulting table, we don't want all the patterns (hata1, hata2, hata3, hata4 and hata5). We just want the ones that exist in the messageTrack table (hata1, hata2, hata3 and hata4), as you see in the result.
    Thanks in advance.
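    (Before the replies: for comparison, the dense result can also be produced without MODEL by cross-joining the patterns that actually appear in messagetrack with the distinct hours, then outer-joining the hourly counts. A minimal sketch against the tables above; the date-range filter is left out for brevity.)
    SELECT p.keydescription AS rptbase,
           h.p1, h.p2,
           NVL(c.countx, 0) AS countx
    FROM  (SELECT DISTINCT pattern_id FROM messagetrack) mp
          JOIN pattern p ON p.id = mp.pattern_id
          CROSS JOIN (SELECT DISTINCT to_char(realdate,'dd') p1,
                                      to_char(realdate,'HH24') p2
                      FROM messagetrack) h
          LEFT JOIN (SELECT pattern_id,
                            to_char(realdate,'dd') p1,
                            to_char(realdate,'HH24') p2,
                            COUNT(*) countx
                     FROM messagetrack
                     GROUP BY pattern_id, to_char(realdate,'dd'),
                              to_char(realdate,'HH24')) c
            ON c.pattern_id = p.id AND c.p1 = h.p1 AND c.p2 = h.p2
    ORDER BY rptbase, h.p1, h.p2;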

    Here is an attempt with the Model Clause:
    Edit: I should mention that I created a view out of your original query.
    SELECT rptbase
          ,day
          ,hour
          ,countx
    FROM demoV
      MODEL
        DIMENSION BY (rptbase, day, hour)
        MEASURES (countx)
          RULES(countx[
                        FOR rptbase IN (SELECT rptbase
                                        FROM demoV)
                        ,FOR day IN   (SELECT day
                                        FROM demoV)
                        ,FOR hour FROM 13 to 15 INCREMENT 1
                       ] =
                       NVL(countx[CV(rptbase),CV(day),CV(hour)],0)
               )
    order by 1,2,3;
    Which produces the following:
    RPTBASE   DAY   HOUR   COUNTX
    hata1     26    13     2
    hata1     26    14     0
    hata1     26    15     1
    hata2     26    13     1
    hata2     26    14     0
    hata2     26    15     0
    hata3     26    13     0
    hata3     26    14     1
    hata3     26    15     0
    hata4     26    13     0
    hata4     26    14     0
    hata4     26    15     1
    Note my hata1 26 15 has a countx of 1 (I believe this is correct and that your sample result is incorrect; if this is not the case, please explain why it should be 0).

  • Pl/sql challenge

    Given the following scenario:
    Need to count records inserted on particular days. The columns will be the actual days the records were inserted. For example, if sysdate is 14 Apr 2006 I need to generate 14 columns; if sysdate is the 3rd I need to generate 3 columns, and so forth. And I need to count how many records were inserted on each particular day.
    The result should look like:
    4/4/2006 4/2/2006 Total
    2 1 3
    In the above example, for 4/1/2006 2 records were inserted and on 4/2/2006 1 record was inserted, hence the total of 3.
    Problem is I don't know how many columns I need to generate beforehand. It depends on the current date (sysdate) in the query.
    How can this be done.
    Please help.
    Thanks.
    Sum.
    How can this dynamic behavior be achieved ?

    How can this dynamic behavior be achieved?
    There are different ways of achieving this dynamic behavior depending on where you are. In the following example I have demonstrated a way of doing it in SQL*Plus. In other environments (Oracle Reports, for instance), creating such a report should not be a big deal.
    SQL> create table test
      2  (td date)
      3  /
    Table created.
    SQL> insert into test
      2  select sysdate-(rownum-1)
      3  from all_objects
      4  where rownum<=to_number(to_char(sysdate,'dd'))
      5  /
    15 rows created.
    SQL> var cur refcursor
    SQL> set autoprint on
    SQL> declare
      2     v_cols varchar2(4000);
      3     v_cols1 varchar2(4000);
      4     v_total varchar2(1000);
      5  begin
      6     for r in 1..to_number(to_char(sysdate,'dd')) loop
      7             v_cols:=v_cols||'sum(decode(to_number(to_char(td,''dd'')),'||r||
      8                     ',1)) date'||r||',';
      9             v_cols1:=v_cols1||'date'||r||',';
    10             v_total:=v_total||'date'||r||'+';
    11     end loop;
    12     v_cols:=rtrim(v_cols,',');
    13     v_cols1:=rtrim(v_cols1,',');
    14     v_total:=rtrim(v_total,'+');
    15     open :cur for 'select '||v_cols1||','||v_total||' total from
    16                             (select '||v_cols||' from test
    17                             where trunc(td,''month'')=trunc(sysdate,''month''))';
    18  end;
    19  /
    PL/SQL procedure successfully completed.
         DATE1      DATE2      DATE3      DATE4      DATE5      DATE6      DATE7      DATE8      DATE9     DATE10     DATE11     DATE12     DATE13     DATE14     DATE15      TOTAL
             1          1          1          1          1          1          1          1          1          1          1          1          1          1          1         15
    Anwar

  • SQL challenge to any gurus out there

    Hi,
    I am trying to create a view which shows inheritance but cannot get this right. I have provided the table and data, plus details of what we are trying to achieve, to assist anyone kind enough to help.
    CREATE TABLE GLEN_TEST (
    TYPE VARCHAR2 (10),
    PARENT VARCHAR2 (10),
    PROPERTY NUMBER);
    INSERT INTO GLEN_TEST ( TYPE, PARENT, PROPERTY ) VALUES ( 'A', NULL, 1);
    INSERT INTO GLEN_TEST ( TYPE, PARENT, PROPERTY ) VALUES ( 'A', NULL, 2);
    INSERT INTO GLEN_TEST ( TYPE, PARENT, PROPERTY ) VALUES ( 'B', 'A', 3);
    INSERT INTO GLEN_TEST ( TYPE, PARENT, PROPERTY ) VALUES ( 'B', 'A', 4);
    INSERT INTO GLEN_TEST ( TYPE, PARENT, PROPERTY ) VALUES ( 'C', 'B', 5);
    INSERT INTO GLEN_TEST ( TYPE, PARENT, PROPERTY ) VALUES ( 'D', 'B', 6);
    COMMIT;
    We can see that this is a simple child-parent relationship.
    If we do a select * we would get the following result:
    TYPE PARENT PROPERTY
    A null 1
    A null 2
    B A 3
    B A 4
    C B 5
    D B 6
    What we are trying to achieve would look like this:
    TYPE PARENT PROPERTY
    A null 1
    A null 2
    B A 1
    B A 2
    B null 3
    B null 4
    C A 1
    C A 2
    C B 3
    C B 4
    C null 5
    D A 1
    D A 2
    D B 3
    D B 4
    D null 6
    What we are trying to achieve is for every type to show all its inheritance, i.e. type B inherits from type A because A is its parent, so the properties for type B are 3 and 4 plus the properties for type A, which are 1 and 2. Type C would have its own property (5) plus those from type B (3, 4), which is its parent, plus those of type A (1, 2), which is B's parent.
    I have tried using CONNECT BY PRIOR but with no success, as the inheritance could be n layers deep. If anyone out there likes a challenge and can help - we would be extremely grateful.
    Regards
    Glen
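    (A minimal sketch of one possible direction, not part of the replies below: walk each type's ancestor chain with CONNECT BY, then collect every property defined along that chain. Assumes Oracle 10g or later for CONNECT_BY_ROOT.)
    SELECT anc.leaf_type AS type,
           DECODE(anc.anc_type, anc.leaf_type, NULL, anc.anc_type) AS parent,
           g.property
    FROM  (SELECT CONNECT_BY_ROOT type AS leaf_type,
                  type AS anc_type
           FROM   (SELECT DISTINCT type, parent FROM glen_test)
           CONNECT BY PRIOR parent = type) anc
          JOIN glen_test g ON g.type = anc.anc_type
    ORDER BY type, parent, property;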

    Oops... a little mistake.
    This one works...
    SELECT DISTINCT glen_test.type type,
           DECODE(level_type.ty,glen_test.type,'null',level_type.ty) parent,
           level_type.property property
      FROM glen_test,
           (SELECT level lev, type ty, property
          FROM glen_test
         START WITH parent is null
         CONNECT BY PRIOR type = parent) level_type,
           (SELECT level lev, type ty
          FROM glen_test
         START WITH parent is null
         CONNECT BY PRIOR type = parent) level_type_1
    WHERE glen_test.type = level_type_1.ty
       AND (level_type.lev < level_type_1.lev
       AND level_type.ty IN (SELECT gl_test.type
                      FROM glen_test gl_test
                      START WITH gl_test.type = level_type_1.ty
                      CONNECT BY PRIOR gl_test.parent = gl_test.type))
       OR (glen_test.type = level_type.ty and level_type.lev = level_type_1.lev);
    DC

  • Reg: Just for fun [Off-Topic] -

    Hi All,
    Has anybody ever seen Mr. Thomas Kyte over here?
    Or, Steven Feuerstein, behind 'The PL/SQL Challenge'?
    Just getting curious to know because I've never seen those biggies over here.
    @Mod - Sorry for getting off-topic.
    - Ranit


  • Populating OUT parameters when TOO_MANY_ROWS exception is thrown

    While trying to write code to test out today's PL/SQL Challenge quiz, I found that the behavior appears to depend on the table being queried. I can't figure out what attribute of the table drives the behavior (or if there is a different explanation for the behavior).
    The quiz is testing what happens when a procedure does a SELECT INTO an OUT parameter and a TOO_MANY_ROWS exception is thrown. The intent is to show that even though the behavior is technically undefined, what actually happens is that the OUT parameter is populated with the first row that is selected (obviously, you would never write code that depends on this behavior-- this is solely an academic exercise). The demonstration code works as expected
    CREATE TABLE plch_emp ( emp_name VARCHAR2(100) );
    INSERT INTO plch_emp VALUES ('Jones');
    INSERT INTO plch_emp VALUES ('Smith');
    COMMIT;
    CREATE OR REPLACE PROCEDURE plch_get
      (out_name OUT plch_emp.emp_name%TYPE) IS
    BEGIN
      SELECT emp_name
      INTO out_name
      FROM plch_emp
      ORDER BY emp_name;
    EXCEPTION
      WHEN OTHERS THEN
        dbms_output.put_line('A:' || out_name);
    END;
    What will be displayed after executing the following block:
    DECLARE
      l_name plch_emp.emp_name%TYPE;
    BEGIN
      plch_get(l_name);
      dbms_output.put_line('B:' || l_name);
    END;
    /
    and outputs:
    A:Jones
    B:Jones
    When I replicate the logic while hitting the EMP table, the PLCH_EMP table, and the newly created EMP2 table, however, I get different behavior for the EMP and EMP2 tables. The procedure that queries PLCH_EMP works as expected, but the OUT parameter is NULL when either EMP or EMP2 is queried. Any idea what causes the behavior to differ?
    select *
      from v$version;
    create table emp2
    as
    select *
      from emp;
    create or replace procedure p1( p_out out varchar2 )
    is
    begin
      select emp_name
        into p_out
        from plch_emp
       order by emp_name;
    exception
      when others then
        dbms_output.put_line( 'P1 A:' || p_out );
    end;
    create or replace procedure p2( p_out out varchar2 )
    is
    begin
      select ename
        into p_out
        from emp
       order by ename;
    exception
      when others then
        dbms_output.put_line( 'P2 A:' || p_out );
    end;
    create or replace procedure p3( p_out out varchar2 )
    is
    begin
      select ename
        into p_out
        from emp2
       order by ename;
    exception
      when others then
        dbms_output.put_line( 'P3 A:' || p_out );
    end;
    declare
      l_ename varchar2(100);
    begin
      p1( l_ename );
      dbms_output.put_line( 'P1 B:' || l_ename );
      p2( l_ename );
      dbms_output.put_line( 'P2 B:' || l_ename );
      p3( l_ename );
      dbms_output.put_line( 'P3 B:' || l_ename );
    end;
    SQL> select *
      2    from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL>
    SQL> create table emp2
      2  as
      3  select *
      4    from emp;
    Table created.
    SQL>
    SQL> create or replace procedure p1( p_out out varchar2 )
      2  is
      3  begin
      4    select emp_name
      5      into p_out
      6      from plch_emp
      7     order by emp_name;
      8  exception
      9    when others then
    10      dbms_output.put_line( 'P1 A:' || p_out );
    11  end;
    12  /
    Procedure created.
    SQL>
    SQL> create or replace procedure p2( p_out out varchar2 )
      2  is
      3  begin
      4    select ename
      5      into p_out
      6      from emp
      7     order by ename;
      8  exception
      9    when others then
    10      dbms_output.put_line( 'P2 A:' || p_out );
    11  end;
    12  /
    Procedure created.
    SQL>
    SQL> create or replace procedure p3( p_out out varchar2 )
      2  is
      3  begin
      4    select ename
      5      into p_out
      6      from emp2
      7     order by ename;
      8  exception
      9    when others then
    10      dbms_output.put_line( 'P3 A:' || p_out );
    11  end;
    12  /
    Procedure created.
    SQL>
    SQL> declare
      2    l_ename varchar2(100);
      3  begin
      4    p1( l_ename );
      5    dbms_output.put_line( 'P1 B:' || l_ename );
      6
      7    p2( l_ename );
      8    dbms_output.put_line( 'P2 B:' || l_ename );
      9
    10    p3( l_ename );
    11    dbms_output.put_line( 'P3 B:' || l_ename );
    12
    13  end;
    14  /
    P1 A:Jones
    P1 B:Jones
    P2 A:
    P2 B:
    P3 A:
    P3 B:
    PL/SQL procedure successfully completed.
    Justin

    Billy  Verreynne  wrote:
    So we can then reasonably assume that the test or environment itself somehow interferes with the results? After all, the very same cursor is executed and the same PL/SQL engine uses that cursor interface.
    Clearly, there is something in my environment that is wonky. It just bugs me that I can't figure out what that variable is.
    Your test was done as a single anonymous PL/SQL block. Which means each proc that was called referenced the same variable/memory - are there perhaps PL/SQL optimisations enabled for the database or session? Or any other settings that could influence the test? Have you tried calling the procedures directly from SQL*Plus using a bind var and not via an anon block and a local var in that block?
    I have. And the results were the same:
    SQL> var ename varchar2(100);
    SQL> exec p1( :ename );
    P1 A:Jones
    PL/SQL procedure successfully completed.
    SQL> exec p2( :ename );
    P2 A:
    PL/SQL procedure successfully completed.
    SQL> exec p3( :ename );
    P3 A:
    PL/SQL procedure successfully completed.
    As a sanity test - what happens when proc P3 for example is changed to a cartesian join/union of emp2 and plch_emp? Do the test results change? Or you can change it as follows:
    SQL> ed
    Wrote file afiedt.buf
      1  create or replace procedure p5( p_out out varchar2 )
      2  is
      3  begin
      4    --// force a value into p_out using a successful fetch
      5    select emp_name
      6      into p_out
      7      from plch_emp
      8     where rownum = 1;
      9    --// force an error and determine if p_out was overwritten
    10    select ename
    11      into p_out
    12      from emp2
    13     order by ename;
    14  exception
    15    when others then
    16      dbms_output.put_line( 'P5:' || p_out );
    17* end;
    18  /
    Procedure created.
    SQL> exec p5( :ename );
    P5:Jones
    PL/SQL procedure successfully completed.
    Of course, you can also put this one down to using an operating system like Windows as a poor database server and not as a magnificent client gaming platform...
    Well, sure. But you can probably only play World of Warcraft so long before you need to write some PL/SQL.
    Justin

  • Result cache question

    I create and populate the following table in my schema:
    create table plch_table (id number, time_sleep number);
    begin
      insert into plch_table values (1, 20);
      commit;
    end;
    Then I create this function (it compiles successfully, since my schema has EXECUTE authority on DBMS_LOCK):
    create or replace function plch_func
      return number
      result_cache
    is
      l_time_sleep number;
    begin
      select time_sleep
        into l_time_sleep
        from plch_table
       where id = 1;
      dbms_lock.sleep(l_time_sleep);
      return l_time_sleep;
    end;
    I then start up a second session, connected to the same schema, and execute this block:
    declare
      res number := plch_func;
    begin
      null;
    end;
    Within five seconds of executing the above block, I go back to the first session and I run this block:
    declare
      t1 number;
      t2 number;
    begin
      t1 := dbms_utility.get_time;
      dbms_output.put_line(plch_func);
      t2 := dbms_utility.get_time;
      dbms_output.put_line('Execute in '||round((t2-t1)/100)||' seconds');
    end;
    what will be displayed after this block executes?
    And the result is:
    20
    Execute in 30 seconds
    However, I don't understand why. I mean, what is going on behind this? Why is the result 30? Could somebody tell me why?

    Honestly, before yesterday's PL/SQL Challenge question, I had no idea how this worked either. This is very much a deep-internals question-- you'd likely have to go looking for a very specialized presentation or blog post to get more detail (or you'd have to do the research yourself). And even then, it's relatively unlikely that they would go into much more detail than the PL/SQL Challenge answer did. Julian Dyke's Result Cache Internals (PPT) is probably one of the more detailed presentations about the internals of the result cache.
    The set of valid statuses for a result cache object is documented in the Oracle Database Reference entry for the v$result_cache_objects view. The two 10-second timeouts are controlled by the database- and session-level settings of the undocumented _result_cache_timeout parameter (which, based on this blog post by Vladimir Begun, was set to 60 seconds in 11.1.0.6 and changed to 10 seconds in 11.1.0.7).
    Justin
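    (A small aside, not from the answer above: while reproducing the scenario you can watch the cached result change status in the view mentioned; the name filter below matches the quiz function.)
    select id, type, status, name
    from   v$result_cache_objects
    where  name like '%PLCH_FUNC%';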

  • Challenging Lexical Reference and SQL Problem

    Hi,
    I am trying to build a product hierarchy master report but I have a challenging problem at hand. To generate the report I need an SQL statement with lexical references and before parameter form. Basically, my SQL statement looks something like this:
    Select &Columns
    From &Tables
    Where &Criteria
    Before Parameter:
    If X='1',
    &Columns := A.NAME, B.NAME, C.NAME
    &Tables:=A,B,C
    &Criteria:= A.CODE=B.CODE, B.CODE=C.CODE
    If X='2',
    &Columns := A.NAME, B.NAME, C.NAME, D.NAME
    &Tables:=A,B,C,D
    &Criteria:= A.CODE=B.CODE, B.CODE=C.CODE,C.CODE=D.CODE
    If X='3',
    &Columns := A.NAME, B.NAME, C.NAME, D.NAME, E.NAME
    &Tables:=A,B,C,D,E
    &Criteria:= A.CODE=B.CODE, B.CODE=C.CODE,C.CODE=D.CODE, C.CODE=E.CODE
    I need to build a group left report and group by A,B and up to E if X='3'.
    Any idea how can I accomplish this? Any kind of help or advice is urgently needed. Thank you in advance.

    Siak,
    build a kind of maximum model. Set the initial values of your parameters to the maximum (like, for columns:
    a.name as aname, b.name as bname, ..., e.name as ename)
    and build a layout for this. Then, in a before-report trigger, set the parameters you want and fill the unneeded ones with dummy values. For example, if X=1 then
    :columns := 'a.name as aname, b.name as bname, c.name as cname, ''x'' as dname, ''x'' as ename'
    In the layout, suppress the output of unused fields with a format trigger, or use three different layouts depending on your X. I've not tested it, but it's a chance ...
    regards
    Rainer
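    (A minimal sketch of the format-trigger idea above, not part of Rainer's reply; the field name F_dname and the user parameter X are assumptions.)
    -- Oracle Reports format trigger on the dummy D-column field:
    -- returning FALSE suppresses the field, so hide it when X = '1'.
    function F_dname_FormatTrigger return boolean is
    begin
      return (:X != '1');
    end;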

  • How do i write this in sql ? (another headcracker challenging  report)

    hi guys!,
    I need to create / generate a report. I intend to do all this with pure SQL alone.
    Been cracking my head for days but to no avail.
    Hope you gurus here will straighten me out.
    Here it goes. i Have a table
    TABLE USAGE_REPORT
    Date DATE -- everyday's date
    BalanceCF NUMBER -- an initial start amount or ( balancebf)
    Topup_amount NUMBER -- amount of topup that day
    Usage1 NUMBER -- amount of $ use on certain prod
    Usage2 NUMBER -- amount of $ use on certain prod
    BalanceBF NUMBER -- BalanceCF + topup - usage1 - usage2 (which is also the next day's BalanceCF)
    Example1
    please see this link
    http://img9.imageshack.us/img9/708/88149028.gif
    assuming my sql is
    WITH dates AS (
      SELECT TRUNC(SYSDATE) + level dmy
      FROM DUAL CONNECT BY level < 366
    ),
    TopUP AS (
      SELECT trunc(purchase_date) dated, sum(payment_amount)
      FROM purchase
      GROUP BY trunc(purchase_date)
    ),
    Usage1 AS (
      SELECT trunc(connect_date) dated, sum(charged_amount)
      FROM tab1
      WHERE prod_id = 'xxx'
      GROUP BY trunc(connect_date)
    ),
    Usage2 AS (
      SELECT trunc(connect_date) dated, sum(charged_amount)
      FROM tab2
      WHERE prod_id = 'yyy'
      GROUP BY trunc(connect_date)
    )
    SELECT * FROM DATES D
    LEFT OUTER JOIN TOPUP T
    ON (D.DMY = T.DATED)
    LEFT OUTER JOIN USAGE1 U1
    ON (D.DMY = U1.DATED)
    LEFT OUTER JOIN USAGE2 U2
    ON (D.DMY = U2.DATED);
    However:
    q1) How do I start / 'initiate' the 1st row's BALANCECF so that I can do the calculation of
    BALANCECF + TOPUP - USAGE1 - USAGE2 = BALANCEBF
    q2) How do I bring the value of BALANCEBF into the 2nd row's BALANCECF to do further calculation?
    q3) Does it have something to do with CONNECT BY? A parent-child relationship?
    q4) In short, how do I make it look like the attached pic above?
    Please help!
    Best Regards,
    Noob

    I am using 200 as the initial balance_cf. You did not provide sample data, so the code below is not tested:
    WITH dates as (
                   SELECT  TRUNC(SYSDATE) + level dmy,
                           200 balance_cf
                     FROM  DUAL
                     CONNECT BY level < 366
                  ),
         topUP as (
                   SELECT  trunc(purchase_date) dated,
                           sum(payment_amount) topup_amount
                     FROM  purchase
                     GROUP by trunc(purchase_date)
                  ),
        Usage1 as (
                   SELECT  trunc(connect_date) dated,
                           sum(charged_amount) usage1_amount
                     FROM  tab1
                     WHERE prod_id = 'xxx'
                     GROUP BY trunc(connect_date)
                  ),
        Usage2 as (
                   SELECT  trunc(connect_date) dated,
                           sum(charged_amount) usage2_amount
                     FROM  tab2
                     WHERE prod_id = 'yyy'
                     GROUP BY trunc(connect_date)
                  )
    SELECT  dmy,
            balance_cf + sum(topup_amount - usage1_amount - usage2_amount) over (order by dmy rows between unbounded preceding and 1 preceding) balance_cf,
            topup_amount,
            usage1_amount,
            usage2_amount,
            balance_cf + sum(topup_amount - usage1_amount - usage2_amount) over (order by dmy) balance_bf
      FROM  DATES D LEFT OUTER JOIN TOPUP T ON (D.DMY = T.DATED)
                    LEFT OUTER JOIN USAGE1 U1 ON (D.DMY = U1.DATED)
                    LEFT OUTER JOIN USAGE2 U2 ON (D.DMY = U2.DATED)
      ORDER BY dmy
    /
    SY.

  • SQL Query (challenge)

    Hello,
    I have 2 tables of events E1 and E2
    E1: (Time, Event), E2: (Time, Event)
    Where the columns Time in both tables are ordered.
    Ex.
       E1: ((1, a) (2, b) (4, d) (6, c))
       E2: ((2, x) (3, y) (6, z))
    To find the events of both tables at the same time it is obvious to do a join between E1 and E2:
    Q1 -> select e1.Time, e1.Event, e2.Event from E1 e1, E2 e2 where e1.Time=e2.Time;
    The result of the query is:
    ((2, b, x) (6, c, z))
    Given that there is no indexes for this tables, an efficient execution plan can be a hash join (under conditions mentioned in Oracle Database Performance Tuning Guide Ch 14).
    Now, the hash join suffers from a locality problem if the hash table is large and does not fit in memory; it may happen that one block of data is read into memory and swapped out frequently.
    Given that the Time columns are sorted in ascending order, I find the following algorithm, a known idea in the literature, appropriate for this problem. The algorithm is in pseudocode close to PL/SQL, for simplicity (I hope it is still clear):
    -- start algorithm
    open cursors for e1 and e2
    loop
      if e1.Time = e2.Time then
         pipe row (e1.Time, e1.Event, e2.Event);
         fetch next e1 record
         exit when notfound
         fetch next e2 record
          exit when notfound
      else
         if e1.Time < e2.Time then
            fetch next e1 record
            exit when notfound
         else
            fetch next e2 record
            exit when notfound
         end if;
      end if;
    end loop
    -- end algorithm
    As you can see, the algorithm does not suffer from the locality issue since it iterates sequentially over the inputs.
    Now the problem: the algorithm shown above suggests using a pipelined function to implement it in PL/SQL, but that is slow compared to the hash join in the implicit cursor of the query shown above (Q1).
    Is there an implicit SQL query to implement this algorithm? The objective is to beat the hash join of the query (Q1), so queries that use sorting are not accepted.
    A difficulty I found is that explicit cursors are much slower than implicit ones (SQL queries).
    Example: for a large table (2.5 million records)
    create table mytable (x number);
    declare
      type num_tab is table of number;
      l_data num_tab;
      c sys_refcursor;
    begin
      open c for 'select 1 from mytable';
      fetch c bulk collect into l_data;
      close c;
      dbms_output.put_line('count = '||l_data.count);
    end;
    /
    is 5 times slower than
    select count(*) from mytable;
    I do not understand why this should be the case. I have read that this may be explained by PL/SQL being interpreted, but I think that does not explain the whole issue. Maybe it's because the fetch copies data from the SQL workspace to the PL/SQL workspace, and this takes a long time.

    Hi
    A correction in the algorithm:
    -- start algorithm
    open cursors for e1 and e2
    fetch next e1 record
    fetch next e2 record
    loop
      exit when e1%notfound
      exit when e2%notfound
      if e1.Time = e2.Time then
         pipe row (e1.Time, e1.Event, e2.Event);
         fetch next e1 record
         fetch next e2 record
      else
         if e1.Time < e2.Time then
            fetch next e1 record
         else
            fetch next e2 record
         end if;
      end if;
    end loop
    -- end algorithm
    Best regards
    Taoufik
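    For completeness, here is a minimal runnable sketch of the pipelined-function version of the algorithm above; the type names t_ev_row/t_ev_tab, the function name merge_events, and the e1/e2 column names tm and ev are assumptions.
    create or replace type t_ev_row as object (tm number, ev1 varchar2(10), ev2 varchar2(10));
    /
    create or replace type t_ev_tab as table of t_ev_row;
    /
    create or replace function merge_events return t_ev_tab pipelined is
      cursor c1 is select tm, ev from e1 order by tm;
      cursor c2 is select tm, ev from e2 order by tm;
      r1 c1%rowtype;
      r2 c2%rowtype;
    begin
      open c1;
      open c2;
      fetch c1 into r1;
      fetch c2 into r2;
      -- merge the two ordered streams, emitting a row whenever the times match
      while c1%found and c2%found loop
        if r1.tm = r2.tm then
          pipe row (t_ev_row(r1.tm, r1.ev, r2.ev));
          fetch c1 into r1;
          fetch c2 into r2;
        elsif r1.tm < r2.tm then
          fetch c1 into r1;
        else
          fetch c2 into r2;
        end if;
      end loop;
      close c1;
      close c2;
      return;
    end;
    /
    -- usage: select * from table(merge_events);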

  • Challenging SQL statement.

    Dear all,
    I tried to run some query on Oracle 9i using SQL*Plus and I encountered a funny problem. Could you please help me?
    I have a view defined as below:
    SQL> desc lrpo_smr_daily_purchases_v;
    Name                                                  Null?    Type
    ORG_ID                                                         NUMBER(15)
    PURCHASE_TYPE                                                  VARCHAR2(7)
    PURCHASE_DATE                                                  DATE
    EFFECTIVE_PERIOD_NUM                                  NOT NULL NUMBER(15)
    AMOUNT                                                         NUMBER
    QUANTITY                                                       NUMBER
    AVG_PURCHASE_PRICE                                             NUMBER
    MARKET_PRICE                                                   NUMBER
    UNIT_COST                                                      NUMBER
    I wrote a query to show this year's and last year's purchase margin for the same period. To accommodate it I use outer join syntax:
    select * from
    (select   substr(name,1,40) name,
              org_id,
              purchase_type,
              ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    from lrpo_smr_daily_purchases_v,
    hr_organization_units
    where org_id = organization_id
    and purchase_date between  '01-JAN-10' and '31-JAN-10'
    group by name,org_id, purchase_type) ty full outer join
    (select   substr(name,1,40) name,
              org_id,
              purchase_type,
              ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    from lrpo_smr_daily_purchases_v,
    hr_organization_units
    where org_id = organization_id
    and purchase_date between  '01-JAN-09' and '31-JAN-09'
    group by name,org_id, purchase_type) ly
    on (ty.purchase_type = ly.purchase_type and ty.org_id = ly.org_id);
    So SQL told me:
    ERROR at line 5:
    ORA-00904: "UNIT_COST": invalid identifier
    If I remove the unit_cost column from the first subquery and replace it with 1, then the query goes through. If I leave it in the first subquery and replace it with 1 in the second subquery, then the query shows the same error.
    SQL> select * from
      2  (select   substr(name,1,40) name,
      3            org_id,
      4            purchase_type,
      5            ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*1)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
      6  from lrpo_smr_daily_purchases_v,
      7  hr_organization_units
      8  where org_id = organization_id
      9  and purchase_date between  '01-JAN-10' and '31-JAN-10'
    10  group by name,org_id, purchase_type) ty full outer join
    11  (select   substr(name,1,40) name,
    12            org_id,
    13            purchase_type,
    14            ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    15  from lrpo_smr_daily_purchases_v,
    16  hr_organization_units
    17  where org_id = organization_id
    18  and purchase_date between  '01-JAN-09' and '31-JAN-09'
    19  group by name,org_id, purchase_type) ly
    20  on (ty.purchase_type = ly.purchase_type and ty.org_id = ly.org_id);
    NAME                                         ORG_ID PURCHAS NET_MARGIN
    NAME                                         ORG_ID PURCHAS NET_MARGIN
    018 - LR Kuala Krai                             143 Local     -.362399
    018 - LR Kuala Krai                             143 Local   -.60869098
    and
    SQL> select * from
      2  (select   substr(name,1,40) name,
      3            org_id,
      4            purchase_type,
      5            ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
      6  from lrpo_smr_daily_purchases_v,
      7  hr_organization_units
      8  where org_id = organization_id
      9  and purchase_date between  '01-JAN-10' and '31-JAN-10'
    10  group by name,org_id, purchase_type) ty full outer join
    11  (select   substr(name,1,40) name,
    12            org_id,
    13            purchase_type,
    14            ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*1)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    15  from lrpo_smr_daily_purchases_v,
    16  hr_organization_units
    17  where org_id = organization_id
    18  and purchase_date between  '01-JAN-09' and '31-JAN-09'
    19  group by name,org_id, purchase_type) ly
    20  on (ty.purchase_type = ly.purchase_type and ty.org_id = ly.org_id);
              ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    ERROR at line 5:
    ORA-00904: "UNIT_COST": invalid identifier
    So what happened?
    Best Regards,
    Hien

    I can't tell from the unformatted code if there is an obvious error.
    In general, please try to use table aliasing for clarity.
    If there isn't a syntax error, then I presume from the naming convention that lrpo_smr_daily_purchases_v is a view.
    In which case, you may be getting an ORA-00904 from some sort of view merging issue. If so, you may get relief from the error by setting _complex_view_merging to false at the session level as a test. Or you may get rid of it by rewriting the query, materializing certain bits of it, or possibly using the no_merge hint.
    Also, I don't really want to second guess what you're doing with this query, but it looks like you're trying to get the net_margin for 2009 and the net_margin for 2010 on the same row by running the same base SQL twice with different date ranges.
    If so, you may want to try something like this:
    select name
    ,      org_id
    ,      purchase_type
    ,      max(case when purchase_year = '2009'
                    then net_margin
               end) net_margin_2009
    ,      max(case when purchase_year = '2010'
                    then net_margin
               end) net_margin_2010
    from (  select substr(name,1,40) name,
                   org_id,
                   purchase_type,
                   to_char(purchase_date,'YYYY') purchase_year,
                   ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
            from   lrpo_smr_daily_purchases_v,
                   hr_organization_units
            where  org_id = organization_id
            and    purchase_date between '01-JAN-09' and '31-JAN-10'
            group by name,org_id, purchase_type, to_char(purchase_date,'YYYY')
         )
    group by name, org_id, purchase_type;
    You might even delay that join to hr_organization_units - assuming it's used to get the name - to the outer query in that example.

  • A challenging dynamic SQL query problem

    hi All,
    I have a very interesting problem at work:
    We have this particular table defined as follows :
    CREATE TABLE sales_data (
    sales_id NUMBER,
    sales_m01 NUMBER,
    sales_m02 NUMBER,
    sales_m03 NUMBER,
    sales_m04 NUMBER,
    sales_m05 NUMBER,
    sales_m06 NUMBER,
    sales_m07 NUMBER,
    sales_m08 NUMBER,
    sales_m09 NUMBER,
    sales_m10 NUMBER,
    sales_m11 NUMBER,
    sales_m12 NUMBER,
    sales_prior_yr NUMBER );
    The columns 'sales_m01' ... 'sales_m12' represent aggregated monthly sales, in which 'sales_m01' translates to 'sales for the month of January' (January being the first month), 'sales_m02' to sales for the month of February, and so on.
    The problem I face is that we have a project which requires that a parameter be passed to a stored procedure which stands for the month number which is then used to build a SQL query with the following required field aggregations, which depends on the parameter passed :
    Sample 1 : parameter input: 4
    Dynamically-built SQL query should be :
    SELECT
    SUM(sales_m04) as CURRENT_SALES,
    SUM(sales_m01+sales_m02+sales_m03+sales_m04) SALES_YTD
    FROM
    sales_data
    WHERE
    sales_id = '0599768';
    Sample 2 : parameter input: 8
    Dynamically-built SQL query should be :
    SELECT
    SUM(sales_m08) as CURRENT_SALES,
    SUM(sales_m01+sales_m02+sales_m03+sales_m04+
    sales_m05+sales_m06+sales_m07+sales_m08) SALES_YTD
    FROM
    sales_data
    WHERE
    sales_id = '0599768';
    So in a sense, the contents of SUM(sales_m01 ... n) would vary depending on the parameter passed, which should be a number between 1 and 12 that corresponds to a month, which in turn corresponds to an actual column range in the table itself. The resulting dynamic query should only aggregate those columns/fields in the table which fall within the range given by the input parameter and disregard all the remaining columns/fields.
    Any solution is greatly appreciated.
    Thanks.
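    (Ahead of the replies, a minimal sketch of the dynamic-SQL stored procedure the post asks for; the procedure name get_sales and the ref-cursor OUT parameter are assumptions.)
    create or replace procedure get_sales (
      p_month    in  pls_integer,
      p_sales_id in  sales_data.sales_id%type,
      p_result   out sys_refcursor
    ) is
      l_ytd varchar2(4000) := 'sales_m01';
      l_sql varchar2(4000);
    begin
      -- build the running list sales_m01 + ... + sales_mNN up to the given month
      for i in 2 .. p_month loop
        l_ytd := l_ytd || ' + sales_m' || to_char(i, 'FM00');
      end loop;
      l_sql := 'select sum(sales_m' || to_char(p_month, 'FM00') || ') current_sales, '
            || 'sum(' || l_ytd || ') sales_ytd '
            || 'from sales_data where sales_id = :id';
      open p_result for l_sql using p_sales_id;
    end;
    /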

    Hi, another simpler approach is using DECODE;
    try it like this:
    SQL> CREATE TABLE sales_data (
      2  sales_id NUMBER,
      3  sales_m01 NUMBER,
      4  sales_m02 NUMBER,
      5  sales_m03 NUMBER,
      6  sales_m04 NUMBER,
      7  sales_m05 NUMBER,
      8  sales_m06 NUMBER,
      9  sales_m07 NUMBER,
    10  sales_m08 NUMBER,
    11  sales_m09 NUMBER,
    12  sales_m10 NUMBER,
    13  sales_m11 NUMBER,
    14  sales_m12 NUMBER,
    15  sales_prior_yr NUMBER );
    Table created.
    SQL> select * from sales_data;
      SALES_ID  SALES_M01  SALES_M02  SALES_M03  SALES_M04  SALES_M05  SALES_M06  SALES_M07  SALES_M08  SALES_M09  SALES_M10  SALES_M11  SALES_M12 SALES_PRIOR_YR
             1        124        123        145        146        124        126        178        189        456        235        234        789          19878
             2        124        123        145        146        124        126        178        189        456        235        234        789          19878
             1        100        200        300        400        500        150        250        350        450        550        600        700          10000
             1        101        201        301        401        501        151        251        351        451        551        601        701          10000
    Now for your requirement: see the query below; if there is some problem then tell.
    SQL> SELECT sum(sales_m&input_data), DECODE (&input_data,
      2                 1, SUM (sales_m01),
      3                 2, SUM (sales_m01 + sales_m02),
      4                 3, SUM (sales_m01 + sales_m02 + sales_m03),
      5                 4, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04),
      6                 5, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04 + sales_m05),
      7                 6, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06),
      8                 7, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07),
      9                 8, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08),
    10                 9, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09),
    11                 10,SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09+sales_m10),
    12                 11,SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09+sales_m10+sales_m11),
    13                 12,SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09+sales_m10+sales_m11+sales_m12)
    14                ) total
    15    FROM sales_data
    16   WHERE sales_id = 1;
    Enter value for input_data: 08
    Enter value for input_data: 08
    old   1: SELECT sum(sales_m&input_data), DECODE (&input_data,
    new   1: SELECT sum(sales_m08), DECODE (08,
    SUM(SALES_M08)      TOTAL
               890       5663

  • Mild challenge -pivoting *multiple* columns per row using only SQL

    Hello All,
    I'm in the process of learning the various pivoting techniques available
    in SQL, and I am becoming more familiar with the decode,function,group-by
    technique seen in many examples on these forums. However, I've got a case
    where I need to pivot out 3 different columns for 3 rows of data where the
    value of a different column is driving whether or not those columns are pivoted.
    I know that last sentence was as clear as mud so I'll show you/provide the simple
    scripts and data, and then I'll elaborate a little more beneath.
    create table temp_timeline (
    mkt_id varchar2(10),
    event_id number(8),
    event_type varchar2(3),
    mod_due_date date,
    cur_due_date date,
    act_due_date date
    );
    insert into temp_timeline values('DSIM6',51,'S1','NOV-13-06','NOV-13-06',NULL);
    insert into temp_timeline values('DSIM6',51,'S2','DEC-20-06','DEC-20-06',NULL);
    insert into temp_timeline values('DSIM6',51,'S3','JAN-17-07','JAN-17-07',NULL);
    insert into temp_timeline values('DSIM6',51,'S4','FEB-14-07','FEB-14-07',NULL);
    commit;
    select * from temp_timeline;
    The "normal" output (formatted with period-separated fields) is:
    DSIM6.51.S1.NOV-13-06.NOV-13-06.NULL
    DSIM6.51.S2.DEC-20-06.DEC-20-06.NULL
    DSIM6.51.S3.JAN-17-07.JAN-17-07.NULL
    DSIM6.51.S4.FEB-14-07.FEB-14-07.NULL
    The DESIRED 1-row output (formatted with period-separated fields) is:
    DSIM6.51.NOV-13-06.NOV-13-06.NULL.DEC-20-06.DEC-20-06.NULL.JAN-17-07.JAN-17-07.NULL.FEB-14-07.FEB-14-07.NULL
    So, the first 2 columns in the table have the same data, and the third column
    makes the row unique (they could all have the same/similar dates).
    If this table only consisted of the first 3 columns then many of the examples seen
    on this forum would work well (grouping by the first 2 columns and pivoting out
    the "event_type" values (S1, S2, S3, S4), etc.).
    But, in my case, I need to discard the event_type column and pivot out the
    3 columns of date data onto the first row (for each different event_type).
    So the 3 Dates associated with the "S2" column would go to the first row, and the
    3 dates associated with the "S3" column would also go to the first row (and so on).
    The 3 dates need to be 3 distinct columns when they are
    pivoted out (not concatenated to each other and pivoted as one column).
    Given this, I will need to pivot out a total of 12 different columns for each distinct
    (mkt_id, event_id) pair.
    For the time being I have accomplished this with a union, but am trying to expand
    my abilities with other sql methods. I've seen some very elegant solutions on this
    forum so will be interested to see what others can come up with for this solution.
    Thanks in advance for any comments you may provide.

    Just DECODE based on the event type, which will generate your 12 columns.
    SELECT mkt_id, event_id,
           MAX(DECODE(event_type, 'S1', mod_due_date, NULL)) s1_mod_due,
           MAX(DECODE(event_type, 'S1', cur_due_date, NULL)) s1_cur_due,
           MAX(DECODE(event_type, 'S1', act_due_date, NULL)) s1_act_due,
           MAX(DECODE(event_type, 'S2', mod_due_date, NULL)) s2_mod_due,
           MAX(DECODE(event_type, 'S2', cur_due_date, NULL)) s2_cur_due,
           MAX(DECODE(event_type, 'S2', act_due_date, NULL)) s2_act_due,
           MAX(DECODE(event_type, 'S3', mod_due_date, NULL)) s3_mod_due,
           MAX(DECODE(event_type, 'S3', cur_due_date, NULL)) s3_cur_due,
           MAX(DECODE(event_type, 'S3', act_due_date, NULL)) s3_act_due,
           MAX(DECODE(event_type, 'S4', mod_due_date, NULL)) s4_mod_due,
           MAX(DECODE(event_type, 'S4', cur_due_date, NULL)) s4_cur_due,
           MAX(DECODE(event_type, 'S4', act_due_date, NULL)) s4_act_due
    FROM temp_timeline
    GROUP BY mkt_id, event_id;
    Tested, because you supplied create table and insert statements, thank you.
    John

  • SQL Developer 3.2 - Export DDL challenge

    Hi,
    I would like to Export DDL for approximately 300 of 1000 objects in a schema.  I have the names of all required tables for which I'd like to get the DDL in a table in my personal schema.  Is there a way that I can use this table as a driver for the built-in Export DDL utility or will I need to either go to the schema browser and hand-pick each of the 300 tables and/or from the Tools-Export DDL "Specify Objects" window?
    I would like to make this more automated so that I don't have to keep clicking and scrolling my way through the list of required objects.  Any thoughts are appreciated, thanks.

    1008686 wrote:
    Hi,
    I would like to Export DDL for approximately 300 of 1000 objects in a schema.  I have the names of all required tables for which I'd like to get the DDL in a table in my personal schema.  Is there a way that I can use this table as a driver for the built-in Export DDL utility or will I need to either go to the schema browser and hand-pick each of the 300 tables and/or from the Tools-Export DDL "Specify Objects" window?
    I would like to make this more automated so that I dont have to keep clicking, scrolling, clicking my way through the list of required objects.  Any thoughts are appreciated, thanks.
    There is no way to use sql developer to do that.
    You can:
    1. do it manually as you suggest
    2. do it manually by writing a script that makes the appropriate DBMS_METADATA calls
    3. use expdp to extract the metadata and create a DDL file.
    The full DDL for a table will include a lot of components that many people don't even want, for example storage clauses.
    The bigger issue you should address is why you don't already have the DDL to begin with. Best practices are to create the DDL and keep it in a version control system; not extract it after the fact.
    I suggest you use the EXPDP utility to extract the DDL into a file so that you have it for future use.
    If you plan to write a script there are plenty of examples on the web that show how to do that. Here is one:
    http://www.colestock.com/blogs/2008/02/extracting-ddl-from-oracle-2-approaches.html
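    For what it's worth, a minimal sketch of option 2 above, assuming the driver table of names is called my_ddl_list(table_name) and the tables live in the current schema:
    begin
      -- leave out the storage clauses mentioned above
      dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false);
      dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SEGMENT_ATTRIBUTES', false);
    end;
    /
    set long 1000000 pagesize 0
    spool tables_ddl.sql
    select dbms_metadata.get_ddl('TABLE', t.table_name) from my_ddl_list t;
    spool off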
