Challenging SQL...

Hi Everyone,
I have a table with the following kind of structure.
Table X
cont_id Number
rid Number
cont_nm_flg Char(1)
bus_nm_flg Char(1)
bt_add_flg Char(1)
Upd_date Date
Now the table will have multiple cont_id values, including duplicates. I have to roll up (group by) cont_id. But here is the hitch:
I have to select based on bt_add_flg. If this field has the value '1', we select that record. If it is not '1', get the record having the most recent upd_date, and if there is a tie (there are two recs having the same max upd_date), get the record having the max rid.
Could anyone help me with this type of SQL...
Thanks in Advance...

Given the following data:
SQL> SELECT rownum,cont_id,rid,cont_nm_flg,bus_nm_flg,bt_add_flg,upd_date
  2  FROM x;
    ROWNUM    CONT_ID        RID C B B UPD_DATE
         1          1          1 a a 1 12-nov-2002 13:12:59
         2          2          1 a a x 12-nov-2002 13:05:00
         3          2          2 a a y 12-nov-2002 13:05:00
         4          3          1 a a x 11-nov-2002 13:12:59
         5          3          2 a a x 12-nov-2002 13:12:59
If I understand your requirement correctly, I would expect to get the following rows returned.
Rownum=1 by rule a) bt_add_flg = '1'
Rownum=3 by rule c) max(upd_date) and max(rid) to break the tie
Rownum=5 by rule b) max(upd_date), no tie to break
So,
SQL> SELECT *
  2  FROM x
  3  WHERE exists (SELECT 1
  4                FROM x sub_x
  5                WHERE x.cont_id = sub_x.cont_id and
  6                      bt_add_flg = '1')
  7  UNION ALL --Changed this to avoid a sort
  8  SELECT *
  9  FROM x
10  WHERE to_char(upd_date,'yyyymmddhh24miss')||
11        to_char(rid) = (SELECT MAX(to_char(sub_x.upd_date,'yyyymmddhh24miss')||
12                                   to_char(sub_x.rid))
13                        FROM x sub_x
14                        WHERE x.cont_id = sub_x.cont_id) and
15        NOT EXISTS (SELECT 1
16                    FROM x sub_x2
17                    WHERE x.cont_id = sub_x2.cont_id and
18                          sub_x2.bt_add_flg = '1');
   CONT_ID        RID C B B UPD_DATE
         1          1 a a 1 12-nov-2002 13:12:59
         2          2 a a y 12-nov-2002 13:05:00
         3          2 a a x 12-nov-2002 13:12:59
As expected, 3 rows, one for each cont_id, despite having duplicate rows for cont_id 2 and cont_id 3.
If it is possible that there may be multiple rows for a cont_id, one with bt_add_flg = '1' and one or more with bt_add_flg <> '1', then you need to add one more condition to the top select in the union:
INSERT INTO x VALUES (1,2,'a','a','z',sysdate + 3/24);
SELECT *
FROM x
WHERE EXISTS (SELECT 1
              FROM x sub_x
              WHERE x.cont_id = sub_x.cont_id and
                    sub_x.bt_add_flg = '1') and
bt_add_flg = '1'
UNION ALL 
SELECT *
FROM x
WHERE to_char(upd_date,'yyyymmddhh24miss')||
      to_char(rid) = (SELECT MAX(to_char(sub_x.upd_date,'yyyymmddhh24miss')||
                                 to_char(sub_x.rid))
                      FROM x sub_x
                      WHERE x.cont_id = sub_x.cont_id) and
      NOT EXISTS (SELECT 1
                  FROM x sub_x2
                  WHERE x.cont_id = sub_x2.cont_id and
                        sub_x2.bt_add_flg = '1');
John
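On a release with analytic functions, the same pick-one-row-per-cont_id logic can be done in a single pass with ROW_NUMBER(), encoding all three rules in one ORDER BY. Below is a minimal sketch of that idea, translated to SQLite (3.25+) via Python so it can be run anywhere; the data is the five-row sample above, not John's exact Oracle statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE x (cont_id INT, rid INT, cont_nm_flg TEXT,
                bus_nm_flg TEXT, bt_add_flg TEXT, upd_date TEXT);
INSERT INTO x VALUES
 (1, 1, 'a', 'a', '1', '2002-11-12 13:12:59'),
 (2, 1, 'a', 'a', 'x', '2002-11-12 13:05:00'),
 (2, 2, 'a', 'a', 'y', '2002-11-12 13:05:00'),
 (3, 1, 'a', 'a', 'x', '2002-11-11 13:12:59'),
 (3, 2, 'a', 'a', 'x', '2002-11-12 13:12:59');
""")

# One row per cont_id: bt_add_flg = '1' wins, then latest upd_date, then max rid.
rows = conn.execute("""
SELECT cont_id, rid, bt_add_flg
FROM (SELECT x.*,
             ROW_NUMBER() OVER (
               PARTITION BY cont_id
               ORDER BY CASE WHEN bt_add_flg = '1' THEN 0 ELSE 1 END,
                        upd_date DESC,
                        rid DESC) AS rn
      FROM x)
WHERE rn = 1
ORDER BY cont_id
""").fetchall()
print(rows)  # [(1, 1, '1'), (2, 2, 'y'), (3, 2, 'x')]
```

This returns the same three rows as the UNION ALL query, from a single scan of the table; an equivalent ROW_NUMBER() form works on Oracle as well.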

Similar Messages

  • Challenging SQL statement.

    Dear all,
    I tried to run a query on Oracle 9i using SQL*Plus and I encountered a funny problem. Could you please help me?
    I have a view defined as below:
    SQL> desc lrpo_smr_daily_purchases_v;
    Name                                                  Null?    Type
    ORG_ID                                                         NUMBER(15)
    PURCHASE_TYPE                                                  VARCHAR2(7)
    PURCHASE_DATE                                                  DATE
    EFFECTIVE_PERIOD_NUM                                  NOT NULL NUMBER(15)
    AMOUNT                                                         NUMBER
    QUANTITY                                                       NUMBER
    AVG_PURCHASE_PRICE                                             NUMBER
    MARKET_PRICE                                                   NUMBER
    UNIT_COST                                                      NUMBER
    I wrote a query to show this year's and last year's purchase margin for the same period. To accommodate that I used full outer join syntax:
    select * from
    (select   substr(name,1,40) name,
              org_id,
              purchase_type,
              ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    from lrpo_smr_daily_purchases_v,
    hr_organization_units
    where org_id = organization_id
    and purchase_date between  '01-JAN-10' and '31-JAN-10'
    group by name,org_id, purchase_type) ty full outer join
    (select   substr(name,1,40) name,
              org_id,
              purchase_type,
              ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    from lrpo_smr_daily_purchases_v,
    hr_organization_units
    where org_id = organization_id
    and purchase_date between  '01-JAN-09' and '31-JAN-09'
    group by name,org_id, purchase_type) ly
    on (ty.purchase_type = ly.purchase_type and ty.org_id = ly.org_id);
    So SQL*Plus told me:
    ERROR at line 5:
    ORA-00904: "UNIT_COST": invalid identifier
    If I remove the unit_cost column from the first subquery and replace it with 1, the query goes through. If I leave it in the first subquery and replace it with 1 in the second subquery, the query shows the same error.
    SQL> select * from
      2  (select   substr(name,1,40) name,
      3            org_id,
      4            purchase_type,
      5            ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*1)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
      6  from lrpo_smr_daily_purchases_v,
      7  hr_organization_units
      8  where org_id = organization_id
      9  and purchase_date between  '01-JAN-10' and '31-JAN-10'
    10  group by name,org_id, purchase_type) ty full outer join
    11  (select   substr(name,1,40) name,
    12            org_id,
    13            purchase_type,
    14            ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    15  from lrpo_smr_daily_purchases_v,
    16  hr_organization_units
    17  where org_id = organization_id
    18  and purchase_date between  '01-JAN-09' and '31-JAN-09'
    19  group by name,org_id, purchase_type) ly
    20  on (ty.purchase_type = ly.purchase_type and ty.org_id = ly.org_id);
    NAME                                         ORG_ID PURCHAS NET_MARGIN
    NAME                                         ORG_ID PURCHAS NET_MARGIN
    018 - LR Kuala Krai                             143 Local     -.362399
    018 - LR Kuala Krai                             143 Local   -.60869098
    and
    SQL> select * from
      2  (select   substr(name,1,40) name,
      3            org_id,
      4            purchase_type,
      5            ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
      6  from lrpo_smr_daily_purchases_v,
      7  hr_organization_units
      8  where org_id = organization_id
      9  and purchase_date between  '01-JAN-10' and '31-JAN-10'
    10  group by name,org_id, purchase_type) ty full outer join
    11  (select   substr(name,1,40) name,
    12            org_id,
    13            purchase_type,
    14            ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*1)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    15  from lrpo_smr_daily_purchases_v,
    16  hr_organization_units
    17  where org_id = organization_id
    18  and purchase_date between  '01-JAN-09' and '31-JAN-09'
    19  group by name,org_id, purchase_type) ly
    20  on (ty.purchase_type = ly.purchase_type and ty.org_id = ly.org_id);
              ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
    ERROR at line 5:
    ORA-00904: "UNIT_COST": invalid identifier
    So what happened?
    Best Regards,
    Hien
    Edited by: user12032703 on Dec 22, 2010 4:48 AM

    I can't tell from the unformatted code if there is an obvious error.
    In general, please use table aliases for clarity.
    If there isn't a syntax error, then I presume from the naming convention that lrpo_smr_daily_purchases_v is a view.
    In that case, you may be getting an ORA-00904 from some sort of view-merging issue. If so, you may get relief from the error by setting "_complex_view_merging" to false at the session level as a test. Or you may get rid of it by rewriting the query, materializing certain bits of it, or possibly using the no_merge hint.
    Also, I don't really want to second guess what you're doing with this query, but it looks like you're trying to get the net_margin for 2009 and the net_margin for 2010 on the same row by running the same base SQL twice with different date ranges.
    If so, you may want to try something like this:
    select name
    ,      org_id
    ,      purchase_type
    ,      max(case when purchase_year = '2009'
                    then net_margin
               end) net_margin_2009
    ,      max(case when purchase_year = '2010'
                    then net_margin
               end) net_margin_2010
    from (  select substr(name,1,40) name,
                   org_id,
                   purchase_type,
                   to_char(purchase_date,'YYYY') purchase_year,
                   ( sum(quantity*market_price)/sum(quantity) - sum(amount)/sum(quantity) - sum(quantity*unit_cost)/sum(quantity) - decode(purchase_type,'Foreign',0,'Local',0.1392) ) net_margin
            from   lrpo_smr_daily_purchases_v,
                   hr_organization_units
            where  org_id = organization_id
            and    purchase_date between '01-JAN-09' and '31-JAN-10'
            group by name,org_id, purchase_type, to_char(purchase_date,'YYYY'))
    group by name, org_id, purchase_type;
    You might even delay that join to hr_organization_units - assuming it's used to get the name - to the outer query in that example.
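The conditional-aggregation pivot in DomBrooks's suggestion can be seen in isolation: MAX(CASE ...) lifts each year's single value onto one output row. A self-contained sketch in SQLite via Python, against a toy pre-aggregated table (the table name margins and the sample values are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE margins (name TEXT, purchase_year TEXT, net_margin REAL);
INSERT INTO margins VALUES ('018 - LR Kuala Krai', '2009', -0.6087),
                           ('018 - LR Kuala Krai', '2010', -0.3624);
""")

# MAX ignores the NULLs produced by the non-matching CASE branches,
# so each year's value lands in its own column on a single row.
rows = conn.execute("""
SELECT name,
       MAX(CASE WHEN purchase_year = '2009' THEN net_margin END) AS net_margin_2009,
       MAX(CASE WHEN purchase_year = '2010' THEN net_margin END) AS net_margin_2010
FROM margins
GROUP BY name
""").fetchall()
print(rows)  # [('018 - LR Kuala Krai', -0.6087, -0.3624)]
```

The same pattern replaces the full outer join of two identical aggregations with one scan plus one GROUP BY.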
    Edited by: DomBrooks on Dec 22, 2010 11:27 AM

  • Extremely Challenging SQL

    Hello
    I have a table that records Commands sent from “Commanding Objects” to “Listening Objects”. I need to find when Commands were sent to the same “Listening Object” from more than one “Commanding Object” within S seconds.
    This is what the Commands table looks like:
    CMD_UID    CMDING_OBJECT    LSNING_OBJECT    CMD_TIME (TIMESTAMP)
    1                AAA           XXX           3:00:00
    2                AAA           XXX           3:00:09
    3                AAA           YYY           3:00:10
    4                BBB           XXX           3:00:12
    6                AAA           XXX           3:00:17
    8                BBB           YYY           3:00:21
    9                CCC           ZZZ           3:00:22
    10                CCC           YYY           3:00:30
    11                AAA           XXX           3:00:33
    This is what the Results should look like with S = 10:
    CONTENTION_SET    CMD_UID    CMD_TIME_DELTA_SECONDS
    1                2                0
    1                4                3
    1                6                8                                        
    2                8                0
    2               10                9
    I sure would appreciate any help!

    Ok, I have found a problem (my fault)! I have implemented Rob's SQL and run it on the real table. The problem I am having is that it gives an error: ORA-00902: invalid datatype, at the line "13 order by cmd_time". My table has CMD_TIME as a TIMESTAMP datatype.
    Then you have to change the functions to handle timestamps instead of dates.
    Like this:
    SQL> create table commands
      2  as
      3  select 1 cmd_uid, 'AAA' cmding_object, 'XXX' lsning_object, to_timestamp('01012007 03:00:00','ddmmyyyy hh24:mi:ss') cmd_time from
    dual union all
      4  select 2, 'AAA', 'XXX', to_timestamp('01012007 03:00:09','ddmmyyyy hh24:mi:ss') from dual union all
      5  select 3, 'AAA', 'YYY', to_timestamp('01012007 03:00:10','ddmmyyyy hh24:mi:ss') from dual union all
      6  select 4, 'BBB', 'XXX', to_timestamp('01012007 03:00:12','ddmmyyyy hh24:mi:ss') from dual union all
      7  select 6, 'AAA', 'XXX', to_timestamp('01012007 03:00:17','ddmmyyyy hh24:mi:ss') from dual union all
      8  select 8, 'BBB', 'YYY', to_timestamp('01012007 03:00:21','ddmmyyyy hh24:mi:ss') from dual union all
      9  select 9, 'CCC', 'ZZZ', to_timestamp('01012007 03:00:22','ddmmyyyy hh24:mi:ss') from dual union all
    10  select 10, 'CCC', 'YYY', to_timestamp('01012007 03:00:30','ddmmyyyy hh24:mi:ss') from dual union all
    11  select 11, 'AAA', 'XXX', to_timestamp('01012007 03:00:33','ddmmyyyy hh24:mi:ss') from dual union all
    12  select 12, 'AAA', 'ZZZ', to_timestamp('01012007 03:00:35','ddmmyyyy hh24:mi:ss') from dual union all
    13  select 13, 'BBB', 'ZZZ', to_timestamp('01012007 03:00:40','ddmmyyyy hh24:mi:ss') from dual union all
    14  select 14, 'AAA', 'ZZZ', to_timestamp('01012007 03:00:45','ddmmyyyy hh24:mi:ss') from dual union all
    15  select 15, 'BBB', 'ZZZ', to_timestamp('01012007 03:00:50','ddmmyyyy hh24:mi:ss') from dual union all
    16  select 16, 'AAA', 'ZZZ', to_timestamp('01012007 03:00:55','ddmmyyyy hh24:mi:ss') from dual union all
    17  select 17, 'BBB', 'ZZZ', to_timestamp('01012007 03:01:00','ddmmyyyy hh24:mi:ss') from dual union all
    18  select 18, 'AAA', 'ZZZ', to_timestamp('01012007 03:01:05','ddmmyyyy hh24:mi:ss') from dual
    19  /
    Table created.
    SQL> var S number
    SQL> exec :S := 10
    PL/SQL procedure successfully completed.
    SQL> select dense_rank() over (order by lsning_object) contention_set
      2       , cmd_uid
      3       , extract(second from time_interval) delta_seconds
      4    from ( select cmd_uid
      5                , cmding_object
      6                , lsning_object
      7                , cmd_time
      8                , ( cmd_time - first_value(cmd_time) over (partition by lsning_object order by cmd_time)
      9                  ) time_interval
    10             from ( select c.*
    11                         , min(cmding_object) over
    12                           ( partition by lsning_object
    13                             order by cmd_time
    14                             range between numtodsinterval(:S,'second') preceding and numtodsinterval(:S,'second') following
    15                           ) mincmd
    16                         , max(cmding_object) over
    17                           ( partition by lsning_object
    18                             order by cmd_time
    19                             range between numtodsinterval(:S,'second') preceding and numtodsinterval(:S,'second') following
    20                           ) maxcmd
    21                      from commands c
    22                  ) t
    23            where mincmd <> maxcmd
    24         )
    25   where time_interval < numtodsinterval(:S+1,'second')
    26   order by cmd_uid
    27  /
    CONTENTION_SET    CMD_UID DELTA_SECONDS
                 1          2             0
                 1          4             3
                 1          6             8
                 2          8             0
                 2         10             9
                 3         12             0
                 3         13             5
                 3         14            10
    8 rows selected.
    Regards,
    Rob.
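The heart of Rob's query - min and max commanding object over a sliding time window, flagging a contention wherever they differ - can be sketched outside Oracle too. Below is an illustrative translation to SQLite (3.28+ for RANGE frames with an offset) via Python, with cmd_time simplified to integer seconds past 3:00:00 and S fixed at 10; the final time_interval < S+1 filter from Rob's query is omitted because this toy data never needs it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE commands (cmd_uid INT, cmding_object TEXT,
                       lsning_object TEXT, cmd_time INT);
INSERT INTO commands VALUES
 (1,'AAA','XXX',0),(2,'AAA','XXX',9),(3,'AAA','YYY',10),(4,'BBB','XXX',12),
 (6,'AAA','XXX',17),(8,'BBB','YYY',21),(9,'CCC','ZZZ',22),(10,'CCC','YYY',30),
 (11,'AAA','XXX',33);
""")

# A row is contended if, within +/- S seconds on the same listener,
# more than one distinct commander appears (min <> max over the frame).
rows = conn.execute("""
SELECT DENSE_RANK() OVER (ORDER BY lsning_object) AS contention_set,
       cmd_uid,
       cmd_time - FIRST_VALUE(cmd_time)
                    OVER (PARTITION BY lsning_object ORDER BY cmd_time) AS delta
FROM (SELECT c.*,
             MIN(cmding_object) OVER w AS mincmd,
             MAX(cmding_object) OVER w AS maxcmd
      FROM commands c
      WINDOW w AS (PARTITION BY lsning_object ORDER BY cmd_time
                   RANGE BETWEEN 10 PRECEDING AND 10 FOLLOWING))
WHERE mincmd <> maxcmd
ORDER BY cmd_uid
""").fetchall()
print(rows)  # [(1, 2, 0), (1, 4, 3), (1, 6, 8), (2, 8, 0), (2, 10, 9)]
```

Note that the outer FIRST_VALUE runs after the WHERE filter, so delta is measured from the first contended row of each set, matching the expected output.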

  • Extremely Challenging SQL (No.2)

    All right #2 in my Extremely Challenging series.
    I have a table "X" that has status entries for two servos, SERVO_1 and SERVO_2. It also has a position value, POS. Each status-check entry has a time value in X_TIME, which is a TIMESTAMP.
    It is the job of the servos to keep POS at 0.
    I need to see each time a servo becomes ACTIVE (an Activity Event). This includes all ACTIVE rows plus the REST row just prior to the activity event starting.
    This is a sample of the data:
    X_UID  SERVO_1  SERVO_2  POS  X_TIME (TIMESTAMP)
        1  REST     REST       0  3:00:00
        2  REST     REST       0  3:00:05
        4  REST     REST       2  3:00:10
        5  ACTIVE   REST       2  3:00:15
        7  ACTIVE   REST       1  3:00:20
        8  ACTIVE   REST       0  3:00:25
        9  REST     REST       0  3:00:30
       10  REST     REST      -2  3:00:35
       11  REST     ACTIVE    -1  3:00:40
       13  REST     ACTIVE     0  3:00:45
       14  REST     REST      -1  3:00:50
       15  REST     ACTIVE    -1  3:00:55
       16  ACTIVE   ACTIVE     1  3:01:00
       17  ACTIVE   REST       2  3:01:05
       18  ACTIVE   REST       0  3:01:10
       19  REST     REST       0  3:01:15
       20  REST     REST       0  3:01:20
       21  REST     ACTIVE     0  3:01:25
       22  REST     ACTIVE     1  3:01:30
       24  REST     ACTIVE     0  3:01:35
       25  REST     REST       0  3:01:40
    This is what the Results should look like:
    EVENT_SET  X_UID  SERVO    EVENT_TIME_DELTA
        1          4  SERVO_1                 0
        1          5  SERVO_1                 5
        1          7  SERVO_1                10
        1          8  SERVO_1                15
        2         10  SERVO_2                 0
        2         11  SERVO_2                 5
        2         13  SERVO_2                10
        3         14  SERVO_2                 0
        3         15  SERVO_2                 5
        3         16  SERVO_2                10
        4         15  SERVO_1                 0
        4         16  SERVO_1                 5
        4         17  SERVO_1                10
        4         18  SERVO_1                15
        5         20  SERVO_2                 0
        5         21  SERVO_2                 5
        5         22  SERVO_2                10
        5         24  SERVO_2                15
    Please note that EVENT_SET #5 was a misfire by SERVO_2 the POS at X_UID #20 was already at 0.
    Any help would be Great!
    P.S.
    Does anyone know how to preserve formatting (spaces) when posting this stuff.
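The core row selection ("all ACTIVE rows plus the REST row just prior") falls out of a single LEAD. A hedged sketch for SERVO_1 only, in SQLite via Python, using the sample data above; numbering the events (EVENT_SET) would then be layered on with a running sum over event starts, and SERVO_2 is handled the same way.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE x (x_uid INT, servo_1 TEXT, servo_2 TEXT, pos INT, x_time TEXT);
INSERT INTO x VALUES
 (1,'REST','REST',0,'3:00:00'),(2,'REST','REST',0,'3:00:05'),
 (4,'REST','REST',2,'3:00:10'),(5,'ACTIVE','REST',2,'3:00:15'),
 (7,'ACTIVE','REST',1,'3:00:20'),(8,'ACTIVE','REST',0,'3:00:25'),
 (9,'REST','REST',0,'3:00:30'),(10,'REST','REST',-2,'3:00:35'),
 (11,'REST','ACTIVE',-1,'3:00:40'),(13,'REST','ACTIVE',0,'3:00:45'),
 (14,'REST','REST',-1,'3:00:50'),(15,'REST','ACTIVE',-1,'3:00:55'),
 (16,'ACTIVE','ACTIVE',1,'3:01:00'),(17,'ACTIVE','REST',2,'3:01:05'),
 (18,'ACTIVE','REST',0,'3:01:10'),(19,'REST','REST',0,'3:01:15'),
 (20,'REST','REST',0,'3:01:20'),(21,'REST','ACTIVE',0,'3:01:25'),
 (22,'REST','ACTIVE',1,'3:01:30'),(24,'REST','ACTIVE',0,'3:01:35'),
 (25,'REST','REST',0,'3:01:40');
""")

# A row belongs to a SERVO_1 activity event if it is ACTIVE itself,
# or if it is the REST row immediately before an ACTIVE row.
rows = conn.execute("""
SELECT x_uid
FROM (SELECT x_uid, servo_1,
             LEAD(servo_1) OVER (ORDER BY x_uid) AS next_servo_1
      FROM x)
WHERE servo_1 = 'ACTIVE' OR next_servo_1 = 'ACTIVE'
ORDER BY x_uid
""").fetchall()
print([r[0] for r in rows])  # [4, 5, 7, 8, 15, 16, 17, 18]
```

These are exactly the X_UIDs of event sets 1 and 4 in the expected results.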

    More Stuff (That I tried first):
    Hi Nicolas. I read your advice and have been working on this new request, trying out the LEAD function with success. However, when I implement it as you will see in my attempts below, performance really suffers when I run this on my real million+ row tables.
    The new request:
    Add Temperature reading into the results. These readings come from a different table and have no direct reference to the SERVO records. I need to get the temperature readings using the TIME.
    This is what I have to work with:
    Z_UID   TEMP_1    TEMP_2   Z_TIME (TIMESTAMP)
      2      89.5      89.7      2:59:53
      3      88.7      88.9      3:00:06
      4      89.1      89.1      3:00:19
      5      90.0      90.1      3:00:32
      6      90.3      90.6      3:00:45
      8      89.9      89.7      3:00:58
      9      88.9      88.1      3:01:11
    10      89.1      89.7      3:01:24
    13      90.1      90.3      3:01:37
    14      91.0      89.9      3:01:50
    16      89.8      89.9      3:02:03
    EVENT_SET   X_UID  SERVO      Z_UID  TEMP_1  TEMP_2  EVENT_TIME_DELTA
        1         4    SERVO_1      3     88.7    88.9          0
        1         5    SERVO_1      3     88.7    88.9          5
        1         7    SERVO_1      4     89.1    89.1         10
        1         8    SERVO_1      4     89.1    89.1         15
        2        10    SERVO_2      5     90.0    90.1          0
        2        11    SERVO_2      5     90.0    90.1          5
        2        13    SERVO_2      6     90.3    90.6         10
        3        14    SERVO_2      6     90.3    90.6          0
        3        15    SERVO_2      6     90.3    90.6          5
        3        16    SERVO_2      8     89.9    89.7         10
        4        15    SERVO_1      6     90.3    90.6          0
        4        16    SERVO_1      8     89.9    89.7          5
        4        17    SERVO_1      8     89.9    89.7         10
        4        18    SERVO_1      8     89.9    89.7         15
        5        20    SERVO_2      9     88.9    88.1          0
        5        21    SERVO_2     10     89.1    89.7          5
        5        22    SERVO_2     10     89.1    89.7         10
        5        24    SERVO_2     10     89.1    89.7         15
    This is one of my attempts:
    select
        event_set,
        x_uid,
        servo,
        z_uid,
        temp_1,
        temp_2,
        (x_time-first_value(x_time) over (partition by event_set order by x_uid))*24*60*60 as x_time
    from (
        select
            sum(x_t) over (order by last_time,x_uid) as event_set,
            x_uid,
            servo,
            z_uid,
            temp_1,
            temp_2,       
            x_time       
        from (
            select
                x_uid,
                servo,
                (select z.z_uid from test_z z where z.z_time between x_time and next_x_time) as z_uid,
                (select z.temp_1 from test_z z where z.z_time between x_time and next_x_time) as temp_1,
                (select z.temp_2 from test_z z where z.z_time between x_time and next_x_time) as temp_2,
                x_t,
                x_time,
                max(decode(ct,1,rn,0)) over (partition by servo order by x_uid) last_time
            from (
                select
                    x_uid,
                    'servo_1' as servo,
                    pos,
                    x_time,
                    lead(x_time,1) over (order by x_time) as next_x_time,
                    case
                        when servo_1 in ('ACTIVE','WAKE') then
                            servo_1
                        else
                            lead(servo_1) over (order by x_uid)
                    end next_servo,
                    case
                        when servo_1 = 'REST' then
                            case
                                when lag(servo_1) over (order by x_uid) in ('ACTIVE','WAKE') then
                                    0
                                else
                                    1
                            end
                    else
                        0
                    end ct,
                    case
                        when servo_1 = 'REST' then
                            1
                        else
                            0
                    end x_t,
                    row_number() over (order by x_uid) rn
                from TEST_X
                union
                select
                    x_uid,
                    'servo_2',
                    pos,
                    x_time,
                    lead(x_time,1) over (order by x_time) as next_x_time,
                    case
                        when servo_2 in ('ACTIVE','WAKE') then
                            servo_2
                        else
                            lead(servo_2) over (order by x_uid)
                    end next_servo,
                    case
                        when servo_2 = 'REST' then
                            case when lag(servo_2) over (order by x_uid) in ('ACTIVE','WAKE') then
                                    0
                                else
                                    1
                             end
                        else
                            0
                    end ct,
                    case
                        when servo_2 = 'REST' then
                            1
                        else
                            0
                    end x_t,
                    row_number() over (order by x_uid) rn
                from
                    TEST_X)
            where
                next_servo in ('ACTIVE','WAKE')))
            order by
                event_set,
                x_uid;
    This is what I got back:
    EVENT_SET  X_UID   SERVO    Z_UID  TEMP_1  TEMP_2  X_TIME                  
        1        4     servo_1                         +00 00:00:00.000000     
        1        5     servo_1    4     89.1    89.1   +05 00:00:00.000000     
        1        7     servo_1                         +10 00:00:00.000000     
        1        8     servo_1                         +15 00:00:00.000000     
        2       10     servo_2                         +00 00:00:00.000000     
        2       11     servo_2    6     90.3    90.6   +05 00:00:00.000000     
        2       13     servo_2    6     90.3    90.6   +10 00:00:00.000000     
        3       14     servo_2                         +00 00:00:00.000000     
        3       15     servo_2    8     89.9    89.7   +05 00:00:00.000000     
        3       16     servo_2                         +10 00:00:00.000000     
        4       15     servo_1    8     89.9    89.7   +00 00:00:00.000000     
        4       16     servo_1                         +05 00:00:00.000000     
        4       17     servo_1                         +10 00:00:00.000000     
        4       18     servo_1    9     88.9    88.1   +15 00:00:00.000000     
        5       20     servo_2   10     89.1    89.7   +00 00:00:00.000000     
        5       21     servo_2                         +05 00:00:00.000000     
        5       22     servo_2                         +10 00:00:00.000000     
        5       24     servo_2   13     90.1    89.9   +15 00:00:00.000000     
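One way to cut the per-row cost of the three correlated temperature subqueries is to resolve the matching z_uid once per row and join back for both temperatures. The match rule is read here as "latest Z reading at or before the X time", which is what the expected output shows. A sketch of that shape in SQLite via Python, with times simplified to integer seconds and only a few invented sample rows (table names mirror the post's test_x/test_z):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_z (z_uid INT, temp_1 REAL, temp_2 REAL, z_time INT);
INSERT INTO test_z VALUES (3, 88.7, 88.9, 6), (4, 89.1, 89.1, 19),
                          (5, 90.0, 90.1, 32);
CREATE TABLE test_x (x_uid INT, x_time INT);
INSERT INTO test_x VALUES (4, 10), (5, 15), (7, 20);
""")

# Resolve the reading once per X row (one ordered probe with LIMIT 1),
# then join back by key to pull both temperatures in a single step.
rows = conn.execute("""
SELECT x.x_uid, z.z_uid, z.temp_1, z.temp_2
FROM test_x x
LEFT JOIN test_z z
  ON z.z_uid = (SELECT z2.z_uid
                FROM test_z z2
                WHERE z2.z_time <= x.x_time
                ORDER BY z2.z_time DESC
                LIMIT 1)
ORDER BY x.x_uid
""").fetchall()
print(rows)  # [(4, 3, 88.7, 88.9), (5, 3, 88.7, 88.9), (7, 4, 89.1, 89.1)]
```

On Oracle the LIMIT-1 lookup would be spelled differently (for example with MAX(...) KEEP (DENSE_RANK LAST)), but the idea - one keyed probe instead of three full correlated scans - is the same.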

  • SQL challenge: avoid this self-join!!!

    Here's something of a challenging SQL problem. I'm trying to persist an arbitrary number of attributes for an object. I am trying to do this in a regular relational table both for performance and to make future upgrades easier.
    The problem is that I don't know what SQL cleverness I can use to only scan the ATTR table once.
    Does Oracle (or for that matter the SQL standard) have some way to help me? Here's a simplified example:
    Consider a table ATTR with columns OID, ATTR_ID, ATTR_VAL. Unique key is OID, ATTR_ID. Assume any other indexes that you want, but be aware that ATTR_VAL is modestly dynamic.
    I can easily look for a OID for any one ATTR_ID, ATTR_VAL pair:
    SELECT oid FROM attr
    WHERE attr_id = 1 AND attr_val = :b1
    I can also easily do this looking at multiple attributes when I only need one condition to be met with an OR, as:
    SELECT DISTINCT oid FROM attr
    WHERE (attr_id = 1 AND attr_val = :b1)
    OR (attr_id = 31 AND attr_val = :b2)
    But how to handle the condition where I want to have the two ATTR_ID, ATTR_VAL pairs "and-ed" together? I know that I can do this:
    SELECT oid FROM
    (SELECT oid FROM attr WHERE attr_id = 1 AND attr_val = :b1)
    UNION
    (SELECT oid FROM attr WHERE attr_id = 31 AND attr_val = :b2)
    But this will necessitate looking at ATTR twice. This is maybe okay if there are only two conditions, but what about when there might be 10 or even 50? At some point this technique becomes unacceptable.
    Clearly:
    SELECT DISTINCT oid FROM attr
    WHERE (attr_id = 1 AND attr_val = :b1)
    AND (attr_id = 31 AND attr_val = :b2)
    won't work (each row has but one ATTR_ID).
    The following will end up doing the same basic thing as the UNION (it avoids a sort so is preferable):
    SELECT oid FROM attr a1, attr a2
    WHERE a1.oid = a2.oid
    AND (a1.attr_id = 1 AND a1.attr_val = :b1)
    AND (a2.attr_id = 31 AND a2.attr_val = :b2)
    but the fundamental problem of scanning ATTR twice remains.
    What cleverness can I apply here to only scan ATTR once?
    Thanks,
    :-Phil
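A standard single-scan answer to AND-ing EAV predicates is to OR them in the WHERE clause and then require the full match count per oid (sometimes called relational division). A sketch in SQLite via Python, with invented toy data; since (oid, attr_id) is unique, each row can satisfy at most one branch, so an oid matching both predicates contributes exactly 2 rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attr (oid INT, attr_id INT, attr_val TEXT,
                   PRIMARY KEY (oid, attr_id));
INSERT INTO attr VALUES (100, 1, 'x'), (100, 31, 'y'),
                        (200, 1, 'x'),
                        (300, 1, 'z'), (300, 31, 'y');
""")

# One scan: OR the pairs, then keep only oids that matched all of them.
rows = conn.execute("""
SELECT oid
FROM attr
WHERE (attr_id = 1 AND attr_val = :b1)
   OR (attr_id = 31 AND attr_val = :b2)
GROUP BY oid
HAVING COUNT(*) = 2
""", {"b1": "x", "b2": "y"}).fetchall()
print(rows)  # [(100,)]
```

With N conditions the HAVING threshold becomes N; the table is still scanned only once, however many pairs are and-ed together.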

    Another way of building a dynamic in-list from a single string is shown at AskTom at this link http://asktom.oracle.com/pls/ask/f?p=4950:8:2019864::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:210612357425,%7Bvarying%7D%20and%20%7Belements%7D%20and%20%7Bin%7D%20and%20%7Bin%7D%20and%20%7Blist%7D
    A modified version for two columns:
    Create or replace type in_list as object (col1 varchar2(20), col2 varchar2(30));
    Create or replace type in_list_tab as table of in_list;
    Create or replace function fn_in_list( p_string in varchar2) return in_list_tab
    as
    l_string long default p_string || ',';
    l_data in_list_tab := in_list_tab();
    pos number;
    begin
    pos := 0;
    loop
    exit when l_string is null;
    pos := instr( l_string, ',' );
    l_data.extend;
    l_data(l_data.count) := in_list('','');
    l_data(l_data.count).col1 := ltrim(rtrim(substr(l_string, 1, pos - 1)));
    l_string := substr( l_string, pos + 1 );
    if l_string is null
    then
         l_data.Trim;
         exit;
    end if;
    pos := instr( l_string, ',' );
    l_data(l_data.count).col2 := ltrim(rtrim(substr(l_string, 1, pos - 1)));
    l_string := substr( l_string, pos + 1 );
    end loop;
    return l_data;
    end;
    create table testII (cola varchar2(10), colb varchar2(30));
    insert into testII values ('abc',1);
    insert into testII values ('abc',2);
    insert into testII values ('def',1);
    insert into testII values ('def',2);
    commit;
    var b1 varchar2(200);
    exec :b1:='abc,1,def,2';
    select * from testII where (cola,colb) in
    (select col1, col2 from THE ( select cast(fn_in_list(:b1) as in_list_tab) from dual));
    To handle cases like attr_id = 41 and attr_val > :b3, I would say dynamic SQL.

  • Query to test

    I want to write complex queries to test my knowledge of SQL and PL/SQL.
    Is there any online site that provides sample data and sample queries to write, or challenging SQL query puzzles?
    By complex I mean queries with lots of lines.
    Thanks

    Well, if you are looking for puzzles, go [url http://oraqa.com/author/frank-zhou/]to this site[/url], click on every link having the word "puzzle", and go to the sites initially referred to by Frank Zhou - and don't cheat ;-)
    Best regards
    Maxim

  • SQL challenge

    Alright, I hate doing this, but I need some help...
    Given is the following data in table 1:
    ID
    1
    2
    3
    4
    Now, ID 1 is connected with ID 2, and ID 3 is connected with ID 4. This result comes from the following table, which shows the relationships cartesian-style:
    ID1 ID2 Connected
    1 2 Y
    1 3 N
    1 4 N
    2 3 N
    2 4 N
    3 4 Y
    The challenge: I need a query to show the resulting output groups. In this case, 2 groups: {1, 2} vs {3, 4}. The query needs to be built so that if we also had IDs 5 and 6 with no connections at all, each would be placed into its own new separate group...
    I don't think the "CONNECT BY" statement would do the trick here, since all numbers are connected, not just the linked ones. It's just that the "Y" or "N" indicates their relationship...
    Hope you guys can push me in the right direction... I've tried a lot :)
    Oh, by the way: it would be nice to solve this with SQL rather than PL/SQL, for performance reasons.

    What are "normalized cartesian relations"?
    Laurent's solution may be correct if the OP's data are transitively closed. If they are not, then you need to build the transitively closed relation first, and the only way to accomplish that is leveraging a "connect by" query.
    For mathematically inclined, here is the essence of the problem: Given a binary relation R(x,y), first, build an equivalence relation out of it. Then identify each equivalence class with a distinct number. This subject is covered in chapter 6 of my book (http://www.bookpool.com/sm/0977671542). Here is an extract:
    ======================================================
    With proper graph terminology the question can be formulated in just one line:
    Find a number of connected components in a graph.
    (The problem in the book counts the connected components, rather than identifies them).
    Connected component of a graph is a set of nodes reachable from each other. A node is reachable from another node if there is an undirected path between them.
    Figure 6.4: A graph with two connected components.
    Reachability is an equivalence relation: it’s reflexive, symmetric, and transitive. Given a graph, we formally obtain the reachability relation by closing the Edges relation to become reflexive, symmetric, and transitive (fig. 6.5).
    Figure 6.5: Reachability as an equivalence relation: graph from fig. 6.4 symmetrically and transitively closed.
    Returning to the problem of finding the number of connected components, let's assume that we have already calculated the reachability relation EquivalentNodes somehow. Then we just select the smallest node from each component. Informally,
    Select node(s) such that there is no node with smaller label reachable from it. Count them.
    Formally:
    select count(distinct tail) from EquivalentNodes e
    where not exists (
        select * from EquivalentNodes ee
        where ee.head < e.tail and e.tail = ee.tail
    )
    --------------------Soapbox----------------------
    Equivalence Relation and Group By (cont)
    In one of the chapter 1 sidebars we have attributed the incredible efficiency of the group by operator to its proximity to one of the most fundamental mathematical constructions – the equivalence relation. There are two ways to define an equivalence relation. The first one is leveraging the existing equality operator on a domain of values. The second way is defining an equivalence relation explicitly, as a set of pairs. The standard group by operator is not able to understand an equivalence relation defined explicitly – this is the essence of the problem, which we just solved.
    Being able to query the number of connected components earned us an unexpected bonus: we can redefine a connected graph as a graph that has a single connected component. Next, a connected graph with N nodes and N-1 edges must be a tree. Thus, counting nodes and edges together with transitive closure is another opportunity to enforce tree constraint.
    Now that we established some important graph closure properties, we can move on to transitive closure implementations. Unfortunately, our story has to branch here, since database vendors approached hierarchical query differently.
    Message was edited by:
    Vadim Tropashko
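
    As a concrete sketch of the "connect by" approach described above, assuming the OP's rows live in a table named links(id1 NUMBER, id2 NUMBER, linked CHAR(1)) (the table and column names are my guesses, not from the thread), each node can be labeled with the smallest node that can reach it. Note that NOCYCLE traversal of a symmetric edge set can explode combinatorially, so this is only practical for small relations:

    ```sql
    -- Sketch only: links(id1, id2, linked) is an assumed layout of the OP's data.
    WITH edges AS (
            -- keep only real links and make the relation symmetric
            SELECT id1, id2 FROM links WHERE linked = 'Y'
            UNION
            SELECT id2, id1 FROM links WHERE linked = 'Y'
         ),
         nodes AS (
            -- every id mentioned anywhere, so unlinked ids form their own groups
            SELECT id1 AS id FROM links
            UNION
            SELECT id2 FROM links
         )
    SELECT id, MIN(root) AS group_id   -- smallest reachable node labels the group
    FROM (
            SELECT CONNECT_BY_ROOT id1 AS root, id2 AS id
            FROM   edges
            CONNECT BY NOCYCLE PRIOR id2 = id1
            UNION ALL
            SELECT id, id FROM nodes   -- each node trivially reaches itself
         )
    GROUP BY id
    ORDER BY group_id, id;
    ```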

  • Challenging Lexical Reference and SQL Problem

    Hi,
    I am trying to build a product hierarchy master report but I have a challenging problem at hand. To generate the report I need an SQL statement with lexical references and before parameter form. Basically, my SQL statement looks something like this:
    Select &Columns
    From &Tables
    Where &Criteria
    Before Parameter:
    If X='1',
    &Columns := A.NAME, B.NAME, C.NAME
    &Tables:=A,B,C
    &Criteria:= A.CODE=B.CODE, B.CODE=C.CODE
    If X='2',
    &Columns := A.NAME, B.NAME, C.NAME, D.NAME
    &Tables:=A,B,C,D
    &Criteria:= A.CODE=B.CODE, B.CODE=C.CODE,C.CODE=D.CODE
    If X='3',
    &Columns := A.NAME, B.NAME, C.NAME, D.NAME, E.NAME
    &Tables:=A,B,C,D,E
    &Criteria:= A.CODE=B.CODE, B.CODE=C.CODE,C.CODE=D.CODE, C.CODE=E.CODE
    I need to build a group left report and group by A,B and up to E if X='3'.
    Any idea how can I accomplish this? Any kind of help or advice is urgently needed. Thank you in advance.

    Siak,
    build a kind of maximum model. Set the initial values of your parameters to the maximum (for example, for columns:
    a.name as aname, b.name as bname, ..., e.name as ename)
    and build a layout for this. Then, in a before-report trigger, set the parameters you want and fill the unneeded ones with dummy values. For example, if X=1 then
    :columns := 'a.name as aname, b.name as bname, c.name as cname, ''x'' as dname, ''x'' as ename'
    In the layout, suppress the output of the unused fields with a format trigger, or use three different layouts depending on your X. I've not tested it, but it's a chance ...
    regards
    Rainer
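
    A rough sketch of that suggestion as an Oracle Reports before-report trigger (untested; the parameter names :x, :columns, :tables, :criteria and the single-letter table names come from the question, everything else is an assumption):

    ```sql
    -- Sketch only: set the lexical parameters, padding unused columns with dummies.
    function BeforeReport return boolean is
    begin
      if :x = '1' then
        :columns  := 'a.name as aname, b.name as bname, c.name as cname, '
                  || '''x'' as dname, ''x'' as ename';
        :tables   := 'a, b, c';
        :criteria := 'a.code = b.code and b.code = c.code';
      elsif :x = '2' then
        :columns  := 'a.name as aname, b.name as bname, c.name as cname, '
                  || 'd.name as dname, ''x'' as ename';
        :tables   := 'a, b, c, d';
        :criteria := 'a.code = b.code and b.code = c.code and c.code = d.code';
      else  -- :x = '3'
        :columns  := 'a.name as aname, b.name as bname, c.name as cname, '
                  || 'd.name as dname, e.name as ename';
        :tables   := 'a, b, c, d, e';
        :criteria := 'a.code = b.code and b.code = c.code and c.code = d.code '
                  || 'and c.code = e.code';
      end if;
      return (TRUE);
    end;
    ```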

  • How do i write this in sql ? (another headcracker challenging  report)

    hi guys!,
    I need to create / generate a report. I intend to do all this with pure SQL alone.
    Been cracking my head for days but to no avail.
    Hope you gurus here will straighten me out.
    Here it goes. i Have a table
    TABLE USAGE_REPORT
    Date DATE -- everyday's date
    BalanceCF NUMBER -- an initial start amount or ( balancebf)
    Topup_amount NUMBER -- amount of topup that day
    Usage1 NUMBER -- amount of $ use on certain prod
    Usage2 NUMBER -- amount of $ use on certain prod
    BalanceBF NUMBER -- BalanceCF + topup - usage1 -usage2 (which is also the next date BalanceCF)
    Example1
    please see this link
    http://img9.imageshack.us/img9/708/88149028.gif
    assuming my SQL is
    WITH dates AS (
         SELECT TRUNC(SYSDATE) + LEVEL dmy
         FROM dual
         CONNECT BY LEVEL < 366
    ),
    topup AS (
         SELECT TRUNC(purchase_date) dated, SUM(payment_amount) topup_amount
         FROM purchase
         GROUP BY TRUNC(purchase_date)
    ),
    usage1 AS (
         SELECT TRUNC(connect_date) dated, SUM(charged_amount) usage1_amount
         FROM tab1
         WHERE prod_id = 'xxx'
         GROUP BY TRUNC(connect_date)
    ),
    usage2 AS (
         SELECT TRUNC(connect_date) dated, SUM(charged_amount) usage2_amount
         FROM tab2
         WHERE prod_id = 'yyy'
         GROUP BY TRUNC(connect_date)
    )
    SELECT *
    FROM dates d
    LEFT OUTER JOIN topup t ON (d.dmy = t.dated)
    LEFT OUTER JOIN usage1 u1 ON (d.dmy = u1.dated)
    LEFT OUTER JOIN usage2 u2 ON (d.dmy = u2.dated);
    however
    q1) how do i start 'initiate' the 1st row
    BALANCECF so that i can do the calculation
    of
    BALANCECF + TOPUP - USAGE1 - USAGE2 = BALANCEBF
    q2) how do i bring the value of BALANCEBF into the 2nd row of BALANCECF to do further calculation ?
    q3) does it have something to do with connect by? a parent-child relationship?
    q4) in short how do i make it look like the attach pic above?
    Please help!
    Best Regards,
    Noob

    I am using 200 as initial balance_cf. You did not provide sample data, so code below is not tested:
    WITH dates as (
                   SELECT  TRUNC(SYSDATE) + level dmy,
                           200 balance_cf
                     FROM  DUAL
                     CONNECT BY level < 366
                  ),
         topUP as (
                   SELECT  trunc(purchase_date) dated,
                           sum(payment_amount) topup_amount
                     FROM  purchase
                     GROUP by trunc(purchase_date)
                  ),
        Usage1 as (
                   SELECT  trunc(connect_date) dated,
                           sum(charged_amount) usage1_amount
                     FROM  tab1
                     WHERE prod_id = 'xxx'
                     GROUP BY trunc(connect_date)
                  ),
        Usage2 as (
                   SELECT  trunc(connect_date) dated,
                           sum(charged_amount) usage2_amount
                     FROM  tab2
                     WHERE prod_id = 'yyy'
                     GROUP BY trunc(connect_date)
                  )
    SELECT  dmy,
            balance_cf + NVL(SUM(NVL(topup_amount,0) - NVL(usage1_amount,0) - NVL(usage2_amount,0))
                             OVER (ORDER BY dmy ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING),0) balance_cf,
            topup_amount,
            usage1_amount,
            usage2_amount,
            balance_cf + SUM(NVL(topup_amount,0) - NVL(usage1_amount,0) - NVL(usage2_amount,0))
                         OVER (ORDER BY dmy) balance_bf
      FROM  DATES D LEFT OUTER JOIN TOPUP T ON (D.DMY = T.DATED)
                    LEFT OUTER JOIN USAGE1 U1 ON (D.DMY = U1.DATED)
                    LEFT OUTER JOIN USAGE2 U2 ON (D.DMY = U2.DATED)
      ORDER BY dmy
    /
    SY.

  • SQL Query (challenge)

    Hello,
    I have 2 tables of events E1 and E2
    E1: (Time, Event), E2: (Time, Event)
    Where the columns Time in both tables are ordered.
    Ex.
       E1: ((1, a) (2, b) (4, d) (6, c))
       E2: ((2, x) (3, y) (6, z))
    To find the events of both tables at the same time, the obvious approach is a join between E1 and E2:
    Q1 -> select e1.Time, e1.Event, e2.Event from E1 e1, E2 e2 where e1.Time=e2.Time;
    The result of the query is:
    ((2, b, x) (6, c, z))
    Given that there is no indexes for this tables, an efficient execution plan can be a hash join (under conditions mentioned in Oracle Database Performance Tuning Guide Ch 14).
    Now, the hash join suffers from a locality problem if the hash table is large and does not fit in memory; it may happen that one block of data is read into memory and swapped out frequently.
    Given that the Time columns are sorted in ascending order, I find the following algorithm, a well-known idea in the literature, appropriate for this problem. The algorithm is in pseudocode close to PL/SQL, for simplicity (I hope it is still clear):
    -- start algorithm
    open cursors for e1 and e2
    loop
      if e1.Time = e2.Time then
         pipe row (e1.Time, e1.Event, e2.Event);
         fetch next e1 record
         exit when notfound
         fetch next e2 record
          exit when notfound
      else
         if e1.Time < e2.Time then
            fetch next e1 record
            exit when notfound
         else
            fetch next e2 record
            exit when notfound
         end if;
      end if;
    end loop
    -- end algorithm
    As you can see, the algorithm does not suffer from the locality issue, since it iterates sequentially over the inputs.
    Now the problem: the algorithm shown above hints at the use of a pipelined function to implement it in PL/SQL, but that is slow compared to the hash join in the implicit cursor of the query shown above (Q1).
    Is there a plain SQL query that implements this algorithm? The objective is to beat the hash join of the query (Q1), so queries that use sorting are not accepted.
    A difficulty I found is that explicit cursors are much slower than implicit ones (SQL queries).
    Example: for a large table (2.5 million records)
    create table mytable (x number);
    declare
      type t_num_tab is table of number;
      l_data t_num_tab;
      c      sys_refcursor;
    begin
      open c for 'select 1 from mytable';
      fetch c bulk collect into l_data;
      close c;
      dbms_output.put_line('count = '||l_data.count);
    end;
    runs about 5 times slower than
    select count(*) from mytable;
    I do not understand why this should be the case. I have read that it may be because PL/SQL is interpreted, but I think this does not explain the whole issue. Maybe it is because the fetch copies data from the SQL engine's space into PL/SQL's space, and this takes a long time.

    Hi
    A correction in the algorithm:
    -- start algorithm
    open cursors for e1 and e2
    fetch next e1 record
    fetch next e2 record
    loop
      exit when e1%notfound
      exit when e2%notfound
      if e1.Time = e2.Time then
         pipe row (e1.Time, e1.Event, e2.Event);
         fetch next e1 record
         fetch next e2 record
      else
         if e1.Time < e2.Time then
            fetch next e1 record
         else
            fetch next e2 record
         end if;
      end if;
    end loop
    -- end algorithm
    Best regards
    Taoufik
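
    For reference, the corrected algorithm could be written as an actual pipelined table function along these lines (untested sketch; the object type names, the function name, and the column name time_key are assumptions — the OP's column Time is renamed to avoid keyword trouble):

    ```sql
    -- Sketch only: a pipelined merge of two time-ordered event streams.
    CREATE TYPE t_match AS OBJECT (time_key NUMBER, event1 VARCHAR2(30), event2 VARCHAR2(30));
    /
    CREATE TYPE t_match_tab AS TABLE OF t_match;
    /
    CREATE OR REPLACE FUNCTION merge_events RETURN t_match_tab PIPELINED IS
      CURSOR c1 IS SELECT time_key, event FROM e1 ORDER BY time_key;
      CURSOR c2 IS SELECT time_key, event FROM e2 ORDER BY time_key;
      r1 c1%ROWTYPE;
      r2 c2%ROWTYPE;
    BEGIN
      OPEN c1;
      OPEN c2;
      FETCH c1 INTO r1;
      FETCH c2 INTO r2;
      WHILE c1%FOUND AND c2%FOUND LOOP
        IF r1.time_key = r2.time_key THEN
          PIPE ROW (t_match(r1.time_key, r1.event, r2.event));
          FETCH c1 INTO r1;
          FETCH c2 INTO r2;
        ELSIF r1.time_key < r2.time_key THEN
          FETCH c1 INTO r1;   -- advance the stream that is behind
        ELSE
          FETCH c2 INTO r2;
        END IF;
      END LOOP;
      CLOSE c1;
      CLOSE c2;
      RETURN;
    END;
    /
    -- Usage: SELECT * FROM TABLE(merge_events);
    ```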

  • SQL Challenge - Returning count=0 for non-existing values

    Hello there,
    I have a question about our requirement and an SQL query. I have posted this to some email groups but got no answer yet.
    Here is the test case:
    SQL> conn ...
    Connected.
    -- create the pattern table and populate
    SQL> create table pattern(id number, keydescription varchar2(50));
    Table created.
    SQL> insert into pattern values(1,'hata1');
    1 row created.
    SQL> insert into pattern values(2,'hata2');
    1 row created.
    SQL> insert into pattern values(3,'hata3');
    1 row created.
    SQL> insert into pattern values(4,'hata4');
    1 row created.
    SQL> insert into pattern values(5,'hata5');
    1 row created.
    SQL> select * from pattern;
    ID KEYDESCRIPTION
    1 hata1
    2 hata2
    3 hata3
    4 hata4
    5 hata5
    SQL> commit;
    Commit complete.
    -- create the messagetrack and populate
    SQL> create table messagetrack(pattern_id number, realdate date);
    Table created.
    SQL> insert into messagetrack values(1,to_date('26/08/2007 13:00:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(1,to_date('26/08/2007 13:05:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(2,to_date('26/08/2007 13:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(3,to_date('26/08/2007 14:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(4,to_date('26/08/2007 15:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(1,to_date('26/08/2007 15:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from messagetrack;
    PATTERN_ID REALDATE
    1 26-AUG-07
    1 26-AUG-07
    2 26-AUG-07
    3 26-AUG-07
    4 26-AUG-07
    1 26-AUG-07
    6 rows selected.
    Now, we have this simple query:
    SQL> select p.KeyDescription as rptBase , to_char( mt.realdate,'dd') as P1 , to_char(mt.realdate,'HH24') as P2, count(*) as countX
    2 from messageTrack mt, Pattern p
    3 Where mt.realDate >= to_date('26/08/2007 13:00:00','dd/MM/yyyy hh24:MI:ss')
    4 and mt.realDate <= to_date('27/08/2007 20:00:00','dd/MM/yyyy hh24:MI:ss')
    5 and mt.pattern_id=p.id
    6 group by p.KeyDescription, to_char(mt.realdate,'dd'), to_char( mt.realdate,'HH24')
    7 order by p.KeyDescription, to_char(mt.realdate,'dd'), to_char(mt.realdate,'HH24');
    RPTBASE P1 P2 COUNTX
    hata1 26 13 2
    hata1 26 15 1
    hata2 26 13 1
    hata3 26 14 1
    hata4 26 15 1
    But the result we need should contain the pattern values (hata1, hata2, hata3 and hata4) for each time interval (hour), although there might be no records for some patterns in some hours.
    The result for our test case should look like this:
    RPTBASE P1 P2 COUNTX
    hata1 26 13 2
    hata1 26 14 0
    hata1 26 15 0
    hata2 26 13 1
    hata2 26 14 0
    hata2 26 15 0
    hata3 26 13 0
    hata3 26 14 1
    hata3 26 15 0
    hata4 26 13 0
    hata4 26 14 0
    hata4 26 15 1
    Our version is 10.2.0.2
    On my discussions some said model clause may be used, but i don't know model clause much and can't imagine how to use.
    You can download the test case code above to reproduce from:
    http://www.bhatipoglu.com/files/query1.txt
    You can see the output above more clearly(monospace font) on:
    http://www.bhatipoglu.com/files/query1_output.txt
    Additionally, I want to state that, in the resulting table, we don't want all the patterns (hata1, hata2, hata3, hata4 and hata5). We just want the ones that exist in the messageTrack table (hata1, hata2, hata3 and hata4), as you see in the result.
    Thanks in advance.

    Here is an attempt with the Model Clause:
    Edit: I should mention that I created a view out of your original query.
    SELECT rptbase
          ,day
          ,hour
          ,countx
    FROM demoV
      MODEL
        DIMENSION BY (rptbase, day, hour)
        MEASURES (countx)
          RULES(countx[
                        FOR rptbase IN (SELECT rptbase
                                        FROM demoV)
                        ,FOR day IN    (SELECT day
                                        FROM demoV)
                        ,FOR hour FROM 13 TO 15 INCREMENT 1
                        ] =
                        NVL(countx[CV(rptbase),CV(day),CV(hour)],0)
               )
    ORDER BY 1,2,3;
    Which produces the following:
    RPTBASE    DAY   HOUR  COUNTX
    hata1       26     13       2
    hata1       26     14       0
    hata1       26     15       1
    hata2       26     13       1
    hata2       26     14       0
    hata2       26     15       0
    hata3       26     13       0
    hata3       26     14       1
    hata3       26     15       0
    hata4       26     13       0
    hata4       26     14       0
    hata4       26     15       1
    Note my hata1 26 15 has a countx of 1 (I believe that this is correct and that your sample result is incorrect; if this is not the case, please explain why it should be 0).
    Message was edited by:
    JS1
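
    On 10g there is also an alternative that avoids the MODEL clause: a partitioned outer join against the list of needed hours. A sketch against the same demoV view as in the reply above (untested; column names taken from that reply):

    ```sql
    -- Sketch only: densify demoV per (rptbase, day) with a partitioned outer join.
    WITH hours AS (
      SELECT 12 + LEVEL AS hour FROM dual CONNECT BY LEVEL <= 3  -- hours 13..15
    )
    SELECT v.rptbase,
           v.day,
           h.hour,
           NVL(v.countx, 0) AS countx
    FROM   demoV v PARTITION BY (v.rptbase, v.day)
           RIGHT OUTER JOIN hours h ON (v.hour = h.hour)
    ORDER  BY 1, 2, 3;
    ```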

  • A challenging dynamic SQL query problem

    hi All,
    I have a very interesting problem at work:
    We have this particular table defined as follows :
    CREATE TABLE sales_data (
    sales_id NUMBER,
    sales_m01 NUMBER,
    sales_m02 NUMBER,
    sales_m03 NUMBER,
    sales_m04 NUMBER,
    sales_m05 NUMBER,
    sales_m06 NUMBER,
    sales_m07 NUMBER,
    sales_m08 NUMBER,
    sales_m09 NUMBER,
    sales_m10 NUMBER,
    sales_m11 NUMBER,
    sales_m12 NUMBER,
    sales_prior_yr NUMBER );
    The columns 'sales_m01 ..... sales_m12' represents aggregated monthly sales, in which 'sales_m01' translates to 'sales for the month of january, january being the first month, 'sales_m02' sales for the month of february, and so on.
    The problem I face is that we have a project which requires that a parameter be passed to a stored procedure which stands for the month number which is then used to build a SQL query with the following required field aggregations, which depends on the parameter passed :
    Sample 1 : parameter input: 4
    Dynamically-built SQL query should be :
    SELECT
    SUM(sales_m04) as CURRENT_SALES,
    SUM(sales_m01+sales_m02+sales_m03+sales_m04) SALES_YTD
    FROM
    sales_data
    WHERE
    sales_id = '0599768';
    Sample 2 : parameter input: 8
    Dynamically-built SQL query should be :
    SELECT
    SUM(sales_m08) as CURRENT_SALES,
    SUM(sales_m01+sales_m02+sales_m03+sales_m04+
    sales_m05+sales_m06+sales_m07+sales_m08) SALES_YTD
    FROM
    sales_data
    WHERE
    sales_id = '0599768';
    So in a sense, the contents of SUM(sales_m01 ....n) would vary depending on the parameter passed, which should be a number between 1 .. 12 which corresponds to a month, which in turn corresponds to an actual field range on the table itself. The resulting dynamic query should only aggregate those columns/fields in the table which falls within the range given by the input parameter and disregards all the remaining columns/fields.
    Any solution is greatly appreciated.
    Thanks.

    Hi, another, simpler approach is to use DECODE.
    Try it like this:
    SQL> CREATE TABLE sales_data (
      2  sales_id NUMBER,
      3  sales_m01 NUMBER,
      4  sales_m02 NUMBER,
      5  sales_m03 NUMBER,
      6  sales_m04 NUMBER,
      7  sales_m05 NUMBER,
      8  sales_m06 NUMBER,
      9  sales_m07 NUMBER,
    10  sales_m08 NUMBER,
    11  sales_m09 NUMBER,
    12  sales_m10 NUMBER,
    13  sales_m11 NUMBER,
    14  sales_m12 NUMBER,
    15  sales_prior_yr NUMBER );
    Table created.
    SQL> select * from sales_data;
      SALES_ID  SALES_M01  SALES_M02  SALES_M03  SALES_M04  SALES_M05  SALES_M06  SALES_M07  SALES_M08  SALES_M09  SALES_M10  SALES_M11  SALES_M12 SALES_PRIOR_YR
             1        124        123        145        146        124        126        178        189        456        235        234        789          19878
             2        124        123        145        146        124        126        178        189        456        235        234        789          19878
             1        100        200        300        400        500        150        250        350        450        550        600        700          10000
             1        101        201        301        401        501        151        251        351        451        551        601        701          10000
    Now, for your requirement, see the query below; if there is some problem, then tell.
    SQL> SELECT sum(sales_m&input_data), DECODE (&input_data,
      2                 1, SUM (sales_m01),
      3                 2, SUM (sales_m01 + sales_m02),
      4                 3, SUM (sales_m01 + sales_m02 + sales_m03),
      5                 4, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04),
      6                 5, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04 + sales_m05),
      7                 6, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06),
      8                 7, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07),
      9                 8, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08),
    10                 9, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09),
    11                 10,SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09+sales_m10),
    12                 11,SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09+sales_m10+sales_m11),
    13                 12,SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09+sales_m10+sales_m11+sales_m12)
    14                ) total
    15    FROM sales_data
    16   WHERE sales_id = 1;
    Enter value for input_data: 08
    Enter value for input_data: 08
    old   1: SELECT sum(sales_m&input_data), DECODE (&input_data,
    new   1: SELECT sum(sales_m08), DECODE (08,
    SUM(SALES_M08)      TOTAL
               890       5663
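
    Since the original question asked for a stored procedure taking the month as a parameter, here is a rough sketch using native dynamic SQL instead of DECODE (untested; the procedure name and OUT parameters are my inventions):

    ```sql
    -- Sketch only: builds SUM(sales_m01+...+sales_mNN) for the requested month.
    CREATE OR REPLACE PROCEDURE get_sales (
      p_month    IN  PLS_INTEGER,              -- 1 .. 12
      p_sales_id IN  sales_data.sales_id%TYPE,
      p_current  OUT NUMBER,
      p_ytd      OUT NUMBER
    ) IS
      l_cols VARCHAR2(500);
      l_sql  VARCHAR2(1000);
    BEGIN
      -- assemble "sales_m01+sales_m02+...+sales_mNN" for the YTD sum
      FOR i IN 1 .. p_month LOOP
        l_cols := l_cols || CASE WHEN i > 1 THEN '+' END
                         || 'sales_m' || TO_CHAR(i, 'FM00');
      END LOOP;
      l_sql := 'SELECT SUM(sales_m' || TO_CHAR(p_month, 'FM00') || '), '
            || 'SUM(' || l_cols || ') FROM sales_data WHERE sales_id = :1';
      EXECUTE IMMEDIATE l_sql INTO p_current, p_ytd USING p_sales_id;
    END;
    /
    ```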

  • Mild challenge -pivoting *multiple* columns per row using only SQL

    Hello All,
    I'm in the process of learning the various pivoting techniques available
    in SQL, and I am becoming more familiar with the decode,function,group-by
    technique seen in many examples on these forums. However, I've got a case
    where I need to pivot out 3 different columns for 3 rows of data where the
    value of a different column is driving whether or not those columns are pivoted.
    I know that last sentence was as clear as mud so I'll show you/provide the simple
    scripts and data, and then I'll elaborate a little more beneath.
    create table temp_timeline (
    mkt_id varchar2(10),
    event_id number(8),
    event_type varchar2(3),
    mod_due_date date,
    cur_due_date date,
    act_due_date date
    );
    insert into temp_timeline values('DSIM6',51,'S1','NOV-13-06','NOV-13-06',NULL);
    insert into temp_timeline values('DSIM6',51,'S2','DEC-20-06','DEC-20-06',NULL);
    insert into temp_timeline values('DSIM6',51,'S3','JAN-17-07','JAN-17-07',NULL);
    insert into temp_timeline values('DSIM6',51,'S4','FEB-14-07','FEB-14-07',NULL);
    commit;
    select * from temp_timeline;
    The "normal" output (formatted with period-separated fields) is:
    DSIM6.51.S1.NOV-13-06.NOV-13-06.NULL
    DSIM6.51.S2.DEC-20-06.DEC-20-06.NULL
    DSIM6.51.S3.JAN-17-07.JAN-17-07.NULL
    DSIM6.51.S4.FEB-14-07.FEB-14-07.NULL
    The DESIRED 1-row output (formatted with period-separated fields) is:
    DSIM6.51.NOV-13-06.NOV-13-06.NULL.DEC-20-06.DEC-20-06.NULL.JAN-17-07.JAN-17-07.NULL.FEB-14-07.FEB-14-07.NULL
    So, the first 2 columns in the table have the same data, and the third column
    makes the row unique (they could all have the same/similar dates).
    If this table only consisted of the first 3 columns then many of the examples seen
    on this forum would work well (grouping by the first 2 columns and pivoting out
    the "event_type" column values (S1,S2,S3,S4), etc.).
    But, in my case, I need to discard the event_type column and pivot out the
    3 columns of date data onto the first row (for each different event_type).
    So the 3 Dates associated with the "S2" column would go to the first row, and the
    3 dates associated with the "S3" column would also go to the first row (and so on).
    The 3 dates need to be 3 distinct columns when they are
    pivoted out (not concatenated to each other and pivoted as one column).
    Given this, I will need to pivot out a total of 12 different columns for each distinct
    (mkt_id, event_id) pair.
    For the time being I have accomplished this with a union, but am trying to expand
    my abilities with other sql methods. I've seen some very elegant solutions on this
    forum so will be interested to see what others can come up with for this solution.
    Thanks in advance for any comments you may provide.

    Just DECODE based on the event type, which will generate your 12 columns.
    SELECT mkt_id, event_id,
           MAX(DECODE(event_type, 'S1', mod_due_date, NULL)) s1_mod_due,
           MAX(DECODE(event_type, 'S1', cur_due_date, NULL)) s1_cur_due,
           MAX(DECODE(event_type, 'S1', act_due_date, NULL)) s1_act_due,
           MAX(DECODE(event_type, 'S2', mod_due_date, NULL)) s2_mod_due,
           MAX(DECODE(event_type, 'S2', cur_due_date, NULL)) s2_cur_due,
           MAX(DECODE(event_type, 'S2', act_due_date, NULL)) s2_act_due,
           MAX(DECODE(event_type, 'S3', mod_due_date, NULL)) s3_mod_due,
           MAX(DECODE(event_type, 'S3', cur_due_date, NULL)) s3_cur_due,
           MAX(DECODE(event_type, 'S3', act_due_date, NULL)) s3_act_due,
           MAX(DECODE(event_type, 'S4', mod_due_date, NULL)) s4_mod_due,
           MAX(DECODE(event_type, 'S4', cur_due_date, NULL)) s4_cur_due,
           MAX(DECODE(event_type, 'S4', act_due_date, NULL)) s4_act_due
    FROM temp_timeline
    GROUP BY mkt_id, event_id
    Tested, because you supplied create table and insert statements; thank you.
    John

  • SQL Developer 3.2 - Export DDL challenge

    Hi,
    I would like to Export DDL for approximately 300 of 1000 objects in a schema.  I have the names of all required tables for which I'd like to get the DDL in a table in my personal schema.  Is there a way that I can use this table as a driver for the built-in Export DDL utility or will I need to either go to the schema browser and hand-pick each of the 300 tables and/or from the Tools-Export DDL "Specify Objects" window?
    I would like to make this more automated so that I don't have to keep clicking and scrolling my way through the list of required objects. Any thoughts are appreciated, thanks.

    There is no way to use sql developer to do that.
    You can:
    1. do it manually as you suggest
    2. do it manually by writing a script that makes the appropriate DBMS_METADATA calls
    3. use expdp to extract the metadata and create a DDL file.
    The full DDL for a table will include a lot of components that many people don't even want, for example storage clauses.
    The bigger issue you should address is why you don't already have the DDL to begin with. Best practices are to create the DDL and keep it in a version control system; not extract it after the fact.
    I suggest you use the EXPDP utility to extract the DDL into a file so that you have it for future use.
    If you plan to write a script there are plenty of examples on the web that show how to do that. Here is one:
    http://www.colestock.com/blogs/2008/02/extracting-ddl-from-oracle-2-approaches.html
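
    For option 2, a minimal SQL*Plus sketch using DBMS_METADATA, assuming the driver table holding the 300 names is called my_tables(table_name) (that name is an assumption):

    ```sql
    -- Sketch only: spool the DDL of every table listed in the driver table.
    SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0 HEADING OFF FEEDBACK OFF
    BEGIN
      -- drop storage clauses and append statement terminators
      DBMS_METADATA.set_transform_param(DBMS_METADATA.session_transform, 'STORAGE', FALSE);
      DBMS_METADATA.set_transform_param(DBMS_METADATA.session_transform, 'SQLTERMINATOR', TRUE);
    END;
    /
    SPOOL tables_ddl.sql
    SELECT DBMS_METADATA.get_ddl('TABLE', t.table_name)
      FROM my_tables t;
    SPOOL OFF
    ```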

  • Pl/sql challenge

    Given the following scenario:
    Need to count records inserted on particular days. The columns will be the actual days the records were inserted. For example, if sysdate is 14 Apr 2006, I need to generate 14 columns; if the day of the month is 3, I need to generate 3 columns, and so forth. And I need to count how many records were inserted on each particular day.
    The result should look like:
    4/1/2006 4/2/2006 Total
    2 1 3
    In the above example, for 4/1/2006, 2 records were inserted; on 4/2/2006, 1 record was inserted, hence the total of 3.
    Problem is I don't know how many columns I need to generate beforehand. It depends on the current date (sysdate) in the query.
    How can this be done.
    Please help.
    Thanks.
    Sum.
    How can this dynamic behavior be achieved ?

    How can this dynamic behavior be achieved ?
    There are different ways of achieving this dynamic behavior, depending on where you are. In the following example I have demonstrated a way of doing it in SQL*Plus. In other environments (Oracle Reports, for instance), creating such a report should not be a big deal.
    SQL> create table test
      2  (td date)
      3  /
    Table created.
    SQL> insert into test
      2  select sysdate-(rownum-1)
      3  from all_objects
      4  where rownum<=to_number(to_char(sysdate,'dd'))
      5  /
    15 rows created.
    SQL> var cur refcursor
    SQL> set autoprint on
    SQL> declare
      2     v_cols varchar2(4000);
      3     v_cols1 varchar2(4000);
      4     v_total varchar2(1000);
      5  begin
      6     for r in 1..to_number(to_char(sysdate,'dd')) loop
      7             v_cols:=v_cols||'sum(decode(to_number(to_char(td,''dd'')),'||r||
      8                     ',1)) date'||r||',';
      9             v_cols1:=v_cols1||'date'||r||',';
    10             v_total:=v_total||'date'||r||'+';
    11     end loop;
    12     v_cols:=rtrim(v_cols,',');
    13     v_cols1:=rtrim(v_cols1,',');
    14     v_total:=rtrim(v_total,'+');
    15     open :cur for 'select '||v_cols1||','||v_total||' total from
    16                             (select '||v_cols||' from test
    17                             where trunc(td,''month'')=trunc(sysdate,''month''))';
    18  end;
    19  /
    PL/SQL procedure successfully completed.
         DATE1      DATE2      DATE3      DATE4      DATE5      DATE6      DATE7      DATE8      DATE9     DATE10     DATE11     DATE12     DATE13     DATE14     DATE15      TOTAL
             1          1          1          1          1          1          1          1          1          1          1          1          1          1          1         15
    -------------
    Anwar
