Analytic SQL question

Dear All,
I need to produce the output table from the input table shown below. Is there a way to do this with analytic SQL?
P.S. I have already done this with a pure PL/SQL block, but it is far too slow with a high volume of data. The data below is just a sample; in reality I have millions of rows.
Input table:
TIME     USER     VALUE     
1     A     X
2     A     X
3     B     Y
4     B     Y
5     A     X
5     B     X
6     A     Y
7     B     Y
7     A     Y
Output table:
START_TIME     END_TIME     USER     VALUE
1          2          A     X
5          5          A     X
6          7          A     Y
3          4          B     Y
5          5          B     X
7          7          B     Y

I feel sure I've over-complicated things here, and that there's an easier way of doing it, but here's one solution:
with my_tab as (select 1 col1, 'A' col2, 'X' col3 from dual union all
                select 2 col1, 'A' col2, 'X' col3 from dual union all
                select 3 col1, 'B' col2, 'Y' col3 from dual union all
                select 4 col1, 'B' col2, 'Y' col3 from dual union all
                select 5 col1, 'A' col2, 'X' col3 from dual union all
                select 5 col1, 'B' col2, 'X' col3 from dual union all
                select 6 col1, 'A' col2, 'Y' col3 from dual union all
                select 7 col1, 'B' col2, 'Y' col3 from dual union all
                select 7 col1, 'A' col2, 'Y' col3 from dual)
select distinct start_col1,
                end_col1,
                first_value(col2) over (partition by col2, start_col1, end_col1 order by col1, start_col1, end_col1) col2,
                first_value(col3) over (partition by col2, start_col1, end_col1 order by col1, start_col1, end_col1) col3
from   (select col1,
               col2,
               col3,
               last_value(start_col1 ignore nulls) over (order by col1, col2) start_col1,
               last_value(end_col1 ignore nulls) over (order by col1 desc, col2 desc) end_col1
        from   (select col1,
                       col2,
                       col3,
                       case when lag(col2, 1, '{NULL}') over (order by col1, col2) <> col2
                                 then col1
                       end start_col1,
                       case when lead(col2, 1, '{NULL}') over (order by col1, col2) <> col2
                                 then col1
                       end end_col1
                from   my_tab))
order by col2,
         start_col1,
         end_col1;
START_COL1   END_COL1 C C
         1          2 A X
         5          5 A X
         6          7 A Y
         3          4 B Y
         5          5 B X
         7          7 B Y
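For the record, the classic "gaps and islands" trick does the same job more simply: subtract a per-(col2, col3) ROW_NUMBER from an overall ROW_NUMBER, so that the rows of one consecutive run share the same difference, then GROUP BY that difference. A minimal sketch against the same my_tab data (the tie-break on col2 in the overall ROW_NUMBER is an assumption; rows sharing a col1 value may need a different ordering on the real data):
with grp as (
  select col1, col2, col3,
         row_number() over (order by col1, col2)
         - row_number() over (partition by col2, col3 order by col1) grp_id
  from   my_tab
)
select min(col1) start_col1,
       max(col1) end_col1,
       col2,
       col3
from   grp
group  by col2, col3, grp_id
order  by col2, start_col1;
On this sample it returns the same six rows, without the DISTINCT or the layered LAST_VALUE calls.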

Similar Messages

  • Urgent SQL question : how to flip vertical row values to horizontal ?

    Hello, Oracle people !
    I have an urgent SQL question : (simple for you)
    using SELECT statement, how to convert vertical row values to horizontal ?
    For example :
(Given result-set)
MANAGER   COLUMN1  COLUMN2  COLUMN3
K. Smith        1
K. Smith                 1
K. Smith                          1
(Needed result-set)
MANAGER   COLUMN1  COLUMN2  COLUMN3
K. Smith        1        1        1
I know it can be done, I just don't remember how and can't find exactly the answer I'm looking for. Probably using some analytic SQL function (CAST OVER, PARTITION BY, etc.).
    Please Help !!!
    Thanx !
    Steve.

    scott@ORA92> column vice_president format a30
    scott@ORA92> SELECT f.VICE_PRESIDENT, A.DAYS_5, B.DAYS_10, C.DAYS_20, D.DAYS_30, E.DAYS_40
      2  FROM   (select t2.*,
      3                row_number () over
      4                  (partition by vice_president
      5                   order by days_5, days_10, days_20, days_30, days_40) rn
      6            from   t2) f,
      7           (SELECT T2.*,
      8                row_number () over (partition by vice_president order by days_5) RN
      9            FROM   T2 WHERE DAYS_5 IS NOT NULL) A,
    10           (SELECT T2.*,
    11                row_number () over (partition by vice_president order by days_10) RN
    12            FROM   T2 WHERE DAYS_10 IS NOT NULL) B,
    13           (SELECT T2.*,
    14                row_number () over (partition by vice_president order by days_20) RN
    15            FROM   T2 WHERE DAYS_20 IS NOT NULL) C,
    16           (SELECT T2.*,
    17                row_number () over (partition by vice_president order by days_30) RN
    18            FROM   T2 WHERE DAYS_30 IS NOT NULL) D,
    19           (SELECT T2.*,
    20                row_number () over (partition by vice_president order by days_40) RN
    21            FROM   T2 WHERE DAYS_40 IS NOT NULL) E
    22  WHERE  f.VICE_PRESIDENT = A.VICE_PRESIDENT (+)
    23  AND    f.VICE_PRESIDENT = B.VICE_PRESIDENT (+)
    24  AND    f.VICE_PRESIDENT = C.VICE_PRESIDENT (+)
    25  AND    f.VICE_PRESIDENT = D.VICE_PRESIDENT (+)
    26  AND    f.VICE_PRESIDENT = E.VICE_PRESIDENT (+)
    27  AND    f.RN = A.RN (+)
    28  AND    f.RN = B.RN (+)
    29  AND    f.RN = C.RN (+)
    30  AND    f.RN = D.RN (+)
    31  AND    f.RN = E.RN (+)
    32  and    (a.days_5 is not null
    33            or b.days_10 is not null
    34            or c.days_20 is not null
    35            or d.days_30 is not null
    36            or e.days_40 is not null)
    37  /
    VICE_PRESIDENT                     DAYS_5    DAYS_10    DAYS_20    DAYS_30    DAYS_40
    Fedele Mark                                                          35473      35209
    Fedele Mark                                                          35479      35258
    Schultz Christine                              35700
    South John                                                                      35253
    Stack Kevin                                    35701      35604      35402      35115
    Stack Kevin                                    35705      35635      35415      35156
    Stack Kevin                                    35706      35642      35472      35295
    Stack Kevin                                    35707      35666      35477
    Stack Kevin                                               35667      35480
    Stack Kevin                                               35686
    Unknown                             35817      35698      35596      35363      35006
    Unknown                                        35702      35597      35365      35149
    Unknown                                        35724      35599      35370      35155
    Unknown                                                   35600      35413      35344
    Unknown                                                   35601      35451      35345
    Unknown                                                   35602      35467
    Unknown                                                   35603      35468
    Unknown                                                   35607      35475
    Unknown                                                   35643      35508
    Unknown                                                   35644
    Unknown                                                   35669
    Unknown                                                   35684
    Walmsley Brian                                 35725      35598
    23 rows selected.
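As an aside: when the target columns are fixed and each manager has at most one value per column, as in Steve's example, a plain MAX ... GROUP BY collapse is all the flip needs. A minimal sketch, assuming the given result-set lives in a table t:
SELECT   manager
,        MAX (column1) AS column1
,        MAX (column2) AS column2
,        MAX (column3) AS column3
FROM     t
GROUP BY manager;
This folds the three K. Smith rows into one; the ROW_NUMBER technique above is only needed when a manager can have several values in the same column, as in the vice_president data.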

  • SQL Question Bank and Answers for Practice

    Dear Readers:
    Does anyone have any recommendations for an SQL question bank with answers where I could practice my SQL?
    I have developed some basic knowledge of SQL thanks to the MS community, but I am looking for some additional questions or textbook recommendations with questions and answers to queries for practice.
    Best Wishes,
    SQL75

    Hi,
    Refer below post.
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/446b2247-5124-49c1-90c9-b7fea0aa4f83/sql-dba-books?forum=sqlgetstarted
    Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    Praveen Dsa | MCITP - Database Administrator 2008 |
    My Blog | My Page

  • SQL question: TRUNCATE vs DELETE

    Hi,
    This is a SQL question; I didn't know where to post it, so here it is.
    I just want to know the best usage of each. Both commands delete records from a table: one deletes all rows, the other can do the same plus has the option to delete only specified records. If I just want to purge the table, which one is better and why? Thanks.

    This is crucial to my design: I need to be able to roll back. If one of the processes in the transaction fails, the whole transaction should roll back. If TRUNCATE does not give me this capability, then I have to consider DELETE.
    From the Oracle manual (sans the pretty formatting):
    TRUNCATE
    Caution: You cannot roll back a TRUNCATE statement.
    Purpose
    Use the TRUNCATE statement to remove all rows from a table or cluster. By default,
    Oracle also deallocates all space used by the removed rows except that specified by
    the MINEXTENTS storage parameter and sets the NEXT storage parameter to the size
    of the last extent removed from the segment by the truncation process.
    Removing rows with the TRUNCATE statement can be more efficient than dropping
    and re-creating a table. Dropping and re-creating a table invalidates the table's
    dependent objects, requires you to regrant object privileges on the table, and
    requires you to re-create the table's indexes, integrity constraints, and triggers and
    respecify its storage parameters. Truncating has none of these effects.
    See Also:
    DELETE on page 16-55 and DROP TABLE on page 17-6 for
    information on other ways to drop table data from the database
    DROP CLUSTER on page 16-67 for information on dropping
    cluster tables
    Prerequisites
    To truncate a table or cluster, the table or cluster must be in your schema or you
    must have DROP ANY TABLE system privilege.
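    The rollback difference is easy to demonstrate. A throwaway sketch, assuming a scratch table t you can afford to lose:
    -- DELETE is DML: until you COMMIT, it can be undone.
    DELETE FROM t;
    ROLLBACK;            -- the rows come back
    -- TRUNCATE is DDL: it commits implicitly and cannot be undone.
    TRUNCATE TABLE t;
    ROLLBACK;            -- does nothing; the rows are gone
    So for the transactional requirement described above (roll everything back if one process fails), DELETE is the only option; TRUNCATE is for fast, irreversible purges.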

  • An issue for analytical SQL?

    Hi all,
    I am on the OracleXE 11gR2 and have the following requirements for the output of the select statement:
    Tables:
    GARAGE
    ======
    ID GARAGE_NAME
      1    GARAGE_1
      2    GARAGE_2
    PERSONS
    =======
    ID GARAGE_ID NAME
      1         1 NAME1_1
      2         1 NAME1_2
      3         1 NAME1_3
      4         1 NAME1_4
      5         1 NAME1_5
      6         1 NAME1_6
      7         2 NAME2_1
      8         2 NAME2_2
      9         2 NAME2_3
    10         2 NAME2_4
    CARS
    ====
    ID GARAGE_ID CAR
      1         1 CAR1_1
      2         1 CAR1_2
      3         1 CAR1_3
      4         1 CAR1_4
      5         2 CAR2_1
      6         2 CAR2_2
      7         2 CAR2_3
      8         2 CAR2_4
      9         2 CAR2_5
    10         2 CAR2_6
    The required output is:
    GARAGE_ID GARAGE_NAME CAR    PERSON
            1    GARAGE_1 CAR1_1 NAME1_1
            1    GARAGE_1 CAR1_2 NAME1_2
            1    GARAGE_1 CAR1_3 NAME1_3
            1    GARAGE_1 CAR1_4 NAME1_4
            1    GARAGE_1        NAME1_5
            1    GARAGE_1        NAME1_6
            2    GARAGE_2 CAR2_1 NAME2_1
            2    GARAGE_2 CAR2_2 NAME2_2
            2    GARAGE_2 CAR2_3 NAME2_3
            2    GARAGE_2 CAR2_4 NAME2_4
            2    GARAGE_2 CAR2_5
            2    GARAGE_2 CAR2_6
    How can I achieve this output?
    Is it an issue for analytical SQL?
    Dear community, I need your help!
    Kind regards

    Looks like just an outer join..
    But how are you joining cars and persons -- based on name?
    Or something like this?
    with c as
      (select c.garage_id,c.car,g.garage_name g_name,
              row_number() over(partition by c.garage_id order by c.id) rn
       from cars c,garage g
       where c.garage_id = g.id),
    p as
      (select p.garage_id,p.name p_name,g.garage_name g_name,
              row_number() over(partition by p.garage_id order by p.id) rn
       from persons p,garage g
       where p.garage_id = g.id)
    select nvl(p.garage_id,c.garage_id) garage_id,
           nvl(p.g_name,c.g_name) garage_name,
           car,p_name person
    from c
         full outer join p on
          (c.garage_id = p.garage_id and c.rn = p.rn);
    GARAGE_ID GARAGE_NAME CAR    PERSON
            1 GARAGE_1    CAR1_1 NAME1_1
            1 GARAGE_1    CAR1_2 NAME1_2
            1 GARAGE_1    CAR1_3 NAME1_3
            1 GARAGE_1    CAR1_4 NAME1_4
            1 GARAGE_1           NAME1_5
            1 GARAGE_1           NAME1_6
            2 GARAGE_2    CAR2_1 NAME2_1
            2 GARAGE_2    CAR2_2 NAME2_2
            2 GARAGE_2    CAR2_3 NAME2_3
            2 GARAGE_2    CAR2_4 NAME2_4
            2 GARAGE_2    CAR2_5        
            2 GARAGE_2    CAR2_6        
    12 rows selected.
    If this is not what you want, please explain the logic needed to arrive at your output.
    Edited by: jeneesh on Nov 17, 2012 3:15 PM

  • Which Oracle version for analytic sql?

    Can anyone tell me from which Oracle version on there exists the "analytic sql"?
    Thanks!

    Hi,
    Mark1970 wrote:
    Can anyone tell me from which Oracle version on there exists the "analytic sql"?
    Thanks!
    Analytic functions (that is, functions using the OVER keyword, such as
    RANK () OVER (ORDER BY hiredate)) were introduced in Oracle 8.1.
    (It's no coincidence that in-line views were introduced around the same time, since so many uses of analytic functions require sub-queries.
    In-line views were first documented in Oracle 8.1, but they worked in Oracle 8.0.)

  • Analytic SQL equivalent

    DB version: 10Gr2
    Is there any analytical SQL equivalent of the below ANSI SQL
    select schemaname, count(*)
    from v$session
    group by schemaname
    order by count(*) desc

    Hi,
    Sure.
    It's less efficient than GROUP BY, but here's one way to do it:
    SELECT DISTINCT     schemaname
    ,          COUNT (*) OVER (PARTITION BY schemaname)     AS cnt
    FROM          v$session
    ORDER BY     cnt          DESC
    ;
    "GROUP BY schemaname" produces one row per value of schemaname; without GROUP BY, you need SELECT DISTINCT.
    Almost all the aggregate functions (like COUNT) have analytic versions.
    PARTITION BY is the analytic counterpart to the aggregate GROUP BY.

  • Need analytical SQL queries

    I want to learn about analytical SQL queries... if anyone knows about analytical queries, please help me. [email protected]

    http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/analysis.htm#i1007779
    http://www.orafaq.com/node/1874
    http://www.gplivna.eu/papers/using_analytic_functions_in_reports.htm

  • SQL - Analytical Query Question

    Hi All,
    I have a requirement for which I am trying to generate the output, and I am not able to come up with good logic to solve it. I have been trying to solve this for some time now and cannot figure out how.
    I posted a similar kind of question some time back, but this one is different from the original and a little more complex than my previous question. I have listed below the script to create the tables and insert the data.
    DROP TABLE ITEMTABLE;
    CREATE TABLE ITEMTABLE
    (
      ITEMTABLEID1           NUMBER(9) NOT NULL,
      ITEMTABLEID2           NUMBER(9) NOT NULL,
      PARENTTABLEID          NUMBER(9),
      PARENTINFO             VARCHAR2(20),
      CONSTRAINT ITEMTABLE_PK PRIMARY KEY (ITEMTABLEID1,ITEMTABLEID2)
    );
    Insert into ITEMTABLE values (19217,10245,19216,'PARENTINFO-1');
    Insert into ITEMTABLE values (19217,10315,19216,'PARENTINFO-2' );
    Insert into ITEMTABLE values (19217,10336,19216,'PARENTINFO-2' );
    DROP TABLE FINANCE;
    CREATE TABLE FINANCE
    (
      FINANCEKEY          NUMBER(9) NOT NULL,
      PARENTID1           NUMBER(9) NOT NULL,
      PARENTID2           NUMBER(9) NOT NULL,
      CONSTRAINT FINANCE_PK PRIMARY KEY (FINANCEKEY)
    );
    Insert into FINANCE values (8332, 19217,10245);
    Insert into FINANCE values (8404, 19217, 10315);
    Insert into FINANCE values (8425, 19217, 10336);
    DROP TABLE ACCT;
    CREATE TABLE ACCT
    (
      ACCTKEY             NUMBER(9)  NOT NULL,
      FINANCEKEY          NUMBER(9),
      FLAG                VARCHAR2(1),
      SOURCEKEY           NUMBER(9),
      CONSTRAINT ACCT_PK PRIMARY KEY (ACCTKEY)
    );
    Insert into ACCT values (9874, 8332, 'N',0);
    Insert into ACCT values (9875, 8332, 'N',0 );
    Insert into ACCT values (9982, 8404, 'Y', 9874);
    Insert into ACCT values (9983, 8404, 'Y', 9875);
    Insert into ACCT values (10008, 8425, 'N', 9982);
    Insert into ACCT values (10009, 8425, 'Y', 9983);
    SQL> With tempacct1 as
      2    (Select  I.ITEMTABLEID1,I.ITEMTABLEID2, AC.SOURCEKEY, NVL(AC.FLAG,'N') AS FLAG, AC.ACCTKEY
      3     FROM ITEMTABLE I,FINANCE F,ACCT AC
      4    where I.ITEMTABLEID1 = F.PARENTID1
      5      and I.ITEMTABLEID2 =  F.PARENTID2
      6    and F.FINANCEKEY = AC.FINANCEKEY
      7        and I.PARENTTABLEID = 19216
      8         ORDER BY  acctkey ASC
      9        )
    10     SELECT  ITEMTABLEID1,ITEMTABLEID2,acctkey, flag ,SOURCEKEY
    11     FROM    tempacct1;
    ITEMTABLEID1 ITEMTABLEID2    ACCTKEY F  SOURCEKEY
           19217        10245       9874 N          0
           19217        10245       9875 N          0
           19217        10315       9982 Y       9874
           19217        10315       9983 Y       9875
           19217        10336      10008 N       9982
           19217        10336      10009 Y       9983
    6 rows selected.
    Desired Output -
    ITEMTABLEID1 ITEMTABLEID2    ACCTKEY F  SOURCEKEY
           19217        10336      10008 N       9982
           19217        10336      10009 Y       9983
    The solution by Frank for my previous post a few weeks back looks like this:
    SQL>    SELECT  sourcekey
      2  , flag
      3  , acctkey
      4  FROM (
      5       SELECT  ac.sourcekey
      6       ,     NVL (ac.flag, 'N') AS flag
      7       ,     ac.acctkey
      8       ,     RANK () OVER ( PARTITION BY  CASE
      9                         WHEN  sourcekey = 0
    10             THEN  acctkey
    11             ELSE  sourcekey
    12                     END
    13         ORDER BY      CASE
    14                              WHEN  ac.flag = 'Y'
    15                    THEN  1
    16             ELSE  2
    17                   END
    18         ,   SIGN (sourcekey)
    19                    ) AS rnk
    20          FROM    itemtable i
    21       ,     finance f
    22       ,     acct ac
    23         WHERE   i.itemtableid1  = f.parentid1
    24         AND     i.itemtableid2  = f.parentid2
    25       AND     f.financekey  = ac.financekey
    26         AND     i.parenttableid  = 19216
    27   )
    28  WHERE rnk = 1;
    SOURCEKEY F    ACCTKEY
          9874 Y       9982  -- Needs to be removed
          9875 Y       9983  -- Needs to be removed
          9982 N      10008  
          9983 Y      10009
    Output Desired would be
    ITEMTABLEID1 ITEMTABLEID2    ACCTKEY F  SOURCEKEY
           19217        10336      10008 N       9982
           19217        10336      10009 Y       9983
    SQL>
    The slight change to the requirement: when a sourcekey is the same as an existing acctkey, only the row with the max acctkey should be displayed. In this case the last two rows have sourcekeys of 9982 and 9983, which equal the acctkeys of the first two rows. So we look for MAX(acctkey), which gives 10008 and 10009, and display only those.
    This logic needs to be added on top of the existing logic, and I am not sure how it can be done.
    I would really appreciate any help.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Edited by: ARIZ on Jun 16, 2010 7:56 PM

    Hi,
    This gets the right results from your sample data.
    SELECT  ac.sourcekey
    ,       NVL (ac.flag, 'N') AS flag
    ,       ac.acctkey
    FROM    itemtable  i
    ,       finance    f
    ,       acct       ac
    WHERE   i.itemtableid1   = f.parentid1
    AND     i.itemtableid2   = f.parentid2
    AND     f.financekey     = ac.financekey
    AND     i.parenttableid  = 19216
    AND     ac.acctkey NOT IN ( SELECT  sourcekey
                                FROM    acct
                                WHERE   sourcekey IS NOT NULL     -- If needed
                              )
    ;
    I'm a little uncertain of your requirements, so I'm not sure how it will work on your real data.
    At least in this new version of the problem, it looks like rows can be chained together, where the sourcekey of one row is the acctkey of the next row. If you want only the first row in each such chain, just look for the ones where the acctkey does not relate back to any sourcekey.
    NOT IN is never TRUE if the subquery returns any NULLs. Unless sourcekey has a NOT NULL constraint, you'd better check for it in the NOT IN sub-query.
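    A NOT EXISTS rewrite sidesteps the NULL pitfall altogether, since NOT EXISTS is unaffected by NULLs in the subquery. The same query with the filter swapped (a sketch; it returns the same two rows on the sample data):
    SELECT  ac.sourcekey
    ,       NVL (ac.flag, 'N') AS flag
    ,       ac.acctkey
    FROM    itemtable  i
    ,       finance    f
    ,       acct       ac
    WHERE   i.itemtableid1   = f.parentid1
    AND     i.itemtableid2   = f.parentid2
    AND     f.financekey     = ac.financekey
    AND     i.parenttableid  = 19216
    AND     NOT EXISTS ( SELECT 1
                         FROM   acct a2
                         WHERE  a2.sourcekey = ac.acctkey );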

  • Analytical function question?

    Hello all. I have an analytic-function question. I have found a couple of good examples of how to concatenate several rows into one, but I have not been able to get this to work a specific way.
    Oracle 9.2.0.6.0
    Given the following data:
    table fred.
    ORDER_NUMBER_HW  ORDER_NUMBER_SVC  SERIAL_NUMBER
    11               123               bb
    11               123               aa
    11               456               bb
    I would like to see the following output:
    ORDER_NUMBER_HW  SERVICE_ORDERS  SERIAL_NUMBERS
    11               123,456         aa,bb
    I have used the following sql to come up with the results below, but I do not want to see the duplicate values in the concatenated strings.
    This is what I get now. Any suggestions on how to fix this so I don't get the duplicate info in the concatenated strings?
    ORDER_NUMBER_HW  SERVICE_ORDERS  SERIAL_NUMBERS
    11               123,123,456     bb,aa,bb
    SELECT     order_number_hw
              ,LTRIM(SYS_CONNECT_BY_PATH(order_number_svc, ','), ',') service_orders
              ,LTRIM(SYS_CONNECT_BY_PATH(serial_number, ','), ',') serial_numbers
          FROM (SELECT order_number_hw
                      ,order_number_svc
                      ,serial_number
                      ,ROW_NUMBER() OVER(PARTITION BY order_number_hw ORDER BY order_number_svc) rn
                      ,COUNT(*) OVER(PARTITION BY order_number_hw) cnt
                  FROM fred)
         WHERE rn = cnt
    START WITH rn = 1
    CONNECT BY PRIOR order_number_hw = order_number_hw AND PRIOR rn = rn - 1
      ORDER BY serial_number;
    table creation:
    CREATE TABLE FRED
    (
      ORDER_NUMBER_HW   NUMBER             NOT NULL,
      ORDER_NUMBER_SVC  NUMBER             NOT NULL,
      SERIAL_NUMBER     VARCHAR2(30 BYTE)
    );
    SET DEFINE OFF;
    Insert into FRED
       (ORDER_NUMBER_HW, ORDER_NUMBER_SVC, SERIAL_NUMBER)
    Values
       (11, 123, 'bb');
    Insert into FRED
       (ORDER_NUMBER_HW, ORDER_NUMBER_SVC, SERIAL_NUMBER)
    Values
       (11, 123, 'aa');
    Insert into FRED
       (ORDER_NUMBER_HW, ORDER_NUMBER_SVC, SERIAL_NUMBER)
    Values
       (11, 456, 'bb');
    COMMIT;

    SQL> select order_number_hw,
      2         replace(replace(replace(LTRIM(SYS_CONNECT_BY_PATH(col1, ','), ','),',',',@'),'@,'),'@') service_orders,
      3         replace(replace(replace(LTRIM(SYS_CONNECT_BY_PATH(col2, ','), ','),',',',@'),'@,'),'@') serial_numbers
      4    from (select t.*,
      5                 lag(null, 1, order_number_svc) over(partition by order_number_hw, order_number_svc order by 1) col1,
      6                 lag(null, 1, serial_number) over(partition by order_number_hw, serial_number order by 1) col2,
      7                 row_number() over(partition by order_number_hw order by order_number_svc) rn,
      8                 COUNT(*) OVER(PARTITION BY order_number_hw) cnt
      9            from fred t)
    10   where rn = cnt
    11   START WITH rn = 1
    12  CONNECT BY PRIOR order_number_hw = order_number_hw
    13         AND PRIOR rn = rn - 1
    14  /
    ORDER_NUMBER_HW SERVICE_ORDERS      SERIAL_NUMBERS
                 11 123,456             aa,bb
    SQL>
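    On 11.2 and later (the poster was on 9.2, where SYS_CONNECT_BY_PATH was the usual workaround), LISTAGG does the string aggregation without the CONNECT BY machinery; deduplicate in inline views first. A sketch against the same FRED table:
    select s.order_number_hw, s.service_orders, n.serial_numbers
    from  (select order_number_hw,
                  listagg(order_number_svc, ',')
                    within group (order by order_number_svc) service_orders
           from  (select distinct order_number_hw, order_number_svc from fred)
           group by order_number_hw) s
    join  (select order_number_hw,
                  listagg(serial_number, ',')
                    within group (order by serial_number) serial_numbers
           from  (select distinct order_number_hw, serial_number from fred)
           group by order_number_hw) n
      on  n.order_number_hw = s.order_number_hw;
    This returns the requested row (11 / 123,456 / aa,bb) in a single pass.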

  • General SQL question

    I have seen the following in some SQL, but I am not sure of the difference:
    select abc, NULL from xyz where .....
    select abc, to_char(NULL) from xyz where ....
    My question: is there a difference between NULL and TO_CHAR(NULL)?
    Thanks.

    BluShadow wrote:
    Looks like someone was trying to cast the null to a varchar2 datatype so that SQL knew the datatype of the column. In that case they should have used the CAST function...
    Blu:
    Just out of curiosity, any particular reason for the preference for CAST instead of to_datatype(null)? I often use that construct, particularly in union-type queries. The cast is semantically meaningless in a union query, since the column will get the size of the largest column in the query.
    SQL> set null null
    SQL> select *
      2  from (select 1 id, null c1, to_char(null) c2, cast(null as varchar2(10)) c3 from dual
      3        union all
      4        select 2, 'Hello there', 'Hello there', 'Hello there' from dual);
            ID C1          C2          C3
             1 null        null        null
             2 Hello there Hello there Hello there
    SQL> create view v as
      2  select *
      3  from (select 1 id, null c1, to_char(null) c2, cast(null as varchar2(10)) c3 from dual
      4        union all
      5        select 2, 'Hello there', 'Hello there', 'Hello there' from dual);
    View created.
    SQL> desc v;
    Name               Null?    Type
    ID                          NUMBER
    C1                          VARCHAR2(11)
    C2                          VARCHAR2(11)
    C3                          VARCHAR2(11)
    About the only place where I would use an explicit cast is if I was "hiding" the content of a column in a view but needed to maintain the same structure as the table.
    John

  • SQL Questions (New to Cisco)

    Hello. I work for Clarian Health in Indianapolis and am trying to learn as much as possible about the SQL databases, both AWDB and HDS so that I can handle the reporting for our Revenue Cycle Customer Service.
    I am currently working my way through the Database Schema Handbook for Cisco Unified ICM /Contact Center Enterprise & Hosted. I’m also reviewing the explanation pages that are available for the reports on WebView. During my reviews, I have noticed a few things that confuse me.
    My questions are:
    1. Why do a majority of the tables on our SQL Server start with “t_”?
    2. Why do some of the tables have data on the AWDB server but not on the HDS server, and vice versa? (Examples: t_Agent and t_Agent_Team and t_Agent_Team_Member and t_Person are blank on the HDS database but not blank on the AWDB database; but the t_Agent_Logout is blank on the AWDB database and not blank on the HDS database)
    3. When data is moved to the HDS server every 30 minutes, is it also removed from the corresponding AWDB table?
    4. In reviewing the agent26: Agent Consolidated Daily Report syntax info located on WebView, one of the calculations uses the LoggedOnTimeToHalf from the Agent_Half_Hour table while the remaining calculations use the same field from the Agent_Skill_Group_Half_Hour table. Can you please tell me why this is? Why would all of the percent calculations not use data from the same table? (The % of time Agent paused and/or put a task on hold uses the Agent_Half_Hour table. All other % calculations use the same field from the Agent_Skill_Group_Half_Hour table.)
    5. Also in reviewing the agent26: Agent Consolidated Daily Report syntax info, I noticed that it contains the Skill_Group table, the Agent_Half_Hour table and the Media_Routing_Domain table. Both the Skill_Group table and the Agent_Half_Hour table contain links to the Media_Routing_Domain table. Which relationship/join is actually utilized for this report?
    6. Why doesn't the LoggedOnTimeToHalf field on both the Agent_Half_Hour table and the Agent_Skill_Group_Half_Hour table have the same value in them?
    7. On the agent_26: Agent Consolidated Daily Report syntax explanation page, the Agent State Times: Log on Duration says that it is derived using the Agent_Half_Hour.LoggedOnTimeToHalf field, but when I convert this field to HH:MM:SS, it does not match the actual WebView report. But when I use the Agent_Skill_Group_Half_Hour.LoggedOnTimeToHalfHour, it does match. Which one is correct?
    8. On the agent_26: Agent Consolidated Daily Report, why does the Completed Tasks: Transfer Out contain both the TransferredOutCallsToHalf and the NetTransferredOutCallsToHalf fields? What's the difference between the two? What Transfer out data writes to each field?
    Thank you.
    Angie Combest
    Clarian Health
    [email protected]

    You need to be careful when looking at the three databases - Logger, AW, HDS - which use the same schema. But many of what appear to be tables in the AW are really views into the t_ tables in the HDS - the data is not there in the AW DB. You are right to look at the schema - but check with SQL Enterprise to understand a bit more.
    In essence, the AW DB is for configuration data and real-time data. The HDS is for historical data. You can query the AW DB for (say) Call_Type_Half_Hour data and join with the Call_Type table to resolve the call type ID into its name - but the data is really in the HDS through the view.
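    For example, something along these lines run against the AW (where the view resolves to the HDS data). A sketch only: the column names are quoted from memory of the Schema Handbook, so verify them against your schema version:
    SELECT ct.EnterpriseName,
           cthh.DateTime,
           cthh.CallsOfferedToHalf
    FROM   Call_Type_Half_Hour cthh
    JOIN   Call_Type ct
      ON   ct.CallTypeID = cthh.CallTypeID
    ORDER  BY ct.EnterpriseName, cthh.DateTime;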
    The DB design is quite complex and sophisticated - many things are not obvious.
    Keep up your research.
    Regards,
    Geoff

  • SQL question- on how to handle groups of records at a time.

    Hi,
    I have a sql that looks like the following:
    insert into INVALID_DATES_TMP
    (id, gid, created, t_rowid, updated)
    select id, gid, created, rowid, updated
    from TABLE1
    where fix_invalid_date_pkg.is_date_invalid('TABLE1', 'CREATED', ROWID) = 'Y';
    COMMIT;
    What the above sql does is select all rows from TABLE1 where the CREATED column
    has invalid dates and insert them into the invalid_dates_tmp table, so we can process/fix
    those invalid dates from the temp table. The problem is that our DBA said TABLE1 can have
    millions of rows, so the above sql can be very database-intensive. I need to
    figure out another way that handles chunks of rows at a time from TABLE1.
    Any ideas are appreciated!
    ThankYou,
    Radhika.

    Hello,
    in general, INSERT AS SELECT is the fastest method to insert into the table.
    You could probably use a direct-path load (the APPEND hint)?
    Other options (INSERT in a loop, or BULK COLLECT + FORALL) are slower.
    I think this method is optimal.
    Another question is the function itself. It is not clear whether it searches for the invalid dates optimally. I strongly suspect that the function uses dynamic SQL.
    Why? It is better to search statically. Or do you use this function for many other columns? Could you post the function as well?
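    For reference, the direct-path variant is just your statement with the APPEND hint; note that a direct-path insert locks the table and the new rows are not visible to your session until the COMMIT. A sketch, using the same objects as above:
    INSERT /*+ APPEND */ INTO invalid_dates_tmp
           (id, gid, created, t_rowid, updated)
    SELECT id, gid, created, rowid, updated
    FROM   table1
    WHERE  fix_invalid_date_pkg.is_date_invalid('TABLE1', 'CREATED', rowid) = 'Y';
    COMMIT;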
    Regards
    Dmytro

  • Parent - child table issue wrt count - SQL question

    I have a scenario:
    There are 2 tables (parent and child): say, a case summary table and a task-level dimension table.
    For every case id in the case summary table there are multiple tasks in the task-level dimension table, with a flag indicator set to 1 for all tasks.
    When counting the number of active cases with flag indicator 1 (joining the case summary table with the task dimension table, of course), only one task instance per case id should be counted: even though a case has more than one task, the case is considered active if the flag indicator of any of its tasks is set to 1. But when joining and counting case ids with flag indicator 1, you get a count for every task row of a case, which is logically incorrect. How do I discard the rest of the child records of a case in the child table (the task dimension table)?
    I am not sure how to achieve this in a SQL query.
    Kindly help!
    Case summary table:
    case_id, business_unit, agent_name
    1001, admin, Ram
    1002, Finance, Sam
    Task table:
    case_id, task_id, task_name, flag_indicator
    1001, 1, 'New', 1
    1001, 2, 'Open', 1
    1001, 3, 'In progress', 1
    1002, 4, 'New', 1
    (In fact task_id is not a big deal... you can even assume task_id doesn't exist, only task_name...)
    Now my question: my query should get the current active cases (ind = 1); as per the above it should essentially give 2, but my query gives me 4. You know the reason why, but how do I get the correct count?
    Thanks!

    Maybe you need just this:
    select count(distinct case_id) from task
    where flag_indicator = 1;
    If this is not what you are looking for, please elaborate and tell us the expected output and the rest of the details as mentioned in the FAQ: Re: 2. How do I ask a question on the forums?
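    If the join to the case summary table is still needed (say, to break the count down by business unit), the same COUNT(DISTINCT ...) works through the join. A sketch against the sample tables above (assuming the parent table is named case_summary):
    select c.business_unit,
           count(distinct t.case_id) active_cases
    from   case_summary c
    join   task t
      on   t.case_id = c.case_id
    where  t.flag_indicator = 1
    group  by c.business_unit;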

  • SQL question - please help!

    Hi,
    I am working on some SQL; please help me with the question
    below.... thanks
    (1) Increase by 10% the salary of those captain pilots who have
    traveled more than 800,000 miles.
    Routes:  #routeID, depAirportID, arrAirportID, length
    Flights: #flightNO, airplaneNO, pilotID (FK to Pilots), routeID (FK to Routes)
    Pilots:  #pilotID, *name, *hours_in_air, *grade, *salary

    If the length column in routes is in hours, and it represents
    additional hours to those shown in hours_in_air in pilots, then
    the following should work:
    UPDATE pilots
    SET salary = salary * 1.1
    WHERE pilotid in (SELECT a.pilotid
    FROM pilots a,
         (SELECT b.pilotid,sum(c.length) new_hours
          FROM flights b, routes c
          WHERE b.routeid = c.routeid
          GROUP BY b.pilotid) d
    WHERE a.pilotid = d.pilotid and
          new_hours + hours_in_air > 800000)
    I suspect that you probably need to add additional criteria to
    the sub-query from flights and routes to take into account only
    flights since the hours_in_air column from pilots was last
    updated. However, your table structures do not indicate any
    date sensitivity. If the table flights is emptied every time
    hours_in_air is updated, then the query above will work.
