Analytic function problem

Hi,
I have a problem using an analytic function: when I execute this query
SELECT TSIUPSITE, TSIUPCEAN , TSIUPDATE, sum(TSIUPCA) TSIUPCA, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
sum(TSIUPQTE) TSIUPQTE,sum(TSIUPQTEP) TSIUPQTEP, TSIUPMDIU,TSIUPMDar,
sum(TSIUPCRIU) TSIUPCRIU,sum(TSIUPCRAR) TSIUPCRAR, trunc(TSIUPDCRE) TSIUPDCRE ,trunc(TSIUPDMAJ) TSIUPDMAJ ,
TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, sum(TSIUPMHT) TSIUPMHT, 0 vtanfisc,
TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV ,count(*) over (partition by TSIUPSITE,TSIUPCEAN,TSIUP_TRT ) CONTA_ARTICOLO
FROM TST_FLIISR_VTEREMART
WHERE 1=1 --TSIUP_TRT = 1
AND TSIUPDATE=to_date('27082012','ddmmyyyy')
and TSIUP_NTRX =172
AND TSIUPSITE = 10025
AND TSIUPCEAN = '8012452018825'
GROUP BY TSIUPSITE, TSIUPCEAN , TSIUPDATE, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
TSIUPMDIU,TSIUPMDar, trunc(TSIUPDCRE),trunc(TSIUPDMAJ),TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, 0,
TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV
ORDER BY TSIUPSITE,TSIUPDATE ;
I get the error ORA-00979: not a GROUP BY expression, related to the TSIUP_TRT field. In fact, if I execute this one
SELECT TSIUPSITE, TSIUPCEAN , TSIUPDATE, sum(TSIUPCA) TSIUPCA, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
sum(TSIUPQTE) TSIUPQTE,sum(TSIUPQTEP) TSIUPQTEP, TSIUPMDIU,TSIUPMDar,
sum(TSIUPCRIU) TSIUPCRIU,sum(TSIUPCRAR) TSIUPCRAR, trunc(TSIUPDCRE) TSIUPDCRE ,trunc(TSIUPDMAJ) TSIUPDMAJ ,
TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, sum(TSIUPMHT) TSIUPMHT, 0 vtanfisc,
TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV ,count(*) over (partition by TSIUPSITE,TSIUPCEAN ) CONTA_ARTICOLO
FROM TST_FLIISR_VTEREMART
WHERE 1=1 --TSIUP_TRT = 1
AND TSIUPDATE=to_date('27082012','ddmmyyyy')
and TSIUP_NTRX =172
AND TSIUPSITE = 10025
AND TSIUPCEAN = '8012452018825'
GROUP BY TSIUPSITE, TSIUPCEAN , TSIUPDATE, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
TSIUPMDIU,TSIUPMDar, trunc(TSIUPDCRE),trunc(TSIUPDMAJ),TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, 0,
TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV
ORDER BY TSIUPSITE,TSIUPDATE ;
I have no problem. Now, the difference between TSIUPCEAN (or TSIUPSITE) and TSIUP_TRT is that TSIUP_TRT is not in the GROUP BY clause, but, to be honest, I don't understand why that matters when I'm using an analytic function.
Thanks for help

Hi,
I think you are not using the analytic function properly.
Analytic functions execute for each row, whereas GROUP BY aggregates groups of rows.
See the examples below for reference.
Example 1:
-- The query below displays the number of employees for each department. Since the analytic function runs for each row, you get the department count repeated on every row.
SQL> SELECT e.department_id,count(*) OVER (PARTITION BY e.department_id) cnt_analytic
  2  FROM employees e
  3  WHERE e.department_id IN (10,20,30);
DEPARTMENT_ID CNT_ANALYTIC
           10            1
           20            2
           20            2
           30            6
           30            6
           30            6
           30            6
           30            6
           30            6
9 rows selected.
Example 2:
-- Since I have used a GROUP BY clause, I get only a single row for each department.
SQL> SELECT e.department_id, count(*) cnt_group
  2  FROM employees e
  3  WHERE e.department_id IN (10,20,30)
  4  GROUP BY e.department_id;
DEPARTMENT_ID  CNT_GROUP
           10          1
           20          2
           30            6
Finally, what I'm trying to explain is: if you use an analytic function together with a GROUP BY clause, the query will not give a meaningful result set.
See below
SQL> SELECT e.department_id,count(*) OVER (PARTITION BY e.department_id) cnt_analytic, count(*) cnt_grp
  2  FROM employees e
  3  WHERE e.department_id IN (10,20,30)
  4  GROUP BY e.department_id;
DEPARTMENT_ID CNT_ANALYTIC    CNT_GRP
           10            1          1
           20            1          2
           30            1          6
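Coming back to your original query: analytic functions are evaluated after the GROUP BY, so every column inside OVER (PARTITION BY ...) must itself be a GROUP BY expression (or an aggregate). TSIUP_TRT is not grouped, hence ORA-00979. Either add TSIUP_TRT to the GROUP BY, or aggregate first in an inline view and apply the analytic count on top. A sketch (column list abbreviated, untested against your table):
SELECT g.*,
       count(*) OVER (PARTITION BY g.TSIUPSITE, g.TSIUPCEAN, g.TSIUP_TRT) CONTA_ARTICOLO
FROM  (SELECT TSIUPSITE, TSIUPCEAN, TSIUPDATE, TSIUP_TRT,
              sum(TSIUPCA) TSIUPCA, sum(TSIUPQTE) TSIUPQTE
       FROM   TST_FLIISR_VTEREMART
       WHERE  TSIUPDATE = to_date('27082012','ddmmyyyy')
       AND    TSIUP_NTRX = 172
       AND    TSIUPSITE = 10025
       AND    TSIUPCEAN = '8012452018825'
       GROUP BY TSIUPSITE, TSIUPCEAN, TSIUPDATE, TSIUP_TRT) g;
Note that grouping by TSIUP_TRT changes the granularity of the groups, so check that the aggregated figures are still what you expect.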

Similar Messages

  • Problem with SUM () analytic function

    Dear all,
    Please have a look at my problem.
    SELECT CURR, DT, AMT, RATE,
    SUM(AMT) OVER (PARTITION BY CURR ORDER BY DT) SUMOVER,
    sum( amt * rate) over (PARTITION BY CURR ORDER BY DT) / SUM(AMT) OVER (PARTITION BY CURR ORDER BY DT) avgrt
    FROM
        ( select 'CHF' CURR, ADD_MONTHS(TO_DATE('01-DEC-07'), LEVEL -1) DT, 100 * LEVEL AMT, 1 + ( 5* LEVEL/100) RATE
          FROM DUAL CONNECT BY LEVEL < 10
        )
    SQL> /
    CUR DT               AMT       RATE    SUMOVER      AVGRT
    CHF 01-DEC-07        100       1.05        100       1.05
    CHF 01-JAN-08        200        1.1        300 1.08333333
    CHF 01-FEB-08        300       1.15        600 1.11666667
    CHF 01-MAR-08        400        1.2       1000       1.15
    CHF 01-APR-08        500       1.25       1500 1.18333333
    CHF 01-MAY-08        600        1.3       2100 1.21666667
    CHF 01-JUN-08        700       1.35       2800       1.25
    CHF 01-JUL-08        800        1.4       3600 1.28333333
    CHF 01-AUG-08        900       1.45       4500 1.31666667
    Table Revaluation
    select 'CHF' CURR1, '31-DEC-07' DT , 1.08 RATE FROM DUAL UNION ALL
    select 'CHF' CURR1, '31-MAR-08' DT , 1.22 RATE FROM DUAL UNION ALL
    select 'CHF' CURR1, '30-JUN-08' DT , 1.38 RATE FROM DUAL
    CUR DT              RATE
    CHF 31-DEC-07       1.08
    CHF 31-MAR-08       1.22
    CHF 30-JUN-08       1.38
    The problem is with the calculation of the average rate.
    I want the data in the revaluation table to be considered in the calculation of the
    average rate.
    So average rate for Jan-08 will be
    (100 * 1.08(dec revaluation rate) + 200 * 1.1 ) / (300) = 1.093333333
    for Feb-08
    (100 * 1.08(dec revaluation rate) + 200 * 1.1 + 300 * 1.15) / (600) = 1.121666667
    for mar-08
    (100 * 1.08(dec revaluation rate) + 200 * 1.1 + 300 * 1.15 + 400 * 1.2) / (1000) = 1.153
    for Apr-08
    (1000 * 1.22(Apr revaluation rate) + 500 * 1.25) /1500 = 1.23
    for May-08
    (1000 * 1.22(Apr revaluation rate) + 500 * 1.25 + 600 * 1.30 ) /2100 = 1.25
    and so on..
    Kindly advise

    Hi,
    The main thing in this problem is that for every dt you want to compute the cumulative total from previous rows using the formula
    SUM (amt * rate)
    But rate can be either the rate from the revaluation table or the rate from the main table. For evaluating prior dates, you want to use the most recent rate.
    I'm not sure if you can do this using analytic functions. Like Damorgan said, you should use a self-join.
    The query below gives you the results you requested:
    WITH
    revaluation     AS
    (
         SELECT 'CHF' curr1, TO_DATE ('31-DEC-07', 'DD-MON-RR') dt, 1.08 rate FROM dual UNION ALL
         SELECT 'CHF' curr1, TO_DATE ('31-MAR-08', 'DD-MON-RR') dt, 1.22 rate FROM dual UNION ALL
         SELECT 'CHF' curr1, TO_DATE ('30-JUN-08', 'DD-MON-RR') dt, 1.38 rate FROM dual
    ),
    original_data     AS
    (
         SELECT 'CHF'                                          curr
         ,      ADD_MONTHS (TO_DATE ('01-DEC-07'), LEVEL - 1)  dt
         ,      100 * LEVEL                                    amt
         ,      1 + (5 * LEVEL / 100)                          rate
         FROM   dual
         CONNECT BY LEVEL < 10
    ),
    two_rates     AS
    (
         SELECT od.*
         ,      (
                    SELECT MAX (dt)
                    FROM   revaluation
                    WHERE  curr1  = od.curr
                    AND    dt    <= od.dt
                )   AS r_dt
         ,      (
                    SELECT AVG (rate) KEEP (DENSE_RANK LAST ORDER BY dt)
                    FROM   revaluation
                    WHERE  curr1  = od.curr
                    AND    dt    <= od.dt
                )   AS r_rate
         FROM   original_data  od
    )
    SELECT c.curr
    ,      c.dt
    ,      c.amt
    ,      c.rate
    ,      SUM (p.amt)    AS sumover
    ,      SUM ( p.amt
                 * CASE
                        WHEN p.dt <= c.r_dt
                        THEN c.r_rate
                        ELSE p.rate
                   END
               )
           / SUM (p.amt)  AS avgrt
    FROM   two_rates      c
    JOIN   original_data  p  ON  c.curr  = p.curr
                             AND c.dt   >= p.dt
    GROUP BY c.curr, c.dt, c.amt, c.rate
    ORDER BY c.curr, c.dt
    ;

  • Weird problem with LEAD Analytical function

    I am having a weird problem with the LEAD analytical function. It works fine as a standalone query,
    but when used in a sub-query it is not behaving as expected (explained below). I am unable to troubleshoot
    the issue. Please help me.
    Detail explanation and the data follows:
    ========================= Table & Test Data =======================================
    CREATE TABLE "TESTSCHED"
    (     "SEQ" NUMBER NOT NULL ENABLE,
         "EMPL_ID" NUMBER NOT NULL ENABLE,
         "SCHED_DT" DATE NOT NULL ENABLE,
         "START_DT_TM" DATE NOT NULL ENABLE,
         "END_DT_TM" DATE NOT NULL ENABLE,
         "REC_STAT" CHAR(1) NOT NULL ENABLE,
         CONSTRAINT "TESTSCHED_PK" PRIMARY KEY ("SEQ")     
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (1, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (2, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (3, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (4, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:15:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (5, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:15:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (6, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (7, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (8, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (9, 118327, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (10, 120033, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('04-03-2008', 'dd-mm-yyyy'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (11, 120033, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('04-03-2008', 'dd-mm-yyyy'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (12, 120033, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (13, 120033, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (14, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 18:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 21:15:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (15, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 12:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 16:45:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (16, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 12:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 21:15:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (17, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 16:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 18:45:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (18, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 16:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 18:45:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (19, 169241, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 05:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 14:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (20, 169241, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 05:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 14:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (21, 169241, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (22, 169241, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (23, 200716, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (24, 200716, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (25, 200716, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (26, 252836, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (27, 252836, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (28, 252836, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (29, 252836, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (30, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 10:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (31, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 10:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 14:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (32, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 05:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (33, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:15:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 10:35:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (34, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:15:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 10:35:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (35, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 05:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 14:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (36, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 10:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (37, 289039, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 16:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:15:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (38, 289039, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 13:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:15:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (39, 289039, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 13:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 16:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
    insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
    values (40, 289039, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 13:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 16:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
    ========================= Problem Statement =======================================
    Problem Statement:
    Select all employees whose active schedule (rec_stat = 'A') start date & time falls
    between "16:30" and "17:45".
    An employee may have multiple schedules on a day. They may be continuous or may have
    breaks in between.
    If an employee's schedule is continuous, then the minimum schedule start date & time is
    considered the start date & time of the work.
    If an employee has breaks in the schedule, then each schedule's start date & time is
    considered a start date & time.
    e.g:-1
    1     39609     2008/03/03     2008/03/03 07:00:00     2008/03/03 11:00:00     A
    2     39609     2008/03/03     2008/03/03 11:00:00     2008/03/03 11:30:00     A
    In the above case, the employee's schedule is continuous. The start date & time is 2008/03/03 07:00:00
    e.g:-2
    19     169241     2008/03/03     2008/03/03 05:30:00     2008/03/03 14:00:00     A
    21     169241     2008/03/03     2008/03/03 17:30:00     2008/03/03 22:00:00     A
    In the above case, the employee has a break in between, so both records'
    start dates & times (i.e. 2008/03/03 05:30:00 and 2008/03/03 17:30:00) are considered
    start dates & times.
    e.g:-3
    30     260774     2008/03/03     2008/03/03 07:00:00     2008/03/03 10:30:00     A
    31     260774     2008/03/03     2008/03/03 10:30:00     2008/03/03 14:00:00     A
    32     260774     2008/03/03     2008/03/03 05:30:00     2008/03/03 07:00:00     A
    In the above case, the employee's schedule is continuous. The start date & time is 2008/03/03 05:30:00
    ========================= Query to test Individual Employees =======================================
    Query-1: Test query to verify that the CASE and LEAD functions work fine. "CAL = 0" means that all
    schedules are continuous and the employee needs to be ignored.
    SELECT fw.*,
    CASE WHEN LEAD(FW.END_DT_TM, 1)
    OVER(ORDER BY FW.START_DT_TM DESC ) IS NULL AND
    fw.start_dt_tm BETWEEN
    TO_DATE('03/03/2008 16:30:00','mm/dd/yyyy hh24:mi:ss') AND
    TO_DATE('03/03/2008 17:45:00','mm/dd/yyyy hh24:mi:ss')
    THEN Fw.seq
    WHEN LEAD(FW.END_DT_TM, 1)
    OVER(ORDER BY FW.START_DT_TM DESC) != FW.START_DT_TM AND
    fw.start_dt_tm BETWEEN
    TO_DATE('03/03/2008 16:30:00','mm/dd/yyyy hh24:mi:ss') AND
    TO_DATE('03/03/2008 17:45:00','mm/dd/yyyy hh24:mi:ss')
    THEN Fw.seq
    ELSE 0
    END AS "CAL"
    FROM TESTSCHED fw
    WHERE fw.empl_id = 289039
    and fw.rec_stat = 'A'
    Query Results
    SEQ     EMPL_ID     SCHED_DT     START_DT_TM          END_DT_TM          REC_STAT     CAL
    37     289039     2008/03/03     2008/03/03 16:30:00     2008/03/03 22:15:00     A          0
    39     289039     2008/03/03     2008/03/03 13:45:00     2008/03/03 16:30:00     A          0
    Since all "CAL" column is zero, the employee should be ignored.
    ========================= Query to be used in the application =======================================
    Query-2: Actual Query executed in the application which is not working. Query-1 is used as subquery
    here.
    SELECT
    F1.SEQ,
    F1.EMPL_ID,
    F1.SCHED_DT,
    F1.START_DT_TM,
    F1.END_DT_TM,
    F1.REC_STAT
    FROM TESTSCHED F1
    WHERE F1.SEQ IN
    (SELECT CASE
    WHEN LEAD(FW.END_DT_TM, 1)
    OVER(ORDER BY FW.START_DT_TM DESC) IS NULL AND
    FW.START_DT_TM BETWEEN
    TO_DATE('03/03/2008 16:30:00', 'mm/dd/yyyy hh24:mi:ss') AND
    TO_DATE('03/03/2008 17:45:00', 'mm/dd/yyyy hh24:mi:ss') THEN FW.SEQ
    WHEN LEAD(FW.END_DT_TM, 1)
    OVER(ORDER BY FW.START_DT_TM DESC) != FW.START_DT_TM AND
    FW.START_DT_TM BETWEEN
    TO_DATE('03/03/2008 16:30:00', 'mm/dd/yyyy hh24:mi:ss') AND
    TO_DATE('03/03/2008 17:45:00', 'mm/dd/yyyy hh24:mi:ss') THEN FW.SEQ
    ELSE 0
    END
    FROM TESTSCHED FW
    WHERE FW.EMPL_ID = F1.EMPL_ID
    AND FW.REC_STAT = 'A')
    Query-2 Results
    SEQ     EMPL_ID     SCHED_DT     START_DT_TM          END_DT_TM          REC_STAT
    9     118327     2008/03/03     2008/03/03 17:30:00     2008/03/03 22:00:00     A
    12     120033     2008/03/03     2008/03/03 17:30:00     2008/03/03 19:30:00     A
    17     126690     2008/03/03     2008/03/03 16:45:00     2008/03/03 18:45:00     A
    21     169241     2008/03/03     2008/03/03 17:30:00     2008/03/03 22:00:00     A
    24     200716     2008/03/03     2008/03/03 17:30:00     2008/03/03 22:00:00     A
    28     252836     2008/03/03     2008/03/03 17:30:00     2008/03/03 19:30:00     A
    37     289039     2008/03/03     2008/03/03 16:30:00     2008/03/03 22:15:00     A
    Problem with Query-2 Results:
    Employee IDs 126690 & 289039 should not appear in the List.
    Reason: Employee 126690: schedule start date & time is 2008/03/03 12:45:00 and falls outside
    "16:30" and "17:45"
    14     126690     2008/03/03     2008/03/03 18:45:00     2008/03/03 21:15:00     A
    17     126690     2008/03/03     2008/03/03 16:45:00     2008/03/03 18:45:00     A
    15     126690     2008/03/03     2008/03/03 12:45:00     2008/03/03 16:45:00     A
    Reason: Employee 289039: schedule start date & time is 2008/03/03 13:45:00 and falls outside "16:30" and "17:45"
    37     289039     2008/03/03     2008/03/03 16:30:00     2008/03/03 22:15:00     A
    39     289039     2008/03/03     2008/03/03 13:45:00     2008/03/03 16:30:00     A

    Your LEAD function is ordered by fw.start_dt_tm. In your first SQL, the WHERE clause limited the records returned to REC_STAT = 'A' and EMPL_ID = 289039.
    In your 2nd query that restriction is not in effect when LEAD is evaluated, so to the LEAD function the next record after seq 37 is seq 38, which has an end date of 2008-03-03 10:15 PM, not equal to the start time of record seq 37.
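    One possible rewrite (a hypothetical sketch, only spot-checked against the posted data): evaluate LEAD in an inline view after the REC_STAT filter, partitioned per employee so each employee's schedules are ordered independently, and filter on CAL afterwards:
    SELECT seq, empl_id, sched_dt, start_dt_tm, end_dt_tm, rec_stat
    FROM  (SELECT fw.*,
                  CASE WHEN LEAD(fw.end_dt_tm, 1)
                            OVER (PARTITION BY fw.empl_id
                                  ORDER BY fw.start_dt_tm DESC) IS NULL
                        AND fw.start_dt_tm BETWEEN
                            TO_DATE('03/03/2008 16:30:00', 'mm/dd/yyyy hh24:mi:ss') AND
                            TO_DATE('03/03/2008 17:45:00', 'mm/dd/yyyy hh24:mi:ss')
                       THEN fw.seq
                       WHEN LEAD(fw.end_dt_tm, 1)
                            OVER (PARTITION BY fw.empl_id
                                  ORDER BY fw.start_dt_tm DESC) != fw.start_dt_tm
                        AND fw.start_dt_tm BETWEEN
                            TO_DATE('03/03/2008 16:30:00', 'mm/dd/yyyy hh24:mi:ss') AND
                            TO_DATE('03/03/2008 17:45:00', 'mm/dd/yyyy hh24:mi:ss')
                       THEN fw.seq
                       ELSE 0
                  END AS cal
           FROM   testsched fw
           WHERE  fw.rec_stat = 'A')
    WHERE  seq = cal;
    This way the window only ever sees REC_STAT = 'A' rows, and employees 126690 and 289039 drop out.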

  • Aggregation of analytic functions not allowed

    Hi all, I have a calculated field called Calculation1 with the following calculation:
    AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report #7 COMPL".Resource Name )
    The result of this calculation is correct, but is repeated for all the rows I have in the dataset.
    Group Name      Resource name    Calculation1
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    5112 rows
    I tried to create another calculation, AVG(Calculation1), in order to have only ONE value for the couple (Group Name, Resource Name), but I get the error: Aggregation of analytic functions not allowed
    I also saw inside the "Edit worksheet" panel that Calculation1 is not represented with the "Sigma" symbol (as, for example, a simple AVG(field_1) is), and inside the SQL code I don't have GROUP BY Group Name, Resource Name...
    I'd like to see ONLY one row as:
    Group Name      Resource name    Calculation1
    SH Group            Mr. A            10
    ...meaning I grouped by Group Name, Resource Name.
    Anyone knows how can I achieve this result or any workarounds ??
    Thanks in advance
    Alex

    Hi Rod, unfortunately I can't use the plain AVG(Resolution_time) because my dataset is quite strange... let me explain better.
    I start from this situation:
    (screenshot: http://www.freeimagehosting.net/uploads/6c7bba26bd.jpg)
    There are 3 calculated fields:
    RANK is the first calculated field:
    ROW_NUMBER() OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name,"Tickets Report Assigned To & Created By COMPL".Incident Id  ORDER BY  "Tickets Report Assigned To & Created By COMPL".Select Flag )
    RT Calc is the 2nd calculation:
    CASE WHEN RANK = 1 THEN Resolution_time END
    and Calculation2 is the 3rd calculation:
    AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY  RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name )
    As you can see, from the initial dataset I have duplicated incident ids, and a simple AVG(Resolution Time) counts all the duplicates as well.
    I used the rank (based on the field "flag") to take, for each ticket, ONLY one "resolution time" value (in my case I need the resolution time when the rank = 1).
    So, with Calculation2 I calculated for each couple Group Name, Resource Name the right AVG(Resolution time), but as you can see... this result is duplicated for each incident_id....
    What I need instead is to see once for each couple 'Group Name, Resource Name' the AVG(Resolution time).
    In other words I need to calculate the AVG(Resolution time) considering only the values written inside the RT Calc field (where they are NOT NULL, and so the total of the tickets is not 14, but 9).
    I tried to aggregate again using AVG(Calculation2)...but I had the error "Aggregation of analytic functions not allowed"...
    Do you know a way to fix this problem ?
    Thanks
    Alex
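    One SQL-level idea (a sketch with simplified, hypothetical names; whether Discoverer accepts it as a worksheet calculation is another matter): since AVG ignores NULLs, feed it only the rank-1 resolution times instead of aggregating Calculation2 a second time:
    -- rank_col stands for the ROW_NUMBER() calculation, rt for Resolution_time;
    -- rows where rank_col <> 1 contribute NULL and are skipped by AVG
    AVG(CASE WHEN rank_col = 1 THEN rt END)
        OVER (PARTITION BY group_name, resource_name)
    The value is still repeated on every row (it is an analytic function, after all), but it is computed over the 9 relevant tickets instead of all 14.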

  • Analytic function to retrieve a value one year ago

    Hello,
    I'm trying to find an analytic function to get a value from another row by looking at a date, with Oracle 11gR2.
    I have a table with a date_id (truncated date), a flag and a measure. For each date, I have at least one row (sometimes 2), so it is gapless.
    I would like to find analytic functions to show for each date :
    sum of the measure for that date
    sum of the measure one week ago
    sum of the measure one year ago
    As it is gapless, I managed to do it for the week by doing a GROUP BY on the date in a subquery and using LAG with the offset set to 7 on top of it (see below).
    However, I'm struggling with how to do that for the data one year ago, as we might have leap years. I cannot simply set the offset to 365.
    Is it possible to do it with a RANGE BETWEEN window clause? I can't manage to get it working with dates.
    Week: LAG with offset 7
    SQL Fiddle
    or
    create table daily_counts
    (
      date_id date,
      internal_flag number,
      measure1 number
    );
    insert into daily_counts values ('01-Jan-2013', 0, 8014);
    insert into daily_counts values ('01-Jan-2013', 1, 2);
    insert into daily_counts values ('02-Jan-2013', 0, 1300);
    insert into daily_counts values ('02-Jan-2013', 1, 37);
    insert into daily_counts values ('03-Jan-2013', 0, 19);
    insert into daily_counts values ('03-Jan-2013', 1, 14);
    insert into daily_counts values ('04-Jan-2013', 0, 3);
    insert into daily_counts values ('05-Jan-2013', 0, 0);
    insert into daily_counts values ('05-Jan-2013', 1, 1);
    insert into daily_counts values ('06-Jan-2013', 0, 0);
    insert into daily_counts values ('07-Jan-2013', 1, 3);
    insert into daily_counts values ('08-Jan-2013', 0, 33);
    insert into daily_counts values ('08-Jan-2013', 1, 9);
    commit;
    select
        date_id,
        total1,
        LAG(total1, 7) OVER(ORDER BY date_id) total_one_week_ago
      from
          (
            select
              date_id,
              SUM(measure1) total1
            from daily_counts
            group by date_id
          )
    order by 1;
    Year : no idea?
    I can't give a gapless example (it would be too long), but in case there is a solution using the date directly:
    SQL Fiddle
    or add this to the schema above :
    insert into daily_counts values ('07-Jan-2012', 0, 11);
    insert into daily_counts values ('07-Jan-2012', 1, 1);
    insert into daily_counts values ('08-Jan-2012', 1, 4);
    Thank you for your help.
    Floyd

    Hi,
    Sorry, I'm not sure I understand the problem.
    If you are certain that there is at least 1 row for every day, then you can be sure that the GROUP BY will produce exactly 1 row per day, and you can use LAG (total1, 365) just like you already use LAG (total1, 7).
    Are you concerned about leap years?  That is, when the day is March 1, 2016, do you want the total_one_year_ago column to reflect March 1, 2015, which was 366 days earlier?  In that case, use
    date_id - ADD_MONTHS (date_id, -12)
    instead of 365.
    LAG only works with an exact number, but you can use RANGE BETWEEN with other analytic functions, such as MIN or SUM:
    SELECT DISTINCT
              date_id
    ,         SUM (measure1) OVER (PARTITION BY date_id)    AS total1
    ,         SUM (measure1) OVER ( ORDER BY      date_id
                                    RANGE BETWEEN 7 PRECEDING
                                          AND     7 PRECEDING
                                  )                       AS total1_one_week_ago
    ,         SUM (measure1) OVER ( ORDER BY      date_id
                                    RANGE BETWEEN 365 PRECEDING
                                          AND     365 PRECEDING
                                  )                       AS total1_one_year_ago
    FROM      daily_counts
    ORDER BY  date_id
    Again, use date arithmetic instead of the hard-coded 365, if that's an issue.
    As Hoek said, it really helps to post the exact results you want from the given sample data.  You're miles ahead of the people who don't even post the sample data, though.
    You're right not to post hundreds of INSERT statements to get a year's data.  Here's one way to generate sample data for lots of rows at the same time:
    -- Put a 0 into the table for every day in 2012
    INSERT INTO daily_counts (date_id, measure1)
    SELECT  DATE '2011-12-31' + LEVEL
    ,       0
    FROM    dual
    CONNECT BY LEVEL <= 366;
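    Putting those two hints together, a sketch of the year-ago column using an interval instead of a hard-coded 365 (an assumption on my part that your version accepts interval literals in RANGE windows; untested here):
    SELECT DISTINCT
              date_id
    ,         SUM (measure1) OVER (PARTITION BY date_id)    AS total1
    ,         SUM (measure1) OVER ( ORDER BY      date_id
                                    RANGE BETWEEN INTERVAL '12' MONTH PRECEDING
                                          AND     INTERVAL '12' MONTH PRECEDING
                                  )                         AS total1_one_year_ago
    FROM      daily_counts
    ORDER BY  date_id;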

  • Analytical function in OWB 10.2.0.4.0

    Dear -
    I am trying to implement analytical function in OWB but not sure how to do it. Can anyone help me?
    My SQL query looks like
    select sum (aamtorg),
    sum(sum(aamtorg)) over
    (order by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx
    rows between unbounded preceding and current row) cumulative_amountcctybbl
    from fmbnd_evt
    where cbssuntgbk = 'FM001'
    and caccgbk = '14300000029'
    and caccroo = '9146581'
    and ccrytrngbk = 'AUD'
    and creftrl = '~'
    and cmgmint = '~'
    and cbasent = 'U2725'
    and cbok = '0000'
    and tamtlbl = '~'
    and dacggll between '01aug2011' and '04aug11'
    group by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, ctrdnbmgint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx
    I want to implement the cumulative_amountcctybbl column in the mapping.
    Can anyone help?

    Hi Arun,
    analytical functions don't require GROUP BY clause and that's why you can use an expression operator. You also have a normal SUM (aggregate) function in your query, which requires GROUP BY and can only be implemented using aggregator operator. If I understand your problem correctly, you need to use aggregate SUM with GROUP BY on your data set first, and then use analytical SUM on this set (which is already processed with an aggregate SUM). Your query would look something like this:
    select sum_aamtorg,
    sum(sum_aamtorg) over
    (order by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx
    rows between unbounded preceding and current row) cumulative_amountcctybbl
    from (
    select sum (aamtorg) sum_aamtorg,
    cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx
    from fmbnd_evt
    where cbssuntgbk = 'FM001'
    and caccgbk = '14300000029'
    and caccroo = '9146581'
    and ccrytrngbk = 'AUD'
    and creftrl = '~'
    and cmgmint = '~'
    and cbasent = 'U2725'
    and cbok = '0000'
    and tamtlbl = '~'
    and dacggll between '01aug2011' and '04aug11'
    group by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, ctrdnbmgint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx)
    Operator sequence would then look like: TABLE -> FILTER -> AGGREGATOR ->EXPRESSION.
    Hope this helps
    Mate

  • How to use group by in analytic function

    I need to return the department which has the minimum salary in one row. It must be done with an analytic function, but I have a problem with GROUP BY: I cannot use min() without GROUP BY.
    select * from (select min(sal) min_salary, deptno, RANK() OVER (ORDER BY sal ASC, rownum ASC) RN from emp group by deptno) WHERE RN < 20 order by deptno;

    different query, different results.
    LPALANI@l11gr2>select department_id, min(salary)
      2  from hr.employees
      3  group by department_id
      4  order by 2;
       DEPARTMENT_ID      MIN(SALARY)
                  50            2,100
                  20            2,100
                  30            2,500
                  60            4,200
                  10            4,400
                  80            6,100
                  40            6,500
                 100            6,900
                                7,000
                 110            8,300
                  70           10,000
                  90           17,000
    12 rows selected.
    LPALANI@l11gr2>
    LPALANI@l11gr2>-- Always lists one department in a non-deterministic way
    LPALANI@l11gr2>select * from (
      2  select department_id, min(salary) min_salary
      3  from hr.employees
      4  group by department_id
      5  order by 2) where rownum = 1;
       DEPARTMENT_ID       MIN_SALARY
                  20            2,100
    LPALANI@l11gr2>
    LPALANI@l11gr2>-- Out of the departments with the same least salary, returns the one with the least department number
    LPALANI@l11gr2>SELECT   MIN (department_id) KEEP (DENSE_RANK FIRST ORDER BY salary) AS dept_with_lowest_sal, min(salary) min_salary
      2  FROM        hr.employees;
    DEPT_WITH_LOWEST_SAL       MIN_SALARY
                      20            2,100
    LPALANI@l11gr2>
    LPALANI@l11gr2>-- This will list all the deparments with the minimum salary
    LPALANI@l11gr2>select department_id, min_salary
      2  from (select
      3  department_id,
      4  min(salary) min_salary,
      5  RANK() OVER (ORDER BY min(salary) ASC) RN
      6            from hr.employees
      7            group by department_id)
      8  WHERE rn=1;
       DEPARTMENT_ID       MIN_SALARY
                  20            2,100
                  50            2,100

  • Using Analytic Functions

    Hi all,
    I am using ODI 11g(11.1.1.3.0) and I am trying to make an interface using analytic functions in the column mapping, something like below.
    sum(salary) over (partition by .....)
    The problem is that when ODI sees SUM it assumes it is an aggregate function and adds a GROUP BY. Is there any way to make ODI understand it is not an aggregate function?
    I tried creating an option to specify whether it is analytic or not and updated the IKM, with no luck.
    <%if ( odiRef.getUserExit("ANALYTIC").equals("1") ) { %>
    <% } else { %>
    <%=odiRef.getGrpBy(i)%>
    <%=odiRef.getHaving(i)%>
    <% } %>
    Thanks in advance

    Thanks for the reply.
    But I think in ODI 11g the getFrom() function behaves differently; that is why it is not working.
    When I check out the A.2.18 getFrom() Method from Substitution API Reference document, it says
    Allows the retrieval of the SQL string of the FROM in the source SELECT clause for a given dataset. The FROM statement is built from tables and joins (and according to the SQL capabilities of the technologies) that are used in this dataset.
    I think getFrom() also retrieves the GROUP BY clause. I created a step in the IKM with just <%=odiRef.getFrom(0)%> and I can see that even the query generated there has a GROUP BY clause.

  • Does sql analytic function help to determine continuity in occurrences

    We need to solve this problem in a sql statement.
    imagine a table test with these columns
    create table test (id char(1), begin number, end number);
    and these values
    insert into test values ('a',1,2);
    insert into test values ('a',2,3);
    insert into test values ('a',3,4);
    insert into test values ('a',7,10);
    insert into test values ('a',10,15);
    insert into test values ('b',5,9);
    insert into test values ('b',9,21);
    insert into test values ('c',1,5);
    Our goal is to determine continuity in the number sequence between the begin and end attributes for a given id, and to extract the min and max values of each continuity chain.
    The result may be
    a, 1, 4
    a, 7, 15
    b, 5, 21
    c, 1, 5
    We tried some analytic functions (lag, lead, row_number, min, max with partition by, etc.) to identify the row sets that represent a continuity, but we didn't find a way to mark them so that we can use MIN and MAX to extract the extreme values.
    Any idea is really welcome !
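    One way (a sketch of the start-flag technique on the test table above; the begin/end column names are kept from the question, quote or rename them if your version objects): flag a row as starting a new chain when its begin value does not equal the previous row's end value, number the chains with a running sum of the flags, then group by chain:
    with flagged as (
      select id, begin, end,
             -- 1 = this row starts a new chain, 0 = it continues the previous one
             case when begin = lag(end) over (partition by id order by begin)
                  then 0 else 1
             end as chain_start
      from test
    ), chained as (
      select id, begin, end,
             -- running sum of the flags gives each chain its own number
             sum(chain_start) over (partition by id order by begin) as chain_id
      from flagged
    )
    select id, min(begin) as chain_begin, max(end) as chain_end
    from chained
    group by id, chain_id
    order by id, chain_begin;
    This returns exactly the chains listed above: (a, 1, 4), (a, 7, 15), (b, 5, 21), (c, 1, 5). The reply below applies the same idea in a real context.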

    Here is our implementation in a real context for example:
    insert into requesterstage(requesterstage_i, requester_i, t_requesterstage_i, datefrom, dateto )
    With ListToAdd as
    (Select distinct support.requester_i,
    support.datefrom,
    support.dateto
    from support
    where support.datefrom < to_date('01.01.2006', 'dd.mm.yyyy')
    and support.t_relief_i = t_relief_ipar.fgetflextypologyclassitem_i(t_relief_ipar.fismedicalexpenses)
    and not exists
    (select null
    from requesterstage
    where requesterstage.requester_i = support.requester_i
    and support.datefrom < nvl(requesterstage.dateto, support.datefrom + 1)
    and nvl(support.dateto, requesterstage.datefrom + 1) > requesterstage.datefrom)
    ),
    ListToAddAnalyzed_1 as
    (select requester_i,
    datefrom,
    dateto,
    decode(datefrom,lag(dateto) over (partition by requester_i order by datefrom),0,1) data_set_start
    from ListToAdd),
    ListToAddAnalyzed_2 as
    (select requester_i,
    datefrom,
    dateto,
    data_set_start,
    sum(data_set_start) over(order by requester_i, datefrom ) data_set_id
    from ListToAddAnalyzed_1)
    select requesterstage_iseq.nextval,
    requester_i,
    t_requesterstage_ipar.fgetflextypologyclassitem_i(t_requesterstage_ipar.fisbefore2006),
    datefrom,
    decode(sign(nvl(dateto, to_date('01.01.2006', 'dd.mm.yyyy')) -to_date('01.01.2006', 'dd.mm.yyyy')), 0, to_date('01.01.2006', 'dd.mm.yyyy'), -1, dateto, 1, to_date('01.01.2006', 'dd.mm.yyyy'))
    from ( select requester_i
    , min(datefrom) datefrom
    , max(dateto) dateto
    From ListToAddAnalyzed_2
    group by requester_i, data_set_id
    );

  • Analytic Functions with GROUP-BY Clause?

    I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
    Hypothetical Table SALES, consisting of DAY_ID, PRODUCT_ID, PURCHASER_ID, and PURCHASE_PRICE, lists all the individual sales.
    Hypothetical Business Question: Product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
    INSERT INTO SALES VALUES(1,1,1,1.0);
    INSERT INTO SALES VALUES(1,1,1,2.0);
    INSERT INTO SALES VALUES(1,2,1,3.0);
    INSERT INTO SALES VALUES(1,2,1,4.0);
    INSERT INTO SALES VALUES(2,1,1,5.0);
    INSERT INTO SALES VALUES(2,1,1,6.0);
    INSERT INTO SALES VALUES(2,2,1,7.0);
    INSERT INTO SALES VALUES(2,2,1,8.0);
    COMMIT;
    Day 1: If I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
    Day 2: If I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
    The desired result set is:
    DAY_ID                 MY_MEASURE
    1                      6
    2                      14
    The following SQL gets me tantalizingly close:
    SELECT DAY_ID,
      MAX(PURCHASE_PRICE)
      KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
      OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
      FROM SALES
    ORDER BY DAY_ID
    DAY_ID                 MY_MEASURE
    1                      2
    1                      2
    1                      4
    1                      4
    2                      6
    2                      6
    2                      8
    2                      8
    But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytical functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
    Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
    I am using a reporting tool, so unfortunately using things like inline views is not an option. I need to be able to define "MY_MEASURE" as something the query tool can apply the SUM() function to in its generated SQL.
    (Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
    I humbly solicit your collective wisdom, oh forum.

    Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function.
    SELECT  DAY_ID,
            PRODUCT_ID,
            MAX(PURCHASE_PRICE) MAX_PRICE
      FROM  SALES
      GROUP BY DAY_ID,
               PRODUCT_ID
      ORDER BY DAY_ID,
               PRODUCT_ID
    DAY_ID                 PRODUCT_ID             MAX_PRICE             
    1                      1                      2                     
    1                      2                      4                     
    2                      1                      6                     
    2                      2                      8
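    For reference, the standard two-step form that produces the desired one-row-per-day output (it relies on the inline view that the reporting tool rules out, so this is only a baseline to compare against):
    SELECT   day_id, SUM(max_price) AS my_measure
    FROM    (SELECT day_id, product_id, MAX(purchase_price) AS max_price
             FROM   sales
             GROUP BY day_id, product_id)
    GROUP BY day_id
    ORDER BY day_id;
    -- DAY_ID  MY_MEASURE
    -- 1       6
    -- 2       14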

  • Analytic Functions - Need resultset only in one select

    Hello Experts,
    Problem Definition: Using Analytic Function, get Total sales for the Product P1 and Customer C1 [Total sales for the customer itself] in one line. I want to restrict the ResultSet of the query to Product P1, please look at the data below, queries and problems..
    Data
    Customer Product Qtr Sales
    C1 P1 19991 100.00
    C1 P1 19992 125.00
    C1 P1 19993 175.00
    C1 P1 19994 300.00
    C1 P2 19991 100.00
    C1 P2 19992 125.00
    C1 P2 19993 175.00
    C1 P2 19994 300.00
    C2 P1 19991 100.00
    C2 P1 19992 125.00
    C2 P1 19993 175.00
    C2 P1 19994 300.00
    Problem, I want to display....
    Customer Product ProdSales CustSales
    C1 P1 700 1400
    But without using an outer query; i.e., please look below for the query that returns this result with two selects. I want this result in one query only.
    Select * From ----*** want to avoid this... ***----
    (Select Customer,Product,
    Sum(Sales) ProdSales,
    Sum(Sum(Sales)) Over(Partition By Customer) CustSales
    From t1
    Where customer='C1')
    Where
    Product='P1' ;
    Also, I want to avoid Hard coding of P1 in the select clause....
    I mean, I can do it in one shot/select, but look at the query below: it uses P1 in the select clause, which is a no-no!! P1 is allowed only in Where or Having.
    Select Customer,Decode(Product, 'P1','P1','P1') Product,
    Decode(Product,'P1',Sales,0) ProdSales,
    Sum(Sum(Sales)) Over (Partition By Customer ) CustSales
    From t1
    Where customer='C1' ;
    This will get me what I want, but as I said earlier, I want to avoid using P1 in the
    Select clause..
    Goal is to Avoid using
    1-> Two Select/Outer Query/In Line Views
    2-> Product 'P1' in the Select clause...No hard coded product name in the select clause and group by clause..
    Thanks
    -Dhaval

    Select * From ----*** want to avoid this... ***----
    (Select Customer,Product,
    Sum(Sales) ProdSales,
    Sum(Sum(Sales)) Over(Partition By Customer)
    CustSales
    From t1
    Where customer='C1')
    Where
    Product='P1' ;
    Goal is to Avoid using
    1-> Two Select/Outer Query/In Line Views
    Why?

  • Restrict Query Resultset  which uses Analytic Function

    Gents,
    Problem Definition: Using Analytic Function, get Total sales for the Product P1
    and Customer C1 [Total sales for the customer itself] in one line.
    I want to restrict the ResultSet of the query to Product P1,
    please look at the data below, queries and problems..
    Data
    Customer Product Qtr Sales
    C1 P1 19991 100.00
    C1 P1 19992 125.00
    C1 P1 19993 175.00
    C1 P1 19994 300.00
    C1 P2 19991 100.00
    C1 P2 19992 125.00
    C1 P2 19993 175.00
    C1 P2 19994 300.00
    C2 P1 19991 100.00
    C2 P1 19992 125.00
    C2 P1 19993 175.00
    C2 P1 19994 300.00
    Problem, I want to display....
    Customer Product ProdSales CustSales
    C1 P1 700 1400
    But without using an outer query, i.e. please look below for the query that
    returns this result with two selects. I want this result in one query only.
    Select * From ----*** want to avoid this... ***----
    (Select Customer,Product,
    Sum(Sales) ProdSales,
    Sum(Sum(Sales)) Over(Partition By Customer) CustSales
    From t1
    Where customer='C1')
    Where
    Product='P1' ;
    Also, I want to avoid Hard coding of P1 in the select clause....
    I mean, I can do it in one shot/select, but look at the query below: it uses
    P1 in the select clause, which is a no-no!! P1 is allowed only in Where or Having.
    Select Customer,Decode(Product, 'P1','P1','P1') Product,
    Decode(Product,'P1',Sales,0) ProdSales,
    Sum(Sum(Sales)) Over (Partition By Customer ) CustSales
    From t1
    Where customer='C1' ;
    This will get me what I want, but as I said earlier, I want to avoid using P1 in the
    Select clause..
    Goal is to Avoid using
    1-> Two Select/Outer Query/In Line Views
    2-> Product 'P1' in the Select clause...
    Thanks
    -Dhaval Rasania

    I don't understand goal number 1 of not using an inline view.
    What is the harm?

  • Discoverer Analytic Function windowing - errors and bad aggregation

    I posted this first on Database General forum, but then I found this was the place to put it:
    Hi, I'm using this kind of windowing function:
    SUM(Receitas Especificas) OVER(PARTITION BY Tipo Periodo,Calculado,"Empresa Descrição (Operador)","Empresa Descrição" ORDER BY Ini Periodo RANGE BETWEEN INTERVAL '12' MONTH PRECEDING AND INTERVAL '12' MONTH PRECEDING )
    If I use the "Receitas Especificas SUM" instead of
    "Receitas Especificas" I get the following error running the report:
    "an error occurred while attempting to run..."
    This is not in accordance with:
    http://www.boku.ac.at/oradoc/ias/10g(9.0.4)/bi.904/b10268.pdf
    but OK, the version without SUM inside works.
    Another problem: for an analytic function with PARTITION BY,
    this does not work (it shows the cannot-aggregate symbol) if we collapse or use "<All>" in page items.
    But it works if we remove the item from the PARTITION BY and also remove it from the workbook.
    It's even worse for windowing functions (the query above), because the query
    only works if we remove the item from the PARTITION BY, but we have to show it on the workbook - and this MAKES NO SENSE... :(
    Please help.

    Unfortunately Discoverer doesn't show (correct) values for analytic functions when selecting "<All>" in a page item. I found out that it does work when you add the analytic function to the db-view instead of to the report as a calculation or as a calculated item on the folder.
    The only problem is you have to name all page-items in the PARTITION window, so, when adding a page-item to the report, you have to change the db-view and alter the PARTITION window.
    Michael

  • Report Builder 6i not recognizing analytic functions

    Hi all, in an attempt to speed up a very slow query, I applied an analytic function to it. I can save the query in Report Builder with no problems; however, I cannot create data links between this query and others. After I comment out the analytic function, data links can be made. My colleague said Report Builder 6i is too old, so it can only recognize ANSI SQL syntax. Since our DB server is using Oracle 10gR2, is there a way for Report Builder to recognize and compile Oracle 10g syntax?
    Many thanks.

    Hello,
    Your colleague is right. Even if the SQL query is executed by the DB server , Reports needs to parse the SQL query.
    The SQL parser included in Reports 6i is based on 8.0.6
    You can see this version in the Reports Builder help :
    Menu : Help -> About Reports Builder ...
    ORACLE Server Release 8.0.6.0.0
    Regards
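    One workaround worth trying (my assumption, not confirmed in this thread): hide the analytic syntax from the old parser behind a database view on the 10g server, so Report Builder only has to parse a plain SELECT. The names below are illustrative, not from the thread:
    -- created on the database, where 10g syntax is understood
    CREATE OR REPLACE VIEW my_report_view AS
    SELECT t.*,
           SUM(t.amount) OVER (PARTITION BY t.region) AS region_total
    FROM   my_report_data t;
    -- Report Builder 6i then only needs to parse: SELECT * FROM my_report_view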

  • Using analytic function to get the right output.

    Dear all;
    I have the following sample data below
    create table temp_one
    (      id number(30),
           placeid varchar2(400),
           issuedate  date,
           person varchar2(400),
           failures number(30),
           primary key(id)
    );
    insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
    insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
    insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
    insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
    insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
    insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
    This is the output I desire:
    placeid       issueperiod                               failures
    NY              02/28/2011 - 03/06/2011          10
    Mexico       02/28/2011 - 03/06/2011           3
    Mexico        03/14/2011 - 03/20/2011          12
    London        03/14/2011 - 03/20/2011          8
    All help is appreciated. I will post my query as soon as I am able to think of a good logic for this...

    Hi,
    user13328581 wrote:
    ... Kindly note, I am still learning how to use analytic functions.
    That doesn't matter; analytic functions won't help in this problem. The aggregate SUM function is all you need.
    But what do you need to GROUP BY? What is each row of the result set going to represent? A placeid? Yes, each row will represent only one placedid, but it's going to be divided further. You want a separate row of output for every placeid and week, so you'll want to GROUP BY placeid and week. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 into separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures.
    This gets the output you posted from the sample data you posted:
    SELECT       placeid
    ,            TO_CHAR ( TRUNC (issuedate, 'IW')
                         , 'MM/DD/YYYY'
                         ) || ' - ' || TO_CHAR ( TRUNC (issuedate, 'IW') + 6
                                               , 'MM/DD/YYYY'
                                               )     AS issueperiod
    ,            SUM (failures)                      AS sumfailures
    FROM         temp_one
    GROUP BY     placeid
    ,            TRUNC (issuedate, 'IW')
    ;
    You could use a sub-query to compute TRUNC (issuedate, 'IW') once. The code would be about as complicated, efficiency probably won't improve noticeably, and the results would be the same.
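    That sub-query variant would look something like this (a sketch, same output):
    SELECT   placeid
    ,        TO_CHAR (week_start, 'MM/DD/YYYY') || ' - ' ||
             TO_CHAR (week_start + 6, 'MM/DD/YYYY')     AS issueperiod
    ,        SUM (failures)                             AS sumfailures
    FROM    (SELECT placeid, failures, TRUNC (issuedate, 'IW') AS week_start
             FROM   temp_one)
    GROUP BY placeid, week_start;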
