Moving sum using date intervals - analytic functions help

Let's say you have the following set of data:
DATE SALES
     09/02/2012     100
     09/02/2012     50
     09/02/2012     10
     09/02/2012     1000
     09/02/2012     20
     12/02/2012     1000
     12/02/2012     1100
     14/02/2012     1000
     14/02/2012     100
     15/02/2012     112500
     15/02/2012     13500
     15/02/2012     45000
     15/02/2012     1500
     19/02/2012     1500
     20/02/2012     400
     23/02/2012     2000
     27/02/2012     4320
     27/02/2012     300000
     01/03/2012     100
     04/03/2012     17280
     06/03/2012     100
     06/03/2012     100
     06/03/2012     4320
     08/03/2012     100
     13/03/2012     1000
For each day I need to know the sum of the sales for that day and the preceding 5 calendar days (not five rows).
What query could I use?
Please help!

Hi.
Here's one way.
WITH data AS (
     SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d,     100 n FROM DUAL UNION ALL
     SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d,     50 n FROM DUAL UNION ALL
     SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d,     10 n FROM DUAL UNION ALL
     SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d,     1000 n FROM DUAL UNION ALL
     SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d,     20 n FROM DUAL UNION ALL
     SELECT TO_DATE('12/02/2012','DD/MM/YYYY') d,     1000 n FROM DUAL UNION ALL
     SELECT TO_DATE('12/02/2012','DD/MM/YYYY') d,     1100 n FROM DUAL UNION ALL
     SELECT TO_DATE('14/02/2012','DD/MM/YYYY') d,     1000 n FROM DUAL UNION ALL
     SELECT TO_DATE('14/02/2012','DD/MM/YYYY') d,     100 n FROM DUAL UNION ALL
     SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d,     112500 n FROM DUAL UNION ALL
     SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d,     13500 n FROM DUAL UNION ALL
     SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d,     45000 n FROM DUAL UNION ALL
     SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d,     1500 n FROM DUAL UNION ALL
     SELECT TO_DATE('19/02/2012','DD/MM/YYYY') d,     1500 n FROM DUAL UNION ALL
     SELECT TO_DATE('20/02/2012','DD/MM/YYYY') d,     400 n FROM DUAL UNION ALL
     SELECT TO_DATE('23/02/2012','DD/MM/YYYY') d,     2000 n FROM DUAL UNION ALL
     SELECT TO_DATE('27/02/2012','DD/MM/YYYY') d,     4320 n FROM DUAL UNION ALL
     SELECT TO_DATE('27/02/2012','DD/MM/YYYY') d,     300000 n FROM DUAL UNION ALL
     SELECT TO_DATE('01/03/2012','DD/MM/YYYY') d,     100 n FROM DUAL UNION ALL
     SELECT TO_DATE('04/03/2012','DD/MM/YYYY') d,     17280 n FROM DUAL UNION ALL
     SELECT TO_DATE('06/03/2012','DD/MM/YYYY') d,     100 n FROM DUAL UNION ALL
     SELECT TO_DATE('06/03/2012','DD/MM/YYYY') d,     100 n FROM DUAL UNION ALL
     SELECT TO_DATE('06/03/2012','DD/MM/YYYY') d,     4320 n FROM DUAL UNION ALL
     SELECT TO_DATE('08/03/2012','DD/MM/YYYY') d,     100 n FROM DUAL UNION ALL
     SELECT TO_DATE('13/03/2012','DD/MM/YYYY') d,     1000 n FROM DUAL
),
days AS (
     SELECT TO_DATE('2012-02-01','YYYY-MM-DD')+(LEVEL-1) d
     FROM DUAL
     CONNECT BY LEVEL <= 60
),
totals_per_day AS (
     SELECT dy.d,SUM(NVL(dt.n,0)) total_day
     FROM
          data dt,
          days dy
     WHERE
          dy.d = dt.d(+)
     GROUP BY dy.d
)
SELECT
     d,
     SUM(total_day) OVER (
          ORDER BY d
          RANGE BETWEEN 5 PRECEDING AND CURRENT ROW
     ) AS five_day_total
FROM totals_per_day
ORDER BY d;
2012-02-01 00:00:00     0
2012-02-02 00:00:00     0
2012-02-03 00:00:00     0
2012-02-04 00:00:00     0
2012-02-05 00:00:00     0
2012-02-06 00:00:00     0
2012-02-07 00:00:00     0
2012-02-08 00:00:00     0
2012-02-09 00:00:00     1180
2012-02-10 00:00:00     1180
2012-02-11 00:00:00     1180
2012-02-12 00:00:00     3280
2012-02-13 00:00:00     3280
2012-02-14 00:00:00     4380
2012-02-15 00:00:00     175700
2012-02-16 00:00:00     175700
2012-02-17 00:00:00     175700
2012-02-18 00:00:00     173600
2012-02-19 00:00:00     175100
2012-02-20 00:00:00     174400
2012-02-21 00:00:00     1900
2012-02-22 00:00:00     1900
2012-02-23 00:00:00     3900
2012-02-24 00:00:00     3900
2012-02-25 00:00:00     2400
2012-02-26 00:00:00     2000
2012-02-27 00:00:00     306320
2012-02-28 00:00:00     306320
2012-02-29 00:00:00     304320
2012-03-01 00:00:00     304420
2012-03-02 00:00:00     304420
2012-03-03 00:00:00     304420
2012-03-04 00:00:00     17380
2012-03-05 00:00:00     17380
2012-03-06 00:00:00     21900
2012-03-07 00:00:00     21800
2012-03-08 00:00:00     21900
2012-03-09 00:00:00     21900
2012-03-10 00:00:00     4620
2012-03-11 00:00:00     4620
2012-03-12 00:00:00     100
2012-03-13 00:00:00     1100
2012-03-14 00:00:00     1000
2012-03-15 00:00:00     1000
2012-03-16 00:00:00     1000
2012-03-17 00:00:00     1000
2012-03-18 00:00:00     1000
2012-03-19 00:00:00     0
2012-03-20 00:00:00     0
2012-03-21 00:00:00     0
2012-03-22 00:00:00     0
2012-03-23 00:00:00     0
2012-03-24 00:00:00     0
2012-03-25 00:00:00     0
2012-03-26 00:00:00     0
2012-03-27 00:00:00     0
2012-03-28 00:00:00     0
2012-03-29 00:00:00     0
2012-03-30 00:00:00     0
2012-03-31 00:00:00     0
Hope this helps.
Regards.
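A note on the window clause: the requirement is calendar days, not rows, and that is exactly what RANGE gives you. With ORDER BY on a DATE column, RANGE BETWEEN 5 PRECEDING AND CURRENT ROW means "five days back", while ROWS BETWEEN 5 PRECEDING AND CURRENT ROW would count physical rows. Because the days CTE generates every calendar day, ROWS and RANGE happen to agree in the query above, but RANGE keeps working even over the raw, gappy data. A sketch, assuming the data CTE from the query above is in scope; each sale row then carries the 6-day total ending on its date, with same-date rows treated as peers:
SELECT d,
       n,
       SUM(n) OVER (ORDER BY d
                    RANGE BETWEEN 5 PRECEDING AND CURRENT ROW) AS five_day_total
FROM data;  -- "data" is the sample CTE defined above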

Similar Messages

  • Bucket Scenario using Date intervals

    Hi,
    I have developed a bucket report for open tickets: 0-7 days, 8-30 days, 31-60 days, and >60 days.
    I have a key figure 0CRM_NUMDOC which I have restricted with 0CALDAY using variable 0DAT with offsets 0 and -7; similarly 0DAT with offsets -30 and -8, then -60 and -31, and for >60, 0DAT with offset -60.
    To get the number of open tickets, the restricted key figures (0-7), (8-30), (31-60), and >60 are restricted by statuses E0001 (To be Approved) and E0004 (Authorized).
    At query runtime the user does not have to enter any date, so the system by default takes the current date, subtracts the posting date, and places the ticket in the appropriate bucket.
    Now the requirement is that the user wants to enter a From date and a To date (an interval), and based on that the tickets should fall into the corresponding buckets.
    If anyone has an idea or suggestion on how to create a bucket scenario using date intervals, please advise.
    Thanks
    Amit

    Hi,
    In place of 0DAT, use a customised variable that picks the system date, with the option Interval.
    Create a variable ZDATE of processing type customer exit.
    Write the code to pick sy-datum (the current date):
    WHEN 'ZDATE'.
        IF i_step = 1.
          CLEAR l_s_range.
          l_s_range-low  = sy-datum.
          l_s_range-opt  = 'EQ'.
          l_s_range-sign = 'I'.
          APPEND l_s_range TO e_t_range.
        ENDIF.
    Then replace 0DAT with the ZDATE variable you created.
    Regards
    Kp
    Edited by: prashanthk on Oct 13, 2010 4:27 PM

  • Does a SQL analytic function help to determine continuity in occurrences

    We need to solve this problem in a SQL statement.
    Imagine a table test with three columns:
    create table test (id char(1), begin number, end number);
    and these values
    insert into test values ('a',1,2);
    insert into test values ('a',2,3);
    insert into test values ('a',3,4);
    insert into test values ('a',7,10);
    insert into test values ('a',10,15);
    insert into test values ('b',5,9);
    insert into test values ('b',9,21);
    insert into test values ('c',1,5);
    Our goal is to determine continuity in the number sequence between the begin and end attributes for a same id, and to determine the min and max numbers of these continuity chains.
    The result should be:
    a, 1, 4
    a, 7, 15
    b, 5, 21
    c, 1, 5
    We tested some analytic functions (lag, lead, row_number, min, max, partition by, etc.) looking for a way to identify the row sets that represent a continuity, but we didn't find a way to identify (mark) them so that we could use the min and max functions to extract the extreme values.
    Any idea is really welcome!

    Here is our implementation in a real context for example:
    insert into requesterstage(requesterstage_i, requester_i, t_requesterstage_i, datefrom, dateto )
    With ListToAdd as
    (Select distinct support.requester_i,
    support.datefrom,
    support.dateto
    from support
    where support.datefrom < to_date('01.01.2006', 'dd.mm.yyyy')
    and support.t_relief_i = t_relief_ipar.fgetflextypologyclassitem_i(t_relief_ipar.fismedicalexpenses)
    and not exists
    (select null
    from requesterstage
    where requesterstage.requester_i = support.requester_i
    and support.datefrom < nvl(requesterstage.dateto, support.datefrom + 1)
    and nvl(support.dateto, requesterstage.datefrom + 1) > requesterstage.datefrom)),
    ListToAddAnalyzed_1 as
    (select requester_i,
    datefrom,
    dateto,
    decode(datefrom,lag(dateto) over (partition by requester_i order by datefrom),0,1) data_set_start
    from ListToAdd),
    ListToAddAnalyzed_2 as
    (select requester_i,
    datefrom,
    dateto,
    data_set_start,
    sum(data_set_start) over(order by requester_i, datefrom ) data_set_id
    from ListToAddAnalyzed_1)
    select requesterstage_iseq.nextval,
    requester_i,
    t_requesterstage_ipar.fgetflextypologyclassitem_i(t_requesterstage_ipar.fisbefore2006),
    datefrom,
    decode(sign(nvl(dateto, to_date('01.01.2006', 'dd.mm.yyyy')) -to_date('01.01.2006', 'dd.mm.yyyy')), 0, to_date('01.01.2006', 'dd.mm.yyyy'), -1, dateto, 1, to_date('01.01.2006', 'dd.mm.yyyy'))
    from ( select requester_i
    , min(datefrom) datefrom
    , max(dateto) dateto
    From ListToAddAnalyzed_2
    group by requester_i, data_set_id
    );
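    Applied back to the simple test table from the question, the same start-flag / running-sum pattern can be shown self-contained. A sketch (columns renamed to b/e to sidestep the begin/end keywords); it returns a 1 4, a 7 15, b 5 21, c 1 5:
    WITH test_data AS (
         SELECT 'a' id, 1 b, 2 e FROM dual UNION ALL
         SELECT 'a', 2, 3 FROM dual UNION ALL
         SELECT 'a', 3, 4 FROM dual UNION ALL
         SELECT 'a', 7, 10 FROM dual UNION ALL
         SELECT 'a', 10, 15 FROM dual UNION ALL
         SELECT 'b', 5, 9 FROM dual UNION ALL
         SELECT 'b', 9, 21 FROM dual UNION ALL
         SELECT 'c', 1, 5 FROM dual
    ),
    flagged AS (
         -- 1 marks the start of a new chain; 0 continues the previous interval
         SELECT id, b, e,
                CASE WHEN b = LAG(e) OVER (PARTITION BY id ORDER BY b)
                     THEN 0 ELSE 1 END AS chain_start
         FROM test_data
    ),
    chains AS (
         -- a running sum of the start flags gives every interval a chain id
         SELECT id, b, e,
                SUM(chain_start) OVER (PARTITION BY id ORDER BY b) AS chain_id
         FROM flagged
    )
    SELECT id, MIN(b) AS chain_begin, MAX(e) AS chain_end
    FROM chains
    GROUP BY id, chain_id
    ORDER BY id, chain_begin;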

  • Max date in analytic function

    I have records that have repeating load dates.
    I would like to pick the records that have the maximum load_dt.
    My source data looks like this -
    ( select 60589 as C_number, to_date('01/08/2012','DD/MM/YYYY') as load_dt from dual union all
    select 60768, to_date('01/08/2012','DD/MM/YYYY') from dual union all
    select 60888, to_date('01/08/2012','DD/MM/YYYY') from dual union all
    select 12345, to_date('01/09/2012','DD/MM/YYYY') from dual union all
    select 54321, to_date('01/09/2012','DD/MM/YYYY') from dual union all
    select 66666, to_date('01/10/2012','DD/MM/YYYY') from dual union all
    select 55555, to_date('01/10/2012','DD/MM/YYYY') from dual)
    I would like to pick records with the max load_dt, which means:
    C_number load_dt
    66666 01-Oct-12
    55555 01-Oct-12
    I have written an Oracle analytic query but it's not working the way it should.
    My query looks like this:
    select a.*
    from
    (
    select
    c_number,
    load_dt,
    max(load_dt) over (partition by load_dt) as mx_dt
    from table_name
    ) a
    where
    load_dt = mx_dt;
    It returns all the rows for some reason.
    Any help or guidance is highly appreciated
    PJ

    Without analytic functions:
    with mydata as
    ( select 60589 as C_number, to_date('01/08/2012','DD/MM/YYYY') as load_dt from dual union all
    select 60768, to_date('01/08/2012','DD/MM/YYYY') from dual union all
    select 60888, to_date('01/08/2012','DD/MM/YYYY') from dual union all
    select 12345, to_date('01/09/2012','DD/MM/YYYY') from dual union all
    select 54321, to_date('01/09/2012','DD/MM/YYYY') from dual union all
    select 66666, to_date('01/10/2012','DD/MM/YYYY') from dual union all
    select 55555, to_date('01/10/2012','DD/MM/YYYY') from dual)
    select *
              from mydata
              where load_dt = (select max(load_dt) from mydata);
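    For what it's worth, the analytic attempt fails because PARTITION BY load_dt computes the max within each date, so mx_dt always equals load_dt and the filter keeps every row. An analytic version needs a whole-result-set window instead; a sketch on a subset of the sample rows:
    with mydata as
    ( select 60589 as c_number, to_date('01/08/2012','DD/MM/YYYY') as load_dt from dual union all
      select 66666, to_date('01/10/2012','DD/MM/YYYY') from dual union all
      select 55555, to_date('01/10/2012','DD/MM/YYYY') from dual)
    select c_number, load_dt
    from (
          select c_number,
                 load_dt,
                 max(load_dt) over () as mx_dt   -- empty OVER() spans the whole result set
          from mydata
         )
    where load_dt = mx_dt;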

  • I can't use the sticky note function-Help!

    I have version 9 and I am not able to make sticky notes when reviewing the work that is sent to me. I had version 5, where everything worked well; I recently got a new hard drive and installed Adobe Reader 9. I read that the document has to have permission to make comments, and when I go to preferences it says that this document does not have permission... but when I call the person who sent it (a graphic artist who works in Quark), she said that nothing has changed and that it does have permission. Help! What do I do to be able to use the sticky note function again?
    Thanks,
    Glen
    [email protected]

    I have noticed a similar problem on my friend's computer. The small company he works for has a regular Apple Quad tower running Version Cue 3 with Leopard. The system has Adobe Acrobat 9 Professional installed. When he logs in and creates a PDF review, he is able to see the review window open up in Safari with the commenting features. Granted, since he has Adobe 9 Pro, he doesn't need to "Enable the Commenting" feature.
    So we tested on his laptop, which just had Adobe Reader 9 (AR9) recently installed on a fresh copy of Tiger 10.4.11. He got his invite email, went to the server, and was able to do all the functions except for one: when he went to check out the review and clicked to open the PDF, it came up with a JavaScript error indicating: This Document is Available for Review in Acrobat Reader. You must use Adobe Acrobat .. blah blah
    So I did the next obvious step, went to AR9, and "Enabled the Commenting". I saved it as a different document name and re-imported it into Version Cue through Bridge. I went back in and tried to initiate a new review with the updated file. SAME ERROR POPPED UP. Now, I was able to directly open the PDF document that I exported from AR9 on my friend's laptop (solo Reader), and the document loaded successfully.
    The users all have the proper permissions. It works on a system with Acrobat but not with just the Reader installed. Any ideas out there?

  • ResultSet date into JavaScript Function HELP!!

    I am developing an auction web site for an end-of-year project in college; it is all done except this countdown thingy. I have the 'endDate' of an auction in a table. I can get the date out no problem at all; where I'm stuck is passing the date from the result set into the JavaScript countdown function so it can count down to the end of the auction. Code below:
    <%@ page language = "java" contentType = "text/html"%>
    <%@ page language = "java" import = "com.sybase.jdbcx.SybDriver" %>
    <%@ page language = "java" import = "java.sql.*" %>
    <%@ page language = "java" import = "java.util.*"%>
    <%@ page language = "java" import = "java.text.*"%>
    <% Connection dbConn = null;
    try {
        Class.forName("com.sybase.jdbc2.jdbc.SybDriver");
        dbConn = DriverManager.getConnection("jdbc:sybase:Tds:compserver:5000/syb3044", "syb3044", "syb3044");
        Statement b = dbConn.createStatement();
        ResultSet bid = b.executeQuery("select * from bid where carID=1");
        if (bid.next()) { // position on the matching row; this brace is closed after the HTML below
    %>
    <HTML>
    <HEAD>
    <SCRIPT LANGUAGE="JavaScript">
    mDate = new Date("<%= new SimpleDateFormat("dd MMMM yyyy HH:mm").format(bid.getTimestamp("endDate")) %>") // ResultSet date formatted into the JavaScript Date constructor
    function countdown() {
         var now = new Date()
         var diff = mDate.getTime() - now.getTime()
         if (diff <= 0) {
              document.bid.mseconds.value = 0
              return 0;
         }
         document.bid.days.value = Math.round(diff/(24*60*60*1000))
         document.bid.hours.value = Math.round(diff/(60*60*1000))
         document.bid.minutes.value = Math.round(diff/(60*1000))
         document.bid.seconds.value = Math.round(diff/1000)
         document.bid.mseconds.value = diff
         var id = setTimeout("countdown()", 0)
    }
    </SCRIPT>
    </HEAD>
    <BODY onLoad="countdown()">
    <BR>
    <form name="bid" method="post" action="">
    <p>Timeleft</p>
    <TABLE BORDER=0>
    <TD width="79">Days: </TD>
    <TD width="81">
    <INPUT TYPE="text" NAME="days" SIZE=15></TD> <TR>
    <TD width="79">Hours: </TD>
    <TD width="81">
    <INPUT TYPE="text" NAME="hours" SIZE=15></TD> <TR>
    <TD width="79">Minutes:</TD>
    <TD width="81">
    <INPUT TYPE="text" NAME="minutes" SIZE=15></TD> <TR>
    <TD width="79">Seconds: </TD>
    <TD width="81">
    <INPUT TYPE="text" NAME="seconds" SIZE=15></TD> <TR>
    <TD width="79">Milliseconds:</TD>
    <TD width="81">
    <INPUT TYPE="text" NAME="mseconds" SIZE=15></TD> <TR>
    </TABLE>
    </form>
    </BODY>
    </HTML>
    <%
        }   // end if (bid.next())
        bid.close();
    } catch (SQLException sqle) {
        out.println(sqle.getMessage());
    } catch (ClassNotFoundException cnfe) {
        out.println(cnfe.getMessage());
    } catch (Exception e) {
        out.println(e.getMessage());
    } finally {
        try {
            if (dbConn != null)
                dbConn.close();
        } catch (SQLException sqle) {
            out.println(sqle.getMessage());
        }
    }
    %>
    ...........................................................SQLTable.......................................................................................................................
    create TABLE bid (
    bidID     integer primary key,
    carID     numeric NOT NULL,
    seller    varchar(50) NOT NULL,
    username  varchar(50),
    startDate varchar(25) not null,
    endDate   datetime not null
    )
    PS: I tried putting endDate as varchar, but got the same result.
    Please note I am a computer science student
    Regards Don Colvin

    org.apache.jasper.JasperException: Unable to compile class for JSP
    An error occurred at line: 19 in the jsp file: /Test.jsp
    Generated servlet error:
    [javac] Compiling 1 source file
    C:\Tomcat\work\Catalina\localhost\Mess\org\apache\jsp\Test_jsp.java:55: cannot resolve symbol
    symbol : class SimpleDateFormat
    location: class org.apache.jsp.Test_jsp
    out.print( new SimpleDateFormat("dd MMMM YYYY hh:MM").format(bid.getDate("endDate")) );
    ^
    1 error
    It's throwing back this error; any suggestions?
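    For what it's worth, the compile error simply means SimpleDateFormat was never imported: it lives in java.text, which the page directives above now pull in (import = "java.text.*"). With that import, the expression in the error message is the right idea for feeding the JavaScript Date constructor; just note the pattern should be 'dd MMMM yyyy HH:mm' (yyyy for year, mm for minutes) rather than 'dd MMMM YYYY hh:MM', since MM means month in SimpleDateFormat patterns.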

  • Understanding sum() over(order by) analytic function

    Could you please explain the computation of the Having_order_by column values for the query below?
    I understand that the No_Partition column is computed over the entire result set.
    select level
    ,sum(level) over(order by level) Having_order_by
    ,sum(level) over() No_Partition
    from dual
    connect by level < 6

    Hi,
    ActiveSomeTimes wrote:
    Could you please explain the computation of the Having_order_by column values for the query below?
    I understand that the No_Partition column is computed over the entire result set.
    select level
    ,sum(level) over(order by level) Having_order_by
    ,sum(level) over() No_Partition
    from dual
    connect by level < 6
    When you have an ORDER BY clause, the function only operates on a window, that is, a subset of the result set, relative to the current row.
    When you say "ORDER BY LEVEL", it will only operate on LEVELs less than or equal to the current LEVEL, so on
    LEVEL = 1, the analytic function will only look at LEVEL <= 1, that is, just 1; on
    LEVEL = 2, the analytic function will only look at LEVEL <= 2, that is, 1 and 2; on
    LEVEL = 3, the analytic function will only look at LEVEL <= 3, that is, 1, 2 and 3; and so on, up to
    LEVEL = 5, the analytic function will only look at LEVEL <= 5, that is, 1, 2, 3, 4 and 5.
    In the function call without the ORDER BY clause, the function looks at the entire result set, regardless of what value LEVEL has on the current row.
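    Concretely, the query above returns the running sum 1, 3, 6, 10, 15 alongside the fixed grand total:
    LEVEL   HAVING_ORDER_BY   NO_PARTITION
        1                 1             15
        2                 3             15
        3                 6             15
        4                10             15
        5                15             15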

  • Analytical function help needed

    Hi, I'm using Oracle 10g.
    CREATE TABLE test100(
      hcim    VARCHAR2(10 BYTE),
      bcim    VARCHAR2(10 BYTE),
      num     VARCHAR2(6 BYTE),
      mindate VARCHAR2(10 BYTE));
    insert into test100 values ('03217979','03236915','76120F','10/1/2006');
    insert into test100 values ('03217979','03236916','76121F','10/1/2006');
    insert into test100 values ('03217979','03236917','76122F','10/1/2006');
    insert into test100 values ('03217979','03236918','76123F',null);
    insert into test100 values ('03217979','03236919','76124F','11/1/2009');
    SELECT hcim
         , bcim
         , num
         , mindate
         , Max(TO_DATE(mindate,'MM/DD/YYYY')) OVER (PARTITION BY hcim)  AS mindate1
    FROM   test100
    ;
    Output:
    03217979     03236915     76120F     10/1/2006     11/1/2009
    03217979     03236916     76121F     10/1/2006     11/1/2009
    03217979     03236919     76124F     11/1/2009     11/1/2009
    03217979     03236918     76123F                  11/1/2009
    03217979     03236917     76122F     10/1/2006     11/1/2009
    How can I show null in the mindate1 column, since one of the date values in mindate is null? Only if there are no nulls do I need to show max(mindate) in mindate1.
    Thanks in advance

    Hi,
    Thanks for posting the CREATE TABLE and INSERT statements; that's very helpful.
    Do you mean you want mindate1 to be NULL on every row for that hcim, because at least one row in that hcim had a NULL mindate? It would help if you posted the exact results you want. (I was typing this message before your message, clarifying this point, was posted.) It would also help to have a couple of different hcims in the sample data, at least one with a NULL mindate, and another where mindate is never NULL.
    I think you want something like this:
    SELECT hcim
         , bcim
         , num
         , mindate
         , FIRST_VALUE ( TO_DATE (mindate, 'MM/DD/YYYY')
                        ) OVER ( PARTITION BY  hcim
                            ORDER BY         TO_DATE (mindate, 'MM/DD/YYYY')     DESC     NULLS FIRST
                     )  AS mindate1
    FROM   test100
    ;
    Output:
    HCIM       BCIM       NUM    MINDATE    MINDATE1
    03217979   03236916   76121F 10/1/2006
    03217979   03236915   76120F 10/1/2006
    03217979   03236917   76122F 10/1/2006
    03217979   03236919   76124F 11/1/2009
    03217979   03236918   76123F
    Storing dates in a VARCHAR2 column is a really bad idea. Why not use a DATE column? Coding will be simpler, errors will be fewer, and execution will be faster.
    Edited by: Frank Kulash on Nov 11, 2011 4:53 PM

  • Using Data Security under Functional Developer / User Manager

    Has anyone successfully implemented any Data Security policies in Oracle Apps? I would like to get details on this.
    Thanks in advance.

    Actually the other scripts on the http://www.petefinnigan.com/tools.htm site seem to do the trick where you can check for who has DBA and who has SELECT ANY TABLE.
    The next question is: what other privs should I be concerned with? I just want to make sure I am checking for all possibilities of access to a particular object.

  • Analytic Function help

    CREATE TABLE t1 (r1_id NUMBER, r2 NUMBER, r3 VARCHAR2(10)); -- column types inferred from the inserts below
    insert into t1 values(63,800,'1/1/2005')
    insert into t1 values(64,841,'1/1/2005')
    insert into t1 values(64,862,'1/1/2006')
    insert into t1 values(64,879,'4/1/2007')
    insert into t1 values(64,952,'4/1/2008')
    insert into t1 values(64,980,'2/1/2009')
    insert into t1 values(64,1010,'2/1/2010')
    insert into t1 values(64,1041,'2/1/2011')
    insert into t1 values(66,841,'1/1/2005')
    insert into t1 values(66,862,'1/1/2006')
    insert into t1 values(66,879,'4/1/2007')
    insert into t1 values(66,952,'4/1/2008')
    insert into t1 values(66,980,'2/1/2009')
    insert into t1 values(66,1010,'2/1/2010')
    insert into t1 values(66,1042,'2/1/2011')
    insert into t1 values(67,841,'1/1/2005')
    insert into t1 values(67,862,'1/1/2006')
    insert into t1 values(67,879,'4/1/2007')
    insert into t1 values(67,952,'4/1/2008')
    insert into t1 values(67,980,'2/1/2009')
    insert into t1 values(67,1009,'2/1/2010')
    insert into t1 values(67,1035,'2/1/2011')
    insert into t1 values(112,3660,'1/1/2005')
    insert into t1 values(112,3806,'1/1/2006')
    insert into t1 values(112,4500,'8/1/2006')
    insert into t1 values(112,7280,'3/1/2007')
    insert into t1 values(112,8600,'2/1/2008')
    insert into t1 values(112,8818,'5/1/2008')
    insert into t1 values(112,9170,'2/1/2009')
    insert into t1 values(112,9489,'2/1/2010')
    insert into t1 values(112,9778,'2/1/2011')
    insert into t1 values(537,7000,'11/27/2005')
    insert into t1 values(537,7000,'12/1/2005')
    SELECT DISTINCT R1_ID, MAX(R2) OVER(PARTITION BY R1_ID), MAX(R3) OVER(PARTITION BY R1_ID)
    FROM T1
    ORDER BY R1_ID;
    I want the max of R2 and R3 in one SQL statement. Is that possible?
    (I can write a subquery with R3 to get the result, but I'd rather not.)

    with temp as(
    SELECT R1_ID,R2,R3,
    row_number() over(partition by R1_ID
    order by R2 desc,R3 desc) rn
    FROM T1)
    select * from temp where rn=1 order by R1_ID;
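    Note that the ROW_NUMBER approach returns both values from a single row (the one with the highest R2, ties broken by R3). If independent per-key maxima are acceptable, a plain aggregate is enough. A minimal sketch:
    SELECT r1_id,
           MAX(r2) AS max_r2,
           MAX(r3) AS max_r3
    FROM   t1
    GROUP  BY r1_id
    ORDER  BY r1_id;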

  • Help with analytical function

    I successfully use the following analytical function to sum all net_movement of a position (key for a position: bp_id, prtfl_num, instrmnt_id, cost_prc_crncy) from first occurrence until current row:
    SELECT SUM (net_movement) OVER (PARTITION BY bp_id, prtfl_num, instrmnt_id, cost_prc_crncy ORDER BY TRUNC (val_dt) RANGE BETWEEN UNBOUNDED PRECEDING AND 0 FOLLOWING) holding,
    What I need is another column to sum the net_movement of a position but only for the current date; all my approaches fail:
    - adding the date (val_dt) to the PARTITION BY clause, to sum only values with the same position and date:
    SELECT SUM (net_movement) OVER (PARTITION BY val_dt, bp_id, prtfl_num, instrmnt_id, cost_prc_crncy ORDER BY TRUNC (val_dt) RANGE BETWEEN UNBOUNDED PRECEDING AND 0 FOLLOWING) today_net_movement
    - taking the holding for the last date and subtracting it from the current holding afterwards:
    SELECT SUM (net_movement) OVER (PARTITION BY bp_id, prtfl_num, instrmnt_id, cost_prc_crncy ORDER BY TRUNC (val_dt) RANGE BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING) last_holding,
    - using LAG on the analytic function which calculates holding fails too.
    I also want to avoid creating a table which stores the last holding.
    Does anyone see where I made a mistake, or know an alternative to get this value?
    It would help me much!
    Thanks in advance!

    Thank you,
    but I already tried that, and it returns strange values which are certainly not the correct ones.
    It is always the same value for each row, if it's not 0, and a very high one (500500 for example), even if the sum of all net_movement for that date is 0 (and the statement for holding returns 0 too).
    I also tried with trunc(val_dt,'DDD'), with the same result (without trunc it is the same issue).
    Please help if you can; thanks in advance!
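    For reference, a window that sums only the current day's movements usually just needs the day in the PARTITION BY and no windowing clause at all. A sketch (the table name movements is made up, since the post never names the table):
    SELECT val_dt,
           bp_id, prtfl_num, instrmnt_id, cost_prc_crncy,
           net_movement,
           SUM(net_movement) OVER (
                PARTITION BY bp_id, prtfl_num, instrmnt_id, cost_prc_crncy,
                             TRUNC(val_dt)   -- one partition per position per day
           ) AS today_net_movement
    FROM movements;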

  • How to use analytic function with aggregate function

    Hello,
    Can we use an analytic function and an aggregate function in the same query? I tried to find an example on the net but did not find any example of how these two functions work together. Please share any link or example with me.
    Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM

    select
    t1.region_name,
    t2.division_name,
    t3.month,
    t3.amount mthly_sales,
    max(t3.amount) over (partition by t1.region_name, t2.division_name)
    max_mthly_sales
    from
    region t1,
    division t2,
    sales t3
    where
    t1.region_id=t3.region_id
    and
    t2.division_id=t3.division_id
    and
    t3.year=2004
    Source:http://www.orafusion.com/art_anlytc.htm
    Here the max (aggregate) function with an over partition by (analytic) clause is in the same query. So it means we can use aggregate and analytic functions in the same query, and more than one analytic function in the same query as well.
    Hth
    Girish Sharma
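    Strictly speaking, the MAX above appears only in its analytic form. The two kinds can also be genuinely nested in one statement once a GROUP BY is present: the aggregate is computed first, and the analytic function then runs over the aggregated rows. A sketch against a hypothetical sales(region_id, amount) table:
    SELECT region_id,
           SUM(amount) AS region_sales,                          -- aggregate: one row per region
           RANK() OVER (ORDER BY SUM(amount) DESC) AS sales_rank -- analytic over the aggregated rows
    FROM   sales
    GROUP  BY region_id;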

  • Moving files by using date

    I have Oracle installed on Linux. I want to move my trace files to a backup directory. My trace files get generated irregularly, so I need to move them by the date they were generated, keeping the most recently generated files in the present directory.
    So I need a command to move files by date.
    Can anyone help me?
    Thanks in advance.

    There are several ways of doing this.
    If you want to do it all yourself, use the 'find' command. There's the 'ctime' test (find based on inode change time) and 'mtime' (find based on modification time).
    You might want to look into logrotate (available on all Unix and Linux platforms).

  • Analytical Functions: Parent/child bucketing (distribution)

    OK, here is an interesting problem for hardcore users of the analytical functions. I have parent and child tables. A parent record can have N number of child records. The child table has a FK to the parent table, so all child records have a parent. I need to distribute my parents into an N number of buckets based on the number of children they have. The number of buckets will be variable and user-defined. Each bucket should have a similar number of child records, but the number of parents is not important. Here is a simple output example with 6 parents using 3 buckets of 10 children each:
    Parent   Number_Of_Children           Bucket
    1        5                            1
    2        5                            1
    3        6                            2
    4        3                            2
    5        1                            2
    6        4                            3
    At first this looks like an easy job for the NTILE analytic function to bucket the child records. Another solution could be to use the ROW_NUMBER analytic function to number every child record and then use the WIDTH_BUCKET function to bucket the parents based on their children's ROW_NUMBER values. But there is an additional requirement that makes these two approaches unusable: when doing the bucket distribution I need to guarantee that all children from the same parent are bucketed into the same bucket. This obviously makes it much more difficult. The distribution will leave non-equiheight buckets, since there is no guarantee I can fill all my buckets with the same number of child records; small differences are acceptable, however. The distribution can be random and doesn't need to be the best possible distribution (the one that makes the buckets as equiheight as possible). Finally, this needs to be a SQL query, not PL/SQL. Thanks

    Thanks Frank, that was very helpful. I did some research on bin packing and indeed there doesn't seem to be a SQL solution for this kind of problem. Luckily for me, some of my requirements are slightly different from the bin packing problem, so I was able to come up with something that works. My requirement is to bucket the data by the number of bins rather than by the size of the buckets (unlike most bin packing problems, where the size of the bin is fixed). So the bin size is based on the total number of children divided by the number of desired buckets. The buckets don't have to be (and probably won't be) the same size, given that parents can have N number of children. Furthermore, we are not looking to fill the buckets as much as possible; small differences are OK. Based on the above I came up with the following solution.
    1) I calculate the size of each bin based on the total children and the number of bins.
    2) I then adjust this size by finding the top N parents by child count, where N = number of bins. This is done because I am going to bucket parents by simple order (Parent ID and Child ID), which means I will most likely end up with parents that have children in two buckets, which is obviously not desired. My approach is to move these parents to the last bucket, hence I have to increase the bucket size. In the worst-case scenario I will have the parents with the most children overflowing into the next bucket for all buckets, so I find out which are the top N parents and increase my total child universe by that much.
    3) I then bucket the data using the WIDTH_BUCKET function and ROW_NUMBER by simple order (Parent ID and Child ID).
    4) I then find out which parents have children in two buckets.
    5) And finally I move the "broken" parents to the last bucket.
    This approach seems to work well. I ran it for ~130,000 parents and 500,000 children, and the query returns the bucketed data in 4 seconds. The number of buckets does not affect the performance of the query.
    WITH total_bins AS
    (
         -- Load the number of bins
         SELECT 5 AS BIN_COUNT
           FROM DUAL
    ),
    count_childs AS
    (
         -- Calculate the sum of child records for the top N parents (based on the number of bins) and the total number of child records
         SELECT SUM(CASE WHEN ranked.RANK <= total_bins.BIN_COUNT THEN ranked.CHILD_COUNT ELSE NULL END) AS TOP_N_CHILDS,
                SUM(CHILD_COUNT) AS TOTAL_CHILDS
           FROM
                (
                -- Rank records by their child count
                SELECT CHILD_COUNT, ROW_NUMBER() OVER (ORDER BY CHILD_COUNT DESC) AS RANK FROM
                       (
                       -- Count all child records for each parent
                       SELECT PARENT_ID, COUNT(1) AS CHILD_COUNT FROM PARENT_CHILD GROUP BY PARENT_ID ORDER BY 2 DESC
                       )
                ) ranked
                CROSS JOIN
                total_bins
    ),
    bins AS
    (
         -- Calculate each bin's size based on the number of children and bins.
         -- top_n_childs is used to enlarge the bins in case children from one
         -- parent fall into 2 different bins
         SELECT CEIL((TOTAL_CHILDS + TOP_N_CHILDS) / total_bins.BIN_COUNT) AS SIZE_EACH,
                CEIL((TOTAL_CHILDS + TOP_N_CHILDS) / total_bins.BIN_COUNT) * total_bins.BIN_COUNT AS SIZE_ALL
           FROM count_childs
                CROSS JOIN
                total_bins
    ),
    bucket_data AS
    (
         -- Bucket the data using the WIDTH_BUCKET function; most likely some parents' child records will end up in 2 buckets
         SELECT PARENT_ID,
                CHILD_ID,
                WIDTH_BUCKET(ORDER_NUMBER, 1, (SELECT bins.SIZE_ALL FROM bins), (SELECT total_bins.BIN_COUNT FROM total_bins)) AS BUCKET
           FROM
                (
                SELECT PARENT_ID, CHILD_ID, ROW_NUMBER() OVER (ORDER BY PARENT_ID, CHILD_ID) ORDER_NUMBER FROM PARENT_CHILD
                )
    ),
    broken_parents AS
    (
         -- This finds out which parents have their child records in more than 1 bucket so they can be fixed
         SELECT PARENT_ID FROM bucket_data GROUP BY PARENT_ID HAVING COUNT(DISTINCT BUCKET) > 1
    ),
    fixed_data AS
    (
         -- Outer-join bucket_data to broken_parents and move the broken parents to the last bucket
         SELECT bucket_data.PARENT_ID,
                bucket_data.CHILD_ID,
                CASE WHEN broken_parents.PARENT_ID IS NOT NULL THEN (SELECT total_bins.BIN_COUNT FROM total_bins) ELSE bucket_data.BUCKET END AS BUCKET
           FROM bucket_data,
                broken_parents
          WHERE bucket_data.PARENT_ID = broken_parents.PARENT_ID (+)
    )
    SELECT PARENT_ID,
           CHILD_ID,
           BUCKET
      FROM fixed_data;
    -- Check number of childs per bucket
    -- SELECT BUCKET, COUNT(1) FROM fixed_data GROUP BY BUCKET ORDER BY 1
    -- Check all parents have their childs on the same bucket
    -- SELECT PARENT_ID FROM fixed_data GROUP BY PARENT_ID HAVING COUNT(DISTINCT BUCKET) > 1

  • EVALUATE in OBIEE with Analytic function LAST_VALUE

    Hi,
    I'm trying to use EVALUATE with the analytic function LAST_VALUE, but it is giving me the error below:
    [nQSError: 17001] Oracle Error code: 30483, message: ORA-30483: window functions are not allowed here at OCI call OCIStmtExecute. [nQSError: 17010] SQL statement preparation failed. (HY000)
    Thanks
    Kumar.

    Hi Kumar,
    The ORA error tells me that this is something raised by the Oracle database, not the BI Server. In this case, the BI Server might have fired an incorrect query at the DB, and you might want to check what's wrong with it too.
    The LAST_VALUE is an analytic function which works over a set/partition of records. Request you to refer to the semantics at http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions073.htm and see if it is violating any rules here. You may want to post the physical sql here too to check.
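    For reference, ORA-30483 generally means a window function ended up in a clause (WHERE, GROUP BY, and so on) where Oracle does not allow it, so checking the generated physical SQL is exactly the right move. A minimal LAST_VALUE example in plain Oracle SQL, assuming the SCOTT demo schema:
    SELECT empno, deptno, sal,
           LAST_VALUE(sal) OVER (
                PARTITION BY deptno
                ORDER BY sal
                ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
           ) AS top_sal_in_dept
    FROM scott.emp;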
    Hope this helps.
    Thank you,
    Dhar
