Analytical Function - COUNT issue

Hello All,
DB: Oracle 11g running on a Linux x86 64-bit platform.
I'm curious to know why the two sets of queries below produce different results.
Q1
WITH TEMP AS
(
SELECT 'ONE' VAL, 5 PAR FROM DUAL UNION ALL
SELECT 'ONE' VAL, 7 PAR FROM DUAL UNION ALL
SELECT 'FIVE' VAL, 17 PAR FROM DUAL UNION ALL
SELECT 'FOUR' VAL, 0 PAR FROM DUAL UNION ALL
SELECT 'THREE' VAL, 78 PAR FROM DUAL UNION ALL
SELECT 'THREE' VAL, 34 PAR FROM DUAL UNION ALL
SELECT 'THREE' VAL, -4 PAR FROM DUAL UNION ALL
SELECT 'TWO' VAL, 6 PAR FROM DUAL UNION ALL
SELECT 'TWO' VAL, 12 PAR FROM DUAL
)
SELECT DISTINCT SUM(PAR) OVER(PARTITION BY VAL) CNT, VAL FROM TEMP;
Q2
WITH TEMP AS
(
SELECT 'ONE' VAL, 5 PAR FROM DUAL UNION ALL
SELECT 'ONE' VAL, 7 PAR FROM DUAL UNION ALL
SELECT 'FIVE' VAL, 17 PAR FROM DUAL UNION ALL
SELECT 'FOUR' VAL, 0 PAR FROM DUAL UNION ALL
SELECT 'THREE' VAL, 78 PAR FROM DUAL UNION ALL
SELECT 'THREE' VAL, 34 PAR FROM DUAL UNION ALL
SELECT 'THREE' VAL, -4 PAR FROM DUAL UNION ALL
SELECT 'TWO' VAL, 6 PAR FROM DUAL UNION ALL
SELECT 'TWO' VAL, 12 PAR FROM DUAL
)
SELECT DISTINCT SUM(PAR) OVER(PARTITION BY VAL ORDER BY PAR) CNT, VAL FROM TEMP;
The only difference between the two queries is that Q2 has the extra ORDER BY.
For Q2, I was assuming the partitions are ordered by PAR and the SUM is applied over the whole partition. Obviously, the results prove my assumption is wrong.
So Q2 is just as good as SELECT * FROM TEMP;

Hi,
You're right about "ORDER BY par": it means the SUM will run from the lowest value of par up to and including the current row (with an ORDER BY but no windowing clause, the default window is RANGE UNBOUNDED PRECEDING).
I think you're overlooking how that interacts with "PARTITION BY val": every distinct value of val is a world unto itself, not affected by any rows with a different value of val, and within each partition the ORDER BY turns the SUM into a running total. Since par is different on every row within a partition, every row gets a different running SUM, so DISTINCT removes nothing and the result has exactly as many rows as the table itself. That is why Q2 looks just like SELECT * FROM temp.
Try it without the PARTITION BY clause.
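To see the running-total effect side by side, here is a small sketch reusing part of the same TEMP data (the column aliases are mine):
WITH TEMP AS
(
SELECT 'THREE' VAL, 78 PAR FROM DUAL UNION ALL
SELECT 'THREE' VAL, 34 PAR FROM DUAL UNION ALL
SELECT 'THREE' VAL, -4 PAR FROM DUAL
)
SELECT VAL, PAR,
       SUM(PAR) OVER (PARTITION BY VAL)              WHOLE_PARTITION, -- 108 on every row
       SUM(PAR) OVER (PARTITION BY VAL ORDER BY PAR) RUNNING_SUM      -- -4, then 30, then 108
FROM TEMP;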

Similar Messages

  • Analytical function count(*) with order by Rows unbounded preceding

    Hi
    I have a query about the analytic function COUNT(*) with ORDER BY (col) ROWS UNBOUNDED PRECEDING.
    If I remove the ORDER BY ... ROWS UNBOUNDED PRECEDING, it behaves differently.
    Can anybody tell me what the impact of ORDER BY ... ROWS UNBOUNDED PRECEDING is with the COUNT(*) analytic function?
    Please help me; thanks in advance.

    Sweety,
    When an ORDER BY is given but no windowing clause, the window of the analytic function ends at the CURRENT ROW by default. So if you give ROWS UNBOUNDED PRECEDING, it basically means that you want to compute COUNT(*) from the beginning of the window to the current row. In other words, ROWS UNBOUNDED PRECEDING fixes the start of the window at the first row, while the end of the window is implicitly the current row.
    Where the window of a result set begins depends on how you have defined the PARTITION BY clause in the analytic function.
    If you specify ROWS 2 PRECEDING, then it will calculate COUNT(*) from 2 rows prior to the current row. It is a physical offset.
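    For instance, on a hypothetical table t with a numeric column x, the two physical windows compare like this (ROWS 2 PRECEDING is shorthand for ROWS BETWEEN 2 PRECEDING AND CURRENT ROW):
    SELECT x,
           COUNT(*) OVER (ORDER BY x ROWS UNBOUNDED PRECEDING) cnt_from_start, -- first row up to the current row
           COUNT(*) OVER (ORDER BY x ROWS 2 PRECEDING)         cnt_last_3      -- at most 3 rows: 2 prior + current
    FROM t;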
    Regards,
    Message was edited by:
    henryswift

  • Analytic function to count rows based on Special criteria

    Hi
    I have the following query with an analytic function, but the last column, COUNT, gives wrong results.
    Please help me achieve the required result; I need to change the way I select the last column.
    1) I am getting the output ordered by the b.sequence_no column. This is a must.
    2) COUNT column:
    I don't want the total count based on the THOR column, hence there is no point in grouping by that column.
    The actual requirement for COUNT is:
    2a) If, in the next row, the THOR and LOC combination changes to a new value, then COUNT = 1
    (in other words, if it is different from the following row).
    2b) If the values of THOR and LOC repeat in the following row, then the count should be the total of all those same-value rows until the rows become different.
    (In case 2b, where the rows are the same, I also only want to show those rows once. This is shown in MY REQUIRED OUTPUT.)
    My present query:
    select    r.name REGION ,
              p.name PT,
              do.name DELOFF,
              ro.name ROUTE,
    decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name)
               THOR,
             l.name LOC ,
              b.sequence_no SEQ,
               CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
                OVER (order by b.sequence_no)
                or th.thorfare_name = LEAD (th.thorfare_name)
                OVER (order by b.sequence_no)
                THEN COUNT(b.sequence_no) OVER (partition by r.name, th.thorfare_name, l.name order BY b.sequence_no)
              ELSE 1
              END COUNT
    from   t_regions r,t_post_towns p,t_delivery_offices do, t_routes ro, t_counties c,t_head_offices ho,
    t_buildings b,t_thoroughfares th,t_localities l
    where   th.thorfare_id = b.thorfare_id
    and    nvl(b.invalid,'N')='N'
    and    b.route_id=ro.route_id(+)
    and    b.locality_id =l.locality_id(+)
    and    ro.delivery_office_id=do.delivery_office_id(+)
    and    do.post_town_id = p.post_town_id(+)
    and    p.ho_id=ho.ho_id(+)
    and     ho.county_id = c.county_id(+)
    and     c.region_id = r.region_id(+)
    and    r.name='NAAS'
    and    do.DELIVERY_OFFICE_id= &&DELIVERY_OFFICE_id
    and    ro.route_id=3405
    group by r.name,p.name,do.name,ro.name,th.thorfare_name,l.name,b.sequence_no
    ORDER BY ro.name, b.sequence_no;
    My incorrect output [PART OF DATA]:
    >
    REGION     PT DELOFF ROUTE     THOR LOC SEQ COUNT
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 1 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 2 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 4 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 5 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 THEGROVE CEL 2 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 7 3
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 8 4
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 9 5
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 10 6
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 11 7
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 12 8
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 15 2
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 19 3
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 24 4
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 29 5
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 34 6
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 39 7
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 42 2
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 43 2
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 44 3
    My required output [PART OF DATA]. Please compare with the above:
    >
    REGION     PT DELOFF ROUTE     THOR LOC COUNT
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 THEGROVE CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 6
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 7
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 2
    NOTE: A count of 1 is coming through correctly.
    But where there are identical rows and I want the total count across them, I am not getting it.
    Please help.
    Thanks
    Edited by: Krithi on 04-Nov-2010 05:28

    Nicosa wrote:
    Hi,
    Can you give us some sample data (CREATE TABLE + INSERT statements) to play with?
    Considering your output, I'm not even sure you need an analytic count.
    Yes, sure.
    I am describing the query again here with 3 tables now to make this easier to understand.
    Given below are the CREATE TABLE and INSERT statements for these 3 tables.
    These tables are BUILDINGSV, THORV and LOCV.
    CREATE TABLE BUILDINGSV
    (
      BUILDING_ID                  NUMBER(10)       NOT NULL,
      INVALID                      VARCHAR2(1 BYTE),
      ROUTE_ID                     NUMBER(10),
      LOCALITY_ID                  NUMBER(10),
      SEQUENCE_NO                  NUMBER(4),
      THORFARE_ID                  NUMBER(10) NOT NULL
    );
    CREATE TABLE THORV
    (
      THORFARE_ID            NUMBER(10)             NOT NULL,
      THORFARE_NAME          VARCHAR2(40 BYTE)      NOT NULL
    );
    CREATE TABLE LOCV
    (
      LOCALITY_ID            NUMBER(10)             NOT NULL,
      NAME                   VARCHAR2(40 BYTE)      NOT NULL
    );
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002372, 'N', 3405, 37382613, 5, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002363, 'N', 3405, 37382613, 57, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002362, 'N', 3405, 37382613, 56, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002360, 'N', 3405, 37382613, 52, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002358, 'N', 3405, 37382613, 1, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002240, 'N', 3405, 37382613, 6, 9002284);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002229, 'N', 3405, 37382613, 66, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002228, 'N', 3405, 37382613, 65, 35291872);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002226, 'N', 3405, 37382613, 62, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002222, 'N', 3405, 37382613, 43, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002217, 'N', 3405, 37382613, 125, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002221, 'N', 3405, 37382613, 58, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002214, 'N', 3405, 37382613, 128, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33363182, 'N', 3405, 37382613, 114, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33363185, 'N', 3405, 37382613, 115, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002371, 'N', 3405, 37382613, 2, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003329, 'N', 3405, 37382613, 415, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002359, 'N', 3405, 37382613, 15, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002224, 'N', 3405, 37382613, 61, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003318, 'N', 3405, 37382613, 411, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003326, 'N', 3405, 37382613, 412, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003327, 'N', 3405, 37382613, 413, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003328, 'N', 3405, 37382613, 414, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003330, 'N', 3405, 37382613, 416, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003331, 'N', 3405, 37382613, 417, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003332, 'N', 3405, 37382613, 410, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27004795, 'N', 3405, 37382613, 514, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27004807, 'N', 3405, 37382613, 515, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002227, 'N', 3405, 37382613, 64, 35291872);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33230805, 'N', 3405, 37382613, 44, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231027, 'N', 3405, 37382613, 7, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231058, 'N', 3405, 37382613, 9, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231078, 'N', 3405, 37382613, 10, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231087, 'N', 3405, 37382613, 11, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231093, 'N', 3405, 37382613, 12, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33229890, 'N', 3405, 37382613, 55, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561996, 'N', 3405, 34224751, 544, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561997, 'N', 3405, 34224751, 543, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561998, 'N', 3405, 34224751, 555, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562000, 'N', 3405, 34224751, 541, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562001, 'N', 3405, 34224751, 538, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562028, 'N', 3405, 35417256, 525, 0);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562031, 'N', 3405, 35417256, 518, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562032, 'N', 3405, 35417256, 519, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562033, 'N', 3405, 35417256, 523, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561939, 'N', 3405, 34224751, 551, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561940, 'N', 3405, 34224751, 552, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561941, 'N', 3405, 34224751, 553, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561942, 'N', 3405, 35417256, 536, 0);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561943, 'N', 3405, 35417256, 537, 0);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561970, 'N', 3405, 35417256, 522, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561972, 'N', 3405, 35417256, 527, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561974, 'N', 3405, 35417256, 530, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561975, 'N', 3405, 35417256, 531, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561980, 'N', 3405, 34224751, 575, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561981, 'N', 3405, 34224751, 574, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561983, 'N', 3405, 34224751, 571, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561984, 'N', 3405, 34224751, 570, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561985, 'N', 3405, 34224751, 568, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561986, 'N', 3405, 34224751, 567, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561987, 'N', 3405, 34224751, 566, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561989, 'N', 3405, 34224751, 563, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561990, 'N', 3405, 34224751, 562, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561991, 'N', 3405, 34224751, 560, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561992, 'N', 3405, 34224751, 559, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561993, 'N', 3405, 34224751, 558, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561994, 'N', 3405, 34224751, 548, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561995, 'N', 3405, 34224751, 546, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562160, 'N', 3405, 37382613, 139, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562161, 'N', 3405, 37382613, 140, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562162, 'N', 3405, 37382613, 141, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562163, 'N', 3405, 37382613, 142, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562164, 'N', 3405, 37382613, 143, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562165, 'N', 3405, 37382613, 145, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562166, 'N', 3405, 37382613, 100, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562167, 'N', 3405, 37382613, 102, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562171, 'N', 3405, 37382613, 107, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562172, 'N', 3405, 37382613, 108, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562174, 'N', 3405, 37382613, 110, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562175, 'N', 3405, 37382613, 111, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562176, 'N', 3405, 37382613, 112, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562177, 'N', 3405, 37382613, 113, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562182, 'N', 3405, 37382613, 123, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562183, 'N', 3405, 37382613, 121, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562184, 'N', 3405, 37382613, 120, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562185, 'N', 3405, 37382613, 118, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562186, 'N', 3405, 37382613, 117, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562187, 'N', 3405, 37382613, 116, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562189, 'N', 3405, 37382613, 95, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562190, 'N', 3405, 37382613, 94, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562213, 'N', 3405, 37382613, 89, 35291872);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562240, 'N', 3405, 35417256, 516, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329559, 'N', 3405, 35329152, 443, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329560, 'N', 3405, 35329152, 444, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329562, 'N', 3405, 35329152, 446, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329109, 'N', 3405, 35329152, 433, 35329181);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329169, 'N', 3405, 35329152, 434, 35329181);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329557, 'N', 3405, 35329152, 441, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329558, 'N', 3405, 35329152, 442, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329191, 'N', 3405, 35329152, 436, 35329181);
    COMMIT;
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (0, 'OSIUNKNOWN');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (9002284, 'THE GROVE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (9002364, 'DUBLIN ROAD');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (9002375, 'NEWTOWN ROAD');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35291872, 'HAZELHATCH ROAD');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35291878, 'SIMMONSTOWN PARK');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35291883, 'PRIMROSE HILL');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329181, 'THE COPSE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329213, 'THE COURT');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329529, 'THE CRESCENT');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329551, 'THE LAWNS');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329580, 'THE DRIVE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35417271, 'TEMPLEMILLS COTTAGES');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35417360, 'CHELMSFORD');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (36500023, 'THE CLOSE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (36500101, 'THE GREEN');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37375569, 'THE DOWNS');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37375595, 'THE PARK');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37375754, 'THE AVENUE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37375781, 'THE VIEW');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37376046, 'THE CRESCENT');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37376048, 'THE GLADE');
    COMMIT;
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (34224751, 'SIMMONSTOWN');
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (35417256, 'TEMPLEMILLS');
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (35329152, 'TEMPLE MANOR');
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (37382613, 'CELBRIDGE');
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (37375570, 'SAINT WOLSTAN''S ABBEY');
    COMMIT;
    ------------------------------------------------------------------------------
    Now the query with the wrong result:
    select decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name)
               THOR,
                l.name LOC,
                b.sequence_no SEQ,
               CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
                OVER (order by b.sequence_no)
                or th.thorfare_name = LEAD (th.thorfare_name)
                OVER (order by b.sequence_no)
                THEN COUNT(b.sequence_no) OVER (partition by th.thorfare_name, l.name order BY b.sequence_no)
              ELSE 1
              END COUNT
    from BUILDINGSV b, THORV th, LOCV l
    where   th.thorfare_id = b.thorfare_id
    and    nvl(b.invalid,'N')='N'
    and    b.route_id=3405
    and    b.locality_id =l.locality_id(+)
    order by b.sequence_no;
    The query result - WRONG (only first few lines):
    THOR                        LOC        SEQ    COUNT
    DUBLIN ROAD     CELBRIDGE    1     1
    NEWTOWN ROAD     CELBRIDGE        2     1
    NEWTOWN ROAD     CELBRIDGE        5     2
    THE GROVE     CELBRIDGE        6     1
    NEWTOWN ROAD     CELBRIDGE        7     3
    NEWTOWN ROAD     CELBRIDGE        9     4
    NEWTOWN ROAD     CELBRIDGE       10     5
    NEWTOWN ROAD     CELBRIDGE       11     6
    NEWTOWN ROAD     CELBRIDGE       12     7
    DUBLIN ROAD     CELBRIDGE       15     1
    PRIMROSE HILL     CELBRIDGE       43     1
    PRIMROSE HILL     CELBRIDGE       44     2
    DUBLIN ROAD     CELBRIDGE       52     3
    DUBLIN ROAD     CELBRIDGE       55     4
    DUBLIN ROAD     CELBRIDGE       56     5
    DUBLIN ROAD     CELBRIDGE       57     6
    DUBLIN ROAD     CELBRIDGE       58     7
    PRIMROSE HILL     CELBRIDGE       61     3
    PRIMROSE HILL     CELBRIDGE       62     4
    HAZELHATCH ROAD     CELBRIDGE       64     1
    HAZELHATCH ROAD     CELBRIDGE       65     2
    The query result - EXPECTED (only first few lines):
    THOR                     LOC     COUNT
    DUBLIN ROAD     CELBRIDGE      1
    NEWTOWN ROAD     CELBRIDGE      2
    THE GROVE     CELBRIDGE      1
    NEWTOWN ROAD     CELBRIDGE      5
    DUBLIN ROAD     CELBRIDGE      1
    PRIMROSE HILL     CELBRIDGE      2
    DUBLIN ROAD     CELBRIDGE      5
    PRIMROSE HILL     CELBRIDGE      2
    HAZELHATCH ROAD     CELBRIDGE      2
    Please note, in the expected result, I only need 1 row but need to show the total count of rows until the names change.
    So the issues are:
    1) The COUNT column values are wrong in my query.
    2) I don't want to repeat the same rows (please see the EXPECTED output and compare it against the original).
    3) I want the output exactly as in the EXPECTED OUTPUT, as I don't want to group by THOR name. E.g. I don't want one count for all DUBLIN ROAD rows; I want to examine the next row: if the THOR/LOC combination is different in the next row then COUNT = 1, else COUNT = the number of rows with that THOR/LOC combination until the combination changes. So where there are multiple rows with the same values, I need to show them as 1 row with the total count.
    Let me explain this in more detail.
    I only need 1 row per run of identical THOR/LOC names, with the count shown against that 1 row (i.e. COUNT = how many rows have the same THOR/LOC combination until the combination changes value).
    Then repeat the process until all rows are finished.
    If there are no multiple rows with the same THOR/LOC, i.e. the following row is a different THOR/LOC combination, then the count for that row is 1.
    Hope this is clear.
    Is this doable?
    Thanks in advance.
    Edited by: Krithi on 04-Nov-2010 07:45
    Edited by: Krithi on 04-Nov-2010 07:45
    Edited by: Krithi on 04-Nov-2010 08:31
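    A common direction for this kind of requirement is the "start of group" (gaps-and-islands) technique: flag each row where the THOR/LOC combination changes, turn the running sum of those flags into a group id, then aggregate one row per group. A minimal sketch against the BUILDINGSV/THORV/LOCV tables above (untested; the NVL guards for NULL names are my assumption):
    WITH ordered AS (
      SELECT decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name) thor,
             l.name loc, b.sequence_no seq,
             CASE WHEN NVL(th.thorfare_name,'~') = NVL(LAG(th.thorfare_name) OVER (ORDER BY b.sequence_no),'~')
                   AND NVL(l.name,'~')           = NVL(LAG(l.name)           OVER (ORDER BY b.sequence_no),'~')
                  THEN 0 ELSE 1 END grp_start               -- 1 marks the first row of each THOR/LOC run
        FROM buildingsv b, thorv th, locv l
       WHERE th.thorfare_id = b.thorfare_id
         AND NVL(b.invalid,'N') = 'N'
         AND b.route_id = 3405
         AND b.locality_id = l.locality_id(+)
    ),
    grouped AS (
      SELECT o.*, SUM(grp_start) OVER (ORDER BY seq) grp_id -- running sum of start flags = group id
        FROM ordered o
    )
    SELECT MIN(thor) thor, MIN(loc) loc, COUNT(*) cnt       -- one row per run, with the run's size
      FROM grouped
     GROUP BY grp_id
     ORDER BY MIN(seq);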

  • COUNT(DISTINCT) WITH ORDER BY in an analytic function

    -- I create a table with three fields: Name, Amount, and a Trans_Date.
    CREATE TABLE TEST
    (
    NAME VARCHAR2(19) NULL,
    AMOUNT VARCHAR2(8) NULL,
    TRANS_DATE DATE NULL
    );
    -- I insert a few rows into my table:
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '21', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '68', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/06/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '77', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '221', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '73', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    commit;
    /* I want to retrieve the distinct count of AMOUNT for every row with an analytic COUNT(DISTINCT AMOUNT), partitioned by name and ordered by trans_date, where each row only considers the last four trans_dates (i.e., for the row "Anna 110 6/5/2005 8:00:00.000 PM," I only want to look at the dates from 6/2/2005 to 6/5/2005 and count how many different amounts there are for Anna). Note: I cannot use the DISTINCT keyword in this query because it doesn't work with the ORDER BY. */
    select NAME, AMOUNT, TRANS_DATE, COUNT(/*DISTINCT*/ AMOUNT) over ( partition by NAME
    order by TRANS_DATE range between numtodsinterval(3,'day') preceding and current row ) as COUNT_AMOUNT
    from TEST t;
    This is the results I get if I just count all the AMOUNT without using distinct:
    NAME     AMOUNT     TRANS_DATE     COUNT_AMOUNT
    Anna 110 6/1/2005 8:00:00.000 PM     2
    Anna 20 6/1/2005 8:00:00.000 PM     2
    Anna 110     6/2/2005 8:00:00.000 PM     3
    Anna 21     6/3/2005 8:00:00.000 PM     4
    Anna 68     6/4/2005 8:00:00.000 PM     5
    Anna 110     6/5/2005 8:00:00.000 PM     4
    Anna 20     6/6/2005 8:00:00.000 PM     4
    Bill 43     6/1/2005 8:00:00.000 PM     1
    Bill 77     6/2/2005 8:00:00.000 PM     2
    Bill 221     6/3/2005 8:00:00.000 PM     3
    Bill 43     6/4/2005 8:00:00.000 PM     4
    Bill 73     6/5/2005 8:00:00.000 PM     4
    The COUNT_DISTINCT_AMOUNT is the desired output:
    NAME     AMOUNT     TRANS_DATE     COUNT_DISTINCT_AMOUNT
    Anna     110     6/1/2005 8:00:00.000 PM     1
    Anna     20     6/1/2005 8:00:00.000 PM     2
    Anna     110     6/2/2005 8:00:00.000 PM     2
    Anna     21     6/3/2005 8:00:00.000 PM     3
    Anna     68     6/4/2005 8:00:00.000 PM     4
    Anna     110     6/5/2005 8:00:00.000 PM     3
    Anna     20     6/6/2005 8:00:00.000 PM     4
    Bill     43     6/1/2005 8:00:00.000 PM     1
    Bill     77     6/2/2005 8:00:00.000 PM     2
    Bill     221     6/3/2005 8:00:00.000 PM     3
    Bill     43     6/4/2005 8:00:00.000 PM     3
    Bill     73     6/5/2005 8:00:00.000 PM     4
    Thanks in advance.

    You can try to write your own UDAG (user-defined aggregate).
    Here is a fake example, just to show how it "could" work. I am using only 1, 2, 4, 8, 16, 32 as potential values.
    create or replace type CountDistinctType as object
    (
       bitor_number number,
       static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType)
         return number,
       member function ODCIAggregateIterate(self IN OUT CountDistinctType,
         value IN number) return number,
       member function ODCIAggregateTerminate(self IN CountDistinctType,
         returnValue OUT number, flags IN number) return number,
       member function ODCIAggregateMerge(self IN OUT CountDistinctType,
         ctx2 IN CountDistinctType) return number
    );
    create or replace type body CountDistinctType is 
    static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType) 
    return number is 
    begin
       sctx := CountDistinctType('');
       return ODCIConst.Success;
    end;
    member function ODCIAggregateIterate(self IN OUT CountDistinctType, value IN number)
      return number is
      begin
        if (self.bitor_number is null) then
          self.bitor_number := value;
        else
          self.bitor_number := self.bitor_number+value-bitand(self.bitor_number,value);
        end if;
        return ODCIConst.Success;
      end;
      member function ODCIAggregateTerminate(self IN CountDistinctType, returnValue OUT
      number, flags IN number) return number is
      begin
        returnValue := 0;
        for i in 0..log(2,self.bitor_number) loop
          if (bitand(power(2,i),self.bitor_number)!=0) then
            returnValue := returnValue+1;
          end if;
        end loop;
        return ODCIConst.Success;
      end;
      member function ODCIAggregateMerge(self IN OUT CountDistinctType, ctx2 IN
      CountDistinctType) return number is
      begin
        return ODCIConst.Success;
      end;
      end;
    CREATE or REPLACE FUNCTION CountDistinct (n number) RETURN number 
    PARALLEL_ENABLE AGGREGATE USING CountDistinctType;
    drop table t;
    create table t as select rownum r, power(2,trunc(dbms_random.value(0,6))) p from all_objects;
    SQL> select r,p,countdistinct(p) over (order by r) d from t where rownum<10 order by r;
             R          P          D
             1          4          1
             2          1          2
             3          8          3
             4         32          4
             5          1          4
             6         16          5
             7         16          5
             8          4          5
             9          4          5
    Buy some good book if you want to start writing your own "distinct" algorithm.
    Message was edited by:
    Laurent Schneider
    A simpler but memory-hungry algorithm would use a PL/SQL table in a UDAG and do the COUNT(DISTINCT) over that table to return the value.
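    For the simpler cumulative case (a running distinct count with no sliding RANGE window), there is also a pure-SQL workaround: mark the first occurrence of each amount per name, then take a running SUM of those marks. A sketch against the TEST table above (it does not handle the 3-day sliding window, which is why the UDAG route was suggested):
    SELECT name, amount, trans_date,
           SUM(first_seen) OVER (PARTITION BY name ORDER BY trans_date, amount
                                 ROWS UNBOUNDED PRECEDING) run_distinct -- running COUNT(DISTINCT amount)
      FROM (SELECT t.*,
                   CASE WHEN ROW_NUMBER() OVER (PARTITION BY name, amount
                                                ORDER BY trans_date) = 1
                        THEN 1 ELSE 0 END first_seen -- 1 only the first time this amount appears
              FROM test t);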

  • Using analytical function - value with highest count

    Hi
    i have this table below
    CREATE TABLE table1
    ( cust_name VARCHAR2 (10)
    , txn_id NUMBER
    , txn_date DATE
    , country VARCHAR2 (10)
    , flag number
    , CONSTRAINT key1 UNIQUE (cust_name, txn_id)
    );
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9870,TO_DATE ('15-Jan-2011', 'DD-Mon-YYYY'), 'Iran', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9871,TO_DATE ('16-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9872,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9873,TO_DATE ('18-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9874,TO_DATE ('19-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9875,TO_DATE ('20-Jan-2011', 'DD-Mon-YYYY'), 'Russia', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9877,TO_DATE ('22-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9878,TO_DATE ('26-Jan-2011', 'DD-Mon-YYYY'), 'Korea', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9811,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9854,TO_DATE ('13-Jan-2011', 'DD-Mon-YYYY'), 'Taiwan', 0);
    The requirement is to create an additional column in the resultset with country name where the customer has done the maximum number of transactions
    (with transaction flag 1). In case we have two or more countries tied with the same count, then we need to select the country (among the tied ones)
    where the customer has done the last transaction (with transaction flag 1)
    e.g. The count is 2 for both 'China' and 'Japan' for transaction flag 1 ,and the latest transaction is for 'Japan'. So the new column should contain 'Japan'
    CUST_NAME TXN_ID TXN_DATE COUNTRY FLAG country_1
    Peter 9811 17-JAN-11 China 0 Japan
    Peter 9854 13-JAN-11 Taiwan 0 Japan
    Peter 9870 15-JAN-11 Iran 1 Japan
    Peter 9871 16-JAN-11 China 1 Japan
    Peter 9872 17-JAN-11 China 1 Japan
    Peter 9873 18-JAN-11 Japan 1 Japan
    Peter 9874 19-JAN-11 Japan 1 Japan
    Peter 9875 20-JAN-11 Russia 1 Japan
    Peter 9877 22-JAN-11 China 0 Japan
    Peter 9878 26-JAN-11 Korea 0 Japan
    Please let me know how to accomplish this using analytical functions
    Thanks
    -Learnsequel

    Does this work? (I have not spent much time checking it.)
    WITH ana AS (
    SELECT cust_name, txn_id, txn_date, country, flag,
            Sum (flag)
                OVER (PARTITION BY cust_name, country)      n_trx,
            Max (CASE WHEN flag = 1 THEN txn_date END)
                OVER (PARTITION BY cust_name, country)      l_trx
      FROM cnt_trx
    )
    SELECT cust_name, txn_id, txn_date, country, flag,
            First_Value (country) OVER (PARTITION BY cust_name ORDER BY n_trx DESC, l_trx DESC) top_cnt
      FROM ana
    CUST_NAME      TXN_ID TXN_DATE  COUNTRY          FLAG TOP_CNT
    Fred             9875 20-JAN-11 Russia              1 Russia
    Fred             9874 19-JAN-11 Japan               1 Russia
    Peter            9873 18-JAN-11 Japan               1 Japan
    Peter            9874 19-JAN-11 Japan               1 Japan
    Peter            9872 17-JAN-11 China               1 Japan
    Peter            9871 16-JAN-11 China               1 Japan
    Peter            9811 17-JAN-11 China               0 Japan
    Peter            9877 22-JAN-11 China               0 Japan
    Peter            9875 20-JAN-11 Russia              1 Japan
    Peter            9870 15-JAN-11 Iran                1 Japan
    Peter            9878 26-JAN-11 Korea               0 Japan
    Peter            9854 13-JAN-11 Taiwan              0 Japan
    12 rows selected.
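    An alternative for the final step, instead of FIRST_VALUE, is the KEEP (DENSE_RANK LAST) form of an aggregate used analytically; a sketch over the same ana subquery (untested; note l_trx is NULL for countries with no flag-1 transactions):
    SELECT cust_name, txn_id, txn_date, country, flag,
           MAX(country) KEEP (DENSE_RANK LAST ORDER BY n_trx, l_trx)
               OVER (PARTITION BY cust_name) top_cnt -- country with the highest count; latest flag-1 txn breaks ties
      FROM ana;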

  • How to ignore nulls in analytic functions ( row_number() and count())


    I am attaching test data; can anyone help me please?
    Thanks in advance.
    CREATE TABLE TEMP_TABLE
    (
    ACCTNUM NUMBER,
    L_DATE TIMESTAMP(3),
    CODE VARCHAR2(35 BYTE),
    VENDOR VARCHAR2(35 BYTE)
    );
    insert into temp_table values (1,sysdate+1/60,'bso','v1');
    insert into temp_table values (1,sysdate+2/60,'bsof','v1');
    insert into temp_table values (1,sysdate+3/60,'bsof','v2');
    insert into temp_table values (1,sysdate+4/60,'','v1');
    I am executing this query:
    SELECT acctnum,l_date,vendor,code_1,
           CASE
             WHEN code = 'bsof'
              AND COUNT (DISTINCT code) OVER (PARTITION BY acctnum, vendor) > 1
              AND row_number () OVER (PARTITION BY acctnum, vendor ORDER BY vendor, l_date) != 1
             THEN 'yes'
           ELSE 'no' END result
      FROM  (select  acctnum,l_date,vendor, code code_1, case when code IN ('bso', 'bsof') then code
      else null end code   from  TEMP_TABLE
    ORDER BY acctnum ,l_date);
    My result:
    1    3/23/2011 5:24:34.000 PM    v1    bso    no
    1    3/23/2011 5:48:36.000 PM    v1    bsof    yes
    1    3/23/2011 6:36:41.000 PM    v1    bsof    yes
    1    3/24/2011 11:55:53.000 AM    v1        no
    1    3/23/2011 6:12:38.000 PM    v2    bsof    no
    I need to eliminate the nulls in the top query, not in the inner query (without using a WHERE condition in the inner query).
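    One trick that may help here: an analytic COUNT(expr), unlike COUNT(*), only counts rows where expr is NOT NULL, so a running COUNT(code) can act as a NULL-ignoring ROW_NUMBER. A sketch over the same inner query (the non_null_rn alias is mine; it assumes l_date values are distinct within each acctnum/vendor):
    SELECT acctnum, l_date, vendor, code_1,
           CASE WHEN code IS NOT NULL
                THEN COUNT(code) OVER (PARTITION BY acctnum, vendor
                                       ORDER BY l_date) -- numbers only the non-NULL codes
           END non_null_rn
      FROM (select acctnum, l_date, vendor, code code_1,
                   case when code IN ('bso', 'bsof') then code end code
              from temp_table);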

  • Oracle Analytic Function Issue

    Hi, I have created a simple table containing 3 columns and 3 records.
    Insert into MYTABLE (ID, AMOUNT, RESULT) Values (1, 1, 1);
    Insert into MYTABLE (ID, AMOUNT, RESULT) Values (2, 4, 1);
    Insert into MYTABLE (ID, AMOUNT, RESULT) Values (3, 7, 0);
    COMMIT;
    I can SUM the AMOUNT using an analytic function:
    SELECT ID, AMOUNT, RESULT, SUM(AMOUNT) OVER() as TOTAL
    FROM MYTABLE;
    ID      AMOUNT      RESULT      TOTAL
    1       1           1           12
    2       4           1           12
    3       7           0           12
    What I want to be able to do is sum the AMOUNTs by RESULT, in this case 0 and 1.
    To get the following result, how should I rewrite the query?
    ID      AMOUNT      RESULT      TOTAL      RESULT_0_TOTAL      RESULT_1_TOTAL
    1       1           1           12         7                   5
    2       4           1           12         7                   5
    3       7           0           12         7                   5

    SELECT ID, AMOUNT, RESULT
    , SUM(AMOUNT) OVER() as TOTAL
    , SUM(CASE WHEN RESULT = 0 THEN AMOUNT ELSE 0 END) OVER() as RESULT_0_TOTAL
    , SUM(CASE WHEN RESULT = 1 THEN AMOUNT ELSE 0 END) OVER() as RESULT_1_TOTAL
    FROM MYTABLE;

  • Analytical functions approach for this scenario?

    Here is my data:
    SQL*Plus: Release 11.2.0.2.0 Production on Tue Feb 26 17:03:17 2013
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select * from batch_parameters;
           LOW         HI MIN_ORDERS MAX_ORDERS
            51        100          6          8
           121        200          1          5
           201       1000          1          1
    SQL> select * from orders;
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4905        154
            4899        143
            4925        123
            4900        110
            4936        106
            4901        103
            4911        101
            4902         91
            4903         91
            4887         90
            4904         85
            4926         81
            4930         75
            4934         73
            4935         71
            4906         68
            4907         66
            4896         57
            4909         57
            4908         56
            4894         55
            4910         51
            4912         49
            4914         49
            4915         48
            4893         48
            4916         48
            4913         48
            2894         47
            4917         47
            4920         46
            4918         46
            4919         46
            4886         45
            2882         45
            2876         44
            2901         44
            4921         44
            4891         43
            4922         43
            4923         42
            4884         41
            4924         40
            4927         39
            4895         38
            2853         38
            4890         37
            2852         37
            4929         37
            2885         37
            4931         37
            4928         37
            2850         36
            4932         36
            4897         36
            2905         36
            4933         36
            2843         36
            2833         35
            4937         35
            2880         34
            4938         34
            2836         34
            2872         34
            2841         33
            4889         33
            2865         31
            2889         30
            2813         29
            2902         28
            2818         28
            2820         27
            2839         27
            2884         27
            4892         27
            2827         26
            2837         22
            2883         20
            2866         18
            2849         17
            2857         17
            2871         17
            4898         16
            2840         15
            4874         13
            2856          8
            2846          7
            2847          7
            2870          7
            4885          6
            1938          6
            2893          6
            4888          2
            4880          1
            4875          1
            4881          1
            4883          1
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4879          1
            2899          1
            2898          1
            4882          1
            4877          1
            4876          1
            2891          1
            2890          1
            2892          1
            4878          1
    107 rows selected.
    SQL>
    The batch_parameters:
    hi - the high count of lines in the batch.
    low - the low count of lines in the batch.
    min_orders - the minimum number of orders in the batch.
    max_orders - the maximum number of orders in the batch.
    The issue is to create optimally sized batches for us to pick the orders. Usually you have to stick within a given low-hi count, but there is a leeway of around, let's say, 5 percent on the batch size (for the number of lines in the batch).
    But for the number of orders in a batch, the leeway is zero.
    So I have to assign these orders into the optimal mix of batches. For every run, if I don't find the mix I am looking for, the last batch could be as small as one line and one order. But every order HAS to be batched in that run. No exceptions.
    I have a procedure that does 'sort of' this, but it leaves non-optimal orders alone. There is a potential for orders not getting batched because they didn't fall into the optimal mix, potentially missing our required dates. (I can write another procedure to clean up afterwards.)
    I was thinking (maybe just a general direction would be enough), with what analytic functions can do these days, that somebody could come up with the SQL that gets us the batch number (think of it as a sequence starting at 1).
    Also, the batch_parameters limits are not hard and fast. Those numbers can change, but they give you a general idea.
    Any ideas?

    Ok, sorry about that; those were just guesstimates. I ran the program and here are the results.
    SQL> SELECT SUM(line_count) no_of_lines_in_batch,
      2         COUNT(*) no_of_orders_in_batch,
      3         batch_no
      4    FROM orders o
      5   GROUP BY o.batch_no;
    NO_OF_LINES_IN_BATCH NO_OF_ORDERS_IN_BATCH   BATCH_NO
                     199                     4     241140
                      99                     6     241143
                     199                     5     241150
                     197                     6     241156
                     196                     5     241148
                     199                     6     241152
                     164                     6     241160
                     216                     2     241128
                     194                     6     241159
                     297                     2     241123
                     199                     3     241124
                     192                     2     241132
                     199                     6     241136
                     199                     5     241142
                      94                     7     241161
                     199                     6     241129
                     154                     2     241135
                     193                     6     241154
                     199                     5     241133
                     199                     4     241138
                     199                     6     241146
                     191                     6     241158
    22 rows selected.
    SQL> select * from orders;
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4905        154     241123
            4899        143     241123
            4925        123     241124
            4900        110     241128
            4936        106     241128
            4901        103     241129
            4911        101     241132
            4903         91     241132
            4902         91     241129
            4887         90     241133
            4904         85     241133
            4926         81     241135
            4930         75     241124
            4934         73     241135
            4935         71     241136
            4906         68     241136
            4907         66     241138
            4896         57     241136
            4909         57     241138
            4908         56     241138
            4894         55     241140
            4910         51     241140
            4914         49     241142
            4912         49     241140
            4915         48     241142
            4916         48     241142
            4913         48     241142
            4893         48     241143
            2894         47     241143
            4917         47     241146
            4919         46     241146
            4918         46     241146
            4920         46     241146
            2882         45     241148
            4886         45     241148
            2901         44     241148
            2876         44     241148
            4921         44     241140
            4891         43     241150
            4922         43     241150
            4923         42     241150
            4884         41     241150
            4924         40     241152
            4927         39     241152
            2853         38     241152
            4895         38     241152
            4931         37     241154
            2885         37     241152
            4929         37     241154
            4890         37     241154
            4928         37     241154
            2852         37     241154
            2843         36     241156
            2850         36     241156
            4932         36     241156
            4897         36     241156
            4933         36     241158
            2905         36     241156
            2833         35     241158
            4937         35     241158
            4938         34     241158
            2880         34     241159
            2872         34     241159
            2836         34     241158
            2841         33     241159
            4889         33     241159
            2865         31     241159
            2889         30     241150
            2813         29     241159
            2902         28     241160
            2818         28     241160
            4892         27     241160
            2884         27     241160
            2820         27     241160
            2839         27     241160
            2827         26     241161
            2837         22     241133
            2883         20     241138
            2866         18     241148
            2849         17     241161
            2871         17     241156
            2857         17     241158
            4898         16     241161
            2840         15     241161
            4874         13     241146
            2856          8     241154
            2847          7     241161
            2846          7     241161
            2870          7     241152
            2893          6     241142
            1938          6     241161
            4888          2     241129
            2890          1     241133
            2899          1     241136
            4877          1     241143
            4875          1     241143
            2892          1     241136
            4878          1     241146
            4876          1     241136
            2891          1     241133
            4880          1     241129
            4883          1     241143
            4879          1     241143
            2898          1     241129
            4882          1     241129
            4881          1     241124
106 rows selected.
As you can see, my code is a little buggy in that it may not have followed the strict batch_parameters. But this is acceptable as long as it is in the general area.

  • Yet another question on analytical functions

    Hi,
    A "simple" issue with date overlapping: I need to display a "y" or "n" where a start time/end time overlaps with another row for the same employee.
    For this, I used analytical functions to find out which rows were overlapping. This is what I did:
    create table test (id number, emp_id number, date_worked date, start_time date, end_time date);
    insert into test values (1, 333, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 08:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS'));
    insert into test values (2, 333, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 12:00:00', 'YYYY-MM-DD HH24:MI:SS'));
    insert into test values (3, 444, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 08:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 09:00:00', 'YYYY-MM-DD HH24:MI:SS'));
    insert into test values (4, 333, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 11:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 13:00:00', 'YYYY-MM-DD HH24:MI:SS'));
    SQL> select * from test;
            ID     EMP_ID DATE_WORKED         START_TIME          END_TIME
             1        333 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00 
             2        333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00  --> conflict
             3        444 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 09:00:00
         4        333 01-01-2008 00:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00  --> conflict
Here I see that employee 333 is scheduled from 10:00 to 12:00, but is also scheduled from 11:00 to 13:00. This is a conflict.
    To find this conflict, I did this (please correct me if this is incorrect):
    SQL> select id, emp_id, date_worked, start_time, end_time, next_date
      2  from (
      3        select lead( start_time, 1 ) over ( partition by emp_id order by start_time ) next_date,
      4               start_time, end_time, emp_id, date_worked, id
      5        from   test
      6       )
      7  where  next_date < end_time;
            ID     EMP_ID DATE_WORKED         START_TIME          END_TIME            NEXT_DATE
         2        333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 01-01-2008 11:00:00
So far, so good. Where I'm stuck is that I need to display the conflicts in this way:
            ID     EMP_ID DATE_WORKED         START_TIME          END_TIME            OVERLAPPED
             1        333 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00
             2        333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 yes
             3        444 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 09:00:00
         4        333 01-01-2008 00:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00 yes
Is there a way I can achieve this?
    I tried doing a nested query with a count(*) but no success...
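For what it's worth, one way that produces exactly the output above doesn't need analytics at all: flag a row 'yes' when any other row for the same employee strictly overlaps it (treating the intervals as half-open, so 8-10 and 10-12 do not conflict). A sketch against the test table above:
SELECT t.id, t.emp_id, t.date_worked, t.start_time, t.end_time,
       CASE WHEN EXISTS (SELECT 1
                         FROM   test x
                         WHERE  x.emp_id     = t.emp_id      -- same employee
                         AND    x.id        <> t.id          -- a different row
                         AND    x.start_time < t.end_time    -- it starts before this one ends
                         AND    x.end_time   > t.start_time) -- and ends after this one starts
            THEN 'yes'
       END AS overlapped
FROM   test t
ORDER  BY t.id;
With the four rows above, only ids 2 and 4 come back as 'yes'.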

    I found an issue. Say we insert this row.
    insert into test values (5, 333, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 07:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 08:00:00', 'YYYY-MM-DD HH24:MI:SS'));
    SQL> select * from test order by start_time;
            ID     EMP_ID DATE_WORKED         START_TIME          END_TIME
             5        333 01-01-2008 00:00:00 01-01-2008 07:00:00 01-01-2008 08:00:00
             1        333 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00
             3        444 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 09:00:00
             2        333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 --> conflict
         4        333 01-01-2008 00:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00 --> conflict
If I run the SQL suggested by Nicloei, I get:
    SQL> select id,
      2         emp_id,
      3         date_worked,
      4         start_time,
      5         end_time,
      6           Case When end_time > prev_date Or next_date > End_time Then 'Yes' Else 'No' End over_lap
      7      from (
      8            select lead( start_time, 1 ) over ( partition by emp_id order by start_time ) next_date,
      9                   lag( start_time, 1 ) over ( partition by emp_id order by start_time ) prev_date,
    10                   start_time, end_time, emp_id, date_worked, id
    11            from   test
    12           );
            ID     EMP_ID DATE_WORKED         START_TIME          END_TIME            OVER_LAP
             5        333 01-01-2008 00:00:00 01-01-2008 07:00:00 01-01-2008 08:00:00 No
             1        333 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00 Yes
             2        333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 Yes
             4        333 01-01-2008 00:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00 Yes
         3        444 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 09:00:00 No
It's basically saying that there's an overlap between id 1 and id 2 (8-10 and 10-12), which is incorrect. I ran the inner query by itself:
    SQL> select lead( start_time, 1 ) over ( partition by emp_id order by start_time ) next_date,
      2                   lag( start_time, 1 ) over ( partition by emp_id order by start_time ) prev_date,
      3                   start_time, end_time, emp_id, date_worked, id
      4            from   test;
    NEXT_DATE           PREV_DATE           START_TIME          END_TIME                EMP_ID DATE_WORKED                 ID
    01-01-2008 08:00:00                     01-01-2008 07:00:00 01-01-2008 08:00:00        333 01-01-2008 00:00:00          5
    01-01-2008 10:00:00 01-01-2008 07:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00        333 01-01-2008 00:00:00          1
    01-01-2008 11:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00        333 01-01-2008 00:00:00          2
                        01-01-2008 10:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00        333 01-01-2008 00:00:00          4
                                        01-01-2008 08:00:00 01-01-2008 09:00:00        444 01-01-2008 00:00:00          3
With this data, I can't seem to find a conditional test that would print my overlap column with "yes" or "no" accordingly.
Am I missing something?
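Hi, The lead/lag approach only looks one row away, which is why id 1 ends up flagged. An analytic way to look at all the neighbours at once is to compare each row against the latest end time among all earlier rows and the earliest start time among all later rows (within the employee's partition, ordered by start_time). A sketch, assuming no two rows of one employee share the same start_time:
SELECT id, emp_id, date_worked, start_time, end_time,
       CASE WHEN MAX(end_time) OVER (PARTITION BY emp_id ORDER BY start_time
                                     ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
                 > start_time      -- an earlier row runs past my start
              OR MIN(start_time) OVER (PARTITION BY emp_id ORDER BY start_time
                                       ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING)
                 < end_time        -- a later row starts before my end
            THEN 'yes'
       END AS overlapped
FROM   test
ORDER  BY id;
Against the five rows above this flags only ids 2 and 4.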

  • Aggregation of analytic functions not allowed

    Hi all, I have a calculated field called Calculation1 with the following calculation:
    AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report #7 COMPL".Resource Name )
    The result of this calculation is correct, but is repeated for all the rows I have in the dataset.
    Group Name      Resourse name    Calculation1
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
5112 rows
I tried to create another calculation, AVG(Calculation1), in order to have only ONE value for the couple (Group Name, Resource Name), but I get the error: Aggregation of analytic functions not allowed.
I also saw inside the "Edit worksheet" panel that Calculation1 *is not represented* with the "Sigma" symbol (as, for example, a simple AVG(field_1) is), and the SQL code has no GROUP BY Group Name, Resource Name......
    I'd like to see ONLY one row as:
    Group Name      Resourse name    Calculation1
SH Group            Mr. A            10
....that is, grouped by Group Name, Resource Name.
Does anyone know how I can achieve this result, or any workaround?
    Thanks in advance
    Alex

Hi Rod, unfortunately I can't use a plain AVG(Resolution_time) because my dataset is quite strange... let me explain better.
I start from this situation:
    !http://www.freeimagehosting.net/uploads/6c7bba26bd.jpg!
    There are 3 calculated fields:
    RANK is the first calculated field:
    ROW_NUMBER() OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name,"Tickets Report Assigned To & Created By COMPL".Incident Id  ORDER BY  "Tickets Report Assigned To & Created By COMPL".Select Flag )
    RT Calc is the 2nd calculation:
    CASE WHEN RANK = 1 THEN Resolution_time END
    and Calculation2 is the 3rd calculation:
    AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY  RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name )
As you can see, the initial dataset has duplicated incident ids, so a simple AVG(Resolution Time) also counts all the duplicates.
I used the rank (based on the field "flag") to take, for each ticket, ONLY one "resolution time" value (in my case I need the resolution time where the rank = 1).
So, with Calculation2 I calculated the right AVG(Resolution time) for each couple (Group Name, Resource Name), but as you can see... this result is duplicated for each incident_id....
What I need instead is to see the AVG(Resolution time) *once* for each couple (Group Name, Resource Name).
In other words, I need to calculate the AVG(Resolution time) considering only the values written inside the RT Calc field (where they are NOT NULL; that way the ticket total is not 14, but 9).
    I tried to aggregate again using AVG(Calculation2)...but I had the error "Aggregation of analytic functions not allowed"...
    Do you know a way to fix this problem ?
    Thanks
    Alex
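Hi Alex, In plain SQL the usual escape from "Aggregation of analytic functions not allowed" is to compute the analytic part in an inline view and aggregate in the outer query; whether your tool lets you model this is another matter, but the shape would be roughly as below (the table and column names are guesses from your description):
SELECT group_name,
       resource_name,
       AVG(rt) AS avg_resolution_time          -- one row per (group, resource)
FROM  (SELECT group_name,
              resource_name,
              CASE WHEN ROW_NUMBER()
                          OVER (PARTITION BY group_name, resource_name, incident_id
                                ORDER BY select_flag) = 1
                   THEN resolution_time
              END AS rt                        -- keep one resolution time per ticket
       FROM   tickets)
GROUP BY group_name, resource_name;
AVG ignores NULLs, so the duplicated incident rows drop out of the average, which matches your "not 14, but 9" count.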

  • Replacing Oracle's FIRST_VALUE and LAST_VALUE analytical functions.

    Hi,
    I am using OBI 10.1.3.2.1 where, I guess, EVALUATE is not available. I would like to know alternatives, esp. to replace Oracle's FIRST_VALUE and LAST_VALUE analytical functions.
I want to track some changes. For example, there are four methods of travel - Air, Train, Road and Sea. I would like to know a traveler's first method of travel and last method of travel in a year. If the two match, then a certain action is taken. If they do not match, then another action is taken.
    I tried as under.
1. Get a Sequence ID for each travel within a year per traveler as Sequence_Id.
2. Get the Lowest Sequence ID (which should be 1) for travels within a year per traveler as Sequence_LId.
3. Get the Highest Sequence ID (which could be 1 or greater than 1) for travels within a year per traveler as Sequence_HId.
    4. If Sequence ID = Lowest Sequence ID then display the method of travel as First Method of Travel.
    5. If Sequence ID = Highest Sequence ID then display the method of travel as Latest Method of Travel.
    6. If First Method of Travel = Latest Method of Travel then display Yes/No as Match.
The issue is that cells could be blank in First Method of Travel and Last Method of Travel unless the traveler traveled only once in a year.
    Using Oracle's FIRST_VALUE and LAST_VALUE analytical functions, I can get a result like
    Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
    ABC | 01/01/2000 | 04/04/2000 | Road | Road | Air | No
    ABC | 01/01/2000 | 15/12/2000 | Air | Road | Air | No
    XYZ | 01/01/2000 | 04/05/2000 | Train | Train | Train | Yes
    XYZ | 01/01/2000 | 04/11/2000 | Train | Train | Train | Yes
    Using OBI Answers, I am getting something like this.
    Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
    ABC | 01/01/2000 | 04/04/2000 | Road | Road | <BLANK> | No
    ABC | 01/01/2000 | 15/12/2000 | Air | <BLANK> | Air | No
    XYZ | 01/01/2000 | 04/05/2000 | Train | Train | <BLANK> | No
    XYZ | 01/01/2000 | 04/11/2000 | Train | <BLANK> | Train | No
Above, for traveler XYZ the Match? column clearly shows a wrong result (although somehow it's correct for traveler ABC).
    Would appreciate if someone can guide me how to resolve the issue.
    Many thanks,
    Manoj.
    Edited by: mandix on 27-Nov-2009 08:43
    Edited by: mandix on 27-Nov-2009 08:47
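For reference, the Oracle-side pattern being replicated would look roughly like the sketch below (travels is a hypothetical table). Note the explicit window on LAST_VALUE: by default it only sees up to the current row, a classic pitfall when porting this pattern that produces blanks much like the ones above.
SELECT traveler, journey_date, method,
       FIRST_VALUE(method) OVER (PARTITION BY traveler
                                 ORDER BY journey_date) AS first_method,
       LAST_VALUE(method)  OVER (PARTITION BY traveler
                                 ORDER BY journey_date
                                 ROWS BETWEEN UNBOUNDED PRECEDING
                                      AND     UNBOUNDED FOLLOWING) AS last_method
FROM   travels;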

    Hi,
Just to recap: in OBI 10.1.3.2.1, I am trying to find an alternative to the FIRST_VALUE and LAST_VALUE analytical functions used in Oracle. Somehow, I feel it's achievable. I would like to know answers to the following questions.
    1. Is there any way of referring to a cell value and displaying it in other cells for a reference value?
    For example, can I display the First Method of Travel for traveler 'ABC' and 'XYZ' for all the rows returned in the same column, respectively?
2. I tried the RMIN and RMAX functions in the RPD, but they do not accept a "BY" clause (for example, RMIN(Transaction_Id BY Traveler) to define the Lowest Sequence Id per traveler). Am I doing something wrong here? Why can a formula with a "BY" clause be defined in Answers but not in the RPD? The idea is to use this in Answers. This is in relation to my first question.
    Could someone please let me know?
    I understand that this thread that I have posted is related to something that can be done outside OBI, but still would like to know.
    If anything is not clear please let me know.
    Thanks,
    Manoj.

  • Analytic function to retrieve a value one year ago

    Hello,
    I'm trying to find an analytic function to get a value on another row by looking on a date with Oracle 11gR2.
    I have a table with a date_id (truncated date), a flag and a measure. For each date, I have at least one row (sometimes 2), so it is gapless.
    I would like to find analytic functions to show for each date :
    sum of the measure for that date
    sum of the measure one week ago
    sum of the measure one year ago
As it is gapless, I managed to do it for the week by grouping by date in a subquery and using LAG with an offset of 7 on top of it (see below).
However, I'm struggling with how to do that for the data one year ago, since there might be leap years; I cannot simply set the offset to 365.
    Is it possible to do it with a RANGE BETWEEN window clause? I can't manage to have it working with dates.
    Week :LAG with offset 7
    SQL Fiddle
    or
create table daily_counts (
  date_id date,
  internal_flag number,
  measure1 number
);
    insert into daily_counts values ('01-Jan-2013', 0, 8014);
    insert into daily_counts values ('01-Jan-2013', 1, 2);
    insert into daily_counts values ('02-Jan-2013', 0, 1300);
    insert into daily_counts values ('02-Jan-2013', 1, 37);
    insert into daily_counts values ('03-Jan-2013', 0, 19);
    insert into daily_counts values ('03-Jan-2013', 1, 14);
    insert into daily_counts values ('04-Jan-2013', 0, 3);
    insert into daily_counts values ('05-Jan-2013', 0, 0);
    insert into daily_counts values ('05-Jan-2013', 1, 1);
    insert into daily_counts values ('06-Jan-2013', 0, 0);
    insert into daily_counts values ('07-Jan-2013', 1, 3);
    insert into daily_counts values ('08-Jan-2013', 0, 33);
    insert into daily_counts values ('08-Jan-2013', 1, 9);
    commit;
    select
        date_id,
        total1,
        LAG(total1, 7) OVER(ORDER BY date_id) total_one_week_ago
  from (
        select
          date_id,
          SUM(measure1) total1
        from daily_counts
        group by date_id
       )
  order by 1;
    Year : no idea?
I can't give a gapless example (it would be too long), but perhaps there is a solution using the date directly:
    SQL Fiddle
    or add this to the schema above :
    insert into daily_counts values ('07-Jan-2012', 0, 11);
    insert into daily_counts values ('07-Jan-2012', 1, 1);
    insert into daily_counts values ('08-Jan-2012', 1, 4);
    Thank you for your help.
    Floyd

    Hi,
Sorry, I'm not sure I understand the problem.
    If you are certain that there is at least 1 row for every day, then you can be sure that the GROUP BY will produce exactly 1 row per day, and you can use LAG (total1, 365) just like you already use LAG (total1, 7).
Are you concerned about leap years?  That is, when the day is March 1, 2016, do you want the total_one_year_ago column to reflect March 1, 2015, which was 366 days earlier?  In that case, use
    date_id - ADD_MONTHS (date_id, -12)
    instead of  365.
    LAG only works with an exact number, but you can use RANGE BETWEEN with other analytic functions, such as MIN or SUM:
    SELECT DISTINCT
              date_id
    ,         SUM (measure1) OVER (PARTITION BY date_id)    AS total1
    ,         SUM (measure1) OVER ( ORDER BY      date_id
                                    RANGE BETWEEN 7 PRECEDING
                                          AND     7 PRECEDING
                                  )                       AS total1_one_week_ago
    ,         SUM (measure1) OVER ( ORDER BY      date_id
                                    RANGE BETWEEN 365 PRECEDING
                                          AND     365 PRECEDING
                                  )                       AS total1_one_year_ago
    FROM      daily_counts
    ORDER BY  date_id
    Again, use date arithmetic instead of the hard-coded 365, if that's an issue.
    As Hoek said, it really helps to post the exact results you want from the given sample data.  You're miles ahead of the people who don't even post the sample data, though.
    You're right not to post hundreds of INSERT statements to get a year's data.  Here's one way to generate sample data for lots of rows at the same time:
    -- Put a 0 into the table for every day in 2012
    INSERT INTO daily_counts (date_id, measure1)
    SELECT  DATE '2011-12-31' + LEVEL
    ,       0
    FROM    dual
CONNECT BY LEVEL <= 366;
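Another direction, not analytic at all but immune to the offset question entirely: aggregate once, then outer-join each day to its counterpart twelve months back. A sketch against the daily_counts table above (leap days get the usual ADD_MONTHS month-end handling):
WITH totals AS (SELECT date_id, SUM(measure1) AS total1
                FROM   daily_counts
                GROUP  BY date_id)
SELECT t.date_id,
       t.total1,
       y.total1 AS total_one_year_ago          -- NULL when no row exists a year back
FROM   totals t
LEFT   JOIN totals y ON y.date_id = ADD_MONTHS(t.date_id, -12)
ORDER  BY t.date_id;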

  • Using analytical function to calculate concurrency between date range

    Folks,
    I'm trying to use analytical functions to come up with a query that gives me the
concurrency of jobs executing within a date range.
    For example:
    JOB100 - started at 9AM - stopped at 11AM
    JOB200 - started at 10AM - stopped at 3PM
    JOB300 - started at 12PM - stopped at 2PM
The query would tell me that JOB100 ran with a concurrency of 2 because JOB100 and JOB200
were running during the same time window. JOB200 ran with a concurrency
of 3 because all three jobs ran within its start and stop time. The output would look like this.
    JOB START STOP CONCURRENCY
    === ==== ==== =========
    100 9AM 11AM 2
    200 10AM 3PM 3
    300 12PM 2PM 2
I've been looking at this post, and this one is very similar...
    Analytic functions using window date range
    Here is the sample data..
    CREATE TABLE TEST_JOB
    ( jobid NUMBER,
    created_time DATE,
    start_time DATE,
stop_time DATE
);
    insert into TEST_JOB values (100, sysdate -1, to_date('05/04/08 09:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 11:00:00','MM/DD/YY hh24:mi:ss'));
    insert into TEST_JOB values (200, sysdate -1, to_date('05/04/08 10:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 13:00:00','MM/DD/YY hh24:mi:ss'));
    insert into TEST_JOB values (300, sysdate -1, to_date('05/04/08 12:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 14:00:00','MM/DD/YY hh24:mi:ss'));
    select * from test_job;
    JOBID|CREATED_TIME |START_TIME |STOP_TIME
    ----------|--------------|--------------|--------------
    100|05/04/08 09:28|05/04/08 09:00|05/04/08 11:00
    200|05/04/08 09:28|05/04/08 10:00|05/04/08 13:00
    300|05/04/08 09:28|05/04/08 12:00|05/04/08 14:00
    Any help with this query would be greatly appreciated.
    thanks.
    -peter
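Hi Peter, If "concurrency" means "how many jobs overlap this job's window, counting the job itself", then a plain self-join reproduces the 2/3/2 counts from your example without analytics or MODEL; a sketch:
SELECT a.jobid,
       a.start_time,
       a.stop_time,
       COUNT(*) AS concurrency                 -- includes the job itself
FROM   test_job a
JOIN   test_job b
  ON   b.start_time <= a.stop_time             -- b starts before a stops
 AND   b.stop_time  >= a.start_time            -- and stops after a starts
GROUP  BY a.jobid, a.start_time, a.stop_time
ORDER  BY a.jobid;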

After some checking, the model rule wasn't working exactly as expected.
I believe it's working right now. I'm posting a self-contained example for completeness' sake. I use 2 functions to convert back and forth between epoch Unix timestamps, so
I'll post them here as well.
    Like I said I think this works okay, but any feedback is always appreciated.
    -peter
    CREATE OR REPLACE FUNCTION date_to_epoch(p_dateval IN DATE)
    RETURN NUMBER
    AS
    BEGIN
    return (p_dateval - to_date('01/01/1970','MM/DD/YYYY')) * (24 * 3600);
    END;
    CREATE OR REPLACE FUNCTION epoch_to_date (p_epochval IN NUMBER DEFAULT 0)
    RETURN DATE
    AS
    BEGIN
    return to_date('01/01/1970','MM/DD/YYYY') + (( p_epochval) / (24 * 3600));
    END;
    DROP TABLE TEST_MODEL3 purge;
    CREATE TABLE TEST_MODEL3
    ( jobid NUMBER,
    start_time NUMBER,
    end_time NUMBER);
    insert into TEST_MODEL3
    VALUES (300,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 19:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (200,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 12:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (400,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 14:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (500,date_to_epoch(to_date('05/07/2008 11:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 16:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (600,date_to_epoch(to_date('05/07/2008 15:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 22:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (100,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 23:00','MM/DD/YYYY hh24:mi')));
    commit;
    SELECT jobid,
    epoch_to_date(start_time)start_time,
    epoch_to_date(end_time)end_time,
    n concurrency
    FROM TEST_MODEL3
    MODEL
    DIMENSION BY (start_time,end_time)
    MEASURES (jobid,0 n)
    (n[any,any]=
    count(*)[start_time<= cv(start_time),end_time>=cv(start_time)]+
count(*)[start_time > cv(start_time) and start_time <= cv(end_time), end_time >= cv(start_time)])
    ORDER BY start_time;
    The results look like this:
    JOBID|START_TIME|END_TIME |CONCURRENCY
    ----------|---------------|--------------|-------------------
    100|05/07/08 09:00|05/07/08 23:00| 6
    200|05/07/08 09:00|05/07/08 12:00| 5
    300|05/07/08 10:00|05/07/08 19:00| 6
    400|05/07/08 10:00|05/07/08 14:00| 5
    500|05/07/08 11:00|05/07/08 16:00| 6
    600|05/07/08 15:00|05/07/08 22:00| 4

  • How to use sum analytic function in adf

    Hi
    jdev 11.1.1.5
    oracle 11g r2
I want to use analytic functions (sum, count, avg, ...).
    I see [url http://andrejusb.blogspot.co.uk/2013/02/oracle-analytic-functions-for-total-and.html]Oracle Analytic Functions for Total and Average Calculation in ADF BC
and used it in my VO and JSF page. My VO has too many records, and I want the sum in the table footer to be computed on demand (because of performance): if the user does not want to see the sum in the footer of the table, it should not be calculated.
    what is your idea?

Before I read that blog I used a separate VO for the sum, but after reading it I decided to use an analytic function. We have some pages with many DVT graphs and tables; right now we use a separate VO for each of them, which performs poorly because too many queries must run in the database. I want to have one VO with some analytic functions serving both the graphs and the tables.
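For context, the analytic shape such a combined VO query takes is roughly the sketch below (employees is a hypothetical table): the totals are computed once per row by the database, so a separate summary VO and its extra round trip are avoided.
SELECT emp_id,
       salary,
       SUM(salary) OVER () AS total_salary,    -- grand total, repeated on every row
       AVG(salary) OVER () AS avg_salary       -- grand average, likewise
FROM   employees;
Computing it on demand is then a UI question; one common compromise is to toggle between this query and a plain one when the footer is hidden.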

  • Analytical function fine within TOAD but throwing an error for a mapping.

    Hi,
    When I validate an expression based on SUM .... OVER PARTITION BY in a mapping, I am getting the following error.
    Line 4, Col 23:
    PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
    * & = - + < / > at in is mod remainder not rem then
    <an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
    LIKE4_ LIKEC_ between || multiset member SUBMULTISET_
    However, using TOAD, the expression is working fine.
    A staging table has got three columns, col1, col2 and col3. The expression is checking for a word in col3. The expression is as under.
    (CASE WHEN SUM (CASE WHEN UPPER(INGRP1.col3) LIKE 'some_value%'
    THEN 1
    ELSE 0
    END) OVER (PARTITION BY INGRP1.col1
    ,INGRP1.col2) > 0
    THEN 'Y'
    ELSE 'N'
    END)
I searched the forum for similar issues, but was not able to resolve mine.
    Could you please let me know what's wrong here?
    Many thanks,
    Manoj.

    Yes, expression validation in 10g simply does not work for (i.e. does not recognize) analytic functions.
    It can simply be ignored. You should also set Generation mode to "Set Based only". Otherwise the mapping will fail to deploy under certain circumstances (when using non-set-based (PL/SQL) operators after the analytic function).
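For anyone hitting PLS-00103 here, the underlying reason: analytic functions are SQL-only, so OVER is legal inside a SELECT statement but not in a PL/SQL expression, and the validator parses the expression with PL/SQL rules. The same CASE is fine once it lives in a query; a sketch with an assumed staging_table:
-- Legal: the analytic sits inside a SQL statement
SELECT CASE WHEN SUM(CASE WHEN UPPER(col3) LIKE 'some_value%' THEN 1 ELSE 0 END)
                 OVER (PARTITION BY col1, col2) > 0
            THEN 'Y'
            ELSE 'N'
       END AS flag
FROM   staging_table;
-- Not legal: the same expression in a PL/SQL assignment raises PLS-00103 at OVER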
