Grouping error in Oracle's analytic function PERCENTILE_CONT()

Hi,
I have a question regarding the usage of Oracle's analytic function PERCENTILE_CONT(). The underlying time data in the table is of hourly granularity, and I want to fetch the average and peak values for the day along with the 80th percentile for that day. For the sake of clarity I am only posting the relevant portion of the query.
Any idea how to rewrite the query and achieve the same objective?
SELECT   TRUNC (sdd.ts) AS ts,
         max(sdd.maxvalue) AS max_value, avg(sdd.avgvalue) AS avg_value,
         PERCENTILE_CONT(0.80) WITHIN GROUP (ORDER BY  sdd.avgvalue ASC) OVER (PARTITION BY pm.sysid,trunc(sdd.ts)) as Percentile_Cont_AVG
FROM     XYZ
WHERE
          XYZ
GROUP BY  TRUNC (sdd.ts)  
ORDER BY  TRUNC (sdd.ts)
Oracle Error:
ERROR at line 5:
ORA-00979: not a GROUP BY expression

You probably mixed up the aggregate and analytic versions of PERCENTILE_CONT.
The below should work, but I don't know if it produces the desired results.
SELECT   TRUNC (sdd.ts) AS ts,
         max(sdd.maxvalue) AS max_value, avg(sdd.avgvalue) AS avg_value,
         PERCENTILE_CONT(0.80) WITHIN GROUP (ORDER BY  sdd.avgvalue ASC)  as Percentile_Cont_AVG
FROM     XYZ
sorry, what is this where clause for??
WHERE
          XYZ
GROUP BY  TRUNC (sdd.ts)  
ORDER BY  TRUNC (sdd.ts)
Edited by: chris227 on 26.03.2013 05:45
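For completeness: the ORA-00979 comes from using the analytic (OVER) form in a GROUP BY query, because the OVER clause partitions by pm.sysid, which is not in the GROUP BY list. If the per-system breakdown is still wanted, here is a sketch of the aggregate form with pm.sysid added to the grouping (the FROM/WHERE placeholders are kept as in the original post):
SELECT   pm.sysid,
         TRUNC (sdd.ts) AS ts,
         max(sdd.maxvalue) AS max_value, avg(sdd.avgvalue) AS avg_value,
         -- aggregate form: no OVER clause, so it groups together with MAX and AVG
         PERCENTILE_CONT(0.80) WITHIN GROUP (ORDER BY sdd.avgvalue ASC) AS Percentile_Cont_AVG
FROM     XYZ
WHERE
          XYZ
GROUP BY  pm.sysid, TRUNC (sdd.ts)
ORDER BY  pm.sysid, TRUNC (sdd.ts)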

Similar Messages

  • Urgent: Regarding Oracle 10g Analytic functions

    Hi,
    EMP TABLE:
    EMPNO  ENAME   JOB        MGR   HIREDATE    SAL   COMM  DEPTNO
    =====  ======  =========  ====  ==========  ====  ====  ======
    7369   SMITH   CLERK      7902  12/17/1980   800          20
    7499   ALLEN   SALESMAN   7698  2/20/1981   1600   300    30
    7521   WARD    SALESMAN   7698  2/22/1981   1250   500    30
    7566   JONES   MANAGER    7839  4/2/1981    2975          20
    7654   MARTIN  SALESMAN   7698  9/28/1981   1250  1400    30
    7698   BLAKE   MANAGER    7839  5/1/1981    2850          30
    7782   CLARK   MANAGER    7839  6/9/1981    2450
    7788   SCOTT   ANALYST    7566  12/9/1982   3000          20
    7839   KING    PRESIDENT        11/17/1981  5000
    7844   TURNER  SALESMAN   7698  9/8/1981    1500     0    30
    7876   ADAMS   CLERK      7788  1/12/1983   1100          20
    7900   JAMES   CLERK      7698  12/3/1981    950          30
    7902   FORD    ANALYST    7566  12/3/1981   3000          20
    7934   MILLER  CLERK      7782  1/23/1982   1300
    ================================================================
    I would like the output grouped by manager, with the employees under that manager, using analytic functions.
    Output should look like:
    ManagerName EMPNAME
    ==========================================================
    KING JONES,BLAKE,CLARK
    JONES SCOTT,FORD
    BLAKE ALLEN,WARD,MARTIN,TURNER,JAMES
    CLARK MILLER
    FORD SMITH
    SCOTT ADAMS
    Also I would like to run this query in a unix shell script in order to create a folder structure like this:
    Root Folder: King -> Jones -> SCOTT -> ADAMS
                                -> FORD  -> SMITH
                       -> BLAKE -> ALLEN
                                -> WARD
                                -> MARTIN
                                -> TURNER
                                -> JAMES
                       -> CLARK -> MILLER
    In total, 14 folders should be created.
    Thanks in Advance
    G.Vamsi Krishna
    Edited by: user10733211 on Apr 20, 2009 11:30 PM

    user10633982 wrote:
    hey guys can you please give your personal opinions and remarks out of this thread.
    this thread is supposed to be solving query and not for chit chatting
    Not chit chatting and not just personal opinions, just letting you know how it works around here...
    Urgent is it?
    Why? Have you forgotten to do your coursework and you'll get thrown off your course if you don't hand it in today?
    What makes you believe that your request for help is more important than someone else who has requested help? It's very rude to assume you are more important than somebody else, and I'm sure they would like an answer to their issue as soon as they can get one too, but they've generally been polite and not demanded that it is urgent.
    Also, you assume that people giving answers are all sitting here just waiting to answer your question for you. That's not so. We're all volunteers with our own jobs to do. How dare you presume to demand our attention with urgency.
    If you want help and you want it answering quickly you simply just put your issue forward and provide as much valuable information as possible.
    You will find if you post on here demanding your post is urgent then most people will just ignore it, some will tell you to get lost, and some will explain to you why you shouldn't post "urgent" requests. Occasionally you may find somebody who's got nothing better to do who will actually provide you with an answer, but you really are limiting your options by not asking properly.
    How can something being run against the SCOTT schema be something that is "urgent"?
    For the first part of your enquiry:
    SQL> ed
    Wrote file afiedt.buf
      1  with emps as (select ename, mgr, row_number() over (partition by mgr order by ename) as rn from emp)
      2  select ename as managername
      3        ,(select ltrim(sys_connect_by_path(emps.ename,','),',')
      4          from   emps
      5          where emps.mgr = emp.empno
      6          and connect_by_isleaf = 1
      7          connect by rn = prior rn + 1 and mgr = prior mgr
      8          start with rn = 1
      9         ) as empname
    10  from emp
    11* where empno in (select mgr from emp)
    SQL> /
    MANAGERNAM EMPNAME
    JONES      FORD,SCOTT
    BLAKE      ALLEN,JAMES,MARTIN,TURNER,WARD
    CLARK      MILLER
    SCOTT      ADAMS
    KING       BLAKE,CLARK,JONES
    FORD       SMITH
    6 rows selected.
    SQL>
    As for using that output in a unix shell script to create directory structures, you should consider asking in a unix forum.
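    A simpler alternative on 11g Release 2 or later (a sketch against the same SCOTT.EMP data; LISTAGG replaces the SYS_CONNECT_BY_PATH walk):
    -- one row per manager, direct reports concatenated in alphabetical order
    SELECT m.ename AS managername,
           LISTAGG(e.ename, ',') WITHIN GROUP (ORDER BY e.ename) AS empname
    FROM   emp m
           JOIN emp e ON e.mgr = m.empno
    GROUP  BY m.ename;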

  • Help on Using Analytical Functions

    I am getting an error when I use analytic functions in Expressions
    AVG( INGRP1.Test1 ) OVER (PARTITION BY INGRP1.Test2)
    Error is as follows
    Line 1, Col 28:
    PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
    * & = - + ; < / > at in is mod remainder not rem
    <an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
    LIKE4_ LIKEC_ between || multiset member SUBMULTISET_

    Hi,
    The syntax of this part of the SQL statement is okay. Please post the complete statement to identify the error.
    Sometimes Oracle identifies the wrong point for the error.
    Regards,
    Detlef
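    The usual cause of PLS-00103 here is that the expression is validated as a PL/SQL expression, and analytic functions are SQL-only. A minimal sketch against SCOTT.EMP illustrating the difference:
    DECLARE
      v_avg NUMBER;
    BEGIN
      -- works: the OVER clause is parsed by the SQL engine inside a query
      SELECT AVG(sal) OVER (PARTITION BY deptno)
        INTO v_avg
        FROM emp
       WHERE empno = 7788;
      -- does not compile (PLS-00103 on "OVER"): analytic syntax is not part
      -- of the PL/SQL expression grammar
      -- v_avg := AVG(sal) OVER (PARTITION BY deptno);
    END;
    /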

  • How to use aggregate functions inside analytic functions

    Can we use aggregate functions inside analytic functions?
    Please provide one example.
    Smiles.

    Hi Learner6,
    For information:
    Aggregate Functions
    Analytic Functions
    For practice:
    ORACLE-BASE - Analytic Functions
    Thank you
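    A minimal sketch against SCOTT.EMP of an aggregate nested inside an analytic function: the inner SUM(sal) is evaluated per GROUP BY group, and the outer SUM(...) OVER (...) then runs over those group results.
    SELECT deptno,
           SUM(sal)                              AS dept_total,    -- plain aggregate
           SUM(SUM(sal)) OVER ()                 AS grand_total,   -- analytic over the aggregate
           SUM(SUM(sal)) OVER (ORDER BY deptno)  AS running_total  -- running total of department totals
    FROM   emp
    GROUP  BY deptno;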

  • AVG as an analytic function - gives error ORA-00439

    I'm trying my first implementation of AVG as an analytic function. I took the following query which gave a simple average:
    SELECT
    PERSON.LASTNAME,
    COUNT(TO_NUMBER(RPTOBS.OBSVALUE)) CNT,
    AVG(TO_NUMBER(RPTOBS.OBSVALUE)) AVRG
    FROM
    TUT.PERSON PERSON,
    TUT.RPTOBS RPTOBS
    WHERE
    PERSON.PID = RPTOBS.PID AND
    RPTOBS.HDID = 54
    GROUP BY
    PERSON.LASTNAME;
    and rewrote this to give an average of the values of the last 3 dates (I think...)
    SELECT
    PERSON.LASTNAME,
    RPTOBS.OBSDATE,
    AVG(TO_NUMBER(RPTOBS.OBSVALUE))
    OVER
    (PARTITION BY PERSON.LASTNAME
    ORDER BY RPTOBS.OBSDATE
    ROWS BETWEEN UNBOUNDED PRECEDING AND 2 FOLLOWING)
    AS AVRG3
    FROM
    TUT.PERSON PERSON,
    TUT.RPTOBS RPTOBS
    WHERE
    PERSON.PID = RPTOBS.PID AND
    RPTOBS.HDID = 54;
    (This seemed to be a direct translation of a similar query in the SQL Reference.)
    I am getting an error message of:
    ORA-00439: feature not enabled: OLAP Window Functions
    Can some one tell me why?
    Thanks,
    Will Salomon

    I haven't done it personally, but I am told that it's really as simple as de-installing the Standard Edition software and then installing the Enterprise Edition. The installer will prompt you for an Oracle SID and you just have to point it to your existing database.
    You may wish to test this proposition before risking your actual system. But, in any case, take a backup.
    With 9i things are simpler: everything gets installed, you're just not allowed to use the EE features if you haven't paid for them.
    Cheers, APC
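    As a side note on the original question: a frame of ROWS BETWEEN UNBOUNDED PRECEDING AND 2 FOLLOWING averages everything from the start of the partition up to two rows ahead, not the last three observations. A sketch of a 3-row moving average (the current date and the two previous ones), using the same tables as in the question:
    SELECT person.lastname,
           rptobs.obsdate,
           AVG(TO_NUMBER(rptobs.obsvalue))
             OVER (PARTITION BY person.lastname
                   ORDER BY rptobs.obsdate
                   ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS avrg3
    FROM   tut.person person,
           tut.rptobs rptobs
    WHERE  person.pid = rptobs.pid
    AND    rptobs.hdid = 54;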

  • Analytical Function in Oracle 8.1.5

    I am using the following sql
    select empno,ename,count(*) over()
    from emp;
    But it shows the error
    ORA-00923: FROM keyword not found where expected
    I am using Oracle 8.1.5
    Please help me.

    Analytic functions were introduced in Oracle 8i.
    But only for the Enterprise Edition. If the OP has the Standard Edition they can't use analytic functions.
    Cheers, APC
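    If analytic functions are not available (8.1.5 Standard Edition), the same result can be had without them; a sketch using an inline view, which works in any edition:
    SELECT e.empno, e.ename, c.cnt   -- total row count repeated on every row
    FROM   emp e,
           (SELECT COUNT(*) AS cnt FROM emp) c;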

  • Error using Analytic function in reports

    Hi,
    I am trying to use Oracle analytic function (lag) in a report. But am getting the below error:
    Encountered the symbol "(" when expecting one of the following:
    ,from into bulk
    This is the code in the formula column:
    function extend_lifeFormula return VARCHAR2 is
    l_extend_life VARCHAR2(80);
    l_life_in_months VARCHAR2(80);
    l_asset_id NUMBER;
    begin
    SRW.REFERENCE(:P_BOOK_NAME);
    SRW.REFERENCE(:ASSET_ID);
    SELECT asset_id,
         lag(life_in_months,1,0) over (PARTITION BY asset_id
                   ORDER BY transaction_header_id_in) Extend_Life
    INTO l_asset_id,
    l_life_in_months
    FROM fa_books
    WHERE book_type_code = 'US GAAP'
    AND asset_id = 1;
    return l_life_in_months;
    end;
    Has anyone experienced this error before? Does the client PL/SQL engine not support analytic functions? The above query runs fine in SQL.
    Thanks,
    Ashish

    From our version of 6i Reports Builder Help, I got ...
    Oracle ORACLE PL/SQL V8.0.6.3.0 - Production
    You may check yours.
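    Since the Reports 6i client-side PL/SQL engine (8.0.6) does not parse analytic syntax, one common workaround is to keep the OVER clause in server-side SQL and have the formula column query that instead. A sketch (the view name is made up; the columns are the ones from the post):
    CREATE OR REPLACE VIEW fa_books_prior_life AS
    SELECT asset_id,
           book_type_code,
           transaction_header_id_in,
           life_in_months,
           -- analytic function evaluated by the database, not by Reports
           LAG(life_in_months, 1, 0)
             OVER (PARTITION BY asset_id
                   ORDER BY transaction_header_id_in) AS prior_life_in_months
    FROM   fa_books;
    The formula column can then do a plain SELECT ... INTO against this view, with no OVER clause in the PL/SQL at all.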

  • Help with Oracle Analytic Function scenario

    Hi,
    I am new to analytic functions and was wondering if someone could help me with the data scenario below. I have a table with the following data
    COLUMN A COLUMN B COLUMN C
    13368834 34323021 100
    13368835 34438258 50
    13368834 34438258 50
    13368835 34323021 100
    The output I want is
    COLUMN A COLUMN B COLUMN C
    13368834 34323021 100
    13368835 34438258 50
    A simple DISTINCT won't give me the desired output, so I was wondering if there is any way that I can get the result using ANALYTIC FUNCTIONS and DISTINCT.
    Any help will be greatly appreciated.
    Thanks.

    Hi,
    Welcome to the forum!
    Whenever you have a question, please post your sample data in a form that people can use to re-create the problem and test their solutions.
    For example:
    CREATE TABLE     table_x
    (      columna     NUMBER
    ,      columnb     NUMBER
    ,      columnc     NUMBER
    );
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368834, 34323021, 100);
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368835, 34438258, 50);
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368834, 34438258, 50);
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368835, 34323021, 100);
    Do you want something that works in your version of Oracle? Of course you do! So tell us which version that is.
    How do you get the results that you want? Explain what each row of output represents. It looks like
    the 1st row contains the 1st distinct value from each column (where "first" means descending order for columnc, and ascending order for the others),
    the 2nd row contains the 2nd distinct value,
    the 3rd row contains the 3rd distinct value, and so on.
    If that's what you want, here's one way to get it (in Oracle 9 and up):
    WITH     got_nums     AS
    (
         SELECT     columna, columnb, columnc
         ,     DENSE_RANK () OVER (ORDER BY  columna      )     AS a_num
         ,     DENSE_RANK () OVER (ORDER BY  columnb      )     AS b_num
         ,     DENSE_RANK () OVER (ORDER BY  columnc  DESC)     AS c_num
         FROM     table_x
    )
    SELECT       MAX (a.columna)          AS columna
    ,       MAX (b.columnb)          AS columnb
    ,       MAX (c.columnc)          AS columnc
    FROM              got_nums     a
    FULL OUTER JOIN  got_nums     b     ON     b.b_num     =           a.a_num
    FULL OUTER JOIN  got_nums     c     ON     c.c_num     = COALESCE (a.a_num, b.b_num)
    GROUP BY  COALESCE (a.a_num, b.b_num, c.c_num)
    ORDER BY  COALESCE (a.a_num, b.b_num, c.c_num)
    ;
    I've been trying to find a good name for this type of query. The best I've heard so far is "Prix Fixe Query", named after the menus where you get a choice of soups (listed in one column), appetizers (in another column), main dishes (in a 3rd column), and so on. The items on the first row don't necessarily have any relationship to each other.
    The solution does not assume that there are the same number of distinct items in each column.
    For example, if you add this row to the sample data:
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368835, 34323021, 99);
    which is a copy of the last row, except that there is a completely new value for columnc, then the output is:
      COLUMNA    COLUMNB    COLUMNC
     13368834   34323021        100
     13368835   34438258         99
                                 50
    Starting in Oracle 11, you can also do this with an unpivot-pivot query.

  • Replacing Oracle's FIRST_VALUE and LAST_VALUE analytical functions.

    Hi,
    I am using OBI 10.1.3.2.1 where, I guess, EVALUATE is not available. I would like to know alternatives, esp. to replace Oracle's FIRST_VALUE and LAST_VALUE analytical functions.
    I want to track some changes. For example, there are four methods of travel - Air, Train, Road and Sea. I would like to know a traveler's first method of travel and the last method of travel in a year. If both of them match then a certain action is taken. If they do not match, then another action is taken.
    I tried as under.
    1. Get the Sequence ID for each travel within a year per traveler as Sequence_Id.
    2. Get the Lowest Sequence ID (which should be 1) for travels within a year per traveler as Sequence_LId.
    3. Get the Highest Sequence ID (which could be 1 or greater than 1) for travels within a year per traveler as Sequence_HId.
    4. If Sequence ID = Lowest Sequence ID then display the method of travel as First Method of Travel.
    5. If Sequence ID = Highest Sequence ID then display the method of travel as Latest Method of Travel.
    6. If First Method of Travel = Latest Method of Travel then display Yes/No as Match.
    The issue is that cells could be blank in First Method of Travel and Last Method of Travel unless the traveler traveled only once in a year.
    Using Oracle's FIRST_VALUE and LAST_VALUE analytical functions, I can get a result like
    Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
    ABC | 01/01/2000 | 04/04/2000 | Road | Road | Air | No
    ABC | 01/01/2000 | 15/12/2000 | Air | Road | Air | No
    XYZ | 01/01/2000 | 04/05/2000 | Train | Train | Train | Yes
    XYZ | 01/01/2000 | 04/11/2000 | Train | Train | Train | Yes
    Using OBI Answers, I am getting something like this.
    Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
    ABC | 01/01/2000 | 04/04/2000 | Road | Road | <BLANK> | No
    ABC | 01/01/2000 | 15/12/2000 | Air | <BLANK> | Air | No
    XYZ | 01/01/2000 | 04/05/2000 | Train | Train | <BLANK> | No
    XYZ | 01/01/2000 | 04/11/2000 | Train | <BLANK> | Train | No
    Above, for XYZ traveler the Match? clearly shows a wrong result (although somehow it's correct for traveler ABC).
    Would appreciate if someone can guide me how to resolve the issue.
    Many thanks,
    Manoj.
    Edited by: mandix on 27-Nov-2009 08:43
    Edited by: mandix on 27-Nov-2009 08:47

    Hi,
    Just to recap, in OBI 10.1.3.2.1, I am trying to find an alternative way to FIRST_VALUE and LAST_VALUE analytical functions used in Oracle. Somehow, I feel it's achievable. I would like to know answers to the following questions.
    1. Is there any way of referring to a cell value and displaying it in other cells for a reference value?
    For example, can I display the First Method of Travel for traveler 'ABC' and 'XYZ' for all the rows returned in the same column, respectively?
    2. I tried the RMIN, RMAX functions in the RPD but they do not accept a "BY" clause (for example, RMIN(Transaction_Id BY Traveler) to define the Lowest Sequence Id per traveler). Am I doing something wrong here? Why can a formula with a "BY" clause be defined in Answers but not in the RPD? The idea is to use this in Answers. This is in relation to my first question.
    Could someone please let me know?
    I understand that this thread that I have posted is related to something that can be done outside OBI, but still would like to know.
    If anything is not clear please let me know.
    Thanks,
    Manoj.
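    For reference, this is what the two functions do on the database side; a sketch with made-up table and column names matching the description above (note that LAST_VALUE needs the explicit full frame, otherwise it only sees rows up to the current one):
    SELECT traveler,
           journey_date,
           method,
           FIRST_VALUE(method)
             OVER (PARTITION BY traveler, EXTRACT(YEAR FROM journey_date)
                   ORDER BY journey_date) AS first_method,
           LAST_VALUE(method)
             OVER (PARTITION BY traveler, EXTRACT(YEAR FROM journey_date)
                   ORDER BY journey_date
                   ROWS BETWEEN UNBOUNDED PRECEDING
                            AND UNBOUNDED FOLLOWING) AS last_method
    FROM   travel_log;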

  • How to use group by in analytic function

    I need to return the department which has the minimum salary in one row. It must be done with an analytic function, but I have a problem with GROUP BY. I cannot use min() without GROUP BY.
    select * from (select min(sal) min_salary, deptno, RANK() OVER (ORDER BY sal ASC, rownum ASC) RN from emp group by deptno) WHERE RN < 20 order by deptno;
    Edited by: senza on 6.11.2009 16:09

    different query, different results.
    LPALANI@l11gr2>select department_id, min(salary)
      2  from hr.employees
      3  group by department_id
      4  order by 2;
       DEPARTMENT_ID      MIN(SALARY)
                  50            2,100
                  20            2,100
                  30            2,500
                  60            4,200
                  10            4,400
                  80            6,100
                  40            6,500
                 100            6,900
                                7,000
                 110            8,300
                  70           10,000
                  90           17,000
    12 rows selected.
    LPALANI@l11gr2>
    LPALANI@l11gr2>-- Always lists one department in a non-deterministic way
    LPALANI@l11gr2>select * from (
      2  select department_id, min(salary) min_salary
      3  from hr.employees
      4  group by department_id
      5  order by 2) where rownum = 1;
       DEPARTMENT_ID       MIN_SALARY
                  20            2,100
    LPALANI@l11gr2>
    LPALANI@l11gr2>-- Out of the departments with the same least salary, returns the one with the least department number
    LPALANI@l11gr2>SELECT   MIN (department_id) KEEP (DENSE_RANK FIRST ORDER BY salary) AS dept_with_lowest_sal, min(salary) min_salary
      2  FROM        hr.employees;
    DEPT_WITH_LOWEST_SAL       MIN_SALARY
                      20            2,100
    LPALANI@l11gr2>
    LPALANI@l11gr2>-- This will list all the deparments with the minimum salary
    LPALANI@l11gr2>select department_id, min_salary
      2  from (select
      3  department_id,
      4  min(salary) min_salary,
      5  RANK() OVER (ORDER BY min(salary) ASC) RN
      6            from hr.employees
      7            group by department_id)
      8  WHERE rn=1;
       DEPARTMENT_ID       MIN_SALARY
                  20            2,100
                  50            2,100

  • Analytic Functions with GROUP-BY Clause?

    I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
    Hypothetical table SALES, consisting of DAY_ID, PRODUCT_ID, PURCHASER_ID, and PURCHASE_PRICE, lists all the individual purchases.
    Hypothetical Business Question: Product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
    INSERT INTO SALES VALUES(1,1,1,1.0);
    INSERT INTO SALES VALUES(1,1,1,2.0);
    INSERT INTO SALES VALUES(1,2,1,3.0);
    INSERT INTO SALES VALUES(1,2,1,4.0);
    INSERT INTO SALES VALUES(2,1,1,5.0);
    INSERT INTO SALES VALUES(2,1,1,6.0);
    INSERT INTO SALES VALUES(2,2,1,7.0);
    INSERT INTO SALES VALUES(2,2,1,8.0);
    COMMIT;
    Day 1: If I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
    Day 2: If I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
    The desired result set is:
    DAY_ID                 MY_MEASURE
    1                        6
    2                       14
    The following SQL gets me tantalizingly close:
    SELECT DAY_ID,
      MAX(PURCHASE_PRICE)
      KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
      OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
      FROM SALES
    ORDER BY DAY_ID
    DAY_ID                 MY_MEASURE
    1                      2
    1                      2
    1                      4
    1                      4
    2                      6
    2                      6
    2                      8
    2                      8
    But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytical functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
    Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
    I am using a reporting tool, so unfortunately using things like inline views are not an option. I need to be able to define "MY_MEASURE" as something the query tool can apply the SUM() function to in its generated SQL.
    (Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
    I humbly solicit your collective wisdom, oh forum.

    Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function.
    SELECT  DAY_ID,
            PRODUCT_ID,
            MAX(PURCHASE_PRICE) MAX_PRICE
      FROM  SALES
      GROUP BY DAY_ID,
               PRODUCT_ID
      ORDER BY DAY_ID,
               PRODUCT_ID
    DAY_ID                 PRODUCT_ID             MAX_PRICE             
    1                      1                      2                     
    1                      2                      4                     
    2                      1                      6                     
    2                      2                      8
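    One way to get a single row per DAY_ID in one SELECT, with no inline view, is to nest the aggregate inside an analytic function and collapse the duplicates with DISTINCT; a sketch against the SALES data above (whether the reporting tool will accept this shape is another question):
    SELECT DISTINCT
           day_id,
           -- MAX per (day_id, product_id) group, then summed across products per day
           SUM(MAX(purchase_price)) OVER (PARTITION BY day_id) AS my_measure
    FROM   sales
    GROUP  BY day_id, product_id
    ORDER  BY day_id;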

  • Analytical function fine within TOAD but throwing an error for a mapping.

    Hi,
    When I validate an expression based on SUM .... OVER PARTITION BY in a mapping, I am getting the following error.
    Line 4, Col 23:
    PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
    * & = - + < / > at in is mod remainder not rem then
    <an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
    LIKE4_ LIKEC_ between || multiset member SUBMULTISET_
    However, using TOAD, the expression is working fine.
    A staging table has got three columns, col1, col2 and col3. The expression is checking for a word in col3. The expression is as under.
    (CASE WHEN SUM (CASE WHEN UPPER(INGRP1.col3) LIKE 'some_value%'
    THEN 1
    ELSE 0
    END) OVER (PARTITION BY INGRP1.col1
    ,INGRP1.col2) > 0
    THEN 'Y'
    ELSE 'N'
    END)
    I searched the forum for similar issues, but not able to resolve my issue.
    Could you please let me know what's wrong here?
    Many thanks,
    Manoj.

    Yes, expression validation in 10g simply does not work for (i.e. does not recognize) analytic functions.
    It can simply be ignored. You should also set Generation mode to "Set Based only". Otherwise the mapping will fail to deploy under certain circumstances (when using non-set-based (PL/SQL) operators after the analytic function).

  • Discoverer Analytic Function windowing - errors and bad aggregation

    I posted this first on Database General forum, but then I found this was the place to put it:
    Hi, I'm using this kind of windowing function:
    SUM(Receitas Especificas) OVER(PARTITION BY Tipo Periodo,Calculado,"Empresa Descrição (Operador)","Empresa Descrição" ORDER BY Ini Periodo RANGE BETWEEN INTERVAL '12' MONTH PRECEDING AND INTERVAL '12' MONTH PRECEDING )
    If I use the "Receitas Especificas SUM" instead of
    "Receitas Especificas" I get the following error running the report:
    "an error occurred while attempting to run..."
    This is not in accordance to:
    http://www.boku.ac.at/oradoc/ias/10g(9.0.4)/bi.904/b10268.pdf
    but ok, the version without SUM inside works.
    Another problem is the fact that for analytic function with PARTITION BY,
    this does not work (shows the cannot aggregate symbol) if we collapse or use "<All>" in page items.
    But it works if we remove the item from the PARTITION BY and also remove from workbook.
    It's even worse for windowing functions(query above), because the query
    only works if we remove the item from the PARTITION BY but we have to show it on the workbook - and this MAKES NO SENSE... :(
    Please help.

    Unfortunately Discoverer doesn't show (correct) values for analytical functions when selecting "<All>" in a page item. I found out that it does work when you add the analytical function to the db-view instead of to the report as a calculation or as a calculated item on the folder.
    The only problem is you have to name all page-items in the PARTITION window, so, when adding a page-item to the report, you have to change the db-view and alter the PARTITION window.
    Michael

  • Can i use an analytic function instead of a group by clause?

    Can i use an analytic function instead of a group by clause? Will this help in any performance improvement?

    Analytic functions can sometimes avoid scanning the table more than once:
    SQL> select ename,  sal, (select sum(sal) from emp where deptno=e.deptno) sum from emp e;
    ENAME             SAL        SUM
    SMITH             800      10875
    ALLEN            1600       9400
    WARD             1250       9400
    JONES            2975      10875
    MARTIN           1250       9400
    BLAKE            2850       9400
    CLARK            2450       8750
    SCOTT            3000      10875
    KING             5000       8750
    TURNER           1500       9400
    ADAMS            1100      10875
    JAMES             950       9400
    FORD             3000      10875
    MILLER           1300       8750
    14 rows selected.
    Execution Plan
    Plan hash value: 3189885365
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    14 |   182 |     3   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE    |      |     1 |     7 |            |          |
    |*  2 |   TABLE ACCESS FULL| EMP  |     5 |    35 |     3   (0)| 00:00:01 |
    |   3 |  TABLE ACCESS FULL | EMP  |    14 |   182 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
        2 - filter("DEPTNO"=:B1)
    which could be rewritten as
    SQL> select ename, sal, sum(sal) over (partition by deptno) sum from emp e;
    ENAME             SAL        SUM
    CLARK            2450       8750
    KING             5000       8750
    MILLER           1300       8750
    JONES            2975      10875
    FORD             3000      10875
    ADAMS            1100      10875
    SMITH             800      10875
    SCOTT            3000      10875
    WARD             1250       9400
    TURNER           1500       9400
    ALLEN            1600       9400
    JAMES             950       9400
    BLAKE            2850       9400
    MARTIN           1250       9400
    14 rows selected.
    Execution Plan
    Plan hash value: 1776581816
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    14 |   182 |     4  (25)| 00:00:01 |
    |   1 |  WINDOW SORT       |      |    14 |   182 |     4  (25)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL| EMP  |    14 |   182 |     3   (0)| 00:00:01 |
    ---------------------------------------------------------------------------
    Well, there is no group by and no visible performance enhancement in my example, but in Oracle7 you would have had to write the query as:
    SQL> select ename, sal, sum from emp e,(select deptno,sum(sal) sum from emp group by deptno) s where e.deptno=s.deptno;
    ENAME             SAL        SUM
    SMITH             800      10875
    ALLEN            1600       9400
    WARD             1250       9400
    JONES            2975      10875
    MARTIN           1250       9400
    BLAKE            2850       9400
    CLARK            2450       8750
    SCOTT            3000      10875
    KING             5000       8750
    TURNER           1500       9400
    ADAMS            1100      10875
    JAMES             950       9400
    FORD             3000      10875
    MILLER           1300       8750
    14 rows selected.
    Execution Plan
    Plan hash value: 2661063502
    | Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |      |    14 |   546 |     8  (25)| 00:00:01 |
    |*  1 |  HASH JOIN           |      |    14 |   546 |     8  (25)| 00:00:01 |
    |   2 |   VIEW               |      |     3 |    78 |     4  (25)| 00:00:01 |
    |   3 |    HASH GROUP BY     |      |     3 |    21 |     4  (25)| 00:00:01 |
    |   4 |     TABLE ACCESS FULL| EMP  |    14 |    98 |     3   (0)| 00:00:01 |
    |   5 |   TABLE ACCESS FULL  | EMP  |    14 |   182 |     3   (0)| 00:00:01 |
    -----------------------------------------------------------------------------
    So maybe it helps.

  • GROUP BY and analytical functions

    Hi all,
    I need your help with grouping my data.
    Below you can see sample of my data (in my case I have view where data is in almost same format).
    with test_data as(
    select '01' as code, 'SM' as abbreviation, 1010 as groupnum, 21 as pieces, 4.13 as volume, 3.186 as avgvolume from dual
    union
    select '01' as code, 'SM' as abbreviation, 2010 as groupnum, 21 as pieces, 0 as volume, 3.186 as avgvolume from dual
    union
    select '01' as code, 'SM' as abbreviation, 3000 as groupnum, 21 as pieces, 55 as volume, 3.186 as avgvolume from dual
    union
    select '01' as code, 'SM' as abbreviation, 3010 as groupnum, 21 as pieces, 7.77 as volume, 3.186 as avgvolume from dual
    union
    select '02' as code, 'SMP' as abbreviation, 1010 as groupnum, 30 as pieces, 2.99 as volume, 0.1 as avgvolume from dual
    union
    select '03' as code, 'SMC' as abbreviation, 1010 as groupnum, 10 as pieces, 4.59 as volume, 0.459 as avgvolume from dual
    union
    select '40' as code, 'DB' as abbreviation, 1010 as groupnum, 21 as pieces, 5.28 as volume, 0.251 as avgvolume from dual
    )
    select
    DECODE (GROUPING (code), 1, 'report total:', code)     as code,
    abbreviation as abbreviation,
    groupnum as pricelistgrp,
    sum(pieces) as pieces,
    sum(volume) as volume,
    sum(avgvolume) as avgvolume
    --sum(sum(distinct pieces)) over (partition by code,groupnum) as piecessum,
    --sum(volume) volume,
    --round(sum(volume) / 82,3) as avgvolume
    from test_data
    group by grouping sets((code,abbreviation,groupnum,pieces,volume,avgvolume),null)
    order by 1,3;
    The select statement which I have written returns the output below:
    CODE    ABBR    GRPOUP  PIECES   VOLUME  AVGVOL
    01     SM     1010     21     4.13     3.186
    01     SM     2010     21     0     3.186
    01     SM     3000     21     55     3.186
    01     SM     3010     21     7.77     3.186
    02     SMP     1010     30     2.99     0.1
    03     SMC     1010     10     4.59     0.459
    40     DB     1010     21     5.28     0.251
    report total:          145     79.76     13.554
    The number of pieces and avg volume are the same for the same code (01 - pieces = 21, avgvolume = 3.186, etc.)
    What I need is to get output like below:
    CODE    ABBR    GRPOUP  PIECES   VOLUME  AVGVOL
    01     SM     1010     21     4.13     3.186
    01     SM     2010     21     0     3.186
    01     SM     3000     21     55     3.186
    01     SM     3010     21     7.77     3.186
    02     SMP     1010     30     2.99     0.1
    03     SMC     1010     10     4.59     0.459
    40     DB     1010     21     5.28     0.251
    report total:          82     79.76     0.973
    Where the total number of pieces is computed as the sum of distinct numbers of pieces for each code -> *82 = 21 + 30 + 10 + 21*.
    Total volume is just sum of volumes in each row -> *79.76 = 4.13+0+55+7.77+2.99+4.59+5.28*.
    And Average volume is computed as total volume / total number of pieces -> *0.973 = 79.76 / 82*.
    I was trying to use analytical function (sum() over (partition by)) to get desired output, but without good results.
    Could anyone help me with this issue?
    Thanks in advance!
    Regards,
    Jiri

    Hi, Jiri,
    Jiri N. wrote:
    Hi all,
    I need your help with grouping my data.
    Below you can see sample of my data (in my case I have view where data is in almost same format).
    I assume the view guarantees that all rows with the same code (or the same code and groupnum) will always have the same pieces and the same avgvolume.
    with test_data as( ...
    Thanks for posting this; it's very helpful.
    What I need is to get output like below:
    CODE    ABBR    GRPOUP  PIECES   VOLUME  AVGVOL
    01     SM     1010     21     4.13     3.186
    01     SM     2010     21     0     3.186
    01     SM     3000     21     55     3.186
    01     SM     3010     21     7.77     3.186
    02     SMP     1010     30     2.99     0.1
    03     SMC     1010     10     4.59     0.459
    40     DB     1010     21     5.28     0.251
    report total:          82     79.76     0.973
    Except for the last row, you're just displaying data straight from the table (or view).
    It might be easier to get the results you want using a UNION. One branch of the UNION would get the "report total" row, and the other branch would get all the rest.
    >
    Where total number of pieces is computed as sum of distinct numbers of pieces for each code -> *82 = 21 + 30 + 10 +21*.
    It's not just distinct numbers. In this example, two different codes have pieces=21, so the total of distinct pieces is 61 = 21 + 30 + 10.
    >
    Total volume is just sum of volumes in each row -> *79.76 = 4.13+0+55+7.77+2.99+4.59+5.28*.
    And Average volume is computed as total volume / total number of pieces -> *0.973 = 79.76 / 82*.
    I was trying to use analytical function (sum() over (partition by)) to get desired output, but without good results.
    I would use nested aggregate functions to do that:
    SELECT    code
    ,       abbreviation
    ,       groupnum          AS pricelistgrp
    ,       pieces
    ,       volume
    ,       avgvolume
    FROM      test_data
         UNION ALL
    SELECT        'report total:'     AS code
    ,        NULL                  AS abbreviation
    ,        NULL               AS pricelistgrp
    ,        SUM (MAX (pieces))     AS pieces
    ,        SUM (SUM (volume))     AS volume
    ,        SUM (SUM (volume))
          / SUM (MAX (pieces))     AS avgvolume
    FROM        test_data
    GROUP BY   code     -- , abbreviation?
    ORDER BY  code
    ,            pricelistgrp
    ;
    Output:
    CODE          ABB PRICELISTGRP     PIECES  VOLUME  AVGVOLUME
    01            SM          1010         21    4.13      3.186
    01            SM          2010         21    0.00      3.186
    01            SM          3000         21   55.00      3.186
    01            SM          3010         21    7.77      3.186
    02            SMP         1010         30    2.99       .100
    03            SMC         1010         10    4.59       .459
    40            DB          1010         21    5.28       .251
    report total:                          82   79.76       .973
    It's unclear if you want to GROUP BY just code (like I did above) or by both code and abbreviation.
    Given that this data is coming from a view, it might be simpler and/or more efficient to make separate version of the view, or to replicate most of the view in a query.
