Analytic functions in expression

Hi list,
is there a way to use an analytic function in an OWB expression? As with DECODE it doesn't seem possible, because OWB generates something like x := analytic_function(), I think for tracing or error-handling purposes. If I want to have, say, a rank() in the outcome of a join (i.e. the SELECT part), do I have to create a view and integrate that into the mapping, or is there another way around it?
TIA,
Bjoern

Not sure what you are trying to accomplish - can you please elaborate? You should be able to invoke functions that accept scalars as parameters and return scalars in most cases. Another solution is, as you mentioned, a view.
Regards:
Igor
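For reference, a minimal sketch of the view workaround discussed above (the table and column names here are made up for illustration):
CREATE OR REPLACE VIEW ranked_orders_v AS
SELECT o.*,
       RANK() OVER (PARTITION BY o.customer_id ORDER BY o.order_date DESC) AS order_rank
FROM   orders o;
The view can then be imported and used as a source in the mapping in place of the base table, so the analytic function never has to pass OWB's expression validation.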

Similar Messages

  • Help on Using Analytical Functions

    I am getting an error when I use analytical functions in expressions:
    AVG( INGRP1.Test1 ) OVER (PARTITION BY INGRP1.Test2)
    Error is as follows
    Line 1, Col 28:
    PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
    * & = - + ; < / > at in is mod remainder not rem
    <an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
    LIKE4_ LIKEC_ between || multiset member SUBMULTISET_

    Hi,
    the syntax of this part of the SQL statement is okay. Please post the complete statement so the error can be identified.
    Sometimes Oracle reports the wrong position for an error.
    Regards,
    Detlef

  • OLAP Expression Analytical Functions and NA Values

    Hello,
    I am trying to use the SUM and MAX functions over a hierarchy where there are potentially NA values. I believe in OLAP DML, the natural behavior is to skip these values. Can a skip be accomplished with either the SUM or MAX OLAP Expression Syntax functions?
    Cheers!

    Pre-requisites:
    ===============
    Time dimension with level=DAY... I have restricted the data to approximately 1 month, 20100101 to 20100201 (32 days).
    Measure of interest - a (say)
    Time dimension attribute which indicates WEEKDAY... if you have an END_DATE attribute with a DATE datatype, the DAY (MON/TUE/WED/...) can be extracted from it to decide the weekday/weekend status for each DAY.
    Sort time by END_DATE.
    Take care of other dimensions during testing... restrict all other dimensions of cube to single value. Final formula would be independent of other dimensions but this helps development/testing.
    Step 1:
    ======
    "Firm up the required design in olap dml
    "rpr down time
    " w 10 heading 't long' time_long_description
    " w 10 heading 't end date' time_end_date
    " w 20 heading 'Day Type' convert(time_end_date text 'DY')
    " a
    NOTE: version 1 of moving total
    " heading 'moving minus 2 all' movingtotal(a, -2, 0, 1, time status)
    " w 20 heading 'Day Type' convert(time_end_date text 'DY')
    " heading 'a wkday' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' then a else na
    NOTE: version 2 of moving total
    " heading 'moving minus 2 wkday' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN')
    " w 20 heading 'Day Type' convert(time_end_date text 'DY')
    " heading 'a wkday non-na' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na then a else na
    NOTE: version 3 of moving total
    " heading 'moving minus 2 wkday non-na' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na)
    OLAP DML Command:
    rpr down time w 10 heading 't long' time_long_description w 10 heading 't end date' time_end_date w 20 heading 'Day Type' convert(time_end_date text 'DY') a heading 'moving minus 2 all' movingtotal(a, -2, 0, 1, time status) w 20 heading 'Day Type' convert(time_end_date text 'DY') heading 'a wkday' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' then a else na heading 'moving minus 2 wkday' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN') w 20 heading 'Day Type' convert(time_end_date text 'DY') heading 'a wkday non-na' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na then a else na heading 'moving minus 2 wkday non-na' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na)
    Step 2:
    ======
    "Define additional measure to contain the required/desired formula implementing the business requirements (version 3 above)
    " create formula AF1 which points to last column... i.e. OLAP_DML_EXPRESSION
    dfn af1 formula movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na)
    "NOTE: Do this via AWM using calculated member with template type = OLAP_DML_EXPRESSION so that the cube view for cube contains a column for measure AF1
    OLAP DML Command:
    rpr down time w 10 heading 't long' time_long_description w 10 heading 't end date' time_end_date w 20 heading 'Day Type' convert(time_end_date text 'DY') a heading 'a wkday non-na' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na then a else na heading 'moving minus 2 wkday non-na (AF1)' af1
    Step 3:
    =======
    Extend Oracle OLAP with regular SQL functionality like SQL ANALYTICAL functions to fill up the gaps for intermediate week days like DAY_20100104 (TUE), DAY_20100105 (WED) etc.
    Use the SQL analytic function LAST_VALUE() in the query, i.e. in the report or query don't use AF1 directly but LAST_VALUE(af1), as in the pseudo-code below:
    LAST_VALUE(cube_view.af1) over (partition by <product, organization, ... non-time dimensions> order by <DAY_KEY_Col> range between unbounded preceding and current row)
    HTH
    Shankar
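    A more concrete sketch of that gap-filling query, assuming a cube view CUBE_VW with the AF1 measure and hypothetical key column names; IGNORE NULLS is added here on the assumption that the intent is to carry the last non-NA value forward:
    SELECT cv.day_key,
           LAST_VALUE(cv.af1 IGNORE NULLS)
             OVER (PARTITION BY cv.product_key, cv.organization_key
                   ORDER BY cv.day_key
                   RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS af1_filled
    FROM   cube_vw cv;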

  • Analytical function in OWB 10.2.0.4.0

    Dear -
    I am trying to implement an analytical function in OWB but am not sure how to do it. Can anyone help me?
    My SQL query looks like
    select sum (aamtorg),
    sum(sum(aamtorg)) over
    (order by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx
    rows between unbounded preceding and current row) cumulative_amountcctybbl
    from fmbnd_evt
    where cbssuntgbk = 'FM001'
    and caccgbk = '14300000029'
    and caccroo = '9146581'
    and ccrytrngbk = 'AUD'
    and creftrl = '~'
    and cmgmint = '~'
    and cbasent = 'U2725'
    and cbok = '0000'
    and tamtlbl = '~'
    and dacggll between '01aug2011' and '04aug11'
    group by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, ctrdnbmgint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx
    I want to implement the cumulative_amountcctybbl column in the mapping.
    Can anyone help?

    Hi Arun,
    analytical functions don't require a GROUP BY clause, and that's why you can use an expression operator. You also have a normal SUM (aggregate) function in your query, which requires GROUP BY and can only be implemented using an aggregator operator. If I understand your problem correctly, you need to use the aggregate SUM with GROUP BY on your data set first, and then use the analytical SUM on this set (which is already processed with the aggregate SUM). Your query would look something like this:
    select sum_aamtorg,
    sum(sum_aamtorg) over
    (order by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx
    rows between unbounded preceding and current row) cumulative_amountcctybbl
    from (
    select sum (aamtorg) sum_aamtorg,
    cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx
    from fmbnd_evt
    where cbssuntgbk = 'FM001'
    and caccgbk = '14300000029'
    and caccroo = '9146581'
    and ccrytrngbk = 'AUD'
    and creftrl = '~'
    and cmgmint = '~'
    and cbasent = 'U2725'
    and cbok = '0000'
    and tamtlbl = '~'
    and dacggll between '01aug2011' and '04aug11'
    group by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
    cmgmint, ctrdnbmgint, cbasent, cbok, tamtlbl,
    cctygbk, caffgbk, dacggll, dctx)
    Operator sequence would then look like: TABLE -> FILTER -> AGGREGATOR -> EXPRESSION.
    Hope this helps
    Mate
    Edited by: mate on Sep 26, 2011 1:36 PM

  • Analytic Functions with GROUP-BY Clause?

    I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
    Hypothetical table SALES, consisting of DAY_ID, PRODUCT_ID, PURCHASER_ID and PURCHASE_PRICE, lists all the individual sales.
    Hypothetical Business Question: Product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
    INSERT INTO SALES VALUES(1,1,1,1.0);
    INSERT INTO SALES VALUES(1,1,1,2.0);
    INSERT INTO SALES VALUES(1,2,1,3.0);
    INSERT INTO SALES VALUES(1,2,1,4.0);
    INSERT INTO SALES VALUES(2,1,1,5.0);
    INSERT INTO SALES VALUES(2,1,1,6.0);
    INSERT INTO SALES VALUES(2,2,1,7.0);
    INSERT INTO SALES VALUES(2,2,1,8.0);
    COMMIT;
    Day 1: If I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
    Day 2: If I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
    The desired result set is:
    DAY_ID                 MY_MEASURE
    1                        6
    2                       14
    The following SQL gets me tantalizingly close:
    SELECT DAY_ID,
      MAX(PURCHASE_PRICE)
      KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
      OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
      FROM SALES
    ORDER BY DAY_ID
    DAY_ID                 MY_MEASURE
    1                      2
    1                      2
    1                      4
    1                      4
    2                      6
    2                      6
    2                      8
    2                      8
    But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytical functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
    Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
    I am using a reporting tool, so unfortunately using things like inline views are not an option. I need to be able to define "MY_MEASURE" as something the query tool can apply the SUM() function to in its generated SQL.
    (Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
    I humbly solicit your collective wisdom, oh forum.

    Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function.
    SELECT  DAY_ID,
            PRODUCT_ID,
            MAX(PURCHASE_PRICE) MAX_PRICE
      FROM  SALES
      GROUP BY DAY_ID,
               PRODUCT_ID
      ORDER BY DAY_ID,
               PRODUCT_ID
    DAY_ID                 PRODUCT_ID             MAX_PRICE             
    1                      1                      2                     
    1                      2                      4                     
    2                      1                      6                     
    2                      2                      8
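    For reference, outside the reporting-tool constraint mentioned above, the per-day figure can be computed by summing the per-product maxima in an inline view (a sketch using the SALES columns from the question):
    SELECT day_id, SUM(max_price) AS my_measure
    FROM  (SELECT day_id, product_id, MAX(purchase_price) AS max_price
           FROM   sales
           GROUP BY day_id, product_id)
    GROUP BY day_id
    ORDER BY day_id;
    This returns 6 for day 1 and 14 for day 2.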

  • Analytical function fine within TOAD but throwing an error for a mapping.

    Hi,
    When I validate an expression based on SUM .... OVER PARTITION BY in a mapping, I am getting the following error.
    Line 4, Col 23:
    PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
    * & = - + < / > at in is mod remainder not rem then
    <an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
    LIKE4_ LIKEC_ between || multiset member SUBMULTISET_
    However, using TOAD, the expression is working fine.
    A staging table has three columns: col1, col2 and col3. The expression checks for a word in col3 and is as follows.
    (CASE WHEN SUM (CASE WHEN UPPER(INGRP1.col3) LIKE 'some_value%'
    THEN 1
    ELSE 0
    END) OVER (PARTITION BY INGRP1.col1
    ,INGRP1.col2) > 0
    THEN 'Y'
    ELSE 'N'
    END)
    I searched the forum for similar issues, but was not able to resolve my issue.
    Could you please let me know what's wrong here?
    Many thanks,
    Manoj.

    Yes, expression validation in 10g simply does not work for (i.e. does not recognize) analytic functions.
    The validation error can simply be ignored. You should also set the Generation mode to "Set Based only"; otherwise the mapping will fail to deploy under certain circumstances (when non-set-based (PL/SQL) operators are used after the analytic function).

  • Understanding row_number() and using it in an analytic function

    Dear all;
    I have been playing around with row_number and trying to understand how to use it, and yet I still can't figure it out...
    I have the following code below
    create table Employee(
        ID                 VARCHAR2(4 BYTE)         NOT NULL,
        First_Name         VARCHAR2(10 BYTE),
        Last_Name          VARCHAR2(10 BYTE),
        Start_Date         DATE,
        End_Date           DATE,
        Salary             NUMBER(8,2),
        City               VARCHAR2(10 BYTE),
        Description        VARCHAR2(15 BYTE)
    );
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                 values ('01','Jason',    'Martin',  to_date('19960725','YYYYMMDD'), to_date('20060725','YYYYMMDD'), 1234.56, 'Toronto',  'Programmer');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('02','Alison',   'Mathews', to_date('19760321','YYYYMMDD'), to_date('19860221','YYYYMMDD'), 6661.78, 'Vancouver','Tester');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                 values('03','James',    'Smith',   to_date('19781212','YYYYMMDD'), to_date('19900315','YYYYMMDD'), 6544.78, 'Vancouver','Tester');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('04','Celia',    'Rice',    to_date('19821024','YYYYMMDD'), to_date('19990421','YYYYMMDD'), 2344.78, 'Vancouver','Manager');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('05','Robert',   'Black',   to_date('19840115','YYYYMMDD'), to_date('19980808','YYYYMMDD'), 2334.78, 'Vancouver','Tester');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary, City,        Description)
                  values('06','Linda',    'Green',   to_date('19870730','YYYYMMDD'), to_date('19960104','YYYYMMDD'), 4322.78,'New York',  'Tester');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary, City,        Description)
                  values('07','David',    'Larry',   to_date('19901231','YYYYMMDD'), to_date('19980212','YYYYMMDD'), 7897.78,'New York',  'Manager');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary, City,        Description)
                   values('08','James',    'Cat',     to_date('19960917','YYYYMMDD'), to_date('20020415','YYYYMMDD'), 1232.78,'Vancouver', 'Tester');
    I did a simple select statement
    select * from Employee e
    and it returns this below
    ID   FIRST_NAME LAST_NAME  START_DAT END_DATE      SALARY CITY       DESCRIPTION
    01   Jason      Martin     25-JUL-96 25-JUL-06    1234.56 Toronto    Programmer
    02   Alison     Mathews    21-MAR-76 21-FEB-86    6661.78 Vancouver  Tester
    03   James      Smith      12-DEC-78 15-MAR-90    6544.78 Vancouver  Tester
    04   Celia      Rice       24-OCT-82 21-APR-99    2344.78 Vancouver  Manager
    05   Robert     Black      15-JAN-84 08-AUG-98    2334.78 Vancouver  Tester
    06   Linda      Green      30-JUL-87 04-JAN-96    4322.78 New York   Tester
    07   David      Larry      31-DEC-90 12-FEB-98    7897.78 New York   Manager
    08   James      Cat        17-SEP-96 15-APR-02    1232.78 Vancouver  Tester
    I wrote another select statement with row_number, see below:
    SELECT first_name, last_name, salary, city, description, id,
       ROW_NUMBER() OVER(PARTITION BY description ORDER BY city desc) "Test#"
       FROM employee
       and I get this result
    First_name  last_name   Salary         City             Description         ID         Test#
    Celina          Rice         2344.78      Vancouver    Manager             04          1
    David          Larry         7897.78      New York    Manager             07          2
    Jason          Martin       1234.56      Toronto      Programmer        01          1
    Alison         Mathews    6661.78      Vancouver   Tester               02          1 
    James         Cat           1232.78      Vancouver    Tester              08          2
    Robert        Black         2334.78     Vancouver     Tester              05          3
    James        Smith         6544.78     Vancouver     Tester              03          4
    Linda         Green        4322.78      New York      Tester             06           5
    I understand the PARTITION BY, which basically means that within each group a unique number will be assigned to each row; so in this case, since Tester is one group, Manager another, and Programmer another, each group gets its own sequence of numbers. What is throwing me off is the ORDER BY and how the numbering is assigned. Why is
    1 assigned to Alison Mathews in the Tester group, 2 assigned to James Cat, and 3 assigned to Robert Black?
    I apologize if this is a stupid question; I have tried reading about it online and looking at the Oracle documentation, but I still don't fully understand why.

    user13328581 wrote:
    understanding row_number() and using it in an analytic function
    ROW_NUMBER() IS an analytic function. Are you trying to use the results of ROW_NUMBER in another analytic function? If so, you need a sub-query. Analytic functions can't be nested within other analytic functions.
    ...I have the following code below
    ... I did a simple select statement
    Thanks for posting all that! It's really helpful.
    ... and I get this result
    First_name  last_name   Salary         City             Description         ID         Test#
    Celina          Rice         2344.78      Vancouver    Manager             04          1
    David          Larry         7897.78      New York    Manager             07          2
    Jason          Martin       1234.56      Toronto      Programmer        01          1
    Alison         Mathews    6661.78      Vancouver   Tester               02          1 
    James         Cat           1232.78      Vancouver    Tester              08          2
    Robert        Black         2334.78     Vancouver     Tester              05          3
    James        Smith         6544.78     Vancouver     Tester              03          4
    Linda         Green        4322.78      New York      Tester             06           5
    ... What is throwing me off is the order by and how this numbering are assigned. why is
    1 assigned to Alison Mathews for the tester group and 2 assigned to James Cat and 3 assigned Robert Black
    That's determined by the analytic ORDER BY clause. You said "ORDER BY city desc", so a row where city='Vancouver' will get a lower number than one where city='New York', since 'Vancouver' comes after 'New York' in alphabetical order.
    If you have several rows that all have the same city, then you can be sure that ROW_NUMBER will assign them consecutive numbers, but it's arbitrary which one of them will be lowest and which highest. For example, you have 5 'Tester's: 4 from Vancouver and 1 from New York. There's no particular reason why the one with first_name='Alison' got assigned #1 and 'James' got #2. If you run the same query again, without changing the table at all, then 'Robert' might be #1. It's certain that the 4 Vancouver rows will be assigned numbers 1 through 4, but there's no way of telling which of those 4 rows will get which of those 4 numbers.
    Similar to a query's ORDER BY clause, the analytic ORDER BY clause can have two or more expressions. The N-th one will only be considered if there was a tie for all (N-1) earlier ones. For example, "ORDER BY city DESC, last_name, first_name" would mean 'Vancouver' comes before 'New York', but, if multiple rows all have city='Vancouver', last_name would determine the order: 'Black' would get a lower number than 'Cat'. If you had multiple rows with city='Vancouver' and last_name='Black', then the order would be determined by first_name.
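    A sketch of that deterministic ordering against the Employee table above, adding last_name and first_name as tie-breakers:
    SELECT first_name, last_name, salary, city, description, id,
           ROW_NUMBER() OVER (PARTITION BY description
                              ORDER BY city DESC, last_name, first_name) AS "Test#"
    FROM   employee;
    With the extra tie-breaker columns, re-running the query assigns the same number to the same row every time.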

  • Analytic function problem

    Hi,
    I have a problem using an analytic function: when I execute this query
    SELECT TSIUPSITE, TSIUPCEAN , TSIUPDATE, sum(TSIUPCA) TSIUPCA, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
    sum(TSIUPQTE) TSIUPQTE,sum(TSIUPQTEP) TSIUPQTEP, TSIUPMDIU,TSIUPMDar,
    sum(TSIUPCRIU) TSIUPCRIU,sum(TSIUPCRAR) TSIUPCRAR, trunc(TSIUPDCRE) TSIUPDCRE ,trunc(TSIUPDMAJ) TSIUPDMAJ ,
    TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
    TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, sum(TSIUPMHT) TSIUPMHT, 0 vtanfisc,
    TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV ,count(*) over (partition by TSIUPSITE,TSIUPCEAN,TSIUP_TRT ) CONTA_ARTICOLO
    FROM TST_FLIISR_VTEREMART
    WHERE 1=1 --TSIUP_TRT = 1
    AND TSIUPDATE=to_date('27082012','ddmmyyyy')
    and TSIUP_NTRX =172
    AND TSIUPSITE = 10025
    AND TSIUPCEAN = '8012452018825'
    GROUP BY TSIUPSITE, TSIUPCEAN , TSIUPDATE, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
    TSIUPMDIU,TSIUPMDar, trunc(TSIUPDCRE),trunc(TSIUPDMAJ),TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
    TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, 0,
    TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV
    ORDER BY TSIUPSITE,TSIUPDATE ;
    I get the error ORA-00979: not a GROUP BY expression, related to the TSIUP_TRT field. In fact, if I execute this one
    SELECT TSIUPSITE, TSIUPCEAN , TSIUPDATE, sum(TSIUPCA) TSIUPCA, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
    sum(TSIUPQTE) TSIUPQTE,sum(TSIUPQTEP) TSIUPQTEP, TSIUPMDIU,TSIUPMDar,
    sum(TSIUPCRIU) TSIUPCRIU,sum(TSIUPCRAR) TSIUPCRAR, trunc(TSIUPDCRE) TSIUPDCRE ,trunc(TSIUPDMAJ) TSIUPDMAJ ,
    TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
    TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, sum(TSIUPMHT) TSIUPMHT, 0 vtanfisc,
    TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV ,count(*) over (partition by TSIUPSITE,TSIUPCEAN ) CONTA_ARTICOLO
    FROM TST_FLIISR_VTEREMART
    WHERE 1=1 --TSIUP_TRT = 1
    AND TSIUPDATE=to_date('27082012','ddmmyyyy')
    and TSIUP_NTRX =172
    AND TSIUPSITE = 10025
    AND TSIUPCEAN = '8012452018825'
    GROUP BY TSIUPSITE, TSIUPCEAN , TSIUPDATE, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
    TSIUPMDIU,TSIUPMDar, trunc(TSIUPDCRE),trunc(TSIUPDMAJ),TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
    TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, 0,
    TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV
    ORDER BY TSIUPSITE,TSIUPDATE ;
    I have no problem. The difference between TSIUPCEAN (or TSIUPSITE) and TSIUP_TRT is that TSIUP_TRT is not in the GROUP BY clause, but, to be honest, I don't know why I have this problem when using an analytic function.
    Thanks for help

    Hi,
    I think you are not using the analytic function properly.
    Analytic functions execute for each row, whereas GROUP BY operates on groups of rows.
    See the examples below for your reference.
    Example 1:
    -- The query below displays the number of employees for each department. Since we have used an analytic function, each row shows the employee count for its department id.
    SQL> SELECT e.department_id,count(*) OVER (PARTITION BY e.department_id) cnt_analytic
      2  FROM employees e
      3  WHERE e.department_id IN (10,20,30);
    DEPARTMENT_ID CNT_ANALYTIC
               10            1
               20            2
               20            2
               30            6
               30            6
               30            6
               30            6
               30            6
               30            6
    9 rows selected.
    Example 2:
    -- Since I have used a GROUP BY clause, I'm getting only a single row for each department.
    SQL> SELECT e.department_id, count(*) cnt_group
      2  FROM employees e
      3  WHERE e.department_id IN (10,20,30)
      4  GROUP BY e.department_id;
    DEPARTMENT_ID  CNT_GROUP
               10          1
               20          2
               30          6
    Finally, what I'm trying to explain is: if you use an analytic function together with a GROUP BY clause, the query will not give a meaningful result set.
    See below
    SQL> SELECT e.department_id,count(*) OVER (PARTITION BY e.department_id) cnt_analytic, count(*) cnt_grp
      2  FROM employees e
      3  WHERE e.department_id IN (10,20,30)
      4  GROUP BY e.department_id;
    DEPARTMENT_ID CNT_ANALYTIC    CNT_GRP
               10            1          1
               20            1          2
               30            1          6
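    Applied back to the original ORA-00979 question, one sketch (abbreviated to a few columns) is to do the GROUP BY in an inline view that includes TSIUP_TRT, and then apply the analytic COUNT over the already-grouped rows:
    SELECT g.*,
           COUNT(*) OVER (PARTITION BY g.TSIUPSITE, g.TSIUPCEAN, g.TSIUP_TRT) AS conta_articolo
    FROM  (SELECT TSIUPSITE, TSIUPCEAN, TSIUP_TRT, TSIUPDATE, SUM(TSIUPCA) AS TSIUPCA
           FROM   TST_FLIISR_VTEREMART
           WHERE  TSIUPDATE = to_date('27082012','ddmmyyyy')
           GROUP BY TSIUPSITE, TSIUPCEAN, TSIUP_TRT, TSIUPDATE) g;
    Note that adding TSIUP_TRT to the inner GROUP BY changes the grain of the aggregation, so whether this matches the intended result depends on the data; the alternative is simply to drop TSIUP_TRT from the PARTITION BY, as in the second query above.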

  • Sum Analytic Function

    Hi,
    I have a query in SQL that generates percentage totals. I am having trouble replicating this code in the BMM layer of the repository. I have created a new logical column; the SQL query is below:
    SELECT id, seq, asset_cost ,
    CASE
    WHEN asset_cost > 0
    THEN ROUND(RATIO_TO_REPORT (
    CASE
    WHEN asset_cost > 0
    THEN SUM (asset_cost)
    END) OVER (partition BY id)*100)
    END total
    FROM test
    GROUP BY id, seq, asset_cost
    Can anyone help with replicating the above expression in the logical layer column?
    The thread "how can i use the Ratio_to_report function in obiee" shows a workaround.
    Are there any alternatives to 'RATIO_TO_REPORT' in OBIEE functions?
    Thanks
    Edited by: sliderrules on 16-May-2012 04:23

    Hi,
    I have just been through the Oracle documentation and understand that RATIO_TO_REPORT computes the ratio of a value to the sum of a set of values. For your requirement, what you could do is:
    1. Bring the measure 'asset_cost' into the BMM with the aggregation rule set to SUM. (I think you could include the asset_cost > 0 condition here itself.)
    2. Create another measure with the 'Derived from another logical column as source' option chosen and the function as
    EVALUATE('RATIO_TO_REPORT(%1) OVER (PARTITION BY %2)' AS DOUBLE, asset_cost,id)
    The above function does the following steps:
    EVALUATE will send the analytic function to the database.
    SUM(asset_cost) would be the first parameter
    id would be the second parameter.
    The syntax here may not be exact, but I hope you can work it out while implementing.
    Hope this helps.
    Thank you,
    Dhar

  • Analytic function and aggregate function

    What are analytic functions and aggregate functions? What is the difference between them?

    hi,
    Analytic Functions :----------
    Analytic functions compute an aggregate value based on a group of rows. They differ from aggregate functions in that they return multiple rows for each group. The group of rows is called a window and is defined by the analytic_clause. For each row, a sliding window of rows is defined. The window determines the range of rows used to perform the calculations for the current row. Window sizes can be based on either a physical number of rows or a logical interval such as time.
    Analytic functions are the last set of operations performed in a query except for the final ORDER BY clause. All joins and all WHERE, GROUP BY, and HAVING clauses are completed before the analytic functions are processed. Therefore, analytic functions can appear only in the select list or ORDER BY clause.
    Analytic functions are commonly used to compute cumulative, moving, centered, and reporting aggregates.
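    For illustration, a small sketch of the sliding-window idea, assuming a hypothetical SALES(sale_date, amount) table where sale_date is a DATE; the first computed column uses a physical window of rows, the second a logical interval of time:
    SELECT sale_date, amount,
           AVG(amount) OVER (ORDER BY sale_date
                             ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS avg_last_3_rows,
           SUM(amount) OVER (ORDER BY sale_date
                             RANGE BETWEEN INTERVAL '7' DAY PRECEDING AND CURRENT ROW) AS sum_last_7_days
    FROM   sales;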
    Aggregate Functions :----------
    Aggregate functions return a single result row based on groups of rows, rather than on single rows. Aggregate functions can appear in select lists and in ORDER BY and HAVING clauses. They are commonly used with the GROUP BY clause in a SELECT statement, where Oracle Database divides the rows of a queried table or view into groups. In a query containing a GROUP BY clause, the elements of the select list can be aggregate functions, GROUP BY expressions, constants, or expressions involving one of these. Oracle applies the aggregate functions to each group of rows and returns a single result row for each group.
    If you omit the GROUP BY clause, then Oracle applies aggregate functions in the select list to all the rows in the queried table or view. You use aggregate functions in the HAVING clause to eliminate groups from the output based on the results of the aggregate functions, rather than on the values of the individual rows of the queried table or view.
    Let me know if you have any problem understanding this.
    thanks.
    Edited by: varun4dba on Jan 27, 2011 3:32 PM

  • Need analytic function suggestion

    Hi,
    I need advice related to an analytic (I think) function in Oracle 9.
    create table testx ( id number, arr number, fore number, actual number, result_x number, is_first number);
    insert into testx values ( 1, null, null, 12, null , 0 );
    insert into testx values ( 2, null, null, 14 , null, 0 );
    insert into testx values ( 3, 4, 5, 16, 16, 1 );
    insert into testx values ( 4, 5, 5, 18, 16, 0 );
    insert into testx values ( 5, 5, 5, 20, 16, 0 );
    insert into testx values ( 6, 5, 5, 22, 16, 0 );
    insert into testx values ( 7, 5, 5, 24, 16, 0 );
    insert into testx values ( 8, 5, 5, 25, 16, 0 );
    insert into testx values ( 9, 5, 8, 25, 13, 0 );
    insert into testx values ( 10, 5, 8, 21, 10, 0 );
    insert into testx values ( 11, 5, 8, 19, 7, 0 );
    insert into testx values ( 12, 5, 8, 18, 4, 0 );
    I need a ONE-level query (no subqueries) which will calculate the value stored in the RESULT_X column.
    The rule for the calculation is:
    1. when the arr and fore columns are available for the first time, then result_x = actual (row with id = 3)
    2. otherwise result_x = (previous value of result_x + arr - fore)
    3. the order of records is stored in the id column
    I have a problem with calculating the previous value of result_x, since it must be available for the next row's calculation and depends on other columns' values.
    Thanks for help,
    Regards,
    Piotr

    Hi, Piotr,
    This produces the results you requested:
    SELECT       testx.*
    ,       SUM ( CASE
                  WHEN  is_first = 1
                  THEN  result_x
                  ELSE  arr - fore
                 END
               ) OVER (ORDER BY  id)     AS computed_result_x
    FROM      testx
    ORDER BY  id
    ;
    This relies on the fact that there is only one row where is_first=1, and that all the earlier rows have NULL as arr or fore.
    If that's not the case in your real data, then I don't think it's possible in SQL without sub-queries. Why can't you use a sub-query?
    The problem is that rows up to the one with is_first=1 have to be treated differently from rows after that point, so the CASE expression might need to know if a given row is before or after the one with is_first=1. If you need an analytic function to determine that, then you need a sub-query, because analytic functions cannot be nested.
    You could use MODEL or a recursive WITH clause to get the results you want, but they require sub-queries.

  • Want to use analytical function as a Virtual column

    I am wondering if I can use an analytic function as a virtual column to a table?
    The table contains a field named BUSINESS_RUN_DATE, which becomes the EXPIRY_DATE of the previous record. So we want to add this value right into the table without resorting to a view.
    This is what I tried in order to add the column to the table:
    alter table stg_xref_test_virtual
    ADD (expiry_date2 date generated always AS (max (business_run_date) over
    (PARTITION BY ntrl_src_sys_key order by business_run_date
    rows between 1 preceding and 1 following))) ;
    It gives me an error that GROUP BY is not allowed.
    Can someone help out?
    Thanks,
    Ian

    From the documentation.
    [Column Expressions|http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/expressions005.htm#BABIGHHI]
    A column expression, which is designated as column_expr in subsequent syntax diagrams, is a limited form of expr. A column expression can be a simple expression, compound expression, function expression, or expression list, but it can contain only the following forms of expression:
    * Columns of the subject table — the table being created, altered, or indexed
    * Constants (strings or numbers)
    * Deterministic functions — either SQL built-in functions or user-defined functions
    No other expression forms described in this chapter are valid. In addition, compound expressions using the PRIOR keyword are not supported, nor are aggregate functions.
    You can use a column expression for these purposes:
    * To create a function-based index.
    * To explicitly or implicitly define a virtual column. When you define a virtual column, the defining column_expr must refer only to columns of the subject table that have already been defined, in the current statement or in a prior statement.
    The combined components of a column expression must be deterministic. That is, the same set of input values must return the same set of output values.
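    Since an analytic function is not a deterministic column expression, the usual workaround is a view rather than a virtual column; a minimal sketch using the expression from the question (whether this MAX window or something like LEAD(business_run_date) is the right rule depends on the actual requirement):
    CREATE OR REPLACE VIEW stg_xref_test_virtual_v AS
    SELECT t.*,
           MAX(business_run_date) OVER (PARTITION BY ntrl_src_sys_key
                                        ORDER BY business_run_date
                                        ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS expiry_date2
    FROM   stg_xref_test_virtual t;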

  • Need some kind of Analytical Function

    Hi Oracle experts
    I need a little help from you experts. I have a PARTY table as listed below
    The existing data
    Party key     ID_INTERNAL     EID          BID
    1          11111          123
    1          11111          321
    1          22222          321          899
    1          66666          ------          888
    New records come in, and I have to assign a party key to each record based on which attribute matches.
    The new records are:
    ID_INTERNAL     EID          BID
    22222          555
    44444          555          
    89898          ------          888
    If I match on ID_INTERNAL, I may not be able to match ID_INTERNAL 44444 and 89898, and if I match on EID or BID it is the same situation.
    Is there any analytic function which helps me assign a party key to all the records? All the above records should be assigned PARTY KEY 1 only.
    Please help
    Thanks
    Rajesh

    Justin
    My main goal is to assign a party key from the existing set of records to the new records which are being selected/inserted. I have to write my algorithm in such a way that the new values match their value in existing records.
    Example
    My first new record has a value of 11111 under ID_INTERNAL and, in the same record, a value of 555 under the EID attribute. So, based on the matching algorithm for ID_INTERNAL, it will be assigned the existing party key 1.
    Similarly, the second new record has a value of 87777 under ID_INTERNAL and a value of 555 under EID, and this ID_INTERNAL does not exist in the target table; but the value of 555 is available under the EID attribute, so I have to write the algorithm based on EID.
    Now the dilemma is that my target table is as follows:
    Party key PARTYID PARTYNAME
    1 11111 ITSID
    1 123 EID
    1 321 EID
    Now when new records come, I have to write the match algorithm for ID_INTERNAL to PARTYID for Partyname='ITSID'.
    Once matched, this record (ID_INTERNAL=11111 and EID=555) is assigned party key=1. So after the first record the output table looks like:
    Party key PARTYID PARTYNAME
    1 11111 ITSID
    1 123 EID
    1 321 EID
    1 555 EID
    The same applies to the second new record, where the values are ID_INTERNAL=87777 and EID=555: I have to write the match algorithm based on EID because the EID value of 555 already exists in the target table with a party key.
    So after the second record the target table will look like:
    Party key PARTYID PARTYNAME
    1 11111 ITSID
    1 123 EID
    1 321 EID
    1 555 EID
    1 87777 ITSID
    So this is how I have to solve this matching algorithm.
    Please help me; if you need any information, I will be glad to provide it.
    Thanks
    Regards
    Rajesh
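    This is more of a matching/lookup problem than a job for an analytic function; a hedged sketch of one approach, assuming a hypothetical staging table NEW_PARTY(id_internal, eid, bid) and the PARTY(party_key, partyid, partyname) layout shown above:
    SELECT n.id_internal, n.eid, n.bid,
           (SELECT MIN(p.party_key)
            FROM   party p
            WHERE  p.partyid IN (n.id_internal, n.eid, n.bid)) AS matched_party_key
    FROM   new_party n;
    A MERGE against PARTY could then insert the not-yet-known attribute values under the matched party key.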

  • Reports 6i and analytical function

    hi
    I have this query which works fine in TOAD
    SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
                    rvt.transaction_date srv_date, inv.segment1 item_no,
                    rvt.item_desc item_description, hrov.NAME,               
                    (   SUBSTR (v.standard_industry_class, 1, 1)
                     || '-'
                     || po_headers.segment1
                     || '-'
                     || TO_CHAR (po_headers.creation_date, 'RRRR')
                    ) po_no,
                    po_headers.creation_date_disp po_date,   
                          (  (rvt.currency_conversion_rate * po_lines.unit_price)
                     * rvt.transact_qty
                    )aMOUNT  ,
    ----Analytic function used here                      
            SUM(          (  (rvt.currency_conversion_rate * po_lines.unit_price)
                     * rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,                                                                                 
                    (SELECT SUM (mot.on_hand)
                       FROM mtl_onhand_total_mwb_v mot
                      WHERE inv.inventory_item_id = mot.inventory_item_id
                        --  AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
                        AND loc.inventory_location_id = mot.locator_id
                        AND loc.organization_id = mot.organization_id
                        AND rvt.locator_id = mot.locator_id) onhand
               FROM rcv_vrc_txs_v rvt,
                    mtl_system_items_b inv,
                    mtl_item_locations loc,
                    hr_organization_units_v hrov,
                    po_headers_v po_headers,
                    ap_vendors_v v,
                    po_lines_v po_lines
              WHERE inv.inventory_item_id(+) = rvt.item_id
                AND po_headers.vendor_id = v.vendor_id
                AND rvt.po_line_id = po_lines.po_line_id
                AND rvt.po_header_id = po_lines.po_header_id
                AND rvt.po_header_id = po_headers.po_header_id
                AND rvt.supplier_id = v.vendor_id
                AND inv.organization_id = hrov.organization_id
                AND rvt.transaction_type = 'DELIVER'
                AND rvt.inspection_status_code <> 'REJECTED'
                AND rvt.organization_id = inv.organization_id(+)
                AND to_char(to_date(rvt.transaction_date, 'DD/MM/YYYY'), 'DD-MON-YYYY') BETWEEN (:p_from_date)
                                                     AND NVL (:p_to_date,
                                                              :p_from_date)
                AND rvt.locator_id = loc.physical_location_id(+)
                AND transaction_id NOT IN (
                       SELECT parent_transaction_id
                         FROM rcv_vrc_txs_v rvtd
                        WHERE rvt.item_id = rvtd.item_id
                          AND rvtd.transaction_type IN
                                      ('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
                                      GROUP BY rvt.receipt_num , rvt.supplier ,
                    rvt.transaction_date , inv.segment1 ,
                    rvt.item_desc , hrov.NAME,v.standard_industry_clasS,po_headers.segment1,po_headers.creation_datE,
                    po_headers.creation_date_disp,inv.inventory_item_iD,loc.inventory_location_id,loc.organization_id,
                    rvt.locator_iD,rvt.currency_conversion_rate,po_lines.unit_price, rvt.transact_qty
    but it gives a blank page in Reports 6i.
    Could it be that Reports 6i does not support analytic functions? Kindly guide me to another alternative.
    thanking in advance
    Edited by: makdutakdu on Mar 25, 2012 2:22 PM

    hi
    will the view be like
    create view S_Amount as SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
                    rvt.transaction_date srv_date, inv.segment1 item_no,
                    rvt.item_desc item_description, hrov.NAME,               
                    (   SUBSTR (v.standard_industry_class, 1, 1)
                     || '-'
                     || po_headers.segment1
                     || '-'
                     || TO_CHAR (po_headers.creation_date, 'RRRR')
                    ) po_no,
                    po_headers.creation_date_disp po_date,   
                          (  (rvt.currency_conversion_rate * po_lines.unit_price)
                     * rvt.transact_qty
                    )aMOUNT  ,
    ----Analytic function used here                      
            SUM(          (  (rvt.currency_conversion_rate * po_lines.unit_price)
                     * rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,                                                                                 
                    (SELECT SUM (mot.on_hand)
                       FROM mtl_onhand_total_mwb_v mot
                      WHERE inv.inventory_item_id = mot.inventory_item_id
                        --  AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
                        AND loc.inventory_location_id = mot.locator_id
                        AND loc.organization_id = mot.organization_id
                        AND rvt.locator_id = mot.locator_id) onhand
               FROM rcv_vrc_txs_v rvt,
                    mtl_system_items_b inv,
                    mtl_item_locations loc,
                    hr_organization_units_v hrov,
                    po_headers_v po_headers,
                    ap_vendors_v v,
                    po_lines_v po_lines
              WHERE inv.inventory_item_id(+) = rvt.item_id
                AND po_headers.vendor_id = v.vendor_id
                AND rvt.po_line_id = po_lines.po_line_id
                AND rvt.po_header_id = po_lines.po_header_id
                AND rvt.po_header_id = po_headers.po_header_id
                AND rvt.supplier_id = v.vendor_id
                AND inv.organization_id = hrov.organization_id
                AND rvt.transaction_type = 'DELIVER'
                AND rvt.inspection_status_code <> 'REJECTED'
                AND rvt.organization_id = inv.organization_id(+)
                           AND rvt.locator_id = loc.physical_location_id(+)
                AND transaction_id NOT IN (
                       SELECT parent_transaction_id
                         FROM rcv_vrc_txs_v rvtd
                        WHERE rvt.item_id = rvtd.item_id
                          AND rvtd.transaction_type IN
                                      ('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
                                      GROUP BY rvt.receipt_num , rvt.supplier ,
                    rvt.transaction_date , inv.segment1 ,
                    rvt.item_desc , hrov.NAME,v.standard_industry_clasS,po_headers.segment1,po_headers.creation_datE,
                    po_headers.creation_date_disp,inv.inventory_item_iD,loc.inventory_location_id,loc.organization_id,
                    rvt.locator_iD,rvt.currency_conversion_rate,po_lines.unit_price, rvt.transact_qty
    is this correct? I mean, I have not included the bind parameters in the view. Moreover, should this view be joined with all the columns in the FROM clause of the original query?
    kindly guide
    thanking in advance
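    On the bind-parameter question: one common pattern is to leave the date filter out of the view and apply it in the report query, e.g. (a sketch, reusing the column aliases from the view above):
    SELECT *
    FROM   s_amount
    WHERE  to_char(to_date(srv_date, 'DD/MM/YYYY'), 'DD-MON-YYYY')
             BETWEEN :p_from_date AND NVL(:p_to_date, :p_from_date);
    Note, though, that filtering outside the view means SUM_AMOUNT is computed over all dates before the filter is applied, which may differ from the original query's result; if the analytic total must respect the date range, the filter has to stay inside the view or be handled some other way.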

  • Aggregation of analytic functions not allowed

    Hi all, I have a calculated field called Calculation1 with the following calculation:
    AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report #7 COMPL".Resource Name )
    The result of this calculation is correct, but is repeated for all the rows I have in the dataset.
    Group Name      Resourse name    Calculation1
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    5112 rows
    I tried to create another calculation in order to have only ONE value for the couple (Group Name, Resource Name) as AVG(Calculation1), but I get the error: Aggregation of analytic functions not allowed.
    I also saw inside the "Edit worksheet" panel that Calculation1 *is not represented* with the "Sigma" symbol (as, for example, a simple AVG(field_1) is), and inside the SQL code I don't have GROUP BY Group Name, Resource Name...
    I'd like to see ONLY one row as:
    Group Name      Resourse name    Calculation1
    SH Group            Mr. A            10
    ...meaning that I grouped by Group Name, Resource Name.
    Anyone knows how can I achieve this result or any workarounds ??
    Thanks in advance
    Alex

    Hi Rod, unfortunately I can't use a simple AVG(Resolution_time) because my dataset is quite strange... let me explain better.
    I start from this situation:
    (screenshot: http://www.freeimagehosting.net/uploads/6c7bba26bd.jpg)
    There are 3 calculated fields:
    RANK is the first calculated field:
    ROW_NUMBER() OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name,"Tickets Report Assigned To & Created By COMPL".Incident Id  ORDER BY  "Tickets Report Assigned To & Created By COMPL".Select Flag )
    RT Calc is the 2nd calculation:
    CASE WHEN RANK = 1 THEN Resolution_time END
    and Calculation2 is the 3rd calculation:
    AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY  RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name )
    As you can see, in the initial dataset I have duplicated incident ids, and a simple AVG(Resolution Time) counts all the duplicates as well.
    I used the rank (based on the "flag" field) to take, for each ticket, ONLY one "resolution time" value (in my case I need the resolution time when rank = 1).
    So, with Calculation2 I calculated, for each couple (Group Name, Resource Name), the right AVG(Resolution time), but as you can see... this result is duplicated for each incident_id...
    What I need instead is to see *once* for each couple 'Group Name, Resource Name' the AVG(Resolution time).
    In other words I need to calculate the AVG(Resolution time) considering only the values written inside the RT Calc field (where they are NOT NULL, so the total number of tickets is not 14 but 9).
    I tried to aggregate again using AVG(Calculation2)...but I had the error "Aggregation of analytic functions not allowed"...
    Do you know a way to fix this problem ?
    Thanks
    Alex
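    One workaround worth trying, sketched here in plain SQL with made-up table/column names, is to compute the average from the RT Calc logic with an ordinary aggregate over an inline view instead of aggregating an analytic calculation (in Discoverer this would typically live in a database view or custom folder rather than a worksheet calculation):
    SELECT group_name, resource_name,
           AVG(CASE WHEN rn = 1 THEN resolution_time END) AS avg_resolution_time
    FROM  (SELECT group_name, resource_name, resolution_time,
                  ROW_NUMBER() OVER (PARTITION BY group_name, resource_name, incident_id
                                     ORDER BY select_flag) AS rn
           FROM   tickets_report)
    GROUP BY group_name, resource_name;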
