OLAP Expression Analytical Functions and NA Values

Hello,
I am trying to use the SUM and MAX functions over a hierarchy where there are potentially NA values. I believe that in OLAP DML the natural behavior is to skip these values. Can a skip be accomplished with either the SUM or MAX OLAP Expression Syntax functions?
Cheers!

Pre-requisites:
===============
Time dimension with level = DAY. I have restricted the data to approximately one month: 20100101 to 20100201 (32 days).
Measure of interest: a (say).
Time dimension attribute that indicates the weekday: if you have an END_DATE attribute with a DATE datatype, the day name (MON/TUE/WED/...) can be extracted from it to decide the weekday/weekend status of each DAY.
Sort time by END_DATE.
Take care of the other dimensions during testing: restrict all other dimensions of the cube to a single value. The final formula is independent of the other dimensions, but this makes development and testing easier.
Step 1:
======
"Firm up the required design in olap dml
"rpr down time
" w 10 heading 't long' time_long_description
" w 10 heading 't end date' time_end_date
" w 20 heading 'Day Type' convert(time_end_date text 'DY')
" a
NOTE: version 1 of moving total
" heading 'moving minus 2 all' movingtotal(a, -2, 0, 1, time status)
" w 20 heading 'Day Type' convert(time_end_date text 'DY')
" heading 'a wkday' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' then a else na
NOTE: version 2 of moving total
" heading 'moving minus 2 wkday' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN')
" w 20 heading 'Day Type' convert(time_end_date text 'DY')
" heading 'a wkday non-na' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na then a else na
NOTE: version 3 of moving total
" heading 'moving minus 2 wkday non-na' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na)
OLAP DML Command:
rpr down time w 10 heading 't long' time_long_description w 10 heading 't end date' time_end_date w 20 heading 'Day Type' convert(time_end_date text 'DY') a heading 'moving minus 2 all' movingtotal(a, -2, 0, 1, time status) w 20 heading 'Day Type' convert(time_end_date text 'DY') heading 'a wkday' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' then a else na heading 'moving minus 2 wkday' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN') w 20 heading 'Day Type' convert(time_end_date text 'DY') heading 'a wkday non-na' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na then a else na heading 'moving minus 2 wkday non-na' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na)
Step 2:
======
"Define additional measure to contain the required/desired formula implementing the business requirements (version 3 above)
" create formula AF1 which points to last column... i.e. OLAP_DML_EXPRESSION
dfn af1 formula movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na)
"NOTE: Do this via AWM using calculated member with template type = OLAP_DML_EXPRESSION so that the cube view for cube contains a column for measure AF1
OLAP DML Command:
rpr down time w 10 heading 't long' time_long_description w 10 heading 't end date' time_end_date w 20 heading 'Day Type' convert(time_end_date text 'DY') a heading 'a wkday non-na' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na then a else na heading 'moving minus 2 wkday non-na (AF1)' af1
Step 3:
=======
Extend Oracle OLAP with regular SQL functionality, such as SQL analytic functions, to fill the gaps for intermediate weekdays like DAY_20100104 (MON), DAY_20100105 (TUE), etc.
Use the SQL analytic function LAST_VALUE() in the query, i.e. in the report or query: don't use AF1 directly but use LAST_VALUE(af1), as in the pseudo-code below:
LAST_VALUE(cube_view.af1 IGNORE NULLS) over (partition by <product, organization, ... non-time dimensions> order by <DAY_KEY_Col> rows between unbounded preceding and current row)
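A concrete sketch of that query against a hypothetical cube view (the view name UNITS_CUBE_VIEW and the PRODUCT/ORGANIZATION/DAY_KEY columns are placeholders; IGNORE NULLS is what carries the last weekday value forward across the NA gaps):
-- sketch only: the view and column names below are assumptions, not the real cube view
SELECT product,
       organization,
       day_key,
       af1,
       LAST_VALUE(af1 IGNORE NULLS)
         OVER (PARTITION BY product, organization
               ORDER BY day_key
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS af1_filled
FROM   units_cube_view;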
HTH
Shankar

Similar Messages

  • Analytic function and aggregate function

    What are analytic functions and aggregate functions? What is the difference between them?

    hi,
    Analytic Functions :----------
    Analytic functions compute an aggregate value based on a group of rows. They differ from aggregate functions in that they return multiple rows for each group. The group of rows is called a window and is defined by the analytic_clause. For each row, a sliding window of rows is defined. The window determines the range of rows used to perform the calculations for the current row. Window sizes can be based on either a physical number of rows or a logical interval such as time.
    Analytic functions are the last set of operations performed in a query except for the final ORDER BY clause. All joins and all WHERE, GROUP BY, and HAVING clauses are completed before the analytic functions are processed. Therefore, analytic functions can appear only in the select list or ORDER BY clause.
    Analytic functions are commonly used to compute cumulative, moving, centered, and reporting aggregates.
    Aggregate Functions :----------
    Aggregate functions return a single result row based on groups of rows, rather than on single rows. Aggregate functions can appear in select lists and in ORDER BY and HAVING clauses. They are commonly used with the GROUP BY clause in a SELECT statement, where Oracle Database divides the rows of a queried table or view into groups. In a query containing a GROUP BY clause, the elements of the select list can be aggregate functions, GROUP BY expressions, constants, or expressions involving one of these. Oracle applies the aggregate functions to each group of rows and returns a single result row for each group.
    If you omit the GROUP BY clause, then Oracle applies aggregate functions in the select list to all the rows in the queried table or view. You use aggregate functions in the HAVING clause to eliminate groups from the output based on the results of the aggregate functions, rather than on the values of the individual rows of the queried table or view.
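    For illustration, a minimal side-by-side sketch (it assumes the standard SCOTT.EMP demo table with DEPTNO, EMPNO and SAL columns):
    -- aggregate: one result row per group
    SELECT deptno, AVG(sal) AS avg_sal
    FROM   emp
    GROUP  BY deptno;
    -- analytic: every detail row is kept and the group average is repeated on each row
    SELECT empno, deptno, sal,
           AVG(sal) OVER (PARTITION BY deptno) AS avg_sal_dept
    FROM   emp;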
    Let me know if you are having any problem understanding this.
    thanks.
    Edited by: varun4dba on Jan 27, 2011 3:32 PM

  • Clipboard functionality and Multi Value in an attribute of component WD_SELECT_OPTIONS_20

    Hi All,
    The Clipboard functionality and the Multi Value in an attribute functionality are not working.
    I tried to look into the standard component and could not find the relevant piece of code.
    Our system is on 7.3 EHP1 SP9, but the functionality does not seem to work.
    Does anyone know in which EHP of NetWeaver 7.3 these functionalities are delivered?
    Is any note available that is compatible with our system level?
    Regards
    Sounak


  • Analytic Function - Return 2 values

    I am sure I need to use an analytic function to do this, I just cannot seem to get it right. I appreciate the help.
    Table and insert statements:
    create table TST_CK (
    DOC_ID NUMBER(6)      not null,
    ROW_SEQ_NBR NUMBER(6) not null,
    IND_VALUE VARCHAR2(2) null
    );
    INSERT INTO TST_CK VALUES ('1','6',NULL);
    INSERT INTO TST_CK VALUES ('1','5',NULL);
    INSERT INTO TST_CK VALUES ('1','4','T');
    INSERT INTO TST_CK VALUES ('1','3','R');
    INSERT INTO TST_CK VALUES ('1','9',NULL);
    INSERT INTO TST_CK VALUES ('1','10',NULL);
    INSERT INTO TST_CK VALUES ('1','7','T');
    INSERT INTO TST_CK VALUES ('1','8','R');
    INSERT INTO TST_CK VALUES ('2','1',NULL);
    INSERT INTO TST_CK VALUES ('2','2',NULL);
    INSERT INTO TST_CK VALUES ('2','3','T');
    INSERT INTO TST_CK VALUES ('2','4','R');
    INSERT INTO TST_CK VALUES ('2','5',NULL);
    INSERT INTO TST_CK VALUES ('2','6',NULL);
    INSERT INTO TST_CK VALUES ('2','7','T');
    INSERT INTO TST_CK VALUES ('2','8','R');
    INSERT INTO TST_CK VALUES ('4','1',NULL);
    INSERT INTO TST_CK VALUES ('4','2',NULL);
    INSERT INTO TST_CK VALUES ('4','3','X1');
    INSERT INTO TST_CK VALUES ('4','4',NULL);
    INSERT INTO TST_CK VALUES ('4','5',NULL);
    INSERT INTO TST_CK VALUES ('4','6',NULL);
    INSERT INTO TST_CK VALUES ('4','7','T');
    INSERT INTO TST_CK VALUES ('4','8','R');
    INSERT INTO TST_CK VALUES ('4','9',NULL);
    INSERT INTO TST_CK VALUES ('4','10',NULL);
    INSERT INTO TST_CK VALUES ('4','11',NULL);
    INSERT INTO TST_CK VALUES ('4','12',NULL);
    INSERT INTO TST_CK VALUES ('4','13','T');
    INSERT INTO TST_CK VALUES ('4','14','R');
    INSERT INTO TST_CK VALUES ('4','15',NULL);
    INSERT INTO TST_CK VALUES ('4','16',NULL);
    COMMIT;
    Here is what I have tried that gets me close:
    SELECT MAX (TST_CK.DOC_ID), MAX (TST_CK.ROW_SEQ_NBR), TST_CK.IND_VALUE
      FROM ASAP.TST_CK TST_CK
    WHERE (TST_CK.IND_VALUE IS NOT NULL)
    GROUP BY TST_CK.IND_VALUE
    ORDER BY 2 ASC
    Here is my desired result:
    CV_1      CV_2
    T         R
    Or an even better result would be:
    concat(CV_1,CV_2)
    With result:
    T,R
    Thanks for looking
    G

    Hi,
    I am sure I need to use an analytic function to do this, I just cannot seem to get it right. I appreciate the help.
    Table and insert statements: ...
    Thanks for posting the CREATE TABLE and INSERT statements.
    Don't forget to explain how you get the results you want from that sample data.
    GMoney wrote:
    create table TST_CK
    DOC_ID NUMBER(6)      not null,
    ROW_SEQ_NBR NUMBER(6) not null,
    IND_VALUE VARCHAR2(2) null
    INSERT INTO TST_CK VALUES ('1','6',NULL);
    If doc_id and row_seq_nbr are NUMBERs, why are you inserting VARCHAR2 values, such as '1' and '6' (in single-quotes)?
    Here is my desired result:
    CV_1      CV_2
    T         R
    Or an even better result would be:
    concat(CV_1,CV_2)
    With result:
    T,R
    The results from the query you posted are:
    MAX(TST_CK.DOC_ID) MAX(TST_CK.ROW_SEQ_NBR) IN
                     4                       3 X1
                     4                      13 T
                     4                      14 R
    What do the desired results represent?
    Why do your desired results include 'R' and 'T', but not 'X1'? Why do you want
    'T,R'     and not
    'X1,T,R'     or
    'X1,T'     or
    'T,X1'     or something else?
    Whatever your reasons are, there's a good chance you'll want to use String Aggregation. Your Oracle version is always important, but it's especially important in string aggregation problems, because some helpful new functions have been added in recent versions. Always say which version of Oracle (e.g., 11.2.0.3.0) you're using.
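    As an aside, a minimal string-aggregation sketch against the posted TST_CK data, assuming only the 'T' and 'R' indicator values are wanted and that one list per DOC_ID is acceptable (LISTAGG needs Oracle 11.2 or later):
    SELECT doc_id,
           LISTAGG(ind_value, ',')
             WITHIN GROUP (ORDER BY row_seq_nbr) AS cv_list
    FROM   tst_ck
    WHERE  ind_value IN ('T', 'R')
    GROUP  BY doc_id;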

  • Lag analytical function and controlling the offset

    Can you help me with this? It's a small challenge; at least, I gave up after half a day.
    Data
    ACCOUNT_NUMBER     BATCH_ID     TRANSACTION_DATE     TRANSACTION_TYPE     TRANSACTION_NUMBER     PARENT_TRANSACTION_NUMBER
    124680     ZY000489     1/11/2011     62     377     NULL
    124680     ZY000489     1/11/2011     1     378     NULL
    124680     ZY000489     1/11/2011     1     379     NULL
    124680     ZY000489     1/11/2011     1     380     NULL
    124680     ZY000489     1/11/2011     62     381     NULL
    124680     ZY000489     1/11/2011     1     381     NULL
    124681     ZY000490     1/11/2011     350     4000     NULL
    124681     ZY000490     1/11/2011     1     4001     NULL
    124681     ZY000490     1/11/2011     1     4002     NULL
    I want to identify the parent transaction number for each row in the data above.
    The rules for identifying the parent transaction ID are:
    -     All child transactions have type 1.
    -     One main transaction can have multiple line items.
    -     Any transaction type can have related child transactions (transaction type 1).
    -     Each logical group of transactions has the same account number, batch ID and transaction date, and consecutive transaction numbers (like 377, 378, 379, 380 in the example above).
    The data should look like below once I have identified the parent transaction column:
    ACCOUNT_NUMBER     BATCH_ID     TRANSACTION_DATE     TRANSACTION_TYPE     TRANSACTION_NUMBER     PARENT_TRANSACTION_NUMBER
    124680     ZY000489     1/11/2011     62     377     377
    124680     ZY000489     1/11/2011     1     378     377
    124680     ZY000489     1/11/2011     1     379     377
    124680     ZY000489     1/11/2011     1     380     377
    124680     ZY000489     1/11/2011     62     381     381
    124680     ZY000489     1/11/2011     1     382     381
    124681     ZY000490     1/11/2011     350     4000     4000
    124681     ZY000490     1/11/2011     1     4001     4000
    124681     ZY000490     1/11/2011     1     4002     4000
    I tried using the LAG analytic function, trying to lag dynamically with an offset, but had difficulties dynamically expanding the offset. It's a control-break kind of functionality that I want to achieve in a single SQL statement.
    I know we can do it using a PL/SQL construct, but the challenge is to do it in a single SQL statement. Please help.
    Please let me know if you are able to do it in a single SQL statement.
    Thanks

    rohitgoswami wrote:
    Can you help me on that? Small challenge. At least I gave up in half a day.
    Data
    ACCOUNT_NUMBER     BATCH_ID     TRANSACTION_DATE     TRANSACTION_TYPE     TRANSACTION_NUMBER     PARENT_TRANSACTION_NUMBER
    124680     ZY000489     1/11/2011     62     377     NULL
    124680     ZY000489     1/11/2011     1     378     NULL
    124680     ZY000489     1/11/2011     1     379     NULL
    124680     ZY000489     1/11/2011     1     380     NULL
    124680     ZY000489     1/11/2011     62     381     NULL
    124680     ZY000489     1/11/2011     1     381     NULL
    124681     ZY000490     1/11/2011     350     4000     NULL
    124681     ZY000490     1/11/2011     1     4001     NULL
    124681     ZY000490     1/11/2011     1     4002     NULL
    I want to identify parent Transaction Number for each row in above data.
    The way to identify it is My parent transaction Id is
    -     All child transaction have type as 1
    -     One main transaction can have multiple line items.
    -     Any transaction (type) can have an related child transaction (Transaction Type as 1)
    -     Each logical group of transactions have same account number, batch id, transaction date and consecutive Transaction Number (like 377, 378, 379, 380 in above example)
    The data should look like below once I identified parent transaction columns:
    ACCOUNT_NUMBER     BATCH_ID     TRANSACTION_DATE     TRANSACTION_TYPE     TRANSACTION_NUMBER     PARENT_TRANSACTION_NUMBER
    124680     ZY000489     1/11/2011     62     377     377
    124680     ZY000489     1/11/2011     1     378     377
    124680     ZY000489     1/11/2011     1     379     377
    124680     ZY000489     1/11/2011     1     380     377
    124680     ZY000489     1/11/2011     62     381     381
    124680     ZY000489     1/11/2011     1     382     381
    124681     ZY000490     1/11/2011     350     4000     4000
    124681     ZY000490     1/11/2011     1     4001     4000
    124681     ZY000490     1/11/2011     1     4002     4000
    I tried using LAG Analytical function trying to lag dynamically with offset but had difficulties dynamically expanding the offset. Its an Control Break kind of functionality that i want to achieve in single SQL.
    i Know we can do it using pl/sql construct but the challenge is to do it using single sql. Please help
    Please let me know if you are able to do it in single SQL.
    Thanks
    Can probably pretty this up ... I just went for functional code for the moment.
    TUBBY_TUBBZ?with
      2     data (acc_no, batch_id, trans_date, trans_type, trans_no) as
      3  (
      4    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 62   , 377   from dual union all
      5    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 378   from dual union all
      6    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 379   from dual union all
      7    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 380   from dual union all
      8    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 62   , 381   from dual union all
      9    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 382   from dual union all
    10    select 124681, 'ZY000490', to_date('1/11/2011', 'mm/dd/yyyy'), 350  , 4000  from dual union all
    11    select 124681, 'ZY000490', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 4001  from dual union all
    12    select 124681, 'ZY000490', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 4002  from dual
    13  )
    14  select
    15    acc_no,
    16    batch_id,
    17    trans_date,
    18    trans_type,
    19    trans_no,
    20    case when trans_type != 1
    21    then
    22      trans_no
    23    else
    24      lag
    25      (
    26        case when trans_type = 1
    27        then
    28           null
    29        else
    30           trans_no
    31        end
    32        ignore nulls
    33      ) over (partition by acc_no, batch_id, trans_date order by trans_no asc)
    34    end as parent_trans_no
    35  from data;
                ACC_NO BATCH_ID                 TRANS_DATE                         TRANS_TYPE           TRANS_NO    PARENT_TRANS_NO
                124680 ZY000489                 11-JAN-2011 12 00:00                       62                377                377
                124680 ZY000489                 11-JAN-2011 12 00:00                        1                378                377
                124680 ZY000489                 11-JAN-2011 12 00:00                        1                379                377
                124680 ZY000489                 11-JAN-2011 12 00:00                        1                380                377
                124680 ZY000489                 11-JAN-2011 12 00:00                       62                381                381
                124680 ZY000489                 11-JAN-2011 12 00:00                        1                382                381
                124681 ZY000490                 11-JAN-2011 12 00:00                      350               4000               4000
                124681 ZY000490                 11-JAN-2011 12 00:00                        1               4001               4000
                124681 ZY000490                 11-JAN-2011 12 00:00                        1               4002               4000
    9 rows selected.
    Elapsed: 00:00:00.01
    TUBBY_TUBBZ?

  • Analytical function and Update from Subquery

    Hi
    as my head is already spinning I kindly ask you to have a try:
    I'm trying to calculate a date that lies X working days in the future of a given start date. I need to consider a calendar table that will give me all dates that are considered to be working days.
    Testdata:
    create table test_date
    (start_date date,
    days number,
    end_date date);
    Commit;
    insert into test_date values (to_date('20100104','yyyymmdd') ,2,NULL);
    insert into test_date values (to_date('20100108','yyyymmdd') ,1,NULL);
    Commit;
    create table my_calendar
    (cal_date date,
    use_day char(1));
    Commit;
    insert into my_calendar values (to_date('20100104','yyyymmdd') ,'Y');
    insert into my_calendar values (to_date('20100105','yyyymmdd') ,'Y');
    insert into my_calendar values (to_date('20100106','yyyymmdd') ,'Y');
    insert into my_calendar values (to_date('20100107','yyyymmdd') ,'Y');
    insert into my_calendar values (to_date('20100108','yyyymmdd') ,'Y');
    insert into my_calendar values (to_date('20100109','yyyymmdd') ,'N');
    insert into my_calendar values (to_date('20100110','yyyymmdd') ,'N');
    insert into my_calendar values (to_date('20100111','yyyymmdd') ,'N');
    insert into my_calendar values (to_date('20100112','yyyymmdd') ,'Y');
    insert into my_calendar values (to_date('20100113','yyyymmdd') ,'Y');
    insert into my_calendar values (to_date('20100114','yyyymmdd') ,'Y');
    Commit;
    Desired output:
    start_date days end_date
    04.01.2010 2 06.01.2010
    08.01.2010 1 12.01.2010
    I can formulate this as a simple select with a subquery.
    Select end_date
    from (
    select cal_date, MAX(cal_date) Over (ORDER BY cal_date ROWS between current row and 1 FOLLOWING) as end_date
                   from my_calendar
                   where cal_date >= to_date('20100108','yyyymmdd') 
                   and use_day = 'Y')
    Where cal_date = to_date('20100108','yyyymmdd')
    But I have no idea how to do the update, as I cannot get the number of days and the start date into my inner subquery:
    update test_date t
    SET end_date = (
    Select end_date
    from (
    select cal_date, MAX(cal_date) Over (ORDER BY cal_date ROWS between current row and t.days FOLLOWING) as end_date
                   from my_calendar
                   where cal_date >= t.start_date
                   and use_day = 'Y')
    Where cal_date = t.start_date)
    I need to perform this calculation on approx 100k rows and would prefer not to use a function to wrap the subquery.
    Any ideas how to solve this without a function call ?

    Something like this ?
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> create table test_date
      2  (start_date date,
      3  days number,
      4  end_date date);
    Table created.
    SQL> insert into test_date values (to_date('20100104','yyyymmdd') ,2,NULL);
    1 row created.
    SQL> insert into test_date values (to_date('20100108','yyyymmdd') ,1,NULL);
    1 row created.
    SQL> Commit;
    Commit complete.
    SQL> create table my_calendar
      2  (cal_date date,
      3  use_day char(1));
    Table created.
    SQL>
    SQL> insert into my_calendar values (to_date('20100104','yyyymmdd') ,'Y');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100105','yyyymmdd') ,'Y');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100106','yyyymmdd') ,'Y');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100107','yyyymmdd') ,'Y');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100108','yyyymmdd') ,'Y');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100109','yyyymmdd') ,'N');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100110','yyyymmdd') ,'N');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100111','yyyymmdd') ,'N');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100112','yyyymmdd') ,'Y');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100113','yyyymmdd') ,'Y');
    1 row created.
    SQL> insert into my_calendar values (to_date('20100114','yyyymmdd') ,'Y');
    1 row created.
    SQL> Commit;
    Commit complete.
    SQL> select * from test_date ;
    START_DAT       DAYS END_DATE
    04-JAN-10          2
    08-JAN-10          1
    SQL> select * from my_calendar ;
    CAL_DATE  U
    04-JAN-10 Y
    05-JAN-10 Y
    06-JAN-10 Y
    07-JAN-10 Y
    08-JAN-10 Y
    09-JAN-10 N
    10-JAN-10 N
    11-JAN-10 N
    12-JAN-10 Y
    13-JAN-10 Y
    14-JAN-10 Y
    11 rows selected.
    SQL> update test_date te set te.end_date = (select end_date from (select start_date, days, cal_date, use_day, lead(cal_date, t.days) over (partition by start_date order by cal_date) end_date from test_date t, my_calendar m where m.cal_date >= t.start_date and use_day = 'Y') where start_date = cal_date and start_date = te.start_date) ;
    2 rows updated.
    SQL> select * from test_date ;
    START_DAT       DAYS END_DATE
    04-JAN-10          2 06-JAN-10
    08-JAN-10          1 12-JAN-10
    SQL> spool off
    Not sure how this will perform on 100K rows.
    p.s. Just wanted to add that this query assumes that START_DATE in TEST_DATE will always be present in MY_CALENDAR as CAL_DATE and CAL_DATE in MY_CALENDAR is unique.
    Edited by: user503699 on Jan 21, 2010 3:52 PM

  • Analytical Functions and Drill-Down

    Hi everybody!
    I want to know if someone has solved the problem with
    analytical functions and drill-down.
    The analytical functions are based on fixed parameters. When you
    drill down, these parameters change and the result is not very
    meaningful, e.g. when drilling down from year to quarter, the year-ago value
    is not correct for the quarter, only for the year.
    Is there someone who has solved this problem?
    Thanks

    Hi biswajit Das,
                 thanks for your response, but I feel
    T52C0  - SAP standard schemas
    T52C1  - customer-specific and modified SAP standard schemas
    T52C2  - table for PCRs.
      But I want to know, when you create functions or operations, where are these stored, i.e. in which table?
    Bye
    phanikumar

  • Tuning sql with analytic function

    Dear friends, I've developed one SQL statement:
    with REP as
    (select /*+ MATERIALIZE */ branch_code,
       row_number() over(partition by branch_code, account order by bkg_date desc  ) R,
             account,
             bkg_date,
             lcy_closing_bal
        from history t)
    select REP1.branch_code,
           REP1.account,
           REP1.bkg_date,
           REP1.lcy_closing_bal,
             NULL  AS second,
           REP2.bkg_date        bkg_date2,
           REP2.lcy_closing_bal lcy_closing_bal2,
            NULL  AS third,
           REP3.bkg_date        bkg_date3,
           REP3.lcy_closing_bal lcy_closing_bal3
      from (SELECT * FROM REP WHERE R=1) REP1, (SELECT * FROM REP WHERE R=2) REP2, (SELECT * FROM REP WHERE R=3) REP3
    where
           (REP1.BRANCH_CODE = REP2.BRANCH_CODE(+) AND REP1.ACCOUNT = REP2.ACCOUNT(+)) AND
           (REP1.BRANCH_CODE = REP3.BRANCH_CODE(+) AND REP1.ACCOUNT = REP3.ACCOUNT(+))
    The point is I want to restrict (tune) REP before it is used because, as you can see, I need at most three values from REP (where R=1, R=2, R=3). Which analytic function, and with which options, do I have to use to receive only 3 values in each branch_code, account group at the materializing time?

    Radrigez wrote:
    Dear friends I've developed one sql :
    with REP as
    from (SELECT * FROM REP WHERE R=1) REP1,
    (SELECT * FROM REP WHERE R=2) REP2,
    (SELECT * FROM REP WHERE R=3) REP3
    where
    (REP1.BRANCH_CODE = REP2.BRANCH_CODE(+) AND REP1.ACCOUNT = REP2.ACCOUNT(+)) AND
    (REP1.BRANCH_CODE = REP3.BRANCH_CODE(+) AND REP1.ACCOUNT = REP3.ACCOUNT(+))
    The first step is to put your subquery (which doesn't need to be materialized) into an inline view and restrict the result set on r in (1,2,3) as suggested by thtsang - you don't need to query the same result set three times.
    Then you're looking at a simple pivot operation (assuming the number of rows you want per branch and account is fixed). If you're on 11g search the manuals for PIVOT, on earlier versions you can do this with a decode() or case() operator.
    Step 1 (which could go into another factored subquery) would be something like:
    select
            branch_code, account,
            case when r = 1 then bkg_date end bkg_date,
            case when r = 1 then lcy_closing_bal end lcy_closing_bal,
            case when r = 2 then bkg_date end bkg_date2,
            case when r = 2 then lcy_closing_bal end lcy_closing_bal2,
            case when r = 3 then bkg_date end bkg_date3,
            case when r = 3 then lcy_closing_bal end lcy_closing_bal3
    from
            rep
    This gives you the eight necessary columns, but still (up to) three rows per branch/account.
    Then you aggregate this (call it rep1) on branch and account.
    select
            branch_code, account,
            max(bkg_date),
            max(lcy_closing_bal),
            max(bkg_date2),
            max(lcy_closing_bal2),
            max(bkg_date3),
            max(lcy_closing_bal3)
    from
            rep1
    group by
            branch_code, account
    order by
            branch_code, account
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core
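    As a rough illustration of the 11g PIVOT route mentioned above (a sketch only, assuming the same REP factored subquery, restricted to r IN (1,2,3) as suggested):
    SELECT *
    FROM  (SELECT branch_code, account, r, bkg_date, lcy_closing_bal
           FROM   rep
           WHERE  r IN (1, 2, 3))
    PIVOT (MAX(bkg_date)        AS bkg_date,
           MAX(lcy_closing_bal) AS lcy_closing_bal
           FOR r IN (1 AS r1, 2 AS r2, 3 AS r3))
    ORDER BY branch_code, account;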

  • Help with Oracle Analytic Function scenario

    Hi,
    I am new to analytic functions and was wondering if someone could help me with the data scenario below. I have a table with the following data
    COLUMN A COLUMN B COLUMN C
    13368834 34323021 100
    13368835 34438258 50
    13368834 34438258 50
    13368835 34323021 100
    The output I want is
    COLUMN A COLUMN B COLUMN C
    13368834 34323021 100
    13368835 34438258 50
    A simple DISTINCT won't give me the desired output, so I was wondering if there is any way that I can get the result using ANALYTIC FUNCTIONS and DISTINCT.
    Any help will be greatly appreciated.
    Thanks.

    Hi,
    Welcome to the forum!
    Whenever you have a question, please post your sample data in a form that people can use to re-create the problem and test their solutions.
    For example:
    CREATE TABLE     table_x
    (      columna     NUMBER
    ,      columnb     NUMBER
    ,      columnc     NUMBER
    );
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368834, 34323021, 100);
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368835, 34438258, 50);
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368834, 34438258, 50);
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368835, 34323021, 100);
    Do you want something that works in your version of Oracle? Of course you do! So tell us which version that is.
    How do you get the results that you want? Explain what each row of output represents. It looks like
    the 1st row contains the 1st distinct value from each column (where "first" means descending order for columnc, and ascending order for the others),
    the 2nd row contains the 2nd distinct value,
    the 3rd row contains the 3rd distinct value, and so on.
    If that's what you want, here's one way to get it (in Oracle 9 and up):
    WITH     got_nums     AS
    (
         SELECT     columna, columnb, columnc
         ,     DENSE_RANK () OVER (ORDER BY  columna        )     AS a_num
         ,     DENSE_RANK () OVER (ORDER BY  columnb        )     AS b_num
         ,     DENSE_RANK () OVER (ORDER BY  columnc  DESC)     AS c_num
         FROM     table_x
    )
    SELECT       MAX (a.columna)          AS columna
    ,       MAX (b.columnb)          AS columnb
    ,       MAX (c.columnc)          AS columnc
    FROM              got_nums     a
    FULL OUTER JOIN  got_nums     b     ON     b.b_num     =           a.a_num
    FULL OUTER JOIN  got_nums     c     ON     c.c_num     = COALESCE (a.a_num, b.b_num)
    GROUP BY  COALESCE (a.a_num, b.b_num, c.c_num)
    ORDER BY  COALESCE (a.a_num, b.b_num, c.c_num)
    ;
    I've been trying to find a good name for this type of query. The best I've heard so far is "Prix Fixe Query", named after the menus where you get a choice of soups (listed in one column), appetizers (in another column), main dishes (in a 3rd column), and so on. The items on the first row don't necessarily have any relationship to each other.
    The solution does not assume that there are the same number of distinct items in each column.
    For example, if you add this row to the sample data:
    INSERT INTO table_x (columna, columnb, columnc) VALUES (13368835, 34323021, 99);
    which is a copy of the last row, except that there is a completely new value for columnc, then the output is:
      COLUMNA    COLUMNB    COLUMNC
      13368834   34323021        100
      13368835   34438258         99
                                   50
    Starting in Oracle 11, you can also do this with an unpivot-pivot query.

  • Completion of data series by analytical function

    I have the pleasure of learning the benefits of analytical functions and hope to get some help
    The case is as follows:
    Different projects gets funds from different sources over several years, but not from each source every year.
    I want to produce the cumulative sum of funds for each source, for each year, for each project, but so far I have not been able to do so for years with no funds from a particular source.
    I have used this syntax:
    SUM(fund) OVER(PARTITION BY project, source ORDER BY year ROWS UNBOUNDED PRECEDING)
    I have also experimented with different variations of the window clause, but without any luck.
    This is the last step in a big job I have been working on for several weeks, so I would be very thankful for any help.

    If you want to use analytic functions and you are on version 10.1.3.3 of BI EE, then try using EVALUATE and EVALUATE_AGGR, which support native database functions. I have blogged about it here http://oraclebizint.wordpress.com/2007/09/10/oracle-bi-ee-10133-support-for-native-database-functions-and-aggregates/. But in your case all you might want to do is have a column with the following function.
    SUM(Measure BY Col1, Col2...)
    I have also blogged about it here http://oraclebizint.wordpress.com/2007/10/02/oracle-bi-ee-101332-varying-aggregation-based-on-levels-analytic-functions-equivalence/.
    Thanks,
    Venkat
    http://oraclebizint.wordpress.com
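    At the plain SQL level, one way to get a row for every year is a partitioned outer join against the full set of years before applying the cumulative SUM. A sketch only, assuming a hypothetical FUNDING table with PROJECT, SOURCE, YR and FUND columns (Oracle 10g or later):
    -- densify the missing (project, source, year) rows, then cumulate
    WITH years AS (SELECT DISTINCT yr FROM funding)
    SELECT f.project,
           f.source,
           y.yr,
           SUM(NVL(f.fund, 0))
             OVER (PARTITION BY f.project, f.source
                   ORDER BY y.yr
                   ROWS UNBOUNDED PRECEDING) AS cumulative_fund
    FROM   funding f
           PARTITION BY (f.project, f.source)
           RIGHT OUTER JOIN years y ON (f.yr = y.yr)
    ORDER  BY f.project, f.source, y.yr;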

  • How to use analytic function with aggregate function

    hello
    Can we use analytic functions and aggregate functions in the same query? I tried to find an example on the net, but did not get any example of how both of these functions work together. Please share any link or example with me.
    Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM

    select
    t1.region_name,
    t2.division_name,
    t3.month,
    t3.amount mthly_sales,
    max(t3.amount) over (partition by t1.region_name, t2.division_name)
    max_mthly_sales
    from
    region t1,
    division t2,
    sales t3
    where
    t1.region_id=t3.region_id
    and
    t2.division_id=t3.division_id
    and
    t3.year=2004
    Source:http://www.orafusion.com/art_anlytc.htm
    Here the MAX (aggregate) and the OVER (PARTITION BY ...) (analytic) constructs are in the same query. So we can use aggregate and analytic functions in the same query, and also more than one analytic function in the same query.
    Hth
    Girish Sharma
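    A small sketch of the other common combination, a true GROUP BY aggregate with an analytic function computed over the aggregated result (the SALES table and its REGION_ID/AMOUNT columns are assumed here purely for illustration):
    SELECT region_id,
           SUM(amount)                             AS total_sales,
           RANK() OVER (ORDER BY SUM(amount) DESC) AS sales_rank
    FROM   sales
    GROUP  BY region_id;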

  • How to achive this using analytical function-- please help

    Version 10g.
    This code works just fine for my requirement. I am trying to learn analytic functions and implement them in the query below. I tried using ROW_NUMBER,
    but I could not achieve the desired results. Please give me some ideas.
    SELECT c.tax_idntfctn_nmbr irs_number, c.legal_name irs_name,
           f.prvdr_lctn_iid
      FROM tax_entity_detail c,
           provider_detail e,
           provider_location f,
           provider_location_detail pld
    WHERE c.tax_entity_sid = e.tax_entity_sid
       AND e.prvdr_sid = f.prvdr_sid
       AND pld.prvdr_lctn_iid = f.prvdr_lctn_iid
       AND c.oprtnl_flag = 'A'
       AND c.status_cid = 2
       AND e.oprtnl_flag = 'A'
       AND e.status_cid = 2
       AND (c.from_date) =
              (SELECT MAX (c1.from_date)
                 FROM tax_entity_detail c1
                WHERE c1.tax_entity_sid = c.tax_entity_sid
                  AND c1.oprtnl_flag = 'A'
                  AND c1.status_cid = 2)
       AND (e.from_date) =
              (SELECT MAX (c1.from_date)
                 FROM provider_detail c1
                WHERE c1.prvdr_sid = e.prvdr_sid
                  AND c1.oprtnl_flag = 'A'
                  AND c1.status_cid = 2)
       AND pld.oprtnl_flag = 'A'
       AND pld.status_cid = 2
       AND (pld.from_date) =
              (SELECT MAX (a1.from_date)
                 FROM provider_location_detail a1
                WHERE a1.prvdr_lctn_iid = pld.prvdr_lctn_iid
                  AND a1.oprtnl_flag = 'A'
                   AND a1.status_cid = 2)
    thanks
    Edited by: new learner on May 24, 2010 7:53 AM
    Edited by: new learner on May 24, 2010 10:50 AM

    Maybe like this, not tested...
    select *
    from
    (
    SELECT c.tax_idntfctn_nmbr irs_number, c.legal_name irs_name,
    f.prvdr_lctn_iid, c.from_date as c_from_date, max(c.from_date) over(partition by c.tax_entity_sid) as max_c_from_date,
    e.from_date as e_from_date, max(e.from_date) over(partition by e.prvdr_sid) as max_e_from_date,
    pld.from_date as pld_from_date, max(pld.from_date) over(partition by pld.prvdr_lctn_iid) as max_pld_from_date
    FROM tax_entity_detail c,
    provider_detail e,
    provider_location f,
    provider_location_detail pld
    WHERE c.tax_entity_sid = e.tax_entity_sid
    AND e.prvdr_sid = f.prvdr_sid
    AND pld.prvdr_lctn_iid = f.prvdr_lctn_iid
    AND c.oprtnl_flag = 'A'
    AND c.status_cid = 2
    AND e.oprtnl_flag = 'A'
    AND e.status_cid = 2
    AND pld.oprtnl_flag = 'A'
    AND pld.status_cid = 2
    )X
    where c_from_date=max_c_from_date AND e_from_date =max_e_from_date AND
    pld_from_date=max_pld_from_date

  • From analytical function to regular query

    Hi all,
    I was reading a tutorial on analytic functions and I found something like this:
    sum(principal) keep(dense_rank first order by d_date) over (partition by userid, alias, sec_id, flow, p_date)
    Can somebody translate this into simple queries / subqueries? I am aware that analytic functions are faster, but I would like to know
    how this can be translated to a regular query.
    Can someone help me write a regular query that will produce the same result as the expression above?
    Thanks
    Edited by: Devx on Jun 10, 2010 11:16 AM

    Hi,
    WITH CUSTOMERS AS
    (
    SELECT 1 CUST_ID ,'NJ' STATE_CODE,1 TIMES_PURCHASED FROM DUAL UNION ALL
    SELECT 1,'CT',1 FROM DUAL UNION ALL
    SELECT 2,'NY',10 FROM DUAL UNION ALL
    SELECT 2,'NY',10 FROM DUAL UNION ALL
    SELECT 1,'CT',10 FROM DUAL UNION ALL
    SELECT 3,'NJ',2 FROM DUAL UNION ALL
    SELECT 4,'NY',4 FROM DUAL
    )
    SELECT SUM(TIMES_PURCHASED) KEEP(DENSE_RANK FIRST ORDER BY CUST_ID ASC) OVER (PARTITION BY STATE_CODE) SUM_TIMES_PURCHASED_WITH_MIN,
           SUM(TIMES_PURCHASED) KEEP(DENSE_RANK LAST ORDER BY CUST_ID) OVER (PARTITION BY STATE_CODE) SUM_TIMES_PURCHASED_WITH_MAX,
           C.*
    FROM   CUSTOMERS C;
    SUM_TIMES_PURCHASED_WITH_MIN     SUM_TIMES_PURCHASED_WITH_MAX     CUST_ID     STATE_CODE     TIMES_PURCHASED
    11     11     1     CT     10
    11     11     1     CT     1
    1     2     3     NJ     2
    1     2     1     NJ     1
    20     4     4     NY     4
    20     4     2     NY     10
    20     4     2     NY     10
    The above example is self-explanatory: execute the SQL and you'll notice that in the first column the sum of TIMES_PURCHASED (partitioned by state code) for the FIRST cust_id is repeated for the STATE_CODE partition, and in the second column the sum of TIMES_PURCHASED (partitioned by state code) for the LAST cust_id is repeated for the STATE_CODE partition.
    HTH
    *009*
    Edited by: 009 on Jun 10, 2010 10:53 PM
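    To directly answer the translation question, a non-analytic equivalent of the first column above (SUM(...) KEEP (DENSE_RANK FIRST ORDER BY cust_id) OVER (PARTITION BY state_code)) can be written with correlated subqueries; a sketch, treating the CUSTOMERS data above as a table:
    SELECT c.*,
           (SELECT SUM(c2.times_purchased)
              FROM customers c2
             WHERE c2.state_code = c.state_code
               AND c2.cust_id = (SELECT MIN(c3.cust_id)
                                   FROM customers c3
                                  WHERE c3.state_code = c2.state_code)
           ) AS sum_times_purchased_with_min
      FROM customers c;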

  • Understanding row_number() and using it in an analytic function

    Dear all;
    I have been playing around with ROW_NUMBER and trying to understand how to use it, and yet I still can't figure it out...
    I have the following code below
    create table Employee(
        ID                 VARCHAR2(4 BYTE)         NOT NULL,
       First_Name         VARCHAR2(10 BYTE),
       Last_Name          VARCHAR2(10 BYTE),
        Start_Date         DATE,
        End_Date           DATE,
         Salary             Number(8,2),
       City               VARCHAR2(10 BYTE),
        Description        VARCHAR2(15 BYTE)
    );
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                 values ('01','Jason',    'Martin',  to_date('19960725','YYYYMMDD'), to_date('20060725','YYYYMMDD'), 1234.56, 'Toronto',  'Programmer');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('02','Alison',   'Mathews', to_date('19760321','YYYYMMDD'), to_date('19860221','YYYYMMDD'), 6661.78, 'Vancouver','Tester')
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                 values('03','James',    'Smith',   to_date('19781212','YYYYMMDD'), to_date('19900315','YYYYMMDD'), 6544.78, 'Vancouver','Tester')
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('04','Celia',    'Rice',    to_date('19821024','YYYYMMDD'), to_date('19990421','YYYYMMDD'), 2344.78, 'Vancouver','Manager')
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('05','Robert',   'Black',   to_date('19840115','YYYYMMDD'), to_date('19980808','YYYYMMDD'), 2334.78, 'Vancouver','Tester')
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary, City,        Description)
                  values('06','Linda',    'Green',   to_date('19870730','YYYYMMDD'), to_date('19960104','YYYYMMDD'), 4322.78,'New York',  'Tester')
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary, City,        Description)
                  values('07','David',    'Larry',   to_date('19901231','YYYYMMDD'), to_date('19980212','YYYYMMDD'), 7897.78,'New York',  'Manager')
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary, City,        Description)
               values('08','James',    'Cat',     to_date('19960917','YYYYMMDD'), to_date('20020415','YYYYMMDD'), 1232.78,'Vancouver', 'Tester')
    I did a simple select statement
    select * from Employee e
    and it returns this below
    ID   FIRST_NAME LAST_NAME  START_DAT END_DATE      SALARY CITY       DESCRIPTION
    01   Jason      Martin     25-JUL-96 25-JUL-06    1234.56 Toronto    Programmer
    02   Alison     Mathews    21-MAR-76 21-FEB-86    6661.78 Vancouver  Tester
    03   James      Smith      12-DEC-78 15-MAR-90    6544.78 Vancouver  Tester
    04   Celia      Rice       24-OCT-82 21-APR-99    2344.78 Vancouver  Manager
    05   Robert     Black      15-JAN-84 08-AUG-98    2334.78 Vancouver  Tester
    06   Linda      Green      30-JUL-87 04-JAN-96    4322.78 New York   Tester
    07   David      Larry      31-DEC-90 12-FEB-98    7897.78 New York   Manager
    08   James      Cat        17-SEP-96 15-APR-02    1232.78 Vancouver  Tester
    I wrote another select statement with ROW_NUMBER; see below.
    SELECT first_name, last_name, salary, city, description, id,
       ROW_NUMBER() OVER(PARTITION BY description ORDER BY city desc) "Test#"
       FROM employee
       and I get this result
    First_name  last_name   Salary         City             Description         ID         Test#
    Celina          Rice         2344.78      Vancouver    Manager             04          1
    David          Larry         7897.78      New York    Manager             07          2
    Jason          Martin       1234.56      Toronto      Programmer        01          1
    Alison         Mathews    6661.78      Vancouver   Tester               02          1 
    James         Cat           1232.78      Vancouver    Tester              08          2
    Robert        Black         2334.78     Vancouver     Tester              05          3
    James        Smith         6544.78     Vancouver     Tester              03          4
    Linda         Green        4322.78      New York      Tester             06           5
    I understand the PARTITION BY, which basically means that for each associated group a unique number will be assigned to each row. So in this case, since Tester is one group, Manager is another group, and Programmer is another group, Tester gets its own numbering for each of its rows, Manager as well, and so on.
    What is throwing me off is the ORDER BY and how these numbers are assigned. Why is
    1 assigned to Alison Mathews for the Tester group, 2 assigned to James Cat, and 3 assigned to Robert Black?
    I apologize if this is a stupid question; I have tried reading about it online and looking at the Oracle documentation, but I still don't fully understand why.

    user13328581 wrote:
    understanding row_number() and using it in an analytic function
    ROW_NUMBER() IS an analytic function. Are you trying to use the results of ROW_NUMBER in another analytic function? If so, you need a sub-query. Analytic functions can't be nested within other analytic functions.
    ...I have the following code below
    ... I did a simple select statement
    Thanks for posting all that! It's really helpful.
    ... and I get this result
    First_name  last_name   Salary         City             Description         ID         Test#
    Celina          Rice         2344.78      Vancouver    Manager             04          1
    David          Larry         7897.78      New York    Manager             07          2
    Jason          Martin       1234.56      Toronto      Programmer        01          1
    Alison         Mathews    6661.78      Vancouver   Tester               02          1 
    James         Cat           1232.78      Vancouver    Tester              08          2
    Robert        Black         2334.78     Vancouver     Tester              05          3
    James        Smith         6544.78     Vancouver     Tester              03          4
    Linda         Green        4322.78      New York      Tester             06           5
    ... What is throwing me off is the order by and how this numbering are assigned. why is
    1 assigned to Alison Mathews for the tester group and 2 assigned to James Cat and 3 assigned Robert Black
    That's determined by the analytic ORDER BY clause. You said "ORDER BY city desc", so a row where city='Vancouver' will get a lower number than one where city='New York', since 'Vancouver' comes after 'New York' in alphabetic order.
    If you have several rows that all have the same city, then you can be sure that ROW_NUMBER will assign them consecutive numbers, but it's arbitrary which one of them will be lowest and which highest. For example, you have 5 'Tester's: 4 from Vancouver and 1 from New York. There's no particular reason why the one with first_name='Alison' got assigned #1, and 'James' got #2. If you run the same query again, without changing the table at all, then 'Robert' might be #1. It's certain that the 4 Vancouver rows will be assigned numbers 1 through 4, but there's no way of telling which of those 4 rows will get which of those 4 numbers.
    Similar to a query's ORDER BY clause, the analytic ORDER BY clause can have two or more expressions. The N-th one will only be considered if there was a tie for all (N-1) earlier ones. For example "ORDER BY city DESC, last_name, first_name" would mean 'Vancouver' comes before 'New York', but, if multiple rows all have city='Vancouver', last_name would determine the order: 'Black' would get a lower number than 'Cat'. If you had multiple rows with city='Vancouver' and last_name='Black', then the order would be determined by first_name.
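    For example, a sketch of a deterministic version of the original numbering, using the tie-breakers just described, against the same EMPLOYEE table:
    SELECT first_name, last_name, salary, city, description, id,
           ROW_NUMBER() OVER (PARTITION BY description
                              ORDER BY city DESC, last_name, first_name) "Test#"
      FROM employee;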

  • Oracle Express(Analytical workspace?) and Oracle warehousebuilder

    Can somebody please explain when and why to use
    Oracle Express (analytic workspaces?) and Oracle Warehouse Builder.
    When should each of these tools be used?

    Oracle Express is an old, de-supported product that has been replaced by Oracle Database OLAP Server http://otn.oracle.com/products/bi/olap/olap.html.
    Oracle Warehouse Builder (OWB) is a data integration design and management tool. One of OWB's areas of functionality is enabling OLAP. All OWB OLAP related materials are summarized and linked from OWB Self-service Education page here http://otn.oracle.com/products/warehouse/selfserv_edu/OLAP_cube_creation.html
    Nikolai Rochnik
