Understanding sum() over(order by) analytic function
Could you please explain how the Having_order_by column values are computed for the query below?
I understand that the No_Partition column is computed over the entire result set.
select level
,sum(level) over(order by level) Having_order_by
,sum(level) over() No_Partition
from dual
connect by level < 6
Hi,
ActiveSomeTimes wrote:
Could you please explain how the Having_order_by column values are computed for the query below?
I understand that the No_Partition column is computed over the entire result set.
select level
,sum(level) over(order by level) Having_order_by
,sum(level) over() No_Partition
from dual
connect by level < 6
When you have an ORDER BY clause, the function only operates on a window, that is, a subset of the result set, relative to the current row.
When you say "ORDER BY LEVEL", it will only operate on LEVELs less than or equal to the current LEVEL, so on
LEVEL = 1, the analytic function will only look at LEVEL <= 1, that is, just 1; on
LEVEL = 2, the analytic function will only look at LEVEL <= 2, that is, 1 and 2; on
LEVEL = 3, the analytic function will only look at LEVEL <= 3, that is, 1, 2 and 3; and on
LEVEL = 5, the analytic function will only look at LEVEL <= 5, that is, 1, 2, 3, 4 and 5. (CONNECT BY LEVEL < 6 generates LEVELs 1 through 5.)
In the function call without the ORDER BY clause, the function looks at the entire result set, regardless of what value LEVEL has on the current row.
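The two window shapes can be mimicked in plain Python to see the arithmetic (this is just a sketch of the windowing logic, not Oracle itself):

```python
# Sketch: CONNECT BY LEVEL < 6 generates LEVEL = 1..5
levels = [1, 2, 3, 4, 5]

# SUM(level) OVER (ORDER BY level): running sum up to and including the current row
having_order_by = [sum(l for l in levels if l <= cur) for cur in levels]

# SUM(level) OVER (): one grand total, repeated on every row
no_partition = [sum(levels)] * len(levels)

print(having_order_by)  # [1, 3, 6, 10, 15]
print(no_partition)     # [15, 15, 15, 15, 15]
```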
Similar Messages
-
Moving sum using date intervals - analytic functions help
let's say you have the following set of data:
DATE SALES
09/02/2012 100
09/02/2012 50
09/02/2012 10
09/02/2012 1000
09/02/2012 20
12/02/2012 1000
12/02/2012 1100
14/02/2012 1000
14/02/2012 100
15/02/2012 112500
15/02/2012 13500
15/02/2012 45000
15/02/2012 1500
19/02/2012 1500
20/02/2012 400
23/02/2012 2000
27/02/2012 4320
27/02/2012 300000
01/03/2012 100
04/03/2012 17280
06/03/2012 100
06/03/2012 100
06/03/2012 4320
08/03/2012 100
13/03/2012 1000
For each day I need to know the sum of the sales for the current day and the preceding 5 days (calendar days, not five rows).
What query could I use?
Please help!
Hi.
Here's one way.
WITH data AS
(
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 50 n FROM DUAL UNION ALL
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 10 n FROM DUAL UNION ALL
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 1000 n FROM DUAL UNION ALL
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 20 n FROM DUAL UNION ALL
SELECT TO_DATE('12/02/2012','DD/MM/YYYY') d, 1000 n FROM DUAL UNION ALL
SELECT TO_DATE('12/02/2012','DD/MM/YYYY') d, 1100 n FROM DUAL UNION ALL
SELECT TO_DATE('14/02/2012','DD/MM/YYYY') d, 1000 n FROM DUAL UNION ALL
SELECT TO_DATE('14/02/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d, 112500 n FROM DUAL UNION ALL
SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d, 13500 n FROM DUAL UNION ALL
SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d, 45000 n FROM DUAL UNION ALL
SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d, 1500 n FROM DUAL UNION ALL
SELECT TO_DATE('19/02/2012','DD/MM/YYYY') d, 1500 n FROM DUAL UNION ALL
SELECT TO_DATE('20/02/2012','DD/MM/YYYY') d, 400 n FROM DUAL UNION ALL
SELECT TO_DATE('23/02/2012','DD/MM/YYYY') d, 2000 n FROM DUAL UNION ALL
SELECT TO_DATE('27/02/2012','DD/MM/YYYY') d, 4320 n FROM DUAL UNION ALL
SELECT TO_DATE('27/02/2012','DD/MM/YYYY') d, 300000 n FROM DUAL UNION ALL
SELECT TO_DATE('01/03/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('04/03/2012','DD/MM/YYYY') d, 17280 n FROM DUAL UNION ALL
SELECT TO_DATE('06/03/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('06/03/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('06/03/2012','DD/MM/YYYY') d, 4320 n FROM DUAL UNION ALL
SELECT TO_DATE('08/03/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('13/03/2012','DD/MM/YYYY') d, 1000 n FROM DUAL
),
days AS
(
SELECT TO_DATE('2012-02-01','YYYY-MM-DD')+(LEVEL-1) d
FROM DUAL
CONNECT BY LEVEL <= 60
),
totals_per_day AS
(
SELECT dy.d, SUM(NVL(dt.n,0)) total_day
FROM data dt,
days dy
WHERE dy.d = dt.d(+)
GROUP BY dy.d
)
SELECT
d,
SUM(total_day) OVER
(
ORDER BY d
RANGE BETWEEN 5 PRECEDING AND CURRENT ROW
) AS five_day_total
FROM totals_per_day
ORDER BY d;
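As a sanity check on the RANGE BETWEEN 5 PRECEDING AND CURRENT ROW arithmetic, here is a rough Python sketch over a few of the daily totals from the sample data (only a handful of days are included; the full query output follows below):

```python
from datetime import date, timedelta

# Daily totals for a few dates, pre-summed from the sample rows
sales = {date(2012, 2, 9): 1180,     # 100+50+10+1000+20
         date(2012, 2, 12): 2100,    # 1000+1100
         date(2012, 2, 14): 1100,    # 1000+100
         date(2012, 2, 15): 172500}  # 112500+13500+45000+1500

def five_day_total(d):
    # current day plus the 5 preceding calendar days
    return sum(sales.get(d - timedelta(days=k), 0) for k in range(6))

print(five_day_total(date(2012, 2, 15)))  # 175700
print(five_day_total(date(2012, 2, 14)))  # 4380
```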
2012-02-01 00:00:00 0
2012-02-02 00:00:00 0
2012-02-03 00:00:00 0
2012-02-04 00:00:00 0
2012-02-05 00:00:00 0
2012-02-06 00:00:00 0
2012-02-07 00:00:00 0
2012-02-08 00:00:00 0
2012-02-09 00:00:00 1180
2012-02-10 00:00:00 1180
2012-02-11 00:00:00 1180
2012-02-12 00:00:00 3280
2012-02-13 00:00:00 3280
2012-02-14 00:00:00 4380
2012-02-15 00:00:00 175700
2012-02-16 00:00:00 175700
2012-02-17 00:00:00 175700
2012-02-18 00:00:00 173600
2012-02-19 00:00:00 175100
2012-02-20 00:00:00 174400
2012-02-21 00:00:00 1900
2012-02-22 00:00:00 1900
2012-02-23 00:00:00 3900
2012-02-24 00:00:00 3900
2012-02-25 00:00:00 2400
2012-02-26 00:00:00 2000
2012-02-27 00:00:00 306320
2012-02-28 00:00:00 306320
2012-02-29 00:00:00 304320
2012-03-01 00:00:00 304420
2012-03-02 00:00:00 304420
2012-03-03 00:00:00 304420
2012-03-04 00:00:00 17380
2012-03-05 00:00:00 17380
2012-03-06 00:00:00 21900
2012-03-07 00:00:00 21800
2012-03-08 00:00:00 21900
2012-03-09 00:00:00 21900
2012-03-10 00:00:00 4620
2012-03-11 00:00:00 4620
2012-03-12 00:00:00 100
2012-03-13 00:00:00 1100
2012-03-14 00:00:00 1000
2012-03-15 00:00:00 1000
2012-03-16 00:00:00 1000
2012-03-17 00:00:00 1000
2012-03-18 00:00:00 1000
2012-03-19 00:00:00 0
2012-03-20 00:00:00 0
2012-03-21 00:00:00 0
2012-03-22 00:00:00 0
2012-03-23 00:00:00 0
2012-03-24 00:00:00 0
2012-03-25 00:00:00 0
2012-03-26 00:00:00 0
2012-03-27 00:00:00 0
2012-03-28 00:00:00 0
2012-03-29 00:00:00 0
2012-03-30 00:00:00 0
2012-03-31 00:00:00 0
Hope this helps.
Regards. -
How to use analytic function with aggregate function
hello
Can we use an analytic function and an aggregate function in the same query? I tried to find an example on the net but could not find one showing how these functions work together. Please share a link or example.
Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM
select
t1.region_name,
t2.division_name,
t3.month,
t3.amount mthly_sales,
max(t3.amount) over (partition by t1.region_name, t2.division_name)
max_mthly_sales
from
region t1,
division t2,
sales t3
where
t1.region_id=t3.region_id
and
t2.division_id=t3.division_id
and
t3.year=2004
Source:http://www.orafusion.com/art_anlytc.htm
Here MAX (an aggregate function) is combined with an OVER (PARTITION BY ...) clause (analytic) in the same query. So we can use aggregate and analytic functions in the same query, and more than one analytic function in the same query as well.
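The pattern in the query above, an analytic MAX broadcast onto every detail row, can be sketched in Python; the region/division values here are hypothetical stand-ins for the sample tables:

```python
# (region, division, monthly amount) -- hypothetical sample rows
rows = [("East", "A", 100), ("East", "B", 250), ("West", "C", 80)]

# First pass: the aggregate (MAX per partition)
max_per_region = {}
for region, _, amount in rows:
    max_per_region[region] = max(max_per_region.get(region, amount), amount)

# Second pass: the analytic behaviour -- every row keeps its detail
# columns and also carries the partition-level MAX
result = [(region, div, amt, max_per_region[region]) for region, div, amt in rows]
print(result)
```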
Hth
Girish Sharma -
Case Statement in Analytic Function SUM(n) OVER(PARTITION BY x)
Hi Guys,
I have the following SQL that doesn't seem to honour the WHEN clause I am using in the CASE statement inside the analytic function (SUM). Could somebody let me know why, and suggest a solution?
Select SUM(Case When (A.Flag = 'B' and B.Status != 'C') Then (NVL(A.Amount_Cr, 0) - (NVL(A.Amount_Dr,0))) Else 0 End) OVER (PARTITION BY A.Period_Year) Annual_amount
, A.period_year
, B.status
, A.Flag
from A, B, C
where A.period_year = 2006
and C.Account = '301010'
--and B.STATUS != 'C'
--and A.Flag = 'B'
and A.Col_x = B.Col_x
and A.Col_y = C.Col_y
When I use this SQL, I get
Annual_Amount Period_Year Status Flag
5721017.5 --------- 2006 ---------- C -------- B
5721017.5 --------- 2006 ---------- O -------- B
5721017.5 --------- 2006 ---------- NULL ----- A
And when I put the conditions in the where clause, I get
Annual_Amount Period_Year Status Flag
5721017.5 ---------- 2006 ---------- O -------- B
Here are some scripts,
create table testtable1 ( ColxID number(10), ColyID number(10), Periodname varchar2(15), Flag varchar2(1), Periodyear number(15), debit number, credit number);
insert into testtable1 values(1, 1000, 'JAN-06', 'A', 2006, 7555523.71, 7647668);
insert into testtable1 values(2, 1001, 'FEB-06', 'B', 2006, 112710, 156047);
insert into testtable1 values(3, 1002, 'MAR-06', 'A', 2006, 200.57, 22376.43);
insert into testtable1 values(4, 1003, 'APR-06', 'B', 2006, 0, 53846);
insert into testtable1 values(5, 1004, 'MAY-06', 'A', 2006, 6349227.19, 6650278.03);
create table testtable2 ( ColxID number(10), Account number(10));
insert into testtable2 values(1, 300100);
insert into testtable2 values(2, 300200);
insert into testtable2 values(3, 300300);
insert into testtable2 values(4, 300400);
insert into testtable2 values(5, 300500);
create table testtable3 ( ColyID number(10), Status varchar2(1));
insert into testtable3 values(1000, 'C');
insert into testtable3 values(1001, 'O');
insert into testtable3 values(1002, 'C');
My SQL:
select t1.periodyear
, SUM(Case When (t1.Flag = 'B' and t3.Status != 'C') Then (NVL(t1.credit, 0) - (NVL(t1.debit,0))) Else 0 End) OVER (PARTITION BY t1.PeriodYear)
Annual_amount
, t1.flag
, t3.status
, t2.account
from testtable1 t1, testtable2 t2, testtable3 t3
where t1.colxid = t2.colxid
and t1.colyid = t3.colyid(+)
--and t1.Flag = 'B' and t3.Status != 'C'
Result:
PeriodYear ----- AnnualAmount ----- Flag ----- Status ----- Account
2006 ------------------ 43337 --------------- A ----------- C ---------- 300100
2006 ------------------ 43337 --------------- B ----------- O ---------- 300200
2006 ------------------ 43337 --------------- A ----------- C ---------- 300300
2006 ------------------ 43337 --------------- B ------------ ----------- 300400
2006 ------------------ 43337 --------------- A ------------ ----------- 300500
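The behaviour above follows from evaluation order: the CASE is evaluated per row inside the analytic SUM, so non-qualifying rows contribute 0 to the total but still appear in the output, while a WHERE clause removes those rows before the window is computed. A rough Python sketch of both orderings, using the net amounts (credit - debit) from the sample rows, with None standing in for the SQL NULL status (for which status != 'C' is not satisfied):

```python
# (flag, status, credit - debit) for the five sample rows
rows = [("A", "C", 7647668 - 7555523.71),
        ("B", "O", 156047 - 112710),
        ("A", "C", 22376.43 - 200.57),
        ("B", None, 53846 - 0),        # NULL status from the outer join
        ("A", None, 6650278.03 - 6349227.19)]

def qualifies(flag, status):
    # mimic SQL three-valued logic: NULL != 'C' does not evaluate to TRUE
    return flag == "B" and status is not None and status != "C"

# CASE inside SUM() OVER (): all 5 rows come back, each carrying the same total
case_total = sum(net if qualifies(f, s) else 0 for f, s, net in rows)
case_rows = [(f, s, case_total) for f, s, _ in rows]

# WHERE clause: rows are filtered first, so only 1 row comes back
filtered = [(f, s, net) for f, s, net in rows if qualifies(f, s)]
where_total = sum(net for _, _, net in filtered)

print(case_total, len(case_rows))  # same total, on 5 rows
print(where_total, len(filtered))  # same total, on 1 row
```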
With condition "t1.Flag = 'B' and t3.Status != 'C'" in where clause instead of in Case statement, Result is (which is desired)
PeriodYear ----- AnnualAmount ----- Flag ----- Status ----- Account
2006 ------------------ 43337 --------------- B ----------- O ---------- 300200 -
Analytical function SUM() OVER (PARTITION BY ) in Crosstab
I have been trying to resolve this for a very long time. I have an amount column that has to be grouped by Year, while all the other columns are grouped by month. I am trying to achieve this using the analytic function SUM(Case when (Condition1 and Condition2) then Sum(Amount) else 0 end) OVER (PARTITION BY Account, Year), where Account and Sub Account are the left-axis columns. The column displays the correct values, but on different rows. This is confusing.
For Ex: For Account 00001, there are 3 sub accounts 1000,2000,3000. For Sub account 3000, conditions 1 and 2 are satisfied, so it should display the Amount in the row corresponding to Sub account 3000, and 0 for remaining Sub Accounts. And the Total amount of all the sub accounts, which will be the same as amount for SubAccount 3000 should be displayed in the row corresponding to Account 00001.
But I get blank rows for 1000 and 3000 Sub accounts and Amount displayed in 2000 Sub account, and blank for Account 00001 also.
When I created the same workbook in Tabular form, the same amount is displayed for all the SubAccounts of a single Account.
When I used this CASE statement in TOAD, I figured that this is due to the Analytic function. When I use a group by clause as shown below instead of partition by, I get the results I need.
SELECT (Case when (Condition1 and Condition2) then Sum(Amount) else 0 end), Account, Sub Account FROM tables WHERE conditions GROUP BY Year, Account, Sub Account
But I cannot use GROUP BY for the whole SQL of the workbook, as I need the other columns with page item 'MONTH', not 'Year'.
Could somebody please help me with this?
Hi,
In your tabular form, do you get the correct total displayed against all your sub accounts and accounts? If this is correct, then you can use CASE to ensure that the total is displayed only for the single account.
Once you have the correct totals working in a tabular form it is easier to re-produce what you want in a cross-tab.
Rod West -
HTMLDB 1.6 and "order by" in analytic functions
In HTMLDB 1.6 on Oracle 10g, when I enter the string "order by" in the region source of a report of type "SQL query (PL/SQL function body returning SQL query)", I get
1 error has occurred
* Your query can't include an "ORDER BY" clause when having column heading sorting enabled.
I understand the reason for this error, but unfortunately i need this for an analytic function:
row_number() over (partition by ... order by ...)
It seems that the check is performed by simply looking for the string "order by" in the "region source" (in fact the error fires even if that string is contained within a comment).
I know possible workarounds (eg creating a view and select'ing from it), i just wanted to let you know.
Regards
Alberto
Another one under the 'obvious route' category:
Seems that the ORDER BY check is apparently for ORDER<space>BY... so simply adding extra whitespace between ORDER and BY bypasses the check (at least in 2.1.0.00.39).
To make it a bit more obvious that a separation is intended, an empty comment, i.e. ORDER/**/BY, works nicely.
Edited by: mcstock on Nov 19, 2008 10:29 AM -
Understanding row_number() and using it in an analytic function
Dear all;
I have been playing around with ROW_NUMBER, trying to understand how to use it, and yet I still can't figure it out...
I have the following code below
create table Employee(
ID VARCHAR2(4 BYTE) NOT NULL,
First_Name VARCHAR2(10 BYTE),
Last_Name VARCHAR2(10 BYTE),
Start_Date DATE,
End_Date DATE,
Salary Number(8,2),
City VARCHAR2(10 BYTE),
Description VARCHAR2(15 BYTE)
);
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('01','Jason', 'Martin', to_date('19960725','YYYYMMDD'), to_date('20060725','YYYYMMDD'), 1234.56, 'Toronto', 'Programmer');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values('02','Alison', 'Mathews', to_date('19760321','YYYYMMDD'), to_date('19860221','YYYYMMDD'), 6661.78, 'Vancouver','Tester');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values('03','James', 'Smith', to_date('19781212','YYYYMMDD'), to_date('19900315','YYYYMMDD'), 6544.78, 'Vancouver','Tester');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values('04','Celia', 'Rice', to_date('19821024','YYYYMMDD'), to_date('19990421','YYYYMMDD'), 2344.78, 'Vancouver','Manager');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values('05','Robert', 'Black', to_date('19840115','YYYYMMDD'), to_date('19980808','YYYYMMDD'), 2334.78, 'Vancouver','Tester');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values('06','Linda', 'Green', to_date('19870730','YYYYMMDD'), to_date('19960104','YYYYMMDD'), 4322.78,'New York', 'Tester');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values('07','David', 'Larry', to_date('19901231','YYYYMMDD'), to_date('19980212','YYYYMMDD'), 7897.78,'New York', 'Manager');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values('08','James', 'Cat', to_date('19960917','YYYYMMDD'), to_date('20020415','YYYYMMDD'), 1232.78,'Vancouver', 'Tester');
I did a simple select statement
select * from Employee e
and it returns this below
ID FIRST_NAME LAST_NAME START_DAT END_DATE SALARY CITY DESCRIPTION
01 Jason Martin 25-JUL-96 25-JUL-06 1234.56 Toronto Programmer
02 Alison Mathews 21-MAR-76 21-FEB-86 6661.78 Vancouver Tester
03 James Smith 12-DEC-78 15-MAR-90 6544.78 Vancouver Tester
04 Celia Rice 24-OCT-82 21-APR-99 2344.78 Vancouver Manager
05 Robert Black 15-JAN-84 08-AUG-98 2334.78 Vancouver Tester
06 Linda Green 30-JUL-87 04-JAN-96 4322.78 New York Tester
07 David Larry 31-DEC-90 12-FEB-98 7897.78 New York Manager
08 James Cat 17-SEP-96 15-APR-02 1232.78 Vancouver Tester
I wrote another select statement with ROW_NUMBER; see below.
SELECT first_name, last_name, salary, city, description, id,
ROW_NUMBER() OVER(PARTITION BY description ORDER BY city desc) "Test#"
FROM employee
and I get this result
First_name last_name Salary City Description ID Test#
Celina Rice 2344.78 Vancouver Manager 04 1
David Larry 7897.78 New York Manager 07 2
Jason Martin 1234.56 Toronto Programmer 01 1
Alison Mathews 6661.78 Vancouver Tester 02 1
James Cat 1232.78 Vancouver Tester 08 2
Robert Black 2334.78 Vancouver Tester 05 3
James Smith 6544.78 Vancouver Tester 03 4
Linda Green 4322.78 New York Tester 06 5
I understand the PARTITION BY: for each group a unique number is assigned to each row within that group. So in this case, since Tester is one group, Manager another, and Programmer another, each group gets its own numbering.
What is throwing me off is the ORDER BY and how the numbers are assigned. Why is
1 assigned to Alison Mathews for the tester group and 2 assigned to James Cat and 3 assigned Robert Black
I apologize if this is a stupid question; I have tried reading about it online and looking at the Oracle documentation, but I still don't fully understand why.
user13328581 wrote:
understanding row_number() and using it in an analytic function
ROW_NUMBER() IS an analytic function. Are you trying to use the results of ROW_NUMBER in another analytic function? If so, you need a sub-query. Analytic functions can't be nested within other analytic functions.
...I have the following code below
... I did a simple select statement
Thanks for posting all that! It's really helpful.
... and I get this result
First_name last_name Salary City Description ID Test#
Celina Rice 2344.78 Vancouver Manager 04 1
David Larry 7897.78 New York Manager 07 2
Jason Martin 1234.56 Toronto Programmer 01 1
Alison Mathews 6661.78 Vancouver Tester 02 1
James Cat 1232.78 Vancouver Tester 08 2
Robert Black 2334.78 Vancouver Tester 05 3
James Smith 6544.78 Vancouver Tester 03 4
Linda Green 4322.78 New York Tester 06 5
... What is throwing me off is the ORDER BY and how the numbers are assigned. Why is
1 assigned to Alison Mathews for the Tester group, 2 assigned to James Cat, and 3 assigned to Robert Black?
That's determined by the analytic ORDER BY clause. You said "ORDER BY city desc", so a row where city='Vancouver' will get a lower number than one where city='New York', since 'Vancouver' comes after 'New York' in alphabetic order.
If you have several rows that all have the same city, then you can be sure that ROW_NUMBER will assign them consecutive numbers, but it's arbitrary which one of them will be lowest and which highest. For example, you have 5 'Tester's: 4 from Vancouver and 1 from New York. There's no particular reason why the one with first_name='Alison' got assigned #1 and 'James' got #2. If you run the same query again, without changing the table at all, then 'Robert' might be #1. It's certain that the 4 Vancouver rows will be assigned numbers 1 through 4, but there's no way of telling which of those 4 rows will get which of those 4 numbers.
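That tie behaviour can be seen in a small Python sketch of ROW_NUMBER() OVER (PARTITION BY description ORDER BY city DESC). Note one difference: Python's sort is stable, so ties keep input order here, whereas Oracle makes no such promise:

```python
from itertools import groupby

# (description, city, name) for the sample employees
rows = [("Tester", "Vancouver", "Alison Mathews"),
        ("Tester", "Vancouver", "James Cat"),
        ("Tester", "Vancouver", "Robert Black"),
        ("Tester", "Vancouver", "James Smith"),
        ("Tester", "New York", "Linda Green"),
        ("Manager", "Vancouver", "Celia Rice"),
        ("Manager", "New York", "David Larry"),
        ("Programmer", "Toronto", "Jason Martin")]

result = {}
for desc, grp in groupby(sorted(rows, key=lambda r: r[0]), key=lambda r: r[0]):
    # ORDER BY city DESC within each partition; ties are arbitrary in Oracle
    for n, (_, city, name) in enumerate(sorted(grp, key=lambda r: r[1], reverse=True), 1):
        result[name] = n

print(result["Linda Green"])  # 5: the only New York Tester must come last
print(result["Celia Rice"])   # 1: 'Vancouver' sorts before 'New York' descending
```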
Similar to a query's ORDER BY clause, the analytic ORDER BY clause can have two or more expressions. The N-th one will only be considered if there was a tie for all (N-1) earlier ones. For example, "ORDER BY city DESC, last_name, first_name" would mean 'Vancouver' comes before 'New York', but, if multiple rows all have city='Vancouver', last_name would determine the order: 'Black' would get a lower number than 'Cat'. If you had multiple rows with city='Vancouver' and last_name='Black', then the order would be determined by first_name. -
Problem with SUM () analytic function
Dear all,
Please have a look at my problem.
SELECT CURR, DT, AMT, RATE,
SUM(AMT) OVER (PARTITION BY CURR ORDER BY DT) SUMOVER,
sum( amt * rate) over (PARTITION BY CURR ORDER BY DT) / SUM(AMT) OVER (PARTITION BY CURR ORDER BY DT) avgrt
FROM
(
select 'CHF' CURR, ADD_MONTHS(TO_DATE('01-DEC-07','DD-MON-RR'), LEVEL - 1) DT, 100 * LEVEL AMT, 1 + (5 * LEVEL / 100) RATE
FROM DUAL CONNECT BY LEVEL < 10
)
SQL> /
CUR DT AMT RATE SUMOVER AVGRT
CHF 01-DEC-07 100 1.05 100 1.05
CHF 01-JAN-08 200 1.1 300 1.08333333
CHF 01-FEB-08 300 1.15 600 1.11666667
CHF 01-MAR-08 400 1.2 1000 1.15
CHF 01-APR-08 500 1.25 1500 1.18333333
CHF 01-MAY-08 600 1.3 2100 1.21666667
CHF 01-JUN-08 700 1.35 2800 1.25
CHF 01-JUL-08 800 1.4 3600 1.28333333
CHF 01-AUG-08 900 1.45 4500 1.31666667
Table Revaluation
select 'CHF' CURR1, '31-DEC-07' DT , 1.08 RATE FROM DUAL UNION ALL
select 'CHF' CURR1, '31-MAR-08' DT , 1.22 RATE FROM DUAL UNION ALL
select 'CHF' CURR1, '30-JUN-08' DT , 1.38 RATE FROM DUAL
CUR DT RATE
CHF 31-DEC-07 1.08
CHF 31-MAR-08 1.22
CHF 30-JUN-08 1.38
Problem is with the calculation of average rate.
I want to consider the data in the revaluation table to be used in the calculation of
average rate.
So average rate for Jan-08 will be
(100 * 1.08(dec revaluation rate) + 200 * 1.1 ) / (300) = 1.093333333
for Feb-08
(100 * 1.08(dec revaluation rate) + 200 * 1.1 + 300 * 1.15) / (600) = 1.121666667
for mar-08
(100 * 1.08(dec revaluation rate) + 200 * 1.1 + 300 * 1.15 + 400 * 1.2) / (1000) = 1.153
for Apr-08
(1000 * 1.22(Apr revaluation rate) + 500 * 1.25) /1500 = 1.23
for May-08
(1000 * 1.22(Apr revaluation rate) + 500 * 1.25 + 600 * 1.30 ) /2100 = 1.25
and so on..
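The arithmetic being asked for can be sketched in Python: amounts dated on or before the latest revaluation date are re-priced at that revaluation rate, while everything later keeps its own rate (the month labels here are shorthand for the dates above):

```python
# amounts and original monthly rates from the sample data
amts  = {"Dec": 100, "Jan": 200, "Feb": 300, "Mar": 400, "Apr": 500}
rates = {"Dec": 1.05, "Jan": 1.10, "Feb": 1.15, "Mar": 1.20, "Apr": 1.25}

def avg_rate(months, repriced, reval_rate):
    # months in `repriced` use the revaluation rate, the rest their own rate
    num = sum(amts[m] * (reval_rate if m in repriced else rates[m]) for m in months)
    return num / sum(amts[m] for m in months)

jan = avg_rate(["Dec", "Jan"], {"Dec"}, 1.08)
mar = avg_rate(["Dec", "Jan", "Feb", "Mar"], {"Dec"}, 1.08)
apr = avg_rate(["Dec", "Jan", "Feb", "Mar", "Apr"], {"Dec", "Jan", "Feb", "Mar"}, 1.22)
print(round(jan, 9), round(mar, 3), round(apr, 2))
```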
Kindly advise.
Hi,
The main thing in this problem is that for every dt you want to compute the cumulative total from previous rows using the formula
SUM (amt * rate)
But rate can be either the rate from the revaluation table or the rate from the main table. For evaluating prior dates, you want to use the most recent rate.
I'm not sure if you can do this using analytic functions. Like Damorgan said, you should use a self-join.
The query below gives you the results you requested:
WITH
revaluation AS
(
SELECT 'CHF' curr1, TO_DATE ('31-DEC-07', 'DD-MON-RR') dt, 1.08 rate FROM dual UNION ALL
SELECT 'CHF' curr1, TO_DATE ('31-MAR-08', 'DD-MON-RR') dt, 1.22 rate FROM dual UNION ALL
SELECT 'CHF' curr1, TO_DATE ('30-JUN-08', 'DD-MON-RR') dt, 1.38 rate FROM dual
),
original_data AS
(
select 'CHF' curr
, ADD_MONTHS (TO_DATE ('01-DEC-07', 'DD-MON-RR'), LEVEL - 1) dt
, 100 * LEVEL amt
, 1 + (5 * LEVEL / 100) rate
FROM dual
CONNECT BY LEVEL < 10
),
two_rates AS
(
SELECT od.*
, (
SELECT MAX (dt)
FROM revaluation
WHERE curr1 = od.curr
AND dt <= od.dt
) AS r_dt
, (
SELECT AVG (rate) KEEP (DENSE_RANK LAST ORDER BY dt)
FROM revaluation
WHERE curr1 = od.curr
AND dt <= od.dt
) AS r_rate
FROM original_data od
)
SELECT c.curr
, c.dt
, c.amt
, c.rate
, SUM (p.amt) AS sumover
, SUM ( p.amt
* CASE
WHEN p.dt <= c.r_dt
THEN c.r_rate
ELSE p.rate
END
)
/ SUM (p.amt) AS avgrt
FROM two_rates c
JOIN original_data p ON c.curr = p.curr
AND c.dt >= p.dt
GROUP BY c.curr, c.dt, c.amt, c.rate
ORDER BY c.curr, c.dt
;
Order by in analytic functions
Hi All,
Please explain how the ORDER BY clause in an analytic function can vary the results. For example, I get different results for the SALSUM column if I order the query below by deptno vs. empno.
SELECT empno en,deptno dn,sal sal,SUM(sal) OVER (partition by deptno ORDER BY deptno) salsum FROM emp;
EN DN SAL SALSUM
10 1 100 525
20 1 200 525
50 1 225 525
60 2 125 275
30 2 150 275
40 3 250 250
SELECT empno en,deptno dn,sal sal,SUM(sal) OVER (partition by deptno ORDER BY empno) salsum
FROM emp;
EN DN SAL SALSUM
10 1 100 100
20 1 200 300
50 1 225 525
30 2 150 150
60 2 125 275
40 3 250 250
Hi,
SUM(sal) OVER (partition by deptno ORDER BY deptno)
In the first example you compute SUM(sal) department-wise, ordered by deptno. Within a partition every row has the same deptno, so the running window always covers the whole partition; hence the amount is the same across all the records of dept no 1, i.e. 525,
and for dept 2 = 275.
SUM(sal) OVER (partition by deptno ORDER BY empno)
In the second scenario, it sums up the amounts partitioned by dept no, but in ascending order of employee id.
e.g for dept 1 there are 3 employess 10,20,50 arranged in ascending order
so the sumsal for emp no 10 = 100
for 10 & 20 = 100+200 = 300
& for 10 ,20, 50 = 100+200+225 = 525
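The running-sum bookkeeping described above is easy to mimic in Python; this is a sketch of SUM(sal) OVER (PARTITION BY deptno ORDER BY empno), using the sample rows:

```python
# (empno, deptno, sal) from the sample output
rows = [(10, 1, 100), (20, 1, 200), (50, 1, 225),
        (30, 2, 150), (60, 2, 125), (40, 3, 250)]

salsum = {}
running = {}
# walk each partition (deptno) in empno order, accumulating as we go
for empno, dept, sal in sorted(rows, key=lambda r: (r[1], r[0])):
    running[dept] = running.get(dept, 0) + sal
    salsum[empno] = running[dept]

print(salsum)  # {10: 100, 20: 300, 50: 525, 30: 150, 60: 275, 40: 250}
```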
Hope this clarifies. -
Hi,
I have a query in SQL that generates percentage totals. I am having trouble replicating this code in BMM layer of the repository. I have created a new logical column, the sql query is below:
SELECT id, seq, asset_cost ,
CASE
WHEN asset_cost > 0
THEN ROUND(RATIO_TO_REPORT (
CASE
WHEN asset_cost > 0
THEN SUM (asset_cost)
END) OVER (partition BY id)*100)
END total
FROM test
GROUP BY id, seq, asset_cost
Can anyone help with replicating the above expression in the logical layer column?
*** how can i use the Ratio_to_report function in obiee
The above link shows a workaround
Are there any alternatives to 'RATIO_TO_REPORT' in OBIEE functions?
Thanks
Edited by: sliderrules on 16-May-2012 04:23
Hi,
I have just been through the Oracle documentation: 'RATIO_TO_REPORT' computes the ratio of a value to the sum of a set of values. For your requirement, what you could do is
1. Bring in the measure 'asset_cost' into the BMM with aggregation rule as sum. (I think you could include a condition here itself as asset_cost >0)
2. Create another measure with the 'Derived from another logical column as source' option chosen and the function as
EVALUATE('RATIO_TO_REPORT(%1) OVER (PARTITION BY %2)' AS DOUBLE, asset_cost,id)
The above function does the following steps:
EVALUATE will send the analytic function to the database.
SUM(asset_cost) would be the first parameter
id would be the second parameter.
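For reference, the arithmetic RATIO_TO_REPORT performs is simply each value divided by its partition's sum; a quick Python sketch with hypothetical id/asset_cost values:

```python
# (id, asset_cost) -- hypothetical rows
rows = [("01", 100.0), ("01", 300.0), ("02", 50.0), ("02", 150.0)]

# sum per partition (the "report")
totals = {}
for pid, cost in rows:
    totals[pid] = totals.get(pid, 0.0) + cost

# ratio of each value to its partition total, as a rounded percentage
ratios = [(pid, cost, round(cost / totals[pid] * 100)) for pid, cost in rows]
print(ratios)
```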
I might not be pretty good with the syntax here, but hope you could get it while implementing.
Hope this helps.
Thank you,
Dhar -
Hi,
I am using the SUM analytical function to accumulate some data from one record to the other record (data per month):
TPS_MOI_CODE PRD_PRD_CODE PDV_PDV_CODE RTTCAVCANV
200510 01 9302 -8050
200511 01 9302 -15500
200512 01 9302 -16150
200601 01 9302 -16150
200602 01 9302 -16150
200603 01 9302 -16150
The result is correct. However, I also want to restart the sum in January, i.e. every month contains the sum of all the previous months of that year, and the accumulation must restart in January.
How do I do that ?
Thanks in advance for your answers.
You should extract the year and use it as the partition in the OVER() clause, for example:
SQL> select * from t;
DATE# QTY
200510 1
200511 2
200512 3
200601 4
200602 5
200603 6
6 rows selected.
SQL> desc t;
Name Null? Type
DATE# NUMBER
QTY NUMBER
SQL> select date#, sum(qty) over(partition by substr(date#,1,4) order by date#) cum_sum
2 from t;
DATE# CUM_SUM
200510 1
200511 3
200512 6
200601 4
200602 9
200603 15
6 rows selected.
Rgds. -
Analytical function sum() ...for Till-date reporting
Hi,
I need help in forming an SQL with analytical function.
Here is my scenario:
create table a (name varchar2(10), qty_sold number,on_date date);
insert into a values ('abc',10,to_date('10-JAN-2007 00:01:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('abc',01,to_date('10-JUL-2007 00:01:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('abc',05,to_date('10-JUL-2007 08:11:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('abc',17,to_date('10-JUL-2007 09:11:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('def',10,to_date('10-JAN-2006 08:01:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('def',01,to_date('10-JUN-2006 10:01:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('def',05,to_date('10-JUL-2006 08:10:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('pqr',17,to_date('10-JUL-2006 09:11:00','DD-MON-YYYY HH24:MI:SS'));
Now I want to have a sql which displays the following:
NAME--TOTAL_QTY_SOLD_IN_LAST_10_DAYS, TOTAL_QTY_SOLD_IN_LAST_20_DAYS...etc
I know we can do it using SUM(qty_sold) OVER (ORDER BY on_date RANGE INTERVAL '10' DAY PRECEDING), but I get too many rows for each NAME: one for each date in table a. I want just one row for each NAME, and the sum should run up to SYSDATE.
Any help is highly appreciated.
Thanks.
SQL> select name
2 , sum(case when sysdate - on_date <= 10 then qty_sold end) total_qty_last_10_days
3 , sum(case when sysdate - on_date <= 100 then qty_sold end) total_qty_last_100_days
4 , sum(case when sysdate - on_date <= 500 then qty_sold end) total_qty_last_500_days
5 from a
6 group by name
7 /
NAME TOTAL_QTY_LAST_10_DAYS TOTAL_QTY_LAST_100_DAYS TOTAL_QTY_LAST_500_DAYS
abc 23 33
def 6
pqr 17
3 rows selected.
Regards,
Rob. -
Help using oracle syntax "SUM(col1) over (order by col2)" using ODI
Hi all
I want to load data from Oracle to Essbase using ODI. I know Oracle has the syntax SUM(col1) OVER (ORDER BY col2, col3), which can produce accumulated data, e.g.
Oracle data table
col1, col2, value
A 2009-1 10
A 2009-2 10
A 2009-3 10
And the essbase need
col1 col2 value
A 2009-1 10
A 2009-2 20
A 2009-3 30
However, after I try this in ODI, an error occurs:
org.apache.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 32, in ?
java.sql.SQLException: ORA-00979: not a GROUP BY expression
and the original SQL generated by ODI:
select 'HSP_InputValue' "HSP_Rates",MAP_KMDZ_TABLE.BUD_DYKM "Account",MAP_MONTH.ESS_MONTH "Period",MAP_YEAR.ESS_YEAR "Year",'Actual' "Scenario",'Draft' "Version",TEMP_LIRUN.CURRENCY "Currency",MAP_COMPANYCODE.ESS_COMPCODE "Entity",substr(MAP_KMDZ_TABLE.BUD_BUSINESSOBJECT,1,80) "BusinessObject",'Route_NoRoute' "Route",MAP_TRANSPORT.ESS_TRANSPORT "Transport",substr(MAP_KMDZ_TABLE.BUD_BUSINESSACTIVITY,1,80) "BusinessActivity",substr(MAP_KMDZ_TABLE.BUD_CHANNEL,1,80) "Source",'NoCounterparty' "Counterparty",sum(TEMP_LIRUN.DATAVALUE) over (order by MAP_KMDZ_TABLE.BUD_DYKM,MAP_YEAR.ESS_YEAR,MAP_MONTH.ESS_MONTH,TEMP_LIRUN.CURRENCY,MAP_COMPANYCODE.ESS_COMPCODE,MAP_TRANSPORT.ESS_TRANSPORT,MAP_KMDZ_TABLE.BUD_BUSINESSACTIVITY,MAP_KMDZ_TABLE.BUD_BUSINESSOBJECT,MAP_KMDZ_TABLE.BUD_CHANNEL) "Data" from ETL_DEV.TEMP_LIRUN TEMP_LIRUN, ETL_DEV.MAP_KMDZ_TABLE MAP_KMDZ_TABLE, ETL_DEV.MAP_MONTH MAP_MONTH, ETL_DEV.MAP_YEAR MAP_YEAR, ETL_DEV.MAP_COMPANYCODE MAP_COMPANYCODE, ETL_DEV.MAP_TRANSPORT MAP_TRANSPORT where (1=1) And (TEMP_LIRUN.COSTELMNT=MAP_KMDZ_TABLE.SAP_ZZKM)
AND (TEMP_LIRUN.FISCYEAR=MAP_YEAR.SAP_YEAR)
AND (TEMP_LIRUN.FISCPER3=MAP_MONTH.SAP_MONTH)
AND (TEMP_LIRUN.COMP_CODE=MAP_COMPANYCODE.SAP_COMPCODE)
AND (TEMP_LIRUN.WWHC=MAP_TRANSPORT.SAP_WWHC) Group By MAP_KMDZ_TABLE.BUD_DYKM,
MAP_MONTH.ESS_MONTH,
MAP_YEAR.ESS_YEAR,
TEMP_LIRUN.CURRENCY,
MAP_COMPANYCODE.ESS_COMPCODE,
substr(MAP_KMDZ_TABLE.BUD_BUSINESSOBJECT,1,80),
MAP_TRANSPORT.ESS_TRANSPORT,
substr(MAP_KMDZ_TABLE.BUD_BUSINESSACTIVITY,1,80),
substr(MAP_KMDZ_TABLE.BUD_CHANNEL,1,80)
I know ODI thinks SUM ... OVER must be accompanied by GROUP BY; however, it must not be! How can I solve this problem?
Thank All for your attention
SOS!
Ethan
Hi Ethan,
In my experience I have faced a similar kind of situation.
Two workarounds:
1. Write a procedure and execute it using an ODI procedure step.
2. Customize a KM and use that KM in your interface.
I guess in your query the GROUP BY clause is not needed (if this is the case, you can achieve this with a simple customization step in the KM).
for example : your current KM will generate a query like this:-
select x,y, sum(x) over (order by y) as sumx FROM TestTable group by x, y
and you need a query like this
select x,y, sum(x) over (order by y) as sumx FROM TestTable
go to your KM (duplicate the KM which you are using and rename it, e.g. with a _withoutGroup suffix)
remove the group by function from select query
(remove the API function <%=snpRef.getGrpBy()%> from insert into i$ table step)
please let me know if you need more help on this
regards,
Rathish -
COUNT(DISTINCT) WITH ORDER BY in an analytic function
-- I create a table with three fields: Name, Amount, and a Trans_Date.
CREATE TABLE TEST
(
NAME VARCHAR2(19) NULL,
AMOUNT VARCHAR2(8) NULL,
TRANS_DATE DATE NULL
);
-- I insert a few rows into my table:
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '21', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '68', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/06/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '77', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '221', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '73', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
commit;
/* I want the distinct count of AMOUNT for every row, as an analytic COUNT(DISTINCT AMOUNT) partitioned by NAME and ordered by TRANS_DATE, where each row only looks at the last four trans_dates (e.g., for the row "Anna 110 6/5/2005 8:00:00.000 PM," only the dates from 6/2/2005 to 6/5/2005 count, giving the number of distinct amounts for Anna in that span). Note: I cannot use the DISTINCT keyword in this query because it doesn't work with ORDER BY in an analytic function. */
select NAME, AMOUNT, TRANS_DATE, COUNT(/*DISTINCT*/ AMOUNT) over ( partition by NAME
order by TRANS_DATE range between numtodsinterval(3,'day') preceding and current row ) as COUNT_AMOUNT
from TEST t;
This is the results I get if I just count all the AMOUNT without using distinct:
NAME AMOUNT TRANS_DATE COUNT_AMOUNT
Anna 110 6/1/2005 8:00:00.000 PM 2
Anna 20 6/1/2005 8:00:00.000 PM 2
Anna 110 6/2/2005 8:00:00.000 PM 3
Anna 21 6/3/2005 8:00:00.000 PM 4
Anna 68 6/4/2005 8:00:00.000 PM 5
Anna 110 6/5/2005 8:00:00.000 PM 4
Anna 20 6/6/2005 8:00:00.000 PM 4
Bill 43 6/1/2005 8:00:00.000 PM 1
Bill 77 6/2/2005 8:00:00.000 PM 2
Bill 221 6/3/2005 8:00:00.000 PM 3
Bill 43 6/4/2005 8:00:00.000 PM 4
Bill 73 6/5/2005 8:00:00.000 PM 4
The COUNT_DISTINCT_AMOUNT is the desired output:
NAME AMOUNT TRANS_DATE COUNT_DISTINCT_AMOUNT
Anna 110 6/1/2005 8:00:00.000 PM 1
Anna 20 6/1/2005 8:00:00.000 PM 2
Anna 110 6/2/2005 8:00:00.000 PM 2
Anna 21 6/3/2005 8:00:00.000 PM 3
Anna 68 6/4/2005 8:00:00.000 PM 4
Anna 110 6/5/2005 8:00:00.000 PM 3
Anna 20 6/6/2005 8:00:00.000 PM 4
Bill 43 6/1/2005 8:00:00.000 PM 1
Bill 77 6/2/2005 8:00:00.000 PM 2
Bill 221 6/3/2005 8:00:00.000 PM 3
Bill 43 6/4/2005 8:00:00.000 PM 3
Bill 73 6/5/2005 8:00:00.000 PM 4
Thanks in advance.
You can try to write your own UDAG (user-defined aggregate).
Here is a simplified example, just to show how it could work; it handles only 1, 2, 4, 8, 16 and 32 as potential values.
create or replace type CountDistinctType as object
(
bitor_number number,
static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType)
return number,
member function ODCIAggregateIterate(self IN OUT CountDistinctType,
value IN number) return number,
member function ODCIAggregateTerminate(self IN CountDistinctType,
returnValue OUT number, flags IN number) return number,
member function ODCIAggregateMerge(self IN OUT CountDistinctType,
ctx2 IN CountDistinctType) return number
);
/
create or replace type body CountDistinctType is
static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType)
return number is
begin
sctx := CountDistinctType('');
return ODCIConst.Success;
end;
member function ODCIAggregateIterate(self IN OUT CountDistinctType, value IN number)
return number is
begin
if (self.bitor_number is null) then
self.bitor_number := value;
else
self.bitor_number := self.bitor_number+value-bitand(self.bitor_number,value);
end if;
return ODCIConst.Success;
end;
member function ODCIAggregateTerminate(self IN CountDistinctType, returnValue OUT
number, flags IN number) return number is
begin
returnValue := 0;
for i in 0..log(2,self.bitor_number) loop
if (bitand(power(2,i),self.bitor_number)!=0) then
returnValue := returnValue+1;
end if;
end loop;
return ODCIConst.Success;
end;
member function ODCIAggregateMerge(self IN OUT CountDistinctType, ctx2 IN
CountDistinctType) return number is
begin
-- OR the two bitmaps together so that parallel slaves merge correctly
self.bitor_number := nvl(self.bitor_number, 0) + nvl(ctx2.bitor_number, 0)
- bitand(nvl(self.bitor_number, 0), nvl(ctx2.bitor_number, 0));
return ODCIConst.Success;
end;
end;
/
CREATE or REPLACE FUNCTION CountDistinct (n number) RETURN number
PARALLEL_ENABLE AGGREGATE USING CountDistinctType;
drop table t;
create table t as select rownum r, power(2,trunc(dbms_random.value(0,6))) p from all_objects;
SQL> select r,p,countdistinct(p) over (order by r) d from t where rownum<10 order by r;
R P D
1 4 1
2 1 2
3 8 3
4 32 4
5 1 4
6 16 5
7 16 5
8 4 5
9 4 5
Buy some good book if you want to start writing your own "distinct" algorithm.
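The iterate step above leans on the arithmetic identity a + b - BITAND(a, b) = BITOR(a, b), since Oracle has BITAND but no native BITOR; with values restricted to powers of two, the accumulated number is a bitmap of the values seen so far, and the terminate step just counts its set bits. A quick Python check of the identity and of the bitmap count (values taken from the demo query's P column above):

```python
# Oracle has BITAND but no BITOR; the UDAG emulates OR arithmetically:
# a + b - (a & b) == a | b for any non-negative integers a, b.
def bitor_via_arith(a, b):
    return a + b - (a & b)

# Exhaustive check of the identity over a small range
assert all(bitor_via_arith(a, b) == a | b
           for a in range(64) for b in range(64))

# With powers of two, OR-ing the values builds a bitmap of what was
# seen; the population count of that bitmap is the distinct count.
seen = 0
for v in [4, 1, 8, 32, 1, 16, 16, 4, 4]:  # the P column, rows 1..9
    seen = bitor_via_arith(seen, v)
print(bin(seen).count("1"))  # -> 5, matching D on the last row above
```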
Message was edited by:
Laurent Schneider
a simpler but memory killer algorithm would use a plsql table in an udag and do the count(distinct) over that table to return the value -
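That "memory killer" variant (keep the window's raw values and count distinct over them directly) is easy to sketch in plain Python against the TEST data from the question. Note that with RANGE semantics the two 6/1 peer rows share one frame, so both get 2 rather than the 1, 2 shown in the "desired" output:

```python
from datetime import date

# Rows from the TEST table in the question: (name, amount, trans_date)
rows = [
    ("Anna", 110, date(2005, 6, 1)), ("Anna", 20, date(2005, 6, 1)),
    ("Anna", 110, date(2005, 6, 2)), ("Anna", 21, date(2005, 6, 3)),
    ("Anna", 68, date(2005, 6, 4)), ("Anna", 110, date(2005, 6, 5)),
    ("Anna", 20, date(2005, 6, 6)),
]

def rolling_distinct(rows, days=3):
    # Mirrors COUNT(DISTINCT amount) OVER (PARTITION BY name ORDER BY
    # trans_date RANGE numtodsinterval(3,'day') PRECEDING): for each row,
    # collect the window's amounts into a set and count the set.
    out = []
    for name, amount, d in rows:
        window = {a for n, a, d2 in rows
                  if n == name and 0 <= (d - d2).days <= days}
        out.append((name, amount, d, len(window)))
    return out

print([r[3] for r in rolling_distinct(rows)])  # -> [2, 2, 2, 3, 4, 3, 4]
```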
Running Sum without analytic function
Hi
I have data like below
Create table Test (Name Varchar(30),M Int, Y Int, Val Int);
Insert into Test Values ('A',1,2011,2);
Insert into Test Values ('A',2,2011,2);
Insert into Test Values ('A',3,2011,2);
Insert into Test Values ('A',4,2011,2);
Insert into Test Values ('A',5,2011,2);
Insert into Test Values ('A',6,2011,2);
Insert into Test Values ('A',7,2011,2);
Insert into Test Values ('A',8,2011,2);
Insert into Test Values ('A',9,2011,2);
Insert into Test Values ('A',10,2011,2);
Insert into Test Values ('A',11,2011,2);
Insert into Test Values ('A',12,2011,2);
Insert into Test Values ('A',1,2012,2);
Insert into Test Values ('A',2,2012,2);
Insert into Test Values ('A',3,2012,2);
Insert into Test Values ('A',4,2012,2);
Insert into Test Values ('A',5,2012,2);
Insert into Test Values ('A',6,2012,2);
Insert into Test Values ('A',7,2012,2);
Now, based on the above data, I need to calculate a running sum over the past 18 months. The condition is that I cannot use analytic functions or Oracle-specific SQL functions (for portability).
I tried the following SQL but it didn't work:
select Name,rnk, SUM(val) from (
SELECT a.Name,a.m,a.Y,b.val, count(*) rnk
from Test a, Test b
where (a.Name=b.Name and (a.M <= b.M and a.Y<= b.Y))
group by a.Name,a.Y,a.m
order by a.Name,a.Y,a.m
) abc
group By Name,rnk
Order by Name,rnk
Can someone give a suggestion?
Hi,
I don't see what your query or your desired results have to do with the last 18 months. Is the task here to show for a given month (July, 2012, for example) the total of the 18 months ending in that month (February, 2011 through July, 2012 in this case) for the same name? If so:
SELECT c.name, c.y, c.m
, SUM (p.val) AS running_total
FROM test c
JOIN test p ON ( ((12 * c.y) + c.m)
- ((12 * p.y) + p.m)
) BETWEEN 0 AND 17
GROUP BY c.name, c.y, c.m
ORDER BY c.name, c.y, c.m
;
Output:
NAME Y M RUNNING_TOTAL
A 2011 1 2
A 2011 2 4
A 2011 3 6
A 2011 4 8
A 2011 5 10
A 2011 6 12
A 2011 7 14
A 2011 8 16
A 2011 9 18
A 2011 10 20
A 2011 11 22
A 2011 12 24
A 2012 1 26
A 2012 2 28
A 2012 3 30
A 2012 4 32
A 2012 5 34
A 2012 6 36
A 2012 7 36
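The key to the portable query above is the month arithmetic: 12*y + m turns a (year, month) pair into a single linear month index, so a difference between 0 and 17 selects exactly the 18 calendar months ending at the current row. The same logic in Python, against the data from the question:

```python
# (name, m, y, val) rows, as inserted in the question: Jan 2011 - Jul 2012
rows = [("A", m, 2011, 2) for m in range(1, 13)] + \
       [("A", m, 2012, 2) for m in range(1, 8)]

def running_18_months(rows):
    # 12*y + m is a linear month index; a difference of 0..17 means
    # "within the 18 calendar months ending at the current month".
    out = []
    for name, m, y, _ in rows:
        cur = 12 * y + m
        total = sum(val for n, pm, py, val in rows
                    if n == name and 0 <= cur - (12 * py + pm) <= 17)
        out.append((name, y, m, total))
    return out

print(running_18_months(rows)[0])   # -> ('A', 2011, 1, 2)
print(running_18_months(rows)[-1])  # -> ('A', 2012, 7, 36)
```

July 2012 covers February 2011 through July 2012 (18 months of val 2), giving 36, which matches the last row of the output above.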