Max date in analytic function
I have records that have repeating load dates.
I would like to pick the records that have the maximum load_dt.
My source data looks like this -
( select 60589 as C_number, to_date('01/08/2012','DD/MM/YYYY') as load_dt from dual union all
select 60768, to_date('01/08/2012','DD/MM/YYYY') from dual union all
select 60888, to_date('01/08/2012','DD/MM/YYYY') from dual union all
select 12345, to_date('01/09/2012','DD/MM/YYYY') from dual union all
select 54321, to_date('01/09/2012','DD/MM/YYYY') from dual union all
select 66666, to_date('01/10/2012','DD/MM/YYYY') from dual union all
select 55555, to_date('01/10/2012','DD/MM/YYYY') from dual)
I would like to pick records with the max load_dt, which means:
C_number load_dt
66666 01-Oct-12
55555 01-Oct-12
I have written an Oracle analytic function but it's not working the way it should -
My query looks like this -
select a.*
from
(
select
c_number,
load_dt,
max(load_dt) over (partition by load_dt) as mx_dt
from table_name
) a
where
load_dt = mx_dt;
It returns all the rows for some reason.
Any help or guidance is highly appreciated
PJ
without analytical..
with mydata as
( select 60589 as C_number, to_date('01/08/2012','DD/MM/YYYY') as load_dt from dual union all
select 60768, to_date('01/08/2012','DD/MM/YYYY') from dual union all
select 60888, to_date('01/08/2012','DD/MM/YYYY') from dual union all
select 12345, to_date('01/09/2012','DD/MM/YYYY') from dual union all
select 54321, to_date('01/09/2012','DD/MM/YYYY') from dual union all
select 66666, to_date('01/10/2012','DD/MM/YYYY') from dual union all
select 55555, to_date('01/10/2012','DD/MM/YYYY') from dual)
select *
from mydata
where load_dt = (select max(load_dt) from mydata);
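For completeness, a hedged sketch of the analytic version the original poster was aiming for. Partitioning by load_dt makes mx_dt equal to load_dt on every row, so the filter keeps everything. An empty OVER () computes the max across the whole result, and the inline view is needed because an analytic column cannot be referenced in the WHERE clause of the same query block (table and column names taken from the question):

```sql
select c_number, load_dt
from (
  select c_number,
         load_dt,
         max(load_dt) over () as mx_dt   -- empty OVER (): max over all rows
  from table_name
)
where load_dt = mx_dt;
```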
Similar Messages
-
Moving sum using date intervals - analytic functions help
let's say you have the following set of data:
DATE SALES
09/02/2012 100
09/02/2012 50
09/02/2012 10
09/02/2012 1000
09/02/2012 20
12/02/2012 1000
12/02/2012 1100
14/02/2012 1000
14/02/2012 100
15/02/2012 112500
15/02/2012 13500
15/02/2012 45000
15/02/2012 1500
19/02/2012 1500
20/02/2012 400
23/02/2012 2000
27/02/2012 4320
27/02/2012 300000
01/03/2012 100
04/03/2012 17280
06/03/2012 100
06/03/2012 100
06/03/2012 4320
08/03/2012 100
13/03/2012 1000
For each day I need to know the sum of the sales for the present day and the preceding 5 calendar days [not five rows].
What query could I use?
Please help!
Hi.
Here's one way.
WITH data AS
(
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 50 n FROM DUAL UNION ALL
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 10 n FROM DUAL UNION ALL
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 1000 n FROM DUAL UNION ALL
SELECT TO_DATE('09/02/2012','DD/MM/YYYY') d, 20 n FROM DUAL UNION ALL
SELECT TO_DATE('12/02/2012','DD/MM/YYYY') d, 1000 n FROM DUAL UNION ALL
SELECT TO_DATE('12/02/2012','DD/MM/YYYY') d, 1100 n FROM DUAL UNION ALL
SELECT TO_DATE('14/02/2012','DD/MM/YYYY') d, 1000 n FROM DUAL UNION ALL
SELECT TO_DATE('14/02/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d, 112500 n FROM DUAL UNION ALL
SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d, 13500 n FROM DUAL UNION ALL
SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d, 45000 n FROM DUAL UNION ALL
SELECT TO_DATE('15/02/2012','DD/MM/YYYY') d, 1500 n FROM DUAL UNION ALL
SELECT TO_DATE('19/02/2012','DD/MM/YYYY') d, 1500 n FROM DUAL UNION ALL
SELECT TO_DATE('20/02/2012','DD/MM/YYYY') d, 400 n FROM DUAL UNION ALL
SELECT TO_DATE('23/02/2012','DD/MM/YYYY') d, 2000 n FROM DUAL UNION ALL
SELECT TO_DATE('27/02/2012','DD/MM/YYYY') d, 4320 n FROM DUAL UNION ALL
SELECT TO_DATE('27/02/2012','DD/MM/YYYY') d, 300000 n FROM DUAL UNION ALL
SELECT TO_DATE('01/03/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('04/03/2012','DD/MM/YYYY') d, 17280 n FROM DUAL UNION ALL
SELECT TO_DATE('06/03/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('06/03/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('06/03/2012','DD/MM/YYYY') d, 4320 n FROM DUAL UNION ALL
SELECT TO_DATE('08/03/2012','DD/MM/YYYY') d, 100 n FROM DUAL UNION ALL
SELECT TO_DATE('13/03/2012','DD/MM/YYYY') d, 1000 n FROM DUAL
),
days AS
(
SELECT TO_DATE('2012-02-01','YYYY-MM-DD')+(LEVEL-1) d
FROM DUAL
CONNECT BY LEVEL <= 60
),
totals_per_day AS
(
SELECT dy.d, SUM(NVL(dt.n,0)) total_day
FROM
data dt,
days dy
WHERE
dy.d = dt.d(+)
GROUP BY dy.d
)
SELECT
d,
SUM(total_day) OVER
(
ORDER BY d
RANGE BETWEEN 5 PRECEDING AND CURRENT ROW
) AS five_day_total
FROM totals_per_day;
2012-02-01 00:00:00 0
2012-02-02 00:00:00 0
2012-02-03 00:00:00 0
2012-02-04 00:00:00 0
2012-02-05 00:00:00 0
2012-02-06 00:00:00 0
2012-02-07 00:00:00 0
2012-02-08 00:00:00 0
2012-02-09 00:00:00 1180
2012-02-10 00:00:00 1180
2012-02-11 00:00:00 1180
2012-02-12 00:00:00 3280
2012-02-13 00:00:00 3280
2012-02-14 00:00:00 4380
2012-02-15 00:00:00 175700
2012-02-16 00:00:00 175700
2012-02-17 00:00:00 175700
2012-02-18 00:00:00 173600
2012-02-19 00:00:00 175100
2012-02-20 00:00:00 174400
2012-02-21 00:00:00 1900
2012-02-22 00:00:00 1900
2012-02-23 00:00:00 3900
2012-02-24 00:00:00 3900
2012-02-25 00:00:00 2400
2012-02-26 00:00:00 2000
2012-02-27 00:00:00 306320
2012-02-28 00:00:00 306320
2012-02-29 00:00:00 304320
2012-03-01 00:00:00 304420
2012-03-02 00:00:00 304420
2012-03-03 00:00:00 304420
2012-03-04 00:00:00 17380
2012-03-05 00:00:00 17380
2012-03-06 00:00:00 21900
2012-03-07 00:00:00 21800
2012-03-08 00:00:00 21900
2012-03-09 00:00:00 21900
2012-03-10 00:00:00 4620
2012-03-11 00:00:00 4620
2012-03-12 00:00:00 100
2012-03-13 00:00:00 1100
2012-03-14 00:00:00 1000
2012-03-15 00:00:00 1000
2012-03-16 00:00:00 1000
2012-03-17 00:00:00 1000
2012-03-18 00:00:00 1000
2012-03-19 00:00:00 0
2012-03-20 00:00:00 0
2012-03-21 00:00:00 0
2012-03-22 00:00:00 0
2012-03-23 00:00:00 0
2012-03-24 00:00:00 0
2012-03-25 00:00:00 0
2012-03-26 00:00:00 0
2012-03-27 00:00:00 0
2012-03-28 00:00:00 0
2012-03-29 00:00:00 0
2012-03-30 00:00:00 0
2012-03-31 00:00:00 0
Hope this helps.
Regards. -
Hi,
I have the following columns in a report:
Vendor Name
Trade date
Prod Desc
Order Type
Qty
Gross Amt
I need to display only those rows where trade date = max(trade date).
I am aware of the MAX analytic function, but it keeps giving me an error.
Could someone please let me know how to achieve this scenario.
Thanks.
Hi
As pupppethead mentioned, it doesn't look as though you need a PARTITION BY clause.
A straight MAX(Trade Date) OVER () = Trade Date should suffice. The OVER() tells Discoverer to look at all dates in the query. You will only need a PARTITION BY if you are using Page Items or Group Sorts and you wanted a different date in each set.
By the way, as a word of warning, be very careful not to name items in folders or workbooks using Oracle reserved words. Naming an item as Desc is dangerous because that is the name of the descending switch in an ORDER BY clause of an analytic function. Thus, if you have for example something called ITEM DESC and try to do this:
RANK() OVER(ORDER BY ITEM DESC) you will have serious problems because the database (not Discoverer) will think you want to sort ITEM descending. Either ITEM does not exist and you will get an error or it does exist and you will end up with the sort on the wrong item.
Imagine also if you named something ORDER BY and then place this in a PARTITION BY clause like this:
MAX(Trade Date) OVER (PARTITION BY ORDER BY)
Interesting conundrum, don't you think?
Best wishes
Michael -
Date ranges - possible to use analytic functions?
The next datastructure needs to be converted to a daterange datastructure.
START_DATE END_DATE AMMOUNT
01-01-2010 28-02-2010 10
01-02-2010 31-03-2010 20
01-03-2010 31-05-2010 30
01-09-2010 31-12-2010 40
Working solution:
with date_ranges
as ( select to_date('01-01-2010','dd-mm-yyyy') start_date
, to_date('28-02-2010','dd-mm-yyyy') end_date
, 10 ammount
from dual
union all
select to_date('01-02-2010','dd-mm-yyyy') start_date
, to_date('31-03-2010','dd-mm-yyyy') end_date
, 20 ammount
from dual
union all
select to_date('01-03-2010','dd-mm-yyyy') start_date
, to_date('31-05-2010','dd-mm-yyyy') end_date
, 30 ammount
from dual
union all
select to_date('01-09-2010','dd-mm-yyyy') start_date
, to_date('31-12-2010','dd-mm-yyyy') end_date
, 40 ammount
from dual
)
select rne.start_date
, lead (rne.start_date-1,1) over (order by rne.start_date) end_date
, ( select sum(dre2.ammount)
from date_ranges dre2
where rne.start_date >= dre2.start_date
and rne.start_date <= dre2.end_date
) range_ammount
from ( select dre.start_date
from date_ranges dre
union -- implicit distinct
select dre.end_date + 1
from date_ranges dre
) rne
order by rne.start_date
/
Output:
START_DATE END_DATE RANGE_AMMOUNT
01-01-2010 31-01-2010 10
01-02-2010 28-02-2010 30
01-03-2010 31-03-2010 50
01-04-2010 31-05-2010 30
01-06-2010 31-08-2010
01-09-2010 31-12-2010 40
01-01-2011
7 rows selected.
However, I would like to use an analytic function to calculate the range_ammount. Is this possible?
Edited by: user5909557 on Jul 29, 2010 6:19 AM
Hi,
Welcome to the forum!
Yes, you can replace the scalar sub-query with an analytic SUM, like this:
WITH change_data AS
(
SELECT start_date AS change_date
, ammount AS net_amount
FROM date_ranges
UNION
SELECT end_date + 1 AS change_date
, -ammount AS net_amount
FROM date_ranges
)
, got_range_amount AS
(
SELECT change_date AS start_date
, LEAD (change_date) OVER (ORDER BY change_date) - 1
AS end_date
, SUM (net_amount) OVER (ORDER BY change_date)
AS range_amount
FROM change_data
)
, got_grp AS
(
SELECT start_date
, end_date
, range_amount
, ROW_NUMBER () OVER ( ORDER BY start_date, end_date)
- ROW_NUMBER () OVER ( PARTITION BY range_amount
ORDER BY start_date, end_date
) AS grp
FROM got_range_amount
)
SELECT MIN (start_date) AS start_date
, MAX (end_date) AS end_date
, range_amount
FROM got_grp
GROUP BY grp
, range_amount
ORDER BY grp
;
This should be much more efficient.
The code is longer than what you posted. That's largely because it consolidates consecutive groups with the same amount.
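The consolidation step relies on the classic fixed-minus-variable ROW_NUMBER trick: within a run of consecutive rows sharing the same amount, the difference between the two row numbers is constant, so it can serve as a group key. A minimal, self-contained illustration (toy data, not from the thread):

```sql
WITH t AS
(
  SELECT 1 pos, 10 amt FROM dual UNION ALL
  SELECT 2,     10     FROM dual UNION ALL
  SELECT 3,     20     FROM dual UNION ALL
  SELECT 4,     10     FROM dual
)
SELECT pos, amt
     , ROW_NUMBER () OVER (ORDER BY pos)
     - ROW_NUMBER () OVER (PARTITION BY amt ORDER BY pos) AS grp
FROM t;
-- rows 1-2 (amt 10) share one grp value; row 3 (amt 20) and row 4 (amt 10) each get their own
```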
For example, if we add this row to the sample data:
union all
select to_date('02-01-2010','dd-mm-yyyy') start_date
, to_date('30-12-2010','dd-mm-yyyy') end_date
, 0 ammount
from dual
The query you posted produces:
START_DAT END_DATE RANGE_AMMOUNT
01-JAN-10 01-JAN-10 10
02-JAN-10 31-JAN-10 10
01-FEB-10 28-FEB-10 30
01-MAR-10 31-MAR-10 50
01-APR-10 31-MAY-10 30
01-JUN-10 31-AUG-10 0
01-SEP-10 30-DEC-10 40
31-DEC-10 31-DEC-10 40
01-JAN-11
I assume you only want a new row of output when the range_amount changes, that is:
START_DAT END_DATE RANGE_AMOUNT
01-JAN-10 31-JAN-10 10
01-FEB-10 28-FEB-10 30
01-MAR-10 31-MAR-10 50
01-APR-10 31-MAY-10 30
01-JUN-10 31-AUG-10 0
01-SEP-10 31-DEC-10 40
01-JAN-11 0Of course, you could modify the original query so that it did this, but it would end up about as complex as the query above, but less efficient.
Conversely, if you prefer the longer output, then you don't need the suib-query got_grp in the query above.
Thanks for posting the CREATE TABLE and INSERT statments; that's very helpful.
There are some people who have been using this forum for years who still have to be begged to do that. -
Using analytical function to calculate concurrency between date range
Folks,
I'm trying to use analytical functions to come up with a query that gives me the
concurrency of jobs executing between a date range.
For example:
JOB100 - started at 9AM - stopped at 11AM
JOB200 - started at 10AM - stopped at 3PM
JOB300 - started at 12PM - stopped at 2PM
The query would tell me that JOB100 ran with a concurrency of 2 because JOB100 and JOB200
were running during its start/stop window. JOB200 ran with a concurrency
of 3 because all three jobs ran within its start and stop time. The output would look like this.
JOB START STOP CONCURRENCY
=== ==== ==== =========
100 9AM 11AM 2
200 10AM 3PM 3
300 12PM 2PM 2
I've been looking at this post, and this one if very similar...
Analytic functions using window date range
Here is the sample data..
CREATE TABLE TEST_JOB
( jobid NUMBER,
created_time DATE,
start_time DATE,
stop_time DATE
);
insert into TEST_JOB values (100, sysdate -1, to_date('05/04/08 09:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 11:00:00','MM/DD/YY hh24:mi:ss'));
insert into TEST_JOB values (200, sysdate -1, to_date('05/04/08 10:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 13:00:00','MM/DD/YY hh24:mi:ss'));
insert into TEST_JOB values (300, sysdate -1, to_date('05/04/08 12:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 14:00:00','MM/DD/YY hh24:mi:ss'));
select * from test_job;
JOBID|CREATED_TIME |START_TIME |STOP_TIME
----------|--------------|--------------|--------------
100|05/04/08 09:28|05/04/08 09:00|05/04/08 11:00
200|05/04/08 09:28|05/04/08 10:00|05/04/08 13:00
300|05/04/08 09:28|05/04/08 12:00|05/04/08 14:00
Any help with this query would be greatly appreciated.
thanks.
-peter
After some checking, the model rule wasn't working exactly as expected.
I believe it's working right now. I'm posting a self-contained example for completeness' sake.
I use two functions to convert back and forth between epoch Unix timestamps, so
I'll post them here as well.
Like I said I think this works okay, but any feedback is always appreciated.
-peter
CREATE OR REPLACE FUNCTION date_to_epoch(p_dateval IN DATE)
RETURN NUMBER
AS
BEGIN
return (p_dateval - to_date('01/01/1970','MM/DD/YYYY')) * (24 * 3600);
END;
CREATE OR REPLACE FUNCTION epoch_to_date (p_epochval IN NUMBER DEFAULT 0)
RETURN DATE
AS
BEGIN
return to_date('01/01/1970','MM/DD/YYYY') + (( p_epochval) / (24 * 3600));
END;
DROP TABLE TEST_MODEL3 purge;
CREATE TABLE TEST_MODEL3
( jobid NUMBER,
start_time NUMBER,
end_time NUMBER);
insert into TEST_MODEL3
VALUES (300,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 19:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (200,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 12:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (400,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 14:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (500,date_to_epoch(to_date('05/07/2008 11:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 16:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (600,date_to_epoch(to_date('05/07/2008 15:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 22:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (100,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 23:00','MM/DD/YYYY hh24:mi')));
commit;
SELECT jobid,
epoch_to_date(start_time)start_time,
epoch_to_date(end_time)end_time,
n concurrency
FROM TEST_MODEL3
MODEL
DIMENSION BY (start_time,end_time)
MEASURES (jobid,0 n)
(n[any,any]=
count(*)[start_time <= cv(start_time), end_time >= cv(start_time)] +
count(*)[start_time > cv(start_time) and start_time <= cv(end_time), end_time >= cv(start_time)]
)
ORDER BY start_time;
The results look like this:
JOBID|START_TIME|END_TIME |CONCURRENCY
----------|---------------|--------------|-------------------
100|05/07/08 09:00|05/07/08 23:00| 6
200|05/07/08 09:00|05/07/08 12:00| 5
300|05/07/08 10:00|05/07/08 19:00| 6
400|05/07/08 10:00|05/07/08 14:00| 5
500|05/07/08 11:00|05/07/08 16:00| 6
600|05/07/08 15:00|05/07/08 22:00| 4 -
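For comparison, the same concurrency counts can be obtained without the MODEL clause via a plain self-join on the original TEST_JOB table: two jobs are concurrent when their intervals overlap, and each job counts itself. A hedged sketch (not verified against the epoch-based variant above):

```sql
select a.jobid,
       count(*) as concurrency
from   test_job a
join   test_job b
  on   b.start_time <= a.stop_time   -- b starts before a ends
 and   b.stop_time  >= a.start_time  -- and b ends after a starts
group by a.jobid
order by a.jobid;
```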
Are analytic functions usefull only for data warehouses?
Hi,
I deal with reporting queries on Oracle databases but I don't work on data warehouses, so I'd like to know if learning to use analytic functions (SQL for analysis such as ROLLUP, CUBE, GROUPING, ...) might be useful in helping me develop better reports, or if analytic functions are usually useful only for data warehouse queries. I mean, are ROLLUP, CUBE, GROUPING, ... useful also on an operational database, or do they make sense only on a DWH?
Thanks!
Mark1970 wrote:
so is it worth learning them for improving report queries not only on a DWH but on common operational databases?
Why pigeonhole report queries as "operational" or "data warehouse"?
Do you tell a user/manager that "<i>No, this report cannot be done as it looks like a data warehouse report and we have an operational database!</i>"?
Data processing and data reporting requirements do not care what label you assign to your database.
Simple real world example of using analytical queries on a non warehouse. We supply data to an external system via XML. They require that we limit the number of parent entities per XML file we supply. E.g. 100 customer elements (together with all their child elements) per file. Analytical SQL enables this to be done by creating "buckets" that can only contain 100 parent elements at a time. Complete process is SQL driven - no slow-by-slow row by row processing in PL/SQL using nested cursor loops and silly approaches like that.
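That bucketing idea can be sketched in one analytic expression: CEIL of a ROW_NUMBER divided by the bucket size assigns each parent entity to a 100-element file (table and column names here are invented for illustration):

```sql
select customer_id,
       ceil(row_number() over (order by customer_id) / 100) as xml_file_no
from customers;
-- rows 1-100 go to file 1, rows 101-200 to file 2, and so on
```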
Analytical SQL is a tool in the developer toolbox. It would be unwise to remove it from the toolbox, thinking that it is not applicable and won't be needed for the work that's to be done. -
Analytical function sum() ...for Till-date reporting
Hi,
I need help in forming an SQL with analytical function.
Here is my scenario:
create table a (name varchar2(10), qty_sold number,on_date date);
insert into a values ('abc',10,to_date('10-JAN-2007 00:01:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('abc',01,to_date('10-JUL-2007 00:01:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('abc',05,to_date('10-JUL-2007 08:11:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('abc',17,to_date('10-JUL-2007 09:11:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('def',10,to_date('10-JAN-2006 08:01:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('def',01,to_date('10-JUN-2006 10:01:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('def',05,to_date('10-JUL-2006 08:10:00','DD-MON-YYYY HH24:MI:SS'));
insert into a values ('pqr',17,to_date('10-JUL-2006 09:11:00','DD-MON-YYYY HH24:MI:SS'));
Now I want to have a sql which displays the following:
NAME--TOTAL_QTY_SOLD_IN_LAST_10_DAYS, TOTAL_QTY_SOLD_IN_LAST_20_DAYS...etc
I know we can do it using sum(qty_sold) over (order by on_date range interval '10' day preceding) ... but then I get too many rows for each "NAME", one for each date in table a. I want just one row for each "NAME", and the sum() should be up to SYSDATE.
Any help is highly appreciated.
Thanks.
SQL> select name
2 , sum(case when sysdate - on_date <= 10 then qty_sold end) total_qty_last_10_days
3 , sum(case when sysdate - on_date <= 100 then qty_sold end) total_qty_last_100_days
4 , sum(case when sysdate - on_date <= 500 then qty_sold end) total_qty_last_500_days
5 from a
6 group by name
7 /
NAME TOTAL_QTY_LAST_10_DAYS TOTAL_QTY_LAST_100_DAYS TOTAL_QTY_LAST_500_DAYS
abc 23 33
def 6
pqr 17
3 rows selected.
Regards,
Rob. -
Completion of data series by analytical function
I have the pleasure of learning the benefits of analytical functions and hope to get some help
The case is as follows:
Different projects gets funds from different sources over several years, but not from each source every year.
I want to produce the cumulative sum of funds for each source for each year for each project, but so far I have not been able to do so for years without fund for a particular source.
I have used this syntax:
SUM(fund) OVER(PARTITION BY project, source ORDER BY year ROWS UNBOUNDED PRECEDING)
I have also experimented with different variations of the window clause, but without any luck.
This is the last step in a big job I have been working on for several weeks, so I would be very thankful for any help.
If you want to use analytic functions and if you are on the 10.1.3.3 version of BI EE, then try using Evaluate and Evaluate_aggr, which support native database functions. I have blogged about it here: http://oraclebizint.wordpress.com/2007/09/10/oracle-bi-ee-10133-support-for-native-database-functions-and-aggregates/. But in your case all you might want to do is have a column with the following function.
SUM(Measure BY Col1, Col2...)
I have also blogged about it here http://oraclebizint.wordpress.com/2007/10/02/oracle-bi-ee-101332-varying-aggregation-based-on-levels-analytic-functions-equivalence/.
Thanks,
Venkat
http://oraclebizint.wordpress.com -
Grouping error in Oracle's analytic function PERCENTILE_CONT()
Hi,
I have a question regarding the usage of Oracle's analytic function PERCENTILE_CONT(). The underlying time data in the table is of hourly granularity and I want to fetch average, peak values for the day along with 80th percentile for that day. For the sake of clarification I am only posting relevant portion of the query.
Any idea how to rewrite the query and achieve the same objective?
SELECT TRUNC (sdd.ts) AS ts,
max(sdd.maxvalue) AS max_value, avg(sdd.avgvalue) AS avg_value,
PERCENTILE_CONT(0.80) WITHIN GROUP (ORDER BY sdd.avgvalue ASC) OVER (PARTITION BY pm.sysid,trunc(sdd.ts)) as Percentile_Cont_AVG
FROM XYZ
WHERE
XYZ
GROUP BY TRUNC (sdd.ts)
ORDER BY TRUNC (sdd.ts)
Oracle Error:
ERROR at line 5:
ORA-00979: not a GROUP BY expression
You probably mixed up the aggregate and analytic versions of PERCENTILE_CONT.
The below should work, but I don't know if it produces the desired results.
SELECT TRUNC (sdd.ts) AS ts,
max(sdd.maxvalue) AS max_value, avg(sdd.avgvalue) AS avg_value,
PERCENTILE_CONT(0.80) WITHIN GROUP (ORDER BY sdd.avgvalue ASC) as Percentile_Cont_AVG
FROM XYZ
Sorry, what is this WHERE clause for?
WHERE
XYZ
GROUP BY TRUNC (sdd.ts)
ORDER BY TRUNC (sdd.ts)
Edited by: chris227 on 26.03.2013 05:45
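If the analytic form of PERCENTILE_CONT is really wanted, it has to live in a query block without GROUP BY, e.g. with DISTINCT in an inline view. A hedged sketch using the aliases from the thread (the real FROM/WHERE clauses were elided in the post, so `some_table` is a placeholder and only the shape is shown):

```sql
SELECT DISTINCT
       TRUNC (sdd.ts) AS ts,
       MAX (sdd.maxvalue) OVER (PARTITION BY TRUNC (sdd.ts)) AS max_value,
       AVG (sdd.avgvalue) OVER (PARTITION BY TRUNC (sdd.ts)) AS avg_value,
       PERCENTILE_CONT (0.80) WITHIN GROUP (ORDER BY sdd.avgvalue)
           OVER (PARTITION BY TRUNC (sdd.ts)) AS percentile_cont_avg
FROM   some_table sdd;  -- placeholder: the original FROM clause was not posted
```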
My first real analytic function... any unexpected results?
Hello all. I have a table that contains transactions from bank accounts. The columns I am concerned with (I think) are the account number and the status date.
The status date has the date that the transaction cleared through the bank. I would like a query that returns all rows for an account that have cleared since the last reconciliation of that account. (the reconciliation will occur monthly)
This will produce some test data that replicates what we'll have in this table.
DROP TABLE dave_test;
DROP TABLE dave_test succeeded.
CREATE TABLE dave_test AS
SELECT level id, ROUND(TO_NUMBER(level), -1) account, TO_DATE('2007-08-01','YYYY-MM-DD') test_date
FROM DUAL
CONNECT BY LEVEL < 20 UNION ALL
SELECT 21, 10, TO_DATE('2007-07-01','YYYY-MM-DD') FROM DUAL UNION ALL
SELECT 22, 10, TO_DATE('2007-06-01','YYYY-MM-DD') FROM DUAL UNION ALL
SELECT 23, 0, TO_DATE('2007-09-01', 'YYYY-MM-DD') FROM DUAL;
CREATE TABLE succeeded.
SELECT * FROM dave_test ORDER BY id;
ID ACCOUNT TEST_DATE
1 0 01-AUG-07
2 0 01-AUG-07
3 0 01-AUG-07
4 0 01-AUG-07
5 10 01-AUG-07
6 10 01-AUG-07
7 10 01-AUG-07
8 10 01-AUG-07
9 10 01-AUG-07
10 10 01-AUG-07
11 10 01-AUG-07
12 10 01-AUG-07
13 10 01-AUG-07
14 10 01-AUG-07
15 20 01-AUG-07
16 20 01-AUG-07
17 20 01-AUG-07
18 20 01-AUG-07
19 20 01-AUG-07
21 10 01-JUL-07
22 10 01-JUN-07
23 0 01-SEP-07
22 rows selected
I have developed a query that returns accurate results for my test data. My request is this:
Will you look over this query and see if there is a better way of doing things? This is my first real attempt with an analytic function, so I would appreciate some input on anything that looks like it could be improved. Also, perhaps some test cases that might produce results I haven't thought of.
Thank you for your time.
SELECT
id
,account
,test_date
,max(date_sort)
FROM
(
SELECT
id id
,account account
,test_date test_date
,CASE DENSE_RANK() OVER(PARTITION BY account ORDER BY TRUNC(test_date, 'DD') DESC)
WHEN 1 THEN TO_DATE('1', 'J')
WHEN 2 THEN test_date
ELSE NULL
END date_sort
FROM
dave_test
WHERE
account = &account_number
)
HAVING
test_date > MAX(date_sort)
GROUP BY
id
,account
,test_date
ORDER BY
id
Run with 0 as account number:
ID ACCOUNT TEST_DATE MAX(DATE_SORT)
23 0 01-SEP-07 01-JAN-13
1 rows selected
Run with 10 as account number
ID ACCOUNT TEST_DATE MAX(DATE_SORT)
5 10 01-AUG-07 01-JAN-13
6 10 01-AUG-07 01-JAN-13
7 10 01-AUG-07 01-JAN-13
8 10 01-AUG-07 01-JAN-13
9 10 01-AUG-07 01-JAN-13
10 10 01-AUG-07 01-JAN-13
11 10 01-AUG-07 01-JAN-13
12 10 01-AUG-07 01-JAN-13
13 10 01-AUG-07 01-JAN-13
14 10 01-AUG-07 01-JAN-13
10 rows selected
Run with 20 as account_number
ID ACCOUNT TEST_DATE MAX(DATE_SORT)
15 20 01-AUG-07 01-JAN-13
16 20 01-AUG-07 01-JAN-13
17 20 01-AUG-07 01-JAN-13
18 20 01-AUG-07 01-JAN-13
19 20 01-AUG-07 01-JAN-13
5 rows selected
Let me know if I need to clarify anything.
Sorry, Volder, for being unclear.
Here is the table the query is based on.
desc bank_account_transactions
Name Null Type
ID NOT NULL NUMBER(28)
BKA_ID NOT NULL NUMBER(28)
BKATC_ID NOT NULL NUMBER(28)
ST_TABLE_SHORT_NAME VARCHAR2(10)
KEY_VALUE NUMBER(28)
EDF_ID NUMBER(28)
GLFS_ID NOT NULL NUMBER(28)
GLTT_ID NUMBER(28)
AMOUNT NOT NULL NUMBER(11,2)
PAYMENT_NUMBER NUMBER(9)
BANK_SERIAL_NUMBER NUMBER(15)
PAYEE_NAME VARCHAR2(60)
STATUS NOT NULL VARCHAR2(1)
STATUS_DATE DATE
EFFECTIVE_DATE NOT NULL DATE
POSITIVE_PAY_DATE DATE
DATA_SOURCE NOT NULL VARCHAR2(1)
REPORTED_TO_ACCOUNT_OWNER NOT NULL VARCHAR2(1)
PAYEE_BANK_ACCOUNT_NUMBER NUMBER(30)
PAYEE_BANK_ABA_NUMBER NUMBER(9)
DESCRIPTION VARCHAR2(4000)
DATE_CREATED NOT NULL DATE
CREATED_BY NOT NULL VARCHAR2(30)
DATE_MODIFIED DATE
MODIFIED_BY VARCHAR2(30)
25 rows selected
The bka_id is the account number, status is 'C' for cleared checks, and the status_date is the date the check cleared.
When I reconcile, I set the status to 'C' and set the status_date to SYSDATE. So the "last reconciliation date" is stored in status_date.
Like so
ID Account_No status_date
1 10 05-04-07
2 10 05-04-07
3 10 05-04-07
4 20 05-04-07
5 20 05-04-07
6 10 06-03-07
7 10 06-03-07
8 20 06-03-07
9 10 07-05-07
In this example, account 10 was reconciled on May 4, June 3, and July 5. So the previous reconciliation date would be 06-03-07, and my report would return the transactions from 07-05-07.
For account 20, it was reconciled on May 4 and June 3. The previous reconciliation date would be 05-04-07, and the transactions from 06-03-07 would be reported.
Does this help?
I appreciate your time. -
Problem with SUM () analytic function
Dear all,
Please have a look at my problem.
SELECT CURR, DT, AMT, RATE,
SUM(AMT) OVER (PARTITION BY CURR ORDER BY DT) SUMOVER,
sum( amt * rate) over (PARTITION BY CURR ORDER BY DT) / SUM(AMT) OVER (PARTITION BY CURR ORDER BY DT) avgrt
FROM
(
select 'CHF' CURR, ADD_MONTHS(TO_DATE('01-DEC-07'), LEVEL -1) DT, 100 * LEVEL AMT, 1 + ( 5* LEVEL/100) RATE
FROM DUAL CONNECT BY LEVEL < 10
)
SQL> /
CUR DT AMT RATE SUMOVER AVGRT
CHF 01-DEC-07 100 1.05 100 1.05
CHF 01-JAN-08 200 1.1 300 1.08333333
CHF 01-FEB-08 300 1.15 600 1.11666667
CHF 01-MAR-08 400 1.2 1000 1.15
CHF 01-APR-08 500 1.25 1500 1.18333333
CHF 01-MAY-08 600 1.3 2100 1.21666667
CHF 01-JUN-08 700 1.35 2800 1.25
CHF 01-JUL-08 800 1.4 3600 1.28333333
CHF 01-AUG-08 900 1.45 4500 1.31666667
Table Revaluation
select 'CHF' CURR1, '31-DEC-07' DT , 1.08 RATE FROM DUAL UNION ALL
select 'CHF' CURR1, '31-MAR-08' DT , 1.22 RATE FROM DUAL UNION ALL
select 'CHF' CURR1, '30-JUN-08' DT , 1.38 RATE FROM DUAL
CUR DT RATE
CHF 31-DEC-07 1.08
CHF 31-MAR-08 1.22
CHF 30-JUN-08 1.38
Problem is with the calculation of average rate.
I want to consider the data in the revaluation table to be used in the calculation of
average rate.
So average rate for Jan-08 will be
(100 * 1.08(dec revaluation rate) + 200 * 1.1 ) / (300) = 1.093333333
for Feb-08
(100 * 1.08(dec revaluation rate) + 200 * 1.1 + 300 * 1.15) / (600) = 1.121666667
for mar-08
(100 * 1.08(dec revaluation rate) + 200 * 1.1 + 300 * 1.15 + 400 * 1.2) / (1000) = 1.153
for Apr-08
(1000 * 1.22(Apr revaluation rate) + 500 * 1.25) /1500 = 1.23
for May-08
(1000 * 1.22(Apr revaluation rate) + 500 * 1.25 + 600 * 1.30 ) /2100 = 1.25
and so on..
Kindly advise.
Hi,
The main thing in this problem is that for every dt you want to compute the cumulative total from previous rows using the formula
SUM (amt * rate)
But rate can be either the rate from the revaluation table or the rate from the main table. For evaluating prior dates, you want to use the most recent rate.
I'm not sure if you can do this using analytic functions. Like Damorgan said, you should use a self-join.
The query below gives you the results you requested:
WITH
revaluation AS
(
SELECT 'CHF' curr1, TO_DATE ('31-DEC-07', 'DD-MON-RR') dt, 1.08 rate FROM dual UNION ALL
SELECT 'CHF' curr1, TO_DATE ('31-MAR-08', 'DD-MON-RR') dt, 1.22 rate FROM dual UNION ALL
SELECT 'CHF' curr1, TO_DATE ('30-JUN-08', 'DD-MON-RR') dt, 1.38 rate FROM dual
)
, original_data AS
(
select 'CHF' curr
, ADD_MONTHS(TO_DATE('01-DEC-07'), LEVEL -1) dt
, 100 * LEVEL amt
, 1 + ( 5* LEVEL/100) rate
FROM dual
CONNECT BY LEVEL < 10
)
, two_rates AS
(
SELECT od.*
, (
SELECT MAX (dt)
FROM revaluation
WHERE curr1 = od.curr
AND dt <= od.dt
) AS r_dt
, (
SELECT AVG (rate) KEEP (DENSE_RANK LAST ORDER BY dt)
FROM revaluation
WHERE curr1 = od.curr
AND dt <= od.dt
) AS r_rate
FROM original_data od
)
SELECT c.curr
, c.dt
, c.amt
, c.rate
, SUM (p.amt) AS sumover
, SUM ( p.amt
* CASE
WHEN p.dt <= c.r_dt
THEN c.r_rate
ELSE p.rate
END
)
/ SUM (p.amt) AS avgrt
FROM two_rates c
JOIN original_data p ON c.curr = p.curr
AND c.dt >= p.dt
GROUP BY c.curr, c.dt, c.amt, c.rate
ORDER BY c.curr, c.dt
;
GROUP BY and analytical functions
Hi all,
I need your help with grouping my data.
Below you can see sample of my data (in my case I have view where data is in almost same format).
with test_data as(
select '01' as code, 'SM' as abbreviation, 1010 as groupnum, 21 as pieces, 4.13 as volume, 3.186 as avgvolume from dual
union
select '01' as code, 'SM' as abbreviation, 2010 as groupnum, 21 as pieces, 0 as volume, 3.186 as avgvolume from dual
union
select '01' as code, 'SM' as abbreviation, 3000 as groupnum, 21 as pieces, 55 as volume, 3.186 as avgvolume from dual
union
select '01' as code, 'SM' as abbreviation, 3010 as groupnum, 21 as pieces, 7.77 as volume, 3.186 as avgvolume from dual
union
select '02' as code, 'SMP' as abbreviation, 1010 as groupnum, 30 as pieces, 2.99 as volume, 0.1 as avgvolume from dual
union
select '03' as code, 'SMC' as abbreviation, 1010 as groupnum, 10 as pieces, 4.59 as volume, 0.459 as avgvolume from dual
union
select '40' as code, 'DB' as abbreviation, 1010 as groupnum, 21 as pieces, 5.28 as volume, 0.251 as avgvolume from dual
)
select
DECODE (GROUPING (code), 1, 'report total:', code) as code,
abbreviation as abbreviation,
groupnum as pricelistgrp,
sum(pieces) as pieces,
sum(volume) as volume,
sum(avgvolume) as avgvolume
--sum(sum(distinct pieces)) over (partition by code,groupnum) as piecessum,
--sum(volume) volume,
--round(sum(volume) / 82,3) as avgvolume
from test_data
group by grouping sets((code,abbreviation,groupnum,pieces,volume,avgvolume),null)
order by 1,3;
The select statement I have written returns the output below:
CODE ABBR GRPOUP PIECES VOLUME AVGVOL
01 SM 1010 21 4.13 3.186
01 SM 2010 21 0 3.186
01 SM 3000 21 55 3.186
01 SM 3010 21 7.77 3.186
02 SMP 1010 30 2.99 0.1
03 SMC 1010 10 4.59 0.459
40 DB 1010 21 5.28 0.251
report total: 145 79.76 13.554
Number of pieces and avg volume are the same for rows with the same code (01 - pieces = 21, avgvolume = 3.186, etc.)
What I need is to get output like below:
CODE ABBR GRPOUP PIECES VOLUME AVGVOL
01 SM 1010 21 4.13 3.186
01 SM 2010 21 0 3.186
01 SM 3000 21 55 3.186
01 SM 3010 21 7.77 3.186
02 SMP 1010 30 2.99 0.1
03 SMC 1010 10 4.59 0.459
40 DB 1010 21 5.28 0.251
report total: 82 79.76 0.973Where total number of pieces is computed as sum of distinct numbers of pieces for each code -> *82 = 21 + 30 + 10 +21*.
Total volume is just sum of volumes in each row -> *79.76 = 4.13+0+55+7.77+2.99+4.59+5.28*.
And Average volume is computed as total volume / total number of pieces -> *0.973 = 79.76 / 82*.
I was trying to use analytical function (sum() over (partition by)) to get desired output, but without good results.
Could anyone help me with this issue?
Thanks in advance!
Regards,
Jiri
Hi, Jiri,
Jiri N. wrote:
Hi all,
I need your help with grouping my data.
Below you can see sample of my data (in my case I have view where data is in almost same format).
I assume the view guarantees that all rows with the same code (or the same code and groupnum) will always have the same pieces and the same avgvolume.
with test_data as( ...
Thanks for posting this; it's very helpful.
What I need is to get output like below:
CODE ABBR GRPOUP PIECES VOLUME AVGVOL
01 SM 1010 21 4.13 3.186
01 SM 2010 21 0 3.186
01 SM 3000 21 55 3.186
01 SM 3010 21 7.77 3.186
02 SMP 1010 30 2.99 0.1
03 SMC 1010 10 4.59 0.459
40 DB 1010 21 5.28 0.251
report total: 82 79.76 0.973
Except for the last row, you're just displaying data straight from the table (or view).
It might be easier to get the results you want using a UNION. One branch of the UNION would get the "report total" row, and the other branch would get all the rest.
>
Where total number of pieces is computed as sum of distinct numbers of pieces for each code -> *82 = 21 + 30 + 10 + 21*.
It's not just distinct numbers: in this example, two different codes have pieces=21, so a sum of distinct pieces values would give 61 = 21 + 30 + 10. What you actually want is the sum of one pieces value per code.
>
Total volume is just sum of volumes in each row -> *79.76 = 4.13+0+55+7.77+2.99+4.59+5.28*.
And Average volume is computed as total volume / total number of pieces -> *0.973 = 79.76 / 82*.
I was trying to use an analytic function (sum() over (partition by)) to get the desired output, but without good results.
I would use nested aggregate functions to do that:
SELECT code
, abbreviation
, groupnum AS pricelistgrp
, pieces
, volume
, avgvolume
FROM test_data
UNION ALL
SELECT 'report total:' AS code
, NULL AS abbreviation
, NULL AS pricelistgrp
, SUM (MAX (pieces)) AS pieces
, SUM (SUM (volume)) AS volume
, SUM (SUM (volume))
/ SUM (MAX (pieces)) AS avgvolume
FROM test_data
GROUP BY code -- , abbreviation?
ORDER BY code
, pricelistgrp
;
Output:
CODE ABB PRICELISTGRP PIECES VOLUME AVGVOLUME
01 SM 1010 21 4.13 3.186
01 SM 2010 21 0.00 3.186
01 SM 3000 21 55.00 3.186
01 SM 3010 21 7.77 3.186
02 SMP 1010 30 2.99 .100
03 SMC 1010 10 4.59 .459
40 DB 1010 21 5.28 .251
report total: 82 79.76 .973
It's unclear if you want to GROUP BY just code (like I did above) or by both code and abbreviation.
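As a quick sanity check, the nested-aggregate arithmetic in that UNION ALL branch (SUM(MAX(pieces)) and SUM(SUM(volume)) grouped by code) can be mirrored in plain Python. This is only a sketch over the sample rows from this thread, not Oracle code:

```python
# Mirror of SUM(MAX(pieces)) / SUM(SUM(volume)) grouped by code.
rows = [  # (code, pieces, volume) -- sample data from the thread
    ("01", 21, 4.13), ("01", 21, 0.0), ("01", 21, 55.0), ("01", 21, 7.77),
    ("02", 30, 2.99), ("03", 10, 4.59), ("40", 21, 5.28),
]

# Inner GROUP BY code: one MAX(pieces) and one SUM(volume) per code.
per_code = {}
for code, pieces, volume in rows:
    mx, sm = per_code.get(code, (0, 0.0))
    per_code[code] = (max(mx, pieces), sm + volume)

# Outer SUM() over the per-code results.
total_pieces = sum(mx for mx, _ in per_code.values())            # 82
total_volume = round(sum(sm for _, sm in per_code.values()), 2)  # 79.76
avg_volume = round(total_volume / total_pieces, 3)               # 0.973
print(total_pieces, total_volume, avg_volume)
```

It reproduces the "report total" row: 82, 79.76 and 0.973.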
Given that this data is coming from a view, it might be simpler and/or more efficient to make a separate version of the view, or to replicate most of the view in a query.
-
Using analytical function - value with highest count
Hi
i have this table below
CREATE TABLE table1
( cust_name VARCHAR2 (10)
, txn_id NUMBER
, txn_date DATE
, country VARCHAR2 (10)
, flag number
, CONSTRAINT key1 UNIQUE (cust_name, txn_id)
);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9870,TO_DATE ('15-Jan-2011', 'DD-Mon-YYYY'), 'Iran', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9871,TO_DATE ('16-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9872,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9873,TO_DATE ('18-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9874,TO_DATE ('19-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9875,TO_DATE ('20-Jan-2011', 'DD-Mon-YYYY'), 'Russia', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9877,TO_DATE ('22-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9878,TO_DATE ('26-Jan-2011', 'DD-Mon-YYYY'), 'Korea', 0);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9811,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9854,TO_DATE ('13-Jan-2011', 'DD-Mon-YYYY'), 'Taiwan', 0);
The requirement is to create an additional column in the resultset with country name where the customer has done the maximum number of transactions
(with transaction flag 1). In case we have two or more countries tied with the same count, then we need to select the country (among the tied ones)
where the customer has done the last transaction (with transaction flag 1)
e.g. the count is 2 for both 'China' and 'Japan' for transaction flag 1, and the latest transaction is for 'Japan'. So the new column should contain 'Japan'.
CUST_NAME TXN_ID TXN_DATE COUNTRY FLAG country_1
Peter 9811 17-JAN-11 China 0 Japan
Peter 9854 13-JAN-11 Taiwan 0 Japan
Peter 9870 15-JAN-11 Iran 1 Japan
Peter 9871 16-JAN-11 China 1 Japan
Peter 9872 17-JAN-11 China 1 Japan
Peter 9873 18-JAN-11 Japan 1 Japan
Peter 9874 19-JAN-11 Japan 1 Japan
Peter 9875 20-JAN-11 Russia 1 Japan
Peter 9877 22-JAN-11 China 0 Japan
Peter 9878 26-JAN-11 Korea 0 Japan
Please let me know how to accomplish this using analytical functions
Thanks
-Learnsequel

Does this work? (I haven't spent much time checking it.)
WITH ana AS (
SELECT cust_name, txn_id, txn_date, country, flag,
Sum (flag)
OVER (PARTITION BY cust_name, country) n_trx,
Max (CASE WHEN flag = 1 THEN txn_date END)
OVER (PARTITION BY cust_name, country) l_trx
FROM cnt_trx
)
SELECT cust_name, txn_id, txn_date, country, flag,
First_Value (country) OVER (PARTITION BY cust_name ORDER BY n_trx DESC, l_trx DESC) top_cnt
FROM ana
CUST_NAME TXN_ID TXN_DATE COUNTRY FLAG TOP_CNT
Fred 9875 20-JAN-11 Russia 1 Russia
Fred 9874 19-JAN-11 Japan 1 Russia
Peter 9873 18-JAN-11 Japan 1 Japan
Peter 9874 19-JAN-11 Japan 1 Japan
Peter 9872 17-JAN-11 China 1 Japan
Peter 9871 16-JAN-11 China 1 Japan
Peter 9811 17-JAN-11 China 0 Japan
Peter 9877 22-JAN-11 China 0 Japan
Peter 9875 20-JAN-11 Russia 1 Japan
Peter 9870 15-JAN-11 Iran 1 Japan
Peter 9878 26-JAN-11 Korea 0 Japan
Peter 9854 13-JAN-11 Taiwan 0 Japan
12 rows selected.
-
Question for analytic functions experts
Hi,
I have an ugly table containing an implicit master detail relation.
The table can be ordered by sequence, and then each detail is beneath its master (in sequence).
If it is a detail, the master column is NULL and vice versa.
Sample:
SEQUENCE MASTER DETAIL BOTH_PRIMARY_KEYS
1____________A______________1
2___________________A_______1
3___________________B_______2
4____________B______________2
5___________________A_______3
6___________________B_______4
Task: Go into the table with the primary key of my detail, and search the primary key of it's master.
I already have a solution for how to get it, but I would like to know if there is an analytic statement
which is more elegant, instead of self-referencing my table three times. Is anybody well-versed in analytic functions?
Thanks,
Dirk

Hi,
Do you mean like this?
with data as (
select 1 sequence, 'A' master, null detail, 1 both_primary_keys from dual union all
select 2, null, 'A', 1 from dual union all
select 3, null, 'B', 2 from dual union all
select 4, 'B', null, 2 from dual union all
select 5, null, 'A', 3 from dual union all
select 6, null, 'B', 4 from dual )
select (select max(both_primary_keys) keep (dense_rank last order by sequence)
from data
where sequence < detail_record.sequence and detail is null) master_primary_key
from data detail_record
where (both_primary_keys=3 /*lookup detail key 3 */ and master is null)
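To spell out what the MAX(...) KEEP (DENSE_RANK LAST ORDER BY sequence) scalar subquery above is doing, here is a rough Python equivalent over the same sample rows (the function name is made up for illustration):

```python
# Rough equivalent of:
#   MAX(both_primary_keys) KEEP (DENSE_RANK LAST ORDER BY sequence)
#   over master rows (detail IS NULL) with a smaller sequence.
rows = [  # (sequence, master, detail, both_primary_keys)
    (1, "A", None, 1), (2, None, "A", 1), (3, None, "B", 2),
    (4, "B", None, 2), (5, None, "A", 3), (6, None, "B", 4),
]

def master_key_for_detail(detail_key):
    # locate the detail row (master IS NULL) with the given key
    detail_seq = next(seq for seq, m, d, k in rows
                      if m is None and k == detail_key)
    # master rows (detail IS NULL) that come earlier in sequence
    preceding = [(seq, k) for seq, m, d, k in rows
                 if d is None and seq < detail_seq]
    # "DENSE_RANK LAST ORDER BY sequence": take the latest one
    return max(preceding)[1]

print(master_key_for_detail(3))  # detail key 3 belongs to master 'B', key 2
```

For detail key 3 (sequence 5), the masters before it are at sequences 1 and 4, and the latest of those is master 'B' with primary key 2.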
Hello,
I need to write a query that will look at two tables - one of which contains event information (pk = event_id), the other containing event_id's and broker_id's. One broker may have attended one or more events.
I need to find the most recent event that each broker attended. Also, it is possible that a broker attended more than one event on a given day, in which case I will need some logic that will prioritize which event to select if a broker attended more than one on the given max event date.
Any help with this sql would be greatly appreciated.
Thanks!
Christine

The business is currently determining the priority of
all event types. Once that is clear, I'll need to
incorporate that logic into my sql statement.
However, I'm not sure how to do that using this
logic:
ROW_NUMBER () OVER (PARTITION BY broker_id ORDER BY
event_date DESC)
I haven't used row number over and partitioning
before, so I'm not sure how it works. If, for
Search the Oracle documentation or this site for "Analytic Functions". Specifically:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions137.htm#sthref1981
instance, the business decides that event type
priority is something like: 5, 7, 1, 3, 11,
2......., how would I include that logic in the
statement above?
Would it be better to just add a priority column on
my event_type lookup table?
Thanks to all of you for your help with this!!

Note how the row_number() value is assigned a value of 1 if the event_date is more recent. It has been assumed that you are storing timestamps as well, in the "event_date" column.
test@ORA10G>
test@ORA10G> with event as (
2 select 957 as event_id, to_date('10/19/2005 09:10:23','mm/dd/yyyy hh24:mi:ss') as event_date, 3 as type from dual union all
3 select 97, to_date('3/7/2006 09:10:23','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
4 select 2142, to_date('2/5/2008 10:34:56','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
5 select 728, to_date('5/19/2005 17:29:11','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
6 select 363, to_date('5/12/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 2 from dual union all
7 select 30, to_date('1/19/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
8 select 31, to_date('1/19/2006 15:00:00','mm/dd/yyyy hh24:mi:ss'), 3 from dual ),
9 contacts as (
10 select 1000073 as broker_id from dual union all
11 select 1000127 from dual union all
12 select 1000140 from dual union all
13 select 1000144 from dual union all
14 select 1000154 from dual union all
15 select 1000155 from dual),
16 event_registration as (
17 select 1000073 as broker_id, 957 as event_id from dual union all
18 select 1000127, 97 from dual union all
19 select 1000140, 2142 from dual union all
20 select 1000144, 728 from dual union all
21 select 1000154, 363 from dual union all
22 select 1000155, 30 from dual union all
23 select 1000155, 31 from dual)
24 --
25 select
26 er.broker_id, e.event_date, e.type, e.event_id,
27 row_number() over (partition by er.broker_id order by e.event_date desc) as seq
28 from
29 event_registration er,
30 event e,
31 contacts c
32 where er.event_id=e.event_id
33 and er.broker_id=c.broker_id;
BROKER_ID EVENT_DATE TYPE EVENT_ID SEQ
1000073 10/19/2005 09:10:23 3 957 1
1000127 03/07/2006 09:10:23 1 97 1
1000140 02/05/2008 10:34:56 1 2142 1
1000144 05/19/2005 17:29:11 1 728 1
1000154 05/12/2006 08:02:25 2 363 1
1000155 01/19/2006 15:00:00 3 31 1
1000155 01/19/2006 08:02:25 1 30 2
7 rows selected.
test@ORA10G>
test@ORA10G>
That's due to the way the ORDER BY clause in the analytic function (in bold) has been defined. For every broker_id (partition by er.broker_id), it orders the records, most recent first (order by e.event_date desc), and hands out a "row number".
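As a language-neutral illustration of those partition-then-number mechanics, here is a plain-Python sketch of ROW_NUMBER() OVER (PARTITION BY broker_id ORDER BY event_date DESC), run over a hypothetical subset of the rows above:

```python
# ROW_NUMBER() OVER (PARTITION BY broker_id ORDER BY event_date DESC),
# sketched in plain Python.
from itertools import groupby

rows = [  # (broker_id, event_date as ISO text, event_id)
    (1000155, "2006-01-19 08:02:25", 30),
    (1000155, "2006-01-19 15:00:00", 31),
    (1000127, "2006-03-07 09:10:23", 97),
]

ranked = []
for broker, grp in groupby(sorted(rows), key=lambda r: r[0]):
    # inside each partition: most recent date first (ISO text sorts
    # chronologically), then hand out row numbers starting at 1
    for n, row in enumerate(sorted(grp, key=lambda r: r[1], reverse=True), 1):
        ranked.append(row + (n,))

latest = [r for r in ranked if r[3] == 1]  # one row per broker
print(latest)
```

Broker 1000155's later event (id 31) gets row number 1 and its earlier one (id 30) gets 2, which is exactly why filtering on seq = 1 keeps only the most recent event per broker.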
If you want to go ahead with Case (B) earlier, then you need to order by event_type ascending, so that the lowest event_type would have a "row_number()" values of 1, thusly:
test@ORA10G>
test@ORA10G> with event as (
2 select 957 as event_id, to_date('10/19/2005 09:10:23','mm/dd/yyyy hh24:mi:ss') as event_date, 3 as type from dual union all
3 select 97, to_date('3/7/2006 09:10:23','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
4 select 2142, to_date('2/5/2008 10:34:56','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
5 select 728, to_date('5/19/2005 17:29:11','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
6 select 363, to_date('5/12/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 2 from dual union all
7 select 30, to_date('1/19/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
8 select 31, to_date('1/19/2006 15:00:00','mm/dd/yyyy hh24:mi:ss'), 3 from dual ),
9 contacts as (
10 select 1000073 as broker_id from dual union all
11 select 1000127 from dual union all
12 select 1000140 from dual union all
13 select 1000144 from dual union all
14 select 1000154 from dual union all
15 select 1000155 from dual),
16 event_registration as (
17 select 1000073 as broker_id, 957 as event_id from dual union all
18 select 1000127, 97 from dual union all
19 select 1000140, 2142 from dual union all
20 select 1000144, 728 from dual union all
21 select 1000154, 363 from dual union all
22 select 1000155, 30 from dual union all
23 select 1000155, 31 from dual)
24 --
25 select
26 er.broker_id, e.event_date, e.type, e.event_id,
27 row_number() over (partition by er.broker_id order by e.type) as seq
28 from
29 event_registration er,
30 event e,
31 contacts c
32 where er.event_id=e.event_id
33 and er.broker_id=c.broker_id;
BROKER_ID EVENT_DATE TYPE EVENT_ID SEQ
1000073 10/19/2005 09:10:23 3 957 1
1000127 03/07/2006 09:10:23 1 97 1
1000140 02/05/2008 10:34:56 1 2142 1
1000144 05/19/2005 17:29:11 1 728 1
1000154 05/12/2006 08:02:25 2 363 1
1000155 01/19/2006 08:02:25 1 30 1
1000155 01/19/2006 15:00:00 3 31 2
7 rows selected.
test@ORA10G>
test@ORA10G>
Note how the more recent meeting has been relegated because it did not have a lower value of event_type.
Thereafter you just select the records with seq = 1.
test@ORA10G>
test@ORA10G> with event as (
2 select 957 as event_id, to_date('10/19/2005 09:10:23','mm/dd/yyyy hh24:mi:ss') as event_date, 3 as type from dual union all
3 select 97, to_date('3/7/2006 09:10:23','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
4 select 2142, to_date('2/5/2008 10:34:56','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
5 select 728, to_date('5/19/2005 17:29:11','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
6 select 363, to_date('5/12/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 2 from dual union all
7 select 30, to_date('1/19/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 1 from dual union all
8 select 31, to_date('1/19/2006 15:00:00','mm/dd/yyyy hh24:mi:ss'), 3 from dual ),
9 contacts as (
10 select 1000073 as broker_id from dual union all
11 select 1000127 from dual union all
12 select 1000140 from dual union all
13 select 1000144 from dual union all
14 select 1000154 from dual union all
15 select 1000155 from dual),
16 event_registration as (
17 select 1000073 as broker_id, 957 as event_id from dual union all
18 select 1000127, 97 from dual union all
19 select 1000140, 2142 from dual union all
20 select 1000144, 728 from dual union all
21 select 1000154, 363 from dual union all
22 select 1000155, 30 from dual union all
23 select 1000155, 31 from dual)
24 --
25 select broker_id,event_date,type,event_id
26 from (
27 select
28 er.broker_id, e.event_date, e.type, e.event_id,
29 row_number() over (partition by er.broker_id order by e.type) as seq
30 from
31 event_registration er,
32 event e,
33 contacts c
34 where er.event_id=e.event_id
35 and er.broker_id=c.broker_id
36 )
37 where seq = 1;
BROKER_ID EVENT_DATE TYPE EVENT_ID
1000073 10/19/2005 09:10:23 3 957
1000127 03/07/2006 09:10:23 1 97
1000140 02/05/2008 10:34:56 1 2142
1000144 05/19/2005 17:29:11 1 728
1000154 05/12/2006 08:02:25 2 363
1000155 01/19/2006 08:02:25 1 30
6 rows selected.
test@ORA10G>
test@ORA10G>
You could use a "priority" column, with values like, say, 1, 3 and 5 for low, medium and high priorities, for each event.
In that case, your query could be:
test@ORA10G>
test@ORA10G> with event as (
2 select 957 as event_id, to_date('10/19/2005 09:10:23','mm/dd/yyyy hh24:mi:ss') as event_date, 3 as type, 1 as priority from dual union all
3 select 97, to_date('3/7/2006 09:10:23','mm/dd/yyyy hh24:mi:ss'), 1, 3 from dual union all
4 select 2142, to_date('2/5/2008 10:34:56','mm/dd/yyyy hh24:mi:ss'), 1, 3 from dual union all
5 select 728, to_date('5/19/2005 17:29:11','mm/dd/yyyy hh24:mi:ss'), 1, 1 from dual union all
6 select 363, to_date('5/12/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 2, 5 from dual union all
7 select 30, to_date('1/19/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 1, 5 from dual union all
8 select 31, to_date('1/19/2006 15:00:00','mm/dd/yyyy hh24:mi:ss'), 3, 3 from dual union all
9 select 32, to_date('1/19/2006 19:00:00','mm/dd/yyyy hh24:mi:ss'), 3, 1 from dual),
10 contacts as (
11 select 1000073 as broker_id from dual union all
12 select 1000127 from dual union all
13 select 1000140 from dual union all
14 select 1000144 from dual union all
15 select 1000154 from dual union all
16 select 1000155 from dual),
17 event_registration as (
18 select 1000073 as broker_id, 957 as event_id from dual union all
19 select 1000127, 97 from dual union all
20 select 1000140, 2142 from dual union all
21 select 1000144, 728 from dual union all
22 select 1000154, 363 from dual union all
23 select 1000155, 30 from dual union all
24 select 1000155, 31 from dual union all
25 select 1000155, 32 from dual)
26 --
27 select
28 er.broker_id, e.event_date, e.type, e.event_id,
29 row_number() over (partition by er.broker_id order by e.priority) as seq
30 from
31 event_registration er,
32 event e,
33 contacts c
34 where er.event_id=e.event_id
35 and er.broker_id=c.broker_id;
BROKER_ID EVENT_DATE TYPE EVENT_ID SEQ
1000073 10/19/2005 09:10:23 3 957 1
1000127 03/07/2006 09:10:23 1 97 1
1000140 02/05/2008 10:34:56 1 2142 1
1000144 05/19/2005 17:29:11 1 728 1
1000154 05/12/2006 08:02:25 2 363 1
1000155 01/19/2006 19:00:00 3 32 1
1000155 01/19/2006 15:00:00 3 31 2
1000155 01/19/2006 08:02:25 1 30 3
8 rows selected.
test@ORA10G>
test@ORA10G>
And hence:
test@ORA10G>
test@ORA10G>
test@ORA10G> with event as (
2 select 957 as event_id, to_date('10/19/2005 09:10:23','mm/dd/yyyy hh24:mi:ss') as event_date, 3 as type, 1 as priority from dual union all
3 select 97, to_date('3/7/2006 09:10:23','mm/dd/yyyy hh24:mi:ss'), 1, 3 from dual union all
4 select 2142, to_date('2/5/2008 10:34:56','mm/dd/yyyy hh24:mi:ss'), 1, 3 from dual union all
5 select 728, to_date('5/19/2005 17:29:11','mm/dd/yyyy hh24:mi:ss'), 1, 1 from dual union all
6 select 363, to_date('5/12/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 2, 5 from dual union all
7 select 30, to_date('1/19/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 1, 5 from dual union all
8 select 31, to_date('1/19/2006 15:00:00','mm/dd/yyyy hh24:mi:ss'), 3, 3 from dual union all
9 select 32, to_date('1/19/2006 19:00:00','mm/dd/yyyy hh24:mi:ss'), 3, 1 from dual),
10 contacts as (
11 select 1000073 as broker_id from dual union all
12 select 1000127 from dual union all
13 select 1000140 from dual union all
14 select 1000144 from dual union all
15 select 1000154 from dual union all
16 select 1000155 from dual),
17 event_registration as (
18 select 1000073 as broker_id, 957 as event_id from dual union all
19 select 1000127, 97 from dual union all
20 select 1000140, 2142 from dual union all
21 select 1000144, 728 from dual union all
22 select 1000154, 363 from dual union all
23 select 1000155, 30 from dual union all
24 select 1000155, 31 from dual union all
25 select 1000155, 32 from dual)
26 --
27 select broker_id,event_date,type,event_id
28 from (
29 select
30 er.broker_id, e.event_date, e.type, e.event_id,
31 row_number() over (partition by er.broker_id order by e.priority) as seq
32 from
33 event_registration er,
34 event e,
35 contacts c
36 where er.event_id=e.event_id
37 and er.broker_id=c.broker_id
38 )
39 where seq=1;
BROKER_ID EVENT_DATE TYPE EVENT_ID
1000073 10/19/2005 09:10:23 3 957
1000127 03/07/2006 09:10:23 1 97
1000140 02/05/2008 10:34:56 1 2142
1000144 05/19/2005 17:29:11 1 728
1000154 05/12/2006 08:02:25 2 363
1000155 01/19/2006 19:00:00 3 32
6 rows selected.
test@ORA10G>
test@ORA10G>
test@ORA10G>
If you go for mnemonic values of priority like, say, "L", "M", "H", then you could try something like:
test@ORA10G>
test@ORA10G> with event as (
2 select 957 as event_id, to_date('10/19/2005 09:10:23','mm/dd/yyyy hh24:mi:ss') as event_date, 3 as type, 'H' as priority from dual union all
3 select 97, to_date('3/7/2006 09:10:23','mm/dd/yyyy hh24:mi:ss'), 1, 'M' from dual union all
4 select 2142, to_date('2/5/2008 10:34:56','mm/dd/yyyy hh24:mi:ss'), 1, 'M' from dual union all
5 select 728, to_date('5/19/2005 17:29:11','mm/dd/yyyy hh24:mi:ss'), 1, 'H' from dual union all
6 select 363, to_date('5/12/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 2, 'L' from dual union all
7 select 30, to_date('1/19/2006 08:02:25','mm/dd/yyyy hh24:mi:ss'), 1, 'L' from dual union all
8 select 31, to_date('1/19/2006 15:00:00','mm/dd/yyyy hh24:mi:ss'), 3, 'M' from dual union all
9 select 32, to_date('1/19/2006 19:00:00','mm/dd/yyyy hh24:mi:ss'), 3, 'H' from dual),
10 contacts as (
11 select 1000073 as broker_id from dual union all
12 select 1000127 from dual union all
13 select 1000140 from dual union all
14 select 1000144 from dual union all
15 select 1000154 from dual union all
16 select 1000155 from dual),
17 event_registration as (
18 select 1000073 as broker_id, 957 as event_id from dual union all
19 select 1000127, 97 from dual union all
20 select 1000140, 2142 from dual union all
21 select 1000144, 728 from dual union all
22 select 1000154, 363 from dual union all
23 select 1000155, 30 from dual union all
24 select 1000155, 31 from dual union all
25 select 1000155, 32 from dual)
26 --
27 select broker_id,event_date,type,event_id
28 from (
29 select
30 er.broker_id, e.event_date, e.type, e.event_id,
31 row_number() over (partition by er.broker_id order by (case e.priority when 'L' then 1 when 'M' then 2 when 'H' then 3 end) desc) as seq
32 from
33 event_registration er,
34 event e,
35 contacts c
36 where er.event_id=e.event_id
37 and er.broker_id=c.broker_id
38 )
39 where seq=1;
BROKER_ID EVENT_DATE TYPE EVENT_ID
1000073 10/19/2005 09:10:23 3 957
1000127 03/07/2006 09:10:23 1 97
1000140 02/05/2008 10:34:56 1 2142
1000144 05/19/2005 17:29:11 1 728
1000154 05/12/2006 08:02:25 2 363
1000155 01/19/2006 19:00:00 3 32
6 rows selected.
test@ORA10G>
test@ORA10G>
to achieve the same result.
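The CASE expression that maps 'L'/'M'/'H' onto sortable numbers is really just a lookup table; a minimal Python sketch of the same idea (with illustrative event values only):

```python
# The CASE WHEN priority 'L'/'M'/'H' mapping as a plain lookup table:
# ORDER BY CASE ... END DESC means the highest priority sorts first.
PRIORITY = {"L": 1, "M": 2, "H": 3}

events = [(30, "L"), (31, "M"), (32, "H")]  # (event_id, priority)

# the row that would get ROW_NUMBER() = 1 in the query above
top = max(events, key=lambda e: PRIORITY[e[1]])
print(top)
```

Event 32 wins because 'H' maps to the largest sort key, matching the query's result for broker 1000155.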
HTH,
pratz