Analytic function with GROUP BY
Hi,
Using the query below I am getting the following error: ORA-00979: not a GROUP BY expression
SELECT a.proj_title_ds, b.prgm_sers_title_nm,
SUM(c.PRGM_TOT_EXP_AMT) OVER(PARTITION BY c.prgm_id) AS "Total $ Spend1"
FROM iPlanrpt.VM_RPT_PROJECT a INNER JOIN iPlanrpt.VM_RPT_PRGM_SERS b
ON a.proj_id = b.proj_id INNER JOIN iPlanrpt.VM_RPT_PRGM c
ON b.prgm_sers_id = c.prgm_sers_id
WHERE a.proj_id IN (1209624,1209623,1209625, 1211122,1211123)
AND c.PRGM_STATE_ID in (6,7)
GROUP BY a.proj_title_ds, b.prgm_sers_title_nm
Any suggestions for getting the desired result (the sum of c.PRGM_TOT_EXP_AMT for each distinct c.prgm_id within the specified GROUP BY) would be helpful.
@OP,
Please mark the other duplicate thread as complete or duplicate. I responded to the other thread and asked for sample data.
With the sample included here, would the following work for you?
SELECT a.proj_title_ds,
b.prgm_sers_title_nm,
SUM (c.prgm_tot_exp_amt) AS "Total $ Spend1"
FROM iplanrpt.vm_rpt_project a
INNER JOIN
iplanrpt.vm_rpt_prgm_sers b
ON a.proj_id = b.proj_id
INNER JOIN
(select distinct prgm_id, prgm_sers_id, prgm_state_id, prgm_tot_exp_amt from iplanrpt.vm_rpt_prgm) c
ON b.prgm_sers_id = c.prgm_sers_id
WHERE a.proj_id IN (1209624, 1209623, 1209625, 1211122, 1211123)
AND c.prgm_state_id IN (6, 7)
GROUP BY a.proj_title_ds, b.prgm_sers_title_nm
;
vr,
Sudhakar B.
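A word of caution on the DISTINCT approach: if two different programs happen to share the same prgm_tot_exp_amt within the same program series, selecting distinct (prgm_id, amount) pairs still keeps both, but if one prgm_id appears with two different amounts it would be counted twice. A slightly more defensive sketch (untested; column and view names taken from the thread) takes exactly one amount per program in the inline view before aggregating:

```sql
SELECT a.proj_title_ds,
       b.prgm_sers_title_nm,
       SUM(c.prgm_amt) AS "Total $ Spend1"
  FROM iplanrpt.vm_rpt_project a
       INNER JOIN iplanrpt.vm_rpt_prgm_sers b
          ON a.proj_id = b.proj_id
       INNER JOIN (  -- exactly one row (and one amount) per prgm_id
                   SELECT prgm_id, prgm_sers_id, prgm_state_id,
                          MAX(prgm_tot_exp_amt) AS prgm_amt
                     FROM iplanrpt.vm_rpt_prgm
                    GROUP BY prgm_id, prgm_sers_id, prgm_state_id) c
          ON b.prgm_sers_id = c.prgm_sers_id
 WHERE a.proj_id IN (1209624, 1209623, 1209625, 1211122, 1211123)
   AND c.prgm_state_id IN (6, 7)
 GROUP BY a.proj_title_ds, b.prgm_sers_title_nm;
```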
Similar Messages
-
Analytic Functions with GROUP-BY Clause?
I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
Hypothetical table SALES consists of DAY_ID, PRODUCT_ID, PURCHASER_ID, and PURCHASE_PRICE, and lists all the individual sales.
Hypothetical business question: product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
INSERT INTO SALES VALUES(1,1,1,1.0);
INSERT INTO SALES VALUES(1,1,1,2.0);
INSERT INTO SALES VALUES(1,2,1,3.0);
INSERT INTO SALES VALUES(1,2,1,4.0);
INSERT INTO SALES VALUES(2,1,1,5.0);
INSERT INTO SALES VALUES(2,1,1,6.0);
INSERT INTO SALES VALUES(2,2,1,7.0);
INSERT INTO SALES VALUES(2,2,1,8.0);
COMMIT;
Day 1: if I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
Day 2: if I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
The desired result set is:
DAY_ID MY_MEASURE
1 6
2 14
The following SQL gets me tantalizingly close:
SELECT DAY_ID,
MAX(PURCHASE_PRICE)
KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
FROM SALES
ORDER BY DAY_ID
DAY_ID MY_MEASURE
1 2
1 2
1 4
1 4
2 6
2 6
2 8
2 8
But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytical functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
I am using a reporting tool, so unfortunately using things like inline views are not an option. I need to be able to define "MY_MEASURE" as something the query tool can apply the SUM() function to in its generated SQL.
(Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
I humbly solicit your collective wisdom, oh forum.
Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function.
SELECT DAY_ID,
PRODUCT_ID,
MAX(PURCHASE_PRICE) MAX_PRICE
FROM SALES
GROUP BY DAY_ID,
PRODUCT_ID
ORDER BY DAY_ID,
PRODUCT_ID
DAY_ID PRODUCT_ID MAX_PRICE
1 1 2
1 2 4
2 1 6
2 2 8
-
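Returning to the SALES question: there are two ways to collapse that intermediate result to one row per DAY_ID. Both are sketches against the SALES table defined in the thread (untested); the second assumes the reporting tool can apply SUM() to a row-level measure grouped by DAY_ID, which is exactly the constraint stated above.

```sql
-- (a) With an inline view: aggregate to the per-product max first,
--     then sum the maxima per day.
SELECT day_id,
       SUM(max_price) AS my_measure
  FROM (SELECT day_id, product_id,
               MAX(purchase_price) AS max_price
          FROM sales
         GROUP BY day_id, product_id)
 GROUP BY day_id
 ORDER BY day_id;

-- (b) Without an inline view: a pure row-level measure.  Each row
--     carries its group's max divided by the group's row count, so
--     summing the measure over a DAY_ID reproduces the sum of the
--     per-product maxima (e.g. day 1: 2/2 + 2/2 + 4/2 + 4/2 = 6).
SELECT day_id,
       MAX(purchase_price) OVER (PARTITION BY day_id, product_id)
       / COUNT(*)          OVER (PARTITION BY day_id, product_id)
         AS my_measure
  FROM sales;
```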
Analytic function for grouping?
Hello @all
10gR2
Is it possible to use an analytic function for grouping following (example-)query:
SELECT job, ename, sal,
ROW_NUMBER() OVER(PARTITION BY job ORDER BY empno) AS no,
RANK() OVER(PARTITION BY job ORDER BY NULL) AS JobNo
FROM emp;
The output is the following:
JOB ENAME SAL NO JOBNO
ANALYST SCOTT 3000 1 1
ANALYST FORD 3000 2 1
CLERK SMITH 818 1 1
CLERK ADAMS 1100 2 1
CLERK JAMES 950 3 1
CLERK MILLER 1300 4 1
MANAGER Müller 1000 1 1
MANAGER JONES 2975 2 1
...
The JobNo should increase per group of JOB; my desired output should look like this:
JOB ENAME SAL NO JOBNO
ANALYST SCOTT 3000 1 1
ANALYST FORD 3000 2 1
CLERK SMITH 818 1 2
CLERK ADAMS 1100 2 2
CLERK JAMES 950 3 2
CLERK MILLER 1300 4 2
MANAGER Müller 1000 1 3
MANAGER JONES 2975 2 3
MANAGER BLAKE 2850 3 3
MANAGER CLARK 2450 4 3
PRESIDENT KING 5000 1 4
SALESMAN ALLEN 1600 1 5
SALESMAN WARD 1250 2 5
SALESMAN MARTIN 1250 3 5
SALESMAN TURNER 1500 4 5
How can I achieve this?
This, perhaps?
with emp as (select 1 empno, 'ANALYST' job, 'SCOTT' ename, 3000 sal from dual union all
select 2 empno, 'ANALYST' job, 'FORD' ename, 3000 sal from dual union all
select 3 empno, 'CLERK' job, 'SMITH' ename, 818 sal from dual union all
select 4 empno, 'CLERK' job, 'ADAMS' ename, 1100 sal from dual union all
select 5 empno, 'CLERK' job, 'JAMES' ename, 950 sal from dual union all
select 6 empno, 'CLERK' job, 'MILLER' ename, 1300 sal from dual union all
select 7 empno, 'MANAGER' job, 'Müller' ename, 1000 sal from dual union all
select 8 empno, 'MANAGER' job, 'JONES' ename, 2975 sal from dual union all
select 9 empno, 'MANAGER' job, 'BLAKE' ename, 2850 sal from dual union all
select 10 empno, 'MANAGER' job, 'CLARK' ename, 2450 sal from dual union all
select 11 empno, 'PRESIDENT' job, 'KING' ename, 5000 sal from dual union all
select 12 empno, 'SALESMAN' job, 'ALLEN' ename, 1600 sal from dual union all
select 13 empno, 'SALESMAN' job, 'WARD' ename, 1250 sal from dual union all
select 14 empno, 'SALESMAN' job, 'MARTIN' ename, 1250 sal from dual union all
select 15 empno, 'SALESMAN' job, 'TURNER' ename, 1500 sal from dual)
select job, ename, sal,
row_number() over(partition by job order by empno) no,
dense_rank() over(order by job) jobno
from emp
JOB ENAME SAL NO JOBNO
ANALYST SCOTT 3000 1 1
ANALYST FORD 3000 2 1
CLERK SMITH 818 1 2
CLERK ADAMS 1100 2 2
CLERK JAMES 950 3 2
CLERK MILLER 1300 4 2
MANAGER Müller 1000 1 3
MANAGER JONES 2975 2 3
MANAGER BLAKE 2850 3 3
MANAGER CLARK 2450 4 3
PRESIDENT KING 5000 1 4
SALESMAN ALLEN 1600 1 5
SALESMAN WARD 1250 2 5
SALESMAN MARTIN 1250 3 5
SALESMAN TURNER 1500 4 5
-
Hi,
I have a scenario like this:
Column 1: BrokerList (dimension 1)
Column 2: Broker (dimension 2)
Column 3: Metric value (measure)
I have a CASE WHEN on a third dimension, Custodian = 'ss', then SUM(metric) grouped by dimension 1 and dimension 2, but the resulting value does not match:
BrokerList  | Broker | Metric
a1          | a      | 10
            | b      | 20
            | c      | 30
a1: total   |        | 60
a2          | a      | 50
            | c      | 60
            | d      | 10
a2: total   |        | 120
Grand total |        | 180
Here the metric is based on another CASE condition, so the total value does not match. Is there any other way to do a CASE function with GROUP BY functions? Please advise.
regards,
Guru
Use a FILTER on the metric by the 'ss' value and then go for GROUP BY from Criteria,
something like
sum(FILTER(metric USING (Custodian = 'ss')) by dimension1,dimension2)
mark if helps
~ http://cool-bi.com
-
Return multiple columns from an analytic function with a window
Hello,
Is it possible to obtain multiple columns from an analytic function with a window?
I have a table with 4 columns, an id, a test id, a date, and the result of a test. I'm using an analytic function to obtain, for each row, the current test value, and the maximum test value in the next 2 days like so:
select
id,
test_id,
date,
result,
MAX ( result ) over ( partition BY id, test_id order by date RANGE BETWEEN CURRENT ROW AND INTERVAL '2' DAY FOLLOWING ) AS max_result_next_two_day
from table
This is working fine, but I would like to also obtain the date when the max result occurs. I can see that this would be possible using a self join, but I'd like to know if there is a better way? I cannot use the FIRST_VALUE aggregate function and order by result, because the window function needs to be ordered by the date.
It would be a great help if you could provide any pointers/suggestions.
Thanks,
Dan
http://danieljamesscott.org
Assuming RESULT is a positive integer that has a maximum width of, say, 10,
and assuming date has no time-component:
select
id
,test_id
,date
,result
,to_number(substr(max_result_with_date,1,10)) as max_result_next_two_day
,to_date(substr(max_result_with_date,11),'YYYYMMDD') as date_where_max_result_occurs
from (select
id
,test_id
,date
,result
,MAX(lpad(to_char(result),10,'0')||to_char(date,'YYYYMMDD'))
over (partition BY id, test_id
order by date
RANGE BETWEEN CURRENT ROW AND INTERVAL '2' DAY FOLLOWING )
AS max_result_with_date
from table)
-
Need help: Analytic Report with GROUP BY
Good morning,
I am trying to create a report with subtotals and a grand total, which of course calls for the GROUP BY clause with ROLLUP, CUBE, GROUPING, etc. I'd like to use ROLLUP, but then some columns in the SELECT list would have to be put into the GROUP BY clause that don't belong there. So I had to wrap those columns in one of the SUM, AVG, MIN, or MAX functions to make them aggregated, which is wrong.
Another alternative I tried is to use CUBE and GROUPING_ID as the filter. However, that is still very cumbersome and error-prone, and the display order is completely out of control.
I am trying hard to stick with the first option, ROLLUP, since the result is very close to what I want, while avoiding the aggregation functions. For example, what can I do to display column A, which should not be grouped, other than using those aggregation functions?
Thanks in advance.
Luc,
this is a simple and a good reference for analytic functions:
http://www.orafaq.com/node/55
It takes some time to understand how they work and also it takes some time to understand how to utilize them. I have solved some issues in reporting using them, avoiding the overkill of aggregates.
Denes Kubicek -
How to use analytic function with aggregate function
hello
Can we use an analytic function and an aggregate function in the same query? I tried to find an example on the net, but I could not find any showing how these two kinds of function work together. Please share any link or example with me.
Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM
select
t1.region_name,
t2.division_name,
t3.month,
t3.amount mthly_sales,
max(t3.amount) over (partition by t1.region_name, t2.division_name)
max_mthly_sales
from
region t1,
division t2,
sales t3
where
t1.region_id=t3.region_id
and
t2.division_id=t3.division_id
and
t3.year=2004
Source: http://www.orafusion.com/art_anlytc.htm
Here MAX (an aggregate) and OVER (PARTITION BY ...) (an analytic) appear in the same query. So we can use aggregate and analytic functions in the same query, and more than one analytic function in the same query as well.
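As a further illustration, once a GROUP BY is present an analytic function can even take an aggregate as its argument, so each group's total can be compared with other groups' totals in one pass. A sketch against the same hypothetical REGION/DIVISION/SALES tables (untested):

```sql
-- SUM(t3.amount) is the per-division aggregate; the analytic
-- MAX(...) OVER then scans those per-division totals within each
-- region, so every result row also shows the best total in its region.
SELECT t1.region_name,
       t2.division_name,
       SUM(t3.amount)                                         AS div_sales,
       MAX(SUM(t3.amount)) OVER (PARTITION BY t1.region_name) AS best_div_sales
FROM   region t1, division t2, sales t3
WHERE  t1.region_id = t3.region_id
AND    t2.division_id = t3.division_id
AND    t3.year = 2004
GROUP  BY t1.region_name, t2.division_name;
```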
Hth
Girish Sharma -
Count(*) in an analytic situation with GROUP BY / ORDER BY
Hello every body,
I have a COUNT(*) problem in a SQL statement with analytic functions on a table, when I want to have all of its columns in the result.
Say I have a table
mytable1
CREATE TABLE MYTABLE1
(
MY_TIME TIMESTAMP(3),
PRICE NUMBER,
VOLUME NUMBER
);
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.01.664','DD-MM-YY HH24:MI:SS:FF3' ),49.55,704492 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.02.570','DD-MM-YY HH24:MI:SS:FF3' ),49.55,705136 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,707313 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,706592 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.30.695','DD-MM-YY HH24:MI:SS:FF3' ),49.55,705581 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,707985 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.820','DD-MM-YY HH24:MI:SS:FF3' ),49.56,708494 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.32.258','DD-MM-YY HH24:MI:SS:FF3' ),49.57,708955 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.36.180','DD-MM-YY HH24:MI:SS:FF3' ),49.58,709519 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,710502 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,710102 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,709962 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.52.399','DD-MM-YY HH24:MI:SS:FF3' ),49.59,711427 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.977','DD-MM-YY HH24:MI:SS:FF3' ),49.6,710902 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.50.492','DD-MM-YY HH24:MI:SS:FF3' ),49.6,711379 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.45.550','DD-MM-YY HH24:MI:SS:FF3' ),49.6,711302 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.50.492','DD-MM-YY HH24:MI:SS:FF3' ),49.62,711417 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.57.790','DD-MM-YY HH24:MI:SS:FF3' ),49.49,715587 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.47.712','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715166 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.57.790','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715469 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.821','DD-MM-YY HH24:MI:SS:FF3' ),49.53,714833 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.821','DD-MM-YY HH24:MI:SS:FF3' ),49.53,714914 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.493','DD-MM-YY HH24:MI:SS:FF3' ),49.54,714136 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.19.977','DD-MM-YY HH24:MI:SS:FF3' ),49.55,713387 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.19.977','DD-MM-YY HH24:MI:SS:FF3' ),49.55,713562 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712172 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.09.274','DD-MM-YY HH24:MI:SS:FF3' ),49.59,713287 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.09.117','DD-MM-YY HH24:MI:SS:FF3' ),49.59,713206 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712984 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.836','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712997 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712185 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712261 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.32.244','DD-MM-YY HH24:MI:SS:FF3' ),49.46,725577 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724664 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,723366 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725242 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725477 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.947','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724521 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,723943 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724086 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.34.103','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725609 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.5,720166 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.5,720066 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,718524 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.696','DD-MM-YY HH24:MI:SS:FF3' ),49.5,722086 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,718092 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715673 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.51,719666 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.12.555','DD-MM-YY HH24:MI:SS:FF3' ),49.52,719384 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.28.963','DD-MM-YY HH24:MI:SS:FF3' ),49.48,728830 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.11.884','DD-MM-YY HH24:MI:SS:FF3' ),49.48,726609 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.28.963','DD-MM-YY HH24:MI:SS:FF3' ),49.48,728943 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.45.947','DD-MM-YY HH24:MI:SS:FF3' ),49.49,729627 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.12.259','DD-MM-YY HH24:MI:SS:FF3' ),49.49,726830 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.46.494','DD-MM-YY HH24:MI:SS:FF3' ),49.49,733653 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.46.510','DD-MM-YY HH24:MI:SS:FF3' ),49.49,733772 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.12.259','DD-MM-YY HH24:MI:SS:FF3' ),49.49,727830 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.59.119','DD-MM-YY HH24:MI:SS:FF3' ),49.5,735772 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.47.369','DD-MM-YY HH24:MI:SS:FF3' ),49.5,734772 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.20.463','DD-MM-YY HH24:MI:SS:FF3' ),49.48,740621 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.12.369','DD-MM-YY HH24:MI:SS:FF3' ),49.48,740538 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.20.463','DD-MM-YY HH24:MI:SS:FF3' ),49.48,741021 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.10.588','DD-MM-YY HH24:MI:SS:FF3' ),49.49,740138 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738320 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737122 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,736424 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.260','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737598 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.744','DD-MM-YY HH24:MI:SS:FF3' ),49.49,739360 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,736924 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.260','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737784 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738145 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.744','DD-MM-YY HH24:MI:SS:FF3' ),49.49,739134 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738831 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.215','DD-MM-YY HH24:MI:SS:FF3' ),49.5,742421 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.580','DD-MM-YY HH24:MI:SS:FF3' ),49.5,741777 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.215','DD-MM-YY HH24:MI:SS:FF3' ),49.5,742021 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.48.433','DD-MM-YY HH24:MI:SS:FF3' ),49.5,741091 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.840','DD-MM-YY HH24:MI:SS:FF3' ),49.51,743021 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.57.511','DD-MM-YY HH24:MI:SS:FF3' ),49.52,743497 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.270','DD-MM-YY HH24:MI:SS:FF3' ),49.52,744021 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.17.699','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750292 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.433','DD-MM-YY HH24:MI:SS:FF3' ),49.53,747382 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.17.699','DD-MM-YY HH24:MI:SS:FF3' ),49.53,749939 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.15.152','DD-MM-YY HH24:MI:SS:FF3' ),49.53,749414 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.433','DD-MM-YY HH24:MI:SS:FF3' ),49.53,744882 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.08.110','DD-MM-YY HH24:MI:SS:FF3' ),49.54,749262 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.01.168','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748418 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.01.152','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748243 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.07.293','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748862 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.09.433','DD-MM-YY HH24:MI:SS:FF3' ),49.51,750414 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.262','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750930 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.887','DD-MM-YY HH24:MI:SS:FF3' ),49.53,751986 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.887','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750986 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.30.997','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753900 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.30.887','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753222 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.29.809','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753022 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.29.809','DD-MM-YY HH24:MI:SS:FF3' ),49.55,752847 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.42.622','DD-MM-YY HH24:MI:SS:FF3' ),49.56,755385 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.31.120','DD-MM-YY HH24:MI:SS:FF3' ),49.56,754385 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.49.590','DD-MM-YY HH24:MI:SS:FF3' ),49.6,759087 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.50.341','DD-MM-YY HH24:MI:SS:FF3' ),49.6,759217 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.49.590','DD-MM-YY HH24:MI:SS:FF3' ),49.6,758701 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.57.262','DD-MM-YY HH24:MI:SS:FF3' ),49.6,761049 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.48.637','DD-MM-YY HH24:MI:SS:FF3' ),49.6,757827 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.48.120','DD-MM-YY HH24:MI:SS:FF3' ),49.6,757385 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.466','DD-MM-YY HH24:MI:SS:FF3' ),49.62,761001 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,760109 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,759617 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.278','DD-MM-YY HH24:MI:SS:FF3' ),49.62,760265 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,759954 );
so if I do
SELECT DISTINCT row_number() over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) num,
MIN(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) low ,
MAX(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) high ,
-- sum(volume) over( partition by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 order by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 asc ) volume,
TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 TIME ,
price ,
COUNT( *) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ,price ASC,volume ASC ) TRADE,
first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC,volume ASC ) OPEN ,
first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 DESC,volume DESC) CLOSE ,
lag(price) over ( order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) lag_all
FROM mytable1
WHERE my_time > to_timestamp('04032008:09:00:00','DDMMYYYY:HH24:MI:SS')
AND my_time < to_timestamp('04032008:09:01:00','DDMMYYYY:HH24:MI:SS')
GROUP BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ,
price ,
volume
ORDER BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60,
price ,
num;
I get:
NUM|LOW|HIGH|TIME|PRICE|TRADE|OPEN|CLOSE|LAG_ALL
1|49.55|49.62|04/03/2008 09:00:00|49.55|1|49.55|49.59|
2|49.55|49.62|04/03/2008 09:00:00|49.55|2|49.55|49.59|49.55
3|49.55|49.62|04/03/2008 09:00:00|49.55|3|49.55|49.59|49.55
4|49.55|49.62|04/03/2008 09:00:00|49.55|4|49.55|49.59|49.55
5|49.55|49.62|04/03/2008 09:00:00|49.55|5|49.55|49.59|49.55
6|49.55|49.62|04/03/2008 09:00:00|49.55|6|49.55|49.59|49.55
7|49.55|49.62|04/03/2008 09:00:00|49.56|7|49.55|49.59|49.55
8|49.55|49.62|04/03/2008 09:00:00|49.57|8|49.55|49.59|49.56
9|49.55|49.62|04/03/2008 09:00:00|49.58|9|49.55|49.59|49.57
10|49.55|49.62|04/03/2008 09:00:00|49.59|10|49.55|49.59|49.58
11|49.55|49.62|04/03/2008 09:00:00|49.59|11|49.55|49.59|49.59
12|49.55|49.62|04/03/2008 09:00:00|49.59|12|49.55|49.59|49.59
13|49.55|49.62|04/03/2008 09:00:00|49.59|13|49.55|49.59|49.59
14|49.55|49.62|04/03/2008 09:00:00|49.6|14|49.55|49.59|49.59
15|49.55|49.62|04/03/2008 09:00:00|49.6|15|49.55|49.59|49.6
16|49.55|49.62|04/03/2008 09:00:00|49.6|16|49.55|49.59|49.6
17|49.55|49.62|04/03/2008 09:00:00|49.62|17|49.55|49.59|49.6
Which is erroneous, because if I don't put the volume column in the query I get another result:
SELECT DISTINCT row_number() over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) num,
MIN(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) low ,
MAX(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) high ,
-- sum(volume) over( partition by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 order by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 asc ) volume,
TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 TIME ,
price ,
COUNT( *) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ,price ASC ) TRADE,
first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) OPEN ,
first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 DESC) CLOSE ,
lag(price) over ( order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) lag_all
FROM mytable1
WHERE my_time > to_timestamp('04032008:09:00:00','DDMMYYYY:HH24:MI:SS')
AND my_time < to_timestamp('04032008:09:01:00','DDMMYYYY:HH24:MI:SS')
GROUP BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ,
price
ORDER BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60,
price ,
num;
I get this
NUM|LOW|HIGH|TIME|PRICE|TRADE|OPEN|CLOSE|LAG_ALL
1|49.55|49.62|04/03/2008 09:00:00|49.55|1|49.55|49.55|
2|49.55|49.62|04/03/2008 09:00:00|49.56|2|49.55|49.55|49.55
3|49.55|49.62|04/03/2008 09:00:00|49.57|3|49.55|49.55|49.56
4|49.55|49.62|04/03/2008 09:00:00|49.58|4|49.55|49.55|49.57
5|49.55|49.62|04/03/2008 09:00:00|49.59|5|49.55|49.55|49.58
6|49.55|49.62|04/03/2008 09:00:00|49.6|6|49.55|49.55|49.59
7|49.55|49.62|04/03/2008 09:00:00|49.62|7|49.55|49.55|49.6
How can I have the right count with all the columns of the table?
Babata
I'm not sure what the "right count" is in your eyes, but I think the DISTINCT keyword is hiding the problems that you have. It could also be the reason for the different number of results between query one and query two.
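The interaction can be shown on a three-row example (hypothetical data, untested): a running COUNT(*) whose ORDER BY includes a tie-breaking column numbers every row uniquely, so DISTINCT removes nothing; drop that column and tied rows share a count, so DISTINCT collapses them, exactly like the TRADE column above.

```sql
WITH t AS (SELECT 10 price, 1 vol FROM dual UNION ALL
           SELECT 10 price, 2 vol FROM dual UNION ALL
           SELECT 20 price, 3 vol FROM dual)
-- ORDER BY price, vol: counts are 1, 2, 3 -> DISTINCT keeps 3 rows.
SELECT DISTINCT price, COUNT(*) OVER (ORDER BY price, vol) AS trade FROM t;

WITH t AS (SELECT 10 price, 1 vol FROM dual UNION ALL
           SELECT 10 price, 2 vol FROM dual UNION ALL
           SELECT 20 price, 3 vol FROM dual)
-- ORDER BY price only: the default RANGE window includes peer rows, so
-- both price-10 rows get count 2 -> DISTINCT collapses them to 2 rows.
SELECT DISTINCT price, COUNT(*) OVER (ORDER BY price) AS trade FROM t;
```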
-
SQL performance issue: using a user-defined function with GROUP BY
Hi everyone,
I'm new here and I could really use some help on a weird performance issue. I hope this is the right topic for SQL performance issues.
Well, OK: I created a function for converting a date from timezone GMT to a specified timezone.
CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
IS
tz_name VARCHAR2(100);
date_out date;
BEGIN
SELECT
to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
INTO date_out
FROM dual;
RETURN date_out;
END fnc_user_rep_date_to_local;
The following statement is just an example; the real statement is much more complex. So I select some date values from a table and aggregate a little.
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp;
This statement selects ~70000 rows and needs ~70 ms.
If I use the function, it selects the same number of rows ;-) and takes ~4 sec:
select
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin');
I understand that the DB has to execute the function for each row.
But if I execute the following statement, it takes only ~90ms ...
select
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
noi
from
(
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp
);
The execution plan for all three statements is EXACTLY the same!!!
Usually I would say that I should use the third statement and the world is in order. BUT I'm working on a BI project with a tool called Business Objects, and it generates the SQL, so my hands are tied and I can't make this tool generate the SQL as a subselect.
My questions are:
Why is the second statement so much slower than the third?
and
How can I force the optimizer to do whatever it is doing to make the third statement so fast?
I would really appreciate some help on this really weird issue.
Thanks in advance,
Andi
Hi,
"The execution plan for all three statements is EXACTLY the same!!!"
Not exactly. The plans are the same, true, but they use slightly different approaches to call the function. See:
drop table t cascade constraints purge;
create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
exec dbms_stats.gather_table_stats(user, 't');
create or replace function test_fnc(p_int number) return number is
begin
return trunc(p_int);
end;
explain plan for select id from t group by id;
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from t group by test_fnc(id);
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from (select id from t group by id);
select * from table(dbms_xplan.display(null,null,'advanced'));
Output:
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL>
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "TEST_FNC"("ID")[22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$F5BB74E1
2 - SEL$F5BB74E1 / T@SEL$2
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
MERGE(@"SEL$2")
OUTLINE_LEAF(@"SEL$F5BB74E1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
37 rows selected.
-
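The per-row versus per-group function-call behaviour described above can be reproduced outside Oracle. The sketch below uses Python with SQLite as a stand-in (not Oracle; the table, data, and `slow_fn` are invented for the demonstration) and counts how often a user-defined function is invoked in each query shape: once per base row when the function appears in the GROUP BY, versus once per grouped row when grouping happens first in a subselect.

```python
import sqlite3

calls = {"n": 0}

def slow_fn(x):
    # Stand-in for fnc_user_rep_date_to_local: it just counts its invocations.
    calls["n"] += 1
    return x % 10

con = sqlite3.connect(":memory:")
con.create_function("slow_fn", 1, slow_fn)
con.execute("CREATE TABLE step (stp INTEGER)")
con.executemany("INSERT INTO step VALUES (?)",
                [(i % 50,) for i in range(1000)])  # 1000 rows, 50 distinct values

# Variant 1: the function appears in SELECT and GROUP BY,
# so it is evaluated for every base row.
calls["n"] = 0
con.execute("SELECT slow_fn(stp), COUNT(*) FROM step "
            "GROUP BY slow_fn(stp)").fetchall()
per_row_calls = calls["n"]

# Variant 2: group on the raw column first, then apply the function
# to the (few) grouped rows -- the shape of the fast third statement.
calls["n"] = 0
con.execute("SELECT slow_fn(stp), noi FROM "
            "(SELECT stp, COUNT(*) AS noi FROM step GROUP BY stp)").fetchall()
per_group_calls = calls["n"]

print(per_row_calls, per_group_calls)
```

The second variant calls the function far fewer times, which is exactly why the subselect form runs in milliseconds while the direct form takes seconds.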
Can I use the LEAD function with GROUP BY?
I can use this query and get the right output, since I specify product_id = 2000:
select product_id, order_date,
lead (order_date,1) over (ORDER BY order_date) AS next_order_date
from orders
where product_id = 2000;
But can I run this query with a GROUP BY instead?
For example:
select product_id, order_date,
lead (order_date,1) over (ORDER BY order_date) AS next_order_date
from orders
group by product_id ;
since the data would be like this, and I need:
Product_id order Date
2000 1-jan-09
2000 21-jan-09
3000 13-jan-09
3000 15-jan-09
4000 18-jan-09
4000 19-jan-09
the output would be like, for example:
Product_id order Date Next_date
2000 1-jan-09 21-jan-09
3000 13-jan-09 15-jan-09
4000 18-jan-09 19-jan-09
Thanks everybody for your help.
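For the first part of the question, a PARTITION BY clause inside the LEAD makes the window restart for each product, so no GROUP BY is needed at all. Here is a small sketch in Python with SQLite (a stand-in for Oracle; the table and data mirror the sample above) that also uses ROW_NUMBER to keep only the first order per product, matching the desired output:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
con.execute("CREATE TABLE orders (product_id INTEGER, order_date TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)", [
    (2000, "2009-01-01"), (2000, "2009-01-21"),
    (3000, "2009-01-13"), (3000, "2009-01-15"),
    (4000, "2009-01-18"), (4000, "2009-01-19"),
])

# PARTITION BY product_id restarts LEAD for each product;
# ROW_NUMBER keeps only the first order per product.
rows = con.execute("""
    SELECT product_id, order_date, next_order_date
    FROM (SELECT product_id, order_date,
                 LEAD(order_date) OVER (PARTITION BY product_id
                                        ORDER BY order_date) AS next_order_date,
                 ROW_NUMBER() OVER (PARTITION BY product_id
                                    ORDER BY order_date) AS rn
          FROM orders)
    WHERE rn = 1
    ORDER BY product_id
""").fetchall()
print(rows)
```

The same SELECT, minus the SQLite plumbing, should work unchanged in Oracle.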
Let me mention exactly what I require.
create table SCHEDULER
( REF VARCHAR2(10),
NO NUMBER ,
PORT VARCHAR2(10),
ARRIVAL DATE ,
DEPARTURE DATE
);
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0677',1,'KUWAIT','1-Sep-09','02-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0677',2,'INDIA','5-Sep-09','07-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0677',3,'COLUMBO','8-Sep-09','09-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0677',4,'IRAN','10-Sep-09','12-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0677',5,'IRAQ','14-Sep-09','15-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0677',6,'DELHI','17-Sep-09','19-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0677',7,'POLAND','21-Sep-09','23-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0678',1,'INDIA','5-Sep-09','07-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0678',2,'COLUMBO','8-Sep-09','09-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0678',3,'IRAN','10-Sep-09','12-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0678',4,'IRAQ','14-Sep-09','15-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0678',5,'DELHI','17-Sep-09','19-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0678',6,'POLAND','21-Sep-09','23-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA0678',7,'GOA','1-Oct-09','02-Oct-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2372',1,'INDIA','1-Sep-09','02-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2372',2,'KERALA','3-Sep-09','03-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2372',3,'BOMBAY','4-Sep-09','04-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2373',1,'INDIA','5-Sep-09','06-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2373',2,'ANDHERI','6-Sep-09','07-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2376',1,'INDIA','5-Sep-09','07-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2420',1,'INDIA','5-Sep-09','06-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2420',2,'ANDHERI','7-Sep-09','08-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2420',3,'BURMA','10-Sep-09','11-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2420',4,'BENGAL','11-Sep-09','12-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2445',1,'INDIA','4-Sep-09','05-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2445',2,'BURMA','7-Sep-09','09-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2498',1,'BENGAL','8-Sep-09','08-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2498',2,'COCHIN','11-Sep-09','11-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2498',3,'LANKA','12-Sep-09','12-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2498',4,'COLUMBO','13-Sep-09','15-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2498',5,'INDIA','17-Sep-09','18-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2505',1,'COLUMBO','5-Sep-09','06-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2505',2,'GOA','8-Sep-09','09-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2505',3,'INDIA','13-Sep-09','15-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2510',1,'INDIA','4-Sep-09','06-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2510',2,'BENGAL','8-Sep-09','09-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2510',3,'GOA','10-Sep-09','11-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2513',1,'INDIA','7-Sep-09','09-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2513',2,'USA','11-Sep-09','11-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2513',3,'UK','12-Sep-09','13-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2520',1,'INDIA','4-Sep-09','06-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2520',2,'BENGAL','8-Sep-09','09-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2520',3,'GOA','10-Sep-09','11-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2526',1,'INDIA','5-Sep-09','07-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2526',2,'DUBAI','10-Sep-09','11-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2526',3,'GOA','13-Sep-09','15-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2526',4,'OMAN','17-Sep-09','18-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2526',5,'INDIA','19-Sep-09','20-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2527',1,'BURMA','7-Sep-09','08-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2527',2,'INDIA','9-Sep-09','10-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2527',3,'ANDHERI','10-Sep-09','16-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2532',1,'SHARJAH','3-Sep-09','04-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2532',2,'AEDXB','5-Sep-09','05-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2533',1,'AESHJ','2-Sep-09','02-Sep-09');
INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
VALUES('VA2533',2,'INDIA','3-Sep-09','03-Sep-09');
COMMIT;
Suppose these records show a REF travelling from one location to another with respect to date.
We need to find, for each REF group, the travelling dates for a specified journey, i.e. starting from INDIA and ending at GOA.
The output should be like the data shown below:
FROM LOCATION TO LOCATION
REF , NO , PORT , ARRIVAL ,DEPARTURE , REF , NO , PORT , ARRIVAL , DEPARTURE
VA0678 1 INDIA 5-Sep-09 07-Sep-09 VA0678 7 GOA 1-Oct-09 02-Oct-09
VA2510 1 INDIA 4-Sep-09 06-Sep-09 VA2510 3 GOA 10-Sep-09 11-Sep-09
VA2520 1 INDIA 4-Sep-09 06-Sep-09 VA2520 3 GOA 10-Sep-09 11-Sep-09
VA2526 1 INDIA 5-Sep-09 07-Sep-09 VA2526 3 GOA 13-Sep-09 15-Sep-09
---------------------------------------------------------------------------------------------------------------------------------------------------------------- -
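One way to get the FROM/TO pairs is a self-join: pair each REF's first INDIA stop with a later GOA stop on the same REF. The sketch below demonstrates the idea in Python with SQLite (a stand-in for Oracle) on a subset of the sample rows; note how VA2505 drops out because its GOA stop comes before its INDIA stop:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE scheduler
               (ref TEXT, no INTEGER, port TEXT, arrival TEXT, departure TEXT)""")
con.executemany("INSERT INTO scheduler VALUES (?,?,?,?,?)", [
    ("VA0678", 1, "INDIA", "2009-09-05", "2009-09-07"),
    ("VA0678", 7, "GOA",   "2009-10-01", "2009-10-02"),
    ("VA2505", 2, "GOA",   "2009-09-08", "2009-09-09"),
    ("VA2505", 3, "INDIA", "2009-09-13", "2009-09-15"),
    ("VA2510", 1, "INDIA", "2009-09-04", "2009-09-06"),
    ("VA2510", 3, "GOA",   "2009-09-10", "2009-09-11"),
    ("VA2526", 1, "INDIA", "2009-09-05", "2009-09-07"),
    ("VA2526", 3, "GOA",   "2009-09-13", "2009-09-15"),
    ("VA2526", 5, "INDIA", "2009-09-19", "2009-09-20"),
])

# Pair each REF's first INDIA call with a later GOA call on the same REF.
rows = con.execute("""
    SELECT i.ref, i.no, i.port, i.arrival, i.departure,
           g.ref, g.no, g.port, g.arrival, g.departure
    FROM scheduler i
    JOIN scheduler g ON g.ref = i.ref AND g.port = 'GOA' AND g.no > i.no
    WHERE i.port = 'INDIA'
      AND i.no = (SELECT MIN(s.no) FROM scheduler s
                  WHERE s.ref = i.ref AND s.port = 'INDIA')
    ORDER BY i.ref
""").fetchall()
print([r[0] for r in rows])
```

The same join, with Oracle's table and DATE columns, should produce the desired FROM LOCATION / TO LOCATION layout.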
Using count function with grouped records
Hi all,
This seems like it should be real easy but I have yet to figure out a simple way to do this.
Suppose I want to count Opportunities that are grouped by Sales Rep. At run-time I am filtering this list with a parameter for Sales Stage and created date.
I've simplified this greatly, but here's what my setup looks like now:
Sales Rep --------- Count
<?for-each-group:Opportunity[SalesStage=param1 and Created>param2];./salesrep?>
<?salesrep?> ------- <?count(current-group()/Id)?>
<?end for-each-group?>
Total
The only solution I have so far to get my grand total is to create a variable and keep a running total which I'll then display in the Total column. While it works, it seems like there should be an easier way, like doing a simple count(Id) to get the grand total. But since the Total appears after the end for-each-group, I lose the filter that was applied to the group so that count is invalid.
Any thoughts from the experts?
Thanks!
To get the grand total, use:
<?count(/Opportunity[SalesStage=param1 and Created>param2]/Id)?>
Since you did not mention the complete XML, I assumed Opportunity is the root; if not, put the full path from the root.
If you give a sample of the XML and explain the output you want, we can fix it immediately.
Go through these too; something can be pulled from here:
http://winrichman.blogspot.com/search/label/Summation%20In%20BIP
http://winrichman.blogspot.com/search/label/BIP%20Vertical%20sum -
Hi,
I have a problem using analytic function: when I execute this query
SELECT TSIUPSITE, TSIUPCEAN , TSIUPDATE, sum(TSIUPCA) TSIUPCA, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
sum(TSIUPQTE) TSIUPQTE,sum(TSIUPQTEP) TSIUPQTEP, TSIUPMDIU,TSIUPMDar,
sum(TSIUPCRIU) TSIUPCRIU,sum(TSIUPCRAR) TSIUPCRAR, trunc(TSIUPDCRE) TSIUPDCRE ,trunc(TSIUPDMAJ) TSIUPDMAJ ,
TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, sum(TSIUPMHT) TSIUPMHT, 0 vtanfisc,
TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV ,count(*) over (partition by TSIUPSITE,TSIUPCEAN,TSIUP_TRT ) CONTA_ARTICOLO
FROM TST_FLIISR_VTEREMART
WHERE 1=1 --TSIUP_TRT = 1
AND TSIUPDATE=to_date('27082012','ddmmyyyy')
and TSIUP_NTRX =172
AND TSIUPSITE = 10025
AND TSIUPCEAN = '8012452018825'
GROUP BY TSIUPSITE, TSIUPCEAN , TSIUPDATE, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
TSIUPMDIU,TSIUPMDar, trunc(TSIUPDCRE),trunc(TSIUPDMAJ),TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, 0,
TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV
ORDER BY TSIUPSITE,TSIUPDATE ;
I get the error ORA-00979: not a GROUP BY expression, related to the TSIUP_TRT field; in fact, if I execute this one:
SELECT TSIUPSITE, TSIUPCEAN , TSIUPDATE, sum(TSIUPCA) TSIUPCA, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
sum(TSIUPQTE) TSIUPQTE,sum(TSIUPQTEP) TSIUPQTEP, TSIUPMDIU,TSIUPMDar,
sum(TSIUPCRIU) TSIUPCRIU,sum(TSIUPCRAR) TSIUPCRAR, trunc(TSIUPDCRE) TSIUPDCRE ,trunc(TSIUPDMAJ) TSIUPDMAJ ,
TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, sum(TSIUPMHT) TSIUPMHT, 0 vtanfisc,
TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV ,count(*) over (partition by TSIUPSITE,TSIUPCEAN ) CONTA_ARTICOLO
FROM TST_FLIISR_VTEREMART
WHERE 1=1 --TSIUP_TRT = 1
AND TSIUPDATE=to_date('27082012','ddmmyyyy')
and TSIUP_NTRX =172
AND TSIUPSITE = 10025
AND TSIUPCEAN = '8012452018825'
GROUP BY TSIUPSITE, TSIUPCEAN , TSIUPDATE, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
TSIUPMDIU,TSIUPMDar, trunc(TSIUPDCRE),trunc(TSIUPDMAJ),TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, 0,
TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV
ORDER BY TSIUPSITE,TSIUPDATE ;
I have no problem. Now, the difference between TSIUPCEAN (or TSIUPSITE) and TSIUP_TRT is that TSIUP_TRT is not in the GROUP BY clause, but, to be honest, I don't know why I have this problem when using an analytic function.
Thanks for the help.
Hi,
I think you are not using the analytic function properly.
Analytic functions execute for each row, whereas GROUP BY operates on groups of data.
See below example for you reference.
Example 1:
-- The query below displays the number of employees for each department. Since we used an analytic function, each row shows the employee count for its department id.
SQL> SELECT e.department_id,count(*) OVER (PARTITION BY e.department_id) cnt_analytic
2 FROM employees e
3 WHERE e.department_id IN (10,20,30);
DEPARTMENT_ID CNT_ANALYTIC
10 1
20 2
20 2
30 6
30 6
30 6
30 6
30 6
30 6
9 rows selected.
Example 2:
-- Since I have used GROUP BY clause I'm getting only single row for each department.
SQL> SELECT e.department_id, count(*) cnt_group
2 FROM employees e
3 WHERE e.department_id IN (10,20,30)
4 GROUP BY e.department_id;
DEPARTMENT_ID CNT_GROUP
10 1
20 2
30 6
Finally, what I'm trying to explain is: if you use an analytic function together with a GROUP BY clause, the query will not give a meaningful result set.
See below
SQL> SELECT e.department_id,count(*) OVER (PARTITION BY e.department_id) cnt_analytic, count(*) cnt_grp
2 FROM employees e
3 WHERE e.department_id IN (10,20,30)
4 GROUP BY e.department_id;
DEPARTMENT_ID CNT_ANALYTIC CNT_GRP
10 1 1
20 1 2
30 1 6
-
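The contrast above, one row per input row for the analytic COUNT versus one row per group for GROUP BY, is easy to reproduce. Here is a sketch in Python with SQLite (a stand-in for Oracle; the employees data is invented to match the counts shown, not taken from the HR schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
con.execute("CREATE TABLE employees (employee_id INTEGER, department_id INTEGER)")
con.executemany("INSERT INTO employees VALUES (?, ?)",
                [(1, 10), (2, 20), (3, 20)] + [(i, 30) for i in range(4, 10)])

# Analytic COUNT: one output row per input row,
# each carrying its department's total.
analytic = con.execute("""
    SELECT department_id, COUNT(*) OVER (PARTITION BY department_id)
    FROM employees ORDER BY department_id""").fetchall()

# GROUP BY COUNT: one output row per department.
grouped = con.execute("""
    SELECT department_id, COUNT(*)
    FROM employees GROUP BY department_id ORDER BY department_id""").fetchall()

print(len(analytic), grouped)
```

With 1 + 2 + 6 employees, the analytic query returns nine rows while the grouped query returns three, mirroring the two result sets shown above.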
COUNT(DISTINCT) WITH ORDER BY in an analytic function
-- I create a table with three fields: Name, Amount, and a Trans_Date.
CREATE TABLE TEST
(
NAME VARCHAR2(19) NULL,
AMOUNT VARCHAR2(8) NULL,
TRANS_DATE DATE NULL
);
-- I insert a few rows into my table:
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '21', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '68', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/06/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '77', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '221', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '73', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
commit;
/* I want to retrieve all the distinct count of amount for every row in an analytic function with COUNT(DISTINCT AMOUNT) sorted by name and ordered by trans_date where I get only calculate for the last four trans_date for each row (i.e., for the row "Anna 110 6/5/2005 8:00:00.000 PM," I only want to look at the previous dates from 6/2/2005 to 6/5/2005 and get the distinct count of how many amounts there are different for Anna). Note, I cannot use the DISTINCT keyword in this query because it doesn't work with the ORDER BY */
select NAME, AMOUNT, TRANS_DATE, COUNT(/*DISTINCT*/ AMOUNT) over ( partition by NAME
order by TRANS_DATE range between numtodsinterval(3,'day') preceding and current row ) as COUNT_AMOUNT
from TEST t;
These are the results I get if I just count all the AMOUNT values without using DISTINCT:
NAME AMOUNT TRANS_DATE COUNT_AMOUNT
Anna 110 6/1/2005 8:00:00.000 PM 2
Anna 20 6/1/2005 8:00:00.000 PM 2
Anna 110 6/2/2005 8:00:00.000 PM 3
Anna 21 6/3/2005 8:00:00.000 PM 4
Anna 68 6/4/2005 8:00:00.000 PM 5
Anna 110 6/5/2005 8:00:00.000 PM 4
Anna 20 6/6/2005 8:00:00.000 PM 4
Bill 43 6/1/2005 8:00:00.000 PM 1
Bill 77 6/2/2005 8:00:00.000 PM 2
Bill 221 6/3/2005 8:00:00.000 PM 3
Bill 43 6/4/2005 8:00:00.000 PM 4
Bill 73 6/5/2005 8:00:00.000 PM 4
The COUNT_DISTINCT_AMOUNT is the desired output:
NAME AMOUNT TRANS_DATE COUNT_DISTINCT_AMOUNT
Anna 110 6/1/2005 8:00:00.000 PM 1
Anna 20 6/1/2005 8:00:00.000 PM 2
Anna 110 6/2/2005 8:00:00.000 PM 2
Anna 21 6/3/2005 8:00:00.000 PM 3
Anna 68 6/4/2005 8:00:00.000 PM 4
Anna 110 6/5/2005 8:00:00.000 PM 3
Anna 20 6/6/2005 8:00:00.000 PM 4
Bill 43 6/1/2005 8:00:00.000 PM 1
Bill 77 6/2/2005 8:00:00.000 PM 2
Bill 221 6/3/2005 8:00:00.000 PM 3
Bill 43 6/4/2005 8:00:00.000 PM 3
Bill 73 6/5/2005 8:00:00.000 PM 4
Thanks in advance.
You can try to write your own UDAG (user-defined aggregate).
Here is a fake example, just to show how it "could" work; I am using only 1, 2, 4, 8, 16, 32 as potential values.
create or replace type CountDistinctType as object
(
bitor_number number,
static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType)
return number,
member function ODCIAggregateIterate(self IN OUT CountDistinctType,
value IN number) return number,
member function ODCIAggregateTerminate(self IN CountDistinctType,
returnValue OUT number, flags IN number) return number,
member function ODCIAggregateMerge(self IN OUT CountDistinctType,
ctx2 IN CountDistinctType) return number
);
/
create or replace type body CountDistinctType is
static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType)
return number is
begin
sctx := CountDistinctType('');
return ODCIConst.Success;
end;
member function ODCIAggregateIterate(self IN OUT CountDistinctType, value IN number)
return number is
begin
if (self.bitor_number is null) then
self.bitor_number := value;
else
self.bitor_number := self.bitor_number+value-bitand(self.bitor_number,value);
end if;
return ODCIConst.Success;
end;
member function ODCIAggregateTerminate(self IN CountDistinctType, returnValue OUT
number, flags IN number) return number is
begin
returnValue := 0;
for i in 0..log(2,self.bitor_number) loop
if (bitand(power(2,i),self.bitor_number)!=0) then
returnValue := returnValue+1;
end if;
end loop;
return ODCIConst.Success;
end;
member function ODCIAggregateMerge(self IN OUT CountDistinctType, ctx2 IN
CountDistinctType) return number is
begin
return ODCIConst.Success;
end;
end;
/
CREATE or REPLACE FUNCTION CountDistinct (n number) RETURN number
PARALLEL_ENABLE AGGREGATE USING CountDistinctType;
drop table t;
create table t as select rownum r, power(2,trunc(dbms_random.value(0,6))) p from all_objects;
SQL> select r,p,countdistinct(p) over (order by r) d from t where rownum<10 order by r;
R P D
1 4 1
2 1 2
3 8 3
4 32 4
5 1 4
6 16 5
7 16 5
8 4 5
9 4 5
Buy some good book if you want to start writing your own "distinct" algorithm.
Message was edited by:
Laurent Schneider
A simpler but memory-hungry algorithm would use a PL/SQL table in a UDAG and do the COUNT(DISTINCT) over that table to return the value.
-
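That simpler, memory-hungry approach, keeping the window's values and counting the distinct ones, can be sketched outside the database. The pure-Python sketch below reproduces the desired COUNT_DISTINCT_AMOUNT column for the sample data, using a 3-day trailing window per name and treating earlier-listed rows on the same date as preceding rows (which is what the desired output implies for Anna's two 6/1 rows):

```python
from datetime import date, timedelta

rows = [  # (name, amount, trans_date) -- the sample data from this thread
    ("Anna", 110, date(2005, 6, 1)), ("Anna", 20, date(2005, 6, 1)),
    ("Anna", 110, date(2005, 6, 2)), ("Anna", 21, date(2005, 6, 3)),
    ("Anna", 68, date(2005, 6, 4)), ("Anna", 110, date(2005, 6, 5)),
    ("Anna", 20, date(2005, 6, 6)),
    ("Bill", 43, date(2005, 6, 1)), ("Bill", 77, date(2005, 6, 2)),
    ("Bill", 221, date(2005, 6, 3)), ("Bill", 43, date(2005, 6, 4)),
    ("Bill", 73, date(2005, 6, 5)),
]

def count_distinct_window(rows, days=3):
    """COUNT(DISTINCT amount) per name over a trailing <days>-day window,
    considering only rows at or before the current position."""
    out = []
    for i, (name, amount, d) in enumerate(rows):
        window = {a for n, a, d2 in rows[: i + 1]
                  if n == name and d - timedelta(days=days) <= d2 <= d}
        out.append((name, amount, d, len(window)))
    return out

counts = [r[3] for r in count_distinct_window(rows)]
print(counts)
```

This is only client-side logic, not something you can push into the analytic clause itself, but it is a handy way to verify what a UDAG implementation should return.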
Aggregate function with GROUP BY clause
Hello,
The following assignment was given, but I don't get the correct output,
so please help me write code to solve the problem.
There can be multiple records for one customer in VBAK tables with different combinations.
Considering that we do not need details of each sales order,
use Aggregate functions with GROUP BY clause in SELECT to read the fields.
<garbled code removed>
Moderator Message: Please paste the relevant portions of the code
Edited by: Suhas Saha on Nov 18, 2011 1:48 PM
So if you do not want all the repeated records, select all the values into an internal table,
declare a second internal table of the same type, and use COLLECT.
for ex:
data: itab1   type <xxxx>,
      wa_itab like line of itab1,
      itab2   type <xxxx>. "<- This should be the same type as above.
select * from ..... into table itab1.
and now...
loop at itab1 into wa_itab.
  collect wa_itab into itab2.
endloop.
Then you will get your desired result.
-
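The effect of COLLECT, merging rows with equal key fields by summing their numeric fields, can be emulated in a few lines. The sketch below is in Python rather than ABAP so it can be run anywhere; the `orders` rows and the `collect` helper are hypothetical, invented to illustrate the semantics:

```python
def collect(rows, key_len):
    # Emulates ABAP COLLECT: rows whose first key_len fields match are
    # merged by summing the remaining (numeric) fields.
    acc = {}
    for row in rows:
        key, nums = row[:key_len], row[key_len:]
        if key in acc:
            acc[key] = tuple(a + b for a, b in zip(acc[key], nums))
        else:
            acc[key] = tuple(nums)
    return [k + v for k, v in acc.items()]

# Hypothetical VBAK-like rows: (customer, net_value).
orders = [("C1", 100), ("C2", 50), ("C1", 25)]
result = collect(orders, key_len=1)
print(result)  # [('C1', 125), ('C2', 50)]
```

This mirrors what the LOOP ... COLLECT ... ENDLOOP pattern above does: the two C1 rows are merged into one with their values summed.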
Discoverer Analytic Function windowing - errors and bad aggregation
I posted this first on Database General forum, but then I found this was the place to put it:
Hi, I'm using this kind of windowing function:
SUM(Receitas Especificas) OVER(PARTITION BY Tipo Periodo,Calculado,"Empresa Descrição (Operador)","Empresa Descrição" ORDER BY Ini Periodo RANGE BETWEEN INTERVAL '12' MONTH PRECEDING AND INTERVAL '12' MONTH PRECEDING )
If I use the "Receitas Especificas SUM" instead of
"Receitas Especificas" I get the following error running the report:
"an error occurred while attempting to run..."
This is not in accordance with:
http://www.boku.ac.at/oradoc/ias/10g(9.0.4)/bi.904/b10268.pdf
but ok, the version without SUM inside works.
Another problem is the fact that for analytic function with PARTITION BY,
this does not work (shows the cannot aggregate symbol) if we collapse or use "<All>" in page items.
But it works if we remove the item from the PARTITION BY and also remove from workbook.
It's even worse for windowing functions(query above), because the query
only works if we remove the item from the PARTITION BY but we have to show it on the workbook - and this MAKES NO SENSE... :(
Please help.
Unfortunately, Discoverer doesn't show (correct) values for analytic functions when selecting "<All>" in a page item. I found out that it does work when you add the analytic function to the db-view instead of adding it to the report as a calculation or as a calculated item on the folder.
The only problem is that you have to name all page items in the PARTITION clause, so when adding a page item to the report you have to change the db-view and alter the PARTITION clause.
Michael