COUNT(*) in an analytic situation with GROUP BY / ORDER BY
Hello everybody,
I have a COUNT(*) problem in a SQL statement with analytic functions on a table, when I want all of its columns in the result.
Say I have a table MYTABLE1:
CREATE TABLE MYTABLE1 (
MY_TIME TIMESTAMP(3),
PRICE NUMBER,
VOLUME NUMBER
);
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.01.664','DD-MM-YY HH24:MI:SS:FF3' ),49.55,704492 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.02.570','DD-MM-YY HH24:MI:SS:FF3' ),49.55,705136 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,707313 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,706592 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.30.695','DD-MM-YY HH24:MI:SS:FF3' ),49.55,705581 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,707985 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.820','DD-MM-YY HH24:MI:SS:FF3' ),49.56,708494 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.32.258','DD-MM-YY HH24:MI:SS:FF3' ),49.57,708955 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.36.180','DD-MM-YY HH24:MI:SS:FF3' ),49.58,709519 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,710502 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,710102 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,709962 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.52.399','DD-MM-YY HH24:MI:SS:FF3' ),49.59,711427 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.977','DD-MM-YY HH24:MI:SS:FF3' ),49.6,710902 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.50.492','DD-MM-YY HH24:MI:SS:FF3' ),49.6,711379 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.45.550','DD-MM-YY HH24:MI:SS:FF3' ),49.6,711302 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.50.492','DD-MM-YY HH24:MI:SS:FF3' ),49.62,711417 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.57.790','DD-MM-YY HH24:MI:SS:FF3' ),49.49,715587 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.47.712','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715166 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.57.790','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715469 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.821','DD-MM-YY HH24:MI:SS:FF3' ),49.53,714833 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.821','DD-MM-YY HH24:MI:SS:FF3' ),49.53,714914 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.493','DD-MM-YY HH24:MI:SS:FF3' ),49.54,714136 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.19.977','DD-MM-YY HH24:MI:SS:FF3' ),49.55,713387 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.19.977','DD-MM-YY HH24:MI:SS:FF3' ),49.55,713562 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712172 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.09.274','DD-MM-YY HH24:MI:SS:FF3' ),49.59,713287 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.09.117','DD-MM-YY HH24:MI:SS:FF3' ),49.59,713206 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712984 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.836','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712997 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712185 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712261 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.32.244','DD-MM-YY HH24:MI:SS:FF3' ),49.46,725577 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724664 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,723366 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725242 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725477 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.947','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724521 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,723943 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724086 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.34.103','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725609 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.5,720166 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.5,720066 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,718524 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.696','DD-MM-YY HH24:MI:SS:FF3' ),49.5,722086 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,718092 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715673 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.51,719666 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.12.555','DD-MM-YY HH24:MI:SS:FF3' ),49.52,719384 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.28.963','DD-MM-YY HH24:MI:SS:FF3' ),49.48,728830 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.11.884','DD-MM-YY HH24:MI:SS:FF3' ),49.48,726609 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.28.963','DD-MM-YY HH24:MI:SS:FF3' ),49.48,728943 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.45.947','DD-MM-YY HH24:MI:SS:FF3' ),49.49,729627 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.12.259','DD-MM-YY HH24:MI:SS:FF3' ),49.49,726830 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.46.494','DD-MM-YY HH24:MI:SS:FF3' ),49.49,733653 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.46.510','DD-MM-YY HH24:MI:SS:FF3' ),49.49,733772 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.12.259','DD-MM-YY HH24:MI:SS:FF3' ),49.49,727830 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.59.119','DD-MM-YY HH24:MI:SS:FF3' ),49.5,735772 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.47.369','DD-MM-YY HH24:MI:SS:FF3' ),49.5,734772 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.20.463','DD-MM-YY HH24:MI:SS:FF3' ),49.48,740621 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.12.369','DD-MM-YY HH24:MI:SS:FF3' ),49.48,740538 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.20.463','DD-MM-YY HH24:MI:SS:FF3' ),49.48,741021 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.10.588','DD-MM-YY HH24:MI:SS:FF3' ),49.49,740138 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738320 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737122 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,736424 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.260','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737598 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.744','DD-MM-YY HH24:MI:SS:FF3' ),49.49,739360 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,736924 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.260','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737784 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738145 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.744','DD-MM-YY HH24:MI:SS:FF3' ),49.49,739134 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738831 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.215','DD-MM-YY HH24:MI:SS:FF3' ),49.5,742421 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.580','DD-MM-YY HH24:MI:SS:FF3' ),49.5,741777 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.215','DD-MM-YY HH24:MI:SS:FF3' ),49.5,742021 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.48.433','DD-MM-YY HH24:MI:SS:FF3' ),49.5,741091 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.840','DD-MM-YY HH24:MI:SS:FF3' ),49.51,743021 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.57.511','DD-MM-YY HH24:MI:SS:FF3' ),49.52,743497 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.270','DD-MM-YY HH24:MI:SS:FF3' ),49.52,744021 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.17.699','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750292 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.433','DD-MM-YY HH24:MI:SS:FF3' ),49.53,747382 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.17.699','DD-MM-YY HH24:MI:SS:FF3' ),49.53,749939 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.15.152','DD-MM-YY HH24:MI:SS:FF3' ),49.53,749414 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.433','DD-MM-YY HH24:MI:SS:FF3' ),49.53,744882 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.08.110','DD-MM-YY HH24:MI:SS:FF3' ),49.54,749262 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.01.168','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748418 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.01.152','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748243 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.07.293','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748862 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.09.433','DD-MM-YY HH24:MI:SS:FF3' ),49.51,750414 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.262','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750930 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.887','DD-MM-YY HH24:MI:SS:FF3' ),49.53,751986 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.887','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750986 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.30.997','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753900 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.30.887','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753222 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.29.809','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753022 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.29.809','DD-MM-YY HH24:MI:SS:FF3' ),49.55,752847 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.42.622','DD-MM-YY HH24:MI:SS:FF3' ),49.56,755385 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.31.120','DD-MM-YY HH24:MI:SS:FF3' ),49.56,754385 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.49.590','DD-MM-YY HH24:MI:SS:FF3' ),49.6,759087 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.50.341','DD-MM-YY HH24:MI:SS:FF3' ),49.6,759217 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.49.590','DD-MM-YY HH24:MI:SS:FF3' ),49.6,758701 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.57.262','DD-MM-YY HH24:MI:SS:FF3' ),49.6,761049 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.48.637','DD-MM-YY HH24:MI:SS:FF3' ),49.6,757827 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.48.120','DD-MM-YY HH24:MI:SS:FF3' ),49.6,757385 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.466','DD-MM-YY HH24:MI:SS:FF3' ),49.62,761001 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,760109 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,759617 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.278','DD-MM-YY HH24:MI:SS:FF3' ),49.62,760265 );
insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,759954 );
So if I run this query:
SELECT DISTINCT row_number() over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) num,
MIN(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) low ,
MAX(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) high ,
-- sum(volume) over( partition by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 order by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 asc ) volume,
TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 TIME ,
price ,
COUNT( *) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ,price ASC,volume ASC ) TRADE,
first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC,volume ASC ) OPEN ,
first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 DESC,volume DESC) CLOSE ,
lag(price) over ( order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) lag_all
FROM mytable1
WHERE my_time > to_timestamp('04032008:09:00:00','DDMMYYYY:HH24:MI:SS')
AND my_time < to_timestamp('04032008:09:01:00','DDMMYYYY:HH24:MI:SS')
GROUP BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ,
price ,
volume
ORDER BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60,
price ,
num;
I get this result:
NUM|LOW|HIGH|TIME|PRICE|TRADE|OPEN|CLOSE|LAG_ALL
1|49.55|49.62|04/03/2008 09:00:00|49.55|1|49.55|49.59|
2|49.55|49.62|04/03/2008 09:00:00|49.55|2|49.55|49.59|49.55
3|49.55|49.62|04/03/2008 09:00:00|49.55|3|49.55|49.59|49.55
4|49.55|49.62|04/03/2008 09:00:00|49.55|4|49.55|49.59|49.55
5|49.55|49.62|04/03/2008 09:00:00|49.55|5|49.55|49.59|49.55
6|49.55|49.62|04/03/2008 09:00:00|49.55|6|49.55|49.59|49.55
7|49.55|49.62|04/03/2008 09:00:00|49.56|7|49.55|49.59|49.55
8|49.55|49.62|04/03/2008 09:00:00|49.57|8|49.55|49.59|49.56
9|49.55|49.62|04/03/2008 09:00:00|49.58|9|49.55|49.59|49.57
10|49.55|49.62|04/03/2008 09:00:00|49.59|10|49.55|49.59|49.58
11|49.55|49.62|04/03/2008 09:00:00|49.59|11|49.55|49.59|49.59
12|49.55|49.62|04/03/2008 09:00:00|49.59|12|49.55|49.59|49.59
13|49.55|49.62|04/03/2008 09:00:00|49.59|13|49.55|49.59|49.59
14|49.55|49.62|04/03/2008 09:00:00|49.6|14|49.55|49.59|49.59
15|49.55|49.62|04/03/2008 09:00:00|49.6|15|49.55|49.59|49.6
16|49.55|49.62|04/03/2008 09:00:00|49.6|16|49.55|49.59|49.6
17|49.55|49.62|04/03/2008 09:00:00|49.62|17|49.55|49.59|49.6
Which is erroneous, because if I don't include the VOLUME column in the query I get a different result:
SELECT DISTINCT row_number() over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) num,
MIN(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) low ,
MAX(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) high ,
-- sum(volume) over( partition by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 order by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 asc ) volume,
TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 TIME ,
price ,
COUNT( *) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ,price ASC ) TRADE,
first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) OPEN ,
first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 DESC) CLOSE ,
lag(price) over ( order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) lag_all
FROM mytable1
WHERE my_time > to_timestamp('04032008:09:00:00','DDMMYYYY:HH24:MI:SS')
AND my_time < to_timestamp('04032008:09:01:00','DDMMYYYY:HH24:MI:SS')
GROUP BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ,
price
ORDER BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60,
price ,
num;
I get this
NUM|LOW|HIGH|TIME|PRICE|TRADE|OPEN|CLOSE|LAG_ALL
1|49.55|49.62|04/03/2008 09:00:00|49.55|1|49.55|49.55|
2|49.55|49.62|04/03/2008 09:00:00|49.56|2|49.55|49.55|49.55
3|49.55|49.62|04/03/2008 09:00:00|49.57|3|49.55|49.55|49.56
4|49.55|49.62|04/03/2008 09:00:00|49.58|4|49.55|49.55|49.57
5|49.55|49.62|04/03/2008 09:00:00|49.59|5|49.55|49.55|49.58
6|49.55|49.62|04/03/2008 09:00:00|49.6|6|49.55|49.55|49.59
7|49.55|49.62|04/03/2008 09:00:00|49.62|7|49.55|49.55|49.6
How can I get the right count while still selecting all the columns of the table?
Babata
I'm not sure what the "right count" is in your eyes, but I think the DISTINCT keyword is hiding the problem you have. It could also be the reason for the different number of rows between query one and query two.
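One detail that may explain the difference: as soon as COUNT(*) OVER (...) has an ORDER BY, it stops being a per-partition total and becomes a running count over the default window (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), where rows that tie on the ORDER BY keys all share the count of the last peer. Adding VOLUME to the ORDER BY removes most ties, which is why TRADE changes between the two queries. A small Python sketch of that semantics (the rows are made up, not taken from the table above):

```python
# Simulation of Oracle's COUNT(*) OVER (PARTITION BY ... ORDER BY keys):
# the default window is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW,
# so the count is cumulative, and rows that tie on the ORDER BY keys
# ("peers") all receive the count of the last peer.

def running_count(rows, key):
    """Return the windowed COUNT(*) for each row of one partition."""
    rows = sorted(rows, key=key)
    counts = []
    i = 0
    while i < len(rows):
        j = i
        while j < len(rows) and key(rows[j]) == key(rows[i]):
            j += 1                     # find the last peer of row i
        counts.extend([j] * (j - i))   # all peers share the cumulative count
        i = j
    return counts

# (price, volume) rows of one one-minute bucket -- made-up sample
rows = [(49.55, 100), (49.55, 200), (49.59, 150), (49.59, 160)]

print(running_count(rows, key=lambda r: r[0]))          # ORDER BY price        -> [2, 2, 4, 4]
print(running_count(rows, key=lambda r: (r[0], r[1])))  # ORDER BY price, vol   -> [1, 2, 3, 4]
print([len(rows)] * len(rows))                          # no ORDER BY in window -> [4, 4, 4, 4]
```

If a fixed per-minute trade count is wanted on every row, the usual fix is to drop the ORDER BY from the COUNT(*) window, i.e. use COUNT(*) OVER (PARTITION BY ...) only.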
Similar Messages
-
Problem with group by/order by clause when using huge data
Hi,
I'm using below query on my table of size more than 210 million rows.
SELECT booking_date FROM T_UTR
WHERE four_eyes_status = 'A' AND booking_date <= '01-jul-2005' AND booking_date >= '01-jan-2004'
AND invoice_id IS NULL AND link_id = 12345
AND billing_indicator = 'L'
GROUP BY booking_date ORDER BY booking_date
If I skip the last line "GROUP BY booking_date ORDER BY booking_date", it gives me an immediate result; but because of the GROUP BY/ORDER BY, the query may take 30 seconds to 2 minutes depending on the data fetched for the date range. That can vary from 2 to 2 million rows, and grouping that many rows at run time naturally takes some time.
What I want to know is: is there any mechanism in Oracle (e.g. a function-based index) that would let me store and retrieve the distinct values of link_id and booking_date without grouping them at run time, or otherwise improve the query's performance?
Thanks
Deepak

Hi,
You can use Materialized Views as stated earlier - specifically by using Query Rewrite. If the conditions on columns "four_eyes_status", "invoice_id", and "billing_indicator" never change in your query, then you can create a Materialized View that is targeted at those conditions and has lower cardinality (since you aren't grouping by those columns). The COUNT(*) allows the use of the DISTINCT operator in addition to GROUP BY for Query Rewrite.
Create the Materialized View like this:
CREATE MATERIALIZED VIEW test_mv1
BUILD IMMEDIATE
USING NO INDEX
REFRESH FORCE ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT booking_date
, link_id
, COUNT(*) AS count_star
FROM T_UTR
WHERE four_eyes_status = 'A'
AND invoice_id IS NULL
AND billing_indicator = 'L'
GROUP BY booking_date
, link_id;

To improve performance further, create an index on the LINK_ID column like this:
CREATE INDEX test_mv1_link_id_idx
ON test_mv1 (link_id);

Then gather stats immediately on the Materialized View so that the CBO can use it for rewriting your original query, like this:
BEGIN
DBMS_STATS.gather_table_stats (ownname => USER
, tabname => 'TEST_MV1'
, partname => NULL
, estimate_percent => DBMS_STATS.auto_sample_size
, block_sample => FALSE
, method_opt => 'FOR ALL COLUMNS SIZE 1'
, degree => NULL
, granularity => 'ALL'
, cascade => TRUE
, no_invalidate => FALSE
);
END;
/

Now the CBO should be able to rewrite your original query to use the Materialized View, provided you set up your session for Query Rewrite like this:
ALTER SESSION SET query_rewrite_enabled = TRUE;
ALTER SESSION SET query_rewrite_integrity = ENFORCED; -- set this to whatever staleness you can tolerate; see the docs for details

Now, after setting up your session, try running your query with AUTOTRACE to see if it was rewritten.
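The rewrite idea can be sanity-checked outside the database. A rough Python sketch (column names from the post, the rows are made up): the scan of the detail table and the roll-up of the pre-aggregated MV must return the same distinct booking dates, only the MV has far fewer rows to touch.

```python
from collections import Counter
from datetime import date

# Hypothetical detail rows: (booking_date, link_id) pairs that already
# satisfy the fixed predicates (four_eyes_status='A', invoice_id IS NULL,
# billing_indicator='L')
detail = [
    (date(2004, 1, 5), 12345),
    (date(2004, 1, 5), 12345),
    (date(2004, 2, 1), 12345),
    (date(2004, 2, 1), 99999),
    (date(2005, 6, 30), 12345),
]

# The materialized view: one row per (booking_date, link_id) with COUNT(*)
mv = Counter(detail)  # {(booking_date, link_id): count_star}

def query_detail(link_id, lo, hi):
    """Original query: GROUP BY booking_date over the big detail table."""
    return sorted({d for d, l in detail if l == link_id and lo <= d <= hi})

def query_mv(link_id, lo, hi):
    """Rewritten query: the same answer from the much smaller MV."""
    return sorted({d for (d, l) in mv if l == link_id and lo <= d <= hi})

lo, hi = date(2004, 1, 1), date(2005, 7, 1)
assert query_detail(12345, lo, hi) == query_mv(12345, lo, hi)
print(query_mv(12345, lo, hi))
```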
Good luck!
Message was edited by:
PDaddy -
In need help: Analytic Report with Group by
Good morning,
I am trying to create a report with subtotals and a grand total, which of course calls for a GROUP BY clause with ROLLUP, CUBE, GROUPING, etc. I'd like to use ROLLUP, but then some columns in the SELECT list would have to go into the GROUP BY clause that don't belong there. So I had to wrap those columns in one of the SUM, AVG, MIN or MAX functions to make them aggregated, which is wrong.
Another alternative I tried is to use CUBE and GROUPING_ID as the filter. However, that is still very cumbersome and error-prone, and the display order is completely out of control.
I am trying hard to stick with the first option of using ROLLUP, since the result is very close to what I want, while avoiding the aggregation functions. For example, if I want to display column A, which should not be grouped, what can I do other than using those aggregation functions?
Thanks in advance.

Luc,
this is a simple and good reference for analytic functions:
http://www.orafaq.com/node/55
It takes some time to understand how they work, and some more time to learn how to apply them. I have solved several reporting issues with them, avoiding the overkill of aggregates.
Denes Kubicek -
Analytic Functions with GROUP-BY Clause?
I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
Hypothetical table SALES, consisting of DAY_ID, PRODUCT_ID, PURCHASER_ID, PURCHASE_PRICE, lists all the purchases.
Hypothetical Business Question: Product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
INSERT INTO SALES VALUES(1,1,1,1.0);
INSERT INTO SALES VALUES(1,1,1,2.0);
INSERT INTO SALES VALUES(1,2,1,3.0);
INSERT INTO SALES VALUES(1,2,1,4.0);
INSERT INTO SALES VALUES(2,1,1,5.0);
INSERT INTO SALES VALUES(2,1,1,6.0);
INSERT INTO SALES VALUES(2,2,1,7.0);
INSERT INTO SALES VALUES(2,2,1,8.0);
COMMIT;
Day 1: if I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
Day 2: if I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
The desired result set is:
DAY_ID MY_MEASURE
1 6
2 14

The following SQL gets me tantalizingly close:
SELECT DAY_ID,
MAX(PURCHASE_PRICE)
KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
FROM SALES
ORDER BY DAY_ID
DAY_ID MY_MEASURE
1 2
1 2
1 4
1 4
2 6
2 6
2 8
2 8

But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytic functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
I am using a reporting tool, so unfortunately things like inline views are not an option. I need to be able to define MY_MEASURE as something the query tool can apply the SUM() function to in its generated SQL.
(Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
I humbly solicit your collective wisdom, oh forum.

Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function:
SELECT DAY_ID,
PRODUCT_ID,
MAX(PURCHASE_PRICE) MAX_PRICE
FROM SALES
GROUP BY DAY_ID,
PRODUCT_ID
ORDER BY DAY_ID,
PRODUCT_ID
DAY_ID PRODUCT_ID MAX_PRICE
1 1 2
1 2 4
2 1 6
2 2 8 -
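For the record, the business question itself ("sum of the per-product maxima, per day") is just an aggregate of an aggregate, which is easy to confirm outside SQL. A small Python check against the sample rows (my own sketch; it is equivalent to wrapping the GROUP BY query above in an outer SELECT day_id, SUM(max_price) ... GROUP BY day_id):

```python
from collections import defaultdict

# (day_id, product_id, purchaser_id, purchase_price) -- the sample rows
sales = [
    (1, 1, 1, 1.0), (1, 1, 1, 2.0), (1, 2, 1, 3.0), (1, 2, 1, 4.0),
    (2, 1, 1, 5.0), (2, 1, 1, 6.0), (2, 2, 1, 7.0), (2, 2, 1, 8.0),
]

# Step 1: MAX(purchase_price) GROUP BY day_id, product_id
max_price = defaultdict(float)
for day, product, _, price in sales:
    max_price[(day, product)] = max(max_price[(day, product)], price)

# Step 2: SUM of those maxima GROUP BY day_id
my_measure = defaultdict(float)
for (day, _), price in max_price.items():
    my_measure[day] += price

print(dict(my_measure))  # {1: 6.0, 2: 14.0}
```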
Analytic function with GROUP BY
Hi:
using the query below I am getting the following error: ORA-00979: not a GROUP BY expression
SELECT a.proj_title_ds, b.prgm_sers_title_nm,
SUM(c.PRGM_TOT_EXP_AMT) OVER(PARTITION BY c.prgm_id) AS "Total $ Spend1"
FROM iPlanrpt.VM_RPT_PROJECT a INNER JOIN iPlanrpt.VM_RPT_PRGM_SERS b
ON a.proj_id = b.proj_id INNER JOIN iPlanrpt.VM_RPT_PRGM c
ON b.prgm_sers_id = c.prgm_sers_id
WHERE a.proj_id IN (1209624,1209623,1209625, 1211122,1211123)
AND c.PRGM_STATE_ID in (6,7)
GROUP BY a.proj_title_ds, b.prgm_sers_title_nm
Any suggestions to get the desired result (the sum of c.PRGM_TOT_EXP_AMT for each distinct c.prgm_id within the specified GROUP BY) would be helpful.

@OP,
Please mark the other duplicate thread as answered or duplicate. I responded to the other thread and asked for sample data.
With the sample included here...would the following work for you?
SELECT a.proj_title_ds,
b.prgm_sers_title_nm,
SUM (c.prgm_tot_exp_amt) AS "Total $ Spend1"
FROM iplanrpt.vm_rpt_project a
INNER JOIN
iplanrpt.vm_rpt_prgm_sers b
ON a.proj_id = b.proj_id
INNER JOIN
(select distinct prgm_id, prgm_tot_exp_amt from iplanrpt.vm_rpt_prgm ) c
ON b.prgm_sers_id = c.prgm_sers_id
WHERE a.proj_id IN (1209624, 1209623, 1209625, 1211122, 1211123)
AND c.prgm_state_id IN (6, 7)
GROUP BY a.proj_title_ds, b.prgm_sers_title_nm
;

vr,
Sudhakar B. -
Count(*) with group by max(date)
SQL> select xdesc,xcust,xdate from coba1 order by xdesc,xcust,xdate;
XDESC XCUST XDATE
RUB-A 11026 01-JAN-06
RUB-A 11026 05-JAN-06
RUB-A 11026 08-JAN-06
RUB-A 11027 10-JAN-06
RUB-B 11026 02-JAN-06
RUB-B 11026 08-JAN-06
RUB-B 11026 09-JAN-06
RUB-C 11027 08-JAN-06
I want to write SQL that produces this result:
XDESC COUNT(*)
RUB-A 2
RUB-B 1
RUB-C 1
Criteria: grouping by XDESC, XCUST and MAX(XDATE).
Below, the rows marked *** are the ones selected for the count.
XDESC XCUST XDATE
RUB-A 11026 01-JAN-06
RUB-A 11026 05-JAN-06
RUB-A 11026 08-JAN-06 ***
RUB-A 11027 10-JAN-06 ***
---------------------------------------------------------COUNT RUB-A = 2
RUB-B 11026 02-JAN-06
RUB-B 11026 08-JAN-06
RUB-B 11026 09-JAN-06 ***
---------------------------------------------------------COUNT RUB-B = 1
RUB-C 11027 08-JAN-06 ***
--------------------------------------------------------COUNT RUB-C = 1
Can Anybody help ?
I tried :
select xdesc,max(xdate),count(max(xdate)) from coba1 group by xdesc
ERROR at line 1:
ORA-00937: not a single-group group function
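Whatever the final SQL is, the target numbers are easy to pin down. Grouping by XDESC and XCUST and keeping MAX(XDATE) leaves one row per (XDESC, XCUST) pair, so the count per XDESC is just the number of distinct customers. A quick Python check on the sample rows (my own sketch, not Oracle code):

```python
# (xdesc, xcust, xdate) -- the sample rows from the post, ISO dates
coba1 = [
    ('RUB-A', 11026, '2006-01-01'), ('RUB-A', 11026, '2006-01-05'),
    ('RUB-A', 11026, '2006-01-08'), ('RUB-A', 11027, '2006-01-10'),
    ('RUB-B', 11026, '2006-01-02'), ('RUB-B', 11026, '2006-01-08'),
    ('RUB-B', 11026, '2006-01-09'), ('RUB-C', 11027, '2006-01-08'),
]

# One row per (xdesc, xcust), keeping the max date...
latest = {}
for desc, cust, d in coba1:
    key = (desc, cust)
    latest[key] = max(latest.get(key, d), d)

# ...then count those rows per xdesc
counts = {}
for (desc, _cust) in latest:
    counts[desc] = counts.get(desc, 0) + 1

print(counts)  # {'RUB-A': 2, 'RUB-B': 1, 'RUB-C': 1}
```

This is what SELECT xdesc, COUNT(DISTINCT xcust) FROM coba1 GROUP BY xdesc computes, since MAX(XDATE) only picks a representative row and never changes the group count.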
Thanks

This one is a duplicate. See the following link:
Count(*) with group by max(date)
Thanks -
Max, Min and Count with Group By
Hello,
i want the max, min and count of a table, which is grouped by a column
I need a combination of these two selects:
select
max(COUNTRY_S) MAXVALUE,
min(COUNTRY_S) MINVALUE
from
tab_Country
select
count(*)
from
(select COUNTRY_TXT from tab_Country group by COUNTRY_TXT) ;
The result should be one row with the max and min values of the table and the count of the grouped table, not the max and min of each group. I hope you understand my question?
Is this possible in one SQL-select?
Thank you very much
Best regards
Heidi

Hi, Heidi,
It's not clear what you want. Perhaps
SELECT MAX (country_s) AS max_country_s
, MIN (country_s) AS min_country_s
, COUNT (DISTINCT country_txt) AS count_country_txt
FROM tab_country
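The three aggregates sit happily in one pass because none of them needs a GROUP BY. A quick Python plausibility check of the same semantics (made-up rows, since no sample data was posted):

```python
# Hypothetical tab_country rows: (country_s, country_txt)
tab_country = [
    (10, 'Austria'), (40, 'Belgium'), (25, 'Austria'),
    (55, 'Croatia'), (55, 'Belgium'),
]

max_country_s = max(s for s, _ in tab_country)                 # MAX(country_s)
min_country_s = min(s for s, _ in tab_country)                 # MIN(country_s)
count_country_txt = len({txt for _, txt in tab_country})       # COUNT(DISTINCT country_txt)

print(max_country_s, min_country_s, count_country_txt)  # 55 10 3
```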
I hope this answers your question.
If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all the tables involved, and the results you want from that data.
Explain, using specific examples, how you get those results from that data.
Always say what version of Oracle you're using (e.g. 11.2.0.2.0).
See the forum FAQ: https://forums.oracle.com/message/9362002 -
Problem with Group policies and Administrator count
I have a problem with Group Policies and the Administrator account.
Win XP, Client 4.91, Client Zen 4
I use DLU for users.
The Group Policies are applied correctly, and I keep them after logout for security reasons.
But my problem is that after logout the Administrator account inherits these Group Policies, and the only technique I have is to remove the directories c:\windows\system32\GroupPolicy*. The Administrator must log in again to get the correct policies.
Can you help me?

Bill,
It appears that in the past few days you have not received a response to your
posting. That concerns us, and has triggered this automated reply.
Has your problem been resolved? If not, you might try one of the following options:
- Do a search of our knowledgebase at http://support.novell.com/search/kb_index.jsp
- Check all of the other support tools and options available at
http://support.novell.com.
- You could also try posting your message again. Make sure it is posted in the
correct newsgroup. (http://support.novell.com/forums)
Be sure to read the forum FAQ about what to expect in the way of responses:
http://support.novell.com/forums/faq_general.html
If this is a reply to a duplicate posting, please ignore and accept our apologies
and rest assured we will issue a stern reprimand to our posting bot.
Good luck!
Your Novell Product Support Forums Team
http://support.novell.com/forums/ -
Problems with GROUP BY - not a GROUP BY expression
Hello,
I am fighting a little bit with a GROUP BY expression.
After some tests I was able to reduce the problem to the following...
When can I use column numbers in a GROUP BY expression?
Consider this situation:
CREATE TABLE EMP4 (
NAME VARCHAR2(10)
);
COMMIT;
INSERT INTO EMP4 VALUES('Tamara');
INSERT INTO EMP4 VALUES('John');
INSERT INTO EMP4 VALUES('Joseph');
COMMIT;
SELECT NAME, COUNT(*)
FROM EMP4
GROUP BY 1;
00979. 00000 - "not a GROUP BY expression"
-- This is working
SELECT NAME, COUNT(*)
FROM EMP4
GROUP BY NAME;

Why is GROUP BY 1 not working?
I am using GROUP BY 1 because in the real query there is a PL/SQL function which modifies the column NAME, so I can't use the plain column name:
SELECT TEST_PACKAGE.AppendSomeCharacter(NAME), COUNT(*)
FROM EMP4
GROUP BY 1;

Of course I can nest the query and move the COUNT and GROUP BY to an outer query, or do something else, but I was just curious why the GROUP BY is not working... (As far as I know, Oracle's GROUP BY does not support positional notation: GROUP BY 1 groups by the constant 1, not by the first column, so NAME is not in the grouping list and ORA-00979 is raised. Positions work only in ORDER BY.)
Also, in the real query there are 3 columns in the GROUP BY expression, so I have GROUP BY 1, 2, 3.
Thanks for help

Hi,
try the following
CREATE TABLE TBL(ID NUMBER,VAL VARCHAR(20));
INSERT INTO TBL VALUES(1,'Z');
INSERT INTO TBL VALUES(2,'X');
INSERT INTO TBL VALUES(1,'Z');
INSERT INTO TBL VALUES(2,'X');
INSERT INTO TBL VALUES(3,'A');
INSERT INTO TBL VALUES(4,'H');
INSERT INTO TBL VALUES(5,'B');
INSERT INTO TBL VALUES(6,'C');
INSERT INTO TBL VALUES(7,'T');
INSERT INTO TBL VALUES(3,'A');
INSERT INTO TBL VALUES(4,'H');
INSERT INTO TBL VALUES(5,'B');
INSERT INTO TBL VALUES(6,'C');
INSERT INTO TBL VALUES(7,'T');
CREATE TYPE SAMPLETYPE AS OBJECT ( ID NUMBER, NAME
VARCHAR2(25) ) ;
CREATE TYPE SAMPETBLTYPE AS TABLE OF SAMPLETYPE;
CREATE OR REPLACE FUNCTION SAMPLEFUNC (
p_colname varchar2
) return SAMPETBLTYPE pipelined as
ret_val SAMPLETYPE;
TYPE cursor_ref IS REF CURSOR;
fcur cursor_ref;
di TBL%ROWTYPE;
sqlstr varchar2(1000);
colname varchar(30):=p_colname;
begin
sqlstr:='SELECT * FROM TBL ORDER BY '|| colname ;
DBMS_OUTPUT.PUT_LINE(sqlstr);
open fcur FOR sqlstr;
loop
FETCH fcur INTO di;
EXIT WHEN fcur%NOTFOUND;
ret_val:=SAMPLETYPE(di.ID,di.VAL);
PIPE ROW(ret_val);
end loop;
close fcur;
return;
end;
select * from table(SAMPLEFUNC('ID'));
select * from table(SAMPLEFUNC('VAL')); -
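The pipelined function above can be sketched as a Python generator over sqlite3: rows are fetched from a query whose ORDER BY column is chosen at runtime and yielded one at a time, analogous to PIPE ROW. A whitelist check is added here because concatenating a column name into dynamic SQL is an injection risk (the original PL/SQL concatenates it unchecked):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TBL (ID INTEGER, VAL TEXT)")
conn.executemany("INSERT INTO TBL VALUES (?, ?)",
                 [(1, "Z"), (2, "X"), (3, "A"), (4, "H")])

ALLOWED = {"ID", "VAL"}  # whitelist: never interpolate raw user input into SQL

def sample_func(colname):
    """Yield rows ordered by the named column, like the pipelined SAMPLEFUNC."""
    if colname not in ALLOWED:
        raise ValueError("unknown column: " + colname)
    sql = "SELECT ID, VAL FROM TBL ORDER BY " + colname
    for row in conn.execute(sql):  # rows are "piped" back one at a time
        yield row

print(list(sample_func("VAL")))  # [(3, 'A'), (4, 'H'), (2, 'X'), (1, 'Z')]
print(list(sample_func("ID")))   # [(1, 'Z'), (2, 'X'), (3, 'A'), (4, 'H')]
```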
Analytic Question with lag and lead
Hello,
I'm working on tracking a package and the number of times it was recorded in an office. I want to see the start and end dates along with the number of occurrences (or records) between those dates. I'm pretty confident I can get the start and end dates correct, but it is the number of occurrences that is the issue.
Essentially, I want to build a timeline of start and end dates and the number of times the package was recorded in the office.
I am fumbling around with the LAG and LEAD analytics to build the start/end dates along with the count of occurrences during that period.
I've been using the LAG and LEAD analytic functions and can pretty much get the start and end dates set up, but I'm having difficulty determining the count, COUNT(*), within the analytic. (I think I can do it outside the analytic with a self join, but performance will suffer; I have millions of records in this table.)
I've been playing with windowing using RANGE and INTERVAL days, but to no avail. When I try COUNT(*) OVER (PARTITION BY package_id, location_office_id ORDER BY event_date RANGE ...) I can calculate the interval correctly by subtracting the lead date from the current date; however,
the count is off, because when I partition by package_id, location_office_id I get the third group of package 12 in the same window as the first group of package 12, since they are at the same office. I want to treat these separately because the package has gone to a different office in between.
I've attached the DDL/DML to create my test case. Any help would be appreciated.
--Current
package_id location_office_id event_date
12 1 20010101
12 1 20010102
12 1 20010103
13 5 20010102
13 5 20010104
13 5 20010105
13 6 20010106
13 6 20010111
12 2 20010108
12 2 20010110
12 1 20010111
12 1 20010112
12 1 20010113
12 1 20010114
--Needs to look like
package_id location_office_id start_date end_date count
12 1 20010101 20010103 3
12 2 20010108 20010110 2
12 1 20010111 20010114 4
13 5 20010102 20010105 3
13 6 20010106 20010111 2
create table test (package_id number, location_office_id number,event_date date);
insert into test values (12,1,to_date('20010101','YYYYMMDD'));
insert into test values (12,1,to_date('20010102','YYYYMMDD'));
insert into test values (12,1,to_date('20010103','YYYYMMDD'));
insert into test values (13,5,to_date('20010102','YYYYMMDD'));
insert into test values (13,5,to_date('20010104','YYYYMMDD'));
insert into test values (13,5,to_date('20010105','YYYYMMDD'));
insert into test values (13,6,to_date('20010106','YYYYMMDD'));
insert into test values (13,6,to_date('20010111','YYYYMMDD'));
insert into test values (12,2,to_date('20010108','YYYYMMDD'));
insert into test values (12,2,to_date('20010110','YYYYMMDD'));
insert into test values (12,1,to_date('20010111','YYYYMMDD'));
insert into test values (12,1,to_date('20010112','YYYYMMDD'));
insert into test values (12,1,to_date('20010113','YYYYMMDD'));
insert into test values (12,1,to_date('20010114','YYYYMMDD'));
commit;
--I'm trying something like
select package_id, location_office_id, event_date,
       lead(event_date) over (partition by package_id, location_office_id order by event_date) lead_event,
       count(*) over (partition by package_id, location_office_id order by event_date) rcount
from test;
-- When I do this, the window merges the rows for package 12 and location 1, so I get the total;
-- however, I want to keep them separate because the package moved to another office in between.
Appreciate your help.
Hi,
Thanks for posting the CREATE TABLE and INSERT statements; that's very helpful!
You can do what you want with LEAD and/or LAG, but here's a more elegant way, using the analytic ROW_NUMBER function:
WITH got_grp_num AS
(
    SELECT package_id, location_office_id, event_date
    ,      ROW_NUMBER () OVER ( PARTITION BY package_id
                                ORDER BY     event_date
                              )
         - ROW_NUMBER () OVER ( PARTITION BY package_id
                                ,            location_office_id
                                ORDER BY     event_date
                              ) AS grp_num
    FROM   test
--  WHERE  ...        -- If you need any filtering, put it here
)
SELECT package_id
, location_office_id
, MIN (event_date) AS start_date
, MAX (event_date) AS end_date
, COUNT (*) AS cnt
FROM got_grp_num
GROUP BY package_id
, location_office_id
, grp_num
ORDER BY package_id
, start_date
;
This approach treats the problem as a GROUP BY problem. Getting start_date, end_date and cnt is trivial using aggregate functions. The tricky part is what to GROUP BY. We can't just GROUP BY package_id and location_office_id because, when a package (like package_id=12) leaves an office, goes to another office, then comes back, the two periods spent in the same office have to be treated as separate groups. We need something else to GROUP BY. The query above uses the Fixed Difference method to provide that something else. To see how this works, let's run the sub-query (slightly modified) by itself:
WITH got_grp_num AS
(
    SELECT package_id, location_office_id, event_date
    ,      ROW_NUMBER () OVER ( PARTITION BY package_id
                                ORDER BY     event_date
                              ) AS p_num
    ,      ROW_NUMBER () OVER ( PARTITION BY package_id
                                ,            location_office_id
                                ORDER BY     event_date
                              ) AS p_l_num
    FROM   test
)
SELECT g.*
, p_num - p_l_num AS grp_num
FROM got_grp_num g
ORDER BY package_id
, event_date
;
Output:
            LOCATION
PACKAGE     _OFFICE
_ID         _ID       EVENT_DATE   P_NUM   P_L_NUM   GRP_NUM
12 1 2001-01-01 1 1 0
12 1 2001-01-02 2 2 0
12 1 2001-01-03 3 3 0
12 2 2001-01-08 4 1 3
12 2 2001-01-10 5 2 3
12 1 2001-01-11 6 4 2
12 1 2001-01-12 7 5 2
12 1 2001-01-13 8 6 2
12 1 2001-01-14 9 7 2
13 5 2001-01-02 1 1 0
13 5 2001-01-04 2 2 0
13 5 2001-01-05 3 3 0
13 6 2001-01-06 4 1 3
13 6 2001-01-11 5 2 3
As you can see, p_num numbers the rows for each package with consecutive integers. P_l_num likewise numbers the rows with consecutive integers, but instead of having a separate series of numbers for each package, it has a separate series for each package and location. As long as a package remains at the same location, both numbers increase by 1, and therefore the difference between the two numbers stays fixed. (This assumes that the combination (package_id, event_date) is unique.) But whenever a package changes from one location to another and then comes back, p_num will have increased, but p_l_num will resume where it left off, and so the difference will not be the same as it was previously. The amount of the difference doesn't mean anything by itself; it's just a number (more or less arbitrary) that, together with package_id and location_office_id, uniquely identifies the groups.
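The fixed-difference query above runs unchanged on any engine with window functions. A sketch with Python's sqlite3 (requires SQLite 3.25+ for ROW_NUMBER) over the same test data, producing the desired start/end/count result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (package_id INT, location_office_id INT, event_date TEXT)")
rows = [(12, 1, '20010101'), (12, 1, '20010102'), (12, 1, '20010103'),
        (13, 5, '20010102'), (13, 5, '20010104'), (13, 5, '20010105'),
        (13, 6, '20010106'), (13, 6, '20010111'),
        (12, 2, '20010108'), (12, 2, '20010110'),
        (12, 1, '20010111'), (12, 1, '20010112'), (12, 1, '20010113'), (12, 1, '20010114')]
conn.executemany("INSERT INTO test VALUES (?,?,?)", rows)

result = conn.execute("""
    WITH got_grp_num AS (
        SELECT package_id, location_office_id, event_date,
               ROW_NUMBER() OVER (PARTITION BY package_id ORDER BY event_date)
             - ROW_NUMBER() OVER (PARTITION BY package_id, location_office_id
                                  ORDER BY event_date) AS grp_num
        FROM test
    )
    SELECT package_id, location_office_id,
           MIN(event_date) AS start_date,
           MAX(event_date) AS end_date,
           COUNT(*) AS cnt
    FROM got_grp_num
    GROUP BY package_id, location_office_id, grp_num
    ORDER BY package_id, start_date
""").fetchall()
for r in result:
    print(r)
```

The five rows printed match the "Needs to look like" output in the question: (12,1) 3 rows, (12,2) 2 rows, (12,1) again 4 rows, (13,5) 3 rows, (13,6) 2 rows.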
Edited by: Frank Kulash on Oct 26, 2011 8:49 PM
Added explanation. -
Select count from large fact tables with bitmap indexes on them
Hi,
I have several large fact tables with bitmap indexes on them. When I do a SELECT COUNT(*) from these tables, I get a different result than when I do a SELECT COUNT(*), column_one FROM the table GROUP BY column_one. I don't have any NULL values in these columns. Is there a patch or a one-off that can rectify this?
Thx
You may have corruption in the index if the queries ...
Select /*+ full(t) */ count(*) from my_table t
... and ...
Select /*+ index_combine(t my_index) */ count(*) from my_table t;
... give different results.
Look on Metalink for patches, and in the meantime drop and recreate the indexes, or make them unusable and then rebuild them. -
Hi BPC friends,
I have created a Journal Entry template with header information, groups and so on.
I entered one journal entry with groups = 'EUR' and saved it.
I have seen that it saved the journal with groups = 'LC'.
Is it not possible to post a journal with groups = 'EUR'?
thanks
regards
Michele Medaglia
Hi,
Another approach is to create a folder in the local Administrator's home folder. Name it Applications. Place the applications you want to restrict access to into that folder. If you have ARD you could use the mkdir and mv commands to achieve this. In some situations I find this an easier way of managing applications than what's available in Workgroup Manager. For me it only tends to work effectively with Apple's built-in applications. Anything else is liable to cause a problem along the lines you mention. Some 3rd-party applications can have dependencies that may be sited in different locations. The trick is tracking them down.
Tony -
I am having trouble with a query.
Table 1
id status
1 STARTED
2 STARTED
3 STARTED
4 STARTED
5 STARTED
6 INCOMPLETE
7 INPROGRESS
Table 2
id individual
1 MMJ
1 JBJ
2 MKJ
3 MKJ
3 LJJ
4 MMJ
5 MMJ
5 JBJ
6 ADJ
7 ADJ
The two tables are linked by id.
The user wants to see the count of STARTED projects for two groups.
Group 1 is MMJ, JBJ, ADJ
Group 2 is MKJ, LJJ
These groups aren’t assigned anywhere in the schema – I just got a list of names and what group they should go into…
How do I go about grouping the individuals and then how do I perform the count?
I did do a count and group by individual but my results are skewed because I get a double count on the ids that have multiple individuals assigned. Can anyone provide some insight?
Thank You!
SELECT
SUM(group1),
SUM(group2)
FROM
(
SELECT DISTINCT
t1.id,
DECODE(t2.individual, 'MMJ', 1, 'JBJ', 1, 'ADJ', 1, 0) group1,
DECODE(t2.individual, 'MKJ', 1, 'LJJ', 1, 0) group2
FROM
(select 1 id, 'STARTED' status from DUAL
union
select 2 id, 'STARTED' status from DUAL
union
select 3 id, 'STARTED' status from DUAL
union
select 4 id, 'STARTED' status from DUAL
union
select 5 id, 'STARTED' status from DUAL
union
select 6 id, 'INCOMPLETE' status from DUAL
union
select 7 id, 'INPROGRESS' status from DUAL
) t1,
(select 1 id, 'MMJ' individual from dual
union
select 1 id, 'JBJ' individual from dual
union
select 2 id, 'MKJ' individual from dual
union
select 3 id, 'MKJ' individual from dual
union
select 3 id, 'LJJ' individual from dual
union
select 4 id, 'MMJ' individual from dual
union
select 5 id, 'MMJ' individual from dual
union
select 5 id, 'JBJ' individual from dual
union
select 6 id, 'ADJ' individual from dual
union
select 7 id, 'ADJ' individual from dual
) t2
WHERE
t1.id = t2.id AND
t1.status = 'STARTED'
);
SUM(GROUP1) SUM(GROUP2)
3 2 -
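The DECODE-plus-DISTINCT approach above can be checked with Python's sqlite3, writing DECODE as the equivalent CASE expression (sqlite has no DECODE). The DISTINCT in the inline view is what prevents the double count the poster saw: an id with two individuals from the same group collapses to one row before the SUM:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INT, status TEXT);
    CREATE TABLE t2 (id INT, individual TEXT);
    INSERT INTO t1 VALUES (1,'STARTED'),(2,'STARTED'),(3,'STARTED'),
                          (4,'STARTED'),(5,'STARTED'),(6,'INCOMPLETE'),(7,'INPROGRESS');
    INSERT INTO t2 VALUES (1,'MMJ'),(1,'JBJ'),(2,'MKJ'),(3,'MKJ'),(3,'LJJ'),
                          (4,'MMJ'),(5,'MMJ'),(5,'JBJ'),(6,'ADJ'),(7,'ADJ');
""")

row = conn.execute("""
    SELECT SUM(group1), SUM(group2)
    FROM (SELECT DISTINCT t1.id,
                 CASE WHEN t2.individual IN ('MMJ','JBJ','ADJ') THEN 1 ELSE 0 END AS group1,
                 CASE WHEN t2.individual IN ('MKJ','LJJ') THEN 1 ELSE 0 END AS group2
          FROM t1 JOIN t2 ON t1.id = t2.id
          WHERE t1.status = 'STARTED')
""").fetchone()
print(row)  # (3, 2)
```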
Weird situation with BINARY SEARCH in READ statement
Hi Experts,
I got a weird situation with BINARY SEARCH. Below is my code:
data: begin of it_vbap occurs 0,
        vbeln like vbap-vbeln,
        posnr like vbap-posnr,   " (I also tried posnr(6) type n)
      end of it_vbap.
data: counter type i.            " (I also tried counter(6) type n)
my it_vbap is filled like below:
vbeln       posnr
12345678    000001
12345678    000002
12345678    000003
12345678    000004
12345678    000005
12345678    000006
sort it_vbap by posnr.   " (*)
clear counter.
loop at it_vbap.
  counter = counter + 1.
  read table it_vbap with key posnr = counter
       binary search.
  " (If I comment out the SORT statement marked (*) above and also delete BINARY SEARCH, then it works!)
  if sy-subrc = 0.
    " here is my logic.
  endif.
endloop.
So now:
1st loop: sy-subrc = 0.
2nd loop: sy-subrc = 0.
3rd loop: sy-subrc NE 0.
4th loop: sy-subrc = 0.
5th loop: sy-subrc NE 0.
6th loop: sy-subrc NE 0.
So why, even though all the entries are in it_vbap, am I getting sy-subrc NE 0?
Is the reason that there are few entries in it_vbap and I am using BINARY SEARCH?
Thank you
Edited by: SAP ABAPer on Dec 4, 2008 8:33 PM
Edited by: SAP ABAPer on Dec 4, 2008 8:37 PM
Edited by: SAP ABAPer on Dec 4, 2008 8:37 PM
Hello,
The following coding works perfectly (6x sy-subrc = 0) on ERP 6.0:
*& Report ZUS_SDN_ITAB_BINARY_SEARCH
REPORT zus_sdn_itab_binary_search.
TABLES: vbap.
DATA: BEGIN OF it_vbap OCCURS 0,
vbeln LIKE vbap-vbeln,
posnr LIKE vbap-posnr, "( i also tried like, posnr(6) type n)
END OF it_vbap.
DATA: counter TYPE posnr.
START-OF-SELECTION.
" Fill itab with data:
* 12345678-------000001
* 12345678-------000002
* 12345678-------000003
* 12345678-------000004
* 12345678-------000005
* 12345678-------000006
REFRESH: it_vbap.
CLEAR: vbap.
DO 6 TIMES.
it_vbap-vbeln = '12345678'.
it_vbap-posnr = syst-index.
APPEND it_vbap.
ENDDO.
SORT it_vbap[] BY posnr. " for BINARY SEARCH
BREAK-POINT.
clear counter.
loop at it_vbap.
counter = counter + 1.
READ TABLE it_vbap WITH KEY posnr = counter
BINARY SEARCH. " (after commenting the above sort * marked statement, then,if i delete binary search, then its working!!)
IF sy-subrc = 0.
"here is my logic.
ENDIF.
ENDLOOP.
END-OF-SELECTION.
By the way, if your requirement is to check whether the first item has POSNR = '000001', the second item has POSNR = '000002' and so on then you can simplify your coding like this:
counter = 0.
LOOP AT it_vbap.
counter = syst-tabix.
IF ( it_vbap-posnr = counter ).
" put in here your logic
ENDIF.
ENDLOOP.
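The rule behind the fix above, illustrated in Python rather than ABAP as a general sketch: a binary search is only reliable when the data is sorted by the key being compared (in the same type and representation it was sorted by, which is why counter is declared TYPE posnr in the working version). Violate that assumption and the search misses entries silently instead of raising an error:

```python
from bisect import bisect_left

def contains(seq, x):
    """Binary-search lookup; correct only if seq is sorted ascending."""
    i = bisect_left(seq, x)
    return i < len(seq) and seq[i] == x

data = [3, 1, 2, 6, 4, 5]

print(contains(sorted(data), 1))  # True: sorted input, the entry is found
print(contains(data, 1))          # False: unsorted input silently misses it
```

This mirrors READ TABLE ... BINARY SEARCH: sy-subrc NE 0 does not mean the entry is absent, only that the search's sorted-order assumption led it to the wrong place.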
Regards
Uwe -
TimesTen SQL with group by returning multiple rows
I have Active-Standby TimesTen nodes.
Using GROUP BY with or without a HAVING clause:
Whenever I do a GROUP BY query on the table1 table, with or without a HAVING clause, the SQL returns multiple rows. This looks very strange to me; each time it gives a different count.
Command> select count(*) from table1 group by pname having pname='pool';
< 390400 >
1 row found.
Command> select count(*) from table1 group by pname having pname='pool';
< 390608 >
< 32639 >
2 rows found.
Command> select count(*) from table1 group by pname having pname='pool';
< 2394 >
< 351057 >
2 rows found.
Command> select count(*) from table1 group by pname having pname='pool';
< 305732 >
1 row found.
Command> select count(*) from table1 group by pname having pname='pool';
< 420783 >
1 row found.
Command> select count(*),pool_name from root.rms_address_pools group by pool_name order by pool_name;
< 435473, pool >
< 32313, pool >
< 453, smvG3 >
< 28980, pool >
< 3786, smvG4 >
< 26025, pool >
< 236120, smvG6 >
< 131455, smcG3 >
< 65150, pool >
< 23, snt1G1 >
< 510, snt2G1 >
< 510, snt2G2 >
Using where clause:
Command> select count(*) from table1 where pname='pool';
< 442354 >
1 row found.
Command> select count(*) from table1 where pname='pool';
< 442354 >
1 row found.
Table description:
Command> desc table1;
Table table1:
Columns:
IP_ADDRESS BIGINT NOT NULL
PNAME CHAR (32) NOT NULL
SITEID TINYINT NOT NULL
1 table found.
ttVersion:
bash-3.00# ./ttVersion
TimesTen Release 7.0.3.1.0 (64 bit Solaris) (tt70:17001) 2007-10-30T22:17:07Z
Instance admin: root
Instance home directory: /TimesTen/tt70
Daemon home directory: /var/TimesTen/tt70
bash-3.00#
Could anyone suggest what is wrong with my SQL? Or is it a bug in TimesTen?
Many thanks in advance.
Br,
Brij
Hi Gena,
When I execute the query with a WHERE clause, it gives me output with more than one pool:
Command> select pname, count (*) from table1 where pname='pool' group by pname ;
< smcG3 , 18836 >
< pool , 423527 >
2 rows found.
Command> select pname, count (*) from table1 where pname='pool' group by pname ;
< intG302 , 17202 >
< pool , 425159 >
2 rows found.
While if I use the HAVING clause, it (sometimes) gives me multiple rows for one pool only:
select pname, count (*) from table1 group by pname having pname='pool';
< pool , 32686 >
< pool , 420445 >
2 rows found.
select pname, count (*) from table1 group by pname having pname='pool';
< pool , 393574 >
< pool , 5838 >
< pool , 110943 >
3 rows found.
Command> select pname, count (*) from table1 group by pname having pname='pool';
< pool , 414590 >
< pool , 8395 >
2 rows found.
Please suggest what can be done in this case; do I need to open a case with Oracle for this?
Regards, Brij
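For reference, the behaviour a conforming SQL engine should show, sketched with Python's sqlite3: WHERE filters rows before grouping, HAVING filters groups after grouping, and GROUP BY pname can return at most one row per distinct pname value. So several 'pool' rows from one query, or WHERE and HAVING totals that disagree on clean data, indicate a product bug (or data store corruption) rather than a mistake in the SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (ip_address INT, pname TEXT, siteid INT)")
# Hypothetical sample data: 5 'pool' rows and 5 others
conn.executemany("INSERT INTO table1 VALUES (?,?,?)",
                 [(i, "pool" if i % 2 == 0 else "smvG3", 1) for i in range(10)])

where_count = conn.execute(
    "SELECT COUNT(*) FROM table1 WHERE pname='pool'").fetchall()
having_count = conn.execute(
    "SELECT COUNT(*) FROM table1 GROUP BY pname HAVING pname='pool'").fetchall()

# Both forms must agree, and each must return exactly one row for 'pool'.
print(where_count)   # [(5,)]
print(having_count)  # [(5,)]
```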