Analytic function, rows in col
Hi everyone,
I have this query which gives the result I need:
select distinct id_media, to_char(date_media,'WW') week
, count(rowid) over (partition by id_media,to_char(date_media,'WW') order by id_media) as nb_count
from music01.data_radio
Result :
2004 06 2 => previous week
2004 07 4 => current week
But I would rather have a result like:
2004 2 4 => previous and current week on the same row
How can I do that?
I don't think it's anywhere close to what the OP wants.
SQL> With t
2 as
3 (select '2004' as year, '06' as week, 2 as nb_count
4 from dual
5 union all
6 Select '2004', '07', 4
7 from dual
8 union all
9 Select '2004', '08', 7
10 from dual)
11 Select year, nb_count,lag(nb_count,1) OVER(ORDER BY week) from t
12 /
YEAR NB_COUNT LAG(NB_COUNT,1)OVER(ORDERBYWEEK)
2004 2
2004 4 2
2004 7 4
He is looking for something like this:
SQL> With t
2 as
3 (select '2004' as year, '06' as week, 2 as nb_count
4 from dual
5 union all
6 Select '2004', '07', 4
7 from dual
8 union all
9 Select '2004', '08', 7
10 from dual)
11 select year,
12 max(decode(week, '06', nb_count)) "06",
13 max(decode(week, '07', nb_count)) "07",
14 max(decode(week, '08', nb_count)) "08"
15 from t
16 group by year
17 /
YEAR 06 07 08
2004 2 4 7
Edited by: Karthick_Arp on Feb 24, 2009 4:07 AM
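As a side note, on Oracle 11g and later the same pivot can be written with the PIVOT clause. This is just a sketch against the same WITH-clause test data, with the week values hard-coded exactly as in the DECODE version:

```sql
-- Sketch: 11g PIVOT clause equivalent of the MAX(DECODE(...)) pivot above.
with t as (
  select '2004' as year, '06' as week, 2 as nb_count from dual union all
  select '2004', '07', 4 from dual union all
  select '2004', '08', 7 from dual
)
select *
from t
pivot (max(nb_count) for week in ('06' as "06", '07' as "07", '08' as "08"));
```

Like the DECODE version, this only works when the set of week values is known in advance.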
Similar Messages
-
Hide duplicate rows and analytic functions
Hi all,
I have to count how many customers have two particular products in the same area.
Table cols are:
AREA
PRODUCT_CODE (PK)
CUSTOMER_ID (PK)
QTA
The query is:
select distinct area, count(customer_id) over(PARTITION BY area)
from all_products
where product_code in ('BC01007', 'BC01004')
group by area, customer_id
having sum(decode(product_code,'BC01007',qta,0)) > 0
and sum(decode(product_code,'BC01004',qta,0)) > 0;
In SQL*Plus it works fine, but in Oracle Discoverer I can't get distinct results, even if I check "Hide duplicate rows" in the Table Layout.
Anybody have another way to get distinct results for analytic function results?
Thanks in advance,
Giuseppe
The query in Disco is exactly the one I've posted before.
Results are there:
AREA.........................................C1
01704 - AREA VR NORD..............3
01704 - AREA VR NORD..............3
01704 - AREA VR NORD..............3
01705 - AREA VR SUD.................1
02702 - AREA EMILIA NORD........1
If I check "hide duplicate rows" in the layout options, the results don't change.
If I add the distinct clause manually in SQL*Plus, the query becomes:
SELECT distinct o141151.AREA as E141152,COUNT(o141151.CUSTOMER_ID) OVER(PARTITION BY o141151.AREA ) as C_1
FROM BPN.ALL_PRODUCTS o141151
WHERE (o141151.PRODUCT_CODE IN ('BC01006','BC01007','BC01004'))
GROUP BY o141151.AREA,o141151.CUSTOMER_ID
HAVING (( SUM(DECODE(o141151.PRODUCT_CODE,'BC01006',1,0)) ) > 0 AND ( SUM(DECODE(o141151.PRODUCT_CODE,'BC01004',1,0)) ) > 0)
and the results are no longer duplicated.
AREA.........................................C1
01704 - AREA VR NORD..............3
01705 - AREA VR SUD.................1
02702 - AREA EMILIA NORD........1
Is there any other way to force a distinct clause in Discoverer?
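One workaround (a sketch only, using the column names from the first query posted above) is to push the analytic count into an inline view, so that the outer query Discoverer sees is a plain SELECT DISTINCT:

```sql
-- Sketch: the analytic COUNT runs in the inline view; the outer
-- DISTINCT then collapses the duplicate rows per area.
select distinct area, c_1
from (
  select area,
         count(customer_id) over (partition by area) as c_1
  from all_products
  where product_code in ('BC01007', 'BC01004')
  group by area, customer_id
  having sum(decode(product_code, 'BC01007', qta, 0)) > 0
     and sum(decode(product_code, 'BC01004', qta, 0)) > 0
);
```

Whether Discoverer lets you register such a folder depends on how the EUL item is defined, so treat this as a starting point.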
Thank you
Giuseppe -
Analytical function count(*) with order by Rows unbounded preceding
Hi
I have a query about the analytic function count(*) with ORDER BY (col) ROWS UNBOUNDED PRECEDING.
If I remove ORDER BY ... ROWS UNBOUNDED PRECEDING, it behaves differently.
Can anybody tell me what the impact of ORDER BY ... ROWS UNBOUNDED PRECEDING is with the count(*) analytic function?
Please help me, and thanks in advance.
Sweety,
When an ORDER BY is given without an explicit windowing clause, the default window runs from the start of the partition to the current row. ROWS UNBOUNDED PRECEDING is shorthand for ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so it asks for COUNT(*) from the beginning of the window up to the current row. In other words, specifying ROWS UNBOUNDED PRECEDING implicitly makes the current row the end of the window.
The beginning of the window of a result set will depend on how you have defined your partition by clause in the analytical function.
If you specify ROWS 2 preceding, then it will calculate COUNT(*) from 2 ROWS prior to the current row. It is a physical offset.
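A quick way to see the difference is a throwaway data set (a sketch; the table and column names are made up for illustration):

```sql
-- Sketch: compare the default window (RANGE UNBOUNDED PRECEDING),
-- an explicit ROWS UNBOUNDED PRECEDING, and a ROWS 2 PRECEDING offset.
with t as (
  select level as n from dual connect by level <= 5
)
select n,
       count(*) over (order by n)                          as dflt,  -- default window
       count(*) over (order by n rows unbounded preceding) as unb,   -- explicit, same idea
       count(*) over (order by n rows 2 preceding)         as last3  -- current row + 2 prior
from t;
```

With unique ordering values, dflt and unb return the same running count; last3 never exceeds 3 because the window is capped at two rows before the current one.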
Regards,
Message was edited by:
henryswift -
Analytic function to count rows based on Special criteria
Hi
I have the following query with an analytic function, but the results in the last column, COUNT, are wrong.
Please help me to achieve the required result. I need to change the way the last column is selected.
1) I am getting the output ordered by the b.sequence_no column. This is a must.
2) COUNT column:
I don't want the total count based on the THOR column, hence there is no point in grouping by that column.
The actual requirement for COUNT is:
2a - If, in the next row, the THOR and LOC combination changes to a new value, then COUNT=1
(in other words, if the row is different from the following row).
2b - If the THOR and LOC values repeat in the following rows, then the count should be the total of all those same-value rows until the rows become different.
(In case 2b - where the rows are the same - I also only want to show those same rows once. This is shown in MY REQUIRED OUTPUT.)
My present query:
select r.name REGION ,
p.name PT,
do.name DELOFF,
ro.name ROUTE,
decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name)
THOR,
l.name LOC ,
b.sequence_no SEQ,
CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
OVER (order by b.sequence_no)
or th.thorfare_name = LEAD (th.thorfare_name)
OVER (order by b.sequence_no)
THEN COUNT(b.sequence_no) OVER (partition by r.name,th.thorfare_name,l.name order BY b.sequence_no)
ELSE 1
END COUNT
from t_regions r,t_post_towns p,t_delivery_offices do, t_routes ro, t_counties c,t_head_offices ho,
t_buildings b,t_thoroughfares th,t_localities l
where th.thorfare_id = b.thorfare_id
and nvl(b.invalid,'N')='N'
and b.route_id=ro.route_id(+)
and b.locality_id =l.locality_id(+)
and ro.delivery_office_id=do.delivery_office_id(+)
and do.post_town_id = p.post_town_id(+)
and p.ho_id=ho.ho_id(+)
and ho.county_id = c.county_id(+)
and c.region_id = r.region_id(+)
and r.name='NAAS'
and do.DELIVERY_OFFICE_id= &&DELIVERY_OFFICE_id
and ro.route_id=3405
group by r.name,p.name,do.name,ro.name,th.thorfare_name,l.name,b.sequence_no
ORDER BY ro.name,b.sequence_no;
My incorrect output [PART OF DATA]:
REGION PT DELOFF ROUTE THOR LOC SEQ COUNT
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 1 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 2 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 4 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 5 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 THEGROVE CEL 2 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 7 3
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 8 4
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 9 5
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 10 6
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 11 7
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 12 8
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 15 2
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 19 3
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 24 4
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 29 5
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 34 6
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 39 7
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 42 2
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 43 2
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 44 3
My required output [PART OF DATA] - please compare with the above:
REGION PT DELOFF ROUTE THOR LOC COUNT
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 THEGROVE CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 6
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 7
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 2
NOTE: A count of 1 is coming through correctly.
But where there are identical consecutive rows and I want the total count over them, I am not getting it.
Please help.
Thanks
Edited by: Krithi on 04-Nov-2010 05:28
Nicosa wrote:
Hi,
Can you give us some sample data (create table + insert statements) to play with?
Considering your output, I'm not even sure you need an analytic count.
Yes, sure.
I am describing the query again here, now with 3 tables, to make it easier to understand.
Given below are the create table and insert statements for these 3 tables.
The tables are BUILDINGSV, THORV and LOCV.
CREATE TABLE BUILDINGSV (
BUILDING_ID NUMBER(10) NOT NULL,
INVALID VARCHAR2(1 BYTE),
ROUTE_ID NUMBER(10),
LOCALITY_ID NUMBER(10),
SEQUENCE_NO NUMBER(4),
THORFARE_ID NUMBER(10) NOT NULL);
CREATE TABLE THORV (
THORFARE_ID NUMBER(10) NOT NULL,
THORFARE_NAME VARCHAR2(40 BYTE) NOT NULL);
CREATE TABLE LOCV (
LOCALITY_ID NUMBER(10) NOT NULL,
NAME VARCHAR2(40 BYTE) NOT NULL);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002372, 'N', 3405, 37382613, 5, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002363, 'N', 3405, 37382613, 57, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002362, 'N', 3405, 37382613, 56, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002360, 'N', 3405, 37382613, 52, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002358, 'N', 3405, 37382613, 1, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002240, 'N', 3405, 37382613, 6, 9002284);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002229, 'N', 3405, 37382613, 66, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002228, 'N', 3405, 37382613, 65, 35291872);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002226, 'N', 3405, 37382613, 62, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002222, 'N', 3405, 37382613, 43, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002217, 'N', 3405, 37382613, 125, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002221, 'N', 3405, 37382613, 58, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002214, 'N', 3405, 37382613, 128, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33363182, 'N', 3405, 37382613, 114, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33363185, 'N', 3405, 37382613, 115, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002371, 'N', 3405, 37382613, 2, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003329, 'N', 3405, 37382613, 415, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002359, 'N', 3405, 37382613, 15, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002224, 'N', 3405, 37382613, 61, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003318, 'N', 3405, 37382613, 411, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003326, 'N', 3405, 37382613, 412, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003327, 'N', 3405, 37382613, 413, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003328, 'N', 3405, 37382613, 414, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003330, 'N', 3405, 37382613, 416, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003331, 'N', 3405, 37382613, 417, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003332, 'N', 3405, 37382613, 410, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27004795, 'N', 3405, 37382613, 514, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27004807, 'N', 3405, 37382613, 515, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002227, 'N', 3405, 37382613, 64, 35291872);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33230805, 'N', 3405, 37382613, 44, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231027, 'N', 3405, 37382613, 7, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231058, 'N', 3405, 37382613, 9, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231078, 'N', 3405, 37382613, 10, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231087, 'N', 3405, 37382613, 11, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231093, 'N', 3405, 37382613, 12, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33229890, 'N', 3405, 37382613, 55, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561996, 'N', 3405, 34224751, 544, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561997, 'N', 3405, 34224751, 543, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561998, 'N', 3405, 34224751, 555, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562000, 'N', 3405, 34224751, 541, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562001, 'N', 3405, 34224751, 538, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562028, 'N', 3405, 35417256, 525, 0);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562031, 'N', 3405, 35417256, 518, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562032, 'N', 3405, 35417256, 519, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562033, 'N', 3405, 35417256, 523, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561939, 'N', 3405, 34224751, 551, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561940, 'N', 3405, 34224751, 552, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561941, 'N', 3405, 34224751, 553, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561942, 'N', 3405, 35417256, 536, 0);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561943, 'N', 3405, 35417256, 537, 0);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561970, 'N', 3405, 35417256, 522, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561972, 'N', 3405, 35417256, 527, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561974, 'N', 3405, 35417256, 530, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561975, 'N', 3405, 35417256, 531, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561980, 'N', 3405, 34224751, 575, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561981, 'N', 3405, 34224751, 574, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561983, 'N', 3405, 34224751, 571, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561984, 'N', 3405, 34224751, 570, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561985, 'N', 3405, 34224751, 568, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561986, 'N', 3405, 34224751, 567, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561987, 'N', 3405, 34224751, 566, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561989, 'N', 3405, 34224751, 563, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561990, 'N', 3405, 34224751, 562, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561991, 'N', 3405, 34224751, 560, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561992, 'N', 3405, 34224751, 559, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561993, 'N', 3405, 34224751, 558, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561994, 'N', 3405, 34224751, 548, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561995, 'N', 3405, 34224751, 546, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562160, 'N', 3405, 37382613, 139, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562161, 'N', 3405, 37382613, 140, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562162, 'N', 3405, 37382613, 141, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562163, 'N', 3405, 37382613, 142, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562164, 'N', 3405, 37382613, 143, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562165, 'N', 3405, 37382613, 145, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562166, 'N', 3405, 37382613, 100, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562167, 'N', 3405, 37382613, 102, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562171, 'N', 3405, 37382613, 107, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562172, 'N', 3405, 37382613, 108, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562174, 'N', 3405, 37382613, 110, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562175, 'N', 3405, 37382613, 111, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562176, 'N', 3405, 37382613, 112, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562177, 'N', 3405, 37382613, 113, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562182, 'N', 3405, 37382613, 123, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562183, 'N', 3405, 37382613, 121, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562184, 'N', 3405, 37382613, 120, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562185, 'N', 3405, 37382613, 118, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562186, 'N', 3405, 37382613, 117, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562187, 'N', 3405, 37382613, 116, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562189, 'N', 3405, 37382613, 95, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562190, 'N', 3405, 37382613, 94, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562213, 'N', 3405, 37382613, 89, 35291872);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562240, 'N', 3405, 35417256, 516, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329559, 'N', 3405, 35329152, 443, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329560, 'N', 3405, 35329152, 444, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329562, 'N', 3405, 35329152, 446, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329109, 'N', 3405, 35329152, 433, 35329181);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329169, 'N', 3405, 35329152, 434, 35329181);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329557, 'N', 3405, 35329152, 441, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329558, 'N', 3405, 35329152, 442, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329191, 'N', 3405, 35329152, 436, 35329181);
COMMIT;
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(0, 'OSIUNKNOWN');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(9002284, 'THE GROVE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(9002364, 'DUBLIN ROAD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(9002375, 'NEWTOWN ROAD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35291872, 'HAZELHATCH ROAD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35291878, 'SIMMONSTOWN PARK');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35291883, 'PRIMROSE HILL');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329181, 'THE COPSE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329213, 'THE COURT');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329529, 'THE CRESCENT');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329551, 'THE LAWNS');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329580, 'THE DRIVE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35417271, 'TEMPLEMILLS COTTAGES');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35417360, 'CHELMSFORD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(36500023, 'THE CLOSE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(36500101, 'THE GREEN');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375569, 'THE DOWNS');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375595, 'THE PARK');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375754, 'THE AVENUE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375781, 'THE VIEW');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37376046, 'THE CRESCENT');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37376048, 'THE GLADE');
COMMIT;
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(34224751, 'SIMMONSTOWN');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(35417256, 'TEMPLEMILLS');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(35329152, 'TEMPLE MANOR');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(37382613, 'CELBRIDGE');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(37375570, 'SAINT WOLSTAN''S ABBEY');
COMMIT;
------------------------------------------------------------------------------
Now the query with the wrong result:
select decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name)
THOR,
l.name LOC,
b.sequence_no SEQ,
CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
OVER (order by b.sequence_no)
or th.thorfare_name = LEAD (th.thorfare_name)
OVER (order by b.sequence_no)
THEN COUNT(b.sequence_no) OVER (partition by th.thorfare_name,l.name order BY b.sequence_no)
ELSE 1
END COUNT from BUILDINGSV b,THORV th,LOCV l
where th.thorfare_id = b.thorfare_id
and nvl(b.invalid,'N')='N'
and b.route_id=3405
and b.locality_id =l.locality_id(+)
order by b.sequence_no;
The query result - WRONG (only the first few lines):
THOR LOC SEQ COUNT
DUBLIN ROAD CELBRIDGE 1 1
NEWTOWN ROAD CELBRIDGE 2 1
NEWTOWN ROAD CELBRIDGE 5 2
THE GROVE CELBRIDGE 6 1
NEWTOWN ROAD CELBRIDGE 7 3
NEWTOWN ROAD CELBRIDGE 9 4
NEWTOWN ROAD CELBRIDGE 10 5
NEWTOWN ROAD CELBRIDGE 11 6
NEWTOWN ROAD CELBRIDGE 12 7
DUBLIN ROAD CELBRIDGE 15 1
PRIMROSE HILL CELBRIDGE 43 1
PRIMROSE HILL CELBRIDGE 44 2
DUBLIN ROAD CELBRIDGE 52 3
DUBLIN ROAD CELBRIDGE 55 4
DUBLIN ROAD CELBRIDGE 56 5
DUBLIN ROAD CELBRIDGE 57 6
DUBLIN ROAD CELBRIDGE 58 7
PRIMROSE HILL CELBRIDGE 61 3
PRIMROSE HILL CELBRIDGE 62 4
HAZELHATCH ROAD CELBRIDGE 64 1
HAZELHATCH ROAD CELBRIDGE 65 2
The query result - EXPECTED (only the first few lines):
THOR LOC COUNT
DUBLIN ROAD CELBRIDGE 1
NEWTOWN ROAD CELBRIDGE 2
THE GROVE CELBRIDGE 1
NEWTOWN ROAD CELBRIDGE 5
DUBLIN ROAD CELBRIDGE 1
PRIMROSE HILL CELBRIDGE 2
DUBLIN ROAD CELBRIDGE 5
PRIMROSE HILL CELBRIDGE 2
HAZELHATCH ROAD CELBRIDGE 2
Please note, in the expected result I only need 1 row, but I need to show the total count of rows until the names change.
So the issues are
1) The count column values are wrong in my query.
2) I don't want to repeat the same rows (please see the EXPECTED output and compare it against the original).
3) I want the output exactly as in the EXPECTED OUTPUT, as I don't want to group by THOR name (e.g. I don't want the count over all DUBLIN ROAD rows; I want to examine the next row - if the THOR/LOC combination is different in the next row then COUNT=1, else COUNT = the number of rows for that THOR/LOC combination until the combination changes - so where there are multiple rows with the same values, I need to show them as 1 row with the total count).
Let me explain this in more detail.
I only need 1 row per run of the same THOR/LOC names, with the count shown against that 1 row (i.e. COUNT = how many rows have the same THOR/LOC combination until the combination changes value).
Then repeat the process until all rows are finished.
If there is no following row with the same THOR/LOC - i.e. the following row has a different THOR/LOC combination - then the count for that row is 1.
Hope this is clear.
Is this doable?
Thanks in advance.
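For reference, one standard approach for counting consecutive runs like this is the "Tabibitosan" method: subtract a ROW_NUMBER partitioned by the THOR/LOC combination from an overall ROW_NUMBER ordered by sequence; the difference is constant within each consecutive run, so you can group on it. A sketch against the BUILDINGSV/THORV/LOCV tables posted above (assuming the intent is one output row per consecutive THOR/LOC run):

```sql
-- Sketch (Tabibitosan): rows in the same consecutive THOR/LOC run share
-- the same grp value, giving one GROUP BY row per run with its count.
select thor, loc, count(*) as cnt
from (
  select decode(th.thorfare_name, 'OSIUNKNOWN', null, th.thorfare_name) as thor,
         l.name as loc,
         b.sequence_no as seq,
         row_number() over (order by b.sequence_no)
           - row_number() over (partition by th.thorfare_name, l.name
                                order by b.sequence_no) as grp
  from buildingsv b, thorv th, locv l
  where th.thorfare_id = b.thorfare_id
    and nvl(b.invalid, 'N') = 'N'
    and b.route_id = 3405
    and b.locality_id = l.locality_id(+)
)
group by thor, loc, grp
order by min(seq);
```

The ORDER BY min(seq) keeps the runs in their original sequence order, matching the required output shape.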
Edited by: Krithi on 04-Nov-2010 07:45
Edited by: Krithi on 04-Nov-2010 07:45
Edited by: Krithi on 04-Nov-2010 08:31 -
How can I restrict the rows of a SELECT which uses analytical functions?
Hello all,
Can anyone please tell me how to restrict the following query:
SELECT empno,
ename,
deptno,
SUM(sal) over(PARTITION BY deptno) sum_per_dept
FROM emp;
I would need just the rows which have sum_per_dept>100, without using a subselect.
Is there any way which is specific to analytic functions?
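For what it's worth, the subquery is unavoidable here (predicates on analytic results cannot go in WHERE or HAVING, since both are evaluated before the analytic functions), but a WITH clause can make the inline view easier to read. A sketch against EMP:

```sql
-- Sketch: same inline-view idea, written as a WITH clause for readability.
with totals as (
  select empno,
         ename,
         deptno,
         sum(sal) over (partition by deptno) as sum_per_dept
  from emp
)
select empno, ename, deptno, sum_per_dept
from totals
where sum_per_dept > 100;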
Thank you in advance,
Eugen
Message was edited by:
misailescu
SQL> select empno,
2 ename,
3 deptno,sum_per_dept
4 from
5 (
6 SELECT empno,
7 ename,
8 deptno,
9 SUM(sal) over(PARTITION BY deptno) sum_per_dept
10 FROM emp
11 )
12 where sum_per_dept>1000;
EMPNO ENAME DEPTNO SUM_PER_DEPT
7839 KING 10 8750
7782 CLARK 10 8750
7934 MILLER 10 8750
7902 FORD 20 6775
7369 SMITH 20 6775
7566 JONES 20 6775
7900 JAMES 30 9400
7844 TURNER 30 9400
7654 MARTIN 30 9400
7521 WARD 30 9400
7499 ALLEN 30 9400
7698 BLAKE 30 9400
12 rows selected
SQL>
SQL> select empno,
2 ename,
3 deptno,sum_per_dept
4 from
5 (
6 SELECT empno,
7 ename,
8 deptno,
9 SUM(sal) over(PARTITION BY deptno) sum_per_dept
10 FROM emp
11 )
12 where sum_per_dept>9000;
EMPNO ENAME DEPTNO SUM_PER_DEPT
7900 JAMES 30 9400
7844 TURNER 30 9400
7654 MARTIN 30 9400
7521 WARD 30 9400
7499 ALLEN 30 9400
7698 BLAKE 30 9400
6 rows selected
SQL> Greetings...
Sim -
Select the most frequent value - analytic function?!
Hi ...
I've got stuck with a "little" problem.
I'll try to provide some test code for this:
CREATE TABLE a1 (
id NUMBER(8),
val NUMBER(6),
title VARCHAR2(16),
CONSTRAINT test_pk PRIMARY KEY(id));
INSERT INTO a1 (id, val, title) VALUES (1,12,'c');
INSERT INTO a1 (id, val, title) VALUES (2,13,'b');
INSERT INTO a1 (id, val, title) VALUES (3,13,'a');
INSERT INTO a1 (id, val, title) VALUES (4,13,'a');
INSERT INTO a1 (id, val, title) VALUES (5,42,'a');
INSERT INTO a1 (id, val, title) VALUES (6,42,'b');
INSERT INTO a1 (id, val, title) VALUES (7,42,'b');
Actually the table is much bigger ;) But this should be OK for this question. There already exists a query like:
SELECT
count(*) -- just an example
FROM
a1
GROUP BY
val
-- should return 1,3,3 (for the groups val=12, val=13, val=42)
Now it is necessary to select a title for each group (specified by the GROUP BY), choosing the title which occurs most often in that group. For this example that is 'c' for the group val=12, 'a' for the group val=13 and finally 'b' for the group val=42.
I tried to use some analytic functions, but I'm not able to get this to work - maybe because I have never used analytic functions before. Whatever I try, I mostly get an error: keyword FROM not at expected position (ORA-00923). I searched for some tutorial/howto documentation where my problem is handled, but without success. So I guess the syntax and the way to understand analytic functions is not as easy as it seems to be ...
title OVER ( ORDER BY count(*) ROWS | RANGE BETWEEN 1 AND 1 ) <-- that would be logical for my brain, but not for Oracle ;-)
Can somebody help?
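For reference, Oracle has an aggregate for exactly this: STATS_MODE returns the most frequently occurring value in each group (ties are broken arbitrarily). A sketch against the a1 table above:

```sql
-- Sketch: STATS_MODE picks the most frequent title per val group.
select val,
       count(*)          as cnt,
       stats_mode(title) as top_title
from a1
group by val;
```

For the test data this should give 'c' for val=12, 'a' for val=13 and 'b' for val=42; where counts tie, which value is returned is unspecified.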
Thanks!
Hi folks,
thanks for the various answers! The weekend is over, so let's get back to work ...
I tried some of the examples you gave me, and I've decided to provide more detailed information. First of all, if the counts of 'a' and 'b' are equal it doesn't matter which one is returned (so it can be undefined, if that makes things easier). I will now paste the original query I work with and add some comments for you, so you can find the lines which should be changed to return the most frequent value.
If you think it makes sense to provide some create-table DDL and (maybe as a CSV file) some data, tell me how I can do that (I think it's not an option to post ~1 million rows as INSERTs here).
The SELECT query I want to manipulate - this is no longer related to our test table 'a1'! For example, let us look at the rows where 'A' and 'drm_' are selected - starting at line 7:
SELECT
box_id,
schedule_id,
fixsecs_down(MIN(acqtime),600),
COUNT(*), -- each row in rfm_meas_hr represents one frame of measuremnt-data, so this represents the number of frames received in this block
-- instead of 'A', the most frequent value of the column 'rpro' should be selected here
'A',
-- like above, but from the column 'rdmo'
'drm_',
-- below this some other cols are calculated/selected, not important here
FLOOR(MEDIAN(rfre)),
ROUND(AVG(rdbv),2),
ROUND(SUM(POWER(rdbv,2)),2),
ROUND(MAX(rdbv),2),
ROUND(MIN(rdbv),2),
ROUND(SUM(rsnr)/SUM(nframes),2),
ROUND(SUM(POWER(rsnr,2)),2),
ROUND(MAX(rsnr),2),
ROUND(MIN(rsnr),2),
ROUND( AVG(rsta_sync),2), -- rsta_sync
ROUND(SUM(POWER(rsta_sync,2)),2), -- rsta_sync_s
ROUND( MIN(rsta_sync),2), -- rsta_sync_min
ROUND( MAX(rsta_sync),2), -- rsta_sync_max
ROUND( AVG(rsta_fac),2), -- rsta_facc
ROUND(SUM(POWER(rsta_fac,2)),2), -- rsta_fac_s
ROUND( MIN(rsta_fac),2), -- rsta_fac_min
ROUND( MAX(rsta_fac),2), -- rsta_fac_max
ROUND( AVG(rsta_sdc),2), -- rsta_sdc
ROUND(SUM(POWER(rsta_sdc,2)),2), -- rsta_sdc_s
ROUND( MIN(rsta_sdc),2), -- rsta_sdc_min
ROUND( MAX(rsta_sdc),2), -- rsta_sdc_max
ROUND( AVG(rsta_audio),2), -- rsta_audio
ROUND(SUM(POWER(rsta_audio,2)),2), -- rsta_audio_s
ROUND( MIN(rsta_audio),2), -- rsta_audio_min
ROUND( MAX(rsta_audio),2), -- rsta_audio_max
MIN(rser), -- TODO: most occurrences
MIN(rtty_stream0), -- TODO: most occurrences
MIN(rtty_stream1), -- TODO: most occurrences
MIN(rtty_stream2), -- TODO: most occurrences
MIN(rtty_stream3), -- TODO: most occurrences
ROUND(AVG(NVL(rafs_error/nullif(rafs_au,0),1))*SUM(rafs_au)/NULLIF(SUM(rafs_au),0),2), -- rafs
ROUND( SUM( POWER( NVL(rafs_error/nullif(rafs_au,0),1),2))*SUM(rafs_au)/NULLIF(SUM(rafs_au),0) ,2), -- rafs_s
ROUND(MIN(rafs_error/ NULLIF(rafs_au,0)),2), -- rafs_min
ROUND(MAX(NVL(rafs_error/NULLIF(rafs_au,0),1) )*SUM(rafs_au)/NULLIF(SUM(rafs_au),0),2), -- rafs_max
SUM(robm_A),
SUM(robm_B),
SUM(robm_C),
SUM(robm_D),
SUM(robm_E),
ROUND(SUM(rwmf) / SUM(nframes),2), -- rwmf
ROUND(SUM(POWER(rwmf,2)),2), -- rwmf_s
ROUND(MIN(rwmf),2), -- rwmf_min
ROUND(MAX(rwmf),2), -- rwmf_max
ROUND(SUM(rwmm) / SUM(nframes),2), -- rwmm
ROUND(SUM(POWER(rwmm,2)),2), -- rwmm_s
ROUND(MIN(rwmm),2), -- rwmm_min
ROUND(MAX(rwmm),2), -- rwmm_max
ROUND(SUM(rmer) / SUM(nframes),2), -- rmer
ROUND(SUM(POWER(rmer,2)),2), -- rmer_s
ROUND(MIN(rmer),2), -- rmer_min
ROUND(MAX(rmer),2), -- rmer_max
ROUND(SUM(RBP0_ERRS+ RBP1_ERRS+ RBP2_ERRS+ RBP3_ERRS) / NULLIF(SUM(RBP0_BITS+ RBP1_BITS+ RBP2_BITS+ RBP3_BITS),0) ,10), -- ber
ROUND(SUM(POWER( (RBP0_ERRS+ RBP1_ERRS+ RBP2_ERRS+ RBP3_ERRS) / NULLIF((RBP0_BITS+ RBP1_BITS+ RBP2_BITS+ RBP3_BITS),0) ,2)),10), -- ber_s
ROUND(MIN(RBP0_ERRS+ RBP1_ERRS+ RBP2_ERRS+ RBP3_ERRS) / NULLIF(MIN(RBP0_BITS+ RBP1_BITS+ RBP2_BITS+ RBP3_BITS),0) ,10), -- ber_min
ROUND(MAX(RBP0_ERRS+ RBP1_ERRS+ RBP2_ERRS+ RBP3_ERRS) / NULLIF(MAX(RBP0_BITS+ RBP1_BITS+ RBP2_BITS+ RBP3_BITS),0) ,10), -- ber_max
ROUND(AVG(rdop),2), -- rdop
ROUND(SUM(POWER(rdop,2) ),2), -- rdop_s
ROUND(MIN(rdop),2), -- rdop_min
ROUND(MAX(rdop),2), -- rdop_max
ROUND(AVG(rdel90),2), -- rdel90
ROUND(SUM(POWER(rdel90,2) ),2), -- rdel90_s
ROUND(MIN(rdel90),2), -- rdel90_min
ROUND(MAX(rdel90),2), -- rdel90_max
ROUND(AVG(rdel95),2), -- rdel95
ROUND(AVG(rdel99),2), -- rdel99
null AS reslevel
FROM
-- select the data this should be calculated on
( SELECT *
  FROM
    rfm_meas_hr
  WHERE
    acqtime < fixsecs_down(to_timestamp('07-01-2011 14:00:00,00','dd-mm-yyyy hh24:mi:ss,ff'),600)
    AND (reslevel IS NULL OR reslevel=10)
)
-- group the selected data and execute the calculation given by the SELECT list
GROUP BY
-- group the data into 10min packages, indicated by its timestamp
to_char( EXTRACT(MONTH FROM acqtime)*100000 + EXTRACT(DAY FROM acqtime)*1000 + EXTRACT(HOUR FROM acqtime)*10 + floor(EXTRACT(MINUTE FROM acqtime)/10) ),
schedule_id,
box_id
HAVING
SUM(nframes)>15
;
I should say: I can add indexes if necessary! At the moment there is one on (acqtime, reslevel), as this improves access speed. But the query above, executed on typical data, already takes 5-7 seconds.
Please let me know if you need any more information.
Regards! -
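For the "most frequent value per group" requirement, here is a hedged sketch against the small test table a1 from earlier in the thread. STATS_MODE is a built-in Oracle aggregate, but it picks an arbitrary value on ties; the KEEP (DENSE_RANK ...) variant is deterministic.

```sql
-- Simplest option: STATS_MODE returns the most frequent value per group
-- (non-deterministic when two titles tie).
SELECT val, STATS_MODE(title) AS top_title
FROM   a1
GROUP  BY val;

-- Deterministic alternative: count per (val, title), keep the title(s)
-- with the highest count, and break remaining ties with MAX(title).
SELECT val,
       MAX(title) KEEP (DENSE_RANK LAST ORDER BY cnt) AS top_title
FROM  (SELECT val, title, COUNT(*) AS cnt
       FROM   a1
       GROUP  BY val, title)
GROUP  BY val;
-- with the sample data, both should return: 12 c / 13 a / 42 b
```

The same KEEP (DENSE_RANK ...) pattern could, in principle, replace the 'A' and 'drm_' literals for rpro and rdmo in the big rfm_meas_hr query, at the cost of one extra level of grouping.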
Aggregation of analytic functions not allowed
Hi all, I have a calculated field called Calculation1 with the following calculation:
AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report #7 COMPL".Resource Name )
The result of this calculation is correct, but is repeated for all the rows I have in the dataset.
Group Name Resourse name Calculation1
SH Group Mr. A 10
SH Group Mr. A 10
SH Group Mr. A 10
SH Group Mr. A 10
SH Group Mr. A 10
5112 rows
I tried to create another calculation, in order to have only ONE value per couple (Group Name, Resource Name), as AVG(Calculation1), but I get the error: Aggregation of analytic functions not allowed.
I also saw inside the "Edit worksheet" panel that Calculation1 *is not represented* with the "Sigma" symbol (as, for example, a simple AVG(field_1) is), and inside the SQL code I don't have GROUP BY Group Name, Resource Name...
I'd like to see ONLY one row as:
Group Name Resourse name Calculation1
SH Group Mr. A 10
...meaning that I grouped by Group Name, Resource Name.
Does anyone know how I can achieve this result, or any workarounds?
Thanks in advance
Alex
Hi Rod, unfortunately I can't use a plain AVG(Resolution_time) because my dataset is quite strange... let me explain better.
I start from this situation:
!http://www.freeimagehosting.net/uploads/6c7bba26bd.jpg!
There are 3 calculated fields:
RANK is the first calculated field:
ROW_NUMBER() OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name,"Tickets Report Assigned To & Created By COMPL".Incident Id ORDER BY "Tickets Report Assigned To & Created By COMPL".Select Flag )
RT Calc is the 2nd calculation:
CASE WHEN RANK = 1 THEN Resolution_time END
and Calculation2 is the 3rd calculation:
AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name )
As you can see, the initial dataset has duplicated incident ids, and a simple AVG(Resolution Time) also counts all the duplicates.
I used the rank (based on the field "flag") to take, for each ticket, ONLY one "resolution time" value (in my case I need the resolution time where rank = 1).
So with Calculation2 I calculated, for each couple (Group Name, Resource Name), the right AVG(Resolution time), but as you can see... this result is duplicated for each incident_id...
What I need instead is to see *once* for each couple 'Group Name, Resource Name' the AVG(Resolution time).
In other words I need to calculate the AVG(Resolution time) considering only the values written inside the RT Calc fields (where they are NOT NULL, and so, the total of the tickets it's not 14, but 9).
I tried to aggregate again using AVG(Calculation2)...but I had the error "Aggregation of analytic functions not allowed"...
Do you know a way to fix this problem ?
Thanks
Alex -
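The usual SQL-level workaround for "Aggregation of analytic functions not allowed" is to compute the analytic value in an inline view and aggregate over it in the outer query; in Discoverer this could live in a custom folder. A hedged sketch with illustrative table and column names (user_groups, tickets and group_id are assumptions, not the poster's real folders):

```sql
-- Sketch only: the analytic value is constant within each
-- (group_name, resource_name) partition, so MAX() in the outer
-- query collapses it to one row per couple.
SELECT group_name,
       resource_name,
       MAX(calc1) AS avg_resolution_time
FROM  (SELECT g.group_name,
              t.resource_name,
              AVG(t.resolution_time)
                KEEP (DENSE_RANK FIRST ORDER BY t.rnk)
                OVER (PARTITION BY g.group_name, t.resource_name) AS calc1
       FROM   user_groups g
       JOIN   tickets t ON t.group_id = g.group_id)
GROUP  BY group_name, resource_name;
```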
Replacing Oracle's FIRST_VALUE and LAST_VALUE analytical functions.
Hi,
I am using OBI 10.1.3.2.1 where, I guess, EVALUATE is not available. I would like to know alternatives, esp. to replace Oracle's FIRST_VALUE and LAST_VALUE analytical functions.
I want to track some changes. For example, there are four methods of travel - Air, Train, Road and Sea. I would like to know a traveler's first method of travel and last method of travel in a year. If the two match, a certain action is taken; if they do not match, another action is taken.
I tried as under.
1. Get Sequence ID for each travel within an year per traveler as Sequence_Id.
2. Get the Lowest Sequence ID (which should be 1) for travels within an year per traveler as Sequence_LId.
3. Get the Highest Sequence ID (which could be 1 or greater than 1) for travels within an year per traveler as Sequence_HId.
4. If Sequence ID = Lowest Sequence ID then display the method of travel as First Method of Travel.
5. If Sequence ID = Highest Sequence ID then display the method of travel as Latest Method of Travel.
6. If First Method of Travel = Latest Method of Travel then display Yes/No as Match.
The issue is that cells can be blank in First Method of Travel and Last Method of Travel unless the traveler traveled only once in a year.
Using Oracle's FIRST_VALUE and LAST_VALUE analytical functions, I can get a result like
Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
ABC | 01/01/2000 | 04/04/2000 | Road | Road | Air | No
ABC | 01/01/2000 | 15/12/2000 | Air | Road | Air | No
XYZ | 01/01/2000 | 04/05/2000 | Train | Train | Train | Yes
XYZ | 01/01/2000 | 04/11/2000 | Train | Train | Train | Yes
Using OBI Answers, I am getting something like this.
Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
ABC | 01/01/2000 | 04/04/2000 | Road | Road | <BLANK> | No
ABC | 01/01/2000 | 15/12/2000 | Air | <BLANK> | Air | No
XYZ | 01/01/2000 | 04/05/2000 | Train | Train | <BLANK> | No
XYZ | 01/01/2000 | 04/11/2000 | Train | <BLANK> | Train | No
Above, for XYZ traveler the Match? clearly shows a wrong result (although somehow it's correct for traveler ABC).
Would appreciate if someone can guide me how to resolve the issue.
Many thanks,
Manoj.
Edited by: mandix on 27-Nov-2009 08:43
Edited by: mandix on 27-Nov-2009 08:47
Hi,
Just to recap, in OBI 10.1.3.2.1, I am trying to find an alternative way to FIRST_VALUE and LAST_VALUE analytical functions used in Oracle. Somehow, I feel it's achievable. I would like to know answers to the following questions.
1. Is there any way of referring to a cell value and displaying it in other cells for a reference value?
For example, can I display the First Method of Travel for traveler 'ABC' and 'XYZ' for all the rows returned in the same column, respectively?
2. I tried the RMIN and RMAX functions in the RPD, but they do not accept a "BY" clause (for example, RMIN(Transaction_Id BY Traveler) to define the lowest Sequence Id per traveler). Am I doing something wrong here? Why can a formula with a "BY" clause be defined in Answers but not in the RPD? The idea is to use this in Answers. This is in relation to my first question.
Could someone please let me know?
I understand that this thread that I have posted is related to something that can be done outside OBI, but still would like to know.
If anything is not clear please let me know.
Thanks,
Manoj. -
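For reference, this is how the comparison looks in plain Oracle SQL (the table travels and its columns are illustrative). Note the explicit window on LAST_VALUE: its default frame stops at the current row, which is exactly what produces the blank-cell pattern described above.

```sql
SELECT traveler,
       journey_date,
       method,
       FIRST_VALUE(method) OVER (
           PARTITION BY traveler, EXTRACT(YEAR FROM journey_date)
           ORDER BY journey_date) AS first_method,
       LAST_VALUE(method) OVER (
           PARTITION BY traveler, EXTRACT(YEAR FROM journey_date)
           ORDER BY journey_date
           ROWS BETWEEN UNBOUNDED PRECEDING
                    AND UNBOUNDED FOLLOWING) AS last_method
FROM   travels;
```

Without the ROWS BETWEEN clause, LAST_VALUE evaluates over UNBOUNDED PRECEDING to CURRENT ROW and simply returns the current row's method.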
Completion of data series by analytical function
I have the pleasure of learning the benefits of analytical functions and hope to get some help
The case is as follows:
Different projects gets funds from different sources over several years, but not from each source every year.
I want to produce the cumulative sum of funds for each source, for each year, for each project, but so far I have not been able to do so for years with no funds from a particular source.
I have used this syntax:
SUM(fund) OVER(PARTITION BY project, source ORDER BY year ROWS UNBOUNDED PRECEDING)
I have also experimented with different variations of the window clause, but without any luck.
This is the last step in a big job I have been working on for several weeks, so I would be very thankful for any help.
If you want to use analytic functions, and if you are on version 10.1.3.3 of BI EE, then try using EVALUATE and EVALUATE_AGGR, which support native database functions. I have blogged about it here: http://oraclebizint.wordpress.com/2007/09/10/oracle-bi-ee-10133-support-for-native-database-functions-and-aggregates/. But in your case all you might want to do is have a column with the following function.
SUM(Measure BY Col1, Col2...)
I have also blogged about it here http://oraclebizint.wordpress.com/2007/10/02/oracle-bi-ee-101332-varying-aggregation-based-on-levels-analytic-functions-equivalence/.
Thanks,
Venkat
http://oraclebizint.wordpress.com -
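At the database level, one way to fill the missing (project, source, year) combinations before taking the cumulative sum is a partitioned outer join against a year dimension. A hedged sketch - funds and years are assumed table names standing in for the poster's real schema:

```sql
-- The PARTITION BY clause repeats the outer join once per
-- (project, source) combination, densifying the missing years;
-- NVL turns the generated rows into zero-fund years so the
-- running SUM carries forward over the gaps.
SELECT f.project,
       f.source,
       y.yr,
       SUM(NVL(f.fund, 0)) OVER (PARTITION BY f.project, f.source
                                 ORDER BY y.yr) AS cumulative_fund
FROM   funds f
       PARTITION BY (f.project, f.source)
       RIGHT OUTER JOIN years y ON (f.yr = y.yr);
```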
Analytic function to retrieve a value one year ago
Hello,
I'm trying to find an analytic function to get a value on another row by looking on a date with Oracle 11gR2.
I have a table with a date_id (truncated date), a flag and a measure. For each date, I have at least one row (sometimes 2), so it is gapless.
I would like to find analytic functions to show for each date :
sum of the measure for that date
sum of the measure one week ago
sum of the measure one year ago
As it is gapless, I managed to do it for the week by doing a GROUP BY on the date in a subquery and using a LAG with the offset set to 7 on top of it (see below).
However I'm struggling on how to do that for the data one year ago as we might have leap years. I cannot simply set the offset to 365.
Is it possible to do it with a RANGE BETWEEN window clause? I can't manage to have it working with dates.
Week: LAG with offset 7
SQL Fiddle
or
create table daily_counts
date_id date,
internal_flag number,
measure1 number
insert into daily_counts values ('01-Jan-2013', 0, 8014);
insert into daily_counts values ('01-Jan-2013', 1, 2);
insert into daily_counts values ('02-Jan-2013', 0, 1300);
insert into daily_counts values ('02-Jan-2013', 1, 37);
insert into daily_counts values ('03-Jan-2013', 0, 19);
insert into daily_counts values ('03-Jan-2013', 1, 14);
insert into daily_counts values ('04-Jan-2013', 0, 3);
insert into daily_counts values ('05-Jan-2013', 0, 0);
insert into daily_counts values ('05-Jan-2013', 1, 1);
insert into daily_counts values ('06-Jan-2013', 0, 0);
insert into daily_counts values ('07-Jan-2013', 1, 3);
insert into daily_counts values ('08-Jan-2013', 0, 33);
insert into daily_counts values ('08-Jan-2013', 1, 9);
commit;
select
date_id,
total1,
LAG(total1, 7) OVER(ORDER BY date_id) total_one_week_ago
from
select
date_id,
SUM(measure1) total1
from daily_counts
group by date_id
order by 1;
Year : no idea?
I can't give a gapless example, would be too long but if there is a solution with the date directly :
SQL Fiddle
or add this to the schema above :
insert into daily_counts values ('07-Jan-2012', 0, 11);
insert into daily_counts values ('07-Jan-2012', 1, 1);
insert into daily_counts values ('08-Jan-2012', 1, 4);
Thank you for your help.
Floyd
Hi,
Sorry, I'm not sure I understand the problem.
If you are certain that there is at least 1 row for every day, then you can be sure that the GROUP BY will produce exactly 1 row per day, and you can use LAG (total1, 365) just like you already use LAG (total1, 7).
Are you concerned about leap years? That is, when the day is March 1, 2016, do you want the total_one_year_ago column to reflect March 1, 2015, which was 366 days earlier? In that case, use
date_id - ADD_MONTHS (date_id, -12)
instead of 365.
LAG only works with an exact number, but you can use RANGE BETWEEN with other analytic functions, such as MIN or SUM:
SELECT DISTINCT
date_id
, SUM (measure1) OVER (PARTITION BY date_id) AS total1
, SUM (measure1) OVER ( ORDER BY date_id
RANGE BETWEEN 7 PRECEDING
AND 7 PRECEDING
) AS total1_one_week_ago
, SUM (measure1) OVER ( ORDER BY date_id
RANGE BETWEEN 365 PRECEDING
AND 365 PRECEDING
) AS total1_one_year_ago
FROM daily_counts
ORDER BY date_id
Again, use date arithmetic instead of the hard-coded 365, if that's an issue.
As Hoek said, it really helps to post the exact results you want from the given sample data. You're miles ahead of the people who don't even post the sample data, though.
You're right not to post hundreds of INSERT statements to get a year's data. Here's one way to generate sample data for lots of rows at the same time:
-- Put a 0 into the table for every day in 2012
INSERT INTO daily_counts (date_id, measure1)
SELECT DATE '2011-12-31' + LEVEL
, 0
FROM dual
CONNECT BY LEVEL <= 366 -
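An alternative to the hard-coded 365 in the RANGE window above: when the ORDER BY column is a DATE, the window offsets can be interval literals, so the calendar arithmetic is done by the database. A sketch on the daily_counts table (be aware that year-month interval arithmetic can raise ORA-01839 around leap days and month ends, so test against your data):

```sql
SELECT DISTINCT
       date_id,
       SUM(measure1) OVER (PARTITION BY date_id) AS total1,
       SUM(measure1) OVER (ORDER BY date_id
                           RANGE BETWEEN INTERVAL '7' DAY PRECEDING
                                     AND INTERVAL '7' DAY PRECEDING)
         AS total1_one_week_ago,
       SUM(measure1) OVER (ORDER BY date_id
                           RANGE BETWEEN INTERVAL '1' YEAR PRECEDING
                                     AND INTERVAL '1' YEAR PRECEDING)
         AS total1_one_year_ago
FROM   daily_counts
ORDER  BY date_id;
```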
Analytical function in OWB 10.2.0.4.0
Dear -
I am trying to implement analytical function in OWB but not sure how to do it. Can anyone help me?
My SQL query looks like
select sum (aamtorg),
sum(sum(aamtorg)) over
(order by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
cmgmint, cbasent, cbok, tamtlbl,
cctygbk, caffgbk, dacggll, dctx
rows between unbounded preceding and current row) cumulative_amountcctybbl
from fmbnd_evt
where cbssuntgbk = 'FM001'
and caccgbk = '14300000029'
and caccroo = '9146581'
and ccrytrngbk = 'AUD'
and creftrl = '~'
and cmgmint = '~'
and cbasent = 'U2725'
and cbok = '0000'
and tamtlbl = '~'
and dacggll between '01aug2011' and '04aug11'
group by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
cmgmint, ctrdnbmgint, cbasent, cbok, tamtlbl,
cctygbk, caffgbk, dacggll, dctx
I want to implement cumulative_amountcctybb column in the mapping.
Can anyone help?
Hi Arun,
analytical functions don't require GROUP BY clause and that's why you can use an expression operator. You also have a normal SUM (aggregate) function in your query, which requires GROUP BY and can only be implemented using aggregator operator. If I understand your problem correctly, you need to use aggregate SUM with GROUP BY on your data set first, and then use analytical SUM on this set (which is already processed with an aggregate SUM). Your query would look something like this:
select sum_aamtorg,
sum(sum_aamtorg) over
(order by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
cmgmint, cbasent, cbok, tamtlbl,
cctygbk, caffgbk, dacggll, dctx
rows between unbounded preceding and current row) cumulative_amountcctybbl
from (
select sum (aamtorg) sum_aamtorg,
cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
cmgmint, cbasent, cbok, tamtlbl,
cctygbk, caffgbk, dacggll, dctx
from fmbnd_evt
where cbssuntgbk = 'FM001'
and caccgbk = '14300000029'
and caccroo = '9146581'
and ccrytrngbk = 'AUD'
and creftrl = '~'
and cmgmint = '~'
and cbasent = 'U2725'
and cbok = '0000'
and tamtlbl = '~'
and dacggll between '01aug2011' and '04aug11'
group by cbssuntgbk, caccgbk, caccroo, ccrytrngbk, creftrl,
cmgmint, ctrdnbmgint, cbasent, cbok, tamtlbl,
cctygbk, caffgbk, dacggll, dctx)
Operator sequence would then look like: TABLE -> FILTER -> AGGREGATOR ->EXPRESSION.
Hope this helps
Mate
Edited by: mate on Sep 26, 2011 1:36 PM
Edited by: mate on Sep 26, 2011 1:36 PM -
How to use group by in analytic function
I need to return, in one row, the department which has the minimum salary. It must be done with an analytic function, but I have a problem with GROUP BY: I cannot use MIN() without GROUP BY.
select * from (select min(sal) min_salary, deptno, RANK() OVER (ORDER BY sal ASC, rownum ASC) RN from emp group by deptno) WHERE RN < 20 order by deptno;
Edited by: senza on 6.11.2009 16:09
Different query, different results.
LPALANI@l11gr2>select department_id, min(salary)
2 from hr.employees
3 group by department_id
4 order by 2;
DEPARTMENT_ID MIN(SALARY)
50 2,100
20 2,100
30 2,500
60 4,200
10 4,400
80 6,100
40 6,500
100 6,900
7,000
110 8,300
70 10,000
90 17,000
12 rows selected.
LPALANI@l11gr2>
LPALANI@l11gr2>-- Always lists one department in a non-deterministic way
LPALANI@l11gr2>select * from (
2 select department_id, min(salary) min_salary
3 from hr.employees
4 group by department_id
5 order by 2) where rownum = 1;
DEPARTMENT_ID MIN_SALARY
20 2,100
LPALANI@l11gr2>
LPALANI@l11gr2>-- Out of the departments with the same least salary, returns the one with the least department number
LPALANI@l11gr2>SELECT MIN (department_id) KEEP (DENSE_RANK FIRST ORDER BY salary) AS dept_with_lowest_sal, min(salary) min_salary
2 FROM hr.employees;
DEPT_WITH_LOWEST_SAL MIN_SALARY
20 2,100
LPALANI@l11gr2>
LPALANI@l11gr2>-- This will list all the deparments with the minimum salary
LPALANI@l11gr2>select department_id, min_salary
2 from (select
3 department_id,
4 min(salary) min_salary,
5 RANK() OVER (ORDER BY min(salary) ASC) RN
6 from hr.employees
7 group by department_id)
8 WHERE rn=1;
DEPARTMENT_ID MIN_SALARY
20 2,100
50 2,100 -
Does sql analytic function help to determine continuity in occurences
We need to solve this problem in a sql statement.
Imagine a table test with three columns:
create table test (id char(1), begin number, end number);
and these values
insert into test values ('a',1,2);
insert into test values ('a',2,3);
insert into test values ('a',3,4);
insert into test values ('a',7,10);
insert into test values ('a',10,15);
insert into test values ('b',5,9);
insert into test values ('b',9,21);
insert into test values ('c',1,5);
Our goal is to determine continuity in the number sequence between the begin and end attributes for the same id, and to determine the min and max numbers of these continuity chains.
The result may be
a, 1, 4
a, 7, 15
b, 5, 21
c, 1, 5
We tested some analytic functions like LAG, LEAD, ROW_NUMBER, MIN and MAX with PARTITION BY, etc., searching for a way to identify the row sets that represent a continuity, but we didn't find a way to mark them so that we could use MIN and MAX to extract the extreme values.
Any idea is really welcome!
Here is our implementation in a real context, for example:
insert into requesterstage(requesterstage_i, requester_i, t_requesterstage_i, datefrom, dateto )
With ListToAdd as
(Select distinct support.requester_i,
support.datefrom,
support.dateto
from support
where support.datefrom < to_date('01.01.2006', 'dd.mm.yyyy')
and support.t_relief_i = t_relief_ipar.fgetflextypologyclassitem_i(t_relief_ipar.fismedicalexpenses)
and not exists
(select null
from requesterstage
where requesterstage.requester_i = support.requester_i
and support.datefrom < nvl(requesterstage.dateto, support.datefrom + 1)
and nvl(support.dateto, requesterstage.datefrom + 1) > requesterstage.datefrom)
ListToAddAnalyzed_1 as
(select requester_i,
datefrom,
dateto,
decode(datefrom,lag(dateto) over (partition by requester_i order by datefrom),0,1) data_set_start
from ListToAdd),
ListToAddAnalyzed_2 as
(select requester_i,
datefrom,
dateto,
data_set_start,
sum(data_set_start) over(order by requester_i, datefrom ) data_set_id
from ListToAddAnalyzed_1)
select requesterstage_iseq.nextval,
requester_i,
t_requesterstage_ipar.fgetflextypologyclassitem_i(t_requesterstage_ipar.fisbefore2006),
datefrom,
decode(sign(nvl(dateto, to_date('01.01.2006', 'dd.mm.yyyy')) -to_date('01.01.2006', 'dd.mm.yyyy')), 0, to_date('01.01.2006', 'dd.mm.yyyy'), -1, dateto, 1, to_date('01.01.2006', 'dd.mm.yyyy'))
from ( select requester_i
, min(datefrom) datefrom
, max(dateto) dateto
From ListToAddAnalyzed_2
group by requester_i, data_set_id
); -
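Stripped back to the toy table, the real-context query above boils down to the classic start-of-group technique. A sketch with the begin/end columns renamed bgn/fin (BEGIN and END are keywords, so the original column names may need renaming or quoting):

```sql
-- 1. is_start flags rows whose bgn does not continue the previous fin;
-- 2. a running SUM of the flags yields a chain id within each id;
-- 3. MIN/MAX per (id, chain id) give each chain's extreme values.
SELECT id, MIN(bgn) AS chain_min, MAX(fin) AS chain_max
FROM  (SELECT id, bgn, fin,
              SUM(is_start) OVER (PARTITION BY id ORDER BY bgn) AS chain_id
       FROM  (SELECT id, bgn, fin,
                     CASE WHEN bgn = LAG(fin) OVER (PARTITION BY id
                                                    ORDER BY bgn)
                          THEN 0 ELSE 1 END AS is_start
              FROM   test))
GROUP  BY id, chain_id
ORDER  BY id, chain_min;
-- with the sample data this yields: a 1 4 / a 7 15 / b 5 21 / c 1 5
```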
Analytic Functions with GROUP-BY Clause?
I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
Hypothetical table SALES, consisting of DAY_ID, PRODUCT_ID, PURCHASER_ID and PURCHASE_PRICE, lists all the individual sales.
Hypothetical Business Question: Product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
INSERT INTO SALES VALUES(1,1,1,1.0);
INSERT INTO SALES VALUES(1,1,1,2.0);
INSERT INTO SALES VALUES(1,2,1,3.0);
INSERT INTO SALES VALUES(1,2,1,4.0);
INSERT INTO SALES VALUES(2,1,1,5.0);
INSERT INTO SALES VALUES(2,1,1,6.0);
INSERT INTO SALES VALUES(2,2,1,7.0);
INSERT INTO SALES VALUES(2,2,1,8.0);
COMMIT;
Day 1: If I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
Day 2: If I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
The desired result set is:
DAY_ID MY_MEASURE
1 6
2 14
The following SQL gets me tantalizingly close:
SELECT DAY_ID,
MAX(PURCHASE_PRICE)
KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
FROM SALES
ORDER BY DAY_ID
DAY_ID MY_MEASURE
1 2
1 2
1 4
1 4
2 6
2 6
2 8
2 8But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytical functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
I am using a reporting tool, so unfortunately using things like inline views are not an option. I need to be able to define "MY_MEASURE" as something the query tool can apply the SUM() function to in its generated SQL.
(Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
I humbly solicit your collective wisdom, oh forum.
Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function.
SELECT DAY_ID,
PRODUCT_ID,
MAX(PURCHASE_PRICE) MAX_PRICE
FROM SALES
GROUP BY DAY_ID,
PRODUCT_ID
ORDER BY DAY_ID,
PRODUCT_ID
DAY_ID PRODUCT_ID MAX_PRICE
1 1 2
1 2 4
2 1 6
2 2 8 -
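For the record, Oracle does allow an analytic function on top of an aggregate in the same query block, which gives the desired one-row-per-day result without an inline view. A sketch against the SALES table from the question:

```sql
-- MAX(purchase_price) is computed per (day_id, product_id) group first,
-- then the analytic SUM adds the per-product maxima within each day;
-- DISTINCT collapses the per-product rows to one row per day.
SELECT DISTINCT
       day_id,
       SUM(MAX(purchase_price)) OVER (PARTITION BY day_id) AS my_measure
FROM   sales
GROUP  BY day_id, product_id;
-- with the sample data: day 1 -> 6, day 2 -> 14
```

Whether a reporting tool can generate SQL of this shape is another matter, but it shows the two-level evaluation order the thread is circling around.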
Hi,
Please find below the table structure and insert scripts. Requesting your valuable help.
create table temp2 (col1 number,col2 varchar2(10),col3 number,col4 varchar2(20));
insert into temp2 values (1,'a',100,'vvv');
insert into temp2 values (2,'b',200,'www');
insert into temp2 values (3,'c',300,'xxx');
insert into temp2 values (4,'d',400,'yyy');
insert into temp2 values (5,'e',500,'zzz');
insert into temp2 values (6,'f',600,'aaa');
insert into temp2 values (7,'g',700,'bbb');
insert into temp2 values (8,'h',800,'ccc');
I am trying to get, with analytic functions, the same output that we get from the UNION query below.
select * from temp2 where col1 in (1,2,3,4,5)
union
select * from temp2 where col1 in (1,2,5,6)
union
select * from temp2 where col1 in (1,2,7,8);
I am seeking help via this dummy example to understand the concept: how can we use analytic functions in place of UNIONs or outer joins?
In my actual query, I am using the same table three times with UNION clauses. Here too we scan temp2 three times, so for bulky tables using UNION hampers the query's performance.
That means three scans of the same table, which is not performance-oriented. With the help of the concept above, I will try to remove the UNIONs from my actual query.
Thanks!!
Thanks for your time BluShadow, and sorry, I think I couldn't make my question clear.
Let me try again. Below there are three queries; you can see that all three use the same tables. The only differences between them are a few conditions.
I know you can't run the query below in your database, but I think it will convey my question. I have noted the number of rows returned by each branch; in total I am getting 67 rows as output (the reason may be that the first and third queries' result sets are subsets of the second query's dataset).
So I want all common rows, as well as any additional rows present in any of the queries. This is easily done with the UNION clause, but I want to do it another way, as here the same table is scanned again and again.
SELECT
START_TX.FX_TRAN_ID START_FX_TRAN_ID
,END_TX.FX_TRAN_ID END_FX_TRAN_ID
,START_TX.ENTERED_DT_TS
,USER
,START_TX.TRADE_DT
,START_TX.DEAL_NUMBER
,START_TX.FX_DEAL_TYPE
,START_TX.ORIENTATION_BUYSELL
,START_TX.BASE_CCY
,START_TX.BASE_CCY_AMT
,START_TX.SECONDARY_CCY
,START_TX.SECONDARY_CCY_AMT
,START_TX.MATURITY_DT
,START_TX.TRADE_RT
,START_TX.FORWARD_PTS
,START_TX.CORPORATE_PIPS
,START_TX.DEAL_OWNER_INITIALS
,START_TX.CORPORATE_DEALER
,START_TX.PROFIT_CENTER_CD
,START_TX.COUNTERPARTY_NM
,START_TX.COUNTERPARTY_NUMBER
FROM
(SELECT * FROM FX_TRANSACTIONS WHERE GMT_CONV_ENTERED_DT_TS >= TO_DATE('20-Nov-2013 4:00:01 AM','DD-Mon-YYYY HH:MI:SS AM')) START_TX
INNER JOIN
(SELECT * FROM FX_TRANSACTIONS WHERE GMT_CONV_ENTERED_DT_TS <= TO_DATE('20-Nov-2013 4:59:59 PM','DD-Mon-YYYY HH:MI:SS AM')) END_TX
ON START_TX.COUNTERPARTY_NM = END_TX.COUNTERPARTY_NM AND
START_TX.COUNTERPARTY_NUMBER = END_TX.COUNTERPARTY_NUMBER AND
START_TX.FX_DEAL_TYPE = END_TX.FX_DEAL_TYPE AND
START_TX.BASE_CCY = END_TX.BASE_CCY AND
START_TX.SECONDARY_CCY = END_TX.SECONDARY_CCY AND
NVL(START_TX.CORPORATE_DEALER,'nullX')=NVL(END_TX.CORPORATE_DEALER,'nullX') AND
START_TX.ORIENTATION_BUYSELL='B' AND
END_TX.ORIENTATION_BUYSELL='S' AND
START_TX.FX_TRAN_ID = 1850718 AND
(START_TX.BASE_CCY_AMT = END_TX.BASE_CCY_AMT
OR
START_TX.SECONDARY_CCY_AMT = END_TX.SECONDARY_CCY_AMT) -- 10 Rows
UNION
SELECT
START_TX.FX_TRAN_ID START_FX_TRAN_ID
,END_TX.FX_TRAN_ID END_FX_TRAN_ID
,START_TX.ENTERED_DT_TS
,USER
,START_TX.TRADE_DT
,START_TX.DEAL_NUMBER
,START_TX.FX_DEAL_TYPE
,START_TX.ORIENTATION_BUYSELL
,START_TX.BASE_CCY
,START_TX.BASE_CCY_AMT
,START_TX.SECONDARY_CCY
,START_TX.SECONDARY_CCY_AMT
,START_TX.MATURITY_DT
,START_TX.TRADE_RT
,START_TX.FORWARD_PTS
,START_TX.CORPORATE_PIPS
,START_TX.DEAL_OWNER_INITIALS
,START_TX.CORPORATE_DEALER
,START_TX.PROFIT_CENTER_CD
,START_TX.COUNTERPARTY_NM
,START_TX.COUNTERPARTY_NUMBER
FROM
(SELECT * FROM FX_TRANSACTIONS WHERE GMT_CONV_ENTERED_DT_TS >= TO_DATE('20-Nov-2013 4:00:01 AM','DD-Mon-YYYY HH:MI:SS AM')) START_TX
INNER JOIN
(SELECT * FROM FX_TRANSACTIONS WHERE GMT_CONV_ENTERED_DT_TS <= TO_DATE('20-Nov-2013 4:59:59 PM','DD-Mon-YYYY HH:MI:SS AM')) END_TX
ON START_TX.COUNTERPARTY_NM = END_TX.COUNTERPARTY_NM AND
START_TX.COUNTERPARTY_NUMBER = END_TX.COUNTERPARTY_NUMBER AND
START_TX.FX_DEAL_TYPE = END_TX.FX_DEAL_TYPE AND
START_TX.BASE_CCY = END_TX.BASE_CCY AND
START_TX.SECONDARY_CCY = END_TX.SECONDARY_CCY AND
NVL(START_TX.CORPORATE_DEALER,'nullX')=NVL(END_TX.CORPORATE_DEALER,'nullX') AND
START_TX.FX_TRAN_ID = 1850718 AND
START_TX.ORIENTATION_BUYSELL='B' AND
END_TX.ORIENTATION_BUYSELL='S' -- 67 Rows
UNION
SELECT
START_TX.FX_TRAN_ID START_FX_TRAN_ID
,END_TX.FX_TRAN_ID END_FX_TRAN_ID
,START_TX.ENTERED_DT_TS
,USER
,START_TX.TRADE_DT
,START_TX.DEAL_NUMBER
,START_TX.FX_DEAL_TYPE
,START_TX.ORIENTATION_BUYSELL
,START_TX.BASE_CCY
,START_TX.BASE_CCY_AMT
,START_TX.SECONDARY_CCY
,START_TX.SECONDARY_CCY_AMT
,START_TX.MATURITY_DT
,START_TX.TRADE_RT
,START_TX.FORWARD_PTS
,START_TX.CORPORATE_PIPS
,START_TX.DEAL_OWNER_INITIALS
,START_TX.CORPORATE_DEALER
,START_TX.PROFIT_CENTER_CD
,START_TX.COUNTERPARTY_NM
,START_TX.COUNTERPARTY_NUMBER
FROM
(SELECT * FROM FX_TRANSACTIONS WHERE GMT_CONV_ENTERED_DT_TS >= TO_DATE('20-Nov-2013 4:00:01 AM','DD-Mon-YYYY HH:MI:SS AM')) START_TX
INNER JOIN
(SELECT * FROM FX_TRANSACTIONS WHERE GMT_CONV_ENTERED_DT_TS <= TO_DATE('20-Nov-2013 4:59:59 PM','DD-Mon-YYYY HH:MI:SS AM')) END_TX
ON START_TX.COUNTERPARTY_NM = END_TX.COUNTERPARTY_NM AND
START_TX.COUNTERPARTY_NUMBER = END_TX.COUNTERPARTY_NUMBER AND
START_TX.FX_DEAL_TYPE = END_TX.FX_DEAL_TYPE AND
START_TX.BASE_CCY = END_TX.BASE_CCY AND
START_TX.SECONDARY_CCY = END_TX.SECONDARY_CCY AND
NVL(START_TX.CORPORATE_DEALER,'nullX')=NVL(END_TX.CORPORATE_DEALER,'nullX') AND
START_TX.ORIENTATION_BUYSELL='B' AND
END_TX.ORIENTATION_BUYSELL='S' AND
START_TX.FX_TRAN_ID = 1850718 AND
END_TX.BASE_CCY_AMT BETWEEN (START_TX.BASE_CCY_AMT - (START_TX.BASE_CCY_AMT * :PERC_DEV/100)) AND (START_TX.BASE_CCY_AMT + (START_TX.BASE_CCY_AMT * :PERC_DEV/100))
OR
END_TX.SECONDARY_CCY_AMT BETWEEN (START_TX.SECONDARY_CCY_AMT - (START_TX.SECONDARY_CCY_AMT*:PERC_DEV/100) ) AND (START_TX.SECONDARY_CCY_AMT + (START_TX.SECONDARY_CCY_AMT*:PERC_DEV/100))
); --- 10 Rows
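On the dummy table the rewrite is mechanical: one scan with the branch filters OR-ed together replaces the three-branch UNION (the duplicate removal UNION performs is unnecessary here, since each base row can qualify at most once). A sketch:

```sql
SELECT *
FROM   temp2
WHERE  col1 IN (1,2,3,4,5)
    OR col1 IN (1,2,5,6)
    OR col1 IN (1,2,7,8);
```

In the real query, if (as the row counts 10/67/10 suggest) the first and third branches are subsets of the second, then the second branch alone already returns the whole union; otherwise the branch-specific predicates can be OR-ed, carefully parenthesized, onto the shared join.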