Analytic function Issue
CREATE TABLE Example AS
select 123 Emp_Id, 200 Amt from dual
Union select 124 Emp_Id, 350 Amt from dual
Union select 125 Emp_Id, 250 Amt from dual
Union select 126 Emp_Id, 250 Amt from dual
Union select 127 Emp_Id, 100 Amt from dual;
I have the following dataset:
Emp_Id TotalAmt
123 200
124 350
125 250
126 250
127 100
With this query:
SELECT EMP_ID,Amt,
SUM (Amt) OVER ( ORDER BY Amt DESC) AS total
FROM Example
WHERE Amt > 0 ;
the query returns:
Emp_Id Amt Total
124 350 350
125 250 850
126 250 850
123 200 1050
127 100 1150
Here the second and third rows tie on Amt = 250, so the second row's total also includes the third row's amount.
Expected output is:
Emp_Id Amt Total
124 350 350
125 250 600
126 250 850
123 200 1050
127 100 1150
Hi,
Add more expressions to the analytic ORDER BY clause so that there are no ties in the ordering.
For example:
SELECT EMP_ID, Amt,
       SUM (Amt) OVER ( ORDER BY Amt DESC
                               , emp_id   -- Added
                      ) AS total
FROM   Example
WHERE  Amt > 0;
"SUM (x) OVER (ORDER BY y)" gives the same result for every row that has the same value of y, and a different result only where y differs.
If there is nothing else unique about each row, you can use ROWNUM.
This behavior is a consequence of the default windowing clause, "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW". "RANGE" refers to the value(s) in the analytic ORDER BY clause. SBH (above) gave another solution, which explicitly changes the windowing from RANGE to ROWS, meaning you get a separate result for each row, regardless of the value(s) of y.
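To make the windowing explicit, here is a sketch of the ROWS-based variant against the Example table created above (note that which of the two tied Amt = 250 rows comes first is arbitrary unless you also break the tie in the ORDER BY):

```sql
-- ROWS counts physical rows, so each row gets its own running
-- total even when Amt values tie
SELECT emp_id, amt,
       SUM (amt) OVER ( ORDER BY amt DESC
                        ROWS BETWEEN UNBOUNDED PRECEDING
                             AND CURRENT ROW
                      ) AS total
FROM   Example
WHERE  amt > 0;
```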
Edited by: Frank Kulash on Jul 12, 2010 5:40 AM
Similar Messages
-
Oracle Analytic Function Issue
Hi, I have created a simple table containing 3 columns and 3 records.
Insert into MYTABLE (ID, AMOUNT, RESULT) Values (1, 1, 1);
Insert into MYTABLE (ID, AMOUNT, RESULT) Values (2, 4, 1);
Insert into MYTABLE (ID, AMOUNT, RESULT) Values (3, 7, 0);
COMMIT;
I can SUM the AMOUNT using the analytic functions as
SELECT ID, AMOUNT, RESULT, SUM(AMOUNT) OVER() as TOTAL
FROM MYTABLE;
ID AMOUNT RESULT TOTAL
1 1 1 12
2 4 1 12
3 7 0 12
What I want to be able to do is sum the AMOUNTs by RESULT, in this case 0 and 1.
To get the following result, how should I rewrite the query?
ID AMOUNT RESULT TOTAL RESULT_0_TOTAL RESULT_1_TOTAL
1 1 1 12 7 5
2 4 1 12 7 5
3 7 0 12 7 5

SELECT ID, AMOUNT, RESULT
     , SUM(AMOUNT) OVER() as TOTAL
     , SUM(CASE WHEN RESULT = 0 THEN AMOUNT ELSE 0 END) OVER() as RESULT_0_TOTAL
     , SUM(CASE WHEN RESULT = 1 THEN AMOUNT ELSE 0 END) OVER() as RESULT_1_TOTAL
FROM MYTABLE;
-
Replacing Oracle's FIRST_VALUE and LAST_VALUE analytical functions.
Hi,
I am using OBI 10.1.3.2.1 where, I guess, EVALUATE is not available. I would like to know alternatives, especially to replace Oracle's FIRST_VALUE and LAST_VALUE analytic functions.
I want to track some changes. For example, there are four methods of travel - Air, Train, Road and Sea. I would like to know a traveler's first method of traveling and last method of traveling in a year. If the two match, then a certain action is taken; if they do not match, then another action is taken.
I tried as under.
1. Get the Sequence ID for each travel within a year per traveler as Sequence_Id.
2. Get the Lowest Sequence ID (which should be 1) for travels within a year per traveler as Sequence_LId.
3. Get the Highest Sequence ID (which could be 1 or greater than 1) for travels within a year per traveler as Sequence_HId.
4. If Sequence ID = Lowest Sequence ID then display the method of travel as First Method of Travel.
5. If Sequence ID = Highest Sequence ID then display the method of travel as Latest Method of Travel.
6. If First Method of Travel = Latest Method of Travel then display Yes/No as Match.
The issue is that cells can be blank in First Method of Travel and Last Method of Travel unless the traveler traveled only once in a year.
Using Oracle's FIRST_VALUE and LAST_VALUE analytical functions, I can get a result like
Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
ABC | 01/01/2000 | 04/04/2000 | Road | Road | Air | No
ABC | 01/01/2000 | 15/12/2000 | Air | Road | Air | No
XYZ | 01/01/2000 | 04/05/2000 | Train | Train | Train | Yes
XYZ | 01/01/2000 | 04/11/2000 | Train | Train | Train | Yes
Using OBI Answers, I am getting something like this.
Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
ABC | 01/01/2000 | 04/04/2000 | Road | Road | <BLANK> | No
ABC | 01/01/2000 | 15/12/2000 | Air | <BLANK> | Air | No
XYZ | 01/01/2000 | 04/05/2000 | Train | Train | <BLANK> | No
XYZ | 01/01/2000 | 04/11/2000 | Train | <BLANK> | Train | No
Above, for XYZ traveler the Match? clearly shows a wrong result (although somehow it's correct for traveler ABC).
Would appreciate if someone can guide me how to resolve the issue.
Many thanks,
Manoj.
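For reference, a sketch of the Oracle-side query that produces those First/Last columns (the travels table and column names here are assumed for illustration, not taken from the post). The key point is that LAST_VALUE needs an explicit window, because its default frame stops at the current row - exactly the kind of blank-cell behavior described above:

```sql
SELECT traveler, journey_date, method
       -- first method of travel per traveler per year
     , FIRST_VALUE (method) OVER ( PARTITION BY traveler
                                              , TRUNC (journey_date, 'YYYY')
                                   ORDER BY journey_date
                                 ) AS first_method
       -- last method: the explicit frame is required, otherwise the
       -- default frame ends at CURRENT ROW and earlier rows see blanks
     , LAST_VALUE (method) OVER ( PARTITION BY traveler
                                             , TRUNC (journey_date, 'YYYY')
                                  ORDER BY journey_date
                                  ROWS BETWEEN UNBOUNDED PRECEDING
                                       AND UNBOUNDED FOLLOWING
                                ) AS last_method
FROM   travels;
```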
Edited by: mandix on 27-Nov-2009 08:43
Edited by: mandix on 27-Nov-2009 08:47
Hi,
Just to recap: in OBI 10.1.3.2.1, I am trying to find an alternative to the FIRST_VALUE and LAST_VALUE analytic functions used in Oracle. Somehow, I feel it's achievable. I would like to know answers to the following questions.
1. Is there any way of referring to a cell value and displaying it in other cells for a reference value?
For example, can I display the First Method of Travel for traveler 'ABC' and 'XYZ' for all the rows returned in the same column, respectively?
2. I tried the RMIN, RMAX functions in the RPD but they do not accept a "BY" clause (for example, RMIN(Transaction_Id BY Traveler) to define the Lowest Sequence Id per traveler). Am I doing something wrong here? Why can a formula with a "BY" clause be defined in Answers but not in the RPD? The idea is to use this in Answers. This is in relation to my first question.
Could someone please let me know?
I understand that this thread may relate to something that can be done outside OBI, but I would still like to know.
If anything is not clear please let me know.
Thanks,
Manoj.
-
Analytic function to count rows based on Special criteria
Hi
I have the following query, which uses an analytic function but gives wrong results in the last column, COUNT.
Please help me achieve the required result; I need to change the way I select the last column.
1) I am getting the output ordered by the b.sequence_no column. This is a must.
2) COUNT column:
I don't want the total count based on the THOR column, hence there is no point in grouping by that column.
The actual requirement for COUNT is:
2a) If the THOR and LOC combination in the next row changes to a new value, then COUNT = 1
(in other words, if it is different from the following row).
2b) If the THOR and LOC values repeat in the following rows, then the count should be the total of all those identical rows, until the rows become different.
(In case 2b, where the rows are the same, I also only want to show those identical rows once. This is shown in MY REQUIRED OUTPUT.)
My present query:
select r.name REGION ,
p.name PT,
do.name DELOFF,
ro.name ROUTE,
decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name)
THOR,
l.name LOC ,
b.sequence_no SEQ,
CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
                             OVER (order by b.sequence_no)
     or th.thorfare_name = LEAD (th.thorfare_name)
                           OVER (order by b.sequence_no)
THEN COUNT(b.sequence_no) OVER (partition by r.name, th.thorfare_name, l.name
                                order BY b.sequence_no)
ELSE 1
END AS "COUNT"
from t_regions r,t_post_towns p,t_delivery_offices do, t_routes ro, t_counties c,t_head_offices ho,
t_buildings b,t_thoroughfares th,t_localities l
where th.thorfare_id = b.thorfare_id
and nvl(b.invalid,'N')='N'
and b.route_id=ro.route_id(+)
and b.locality_id =l.locality_id(+)
and ro.delivery_office_id=do.delivery_office_id(+)
and do.post_town_id = p.post_town_id(+)
and p.ho_id=ho.ho_id(+)
and ho.county_id = c.county_id(+)
and c.region_id = r.region_id(+)
and r.name='NAAS'
and do.DELIVERY_OFFICE_id= &&DELIVERY_OFFICE_id
and ro.route_id=3405
group by r.name,p.name,do.name,ro.name,th.thorfare_name,l.name,b.sequence_no
ORDER BY ro.name, b.sequence_no;
My incorrect output [PART OF DATA]:
>
REGION PT DELOFF ROUTE THOR LOC SEQ COUNT
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 1 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 2 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 4 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 5 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 THEGROVE CEL 2 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 7 3
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 8 4
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 9 5
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 10 6
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 11 7
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 12 8
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 15 2
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 19 3
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 24 4
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 29 5
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 34 6
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 39 7
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 42 2
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 43 2
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 44 3
My required output[PART OF DATA]-Please compare with the above.:
>
REGION PT DELOFF ROUTE THOR LOC COUNT
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 THEGROVE CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 6
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 7
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 2
NOTE: the COUNT = 1 rows are coming out correctly.
But where there are identical rows and I want to take the total count of them, I am not getting it.
Please help.
Thanks
Edited by: Krithi on 04-Nov-2010 05:28
Nicosa wrote:
Hi,
Can you give us some sample data (CREATE TABLE + INSERT statements) to play with?
Considering your output, I'm not even sure you need an analytic count.
Yes, sure.
I am describing the query again here, now with 3 tables, to make it easier to understand.
Given below are the CREATE TABLE and INSERT statements for these 3 tables.
The tables are BUILDINGSV, THORV and LOCV.
CREATE TABLE BUILDINGSV (
BUILDING_ID NUMBER(10) NOT NULL,
INVALID VARCHAR2(1 BYTE),
ROUTE_ID NUMBER(10),
LOCALITY_ID NUMBER(10),
SEQUENCE_NO NUMBER(4),
THORFARE_ID NUMBER(10) NOT NULL
);
CREATE TABLE THORV (
THORFARE_ID NUMBER(10) NOT NULL,
THORFARE_NAME VARCHAR2(40 BYTE) NOT NULL
);
CREATE TABLE LOCV (
LOCALITY_ID NUMBER(10) NOT NULL,
NAME VARCHAR2(40 BYTE) NOT NULL
);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002372, 'N', 3405, 37382613, 5, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002363, 'N', 3405, 37382613, 57, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002362, 'N', 3405, 37382613, 56, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002360, 'N', 3405, 37382613, 52, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002358, 'N', 3405, 37382613, 1, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002240, 'N', 3405, 37382613, 6, 9002284);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002229, 'N', 3405, 37382613, 66, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002228, 'N', 3405, 37382613, 65, 35291872);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002226, 'N', 3405, 37382613, 62, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002222, 'N', 3405, 37382613, 43, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002217, 'N', 3405, 37382613, 125, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002221, 'N', 3405, 37382613, 58, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002214, 'N', 3405, 37382613, 128, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33363182, 'N', 3405, 37382613, 114, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33363185, 'N', 3405, 37382613, 115, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002371, 'N', 3405, 37382613, 2, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003329, 'N', 3405, 37382613, 415, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002359, 'N', 3405, 37382613, 15, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002224, 'N', 3405, 37382613, 61, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003318, 'N', 3405, 37382613, 411, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003326, 'N', 3405, 37382613, 412, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003327, 'N', 3405, 37382613, 413, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003328, 'N', 3405, 37382613, 414, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003330, 'N', 3405, 37382613, 416, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003331, 'N', 3405, 37382613, 417, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003332, 'N', 3405, 37382613, 410, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27004795, 'N', 3405, 37382613, 514, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27004807, 'N', 3405, 37382613, 515, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002227, 'N', 3405, 37382613, 64, 35291872);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33230805, 'N', 3405, 37382613, 44, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231027, 'N', 3405, 37382613, 7, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231058, 'N', 3405, 37382613, 9, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231078, 'N', 3405, 37382613, 10, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231087, 'N', 3405, 37382613, 11, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231093, 'N', 3405, 37382613, 12, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33229890, 'N', 3405, 37382613, 55, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561996, 'N', 3405, 34224751, 544, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561997, 'N', 3405, 34224751, 543, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561998, 'N', 3405, 34224751, 555, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562000, 'N', 3405, 34224751, 541, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562001, 'N', 3405, 34224751, 538, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562028, 'N', 3405, 35417256, 525, 0);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562031, 'N', 3405, 35417256, 518, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562032, 'N', 3405, 35417256, 519, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562033, 'N', 3405, 35417256, 523, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561939, 'N', 3405, 34224751, 551, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561940, 'N', 3405, 34224751, 552, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561941, 'N', 3405, 34224751, 553, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561942, 'N', 3405, 35417256, 536, 0);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561943, 'N', 3405, 35417256, 537, 0);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561970, 'N', 3405, 35417256, 522, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561972, 'N', 3405, 35417256, 527, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561974, 'N', 3405, 35417256, 530, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561975, 'N', 3405, 35417256, 531, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561980, 'N', 3405, 34224751, 575, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561981, 'N', 3405, 34224751, 574, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561983, 'N', 3405, 34224751, 571, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561984, 'N', 3405, 34224751, 570, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561985, 'N', 3405, 34224751, 568, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561986, 'N', 3405, 34224751, 567, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561987, 'N', 3405, 34224751, 566, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561989, 'N', 3405, 34224751, 563, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561990, 'N', 3405, 34224751, 562, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561991, 'N', 3405, 34224751, 560, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561992, 'N', 3405, 34224751, 559, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561993, 'N', 3405, 34224751, 558, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561994, 'N', 3405, 34224751, 548, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561995, 'N', 3405, 34224751, 546, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562160, 'N', 3405, 37382613, 139, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562161, 'N', 3405, 37382613, 140, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562162, 'N', 3405, 37382613, 141, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562163, 'N', 3405, 37382613, 142, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562164, 'N', 3405, 37382613, 143, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562165, 'N', 3405, 37382613, 145, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562166, 'N', 3405, 37382613, 100, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562167, 'N', 3405, 37382613, 102, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562171, 'N', 3405, 37382613, 107, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562172, 'N', 3405, 37382613, 108, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562174, 'N', 3405, 37382613, 110, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562175, 'N', 3405, 37382613, 111, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562176, 'N', 3405, 37382613, 112, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562177, 'N', 3405, 37382613, 113, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562182, 'N', 3405, 37382613, 123, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562183, 'N', 3405, 37382613, 121, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562184, 'N', 3405, 37382613, 120, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562185, 'N', 3405, 37382613, 118, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562186, 'N', 3405, 37382613, 117, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562187, 'N', 3405, 37382613, 116, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562189, 'N', 3405, 37382613, 95, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562190, 'N', 3405, 37382613, 94, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562213, 'N', 3405, 37382613, 89, 35291872);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562240, 'N', 3405, 35417256, 516, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329559, 'N', 3405, 35329152, 443, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329560, 'N', 3405, 35329152, 444, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329562, 'N', 3405, 35329152, 446, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329109, 'N', 3405, 35329152, 433, 35329181);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329169, 'N', 3405, 35329152, 434, 35329181);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329557, 'N', 3405, 35329152, 441, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329558, 'N', 3405, 35329152, 442, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329191, 'N', 3405, 35329152, 436, 35329181);
COMMIT;
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(0, 'OSIUNKNOWN');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(9002284, 'THE GROVE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(9002364, 'DUBLIN ROAD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(9002375, 'NEWTOWN ROAD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35291872, 'HAZELHATCH ROAD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35291878, 'SIMMONSTOWN PARK');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35291883, 'PRIMROSE HILL');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329181, 'THE COPSE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329213, 'THE COURT');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329529, 'THE CRESCENT');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329551, 'THE LAWNS');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329580, 'THE DRIVE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35417271, 'TEMPLEMILLS COTTAGES');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35417360, 'CHELMSFORD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(36500023, 'THE CLOSE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(36500101, 'THE GREEN');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375569, 'THE DOWNS');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375595, 'THE PARK');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375754, 'THE AVENUE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375781, 'THE VIEW');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37376046, 'THE CRESCENT');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37376048, 'THE GLADE');
COMMIT;
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(34224751, 'SIMMONSTOWN');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(35417256, 'TEMPLEMILLS');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(35329152, 'TEMPLE MANOR');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(37382613, 'CELBRIDGE');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(37375570, 'SAINT WOLSTAN''S ABBEY');
COMMIT;
------------------------------------------------------------------------------
Now the query with the wrong result:
select decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name)
THOR,
l.name LOC,
b.sequence_no SEQ,
CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
                             OVER (order by b.sequence_no)
     or th.thorfare_name = LEAD (th.thorfare_name)
                           OVER (order by b.sequence_no)
THEN COUNT(b.sequence_no) OVER (partition by th.thorfare_name, l.name
                                order BY b.sequence_no)
ELSE 1
END AS "COUNT"
from BUILDINGSV b, THORV th, LOCV l
where th.thorfare_id = b.thorfare_id
and nvl(b.invalid,'N')='N'
and b.route_id=3405
and b.locality_id =l.locality_id(+)
order by b.sequence_no;
The query result - WRONG (only first few lines):
THOR LOC SEQ COUNT
DUBLIN ROAD CELBRIDGE 1 1
NEWTOWN ROAD CELBRIDGE 2 1
NEWTOWN ROAD CELBRIDGE 5 2
THE GROVE CELBRIDGE 6 1
NEWTOWN ROAD CELBRIDGE 7 3
NEWTOWN ROAD CELBRIDGE 9 4
NEWTOWN ROAD CELBRIDGE 10 5
NEWTOWN ROAD CELBRIDGE 11 6
NEWTOWN ROAD CELBRIDGE 12 7
DUBLIN ROAD CELBRIDGE 15 1
PRIMROSE HILL CELBRIDGE 43 1
PRIMROSE HILL CELBRIDGE 44 2
DUBLIN ROAD CELBRIDGE 52 3
DUBLIN ROAD CELBRIDGE 55 4
DUBLIN ROAD CELBRIDGE 56 5
DUBLIN ROAD CELBRIDGE 57 6
DUBLIN ROAD CELBRIDGE 58 7
PRIMROSE HILL CELBRIDGE 61 3
PRIMROSE HILL CELBRIDGE 62 4
HAZELHATCH ROAD CELBRIDGE 64 1
HAZELHATCH ROAD CELBRIDGE 65 2
The query result - EXPECTED (only first few lines):
THOR LOC COUNT
DUBLIN ROAD CELBRIDGE 1
NEWTOWN ROAD CELBRIDGE 2
THE GROVE CELBRIDGE 1
NEWTOWN ROAD CELBRIDGE 5
DUBLIN ROAD CELBRIDGE 1
PRIMROSE HILL CELBRIDGE 2
DUBLIN ROAD CELBRIDGE 5
PRIMROSE HILL CELBRIDGE 2
HAZELHATCH ROAD CELBRIDGE 2
Please note: in the expected result, I only need 1 row, but it must show the total count of rows until the names change.
So the issues are:
1) The COUNT column values are wrong in my query.
2) I don't want to repeat the same rows (please see the EXPECTED output and compare it against the original).
3) I want the output exactly as in the EXPECTED OUTPUT, and I don't want to group by THOR name (e.g. I don't want one count for all DUBLIN ROAD rows; I want to compare each row with the next one: if the THOR/LOC combination is different in the next row then COUNT = 1, else COUNT = the number of rows for that THOR/LOC combination until the combination changes, so identical consecutive rows are shown as 1 row with the total count).
I am explaining this below in more detail.
I only need 1 row per run of identical THOR/LOC names, with the count shown against that 1 row (i.e. COUNT = how many rows have the same THOR/LOC combination until the combination changes value).
Then repeat the process until all rows are finished.
If the following row is a different THOR/LOC combination, then the count for that row is 1.
Hope this is clear.
Is this doable?
Thanks in advance.
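Not from the thread, but one common way to sketch this kind of consecutive-run count in Oracle: flag the start of each new THOR/LOC run with LAG, turn the flags into a running group number, then aggregate one row per group. The table names are the ones posted above; the OSIUNKNOWN decode is omitted for brevity:

```sql
WITH got_grp AS (
    SELECT th.thorfare_name AS thor
         , l.name           AS loc
         , b.sequence_no    AS seq
           -- the CASE yields 0 while the THOR/LOC pair repeats and 1 when it
           -- changes (or on the first row); the running SUM numbers the runs
         , SUM ( CASE WHEN th.thorfare_name = LAG (th.thorfare_name) OVER (ORDER BY b.sequence_no)
                       AND l.name           = LAG (l.name)           OVER (ORDER BY b.sequence_no)
                      THEN 0
                      ELSE 1
                 END
               ) OVER (ORDER BY b.sequence_no) AS grp
    FROM   BUILDINGSV b
    JOIN   THORV th ON th.thorfare_id = b.thorfare_id
    LEFT JOIN LOCV l ON l.locality_id = b.locality_id
    WHERE  NVL (b.invalid, 'N') = 'N'
    AND    b.route_id = 3405
)
SELECT   thor, loc, COUNT (*) AS cnt   -- one row per run, with its length
FROM     got_grp
GROUP BY grp, thor, loc
ORDER BY MIN (seq);
```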
Edited by: Krithi on 04-Nov-2010 07:45
Edited by: Krithi on 04-Nov-2010 07:45
Edited by: Krithi on 04-Nov-2010 08:31
-
Analytic function to retrieve a value one year ago
Hello,
I'm trying to find an analytic function to get a value from another row by looking at a date, with Oracle 11gR2.
I have a table with a date_id (truncated date), a flag and a measure. For each date, I have at least one row (sometimes 2), so it is gapless.
I would like to find analytic functions to show for each date :
sum of the measure for that date
sum of the measure one week ago
sum of the measure one year ago
As it is gapless, I managed to do it for the week by doing a GROUP BY date in a subquery and using a LAG with the offset set to 7 on top of it (see below).
However, I'm struggling with how to do that for the data one year ago, as we might have leap years. I cannot simply set the offset to 365.
Is it possible to do it with a RANGE BETWEEN window clause? I can't manage to have it working with dates.
Week :LAG with offset 7
SQL Fiddle
or
create table daily_counts (
   date_id date,
   internal_flag number,
   measure1 number
);
insert into daily_counts values ('01-Jan-2013', 0, 8014);
insert into daily_counts values ('01-Jan-2013', 1, 2);
insert into daily_counts values ('02-Jan-2013', 0, 1300);
insert into daily_counts values ('02-Jan-2013', 1, 37);
insert into daily_counts values ('03-Jan-2013', 0, 19);
insert into daily_counts values ('03-Jan-2013', 1, 14);
insert into daily_counts values ('04-Jan-2013', 0, 3);
insert into daily_counts values ('05-Jan-2013', 0, 0);
insert into daily_counts values ('05-Jan-2013', 1, 1);
insert into daily_counts values ('06-Jan-2013', 0, 0);
insert into daily_counts values ('07-Jan-2013', 1, 3);
insert into daily_counts values ('08-Jan-2013', 0, 33);
insert into daily_counts values ('08-Jan-2013', 1, 9);
commit;
select
    date_id,
    total1,
    LAG(total1, 7) OVER (ORDER BY date_id) AS total_one_week_ago
from (
    select
        date_id,
        SUM(measure1) AS total1
    from daily_counts
    group by date_id
)
order by 1;
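As a sanity check of the LAG(total1, 7) logic on gapless daily totals, here is a small Python simulation (the totals are the per-day sums of the sample data above; the list indexing mirrors LAG's fixed row offset):

```python
# simulate LAG(total1, 7) OVER (ORDER BY date_id) on gapless, ordered daily totals
daily_totals = [('2013-01-01', 8016), ('2013-01-02', 1337), ('2013-01-03', 33),
                ('2013-01-04', 3), ('2013-01-05', 1), ('2013-01-06', 0),
                ('2013-01-07', 3), ('2013-01-08', 42)]

report = []
for i, (day, total) in enumerate(daily_totals):
    # LAG(total1, 7): the value exactly 7 rows earlier, None when it doesn't exist
    week_ago = daily_totals[i - 7][1] if i >= 7 else None
    report.append((day, total, week_ago))
print(report[-1])  # ('2013-01-08', 42, 8016)
```

This only works because the grouped data is guaranteed to have exactly one row per day; with gaps, a fixed row offset would point at the wrong date.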
Year : no idea?
I can't give a gapless example (it would be too long), but maybe there is a solution using the date directly:
SQL Fiddle
or add this to the schema above :
insert into daily_counts values ('07-Jan-2012', 0, 11);
insert into daily_counts values ('07-Jan-2012', 1, 1);
insert into daily_counts values ('08-Jan-2012', 1, 4);
Thank you for your help.
Floyd
Hi,
Sorry, I'm not sure I understand the problem.
If you are certain that there is at least 1 row for every day, then you can be sure that the GROUP BY will produce exactly 1 row per day, and you can use LAG (total1, 365) just like you already use LAG (total1, 7).
Are you concerned about leap years? That is, when the day is March 1, 2016, do you want the total_one_year_ago column to reflect March 1, 2015, which was 366 days earlier? In that case, use
date_id - ADD_MONTHS (date_id, -12)
instead of 365.
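A quick way to check that this expression yields 365 or 366 depending on leap years; a Python sketch with a minimal ADD_MONTHS-style helper (the helper is an assumption for illustration, not Oracle's implementation):

```python
import calendar
from datetime import date

def add_months(d, months):
    # minimal ADD_MONTHS-style helper (assumption: clamps the day to month end)
    y, m = divmod(d.month - 1 + months, 12)
    y, m = d.year + y, m + 1
    return date(y, m, min(d.day, calendar.monthrange(y, m)[1]))

def days_one_year_ago(d):
    # mirrors date_id - ADD_MONTHS(date_id, -12): the day offset to "one year ago"
    return (d - add_months(d, -12)).days

print(days_one_year_ago(date(2016, 3, 1)))  # 366: the range spans Feb 29, 2016
print(days_one_year_ago(date(2013, 3, 1)))  # 365
```

So the offset itself becomes data-driven instead of a hard-coded 365.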
LAG only works with an exact number, but you can use RANGE BETWEEN with other analytic functions, such as MIN or SUM:
SELECT DISTINCT
date_id
, SUM (measure1) OVER (PARTITION BY date_id) AS total1
, SUM (measure1) OVER ( ORDER BY date_id
RANGE BETWEEN 7 PRECEDING
AND 7 PRECEDING
) AS total1_one_week_ago
, SUM (measure1) OVER ( ORDER BY date_id
RANGE BETWEEN 365 PRECEDING
AND 365 PRECEDING
) AS total1_one_year_ago
FROM daily_counts
ORDER BY date_id
Again, use date arithmetic instead of the hard-coded 365, if that's an issue.
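The difference between LAG and the RANGE window matters when there are gaps: RANGE BETWEEN n PRECEDING AND n PRECEDING looks at the value exactly n units back, whether or not the intermediate rows exist. A small Python sketch of that lookup (hypothetical totals; an absent day simply yields no value):

```python
from datetime import date, timedelta

# hypothetical daily totals keyed by date (gaps allowed)
totals = {
    date(2013, 1, 1): 8016,
    date(2013, 1, 2): 1337,
    date(2013, 1, 8): 42,
}

def total_n_days_ago(d, n):
    # mirrors RANGE BETWEEN n PRECEDING AND n PRECEDING over ORDER BY date:
    # pick the total of exactly (d - n days), or None when that day is absent
    return totals.get(d - timedelta(days=n))

print(total_n_days_ago(date(2013, 1, 8), 7))  # 8016
print(total_n_days_ago(date(2013, 1, 8), 3))  # None: no row 3 days earlier
```

Unlike a fixed row offset, the lookup keys on the date value itself, which is why the RANGE form tolerates missing days.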
As Hoek said, it really helps to post the exact results you want from the given sample data. You're miles ahead of the people who don't even post the sample data, though.
You're right not to post hundreds of INSERT statements to get a year's data. Here's one way to generate sample data for lots of rows at the same time:
-- Put a 0 into the table for every day in 2012
INSERT INTO daily_counts (date_id, measure1)
SELECT DATE '2011-12-31' + LEVEL
, 0
FROM dual
CONNECT BY LEVEL <= 366 -
Analytical function fine within TOAD but throwing an error for a mapping.
Hi,
When I validate an expression based on SUM .... OVER PARTITION BY in a mapping, I am getting the following error.
Line 4, Col 23:
PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
* & = - + < / > at in is mod remainder not rem then
<an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
LIKE4_ LIKEC_ between || multiset member SUBMULTISET_
However, using TOAD, the expression is working fine.
A staging table has got three columns, col1, col2 and col3. The expression is checking for a word in col3. The expression is as under.
(CASE WHEN SUM (CASE WHEN UPPER(INGRP1.col3) LIKE 'some_value%'
THEN 1
ELSE 0
END) OVER (PARTITION BY INGRP1.col1
,INGRP1.col2) > 0
THEN 'Y'
ELSE 'N'
END)
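If it helps to see the intended semantics, the CASE/SUM-OVER expression can be mimicked in plain Python: count the matching rows per (col1, col2) partition, then flag every row in a partition that has at least one match. (The data and the 'some_value' prefix below are hypothetical.)

```python
# sketch of the CASE/SUM OVER (PARTITION BY col1, col2) flag in plain Python
rows = [  # (col1, col2, col3)
    ('a', 1, 'some_value_x'),
    ('a', 1, 'other'),
    ('b', 2, 'other'),
]

# count matches per partition, like SUM(CASE ... END) OVER (PARTITION BY col1, col2)
groups = {}
for col1, col2, col3 in rows:
    hit = col3.upper().startswith('SOME_VALUE')  # the LIKE 'some_value%' test
    groups[(col1, col2)] = groups.get((col1, col2), 0) + int(hit)

# every row in a partition with at least one match gets 'Y'
flagged = [(c1, c2, c3, 'Y' if groups[(c1, c2)] > 0 else 'N')
           for c1, c2, c3 in rows]
print(flagged)
```

Note that the second row is flagged 'Y' even though it doesn't match, because its partition contains a match; that is the whole point of the windowed SUM.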
I searched the forum for similar issues, but not able to resolve my issue.
Could you please let me know what's wrong here?
Many thanks,
Manoj.
Yes, expression validation in 10g simply does not work for (i.e., does not recognize) analytic functions.
It can simply be ignored. You should also set Generation mode to "Set Based only". Otherwise the mapping will fail to deploy under certain circumstances (when using non-set-based (PL/SQL) operators after the analytic function). -
I successfully use the following analytical function to sum all net_movement of a position (key for a position: bp_id, prtfl_num, instrmnt_id, cost_prc_crncy) from first occurrence until current row:
SELECT SUM (net_movement) OVER (PARTITION BY bp_id, prtfl_num, instrmnt_id, cost_prc_crncy ORDER BY TRUNC (val_dt) RANGE BETWEEN UNBOUNDED PRECEDING AND 0 FOLLOWING) holding,
What I need is another column summing the net_movement of a position, but only for the current date; all my approaches fail:
- add the date (val_dt) to the 'partition by' clause and therefore sum only values with same position and date
SELECT SUM (net_movement) OVER (PARTITION BY val_dt, bp_id, prtfl_num, instrmnt_id, cost_prc_crncy ORDER BY TRUNC (val_dt) RANGE BETWEEN UNBOUNDED PRECEDING AND 0 FOLLOWING) today_net_movement
- take the holding for the last date and subtract it from the current holding afterwards
SELECT SUM (net_movement) OVER (PARTITION BY bp_id, prtfl_num, instrmnt_id, cost_prc_crncy ORDER BY TRUNC (val_dt) RANGE BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING) last_holding,
- using lag on the analytical function which calculates holding fails too
I also want to avoid creating a table which stores the last holding.
Does anyone see where I made a mistake, or know an alternative way to get this value?
It would help me much!
Thanks in advance!
Thank you,
but I already tried that, and it returns strange values which are definitely not correct.
It is always the same value for each row (if it's not 0), and a very high one (500500 for example), even if the sum of all net_movement for that date is 0 (and the statement for holding returns 0 too).
I also tried with trunc(val_dt, 'DDD'), with the same result (without trunc it is the same issue).
please help if you can, thanks in advance! -
GROUP BY and analytical functions
Hi all,
I need your help with grouping my data.
Below you can see a sample of my data (in my case I have a view where the data is in almost the same format).
with test_data as(
select '01' as code, 'SM' as abbreviation, 1010 as groupnum, 21 as pieces, 4.13 as volume, 3.186 as avgvolume from dual
union
select '01' as code, 'SM' as abbreviation, 2010 as groupnum, 21 as pieces, 0 as volume, 3.186 as avgvolume from dual
union
select '01' as code, 'SM' as abbreviation, 3000 as groupnum, 21 as pieces, 55 as volume, 3.186 as avgvolume from dual
union
select '01' as code, 'SM' as abbreviation, 3010 as groupnum, 21 as pieces, 7.77 as volume, 3.186 as avgvolume from dual
union
select '02' as code, 'SMP' as abbreviation, 1010 as groupnum, 30 as pieces, 2.99 as volume, 0.1 as avgvolume from dual
union
select '03' as code, 'SMC' as abbreviation, 1010 as groupnum, 10 as pieces, 4.59 as volume, 0.459 as avgvolume from dual
union
select '40' as code, 'DB' as abbreviation, 1010 as groupnum, 21 as pieces, 5.28 as volume, 0.251 as avgvolume from dual
)
select
DECODE (GROUPING (code), 1, 'report total:', code) as code,
abbreviation as abbreviation,
groupnum as pricelistgrp,
sum(pieces) as pieces,
sum(volume) as volume,
sum(avgvolume) as avgvolume
--sum(sum(distinct pieces)) over (partition by code,groupnum) as piecessum,
--sum(volume) volume,
--round(sum(volume) / 82,3) as avgvolume
from test_data
group by grouping sets((code,abbreviation,groupnum,pieces,volume,avgvolume),null)
order by 1,3;
The select statement I have written returns the output below:
CODE ABBR GRPOUP PIECES VOLUME AVGVOL
01 SM 1010 21 4.13 3.186
01 SM 2010 21 0 3.186
01 SM 3000 21 55 3.186
01 SM 3010 21 7.77 3.186
02 SMP 1010 30 2.99 0.1
03 SMC 1010 10 4.59 0.459
40 DB 1010 21 5.28 0.251
report total: 145 79.76 13.554
The number of pieces and avg volume are the same for the same code (code 01: pieces = 21, avgvolume = 3.186, etc.)
What I need is to get output like below:
CODE ABBR GRPOUP PIECES VOLUME AVGVOL
01 SM 1010 21 4.13 3.186
01 SM 2010 21 0 3.186
01 SM 3000 21 55 3.186
01 SM 3010 21 7.77 3.186
02 SMP 1010 30 2.99 0.1
03 SMC 1010 10 4.59 0.459
40 DB 1010 21 5.28 0.251
report total: 82 79.76 0.973
Where the total number of pieces is computed as the sum of the distinct piece counts for each code -> *82 = 21 + 30 + 10 + 21*.
Total volume is just sum of volumes in each row -> *79.76 = 4.13+0+55+7.77+2.99+4.59+5.28*.
And Average volume is computed as total volume / total number of pieces -> *0.973 = 79.76 / 82*.
I was trying to use analytical function (sum() over (partition by)) to get desired output, but without good results.
Could anyone help me with this issue?
Thanks in advance!
Regards,
Jiri
Hi, Jiri,
Jiri N. wrote:
Hi all,
I need your help with grouping my data.
Below you can see sample of my data (in my case I have view where data is in almost same format).
I assume the view guarantees that all rows with the same code (or the same code and groupnum) will always have the same pieces and the same avgvolume.
with test_data as( ...
Thanks for posting this; it's very helpful.
What I need is to get output like below:
CODE ABBR GRPOUP PIECES VOLUME AVGVOL
01 SM 1010 21 4.13 3.186
01 SM 2010 21 0 3.186
01 SM 3000 21 55 3.186
01 SM 3010 21 7.77 3.186
02 SMP 1010 30 2.99 0.1
03 SMC 1010 10 4.59 0.459
40 DB 1010 21 5.28 0.251
report total: 82 79.76 0.973
Except for the last row, you're just displaying data straight from the table (or view).
It might be easier to get the results you want using a UNION. One branch of the UNION would get the "report total" row, and the other branch would get all the rest.
>
Where total number of pieces is computed as sum of distinct numbers of pieces for each code -> *82 = 21 + 30 + 10 + 21*.
It's not just distinct numbers. In this example, two different codes have pieces = 21, so the total of distinct pieces would be 61 = 21 + 30 + 10.
>
Total volume is just sum of volumes in each row -> *79.76 = 4.13+0+55+7.77+2.99+4.59+5.28*.
And Average volume is computed as total volume / total number of pieces -> *0.973 = 79.76 / 82*.
I was trying to use analytical function (sum() over (partition by)) to get desired output, but without good results.
I would use nested aggregate functions to do that:
SELECT code
, abbreviation
, groupnum AS pricelistgrp
, pieces
, volume
, avgvolume
FROM test_data
UNION ALL
SELECT 'report total:' AS code
, NULL AS abbreviaion
, NULL AS pricelistgrp
, SUM (MAX (pieces)) AS pieces
, SUM (SUM (volume)) AS volume
, SUM (SUM (volume))
/ SUM (MAX (pieces)) AS avgvolume
FROM test_data
GROUP BY code -- , abbreviation?
ORDER BY code
, pricelistgrp
;
Output:
CODE ABB PRICELISTGRP PIECES VOLUME AVGVOLUME
01 SM 1010 21 4.13 3.186
01 SM 2010 21 0.00 3.186
01 SM 3000 21 55.00 3.186
01 SM 3010 21 7.77 3.186
02 SMP 1010 30 2.99 .100
03 SMC 1010 10 4.59 .459
40 DB 1010 21 5.28 .251
report total: 82 79.76 .973
It's unclear whether you want to GROUP BY just code (as I did above) or by both code and abbreviation.
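As a quick cross-check of the nested-aggregate arithmetic (SUM of MAX(pieces) per code, plain SUM of volume), here is a sketch over the same sample values:

```python
# verify the "report total" row using the nested-aggregate logic
rows = [  # (code, pieces, volume) from the sample data
    ('01', 21, 4.13), ('01', 21, 0), ('01', 21, 55), ('01', 21, 7.77),
    ('02', 30, 2.99), ('03', 10, 4.59), ('40', 21, 5.28),
]
per_code_pieces = {}
total_volume = 0.0
for code, pieces, volume in rows:
    # MAX(pieces) per code (all rows of a code carry the same pieces anyway)
    per_code_pieces[code] = max(per_code_pieces.get(code, 0), pieces)
    total_volume += volume                       # SUM(volume) over all rows
total_pieces = sum(per_code_pieces.values())     # SUM(MAX(pieces))
print(total_pieces, round(total_volume, 2), round(total_volume / total_pieces, 3))
# 82 79.76 0.973
```

This reproduces the 82 / 79.76 / 0.973 total row, confirming the SUM(MAX(...)) approach counts each code's pieces once.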
Given that this data is coming from a view, it might be simpler and/or more efficient to make a separate version of the view, or to replicate most of the view in a query. -
Advantages and disadvantages of Analytical function
Please list the advantages and disadvantages of normal queries versus queries using analytic functions (performance-wise).
I'm not sure what you wish to compare.
Analytic functions give you functionality that cannot otherwise be achieved easily in a lot of cases. They can introduce some performance degradation to a query, but you have to compare on a query-by-query basis to determine whether analytic functions are the best solution for the issue. If it were as simple as saying that analytic functions are always slower than doing without them, then Oracle wouldn't have bothered introducing them into the language. -
Weird problem with LEAD analytic function
I am having a weird problem with the LEAD analytic function. It works fine as a standalone query,
but when used in a subquery it is not behaving as expected (explained below). I am unable to troubleshoot
the issue. Please help me.
Detail explanation and the data follows:
========================= Table & Test Data =======================================
CREATE TABLE "TESTSCHED"
( "SEQ" NUMBER NOT NULL ENABLE,
"EMPL_ID" NUMBER NOT NULL ENABLE,
"SCHED_DT" DATE NOT NULL ENABLE,
"START_DT_TM" DATE NOT NULL ENABLE,
"END_DT_TM" DATE NOT NULL ENABLE,
"REC_STAT" CHAR(1) NOT NULL ENABLE,
CONSTRAINT "TESTSCHED_PK" PRIMARY KEY ("SEQ")
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (1, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (2, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (3, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (4, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:15:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (5, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:15:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (6, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (7, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (8, 39609, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 11:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (9, 118327, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (10, 120033, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('04-03-2008', 'dd-mm-yyyy'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (11, 120033, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('04-03-2008', 'dd-mm-yyyy'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (12, 120033, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (13, 120033, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (14, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 18:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 21:15:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (15, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 12:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 16:45:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (16, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 12:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 21:15:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (17, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 16:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 18:45:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (18, 126690, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 16:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 18:45:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (19, 169241, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 05:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 14:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (20, 169241, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 05:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 14:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (21, 169241, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (22, 169241, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (23, 200716, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (24, 200716, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (25, 200716, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (26, 252836, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (27, 252836, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (28, 252836, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (29, 252836, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 17:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 19:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (30, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 10:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (31, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 10:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 14:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (32, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 05:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (33, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:15:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 10:35:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (34, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:15:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 10:35:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (35, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 05:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 14:00:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (36, 260774, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 07:00:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 10:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (37, 289039, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 16:30:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:15:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (38, 289039, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 13:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 22:15:00', 'dd-mm-yyyy hh24:mi:ss'), 'I');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (39, 289039, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 13:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 16:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'A');
insert into testsched (SEQ, EMPL_ID, SCHED_DT, START_DT_TM, END_DT_TM, REC_STAT)
values (40, 289039, to_date('03-03-2008', 'dd-mm-yyyy'), to_date('03-03-2008 13:45:00', 'dd-mm-yyyy hh24:mi:ss'), to_date('03-03-2008 16:30:00', 'dd-mm-yyyy hh24:mi:ss'), 'U');
========================= Problem Statement =======================================
Problem Statement:
Select all employees whose active schedule (rec_stat = 'A') start date & time falls
between "16:30" and "17:45".
An employee may have multiple schedules on a day. They may be continuous or may have
breaks in between the schedules.
If an employee's schedule is continuous, then the minimum schedule start date & time is
considered the start date & time of the work.
If an employee has breaks in the schedule, then each schedule's start date & time is
considered a start date & time.
e.g:-1
1 39609 2008/03/03 2008/03/03 07:00:00 2008/03/03 11:00:00 A
2 39609 2008/03/03 2008/03/03 11:00:00 2008/03/03 11:30:00 A
In the above case, the employee's schedule is continuous. The start date & time is 2008/03/03 07:00:00.
e.g:-2
19 169241 2008/03/03 2008/03/03 05:30:00 2008/03/03 14:00:00 A
21 169241 2008/03/03 2008/03/03 17:30:00 2008/03/03 22:00:00 A
In the above case, the employee has a break in between. Then both records'
start date & times (i.e. 2008/03/03 05:30:00 and 2008/03/03 17:30:00) are considered
as start date & times.
e.g:-3
30 260774 2008/03/03 2008/03/03 07:00:00 2008/03/03 10:30:00 A
31 260774 2008/03/03 2008/03/03 10:30:00 2008/03/03 14:00:00 A
32 260774 2008/03/03 2008/03/03 05:30:00 2008/03/03 07:00:00 A
In the above case, the employee's schedule is continuous. The start date & time is 2008/03/03 05:30:00.
========================= Query to test Individual Employees =======================================
Query-1: Test query to verify that the CASE and LEAD functions work fine. "CAL = 0" for all
rows means that all schedules are continuous and the employee needs to be ignored.
SELECT fw.*,
CASE WHEN LEAD(FW.END_DT_TM, 1)
OVER(ORDER BY FW.START_DT_TM DESC ) IS NULL AND
fw.start_dt_tm BETWEEN
TO_DATE('03/03/2008 16:30:00','mm/dd/yyyy hh24:mi:ss') AND
TO_DATE('03/03/2008 17:45:00','mm/dd/yyyy hh24:mi:ss')
THEN Fw.seq
WHEN LEAD(FW.END_DT_TM, 1)
OVER(ORDER BY FW.START_DT_TM DESC) != FW.START_DT_TM AND
fw.start_dt_tm BETWEEN
TO_DATE('03/03/2008 16:30:00','mm/dd/yyyy hh24:mi:ss') AND
TO_DATE('03/03/2008 17:45:00','mm/dd/yyyy hh24:mi:ss')
THEN Fw.seq
ELSE 0
END AS "CAL"
FROM TESTSCHED fw
WHERE fw.empl_id = 289039
and fw.rec_stat = 'A'
Query Results
SEQ EMPL_ID SCHED_DT START_DT_TM END_DT_TM REC_STAT CAL
37 289039 2008/03/03 2008/03/03 16:30:00 2008/03/03 22:15:00 A 0
39 289039 2008/03/03 2008/03/03 13:45:00 2008/03/03 16:30:00 A 0
Since the CAL column is zero for all rows, the employee should be ignored.
========================= Query to be used in the application =======================================
Query-2: Actual Query executed in the application which is not working. Query-1 is used as subquery
here.
SELECT
F1.SEQ,
F1.EMPL_ID,
F1.SCHED_DT,
F1.START_DT_TM,
F1.END_DT_TM,
F1.REC_STAT
FROM TESTSCHED F1
WHERE F1.SEQ IN
(SELECT CASE
WHEN LEAD(FW.END_DT_TM, 1)
OVER(ORDER BY FW.START_DT_TM DESC) IS NULL AND
FW.START_DT_TM BETWEEN
TO_DATE('03/03/2008 16:30:00', 'mm/dd/yyyy hh24:mi:ss') AND
TO_DATE('03/03/2008 17:45:00', 'mm/dd/yyyy hh24:mi:ss') THEN FW.SEQ
WHEN LEAD(FW.END_DT_TM, 1)
OVER(ORDER BY FW.START_DT_TM DESC) != FW.START_DT_TM AND
FW.START_DT_TM BETWEEN
TO_DATE('03/03/2008 16:30:00', 'mm/dd/yyyy hh24:mi:ss') AND
TO_DATE('03/03/2008 17:45:00', 'mm/dd/yyyy hh24:mi:ss') THEN FW.SEQ
ELSE 0
END
FROM TESTSCHED FW
WHERE FW.EMPL_ID = F1.EMPL_ID
AND FW.REC_STAT = 'A')
Query-2 Results
SEQ EMPL_ID SCHED_DT START_DT_TM END_DT_TM REC_STAT
9 118327 2008/03/03 2008/03/03 17:30:00 2008/03/03 22:00:00 A
12 120033 2008/03/03 2008/03/03 17:30:00 2008/03/03 19:30:00 A
17 126690 2008/03/03 2008/03/03 16:45:00 2008/03/03 18:45:00 A
21 169241 2008/03/03 2008/03/03 17:30:00 2008/03/03 22:00:00 A
24 200716 2008/03/03 2008/03/03 17:30:00 2008/03/03 22:00:00 A
28 252836 2008/03/03 2008/03/03 17:30:00 2008/03/03 19:30:00 A
37 289039 2008/03/03 2008/03/03 16:30:00 2008/03/03 22:15:00 A
Problem with Query-2 Results:
Employee IDs 126690 & 289039 should not appear in the List.
Reason: Employee 126690: schedule start date & time is 2008/03/03 12:45:00 and falls outside
"16:30" and "17:45"
14 126690 2008/03/03 2008/03/03 18:45:00 2008/03/03 21:15:00 A
17 126690 2008/03/03 2008/03/03 16:45:00 2008/03/03 18:45:00 A
15 126690 2008/03/03 2008/03/03 12:45:00 2008/03/03 16:45:00 A
Reason: Employee 289039: schedule start date & time is 2008/03/03 13:45:00 and falls outside
37 289039 2008/03/03 2008/03/03 16:30:00 2008/03/03 22:15:00 A
39 289039 2008/03/03 2008/03/03 13:45:00 2008/03/03 16:30:00 A
Your LEAD function is ordered by fw.start_dt_tm. In your first SQL, the WHERE clause limited the records returned to REC_STAT = 'A' and EMPL_ID = 289039.
In your 2nd query, there is no such where clause, so to the LEAD function the next record after seq 37 is seq 38, which has an end date of 2008-03-03 10:15 pm, not equal to the start time of record seq 37.
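This explanation can be reproduced with a tiny simulation: LEAD only sees the rows the query actually feeds it, so the "next row" for seq 37 differs depending on whether the inactive rows were filtered out first. (Hypothetical in-memory rows matching employee 289039's data.)

```python
# LEAD sees whichever rows the query feeds it, so filtering changes its answer.
# rows: (seq, start, end, rec_stat), already ordered by start DESC as in the query
rows = [
    (37, '16:30', '22:15', 'A'),
    (38, '13:45', '22:15', 'I'),
    (39, '13:45', '16:30', 'A'),
]

def lead_end(rows):
    # pair each row with the NEXT row's end time (None for the last row),
    # i.e. LEAD(end_dt_tm, 1) over the given ordering
    return [(r[0], rows[i + 1][2] if i + 1 < len(rows) else None)
            for i, r in enumerate(rows)]

print(lead_end(rows))                              # unfiltered: seq 37 pairs with seq 38
print(lead_end([r for r in rows if r[3] == 'A']))  # filtered: seq 37 pairs with seq 39
```

The fix implied above is to apply the REC_STAT filter before the analytic function is evaluated (e.g. in an inline view), so LEAD only ever sees the active rows.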
Message was edited by:
user477455 -
Urgent: Regarding Oracle 10g analytic functions
Hi,
EMP TABLE:
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
==================================================================
7369 SMITH CLERK 7902 12/17/1980 800 20
7499 ALLEN SALESMAN 7698 2/20/1981 1600 300 30
7521 WARD SALESMAN 7698 2/22/1981 1250 500 30
7566 JONES MANAGER 7839 4/2/1981 2975 20
7654 MARTIN SALESMAN 7698 9/28/1981 1250 1400 30
7698 BLAKE MANAGER 7839 5/1/1981 2850 30
7782 CLARK MANAGER 7839 6/9/1981 2450
7788 SCOTT ANALYST 7566 12/9/1982 3000 20
7839 KING PRESIDENT 11/17/1981 5000
7844 TURNER SALESMAN 7698 9/8/1981 1500 0 30
7876 ADAMS CLERK 7788 1/12/1983 1100 20
7900 JAMES CLERK 7698 12/3/1981 950 30
7902 FORD ANALYST 7566 12/3/1981 3000 20
7934 MILLER CLERK 7782 1/23/1982 1300
================================================================
I would like the output group by manger and the employees under that manager using analytic functions.
Output should look like:
ManagerName EMPNAME
==========================================================
KING JONES,BLAKE,CLARK
JONES SCOTT,FORD
BLAKE ALLEN,WARD,MARTIN,TURNER,JAMES
CLARK MILLER
FORD SMITH
SCOTT ADAMS
Also I would like to run this query in unix shell script in order to create a folder structure like this:
Root Folder: King -> Jones -> SCOTT -> ADAMS
-> FORD -> SMITH
-> BLAKE -> ALLEN
-> WARD
-> MARTIN
-> TURNER
-> JAMES
-> CLARK -> MILLER
On a total 14 folders should be created.
Thanks in Advance
G.Vamsi Krishna
Edited by: user10733211 on Apr 20, 2009 11:30 PM
user10633982 wrote:
hey guys can you please give your personal opinions and remarks out of this thread.
this thread is supposed to be solving the query and not for chit chatting
Not chit chatting and not just personal opinions, just letting you know how it works around here...
Urgent is it?
Why? Have you forgotten to do your coursework and you'll get thrown off your course if you don't hand it in today?
What makes you believe that your request for help is more important than someone else who has requested help? It's very rude to assume you are more important than somebody else, and I'm sure they would like an answer to their issue as soon as they can get one too, but they've generally been polite and not demanded that it is urgent.
Also, you assume that people giving answers are all sitting here just waiting to answer your question for you. That's not so. We're all volunteers with our own jobs to do. How dare you presume to demand our attention with urgency.
If you want help and you want it answering quickly you simply just put your issue forward and provide as much valuable information as possible.
You will find if you post on here demanding your post is urgent then most people will just ignore it, some will tell you to get lost, and some will explain to you why you shouldn't post "urgent" requests. Occasionally you may find somebody who's got nothing better to do who will actually provide you with an answer, but you really are limiting your options by not asking properly.
How can something being run against the SCOTT schema be something that is "urgent"?
For the first part of your enquiry:
SQL> ed
Wrote file afiedt.buf
1 with emps as (select ename, mgr, row_number() over (partition by mgr order by ename) as rn from emp)
2 select ename as managername
3 ,(select ltrim(sys_connect_by_path(emps.ename,','),',')
4 from emps
5 where emps.mgr = emp.empno
6 and connect_by_isleaf = 1
7 connect by rn = prior rn + 1 and mgr = prior mgr
8 start with rn = 1
9 ) as empname
10 from emp
11* where empno in (select mgr from emp)
SQL> /
MANAGERNAM EMPNAME
JONES FORD,SCOTT
BLAKE ALLEN,JAMES,MARTIN,TURNER,WARD
CLARK MILLER
SCOTT ADAMS
KING BLAKE,CLARK,JONES
FORD SMITH
6 rows selected.
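The manager-to-employee-list rollup itself (ignoring the folder-creation part) boils down to a group-and-join; a plain Python sketch over a subset of the EMP rows:

```python
# the managername -> comma-joined employee list, in plain Python
emp = [  # (ename, manager_name) -- a subset of the EMP data above
    ('JONES', 'KING'), ('BLAKE', 'KING'), ('CLARK', 'KING'),
    ('SCOTT', 'JONES'), ('FORD', 'JONES'), ('MILLER', 'CLARK'),
    ('ADAMS', 'SCOTT'), ('SMITH', 'FORD'),
]
by_mgr = {}
for ename, mgr in emp:
    by_mgr.setdefault(mgr, []).append(ename)  # group subordinates per manager
for mgr, enames in sorted(by_mgr.items()):
    print(mgr, ','.join(sorted(enames)))      # the SYS_CONNECT_BY_PATH string
```

On 11gR2 the string aggregation could also be done with LISTAGG instead of the SYS_CONNECT_BY_PATH trick; the grouping logic is the same either way.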
SQL>
As for using that output in a unix shell script to create directory structures, you should consider asking in a unix forum. -
Analytical functions approach for this scenario?
Here is my data:
SQL*Plus: Release 11.2.0.2.0 Production on Tue Feb 26 17:03:17 2013
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select * from batch_parameters;
LOW HI MIN_ORDERS MAX_ORDERS
51 100 6 8
121 200 1 5
201 1000 1 1
SQL> select * from orders;
ORDER_NUMBER LINE_COUNT BATCH_NO
4905 154
4899 143
4925 123
4900 110
4936 106
4901 103
4911 101
4902 91
4903 91
4887 90
4904 85
4926 81
4930 75
4934 73
4935 71
4906 68
4907 66
4896 57
4909 57
4908 56
4894 55
4910 51
4912 49
4914 49
4915 48
4893 48
4916 48
4913 48
2894 47
4917 47
4920 46
4918 46
4919 46
4886 45
2882 45
2876 44
2901 44
4921 44
4891 43
4922 43
4923 42
4884 41
4924 40
4927 39
4895 38
2853 38
4890 37
2852 37
4929 37
2885 37
4931 37
4928 37
2850 36
4932 36
4897 36
2905 36
4933 36
2843 36
2833 35
4937 35
2880 34
4938 34
2836 34
2872 34
2841 33
4889 33
2865 31
2889 30
2813 29
2902 28
2818 28
2820 27
2839 27
2884 27
4892 27
2827 26
2837 22
2883 20
2866 18
2849 17
2857 17
2871 17
4898 16
2840 15
4874 13
2856 8
2846 7
2847 7
2870 7
4885 6
1938 6
2893 6
4888 2
4880 1
4875 1
4881 1
4883 1
ORDER_NUMBER LINE_COUNT BATCH_NO
4879 1
2899 1
2898 1
4882 1
4877 1
4876 1
2891 1
2890 1
2892 1
4878 1
107 rows selected.
SQL>
The batch_parameters columns:
hi - high count of lines in the batch.
low - low count of lines in the batch.
min_orders - number of minimum orders in the batch
max_orders - number of maximum orders in the batch.
The issue is to create optimally sized batches for us to pick the orders. Usually, you have to stick within a given low - hi count, but there is a leeway of around, let's say, 5 percent on the batch size (for the number of lines in the batch).
But, for the number of orders in a batch, the leeway is zero.
So, I have to assign these 'orders' into the optimal mix of batches. Now, for every run, if I don't find the mix I am looking for, then, the last batch could be as small as a one line one order too. But, every Order HAS to be batched in that run. No exceptions.
I have a procedure that does 'sort of' this, but it leaves non-optimal orders alone. There is a potential of orders not getting batched because they didn't fall in the optimal mix, potentially missing our required dates. (I can write another procedure that can clean up after.)
I was thinking (maybe just a general direction would be enough), with what analytical functions can do these days, if somebody can come up with the 'sql' that gets us the batch number (think of it as a sequence starting at 1).
Also, the batch_parameters limits are not hard and fast. Those numbers can change but, give you a general idea.
Any ideas?
Ok, sorry about that. Those were just guesstimates. I ran the program and here are the results.
SQL> SELECT SUM(line_count) no_of_lines_in_batch,
2 COUNT(*) no_of_orders_in_batch,
3 batch_no
4 FROM orders o
5 GROUP BY o.batch_no;
NO_OF_LINES_IN_BATCH NO_OF_ORDERS_IN_BATCH BATCH_NO
199 4 241140
99 6 241143
199 5 241150
197 6 241156
196 5 241148
199 6 241152
164 6 241160
216 2 241128
194 6 241159
297 2 241123
199 3 241124
192 2 241132
199 6 241136
199 5 241142
94 7 241161
199 6 241129
154 2 241135
193 6 241154
199 5 241133
199 4 241138
199 6 241146
191 6 241158
22 rows selected.
SQL> select * from orders;
ORDER_NUMBER LINE_COUNT BATCH_NO
4905 154 241123
4899 143 241123
4925 123 241124
4900 110 241128
4936 106 241128
4901 103 241129
4911 101 241132
4903 91 241132
4902 91 241129
4887 90 241133
4904 85 241133
4926 81 241135
4930 75 241124
4934 73 241135
4935 71 241136
4906 68 241136
4907 66 241138
4896 57 241136
4909 57 241138
4908 56 241138
4894 55 241140
4910 51 241140
4914 49 241142
4912 49 241140
4915 48 241142
4916 48 241142
4913 48 241142
4893 48 241143
2894 47 241143
4917 47 241146
4919 46 241146
4918 46 241146
4920 46 241146
2882 45 241148
4886 45 241148
2901 44 241148
2876 44 241148
4921 44 241140
4891 43 241150
4922 43 241150
4923 42 241150
4884 41 241150
4924 40 241152
4927 39 241152
2853 38 241152
4895 38 241152
4931 37 241154
2885 37 241152
4929 37 241154
4890 37 241154
4928 37 241154
2852 37 241154
2843 36 241156
2850 36 241156
4932 36 241156
4897 36 241156
4933 36 241158
2905 36 241156
2833 35 241158
4937 35 241158
4938 34 241158
2880 34 241159
2872 34 241159
2836 34 241158
2841 33 241159
4889 33 241159
2865 31 241159
2889 30 241150
2813 29 241159
2902 28 241160
2818 28 241160
4892 27 241160
2884 27 241160
2820 27 241160
2839 27 241160
2827 26 241161
2837 22 241133
2883 20 241138
2866 18 241148
2849 17 241161
2871 17 241156
2857 17 241158
4898 16 241161
2840 15 241161
4874 13 241146
2856 8 241154
2847 7 241161
2846 7 241161
2870 7 241152
2893 6 241142
1938 6 241161
4888 2 241129
2890 1 241133
2899 1 241136
4877 1 241143
4875 1 241143
2892 1 241136
ORDER_NUMBER LINE_COUNT BATCH_NO
4878 1 241146
4876 1 241136
2891 1 241133
4880 1 241129
4883 1 241143
4879 1 241143
2898 1 241129
4882 1 241129
4881 1 241124
106 rows selected.
As you can see, my code is a little buggy in that it may not have followed the strict batch_parameters. But this is acceptable, being in the general area. -
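The batching behaviour shown above can be sketched as a simple greedy pass: walk the orders from the largest line count down and close a batch whenever adding the next order would exceed the line ceiling or the max-orders cap. This is an assumption about the intended algorithm, not the poster's actual procedure, and the limits below are illustrative, not the batch_parameters values:

```python
HI_LINES = 200      # assumed upper bound on lines per batch
MAX_ORDERS = 6      # assumed upper bound on orders per batch

def assign_batches(orders, hi_lines=HI_LINES, max_orders=MAX_ORDERS):
    """orders: list of (order_number, line_count) tuples.
    Returns {order_number: batch_no}, batch numbers starting at 1.
    Every order is batched, so the last batch may be undersized."""
    batches = {}
    batch_no, lines, count = 1, 0, 0
    for order_number, line_count in sorted(orders, key=lambda o: -o[1]):
        # Close the current batch if this order would breach either cap.
        if count and (lines + line_count > hi_lines or count + 1 > max_orders):
            batch_no, lines, count = batch_no + 1, 0, 0
        batches[order_number] = batch_no
        lines += line_count
        count += 1
    return batches

demo = [(4905, 154), (4899, 143), (4925, 123), (4912, 49), (4888, 2)]
print(assign_batches(demo))
```

A pure-SQL version is harder because each batch boundary depends on where the previous boundary fell; that sequential dependency is why a running SUM() OVER alone cannot express it, and MODEL clauses or recursive subquery factoring are the usual analytic-SQL escape hatches.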
Yet another question on analytical functions
Hi,
A "simple" issue with date overlapping: I need to display a "y" or "n" where a start time/end time overlaps with another row for the same employee.
For this, I used analytical functions to find out which rows were overlapping. This is what I did:
create table test (id number, emp_id number, date_worked date, start_time date, end_time date);
insert into test values (1, 333, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 08:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS'));
insert into test values (2, 333, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 12:00:00', 'YYYY-MM-DD HH24:MI:SS'));
insert into test values (3, 444, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 08:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 09:00:00', 'YYYY-MM-DD HH24:MI:SS'));
insert into test values (4, 333, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 11:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 13:00:00', 'YYYY-MM-DD HH24:MI:SS'));
SQL> select * from test;
ID EMP_ID DATE_WORKED START_TIME END_TIME
1 333 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00
2 333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 --> conflict
3 444 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 09:00:00
4 333 01-01-2008 00:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00 --> conflict
Here I see that Employee 333 is scheduled from 10 to 12, but it's also scheduled from 11 to 13. This is a conflict.
To find this conflict, I did this (please correct me if this is incorrect):
SQL> select id, emp_id, date_worked, start_time, end_time, next_date
2 from (
3 select lead( start_time, 1 ) over ( partition by emp_id order by start_time ) next_date,
4 start_time, end_time, emp_id, date_worked, id
5 from test
6 )
7 where next_date < end_time;
ID EMP_ID DATE_WORKED START_TIME END_TIME NEXT_DATE
2 333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 01-01-2008 11:00:00
So far, so good. Where I'm stuck is, I need to display the conflicts in this way:
ID EMP_ID DATE_WORKED START_TIME END_TIME OVERLAPPED
1 333 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00
2 333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 yes
3 444 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 09:00:00
4 333 01-01-2008 00:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00 yes
Is there a way I can achieve this?
I tried doing a nested query with a count(*) but no success...
I found an issue. Say we insert this row.
insert into test values (5, 333, to_date('2008-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 07:00:00', 'YYYY-MM-DD HH24:MI:SS'),to_date('2008-01-01 08:00:00', 'YYYY-MM-DD HH24:MI:SS'));
SQL> select * from test order by start_time;
ID EMP_ID DATE_WORKED START_TIME END_TIME
5 333 01-01-2008 00:00:00 01-01-2008 07:00:00 01-01-2008 08:00:00
1 333 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00
3 444 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 09:00:00
2 333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 --> conflict
4 333 01-01-2008 00:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00 --> conflict
If I run the SQL suggested by Nicloei, I get:
SQL> select id,
2 emp_id,
3 date_worked,
4 start_time,
5 end_time,
6 Case When end_time > prev_date Or next_date > End_time Then 'Yes' Else 'No' End over_lap
7 from (
8 select lead( start_time, 1 ) over ( partition by emp_id order by start_time ) next_date,
9 lag( start_time, 1 ) over ( partition by emp_id order by start_time ) prev_date,
10 start_time, end_time, emp_id, date_worked, id
11 from test
12 );
ID EMP_ID DATE_WORKED START_TIME END_TIME OVER_LAP
5 333 01-01-2008 00:00:00 01-01-2008 07:00:00 01-01-2008 08:00:00 No
1 333 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00 Yes
2 333 01-01-2008 00:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 Yes
4 333 01-01-2008 00:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00 Yes
3 444 01-01-2008 00:00:00 01-01-2008 08:00:00 01-01-2008 09:00:00 No
It's basically saying that there's an overlap between id 1 and id 2 (8-10 and 10-12), which is incorrect. I ran the inner query by itself:
SQL> select lead( start_time, 1 ) over ( partition by emp_id order by start_time ) next_date,
2 lag( start_time, 1 ) over ( partition by emp_id order by start_time ) prev_date,
3 start_time, end_time, emp_id, date_worked, id
4 from test;
NEXT_DATE PREV_DATE START_TIME END_TIME EMP_ID DATE_WORKED ID
01-01-2008 08:00:00 01-01-2008 07:00:00 01-01-2008 08:00:00 333 01-01-2008 00:00:00 5
01-01-2008 10:00:00 01-01-2008 07:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00 333 01-01-2008 00:00:00 1
01-01-2008 11:00:00 01-01-2008 08:00:00 01-01-2008 10:00:00 01-01-2008 12:00:00 333 01-01-2008 00:00:00 2
01-01-2008 10:00:00 01-01-2008 11:00:00 01-01-2008 13:00:00 333 01-01-2008 00:00:00 4
01-01-2008 08:00:00 01-01-2008 09:00:00 444 01-01-2008 00:00:00 3
With this data, I can't seem to find a way to write a conditional test that prints my overlap column with "yes" or "no" accordingly.
Am I missing something?
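The LEAD/LAG approach above fails because adjacency in the sort order is not the same as overlap. A sketch of one condition that does flag every row overlapping any other row for the same employee is the half-open interval test (a.start < b.end AND b.start < a.end) via a self-join. SQLite stands in for Oracle here; the data mirrors the test table from the thread:

```python
import sqlite3

# Flag a row 'yes' when some OTHER row for the same employee overlaps it.
# Timestamps are stored as ISO-8601 text, so string comparison orders them
# correctly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INT, emp_id INT, start_time TEXT, end_time TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?, ?)", [
    (1, 333, "2008-01-01 08:00:00", "2008-01-01 10:00:00"),
    (2, 333, "2008-01-01 10:00:00", "2008-01-01 12:00:00"),
    (3, 444, "2008-01-01 08:00:00", "2008-01-01 09:00:00"),
    (4, 333, "2008-01-01 11:00:00", "2008-01-01 13:00:00"),
    (5, 333, "2008-01-01 07:00:00", "2008-01-01 08:00:00"),
])
rows = conn.execute("""
    SELECT t.id,
           CASE WHEN EXISTS (
               SELECT 1 FROM test o
               WHERE o.emp_id = t.emp_id
                 AND o.id <> t.id
                 AND o.start_time < t.end_time   -- other starts before I end
                 AND t.start_time < o.end_time   -- I start before other ends
           ) THEN 'yes' ELSE '' END AS overlapped
    FROM test t ORDER BY t.id
""").fetchall()
print(rows)
```

Because the comparison is strict, back-to-back intervals (id 5 ending at 08:00 and id 1 starting at 08:00) are correctly not flagged, which is exactly where the LEAD-only version went wrong.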
Need some kind of Analytical Function
Hi Oracle experts
I need a little help from you experts. I have a PARTY table as listed below
The existing data
Party key ID_INTERNAL EID BID
1 11111 123
1 11111 321
1 22222 321 899
1 66666 ------ 888
New records comes
I have to assign a party key to each record based on which attribute is matching
Now the situation is as new records comes.
New records comes
ID_INTERNAL EID BID
22222 555
44444 555
89898 ------ 888
If I match on ID_INTERNAL, I may not be able to match ID_INTERNAL 44444 and 89898, and if I match on EID or BID, it's the same situation.
Is there any analytic function which helps me assign a party key to all the records? All the above records should be assigned PARTY KEY 1 only.
Please help
Thanks
Rajesh
Justin,
My main goal is to assign a party key from the existing set of records to the new records being selected/inserted. I have to write my algorithm in such a way that the new values match their values in the existing records.
Example
My first new record has a value of 11111 under ID_INTERNAL, and in the same record it has a value of 555 under the EID attribute, so based on the matching algorithm for ID_INTERNAL it will be assigned the existing party key 1.
Similarly, the second new record has a value of 87777 under ID_INTERNAL and a value of 555 under EID. This ID_INTERNAL does not exist in the target table, but the value of 555 is available under the EID attribute, so I have to write the algorithm based on EID.
Now the dilemma is that my target table is as follows
Party key PARTYID PARTYNAME
1 11111 ITSID
1 123 EID
1 321 EID
Now when new records come I have to write a match algorithm for ID_INTERNAL to PARTYID for Partyname='ITSID'.
Once matched, this record (ID_INTERNAL=11111 and EID=555) is assigned party key=1. So after the first record the output table looks like
Party key PARTYID PARTYNAME
1 11111 ITSID
1 123 EID
1 321 EID
1 555 EID
Same way for the second new record, where the values are ID_INTERNAL=87777 and EID=555: I have to write the match algorithm based on EID because the EID value of 555 already exists in the target table with a party key.
So after the second record the target table will look like
Party key PARTYID PARTYNAME
1 11111 ITSID
1 123 EID
1 321 EID
1 555 EID
1 87777 ITSID
So this is how I have to solve this matching algorithm.
Please help me; if you need any information, I will be glad to provide it all.
Thanks
Regards
Rajesh -
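The matching described above is really a sequential lookup, not an analytic-function problem: each incoming record takes the party key of the first attribute value already seen, otherwise a new key is issued, and all of its values are then indexed so later records can match on any of them. A sketch of that idea (an illustration only; attribute names and the helper are hypothetical, not the poster's code):

```python
def assign_party_keys(existing, new_records):
    """existing: list of (party_key, value) pairs already in the target table.
    new_records: list of dicts of attribute values for incoming records.
    Returns a list of (record, party_key) in arrival order."""
    index = {value: key for key, value in existing}
    next_key = max((k for k, _ in existing), default=0) + 1
    out = []
    for rec in new_records:
        # First attribute value already known wins (ID_INTERNAL, then EID, ...).
        key = next((index[v] for v in rec.values() if v in index), None)
        if key is None:
            key, next_key = next_key, next_key + 1
        for v in rec.values():
            index[v] = key          # later records can match on any attribute
        out.append((rec, key))
    return out

existing = [(1, "11111"), (1, "123"), (1, "321")]
new = [{"id_internal": "11111", "eid": "555"},
       {"id_internal": "87777", "eid": "555"}]
print(assign_party_keys(existing, new))
```

Walking the poster's example: the first record matches on ID_INTERNAL 11111 and gets key 1; 555 is then indexed, so the second record (whose 87777 is unknown) matches on EID 555 and also gets key 1. In the database this corresponds to a row-by-row pass, not a single analytic SQL statement, because each match can depend on keys assigned earlier in the same run.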
Reports 6i and analytical function
hi
I have this query which wrks fine in TOAD
SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
rvt.transaction_date srv_date, inv.segment1 item_no,
rvt.item_desc item_description, hrov.NAME,
( SUBSTR (v.standard_industry_class, 1, 1)
|| '-'
|| po_headers.segment1
|| '-'
|| TO_CHAR (po_headers.creation_date, 'RRRR')
) po_no,
po_headers.creation_date_disp po_date,
( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty
)aMOUNT ,
----Analytic function used here
SUM( ( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,
(SELECT SUM (mot.on_hand)
FROM mtl_onhand_total_mwb_v mot
WHERE inv.inventory_item_id = mot.inventory_item_id
-- AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
AND loc.inventory_location_id = mot.locator_id
AND loc.organization_id = mot.organization_id
AND rvt.locator_id = mot.locator_id) onhand
FROM rcv_vrc_txs_v rvt,
mtl_system_items_b inv,
mtl_item_locations loc,
hr_organization_units_v hrov,
po_headers_v po_headers,
ap_vendors_v v,
po_lines_v po_lines
WHERE inv.inventory_item_id(+) = rvt.item_id
AND po_headers.vendor_id = v.vendor_id
AND rvt.po_line_id = po_lines.po_line_id
AND rvt.po_header_id = po_lines.po_header_id
AND rvt.po_header_id = po_headers.po_header_id
AND rvt.supplier_id = v.vendor_id
AND inv.organization_id = hrov.organization_id
AND rvt.transaction_type = 'DELIVER'
AND rvt.inspection_status_code <> 'REJECTED'
AND rvt.organization_id = inv.organization_id(+)
AND to_char(to_date(rvt.transaction_date, 'DD/MM/YYYY'), 'DD-MON-YYYY') BETWEEN (:p_from_date)
AND NVL (:p_to_date,
:p_from_date)
AND rvt.locator_id = loc.physical_location_id(+)
AND transaction_id NOT IN (
SELECT parent_transaction_id
FROM rcv_vrc_txs_v rvtd
WHERE rvt.item_id = rvtd.item_id
AND rvtd.transaction_type IN
('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
GROUP BY rvt.receipt_num , rvt.supplier ,
rvt.transaction_date , inv.segment1 ,
rvt.item_desc , hrov.NAME,v.standard_industry_clasS,po_headers.segment1,po_headers.creation_datE,
po_headers.creation_date_disp,inv.inventory_item_iD,loc.inventory_location_id,loc.organization_id,
rvt.locator_iD,rvt.currency_conversion_rate,po_lines.unit_price, rvt.transact_qty
but it gives blank page in reports 6i
Could it be that Reports 6i does not support analytic functions? Kindly guide me to another alternative.
thanking in advance
Edited by: makdutakdu on Mar 25, 2012 2:22 PM
hi
will the view be like
create view S_Amount as SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
rvt.transaction_date srv_date, inv.segment1 item_no,
rvt.item_desc item_description, hrov.NAME,
( SUBSTR (v.standard_industry_class, 1, 1)
|| '-'
|| po_headers.segment1
|| '-'
|| TO_CHAR (po_headers.creation_date, 'RRRR')
) po_no,
po_headers.creation_date_disp po_date,
( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty
)aMOUNT ,
----Analytic function used here
SUM( ( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,
(SELECT SUM (mot.on_hand)
FROM mtl_onhand_total_mwb_v mot
WHERE inv.inventory_item_id = mot.inventory_item_id
-- AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
AND loc.inventory_location_id = mot.locator_id
AND loc.organization_id = mot.organization_id
AND rvt.locator_id = mot.locator_id) onhand
FROM rcv_vrc_txs_v rvt,
mtl_system_items_b inv,
mtl_item_locations loc,
hr_organization_units_v hrov,
po_headers_v po_headers,
ap_vendors_v v,
po_lines_v po_lines
WHERE inv.inventory_item_id(+) = rvt.item_id
AND po_headers.vendor_id = v.vendor_id
AND rvt.po_line_id = po_lines.po_line_id
AND rvt.po_header_id = po_lines.po_header_id
AND rvt.po_header_id = po_headers.po_header_id
AND rvt.supplier_id = v.vendor_id
AND inv.organization_id = hrov.organization_id
AND rvt.transaction_type = 'DELIVER'
AND rvt.inspection_status_code <> 'REJECTED'
AND rvt.organization_id = inv.organization_id(+)
AND rvt.locator_id = loc.physical_location_id(+)
AND transaction_id NOT IN (
SELECT parent_transaction_id
FROM rcv_vrc_txs_v rvtd
WHERE rvt.item_id = rvtd.item_id
AND rvtd.transaction_type IN
('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
GROUP BY rvt.receipt_num , rvt.supplier ,
rvt.transaction_date , inv.segment1 ,
rvt.item_desc , hrov.NAME,v.standard_industry_clasS,po_headers.segment1,po_headers.creation_datE,
po_headers.creation_date_disp,inv.inventory_item_iD,loc.inventory_location_id,loc.organization_id,
rvt.locator_iD,rvt.currency_conversion_rate,po_lines.unit_price, rvt.transact_qty
Is this correct? I mean, I have not included the bind parameters in the view. Moreover, should this view be joined on all the columns in the FROM clause of the original query?
kindly guide
thanking in advance
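The workaround discussed in this thread, computing the analytic function inside a view so the reporting tool only sees plain columns, can be sketched like this (SQLite standing in for Oracle, with a toy table; the view name mirrors the poster's S_Amount):

```python
import sqlite3

# The view computes SUM(...) OVER (PARTITION BY ...); the "report" then
# issues a plain SELECT with no analytic syntax for the old tool to choke on.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE receipts (org TEXT, amount REAL)")
conn.executemany("INSERT INTO receipts VALUES (?, ?)",
                 [("HQ", 100.0), ("HQ", 250.0), ("Plant", 40.0)])
conn.execute("""
    CREATE VIEW s_amount AS
    SELECT org, amount,
           SUM(amount) OVER (PARTITION BY org) AS sum_amount
    FROM receipts
""")
rows = conn.execute(
    "SELECT org, amount, sum_amount FROM s_amount ORDER BY org, amount"
).fetchall()
print(rows)
```

Bind parameters such as :p_from_date cannot live inside a view definition, so the report would apply its date filter in a WHERE clause against the view; note that filtering after the view changes which rows feed the partition totals, which may or may not be what is wanted.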