Need SQL using analytical functions
I need to formulate this SQL. Please help -
The EMP table structure is:
ENAME
DEPTNO
SAL
Here:
CUMDEPTTOT – Running cumulative Total by Department
SALBYDEPT – Total salary by Department
CUMTOT – Running cumulative total
DEPTNO ENAME SAL CUMDEPTTOT SALBYDEPT CUMTOT
10 MILLER 1300 1300 8750 1300
10 CLARK 2450 3750 8750 3750
10 KING 5000 8750 8750 8750
Thanks
ravikumar.sv wrote:
This has been asked today....
I think you both are friends....
Re: Please provide SQL
Follow up in that thread... As I suspected, it's a homework question. *sigh*. Let's hope their tutor checks the internet, finds them asking the question, and fails them.
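For the record, the three columns the OP asks for are plain running and partitioned sums, one SUM() OVER each. Here is a minimal runnable sketch against an in-memory SQLite database (window functions need SQLite 3.25+, which ships with recent Python builds); in Oracle the identical SELECT works against EMP. Ordering within a department by SAL is an assumption read off the sample output.

```python
# CUMDEPTTOT / SALBYDEPT / CUMTOT via three SUM() OVER windows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, deptno INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("MILLER", 10, 1300), ("CLARK", 10, 2450), ("KING", 10, 5000)])

rows = conn.execute("""
    SELECT deptno, ename, sal,
           SUM(sal) OVER (PARTITION BY deptno ORDER BY sal
                          ROWS UNBOUNDED PRECEDING)      AS cumdepttot,
           SUM(sal) OVER (PARTITION BY deptno)           AS salbydept,
           SUM(sal) OVER (ORDER BY deptno, sal
                          ROWS UNBOUNDED PRECEDING)      AS cumtot
    FROM emp
    ORDER BY deptno, sal
""").fetchall()
print(rows)
```

The printed rows reproduce the DEPTNO 10 sample table above.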
Similar Messages
-
Hi all,
I need help creating a SQL query to extract the data described below:
I have one table example test containing data like below:
ID Desc Status
1 T1 DEACTIVE
2 T2 ACTIVE
3 T3 SUCCESS
4 T4 DEACTIVE
What I want to do is select all rows with ACTIVE status from this table; but if there is no ACTIVE status, the query should give me the last row with DEACTIVE status.
Can I do this in one query, for example by using an analytic function? If yes, can you help me with that query?
regards,
Raluce
Hi, Raluce,
Here's one way to do that:
WITH got_r_num AS
(
SELECT deptno, ename, job, hiredate
, ROW_NUMBER () OVER ( PARTITION BY deptno
ORDER BY job
, hiredate DESC
) AS r_num
FROM scott.emp
WHERE job IN ('ANALYST', 'CLERK')
)
SELECT deptno, ename, job, hiredate
FROM got_r_num
WHERE job = 'ANALYST'
OR r_num = 1
ORDER BY deptno
Since I don't have a sample version of your table, I used scott.emp to illustrate.
Output:
DEPTNO ENAME JOB HIREDATE
10 MILLER CLERK 23-JAN-82
20 SCOTT ANALYST 19-APR-87
20 FORD ANALYST 03-DEC-81
30 JAMES CLERK 03-DEC-81
This query finds all ANALYSTs in each department, regardless of how many there are. (Deptno 20 happens to have 2 ANALYSTs.) If there is no ANALYST in a department, then the most recently hired CLERK is included. (Deptnos 10 and 30 don't have any ANALYSTs.)
This "partitions", or sub-divides, the table into separate units, one for each department. In the problem you posted, it looks like you want to operate on the entire table, without sub-dividing it in any way. To do that, just omit the PARTITION BY clause in the analytic ROW_NUMBER function, like this:
WITH got_r_num AS
(
SELECT deptno, ename, job, hiredate
, ROW_NUMBER () OVER ( -- PARTITION BY deptno
ORDER BY job
, hiredate DESC
) AS r_num
FROM scott.emp
WHERE job IN ('ANALYST', 'CLERK')
)
SELECT deptno, ename, job, hiredate
FROM got_r_num
WHERE job = 'ANALYST'
OR r_num = 1
ORDER BY deptno -
Is it possible using Analytical functions?
Hi,
I have the following data
Column1 Column2
2005 500
2006 500
2007 500
2008 500
Now, if I have some variable value, say 800, then the output records should be
Column1 Column2
2008 500
2007 300
2006 0
2005 0
i.e. the Column2 values (ordered by Column1 descending) are split to accommodate the variable passed.
Right now, it's being done in PL/SQL. Is it possible to do it in SQL using Analytical function?
Thanks,
Sundar
P.S.: It doesn't have to use analytic functions; if it can be achieved in SQL, that's good.
Message was edited by:
Sundar M
Hi, a sample using the analytic function SUM:
CREATE TABLE Source_Data
( Year NUMBER
, Value NUMBER
);
BEGIN
DELETE FROM Source_Data;
FOR v_Cycle IN 1 .. 6
LOOP
INSERT
INTO Source_Data
( Year
, Value
)
VALUES
( 2000 + v_Cycle
, 100 * v_Cycle
);
END LOOP;
COMMIT;
END;
/
VARIABLE v_Amount NUMBER
EXECUTE :v_Amount := 1200
Using SUM, the previous values are accumulated:
so
SELECT Year
, Value Year_Value
, :v_Amount Original_Amount
, SUM(Value) OVER (ORDER BY Year DESC RANGE UNBOUNDED PRECEDING) Cumulative_Sum
, DECODE(
SIGN(:v_Amount - SUM(Value) OVER (ORDER BY Year DESC RANGE UNBOUNDED PRECEDING))
, 1, Value -- Positive number, more value can be subtract
, GREATEST(Value - (SUM(Value) OVER (ORDER BY Year DESC RANGE UNBOUNDED PRECEDING) - :v_Amount), 0)
) Year_Quota
FROM Source_Data s
ORDER BY Year DESC
/
will give:
YEAR YEAR_VALUE ORIGINAL_AMOUNT CUMULATIVE_SUM YEAR_QUOTA
2006 600 1200 600 600
2005 500 1200 1100 500
2004 400 1200 1500 100
2003 300 1200 1800 0
2002 200 1200 2000 0
2001 100 1200 2100 0
You can add different conditions (PARTITION BY ...).
Hope this helps
Max -
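Max's quota logic above can be checked end-to-end like this (SQLite 3.25+ via Python's sqlite3; CASE stands in for DECODE and the two-argument scalar MAX for GREATEST — both swaps are mine, the arithmetic is unchanged, and :amt plays the role of the :v_Amount bind variable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_data (year INTEGER, value INTEGER)")
conn.executemany("INSERT INTO source_data VALUES (?, ?)",
                 [(2000 + i, 100 * i) for i in range(1, 7)])

rows = conn.execute("""
    SELECT year, value,
           SUM(value) OVER (ORDER BY year DESC) AS cumulative_sum,
           CASE
             -- amount not yet exhausted: the whole year's value fits
             WHEN :amt - SUM(value) OVER (ORDER BY year DESC) > 0 THEN value
             -- otherwise only the remainder (never negative) fits
             ELSE MAX(value - (SUM(value) OVER (ORDER BY year DESC) - :amt), 0)
           END AS year_quota
    FROM source_data
    ORDER BY year DESC
""", {"amt": 1200}).fetchall()
print(rows)
```

The printed rows reproduce the YEAR/YEAR_VALUE/CUMULATIVE_SUM/YEAR_QUOTA table above.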
Using Analytic functions...
Hi All,
I need help in writing a query using analytic functions.
Following is my scenario. I have a table cust_points:
CREATE TABLE cust_points
( cust_id varchar2(10),
pts_dt date,
reward_points number(3),
bal_points number(3)
);
insert into cust_points values ('ABC', '01-MAY-2004', 5, 15);
insert into cust_points values ('ABC', '05-MAY-2004', 3, 12);
insert into cust_points values ('ABC', '09-MAY-2004', 3, 9);
insert into cust_points values ('XYZ', '02-MAY-2004', 8, 4);
insert into cust_points values ('XYZ', '03-MAY-2004', 5, 1);
insert into cust_points values ('JKL', '10-MAY-2004', 5, 11);
I want a result set which shows, for each customer, the sum of his/her reward points,
but the balance points as of the last date. So for the above data I should have the following results:
cust_id reward_pts bal_points
ABC 11 9
XYZ 13 1
JKL 5 11
I have tried using last_value(), e.g.
Select cust_id, sum(reward_points), last_value(bal_points) over (partition by cust_id) ... but I run into grouping errors.
Can anyone help?
try this...
SELECT a.pkcol,
nvl(SUM(b.col1),0) col1,
nvl(SUM(b.col2),0) col2,
nvl(SUM(b.col3),0) col3
FROM table1 a, table2 b, table3 c
WHERE a.pkcol = b.plcol(+)
AND a.pkcol = c.pkcol
GROUP BY a.pkcol;
SQL> select a.deptno,
2 nvl((select sum(sal) from test_emp b where a.deptno = b.deptno),0) col1,
3 nvl((select sum(comm) from test_emp b where a.deptno = b.deptno),0) col2
4 from test_dept a;
DEPTNO COL1 COL2
10 12786 0
20 13237 738
30 11217 2415
40 0 0
99 0 0
SQL> select a.deptno,
2 nvl(sum(b.sal),0) col1,
3 nvl(sum(b.comm),0) col2
4 from test_dept a,test_emp b
5 where a.deptno = b.deptno
6 group by a.deptno;
DEPTNO COL1 COL2
30 11217 2415
20 13237 738
10 12786 0
SQL> select a.deptno,
2 nvl(sum(b.sal),0) col1,
3 nvl(sum(b.comm),0) col2
4 from test_dept a,test_emp b
5 where a.deptno = b.deptno(+)
6 group by a.deptno;
DEPTNO COL1 COL2
10 12786 0
20 13237 738
30 11217 2415
40 0 0
99 0 0
SQL> -
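Coming back to the cust_points question that opened this thread: in Oracle the usual fix for the grouping error is `MAX(bal_points) KEEP (DENSE_RANK LAST ORDER BY pts_dt)` with `GROUP BY cust_id`. An all-analytic variant, runnable against SQLite (3.25+) as a check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cust_points
                (cust_id TEXT, pts_dt TEXT, reward_points INTEGER, bal_points INTEGER)""")
conn.executemany("INSERT INTO cust_points VALUES (?, ?, ?, ?)", [
    ("ABC", "2004-05-01", 5, 15), ("ABC", "2004-05-05", 3, 12),
    ("ABC", "2004-05-09", 3, 9),  ("XYZ", "2004-05-02", 8, 4),
    ("XYZ", "2004-05-03", 5, 1),  ("JKL", "2004-05-10", 5, 11),
])

# SUM() OVER gives the per-customer total; FIRST_VALUE ordered by date
# descending gives the balance on the last date; DISTINCT collapses the
# identical per-row results down to one row per customer.
rows = conn.execute("""
    SELECT DISTINCT cust_id,
           SUM(reward_points) OVER (PARTITION BY cust_id)       AS reward_pts,
           FIRST_VALUE(bal_points)
               OVER (PARTITION BY cust_id ORDER BY pts_dt DESC) AS bal_points
    FROM cust_points
    ORDER BY cust_id
""").fetchall()
print(rows)
```

This reproduces the expected ABC 11/9, XYZ 13/1, JKL 5/11 result.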
Hi all,
I am using ODI 11g(11.1.1.3.0) and I am trying to make an interface using analytic functions in the column mapping, something like below.
sum(salary) over (partition by .....)
The problem is that when ODI sees SUM, it assumes it is an aggregate function and adds a GROUP BY. Is there any way to make ODI understand it is not an aggregate function?
I tried creating an option to specify whether it is analytic or not and updated IKM with no luck.
<%if ( odiRef.getUserExit("ANALYTIC").equals("1") ) { %>
<% } else { %>
<%=odiRef.getGrpBy(i)%>
<%=odiRef.getHaving(i)%>
<% } %>
Thanks in advance
Thanks for the reply.
But I think in ODI 11g getFrom() function is behaving differently, that is why it is not working.
When I check out the A.2.18 getFrom() Method from Substitution API Reference document, it says
Allows the retrieval of the SQL string of the FROM in the source SELECT clause for a given dataset. The FROM statement is built from tables and joins (and according to the SQL capabilities of the technologies) that are used in this dataset.
I think getFrom() also retrieves the GROUP BY clause. I created a step in the IKM with just <%=odiRef.getFrom(0)%> and I can see that even the query generated that way has a GROUP BY clause. -
Query for using "analytical functions" in DWH...
Dear team,
I would like to know if following task can be done using analytical functions...
If it can be done using other ways, please do share the ideas...
I have table as shown below..
Create Table t As
Select *
From
(
Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
Select 12345, 'W4', 0, 100, 50, 0 From dual
);
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 0 0 10000
12345 W2 0 100 50 0
12345 W3 0 100 50 0
12345 W4 0 100 50 0
Now I want to calculate the EOH (ending on hand) quantity for W1...
This EOH for W1 becomes the SOH (starting on hand) for W2... and so on, until the end of the weeks.
The formula is: EOH = SOH - (DEMAND - SUPPLY)
The output should be as follows...
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 10000
12345 W2 10,000 100 50 9950
12345 W3 9,950 100 50 9900
12345 W4 9,000 100 50 8950
Kindly share your ideas...
Nicloei W wrote:
Means SOH_AFTER_SUPPLY for W1, should be displayed under SOH FOR W2...i.e. SOH for W4 should be SOH_AFTER_SUPPLY for W3, right?
If yes, why are you expecting it to be 9000 for W4??
So the output should be...
PRODUCT WE SOH DEMAND SUPPLY EOH SOH_AFTER_SUPPLY
12345 W1 10000 0 0 0 10000
12345 W2 10000 100 50 0 9950
12345 W3 9950 100 50 0 *9900*
12345 W4 *9000* 100 50 0 9850
Per the logic you explained, shouldn't it be *9900* instead???
You could customize Martin Preiss's logic for your requirement:
SQL> with
2 data
3 As
4 (
5 Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
6 Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
7 Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
8 Select 12345, 'W4', 0, 100, 50, 0 From dual
9 )
10 Select Product
11 ,Week
12 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week)+Supply Soh
13 ,Demand
14 ,Supply
15 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week) eoh
16 from data;
PRODUCT WE SOH DEMAND SUPPLY EOH
12345 W1 10000 0 0 10000
12345 W2 10000 100 50 9950
12345 W3 9950 100 50 9900
12345 W4 9900 100 50 9850
Vivek L -
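Vivek's corrected figures can also be reproduced with a single running sum of the weekly net movement (soh - demand + supply — the sign convention implied by the expected output, not by the formula as originally typed). A runnable check (SQLite 3.25+; in Oracle the same window expressions work unchanged):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE t (product INTEGER, week TEXT,
                                soh INTEGER, demand INTEGER, supply INTEGER)""")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?, ?)", [
    (12345, "W1", 10000, 0, 0), (12345, "W2", 0, 100, 50),
    (12345, "W3", 0, 100, 50),  (12345, "W4", 0, 100, 50),
])

# EOH = running total of each week's net movement; that week's SOH is the
# same total with the current week's demand/supply movement undone.
rows = conn.execute("""
    SELECT product, week,
           SUM(soh - demand + supply)
               OVER (PARTITION BY product ORDER BY week
                     ROWS UNBOUNDED PRECEDING) + demand - supply AS soh,
           demand, supply,
           SUM(soh - demand + supply)
               OVER (PARTITION BY product ORDER BY week
                     ROWS UNBOUNDED PRECEDING)                   AS eoh
    FROM t
    ORDER BY week
""").fetchall()
print(rows)
```

The rows match the corrected table: W4 SOH is 9900 (not 9000) and W4 EOH is 9850.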
Using analytic function to get the right output.
Dear all;
I have the following sample date below
create table temp_one
( id number(30),
placeid varchar2(400),
issuedate date,
person varchar2(400),
failures number(30),
primary key(id)
);
insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
This is the output I desire:
placeid issueperiod failures
NY 02/28/2011 - 03/06/2011 10
Mexico 02/28/2011 - 03/06/2011 3
Mexico 03/14/2011 - 03/20/2011 12
London 03/14/2011 - 03/20/2011 8
All help is appreciated. I will post my query as soon as I am able to think of a good logic for this...
Hi,
user13328581 wrote:
... Kindly note, I am still learning how to use analytic functions.
That doesn't matter; analytic functions won't help in this problem. The aggregate SUM function is all you need.
But what do you need to GROUP BY? What is each row of the result set going to represent? A placeid? Yes, each row will represent only one placedid, but it's going to be divided further. You want a separate row of output for every placeid and week, so you'll want to GROUP BY placeid and week. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 into separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures.
This gets the output you posted from the sample data you posted:
SELECT placeid
, TO_CHAR ( TRUNC (issuedate, 'IW')
, 'MM/DD/YYYY'
) || ' - '|| TO_CHAR ( TRUNC (issuedate, 'IW') + 6
, 'MM/DD/YYYY'
) AS issueperiod
, SUM (failures) AS sumfailures
FROM temp_one
GROUP BY placeid
, TRUNC (issuedate, 'IW')
;
You could use a sub-query to compute TRUNC (issuedate, 'IW') once. The code would be about as complicated, efficiency probably wouldn't improve noticeably, and the results would be the same. -
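TRUNC(issuedate, 'IW') truncates a date to the Monday of its ISO week. The same bucketing can be verified in plain Python, with the rows copied from the temp_one INSERTs above:

```python
from collections import defaultdict
from datetime import date, timedelta

def week_start(d):
    """Equivalent of Oracle's TRUNC(dt, 'IW'): the Monday of d's ISO week."""
    return d - timedelta(days=d.weekday())

# (placeid, issuedate, failures) tuples from the sample data.
data = [
    ("NY", date(2011, 3, 4), 3),      ("NY", date(2011, 3, 3), 7),
    ("Mexico", date(2011, 3, 4), 3),  ("Mexico", date(2011, 3, 14), 3),
    ("Mexico", date(2011, 3, 15), 9), ("London", date(2011, 3, 16), 8),
]

# GROUP BY placeid, TRUNC(issuedate, 'IW'), summing failures.
totals = defaultdict(int)
for placeid, issuedate, failures in data:
    totals[(placeid, week_start(issuedate))] += failures
print(dict(totals))
```

The totals match the desired output: NY week of 02/28 has 10 failures, Mexico splits into 3 and 12 across the two weeks, London has 8.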
Using analytic function in a view
Hello to all
Sorry if I reuse this thread
sql not merge using analytic functions
for my question.
From the example you wrote, and from Tom's explanation, is it not possible to create a view on an analytic function?
Thanks, and sorry again
I think what you'll discover is that if you apply the function over the result set, the initial SQL might be quicker,
for example, this is a test I did with a large dictionary view:
select tp.Table_Name
,tp.Partition_Name
from
select tbl.Table_Name as Table_Name
,tbl.Partition_Name as Partition_Name
,row_number() over (partition by tbl.Table_Name order by tbl.Partition_Name desc) rn
from (
select /*+ all_rows */
dtp.Table_Name
,dtp.Partition_name
from dba_tab_partitions dtp
where dtp.Partition_Name like 'Y____\_Q_\_M__\_D__' escape '\'
and dtp.Table_Owner = 'APPS'
and dtp.Table_name not like '%$%'
and dtp.Table_Name like '%'
) tbl
) tp
where tp.rn = 1
select Table_Name
,Partition_Name
from (
select /*+ all_rows */
dtp.Table_Name
,dtp.Partition_Name
,row_number() over (partition by dtp.Table_Name order by dtp.Partition_Name desc) rn
from dba_tab_partitions dtp
where dtp.Partition_Name like 'Y____\_Q_\_M__\_D__' escape '\'
and dtp.Table_Owner = 'APPS'
and dtp.Table_name not like '%$%'
and dtp.Table_Name like '%'
) tbl
where rn = 1I found the former to be quicker.
I think ask tom was saying a lot more, but included something similar,
Edited by: bluefrog on Jun 10, 2010 12:48 PM -
Using analytical function - value with highest count
Hi
i have this table below
CREATE TABLE table1
( cust_name VARCHAR2 (10)
, txn_id NUMBER
, txn_date DATE
, country VARCHAR2 (10)
, flag number
, CONSTRAINT key1 UNIQUE (cust_name, txn_id)
);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9870,TO_DATE ('15-Jan-2011', 'DD-Mon-YYYY'), 'Iran', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9871,TO_DATE ('16-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9872,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9873,TO_DATE ('18-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9874,TO_DATE ('19-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9875,TO_DATE ('20-Jan-2011', 'DD-Mon-YYYY'), 'Russia', 1);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9877,TO_DATE ('22-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9878,TO_DATE ('26-Jan-2011', 'DD-Mon-YYYY'), 'Korea', 0);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9811,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9854,TO_DATE ('13-Jan-2011', 'DD-Mon-YYYY'), 'Taiwan', 0);
The requirement is to create an additional column in the resultset with country name where the customer has done the maximum number of transactions
(with transaction flag 1). In case we have two or more countries tied with the same count, then we need to select the country (among the tied ones)
where the customer has done the last transaction (with transaction flag 1)
e.g. The count is 2 for both 'China' and 'Japan' for transaction flag 1 ,and the latest transaction is for 'Japan'. So the new column should contain 'Japan'
CUST_NAME TXN_ID TXN_DATE COUNTRY FLAG country_1
Peter 9811 17-JAN-11 China 0 Japan
Peter 9854 13-JAN-11 Taiwan 0 Japan
Peter 9870 15-JAN-11 Iran 1 Japan
Peter 9871 16-JAN-11 China 1 Japan
Peter 9872 17-JAN-11 China 1 Japan
Peter 9873 18-JAN-11 Japan 1 Japan
Peter 9874 19-JAN-11 Japan 1 Japan
Peter 9875 20-JAN-11 Russia 1 Japan
Peter 9877 22-JAN-11 China 0 Japan
Peter 9878 26-JAN-11 Korea 0 Japan
Please let me know how to accomplish this using analytical functions
Thanks
-Learnsequel
Does this work? (I haven't spent much time checking it.)
WITH ana AS (
SELECT cust_name, txn_id, txn_date, country, flag,
Sum (flag)
OVER (PARTITION BY cust_name, country) n_trx,
Max (CASE WHEN flag = 1 THEN txn_date END)
OVER (PARTITION BY cust_name, country) l_trx
FROM cnt_trx
)
SELECT cust_name, txn_id, txn_date, country, flag,
First_Value (country) OVER (PARTITION BY cust_name ORDER BY n_trx DESC, l_trx DESC) top_cnt
FROM ana
CUST_NAME TXN_ID TXN_DATE COUNTRY FLAG TOP_CNT
Fred 9875 20-JAN-11 Russia 1 Russia
Fred 9874 19-JAN-11 Japan 1 Russia
Peter 9873 18-JAN-11 Japan 1 Japan
Peter 9874 19-JAN-11 Japan 1 Japan
Peter 9872 17-JAN-11 China 1 Japan
Peter 9871 16-JAN-11 China 1 Japan
Peter 9811 17-JAN-11 China 0 Japan
Peter 9877 22-JAN-11 China 0 Japan
Peter 9875 20-JAN-11 Russia 1 Japan
Peter 9870 15-JAN-11 Iran 1 Japan
Peter 9878 26-JAN-11 Korea 0 Japan
Peter 9854 13-JAN-11 Taiwan 0 Japan
12 rows selected. -
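The reply's two-step window approach can be made runnable against SQLite (3.25+): ISO-format date strings stand in for Oracle DATEs, otherwise the windows are as posted.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE table1 (cust_name TEXT, txn_id INTEGER,
                                     txn_date TEXT, country TEXT, flag INTEGER)""")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?, ?)", [
    ("Peter", 9870, "2011-01-15", "Iran", 1),  ("Peter", 9871, "2011-01-16", "China", 1),
    ("Peter", 9872, "2011-01-17", "China", 1), ("Peter", 9873, "2011-01-18", "Japan", 1),
    ("Peter", 9874, "2011-01-19", "Japan", 1), ("Peter", 9875, "2011-01-20", "Russia", 1),
    ("Peter", 9877, "2011-01-22", "China", 0), ("Peter", 9878, "2011-01-26", "Korea", 0),
    ("Peter", 9811, "2011-01-17", "China", 0), ("Peter", 9854, "2011-01-13", "Taiwan", 0),
])

# Step 1: per (customer, country), count flag-1 transactions and find the
# latest flag-1 date. Step 2: the country sorting first by that pair wins.
rows = conn.execute("""
    WITH ana AS (
        SELECT cust_name, txn_id, txn_date, country, flag,
               SUM(flag) OVER (PARTITION BY cust_name, country) AS n_trx,
               MAX(CASE WHEN flag = 1 THEN txn_date END)
                   OVER (PARTITION BY cust_name, country)       AS l_trx
        FROM table1
    )
    SELECT cust_name, txn_id, country, flag,
           FIRST_VALUE(country)
               OVER (PARTITION BY cust_name
                     ORDER BY n_trx DESC, l_trx DESC) AS country_1
    FROM ana
    ORDER BY txn_id
""").fetchall()
print(rows)
```

China and Japan tie at two flag-1 transactions each, and Japan's latest one (19-Jan) is more recent, so every row gets 'Japan'.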
Build interface using analytic functions twice
Hi all, please tell me: is it possible to build an interface using analytic functions twice, like:
select max(tt.val) from (
select id, sum(val) val
from (
select 1 id, 10 val from dual union all
select 2 id, 10 val from dual union all
select 2 id, 30 val from dual union all
select 2 id, 10 val from dual union all
select 3 id, 20 val from dual) t
group by id) tt
thanks in advance
Hi,
Just a question...
You used only the dual table. Does that correspond to reality, or is it just an example?
I mean, won't a physical table be used?
I believe you need that at the target column, is that true? -
To use "analytic function" at "recursive with clause"
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10002.htm#i2077142
The recursive member cannot contain any of the following elements:
・An aggregate function. However, analytic functions are permitted in the select list.
OK, I will use an analytic function in the recursive member :-)
SQL> select * from v$version;
BANNER
Oracle Database 11g Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> with rec(Val,TotalRecCnt) as(
2 select 1,1 from dual
3 union all
4 select Val+1,count(*) over()
5 from rec
6 where Val+1 <= 5)
7 select * from rec;
select * from rec
ERROR at line 7:
ORA-32486: unsupported operation in recursive branch of recursive WITH clause
Why does ORA-32486 happen? :|
Hi Aketi,
It works in 11.2.0.2, so it is probably a bug:
select * from v$version
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
with rec(Val,TotalRecCnt) as(
select 1,1 from dual
union all
select Val+1,count(*) over()
from rec
where Val+1 <= 5)
select * from rec
VAL TOTALRECCNT
1 1
2 1
3 1
4 1
5 1
Regards,
Bob -
Help on Using Analytical Functions
I am getting an error when I use analytic functions in Expressions
AVG( INGRP1.Test1 ) OVER (PARTITION BY INGRP1.Test2)
Error is as follows
Line 1, Col 28:
PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
* & = - + ; < / > at in is mod remainder not rem
<an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
LIKE4_ LIKEC_ between || multiset member SUBMULTISET_
Hi,
the syntax of this part of the sql statement is okay. Please post the complete statement to identify the error.
Sometimes oracle identifies the wrong point for the error.
Regards,
Detlef -
Should I use Analytic functions ?
Hello,
I have a table rci_dates with the following structure (rci_id,visit_id,rci_name,rci_date).
A sample of data in this table is as given below.
1,101,'FIRST VISIT', '2010-MAY-01',
2,101,'FIRST VISIT', '2010-MAY-01'
3,101,'FIRST VISIT', '2010-MAY-01'
4,101,'FIRST VISIT', '2010-MAY-01'
5,102,'SECOND VISIT', '2010-JUN-01',
6,102,'SECOND VISIT', '2010-JUN-01'
7,102,'SECOND VISIT', '2010-JUN-01'
8,102,'SECOND VISIT', '2010-JUL-01'
I want to write a query which returns the records similar to the record with rci_id = 8, since the rci_date is different within visit_id 102. Whereas in visit_id 101 the rci_dates are all the same, so it should not appear in the output returned by my query.
How can I do this? Should I be using analytic functions? Can someone please let me know.
Thanks
OK, I have created the table and inserted the data, but it appears that the data you posted all have the same visit_id.
SQL> CREATE TABLE RCI
2 (RCI_ID NUMBER(10) NOT NULL,
3 VISIT_ID NUMBER(10) NOT NULL,
4 RCI_NAME VARCHAR2(20 BYTE) NOT NULL,
5 DCI_DATE VARCHAR2(8 BYTE));
Table created
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876540, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876640, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876740, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876840, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876940, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877040, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877140, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877240, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877240, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877640, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877740, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877840, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877940, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878040, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878140, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878240, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878340, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878440, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878540, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877640, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877740, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878340, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878540, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 17418240, 12140, 'SCREENING', '20000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 17418340, 12140, 'SCREENING', '20000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 17418440, 12140, 'SCREENING', '20000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878240, 12140, 'SCREENING', '20000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 18790240, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 21724540, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876540, 12140, 'SCREENING', '20091015');
1 row inserted
SQL> commit;
Commit complete
SQL> select * from rci;
RCI_ID VISIT_ID RCI_NAME DCI_DATE
14876540 12140 SCREENING 19000101
14876640 12140 SCREENING 19000101
14876740 12140 SCREENING 19000101
14876840 12140 SCREENING 19000101
14876940 12140 SCREENING 19000101
14877040 12140 SCREENING 19000101
14877140 12140 SCREENING 19000101
14877240 12140 SCREENING 19000101
14877240 12140 SCREENING 19000101
14877640 12140 SCREENING 19000101
14877740 12140 SCREENING 19000101
14877840 12140 SCREENING 19000101
14877940 12140 SCREENING 19000101
14878040 12140 SCREENING 19000101
14878140 12140 SCREENING 19000101
14878240 12140 SCREENING 19000101
14878340 12140 SCREENING 19000101
14878440 12140 SCREENING 19000101
14878540 12140 SCREENING 19000101
14877640 12140 SCREENING 19000101
14877740 12140 SCREENING 19000101
14878340 12140 SCREENING 19000101
14878540 12140 SCREENING 19000101
17418240 12140 SCREENING 20000101
17418340 12140 SCREENING 20000101
17418440 12140 SCREENING 20000101
14878240 12140 SCREENING 20000101
18790240 12140 SCREENING 19000101
21724540 12140 SCREENING 19000101
14876540 12140 SCREENING 20091015
30 rows selected
SQL> -- using similar code to what I previously posted, it returned all the rows.
SQL> select rci.*
2 from rci
3 where rci.visit_id in (select r1.visit_id
4 from (select rci.visit_id,
5 count(*) over (partition by rci.visit_id, rci.dci_date order by rci.visit_id) rn
6 from rci) r1
7 where r1.rn = 1)
8 order by rci.rci_id;
RCI_ID VISIT_ID RCI_NAME DCI_DATE
14876540 12140 SCREENING 20091015
14876540 12140 SCREENING 19000101
14876640 12140 SCREENING 19000101
14876740 12140 SCREENING 19000101
14876840 12140 SCREENING 19000101
14876940 12140 SCREENING 19000101
14877040 12140 SCREENING 19000101
14877140 12140 SCREENING 19000101
14877240 12140 SCREENING 19000101
14877240 12140 SCREENING 19000101
14877640 12140 SCREENING 19000101
14877640 12140 SCREENING 19000101
14877740 12140 SCREENING 19000101
14877740 12140 SCREENING 19000101
14877840 12140 SCREENING 19000101
14877940 12140 SCREENING 19000101
14878040 12140 SCREENING 19000101
14878140 12140 SCREENING 19000101
14878240 12140 SCREENING 19000101
14878240 12140 SCREENING 20000101
14878340 12140 SCREENING 19000101
14878340 12140 SCREENING 19000101
14878440 12140 SCREENING 19000101
14878540 12140 SCREENING 19000101
14878540 12140 SCREENING 19000101
17418240 12140 SCREENING 20000101
17418340 12140 SCREENING 20000101
17418440 12140 SCREENING 20000101
18790240 12140 SCREENING 19000101
21724540 12140 SCREENING 19000101
30 rows selected
SQL> Just as Frank said, it will be helpful if you post a sample output based on the original posting, i.e. the data in your first post.
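One reading of the original rci_dates question: return every row of any visit whose dates are not all identical. SQLite does not allow DISTINCT inside a window function, so a MIN/MAX comparison per visit is used below instead; narrowing further (e.g. to only the odd date out) depends on the exact requirement. Runnable sketch on the sample data from the first post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE rci_dates (rci_id INTEGER, visit_id INTEGER,
                                        rci_name TEXT, rci_date TEXT)""")
conn.executemany("INSERT INTO rci_dates VALUES (?, ?, ?, ?)", [
    (1, 101, "FIRST VISIT",  "2010-05-01"), (2, 101, "FIRST VISIT",  "2010-05-01"),
    (3, 101, "FIRST VISIT",  "2010-05-01"), (4, 101, "FIRST VISIT",  "2010-05-01"),
    (5, 102, "SECOND VISIT", "2010-06-01"), (6, 102, "SECOND VISIT", "2010-06-01"),
    (7, 102, "SECOND VISIT", "2010-06-01"), (8, 102, "SECOND VISIT", "2010-07-01"),
])

# A visit has inconsistent dates exactly when its MIN and MAX dates differ.
rows = conn.execute("""
    SELECT rci_id, visit_id, rci_name, rci_date
    FROM (SELECT r.*,
                 MIN(rci_date) OVER (PARTITION BY visit_id) AS d_min,
                 MAX(rci_date) OVER (PARTITION BY visit_id) AS d_max
          FROM rci_dates r)
    WHERE d_min <> d_max
    ORDER BY rci_id
""").fetchall()
print(rows)
```

Only visit 102's rows come back; visit 101, whose dates all agree, is filtered out.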
I need help with Analytic Function
Hi,
I have this little problem that I need help with.
My datafile has thousands of records that look like...
Client_Id Region Countries
[1] [1] [USA, Canada]
[1] [2] [Australia, France, Germany]
[1] [3] [China, India, Korea]
[1] [4] [Brazil, Mexico]
[8] [1] [USA, Canada]
[9] [1] [USA, Canada]
[9] [4] [Argentina, Brazil]
[13] [1] [USA, Canada]
[15] [1] [USA]
[15] [4] [Argentina, Brazil]
etc
My task is is to create a report with 2 columns - Client_Id and Countries, to look something like...
Client_Id Countries
[1] [USA, Canada, Australia, France, Germany, China, India, Korea, Brazil, Mexico]
[8] [USA, Canada]
[9] [USA, Canada, Argentina, Brazil]
[13] [USA, Canada]
[15] [USA, Argentina, Brazil]
etc.
How can I achieve this using Analytic Function(s)?
Thanks.
BDF
Hi,
That's called String Aggregation , and the following site shows many ways to do it:
http://www.oracle-base.com/articles/10g/StringAggregationTechniques.php
Which one should you use? That depends on which version of Oracle you're using, and your exact requirements.
For example, is order important? You said the results should include:
CLIENT_ID COUNTRIES
1 USA, Canada, Australia, France, Germany, China, India, Korea, Brazil, Mexico
but would you be equally happy with
CLIENT_ID COUNTRIES
1 Australia, France, Germany, China, India, Korea, Brazil, Mexico, USA, Canadaor
CLIENT_ID COUNTRIES
1 Australia, France, Germany, USA, Canada, Brazil, Mexico, China, India, Korea?
Mwalimu wrote:
... How can I achieve this using Analytic Function(s)?
The best solution may not involve analytic functions at all. Is that okay?
If you'd like help, post your best attempt, a little sample data (CREATE TABLE and INSERT statements), the results you want from that data, and an explanation of how you get those results from that data.
Always say which version of Oracle you're using.
Edited by: Frank Kulash on Aug 29, 2011 3:05 PM -
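For the record: on Oracle 11gR2+ the standard tool here is LISTAGG(countries, ', ') WITHIN GROUP (ORDER BY region) with GROUP BY client_id; the page linked above covers older alternatives. A runnable illustration using SQLite's GROUP_CONCAT (whose output order is not guaranteed — the ordered subquery below is only a hint, one more reason to prefer LISTAGG in Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client_regions (client_id INTEGER, region INTEGER, countries TEXT)")
conn.executemany("INSERT INTO client_regions VALUES (?, ?, ?)", [
    (1, 1, "USA, Canada"), (1, 2, "Australia, France, Germany"),
    (1, 3, "China, India, Korea"), (1, 4, "Brazil, Mexico"),
    (8, 1, "USA, Canada"), (9, 1, "USA, Canada"), (9, 4, "Argentina, Brazil"),
])

# String aggregation: one row per client, country lists joined with ', '.
rows = conn.execute("""
    SELECT client_id, GROUP_CONCAT(countries, ', ') AS countries
    FROM (SELECT * FROM client_regions ORDER BY client_id, region)
    GROUP BY client_id
    ORDER BY client_id
""").fetchall()
print(rows)
```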
How can rewrite the Query using Analytical functions ?
Hi,
I have the SQL script as shown below ,
SELECT cd.cardid, cd.cardno, tt.transactiontypecode, tt.transactiontypedesc description,
SUM (NVL (CASE tt.transactiontypecode WHEN 'LOAD_ACH' THEN th.transactionamount END, 0)) AS load_ach,
SUM (NVL (CASE tt.transactiontypecode WHEN 'FUND_TRANSFER_RECEIVED' THEN th.transactionamount END, 0)) AS transfersin,
SUM (NVL (CASE tt.transactiontypecode WHEN 'FTRNS' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'SEND_MONEY' THEN th.transactionamount END, 0)) AS transferout,
SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_ACH' THEN th.transactionamount END, 0)) AS withdrawal_ach,
SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_CHECK' THEN th.transactionamount END, 0)) AS withdrawal_check,
SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_CHECK_FEE' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'REJECTED_ACH_LOAD_FEE' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_ACH_REV' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_CHECK_REV' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_CHECK_FEE_REV' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'REJECTED_ACH_LOAD_FEE_REV' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'OVERDRAFT_FEE_REV' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'STOP_CHECK_FEE_REV' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'LOAD_ACH_REV' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'OVERDRAFT_FEE' THEN th.transactionamount END, 0))
+ SUM (NVL (CASE tt.transactiontypecode WHEN 'STOP_CHECK_FEE' THEN th.transactionamount END, 0)) AS fee,
th.transactiondatetime
FROM carddetail cd,
transactionhistory th,
transactiontype tt,
(SELECT rmx_a.cardid, rmx_a.endingbalance prev_balance, rmx_a.NUMBEROFDAYS
FROM rmxactbalreport rmx_a,
(SELECT cardid, MAX (reportdate) reportdate
FROM rmxactbalreport
GROUP BY cardid) rmx_b
WHERE rmx_a.cardid = rmx_b.cardid AND rmx_a.reportdate = rmx_b.reportdate) a
WHERE th.transactiontypeid = tt.transactiontypeid
AND cd.cardid = th.cardid
AND cd.cardtype = 'P'
AND cd.cardid = a.cardid (+)
AND CD.CARDNO = '7116734387812758335'
--AND TT.TRANSACTIONTYPECODE = 'FUND_TRANSFER_RECEIVED'
GROUP BY cd.cardid, cd.cardno, numberofdays,th.transactiondatetime,tt.transactiontypecode,TT.TRANSACTIONTYPEDESC
Output of the above query is:
CARDID CARDNO TRANSACTIONTYPECODE DESCRIPTION LOAD_ACH TRANSFERSIN TRANSFEROUT WITHDRAWAL_ACH WITHDRAWAL_CHECK FEE TRANSACTIONDATETIME
6005 7116734387812758335 FUND_TRANSFER_RECEIVED Fund Transfer Received 0 3.75 0 0 0 0 21/09/2007 11:15:38 AM
6005 7116734387812758335 FUND_TRANSFER_RECEIVED Fund Transfer Received 0 272 0 0 0 0 05/10/2007 9:12:37 AM
6005 7116734387812758335 WITHDRAWAL_ACH Withdraw Funds via ACH 0 0 0 300 0 0 24/10/2007 3:43:54 PM
6005 7116734387812758335 SEND_MONEY Fund Transfer Sent 0 0 1 0 0 0 19/09/2007 1:17:48 PM
6005 7116734387812758335 FUND_TRANSFER_RECEIVED Fund Transfer Received 0 1 0 0 0 0 18/09/2007 7:25:23 PM
6005 7116734387812758335 LOAD_ACH Prepaid Deposit via ACH 300 0 0 0 0 0 02/10/2007 3:00:00 AM
I want the output such that for LOAD_ACH there is one record, etc.
Can anyone help me rewrite the above query using analytic functions?
Sekhar
Not sure of your requirements, but this may help reduce your code:
<untested>
SUM (
CASE
WHEN tt.transactiontypecode IN
('WITHDRAWAL_CHECK_FEE', 'REJECTED_ACH_LOAD_FEE', 'WITHDRAWAL_ACH_REV', 'WITHDRAWAL_CHECK_REV',
'WITHDRAWAL_CHECK_FEE_REV', 'REJECTED_ACH_LOAD_FEE_REV', 'OVERDRAFT_FEE_REV', 'STOP_CHECK_FEE_REV',
'LOAD_ACH_REV', 'OVERDRAFT_FEE', 'STOP_CHECK_FEE')
THEN th.transactionamount
ELSE 0
END) AS fee
Also, you might want to edit your post and use [pre] and [/pre] tags around your code for formatting.