Can I use analytic functions for this problem?
Hi,
I want to use a single query for the following; I don't want to write any function or procedure for this.
create temp table test_3 (user_id number, auth_id number);
insert into test_3 values (133,609);
insert into test_3 values (133,610);
insert into test_3 values (133,611);
insert into test_3 values (133,612);
insert into test_3 values (133,613);
insert into test_3 values (133,614);
insert into test_3 values (144,1);
insert into test_3 values (134,610);
insert into test_3 values (135,610);
insert into test_3 values (135,610);
insert into test_3 values (135,610);
insert into test_3 values (136,610);
insert into test_3 values (136,610);
insert into test_3 values (137,610);
insert into test_3 values (137,610);
insert into test_3 values (137,609);
insert into test_3 values (137,11);
I want to count:
1. For each auth_id, how many users are assigned to this auth_id only?
example
user_ids 134 and 135 are assigned to auth_id 610 only, and the counts are 3 and 2 respectively.
user_id 144 is assigned to auth_id 1 only, and the count is 1.
2. How many user_ids are common between auth_id 609 and 610?
How many user_ids are common between auth_id 609 and 611?
How many user_ids are common between auth_id 609 and 612?
and so on.
I have re-written the problem below.
Regards,
Edited by: user576726 on Feb 13, 2011 3:54 AM
Hi,
user576726 wrote:
Hi,
Thanks for the response.
drop table test_3;
create table test_3 (user_id number, auth_id number);
insert into test_3 values (133,609); --row 1
...Thanks. That makes the problem a lot clearer.
My desired output is:
auth_id_1 auth_id_2 count1 count2
1 12 1 --(user_id 144) 2 --(row 15, row 16)
1 610 1 --(user_id 144) 1 --(row 19)
11 609 1 --(user_id 137) 1 --(row 13)
11 610 1 --(user_id 137) 2 --(row 11, row 12)
12 1 1 --(user_id 144) 1 --(row 4)
12 610 1 --(user_id 144) 1 --(row 19)
609 11 1 --(user_id 137) 1 --(row 14)
609 610 2 --(user_id 133 & 137) 3 --(row 2, row 11 and row 12)
609 611 1 --(user_id 133) 1 --(row 3)
610 1 1 --(user_id 144) 1 --(row 4)
610 11 1 --(user_id 137) 1 --(row 14)
610 12 1 --(user_id 144) 2 --(row 15, row 16)
610 609 2 --(user_id 133 & 137) 4 --(row 1, row 13, row 17 and row 18)
610 611 1 --(user_id 133) 1 --(row 3)
611 609 1 --(user_id 133) 3 --(row 1, row 17 and row 18)
611 610 1 --(user_id 133) 1 --(row 2)
Count1 is the number of distinct user_ids common to auth_id_1 and auth_id_2.
example
for the first row in the output:-
the common user_ids between 609 and 610 are 133 and 137, so count1 should be 2.
Count2 is the number of rows for auth_id_2 whose user_id is common to auth_id_1 and auth_id_2.
example
for the first row in the output:-
the common user_ids for 609 and 610 are 133 & 137
the rows in the test_3 table that have auth_id 610 and user_id 133 & 137 are
row 2, row 11 and row 12, so the count is 3.
What I have done:
I have written the following query to get the first two columns of the output:
select tab1.auth_id auth_id_1, tab2.auth_id auth_id_2
from
(select user_id, auth_id
from test_3
group by user_id, auth_id
) tab1,
(select user_id, auth_id
from test_3
group by user_id, auth_id
) tab2
where tab1.user_id = tab2.user_id
and tab1.auth_id <> tab2.auth_id
group by tab1.auth_id, tab2.auth_id
order by 1,2;
You're on the right track. You're doing a self-join and getting the right combinations of auth_id_1 and auth_id_2.
Why are you doing the GROUP BY in sub-queries tab1 and tab2? Eventually, you'll need to count identical rows, like these:
insert into test_3 values (137,610); --row 11
insert into test_3 values (137,610); --row 12
If you do a GROUP BY in the sub-queries, all you'll know is that user_id=137 was related to auth_id=610. You won't know how many times, which is what count2 is based on. So don't do a GROUP BY in the sub-queries; just do the GROUP BY in the main query. That means you won't need to do sub-queries; you might as well just join two copies of the original test_3 table.
Count1 is the number of distinct user_ids common to auth_id_1 and auth_id_2.
Great; that's very clear. In SQL, how do you count the number of different user_ids in such a group? (Hint: "different" means the same thing as "distinct".)
Count2 is the number of rows for auth_id_2 whose user_id is common to auth_id_1 and auth_id_2.
example
for the first row in the output:-
The first row in the output you posted was
1 12 1 --(user_id 144) 2 --(row 15, row 16)
Isn't the row you're explaining here actually the 8th row of the output?
the common user_ids for 609 and 610 are 133 & 137
the rows in the test_3 table that have auth_id 610 and user_id 133 & 137 are
row 2, row 11 and row 12, so the count is 3.
So, for count2, you want to know how many distinct rows from tab2 are in each group. If you had a primary key in the table, or anything that uniquely identified the rows, you could count the distinct occurrences of that, but you're not storing anything unique on each row (at least you haven't mentioned it in your sample data). If that's really the case, then this is one place where the ROWID pseudocolumn is handy; it uniquely identifies any row in any table, so you can just count how many different values of tab2.ROWID are in each group.
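Putting those hints together, here is one runnable sketch of the whole approach. It uses Python with SQLite rather than Oracle (SQLite's implicit rowid standing in for Oracle's ROWID pseudocolumn), and the final query is my reading of the hints, not the original poster's finished answer:

```python
import sqlite3

# Hedged sketch: join two copies of test_3 directly (no sub-queries),
# COUNT(DISTINCT user_id) for count1, COUNT(DISTINCT rowid) for count2.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table test_3 (user_id integer, auth_id integer);
insert into test_3 values
 (133,609),(133,610),(133,611),(133,612),(133,613),(133,614),
 (144,1),(134,610),(135,610),(135,610),(135,610),(136,610),
 (136,610),(137,610),(137,610),(137,609),(137,11);
""")

rows = conn.execute("""
select tab1.auth_id as auth_id_1,
       tab2.auth_id as auth_id_2,
       count(distinct tab2.user_id) as count1,  -- distinct common users
       count(distinct tab2.rowid)   as count2   -- distinct tab2 rows in the group
from test_3 tab1
join test_3 tab2
  on  tab1.user_id = tab2.user_id
  and tab1.auth_id <> tab2.auth_id
group by tab1.auth_id, tab2.auth_id
order by 1, 2
""").fetchall()

for r in rows:
    print(r)
```

With the first data set posted in this thread, the (609, 610) group comes out as count1 = 2 (users 133 and 137) and count2 = 3, matching the explanation above.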
Similar Messages
-
How can I use analytic functions on this data?
hi,
I have the following data:
id start end
1 11-oct-2006 03:00:34 12-oct-2006 09:00:10
1 09-oct-2006 05:00:23 11-oct-2006 03:00:34
1 08-oct-2006 03:00:23 09-oct-2006 05:00:23
2 11-oct-2006 11:00:00 11-oct-2006 14:00:00
1 08-oct-2006 03:00:00 08-oct-2006 04:00:00
my end result should be:
id start end
1 08-oct-2006 05:00:23 12-oct-2006 09:00:10
2 11-oct-2006 11:00:00 11-oct-2006 14:00:00
1 08-oct-2006 03:00:00 08-oct-2006 04:00:00
Please advise.
tks & rdgs
Mohana,
I think Elic's solution can be used here also
(see "Group by preserving the order"):
sql>select * from t;
ID ST ED
1 11-OCT-06 03.00.34.000000 AM 12-OCT-06 09.00.10.000000 AM
1 09-OCT-06 05.00.23.000000 AM 11-OCT-06 03.00.34.000000 AM
1 08-OCT-06 04.00.23.000000 AM 09-OCT-06 05.00.23.000000 AM
2 11-OCT-06 11.00.00.000000 AM 11-OCT-06 12.00.00.000000 PM
1 08-OCT-06 03.00.00.000000 AM 08-OCT-06 04.00.00.000000 AM
sql>
select id,min(st) st,max(ed) ed
from(
select id,st,ed, sum(grp) over(order by st) sm
from(
select id,st,ed,decode(lag(ed) over(order by st),st,0,1) grp
from t))
group by id,sm;
ID ST ED
1 08-OCT-06 03.00.00.000000 AM 08-OCT-06 04.00.00.000000 AM
1 08-OCT-06 04.00.23.000000 AM 12-OCT-06 09.00.10.000000 AM
2 11-OCT-06 11.00.00.000000 AM 11-OCT-06 12.00.00.000000 PM
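For readers who want to try the technique above outside Oracle, here is a hedged, runnable translation to Python/SQLite with the timestamps simplified to integers; the grouping logic is unchanged: LAG flags rows that do not continue the previous interval, and the running SUM of those flags numbers the islands.

```python
import sqlite3

# Toy intervals (integer start/end) standing in for the timestamp data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table t (id integer, st integer, ed integer);
insert into t values (1,30,40),(1,20,30),(1,11,20),(2,35,38),(1,5,10);
""")

rows = conn.execute("""
select id, min(st) as st, max(ed) as ed
from (
  select id, st, ed, sum(grp) over (order by st) as sm
  from (
    select id, st, ed,
           case when lag(ed) over (order by st) = st then 0 else 1 end as grp
    from t))
group by id, sm
order by 2
""").fetchall()

for r in rows:
    print(r)
```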
Message was edited by:
jeneesh -
How to use analytical function in this case
SELECT COUNT (rms.status_code) rms_status_count,
rms.status_name rms_status_name,
TO_CHAR (rtd.add_date, 'MON')|| ' '|| TO_CHAR (rtd.add_date, 'YYYY') month_year,
MAX (rtd.add_date) date_for_sort
FROM ri_mast_status rms, ri_tran_data rtd
WHERE rtd.status_code = rms.status_code
AND TRUNC (MONTHS_BETWEEN (SYSDATE, rtd.add_date)) < 36
AND NVL (rtd.delete_flg, '0') = '0'
GROUP BY TO_CHAR (rtd.add_date, 'MON')|| ' '|| TO_CHAR (rtd.add_date, 'YYYY'),
rms.status_name
ORDER BY MAX (rtd.add_date);
it gives output for the last 3 years based on month and year.
Are you trying this?
select * from
(
select rms.*,
row_number() over (partition by TO_CHAR (rtd.add_date, 'MON')|| ' '|| TO_CHAR (rtd.add_date, 'YYYY'), rms.status_name order by rtd.add_date) RN,
MAX (rtd.add_date) over (partition by TO_CHAR (rtd.add_date, 'MON')|| ' '|| TO_CHAR (rtd.add_date, 'YYYY'), rms.status_name order by rtd.add_date) date_for_sort,
COUNT (rms.status_code) over (partition by TO_CHAR (rtd.add_date, 'MON')|| ' '|| TO_CHAR (rtd.add_date, 'YYYY'), rms.status_name order by rtd.add_date) rms_status_count
FROM ri_mast_status rms, ri_tran_data rtd
WHERE rtd.status_code = rms.status_code
AND TRUNC (MONTHS_BETWEEN (SYSDATE, rtd.add_date)) < 36
AND NVL (rtd.delete_flg, '0') = '0'
)
where rn = 1 -
How to use Analytic functions in Forms10g
Hi,
Can we use analytic functions in forms10g, like LEAD & LAG?
Thanks & Regards,
Use a db view as a data source of your form block ....
Greetings...
Sim -
How to use analytic function with aggregate function
hello
Can we use an analytic function and an aggregate function in the same query? I tried to find an example on the net but could not find one showing how these two kinds of function work together. Please share any link or example with me.
Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM
select
t1.region_name,
t2.division_name,
t3.month,
t3.amount mthly_sales,
max(t3.amount) over (partition by t1.region_name, t2.division_name)
max_mthly_sales
from
region t1,
division t2,
sales t3
where
t1.region_id=t3.region_id
and
t2.division_id=t3.division_id
and
t3.year=2004
Source:http://www.orafusion.com/art_anlytc.htm
Here MAX (aggregate) and OVER (PARTITION BY ...) (analytic) appear in the same query. So we can use aggregate and analytic functions in the same query, and more than one analytic function in the same query as well.
Hth
Girish Sharma -
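A caveat on the example above: MAX(t3.amount) OVER (...) is purely analytic there. Oracle additionally lets you nest an aggregate inside an analytic call, e.g. MAX(SUM(sal)) OVER (), evaluated after the GROUP BY has run. A portable, runnable sketch of that idea (hypothetical emp table, made-up salaries) uses an inline view for the aggregation and a window function over its rows:

```python
import sqlite3

# dept_sal is the per-department aggregate; the window MAX then ranges
# over those already-aggregated rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table emp (deptno integer, sal integer);
insert into emp values (10,100),(10,200),(20,400),(20,50),(30,300);
""")

rows = conn.execute("""
select deptno,
       dept_sal,                            -- aggregate result per department
       max(dept_sal) over () as best_dept   -- analytic over the aggregated rows
from (select deptno, sum(sal) as dept_sal
      from emp
      group by deptno)
order by deptno
""").fetchall()
```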
How can rewrite the Query using Analytical functions ?
Hi,
I have the SQL script as shown below ,
SELECT cd.cardid, cd.cardno,TT.TRANSACTIONTYPECODE,TT.TRANSACTIONTYPEDESC DESCRIPTION,
SUM (NVL (CASE tt.transactiontypecode
WHEN 'LOAD_ACH'
THEN th.transactionamount
END, 0)
) AS load_ach,
SUM
(NVL (CASE tt.transactiontypecode
WHEN 'FUND_TRANSFER_RECEIVED'
THEN th.transactionamount
END,
0
)) AS Transfersin,
( SUM (NVL (CASE tt.transactiontypecode
WHEN 'FTRNS'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'SEND_MONEY'
THEN th.transactionamount
END, 0)
)) AS Transferout,
SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_ACH'
THEN th.transactionamount
END, 0)
) AS withdrawal_ach,
SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_CHECK'
THEN th.transactionamount
END, 0)
) AS withdrawal_check,
( SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_CHECK_FEE'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'REJECTED_ACH_LOAD_FEE'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_ACH_REV'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_CHECK_REV'
THEN th.transactionamount
END,
0
)) +
SUM
(NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_CHECK_FEE_REV'
THEN th.transactionamount
END,
0
)) +
SUM
(NVL (CASE tt.transactiontypecode
WHEN 'REJECTED_ACH_LOAD_FEE_REV'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'OVERDRAFT_FEE_REV'
THEN th.transactionamount
END, 0)
) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'STOP_CHECK_FEE_REV'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'LOAD_ACH_REV'
THEN th.transactionamount
END, 0)
) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'OVERDRAFT_FEE'
THEN th.transactionamount
END, 0)
) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'STOP_CHECK_FEE'
THEN th.transactionamount
END, 0)
)) AS Fee,
th.transactiondatetime
FROM carddetail cd,
transactionhistory th,
transactiontype tt,
(SELECT rmx_a.cardid, rmx_a.endingbalance prev_balance, rmx_a.NUMBEROFDAYS
FROM rmxactbalreport rmx_a,
(SELECT cardid, MAX (reportdate) reportdate
FROM rmxactbalreport
GROUP BY cardid) rmx_b
WHERE rmx_a.cardid = rmx_b.cardid AND rmx_a.reportdate = rmx_b.reportdate) a
WHERE th.transactiontypeid = tt.transactiontypeid
AND cd.cardid = th.cardid
AND cd.cardtype = 'P'
AND cd.cardid = a.cardid (+)
AND CD.CARDNO = '7116734387812758335'
--AND TT.TRANSACTIONTYPECODE = 'FUND_TRANSFER_RECEIVED'
GROUP BY cd.cardid, cd.cardno, numberofdays,th.transactiondatetime,tt.transactiontypecode,TT.TRANSACTIONTYPEDESC
Ouput of the above query is :
CARDID CARDNO TRANSACTIONTYPECODE DESCRIPTION LOAD_ACH TRANSFERSIN TRANSFEROUT WITHDRAWAL_ACH WITHDRAWAL_CHECK FEE TRANSACTIONDATETIME
6005 7116734387812758335 FUND_TRANSFER_RECEIVED Fund Transfer Received 0 3.75 0 0 0 0 21/09/2007 11:15:38 AM
6005 7116734387812758335 FUND_TRANSFER_RECEIVED Fund Transfer Received 0 272 0 0 0 0 05/10/2007 9:12:37 AM
6005 7116734387812758335 WITHDRAWAL_ACH Withdraw Funds via ACH 0 0 0 300 0 0 24/10/2007 3:43:54 PM
6005 7116734387812758335 SEND_MONEY Fund Transfer Sent 0 0 1 0 0 0 19/09/2007 1:17:48 PM
6005 7116734387812758335 FUND_TRANSFER_RECEIVED Fund Transfer Received 0 1 0 0 0 0 18/09/2007 7:25:23 PM
6005 7116734387812758335 LOAD_ACH Prepaid Deposit via ACH 300 0 0 0 0 0 02/10/2007 3:00:00 AM
I want the output such that for LOAD_ACH there should be one record, etc.
Can anyone help me rewrite the above query using analytic functions?
Sekhar
Not sure of your requirements, but this may help reduce your code:
<untested>
SUM (
CASE
WHEN tt.transactiontypecode IN
('WITHDRAWAL_CHECK_FEE', 'REJECTED_ACH_LOAD_FEE', 'WITHDRAWAL_ACH_REV', 'WITHDRAWAL_CHECK_REV',
'WITHDRAWAL_CHECK_FEE_REV', 'REJECTED_ACH_LOAD_FEE_REV', 'OVERDRAFT_FEE_REV','STOP_CHECK_FEE_REV',
'LOAD_ACH_REV', 'OVERDRAFT_FEE', 'STOP_CHECK_FEE')
THEN th.transactionamount
ELSE 0
END) fee
Also, you might want to edit your post and use [pre] and [/pre] tags around your code for formatting. -
Hi all,
I am using ODI 11g(11.1.1.3.0) and I am trying to make an interface using analytic functions in the column mapping, something like below.
sum(salary) over (partition by .....)
The problem is that when ODI sees SUM it assumes it is an aggregate function and adds a GROUP BY. Is there any way to make ODI understand it is not an aggregate function?
I tried creating an option to specify whether it is analytic or not and updated IKM with no luck.
<%if ( odiRef.getUserExit("ANALYTIC").equals("1") ) { %>
<% } else { %>
<%=odiRef.getGrpBy(i)%>
<%=odiRef.getHaving(i)%>
<% } %>
Thanks in advance
Thanks for the reply.
But I think in ODI 11g getFrom() function is behaving differently, that is why it is not working.
When I check out the A.2.18 getFrom() Method from Substitution API Reference document, it says
Allows the retrieval of the SQL string of the FROM in the source SELECT clause for a given dataset. The FROM statement is built from tables and joins (and according to the SQL capabilities of the technologies) that are used in this dataset.
I think getFrom() also retrieves the GROUP BY clause; I created a step in the IKM with just *<%=odiRef.getFrom(0)%>* and I can see that even the query generated has a GROUP BY clause. -
Restrict Query Resultset which uses Analytic Function
Gents,
Problem Definition: Using Analytic Function, get Total sales for the Product P1
and Customer C1 [Total sales for the customer itself] in one line.
I want to restrict the ResultSet of the query to Product P1,
please look at the data below, queries and problems..
Data
Customer Product Qtr Sales
C1 P1 19991 100.00
C1 P1 19992 125.00
C1 P1 19993 175.00
C1 P1 19994 300.00
C1 P2 19991 100.00
C1 P2 19992 125.00
C1 P2 19993 175.00
C1 P2 19994 300.00
C2 P1 19991 100.00
C2 P1 19992 125.00
C2 P1 19993 175.00
C2 P1 19994 300.00
Problem, I want to display....
Customer Product ProdSales CustSales
C1 P1 700 1400
But without using an outer query; i.e., please look below at the query that
returns this result with two SELECTs. I want this result in one query only.
Select * From ----*** want to avoid this... ***----
(Select Customer,Product,
Sum(Sales) ProdSales,
Sum(Sum(Sales)) Over(Partition By Customer) CustSales
From t1
Where customer='C1')
Where
Product='P1' ;
Also, I want to avoid Hard coding of P1 in the select clause....
I mean, I can do it in one shot/select, but look at the query below, it uses
P1 in the select clause, which is No No!! P1 is allowed only in Where or Having ..
Select Customer,Decode(Product, 'P1','P1','P1') Product,
Decode(Product,'P1',Sales,0) ProdSales,
Sum(Sum(Sales)) Over (Partition By Customer ) CustSales
From t1
Where customer='C1' ;
This will get me what I want, but as I said earlier, I want to avoid using P1 in the
Select clause..
Goal is to Avoid using
1-> Two Select/Outer Query/In Line Views
2-> Product 'P1' in the Select clause...
Thanks
-Dhaval Rasania
I don't understand goal number 1 of not using an inline view.
What is the harm? -
Hi all,
I would like some help creating my SQL query to extract the data described below.
I have one table example test containing data like below:
ID Desc Status
1 T1 DEACTIVE
2 T2 ACTIVE
3 T3 SUCCESS
4 T4 DEACTIVE
What I want to do is select all lines with ACTIVE status in this table, but if there is no ACTIVE status, my query should give me the last line with DEACTIVE status.
Can I do this in one query, for example by using an analytic function? If yes, can you help me with that query?
regards,
Raluce
Hi, Raluce,
Here's one way to do that:
WITH got_r_num AS
(
SELECT deptno, ename, job, hiredate
, ROW_NUMBER () OVER ( PARTITION BY deptno
ORDER BY job
, hiredate DESC
) AS r_num
FROM scott.emp
WHERE job IN ('ANALYST', 'CLERK')
)
SELECT deptno, ename, job, hiredate
FROM got_r_num
WHERE job = 'ANALYST'
OR r_num = 1
ORDER BY deptno
;
Since I don't have a sample version of your table, I used scott.emp to illustrate.
Output:
DEPTNO ENAME JOB HIREDATE
10 MILLER CLERK 23-JAN-82
20 SCOTT ANALYST 19-APR-87
20 FORD ANALYST 03-DEC-81
30 JAMES CLERK 03-DEC-81
This query finds all ANALYSTs in each department, regardless of how many there are. (Deptno 20 happens to have 2 ANALYSTs.) If there is no ANALYST in a department, then the most recently hired CLERK is included. (Deptnos 10 and 30 don't have any ANALYSTs.)
This "partitions", or sub-divides, the table into separate units, one for each department. In the problem you posted, it looks like you want to operate in the entire table, without sub-dividing it in any way. To do that, just omit the PARTITION BY clause in the analytic ROW_NUMBER function, like this:
WITH got_r_num AS
(
SELECT deptno, ename, job, hiredate
, ROW_NUMBER () OVER ( -- PARTITION BY deptno
ORDER BY job
, hiredate DESC
) AS r_num
FROM scott.emp
WHERE job IN ('ANALYST', 'CLERK')
)
SELECT deptno, ename, job, hiredate
FROM got_r_num
WHERE job = 'ANALYST'
OR r_num = 1
ORDER BY deptno
;
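The same ROW_NUMBER trick can be applied directly to the poster's own table. This runnable sketch (SQLite, my own reading, not part of the original reply) makes two assumptions: 'ACTIVE' sorts before 'DEACTIVE' (it does, alphabetically), and, with no date column in the sample data, "last" means highest ID. The Desc column is renamed descr since DESC is a reserved word.

```python
import sqlite3

# r_num = 1 lands on an ACTIVE row whenever one exists; otherwise it lands
# on the DEACTIVE row with the highest id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table test (id integer, descr text, status text);
insert into test values (1,'T1','DEACTIVE'),(2,'T2','ACTIVE'),
                        (3,'T3','SUCCESS'),(4,'T4','DEACTIVE');
""")

rows = conn.execute("""
with got_r_num as (
  select id, descr, status,
         row_number() over (order by status, id desc) as r_num
  from test
  where status in ('ACTIVE','DEACTIVE')
)
select id, descr, status
from got_r_num
where status = 'ACTIVE'
   or r_num = 1
order by id
""").fetchall()
```

With the sample data this returns only the ACTIVE row (id 2); delete that row and rerun, and it returns the DEACTIVE row with id 4 instead.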
[b]Using Analytic functions...[/b]
Hi All,
I need help in writing a query using analytic functions.
Foll is my scenario. I have a table cust_points
CREATE TABLE cust_points
( cust_id varchar2(10),
pts_dt date,
reward_points number(3),
bal_points number(3)
);
insert into cust_points values ('ABC','01-MAY-2004',5, 15);
insert into cust_points values ('ABC','05-MAY-2004',3, 12);
insert into cust_points values ('ABC','09-MAY-2004',3, 9);
insert into cust_points values ('XYZ','02-MAY-2004',8, 4);
insert into cust_points values ('XYZ','03-MAY-2004',5, 1);
insert into cust_points values ('JKL','10-MAY-2004',5, 11);
I want a result set which shows, for each customer, the sum of his/her reward points
but the balance points as of the last date. So for the above I should have the following results:
cust_id reward_pts bal_points
ABC 11 9
XYZ 13 1
JKL 5 11
I have tried using LAST_VALUE(), for example:
Select cust_id, sum(reward_points), last_value(bal_points) over (partition by cust_id)... but run into grouping errors.
Can anyone help?
try this...
SELECT a.pkcol,
nvl(SUM(b.col1),0) col1,
nvl(SUM(b.col2),0) col2,
nvl(SUM(b.col3),0) col3
FROM table1 a, table2 b, table3 c
WHERE a.pkcol = b.pkcol(+)
AND a.pkcol = c.pkcol
GROUP BY a.pkcol;
SQL> select a.deptno,
2 nvl((select sum(sal) from test_emp b where a.deptno = b.deptno),0) col1,
3 nvl((select sum(comm) from test_emp b where a.deptno = b.deptno),0) col2
4 from test_dept a;
DEPTNO COL1 COL2
10 12786 0
20 13237 738
30 11217 2415
40 0 0
99 0 0
SQL> select a.deptno,
2 nvl(sum(b.sal),0) col1,
3 nvl(sum(b.comm),0) col2
4 from test_dept a,test_emp b
5 where a.deptno = b.deptno
6 group by a.deptno;
DEPTNO COL1 COL2
30 11217 2415
20 13237 738
10 12786 0
SQL> select a.deptno,
2 nvl(sum(b.sal),0) col1,
3 nvl(sum(b.comm),0) col2
4 from test_dept a,test_emp b
5 where a.deptno = b.deptno(+)
6 group by a.deptno;
DEPTNO COL1 COL2
10 12786 0
20 13237 738
30 11217 2415
40 0 0
99 0 0
SQL> -
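The grouping error in the original attempt disappears if the analytic function is applied in an inline view first. A hedged, runnable SQLite sketch (my own suggestion, with the dates stored as ISO text so they sort correctly): number each customer's rows by date descending, then aggregate, picking the balance from the rn = 1 row.

```python
import sqlite3

# ROW_NUMBER in the inline view marks each customer's latest row (rn = 1);
# the outer query sums the points and pulls bal_points from that row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table cust_points (cust_id text, pts_dt text,
                          reward_points integer, bal_points integer);
insert into cust_points values
 ('ABC','2004-05-01',5,15),('ABC','2004-05-05',3,12),('ABC','2004-05-09',3,9),
 ('XYZ','2004-05-02',8,4),('XYZ','2004-05-03',5,1),('JKL','2004-05-10',5,11);
""")

rows = conn.execute("""
select cust_id,
       sum(reward_points) as reward_pts,
       max(case when rn = 1 then bal_points end) as bal_points
from (select c.*,
             row_number() over (partition by cust_id
                                order by pts_dt desc) as rn
      from cust_points c)
group by cust_id
order by cust_id
""").fetchall()
```

In Oracle the same shape works, and the KEEP (DENSE_RANK LAST ORDER BY pts_dt) form of MAX can avoid the inline view entirely.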
Query for using "analytical functions" in DWH...
Dear team,
I would like to know if the following task can be done using analytic functions...
If it can be done in other ways, please do share the ideas...
I have a table as shown below:
Create Table t As
Select *
From
(
Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
Select 12345, 'W4', 0, 100, 50, 0 From dual
);
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 0 0 10000
12345 W2 0 100 50 0
12345 W3 0 100 50 0
12345 W4 0 100 50 0
Now I want to calculate the EOH (ending on hand) quantity for W1...
This EOH for W1 becomes the SOH (starting on hand) for W2... and so on, till the end of the weeks.
The formula (consistent with the sample output below) is: EOH = SOH - DEMAND + SUPPLY
The output should be as follows...
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 10000
12345 W2 10,000 100 50 9950
12345 W3 9,950 100 50 9900
12345 W4 9,000 100 50 8950
Kindly share your ideas...
Nicloei W wrote:
Means SOH_AFTER_SUPPLY for W1, should be displayed under SOH FOR W2...i.e. SOH for W4 should be SOH_AFTER_SUPPLY for W3, right?
If yes, why are you expecting it to be 9000 for W4??
So in output should be...
PRODUCT WE SOH DEMAND SUPPLY EOH SOH_AFTER_SUPPLY
12345 W1 10000 0 0 0 10000
12345 W2 10000 100 50 0 9950
12345 W3 9950 100 50 0 *9900*
12345 W4 *9000* 100 50 0 9850
per logic you explained, shouldn't it be *9900* instead???
you could customize Martin Preiss's logic for your requirement :
SQL> with
2 data
3 As
4 (
5 Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
6 Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
7 Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
8 Select 12345, 'W4', 0, 100, 50, 0 From dual
9 )
10 Select Product
11 ,Week
12 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week)+Supply Soh
13 ,Demand
14 ,Supply
15 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week) eoh
16 from data;
PRODUCT WE SOH DEMAND SUPPLY EOH
12345 W1 10000 0 0 10000
12345 W2 10000 100 50 9950
12345 W3 9950 100 50 9900
12345 W4 9900 100 50 9850
Vivek L
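The running-balance logic can be stated directly as a running SUM of (SOH - DEMAND + SUPPLY), which reproduces the sample output. (This is my restatement: the quoted reply subtracts only SUPPLY, which happens to give the same numbers here because DEMAND - SUPPLY equals SUPPLY in every sample row.) A runnable SQLite sketch; note that ordering by the week label only works because 'W1'..'W4' happen to sort alphabetically.

```python
import sqlite3

# EOH = running sum of (soh - demand + supply); the displayed SOH for a
# week is that week's EOH with the week's own movement added back.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table t (product integer, week text,
                soh integer, demand integer, supply integer);
insert into t values (12345,'W1',10000,0,0),(12345,'W2',0,100,50),
                     (12345,'W3',0,100,50),(12345,'W4',0,100,50);
""")

rows = conn.execute("""
select product, week,
       sum(soh - demand + supply)
         over (partition by product order by week) + demand - supply as soh,
       demand, supply,
       sum(soh - demand + supply)
         over (partition by product order by week) as eoh
from t
order by product, week
""").fetchall()
```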
Using analytic function to get the right output.
Dear all;
I have the following sample date below
create table temp_one
(
id number(30),
placeid varchar2(400),
issuedate date,
person varchar2(400),
failures number(30),
primary key(id)
);
insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
this is the output I desire:
placeid issueperiod failures
NY 02/28/2011 - 03/06/2011 10
Mexico 02/28/2011 - 03/06/2011 3
Mexico 03/14/2011 - 03/20/2011 12
London 03/14/2011 - 03/20/2011 8
All help is appreciated. I will post my query as soon as I am able to think of a good logic for this...
Hi,
user13328581 wrote:
... Kindly note, I am still learning how to use analytic functions.
That doesn't matter; analytic functions won't help in this problem. The aggregate SUM function is all you need.
But what do you need to GROUP BY? What is each row of the result set going to represent? A placeid? Yes, each row will represent only one placeid, but it's going to be divided further. You want a separate row of output for every placeid and week, so you'll want to GROUP BY placeid and week. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 into separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures.
This gets the output you posted from the sample data you posted:
SELECT placeid
, TO_CHAR ( TRUNC (issuedate, 'IW')
, 'MM/DD/YYYY'
) || ' - '|| TO_CHAR ( TRUNC (issuedate, 'IW') + 6
, 'MM/DD/YYYY'
) AS issueperiod
, SUM (failures) AS sumfailures
FROM temp_one
GROUP BY placeid
, TRUNC (issuedate, 'IW')
;
You could use a sub-query to compute TRUNC (issuedate, 'IW') once. The code would be about as complicated, efficiency probably won't improve noticeably, and the results would be the same.
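For engines without TRUNC (issuedate, 'IW'), the same GROUP BY can be reproduced with date arithmetic. A runnable adaptation (my own, not part of the original reply) in Python/SQLite, where the Monday of the ISO week comes from date modifiers:

```python
import sqlite3

# date(d, '-6 days', 'weekday 1') steps back six days, then forward to the
# next Monday: the Monday of d's ISO week, like Oracle's TRUNC(d, 'IW').
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table temp_one (id integer, placeid text, issuedate text,
                       person text, failures integer);
insert into temp_one values
 (1,'NY','2011-03-04','John',3),(2,'NY','2011-03-03','Adam',7),
 (3,'Mexico','2011-03-04','Wendy',3),(4,'Mexico','2011-03-14','Gerry',3),
 (5,'Mexico','2011-03-15','Zick',9),(6,'London','2011-03-16','Mike',8);
""")

rows = conn.execute("""
select placeid,
       date(issuedate, '-6 days', 'weekday 1') as week_start,
       sum(failures) as sumfailures
from temp_one
group by placeid, week_start
order by week_start, placeid
""").fetchall()
```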