Using analytic function to get the right output.
Dear all;
I have the following sample data below:
create table temp_one (
id number(30),
placeid varchar2(400),
issuedate date,
person varchar2(400),
failures number(30),
primary key (id)
);
insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
This is the output I desire:
placeid   issueperiod               failures
NY        02/28/2011 - 03/06/2011   10
Mexico    02/28/2011 - 03/06/2011   3
Mexico    03/14/2011 - 03/20/2011   12
London    03/14/2011 - 03/20/2011   8
All help is appreciated. I will post my query as soon as I am able to think of a good logic for this...
Hi,
user13328581 wrote:
... Kindly note, I am still learning how to use analytic functions.
That doesn't matter; analytic functions won't help in this problem. The aggregate SUM function is all you need.
But what do you need to GROUP BY? What is each row of the result set going to represent? A placeid? Yes, each row will represent only one placedid, but it's going to be divided further. You want a separate row of output for every placeid and week, so you'll want to GROUP BY placeid and week. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 into separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures.
This gets the output you posted from the sample data you posted:
SELECT placeid
, TO_CHAR ( TRUNC (issuedate, 'IW')
, 'MM/DD/YYYY'
) || ' - '|| TO_CHAR ( TRUNC (issuedate, 'IW') + 6
, 'MM/DD/YYYY'
) AS issueperiod
, SUM (failures) AS sumfailures
FROM temp_one
GROUP BY placeid
, TRUNC (issuedate, 'IW')
;
You could use a sub-query to compute TRUNC (issuedate, 'IW') once. The code would be about as complicated, efficiency probably won't improve noticeably, and the results would be the same.
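For reference, here is a hedged sketch of that sub-query variant (untested, assuming the same temp_one table as above); the inline view computes the week start once and the outer query formats and aggregates it:
SELECT placeid
     , TO_CHAR (week_start, 'MM/DD/YYYY') || ' - '
       || TO_CHAR (week_start + 6, 'MM/DD/YYYY') AS issueperiod
     , SUM (failures) AS sumfailures
FROM (
       SELECT placeid
            , TRUNC (issuedate, 'IW') AS week_start   -- ISO week starts on Monday
            , failures
       FROM temp_one
     )
GROUP BY placeid
       , week_start
;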
Similar Messages
-
Using analytic function to get the data
Hi
Version is 11g
My table has this data
NAME SALARY LAST_UPDA
a 1000 01-JAN-07
a 2000 01-JAN-09
a 2500 01-JUN-10
b 2000 01-AUG-10
c 5000 01-JUN-07
c 6000 08-JAN-09
c 4500 01-FEB-10
I want to pick the salary and name of the person when it was last updated (max(last_update_date)).
Couple of ways of doing this I think are
SELECT distinct name,
TRUNC(
AVG(salary) KEEP (DENSE_RANK LAST
ORDER BY TO_CHAR(last_update_Date,'YYYY') )
OVER (PARTITION BY NAME)
) t
FROM kmdebug;
OR
SELECT * FROM KMDEBUG
WHERE (LAST_UPDATE_DATE, NAME) IN (SELECT MAX(LAST_UPDATE_DATE), NAME
FROM KMDEBUG
GROUP BY NAME);
They give the desired result.
NAME SALARY LAST_UPDA
a 2500 01-JUN-10
b 2000 01-AUG-10
c 4500 01-FEB-10
But the problem with the first version is DISTINCT. I want to get result set without using DISTINCT
The problem with the second version is that it could be an inefficient way of doing it, especially when the KMDEBUG table gets big.
Thank you
MSK
A couple of thoughts.
1. 11g is not a version number.
SELECT * FROM v$version;
2. Read the FAQ and learn how to use tags to format your listing so others can read it. (blue circular icon to the right)
3. Post DDL to create your table and DML to load your sample data.
Then, perhaps, someone can try your query and consider how to get you what you want.
And, when you make requests like "without using DISTINCT" you need to explain why. Because otherwise this just looks like someone trying to get us to do their homework for them. -
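For what it's worth, a minimal hedged sketch of one way to meet the "without DISTINCT" request, assuming a kmdebug table with the columns shown above (untested): a plain GROUP BY with the KEEP (DENSE_RANK LAST) aggregate returns one row per name, so no DISTINCT and no second scan of the table is needed.
SELECT name
     , MAX (salary) KEEP (DENSE_RANK LAST ORDER BY last_update_date) AS salary
     , MAX (last_update_date) AS last_update_date
FROM kmdebug
GROUP BY name;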
Use SQL function to get the original order number using the invoice number
Hi All,
Wondering if someone can help me with this challenge I am having? Often I need to return the original order numbers that created the resulting invoice. This is a relatively simple series of joins in a query, but I want to simplify it using a SQL function that can be referenced easily each time from within the SELECT statement. The code I currently have is:
CREATE FUNCTION dbo.fnOrdersThatMakeInvoice(@InvNum int)
RETURNS nvarchar(200)
AS
BEGIN
DECLARE @OrderList nvarchar(200)
SET @OrderList = ''
SELECT @OrderList = @OrderList + (cast(T6.DocNum AS nvarchar(10)) + ' ')
FROM OINV AS T1 INNER JOIN
INV1 AS T2 ON T1.DocEntry = T2.DocEntry INNER JOIN
DLN1 AS T4 ON T2.BaseEntry = T4.DocEntry AND T2.BaseLine = T4.LineNum INNER JOIN
RDR1 AS T5 ON T4.BaseEntry = T5.DocEntry AND T4.BaseLine = T5.LineNum INNER JOIN
ORDR AS T6 ON T5.DocEntry = T6.DocEntry
WHERE T1.DocNum = @InvNum
RETURN @OrderList
END
it is run by the following query:
Select T1.DocNum, dbo.fnOrdersThatMakeInvoice(T1.DocNum)
From OINV T1
Where T1.DocNum = 'your invoice number here'
The issue is that this returns the order number for all of the lines in the invoice. I only want to see the summary of the order numbers, i.e. if 3 orders were used to make a 20-line invoice I only want to see the 3 order numbers returned in the field.
If this was a simple reporting SELECT query I would use SELECT DISTINCT. But I can't do that.
Any ideas?
Thanks,
Mike
Thanks Gordon,
I am trying to get away from the massive table access list every time I write a query where I need to access the original order number of the invoice. However, I have managed to solve my own problem with a GROUP BY statement!
Others may be interested so, the code is this:
CREATE FUNCTION dbo.fnOrdersThatMakeInvoice(@InvNum int)
RETURNS nvarchar(200)
AS
BEGIN
DECLARE @OrderList nvarchar(200)
SET @OrderList = ''
SELECT @OrderList = @OrderList + (cast(T6.DocNum AS nvarchar(10)) + ' ')
FROM OINV AS T1 INNER JOIN
INV1 AS T2 ON T1.DocEntry = T2.DocEntry INNER JOIN
DLN1 AS T4 ON T2.BaseEntry = T4.DocEntry AND T2.BaseLine = T4.LineNum INNER JOIN
RDR1 AS T5 ON T4.BaseEntry = T5.DocEntry AND T4.BaseLine = T5.LineNum INNER JOIN
ORDR AS T6 ON T5.DocEntry = T6.DocEntry
WHERE T1.DocNum = @InvNum
GROUP BY T6.DocNum
RETURN @OrderList
END
and to call it use this:
Select T1.DocNum, dbo.fnOrdersThatMakeInvoice(T1.DocNum)
From OINV T1
Where T1.DocNum = 'your invoice number' -
Hi Guys, I am not getting the right output. Please help
Hi Guys,
Here is my code..
ELSEIF p_versb = 'W2'.
CONCATENATE lv_perxx '08' INTO lv_date3. (20090708)
CONCATENATE lv_perxx '15' INTO lv_date4. (20090715)
SELECT mseg~mblnr
mseg~bwart
mseg~matnr
mseg~lgort
mseg~menge
FROM mseg
INNER JOIN mkpf
ON mseg~mblnr = mkpf~mblnr
INTO TABLE t_temp
FOR ALL ENTRIES IN t_firmplan
WHERE mseg~matnr = t_firmplan-matnr (490045,500001)
AND mkpf~budat GE lv_date3
AND mkpf~budat LE lv_date4
AND mseg~bwart IN ('101', '102')
AND mseg~werks = t_firmplan-werks. (werks = 1100)
Please suggest where I am wrong?
Thanks
Steve
Hi,
Check the following:
SELECT mseg~mblnr
mseg~bwart
mseg~matnr
mseg~lgort
mseg~menge
FROM mseg
INNER JOIN mkpf
ON mseg~mblnr = mkpf~mblnr
INTO TABLE t_temp
FOR ALL ENTRIES IN t_firmplan
WHERE mseg~matnr = t_firmplan-matnr (490045,500001) " Check the matnr value in the MSEG table; if it is stored with
" leading zeros then use the conversion exit to get the leading zeros
" in t_firmplan-matnr
AND mkpf~budat GE lv_date3
AND mkpf~budat LE lv_date4
AND mseg~bwart IN ('101', '102')
AND mseg~werks = t_firmplan-werks. (werks = 1100) -
How can rewrite the Query using Analytical functions ?
Hi,
I have the SQL script as shown below ,
SELECT cd.cardid, cd.cardno,TT.TRANSACTIONTYPECODE,TT.TRANSACTIONTYPEDESC DESCRIPTION,
SUM (NVL (CASE tt.transactiontypecode
WHEN 'LOAD_ACH'
THEN th.transactionamount
END, 0)
) AS load_ach,
SUM
(NVL (CASE tt.transactiontypecode
WHEN 'FUND_TRANSFER_RECEIVED'
THEN th.transactionamount
END,
0
)) AS Transfersin,
( SUM (NVL (CASE tt.transactiontypecode
WHEN 'FTRNS'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'SEND_MONEY'
THEN th.transactionamount
END, 0)
)) AS Transferout,
SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_ACH'
THEN th.transactionamount
END, 0)
) AS withdrawal_ach,
SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_CHECK'
THEN th.transactionamount
END, 0)
) AS withdrawal_check,
( SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_CHECK_FEE'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'REJECTED_ACH_LOAD_FEE'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_ACH_REV'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_CHECK_REV'
THEN th.transactionamount
END,
0
)) +
SUM
(NVL (CASE tt.transactiontypecode
WHEN 'WITHDRAWAL_CHECK_FEE_REV'
THEN th.transactionamount
END,
0
)) +
SUM
(NVL (CASE tt.transactiontypecode
WHEN 'REJECTED_ACH_LOAD_FEE_REV'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'OVERDRAFT_FEE_REV'
THEN th.transactionamount
END, 0)
) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'STOP_CHECK_FEE_REV'
THEN th.transactionamount
END,
0
)) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'LOAD_ACH_REV'
THEN th.transactionamount
END, 0)
) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'OVERDRAFT_FEE'
THEN th.transactionamount
END, 0)
) +
SUM (NVL (CASE tt.transactiontypecode
WHEN 'STOP_CHECK_FEE'
THEN th.transactionamount
END, 0)
)) AS Fee,
th.transactiondatetime
FROM carddetail cd,
transactionhistory th,
transactiontype tt,
(SELECT rmx_a.cardid, rmx_a.endingbalance prev_balance, rmx_a.NUMBEROFDAYS
FROM rmxactbalreport rmx_a,
(SELECT cardid, MAX (reportdate) reportdate
FROM rmxactbalreport
GROUP BY cardid) rmx_b
WHERE rmx_a.cardid = rmx_b.cardid AND rmx_a.reportdate = rmx_b.reportdate) a
WHERE th.transactiontypeid = tt.transactiontypeid
AND cd.cardid = th.cardid
AND cd.cardtype = 'P'
AND cd.cardid = a.cardid (+)
AND CD.CARDNO = '7116734387812758335'
--AND TT.TRANSACTIONTYPECODE = 'FUND_TRANSFER_RECEIVED'
GROUP BY cd.cardid, cd.cardno, numberofdays,th.transactiondatetime,tt.transactiontypecode,TT.TRANSACTIONTYPEDESC
Ouput of the above query is :
CARDID CARDNO TRANSACTIONTYPECODE DESCRIPTION LOAD_ACH TRANSFERSIN TRANSFEROUT WITHDRAWAL_ACH WITHDRAWAL_CHECK FEE TRANSACTIONDATETIME
6005 7116734387812758335 FUND_TRANSFER_RECEIVED Fund Transfer Received 0 3.75 0 0 0 0 21/09/2007 11:15:38 AM
6005 7116734387812758335 FUND_TRANSFER_RECEIVED Fund Transfer Received 0 272 0 0 0 0 05/10/2007 9:12:37 AM
6005 7116734387812758335 WITHDRAWAL_ACH Withdraw Funds via ACH 0 0 0 300 0 0 24/10/2007 3:43:54 PM
6005 7116734387812758335 SEND_MONEY Fund Transfer Sent 0 0 1 0 0 0 19/09/2007 1:17:48 PM
6005 7116734387812758335 FUND_TRANSFER_RECEIVED Fund Transfer Received 0 1 0 0 0 0 18/09/2007 7:25:23 PM
6005 7116734387812758335 LOAD_ACH Prepaid Deposit via ACH 300 0 0 0 0 0 02/10/2007 3:00:00 AM
I want the output such that for LOAD_ACH there should be only one record, etc.
Can anyone help me: how can I rewrite the above query using analytical functions?
Sekhar
Not sure of your requirements, but this may help reduce your code;
<untested>
SUM (
CASE
WHEN tt.transactiontypecode IN
('WITHDRAWAL_CHECK_FEE', 'REJECTED_ACH_LOAD_FEE', 'WITHDRAWAL_ACH_REV', 'WITHDRAWAL_CHECK_REV',
'WITHDRAWAL_CHECK_FEE_REV', 'REJECTED_ACH_LOAD_FEE_REV', 'OVERDRAFT_FEE_REV','STOP_CHECK_FEE_REV',
'LOAD_ACH_REV', 'OVERDRAFT_FEE', 'STOP_CHECK_FEE')
THEN th.transactionamount
ELSE 0
END) fee
Also, you might want to edit your post and use [pre] and [/pre] tags around your code for formatting. -
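Further to that reply, a hedged sketch of how the whole query might collapse to one row per card and transaction type (untested, and assuming the same carddetail, transactionhistory and transactiontype tables): the extra rows come from grouping on th.transactiondatetime, so drop it from the GROUP BY or aggregate it.
SELECT cd.cardid
     , cd.cardno
     , tt.transactiontypecode
     , tt.transactiontypedesc AS description
     , SUM (th.transactionamount) AS amount
     , MAX (th.transactiondatetime) AS last_transactiondatetime   -- keep only the latest timestamp per type
FROM carddetail cd
JOIN transactionhistory th ON th.cardid = cd.cardid
JOIN transactiontype tt ON tt.transactiontypeid = th.transactiontypeid
WHERE cd.cardtype = 'P'
  AND cd.cardno = '7116734387812758335'
GROUP BY cd.cardid, cd.cardno, tt.transactiontypecode, tt.transactiontypedesc;
The per-type columns (load_ach, transfersin, and so on) can then be built from the CASE expressions above, or the result can be pivoted in the reporting layer.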
I downloaded Microsoft Office to my MBP and my question is: how do I get the right file or operating system to open it so that I can use it?
Welcome to the Apple Support Communities
There are two Office versions: Office for Windows, and Office for Mac.
I suspect that you have downloaded Office for Windows, and you can use it if you install Windows, but a cheaper and easier way to use Office is to use Office for Mac, so you won't have to install Windows. See > http://www.microsoft.com/mac
My AirPort Extreme can't get the right IP if I try to make a new network
I also use the apps
I have 3 devices but can always use just one
My IP and my user name are from the university
I need to fix the IP, username and password in the AirPort Extreme, but it always copies a wrong IP into the system
Thanks for help
Elias
You're only allowed to authorize 5 devices with your Apple ID.
How many have you got authorized?
Maybe deauthorize one of them. -
Is AIR the right output to use?
Hello there.
So, let me preface this by saying I don't know a lot about using AIR to view help content nor the limitations an AIR output has. I've built a test file once and played around with it some back in RH 7.
Until now our company has used HtmlHelp (CHM) as our output format, but we're starting a new product, and we need it to have a modern look and feel as well as have the ability to store, display and moderate user comments on topics. Initially, I'm thinking the RoboHelp AIR output might be a good match.
What we need:
The help must be context sensitive.
The help must be dockable inside the UI.
The help must allow user commenting (similar to that shown in the AIR examples).
The help must reside locally but have the ability to grab user comments for any who have access to the internet.
One possible problem is that our product will have its user interface done using WPF. My first concern is: can a WPF user interface support F1 context sensitivity in an AIR output? Some posts say perhaps it cannot (such as http://forums.adobe.com/message/2145088).
Also, can you dock an AIR output inside a UI?
If AIR isn't the right output, what would you recommend for the above requirements?
Hope all this makes sense. This is all new to me.
Many thanks!
(RoboHelp 9, Windows 7 64-bit)
Thanks for the answers guys.
I don't see C# as a supported language here, so maybe that won't work for us then:
C:\Program Files (x86)\Adobe\Adobe RoboHelp 9\CSH API
We do already have RH 9.
We do not have the RH 9 Server. Is this needed for the user comments from the internet? From this help topic, it sounds like you might be able to do it some other way.
From the Rh 9 help:
Set Location For Comments And Topic Ratings
Depending on whether you want a trust-based system or user authentication, select one of the following options:
Select Network Folder. Click Browse to choose the shared network folder where the comments will be stored. Click Add to add locations on Mac and Linux systems if required. See URL formats for Windows, Mac, and Linux.
Specify the password that moderators need to provide to access the moderation dashboard.
Select Pending or Accepted to specify how you want to handle unmoderated comments. If you select Pending, unmoderated comments are not displayed to users.
Storing comments in a shared network folder is suitable in trust-based work environments, where shared reviews by internal stakeholders are part of the content development process.
Select RoboHelp Server and
specify the server URL.
Storing comments on RoboHelp Server enables you to authenticate users before allowing them to view or post comments.
Configuration File Path
Specify the path and name of the file that stores the configuration for comment syncing and auto-update. When the AIR application is distributed, users can copy the default configuration file from the !SSL folder of the project to the location specified in the configuration file path and modify the default settings according to their preferences. For example, they can disable commenting or change the location of storing comments. You can enter the path in any of the following formats:
A relative path (relative to the install folder)
An absolute location, such as a shared network drive or a file location in the user drive
A web URL pointing to the location where you’ve posted the XML file
Thanks for the help. -
Using analytical function to calculate concurrency between date range
Folks,
I'm trying to use analytical functions to come up with a query that gives me the
concurrency of jobs executing between a date range.
For example:
JOB100 - started at 9AM - stopped at 11AM
JOB200 - started at 10AM - stopped at 3PM
JOB300 - started at 12PM - stopped at 2PM
The query would tell me that JOB100 ran with a concurrency of 2 because JOB100 and JOB200
were running during the same time window. JOB200 ran with a concurrency of 3 because all jobs ran
within its start and stop time. The output would look like this.
JOB START STOP CONCURRENCY
=== ==== ==== =========
100 9AM 11AM 2
200 10AM 3PM 3
300 12PM 2PM 2
I've been looking at this post, and this one if very similar...
Analytic functions using window date range
Here is the sample data..
CREATE TABLE TEST_JOB
( jobid NUMBER,
created_time DATE,
start_time DATE,
stop_time DATE
);
insert into TEST_JOB values (100, sysdate -1, to_date('05/04/08 09:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 11:00:00','MM/DD/YY hh24:mi:ss'));
insert into TEST_JOB values (200, sysdate -1, to_date('05/04/08 10:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 13:00:00','MM/DD/YY hh24:mi:ss'));
insert into TEST_JOB values (300, sysdate -1, to_date('05/04/08 12:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 14:00:00','MM/DD/YY hh24:mi:ss'));
select * from test_job;
JOBID|CREATED_TIME |START_TIME |STOP_TIME
----------|--------------|--------------|--------------
100|05/04/08 09:28|05/04/08 09:00|05/04/08 11:00
200|05/04/08 09:28|05/04/08 10:00|05/04/08 13:00
300|05/04/08 09:28|05/04/08 12:00|05/04/08 14:00
Any help with this query would be greatly appreciated.
thanks.
-peter
After some checking, the model rule wasn't working exactly as expected.
I believe it's working right now. I'm posting a self-contained example for completeness' sake. I use 2 functions to convert back and forth between epoch Unix timestamps, so
I'll post them here as well.
Like I said I think this works okay, but any feedback is always appreciated.
-peter
CREATE OR REPLACE FUNCTION date_to_epoch(p_dateval IN DATE)
RETURN NUMBER
AS
BEGIN
return (p_dateval - to_date('01/01/1970','MM/DD/YYYY')) * (24 * 3600);
END;
CREATE OR REPLACE FUNCTION epoch_to_date (p_epochval IN NUMBER DEFAULT 0)
RETURN DATE
AS
BEGIN
return to_date('01/01/1970','MM/DD/YYYY') + (( p_epochval) / (24 * 3600));
END;
DROP TABLE TEST_MODEL3 purge;
CREATE TABLE TEST_MODEL3
( jobid NUMBER,
start_time NUMBER,
end_time NUMBER);
insert into TEST_MODEL3
VALUES (300,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 19:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (200,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 12:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (400,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 14:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (500,date_to_epoch(to_date('05/07/2008 11:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 16:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (600,date_to_epoch(to_date('05/07/2008 15:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 22:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (100,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 23:00','MM/DD/YYYY hh24:mi')));
commit;
SELECT jobid,
epoch_to_date(start_time)start_time,
epoch_to_date(end_time)end_time,
n concurrency
FROM TEST_MODEL3
MODEL
DIMENSION BY (start_time,end_time)
MEASURES (jobid,0 n)
(n[any,any]=
count(*)[start_time<= cv(start_time),end_time>=cv(start_time)]+
count(*)[start_time > cv(start_time) and start_time <= cv(end_time), end_time >= cv(start_time)]
)
ORDER BY start_time;
The results look like this:
JOBID|START_TIME|END_TIME |CONCURRENCY
----------|---------------|--------------|-------------------
100|05/07/08 09:00|05/07/08 23:00| 6
200|05/07/08 09:00|05/07/08 12:00| 5
300|05/07/08 10:00|05/07/08 19:00| 6
400|05/07/08 10:00|05/07/08 14:00| 5
500|05/07/08 11:00|05/07/08 16:00| 6
600|05/07/08 15:00|05/07/08 22:00| 4 -
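For comparison, a hedged sketch of the same concurrency count without the MODEL clause, assuming the original TEST_JOB table with its DATE columns (untested): each job is joined to every job whose interval overlaps its own, and the matches are counted.
SELECT a.jobid
     , a.start_time
     , a.stop_time
     , COUNT (*) AS concurrency   -- the count includes the job itself
FROM test_job a
JOIN test_job b
  ON  b.start_time <= a.stop_time
  AND b.stop_time  >= a.start_time
GROUP BY a.jobid, a.start_time, a.stop_time
ORDER BY a.jobid;
Against the three sample jobs this gives 2, 3 and 2, matching the counts described in the original post; whether the self-join or the MODEL version performs better on a large table is something to test.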
Can I use analytical function in this problem?
Hi,
I want to use query only for the following . I don't want to wright any function or procedure for this.
create table test_3 (user_id number, auth_id number);
insert into test_3 values (133,609);
insert into test_3 values (133,610);
insert into test_3 values (133,611);
insert into test_3 values (133,612);
insert into test_3 values (133,613);
insert into test_3 values (133,614);
insert into test_3 values (144,1);
insert into test_3 values (134,610);
insert into test_3 values (135,610);
insert into test_3 values (135,610);
insert into test_3 values (135,610);
insert into test_3 values (136,610);
insert into test_3 values (136,610);
insert into test_3 values (137,610);
insert into test_3 values (137,610);
insert into test_3 values (137,609);
insert into test_3 values (137,11);
I want to count:
1. for each auth_id, how many users are there who are assigned to this auth_id only
example
user_ids 134 and 135 are assigned to auth_id 610 only and the count is 3 and 2 respectively.
user_id 144 is assigned to auth_id 1 only and the count is 1.
2. how many user_ids are common between auth_id 609 and 610
how many user_ids are common between auth_id 609 and 611
how many user_ids are common between auth_id 609 and 612
and so on.
I have re-written the problem below
Regards,
Edited by: user576726 on Feb 13, 2011 3:54 AM
Hi,
user576726 wrote:
Hi,
Thanks for the response.
drop table test_3;
create table test_3 (user_id number, auth_id number);
insert into test_3 values (133,609); --row 1
...
Thanks. That makes the problem a lot clearer.
My desired output is:
auth_id_1 auth_id_2 count1 count2
1 12 1 --(user_id 144) 2 --(row 15, row 16)
1 610 1 --(user_id 144) 1 --(row 19)
11 609 1 --(user_id 137) 1 --(row 13)
11 610 1 --(user_id 137) 2 --(row 11, row 12)
12 1 1 --(user_id 144) 1 --(row 4)
12 610 1 --(user_id 144) 1 --(row 19)
609 11 1 --(user_id 137) 1 --(row 14)
609 610 2 --(user_id 133 & 137) 3 --(row 2, row 11 and row 12)
609 611 1 --(user_id 133) 1 --(row 3)
610 1 1 --(user_id 144) 1 --(row 4)
610 11 1 --(user_id 137) 1 --(row 14)
610 12 1 --(user_id 144) 2 --(row 15, row 16)
610 609 2 --(user_id 133 & 137) 4 --(row 1, row 13, row 17 and row 18)
610 611 1 --(user_id 133) 1 --(row 3)
611 609 1 --(user_id 133) 3 --(row 1, row 17 and row 18)
611 610 1 --(user_id 133) 1 --(row 2)
Count1 is the number of common different user id between auth_id_1 and auth_id_2
example
for the first row in the output:-
common user ids between 609 and 610 are 133 and 137. so the count1 should be 2
Count2 is how many rows are there for auth_id_2 where user id is common for auth_id_1 and auth_id_2
example
for the first row in the output:-
the common user_id for 609 and 610 are 133 & 137
the rows in the test_3 table that has auth_id 610 and user_id 133 & 137 are
row 2, row 11 and row 12 so the count is 3.
What I have done is
I have writtent the following query to get the first two columns of the output:
select tab1.auth_id auth_id_1, tab2.auth_id auth_id_2
from
(select user_id, auth_id
from test_3
group by user_id, auth_id
) tab1,
(select user_id, auth_id
from test_3
group by user_id, auth_id
) tab2
where tab1.user_id = tab2.user_id
and tab1.auth_id <> tab2.auth_id
group by tab1.auth_id, tab2.auth_id
order by 1,2;
You're on the right track. You're doing a self-join and getting the right combinations of auth_id_1 and auth_id_2.
Why are you doing the GROUP BY in sub-queries tab1 and tab2? Eventually, you'll need to count identical rows, like these:
insert into test_3 values (137,610); --row 11
insert into test_3 values (137,610); --row 12
If you do a GROUP BY in the sub-queries, all you'll know is that user_id=137 was related to auth_id=610. You won't know how many times, which is what count2 is based on. So don't do a GROUP BY in the sub-queries; just do the GROUP BY in the main query. That means you won't need to do sub-queries; you might as well just join two copies of the original test_3 table.
Count1 is the number of common different user id between auth_id_1 and auth_id_2
Great; that's very clear. In SQL, how do you count the number of different user_ids in such a group? (Hint: "different" means the same thing as "distinct".)
Count2 is how many rows are there for auth_id_2 where user id is common for auth_id_1 and auth_id_2
example
for the first row in the output:-
The first row in the output you posted was
1 12 1 --(user_id 144) 2 --(row 15, row 16)
Isn't this one that you're explaining here the 8th row of output?
the common user_id for 609 and 610 are 133 & 137
the rows in the test_3 table that has auth_id 610 and user_id 133 & 137 are
row 2, row 11 and row 12 so the count is 3.So, for count2, you want to know how many distinct rows from tab2 are in each group. If you had a primary key in the table, or anything that uniquely identified the rows, you could count the distinct occurrences of that, but you're not storing anything unique on each row (at least you haven't mentioned it in your sample data). If that's really the case, then this is one place where the ROWID pseudocolumn is handy; it uniquely identifies any row in any table, so you can just count how many different values of tab2.ROWID are in each group. -
Query for using "analytical functions" in DWH...
Dear team,
I would like to know if the following task can be done using analytical functions...
If it can be done using other ways, please do share the ideas...
I have table as shown below..
Create Table t As
Select *
From
(
Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
Select 12345, 'W4', 0, 100, 50, 0 From dual
);
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 0 0 10000
12345 W2 0 100 50 0
12345 W3 0 100 50 0
12345 W4 0 100 50 0
Now I want to calculate EOH (ending on hand) quantity for W1...
This EOH for W1 becomes SOH (Starting on hand) for W2...and so on...till end of weeks
The formula is :- EOH = SOH - (DEMAND + SUPPLY)
The output should be as follows...
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 10000
12345 W2 10,000 100 50 9950
12345 W3 9,950 100 50 9900
12345 W4 9,000 100 50 8950
Kindly share your ideas...
Nicloei W wrote:
Means SOH_AFTER_SUPPLY for W1, should be displayed under SOH FOR W2...i.e. SOH for W4 should be SOH_AFTER_SUPPLY for W3, right?
If yes, why are you expecting it to be 9000 for W4??
So in output should be...
PRODUCT WE SOH DEMAND SUPPLY EOH SOH_AFTER_SUPPLY
12345 W1 10000 0 0 0 10000
12345 W2 10000 100 50 0 9950
12345 W3 9950 100 50 0 *9900*
12345 W4 *9000* 100 50 0 9850
per logic you explained, shouldn't it be *9900* instead???
you could customize Martin Preiss's logic for your requirement :
SQL> with
2 data
3 As
4 (
5 Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
6 Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
7 Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
8 Select 12345, 'W4', 0, 100, 50, 0 From dual
9 )
10 Select Product
11 ,Week
12 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week)+Supply Soh
13 ,Demand
14 ,Supply
15 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week) eoh
16 from data;
PRODUCT WE SOH DEMAND SUPPLY EOH
12345 W1 10000 0 0 10000
12345 W2 10000 100 50 9950
12345 W3 9950 100 50 9900
12345 W4 9900 100 50 9850
Vivek L
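As a further hedged variation on that reply (untested): if demand is also meant to reduce the balance, as the expected output in the original post suggests, analytic running sums of demand and supply give both SOH and EOH in one pass.
WITH data AS (
  SELECT 12345 product, 'W1' week, 10000 soh, 0 demand, 0 supply FROM dual UNION ALL
  SELECT 12345, 'W2', 0, 100, 50 FROM dual UNION ALL
  SELECT 12345, 'W3', 0, 100, 50 FROM dual UNION ALL
  SELECT 12345, 'W4', 0, 100, 50 FROM dual
)
SELECT product
     , week
     , SUM (soh) OVER (PARTITION BY product ORDER BY week)
       - NVL (SUM (demand - supply)
                OVER (PARTITION BY product ORDER BY week
                      ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS soh
     , demand
     , supply
     , SUM (soh) OVER (PARTITION BY product ORDER BY week)
       - SUM (demand - supply) OVER (PARTITION BY product ORDER BY week) AS eoh
FROM data;
With the sample rows this returns 10000, 9950, 9900 and 9850 for EOH, matching the expected output except for the W4 row, which the reply above already flags as inconsistent.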
Is it possible using Analytical functions?
Hi,
I have the following data
Column1 Column2
2005 500
2006 500
2007 500
2008 500
Now, if I've some variable value as 800, then the output record should be
Column1 Column2
2008 500
2007 300
2006 0
2005 0
i.e. the Column2 value (order by column1 desc) is split to accommodate the variable passed.
Right now, it's being done in PL/SQL. Is it possible to do it in SQL using Analytical function?
Thanks,
Sundar
P.S: It doesn't have to be using analytical; if it can be achieved in SQL, it's good.
Message was edited by:
Sundar M
Hi, a sample using analytical function SUM:
CREATE TABLE Source_Data
( Year NUMBER
, Value NUMBER
);
BEGIN
DELETE FROM Source_Data;
FOR v_Cycle IN 1 .. 6
LOOP
INSERT
INTO Source_Data
( Year
, Value
)
VALUES
( 2000 + v_Cycle
, 100 * v_Cycle
);
END LOOP;
COMMIT;
END;
/
VARIABLE v_Amount NUMBER
EXECUTE :v_Amount := 1200
Using the SUM, the previous values are totalized:
so
SELECT Year
, Value Year_Value
, :v_Amount Original_Amount
, SUM(Value) OVER (ORDER BY Year DESC RANGE UNBOUNDED PRECEDING) Cumulative_Sum
, DECODE(
SIGN(:v_Amount - SUM(Value) OVER (ORDER BY Year DESC RANGE UNBOUNDED PRECEDING))
, 1, Value -- Positive number, more value can be subtracted
, GREATEST(Value - (SUM(Value) OVER (ORDER BY Year DESC RANGE UNBOUNDED PRECEDING) - :v_Amount), 0)
) Year_Quota
FROM Source_Data s
ORDER BY Year DESC
/
will give
YEAR YEAR_VALUE ORIGINAL_AMOUNT CUMULATIVE_SUM YEAR_QUOTA
2006 600 1200 600 600
2005 500 1200 1100 500
2004 400 1200 1500 100
2003 300 1200 1800 0
2002 200 1200 2000 0
2001 100 1200 2100 0
You can add different conditions (PARTITION BY ..)
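For instance, a minimal hedged sketch of what that PARTITION BY extension could look like; the account_id here is hypothetical, added through an inline view only to illustrate allocating each account's amount independently:
WITH accounts AS (
  SELECT 1 AS account_id   -- hypothetical column, for illustration only
       , s.Year
       , s.Value
  FROM Source_Data s
)
SELECT account_id
     , Year
     , Value
     , SUM (Value) OVER (PARTITION BY account_id
                         ORDER BY Year DESC
                         RANGE UNBOUNDED PRECEDING) AS cumulative_sum
FROM accounts
ORDER BY account_id, Year DESC;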
Hope this helps
Max -
Restrict Query Resultset which uses Analytic Function
Gents,
Problem Definition: Using Analytic Function, get Total sales for the Product P1
and Customer C1 [Total sales for the customer itself] in one line.
I want to restrict the ResultSet of the query to Product P1,
please look at the data below, queries and problems..
Data
Customer Product Qtr Sales
C1 P1 19991 100.00
C1 P1 19992 125.00
C1 P1 19993 175.00
C1 P1 19994 300.00
C1 P2 19991 100.00
C1 P2 19992 125.00
C1 P2 19993 175.00
C1 P2 19994 300.00
C2 P1 19991 100.00
C2 P1 19992 125.00
C2 P1 19993 175.00
C2 P1 19994 300.00
Problem, I want to display....
Customer Product ProdSales CustSales
C1 P1 700 1400
But without using an outer query, i.e. please look below for the query that
returns this result with two selects; I want this result in one query only..
Select * From ----*** want to avoid this... ***----
(Select Customer,Product,
Sum(Sales) ProdSales,
Sum(Sum(Sales)) Over(Partition By Customer) CustSales
From t1
Where customer='C1')
Where
Product='P1' ;
Also, I want to avoid Hard coding of P1 in the select clause....
I mean, I can do it in one shot/select, but look at the query below, it uses
P1 in the select clause, which is No No!! P1 is allowed only in Where or Having ..
Select Customer,Decode(Product, 'P1','P1','P1') Product,
Decode(Product,'P1',Sales,0) ProdSales,
Sum(Sum(Sales)) Over (Partition By Customer ) CustSales
From t1
Where customer='C1' ;
This will get me what I want, but as I said earlier, I want to avoid using P1 in the
Select clause..
Goal is to Avoid using
1-> Two Select/Outer Query/In Line Views
2-> Product 'P1' in the Select clause...
Thanks
-Dhaval Rasania
I don't understand goal number 1 of not using an inline view.
What is the harm? -
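For what it's worth, a hedged sketch of one possible compromise (untested, assuming the t1 table shown above): keep 'P1' in the WHERE clause and fetch the customer total through a correlated scalar subquery rather than an outer query; whether a scalar subquery satisfies the "one SELECT" goal is debatable, so treat this purely as an illustration.
SELECT p.customer
     , p.product
     , SUM (p.sales) AS prodsales
     , (SELECT SUM (c.sales)
        FROM t1 c
        WHERE c.customer = p.customer) AS custsales   -- total across all products for the customer
FROM t1 p
WHERE p.customer = 'C1'
  AND p.product  = 'P1'
GROUP BY p.customer, p.product;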
Select using XMLAGG function cutting off the string
Hello,
I have the string "Oracle & Oracle"; when run through the following statement I am getting "&amp;" instead of "&".
How can I avoid this and get the right values?
SQL> select col1 from t;
COL1
SQL & SQL
Test & Test
Oracle & Oracle
SQL> select
2 RTRIM (XMLAGG (XMLELEMENT (E,XMLATTRIBUTES (col1|| ',' AS "Seg"))ORDER BY col1 ASC).EXTRACT ('./E[not(@Seg = preceding-sibling::E/@Seg)]/@Seg'),',') col1
3 from t;
COL1
Oracle &amp; Oracle,SQL &amp; SQL,Test &amp; Test
The expected output is
COL1
Oracle & Oracle,SQL & SQL,Test & Test
Any help would be greatly appreciated
Thanks,
Hi,
& has a special meaning in XML.
One way to avoid the problem is to avoid using '&' in your XML opeations. If you can identify some string that never occurs in col1 (I used '~?~' below) then you can change all the '&'s in col1 to that string before doing the XML operations, and change all the '~?~'s back to '&'s afterwards, like this:
select REPLACE ( RTRIM ( XMLAGG ( XMLELEMENT ( E
                                             , XMLATTRIBUTES ( REPLACE (col1, '&', '~?~')
                                                               || ',' AS "Seg"
                                                             )
                                             ) ORDER BY col1 ASC
                                ).EXTRACT ('./E[not(@Seg = preceding-sibling::E/@Seg)]/@Seg')
                       , ','
                       )
               , '~?~'
               , '&'
               ) AS new_col1
from t; -
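As a side note, and only as a hedged assumption about the Oracle version in use: on 11g Release 2 and later, LISTAGG avoids the XML escaping issue entirely, although duplicates then have to be removed in a sub-query first, since LISTAGG has no DISTINCT option in that release.
SELECT LISTAGG (col1, ',') WITHIN GROUP (ORDER BY col1) AS col1
FROM (SELECT DISTINCT col1 FROM t);   -- de-duplicate before aggregating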
Should I use Analytic functions ?
Hello,
I have a table rci_dates with the following structure (rci_id,visit_id,rci_name,rci_date).
A sample of data in this table is as given below.
1,101,'FIRST VISIT', '2010-MAY-01',
2,101,'FIRST VISIT', '2010-MAY-01'
3,101,'FIRST VISIT', '2010-MAY-01'
4,101,'FIRST VISIT', '2010-MAY-01'
5,102,'SECOND VISIT', '2010-JUN-01',
6,102,'SECOND VISIT', '2010-JUN-01'
7,102,'SECOND VISIT', '2010-JUN-01'
8,102,'SECOND VISIT', '2010-JUL-01'
I want to write a query which returns me the records which are similar to the record with rci_id = 8, since the rci_date is different within visit_id 102. Whereas in visit_id 101 the rci_dates are all the same, so it should not be displayed in the output returned by my query.
How can I do this? Should I be using analytic functions? Can someone please let me know.
Thanks
OK, I have created the table and inserted the data, but it appears that the data are the output you are expecting; they all have the same visit_id.
SQL> CREATE TABLE RCI
2 (RCI_ID NUMBER(10) NOT NULL,
3 VISIT_ID NUMBER(10) NOT NULL,
4 RCI_NAME VARCHAR2(20 BYTE) NOT NULL,
5 DCI_DATE VARCHAR2(8 BYTE));
Table created
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876540, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876640, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876740, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876840, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876940, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877040, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877140, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877240, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877240, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877640, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877740, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877840, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877940, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878040, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878140, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878240, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878340, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878440, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878540, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877640, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877740, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878340, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878540, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 17418240, 12140, 'SCREENING', '20000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 17418340, 12140, 'SCREENING', '20000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 17418440, 12140, 'SCREENING', '20000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878240, 12140, 'SCREENING', '20000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 18790240, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 21724540, 12140, 'SCREENING', '19000101');
1 row inserted
SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876540, 12140, 'SCREENING', '20091015');
1 row inserted
SQL> commit;
Commit complete
SQL> select * from rci;
RCI_ID VISIT_ID RCI_NAME DCI_DATE
14876540 12140 SCREENING 19000101
14876640 12140 SCREENING 19000101
14876740 12140 SCREENING 19000101
14876840 12140 SCREENING 19000101
14876940 12140 SCREENING 19000101
14877040 12140 SCREENING 19000101
14877140 12140 SCREENING 19000101
14877240 12140 SCREENING 19000101
14877240 12140 SCREENING 19000101
14877640 12140 SCREENING 19000101
14877740 12140 SCREENING 19000101
14877840 12140 SCREENING 19000101
14877940 12140 SCREENING 19000101
14878040 12140 SCREENING 19000101
14878140 12140 SCREENING 19000101
14878240 12140 SCREENING 19000101
14878340 12140 SCREENING 19000101
14878440 12140 SCREENING 19000101
14878540 12140 SCREENING 19000101
14877640 12140 SCREENING 19000101
14877740 12140 SCREENING 19000101
14878340 12140 SCREENING 19000101
14878540 12140 SCREENING 19000101
17418240 12140 SCREENING 20000101
17418340 12140 SCREENING 20000101
17418440 12140 SCREENING 20000101
14878240 12140 SCREENING 20000101
18790240 12140 SCREENING 19000101
21724540 12140 SCREENING 19000101
14876540 12140 SCREENING 20091015
30 rows selected
SQL> -- using similar code to the sample that I have previously posted, it returned all the rows.
SQL> select rci.*
2 from rci
3 where rci.visit_id in (select r1.visit_id
4 from (select rci.visit_id,
5 count(*) over (partition by rci.visit_id, rci.dci_date order by rci.visit_id) rn
6 from rci) r1
7 where r1.rn = 1)
8 order by rci.rci_id;
RCI_ID VISIT_ID RCI_NAME DCI_DATE
14876540 12140 SCREENING 20091015
14876540 12140 SCREENING 19000101
14876640 12140 SCREENING 19000101
14876740 12140 SCREENING 19000101
14876840 12140 SCREENING 19000101
14876940 12140 SCREENING 19000101
14877040 12140 SCREENING 19000101
14877140 12140 SCREENING 19000101
14877240 12140 SCREENING 19000101
14877240 12140 SCREENING 19000101
14877640 12140 SCREENING 19000101
14877640 12140 SCREENING 19000101
14877740 12140 SCREENING 19000101
14877740 12140 SCREENING 19000101
14877840 12140 SCREENING 19000101
14877940 12140 SCREENING 19000101
14878040 12140 SCREENING 19000101
14878140 12140 SCREENING 19000101
14878240 12140 SCREENING 19000101
14878240 12140 SCREENING 20000101
14878340 12140 SCREENING 19000101
14878340 12140 SCREENING 19000101
14878440 12140 SCREENING 19000101
14878540 12140 SCREENING 19000101
14878540 12140 SCREENING 19000101
17418240 12140 SCREENING 20000101
17418340 12140 SCREENING 20000101
17418440 12140 SCREENING 20000101
18790240 12140 SCREENING 19000101
21724540 12140 SCREENING 19000101
30 rows selected
SQL> just as what frank have said it will be helpful if you post a sample output based on the original posting, that is in the first posting you have.