Restrict Query Result Set Which Uses an Analytic Function

Gents,
Problem definition: using an analytic function, get the total sales for product P1
and for customer C1 [total sales for the customer itself] in one row.
I want to restrict the result set of the query to product P1.
Please look at the data below, then the queries and the problems.
Data
Customer Product Qtr Sales
C1 P1 19991 100.00
C1 P1 19992 125.00
C1 P1 19993 175.00
C1 P1 19994 300.00
C1 P2 19991 100.00
C1 P2 19992 125.00
C1 P2 19993 175.00
C1 P2 19994 300.00
C2 P1 19991 100.00
C2 P1 19992 125.00
C2 P1 19993 175.00
C2 P1 19994 300.00
Problem, I want to display....
Customer Product ProdSales CustSales
C1 P1 700 1400
But without using an outer query. That is, please look below at the query that
returns this result with two SELECTs; I want this result in one query only.
Select * From ----*** want to avoid this... ***----
(Select Customer, Product,
        Sum(Sales) ProdSales,
        Sum(Sum(Sales)) Over (Partition By Customer) CustSales
 From t1
 Where customer = 'C1'
 Group By Customer, Product)
Where
Product = 'P1';
Also, I want to avoid hard-coding P1 in the SELECT clause.
I mean, I can do it in one shot/select, but look at the query below: it uses
P1 in the SELECT clause, which is a no-no!! P1 is allowed only in WHERE or HAVING.
Select Customer, Decode(Product, 'P1', 'P1', 'P1') Product,
       Sum(Decode(Product, 'P1', Sales, 0)) ProdSales,
       Sum(Sum(Sales)) Over (Partition By Customer) CustSales
From t1
Where customer = 'C1'
Group By Customer, Decode(Product, 'P1', 'P1', 'P1');
This will get me what I want, but as I said earlier, I want to avoid using P1 in the
SELECT clause.
The goal is to avoid using:
1-> Two SELECTs / an outer query / inline views
2-> Product 'P1' in the SELECT clause
Thanks
-Dhaval Rasania

I don't understand goal number 1 of not using an inline view.
What is the harm?
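For comparison, here is roughly the same logic written with a WITH clause (subquery factoring) instead of a nested inline view. It is still a second SELECT under the covers, but 'P1' stays out of the SELECT list and the query reads top-down (table t1 as in the post above):
With prod_totals As (
     Select Customer
           ,Product
           ,Sum(Sales)                                    ProdSales
           ,Sum(Sum(Sales)) Over (Partition By Customer)  CustSales
     From   t1
     Where  Customer = 'C1'
     Group By Customer, Product
)
Select Customer, Product, ProdSales, CustSales
From   prod_totals
Where  Product = 'P1';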

Similar Messages

  • How can I restrict the rows of a SELECT which uses analytical functions?

    Hello all,
    Can anyone please tell me how to restrict the following query:
    SELECT empno,
    ename,
    deptno,
    SUM(sal) over(PARTITION BY deptno) sum_per_dept
    FROM emp;
    I would need just the lines which have sum_per_dept>100, without using a SUBSELECT.
    Is there any way which is specific for analytical functions?
    Thank you in advance,
    Eugen

    SQL> select empno,
      2  ename,
      3  deptno,sum_per_dept
      4  from
      5  (
      6  SELECT empno,
      7  ename,
      8  deptno,
      9  SUM(sal) over(PARTITION BY deptno) sum_per_dept
    10  FROM emp
    11  )
    12  where sum_per_dept>1000;
    EMPNO ENAME      DEPTNO SUM_PER_DEPT
    7839 KING           10         8750
    7782 CLARK          10         8750
    7934 MILLER         10         8750
    7902 FORD           20         6775
    7369 SMITH          20         6775
    7566 JONES          20         6775
    7900 JAMES          30         9400
    7844 TURNER         30         9400
    7654 MARTIN         30         9400
    7521 WARD           30         9400
    7499 ALLEN          30         9400
    7698 BLAKE          30         9400
    12 rows selected
    SQL>
    SQL> select empno,
      2  ename,
      3  deptno,sum_per_dept
      4  from
      5  (
      6  SELECT empno,
      7  ename,
      8  deptno,
      9  SUM(sal) over(PARTITION BY deptno) sum_per_dept
    10  FROM emp
    11  )
    12  where sum_per_dept>9000;
    EMPNO ENAME      DEPTNO SUM_PER_DEPT
    7900 JAMES          30         9400
    7844 TURNER         30         9400
    7654 MARTIN         30         9400
    7521 WARD           30         9400
    7499 ALLEN          30         9400
    7698 BLAKE          30         9400
    6 rows selected
    SQL>
    Greetings...
    Sim

  • How can I rewrite the Query using Analytical functions?

    Hi,
    I have the SQL script shown below:
    SELECT cd.cardid, cd.cardno, tt.transactiontypecode, tt.transactiontypedesc description,
           SUM (NVL (CASE tt.transactiontypecode WHEN 'LOAD_ACH'               THEN th.transactionamount END, 0)) AS load_ach,
           SUM (NVL (CASE tt.transactiontypecode WHEN 'FUND_TRANSFER_RECEIVED' THEN th.transactionamount END, 0)) AS transfersin,
             SUM (NVL (CASE tt.transactiontypecode WHEN 'FTRNS'      THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'SEND_MONEY' THEN th.transactionamount END, 0)) AS transferout,
           SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_ACH'   THEN th.transactionamount END, 0)) AS withdrawal_ach,
           SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_CHECK' THEN th.transactionamount END, 0)) AS withdrawal_check,
             SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_CHECK_FEE'      THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'REJECTED_ACH_LOAD_FEE'     THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_ACH_REV'        THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_CHECK_REV'      THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'WITHDRAWAL_CHECK_FEE_REV'  THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'REJECTED_ACH_LOAD_FEE_REV' THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'OVERDRAFT_FEE_REV'         THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'STOP_CHECK_FEE_REV'        THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'LOAD_ACH_REV'              THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'OVERDRAFT_FEE'             THEN th.transactionamount END, 0))
           + SUM (NVL (CASE tt.transactiontypecode WHEN 'STOP_CHECK_FEE'            THEN th.transactionamount END, 0)) AS fee,
           th.transactiondatetime
    FROM   carddetail cd,
           transactionhistory th,
           transactiontype tt,
           (SELECT rmx_a.cardid, rmx_a.endingbalance prev_balance, rmx_a.numberofdays
            FROM   rmxactbalreport rmx_a,
                   (SELECT cardid, MAX (reportdate) reportdate
                    FROM   rmxactbalreport
                    GROUP BY cardid) rmx_b
            WHERE  rmx_a.cardid = rmx_b.cardid AND rmx_a.reportdate = rmx_b.reportdate) a
    WHERE  th.transactiontypeid = tt.transactiontypeid
    AND    cd.cardid = th.cardid
    AND    cd.cardtype = 'P'
    AND    cd.cardid = a.cardid (+)
    AND    cd.cardno = '7116734387812758335'
    --AND  tt.transactiontypecode = 'FUND_TRANSFER_RECEIVED'
    GROUP BY cd.cardid, cd.cardno, numberofdays, th.transactiondatetime, tt.transactiontypecode, tt.transactiontypedesc
    Output of the above query is:
    CARDID     CARDNO     TRANSACTIONTYPECODE     DESCRIPTION     LOAD_ACH     TRANSFERSIN     TRANSFEROUT     WITHDRAWAL_ACH     WITHDRAWAL_CHECK     FEE     TRANSACTIONDATETIME
    6005     7116734387812758335     FUND_TRANSFER_RECEIVED     Fund Transfer Received     0     3.75     0     0     0     0     21/09/2007 11:15:38 AM
    6005     7116734387812758335     FUND_TRANSFER_RECEIVED     Fund Transfer Received     0     272     0     0     0     0     05/10/2007 9:12:37 AM
    6005     7116734387812758335     WITHDRAWAL_ACH     Withdraw Funds via ACH     0     0     0     300     0     0     24/10/2007 3:43:54 PM
    6005     7116734387812758335     SEND_MONEY     Fund Transfer Sent     0     0     1     0     0     0     19/09/2007 1:17:48 PM
    6005     7116734387812758335     FUND_TRANSFER_RECEIVED     Fund Transfer Received     0     1     0     0     0     0     18/09/2007 7:25:23 PM
    6005     7116734387812758335     LOAD_ACH     Prepaid Deposit via ACH     300     0     0     0     0     0     02/10/2007 3:00:00 AM
    I want the output to have one record per transaction type (e.g. a single record for LOAD_ACH), and so on.
    Can anyone help me: how can I rewrite the above query using analytic functions?
    Sekhar

    Not sure of your requirements, but this may help reduce your code;
    <untested>
    SUM (
       CASE
       WHEN tt.transactiontypecode IN
          ('WITHDRAWAL_CHECK_FEE', 'REJECTED_ACH_LOAD_FEE', 'WITHDRAWAL_ACH_REV', 'WITHDRAWAL_CHECK_REV',
           'WITHDRAWAL_CHECK_FEE_REV', 'REJECTED_ACH_LOAD_FEE_REV', 'OVERDRAFT_FEE_REV', 'STOP_CHECK_FEE_REV',
           'LOAD_ACH_REV', 'OVERDRAFT_FEE', 'STOP_CHECK_FEE')
       THEN th.transactionamount
       ELSE 0
       END) fee
    Also, you might want to edit your post and use [pre] and [/pre] tags around your code for formatting.

  • Query for using "analytical functions" in DWH...

    Dear team,
    I would like to know if the following task can be done using analytical functions...
    If it can be done in other ways, please do share the ideas...
    I have a table as shown below..
    Create Table t As
    Select *
    From
    (
    Select 12345 PRODUCT, 'W1' WEEK,  10000 SOH, 0 DEMAND, 0 SUPPLY,     0 EOH From dual Union All
    Select 12345,         'W2',       0,         100,      50,        0 From dual Union All
    Select 12345,         'W3',       0,         100,      50,        0 From dual Union All
    Select 12345,         'W4',       0,         100,      50,        0 From dual
    );
    PRODUCT     WEEK     SOH     DEMAND     SUPPLY     EOH
    12345     W1     10,000     0     0     10000
    12345     W2     0     100     50     0
    12345     W3     0     100     50     0
    12345     W4     0     100     50     0
    Now I want to calculate the EOH (ending on hand) quantity for W1...
    This EOH for W1 becomes the SOH (starting on hand) for W2... and so on, till the end of the weeks.
    The formula is: EOH = SOH - (DEMAND + SUPPLY)
    The output should be as follows...
    PRODUCT     WEEK     SOH     DEMAND     SUPPLY     EOH
    12345     W1     10,000               10000
    12345     W2     10,000     100     50     9950
    12345     W3     9,950     100     50     9900
    12345     W4     9,000     100     50     8950
    Kindly share your ideas...

    Nicloei W wrote:
    Means SOH_AFTER_SUPPLY for W1, should be displayed under SOH FOR W2...i.e. SOH for W4 should be SOH_AFTER_SUPPLY for W3, right?
    If yes, why are you expecting it to be 9000 for W4??
    So the output should be...
    PRODUCT WE        SOH     DEMAND     SUPPLY        EOH SOH_AFTER_SUPPLY
    12345 W1      10000          0          0          0            10000
    12345 W2      10000      100         50          0             9950
    12345 W3      9950       100         50          0             *9900*
    12345 W4      *9000*       100         50          0             9850
    per logic you explained, shouldn't it be *9900* instead???
    you could customize Martin Preiss's logic for your requirement :
    SQL> with
      2  data
      3  As
      4  (
      5  Select 12345 PRODUCT, 'W1' WEEK,  10000 SOH, 0 DEMAND, 0 SUPPLY,   0 EOH From dual Union All
      6  Select 12345,         'W2',       0,         100,      50,        0 From dual Union All
      7  Select 12345,         'W3',       0,         100,      50,        0 From dual Union All
      8  Select 12345,         'W4',       0,         100,      50,        0 From dual
      9  )
    10  Select Product
    11  ,Week
    12  , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week)+Supply Soh
    13  ,Demand
    14  ,Supply
    15  , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week) eoh
    16  from  data;
       PRODUCT WE        SOH     DEMAND     SUPPLY        EOH
         12345 W1      10000          0          0      10000
         12345 W2      10000        100         50       9950
         12345 W3       9950        100         50       9900
         12345 W4       9900        100         50       9850
    Vivek L
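    For what it's worth, if the intended rule is EOH = SOH - DEMAND + SUPPLY carried forward week by week (which reproduces the 9950/9900 rows above), a variation that folds demand in explicitly might look like this (untested sketch against the same sample data):
    With data As (
         Select 12345 Product, 'W1' Week, 10000 Soh, 0 Demand, 0 Supply From dual Union All
         Select 12345,         'W2',      0,         100,      50       From dual Union All
         Select 12345,         'W3',      0,         100,      50       From dual Union All
         Select 12345,         'W4',      0,         100,      50       From dual
    )
    Select Product
          ,Week
          -- carried-forward SOH = previous week's EOH
          ,Sum(Soh - Demand + Supply) Over (Partition By Product Order By Week) + Demand - Supply As Soh
          ,Demand
          ,Supply
          -- running EOH = cumulative (SOH - DEMAND + SUPPLY)
          ,Sum(Soh - Demand + Supply) Over (Partition By Product Order By Week)                   As Eoh
    From   data;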

  • Using Analytic functions...

    Hi All,
    I need help in writing a query using analytic functions.
    Following is my scenario. I have a table cust_points:
    CREATE TABLE cust_points
    ( cust_id varchar2(10),
      pts_dt date,
      reward_points number(3),
      bal_points number(3)
    );
    insert into cust_points values ('ABC', '01-MAY-2004', 5, 15);
    insert into cust_points values ('ABC', '05-MAY-2004', 3, 12);
    insert into cust_points values ('ABC', '09-MAY-2004', 3, 9);
    insert into cust_points values ('XYZ', '02-MAY-2004', 8, 4);
    insert into cust_points values ('XYZ', '03-MAY-2004', 5, 1);
    insert into cust_points values ('JKL', '10-MAY-2004', 5, 11);
    I want a result set which shows, for each customer, the sum of his/her reward points
    but the balance points as of the last date. So for the above I should have the following results:
    cust_id reward_pts bal_points
    ABC 11 9
    XYZ 13 1
    JKL 5 11
    I have tried using last_value(), e.g.
    Select cust_id, sum(reward_points), last_value(bal_points) over (partition by cust_id)... but I run into grouping errors.
    Can anyone help ?

    try this...
    SELECT a.pkcol,
         nvl(SUM(b.col1),0) col1,
         nvl(SUM(b.col2),0) col2,
         nvl(SUM(b.col3),0) col3
    FROM table1 a, table2 b, table3 c
    WHERE a.pkcol = b.pkcol(+)
    AND a.pkcol = c.pkcol
    GROUP BY a.pkcol;
    SQL> select a.deptno,
    2 nvl((select sum(sal) from test_emp b where a.deptno = b.deptno),0) col1,
    3 nvl((select sum(comm) from test_emp b where a.deptno = b.deptno),0) col2
    4 from test_dept a;
    DEPTNO COL1 COL2
    10 12786 0
    20 13237 738
    30 11217 2415
    40 0 0
    99 0 0
    SQL> select a.deptno,
    2 nvl(sum(b.sal),0) col1,
    3 nvl(sum(b.comm),0) col2
    4 from test_dept a,test_emp b
    5 where a.deptno = b.deptno
    6 group by a.deptno;
    DEPTNO COL1 COL2
    30 11217 2415
    20 13237 738
    10 12786 0
    SQL> select a.deptno,
    2 nvl(sum(b.sal),0) col1,
    3 nvl(sum(b.comm),0) col2
    4 from test_dept a,test_emp b
    5 where a.deptno = b.deptno(+)
    6 group by a.deptno;
    DEPTNO COL1 COL2
    10 12786 0
    20 13237 738
    30 11217 2415
    40 0 0
    99 0 0
    SQL>
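    Going back to the original cust_points question: one way that avoids the grouping error is to stay with plain aggregates and use KEEP (DENSE_RANK LAST ...) to pick the balance on each customer's latest date. A sketch, assuming the cust_points table posted above:
    SELECT cust_id
    ,      SUM (reward_points)                                      AS reward_pts
    ,      MAX (bal_points) KEEP (DENSE_RANK LAST ORDER BY pts_dt)  AS bal_points
    FROM   cust_points
    GROUP BY cust_id;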

  • Should I use Analytic functions ?

    Hello,
    I have a table rci_dates with the following structure (rci_id,visit_id,rci_name,rci_date).
    A sample of data in this table is as given below.
    1,101,'FIRST VISIT', '2010-MAY-01',
    2,101,'FIRST VISIT', '2010-MAY-01'
    3,101,'FIRST VISIT', '2010-MAY-01'
    4,101,'FIRST VISIT', '2010-MAY-01'
    5,102,'SECOND VISIT', '2010-JUN-01',
    6,102,'SECOND VISIT', '2010-JUN-01'
    7,102,'SECOND VISIT', '2010-JUN-01'
    8,102,'SECOND VISIT', '2010-JUL-01'
    I want to write a query which returns the records that are similar to the record with rci_id = 8, since the rci_date is different within visit_id 102. Whereas in visit_id 101 the rci_dates are all the same, so those rows should not be displayed in the output returned by my query.
    How can I do this? Should I be using analytic functions? Can someone please let me know.
    Thanks

    OK, I have created the table and inserted the data, but it appears that the data you posted are already the output you are expecting; they all have the same visit_id.
    SQL> CREATE TABLE RCI
      2  (RCI_ID NUMBER(10) NOT NULL,
      3   VISIT_ID NUMBER(10) NOT NULL,
      4   RCI_NAME VARCHAR2(20 BYTE) NOT NULL,
      5   DCI_DATE VARCHAR2(8 BYTE));
    Table created
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876540, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876640, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876740, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876840, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876940, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877040, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877140, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877240, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877240, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877640, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877740, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877840, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877940, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878040, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878140, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878240, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878340, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878440, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878540, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877640, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14877740, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878340, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878540, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 17418240, 12140, 'SCREENING', '20000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 17418340, 12140, 'SCREENING', '20000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 17418440, 12140, 'SCREENING', '20000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14878240, 12140, 'SCREENING', '20000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 18790240, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 21724540, 12140, 'SCREENING', '19000101');
    1 row inserted
    SQL> INSERT INTO RCI ( RCI_ID, VISIT_ID, RCI_NAME, DCI_DATE ) VALUES ( 14876540, 12140, 'SCREENING', '20091015');
    1 row inserted
    SQL> commit;
    Commit complete
    SQL> select * from rci;
         RCI_ID    VISIT_ID RCI_NAME             DCI_DATE
       14876540       12140 SCREENING            19000101
       14876640       12140 SCREENING            19000101
       14876740       12140 SCREENING            19000101
       14876840       12140 SCREENING            19000101
       14876940       12140 SCREENING            19000101
       14877040       12140 SCREENING            19000101
       14877140       12140 SCREENING            19000101
       14877240       12140 SCREENING            19000101
       14877240       12140 SCREENING            19000101
       14877640       12140 SCREENING            19000101
       14877740       12140 SCREENING            19000101
       14877840       12140 SCREENING            19000101
       14877940       12140 SCREENING            19000101
       14878040       12140 SCREENING            19000101
       14878140       12140 SCREENING            19000101
       14878240       12140 SCREENING            19000101
       14878340       12140 SCREENING            19000101
       14878440       12140 SCREENING            19000101
       14878540       12140 SCREENING            19000101
       14877640       12140 SCREENING            19000101
       14877740       12140 SCREENING            19000101
       14878340       12140 SCREENING            19000101
       14878540       12140 SCREENING            19000101
       17418240       12140 SCREENING            20000101
       17418340       12140 SCREENING            20000101
       17418440       12140 SCREENING            20000101
       14878240       12140 SCREENING            20000101
       18790240       12140 SCREENING            19000101
       21724540       12140 SCREENING            19000101
       14876540       12140 SCREENING            20091015
    30 rows selected
    SQL> -- using sample code similar to what I previously posted, it returned all the rows.
    SQL> select rci.*
      2    from rci
      3   where rci.visit_id in (select r1.visit_id
      4                            from (select rci.visit_id,
      5                                         count(*) over (partition by rci.visit_id, rci.dci_date order by rci.visit_id) rn
      6                                    from rci) r1
      7                            where r1.rn = 1)
      8  order by rci.rci_id;
         RCI_ID    VISIT_ID RCI_NAME             DCI_DATE
       14876540       12140 SCREENING            20091015
       14876540       12140 SCREENING            19000101
       14876640       12140 SCREENING            19000101
       14876740       12140 SCREENING            19000101
       14876840       12140 SCREENING            19000101
       14876940       12140 SCREENING            19000101
       14877040       12140 SCREENING            19000101
       14877140       12140 SCREENING            19000101
       14877240       12140 SCREENING            19000101
       14877240       12140 SCREENING            19000101
       14877640       12140 SCREENING            19000101
       14877640       12140 SCREENING            19000101
       14877740       12140 SCREENING            19000101
       14877740       12140 SCREENING            19000101
       14877840       12140 SCREENING            19000101
       14877940       12140 SCREENING            19000101
       14878040       12140 SCREENING            19000101
       14878140       12140 SCREENING            19000101
       14878240       12140 SCREENING            19000101
       14878240       12140 SCREENING            20000101
       14878340       12140 SCREENING            19000101
       14878340       12140 SCREENING            19000101
       14878440       12140 SCREENING            19000101
       14878540       12140 SCREENING            19000101
       14878540       12140 SCREENING            19000101
       17418240       12140 SCREENING            20000101
       17418340       12140 SCREENING            20000101
       17418440       12140 SCREENING            20000101
       18790240       12140 SCREENING            19000101
       21724540       12140 SCREENING            19000101
    30 rows selected
    SQL>
    Just as Frank said, it would be helpful if you post a sample of the expected output based on the data in your original posting.
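    If the goal is to return every row of a visit whose rci_date varies within that visit (like visit 102 in the first post), something along these lines might work; this is an untested sketch using the rci_dates table and column names from the original question:
    SELECT rci_id, visit_id, rci_name, rci_date
    FROM  (SELECT rd.*
           ,      COUNT (DISTINCT rci_date) OVER (PARTITION BY visit_id) AS date_cnt
           FROM   rci_dates rd)
    WHERE  date_cnt > 1;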

  • Using analytical function to calculate concurrency between date range

    Folks,
    I'm trying to use analytical functions to come up with a query that gives me the
    concurrency of jobs executing between a date range.
    For example:
    JOB100 - started at 9AM - stopped at 11AM
    JOB200 - started at 10AM - stopped at 3PM
    JOB300 - started at 12PM - stopped at 2PM
    The query would tell me that JOB100 ran with a concurrency of 2, because JOB100 and JOB200
    were running within the same time window. JOB200 ran with a concurrency
    of 3 because all three jobs ran within its start and stop time. The output would look like this.
    JOB START STOP CONCURRENCY
    === ==== ==== =========
    100 9AM 11AM 2
    200 10AM 3PM 3
    300 12PM 2PM 2
    I've been looking at this post, and this one is very similar...
    Analytic functions using window date range
    Here is the sample data..
    CREATE TABLE TEST_JOB
    ( jobid NUMBER,
      created_time DATE,
      start_time DATE,
      stop_time DATE
    );
    insert into TEST_JOB values (100, sysdate -1, to_date('05/04/08 09:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 11:00:00','MM/DD/YY hh24:mi:ss'));
    insert into TEST_JOB values (200, sysdate -1, to_date('05/04/08 10:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 13:00:00','MM/DD/YY hh24:mi:ss'));
    insert into TEST_JOB values (300, sysdate -1, to_date('05/04/08 12:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 14:00:00','MM/DD/YY hh24:mi:ss'));
    select * from test_job;
    JOBID|CREATED_TIME |START_TIME |STOP_TIME
    ----------|--------------|--------------|--------------
    100|05/04/08 09:28|05/04/08 09:00|05/04/08 11:00
    200|05/04/08 09:28|05/04/08 10:00|05/04/08 13:00
    300|05/04/08 09:28|05/04/08 12:00|05/04/08 14:00
    Any help with this query would be greatly appreciated.
    thanks.
    -peter

    After some checking, the model rule wasn't working exactly as expected.
    I believe it's working right now. I'm posting a self-contained example for completeness' sake. I use 2 functions to convert back and forth between dates and epoch Unix timestamps, so
    I'll post them here as well.
    Like I said, I think this works okay, but any feedback is always appreciated.
    -peter
    CREATE OR REPLACE FUNCTION date_to_epoch(p_dateval IN DATE)
    RETURN NUMBER
    AS
    BEGIN
    return (p_dateval - to_date('01/01/1970','MM/DD/YYYY')) * (24 * 3600);
    END;
    CREATE OR REPLACE FUNCTION epoch_to_date (p_epochval IN NUMBER DEFAULT 0)
    RETURN DATE
    AS
    BEGIN
    return to_date('01/01/1970','MM/DD/YYYY') + (( p_epochval) / (24 * 3600));
    END;
    DROP TABLE TEST_MODEL3 purge;
    CREATE TABLE TEST_MODEL3
    ( jobid NUMBER,
    start_time NUMBER,
    end_time NUMBER);
    insert into TEST_MODEL3
    VALUES (300,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 19:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (200,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 12:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (400,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 14:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (500,date_to_epoch(to_date('05/07/2008 11:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 16:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (600,date_to_epoch(to_date('05/07/2008 15:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 22:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (100,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 23:00','MM/DD/YYYY hh24:mi')));
    commit;
    SELECT jobid,
           epoch_to_date(start_time) start_time,
           epoch_to_date(end_time)   end_time,
           n                         concurrency
    FROM TEST_MODEL3
    MODEL
    DIMENSION BY (start_time, end_time)
    MEASURES (jobid, 0 n)
    (n[any,any] =
       count(*)[start_time <= cv(start_time), end_time >= cv(start_time)] +
       count(*)[start_time > cv(start_time) and start_time <= cv(end_time), end_time >= cv(start_time)]
    )
    ORDER BY start_time;
    The results look like this:
    JOBID|START_TIME|END_TIME |CONCURRENCY
    ----------|---------------|--------------|-------------------
    100|05/07/08 09:00|05/07/08 23:00| 6
    200|05/07/08 09:00|05/07/08 12:00| 5
    300|05/07/08 10:00|05/07/08 19:00| 6
    400|05/07/08 10:00|05/07/08 14:00| 5
    500|05/07/08 11:00|05/07/08 16:00| 6
    600|05/07/08 15:00|05/07/08 22:00| 4
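    For comparison, the same overlap count can be sketched without the MODEL clause, using a plain self-join against the TEST_JOB table from the first post (two jobs overlap when each one starts before the other ends). Untested sketch, not benchmarked against the MODEL version:
    SELECT a.jobid
    ,      a.start_time
    ,      a.stop_time
    ,      COUNT(*) AS concurrency   -- counts the job itself plus every job overlapping it
    FROM   test_job a
           JOIN test_job b
             ON  b.start_time <= a.stop_time
             AND b.stop_time  >= a.start_time
    GROUP BY a.jobid, a.start_time, a.stop_time
    ORDER BY a.jobid;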

  • Using Analytic Functions

    Hi all,
    I am using ODI 11g (11.1.1.3.0) and I am trying to build an interface using analytic functions in the column mapping, something like below.
    sum(salary) over (partition by .....)
    The problem is that when ODI sees SUM it assumes it is an aggregate function and adds a GROUP BY. Is there any way to make ODI understand it is not an aggregate function?
    I tried creating an option to specify whether it is analytic or not and updated the IKM, with no luck.
    <%if ( odiRef.getUserExit("ANALYTIC").equals("1") ) { %>
    <% } else { %>
    <%=odiRef.getGrpBy(i)%>
    <%=odiRef.getHaving(i)%>
    <% } %>
    Thanks in advance

    Thanks for the reply.
    But I think in ODI 11g the getFrom() function is behaving differently; that is why it is not working.
    When I check the A.2.18 getFrom() Method section of the Substitution API Reference document, it says:
    Allows the retrieval of the SQL string of the FROM in the source SELECT clause for a given dataset. The FROM statement is built from tables and joins (and according to the SQL capabilities of the technologies) that are used in this dataset.
    I think getFrom() also retrieves the GROUP BY clause. I created a step in the IKM with just <%=odiRef.getFrom(0)%> and I can see that even the query it generates has a GROUP BY clause.

  • How to use analytic function with aggregate function

    Hello,
    Can we use an analytic function and an aggregate function in the same query? I tried to find an example on the net but did not find any showing how these two kinds of function work together. Please share any link or example with me.

    select
    t1.region_name,
    t2.division_name,
    t3.month,
    t3.amount mthly_sales,
    max(t3.amount) over (partition by t1.region_name, t2.division_name)
    max_mthly_sales
    from
    region t1,
    division t2,
    sales t3
    where
    t1.region_id=t3.region_id
    and
    t2.division_id=t3.division_id
    and
    t3.year=2004
    Source:http://www.orafusion.com/art_anlytc.htm
    Here MAX (aggregate) and OVER (PARTITION BY ...) (analytic) appear in the same query. So it means we can use aggregate and analytic functions in the same query, and also more than one analytic function in the same query.
    Hth
    Girish Sharma
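    Another common pattern, sketched here against the standard SCOTT schema, nests a true aggregate inside an analytic function: the inner SUM is the per-department aggregate and the outer SUM ... OVER () repeats the grand total on every aggregated row (the same Sum(Sum(...)) Over idiom used in the first post on this page):
    select deptno
    ,      sum(sal)              dept_sal    -- aggregate: one row per department
    ,      sum(sum(sal)) over () total_sal   -- analytic, evaluated over the aggregated rows
    from   scott.emp
    group by deptno;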

  • Using analytic function to get the right output.

    Dear all;
    I have the following sample data below:
    create table temp_one
    (      id number(30),
           placeid varchar2(400),
           issuedate  date,
           person varchar2(400),
           failures number(30),
           primary key(id)
    );
    insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
    insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
    insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
    insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
    insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
    insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
    This is the output I desire:
    placeid       issueperiod                               failures
    NY              02/28/2011 - 03/06/2011          10
    Mexico       02/28/2011 - 03/06/2011           3
    Mexico        03/14/2011 - 03/20/2011          12
    London        03/14/2011 - 03/20/2011          8
    All help is appreciated. I will post my query as soon as I am able to think of a good logic for this...

    Hi,
    user13328581 wrote:
    ... Kindly note, I am still learning how to use analytic functions.
    That doesn't matter; analytic functions won't help in this problem. The aggregate SUM function is all you need.
    But what do you need to GROUP BY? What is each row of the result set going to represent? A placeid? Yes, each row will represent only one placeid, but it's going to be divided further. You want a separate row of output for every placeid and week, so you'll want to GROUP BY placeid and week. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 into separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures.
    This gets the output you posted from the sample data you posted:
    SELECT    placeid
    ,         TO_CHAR ( TRUNC (issuedate, 'IW')
                      , 'MM/DD/YYYY'
                      ) || ' - ' || TO_CHAR ( TRUNC (issuedate, 'IW') + 6
                                            , 'MM/DD/YYYY'
                                            )       AS issueperiod
    ,         SUM (failures)                        AS sumfailures
    FROM      temp_one
    GROUP BY  placeid
    ,         TRUNC (issuedate, 'IW')
    ;
    You could use a sub-query to compute TRUNC (issuedate, 'IW') once. The code would be about as complicated, efficiency probably won't improve noticeably, and the results would be the same.

  • Using analytical function - value with highest count

    Hi
    I have this table below:
    CREATE TABLE table1
    ( cust_name VARCHAR2 (10)
    , txn_id NUMBER
    , txn_date DATE
    , country VARCHAR2 (10)
    , flag number
    , CONSTRAINT key1 UNIQUE (cust_name, txn_id)
    );
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9870,TO_DATE ('15-Jan-2011', 'DD-Mon-YYYY'), 'Iran', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9871,TO_DATE ('16-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9872,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9873,TO_DATE ('18-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9874,TO_DATE ('19-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9875,TO_DATE ('20-Jan-2011', 'DD-Mon-YYYY'), 'Russia', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9877,TO_DATE ('22-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9878,TO_DATE ('26-Jan-2011', 'DD-Mon-YYYY'), 'Korea', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9811,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9854,TO_DATE ('13-Jan-2011', 'DD-Mon-YYYY'), 'Taiwan', 0);
    The requirement is to create an additional column in the resultset with country name where the customer has done the maximum number of transactions
    (with transaction flag 1). In case we have two or more countries tied with the same count, then we need to select the country (among the tied ones)
    where the customer has done the last transaction (with transaction flag 1)
    e.g. The count is 2 for both 'China' and 'Japan' for transaction flag 1, and the latest transaction is for 'Japan'. So the new column should contain 'Japan'.
    CUST_NAME TXN_ID TXN_DATE COUNTRY FLAG country_1
    Peter 9811 17-JAN-11 China 0 Japan
    Peter 9854 13-JAN-11 Taiwan 0 Japan
    Peter 9870 15-JAN-11 Iran 1 Japan
    Peter 9871 16-JAN-11 China 1 Japan
    Peter 9872 17-JAN-11 China 1 Japan
    Peter 9873 18-JAN-11 Japan 1 Japan
    Peter 9874 19-JAN-11 Japan 1 Japan
    Peter 9875 20-JAN-11 Russia 1 Japan
    Peter 9877 22-JAN-11 China 0 Japan
    Peter 9878 26-JAN-11 Korea 0 Japan
    Please let me know how to accomplish this using analytical functions
    Thanks
    -Learnsequel

    Does this work (I haven't spent much time checking it)?
    WITH ana AS (
    SELECT cust_name, txn_id, txn_date, country, flag,
            Sum (flag)
                OVER (PARTITION BY cust_name, country)      n_trx,
            Max (CASE WHEN flag = 1 THEN txn_date END)
                OVER (PARTITION BY cust_name, country)      l_trx
      FROM cnt_trx
    )
    SELECT cust_name, txn_id, txn_date, country, flag,
            First_Value (country) OVER (PARTITION BY cust_name ORDER BY n_trx DESC, l_trx DESC) top_cnt
      FROM ana
    CUST_NAME      TXN_ID TXN_DATE  COUNTRY          FLAG TOP_CNT
    Fred             9875 20-JAN-11 Russia              1 Russia
    Fred             9874 19-JAN-11 Japan               1 Russia
    Peter            9873 18-JAN-11 Japan               1 Japan
    Peter            9874 19-JAN-11 Japan               1 Japan
    Peter            9872 17-JAN-11 China               1 Japan
    Peter            9871 16-JAN-11 China               1 Japan
    Peter            9811 17-JAN-11 China               0 Japan
    Peter            9877 22-JAN-11 China               0 Japan
    Peter            9875 20-JAN-11 Russia              1 Japan
    Peter            9870 15-JAN-11 Iran                1 Japan
    Peter            9878 26-JAN-11 Korea               0 Japan
    Peter            9854 13-JAN-11 Taiwan              0 Japan
    12 rows selected.

  • SQL using analytical function

    Hi all,
    I would like some help in creating my SQL query to extract the data described below.
    I have one table, for example test, containing data like below:
    ID     Desc     Status
    1     T1          DEACTIVE
    2     T2          ACTIVE
    3     T3          SUCCESS
    4     T4          DEACTIVE
    What I want to do is select all lines with ACTIVE status in this table, but if there is no ACTIVE status, my query should give me the last line with DEACTIVE status.
    Can I do this in one query by using an analytical function, for example? If yes, can you help me with that query?
    regards,
    Raluce

    Hi, Raluce,
    Here's one way to do that:
    WITH got_r_num AS
    (
        SELECT  deptno, ename, job, hiredate
        ,       ROW_NUMBER () OVER ( PARTITION BY  deptno
                                     ORDER BY      job
                                     ,             hiredate  DESC
                                   )  AS r_num
        FROM    scott.emp
        WHERE   job  IN ('ANALYST', 'CLERK')
    )
    SELECT     deptno, ename, job, hiredate
    FROM       got_r_num
    WHERE      job     = 'ANALYST'
    OR         r_num   = 1
    ORDER BY   deptno
    Since I don't have a sample version of your table, I used scott.emp to illustrate.
    Output:
        DEPTNO ENAME      JOB       HIREDATE
            10 MILLER     CLERK     23-JAN-82
            20 SCOTT      ANALYST   19-APR-87
            20 FORD       ANALYST   03-DEC-81
            30 JAMES      CLERK     03-DEC-81
    This query finds all ANALYSTs in each department, regardless of how many there are.  (Deptno 20 happens to have 2 ANALYSTs.)  If there is no ANALYST in a department, then the most recently hired CLERK is included.  (Deptnos 10 and 30 don't have any ANALYSTs.)
    This "partitions", or sub-divides, the table into separate units, one for each department.  In the problem you posted, it looks like you want to operate in the entire table, without sub-dividing it in any way.  To do that, just omit the PARTITION BY clause in the analytic ROW_NUMBER function, like this:
    WITH got_r_num AS
    (
        SELECT  deptno, ename, job, hiredate
        ,       ROW_NUMBER () OVER ( --  PARTITION BY  deptno
                                     ORDER BY      job
                                     ,             hiredate  DESC
                                   )  AS r_num
        FROM    scott.emp
        WHERE   job  IN ('ANALYST', 'CLERK')
    )
    SELECT     deptno, ename, job, hiredate
    FROM       got_r_num
    WHERE      job     = 'ANALYST'
    OR         r_num   = 1
    ORDER BY   deptno
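    Applied to the table sketched in the question (column names assumed here to be id, descr and status, since "Desc" is a reserved word, and "last line" read as the highest id), the same pattern might look like this:
    WITH got_r_num AS
    (
        SELECT  id, descr, status
        ,       ROW_NUMBER () OVER ( ORDER BY  status        -- 'ACTIVE' sorts before 'DEACTIVE'
                                     ,         id      DESC  -- latest DEACTIVE line first
                                   )  AS r_num
        FROM    test
        WHERE   status  IN ('ACTIVE', 'DEACTIVE')
    )
    SELECT     id, descr, status
    FROM       got_r_num
    WHERE      status  = 'ACTIVE'
    OR         r_num   = 1
    ORDER BY   id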

  • How to use Analytic functions in Forms10g

    Hi,
    Can we use analytic functions such as LEAD and LAG in Forms 10g?
    Thanks & Regards,

    Use a db view as a data source of your form block ....
    Greetings...
    Sim
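    A minimal sketch of that suggestion, using the standard SCOTT schema: put the LEAD/LAG columns into a database view and base the Forms data block on the view rather than on the table directly. The view name is made up here:
    CREATE OR REPLACE VIEW emp_sal_v AS
    SELECT empno
    ,      ename
    ,      sal
    ,      LAG (sal)  OVER (ORDER BY hiredate) AS prev_sal   -- previous hire's salary
    ,      LEAD (sal) OVER (ORDER BY hiredate) AS next_sal   -- next hire's salary
    FROM   scott.emp;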

  • Using analytic function in a view

    Hello to all,
    Sorry if I reuse this thread
    sql not merge using analytic functions
    for my question.
    From the example you wrote, and from what Tom explains, is it not possible to create a view on an analytic function?
    Thanks, and sorry again

    I think what you'll discover is that if you apply the function over the result set, the initial SQL might be quicker;
    for example, this is a test I did with a large dictionary view:
    select tp.Table_Name
          ,tp.Partition_Name
    from (
          select tbl.Table_Name         as Table_Name
                ,tbl.Partition_Name     as Partition_Name
                ,row_number() over (partition by tbl.Table_Name order by tbl.Partition_Name desc) rn
          from (
                select  /*+ all_rows */
                        dtp.Table_Name
                       ,dtp.Partition_name
                from    dba_tab_partitions  dtp
                where   dtp.Partition_Name  like 'Y____\_Q_\_M__\_D__' escape '\'
                and     dtp.Table_Owner     =  'APPS'
                and     dtp.Table_name      not like '%$%'
                and     dtp.Table_Name      like '%'
               ) tbl
         ) tp
    where tp.rn = 1
    select Table_Name
          ,Partition_Name
    from (
          select  /*+ all_rows */
                  dtp.Table_Name
                 ,dtp.Partition_Name
                 ,row_number() over (partition by dtp.Table_Name order by dtp.Partition_Name desc) rn
          from    dba_tab_partitions  dtp
          where   dtp.Partition_Name  like 'Y____\_Q_\_M__\_D__' escape '\'
          and     dtp.Table_Owner     =  'APPS'
          and     dtp.Table_name      not like '%$%'
          and     dtp.Table_Name      like '%'
         ) tbl
    where rn = 1
    I found the former to be quicker.
    I think Ask Tom was saying a lot more, but it included something similar.
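    For what it's worth, wrapping a query like the second one above in a view is syntactically fine (the view name here is made up, and selecting from DBA_TAB_PARTITIONS inside a view needs a direct grant on it); the real question is how well the optimizer can push predicates into it, which is what the timing comparison above is getting at:
    create or replace view latest_apps_partition_v as
    select Table_Name
          ,Partition_Name
    from (
          select dtp.Table_Name
                ,dtp.Partition_Name
                ,row_number() over (partition by dtp.Table_Name order by dtp.Partition_Name desc) rn
          from   dba_tab_partitions  dtp
          where  dtp.Table_Owner = 'APPS'
         )
    where rn = 1;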

  • Build interface using analytic functions twice

    Hi all, please tell me: is it possible to build an interface using analytic functions twice, like:
    select max(tt.val) from (
    select id, sum(val) val
    from (
    select 1 id, 10 val from dual union all
    select 2 id, 10 val from dual union all
    select 2 id, 30 val from dual union all
    select 2 id, 10 val from dual union all
    select 3 id, 20 val from dual) t
    group by id) tt
    thanks in advance

    Hi,
    Just a question...
    You used only the dual table. Does that correspond to reality, or is it just an example?
    I mean, won't a physical table be used?
    I believe you need that at the target column; is that true?
