Error using Analytic function in reports

Hi,
I am trying to use an Oracle analytic function (LAG) in a report, but I am getting the error below:
Encountered the symbol "(" when expecting one of the following:
,from into bulk
This is the code in the formula column:
function extend_lifeFormula return VARCHAR2 is
  l_extend_life    VARCHAR2(80);
  l_life_in_months VARCHAR2(80);
  l_asset_id       NUMBER;
begin
  SRW.REFERENCE(:P_BOOK_NAME);
  SRW.REFERENCE(:ASSET_ID);
  SELECT asset_id,
         lag(life_in_months, 1, 0) over (PARTITION BY asset_id
                                         ORDER BY transaction_header_id_in) Extend_Life
  INTO   l_asset_id,
         l_life_in_months
  FROM   fa_books
  WHERE  book_type_code = 'US GAAP'
  AND    asset_id = 1;
  return l_life_in_months;
end;
Has anyone experienced this error before? Does the client-side PL/SQL engine not support analytic functions? The query above runs fine as plain SQL.
Thanks,
Ashish

From our version of 6i Reports Builder Help, I got ...
Oracle ORACLE PL/SQL V8.0.6.3.0 - Production
You may check yours.
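For what it's worth, analytic functions arrived with Oracle 8.1.6, so the 8.0.6 client-side PL/SQL engine in Reports 6i cannot parse them inside a formula column. A common workaround is to move the analytic SQL into the database, e.g. a view, so the formula column only issues plain SQL. A minimal sketch, assuming you can create objects in the database (the view name is illustrative):
-- Illustrative view: parsed by the server, not by the 8.0.6 client engine.
CREATE OR REPLACE VIEW fa_books_prior_life AS
SELECT asset_id,
       book_type_code,
       transaction_header_id_in,
       LAG(life_in_months, 1, 0)
           OVER (PARTITION BY asset_id
                 ORDER BY transaction_header_id_in) AS extend_life
FROM   fa_books;
The formula column can then SELECT ... INTO from fa_books_prior_life with ordinary predicates (keeping in mind that a SELECT INTO must return exactly one row). Putting the analytic query directly into the report's data model query is another option, since that SQL is also parsed by the database.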

Similar Messages

  • OSB11g - using Concatenation function in report key - Xpath

    Hi,
    I am trying to use the concat function on a report key XPath. I am using the following XPath expressions, but these expressions are not valid when I try to validate them, even though the same expressions are valid in other scenarios in OSB.
    1. fn:concat(./bpel:process/bpel:input, ./bpel:process/bpel:input)
    Error message (when validating):
    error: XPath expression invalid, not a selection:
    declare namespace jca = 'http://www.bea.com/wli/sb/transports/jca';
    declare namespace wsp = 'http://schemas.xmlsoap.org/ws/2004/09/policy';
    declare namespace jms = 'http://www.bea.com/wli/sb/transports/jms';
    declare namespace tp = 'http://www.bea.com/wli/sb/transports';
    declare namespace wsa05 = 'http://www.w3.org/2005/08/addressing';
    declare namespace jejb = 'http://www.bea.com/wli/sb/transports/jejb';
    declare namespace xs = 'http://www.w3.org/2001/XMLSchema';
    declare namespace sftp = 'http://www.bea.com/wli/sb/transports/sftp';
    declare namespace flow = 'http://www.bea.com/alsb/flow/transport';
    declare namespace soap-env = 'http://schemas.xmlsoap.org/soap/envelope/';
    declare namespace wsu = 'http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd';
    declare namespace dsp = 'http://www.bea.com/dsp/transport/sb';
    declare namespace ejb = 'http://www.bea.com/wli/sb/transports/ejb';
    declare namespace bpel = 'http://xmlns.oracle.com/Bpel_Actvities/Assign_Activity/BPELProcess';
    declare namespace wsa = 'http://schemas.xmlsoap.org/ws/2004/08/addressing';
    declare namespace bpel-10g = 'http://www.bea.com/wli/sb/transports/bpel10g';
    declare namespace tuxedo = 'http://www.bea.com/wli/sb/transports/tuxedo';
    declare namespace file = 'http://www.bea.com/wli/sb/transports/file';
    declare namespace ctx = 'http://www.bea.com/wli/sb/context';
    declare namespace fn = 'http://www.w3.org/2004/07/xpath-functions';
    declare namespace soap12-enc = 'http://www.w3.org/2003/05/soap-encoding';
    declare namespace soap12-env = 'http://www.w3.org/2003/05/soap-envelope';
    declare namespace fn-bea = 'http://www.bea.com/xquery/xquery-functions';
    declare namespace mq = 'http://www.bea.com/wli/sb/transports/mq';
    declare namespace ws = 'http://www.bea.com/wli/sb/transports/ws';
    declare namespace http = 'http://www.bea.com/wli/sb/transports/http';
    declare namespace soa-direct = 'http://www.bea.com/wli/sb/transports/soa';
    declare namespace email = 'http://www.bea.com/wli/sb/transports/email';
    declare namespace sb = 'http://www.bea.com/wli/sb/transports/sb';
    declare namespace ftp = 'http://www.bea.com/wli/sb/transports/ftp';
    declare namespace xsd = 'http://www.w3.org/2001/XMLSchema';
    declare namespace soap-enc = 'http://schemas.xmlsoap.org/soap/encoding/';
    declare namespace xsi = 'http://www.w3.org/2001/XMLSchema-instance';
    fn:concat(./bpel:process/bpel:input, ./bpel:process/bpel:input)
    2. op:concatenate(./bpel:process/bpel:input, ./bpel:process/bpel:input)
    While using this XPath expression, validation is successful, but the concatenation operation does not work when checked in the message reports under the Operations tab.
    Can anyone help me with this?
    Thanks in advance.

    Can you try assigning the concatenated value to some XML element first, like
    assign: <value>{fn:concat(a,b)}</value> to e.g. a variable named value,
    and then use the report key ./text() in the variable $value.
    Edited by: AigarsP on Jun 12, 2012 4:12 AM

  • Using Analytic functions...

    Hi All,
    I need help in writing a query using analytic functions.
    The following is my scenario. I have a table cust_points:
    CREATE TABLE cust_points
    ( cust_id       varchar2(10),
      pts_dt        date,
      reward_points number(3),
      bal_points    number(3)
    );
    insert into cust_points values ('ABC', '01-MAY-2004', 5, 15);
    insert into cust_points values ('ABC', '05-MAY-2004', 3, 12);
    insert into cust_points values ('ABC', '09-MAY-2004', 3, 9);
    insert into cust_points values ('XYZ', '02-MAY-2004', 8, 4);
    insert into cust_points values ('XYZ', '03-MAY-2004', 5, 1);
    insert into cust_points values ('JKL', '10-MAY-2004', 5, 11);
    I want a result set which shows, for each customer, the sum of his/her reward points, but the balance points as of the last date. So for the above I should have the following results:
    cust_id  reward_pts  bal_points
    ABC      11          9
    XYZ      13          1
    JKL      5           11
    I have tried using last_value(), e.g.
    Select cust_id, sum(reward_points), last_value(bal_points) over (partition by cust_id) ... but I run into grouping errors.
    Can anyone help ?

    try this...
    SELECT a.pkcol,
         nvl(SUM(b.col1),0) col1,
         nvl(SUM(b.col2),0) col2,
         nvl(SUM(b.col3),0) col3
    FROM table1 a, table2 b, table3 c
    WHERE a.pkcol = b.pkcol(+)
    AND a.pkcol = c.pkcol
    GROUP BY a.pkcol;
    SQL> select a.deptno,
    2 nvl((select sum(sal) from test_emp b where a.deptno = b.deptno),0) col1,
    3 nvl((select sum(comm) from test_emp b where a.deptno = b.deptno),0) col2
    4 from test_dept a;
    DEPTNO COL1 COL2
    10 12786 0
    20 13237 738
    30 11217 2415
    40 0 0
    99 0 0
    SQL> select a.deptno,
    2 nvl(sum(b.sal),0) col1,
    3 nvl(sum(b.comm),0) col2
    4 from test_dept a,test_emp b
    5 where a.deptno = b.deptno
    6 group by a.deptno;
    DEPTNO COL1 COL2
    30 11217 2415
    20 13237 738
    10 12786 0
    SQL> select a.deptno,
    2 nvl(sum(b.sal),0) col1,
    3 nvl(sum(b.comm),0) col2
    4 from test_dept a,test_emp b
    5 where a.deptno = b.deptno(+)
    6 group by a.deptno;
    DEPTNO COL1 COL2
    10 12786 0
    20 13237 738
    30 11217 2415
    40 0 0
    99 0 0
    SQL>
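    For the record, the posted requirement can also be met in one query with the LAST aggregate (KEEP DENSE_RANK), which takes the balance from the row with the latest pts_dt in each group. A minimal sketch against the cust_points table from the question:
    -- Sum rewards per customer; keep bal_points from the latest pts_dt row.
    SELECT cust_id,
           SUM(reward_points) AS reward_pts,
           MAX(bal_points) KEEP (DENSE_RANK LAST ORDER BY pts_dt) AS bal_points
    FROM   cust_points
    GROUP  BY cust_id;
    This avoids the grouping errors hit with last_value(), because everything here is a true aggregate.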

  • To use "analytic function" at "recursive with clause"

    http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10002.htm#i2077142
    The recursive member cannot contain any of the following elements:
    ・An aggregate function. However, analytic functions are permitted in the select list.
    OK, I will use an analytic function in the recursive member :-)
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL> with rec(Val,TotalRecCnt) as(
      2  select 1,1 from dual
      3  union all
      4  select Val+1,count(*) over()
      5    from rec
      6   where Val+1 <= 5)
      7  select * from rec;
    select * from rec
    ERROR at line 7:
    ORA-32486: unsupported operation in recursive branch of recursive WITH clause
    Why does ORA-32486 happen? :|

    Hi Aketi,
    It works in 11.2.0.2, so it is probably a bug:
    select * from v$version
    BANNER                                                                          
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production    
    PL/SQL Release 11.2.0.2.0 - Production                                          
    CORE     11.2.0.2.0     Production                                                        
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production               
    NLSRTL Version 11.2.0.2.0 - Production                                          
    with rec(Val,TotalRecCnt) as(
    select 1,1 from dual
    union all
    select Val+1,count(*) over()
    from rec
    where Val+1 <= 5)
    select * from rec
    VAL                    TOTALRECCNT           
    1                      1                     
    2                      1                     
    3                      1                     
    4                      1                     
    5                      1
    Regards,
    Bob

  • Help on Using Analytical Functions

    I am getting an error when I use analytic functions in Expressions:
    AVG( INGRP1.Test1 ) OVER (PARTITION BY INGRP1.Test2)
    Error is as follows
    Line 1, Col 28:
    PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
    * & = - + ; < / > at in is mod remainder not rem
    <an exponent (**)> <> or != or ~= >= <= <> and or like like2
    like4 likec between || multiset member submultiset

    Hi,
    the syntax of this part of the SQL statement is okay. Please post the complete statement to identify the error.
    Sometimes Oracle reports the wrong position for the error.
    Regards,
    Detlef
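    For anyone hitting the same PLS-00103: the OVER clause is SQL-only syntax, legal inside a SQL statement but not in a standalone PL/SQL expression, and the error suggests the expression is being validated by the PL/SQL parser. A sketch of the distinction (table and column names below are made up for illustration):
    -- Fine: analytic function inside a SQL statement.
    SELECT AVG(test1) OVER (PARTITION BY test2) AS grp_avg
    FROM   ingrp1_source;
    -- Not fine: the PL/SQL expression parser stops at OVER with PLS-00103.
    -- l_avg := AVG(l_test1) OVER (PARTITION BY l_test2);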

  • Using analytical function to calculate concurrency between date range

    Folks,
    I'm trying to use analytical functions to come up with a query that gives me the
    concurrency of jobs executing between a date range.
    For example:
    JOB100 - started at 9AM - stopped at 11AM
    JOB200 - started at 10AM - stopped at 3PM
    JOB300 - started at 12PM - stopped at 2PM
    The query would tell me that JOB100 ran with a concurrency of 2 because JOB100 and JOB200
    were running within the same time window. JOB200 ran with a concurrency
    of 3 because all jobs ran within its start and stop time. The output would look like this:
    JOB  START  STOP  CONCURRENCY
    ===  =====  ====  ===========
    100  9AM    11AM  2
    200  10AM   3PM   3
    300  12PM   2PM   2
    I've been looking at this post, and this one is very similar...
    Analytic functions using window date range
    Here is the sample data..
    CREATE TABLE TEST_JOB
    ( jobid        NUMBER,
      created_time DATE,
      start_time   DATE,
      stop_time    DATE
    );
    insert into TEST_JOB values (100, sysdate -1, to_date('05/04/08 09:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 11:00:00','MM/DD/YY hh24:mi:ss'));
    insert into TEST_JOB values (200, sysdate -1, to_date('05/04/08 10:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 13:00:00','MM/DD/YY hh24:mi:ss'));
    insert into TEST_JOB values (300, sysdate -1, to_date('05/04/08 12:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 14:00:00','MM/DD/YY hh24:mi:ss'));
    select * from test_job;
    JOBID|CREATED_TIME |START_TIME |STOP_TIME
    ----------|--------------|--------------|--------------
    100|05/04/08 09:28|05/04/08 09:00|05/04/08 11:00
    200|05/04/08 09:28|05/04/08 10:00|05/04/08 13:00
    300|05/04/08 09:28|05/04/08 12:00|05/04/08 14:00
    Any help with this query would be greatly appreciated.
    thanks.
    -peter

    After some checking, the model rule wasn't working exactly as expected.
    I believe it's working right now. I'm posting a self-contained example for completeness' sake.
    I use two functions to convert back and forth between epoch Unix timestamps, so I'll post them here as well.
    Like I said, I think this works okay, but any feedback is always appreciated.
    -peter
    CREATE OR REPLACE FUNCTION date_to_epoch(p_dateval IN DATE)
    RETURN NUMBER
    AS
    BEGIN
    return (p_dateval - to_date('01/01/1970','MM/DD/YYYY')) * (24 * 3600);
    END;
    CREATE OR REPLACE FUNCTION epoch_to_date (p_epochval IN NUMBER DEFAULT 0)
    RETURN DATE
    AS
    BEGIN
    return to_date('01/01/1970','MM/DD/YYYY') + (( p_epochval) / (24 * 3600));
    END;
    DROP TABLE TEST_MODEL3 purge;
    CREATE TABLE TEST_MODEL3
    ( jobid NUMBER,
    start_time NUMBER,
    end_time NUMBER);
    insert into TEST_MODEL3
    VALUES (300,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 19:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (200,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 12:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (400,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 14:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (500,date_to_epoch(to_date('05/07/2008 11:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 16:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (600,date_to_epoch(to_date('05/07/2008 15:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 22:00','MM/DD/YYYY hh24:mi')));
    insert into TEST_MODEL3
    VALUES (100,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
    date_to_epoch(to_date('05/07/2008 23:00','MM/DD/YYYY hh24:mi')));
    commit;
    SELECT jobid,
    epoch_to_date(start_time)start_time,
    epoch_to_date(end_time)end_time,
    n concurrency
    FROM TEST_MODEL3
    MODEL
    DIMENSION BY (start_time,end_time)
    MEASURES (jobid,0 n)
    (n[any,any] =
       count(*)[start_time <= cv(start_time), end_time >= cv(start_time)] +
       count(*)[start_time > cv(start_time) and start_time <= cv(end_time), end_time >= cv(start_time)]
    )
    ORDER BY start_time;
    The results look like this:
    JOBID|START_TIME|END_TIME |CONCURRENCY
    ----------|---------------|--------------|-------------------
    100|05/07/08 09:00|05/07/08 23:00| 6
    200|05/07/08 09:00|05/07/08 12:00| 5
    300|05/07/08 10:00|05/07/08 19:00| 6
    400|05/07/08 10:00|05/07/08 14:00| 5
    500|05/07/08 11:00|05/07/08 16:00| 6
    600|05/07/08 15:00|05/07/08 22:00| 4
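    For comparison, a sketch without MODEL: the same count can be expressed with a scalar subquery on the standard interval-overlap predicate (two jobs overlap when each one starts before the other ends). This assumes the TEST_JOB table posted in the question:
    -- Count, for each job, the jobs (itself included) whose interval overlaps its own.
    SELECT a.jobid,
           a.start_time,
           a.stop_time,
           (SELECT COUNT(*)
              FROM test_job b
             WHERE b.start_time <= a.stop_time
               AND b.stop_time  >= a.start_time) AS concurrency
    FROM   test_job a
    ORDER  BY a.jobid;
    Against the three sample rows this returns 2, 3, and 2, matching the expected output; the MODEL version above may still win on large sets since it avoids the per-row subquery.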

  • How to use Analytic functions in Forms10g

    Hi,
    Can we use analytic functions like LEAD and LAG in Forms 10g?
    Thanks & Regards,

    Use a db view as a data source of your form block ....
    Greetings...
    Sim
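    To make Sim's suggestion concrete, a minimal sketch (every name below is an illustrative placeholder): define the analytic query as a database view and base the Forms block on the view, since the client-side PL/SQL in Forms cannot parse LEAD/LAG directly.
    -- Hypothetical view; emp_salaries, empno, effective_date, sal are placeholders.
    CREATE OR REPLACE VIEW emp_sal_history_v AS
    SELECT empno,
           effective_date,
           sal,
           LAG(sal)  OVER (PARTITION BY empno ORDER BY effective_date) AS prev_sal,
           LEAD(sal) OVER (PARTITION BY empno ORDER BY effective_date) AS next_sal
    FROM   emp_salaries;
    The block's Query Data Source Name is then simply EMP_SAL_HISTORY_V.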

  • Using Analytic Functions

    Hi all,
    I am using ODI 11g(11.1.1.3.0) and I am trying to make an interface using analytic functions in the column mapping, something like below.
    sum(salary) over (partition by .....)
    The problem is that when ODI sees SUM it assumes it is an aggregate function and adds a GROUP BY. Is there any way to make ODI understand it is not an aggregate function?
    I tried creating an option to specify whether the expression is analytic or not and updated the IKM accordingly, with no luck:
    <%if ( odiRef.getUserExit("ANALYTIC").equals("1") ) { %>
    <% } else { %>
    <%=odiRef.getGrpBy(i)%>
    <%=odiRef.getHaving(i)%>
    <% } %>
    Thanks in advance

    Thanks for the reply.
    But I think that in ODI 11g the getFrom() function behaves differently, which is why it is not working.
    When I check the A.2.18 getFrom() Method section of the Substitution API Reference document, it says:
    "Allows the retrieval of the SQL string of the FROM in the source SELECT clause for a given dataset. The FROM statement is built from tables and joins (and according to the SQL capabilities of the technologies) that are used in this dataset."
    I think getFrom() also retrieves the GROUP BY clause; I created a step in the IKM with just <%=odiRef.getFrom(0)%> and I can see that even that generated query has a GROUP BY clause.

  • How to use analytic function with aggregate function

    Hello,
    can we use an analytic function and an aggregate function in the same query? I tried to find an example on the net but did not find any showing how these two functions work together. Please share any link or example with me.
    Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM

    select
    t1.region_name,
    t2.division_name,
    t3.month,
    t3.amount mthly_sales,
    max(t3.amount) over (partition by t1.region_name, t2.division_name)
    max_mthly_sales
    from
    region t1,
    division t2,
    sales t3
    where
    t1.region_id=t3.region_id
    and
    t2.division_id=t3.division_id
    and
    t3.year=2004
    Source: http://www.orafusion.com/art_anlytc.htm
    Here the MAX aggregate and the OVER (PARTITION BY ...) analytic function are in the same query. So we can use aggregate and analytic functions in the same query, and more than one analytic function in the same query as well.
    Hth
    Girish Sharma
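    A shorter self-contained illustration of the same point (assuming the classic SCOTT.EMP table): an analytic function can even sit on top of an aggregate, because analytics are evaluated after GROUP BY.
    -- SUM(sal) is the aggregate; MAX(...) OVER () is the analytic,
    -- computed across the already-grouped department rows.
    SELECT deptno,
           SUM(sal)              AS dept_sal,
           MAX(SUM(sal)) OVER () AS max_dept_sal
    FROM   emp
    GROUP  BY deptno;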

  • Query for using "analytical functions" in DWH...

    Dear team,
    I would like to know if the following task can be done using analytic functions...
    If it can be done in other ways, please do share the ideas...
    I have a table as shown below:
    Create Table t As
    Select *
    From
    (
    Select 12345 PRODUCT, 'W1' WEEK,  10000 SOH, 0 DEMAND, 0 SUPPLY,  0 EOH From dual Union All
    Select 12345,         'W2',       0,         100,      50,        0 From dual Union All
    Select 12345,         'W3',       0,         100,      50,        0 From dual Union All
    Select 12345,         'W4',       0,         100,      50,        0 From dual
    );
    PRODUCT  WEEK  SOH     DEMAND  SUPPLY  EOH
    12345    W1    10,000  0       0       10000
    12345    W2    0       100     50      0
    12345    W3    0       100     50      0
    12345    W4    0       100     50      0
    Now I want to calculate the EOH (ending on hand) quantity for W1...
    This EOH for W1 becomes the SOH (starting on hand) for W2... and so on, till the end of the weeks.
    The formula is: EOH = SOH - (DEMAND + SUPPLY)
    The output should be as follows...
    PRODUCT  WEEK  SOH     DEMAND  SUPPLY  EOH
    12345    W1    10,000                  10000
    12345    W2    10,000  100     50      9950
    12345    W3    9,950   100     50      9900
    12345    W4    9,000   100     50      8950
    Kindly share your ideas...

    Nicloei W wrote:
    Means SOH_AFTER_SUPPLY for W1 should be displayed under SOH for W2... i.e. SOH for W4 should be SOH_AFTER_SUPPLY for W3, right?
    If yes, why are you expecting it to be 9000 for W4??
    So the output should be...
    PRODUCT WE        SOH     DEMAND     SUPPLY        EOH SOH_AFTER_SUPPLY
    12345 W1      10000          0          0          0            10000
    12345 W2      10000        100         50          0             9950
    12345 W3       9950        100         50          0             9900
    12345 W4       9000        100         50          0             9850
    Per the logic you explained, shouldn't it be 9900 instead???
    You could customize Martin Preiss's logic for your requirement:
    SQL> with
      2  data
      3  As
      4  (
      5  Select 12345 PRODUCT, 'W1' WEEK,  10000 SOH, 0 DEMAND, 0 SUPPLY,   0 EOH From dual Union All
      6  Select 12345,         'W2',       0,         100,      50,        0 From dual Union All
      7  Select 12345,         'W3',       0,         100,      50,        0 From dual Union All
      8  Select 12345,         'W4',       0,         100,      50,        0 From dual
      9  )
    10  Select Product
    11  ,Week
    12  , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week)+Supply Soh
    13  ,Demand
    14  ,Supply
    15  , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week) eoh
    16  from  data;
       PRODUCT WE        SOH     DEMAND     SUPPLY        EOH
         12345 W1      10000          0          0      10000
         12345 W2      10000        100         50       9950
         12345 W3       9950        100         50       9900
         12345 W4       9900        100         50       9850
    Vivek L

  • Restrict Query Resultset which uses Analytic Function

    Gents,
    Problem definition: using an analytic function, get the total sales for product P1
    and customer C1 [the total sales for the customer itself] in one line.
    I want to restrict the result set of the query to product P1;
    please look at the data below, the queries, and the problems...
    Data
    Customer  Product  Qtr    Sales
    C1        P1       19991  100.00
    C1        P1       19992  125.00
    C1        P1       19993  175.00
    C1        P1       19994  300.00
    C1        P2       19991  100.00
    C1        P2       19992  125.00
    C1        P2       19993  175.00
    C1        P2       19994  300.00
    C2        P1       19991  100.00
    C2        P1       19992  125.00
    C2        P1       19993  175.00
    C2        P1       19994  300.00
    Problem: I want to display....
    Customer Product ProdSales CustSales
    C1 P1 700 1400
    But without using an outer query; i.e. please look below at the query that
    returns this result with two SELECTs. I want this result in one query only.
    Select * From ----*** want to avoid this... ***----
    (Select Customer, Product,
            Sum(Sales) ProdSales,
            Sum(Sum(Sales)) Over (Partition By Customer) CustSales
     From   t1
     Where  customer = 'C1'
     Group  By Customer, Product)
    Where
    Product = 'P1';
    Also, I want to avoid hard-coding 'P1' in the select clause....
    I mean, I can do it in one shot/select, but look at the query below: it uses
    P1 in the select clause, which is a no-no!! P1 is allowed only in Where or Having.
    Select Customer, Decode(Product, 'P1', 'P1', 'P1') Product,
           Sum(Decode(Product, 'P1', Sales, 0)) ProdSales,
           Sum(Sum(Sales)) Over (Partition By Customer) CustSales
    From   t1
    Where  customer = 'C1'
    Group  By Customer, Decode(Product, 'P1', 'P1', 'P1');
    This will get me what I want, but as I said earlier, I want to avoid using P1 in the
    select clause.
    The goal is to avoid using
    1-> two SELECTs / an outer query / inline views
    2-> the product 'P1' in the select clause...
    Thanks
    -Dhaval Rasania

    I don't understand goal number 1 of not using an inline view.
    What is the harm?
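    The harm, if any, is only stylistic; the inline view is actually the idiomatic answer here. Analytic functions are evaluated after WHERE, GROUP BY, and HAVING in the same query block, so any filter on Product in that block would also shrink the analytic's window and turn CustSales into 700 instead of 1400. A sketch of the standard shape (essentially the poster's first query, with the missing GROUP BY added):
    SELECT customer, product, prodsales, custsales
    FROM  (SELECT customer,
                  product,
                  SUM(sales)                                   AS prodsales,
                  SUM(SUM(sales)) OVER (PARTITION BY customer) AS custsales
           FROM   t1
           WHERE  customer = 'C1'
           GROUP  BY customer, product)
    WHERE  product = 'P1';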

  • Using analytic function to get the right output.

    Dear all;
    I have the following sample data below:
    create table temp_one
    (      id        number(30),
           placeid   varchar2(400),
           issuedate date,
           person    varchar2(400),
           failures  number(30),
           primary key(id)
    );
    insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
    insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
    insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
    insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
    insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
    insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
    This is the output I desire:
    placeid       issueperiod                               failures
    NY            02/28/2011 - 03/06/2011                   10
    Mexico        02/28/2011 - 03/06/2011                   3
    Mexico        03/14/2011 - 03/20/2011                   12
    London        03/14/2011 - 03/20/2011                   8
    All help is appreciated. I will post my query as soon as I am able to think of a good logic for this...

    Hi,
    user13328581 wrote:
    ... Kindly note, I am still learning how to use analytic functions.
    That doesn't matter; analytic functions won't help in this problem. The aggregate SUM function is all you need.
    But what do you need to GROUP BY? What is each row of the result set going to represent? A placeid? Yes, each row will represent only one placedid, but it's going to be divided further. You want a separate row of output for every placeid and week, so you'll want to GROUP BY placeid and week. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 into separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures.
    This gets the output you posted from the sample data you posted:
    SELECT    placeid
    ,         TO_CHAR ( TRUNC (issuedate, 'IW')
                      , 'MM/DD/YYYY'
                      ) || ' - ' || TO_CHAR ( TRUNC (issuedate, 'IW') + 6
                                            , 'MM/DD/YYYY'
                                            )   AS issueperiod
    ,         SUM (failures)                    AS sumfailures
    FROM      temp_one
    GROUP BY  placeid
    ,         TRUNC (issuedate, 'IW')
    ;
    You could use a sub-query to compute TRUNC (issuedate, 'IW') once. The code would be about as complicated, efficiency probably won't improve noticeably, and the results would be the same.

  • Using analytic function in a view

    Hello to all,
    sorry if I use this thread
    sql not merge using analytic functions
    for my question.
    From the example you wrote and from Tom's explanation, is it not possible to create a view on an analytic function?
    Thanks, and sorry again

    I think what you'll discover is that if you apply the function over the result set, the initial SQL might be quicker;
    for example, this is a test I did with a large dictionary view:
    select tp.Table_Name
          ,tp.Partition_Name
    from (
          select tbl.Table_Name
                ,tbl.Partition_Name
                ,row_number() over (partition by tbl.Table_Name order by tbl.Partition_Name desc) rn
          from (
                select  /*+ all_rows */
                        dtp.Table_Name
                       ,dtp.Partition_Name
                from    dba_tab_partitions  dtp
                where   dtp.Partition_Name  like 'Y____\_Q_\_M__\_D__' escape '\'
                and     dtp.Table_Owner     =  'APPS'
                and     dtp.Table_name      not like '%$%'
                and     dtp.Table_Name      like '%'
               ) tbl
         ) tp
    where tp.rn = 1
    select Table_Name
          ,Partition_Name
    from (
          select  /*+ all_rows */
                  dtp.Table_Name
                 ,dtp.Partition_Name
                 ,row_number() over (partition by dtp.Table_Name order by dtp.Partition_Name desc) rn
          from    dba_tab_partitions  dtp
          where   dtp.Partition_Name  like 'Y____\_Q_\_M__\_D__' escape '\'
          and     dtp.Table_Owner     =  'APPS'
          and     dtp.Table_name      not like '%$%'
          and     dtp.Table_Name      like '%'
         ) tbl
    where rn = 1
    I found the former to be quicker.
    I think Ask Tom was saying a lot more, but included something similar.
    Edited by: bluefrog on Jun 10, 2010 12:48 PM

  • Using analytical function - value with highest count

    Hi
    I have the table below:
    CREATE TABLE table1
    ( cust_name VARCHAR2 (10)
    , txn_id    NUMBER
    , txn_date  DATE
    , country   VARCHAR2 (10)
    , flag      number
    , CONSTRAINT key1 UNIQUE (cust_name, txn_id)
    );
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9870,TO_DATE ('15-Jan-2011', 'DD-Mon-YYYY'), 'Iran', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9871,TO_DATE ('16-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9872,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9873,TO_DATE ('18-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9874,TO_DATE ('19-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9875,TO_DATE ('20-Jan-2011', 'DD-Mon-YYYY'), 'Russia', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9877,TO_DATE ('22-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9878,TO_DATE ('26-Jan-2011', 'DD-Mon-YYYY'), 'Korea', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9811,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9854,TO_DATE ('13-Jan-2011', 'DD-Mon-YYYY'), 'Taiwan', 0);
    The requirement is to create an additional column in the result set with the country name where the customer has done the maximum number of transactions
    (with transaction flag 1). In case two or more countries are tied with the same count, we need to select the country (among the tied ones)
    where the customer has done the last transaction (with transaction flag 1).
    E.g. the count is 2 for both 'China' and 'Japan' for transaction flag 1, and the latest transaction is for 'Japan', so the new column should contain 'Japan'.
    CUST_NAME TXN_ID TXN_DATE COUNTRY FLAG country_1
    Peter 9811 17-JAN-11 China 0 Japan
    Peter 9854 13-JAN-11 Taiwan 0 Japan
    Peter 9870 15-JAN-11 Iran 1 Japan
    Peter 9871 16-JAN-11 China 1 Japan
    Peter 9872 17-JAN-11 China 1 Japan
    Peter 9873 18-JAN-11 Japan 1 Japan
    Peter 9874 19-JAN-11 Japan 1 Japan
    Peter 9875 20-JAN-11 Russia 1 Japan
    Peter 9877 22-JAN-11 China 0 Japan
    Peter 9878 26-JAN-11 Korea 0 Japan
    Please let me know how to accomplish this using analytical functions
    Thanks
    -Learnsequel

    Does this work (I have not spent much time checking it)?
    WITH ana AS (
        SELECT cust_name, txn_id, txn_date, country, flag,
               Sum (flag)
                   OVER (PARTITION BY cust_name, country)      n_trx,
               Max (CASE WHEN flag = 1 THEN txn_date END)
                   OVER (PARTITION BY cust_name, country)      l_trx
        FROM   cnt_trx
    )
    SELECT cust_name, txn_id, txn_date, country, flag,
           First_Value (country) OVER (PARTITION BY cust_name ORDER BY n_trx DESC, l_trx DESC) top_cnt
    FROM   ana;
    CUST_NAME      TXN_ID TXN_DATE  COUNTRY          FLAG TOP_CNT
    Fred             9875 20-JAN-11 Russia              1 Russia
    Fred             9874 19-JAN-11 Japan               1 Russia
    Peter            9873 18-JAN-11 Japan               1 Japan
    Peter            9874 19-JAN-11 Japan               1 Japan
    Peter            9872 17-JAN-11 China               1 Japan
    Peter            9871 16-JAN-11 China               1 Japan
    Peter            9811 17-JAN-11 China               0 Japan
    Peter            9877 22-JAN-11 China               0 Japan
    Peter            9875 20-JAN-11 Russia              1 Japan
    Peter            9870 15-JAN-11 Iran                1 Japan
    Peter            9878 26-JAN-11 Korea               0 Japan
    Peter            9854 13-JAN-11 Taiwan              0 Japan
    12 rows selected.

  • Build interface using analytic functions twice

    Hi all, please tell me: is it possible to build an interface using analytic functions twice, like:
    select max(tt.val) from (
    select id, sum(val) val
    from (
    select 1 id, 10 val from dual union all
    select 2 id, 10 val from dual union all
    select 2 id, 30 val from dual union all
    select 2 id, 10 val from dual union all
    select 3 id, 20 val from dual) t
    group by id) tt
    thanks in advance

    Hi,
    Just a question...
    You used only the dual table. Does that correspond to reality, or is it just an example?
    I mean, won't a physical table be used?
    I believe you need that at the target column; is that true?
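    As a side note on the query itself: Oracle allows one level of nested aggregates, so the two outer levels can be collapsed into a single query block; whether ODI's mapping editor will accept it is a separate question. A sketch using the posted sample data:
    -- GROUP BY id feeds SUM(val); MAX then reduces the per-id sums to one row (50 here).
    SELECT MAX(SUM(val)) AS max_val
    FROM  (SELECT 1 id, 10 val FROM dual UNION ALL
           SELECT 2, 10 FROM dual UNION ALL
           SELECT 2, 30 FROM dual UNION ALL
           SELECT 2, 10 FROM dual UNION ALL
           SELECT 3, 20 FROM dual) t
    GROUP BY id;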
