Analytic function and aggregate function

What are analytic functions and aggregate functions? What is the difference between them?

hi,
Analytic Functions :----------
Analytic functions compute an aggregate value based on a group of rows. They differ from aggregate functions in that they return multiple rows for each group. The group of rows is called a window and is defined by the analytic_clause. For each row, a sliding window of rows is defined. The window determines the range of rows used to perform the calculations for the current row. Window sizes can be based on either a physical number of rows or a logical interval such as time.
Analytic functions are the last set of operations performed in a query except for the final ORDER BY clause. All joins and all WHERE, GROUP BY, and HAVING clauses are completed before the analytic functions are processed. Therefore, analytic functions can appear only in the select list or ORDER BY clause.
Analytic functions are commonly used to compute cumulative, moving, centered, and reporting aggregates.
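For example, here is a minimal sketch (assuming the standard SCOTT.EMP demo table with its usual DEPTNO, SAL and HIREDATE columns) showing two analytic calls: one over a whole partition and one over a sliding window:

SELECT ename,
       deptno,
       sal,
       -- one row per employee is kept; the department average is repeated alongside each row
       AVG(sal) OVER (PARTITION BY deptno) AS avg_dept_sal,
       -- running total within the department, window defined by ROWS BETWEEN
       SUM(sal) OVER (PARTITION BY deptno
                      ORDER BY hiredate
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_dept_sal
FROM   emp;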
Aggregate Functions :----------
Aggregate functions return a single result row based on groups of rows, rather than on single rows. Aggregate functions can appear in select lists and in ORDER BY and HAVING clauses. They are commonly used with the GROUP BY clause in a SELECT statement, where Oracle Database divides the rows of a queried table or view into groups. In a query containing a GROUP BY clause, the elements of the select list can be aggregate functions, GROUP BY expressions, constants, or expressions involving one of these. Oracle applies the aggregate functions to each group of rows and returns a single result row for each group.
If you omit the GROUP BY clause, then Oracle applies aggregate functions in the select list to all the rows in the queried table or view. You use aggregate functions in the HAVING clause to eliminate groups from the output based on the results of the aggregate functions, rather than on the values of the individual rows of the queried table or view.
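By contrast, a minimal aggregate sketch on the same assumed EMP table - one result row per group, with HAVING eliminating whole groups:

SELECT deptno,
       COUNT(*) AS emp_count,
       SUM(sal) AS total_sal
FROM   emp
GROUP  BY deptno
HAVING SUM(sal) > 9000;  -- filters out entire groups, not individual rows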
Let me know if you have any problem understanding this.
thanks.
Edited by: varun4dba on Jan 27, 2011 3:32 PM

Similar Messages

  • How to use analytic function with aggregate function

    hello
    can we use an analytic function and an aggregate function in the same query? I tried to find an example on the net but could not find one showing how the two work together. Please share any link or example with me.
    Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM

    select
    t1.region_name,
    t2.division_name,
    t3.month,
    t3.amount mthly_sales,
    max(t3.amount) over (partition by t1.region_name, t2.division_name)
    max_mthly_sales
    from
    region t1,
    division t2,
    sales t3
    where
    t1.region_id=t3.region_id
    and
    t2.division_id=t3.division_id
    and
    t3.year=2004
    Source: http://www.orafusion.com/art_anlytc.htm
    Here MAX is used with OVER (PARTITION BY ...), i.e. as an analytic function, alongside ordinary columns in the same query. So yes, aggregate and analytic functions can be used in the same query, and more than one analytic function can appear in the same query as well.
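    To illustrate that last point, here is a small sketch (reusing the same region/division/sales tables assumed above) that combines a real GROUP BY aggregate with an analytic function computed over the aggregated rows:
    select
    t1.region_name,
    t2.division_name,
    sum(t3.amount) div_sales,
    max(sum(t3.amount)) over (partition by t1.region_name) best_div_sales
    from
    region t1,
    division t2,
    sales t3
    where
    t1.region_id=t3.region_id
    and
    t2.division_id=t3.division_id
    and
    t3.year=2004
    group by t1.region_name, t2.division_name
    Here sum(t3.amount) is an aggregate evaluated per group, and max(...) over (...) is an analytic function applied afterwards to those aggregated rows.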
    Hth
    Girish Sharma

  • Discoverer Analytic Function windowing - errors and bad aggregation

    I posted this first on Database General forum, but then I found this was the place to put it:
    Hi, I'm using this kind of windowing function:
    SUM(Receitas Especificas) OVER(PARTITION BY Tipo Periodo,Calculado,"Empresa Descrição (Operador)","Empresa Descrição" ORDER BY Ini Periodo RANGE BETWEEN INTERVAL '12' MONTH PRECEDING AND INTERVAL '12' MONTH PRECEDING )
    If I use the "Receitas Especificas SUM" instead of
    "Receitas Especificas" I get the following error running the report:
    "an error occurred while attempting to run..."
    This is not in accordance with:
    http://www.boku.ac.at/oradoc/ias/10g(9.0.4)/bi.904/b10268.pdf
    but ok, the version without SUM inside works.
    Another problem is the fact that analytic functions with PARTITION BY
    do not work (they show the "cannot aggregate" symbol) if we collapse or use "<All>" in page items.
    But it works if we remove the item from the PARTITION BY and also remove it from the workbook.
    It's even worse for windowing functions (query above), because the query
    only works if we remove the item from the PARTITION BY but we have to show it on the workbook - and this MAKES NO SENSE... :(
    Please help.

    Unfortunately Discoverer doesn't show (correct) values for analytical functions when selecting "<All>" in a page item. I found out that it does work when you add the analytical function to the db-view instead of to the report as a calculation or as a calculated item on the folder.
    The only problem is you have to name all page items in the PARTITION window, so, when adding a page item to the report, you have to change the db-view and alter the PARTITION window.
    Michael

  • GROUP BY and analytical functions

    Hi all,
    I need your help with grouping my data.
    Below you can see a sample of my data (in my case I have a view where the data is in almost the same format).
    with test_data as(
    select '01' as code, 'SM' as abbreviation, 1010 as groupnum, 21 as pieces, 4.13 as volume, 3.186 as avgvolume from dual
    union
    select '01' as code, 'SM' as abbreviation, 2010 as groupnum, 21 as pieces, 0 as volume, 3.186 as avgvolume from dual
    union
    select '01' as code, 'SM' as abbreviation, 3000 as groupnum, 21 as pieces, 55 as volume, 3.186 as avgvolume from dual
    union
    select '01' as code, 'SM' as abbreviation, 3010 as groupnum, 21 as pieces, 7.77 as volume, 3.186 as avgvolume from dual
    union
    select '02' as code, 'SMP' as abbreviation, 1010 as groupnum, 30 as pieces, 2.99 as volume, 0.1 as avgvolume from dual
    union
    select '03' as code, 'SMC' as abbreviation, 1010 as groupnum, 10 as pieces, 4.59 as volume, 0.459 as avgvolume from dual
    union
    select '40' as code, 'DB' as abbreviation, 1010 as groupnum, 21 as pieces, 5.28 as volume, 0.251 as avgvolume from dual
    )
    select
    DECODE (GROUPING (code), 1, 'report total:', code)     as code,
    abbreviation as abbreviation,
    groupnum as pricelistgrp,
    sum(pieces) as pieces,
    sum(volume) as volume,
    sum(avgvolume) as avgvolume
    --sum(sum(distinct pieces)) over (partition by code,groupnum) as piecessum,
    --sum(volume) volume,
    --round(sum(volume) / 82,3) as avgvolume
    from test_data
    group by grouping sets((code,abbreviation,groupnum,pieces,volume,avgvolume),null)
    order by 1,3;
    The select statement which I have written returns the output below:
    CODE    ABBR    GRPOUP  PIECES   VOLUME  AVGVOL
    01     SM     1010     21     4.13     3.186
    01     SM     2010     21     0     3.186
    01     SM     3000     21     55     3.186
    01     SM     3010     21     7.77     3.186
    02     SMP     1010     30     2.99     0.1
    03     SMC     1010     10     4.59     0.459
    40     DB     1010     21     5.28     0.251
    report total:          145     79.76     13.554
    Number of pieces and avg volume are the same for the same codes (01 - pieces = 21, avgvolume = 3.186, etc.).
    What I need is to get output like below:
    CODE    ABBR    GRPOUP  PIECES   VOLUME  AVGVOL
    01     SM     1010     21     4.13     3.186
    01     SM     2010     21     0     3.186
    01     SM     3000     21     55     3.186
    01     SM     3010     21     7.77     3.186
    02     SMP     1010     30     2.99     0.1
    03     SMC     1010     10     4.59     0.459
    40     DB     1010     21     5.28     0.251
    report total:          82     79.76     0.973
    Where the total number of pieces is computed as the sum of distinct numbers of pieces for each code -> *82 = 21 + 30 + 10 + 21*.
    Total volume is just the sum of the volumes in each row -> *79.76 = 4.13+0+55+7.77+2.99+4.59+5.28*.
    And average volume is computed as total volume / total number of pieces -> *0.973 = 79.76 / 82*.
    I was trying to use an analytic function (sum() over (partition by)) to get the desired output, but without good results.
    Could anyone help me with this issue?
    Thanks in advance!
    Regards,
    Jiri

    Hi, Jiri,
    Jiri N. wrote:
    Hi all,
    I need your help with grouping my data.
    Below you can see sample of my data (in my case I have view where data is in almost same format).
    I assume the view guarantees that all rows with the same code (or the same code and groupnum) will always have the same pieces and the same avgvolume.
    with test_data as( ...
    Thanks for posting this; it's very helpful.
    What I need is to get output like below:
    CODE    ABBR    GRPOUP  PIECES   VOLUME  AVGVOL
    01     SM     1010     21     4.13     3.186
    01     SM     2010     21     0     3.186
    01     SM     3000     21     55     3.186
    01     SM     3010     21     7.77     3.186
    02     SMP     1010     30     2.99     0.1
    03     SMC     1010     10     4.59     0.459
    40     DB     1010     21     5.28     0.251
    report total:          82     79.76     0.973
    Except for the last row, you're just displaying data straight from the table (or view).
    It might be easier to get the results you want using a UNION. One branch of the UNION would get the "report total" row, and the other branch would get all the rest.
    Where total number of pieces is computed as sum of distinct numbers of pieces for each code -> *82 = 21 + 30 + 10 + 21*.
    It's not just distinct numbers. In this example, two different codes have pieces=21, so the total of distinct pieces is 61 = 21 + 30 + 10.
    Total volume is just sum of volumes in each row -> *79.76 = 4.13+0+55+7.77+2.99+4.59+5.28*.
    And Average volume is computed as total volume / total number of pieces -> *0.973 = 79.76 / 82*.
    I was trying to use analytical function (sum() over (partition by)) to get desired output, but without good results.
    I would use nested aggregate functions to do that:
    SELECT    code
    ,       abbreviation
    ,       groupnum          AS pricelistgrp
    ,       pieces
    ,       volume
    ,       avgvolume
    FROM      test_data
         UNION ALL
    SELECT        'report total:'     AS code
    ,        NULL                  AS abbreviation
    ,        NULL               AS pricelistgrp
    ,        SUM (MAX (pieces))     AS pieces
    ,        SUM (SUM (volume))     AS volume
    ,        SUM (SUM (volume))
          / SUM (MAX (pieces))     AS avgvolume
    FROM        test_data
    GROUP BY   code     -- , abbreviation?
    ORDER BY  code
    ,            pricelistgrp
    ;
    Output:
    CODE          ABB PRICELISTGRP     PIECES  VOLUME  AVGVOLUME
    01            SM          1010         21    4.13      3.186
    01            SM          2010         21    0.00      3.186
    01            SM          3000         21   55.00      3.186
    01            SM          3010         21    7.77      3.186
    02            SMP         1010         30    2.99       .100
    03            SMC         1010         10    4.59       .459
    40            DB          1010         21    5.28       .251
    report total:                          82   79.76       .973
    It's unclear if you want to GROUP BY just code (like I did above) or by both code and abbreviation.
    Given that this data is coming from a view, it might be simpler and/or more efficient to make a separate version of the view, or to replicate most of the view in a query.

  • Reports 6i and analytical function

    hi
    I have this query which works fine in TOAD
    SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
                    rvt.transaction_date srv_date, inv.segment1 item_no,
                    rvt.item_desc item_description, hrov.NAME,               
                    (   SUBSTR (v.standard_industry_class, 1, 1)
                     || '-'
                     || po_headers.segment1
                     || '-'
                     || TO_CHAR (po_headers.creation_date, 'RRRR')
                    ) po_no,
                    po_headers.creation_date_disp po_date,   
                          (  (rvt.currency_conversion_rate * po_lines.unit_price)
                     * rvt.transact_qty
                    )aMOUNT  ,
    ----Analytic function used here                      
            SUM(          (  (rvt.currency_conversion_rate * po_lines.unit_price)
                     * rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,                                                                                 
                    (SELECT SUM (mot.on_hand)
                       FROM mtl_onhand_total_mwb_v mot
                      WHERE inv.inventory_item_id = mot.inventory_item_id
                        --  AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
                        AND loc.inventory_location_id = mot.locator_id
                        AND loc.organization_id = mot.organization_id
                        AND rvt.locator_id = mot.locator_id) onhand
               FROM rcv_vrc_txs_v rvt,
                    mtl_system_items_b inv,
                    mtl_item_locations loc,
                    hr_organization_units_v hrov,
                    po_headers_v po_headers,
                    ap_vendors_v v,
                    po_lines_v po_lines
              WHERE inv.inventory_item_id(+) = rvt.item_id
                AND po_headers.vendor_id = v.vendor_id
                AND rvt.po_line_id = po_lines.po_line_id
                AND rvt.po_header_id = po_lines.po_header_id
                AND rvt.po_header_id = po_headers.po_header_id
                AND rvt.supplier_id = v.vendor_id
                AND inv.organization_id = hrov.organization_id
                AND rvt.transaction_type = 'DELIVER'
                AND rvt.inspection_status_code <> 'REJECTED'
                AND rvt.organization_id = inv.organization_id(+)
                AND to_char(to_date(rvt.transaction_date, 'DD/MM/YYYY'), 'DD-MON-YYYY') BETWEEN (:p_from_date)
                                                     AND NVL (:p_to_date,
                                                              :p_from_date)
                AND rvt.locator_id = loc.physical_location_id(+)
                AND transaction_id NOT IN (
                       SELECT parent_transaction_id
                         FROM rcv_vrc_txs_v rvtd
                        WHERE rvt.item_id = rvtd.item_id
                          AND rvtd.transaction_type IN
                                      ('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
                                      GROUP BY rvt.receipt_num , rvt.supplier ,
                    rvt.transaction_date , inv.segment1 ,
                    rvt.item_desc , hrov.NAME,v.standard_industry_clasS,po_headers.segment1,po_headers.creation_datE,
                    po_headers.creation_date_disp,inv.inventory_item_iD,loc.inventory_location_id,loc.organization_id,
                    rvt.locator_iD,rvt.currency_conversion_rate,po_lines.unit_price, rvt.transact_qty
    but it gives a blank page in Reports 6i.
    Could it be that Reports 6i does not support analytical functions? Kindly guide me to another alternative.
    thanking in advance
    Edited by: makdutakdu on Mar 25, 2012 2:22 PM

    hi
    will the view be like
    create view S_Amount as SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
                    rvt.transaction_date srv_date, inv.segment1 item_no,
                    rvt.item_desc item_description, hrov.NAME,               
                    (   SUBSTR (v.standard_industry_class, 1, 1)
                     || '-'
                     || po_headers.segment1
                     || '-'
                     || TO_CHAR (po_headers.creation_date, 'RRRR')
                    ) po_no,
                    po_headers.creation_date_disp po_date,   
                          (  (rvt.currency_conversion_rate * po_lines.unit_price)
                     * rvt.transact_qty
                    )aMOUNT  ,
    ----Analytic function used here                      
            SUM(          (  (rvt.currency_conversion_rate * po_lines.unit_price)
                     * rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,                                                                                 
                    (SELECT SUM (mot.on_hand)
                       FROM mtl_onhand_total_mwb_v mot
                      WHERE inv.inventory_item_id = mot.inventory_item_id
                        --  AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
                        AND loc.inventory_location_id = mot.locator_id
                        AND loc.organization_id = mot.organization_id
                        AND rvt.locator_id = mot.locator_id) onhand
               FROM rcv_vrc_txs_v rvt,
                    mtl_system_items_b inv,
                    mtl_item_locations loc,
                    hr_organization_units_v hrov,
                    po_headers_v po_headers,
                    ap_vendors_v v,
                    po_lines_v po_lines
              WHERE inv.inventory_item_id(+) = rvt.item_id
                AND po_headers.vendor_id = v.vendor_id
                AND rvt.po_line_id = po_lines.po_line_id
                AND rvt.po_header_id = po_lines.po_header_id
                AND rvt.po_header_id = po_headers.po_header_id
                AND rvt.supplier_id = v.vendor_id
                AND inv.organization_id = hrov.organization_id
                AND rvt.transaction_type = 'DELIVER'
                AND rvt.inspection_status_code <> 'REJECTED'
                AND rvt.organization_id = inv.organization_id(+)
                           AND rvt.locator_id = loc.physical_location_id(+)
                AND transaction_id NOT IN (
                       SELECT parent_transaction_id
                         FROM rcv_vrc_txs_v rvtd
                        WHERE rvt.item_id = rvtd.item_id
                          AND rvtd.transaction_type IN
                                      ('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
                                      GROUP BY rvt.receipt_num , rvt.supplier ,
                    rvt.transaction_date , inv.segment1 ,
                    rvt.item_desc , hrov.NAME,v.standard_industry_clasS,po_headers.segment1,po_headers.creation_datE,
                    po_headers.creation_date_disp,inv.inventory_item_iD,loc.inventory_location_id,loc.organization_id,
                rvt.locator_iD,rvt.currency_conversion_rate,po_lines.unit_price, rvt.transact_qty
    Is this correct? I mean, I have not included the bind parameters in the view. Moreover, should this view be joined on all the columns in the FROM clause of the original query?
    kindly guide
    thanking in advance

  • Replacing Oracle's FIRST_VALUE and LAST_VALUE analytical functions.

    Hi,
    I am using OBI 10.1.3.2.1 where, I guess, EVALUATE is not available. I would like to know alternatives, esp. to replace Oracle's FIRST_VALUE and LAST_VALUE analytical functions.
    I want to track some changes. For example, there are four methods of travel - Air, Train, Road and Sea. I would like to know a traveler's first method of traveling and last method of traveling in a year. If both of them match then a certain action is taken. If they do not match, then another action is taken.
    I tried as under.
    1. Get the Sequence ID for each travel within a year per traveler as Sequence_Id.
    2. Get the Lowest Sequence ID (which should be 1) for travels within a year per traveler as Sequence_LId.
    3. Get the Highest Sequence ID (which could be 1 or greater than 1) for travels within a year per traveler as Sequence_HId.
    4. If Sequence ID = Lowest Sequence ID then display the method of travel as First Method of Travel.
    5. If Sequence ID = Highest Sequence ID then display the method of travel as Latest Method of Travel.
    6. If First Method of Travel = Latest Method of Travel then display Yes/No as Match.
    The issue is that the cells could be blank in First Method of Travel and Last Method of Travel unless the traveler traveled only once in a year.
    Using Oracle's FIRST_VALUE and LAST_VALUE analytical functions, I can get a result like
    Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
    ABC | 01/01/2000 | 04/04/2000 | Road | Road | Air | No
    ABC | 01/01/2000 | 15/12/2000 | Air | Road | Air | No
    XYZ | 01/01/2000 | 04/05/2000 | Train | Train | Train | Yes
    XYZ | 01/01/2000 | 04/11/2000 | Train | Train | Train | Yes
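    For reference, the Oracle side of that result can be produced with something like the sketch below (the table name travel and the column names are assumptions taken from the output above; note that LAST_VALUE needs an explicit window to see the whole partition):
    SELECT t.*,
           CASE WHEN first_method = last_method THEN 'Yes' ELSE 'No' END AS match_flag
    FROM  (SELECT traveler,
                  card_issue_date,
                  journey_date,
                  method,
                  FIRST_VALUE(method) OVER (PARTITION BY traveler, EXTRACT(YEAR FROM journey_date)
                                            ORDER BY journey_date) AS first_method,
                  LAST_VALUE(method)  OVER (PARTITION BY traveler, EXTRACT(YEAR FROM journey_date)
                                            ORDER BY journey_date
                                            ROWS BETWEEN UNBOUNDED PRECEDING
                                                     AND UNBOUNDED FOLLOWING) AS last_method
           FROM   travel) t;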
    Using OBI Answers, I am getting something like this.
    Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
    ABC | 01/01/2000 | 04/04/2000 | Road | Road | <BLANK> | No
    ABC | 01/01/2000 | 15/12/2000 | Air | <BLANK> | Air | No
    XYZ | 01/01/2000 | 04/05/2000 | Train | Train | <BLANK> | No
    XYZ | 01/01/2000 | 04/11/2000 | Train | <BLANK> | Train | No
    Above, for XYZ traveler the Match? clearly shows a wrong result (although somehow it's correct for traveler ABC).
    Would appreciate if someone can guide me how to resolve the issue.
    Many thanks,
    Manoj.
    Edited by: mandix on 27-Nov-2009 08:43
    Edited by: mandix on 27-Nov-2009 08:47

    Hi,
    Just to recap: in OBI 10.1.3.2.1, I am trying to find an alternative to the FIRST_VALUE and LAST_VALUE analytic functions used in Oracle. Somehow, I feel it's achievable. I would like to know answers to the following questions.
    1. Is there any way of referring to a cell value and displaying it in other cells for a reference value?
    For example, can I display the First Method of Travel for traveler 'ABC' and 'XYZ' for all the rows returned in the same column, respectively?
    2. I tried the RMIN, RMAX functions in the RPD but it does not accept a "BY" clause (for example, RMIN(Transaction_Id BY Traveler) to define the Lowest Sequence Id per traveler). Am I doing something wrong here? Why can a formula with a "BY" clause be defined in Answers but not the RPD? The idea is to use this in Answers. This is in relation to my first question.
    Could someone please let me know?
    I understand that this thread that I have posted is related to something that can be done outside OBI, but still would like to know.
    If anything is not clear please let me know.
    Thanks,
    Manoj.

  • Any difference between distinct and aggregate function in sql query cost???

    Hi,
    I have executed many SQL statement patterns, such as:
    a) using a single table
    b) using two tables, using simple joins or outer joins
    but I have not noticed any difference in the SQL statements, either in cost or in the execution plan...
    Anyway, my colleague insists that using an aggregate function is less costly compared to
    DISTINCT... (something I have not confirmed; that's why I believe they are exactly the same...)
    For the first SQL pattern referred to above, we could for example use:
    select distinct deptno
    from emp;

    select count(*), deptno
    from emp
    group by deptno;

    select distinct owner, object_type from all_objects;

    select count(*), owner, object_type from all_objects
    group by owner, object_type;

    Have you ever found any difference between the two?
    Note: I use Ora DB 10g v2.
    Thank you,
    Sim

    DISTINCT and aggregate functions are for different uses and may give the same result, but if you are using an aggregate function just to get distinct records, it will be expensive...
    Example:
    select distinct deptno from scott.dept;
    Statistics
    0 recursive calls
    0 db block gets
    2 consistent gets
    0 physical reads
    0 redo size
    584 bytes sent via SQL*Net to client
    488 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    4 rows processed
    select deptno from scott.emp group by deptno;
    Statistics
    307 recursive calls
    0 db block gets
    60 consistent gets
    6 physical reads
    0 redo size
    576 bytes sent via SQL*Net to client
    488 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    6 sorts (memory)
    0 sorts (disk)
    3 rows processed
    Nimish Garg

  • HTMLDB 1.6 and "order by" in analytic functions

    In HTMLDB 1.6 on Oracle 10g, when I enter the string "order by" in the region source of a report of type "SQL query (PL/SQL function body returning SQL query)", I get:
    1 error has occurred
    * Your query can't include an "ORDER BY" clause when having column heading sorting enabled.
    I understand the reason for this error, but unfortunately I need this for an analytic function:
    row_number() over (partition by ... order by ...)
    It seems that the check is performed by simply looking for the string "order by" in the "region source" (in fact the error fires even if that string is contained within a comment).
    I know possible workarounds (e.g. creating a view and selecting from it), I just wanted to let you know.
    Regards
    Alberto

    Another one under the 'obvious route' category:
    Seems that the ORDER BY check is apparently for ORDER<space>BY... so simply adding extra whitespace between ORDER and BY bypasses the check (at least in 2.1.0.00.39).
    To make it a bit more obvious that a separation is intended, an empty comment, i.e. ORDER/**/BY, works nicely.
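    As a small illustration of the workaround (a sketch only, assuming the standard EMP table as the report's region source), the analytic ORDER BY can be written with the empty comment so the literal string "order by" never appears:
    SELECT empno,
           ename,
           deptno,
           ROW_NUMBER() OVER (PARTITION BY deptno ORDER/**/BY sal DESC) AS sal_rank
    FROM   emp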
    Edited by: mcstock on Nov 19, 2008 10:29 AM

  • Need complex query  with joins and AGGREGATE  functions.

    Hello Everyone ;
    Good Morning to all ;
    I have 3 tables with 2 lakh records. I need to check query performance. How does the CBO rewrite my query against a materialized view?
    I want to make a complex join with an AGGREGATE FUNCTION.
    my table details
    SQL> select * from tab;
    TNAME TABTYPE CLUSTERID
    DEPT TABLE
    PAYROLL TABLE
    EMP TABLE
    SQL> desc emp
    Name
    EID
    ENAME
    EDOB
    EGENDER
    EQUAL
    EGRADUATION
    EDESIGNATION
    ELEVEL
    EDOMAIN_ID
    EMOB_NO
    SQL> desc dept
    Name
    EID
    DNAME
    DMANAGER
    DCONTACT_NO
    DPROJ_NAME
    SQL> desc payroll
    Name
    EID
    PF_NO
    SAL_ACC_NO
    SALARY
    BONUS
    I want to make a complex query with joins and AGGREGATE functions.
    Dept names are: IT, ITES, Accounts, Mgmt, HR
    GRADUATIONS are: Engineering, Arts, Accounts, business_applications
    I want to select records of employees who are working in IT or ITES, whose graduation is "Engineering",
    with salary > 20000 and <= 22800 and bonus > 1000 and <= 1999, with counts for males and females separately.
    Please help me to make such a complex query with joins.
    Thanks in advance ..
    Edited by: 969352 on May 25, 2013 11:34 AM
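    A minimal sketch of the kind of join-plus-aggregate query being asked for, assuming EID is the common join key and using the column names shown in the DESC output above (the department and graduation values are taken from the requirement):
    SELECT d.dname,
           e.egender,
           COUNT(*)      AS emp_count,
           SUM(p.salary) AS total_salary,
           SUM(p.bonus)  AS total_bonus
    FROM   emp e
           JOIN dept    d ON d.eid = e.eid
           JOIN payroll p ON p.eid = e.eid
    WHERE  d.dname IN ('IT', 'ITES')
    AND    e.egraduation = 'Engineering'
    AND    p.salary > 20000 AND p.salary <= 22800
    AND    p.bonus  > 1000  AND p.bonus  <= 1999
    GROUP  BY d.dname, e.egender;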

    969352 wrote:
    why do you avoid providing requested & NEEDED details?
    I do NOT understand what you expect.
    My Goal is:
    1. When executing my own query I need to check the explain plan.
    Please proceed to do so:
    http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9010.htm#SQLRF01601
    2. If I enable the query rewrite option, I want to check the explain plan (how the optimizer rewrites my query).
    Please proceed to do so:
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/ex_plan.htm#PFGRF009
    3. My only aim is QUERY PERFORMANCE with the QUERY REWRITE clause on a materialized view.
    It is an admirable goal.
    Best Wishes on your quest for performance improvements.

  • Lag analytical function and controlling the offset

    Can you help me on that? Small challenge. At least I gave up in half a day.
    Data
    ACCOUNT_NUMBER     BATCH_ID     TRANSACTION_DATE     TRANSACTION_TYPE     TRANSACTION_NUMBER     PARENT_TRANSACTION_NUMBER
    124680     ZY000489     1/11/2011     62     377     NULL
    124680     ZY000489     1/11/2011     1     378     NULL
    124680     ZY000489     1/11/2011     1     379     NULL
    124680     ZY000489     1/11/2011     1     380     NULL
    124680     ZY000489     1/11/2011     62     381     NULL
    124680     ZY000489     1/11/2011     1     382     NULL
    124681     ZY000490     1/11/2011     350     4000     NULL
    124681     ZY000490     1/11/2011     1     4001     NULL
    124681     ZY000490     1/11/2011     1     4002     NULL
    I want to identify the parent Transaction Number for each row in the above data.
    The way to identify the parent transaction is:
    -     All child transactions have type 1.
    -     One main transaction can have multiple line items.
    -     Any transaction (type) can have a related child transaction (Transaction Type 1).
    -     Each logical group of transactions has the same account number, batch id, transaction date and consecutive Transaction Numbers (like 377, 378, 379, 380 in the above example).
    The data should look like below once I identified parent transaction columns:
    ACCOUNT_NUMBER     BATCH_ID     TRANSACTION_DATE     TRANSACTION_TYPE     TRANSACTION_NUMBER     PARENT_TRANSACTION_NUMBER
    124680     ZY000489     1/11/2011     62     377     377
    124680     ZY000489     1/11/2011     1     378     377
    124680     ZY000489     1/11/2011     1     379     377
    124680     ZY000489     1/11/2011     1     380     377
    124680     ZY000489     1/11/2011     62     381     381
    124680     ZY000489     1/11/2011     1     382     381
    124681     ZY000490     1/11/2011     350     4000     4000
    124681     ZY000490     1/11/2011     1     4001     4000
    124681     ZY000490     1/11/2011     1     4002     4000
    I tried using the LAG analytic function, trying to lag dynamically with an offset, but had difficulties dynamically expanding the offset. It's a control-break kind of functionality that I want to achieve in a single SQL statement.
    I know we can do it using a PL/SQL construct, but the challenge is to do it in a single SQL statement. Please help.
    Please let me know if you are able to do it in single SQL.
    Thanks

    Can probably pretty this up ... I just went for functional code for the moment.
    TUBBY_TUBBZ?with
      2     data (acc_no, batch_id, trans_date, trans_type, trans_no) as
      3  (
      4    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 62   , 377   from dual union all
      5    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 378   from dual union all
      6    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 379   from dual union all
      7    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 380   from dual union all
      8    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 62   , 381   from dual union all
      9    select 124680, 'ZY000489', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 382   from dual union all
    10    select 124681, 'ZY000490', to_date('1/11/2011', 'mm/dd/yyyy'), 350  , 4000  from dual union all
    11    select 124681, 'ZY000490', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 4001  from dual union all
    12    select 124681, 'ZY000490', to_date('1/11/2011', 'mm/dd/yyyy'), 1    , 4002  from dual
    13  )
    14  select
    15    acc_no,
    16    batch_id,
    17    trans_date,
    18    trans_type,
    19    trans_no,
    20    case when trans_type != 1
    21    then
    22      trans_no
    23    else
    24      lag
    25      (
    26        case when trans_type = 1
    27        then
    28           null
    29        else
    30           trans_no
    31        end
    32        ignore nulls
    33      ) over (partition by acc_no, batch_id, trans_date order by trans_no asc)
    34    end as parent_trans_no
    35  from data;
                ACC_NO BATCH_ID                 TRANS_DATE                         TRANS_TYPE           TRANS_NO    PARENT_TRANS_NO
                124680 ZY000489                 11-JAN-2011 12 00:00                       62                377                377
                124680 ZY000489                 11-JAN-2011 12 00:00                        1                378                377
                124680 ZY000489                 11-JAN-2011 12 00:00                        1                379                377
                124680 ZY000489                 11-JAN-2011 12 00:00                        1                380                377
                124680 ZY000489                 11-JAN-2011 12 00:00                       62                381                381
                124680 ZY000489                 11-JAN-2011 12 00:00                        1                382                381
                124681 ZY000490                 11-JAN-2011 12 00:00                      350               4000               4000
                124681 ZY000490                 11-JAN-2011 12 00:00                        1               4001               4000
                124681 ZY000490                 11-JAN-2011 12 00:00                        1               4002               4000
    9 rows selected.
    Elapsed: 00:00:00.01
    TUBBY_TUBBZ?

  • Understanding row_number() and using it in an analytic function

    Dear all;
    I have been playing around with ROW_NUMBER and trying to understand how to use it, and yet I still can't figure it out...
    I have the following code below
    create table Employee(
        ID                 VARCHAR2(4 BYTE)         NOT NULL,
       First_Name         VARCHAR2(10 BYTE),
       Last_Name          VARCHAR2(10 BYTE),
        Start_Date         DATE,
        End_Date           DATE,
         Salary             Number(8,2),
       City               VARCHAR2(10 BYTE),
         Description        VARCHAR2(15 BYTE)
    );
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                 values ('01','Jason',    'Martin',  to_date('19960725','YYYYMMDD'), to_date('20060725','YYYYMMDD'), 1234.56, 'Toronto',  'Programmer');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('02','Alison',   'Mathews', to_date('19760321','YYYYMMDD'), to_date('19860221','YYYYMMDD'), 6661.78, 'Vancouver','Tester');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('03','James',    'Smith',   to_date('19781212','YYYYMMDD'), to_date('19900315','YYYYMMDD'), 6544.78, 'Vancouver','Tester');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('04','Celia',    'Rice',    to_date('19821024','YYYYMMDD'), to_date('19990421','YYYYMMDD'), 2344.78, 'Vancouver','Manager');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary,  City,       Description)
                  values('05','Robert',   'Black',   to_date('19840115','YYYYMMDD'), to_date('19980808','YYYYMMDD'), 2334.78, 'Vancouver','Tester');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary, City,        Description)
                  values('06','Linda',    'Green',   to_date('19870730','YYYYMMDD'), to_date('19960104','YYYYMMDD'), 4322.78,'New York',  'Tester');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary, City,        Description)
                  values('07','David',    'Larry',   to_date('19901231','YYYYMMDD'), to_date('19980212','YYYYMMDD'), 7897.78,'New York',  'Manager');
    insert into Employee(ID,  First_Name, Last_Name, Start_Date,                     End_Date,                       Salary, City,        Description)
                  values('08','James',    'Cat',     to_date('19960917','YYYYMMDD'), to_date('20020415','YYYYMMDD'), 1232.78,'Vancouver', 'Tester');
    I did a simple select statement
    select * from Employee e
    and it returns this below
    ID   FIRST_NAME LAST_NAME  START_DAT END_DATE      SALARY CITY       DESCRIPTION
    01   Jason      Martin     25-JUL-96 25-JUL-06    1234.56 Toronto    Programmer
    02   Alison     Mathews    21-MAR-76 21-FEB-86    6661.78 Vancouver  Tester
    03   James      Smith      12-DEC-78 15-MAR-90    6544.78 Vancouver  Tester
    04   Celia      Rice       24-OCT-82 21-APR-99    2344.78 Vancouver  Manager
    05   Robert     Black      15-JAN-84 08-AUG-98    2334.78 Vancouver  Tester
    06   Linda      Green      30-JUL-87 04-JAN-96    4322.78 New York   Tester
    07   David      Larry      31-DEC-90 12-FEB-98    7897.78 New York   Manager
    08   James      Cat        17-SEP-96 15-APR-02    1232.78 Vancouver  Tester
    I wrote another select statement with row_number. See below:
    SELECT first_name, last_name, salary, city, description, id,
       ROW_NUMBER() OVER(PARTITION BY description ORDER BY city desc) "Test#"
       FROM employee
       and I get this result
    First_name  last_name   Salary         City             Description         ID         Test#
    Celina          Rice         2344.78      Vancouver    Manager             04          1
    David          Larry         7897.78      New York    Manager             07          2
    Jason          Martin       1234.56      Toronto      Programmer        01          1
    Alison         Mathews    6661.78      Vancouver   Tester               02          1 
    James         Cat           1232.78      Vancouver    Tester              08          2
    Robert        Black         2334.78     Vancouver     Tester              05          3
    James        Smith         6544.78     Vancouver     Tester              03          4
    Linda         Green        4322.78      New York      Tester             06           5
    I understand the PARTITION BY, which basically means that for each associated group a unique number will be assigned per row; so in this case, since Tester is one group, Manager is another group and Programmer is another group, each group gets its own numbering.
    What is throwing me off is the ORDER BY and how the numbers are assigned. Why is
    1 assigned to Alison Mathews for the Tester group, 2 assigned to James Cat and 3 assigned to Robert Black?
    I apologize if this is a stupid question; I have tried reading about it online and looking at the Oracle documentation but I still don't fully understand why.

    user13328581 wrote:
    understanding row_number() and using it in an analytic function
    ROW_NUMBER() IS an analytic function. Are you trying to use the results of ROW_NUMBER in another analytic function? If so, you need a sub-query. Analytic functions can't be nested within other analytic functions.
    ...I have the following code below
    ... I did a simple select statement
    Thanks for posting all that! It's really helpful.
    ... and I get this result
    First_name  last_name   Salary         City             Description         ID         Test#
    Celina          Rice         2344.78      Vancouver    Manager             04          1
    David          Larry         7897.78      New York    Manager             07          2
    Jason          Martin       1234.56      Toronto      Programmer        01          1
    Alison         Mathews    6661.78      Vancouver   Tester               02          1 
    James         Cat           1232.78      Vancouver    Tester              08          2
    Robert        Black         2334.78     Vancouver     Tester              05          3
    James        Smith         6544.78     Vancouver     Tester              03          4
    Linda         Green        4322.78      New York      Tester             06           5
    ... What is throwing me off is the order by and how this numbering are assigned. why is
    1 assigned to Alison Mathews for the tester group and 2 assigned to James Cat and 3 assigned Robert Black
    That's determined by the analytic ORDER BY clause. You said "ORDER BY city desc", so a row where city='Vancouver' will get a lower number than one where city='New York', since 'Vancouver' comes after 'New York' in alphabetic order.
    If you have several rows that all have the same city, then you can be sure that ROW_NUMBER will assign them consecutive numbers, but it's arbitrary which one of them will be lowest and which highest. For example, you have 5 'Tester's: 4 from Vancouver and 1 from New York. There's no particular reason why the one with first_name='Alison' got assigned #1, and 'James' got #2. If you run the same query again, without changing the table at all, then 'Robert' might be #1. It's certain that the 4 Vancouver rows will be assigned numbers 1 through 4, but there's no way of telling which of those 4 rows will get which of those 4 numbers.
    Similar to a query's ORDER BY clause, the analytic ORDER BY clause can have two or more expressions. The N-th one will only be considered if there was a tie for all (N-1) earlier ones. For example, "ORDER BY city DESC, last_name, first_name" would mean 'Vancouver' comes before 'New York', but, if multiple rows all have city='Vancouver', last_name would determine the order: 'Black' would get a lower number than 'Cat'. If you had multiple rows with city='Vancouver' and last_name='Black', then the order would be determined by first_name.
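    For instance, a sketch on the same Employee table adding tie-breaker expressions so the numbering becomes fully deterministic:
    SELECT first_name, last_name, salary, city, description, id,
           ROW_NUMBER() OVER (PARTITION BY description
                              ORDER BY city DESC, last_name, first_name) AS "Test#"
    FROM   employee;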

  • Hide duplicate row and analytic functions

    Hi all,
    I have to count how many customers have two particular products in the same area.
    Table cols are:
    AREA
    PRODUCT_CODE (PK)
    CUSTOMER_ID (PK)
    QTA
    The query is:
    select distinct area, count(customer_id) over(PARTITION BY area)
    from all_products
    where product_code in ('BC01007', 'BC01004')
    group by area, customer_id
    having sum(decode(product_code,'BC01007',qta,0)) > 0
    and sum(decode(product_code,'BC01004',qta,0)) > 0;
    In SQL*Plus it works fine, but in Oracle Discoverer I can't get distinct results even if I check "Hide duplicate rows" in the Table Layout.
    Does anybody have another way to get distinct results for analytic function results?
    Thanks in advance,
    Giuseppe

    The query in Disco is exactly the one I've posted before.
    Results are there:
    AREA.........................................C1
    01704 - AREA VR NORD..............3
    01704 - AREA VR NORD..............3
    01704 - AREA VR NORD..............3
    01705 - AREA VR SUD.................1
    02702 - AREA EMILIA NORD........1
    If I check "Hide duplicate rows" in the layout options, the results don't change.
    If I add the DISTINCT clause manually in SQL*Plus the query becomes:
    SELECT distinct o141151.AREA as E141152,COUNT(o141151.CUSTOMER_ID) OVER(PARTITION BY o141151.AREA ) as C_1
    FROM BPN.ALL_PRODUCTS o141151
    WHERE (o141151.PRODUCT_CODE IN ('BC01006','BC01007','BC01004'))
    GROUP BY o141151.AREA,o141151.CUSTOMER_ID
    HAVING (( SUM(DECODE(o141151.PRODUCT_CODE,'BC01006',1,0)) ) > 0 AND ( SUM(DECODE(o141151.PRODUCT_CODE,'BC01004',1,0)) ) > 0)
    and the results are not duplicated.
    AREA.........................................C1
    01704 - AREA VR NORD..............3
    01705 - AREA VR SUD.................1
    02702 - AREA EMILIA NORD........1
    Is there any other way to force a DISTINCT clause in Discoverer?
    Thank you
    Giuseppe

  • How to ignore nulls in analytic functions ( row_number() and count())

    how to ignore nulls in analytic functions ( row_number() and count())

    I am attaching test data; can anyone help me please?
    Thanks in advance.
    CREATE TABLE TEMP_table
    (
    ACCTNUM NUMBER,
    l_DATE TIMESTAMP(3),
    CODE VARCHAR2(35 BYTE),
    VENDOR VARCHAR2(35 BYTE)
    );
    insert into temp_table values (1,sysdate+1/60,'bso','v1');
    insert into temp_table values (1,sysdate+2/60,'bsof','v1');
    insert into temp_table values (1,sysdate+3/60,'bsof','v2');
    insert into temp_table values (1,sysdate+4/60,'','v1');
    I am executing this query:
    SELECT acctnum,l_date,vendor,code_1,
           CASE
             WHEN code = 'bsof'
              AND COUNT (DISTINCT code) OVER (PARTITION BY acctnum, vendor) > 1
              AND row_number () OVER (PARTITION BY acctnum, vendor ORDER BY vendor, l_date) != 1
             THEN 'yes'
           ELSE 'no' END result
      FROM  (select  acctnum,l_date,vendor, code code_1, case when code IN ('bso', 'bsof') then code
      else null end code   from  TEMP_TABLE
    ORDER BY acctnum ,l_date);
    my result :
    1    3/23/2011 5:24:34.000 PM    v1    bso    no
    1    3/23/2011 5:48:36.000 PM    v1    bsof    yes
    1    3/23/2011 6:36:41.000 PM    v1    bsof    yes
    1    3/24/2011 11:55:53.000 AM    v1        no
    1    3/23/2011 6:12:38.000 PM    v2    bsof    no
    I need to eliminate NULLs in the top query, not in the inner query (i.e. without using a WHERE condition in the inner query).
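    For what it's worth, aggregate analytics such as COUNT(expr) OVER (...) already skip NULLs, while ROW_NUMBER() numbers every row. So one rough sketch (against the TEMP_TABLE above, as an illustration only) is to use a running COUNT of the non-NULL codes where a NULL-ignoring row number is needed:
    SELECT acctnum,
           l_date,
           vendor,
           code,
           -- counts only rows whose code is not NULL, unlike ROW_NUMBER()
           COUNT(code) OVER (PARTITION BY acctnum, vendor
                             ORDER BY l_date) AS non_null_seq
    FROM   temp_table;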

  • How to use aggregate functions into Analytical functions

    Can we use aggregate functions inside analytic functions?
    Please provide one example.
    Smiles.

    HI Learner6
    for information:
    Aggregate Functions
    Analytic Functions
    for practice:
    ORACLE-BASE - Analytic Functions
    Thank you
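    As a short sketch of the combination (assuming the standard SCOTT.EMP table): the aggregate SUM is evaluated per GROUP BY group first, and the analytic functions are then applied over those aggregated rows, so an aggregate can sit inside an analytic call:
    SELECT deptno,
           SUM(sal)                              AS dept_sal,       -- plain aggregate
           RANK() OVER (ORDER BY SUM(sal) DESC)  AS sal_rank,       -- analytic over the aggregate
           SUM(SUM(sal)) OVER (ORDER BY deptno)  AS running_total   -- aggregate nested inside an analytic
    FROM   emp
    GROUP  BY deptno;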

  • Advantages and disadvantages of Analytical function

    Please list the advantages and disadvantages of normal queries versus queries using analytic functions (performance-wise).

    I'm not sure what you wish to compare.
    Analytic functions give you functionality that cannot otherwise be achieved easily in a lot of cases. They can introduce some performance degradation to a query, but you have to compare on a query-by-query basis to determine whether analytic functions are the best solution for the issue. If it were as simple as saying that analytic functions are always slower than doing without them, then Oracle wouldn't have bothered introducing them into the language.
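    As an illustration of that point, compare the pre-analytic way of showing each employee next to the department average with the analytic version (a sketch against the standard EMP table):
    -- without analytic functions: join back to an aggregated copy of the table
    SELECT e.ename, e.sal, d.avg_sal
    FROM   emp e,
           (SELECT deptno, AVG(sal) AS avg_sal FROM emp GROUP BY deptno) d
    WHERE  e.deptno = d.deptno;
    -- with an analytic function: a single pass, no self-join
    SELECT ename, sal,
           AVG(sal) OVER (PARTITION BY deptno) AS avg_sal
    FROM   emp;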
