Count distinct values in the current group

Hi--
Is there a way to count the distinct values within the current group? i.e., I've got a PO and want to display all the addresses at the shipment level if there is more than one distinct address, but if they are all the same as the header-level address, then I don't want any of them to show up.
I'm using the following at the header level to tell the header address not to show up if there are multiple shipment-level addresses, but I can't seem to get a similar statement to work when it sits at the same level as the group I want to count.
This is what I use at the header--it seems to work:
<?if:count(xdoxslt:distinct_values(PLL_SHIP_ADDRESS_LINE1))>1?>See Details Below<?end if?>
<?if:count(xdoxslt:distinct_values(PLL_SHIP_ADDRESS_LINE1))=1?>
POH_SHIP_ADDRESS_LINE1
POH_SHIP_ADDRESS_LINE2
POH_SHIP_ADDRESS_LINE3
POH_SHIP_ADR_INFO POH_SHIP_COUNTRY<?end if?>
A really simplified version of the structure of the report is below:
<?xml version="1.0" ?>
<!-- Generated by Oracle Reports version 10.1.2.0.2 -->
<SMTPOXPRPOP2>
  <LIST_G_INIT_INFO>
    <G_INIT_INFO>
      <MANUAL_PO_NUM_TYPE>NUMERIC</MANUAL_PO_NUM_TYPE>
      <C_COMPANY>CompanyName</C_COMPANY>
      <LIST_G_HEADERS>
        <G_HEADERS>
          <POH_PO_NUM>310001100</POH_PO_NUM>
          <LIST_G_LINES>
            <G_LINES>
              <POL_VENDOR_PROD_NUM>12q</POL_VENDOR_PROD_NUM>
              <POL_ITEM_DESCRIPTION>sample</POL_ITEM_DESCRIPTION>
              <POL_QUANTITY_TO_PRINT>10</POL_QUANTITY_TO_PRINT>
              <LIST_G_SHIPMENTS>
                <G_SHIPMENTS>
                  <PLL_SHIP_COUNTRY>Canada</PLL_SHIP_COUNTRY>
                  <PLL_SHIP_ADR_INFO>Calg,AB Zip</PLL_SHIP_ADR_INFO>
                  <PLL_SHIP_ADDRESS_LINE3 />
                  <PLL_SHIP_ADDRESS_LINE2 />
                  <PLL_SHIP_ADDRESS_LINE1>Ad1</PLL_SHIP_ADDRESS_LINE1>
                </G_SHIPMENTS>
                <G_SHIPMENTS>
                  <PLL_SHIP_COUNTRY>Canada</PLL_SHIP_COUNTRY>
                  <PLL_SHIP_ADR_INFO>Calg,AB Zip</PLL_SHIP_ADR_INFO>
                  <PLL_SHIP_ADDRESS_LINE3 />
                  <PLL_SHIP_ADDRESS_LINE2 />
                  <PLL_SHIP_ADDRESS_LINE1>Ad1</PLL_SHIP_ADDRESS_LINE1>
                </G_SHIPMENTS>
              </LIST_G_SHIPMENTS>
            </G_LINES>
          </LIST_G_LINES>
          <POH_SHIP_ADDRESS_LINE2 />
          <POH_SHIP_COUNTRY>Canada</POH_SHIP_COUNTRY>
          <POH_SHIP_ADR_INFO>Kanata,ON K2V 0A2</POH_SHIP_ADR_INFO>
          <POH_SHIP_ADDRESS_LINE3 />
          <POH_SHIP_ADDRESS_LINE1>XXX Palladium Drive</POH_SHIP_ADDRESS_LINE1>
        </G_HEADERS>
      </LIST_G_HEADERS>
    </G_INIT_INFO>
  </LIST_G_INIT_INFO>
</SMTPOXPRPOP2>
Could anyone help out with this?
Thanks--I'd really appreciate it!
Kate

Hi Vetsrini--
Thanks for getting back to me so quickly! I'd love to email you a copy and the XML if you wouldn't mind taking a look. It'll probably be clearer than me trying to explain.
I can't quite figure out how to do that, though--your profile doesn't list an email address. Do I need to click elsewhere?
Thanks!
Kate

Similar Messages

  • Count distinct values in report builder

    I have a situation where I have to count a distinct number of customers.
    I have a query which returns the list of bill_to_customer_id values from the ra_customer_trx_all table, and I have to display only the number of distinct customers. I can't do this in the query because it has to be grouped, and I am doing it in an aging report. I have to list the number of distinct customers in each aging period. Can anybody please help me with how to achieve this in Reports 6i?
    Thanks

    How can I count distinct values in Reports?
    The situation is like this:
    I have a query which lists customer_id, invoice number, and amount due.
    What I want is to count the distinct customer_id values and display the number of distinct customers. One customer_id can be repeated any number of times, but I should count it only once.
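    No reply is shown here, but if the aging buckets can be pushed into the SQL itself, one option is a conditional COUNT(DISTINCT ...) per bucket. Below is a hedged sketch only; the join to ar_payment_schedules_all, the due_date column, and the bucket boundaries are assumptions for illustration, not taken from the post:

    -- Hypothetical sketch: distinct customers per aging bucket.
    -- Join, column names and bucket edges are illustrative assumptions.
    SELECT CASE
             WHEN TRUNC(SYSDATE) - ps.due_date <= 30 THEN '0-30'
             WHEN TRUNC(SYSDATE) - ps.due_date <= 60 THEN '31-60'
             WHEN TRUNC(SYSDATE) - ps.due_date <= 90 THEN '61-90'
             ELSE '90+'
           END                                   AS aging_period,
           COUNT(DISTINCT t.bill_to_customer_id) AS distinct_customers
    FROM   ra_customer_trx_all t
           JOIN ar_payment_schedules_all ps
             ON ps.customer_trx_id = t.customer_trx_id
    GROUP  BY CASE
                WHEN TRUNC(SYSDATE) - ps.due_date <= 30 THEN '0-30'
                WHEN TRUNC(SYSDATE) - ps.due_date <= 60 THEN '31-60'
                WHEN TRUNC(SYSDATE) - ps.due_date <= 90 THEN '61-90'
                ELSE '90+'
              END;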

  • Logical Aggregate Column (count(distinct)) Does Not Group for SQL Server DB

    When using the count(distinct column_name) aggregate function within a Logical Fact source in the Business Model and Mapping layer of the RPD file, the output in BI Answers does not group correctly, but only for SQL Server 2008 database sources. All Oracle database sources represent the same aggregate column correctly within BI Answers.
    I am using OBIEE version 10.1.3.3.3
    Does anyone know how to resolve this issue?
    Thanks in advance,
    Kyle

    I thought that I would update my current findings on this issue. If you display the report in BI Answers as a Pivot Table view, the aggregate column displays properly; it does not in a Table or Compound Layout view, for some reason. I am still working with Oracle Support on this.

  • Counting distinct values???

    Hi all,
    Why does this not work?
    "Select count(distinct employee_id) from employee_table"
    I am looking to return the count of the distinct employee_id.

    Because it's not valid SQL.
    Try:
    select * from employee_table where employee_id in (select distinct employee_id from employee_table)
    Not very neat, but it should work (on Oracle or SQL Server).

  • Problem with Cross-tab report (RTF Template) null values, current-group()

    Hi, experts!
    I generate a cross-tab report using an RTF template and I have a problem with grouping.
    My XML file has to have one group only, because I want to use dynamic regrouping inside the RTF template.
    Here is my data structure (XML file): (see the image for details)
    http://img156.imageshack.us/img156/3997/xmlstructureay9.jpg
    =======================
    WITH GROUPS
    =======================
    Here is my RTF template:
    http://img151.imageshack.us/img151/2951/resultcrosstabbycustomehf8.jpg
    When I use grouping and want a cross-tab report for each group, I have the following problem: (see the image for details)
    http://img156.imageshack.us/img156/8786/resultcrosstabbycustomect9.jpg
    And this is the code I'm using inside the RTF template with groups: (see the image for details)
    http://img156.imageshack.us/img156/4253/fieldbrowserxb7.jpg
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    =======================
    WITHOUT GROUPS
    =======================
    Here is my RTF template:
    http://img151.imageshack.us/img151/9545/resultcrosstabwithoutgrzg7.jpg
    When I generate one cross-tab there is no problem: (see the image for details)
    http://img156.imageshack.us/img156/7030/resultcrosstabwithoutgrsr1.jpg
    This is the code I'm using inside the RTF template WITHOUT groups: (see the image for details)
    http://img151.imageshack.us/img151/7030/fieldbrowserwithoutgrours8.jpg
    Can someone give me a hand to deal with this?
    Thank you!

    Pitson,
    better to ask this question in the BI Publisher Forum than here in the Oracle Reports Forum:
    BI Publisher
    Regards
    Rainer

  • How to count distinct excluding a value in business layer?

    Hi all,
    I have a column which has many values. I need to make this a measure with a count distinct aggregator, but I should not count 0 in the column. How can I do this? If I try to use any condition, the aggregation option is disabled. Please help.
    Thanks

    Look at this example:
    In the BMM layer, I made a measure in the SALES fact table:
    Count_Distinct_Prod_Id_Exclude_Prod_Id_144
    It counts distinct PRODUCTS.PROD_ID, but excludes PROD_ID=144 from the count.
    Make this measure like this:
    1. New object/Logical column
    2. Go to the data type tab and click EDIT on the logical table source
    3. Now, in the general tab, add a join to a table (in my case PRODUCTS)
    4. Go to the column mapping tab -> show unmapped columns
    5. In the new column (in my case Count_Distinct_Prod_Id_Exclude_Prod_Id_144) write code similar to this:
    CASE WHEN "orcl".""."SH"."PRODUCTS"."PROD_ID" = 144 THEN NULL ELSE "orcl".""."SH"."PRODUCTS"."PROD_ID" END
    6. Click OK and close the logical table source window
    7. Now, in the logical column window go to aggregation tab and choose COUNT DISTINCT.
    8. Move the measure Count_Distinct_Prod_Id_Exclude_Prod_Id_144 in the presentation area
    9. Test in Answers (the report contains the following columns)
    PROD_CATEGORY_ID
    Count_Distinct_Prod_Id_Exclude_Prod_Id_144
    And the result in the NQQuery.log is:
    select T21473.PROD_CATEGORY_ID as c1,
    count(distinct case when T21473.PROD_ID = 144 then NULL else T21473.PROD_ID end ) as c2
    from
    PRODUCTS T21473
    group by T21473.PROD_CATEGORY_ID
    order by c1
    Regards
    Goran
    http://108obiee.blogspot.com

  • Count all values with a special WHERE clause in a select for a group?

    Hello,
    I have the following table1:
    code, month, value
    1,1,40
    1,2,50
    1,3,0
    1,4,0
    1,5,20
    1,6,30
    1,7,30
    1,8,30
    1,9,20
    1,10,20
    1,11,0
    1,12,0
    2,1,10
    2,2,10
    2,3,20
    2,4,20
    2,5,20
    2,6,30
    2,7,40
    2,8,50
    2,9,20
    2,10,20
    2,11,20
    2,12,20
    This is a table with 3 columns: the first column is a code, the second one is the month number, and the third one is a value.
    Now I want to select the records for each code, for example all records for code=1.
    I want to count how many values = 0 there are for this code=1. After this counting I want to update the value column with this count of zeros.
    For my example:
    For code 1 there are 4 fields with value 0. Therefore I want to update all values of code 1 to 4.
    For code 2 there are no values = 0. Therefore I want to update the values of code 2 to 0.
    This should be the result:
    code, month, value
    1,1,4
    1,2,4
    1,3,4
    1,4,4
    1,5,4
    1,6,4
    1,7,4
    1,8,4
    1,9,4
    1,10,4
    1,11,4
    1,12,4
    2,1,0
    2,2,0
    2,3,0
    2,4,0
    2,5,0
    2,6,0
    2,7,0
    2,8,0
    2,9,0
    2,10,0
    2,11,0
    2,12,0
    My question is:
    Is there any possibility in Oracle to count, in a select (or in an insert/update statement), all values = 0 for one group (in this example the group is CODE) and do an update in the same statement for this group?
    I hope anyone can give me a hint as to whether this is possible.
    Thanks a lot.
    Best regards,
    Tim

    Here's the select:
    SQL> select code, month
      2        ,count(decode(value,0,1,null)) over (partition by code) ct
      3  from   t
      4  order by code, month
      5  ;
                    CODE                MONTH                   CT
                       1                    1                    4
                       1                    2                    4
                       1                    3                    4
                       1                    4                    4
                       1                    5                    4
                       1                    6                    4
                       1                    7                    4
                       1                    8                    4
                       1                    9                    4
                       1                   10                    4
                       1                   11                    4
                       1                   12                    4
                       2                    1                    0
                       2                    2                    0
                       2                    3                    0
                       2                    4                    0
                       2                    5                    0
                       2                    6                    0
                       2                    7                    0
                       2                    8                    0
                       2                    9                    0
                       2                   10                    0
                       2                   11                    0
                       2                   12                    0
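    The reply only covers the SELECT; as a hedged follow-up sketch (not from the thread), the same analytic count can drive the requested in-place update with a MERGE, using the table and column names from the example:

    -- Hypothetical sketch: write the per-code zero-count back into VALUE.
    MERGE INTO t tgt
    USING (
      SELECT code,
             month,
             COUNT(DECODE(value, 0, 1, NULL)) OVER (PARTITION BY code) AS ct
      FROM   t
    ) src
    ON (tgt.code = src.code AND tgt.month = src.month)
    WHEN MATCHED THEN
      UPDATE SET tgt.value = src.ct;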

  • Distinct count inside a measure group with other measures

    Hello,
    I have 1 distinct count inside a measure group with other measures, sum, count etc. I know this is not recommended due to poor processing performance and query response time.
    Processing performance I can live with if it means not having another measure group, which increases processing time anyway.
    I have used the recommended approach before and it generated many questions about what this second measure group is for (visible via excel), even though I made the distinct count appear in the main measure group via a calculated measure.
    (it would be nice if you could hide measure groups)
    However, my question is: is query response time only affected when the distinct count is used in the query, or is query response time affected regardless of whether the distinct count is used or not?
    Below is an extract from the 2005 distinct count optimizer white paper. It’s not completely clear, but I assume it affects queries regardless of whether the distinct count is used or not?
    "By adding other measures to the measure group holding a distinct count measure, all of the other measures will be at the same granularity as the distinct count measure, resulting in inefficient data structures and suboptimal
    queries."

    You might also be interested in reading this blog post, which deals with a similar scenario, to get a feeling for some of the things that might be going on behind the scenes:
    http://cwebbbi.wordpress.com/2012/11/27/storage-engine-caching-measures-and-measure-groups/
    Chris
    Check out my MS BI blog. I also do SSAS, PowerPivot, MDX and DAX consultancy, and run public SQL Server and BI training courses in the UK.

  • "group by" slow for using "count(distinct some_column)" - a better way?

    Hi all,
    I have a
    select
    count(distinct some_column),
    from [...]
    group by [...];
    which is slowed down by the count(distinct some_column).
    The "group by" aggregates base records.
    But the base records are 1:n - for each #1 event there are #n records.
    Some of the #n records fall into group-by result record (A), some others into group-by result record (B).
    But each should only count +1 per event, regardless of how many of the #n records have fallen into that category.
    Is there another (faster) way to count this?
    - thanks!
    best regards,
    Frank
    Edited by: user8704911 on Jun 29, 2011 1:30 AM

    Hi Dom,
    Incidentally, I went in the direction you proposed: I replaced the PL/SQL collection with a global temporary table.
    But the reason for doing this was a different one:
    I recognized that the group by is much faster if applied to a table or global temporary table.
    At first, however, I just moved the data from the PL/SQL collection to a global temporary table in order to apply the group by there.
    Then the group by was much faster - but moving the data from the PL/SQL collection to the global temporary table ate up the time saved.
    So it was not the group by, but in general the read access to the PL/SQL collection (btw, around 65,000 records).
    Now, having completely replaced the PL/SQL collection with the global temporary table, everything is fine.
    cheers,
    Frank
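    For illustration, a minimal hedged sketch of the approach Frank describes: stage the rows that used to live in the PL/SQL collection in a global temporary table and aggregate there. All table and column names below are invented for the example:

    -- Created once, not per run: session-private staging table.
    CREATE GLOBAL TEMPORARY TABLE gtt_events (
      event_id    NUMBER,
      some_column NUMBER,
      grp_key     VARCHAR2(30)
    ) ON COMMIT DELETE ROWS;

    -- Per run: load the rows that previously sat in the PL/SQL collection...
    INSERT INTO gtt_events (event_id, some_column, grp_key)
    SELECT event_id, some_column, grp_key
    FROM   source_rows;   -- hypothetical source

    -- ...then aggregate on the temporary table, where GROUP BY is fast
    -- and each event counts only once per group.
    SELECT grp_key,
           COUNT(DISTINCT event_id) AS events
    FROM   gtt_events
    GROUP  BY grp_key;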

  • Group by count distinct

    mytable
    id | yy
    1 | 78
    2 | 78
    3 | 78
    3 | 79
    3 | 79
    4 | 79
    5 | 79
    5 | 80
    Desired output:
    yy | id_count
    78 | 3
    79 | 2
    80 | 0
    The following query doesn't work, as it doesn't take into account that an id was already counted:
    select yy, count(distinct id) as id_count
    from mytable
    group by yy
    --output
    yy | id_count
    78 | 3
    79 | 4
    80 | 1
    Hope this makes sense.
    Ideas?

    Hi,
    You only want to count each id once, with the first (that is, lowest) yy: is that right?
    Here's one way:
    WITH got_r_num AS
    (
         SELECT  id
         ,       yy
         ,       ROW_NUMBER () OVER ( PARTITION BY  id
                                      ORDER BY      yy
                                    ) AS r_num
         FROM    my_table
    )
    SELECT    yy
    ,         COUNT ( CASE
                          WHEN  r_num = 1
                          THEN  id
                      END
                    )  AS id_cnt
    FROM      got_r_num
    GROUP BY  yy
    ORDER BY  yy
    ;
    Doing anything for the first of each id is probably a job for "ROW_NUMBER () OVER (PARTITION BY id ...)".

  • Count number of distinct values for a column for all tables that contains that column

    Imagine I have one column called cdperson. With the query below I know which tables have the column cdperson:
    select t.[name]
    from sys.schemas s
    inner join sys.tables t on s.schema_id = t.schema_id
    inner join sys.columns c on t.object_id = c.object_id
    inner join sys.types d on c.user_type_id = d.user_type_id
    where c.name = 'cdperson'
    Now I want to know, for each table, how many distinct values of cdperson there are, and I want the result ordered by the table that has the most distinct values (descending).
    Table1                                                                                     
       cdperson                        select distinct(cdperson) = 10
       cdadress
       quant
    Table2
       cdaddress                      (no column cdperson in this table)
       quant
    Table3
       cdperson                        select distinct(cdperson) = 100
       value
    Table 4
       cdperson                        select distinct(cdperson) = 18
       sum
    I want this result ordered by number of distinct cdperson
    table3   100
    table4   18
    table1   10
    Thks for your answers

    I had to add schema name to the above script to make it work in AdventureWorks:
    CREATE TABLE #temp(TableName sysname , CNT BIGINT)
    DECLARE @QRY NVARCHAR(MAX);
    SET @qry=(SELECT
    N'INSERT INTO #TEMP SELECT '''+schema_name(t.schema_id)+'.'+T.[name] +''' AS TableName, COUNT (DISTINCT ProductID) DistCount FROM '+
    schema_name(t.schema_id)+'.'+t.[name] +';'
    FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id=t.SCHEMA_ID
    INNER JOIN sys.columns c ON t.object_id=c.object_id INNER JOIN sys.types d ON c.user_type_id=d.user_type_id
    WHERE c.name ='ProductID'
    FOR XML PATH(''))
    EXEC(@QRY)
    SELECT * FROM #temp ORDER BY TableName
    DROP TABLE #temp
    Production.Product 504
    Production.ProductCostHistory 293
    Production.ProductDocument 31
    Production.ProductInventory 432
    Production.ProductListPriceHistory 293
    Production.ProductProductPhoto 504
    Production.ProductReview 3
    Production.TransactionHistory 441
    Production.TransactionHistoryArchive 497
    Production.WorkOrder 238
    Production.WorkOrderRouting 149
    Purchasing.ProductVendor 211
    Purchasing.PurchaseOrderDetail 211
    Sales.SalesOrderDetail 266
    Sales.ShoppingCartItem 3
    Sales.SpecialOfferProduct 295
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014
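    To get the ordering the original question asked for (tables with the most distinct values first), the final select of the script above can sort on the count column instead; a small hedged tweak:

    -- Order by the distinct-value count, descending, rather than by table name.
    SELECT * FROM #temp ORDER BY CNT DESC;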

  • Count Distinct over a Window

    Hi everyone,
    An analyst on my team heard of a new metric called "Stickiness". It basically measures how often users are coming to your website over time.
    The definition is as follows:
    # Unique Users Today/#Unique users Over Last 7 days
    and also
    # Unique Users Today/#Unique users Over Last 30 days
    We have visit information stored in a table W_WEB_VISIT_F. For the sake of simplicity, say it has columns VISIT_ID, VISIT_DATE and USER_ID (it has several more dimensional columns, but I want to keep this exercise simple).
    I want to create an aggregate table called W_WEB_VISIT_A that pre-aggregates the three values I need per day: # Unique Users Today, # Unique Users Over Last 7 Days and # Unique Users Over Last 30 Days. The only way I can think of to build the aggregate table is as follows:
    WITH AGG AS (
    SELECT
    VISIT_DATE,
    USER_ID
    FROM W_WEB_VISIT_F
    GROUP BY
    VISIT_DATE,
    USER_ID
    )
    select
    VISIT_DATE,
    COUNT(DISTINCT USER_ID) UNIQUE_TODAY,
    (select count(distinct hist.USER_ID) from agg hist where hist.VISIT_DATE between src.VISIT_DATE - 6 and src.VISIT_DATE) SEVEN_DAYS,
    (select count(distinct hist.USER_ID) from agg hist where hist.VISIT_DATE between src.VISIT_DATE - 29 and src.VISIT_DATE) THIRTY_DAYS
    from agg src
    group by visit_date
    The problem I am having is that W_WEB_VISIT_F has several million records in it, and I can't get the above query to complete. It ran overnight and didn't complete.
    Is there a fancy 11g function I can use to do this for me? Is there a more efficient method?
    Thanks everyone for the help!
    -Joe
    Edited by: user9208525 on Jan 13, 2011 6:24 AM
    You guys are right. I missed the group by I had in the WITH Clause.

    Hi,
    Haven't used the windowing clause a lot, so I wanted to give it a try.
    I made up some data with this query:
    create table t as
    select sysdate-dbms_random.value(0,10) visit_date, mod(level,5)+1 user_id
    from dual
    connect by level <= 20;
    Which gave me the following rows:
    Scott@my10g SQL>select * from t order by visit_date;
    VISIT_DATE             USER_ID
    03/01/2011 13:17:10          1
    04/01/2011 05:30:30          4
    04/01/2011 08:08:13          5
    04/01/2011 14:42:24          3
    04/01/2011 20:20:58          3
    05/01/2011 17:29:24          2
    05/01/2011 17:40:20          4
    05/01/2011 18:32:56          2
    06/01/2011 04:12:53          5
    06/01/2011 08:59:18          2
    06/01/2011 09:04:26          3
    06/01/2011 10:14:20          1
    06/01/2011 14:22:54          1
    06/01/2011 19:39:04          1
    08/01/2011 14:44:18          5
    08/01/2011 21:38:04          5
    11/01/2011 04:56:05          4
    11/01/2011 18:52:29          2
    11/01/2011 23:57:30          4
    13/01/2011 07:24:22          3
    20 rows selected.
    I came up with this query:
    select
            v.*,
            case
                    when unq_l3d is null then -1
                    else trunc(unq_today/unq_l3d,2)
            end ratio
    from (
            select distinct trcdt, unq_today, unq_l3d
            from (
                    select
                    trcdt,
                    count(user_id)
                    over (
                            order by trcdt
                            range between numtodsinterval(1,'DAY') preceding and current row
                    ) unq_today,
                    count(user_id)
                    over (
                            order by trcdt
                            range between numtodsinterval(3,'DAY') preceding and current row
                    ) unq_l3d
                    from (
                            select distinct trunc(visit_date) trcdt, user_id from t
                    )
            )
    ) v
    order by trcdt;
    With my sample data, it gives me:
    TRCDT                UNQ_TODAY    UNQ_L3D RATIO
    03/01/2011 00:00:00          1          1  1.00
    04/01/2011 00:00:00          4          4  1.00
    05/01/2011 00:00:00          5          6  0.83
    06/01/2011 00:00:00          6         10  0.60
    08/01/2011 00:00:00          1          7  0.14
    11/01/2011 00:00:00          2          3  0.66
    13/01/2011 00:00:00          1          3  0.33
    7 rows selected.
    where:
    - UNQ_TODAY is the number of distinct user_id in the day
    - UNQ_L3D is the number of distinct user_id in the last 3 days
    - RATIO is UNQ_TODAY divided by UNQ_L3D (when UNQ_L3D is not zero)
    It seems quite correct, but you would have to modify the query to fit your needs and double-check the results!
    Just noticed that my query is all wrong... must have been missing caffeine, or sleep... but I'm still trying!
    Edited by: Nicosa on Jan 13, 2011 5:29 PM
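    The thread ends unresolved. As one possible approach (a hedged sketch, not from the thread): deduplicate visits to (day, user) pairs first, then join that small set to a day list over the trailing window, which avoids COUNT(DISTINCT) inside an analytic windowing clause. The 30-day column would be analogous:

    -- Hypothetical sketch: distinct users today vs. over the trailing 7 days.
    WITH day_user AS (
      SELECT DISTINCT TRUNC(visit_date) AS visit_day, user_id
      FROM   w_web_visit_f
    )
    SELECT d.visit_day,
           COUNT(DISTINCT CASE WHEN u.visit_day = d.visit_day THEN u.user_id END) AS unique_today,
           COUNT(DISTINCT u.user_id)                                              AS unique_7d
    FROM  (SELECT DISTINCT visit_day FROM day_user) d
    JOIN   day_user u
      ON   u.visit_day BETWEEN d.visit_day - 6 AND d.visit_day
    GROUP  BY d.visit_day
    ORDER  BY d.visit_day;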

  • "median count" in a single group by query

    Hello, the following problem:
    I use a single group-by query to analyze a data table,
    like "select avg(parameter1), sum(parameter2), count(case when...end) from datatable where ....".
    The problem: I want the median of a count without rewriting the query completely, accessing the table several times, or the like.
    An easy example, in detail:
    Create table and fill with example data
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> CREATE TABLE datatable
      2  (
      3    SLOT NUMBER
      4  , DATA NUMBER
      5  );
    Table created.
    SQL> INSERT INTO datatable VALUES (1,1);
    1 row created.
    SQL> INSERT INTO datatable VALUES (1,2);
    1 row created.
    SQL> INSERT INTO datatable VALUES (1,3);
    1 row created.
    SQL> INSERT INTO datatable VALUES (1,4);
    1 row created.
    SQL> INSERT INTO datatable VALUES (1,5);
    1 row created.
    SQL> INSERT INTO datatable VALUES (2,1);
    1 row created.
    SQL> INSERT INTO datatable VALUES (3,1);
    1 row created.
    SQL> INSERT INTO datatable VALUES (3,2);
    1 row created.
    SQL> INSERT INTO datatable VALUES (3,3);
    1 row created.
    SQL> INSERT INTO datatable VALUES (3,4);
    1 row created.
    SQL> INSERT INTO datatable VALUES (3,5);
    1 row created.
    SQL> INSERT INTO datatable VALUES (4,1);
    1 row created.
    SQL> INSERT INTO datatable VALUES (4,2);
    1 row created.
    SQL> INSERT INTO datatable VALUES (4,3);
    1 row created.
    SQL> INSERT INTO datatable VALUES (4,4);
    1 row created.
    SQL> INSERT INTO datatable VALUES (4,5);
    1 row created.
    SQL> INSERT INTO datatable VALUES (5,1);
    1 row created.
    SQL> INSERT INTO datatable VALUES (5,2);
    1 row created.
    SQL> INSERT INTO datatable VALUES (5,3);
    1 row created.
    SQL> INSERT INTO datatable VALUES (5,4);
    1 row created.
    SQL> INSERT INTO datatable VALUES (5,5);
    1 row created.
    In the table there are several items (here: slots), each with its own unique data values.
    I want the median count of data values per slot, to filter out slots with more or fewer values.
    This worked as long as there were only slots with less data:
    SQL> SELECT FLOOR(COUNT(DISTINCT SLOT||DATA)/COUNT(DISTINCT DATA)) FROM datatable;
    FLOOR(COUNT(DISTINCTSLOT||DATA)/COUNT(DISTINCTDATA))
                                                       4
    But when there is a slot having more data it won't work; the very simple and stupid calculation will give too small a value:
    SQL>
    SQL> INSERT INTO datatable VALUES (4,6);
    1 row created.
    SQL>
    SQL> SELECT FLOOR(COUNT(DISTINCT SLOT||DATA)/COUNT(DISTINCT DATA)) FROM datatable;
    FLOOR(COUNT(DISTINCTSLOT||DATA)/COUNT(DISTINCTDATA))
                                                       3
    So what I would need is this:
    SQL>
    SQL> SELECT MEDIAN(count(DISTINCT SLOT) over (partition by DATA)) FROM datatable;
    SELECT MEDIAN(count(DISTINCT SLOT) over (partition by DATA)) FROM datatable
    ERROR at line 1:
    ORA-30483: window functions are not allowed here
    In detail:
    The count delivers the distinct count of slots for each DATA value (possible duplicate entries should not be counted), and I want the median of those counts, which should be 4.
    SQL> SELECT count(DISTINCT SLOT) over (partition by DATA) FROM datatable;
    COUNT(DISTINCTSLOT)OVER(PARTITIONBYDATA)
                                           5
                                           5
                                           5
                                           5
                                           5
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
                                           4
    COUNT(DISTINCTSLOT)OVER(PARTITIONBYDATA)
                                           1
    22 rows selected.
    Is it possible, in any way, to do this without rebuilding the whole query?
    Thanks a lot

    ok, an example:
      CREATE TABLE "DATATABLE"
       (     "TOOL" CHAR(5 BYTE),
         "SLOT" NUMBER,
         "LOTID" CHAR(4 BYTE),
         "STEP" VARCHAR2(44 BYTE),
         "ENDTIME" DATE,
         "PARAMETER1" NUMBER,
         "PARAMETER6" NUMBER
    Insert into DATATABLE  values ('TOOL1',-1,'LOT1','STEP1',to_timestamp('02-FEB-12 03.09.42 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',-1,'LOT1','STEP2',to_timestamp('02-FEB-12 03.18.47 PM','DD-MON-RR HH.MI.SS AM'),42,0);
    Insert into DATATABLE  values ('TOOL1',-1,'LOT1','STEP3',to_timestamp('02-FEB-12 03.09.44 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',-1,'LOT1','STEP4',to_timestamp('02-FEB-12 02.20.38 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',-1,'LOT1','STEP5',to_timestamp('02-FEB-12 02.20.35 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',1,'LOT1','STEP1',to_timestamp('02-FEB-12 01.51.28 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',1,'LOT1','STEP2',to_timestamp('02-FEB-12 01.52.40 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',1,'LOT1','STEP3',to_timestamp('02-FEB-12 01.54.20 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',1,'LOT1','STEP4',to_timestamp('02-FEB-12 01.55.32 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',1,'LOT1','STEP5',to_timestamp('02-FEB-12 01.56.36 PM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',2,'LOT1','STEP1',to_timestamp('02-FEB-12 01.52.41 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',3,'LOT1','STEP1',to_timestamp('02-FEB-12 02.00.29 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',3,'LOT1','STEP2',to_timestamp('02-FEB-12 02.01.43 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',3,'LOT1','STEP3',to_timestamp('02-FEB-12 02.03.23 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',3,'LOT1','STEP4',to_timestamp('02-FEB-12 02.04.34 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',3,'LOT1','STEP5',to_timestamp('02-FEB-12 02.05.40 PM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',4,'LOT1','STEP1',to_timestamp('02-FEB-12 02.02.11 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',4,'LOT1','STEP2',to_timestamp('02-FEB-12 02.03.26 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',4,'LOT1','STEP3',to_timestamp('02-FEB-12 02.05.07 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',4,'LOT1','STEP4',to_timestamp('02-FEB-12 02.06.19 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',4,'LOT1','STEP5',to_timestamp('02-FEB-12 02.07.27 PM','DD-MON-RR HH.MI.SS AM'),20,0);
    Insert into DATATABLE  values ('TOOL1',5,'LOT1','STEP1',to_timestamp('02-FEB-12 02.03.54 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',5,'LOT1','STEP2',to_timestamp('02-FEB-12 02.05.08 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',5,'LOT1','STEP3',to_timestamp('02-FEB-12 02.06.49 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',5,'LOT1','STEP4',to_timestamp('02-FEB-12 02.08.01 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',5,'LOT1','STEP5',to_timestamp('02-FEB-12 02.14.26 PM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',6,'LOT1','STEP1',to_timestamp('02-FEB-12 02.05.35 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',6,'LOT1','STEP2',to_timestamp('02-FEB-12 02.06.50 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',6,'LOT1','STEP3',to_timestamp('02-FEB-12 02.15.14 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',6,'LOT1','STEP4',to_timestamp('02-FEB-12 02.16.26 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',6,'LOT1','STEP5',to_timestamp('02-FEB-12 02.17.31 PM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',7,'LOT1','STEP1',to_timestamp('02-FEB-12 02.06.42 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',8,'LOT1','STEP1',to_timestamp('02-FEB-12 02.14.14 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',8,'LOT1','STEP2',to_timestamp('02-FEB-12 02.15.29 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',8,'LOT1','STEP3',to_timestamp('02-FEB-12 02.17.09 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',8,'LOT1','STEP4',to_timestamp('02-FEB-12 02.18.22 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',8,'LOT1','STEP5',to_timestamp('02-FEB-12 02.23.31 PM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',9,'LOT1','STEP1',to_timestamp('02-FEB-12 02.58.14 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',9,'LOT1','STEP2',to_timestamp('02-FEB-12 02.15.32 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',9,'LOT1','STEP3',to_timestamp('02-FEB-12 02.59.30 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',9,'LOT1','STEP4',to_timestamp('02-FEB-12 03.01.15 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',9,'LOT1','STEP5',to_timestamp('02-FEB-12 03.07.52 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',10,'LOT1','STEP1',to_timestamp('02-FEB-12 02.23.20 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',10,'LOT1','STEP2',to_timestamp('02-FEB-12 02.24.39 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',10,'LOT1','STEP3',to_timestamp('02-FEB-12 02.26.19 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',10,'LOT1','STEP4',to_timestamp('02-FEB-12 02.27.30 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',10,'LOT1','STEP5',to_timestamp('02-FEB-12 02.28.34 PM','DD-MON-RR HH.MI.SS AM'),17,0);
    Insert into DATATABLE  values ('TOOL1',11,'LOT1','STEP1',to_timestamp('02-FEB-12 02.59.37 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',11,'LOT1','STEP2',to_timestamp('02-FEB-12 03.10.51 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',11,'LOT1','STEP3',to_timestamp('02-FEB-12 02.24.52 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',11,'LOT1','STEP4',to_timestamp('02-FEB-12 03.12.03 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',11,'LOT1','STEP5',to_timestamp('02-FEB-12 03.13.41 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',12,'LOT1','STEP1',to_timestamp('02-FEB-12 02.32.30 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',12,'LOT1','STEP2',to_timestamp('02-FEB-12 02.33.43 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',12,'LOT1','STEP3',to_timestamp('02-FEB-12 02.35.24 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',12,'LOT1','STEP4',to_timestamp('02-FEB-12 02.36.35 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',12,'LOT1','STEP5',to_timestamp('02-FEB-12 02.37.41 PM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',13,'LOT1','STEP1',to_timestamp('02-FEB-12 02.34.11 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',13,'LOT1','STEP2',to_timestamp('02-FEB-12 02.35.26 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',13,'LOT1','STEP3',to_timestamp('02-FEB-12 02.37.06 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',13,'LOT1','STEP4',to_timestamp('02-FEB-12 02.38.19 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',13,'LOT1','STEP5',to_timestamp('02-FEB-12 02.39.29 PM','DD-MON-RR HH.MI.SS AM'),17,0);
    Insert into DATATABLE  values ('TOOL1',14,'LOT1','STEP1',to_timestamp('02-FEB-12 02.35.54 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',14,'LOT1','STEP2',to_timestamp('02-FEB-12 02.37.08 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',14,'LOT1','STEP3',to_timestamp('02-FEB-12 02.38.48 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',14,'LOT1','STEP4',to_timestamp('02-FEB-12 02.40.01 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',14,'LOT1','STEP5',to_timestamp('02-FEB-12 02.41.13 PM','DD-MON-RR HH.MI.SS AM'),13,0);
    Insert into DATATABLE  values ('TOOL1',15,'LOT1','STEP1',to_timestamp('02-FEB-12 02.37.37 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',15,'LOT1','STEP2',to_timestamp('02-FEB-12 02.38.52 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',15,'LOT1','STEP3',to_timestamp('02-FEB-12 02.40.33 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',15,'LOT1','STEP4',to_timestamp('02-FEB-12 02.41.46 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',15,'LOT1','STEP5',to_timestamp('02-FEB-12 02.42.58 PM','DD-MON-RR HH.MI.SS AM'),17,0);
    Insert into DATATABLE  values ('TOOL1',16,'LOT1','STEP1',to_timestamp('02-FEB-12 02.39.20 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',16,'LOT1','STEP2',to_timestamp('02-FEB-12 02.40.34 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',16,'LOT1','STEP3',to_timestamp('02-FEB-12 02.42.16 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',16,'LOT1','STEP4',to_timestamp('02-FEB-12 02.43.29 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',16,'LOT1','STEP5',to_timestamp('02-FEB-12 02.44.42 PM','DD-MON-RR HH.MI.SS AM'),15,0);
    Insert into DATATABLE  values ('TOOL1',17,'LOT1','STEP1',to_timestamp('02-FEB-12 02.41.04 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',17,'LOT1','STEP2',to_timestamp('02-FEB-12 02.42.19 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',17,'LOT1','STEP3',to_timestamp('02-FEB-12 02.43.59 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',17,'LOT1','STEP4',to_timestamp('02-FEB-12 02.45.13 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',17,'LOT1','STEP5',to_timestamp('02-FEB-12 02.46.26 PM','DD-MON-RR HH.MI.SS AM'),13,0);
    Insert into DATATABLE  values ('TOOL1',18,'LOT1','STEP1',to_timestamp('02-FEB-12 02.42.49 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',18,'LOT1','STEP2',to_timestamp('02-FEB-12 02.44.03 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',18,'LOT1','STEP3',to_timestamp('02-FEB-12 02.45.44 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',18,'LOT1','STEP4',to_timestamp('02-FEB-12 02.47.01 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',18,'LOT1','STEP5',to_timestamp('02-FEB-12 02.48.10 PM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',19,'LOT1','STEP1',to_timestamp('02-FEB-12 02.44.33 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',19,'LOT1','STEP2',to_timestamp('02-FEB-12 02.45.47 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',19,'LOT1','STEP3',to_timestamp('02-FEB-12 02.47.31 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',19,'LOT1','STEP4',to_timestamp('02-FEB-12 02.48.45 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',19,'LOT1','STEP5',to_timestamp('02-FEB-12 02.49.58 PM','DD-MON-RR HH.MI.SS AM'),14,0);
    Insert into DATATABLE  values ('TOOL1',20,'LOT1','STEP1',to_timestamp('02-FEB-12 02.46.17 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',20,'LOT1','STEP2',to_timestamp('02-FEB-12 02.47.31 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',20,'LOT1','STEP3',to_timestamp('02-FEB-12 02.49.16 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',20,'LOT1','STEP4',to_timestamp('02-FEB-12 02.50.31 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',20,'LOT1','STEP5',to_timestamp('02-FEB-12 02.51.42 PM','DD-MON-RR HH.MI.SS AM'),14,0);
    Insert into DATATABLE  values ('TOOL1',21,'LOT1','STEP1',to_timestamp('02-FEB-12 02.48.01 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',21,'LOT1','STEP2',to_timestamp('02-FEB-12 02.49.16 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',21,'LOT1','STEP3',to_timestamp('02-FEB-12 02.51.02 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',21,'LOT1','STEP4',to_timestamp('02-FEB-12 02.52.16 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',21,'LOT1','STEP5',to_timestamp('02-FEB-12 02.55.04 PM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',22,'LOT1','STEP1',to_timestamp('02-FEB-12 02.49.48 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',22,'LOT1','STEP2',to_timestamp('02-FEB-12 02.51.02 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',22,'LOT1','STEP3',to_timestamp('02-FEB-12 02.52.47 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',22,'LOT1','STEP4',to_timestamp('02-FEB-12 02.55.39 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',22,'LOT1','STEP5',to_timestamp('02-FEB-12 02.56.45 PM','DD-MON-RR HH.MI.SS AM'),15,0);
    Insert into DATATABLE  values ('TOOL1',23,'LOT1','STEP1',to_timestamp('02-FEB-12 02.51.33 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',23,'LOT1','STEP2',to_timestamp('02-FEB-12 02.52.48 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',23,'LOT1','STEP3',to_timestamp('02-FEB-12 02.56.11 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',23,'LOT1','STEP4',to_timestamp('02-FEB-12 02.57.23 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',23,'LOT1','STEP5',to_timestamp('02-FEB-12 02.58.29 PM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',24,'LOT1','STEP1',to_timestamp('02-FEB-12 02.53.18 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',24,'LOT1','STEP2',to_timestamp('02-FEB-12 02.56.00 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',24,'LOT1','STEP3',to_timestamp('02-FEB-12 02.57.53 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',24,'LOT1','STEP4',to_timestamp('02-FEB-12 02.59.05 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',24,'LOT1','STEP5',to_timestamp('02-FEB-12 03.00.12 PM','DD-MON-RR HH.MI.SS AM'),17,0);
    Insert into DATATABLE  values ('TOOL1',25,'LOT1','STEP1',to_timestamp('02-FEB-12 02.56.29 PM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',25,'LOT1','STEP2',to_timestamp('02-FEB-12 02.57.44 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',25,'LOT1','STEP3',to_timestamp('02-FEB-12 02.59.35 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',25,'LOT1','STEP4',to_timestamp('02-FEB-12 03.00.46 PM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',25,'LOT1','STEP5',to_timestamp('02-FEB-12 03.07.19 PM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',-1,'LOT2','STEP1',to_timestamp('24-FEB-12 03.19.26 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',1,'LOT2','STEP1',to_timestamp('24-FEB-12 03.00.47 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',1,'LOT2','STEP2',to_timestamp('24-FEB-12 03.02.56 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',1,'LOT2','STEP3',to_timestamp('24-FEB-12 03.03.56 AM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',2,'LOT2','STEP1',to_timestamp('24-FEB-12 03.02.22 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',2,'LOT2','STEP2',to_timestamp('24-FEB-12 03.05.06 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',2,'LOT2','STEP3',to_timestamp('24-FEB-12 03.06.06 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',3,'LOT2','STEP1',to_timestamp('24-FEB-12 03.03.56 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',3,'LOT2','STEP2',to_timestamp('24-FEB-12 03.07.15 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',3,'LOT2','STEP3',to_timestamp('24-FEB-12 03.13.53 AM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',4,'LOT2','STEP1',to_timestamp('24-FEB-12 03.05.54 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',4,'LOT2','STEP2',to_timestamp('24-FEB-12 03.15.03 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',4,'LOT2','STEP3',to_timestamp('24-FEB-12 03.16.04 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',5,'LOT2','STEP1',to_timestamp('24-FEB-12 03.05.58 AM','DD-MON-RR HH.MI.SS AM'),0,0);
    Insert into DATATABLE  values ('TOOL1',5,'LOT2','STEP2',to_timestamp('24-FEB-12 03.25.35 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',5,'LOT2','STEP3',to_timestamp('24-FEB-12 03.29.56 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',5,'LOT2','STEP4',to_timestamp('24-FEB-12 03.30.57 AM','DD-MON-RR HH.MI.SS AM'),16,0);
    Insert into DATATABLE  values ('TOOL1',6,'LOT2','STEP1',to_timestamp('24-FEB-12 03.14.18 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',6,'LOT2','STEP2',to_timestamp('24-FEB-12 03.17.13 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',6,'LOT2','STEP3',to_timestamp('24-FEB-12 03.19.55 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',7,'LOT2','STEP1',to_timestamp('24-FEB-12 03.15.51 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',7,'LOT2','STEP2',to_timestamp('24-FEB-12 03.21.18 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',7,'LOT2','STEP3',to_timestamp('24-FEB-12 03.22.19 AM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',8,'LOT2','STEP1',to_timestamp('24-FEB-12 03.17.50 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',8,'LOT2','STEP2',to_timestamp('24-FEB-12 03.23.28 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',8,'LOT2','STEP3',to_timestamp('24-FEB-12 03.24.29 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',9,'LOT2','STEP1',to_timestamp('24-FEB-12 03.21.42 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',9,'LOT2','STEP2',to_timestamp('24-FEB-12 03.25.37 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',9,'LOT2','STEP3',to_timestamp('24-FEB-12 03.26.38 AM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',10,'LOT2','STEP1',to_timestamp('24-FEB-12 03.23.25 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',10,'LOT2','STEP2',to_timestamp('24-FEB-12 03.27.47 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',10,'LOT2','STEP3',to_timestamp('24-FEB-12 03.28.47 AM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',11,'LOT2','STEP1',to_timestamp('24-FEB-12 03.27.44 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',11,'LOT2','STEP2',to_timestamp('24-FEB-12 03.32.06 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',11,'LOT2','STEP3',to_timestamp('24-FEB-12 03.33.07 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',12,'LOT2','STEP1',to_timestamp('24-FEB-12 03.29.54 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',12,'LOT2','STEP2',to_timestamp('24-FEB-12 03.34.16 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',12,'LOT2','STEP3',to_timestamp('24-FEB-12 03.35.17 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',13,'LOT2','STEP1',to_timestamp('24-FEB-12 03.32.04 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',13,'LOT2','STEP2',to_timestamp('24-FEB-12 03.36.26 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',13,'LOT2','STEP3',to_timestamp('24-FEB-12 03.37.27 AM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',14,'LOT2','STEP1',to_timestamp('24-FEB-12 03.34.14 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',14,'LOT2','STEP2',to_timestamp('24-FEB-12 03.38.36 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',14,'LOT2','STEP3',to_timestamp('24-FEB-12 03.39.37 AM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',15,'LOT2','STEP1',to_timestamp('24-FEB-12 03.36.24 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',15,'LOT2','STEP2',to_timestamp('24-FEB-12 03.40.46 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',15,'LOT2','STEP3',to_timestamp('24-FEB-12 03.41.46 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',16,'LOT2','STEP1',to_timestamp('24-FEB-12 03.38.34 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',16,'LOT2','STEP2',to_timestamp('24-FEB-12 03.42.55 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',16,'LOT2','STEP3',to_timestamp('24-FEB-12 03.43.56 AM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',17,'LOT2','STEP1',to_timestamp('24-FEB-12 03.40.43 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',17,'LOT2','STEP2',to_timestamp('24-FEB-12 03.45.05 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',17,'LOT2','STEP3',to_timestamp('24-FEB-12 03.46.06 AM','DD-MON-RR HH.MI.SS AM'),17,0);
    Insert into DATATABLE  values ('TOOL1',18,'LOT2','STEP1',to_timestamp('24-FEB-12 03.42.53 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',18,'LOT2','STEP2',to_timestamp('24-FEB-12 03.47.15 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',18,'LOT2','STEP3',to_timestamp('24-FEB-12 03.48.16 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',19,'LOT2','STEP1',to_timestamp('24-FEB-12 03.45.03 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',19,'LOT2','STEP2',to_timestamp('24-FEB-12 03.49.25 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',19,'LOT2','STEP3',to_timestamp('24-FEB-12 03.50.26 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',20,'LOT2','STEP1',to_timestamp('24-FEB-12 03.47.13 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',20,'LOT2','STEP2',to_timestamp('24-FEB-12 03.51.34 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',20,'LOT2','STEP3',to_timestamp('24-FEB-12 03.52.35 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',21,'LOT2','STEP1',to_timestamp('24-FEB-12 03.49.22 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',21,'LOT2','STEP2',to_timestamp('24-FEB-12 03.53.44 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',21,'LOT2','STEP3',to_timestamp('24-FEB-12 03.54.45 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',22,'LOT2','STEP1',to_timestamp('24-FEB-12 03.51.32 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',22,'LOT2','STEP2',to_timestamp('24-FEB-12 03.55.54 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',22,'LOT2','STEP3',to_timestamp('24-FEB-12 03.56.55 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',23,'LOT2','STEP1',to_timestamp('24-FEB-12 03.53.42 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',23,'LOT2','STEP2',to_timestamp('24-FEB-12 03.58.03 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',23,'LOT2','STEP3',to_timestamp('24-FEB-12 03.59.04 AM','DD-MON-RR HH.MI.SS AM'),19,0);
    Insert into DATATABLE  values ('TOOL1',24,'LOT2','STEP1',to_timestamp('24-FEB-12 03.55.51 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',24,'LOT2','STEP2',to_timestamp('24-FEB-12 04.00.14 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',24,'LOT2','STEP3',to_timestamp('24-FEB-12 04.01.15 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    Insert into DATATABLE  values ('TOOL1',25,'LOT2','STEP1',to_timestamp('24-FEB-12 03.58.01 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',25,'LOT2','STEP2',to_timestamp('24-FEB-12 04.02.23 AM','DD-MON-RR HH.MI.SS AM'),1,0);
    Insert into DATATABLE  values ('TOOL1',25,'LOT2','STEP3',to_timestamp('24-FEB-12 04.03.24 AM','DD-MON-RR HH.MI.SS AM'),18,0);
    (code generated by SQL Developer)
    In the example:
    LOT1 has two slots with only 1 step instead of 5 --> these slots should not be counted --> result 23
    LOT2 has one slot with 4 steps instead of 3 --> this slot is OK and should be counted --> result 25
    As previously mentioned, I want to count all slots which got at least the median number of steps [edit: slot -1 should also not be counted].
    LOT1:
        SELECT
          sum(CASE WHEN SLOT != -1 THEN parameter1 END) parameter1,
          min(CASE WHEN SLOT != -1 THEN EndTime END) first_time,
          max(CASE WHEN SLOT != -1 THEN EndTime END) last_time,
          sum(CASE WHEN SLOT != -1 THEN parameter6 END) parameter6,
          CASE WHEN count(DISTINCT CASE WHEN SLOT != -1 THEN SLOT END) > 0 AND ROUND(count(DISTINCT CASE WHEN SLOT != -1 THEN SLOT||STEP END)/count(DISTINCT CASE WHEN SLOT != -1 THEN SLOT END)) > 0 THEN FLOOR(count(DISTINCT CASE WHEN SLOT != -1 THEN SLOT||STEP END)/ROUND(count(DISTINCT CASE WHEN SLOT != -1 THEN SLOT||STEP END)/count(DISTINCT CASE WHEN SLOT != -1 THEN SLOT END))) ELSE NULL END num_slots,
          sum(CASE WHEN SLOT = -1 THEN parameter1 END) minusone_parameter1,
          min(CASE WHEN SLOT = -1 THEN EndTime END) minusone_first_time,
          max(CASE WHEN SLOT = -1 THEN EndTime END) minusone_last_time,
          sum(CASE WHEN SLOT = -1 THEN parameter6 END) minusone_parameter6,
          count(CASE WHEN SLOT = -1 THEN 1 END) num_minusone
        FROM
          datatable
      WHERE
        tool = 'TOOL1'
        AND lotid = 'LOT1';
    PARAMETER1             FIRST_TIME                LAST_TIME                 PARAMETER6             NUM_SLOTS              MINUSONE_PARAMETER1    MINUSONE_FIRST_TIME       MINUSONE_LAST_TIME        MINUSONE_PARAMETER6    NUM_MINUSONE
    423                    02.02.12 13:51:28         02.02.12 15:13:41         0                      23                     46                     02.02.12 14:20:35         02.02.12 15:18:47         0                      5
    This is the desired result.
    The same query with LOT2:
    PARAMETER1             FIRST_TIME                LAST_TIME                 PARAMETER6             NUM_SLOTS              MINUSONE_PARAMETER1    MINUSONE_FIRST_TIME       MINUSONE_LAST_TIME        MINUSONE_PARAMETER6    NUM_MINUSONE
    506                    24.02.12 03:00:47         24.02.12 04:03:24         0                      25                     1                      24.02.12 03:19:26         24.02.12 03:19:26         0                      1
    So right now I get the desired results with this ugly query... but with combinations of several different conditions - e.g. several slots with more steps and a few with less - it may not work.
    Edited by: Paneologist on Feb 24, 2012 7:42 AM
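    For the stated goal (count the slots that have at least the median number of steps), a two-step aggregation may be more robust than the FLOOR/ROUND arithmetic above. The following is only a sketch, reusing the column names from the query above (tool, lotid, slot, step), and has not been tested against every edge case:
        -- count steps per slot first, then keep only the slots whose
        -- step count reaches the median step count across slots
        WITH per_slot AS (
          SELECT slot, COUNT(DISTINCT step) AS n_steps
          FROM   datatable
          WHERE  tool  = 'TOOL1'
          AND    lotid = 'LOT1'
          AND    slot != -1
          GROUP  BY slot
        )
        SELECT COUNT(*) AS num_slots
        FROM   per_slot
        WHERE  n_steps >= (SELECT MEDIAN(n_steps) FROM per_slot);
    On the sample data above this should give 23 for LOT1 (the two 1-step slots fall below the median of 5) and 25 for LOT2 (the 4-step slot is above the median of 3).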

  • Select records based on first n distinct values of column

    I need to write a query in PL/SQL that selects all rows for the first 3 distinct values of a single column (ID in the example below), then all rows for the next 3 distinct values of that column, and so on until the distinct values are exhausted.
    eg:
    ID name age
    1 abc 10
    1 def 20
    2 ghi 10
    2 jkl 20
    2 mno 60
    3 pqr 10
    4 rst 10
    4 tuv 10
    5 vwx 10
    6 xyz 10
    6 hij 10
    7 lmn 10
    so on... (till some count)
    Result should be
    Query 1 should result --->
    ID name age
    1 abc 10
    1 def 20
    2 ghi 10
    2 jkl 20
    2 mno 60
    3 pqr 10
    query 2 should result -->
    4 rst 10
    4 tuv 10
    5 vwx 10
    6 xyz 10
    6 hij 10
    query 3 should result -->
    7 lmn 10
    9 .. ..
    so on..
    How to write a query for this inside a loop.

    Hi,
    So, one group will consist of the lowest id value, the 2nd lowest, and the 3rd lowest, regardless of how many rows are involved. The next group will consist of the 4th lowest id, the 5th lowest, and the 6th lowest. To do that, you need to assign the numbers 1, 2, 3, 4, 5, 6, ... to the rows in order by id, with all rows having the same id getting the same number, and without skipping any numbers.
    That sounds like a job for the analytic DENSE_RANK function:
    WITH got_grp_id AS
    (
        SELECT  id, name, age
        ,       CEIL ( DENSE_RANK () OVER (ORDER BY id)
                       / 3
                     )          AS grp_id
        FROM    table_x
    )
    SELECT  id, name, age
    FROM    got_grp_id
    WHERE   grp_id = 1     -- or whatever group number you want
    ;
    If you'd care to post CREATE TABLE and INSERT statements for your sample data, then I could test it.
    See the forum FAQ {message:id=9360002}
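    For reference, a minimal test setup matching the sample data posted in the question might look like this (the table name table_x is taken from the query above; the column types are assumptions):
        CREATE TABLE table_x
        (
            id    NUMBER,
            name  VARCHAR2(10),
            age   NUMBER
        );
        INSERT INTO table_x (id, name, age) VALUES (1, 'abc', 10);
        INSERT INTO table_x (id, name, age) VALUES (1, 'def', 20);
        INSERT INTO table_x (id, name, age) VALUES (2, 'ghi', 10);
        INSERT INTO table_x (id, name, age) VALUES (2, 'jkl', 20);
        INSERT INTO table_x (id, name, age) VALUES (2, 'mno', 60);
        INSERT INTO table_x (id, name, age) VALUES (3, 'pqr', 10);
        INSERT INTO table_x (id, name, age) VALUES (4, 'rst', 10);
        INSERT INTO table_x (id, name, age) VALUES (4, 'tuv', 10);
        INSERT INTO table_x (id, name, age) VALUES (5, 'vwx', 10);
        INSERT INTO table_x (id, name, age) VALUES (6, 'xyz', 10);
        INSERT INTO table_x (id, name, age) VALUES (6, 'hij', 10);
        INSERT INTO table_x (id, name, age) VALUES (7, 'lmn', 10);
        COMMIT;
    And since the original question asked how to drive this from a loop, one possible PL/SQL sketch (untested, with the batch size of 3 hard-coded) could be:
        DECLARE
            CURSOR c_batch (p_grp NUMBER) IS
                WITH got_grp_id AS
                (
                    SELECT  id, name, age
                    ,       CEIL (DENSE_RANK () OVER (ORDER BY id) / 3) AS grp_id
                    FROM    table_x
                )
                SELECT  id, name, age
                FROM    got_grp_id
                WHERE   grp_id = p_grp;
            l_max_grp  NUMBER;
        BEGIN
            -- number of batches = number of distinct ids / 3, rounded up
            SELECT CEIL (COUNT (DISTINCT id) / 3) INTO l_max_grp FROM table_x;
            FOR g IN 1 .. l_max_grp LOOP
                DBMS_OUTPUT.PUT_LINE ('--- batch ' || g || ' ---');
                FOR r IN c_batch (g) LOOP
                    DBMS_OUTPUT.PUT_LINE (r.id || ' ' || r.name || ' ' || r.age);
                END LOOP;
            END LOOP;
        END;
        /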

  • COUNT(DISTINCT) WITH ORDER BY in an analytic function

    -- I create a table with three fields: Name, Amount, and a Trans_Date.
    CREATE TABLE TEST
    (
        NAME VARCHAR2(19) NULL,
        AMOUNT VARCHAR2(8) NULL,
        TRANS_DATE DATE NULL
    );
    -- I insert a few rows into my table:
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '21', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '68', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/06/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '77', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '221', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '73', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
    commit;
    /* For every row I want the count of distinct AMOUNT values, partitioned by NAME and ordered by TRANS_DATE, looking back only over the last four TRANS_DATE days for each row (e.g., for the row "Anna 110 6/5/2005 8:00:00.000 PM" I only want to look at the dates from 6/2/2005 to 6/5/2005 and count how many different amounts there are for Anna). Note: I cannot use the DISTINCT keyword in this query because it is not allowed together with ORDER BY in an analytic function. */
    select NAME, AMOUNT, TRANS_DATE, COUNT(/*DISTINCT*/ AMOUNT) over ( partition by NAME
    order by TRANS_DATE range between numtodsinterval(3,'day') preceding and current row ) as COUNT_AMOUNT
    from TEST t;
    These are the results I get if I just count all the AMOUNT values without using DISTINCT:
    NAME  AMOUNT  TRANS_DATE                COUNT_AMOUNT
    Anna  110     6/1/2005 8:00:00.000 PM   2
    Anna  20      6/1/2005 8:00:00.000 PM   2
    Anna  110     6/2/2005 8:00:00.000 PM   3
    Anna  21      6/3/2005 8:00:00.000 PM   4
    Anna  68      6/4/2005 8:00:00.000 PM   5
    Anna  110     6/5/2005 8:00:00.000 PM   4
    Anna  20      6/6/2005 8:00:00.000 PM   4
    Bill  43      6/1/2005 8:00:00.000 PM   1
    Bill  77      6/2/2005 8:00:00.000 PM   2
    Bill  221     6/3/2005 8:00:00.000 PM   3
    Bill  43      6/4/2005 8:00:00.000 PM   4
    Bill  73      6/5/2005 8:00:00.000 PM   4
    The COUNT_DISTINCT_AMOUNT column below is the desired output:
    NAME  AMOUNT  TRANS_DATE                COUNT_DISTINCT_AMOUNT
    Anna  110     6/1/2005 8:00:00.000 PM   1
    Anna  20      6/1/2005 8:00:00.000 PM   2
    Anna  110     6/2/2005 8:00:00.000 PM   2
    Anna  21      6/3/2005 8:00:00.000 PM   3
    Anna  68      6/4/2005 8:00:00.000 PM   4
    Anna  110     6/5/2005 8:00:00.000 PM   3
    Anna  20      6/6/2005 8:00:00.000 PM   4
    Bill  43      6/1/2005 8:00:00.000 PM   1
    Bill  77      6/2/2005 8:00:00.000 PM   2
    Bill  221     6/3/2005 8:00:00.000 PM   3
    Bill  43      6/4/2005 8:00:00.000 PM   3
    Bill  73      6/5/2005 8:00:00.000 PM   4
    Thanks in advance.
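    One workaround that avoids DISTINCT in the analytic clause altogether is a correlated scalar subquery over the same 3-day window. This is only a sketch against the TEST table above, not the approach taken in the reply below; note that, like the RANGE window, it gives rows sharing the same TRANS_DATE the same count, so the very first row ("Anna 110 6/1/2005") would come back as 2 rather than the 1 shown in the desired output:
        SELECT  t.name, t.amount, t.trans_date,
                (SELECT COUNT (DISTINCT t2.amount)
                 FROM   test t2
                 WHERE  t2.name = t.name
                 AND    t2.trans_date BETWEEN t.trans_date - 3 AND t.trans_date
                ) AS count_distinct_amount
        FROM    test t
        ORDER BY t.name, t.trans_date;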

    You can try writing your own UDAG (user-defined aggregate).
    Here is a toy example, just to show how it "could" work. I am using only 1, 2, 4, 8, 16, 32 as potential values.
    create or replace type CountDistinctType as object
    (
       bitor_number number,
       static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType)
         return number,
       member function ODCIAggregateIterate(self IN OUT CountDistinctType,
         value IN number) return number,
       member function ODCIAggregateTerminate(self IN CountDistinctType,
         returnValue OUT number, flags IN number) return number,
       member function ODCIAggregateMerge(self IN OUT CountDistinctType,
         ctx2 IN CountDistinctType) return number
    );
    /
    create or replace type body CountDistinctType is 
    static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType) 
    return number is 
    begin
       sctx := CountDistinctType('');
       return ODCIConst.Success;
    end;
    member function ODCIAggregateIterate(self IN OUT CountDistinctType, value IN number)
      return number is
      begin
        if (self.bitor_number is null) then
          self.bitor_number := value;
        else
          self.bitor_number := self.bitor_number+value-bitand(self.bitor_number,value);
        end if;
        return ODCIConst.Success;
      end;
      member function ODCIAggregateTerminate(self IN CountDistinctType, returnValue OUT
      number, flags IN number) return number is
      begin
        returnValue := 0;
        for i in 0..log(2,self.bitor_number) loop
          if (bitand(power(2,i),self.bitor_number)!=0) then
            returnValue := returnValue+1;
          end if;
        end loop;
        return ODCIConst.Success;
      end;
      member function ODCIAggregateMerge(self IN OUT CountDistinctType, ctx2 IN
      CountDistinctType) return number is
      begin
        return ODCIConst.Success;
      end;
      end;
    /
    CREATE or REPLACE FUNCTION CountDistinct (n number) RETURN number 
    PARALLEL_ENABLE AGGREGATE USING CountDistinctType;
    drop table t;
    create table t as select rownum r, power(2,trunc(dbms_random.value(0,6))) p from all_objects;
    SQL> select r,p,countdistinct(p) over (order by r) d from t where rownum<10 order by r;
             R          P          D
             1          4          1
             2          1          2
             3          8          3
             4         32          4
             5          1          4
             6         16          5
             7         16          5
             8          4          5
             9          4          5
    Buy a good book if you want to start writing your own "distinct" algorithm.
    Message was edited by:
    Laurent Schneider
    A simpler but memory-hungry algorithm would store the values in a PL/SQL table inside the UDAG and do the count(distinct) over that table to return the value.
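    That collection-based variant might look roughly like the following. This is only a sketch (NumTab and CountDistinctAllType are made-up names), and keeping every value in memory is exactly the drawback just mentioned:
        create or replace type NumTab as table of number;
        /
        create or replace type CountDistinctAllType as object
        (
          vals NumTab,
          static function ODCIAggregateInitialize(sctx IN OUT CountDistinctAllType)
            return number,
          member function ODCIAggregateIterate(self IN OUT CountDistinctAllType,
            value IN number) return number,
          member function ODCIAggregateTerminate(self IN CountDistinctAllType,
            returnValue OUT number, flags IN number) return number,
          member function ODCIAggregateMerge(self IN OUT CountDistinctAllType,
            ctx2 IN CountDistinctAllType) return number
        );
        /
        create or replace type body CountDistinctAllType is
          static function ODCIAggregateInitialize(sctx IN OUT CountDistinctAllType)
            return number is
          begin
            sctx := CountDistinctAllType(NumTab());
            return ODCIConst.Success;
          end;
          member function ODCIAggregateIterate(self IN OUT CountDistinctAllType,
            value IN number) return number is
          begin
            -- remember every value; this is where the memory goes
            self.vals.extend;
            self.vals(self.vals.count) := value;
            return ODCIConst.Success;
          end;
          member function ODCIAggregateTerminate(self IN CountDistinctAllType,
            returnValue OUT number, flags IN number) return number is
            l_vals NumTab := self.vals;
          begin
            -- count the distinct values collected so far
            select count(distinct column_value) into returnValue from table(l_vals);
            return ODCIConst.Success;
          end;
          member function ODCIAggregateMerge(self IN OUT CountDistinctAllType,
            ctx2 IN CountDistinctAllType) return number is
          begin
            self.vals := self.vals multiset union ctx2.vals;
            return ODCIConst.Success;
          end;
        end;
        /
        create or replace function CountDistinctAll (n number) return number
          parallel_enable aggregate using CountDistinctAllType;
        /
    It would be used the same way as the bit-OR version above, e.g. countdistinctall(p) over (order by r).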
