Analytic Functions with GROUP-BY Clause?

I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
Hypothetical table SALES, consisting of DAY_ID, PRODUCT_ID, PURCHASER_ID, and PURCHASE_PRICE, lists all the purchases made.
Hypothetical Business Question: Product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
INSERT INTO SALES VALUES(1,1,1,1.0);
INSERT INTO SALES VALUES(1,1,1,2.0);
INSERT INTO SALES VALUES(1,2,1,3.0);
INSERT INTO SALES VALUES(1,2,1,4.0);
INSERT INTO SALES VALUES(2,1,1,5.0);
INSERT INTO SALES VALUES(2,1,1,6.0);
INSERT INTO SALES VALUES(2,2,1,7.0);
INSERT INTO SALES VALUES(2,2,1,8.0);
COMMIT;
Day 1: If I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
Day 2: If I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
The desired result set is:
DAY_ID                 MY_MEASURE
1                        6
2                       14
The following SQL gets me tantalizingly close:
SELECT DAY_ID,
  MAX(PURCHASE_PRICE)
  KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
  OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
  FROM SALES
ORDER BY DAY_ID
DAY_ID                 MY_MEASURE
1                      2
1                      2
1                      4
1                      4
2                      6
2                      6
2                      8
2                      8
But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytic functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
I am using a reporting tool, so unfortunately using things like inline views is not an option. I need to be able to define "MY_MEASURE" as something the query tool can apply the SUM() function to in its generated SQL.
(Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
I humbly solicit your collective wisdom, oh forum.

Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function.
SELECT  DAY_ID,
        PRODUCT_ID,
        MAX(PURCHASE_PRICE) MAX_PRICE
  FROM  SALES
  GROUP BY DAY_ID,
           PRODUCT_ID
  ORDER BY DAY_ID,
           PRODUCT_ID
DAY_ID                 PRODUCT_ID             MAX_PRICE             
1                      1                      2                     
1                      2                      4                     
2                      1                      6                     
2                      2                      8
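For readers following along outside the reporting tool, the desired numbers can be reproduced with an inline view (the very approach the original poster ruled out, so this is a sketch of the target result, not a fix for the tool constraint). SQLite stands in for Oracle here, using the sample data from the thread:

```python
import sqlite3

# Build the sample SALES table from the thread in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales "
    "(day_id INT, product_id INT, purchaser_id INT, purchase_price REAL)"
)
rows = [(1, 1, 1, 1.0), (1, 1, 1, 2.0), (1, 2, 1, 3.0), (1, 2, 1, 4.0),
        (2, 1, 1, 5.0), (2, 1, 1, 6.0), (2, 2, 1, 7.0), (2, 2, 1, 8.0)]
conn.executemany("INSERT INTO sales VALUES (?,?,?,?)", rows)

# Inner query: max price per (day, product). Outer query: sum those
# maxima per day, giving one row per DAY_ID.
result = conn.execute("""
    SELECT day_id, SUM(max_price) AS my_measure
    FROM (SELECT day_id, product_id, MAX(purchase_price) AS max_price
          FROM sales
          GROUP BY day_id, product_id)
    GROUP BY day_id
    ORDER BY day_id
""").fetchall()
print(result)  # [(1, 6.0), (2, 14.0)]
```

This matches the desired result set (day 1: $6, day 2: $14); the open question in the thread remains how to express MY_MEASURE without the inline view.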

Similar Messages

  • Analytic function with GROUP BY

    Hi:
    using the query below I am getting the following error --> 3:19:15 PM ORA-00979: not a GROUP BY expression
    SELECT a.proj_title_ds, b.prgm_sers_title_nm,
    SUM(c.PRGM_TOT_EXP_AMT) OVER(PARTITION BY c.prgm_id) AS "Total $ Spend1"
    FROM iPlanrpt.VM_RPT_PROJECT a INNER JOIN iPlanrpt.VM_RPT_PRGM_SERS b
    ON a.proj_id = b.proj_id INNER JOIN iPlanrpt.VM_RPT_PRGM c
    ON b.prgm_sers_id = c.prgm_sers_id
    WHERE a.proj_id IN (1209624,1209623,1209625, 1211122,1211123)
    AND c.PRGM_STATE_ID in (6,7)
    GROUP BY a.proj_title_ds, b.prgm_sers_title_nm
Any suggestions to get the desired result (the sum of c.PRGM_TOT_EXP_AMT for each distinct c.prgm_id within the specified GROUP BY) would be helpful.

    @OP,
Please mark the other duplicate thread as complete or duplicate. I responded to the other thread and asked for sample data.
With the sample included here, would the following work for you?
    SELECT a.proj_title_ds,
           b.prgm_sers_title_nm,
           SUM (c.prgm_tot_exp_amt) AS "Total $ Spend1"
    FROM       iplanrpt.vm_rpt_project a
            INNER JOIN
               iplanrpt.vm_rpt_prgm_sers b
            ON a.proj_id = b.proj_id
         INNER JOIN
(select distinct prgm_id, prgm_sers_id, prgm_state_id, prgm_tot_exp_amt from iplanrpt.vm_rpt_prgm) c
         ON b.prgm_sers_id = c.prgm_sers_id
    WHERE a.proj_id IN (1209624, 1209623, 1209625, 1211122, 1211123)
          AND c.prgm_state_id IN (6, 7)
    GROUP BY a.proj_title_ds, b.prgm_sers_title_nm
;
vr,
    Sudhakar B.

  • Aggregate fuction with group by clause

    Hello,
The following assignment was given to me, but I don't get the correct output, so I request you all to help me write code to solve my problem.
    There can be multiple records for one customer in VBAK tables with different combinations.
    Considering that we do not need details of each sales order,
    use Aggregate functions with GROUP BY clause in SELECT to read the fields.
    <garbled code removed>
    Moderator Message: Please paste the relevant portions of the code
    Edited by: Suhas Saha on Nov 18, 2011 1:48 PM

So if you do not want all the repeated records, select all the values into an internal table, declare another internal table of the same type, and use COLLECT.
For example:
DATA: itab1   TYPE STANDARD TABLE OF <xxxx>,
      wa_itab LIKE LINE OF itab1,
      itab2   LIKE itab1. "<- must be the same type as above
SELECT * FROM ..... INTO TABLE itab1.
And now:
LOOP AT itab1 INTO wa_itab.
  COLLECT wa_itab INTO itab2.
ENDLOOP.
Then you will get your desired result.
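COLLECT works by merging each work area into an existing row of the target table that has the same (non-numeric) key fields, summing the numeric fields. A rough Python sketch of that semantics, with hypothetical field names (`kunnr`, `netwr`), not actual VBAK columns:

```python
# Rough sketch of ABAP COLLECT semantics: rows that share the key fields
# are merged into one row, with the numeric fields accumulated.
def collect(rows, key_fields, sum_fields):
    out = {}
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        if key in out:
            for f in sum_fields:
                out[key][f] += row[f]   # numeric fields are summed
        else:
            out[key] = dict(row)        # first row for this key
    return list(out.values())

itab1 = [
    {"kunnr": "C1", "netwr": 100.0},
    {"kunnr": "C1", "netwr": 50.0},
    {"kunnr": "C2", "netwr": 75.0},
]
itab2 = collect(itab1, key_fields=["kunnr"], sum_fields=["netwr"])
print(itab2)  # [{'kunnr': 'C1', 'netwr': 150.0}, {'kunnr': 'C2', 'netwr': 75.0}]
```

This is the same aggregation a GROUP BY with SUM would do in the SELECT itself.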

  • Analytic function for grouping?

    Hello @all
    10gR2
    Is it possible to use an analytic function for grouping following (example-)query:
    SELECT job, ename, sal,
      ROW_NUMBER() OVER(PARTITION BY job ORDER BY empno) AS no,
      RANK() OVER(PARTITION BY job ORDER BY NULL) AS JobNo
  FROM emp;
The output is the following:
    JOB     ENAME     SAL     NO     JOBNO
    ANALYST     SCOTT     3000     1     1
    ANALYST     FORD     3000     2     1
    CLERK     SMITH     818     1     1
    CLERK     ADAMS     1100     2     1
    CLERK     JAMES     950     3     1
    CLERK     MILLER     1300     4     1
    MANAGER     Müller     1000     1     1
    MANAGER     JONES     2975     2     1
...
The JOBNO should increase per group of JOB; my desired output should look like this:
    JOB     ENAME     SAL     NO     JOBNO
    ANALYST     SCOTT     3000     1     1
    ANALYST     FORD     3000     2     1
    CLERK     SMITH     818     1     2
    CLERK     ADAMS     1100     2     2
    CLERK     JAMES     950     3     2
    CLERK     MILLER     1300     4     2
    MANAGER     Müller     1000     1     3
    MANAGER     JONES     2975     2     3
    MANAGER     BLAKE     2850     3     3
    MANAGER     CLARK     2450     4     3
    PRESIDENT     KING     5000     1     4
    SALESMAN     ALLEN     1600     1     5
    SALESMAN     WARD     1250     2     5
    SALESMAN     MARTIN     1250     3     5
SALESMAN     TURNER     1500     4     5
How can I achieve this?

    This, perhaps?
    with emp as (select 1 empno, 'ANALYST' job, 'SCOTT' ename, 3000 sal from dual union all
                 select 2 empno, 'ANALYST' job, 'FORD' ename, 3000 sal from dual union all
                 select 3 empno, 'CLERK' job, 'SMITH' ename, 818 sal from dual union all
                 select 4 empno, 'CLERK' job, 'ADAMS' ename, 1100 sal from dual union all
                 select 5 empno, 'CLERK' job, 'JAMES' ename, 950 sal from dual union all
                 select 6 empno, 'CLERK' job, 'MILLER' ename, 1300 sal from dual union all
                 select 7 empno, 'MANAGER' job, 'Müller' ename, 1000 sal from dual union all
                 select 8 empno, 'MANAGER' job, 'JONES' ename, 2975 sal from dual union all
                 select 9 empno, 'MANAGER' job, 'BLAKE' ename, 2850 sal from dual union all
                 select 10 empno, 'MANAGER' job, 'CLARK' ename, 2450 sal from dual union all
                 select 11 empno, 'PRESIDENT' job, 'KING' ename, 5000 sal from dual union all
                 select 12 empno, 'SALESMAN' job, 'ALLEN' ename, 1600 sal from dual union all
                 select 13 empno, 'SALESMAN' job, 'WARD' ename, 1250 sal from dual union all
                 select 14 empno, 'SALESMAN' job, 'MARTIN' ename, 1250 sal from dual union all
                 select 15 empno, 'SALESMAN' job, 'TURNER' ename, 1500 sal from dual)
    select job, ename, sal,
           row_number() over(partition by job order by empno) no,
           dense_rank() over(order by job) jobno
    from   emp
    JOB     ENAME     SAL     NO     JOBNO
    ANALYST     SCOTT     3000     1     1
    ANALYST     FORD     3000     2     1
    CLERK     SMITH     818     1     2
    CLERK     ADAMS     1100     2     2
    CLERK     JAMES     950     3     2
    CLERK     MILLER     1300     4     2
    MANAGER     Müller     1000     1     3
    MANAGER     JONES     2975     2     3
    MANAGER     BLAKE     2850     3     3
    MANAGER     CLARK     2450     4     3
    PRESIDENT     KING     5000     1     4
    SALESMAN     ALLEN     1600     1     5
    SALESMAN     WARD     1250     2     5
    SALESMAN     MARTIN     1250     3     5
    SALESMAN     TURNER     1500     4     5

  • Case function with group by

    Hi,
      I am having a scenario like :
    Column 1:  BrokerList(dimension1)
    Column 2 : Broker(dimension2)
    Column 3 : Metric value(measure)
so I have a CASE WHEN (dimension 3) Custodian = 'ss' THEN SUM(metric), grouped by dimension1, dimension2, but the result value is not matching:
BrokerList     Broker     Metric
a1             a          10
               b          20
               c          30
a1: total                 60
a2             a          50
               c          60
               d          10
a2: total                 120
Grand total               180
Here the metric is based on another case condition, so the total value is not matching. Is there any other way to do a case function with group by functions? Please advise.
    regards,
    Guru

    Use filter on metric by ss value and then go for group by from Criteria
    something like
    sum(FILTER(metric USING (Custodian = 'ss')) by dimension1,dimension2)
    mark if helps
    ~ http://cool-bi.com

  • Return multiple columns from an analytic function with a window

    Hello,
    Is it possible to obtain multiple columns from an analytic function with a window?
    I have a table with 4 columns, an id, a test id, a date, and the result of a test. I'm using an analytic function to obtain, for each row, the current test value, and the maximum test value in the next 2 days like so:
    select
    id,
    test_id,
    date,
    result,
    MAX ( result ) over ( partition BY id, test_id order by date RANGE BETWEEN CURRENT ROW AND INTERVAL '2' DAY FOLLOWING ) AS max_result_next_two_day
    from table
    This is working fine, but I would like to also obtain the date when the max result occurs. I can see that this would be possible using a self join, but I'd like to know if there is a better way? I cannot use the FIRST_VALUE aggregate function and order by result, because the window function needs to be ordered by the date.
    It would be a great help if you could provide any pointers/suggestions.
    Thanks,
    Dan
    http://danieljamesscott.org

Assuming RESULT is a positive integer with a maximum width of, say, 10,
and assuming the date has no time component:
    select
       id
      ,test_id
      ,date
      ,result
      ,to_number(substr(max_result_with_date,1,10)) as max_result_next_two_day
      ,to_date(substr(max_result_with_date,11),'YYYYMMDD') as date_where_max_result_occurs
    from (select
            id
           ,test_id
           ,date
           ,result
           ,MAX(lpad(to_char(result),10,'0')||to_char(date,'YYYYMMDD'))
               over (partition BY id, test_id
                     order by date
                     RANGE BETWEEN CURRENT ROW AND INTERVAL '2' DAY FOLLOWING )
            AS max_result_with_date
          from table)
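The zero-padding trick above works because, for non-negative values of fixed width, string comparison of the padded numbers agrees with numeric comparison, so the date "rides along" with the maximum and can be decoded afterwards. A small Python check of just that string property (no database involved, sample values are made up):

```python
from datetime import date

# Encode (result, date) as a fixed-width string: a 10-digit zero-padded
# result followed by YYYYMMDD. max() over these strings picks the row
# with the highest result; the winning date decodes out of the same string.
rows = [(7, date(2024, 1, 3)), (42, date(2024, 1, 1)), (19, date(2024, 1, 2))]
encoded = [f"{r:010d}{d:%Y%m%d}" for r, d in rows]

best = max(encoded)
max_result = int(best[:10])
max_date = date(int(best[10:14]), int(best[14:16]), int(best[16:18]))
print(max_result, max_date)  # 42 2024-01-01
```

This mirrors what the MAX(lpad(...)||to_char(date,...)) expression does inside the analytic window.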

  • In need help: Analytic Report with Group by

    Good morning,
I am trying to create a report with subtotals and a grand total, which of course calls for a GROUP BY clause with ROLLUP, CUBE, GROUPING, etc. I'd like to use ROLLUP, but then some columns in the SELECT list have to be put into the GROUP BY clause that are not supposed to be there. So I had to wrap those columns in one of the SUM, AVG, MIN, or MAX functions to make them aggregated, which is wrong.
Another alternative I tried is to use CUBE and GROUPING_ID as the filter. However, that is still very cumbersome and error-prone, and the display order is completely out of control.
I am trying hard to stick with the first option of using ROLLUP, since the result is very close to what I want, while avoiding the aggregation functions. For example, if I want to display column A, which should not be grouped, what can I do other than using those aggregation functions?
    Thanks in advance.

    Luc,
    this is a simple and a good reference for analytic functions:
    http://www.orafaq.com/node/55
It takes some time to understand how they work and how to utilize them, but I have solved some reporting issues using them, avoiding the overkill of aggregates.
    Denes Kubicek

  • Incorrect warning when parsing query with group by clause

    I am using SQL Developer 4.0.0.13.80 with JDK 1.7.0_51 x64 on Windows 8.1
    I have the following SQL, which works perfectly:
    select substr(to_char(tot_amount), 0, 1) as digit, count(*)
    from transactions
    where tot_amount <> 0
    group by substr(to_char(tot_amount), 0, 1)
    order by digit;
    However, SQL Developer is yellow-underlining the first line, telling me that:
    SELECT list is inconsistent with GROUP BY; amend GROUP BY clause to: substr(to_char(rep_tot_amount), 0, 1), substr(to_char(rep_tot_amount),
    which is clearly wrong.
    Message was edited by: JamHan
    Added code formatting.

    Hello,
I have also found the same issue with the GROUP BY hint. Another problem I found is that the hint suggests amending the GROUP BY members to also add constant values that are included in the SELECTed columns, and whenever those constants include strings with spaces, it generates an invalid query. Normally, constant values do not affect grouping functions and as such can be omitted from the GROUP BY members, but it seems SQL Dev thinks differently.
    For example, if you try the following query:
    SELECT d.DNAME, sum(e.sal) amt, 'Total salary' report_title, 100 report_nr
    FROM scott.emp e, scott.dept d
    WHERE e.DEPTNO = d.DEPTNO
    GROUP BY d.DNAME;
    when you hover the mouse pointer on the yellow hint, a popup will show the following message:
    SELECT list inconsistent with GROUP BY; amend GROUP BY clause to:
    d.DNAME, 'Total, 100
    If you click on the hint, it will amend the group by members to become:
    GROUP BY d.DNAME, 'Total, 100;
    that is clearly incorrect syntax. You may notice that after the change the yellow hint is still there, suggesting to amend further the GROUP BY members. If you click again on the hint, you will end with the following:
    GROUP BY d.DNAME, 'Total, 100;
    , 'Total, 100
    and so on.
    I am not sure if this behaviour was already known (Vadim??), but it would be nice if somebody could file a bug against it.
    Finally when writing big queries with complex functions and constant columns, those yellow lines extend all over the select list and they are visually annoying, so I wonder if there is a way to disable the GROUP BY hint until it gets fixed.
    Thanks for any suggestion,
    Paolo

  • Custom IKM Knowledge Modules are not working with Group By Clause

    Hi All,
I am facing an issue with custom IKM knowledge modules: IKM Sql Incremental Update and IKM Sql Control Append.
    My Scenario is
    1. Created an interface with table on source and temporary datastore with some columns in target.
2. In the interface, on the target, I defined one column as UD1 and another as UD2, for which GROUP BY is to be implemented.
3. Customized IKM Sql Incremental Update by modifying the detail step "Insert flow into I$ table", i.e., wherever I found the <%=odiRef.getGrpBy()%> API I replaced it with:
Group By
<%=snpRef.getColList("","[EXPRESSION]","","","UD1")%>,
<%=snpRef.getColList("","[EXPRESSION]","","","UD2")%>
up to UD5. Here, in place of [EXPRESSION], I passed the column name for UD1, and similarly another column name for the [EXPRESSION] of UD2.
    4.Made all the proper mappings and also selected the KM's needed for that interface like CKM Sql ,IKM Sql Incremental Update.
    5. Executed the Interface with global context.
    Error i am getting in this case is :
    Caused By: java.sql.SQLSyntaxErrorException: ORA-00979: not a GROUP BY expression
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
      at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)
      at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
      at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
      at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)
      at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
      at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1115)
    Here the columns in the select clause are there in Group By also.
I did the same scenario using IKM Sql to File Append; in that case I am able to do it, but not with the above-mentioned KMs. Please let me know if anyone has tried this, as it is high priority for me.
    Please help me out.
    Thanks,
    keerthi

    Hi Keerthi,
If you are transferring data from Oracle to Oracle (I mean, the source is an Oracle DB and the target is also an Oracle DB), then use the KMs below, and the GROUP BY will come automatically based on the key values you selected on the interface target datastore:
    1) CKM Oracle
    2) LKM Oracle to Oracle (DBLINK)
    3) IKM Oracle Incremental Update (MERGE)
Hope this helps to resolve your issue.
    Regards,
    Phanikanth

  • How to use analytic function with aggregate function

    hello
Can we use an analytic function and an aggregate function in the same query? I tried to find an example on the Net but did not get any example of how these two functions work together. Please share any link or example with me.
    Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM

    select
    t1.region_name,
    t2.division_name,
    t3.month,
    t3.amount mthly_sales,
    max(t3.amount) over (partition by t1.region_name, t2.division_name)
    max_mthly_sales
    from
    region t1,
    division t2,
    sales t3
    where
    t1.region_id=t3.region_id
    and
    t2.division_id=t3.division_id
    and
t3.year=2004;
Source: http://www.orafusion.com/art_anlytc.htm
Here MAX (aggregate) and OVER (PARTITION BY ...) (analytic) are in the same query. So we can use aggregate and analytic functions in the same query, and more than one analytic function in the same query as well.
    Hth
    Girish Sharma

  • Default Sorting behaviour of Oracle 9i in 11g along with group by clause

    Hi,
We have recently migrated from 9i to 11g. The reports from the application come out in a jumbled order. Later we understood that when there is a GROUP BY clause in the query, the record set was sorted by default in 9i, and this behavior is not available in 11g. Has anyone faced the same issue and resolved it at the DB level?
The only alternative we found is a code change adding an ORDER BY clause, which will take a long time to complete and roll out.
If anyone has an immediate solution, please let me know.
    Thx in advance.
    Sheen

    Hi,
A GROUP BY can sort (depending on the grouping method), but it is not guaranteed to. If you want sorted output you need the ORDER BY clause. There are different GROUP BY mechanisms between 9i and 11g: 10g introduced HASH GROUP BY, whereas in 9i only SORT GROUP BY existed. The latter gives a sorted set; the former does not.
If you want the old behaviour you can set the "_gby_hash_aggregation_enabled" parameter to false, which disables the hash group by.
Have a look also at the support document "'Group By' Does Not Guarantee a Sort Without Order By Clause In 10g and Above [ID 345048.1]".
    Herald ten Dam
    http://htendam.wordpress.com

  • Strange behavior in inner query with group by clause

    Hi All,
I found very strange behaviour with an inner query having a GROUP BY clause.
Please look at the sample code:
Select b,c,qty from (select a,b,c,sum(d) qty from tab_xyz group by b,c);
This query gives output summed b,c-wise. But when I run the inner query alone, it gives the error "not a GROUP BY expression".
My question is: though the inner query is wrong, how is it possible that the full query gives output?
It behaves like:
Select b,c,qty from (select b,c,sum(d) qty from tab_xyz group by b,c);
Is this normal behaviour?
If it is, how does GROUP BY behave inside an inner query?
Thanks!!
    Thanks !!

I have tested this case already, and it throws an error. But why does the initially posted query not throw an error even though the inner query is wrong? How does Oracle behave for the initially posted query?
What is the scenario that throws the error? Is it only when you run the inner query by itself, or the whole query too?

  • Reg - Search Form for a VO with group by clause

    Hi,
    I have a Bar graph that displays data based on the Query below.
    SELECT STATUS.STATUS_ID AS STATUSID,STATUS.STATUS,COUNT(SR.SERVICEREQUEST_ID) AS SRCOUNT
    FROM SERVICE_REQUEST SR ,SERVICEREQUESTSTATUS STATUS
    WHERE SR.STATUS_ID = STATUS.STATUS_ID
    GROUP BY STATUS.STATUS_ID,STATUS.STATUS,SR.STATUS_ID
    It displays the count of SRs against a particular status.
    Now I need to add a search form to this graph with customer and date range.
    So we need to add the line below to the where clause.
    "SR.CUSTOMER_ID = :customerId AND SR.REQUESTED_ON BETWEEN :fromDate and :toDate"
But the columns SR.CUSTOMER_ID and SR.REQUESTED_ON would also need to be added to the SELECT clause to create the view criteria for the search panel, and then to the GROUP BY clause as well, which would not produce the expected results.
How do I create a search form with the criteria applied only in the WHERE clause? Please help.
    With Regards,
    Guna

The ViewObjectImpl class (http://docs.oracle.com/cd/E16162_01/apirefs.1112/e17483/oracle/jbo/server/ViewObjectImpl.html) has methods for doing this programmatically (setQuery, defineNamedWhereClauseParam, setNamedWhereClauseParam) that you can use to manipulate the query, the bind variables expected, and the values for the binds.
    John

  • Aggregate functions without group by clause

    hi friends,
I was asked an interesting question by a friend:
There is a DEPT table with dept_no and dept_name, and an EMP table with emp_no, emp_name, and dept_no.
The requirement is to get the dept_no, dept_name, and the number of employees in each department, without using a GROUP BY clause.
    Can anyone of you help me to get a solution for this?

    select distinct emp.deptno,dname
    ,count(*) over(partition by emp.deptno)
    from emp
    ,dept
    where emp.deptno=dept.deptno;
    10     ACCOUNTING     3
    20     RESEARCH     5
    30     SALES     6
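The SELECT DISTINCT plus COUNT(*) OVER (PARTITION BY ...) trick above can be checked outside Oracle as well; a minimal sketch using Python's sqlite3 (window functions need SQLite 3.25+, bundled with recent Pythons), with made-up emp/dept rows rather than the real SCOTT data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (deptno INT, dname TEXT);
    CREATE TABLE emp  (empno INT, deptno INT);
    INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
    INSERT INTO emp  VALUES (1, 10), (2, 10), (3, 20);
""")

# COUNT(*) OVER (PARTITION BY deptno) attaches the department headcount
# to every joined row; DISTINCT then collapses the duplicate rows,
# so no GROUP BY is needed.
result = conn.execute("""
    SELECT DISTINCT emp.deptno, dname,
           COUNT(*) OVER (PARTITION BY emp.deptno) AS emp_count
    FROM emp JOIN dept ON emp.deptno = dept.deptno
    ORDER BY 1
""").fetchall()
print(result)  # [(10, 'ACCOUNTING', 2), (20, 'RESEARCH', 1)]
```

The analytic count runs before DISTINCT, which is why each surviving row still carries the full per-department count.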

  • Count(*) in a analytic situation with group by order by

Hello everybody,
I have a COUNT(*) problem in an SQL statement with analytic functions, when I want all of the table's columns in the result.
Say I have a table MYTABLE1:
CREATE TABLE MYTABLE1 (
  MY_TIME TIMESTAMP(3),
  PRICE NUMBER,
  VOLUME NUMBER
);
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.01.664','DD-MM-YY HH24:MI:SS:FF3' ),49.55,704492 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.02.570','DD-MM-YY HH24:MI:SS:FF3' ),49.55,705136 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,707313 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,706592 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.30.695','DD-MM-YY HH24:MI:SS:FF3' ),49.55,705581 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,707985 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.820','DD-MM-YY HH24:MI:SS:FF3' ),49.56,708494 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.32.258','DD-MM-YY HH24:MI:SS:FF3' ),49.57,708955 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.36.180','DD-MM-YY HH24:MI:SS:FF3' ),49.58,709519 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,710502 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,710102 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,709962 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.52.399','DD-MM-YY HH24:MI:SS:FF3' ),49.59,711427 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.977','DD-MM-YY HH24:MI:SS:FF3' ),49.6,710902 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.50.492','DD-MM-YY HH24:MI:SS:FF3' ),49.6,711379 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.45.550','DD-MM-YY HH24:MI:SS:FF3' ),49.6,711302 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.50.492','DD-MM-YY HH24:MI:SS:FF3' ),49.62,711417 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.57.790','DD-MM-YY HH24:MI:SS:FF3' ),49.49,715587 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.47.712','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715166 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.57.790','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715469 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.821','DD-MM-YY HH24:MI:SS:FF3' ),49.53,714833 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.821','DD-MM-YY HH24:MI:SS:FF3' ),49.53,714914 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.493','DD-MM-YY HH24:MI:SS:FF3' ),49.54,714136 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.19.977','DD-MM-YY HH24:MI:SS:FF3' ),49.55,713387 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.19.977','DD-MM-YY HH24:MI:SS:FF3' ),49.55,713562 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712172 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.09.274','DD-MM-YY HH24:MI:SS:FF3' ),49.59,713287 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.09.117','DD-MM-YY HH24:MI:SS:FF3' ),49.59,713206 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712984 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.836','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712997 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712185 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712261 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.32.244','DD-MM-YY HH24:MI:SS:FF3' ),49.46,725577 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724664 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,723366 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725242 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725477 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.947','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724521 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,723943 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724086 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.34.103','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725609 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.5,720166 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.5,720066 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,718524 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.696','DD-MM-YY HH24:MI:SS:FF3' ),49.5,722086 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,718092 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715673 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.51,719666 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.12.555','DD-MM-YY HH24:MI:SS:FF3' ),49.52,719384 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.28.963','DD-MM-YY HH24:MI:SS:FF3' ),49.48,728830 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.11.884','DD-MM-YY HH24:MI:SS:FF3' ),49.48,726609 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.28.963','DD-MM-YY HH24:MI:SS:FF3' ),49.48,728943 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.45.947','DD-MM-YY HH24:MI:SS:FF3' ),49.49,729627 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.12.259','DD-MM-YY HH24:MI:SS:FF3' ),49.49,726830 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.46.494','DD-MM-YY HH24:MI:SS:FF3' ),49.49,733653 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.46.510','DD-MM-YY HH24:MI:SS:FF3' ),49.49,733772 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.12.259','DD-MM-YY HH24:MI:SS:FF3' ),49.49,727830 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.59.119','DD-MM-YY HH24:MI:SS:FF3' ),49.5,735772 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.47.369','DD-MM-YY HH24:MI:SS:FF3' ),49.5,734772 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.20.463','DD-MM-YY HH24:MI:SS:FF3' ),49.48,740621 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.12.369','DD-MM-YY HH24:MI:SS:FF3' ),49.48,740538 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.20.463','DD-MM-YY HH24:MI:SS:FF3' ),49.48,741021 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.10.588','DD-MM-YY HH24:MI:SS:FF3' ),49.49,740138 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738320 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737122 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,736424 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.260','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737598 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.744','DD-MM-YY HH24:MI:SS:FF3' ),49.49,739360 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,736924 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.260','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737784 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738145 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.744','DD-MM-YY HH24:MI:SS:FF3' ),49.49,739134 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738831 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.215','DD-MM-YY HH24:MI:SS:FF3' ),49.5,742421 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.580','DD-MM-YY HH24:MI:SS:FF3' ),49.5,741777 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.215','DD-MM-YY HH24:MI:SS:FF3' ),49.5,742021 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.48.433','DD-MM-YY HH24:MI:SS:FF3' ),49.5,741091 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.840','DD-MM-YY HH24:MI:SS:FF3' ),49.51,743021 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.57.511','DD-MM-YY HH24:MI:SS:FF3' ),49.52,743497 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.270','DD-MM-YY HH24:MI:SS:FF3' ),49.52,744021 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.17.699','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750292 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.433','DD-MM-YY HH24:MI:SS:FF3' ),49.53,747382 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.17.699','DD-MM-YY HH24:MI:SS:FF3' ),49.53,749939 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.15.152','DD-MM-YY HH24:MI:SS:FF3' ),49.53,749414 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.433','DD-MM-YY HH24:MI:SS:FF3' ),49.53,744882 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.08.110','DD-MM-YY HH24:MI:SS:FF3' ),49.54,749262 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.01.168','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748418 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.01.152','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748243 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.07.293','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748862 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.09.433','DD-MM-YY HH24:MI:SS:FF3' ),49.51,750414 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.262','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750930 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.887','DD-MM-YY HH24:MI:SS:FF3' ),49.53,751986 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.887','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750986 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.30.997','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753900 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.30.887','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753222 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.29.809','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753022 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.29.809','DD-MM-YY HH24:MI:SS:FF3' ),49.55,752847 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.42.622','DD-MM-YY HH24:MI:SS:FF3' ),49.56,755385 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.31.120','DD-MM-YY HH24:MI:SS:FF3' ),49.56,754385 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.49.590','DD-MM-YY HH24:MI:SS:FF3' ),49.6,759087 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.50.341','DD-MM-YY HH24:MI:SS:FF3' ),49.6,759217 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.49.590','DD-MM-YY HH24:MI:SS:FF3' ),49.6,758701 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.57.262','DD-MM-YY HH24:MI:SS:FF3' ),49.6,761049 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.48.637','DD-MM-YY HH24:MI:SS:FF3' ),49.6,757827 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.48.120','DD-MM-YY HH24:MI:SS:FF3' ),49.6,757385 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.466','DD-MM-YY HH24:MI:SS:FF3' ),49.62,761001 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,760109 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,759617 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.278','DD-MM-YY HH24:MI:SS:FF3' ),49.62,760265 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,759954 );
    So if I run:
    SELECT DISTINCT row_number() over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) num,
    MIN(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) low ,
    MAX(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) high ,
    -- sum(volume) over( partition by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 order by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 asc ) volume,
    TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 TIME ,
    price ,
    COUNT( *) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ,price ASC,volume ASC ) TRADE,
    first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC,volume ASC ) OPEN ,
    first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 DESC,volume DESC) CLOSE ,
    lag(price) over ( order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) lag_all
    FROM mytable1
    WHERE my_time > to_timestamp('04032008:09:00:00','DDMMYYYY:HH24:MI:SS')
    AND my_time < to_timestamp('04032008:09:01:00','DDMMYYYY:HH24:MI:SS')
    GROUP BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ,
    price ,
    volume
    ORDER BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60,
    price ,
    num;
    I get:
    NUM|LOW|HIGH|TIME|PRICE|TRADE|OPEN|CLOSE|LAG_ALL
    1|49.55|49.62|04/03/2008 09:00:00|49.55|1|49.55|49.59|
    2|49.55|49.62|04/03/2008 09:00:00|49.55|2|49.55|49.59|49.55
    3|49.55|49.62|04/03/2008 09:00:00|49.55|3|49.55|49.59|49.55
    4|49.55|49.62|04/03/2008 09:00:00|49.55|4|49.55|49.59|49.55
    5|49.55|49.62|04/03/2008 09:00:00|49.55|5|49.55|49.59|49.55
    6|49.55|49.62|04/03/2008 09:00:00|49.55|6|49.55|49.59|49.55
    7|49.55|49.62|04/03/2008 09:00:00|49.56|7|49.55|49.59|49.55
    8|49.55|49.62|04/03/2008 09:00:00|49.57|8|49.55|49.59|49.56
    9|49.55|49.62|04/03/2008 09:00:00|49.58|9|49.55|49.59|49.57
    10|49.55|49.62|04/03/2008 09:00:00|49.59|10|49.55|49.59|49.58
    11|49.55|49.62|04/03/2008 09:00:00|49.59|11|49.55|49.59|49.59
    12|49.55|49.62|04/03/2008 09:00:00|49.59|12|49.55|49.59|49.59
    13|49.55|49.62|04/03/2008 09:00:00|49.59|13|49.55|49.59|49.59
    14|49.55|49.62|04/03/2008 09:00:00|49.6|14|49.55|49.59|49.59
    15|49.55|49.62|04/03/2008 09:00:00|49.6|15|49.55|49.59|49.6
    16|49.55|49.62|04/03/2008 09:00:00|49.6|16|49.55|49.59|49.6
    17|49.55|49.62|04/03/2008 09:00:00|49.62|17|49.55|49.59|49.6
    Which is erroneous, because if I don't include the volume column in the query I get a different result:
    SELECT DISTINCT row_number() over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) num,
    MIN(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) low ,
    MAX(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) high ,
    -- sum(volume) over( partition by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 order by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 asc ) volume,
    TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 TIME ,
    price ,
    COUNT( *) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ,price ASC ) TRADE,
    first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) OPEN ,
    first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 DESC) CLOSE ,
    lag(price) over ( order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) lag_all
    FROM mytable1
    WHERE my_time > to_timestamp('04032008:09:00:00','DDMMYYYY:HH24:MI:SS')
    AND my_time < to_timestamp('04032008:09:01:00','DDMMYYYY:HH24:MI:SS')
    GROUP BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ,
    price
    ORDER BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60,
    price ,
    num;
    I get this
    NUM|LOW|HIGH|TIME|PRICE|TRADE|OPEN|CLOSE|LAG_ALL
    1|49.55|49.62|04/03/2008 09:00:00|49.55|1|49.55|49.55|
    2|49.55|49.62|04/03/2008 09:00:00|49.56|2|49.55|49.55|49.55
    3|49.55|49.62|04/03/2008 09:00:00|49.57|3|49.55|49.55|49.56
    4|49.55|49.62|04/03/2008 09:00:00|49.58|4|49.55|49.55|49.57
    5|49.55|49.62|04/03/2008 09:00:00|49.59|5|49.55|49.55|49.58
    6|49.55|49.62|04/03/2008 09:00:00|49.6|6|49.55|49.55|49.59
    7|49.55|49.62|04/03/2008 09:00:00|49.62|7|49.55|49.55|49.6
    How can I get the right count while still including all of the table's columns?
    Babata

    I'm not sure what you consider the "right count" to be, but I think the DISTINCT keyword is hiding the problems in your query. It is probably also the reason the two queries return a different number of rows: with the volume column present, rows that share the same minute and price are no longer duplicates, so DISTINCT cannot collapse them.
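    If what you actually want is one row per minute with open/high/low/close and a trade count, one option is to drop DISTINCT and the analytic functions entirely and aggregate with GROUP BY, using FIRST/LAST with KEEP (DENSE_RANK ...) to pick the open and close prices. A sketch of that approach (untested against your data; note that TRUNC(my_time, 'mi') is used here as a simpler minute bucket than your TRUNC(my_time, 'hh24') + minutes/24/60 expression, and ties are broken by volume to match the ordering in your first query):

```sql
SELECT TRUNC(my_time, 'mi')                                          AS minute_bucket,
       MIN(price)                                                    AS low,
       MAX(price)                                                    AS high,
       -- price of the earliest trade in the minute (tie-break on volume)
       MIN(price) KEEP (DENSE_RANK FIRST ORDER BY my_time, volume)   AS open,
       -- price of the latest trade in the minute (tie-break on volume)
       MAX(price) KEEP (DENSE_RANK LAST  ORDER BY my_time, volume)   AS close,
       COUNT(*)                                                      AS trades,
       SUM(volume)                                                   AS total_volume
FROM   mytable1
GROUP  BY TRUNC(my_time, 'mi')
ORDER  BY minute_bucket;
```

    Because this is a plain GROUP BY, every row of the table is counted exactly once per minute, so the trade count stays correct no matter how many columns you add to the aggregates.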
