Using aggregate functions without GROUP BY

Hi,
I have a query which is:
select empno, deptno, count(*) from emp group by empno, deptno;
Is there anything that will help me return empno and deptno without using a GROUP BY clause?
I would appreciate your help on the above.
Thanks & Regards
Thakur Manoj R

This will give the same result:
select empno, deptno, count(*) over (partition by empno, deptno) from emp;
If you want to see the number of employees in the same department you could use:
select empno, deptno, count(*) over (partition by deptno) from emp;
But what is your intention? What is wrong with "group by"?
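Note that the analytic form repeats the count on every row of the partition; to get exactly one row per (empno, deptno) pair, like the GROUP BY version, you can add DISTINCT. A minimal sketch, assuming the standard EMP table:
-- one row per (empno, deptno), same output shape as the GROUP BY query
select distinct empno, deptno,
       count(*) over (partition by empno, deptno) as cnt
from   emp;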
Edited by: hm on 27.01.2011 00:10

Similar Messages

  • Nested group function without GROUP BY when using xmlagg

    I am getting "nested group function without GROUP BY" when using the xmlagg function inside another xmlagg function. Find the table structure and sample data here:
    CREATE TABLE "TEST_TABLE"
       ("KEY" NUMBER(20,0),
        "NAME" VARCHAR2(50),
        "DESCRIPTION" VARCHAR2(100)
       );
       Insert into TEST_TABLE (KEY,NAME,DESCRIPTION) values (1,'sam','desc1');
       Insert into TEST_TABLE (KEY,NAME,DESCRIPTION) values (2,'max','desc2');
       Insert into TEST_TABLE (KEY,NAME,DESCRIPTION) values (3,'peter',null);
       Insert into TEST_TABLE (KEY,NAME,DESCRIPTION) values (4,'andrew',null);
    select
      XMLSerialize(document
        xmlelement("root",
          xmlagg(
            xmlelement("emp"
            , xmlforest(key as "ID")
            , xmlforest(name as "ename")
            , xmlelement("Descriptions",
                xmlagg(
                  xmlforest(description as "Desc")
                )
              )
            )
          )
        ) as clob indent
      ) as t
    from test_table;
    Then I removed the inner xmlagg function from the above select query and used xmlelement directly instead:
    select
      XMLSerialize(document
        xmlelement("root",
          xmlagg(
            xmlelement("emp"
            , xmlforest(key as "ID")
            , xmlforest(name as "ename")
            , xmlelement("Descriptions",
                xmlforest(description as "Desc")
              )
            )
          )
        ) as clob indent
      ) as t
    from test_table;
    This works fine, but the XML is created with empty Descriptions elements for keys 3 and 4, which have null values. I don't need the Descriptions element in the XML when the value is null. Please help me to resolve this.

    You can do it with a correlated subquery:
    select xmlserialize(document
             xmlelement("root",
               xmlagg(
                 xmlelement("emp"
                 , xmlforest(
                     t.key as "ID"
                   , t.name as "ename"
                   , (
                       select xmlagg(
                                xmlelement("Desc", d.description)
                                order by d.description -- if necessary
                              )
                       from test_desc d
                       where d.key = t.key
                     ) as "Descriptions"
                   )
                 )
               )
             ) as clob indent
           )
    from test_table t;
    XMLSERIALIZE(DOCUMENTXMLELEMEN
    <root>
      <emp>
        <ID>1</ID>
        <ename>sam</ename>
        <Descriptions>
          <Desc>desc1_1</Desc>
          <Desc>desc1_2</Desc>
          <Desc>desc1_3</Desc>
        </Descriptions>
      </emp>
      <emp>
        <ID>2</ID>
        <ename>max</ename>
        <Descriptions>
          <Desc>desc2_1</Desc>
          <Desc>desc2_2</Desc>
          <Desc>desc2_3</Desc>
        </Descriptions>
      </emp>
      <emp>
        <ID>3</ID>
        <ename>peter</ename>
      </emp>
      <emp>
        <ID>4</ID>
        <ename>andrew</ename>
      </emp>
    </root>
    Or an OUTER JOIN + GROUP BY:
    select xmlserialize(document
             xmlelement("root",
               xmlagg(
                 xmlelement("emp"
                 , xmlforest(
                     t.key as "ID"
                   , t.name as "ename"
                   , xmlagg(
                       xmlforest(d.description as "Desc")
                       order by d.description -- if necessary
                     ) as "Descriptions"
                   )
                 )
               )
             ) as clob indent
           )
    from test_table t
         left outer join test_desc d on d.key = t.key
    group by t.key
           , t.name
    ;
    Edited by: odie_63 on 11 Jul 2012 14:54 - added 2nd option

  • Nested Group Function without Group By Problem

    Hey everyone,
    I have 3 tables as below:
    TABLES
    ITEM (Item_no, Item_price, desc)
    DeliveryItem (delivery_no, item_no, quantity)
    Delivery (delivery_no, delivery_date)
    SELECT desc, MAX(SUM(quantity)) FROM DeliveryItem, Item, Delivery WHERE Item.item_no = DeliveryItem.item_no AND Delivery.delivery_no = deliveryitem.delivery_no;
    And I'm trying to output the description of the most delivered item, but I got an error: SQL Error: ORA-00978: nested group function without GROUP BY. Could you help me fix my code?
    Thanks

    Hi,
    DESC is not a good column name; you could get errors if the parser thinks it means DESCending. I used DESCRIPTION instead, below.
    I think the best way is to do the SUM in a sub-query, like this:
    WITH got_r_num AS
    (
        SELECT  item_no
        ,       SUM (quantity)                               AS total_quantity
        ,       RANK () OVER (ORDER BY SUM (quantity) DESC)  AS r_num
        FROM    deliveryitem
        GROUP BY item_no
    )
    SELECT  i.description
    ,       r.total_quantity
    FROM    got_r_num  r
    JOIN    item       i  ON  r.item_no = i.item_no
    WHERE   r.r_num = 1
    ;
    If you want to do it without a sub-query:
    SELECT  MIN (i.description) KEEP (DENSE_RANK LAST ORDER BY SUM (di.quantity))
                                     AS description
    ,       MAX (SUM (di.quantity))  AS total_quantity
    FROM    deliveryitem  di
    JOIN    item          i  ON  di.item_no = i.item_no
    GROUP BY i.description
    ;
    If you use nested aggregate functions, then every column in the SELECT clause must be an aggregate applied to either
    (a) another aggregate, or
    (b) one of the GROUP BY expressions.
    That's why you got the ORA-00978 error.
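    A minimal illustration of that rule, using the deliveryitem table from the question:
    -- OK: SUM(quantity) is computed per item_no group, then MAX is taken over those groups
    SELECT MAX(SUM(quantity)) FROM deliveryitem GROUP BY item_no;
    -- ORA-00978: nested group function without GROUP BY
    SELECT MAX(SUM(quantity)) FROM deliveryitem;
    -- ORA-00937: item_no is a GROUP BY column but is not wrapped in an aggregate
    SELECT item_no, MAX(SUM(quantity)) FROM deliveryitem GROUP BY item_no;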
    This second approach will only display one row of output, so if there is a tie for the item with the greatest total_quantity, only one description will be shown. The RANK method will show all items that had the highest total_quantity.
    It looks like the delivery table plays no role in this problem, but if there's some reason for including it, you can join it to either query above.
    Of course, unless you post test copies of your tables (CREATE TABLE and INSERT statements) I can't test anything.
    Edited by: Frank Kulash on Nov 6, 2010 10:57 AM

  • Using decode function without negative values

    Hi friends
    I am using oracle 11g
    I have at doubt regarding the following.
    create table Device(Did char(20),Dname char(20),Datetime char(40),Val char(20));
    insert into Device values('1','ABC','06/13/2012 18:00','400');
    insert into Device values('1','abc','06/13/2012 18:05','600');
    insert into Device values('1','abc','06/13/2012 18:55','600');
    insert into Device values('1','abc','06/13/2012 19:00','-32768');
    insert into Device values('1','abc','06/13/2012 19:05','800');
    insert into Device values('1','abc','06/13/2012 19:10','600');
    insert into Device values('1','abc','06/13/2012 19:15','900');
    insert into Device values('1','abc','06/13/2012 19:55','1100');
    insert into Device values('1','abc','06/13/2012 20:00','-32768');
    insert into Device values('1','abc','06/13/2012 20:05','-32768');
    Like this, I am inserting data into the table every 5 minutes. Here I need the result like:
    output:
    Dname 18:00 19:00 20:00
    abc 400 -32768 -32768
    To retrieve this result I am using the DECODE function:
    SELECT Dname,
           MAX(DECODE(rn, 1, val)) h1,
           MAX(DECODE(rn, 2, val)) h2,
           MAX(DECODE(rn, 3, val)) h3
    FROM
      (SELECT Dname, Datetime, Val,
              row_number() OVER (partition by Dname order by datetime asc) rn
       FROM Device
       WHERE substr(datetime,15,2) = '00')
    GROUP BY Dname;
    According to above data expected result is
    Dname 18:00 19:00 20:00
    abc 400 600(or)800 1100
    This means I don't want to display negative values; instead of those values I want to show the previous or next value.
    Edited by: 913672 on Jul 2, 2012 3:44 AM

    Are you looking for something like this?
    select * from
    (select dname,
            datetime,
            val,
            lag(val) over (partition by upper(dname) order by datetime) prev_val,
            lead(val) over (partition by upper(dname) order by datetime) next_val,
            case when nvl(val,0) < 0 and lag(val) over (partition by upper(dname) order by datetime) > 0 then
              lag(val) over (partition by upper(dname) order by datetime)
            else
              lead(val) over (partition by upper(dname) order by datetime)
            end gt0_val
     from device)
    where substr(datetime,15,2) = '00'
    order by datetime;
    Please take a look at the result column gt0_val.
    Edited by: hm on 02.07.2012 04:06

  • How to use the Pivot function for group ranges in Oracle SQL

    Hi,
    Good Morning !!!
    I need to show the data in the below format. There are 2 columns: one is State and the other is Rate.
    State     <100     100-199     200-299     300-399     400-499     500-599     600-699     700-799     800-899     900-999     >=1000     Total
    AK     1     2     0     4     1     4     4     35     35     4     1     25
    AL     0     0     2     27     10     17     35     2     2     35     0     103
    AR     0     0     1     0     0     2     2     13     13     2     0     6
    AZ     0     1     2     14     2     14     13     3     3     13     0     57
    CA     0     0     1     6     2     7     3     4     4     3     0     34
    I developed the below query but am unable to use the range in the PIVOT function. Please help with this.
    (select      (SELECT SHORT_DESCRIPTION
         FROM CODE_VALUES
         WHERE CODE_TYPE_CODE = ad.STATE_TYPE_IND_CODE
         AND VALUE = ad.STATE_CODE
         ) STATE,
    nr.rate
         FROM neutrals n,
         contacts c,
         addresses ad,
         xref_contacts_addresses xca,
         neutral_rates nr
                        where n.contact_id=c.contact_id
                        and n.address_id = ad.address_id
                        and xca.address_id=ad.address_id
                        and xca.contact_id=c.contact_id
                        and nr.contact_id = n.contact_id
                        and nr.rate_frequency='HOUR' )

    user8564931 wrote:
    This solutions is useful and Thanks for your reply.
    How can i get the Min value and Max value for each row ?
    State     <100     100-199     200-299     300-399     400-499     500-599     600-699     700-799     800-899     900-999     >=1000     Total     Min     Max
    IL     0     0     1     5     1     5     40     1     1     40     0     53     $10     $2,500
    IN     0     0     0     0     0     0     1     49     49     1     0     3     $70     $1,500
    This?
    WITH t AS
            (SELECT 'AL' state, 12 VALUE FROM DUAL
             UNION ALL
             SELECT 'AL' state, 67 VALUE FROM DUAL
             UNION ALL
             SELECT 'AL' state, 23 VALUE FROM DUAL
             UNION ALL
             SELECT 'AL' state, 12 VALUE FROM DUAL
             UNION ALL
             SELECT 'AL' state, 12 VALUE FROM DUAL
             UNION ALL
             SELECT 'AL' state, 78 VALUE FROM DUAL
             UNION ALL
             SELECT 'AL' state, 34 VALUE FROM DUAL
             UNION ALL
             SELECT 'AL' state, 4 VALUE FROM DUAL
             UNION ALL
             SELECT 'AL' state, 12 VALUE FROM DUAL
             UNION ALL
             SELECT 'AL' state, 15 VALUE FROM DUAL
             UNION ALL
             SELECT 'AZ' state, 6 VALUE FROM DUAL
             UNION ALL
             SELECT 'AZ' state, 123 VALUE FROM DUAL
             UNION ALL
             SELECT 'AZ' state, 123 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 23 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 120 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 456 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 11 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 24 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 34 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 87 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 23 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 234 VALUE FROM DUAL
             UNION ALL
             SELECT 'MA' state, 789 VALUE FROM DUAL
             UNION ALL
             SELECT 'MH' state, 54321 VALUE FROM DUAL),
         -- End of test data
         t1 AS
            (  SELECT state,
                      NVL (COUNT (DECODE (VALUE, 0, 0)), 0) "<100",
                      NVL (COUNT (DECODE (VALUE, 1, 1)), 0) "100-199",
                      NVL (COUNT (DECODE (VALUE, 2, 2)), 0) "200-299",
                      NVL (COUNT (DECODE (VALUE, 3, 3)), 0) "300-399",
                      NVL (COUNT (DECODE (VALUE, 4, 4)), 0) "400-499",
                      NVL (COUNT (DECODE (VALUE, 5, 5)), 0) "500-599",
                      NVL (COUNT (DECODE (VALUE, 6, 6)), 0) "600-699",
                      NVL (COUNT (DECODE (VALUE, 7, 7)), 0) "700-799",
                      NVL (COUNT (DECODE (VALUE, 8, 8)), 0) "800-899",
                      NVL (COUNT (DECODE (VALUE, 9, 9)), 0) "900-999",
                      NVL (COUNT (DECODE (VALUE, 10, 10)), 0) ">=1000"
                 FROM (SELECT state,
                              CASE
                                 WHEN VALUE < 100 THEN 0
                                 WHEN VALUE BETWEEN 100 AND 199 THEN 1
                                 WHEN VALUE BETWEEN 200 AND 299 THEN 2
                                 WHEN VALUE BETWEEN 300 AND 399 THEN 3
                                 WHEN VALUE BETWEEN 400 AND 499 THEN 4
                                 WHEN VALUE BETWEEN 500 AND 599 THEN 5
                                 WHEN VALUE BETWEEN 600 AND 699 THEN 6
                                 WHEN VALUE BETWEEN 700 AND 799 THEN 7
                                 WHEN VALUE BETWEEN 800 AND 899 THEN 8
                                 WHEN VALUE BETWEEN 900 AND 999 THEN 9
                                 WHEN VALUE >= 1000 THEN 10
                              END
                                 VALUE
                         FROM t)
             GROUP BY state)
    SELECT STATE,
           "<100",
           "100-199",
           "200-299",
           "300-399",
           "400-499",
           "500-599",
           "600-699",
           "700-799",
           "800-899",
           "900-999",
           ">=1000",
             "<100"
           + "100-199"
           + "200-299"
           + "300-399"
           + "400-499"
           + "500-599"
           + "600-699"
           + "700-799"
           + "800-899"
           + "900-999"
           + ">=1000"
              total,
         least("<100",
           "100-199",
           "200-299",
           "300-399",
           "400-499",
           "500-599",
           "600-699",
           "700-799",
           "800-899",
           "900-999",
           ">=1000") min_val,
          greatest("<100",
           "100-199",
           "200-299",
           "300-399",
           "400-499",
           "500-599",
           "600-699",
           "700-799",
           "800-899",
           "900-999",
           ">=1000") max_val
      FROM t1
    /
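    If you specifically want the 11g PIVOT clause rather than DECODE counts, one option is to bucket the value in an inline view first and pivot on the bucket. A minimal sketch (abbreviated bucket list, and assuming a simplified source table RATES with just STATE and RATE columns):
    SELECT *
    FROM   (SELECT state,
                   CASE
                      WHEN rate < 100               THEN '<100'
                      WHEN rate BETWEEN 100 AND 199 THEN '100-199'
                      WHEN rate >= 1000             THEN '>=1000'
                      ELSE 'other'
                   END AS bucket
            FROM   rates)
    PIVOT  (COUNT(*) FOR bucket IN ('<100'    AS "<100",
                                    '100-199' AS "100-199",
                                    '>=1000'  AS ">=1000",
                                    'other'   AS "OTHER"))
    ORDER  BY state;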

  • Aggregate functions without group by clause

    hi friends,
    I was asked an interesting question by my friend. The question is...
    There is a DEPT table which has dept_no and dept_name. There is an EMP table which has emp_no, emp_name and dept_no.
    My requirement is to get the dept_no, dept_name and the number of employees in that department. This should be done without using a GROUP BY clause.
    Can anyone of you help me to get a solution for this?

    select distinct emp.deptno,dname
    ,count(*) over(partition by emp.deptno)
    from emp
    ,dept
    where emp.deptno=dept.deptno;
    10     ACCOUNTING     3
    20     RESEARCH     5
    30     SALES     6

  • Can I use the LEAD function with the GROUP BY clause

    I could use this query and get the right output since I specify product_id = 2000:
    select product_id, order_date,
    lead (order_date,1) over (ORDER BY order_date) AS next_order_date
    from orders
    where product_id = 2000;
    But can I run this query with a GROUP BY,
    for example:
    select product_id, order_date,
    lead (order_date,1) over (ORDER BY order_date) AS next_order_date
    from orders
    group by product_id ;
    since the data would be like the following and I need
    Product_id order Date
    2000 1-jan-09
    2000 21-jan-09
    3000 13-jan-09
    3000 15-jan-09
    4000 18-jan-09
    4000 19-jan-09
    the output would be like, for example:
    Product_id order Date Next_date
    2000 1-jan-09 21-jan-09
    3000 13-jan-09 15-jan-09
    4000 18-jan-09 19-jan-09
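    One way to get that shape (a sketch, assuming the orders table above) is to compute LEAD per product and keep only each product's first row with ROW_NUMBER:
    select product_id, order_date, next_order_date
    from  (select product_id,
                  order_date,
                  lead(order_date) over (partition by product_id order by order_date) as next_order_date,
                  row_number()     over (partition by product_id order by order_date) as rn
           from orders)
    where rn = 1;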

    Thanks everybody for your help.
    Let me mention exactly what I require.
    create table SCHEDULER
    ( REF       VARCHAR2(10),
      NO        NUMBER,
      PORT      VARCHAR2(10),
      ARRIVAL   DATE,
      DEPARTURE DATE
    );
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0677',1,'KUWAIT','1-Sep-09','02-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0677',2,'INDIA','5-Sep-09','07-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0677',3,'COLUMBO','8-Sep-09','09-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0677',4,'IRAN','10-Sep-09','12-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0677',5,'IRAQ','14-Sep-09','15-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0677',6,'DELHI','17-Sep-09','19-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0677',7,'POLAND','21-Sep-09','23-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0678',1,'INDIA','5-Sep-09','07-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0678',2,'COLUMBO','8-Sep-09','09-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0678',3,'IRAN','10-Sep-09','12-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0678',4,'IRAQ','14-Sep-09','15-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0678',5,'DELHI','17-Sep-09','19-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0678',6,'POLAND','21-Sep-09','23-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA0678',7,'GOA','1-Oct-09','02-Oct-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2372',1,'INDIA','1-Sep-09','02-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2372',2,'KERALA','3-Sep-09','03-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2372',3,'BOMBAY','4-Sep-09','04-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2373',1,'INDIA','5-Sep-09','06-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2373',2,'ANDHERI','6-Sep-09','07-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2376',1,'INDIA','5-Sep-09','07-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2420',1,'INDIA','5-Sep-09','06-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2420',2,'ANDHERI','7-Sep-09','08-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2420',3,'BURMA','10-Sep-09','11-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2420',4,'BENGAL','11-Sep-09','12-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2445',1,'INDIA','4-Sep-09','05-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2445',2,'BURMA','7-Sep-09','09-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2498',1,'BENGAL','8-Sep-09','08-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2498',2,'COCHIN','11-Sep-09','11-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2498',3,'LANKA','12-Sep-09','12-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2498',4,'COLUMBO','13-Sep-09','15-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2498',5,'INDIA','17-Sep-09','18-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2505',1,'COLUMBO','5-Sep-09','06-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2505',2,'GOA','8-Sep-09','09-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2505',3,'INDIA','13-Sep-09','15-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2510',1,'INDIA','4-Sep-09','06-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2510',2,'BENGAL','8-Sep-09','09-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2510',3,'GOA','10-Sep-09','11-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2513',1,'INDIA','7-Sep-09','09-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2513',2,'USA','11-Sep-09','11-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2513',3,'UK','12-Sep-09','13-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2520',1,'INDIA','4-Sep-09','06-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2520',2,'BENGAL','8-Sep-09','09-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2520',3,'GOA','10-Sep-09','11-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2526',1,'INDIA','5-Sep-09','07-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2526',2,'DUBAI','10-Sep-09','11-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2526',3,'GOA','13-Sep-09','15-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2526',4,'OMAN','17-Sep-09','18-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2526',5,'INDIA','19-Sep-09','20-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2527',1,'BURMA','7-Sep-09','08-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2527',2,'INDIA','9-Sep-09','10-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2527',3,'ANDHERI','10-Sep-09','16-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2532',1,'SHARJAH','3-Sep-09','04-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2532',2,'AEDXB','5-Sep-09','05-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2533',1,'AESHJ','2-Sep-09','02-Sep-09');
    INSERT INTO SCHEDULER(REF,NO,PORT,ARRIVAL,DEPARTURE)
    VALUES('VA2533',2,'INDIA','3-Sep-09','03-Sep-09');
    COMMIT;
    Suppose these records show the REF travelling from one location to another with respect to date.
    We need to find, for each REF group, the dates of travelling for a specified pair of locations, i.e. starting from INDIA and ending at GOA.
    The output should be like the data shown below.
    FROM LOCATION TO LOCATION
    REF , NO , PORT , ARRIVAL ,DEPARTURE , REF , NO , PORT , ARRIVAL , DEPARTURE
    VA0678     1 INDIA     5-Sep-09 07-Sep-09     VA0678 7 GOA 1-Oct-09 02-Oct-09     
    VA2510     1 INDIA     4-Sep-09 06-Sep-09     VA2510 3 GOA 10-Sep-09 11-Sep-09
    VA2520     1 INDIA     4-Sep-09 06-Sep-09     VA2520 3 GOA 10-Sep-09 11-Sep-09
    VA2526     1 INDIA     5-Sep-09 07-Sep-09     VA2526 3 GOA 13-Sep-09 15-Sep-09
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
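    One way to pair those rows (a sketch, assuming each qualifying REF calls at INDIA before GOA; a REF calling at INDIA more than once before GOA would produce extra pairs):
    select i.ref, i.no, i.port, i.arrival, i.departure,
           g.no as goa_no, g.port as goa_port, g.arrival as goa_arrival, g.departure as goa_departure
    from   scheduler i
           join scheduler g
             on  g.ref = i.ref
             and g.no  > i.no
    where  i.port = 'INDIA'
      and  g.port = 'GOA'
    order by i.ref, i.no;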

  • Using count function with grouped records

    Hi all,
    This seems like it should be real easy but I have yet to figure out a simple way to do this.
    Suppose I want to count Opportunities that are grouped by Sales Rep. At run-time I am filtering this list with a parameter for Sales Stage and created date.
    I've simplified this greatly, but here's what my setup looks like now:
    Sales Rep --------- Count
    <?for-each-group:Opportunity[SalesStage=param1 and Created>param2];./salesrep?>
    <?salesrep?>-------<?count(current-group()/Id)?>
    <?end for-each-group?>
    Total
    The only solution I have so far to get my grand total is to create a variable and keep a running total which I'll then display in the Total column. While it works, it seems like there should be an easier way, like doing a simple count(Id) to get the grand total. But since the Total appears after the end for-each-group, I lose the filter that was applied to the group so that count is invalid.
    Any thoughts from the experts?
    Thanks!

    To get grand total
    use
    <?count(/Opportunity[SalesStage=param1 and Created>param2]/Id)?>
    Since you did not mention the complete XML, I assumed Opportunity is the root.
    if not, put the full path from the root.
    if you give some xml sample, and explain the output you wanted, we can fix it immediately.
    Go through these too; something can be pulled from here:
    http://winrichman.blogspot.com/search/label/Summation%20In%20BIP
    http://winrichman.blogspot.com/search/label/BIP%20Vertical%20sum

  • Using DAQmx functions without installation

    Good evening everybody,
    I have an instrumentation problem linked to DAQmx. I would like to communicate with an instrument using NI-DAQmx tasks. I use Visual Studio 2003 as the development environment. In the end, I want to produce a DLL and put it on another computer. The question is the following: if I want to launch the DLL by calling it, will I need NI-DAQ on the other computer, or some other software to use the DLL?
    When I tried to produce an exe from a VI, I needed the LabVIEW Run-Time to launch the exe. If I want to launch the DLL, will I be able to use the DLL alone on the computer, without installing NI-DAQ? Can I use the tasks with the DLL alone (and since I need the Run-Time for the .exe, do I need the Run-Time here also)?
    I am sorry if this is not clear, but I am available to explain the problem in more detail.
    Best regards.
    Thank you.
    gautier 

    Since you have LabVIEW installed on your computer, the Run-Time is not required for running LabVIEW executables.
    But if you decide to use your LabVIEW executable on another computer, you will need to install the LabVIEW Run-Time on it. You can also build an installer which will contain your program and additional components (such as the LabVIEW Run-Time and required drivers). Building an installer avoids having to download and install the components separately.
    For more information regarding installer, you can look into LabVIEW's help (Building an Installer (Windows)) or follow Core 2 training course.
    If your program uses NI-488.2 driver, you will need to install it on your computer.
    Installing LabVIEW Run-Time is only required if your program which contains DLL calls is a LabVIEW executable.
    Regards,
    Jérémy C.
    National Instruments France

  • SSRS 2008R2: Not able to use the Previous aggregate function in a matrix column cell

    Hi Expert,
    I have used a matrix tablix in my report. It is working fine, but when I try to use the Previous aggregate in one matrix column cell I get the error below:
    The use of the Previous aggregate function in a tablix cell within 'Tablix1' is not supported.
    Please help me regarding that.
    Thanks Rana

    Hi Rana,
    In your scenario, you use the Previous function in the “Data” cell, right? The Previous function cannot be used in the overlapping parts of a row group and column group. One workaround for this issue is to use custom code to get the previous value:
    Public Shared previous As Integer
    Public Shared current As Integer

    Public Shared Function GetCurrent(Item As Integer) As Integer
       previous = current
       current = Item
       Return current
    End Function

    Public Shared Function GetPrevious() As Integer
       Return previous
    End Function
    Then you can use the expression below in the “Data” cell to get the previous value:
    =Code.GetCurrent(fields!Score.Value) & "-Previous-" & iif(Code.GetPrevious()=0,"",Code.GetPrevious())
    If you have any questions, please feel free to ask.
    Regards,
    Charlie Liao
    TechNet Community Support

  • OBIEE: Incorrect SQL - with count function uses ORDER BY instead of GROUP BY

    I made a basic report that is a client count; I want to know how many clients the company has.
    But when I run this report, OBIEE generates an ORDER BY clause instead of a GROUP BY. Remember that I'm using the count function, which is an aggregation.
    The SQL generated was:
    select 'N0' as c1,
    count(*) as c2
    from
    (select distinct T1416.CLIENT_INTER_KEY as c1
    from
    (select *
    from prd.D_SERVICE where SOURCE_SYS in ('ARBOR','PPB') and DW_SERV_ST_ID in (100000003,100000009)) T1836,
    (select *
    from prd.D_CLIENT) T1416,
    (select *
    from prd.D_CUSTOMER_ACCOUNT where SOURCE_SYS In ('ARBOR','PPB')) T1515
    where ( T1416.DW_CLIENT_ID = T1515.DW_CLIENT_ID and T1515.DW_CUST_ACCT_ID = T1836.DW_CUST_ACCT_ID and T1836.MSISDN = '917330340' )
    ) D1
    order by c1
    The error that I receive is:
    "Query Status: Query Failed: [nQSError: 16001] ODBC error state: S1000 code: -1005018 message: [Sybase][ODBC Driver][Adaptive Server Anywhere]Illegal ORDER BY item Order Item: 'N0',
    -- (opt_OrderBy.cxx 429) .
    [nQSError: 16011] ODBC error occurred while executing SQLExtendedFetch to retrieve the results of a SQL statement."
    If I substitute the ORDER BY with GROUP BY and test it in Sybase, the query runs without any problem.
    select 'N0' as c1,
    count(*) as c2
    from
    (select distinct T1416.CLIENT_INTER_KEY as c1
    from
    (select *
    from prd.D_SERVICE where SOURCE_SYS in ('ARBOR','PPB') and DW_SERV_ST_ID in (100000003,100000009)) T1836,
    (select *
    from prd.D_CLIENT) T1416,
    (select *
    from prd.D_CUSTOMER_ACCOUNT where SOURCE_SYS In ('ARBOR','PPB')) T1515
    where ( T1416.DW_CLIENT_ID = T1515.DW_CLIENT_ID and T1515.DW_CUST_ACCT_ID = T1836.DW_CUST_ACCT_ID and T1836.MSISDN = '917330340' )
    ) D1
    group by c1
    Do you know why OBIEE generates this SQL? Why does it use an ORDER BY and not a GROUP BY with an aggregation function? How can I resolve this problem?
    Regards,
    Susana Figueiredo

    Verify your repository design and make sure that you have defined a COUNT aggregate on the fact column. You would also need to define the content level of each dimension in the fact table.

  • SQL Performance issue: Using user defined function with group by

    Hi Everyone,
    I'm new here and could really use some help with a weird performance issue. I hope this is the right topic for SQL performance issues.
    Well, OK, I created a function for converting a date from timezone GMT to a specified timezone.
    CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
    IS
    tz_name VARCHAR2(100);
    date_out date;
    BEGIN
    SELECT
    to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
    TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
    INTO date_out
    FROM dual;
    RETURN date_out;
    END fnc_user_rep_date_to_local;
    The following statement is just an example; the real statement is much more complex. So I select some date values from a table and aggregate a little.
    select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp
    This statement selects ~70000 rows and needs ~70 ms.
    If i use the function it selects the same number of rows ;-) and takes ~ 4 sec ...
    select
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin')
    I understand that the DB has to execute the function for each row.
    But if I execute the following statement, it takes only ~90ms ...
    select
    fnc_user_rep_date_to_gmt(stp_end_stamp,'Europe/Berlin','ny21654'),
    noi
    from (
    select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp
    )
    The execution plan for all three statements is EXACTLY the same!!!
    Usually I would say that I should use the third statement and the world would be in order. BUT I'm working on a BI project with a tool called Business Objects and it generates the SQL, so my hands are tied and I can't make this tool generate the SQL as a sub-select.
    My questions are:
    Why is the second statement so much slower than the third?
    and
    How can I force the optimizer to do whatever it is doing to make the third statement so fast?
    I would really appreciate some help on this really weird issue.
    Thanks in advance,
    Andi

    Hi,
    The execution plan for all three statements is EXACTLY the same!!!
    Not exactly. The plans are the same, true, but they use a slightly different approach to calling the function. See:
    drop table t cascade constraints purge;
    create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
    exec dbms_stats.gather_table_stats(user, 't');
    create or replace function test_fnc(p_int number) return number is
    begin
        return trunc(p_int);
    end;
    explain plan for select id from t group by id;
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from t group by test_fnc(id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from (select id from t group by id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    Output:
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL>
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "TEST_FNC"("ID")[22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$F5BB74E1
       2 - SEL$F5BB74E1 / T@SEL$2
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
          OUTLINE(@"SEL$2")
          OUTLINE(@"SEL$1")
          MERGE(@"SEL$2")
          OUTLINE_LEAF(@"SEL$F5BB74E1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    37 rows selected.
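    One more idea, not from this thread but perhaps worth testing: if the conversion is deterministic for a given input, declaring the function DETERMINISTIC lets Oracle cache and reuse results for repeated input values within a fetch, which can reduce the number of calls. A sketch of the modified function:
    CREATE OR REPLACE FUNCTION fnc_user_rep_date_to_local (
       date_in    IN DATE,
       tz_name_in IN VARCHAR2
    ) RETURN DATE DETERMINISTIC
    IS
    BEGIN
       -- same conversion as before, without the needless SELECT FROM dual
       RETURN CAST(FROM_TZ(CAST(date_in AS TIMESTAMP), 'GMT')
                   AT TIME ZONE tz_name_in AS DATE);
    END fnc_user_rep_date_to_local;
    /
    How much this helps depends on how many distinct stp_end_stamp values there are, so treat it as an experiment rather than a guaranteed fix.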

  • Hey, I have an HP Pavilion g series and I would like to know how to use the function keys without having to press fn

    Hey, I have an HP Pavilion g series and I would like to know how to use the function keys without having to click the fn + function key.

    Hi,
    You can change this in your system bios as described in the link below.
    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c02035108&cc=us&dlc=en&lc=en&jumpid=reg_R1002_US...
    Regards,
    DP-K

  • I want a single update query without using the function

    I want to update the sells_table selling_code field with the product_code that has the max date from the product table.
    In the product table there are multiple product_codes, date-wise.
    I have done it with the query below using a function, but can we do it in only one UPDATE query
    without using the function?
    UPDATE sells_table
    SET selling_code = MAXDATEPRODUCT(ctd_vpk_product_code)
    WHERE NVL(update_product_flag,0) = 0 ;
    CREATE OR REPLACE FUNCTION HVL.maxdateproduct (p_product IN VARCHAR2) RETURN NUMBER
    IS
    max_date_product VARCHAR2 (100);
    BEGIN
    BEGIN
    SELECT NVL (TRIM (product_code), 0)
    INTO max_date_product
    FROM (SELECT product_code, xref_end_dt
    FROM product pr
    WHERE TO_NUMBER (p_product) = pr.item_id
    ORDER BY xref_end_dt DESC)
    WHERE ROWNUM = 1; -- It will return only one row - max date product code
    EXCEPTION
    WHEN OTHERS
    THEN
    RETURN 0;
    END;
    RETURN max_date_product;
    END maxdateproduct;
    Thanks in Advance.

    Hi,
    Something like this:
    update sells_table st
       set selling_code = (select nvl(trim(product_code), 0)
                           from  (select product_code
                                       , item_id
                                       , rank() over (partition by item_id order by xref_end_dt desc) rn
                                  from product
                                 ) pr
                           where rn = 1
                             and pr.item_id = st.ctd_vpk_product_code
                          )
     where nvl(update_product_flag, 0) = 0;
    As such, not tested due to lack of input sample.
    Regards
    Anurag Tibrewal.

  • Max value without using max() function

    Hi
    Is there any way to get the max value from a table without using MAX() function
    Thanks

    Well, if you think about it, I'm sure you'll find a solution.
    What does max(field) mean? It is simply the value of the field for which no other value of the same field greater than it exists.
    Consider the following:
    create table TAB (
      fld NUMBER(5)
    );
    Translate the logic and you'll have:
    select a.fld from TAB a where NOT EXISTS (select b.fld from TAB b where b.fld > a.fld) and rownum = 1;
    Of course there are better ways, I'm sure; you'll just have to figure them out.
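    For example, one alternative (just a sketch) is to sort the rows descending and keep the first one:
    select fld
    from  (select fld from TAB order by fld desc)
    where rownum = 1;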
