(10g) - Grouping w/ Unions & NULLS

Hello All,
I am in need of what I hoped would be some simple help. Here is the sample query:
SELECT
gl_acct,
gl_batch,
gl_desc,
gl_amt,
gl_user
FROM(
SELECT
jc_acct gl_acct,
jc_batch gl_batch,
jc_desc gl_desc,
jc_amt gl_amt,
jc_user gl_user
FROM JCTABLE
UNION ALL
SELECT
py_acct gl_acct,
py_batch gl_batch,
NULL gl_desc,
py_amt gl_amt,
py_user gl_user
FROM PYTABLE)
The results that I am looking for are as follows:
ACCT BATCH DESC AMT USER
1234 111111 JC desc1 2.00 joe
1234 111111 JC desc1 3.00 joe
1234 111111 JC desc2 3.00 kim
5678 222222 JC desc3 4.00 bill
5678 222222 JC desc1 4.00 joe
Where, for example, the last two records' amounts come from the PY table. Both the JC and PY tables have ACCT 5678 and BATCH 222222. I want the PY amount to be grouped with the JC description, batch and acct.
Message was edited by:
KB

something like this then?
SELECT
a.gl_acct,
a.gl_batch,
a.gl_desc,
a.gl_amt + b.gl_amt,
a.gl_user
FROM (SELECT
        jc_acct     gl_acct,
        jc_batch    gl_batch,
        jc_desc     gl_desc,
        SUM(jc_amt) gl_amt,
        jc_user     gl_user
      FROM JCTABLE
      GROUP BY jc_acct,
               jc_batch,
               jc_desc,
               jc_user
     ) a,
     (SELECT
        py_acct     gl_acct,
        py_batch    gl_batch,
        SUM(py_amt) gl_amt,
        py_user     gl_user
      FROM PYTABLE
      GROUP BY py_acct,
               py_batch,
               py_user
     ) b
WHERE a.gl_acct  = b.gl_acct
AND   a.gl_batch = b.gl_batch
(note: the GROUP BY lists the columns without their aliases, and since the PY side has no description of its own -- it was NULL in your union -- the join is on acct and batch only; joining on a NULL gl_desc would never match)
you did not tell us what the user output should look like
cheers Bernhard
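Another way to come at the original question: since the PY rows carry a NULL description, grouping over the UNION ALL itself and taking MAX(gl_desc) recovers the JC description for each account/batch. This is only a sketch, assuming each account/batch/user combination has a single JC description and that one summed row per group is actually wanted (table and column names are taken from the post):

```sql
SELECT   gl_acct,
         gl_batch,
         MAX(gl_desc) AS gl_desc,   -- MAX ignores the NULLs contributed by the PY rows
         SUM(gl_amt)  AS gl_amt,
         gl_user
FROM    (SELECT jc_acct  gl_acct,
                jc_batch gl_batch,
                jc_desc  gl_desc,
                jc_amt   gl_amt,
                jc_user  gl_user
         FROM   JCTABLE
         UNION ALL
         SELECT py_acct, py_batch, NULL, py_amt, py_user
         FROM   PYTABLE)
GROUP BY gl_acct, gl_batch, gl_user;
```

This avoids the join entirely; if the per-row detail must be kept, an analytic MAX(gl_desc) OVER (PARTITION BY gl_acct, gl_batch) would fill in the description without collapsing rows.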

Similar Messages

  • Oracle 10g group by sorting

    Hello friends,
    Please refer to my previous thread, where we discussed the sorting option of adding an ORDER BY wherever a GROUP BY is applicable:
    ORACLE 10G group by sorting Suggestion rqd
    Read it? Then come to my point:
    if there are multiple columns in the GROUP BY, then if we want to include an ORDER BY clause, does the same set of columns have to be included in the ORDER BY as well?
    I don't think it is so.
    Example:
    SELECT SUM(QTY1),SUM(QTY2), PARTNO, PART_DESC
    from part_sen
    group by partno, part_desc
    order by partno, part_desc
    qty1 qty2 partno partdesc
    70     101     I001     BRAKE SHOE
    56     54     I002     HORN
    30     30     I003     GEAR
    15     5     I004     DOOR
    5     5     I007     MOTOR GEAR
    10     15     I008     ANCILLARY
    10     15     I009     Window
    so in the ORDER BY clause we can include only the first column, that is partno:
    SELECT SUM(QTY1),SUM(QTY2), PARTNO, PART_DESC
    from part_sen
    group by partno, part_desc
    order by partno
    I get the same result.
    Please give your suggestions.
    s

    Alex,
    SELECT SUM(QTY1),SUM(QTY2), PARTNO, PART_DESC
    from part_sen
    group by partno, part_desc
    70     101     I001     BRAKE SHOE
    56     54     I002     HORN
    30     30     I003     GEAR
    15     5     I004     DOOR
    5     5     I007     MOTOR GEAR
    10     15     I008     ZLLADER
    10     15     I008     OCTLADER
    10     15     I008     ANCILLARY
    10     15     I008     BALLLADER
    10     15     I009     Window
    SELECT SUM(QTY1),SUM(QTY2), PARTNO, PART_DESC
    from part_sen
    group by partno, part_desc
    order by partno
    70     101     I001     BRAKE SHOE
    56     54     I002     HORN
    30     30     I003     GEAR
    15     5     I004     DOOR
    5     5     I007     MOTOR GEAR
    10     15     I008     ANCILLARY
    10     15     I008     BALLLADER
    10     15     I008     OCTLADER
    10     15     I008     ZLLADER
    Without ordering by the second column part_desc, how is it properly ordered?
    So is adding ORDER BY partno the same as ORDER BY partno, part_desc?
    Please reply.
    S
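    The reason the second query looked sorted on part_desc too is coincidence: rows that tie on partno (the I008 rows here) may come back in any order unless the tie-breaker is named. A hedged sketch of the difference, using the part_sen table from the post:

    ```sql
    -- Deterministic: ties on partno are broken by part_desc
    SELECT SUM(qty1), SUM(qty2), partno, part_desc
    FROM   part_sen
    GROUP  BY partno, part_desc
    ORDER  BY partno, part_desc;

    -- Not deterministic within ties: the four I008 rows may appear in any order
    SELECT SUM(qty1), SUM(qty2), partno, part_desc
    FROM   part_sen
    GROUP  BY partno, part_desc
    ORDER  BY partno;
    ```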

  • RE: The selectBooleanRadio component's group attribute is null

    Hi all,
    I am using JDeveloper 11.1.2.3.0.
    I have one read-only table in my fragment which contains one checkbox and one radio button.
    The requirement is that whenever I check the checkbox, the radio button should get enabled.
    I have implemented this requirement, and the radio button is enabled whenever I check the checkbox, but with an error message in the log.
    The log message is as follows:
    The selectBooleanRadio component's group attribute is null. It must be set to a non-null value for the selectBooleanRadio component to function properly. The selectBooleanRadio component is meant to be used with other selectBooleanRadio components with the same group value.
    So, can anyone help me resolve the above issue?
    Regards,
    Syam

    Hi,
    I think an error message cannot be clearer than this:
    The selectBooleanRadio component's group attribute is null. It must be set to a non-null value for the selectBooleanRadio component to function properly.
    selectBooleanRadio component's group attribute  ==> the selectBooleanRadio has a "group" property that you need to define a value for
    Frank
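    Concretely, the message means that radio buttons which should act as one exclusive set must share the same group string. A minimal sketch of what that looks like in the page fragment (ids, labels and value bindings here are made up for illustration; only the group attribute is the actual fix):

    ```xml
    <af:selectBooleanRadio id="sbr1"
                           group="myRadioGroup"
                           text="Option A"
                           value="#{bean.optionA}"/>
    <af:selectBooleanRadio id="sbr2"
                           group="myRadioGroup"
                           text="Option B"
                           value="#{bean.optionB}"/>
    ```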

  • Oracle 10g - Set Operator Union Causing Timeout Network problem

    Purpose is to get all of the customers not contacted given a starting date and current date(sysdate). The problem is a timeout issue because the query is inefficient. Example: A salesman has 6,946 rows returned from the cust table where his salesid =1163. Then the inner query:
    SELECT count(Customer_ID) FROM cust_info WHERE info_type_id = 32
    returns 225505 rows just based on this info_type_record.
    Next:
    SELECT c.customer_id
      FROM customer c,
        event e
      WHERE c.salesperson_id = 1163
      AND e.eventdate BETWEEN '10-Feb-2010' AND TRUNC(SYSDATE)
      AND c.customer_id = e.customer_id
      GROUP BY c.customer_id
    Returns 231 rows
    Finally:
    SELECT c.customer_id
      FROM customer c,
        note n
      WHERE c.salesperson_id = 1163
      AND n.created_date_time BETWEEN '10-Feb-2010' AND TRUNC(SYSDATE)
      AND n.note_type_id IN (1,3,4)
      AND c.customer_id   = n.pk_id
      AND n.table_name    = 'CUSTOMER'
      GROUP BY c.customer_id
    Returns 399 rows.
    How can I improve the structure of this query(see bottom)? The following is a sample data structure:
      CREATE TABLE "CUST"
       (     "CUST_ID" NUMBER,
         "SSN" VARCHAR2(9),
                      "SSN_TYP" NUMBER(1,0),
         "CREATED_DTE_TME" DATE,
         "FULLNAME" VARCHAR2(110),
         "F_NAME" VARCHAR2(35),
         "L_NAME" VARCHAR2(40),
         "BDTE" DATE,
         "DCEASED_DTE" DATE,
         "SALES_ID" NUMBER DEFAULT NULL,
         "BRNCH_ID" NUMBER,
         "HOME_BRNCH_ID" NUMBER,
         "TTL_ASSETS" NUMBER,
         "TTL_ASSETS_DTE" DATE,
         "NO_MAILINGS" NUMBER(1,0),
         "NO_CALLS" NUMBER(1,0) ) ;
      CREATE TABLE "CUST_INFO"
       (     "CUST_INFO_ID" NUMBER,
         "CUST_ID" NUMBER,
         "INFO_TYPE_ID" NUMBER ) ;
    CREATE TABLE "EVENT"
       (     "EVENT_ID" NUMBER,
         "EVENTDATE" DATE,
         "CUST_ID" NUMBER,
         "SALES_ID" NUMBER,     
                      "EVENT_INFO" VARCHAR2(4000)  )
    ENABLE ROW MOVEMENT ;
    CREATE TABLE "NOTE"
       (     "NOTE_ID" NUMBER,
         "NOTE_TYPE_ID" NUMBER DEFAULT 0,
         "TABLE_NAME" VARCHAR2(50),
         "PK_ID" NUMBER,
         "CREATED_DTE_TME" DATE ) ;
    INSERT INTO CUST VALUES(20151,'009529433',1,'01-MAY-5','FRENCH','D','M','01-DEC-01', '05-JUN-05',1163,
    NULL,0,NULL,NULL,NULL,NULL)
    INSERT INTO CUST_INFO VALUES (15,1001,32)
    INSERT INTO EVENT VALUES (5,'05-MAY-05',1001,1163,'NONE')
    INSERT INTO NOTE VALUES (100,2,'CUST',1001,TRUNC(SYSDATE))
    SELECT CUST.CUST_ID,
      SSN,
      F_NAME,
      L_NAME,
      CREATED_DTE_TME ,
      TTL_ASSETS,
      BRNCH_ID,
      SALES_ID ,
      BDTE,
      SSN_TYP,
      FULLNAME,
      Home_BRNCH_ID ,
      No_Mailings,
      No_Calls,
      DCEASED_DTE,
      TTL_ASSETS_DTE
    FROM CUST
    WHERE SALES_ID          = 1163
    AND CUST.CUST_ID NOT IN (
      SELECT CUST_ID FROM cust_info WHERE info_type_id = 32
      UNION
      SELECT c.CUST_ID
      FROM CUST c,
        event e
      WHERE c.SALES_ID = 1163
      AND e.eventdate BETWEEN '10-Feb-2010' AND TRUNC(SYSDATE)
      AND c.CUST_ID = e.CUST_ID
      GROUP BY c.CUST_ID
      UNION
      SELECT c.CUST_ID
      FROM CUST c,
        note n
      WHERE c.SALES_ID = 1163
      AND n.CREATED_DTE_TME BETWEEN '10-Feb-2010' AND TRUNC(SYSDATE)
      AND n.note_type_id IN (1,3,4)
      AND c.CUST_ID   = n.pk_id
      AND n.table_name    = 'CUST'
      GROUP BY c.CUST_ID )
    AND CUST.ssn           IS NOT NULL
    AND CUST.DCEASED_DTE IS NULL
    Any guidance is appreciated!

    It's not a problem with the set operator. When you use a date field in a WHERE clause, you must use a date conversion function; otherwise it will get stuck there.
    Here is the corrected SQL; you can try it:
    SELECT cust.cust_id, ssn, f_name, l_name, created_dte_tme, ttl_assets,
    brnch_id, sales_id, bdte, ssn_typ, fullname, home_brnch_id,
    no_mailings, no_calls, dceased_dte, ttl_assets_dte
    FROM cust
    WHERE sales_id = 1163
    AND cust.cust_id NOT IN (
    (SELECT cust_id
    FROM cust_info
    WHERE info_type_id = 32)
    UNION
    ((SELECT c.cust_id
    FROM cust c, event e
    WHERE c.sales_id = 1163
    AND e.eventdate BETWEEN to_date('10-Feb-2010','dd-mon-rrrr') AND TRUNC (SYSDATE)
    AND c.cust_id = e.cust_id
    GROUP BY c.cust_id)
    UNION
    (SELECT c.cust_id
    FROM cust c, note n
    WHERE c.sales_id = 1163
    AND n.created_dte_tme BETWEEN to_date('10-Feb-2010','dd-mon-rrrr') AND TRUNC
    (SYSDATE)
    AND n.note_type_id IN (1, 3, 4)
    AND c.cust_id = n.pk_id
    AND n.table_name = 'CUST'
    GROUP BY c.cust_id)))
    AND cust.ssn IS NOT NULL
    AND cust.dceased_dte IS NULL;
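    To the original structural question: NOT IN over a UNION of subqueries forces Oracle to materialize the whole exclusion set first. Rewriting each branch as a correlated NOT EXISTS is often faster and is also safe against NULL keys. This is only a sketch under the same tables and literals as above (the outer sales_id filter already restricts the customer, so the subqueries only need the correlation):

    ```sql
    SELECT cust.cust_id, ssn, f_name, l_name, created_dte_tme, ttl_assets,
           brnch_id, sales_id, bdte, ssn_typ, fullname, home_brnch_id,
           no_mailings, no_calls, dceased_dte, ttl_assets_dte
    FROM   cust
    WHERE  sales_id = 1163
    AND    cust.ssn IS NOT NULL
    AND    cust.dceased_dte IS NULL
    AND    NOT EXISTS (SELECT 1 FROM cust_info ci
                       WHERE  ci.cust_id = cust.cust_id
                       AND    ci.info_type_id = 32)
    AND    NOT EXISTS (SELECT 1 FROM event e
                       WHERE  e.cust_id = cust.cust_id
                       AND    e.eventdate BETWEEN TO_DATE('10-Feb-2010','dd-mon-yyyy')
                                              AND TRUNC(SYSDATE))
    AND    NOT EXISTS (SELECT 1 FROM note n
                       WHERE  n.pk_id = cust.cust_id
                       AND    n.table_name = 'CUST'
                       AND    n.note_type_id IN (1, 3, 4)
                       AND    n.created_dte_tme BETWEEN TO_DATE('10-Feb-2010','dd-mon-yyyy')
                                                    AND TRUNC(SYSDATE));
    ```

    Indexes on cust_info(cust_id, info_type_id), event(cust_id, eventdate) and note(pk_id, table_name) would let each probe be a quick lookup per customer row.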

  • Group by with null field

    Hi,
    I am trying to write a group by query that is basically:
    Select a,
    b (sometimes null),
    c,
    sum(d),
    sum(e)
    from (union select a,b,c....),
    G,
    H
    where G.field = a and
    H.field = b (+)
    group by a,
    b,
    c
    Since b is sometimes null, depending on whether I use an inner or an outer join, I will either get only the records where b is not null, or only the records where b is null.
    I obviously would like all the records whether or not b is null.
    I tried using the nvl function but then how can I use the join to H?
    Can anyone help me here?
    Thanks.
    Leah

    Hi,
    I just learned something new. I did not know that there was such a thing as a full outer join. Thank you for that alone.
    I just solved the problem. I once again used the nvl function and joined to the other folder in the administrator using outer join on master along with one to one between master and detail, and detail item might not exist in master folder. It actually worked.
    Thank you for your help and to anyone else who looked at my question.
    Leah
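    For anyone finding this later: an ordinary ANSI LEFT OUTER JOIN to H already keeps both kinds of rows. Rows whose b is NULL simply fail the join condition and come through with H's columns NULL, and GROUP BY then treats the NULL b values as one group per (a, c). Note also that in the old-style syntax the (+) would belong on the H side (H.field (+) = b) to keep all rows of the union. A sketch with placeholder names (t1, t2 and the column letters are stand-ins matching the pseudo-query above):

    ```sql
    SELECT    u.a, u.b, u.c, SUM(u.d), SUM(u.e)
    FROM     (SELECT a, b, c, d, e FROM t1
              UNION ALL
              SELECT a, b, c, d, e FROM t2) u
    JOIN      G ON G.field = u.a
    LEFT JOIN H ON H.field = u.b    -- a NULL b never matches, but the row is kept
    GROUP BY  u.a, u.b, u.c;
    ```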

  • Grouping to display null values for all the missing dates

    Hi SAP,
    I am trying to display '0.00' value for all the missing dates in my crystal reports as follows:
    17-Jan-14     40.00
    18-Jan-14       0.00
    19-Jan-14       0.00
    20-Jan-14     80.00
    However, my crystal report is showing as follows:
    17-Jan-14     40.00
    18-Jan-14       0.00
    19-Jan-14
    20-Jan-14     80.00
    The missing dates with no data are grouped together, and my '0.00' value is displayed only once for them. Kindly advise me on the solution. The formula is shown in the attachment.
    Thank you.
    Regards.

    Hi,
    Thanks for your reply.
    Fyi, I am using a formula field in crystal report to display "0.00" for days in between as follows:
    whileprintingrecords;
    if Days_Between({Command.DocDate},next({Command.DocDate}),'dd-MMM-yy') = "" then "" else "0.00"
    There is another formula field in crystal report to display the missing dates in between as follows:
    whileprintingrecords;
    Days_Between({Command.DocDate},next({Command.DocDate}),'dd-MMM-yy')
    Below is my report custom functions:
    Function Days_Between (datefield as datetime, nextdatefield as datetime, format as string)
    ' This function is only used to display what data is missing within a specified date range...output type is text
    dim thisdate as date
    dim nextdate as date
    dim output as string 'output is the text display
    dim daysbetween as number
    dim looptimes as number
    looptimes = 0
    daysbetween = 0
    output = ""
    thisdate = datevalue(datefield) + 1
    nextdate = datevalue(nextdatefield)
    if nextdate - thisdate > 1 then daysbetween = nextdate - thisdate else daysbetween = 1
    do
    if nextdate - thisdate > 1 _
    then output = output + totext(thisdate, format) + chr(10) _
    else _
    if nextdate - thisdate > 0 _
    then output = output + totext(thisdate, format)
    looptimes = looptimes + 1
    thisdate = thisdate + 1
    loop until looptimes = daysbetween
        Days_Between = output
    End Function
    The issue arises when there is more than one missing date in between: the "0.00" value is displayed only for the first missing date and not for all of them.
    Regards,
    Ting Wei 
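    An alternative that avoids the printing-time formulas entirely is to make the SQL Command itself return a row for every date, by generating a calendar with CONNECT BY LEVEL and outer-joining the data to it; the report then just prints what it gets. The table and column names below are guesses based on {Command.DocDate}, and the date range is the one from the sample:

    ```sql
    SELECT d.docdate,
           NVL(SUM(t.amount), 0) AS amount   -- days with no data come back as 0.00
    FROM  (SELECT DATE '2014-01-17' + LEVEL - 1 AS docdate
           FROM   dual
           CONNECT BY LEVEL <= DATE '2014-01-20' - DATE '2014-01-17' + 1) d
    LEFT  JOIN doc_table t ON t.docdate = d.docdate
    GROUP BY d.docdate
    ORDER BY d.docdate;
    ```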

  • Oracle 10g group by clause

    I have one SQL query using a GROUP BY clause and no ORDER BY clause. When executed in Oracle 8i, the query results are returned ordered by the first column mentioned in the GROUP BY clause. When the same query is executed in Oracle 10g, the results are returned without ordering by the first column in the GROUP BY clause. It works only when I explicitly add an ORDER BY clause. Can you please explain this? In Oracle 8i, is it that, by default, the data is ordered by the first column mentioned in the GROUP BY clause when no ORDER BY clause is given?
    In which order does Oracle 10g sort when I use a GROUP BY clause?

    [email protected] wrote:
    the use of group by is to group based on some column value; in this case why does the output differ in rows? Why is the output, when you use group by, not in the following format?
    Sorry, but this is a totally fruitless topic. Why are you bothering with something that is totally internal to the DBMS? If you want the data ordered, use ORDER BY, it's that simple.
    Check out this link, if you want some discussion on it:
    [http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:74320098178823]

  • ORACLE 10G group by sorting Suggestion rqd

    Hi friends,
    We recently migrated from Oracle 9i to Oracle 10g. We were instructed that GROUP BY will not sort the records in Oracle 10g, so we were asked to add an ORDER BY clause wherever applicable in the package.
    But after migration I inserted 100 records into the 10g database and tested, and I am getting correct sorting of the records based on the GROUP BY column. Can anyone explain why? I heard that sorting can differ between Oracle 10g versions. I am using Oracle Database 10g Enterprise Edition Release 10.2.0.4.0. If it automatically sorts, then there is no need to change the package for sorting, right?
    Give me your valuable suggestions.
    S

    And here's a simple example on 10.2.0.4 that clearly shows the results are not sorted.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    SQL> select object_type, count(*) from all_objects group by object_type;
    OBJECT_TYPE                     COUNT(*)
    CONSUMER GROUP                         2
    INDEX PARTITION                      718
    TABLE SUBPARTITION                    14
    SEQUENCE                             228
    SCHEDULE                               1
    TABLE PARTITION                      301
    PROCEDURE                             21
    ...
    Like everyone else has already said, Oracle does not guarantee the order from a group by clause with no order by.
    If you need the results in order you must use an order by clause.
    Pretty simple really.

  • 10g Grouping and  Hide Repeating Data

    I need to return a sales rep's assigned accounts. But a customer can have several sub-account records, because a new transaction will create a new record in the subacct table. Here's the query, but it returns all of the rows for the customers assigned to the salesperson, and it's really not a great format.
    All I want is each customer's last record. It would also be great to eliminate repeating data in the columns ssn, sb.salespersonid and acctid. It's easier to view the data when there's just 1 ssn, 1 salespersonid and 1 acctid but multiple subacctbal balances.
    SELECT sb.ssn,
    sb.salespersonid,
    sb.acctid,
    sb.subacctbal
    FROM subaccbal_current sb,
    sysusr_salesperson1,
    product p
    WHERE sb.subacctbal > 1
    AND sysusr_salesperson1.usrid = sb.salespersonid
    AND sb.prodid = p.prodid (+)
    AND sysusr_salesperson1.usrid = 2007
    ORDER BY sb.ssn, sb.subacctbal
    sb.ssn sb.salespersonid    acctid             subaccbal    
    16x67 2007                8x41                 3840.1447436467236765             southwestern burger
    16x67 2007                8x41                3999.53760824202520962         chips and salsa
    16x67 2007                8x41                   4001.49146206932217968            chips
    31x34 2007                8x47                 3940.7899999999823942          chipotle meal
    31x34 2007                8x47                 4019.660000000200747           coke
    31x34 2007                8x47                 4110.8599999999692953          speakers
    31x34 2007                8x47                 8216.30999999995797296         phone
    31x34 2007                8x47                 8671.1099999999555424           table cloth
    31x34 2007                8x47                 8743.079704290217582865      pickles
    32x76 2007                8x10               117756.91                           juice      
    32x59 2007                1x004               52116.229999999969462937         fish n chip
    32x59 2007                1x004               64675.1499999994542136           paper
    32x59 2007                1x004               64959.1799999998650161           markerss
    33x58 2007                8x37                44334.3858770048316717           paint
    33x58 2007                8x37                45016.1826601808917375           beer
    33x58 2007                8x37                71637.165562261602140981      fruit
    Preferred report format:
    Salesperson:2007
    sb.ssn      acctid             subaccbal    
    16x67      8x41            4001.49146206932217968      chips
    31x34       8x47           8743.079704290217582865       pickles
    32x76       8x10        117756.91                             juice
    32x59       1x004        64959.1799999998650161            markerss
    33x58       8x37          71637.165562261602140981      fruit
    2nd report format:
    sb.ssn sb.salespersonid    acctid             subaccbal    
    16x67 2007                8x41                 3840.1447436467236765             southwestern burger
                                                           3999.53760824202520962         chips and salsa
                                                           4001.49146206932217968            chips
    31x34                        8x47                 3940.7899999999823942          chipotle meal
                                                           4019.660000000200747           coke
                                                           4110.8599999999692953          speakers
                                                            8216.30999999995797296         phone
                                                           8671.1099999999555424           table cloth
                                                           8743.079704290217582865      pickles
    32x76                       8x10              117756.91                           juice      
                                                          52116.229999999969462937         fish n chip
    32x59                                               64675.1499999994542136           paper
                                                          64959.1799999998650161           markerss
    33x58                       8x37                44334.3858770048316717           paint
                                                          45016.1826601808917375           beer
                                                          71637.165562261602140981      fruit
    thanks for any help.
    Edited by: achtung on Jan 6, 2010 1:12 PM

    Hi,
    Your 'Preferred report format" is an example of a +Top-N Query+ , which you can do using the analytic ROW_NUMBER function:
    {code}
    WITH got_rnum AS
    (
         SELECT  sb.ssn,
                 sb.salespersonid,
                 sb.acctid,
                 sb.subacctbal,
                 ROW_NUMBER () OVER ( PARTITION BY sb.ssn
                                      ORDER BY     sb.subacctbal DESC
                                    ) AS rnum
         FROM    subaccbal_current sb,
                 sysusr_salesperson1,
                 product p
         WHERE   sb.subacctbal > 1
         AND     sysusr_salesperson1.usrid = sb.salespersonid
         AND     sb.prodid = p.prodid (+)
         AND     sysusr_salesperson1.usrid = 2007
    )
    SELECT   *     -- or list all columns except rnum
    FROM     got_rnum
    WHERE    rnum = 1
    ORDER BY ssn
    {code}
    This assumes that subaccbal_current.ssn is unique, and that the "last" row for any ssn is the one with the greatest subacctbal.
    Notice that the sub-query got_rnum is just your original query, with one more column in the SELECT clause.
    Output like your "2nd report format" is best done by your front-end tool. If SQL*Plus is your front end, you can use the BREAK command to suppress display of certain columns if the value is the same as the preceding row.
    If you must do it in pure SQL, you can get the ROW_NUMBER like I did above. In the main query, you could use rnum in a CASE expressions to determine whether you show the value or NULL:
    {code}
    SELECT     CASE
              WHEN rnum = 1
              THEN ssn
         END     AS ssn
    {code}
    Edited by: Frank Kulash on Jan 6, 2010 4:31 PM
    Added example of "2nd report format"
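    Putting Frank's two fragments together, a full sketch of the "2nd report format" might look like this -- ascending rnum so the blanked columns appear on the first row of each ssn. This is untested and uses the same assumed tables as above; the _display aliases are made up:

    ```sql
    WITH got_rnum AS
    (
        SELECT sb.ssn, sb.salespersonid, sb.acctid, sb.subacctbal,
               ROW_NUMBER () OVER ( PARTITION BY sb.ssn
                                    ORDER BY     sb.subacctbal ) AS rnum
        FROM   subaccbal_current sb,
               sysusr_salesperson1
        WHERE  sb.subacctbal > 1
        AND    sysusr_salesperson1.usrid = sb.salespersonid
        AND    sysusr_salesperson1.usrid = 2007
    )
    SELECT CASE WHEN rnum = 1 THEN ssn           END AS ssn_display,
           CASE WHEN rnum = 1 THEN salespersonid END AS salesperson_display,
           CASE WHEN rnum = 1 THEN acctid        END AS acctid_display,
           subacctbal
    FROM   got_rnum
    ORDER BY ssn, rnum;   -- ssn here resolves to the got_rnum column, not the blanked alias
    ```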

  • A column in 4i showing Null & Blank simultaneously but not in 10g

    Hi ,
    Presently working on migrating from Discoverer 4i to Discoverer 10g, and post-migration it seems to be successful.
    Though currently in one report we are facing the following issue:
    A column in the 4i viewer shows Null and Blank simultaneously, but it is not so in 10g. In 10g it shows NULL only.
    I tried working it out with OPTION / Sheet ---> Null, and
    even with FORMAT DATA / TEXT, giving the category as NONE (depicting the format as in the database).
    There was no difference with the above.
    Kindly advise me regarding the above.
    Regards
    Sajit.

    Hi,
    Check the aggregation methods in the 4i and in the new version

  • My contact groups are "null" when coming into my iPhone 5 and iPad

    My contact groups are "null" when coming into my iPhone 5 and iPad... anyone have any info on this issue or a similar one?

    I have the same problem. All of my groups are shown as 'null'...

  • Conditional GROUP BY clause

    Hi,
    I have four columns in a GROUP BY clause. I want to make the GROUP BY clause conditional.
    Case 1 : If one of the columns' value is null, I don't want to include it in the GROUP BY clause; the GROUP BY will then have only 3 columns.
    Case 2 : If it is not null, then GROUP BY all four columns.
    Please help me out on this.
    Thanks in advance.  

    Hi
    I think it won't matter: GROUP BY treats all the NULLs in a column as a single group, so your result won't differ.
    select dept, loc, sum(sal)
    from (
    select 'A' emp, 1 dept, 'P' loc, 100 sal from dual union all
    select 'B', 1, 'P', 200 from dual union all
    select 'C', 2, 'P', 300 from dual union all
    select 'D', 2, 'P', 400 from dual union all
    select 'E', 3, 'P', 500 from dual union all
    select 'F', 3, 'P', 600 from dual union all
    select 'G', 4, 'Q', 700 from dual union all
    select 'H', null, 'Q', 1000 from dual union all
    select 'I', null, 'Q', 2000 from dual union all
    select 'J', null, 'Q', 300 from dual)
    group by dept, loc;
    Output
    DEPT      LOC      SUM(SAL)
    1                P      300
    2                P      700
    3                P      1100
                    Q      3300
    4                Q      700
    Now by doing grouping only for NOT NULL values,
    select dept, loc, sum(sal)
    from (
    select 'A' emp, 1 dept, 'P' loc, 100 sal from dual union all
    select 'B', 1, 'P', 200 from dual union all
    select 'C', 2, 'P', 300 from dual union all
    select 'D', 2, 'P', 400 from dual union all
    select 'E', 3, 'P', 500 from dual union all
    select 'F', 3, 'P', 600 from dual union all
    select 'G', 4, 'Q', 700 from dual union all
    select 'H', null, 'Q', 1000 from dual union all
    select 'I', null, 'Q', 2000 from dual union all
    select 'J', null, 'Q', 300 from dual)
    where dept is not null          -------------- NOT NULL condition
    group by dept, loc;
    Output
    DEPT     LOC      SUM(SAL)
    1           P           300
    2           P           700
    3           P           1100
    4           Q           700
    Now by doing grouping only for NULL values,
    select dept, loc, sum(sal)
    from (
    select 'A' emp, 1 dept, 'P' loc, 100 sal from dual union all
    select 'B', 1, 'P', 200 from dual union all
    select 'C', 2, 'P', 300 from dual union all
    select 'D', 2, 'P', 400 from dual union all
    select 'E', 3, 'P', 500 from dual union all
    select 'F', 3, 'P', 600 from dual union all
    select 'G', 4, 'Q', 700 from dual union all
    select 'H', null, 'Q', 1000 from dual union all
    select 'I', null, 'Q', 2000 from dual union all
    select 'J', null, 'Q', 300 from dual)
    where dept is null               -------------- NULL condition
    group by dept, loc;
    Output
    DEPT   LOC   SUM(SAL)
           Q        3300
    Together, the two filtered outputs reproduce the unfiltered result: the NULL depts always end up in a single group, which the WHERE clause either keeps or drops.
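The behavior above can be reproduced with a quick sketch (using Python's sqlite3 so it runs anywhere; the GROUP BY semantics match Oracle here, and the table name is just an in-memory stand-in): all NULL keys collapse into one group, and a WHERE dept IS NOT NULL filter simply removes that group.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (emp TEXT, dept INTEGER, loc TEXT, sal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?, ?)", [
    ("A", 1, "P", 100), ("B", 1, "P", 200), ("C", 2, "P", 300),
    ("D", 2, "P", 400), ("E", 3, "P", 500), ("F", 3, "P", 600),
    ("G", 4, "Q", 700), ("H", None, "Q", 1000),
    ("I", None, "Q", 2000), ("J", None, "Q", 300),
])

# GROUP BY puts every NULL dept into one group...
all_groups = con.execute(
    "SELECT dept, loc, SUM(sal) FROM emp GROUP BY dept, loc").fetchall()

# ...and a WHERE filter simply keeps or drops that group
not_null = con.execute(
    "SELECT dept, loc, SUM(sal) FROM emp "
    "WHERE dept IS NOT NULL GROUP BY dept, loc").fetchall()

print(sorted(all_groups, key=str))  # includes the (None, 'Q', 3300) group
print(sorted(not_null, key=str))    # same rows minus the NULL group
```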

  • "Most major grouping in a GROUP BY"?

    Hi,
    So, I just came across this question in the 1z0-047 exam prep, which puzzled me:
    When a GROUP BY clause contains multiple columns, which grouping is the most major grouping?
    What puzzled me was that I never knew there was such a thing as a "most major grouping" in a GROUP BY clause.
    Anyway, the answer:
    Answer:
    the first column listed in the GROUP BY clause
    Can anyone explain what this means in practice? It must mean something different from your bog-standard "select sum(ordervalue) from sales group by city, country, region", because in that case I can't see how city has any more or less relevance to the query than region.
    Thanks,
    Jason

    Greg.Spall wrote:
    Consider two data sets:
    1 grouped by: Country, Gender.
    the other grouped by: Gender, Country.
    Just think about it for a minute.
    First is saying: Let's split the data up by country. Then within each country, split by gender.
    You end up with a lot of groups, then 2 groups in each.
    Second is saying, let's split it up by Gender. Then for each gender, split by country.
    You end up with 2 large groups, with a breakdown by country afterwards.
    In some ways the data is the same; in others, it is very, very different.Hi Greg.
    I don't understand.
    I knocked up a quick sample query to illustrate: group by two columns; then reverse the grouping, then union the grouping, see what we get:
    HR@ORCL> desc employees;
    Name                                                              Null?    Type
    EMPLOYEE_ID                                                       NOT NULL NUMBER(6)
    FIRST_NAME                                                                 VARCHAR2(20)
    LAST_NAME                                                         NOT NULL VARCHAR2(25)
    EMAIL                                                             NOT NULL VARCHAR2(25)
    PHONE_NUMBER                                                               VARCHAR2(20)
    HIRE_DATE                                                         NOT NULL DATE
    JOB_ID                                                            NOT NULL VARCHAR2(10)
    SALARY                                                                     NUMBER(8,2)
    COMMISSION_PCT                                                             NUMBER(2,2)
    MANAGER_ID                                                                 NUMBER(6)
    DEPARTMENT_ID                                                              NUMBER(4)
    HR@ORCL> select job_id,department_id,sum(salary) from employees group by department_id,job_id order by 1,2,3;
    JOB_ID     DEPARTMENT_ID SUM(SALARY)
    AC_ACCOUNT           110        8300
    AC_MGR               110       12008
    AD_ASST               10        4400
    AD_PRES               90       24000
    AD_VP                 90       34000
    FI_ACCOUNT           100       39600
    FI_MGR               100       12008
    HR_REP                40        6500
    IT_PROG               60       28800
    MK_MAN                20       13000
    MK_REP                20        6000
    PR_REP                70       10000
    PU_CLERK              30       13900
    PU_MAN                30       11000
    SA_MAN                80       62200
    SA_REP                80      243500
    SA_REP     NULL                 7000
    SH_CLERK              50       64300
    ST_CLERK              50       55700
    ST_MAN                50       36400
    20 rows selected.
    HR@ORCL> select job_id,department_id,sum(salary) from employees group by job_id,department_id order by 1,2,3;
    JOB_ID     DEPARTMENT_ID SUM(SALARY)
    AC_ACCOUNT           110        8300
    AC_MGR               110       12008
    AD_ASST               10        4400
    AD_PRES               90       24000
    AD_VP                 90       34000
    FI_ACCOUNT           100       39600
    FI_MGR               100       12008
    HR_REP                40        6500
    IT_PROG               60       28800
    MK_MAN                20       13000
    MK_REP                20        6000
    PR_REP                70       10000
    PU_CLERK              30       13900
    PU_MAN                30       11000
    SA_MAN                80       62200
    SA_REP                80      243500
    SA_REP     NULL                 7000
    SH_CLERK              50       64300
    ST_CLERK              50       55700
    ST_MAN                50       36400
    20 rows selected.
    And just to be sure, let's UNION them to see if we get more than 20 rows:
    HR@ORCL> select job_id,department_id,sum(salary) from employees group by department_id,job_id
      2  union
      3  select job_id,department_id,sum(salary) from employees group by job_id,department_id;
    JOB_ID     DEPARTMENT_ID SUM(SALARY)
    AC_ACCOUNT           110        8300
    AC_MGR               110       12008
    AD_ASST               10        4400
    AD_PRES               90       24000
    AD_VP                 90       34000
    FI_ACCOUNT           100       39600
    FI_MGR               100       12008
    HR_REP                40        6500
    IT_PROG               60       28800
    MK_MAN                20       13000
    MK_REP                20        6000
    PR_REP                70       10000
    PU_CLERK              30       13900
    PU_MAN                30       11000
    SA_MAN                80       62200
    SA_REP                80      243500
    SA_REP     NULL                 7000
    SH_CLERK              50       64300
    ST_CLERK              50       55700
    ST_MAN                50       36400
    20 rows selected.
    So it looks to me as expected: GROUP BY A,B = GROUP BY B,A.
    Or are you suggesting something else?
    Thanks,
    Jason
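Jason's UNION experiment can be condensed into a minimal check (sqlite3 standing in for Oracle, with a made-up sales table): GROUP BY a, b and GROUP BY b, a define exactly the same set of groups, so the result sets are identical. The column order only matters for extensions like ROLLUP, or for how you might read the breakdown, not for plain GROUP BY.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (country TEXT, gender TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("US", "F", 10), ("US", "M", 20), ("UK", "F", 30),
                 ("UK", "M", 40), ("US", "F", 50)])

# Same aggregate, opposite GROUP BY column order
by_country_gender = set(con.execute(
    "SELECT country, gender, SUM(amount) FROM sales GROUP BY country, gender"))
by_gender_country = set(con.execute(
    "SELECT country, gender, SUM(amount) FROM sales GROUP BY gender, country"))

# As sets of rows, the two groupings are indistinguishable
print(by_country_gender == by_gender_country)
```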

  • SQL is running SLOW in group by

    Dear Experts,
    I have a query which fetches around 2,603,675 records. When it runs without the GROUP BY clause it runs fine, but as soon as GROUP BY comes into the picture it's dying. I have seen the plan, which is correct and hitting the right indexes in the right order. We have Oracle 11g. Could someone please help me get rid of this? Below is the explain plan. Cost, Bytes and Cardinality look OK to me.
    Plan
    SELECT STATEMENT  HINT: ALL_ROWSCost: 1,363  Bytes: 304  Cardinality: 1                                                                   
         31 HASH GROUP BY  Cost: 1,363  Bytes: 304  Cardinality: 1                                                              
              30 NESTED LOOPS                                                         
                   28 NESTED LOOPS  Cost: 1,362  Bytes: 304  Cardinality: 1                                                    
                        25 NESTED LOOPS OUTER  Cost: 1,360  Bytes: 183  Cardinality: 1                                               
                             22 NESTED LOOPS  Cost: 1,359  Bytes: 168  Cardinality: 1                                          
                                  19 NESTED LOOPS  Cost: 1,357  Bytes: 121  Cardinality: 1                                     
                                       16 NESTED LOOPS  Cost: 1,356  Bytes: 107  Cardinality: 1                                
                                            13 NESTED LOOPS  Cost: 1,356  Bytes: 100  Cardinality: 1                           
                                                 6 NESTED LOOPS  Cost: 54  Bytes: 2,209  Cardinality: 47                      
                                                      2 TABLE ACCESS BY INDEX ROWID TABLE FDM.WH_SOURCES_D Cost: 2  Bytes: 16  Cardinality: 1                 
                                                           1 INDEX UNIQUE SCAN INDEX (UNIQUE) FDM.WSRC_UK1 Cost: 1  Cardinality: 1            
                                                      5 INLIST ITERATOR                 
                                                           4 TABLE ACCESS BY INDEX ROWID TABLE FDM.WH_ACCOUNTS_D Cost: 52  Bytes: 1,457  Cardinality: 47            
                                                                3 INDEX UNIQUE SCAN INDEX (UNIQUE) FDM.WACC_UK1 Cost: 50  Cardinality: 47       
                                                 12 PARTITION RANGE SINGLE  Cost: 1,356  Bytes: 53  Cardinality: 1  Partition #: 15  Partitions accessed #KEY(AP)                    
                                                      11 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FDM.WH_ATM_BALANCES_F Cost: 1,356  Bytes: 53  Cardinality: 1  Partition #: 15  Partitions accessed #KEY(AP)               
                                                           10 BITMAP CONVERSION TO ROWIDS            
                                                                9 BITMAP AND       
                                                                     7 BITMAP INDEX SINGLE VALUE INDEX (BITMAP) FDM.WABF_BM6 Partition #: 15  Partitions accessed #KEY(AP)
                                                                     8 BITMAP INDEX SINGLE VALUE INDEX (BITMAP) FDM.WABF_BM10 Partition #: 15  Partitions accessed #KEY(AP)
                                            15 TABLE ACCESS BY INDEX ROWID TABLE FDM.T_SDM_GLPRODUCT Cost: 0  Bytes: 7  Cardinality: 1                           
                                                 14 INDEX UNIQUE SCAN INDEX (UNIQUE) FDM.TSGP_PK Cost: 0  Cardinality: 1                      
                                       18 TABLE ACCESS BY INDEX ROWID TABLE FDM.WH_PRODUCTS_D Cost: 1  Bytes: 14  Cardinality: 1                                
                                            17 INDEX UNIQUE SCAN INDEX (UNIQUE) FDM.WPRD_PK Cost: 0  Cardinality: 1                           
                                  21 TABLE ACCESS BY INDEX ROWID TABLE FDM.WH_COMMON_TRADES_D Cost: 2  Bytes: 47  Cardinality: 1                                     
                                       20 INDEX UNIQUE SCAN INDEX (UNIQUE) FDM.WCTD_PK Cost: 1  Cardinality: 1                                
                             24 TABLE ACCESS BY INDEX ROWID TABLE FDM.T_SDM_SECURITYINSTRUMENT Cost: 1  Bytes: 15  Cardinality: 1                                          
                                  23 INDEX UNIQUE SCAN INDEX (UNIQUE) FDM.TSSI_PK Cost: 0  Cardinality: 1                                     
                        27 PARTITION RANGE SINGLE  Cost: 1  Cardinality: 1  Partition #: 29  Partitions accessed #KEY(AP)                                             
                             26 INDEX RANGE SCAN INDEX (UNIQUE) FDM.WBKS_UK2 Cost: 1  Cardinality: 1  Partition #: 29  Partitions accessed #KEY(AP)                                        
                    29 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FDM.WH_BOOKS_D Cost: 2  Bytes: 121  Cardinality: 1  Partition #: 29  Partitions accessed #1
    Edited by: BluShadow on 21-Oct-2011 08:57
    added {noformat}{noformat} tags. Please read {message:id=9360002} and learn to do this yourself.

    user535789 wrote:
    Dear Experts,
    I have a query which fetches around 2,603,675 records. When it runs without the GROUP BY clause it runs fine, but as soon as GROUP BY comes into the picture it's dying. I have seen the plan, which is correct and hitting the right indexes in the right order. We have Oracle 11g. Could someone please help me get rid of this? Below is the explain plan. Cost, Bytes and Cardinality look OK to me.
    I think you have a common misunderstanding there.
    Your original query did run fine fetching 2.6 million records?
    Usually such claims are only made when the query has just returned the first few records (the default 50 for tools like SQL Developer or Toad).
    How long did it take till you saw the very last record?
    If you add a group by then essentially you tell the database it has to do a sort operation over the whole dataset. Because it might be that the very last record has some information that would influence the first output row that is shown on the screen. Therefore the database has to read all the records including the very last one (no. 2,603,675) before it can show you one single output line. Same goes for all kind of sort operations, but the most typical are "Order by", "distinct", "group by", "union".
    My guess is that both queries take approximately the same time. You just don't know how slow the original query was, because you saw some output rows while the query was still running.
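The point above can be sketched in a few lines of Python (a toy row generator, not Oracle internals): a plain scan can stream its first row immediately, but an aggregation must consume every input row before any group total is final, which is why the first row of a GROUP BY query appears so much later.

```python
def scan(rows):
    # A plain scan streams: the first row is available right away
    for r in rows:
        yield r

def group_sum(rows):
    # An aggregation must read ALL rows before any total is correct,
    # because the very last row could still change any group's sum
    totals = {}
    for key, amount in rows:
        totals[key] = totals.get(key, 0) + amount
    return totals

rows = [("a", 1), ("b", 2), ("a", 3), ("b", 4)]
first = next(scan(iter(rows)))    # cheap: one row read
totals = group_sum(iter(rows))    # full pass over the data set
print(first, totals)
```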

  • GROUP and MAX Query

    Hi
    I have two tables that store following information
    CREATE TABLE T_FEED (FEED_ID NUMBER, GRP_NUM NUMBER);
    CREATE TABLE T_FEED_RCV (FEED_ID NUMBER, RCV_DT DATE);
    INSERT INTO T_FEED VALUES (1, 1);
    INSERT INTO T_FEED VALUES (2, 1);
    INSERT INTO T_FEED VALUES (3, 2);
    INSERT INTO T_FEED VALUES (4, NULL);
    INSERT INTO T_FEED VALUES (5, NULL);
    INSERT INTO T_FEED_RCV VALUES (2, TO_DATE('01-MAY-2009', 'DD-MON-YYYY'));
    INSERT INTO T_FEED_RCV VALUES (3, TO_DATE('01-FEB-2009', 'DD-MON-YYYY'));
    INSERT INTO T_FEED_RCV VALUES (4, TO_DATE('12-MAY-2009', 'DD-MON-YYYY'));
    COMMIT;
    I join these tables using the following query to return all the feeds and check when each feed was received:
    SELECT
    F.FEED_ID,
    F.GRP_NUM,
    FR.RCV_DT
    FROM T_FEED F
    LEFT OUTER JOIN T_FEED_RCV FR
    ON F.FEED_ID = FR.FEED_ID
    ORDER BY GRP_NUM, RCV_DT DESC;
    Output
    FEED_ID   GRP_NUM   RCV_DT
          1         1
          2         1   5/1/2009
          3         2   2/1/2009
          5
          4             5/12/2009
    Actually I want the maximum date of when we received the feed. Grp_Num tells which feeds are grouped together. NULL grp_num means they are not grouped so treat them as individual group. In the example - Feeds 1 and 2 are in one group and any one of the feed is required. Feed 3, 4 and 5 are individual groups and all the three are required.
    I need a single query that should return the maximum date for the feeds. For this example the result should be NULL because, out of feeds 1 and 2, the max date is 5/1/2009; for feed 3 the max date is 2/1/2009; for feed 4 it is 5/12/2009; and for feed 5 it is NULL. Since one of the required feeds is null, the result should be null.
    DELETE FROM T_FEED;
    DELETE FROM T_FEED_RCV;
    COMMIT;
    INSERT INTO T_FEED VALUES (1, 1);
    INSERT INTO T_FEED VALUES (2, 1);
    INSERT INTO T_FEED VALUES (3, NULL);
    INSERT INTO T_FEED VALUES (4, NULL);
    INSERT INTO T_FEED_RCV VALUES (2, TO_DATE('01-MAY-2009', 'DD-MON-YYYY'));
    INSERT INTO T_FEED_RCV VALUES (3, TO_DATE('01-FEB-2009', 'DD-MON-YYYY'));
    INSERT INTO T_FEED_RCV VALUES (4, TO_DATE('12-MAY-2009', 'DD-MON-YYYY'));
    COMMIT;
    For above inserts, the result should be for feed 1 and 2 - 5/1/2009, feed 3 - 2/1/2009 and feed 4 - 5/12/2009. So the max of these dates is 5/12/2009.
    I tried using the MAX function grouped by GRP_NUM and also tried DENSE_RANK, but was unable to resolve the issue. I am not sure how I can use the same query to return a not-null value for each group and null (if any) for the feeds that don't belong to any group. I'd appreciate it if anyone can help me.
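Before the SQL answers below, the requested logic can be spelled out in plain Python against the posted data (sqlite3 mirrors the DDL; ISO date strings are used so they compare correctly, which is an assumption of this sketch): take MAX(rcv_dt) per group, treating each NULL-grp_num feed as its own group, then report NULL if any group was never received, otherwise the latest of the group maxima.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t_feed (feed_id INTEGER, grp_num INTEGER)")
con.execute("CREATE TABLE t_feed_rcv (feed_id INTEGER, rcv_dt TEXT)")
con.executemany("INSERT INTO t_feed VALUES (?, ?)",
                [(1, 1), (2, 1), (3, 2), (4, None), (5, None)])
con.executemany("INSERT INTO t_feed_rcv VALUES (?, ?)",
                [(2, "2009-05-01"), (3, "2009-02-01"), (4, "2009-05-12")])

group_max = {}
for feed_id, grp_num, rcv_dt in con.execute(
        "SELECT f.feed_id, f.grp_num, fr.rcv_dt FROM t_feed f "
        "LEFT OUTER JOIN t_feed_rcv fr ON f.feed_id = fr.feed_id"):
    # a feed with NULL grp_num is its own one-feed group
    key = ("grp", grp_num) if grp_num is not None else ("feed", feed_id)
    group_max.setdefault(key, None)
    if rcv_dt is not None and (group_max[key] is None or rcv_dt > group_max[key]):
        group_max[key] = rcv_dt

# NULL if any required group was never received, else the latest group maximum
result = None if None in group_max.values() else max(group_max.values())
print(result)  # None here, because feed 5 was never received
```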

    Hi,
    Kuul13 wrote:
    Thanks Frank!
    Appreciate your time and solution. I tweaked your earlier solution, which was cleaner and simpler, and built the following query to resolve the problem.
    SELECT * FROM (
    SELECT NVL (F.GRP_NUM, F.CARR_ID || F.FEED_ID || TO_CHAR(EFF_DT, 'MMDDYYYY')) AS GRP_ID
    ,MAX (FR.RCV_DT) AS MAX_DT
    FROM T_FEED F
    LEFT OUTER JOIN T_FEED_RCV FR ON F.FEED_ID = FR.FEED_ID
    GROUP BY NVL (F.GRP_NUM, F.CARR_ID || F.FEED_ID || TO_CHAR(EFF_DT, 'MMDDYYYY'))
    ORDER BY MAX_DT DESC NULLS FIRST)
    WHERE ROWNUM=1;
    I hope there are no hidden issues with this query compared to the later one you provided.
    Actually, I can see 4 issues with this. I admit that some of them are unlikely, but why take any chances?
    (1) The first argument to NVL is a NUMBER, the second (being the result of ||) is a VARCHAR2. That means one of them will be implicitly converted to the type of the other. This is just the kind of thing that behaves differently in different versions of Oracle, so it may work fine for a year or two, and then, when you change to another version, mysteriously quit working. When you have to convert from one type of data to another, always do an explicit conversion, using TO_CHAR (for example).
    (2)
    F.CARR_ID || F.FEED_ID || TO_CHAR(EFF_DT, 'MMDDYYYY')
    will produce a key like '123405202009'. grp_num is a NUMBER with no restriction on the number of digits, so it could conceivably be 123405202009. The made-up grp_ids must never be the same as any real grp_num.
    (3) The combination (carr_id, feed_id, eff_dt) is unique, but using TO_CHAR(EFF_DT, 'MMDDYYYY') assumes that the combination (carr_id, feed_id, TRUNC (eff_dt)) is unique. Even if eff_dt is always entered as (say) midnight (00:00:00) now, you may decide to start using the time of day sometime in the future. What are the chances that you'll remember to change this query when you do? Not very likely. If multiple rows from the same day are relatively rare, this is the kind of error that could go on for months before you even realize that there is an error.
    (4) Say you have this data in t_feed:
    carr_id   feed_id   eff_dt        grp_num
    1         234       20-May-2009   NULL
    12        34        20-May-2009   NULL
    123       4         20-May-2009   NULL
    All of these rows will produce the same grp_id: 123405202009.
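This collision is easy to demonstrate outside the database (a toy Python sketch with the same hypothetical values): concatenating the key parts without a delimiter lets distinct (carr_id, feed_id, eff_dt) rows collapse onto one string, while a ':' separator keeps every component boundary unambiguous.

```python
def naive_key(carr_id, feed_id, eff_dt):
    # no delimiter: '1'+'234' and '12'+'34' both start '1234...'
    return str(carr_id) + str(feed_id) + eff_dt

def safe_key(carr_id, feed_id, eff_dt):
    # ':' delimiter keeps each component's boundary unambiguous
    return ":".join((str(carr_id), str(feed_id), eff_dt))

rows = [(1, 234, "05202009"), (12, 34, "05202009"), (123, 4, "05202009")]
naive = {naive_key(*r) for r in rows}
safe = {safe_key(*r) for r in rows}
print(len(naive), len(safe))  # three distinct rows collapse to one naive key
```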
    Using NVL, as you are doing, allows you to get by with just one sub-query, which is nice.
    You can do that and still address all the problems above:
    SELECT  *
    FROM     (
         SELECT  NVL2 ( f.grp_num
                      , 'A' || TO_CHAR (f.grp_num)
                      , 'B' || TO_CHAR (f.carr_id) || ':' ||
                               TO_CHAR (f.feed_id) || ':' ||
                               TO_CHAR ( f.eff_dt
                                       , 'MMDDYYYYHH24MISS'
                                       )
                      ) AS grp_id
         ,       MAX (fr.rcv_dt) AS max_dt
         FROM              t_feed      f
         LEFT OUTER JOIN   t_feed_rcv  fr   ON  f.feed_id = fr.feed_id
         GROUP BY  NVL2 ( f.grp_num
                        , 'A' || TO_CHAR (f.grp_num)
                        , 'B' || TO_CHAR (f.carr_id) || ':' ||
                                 TO_CHAR (f.feed_id) || ':' ||
                                 TO_CHAR ( f.eff_dt
                                         , 'MMDDYYYYHH24MISS'
                                         )
                        )
         ORDER BY  max_dt   DESC   NULLS FIRST
         )
    WHERE     ROWNUM = 1;
    I would still use two sub-queries, adding one to compute grp_id, so we don't have to repeat the NVL2 expression. I would also use a WITH clause rather than in-line views.
    Do you find it easier to read the query above, or the simpler query you posted in your last message?
    Please make things easy on yourself and the people who want to help you. Always format your code so that the way the code looks on the screen makes it clear what the code is doing.
    In particular, the formatting should make clear
    (a) where each clause (SELECT, FROM, WHERE, ...) of each query begins
    (b) where sub-queries begin and end
    (c) what each argument to functions is
    (d) the scope of parentheses
    When you post formatted text on this site, type these 6 characters:
    before and after the formatted text, to preserve spacing.
    The way you post the DDL (CREATE TABLE ...)  and DML (INSERT ...) statements is great: I wish more people were as helpful as you.
    There's no need to format the DDL and DML.  (If you want to, then go ahead: it does help a little.)
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              
