Simple Query (GROUP BY?) Question

I have a Verity search on a website of mine, and today I added
a logging feature. When the user submits a search, I write the
search criteria to a table with two columns: one column stores the
search criteria (a string), and the other a date/time stamp.
I'm writing a report that returns the searches performed on a
given day, which is easy enough thanks to the date/time stamp.
Let's say the following searches have been run today...
- superman movie
- concert tickets
- concert tickets
- used car stereos
Obviously I don't want the report page to list "concert
tickets" twice; instead, it should be shown as follows:
- superman movie (1)
- concert tickets (2)
- used car stereos (1)
Furthermore, the results should be ranked in order of popularity,
like so:
- concert tickets (2)
- superman movie (1)
- used car stereos (1)
The "GROUP BY" statement in SQL can help me group together
identical searches just fine, but determining the number of
attempts is another story. Here is what I came up with for now, but
I can't order the results by the number of searches like this:

SELECT search_criteria, COUNT(search_criteria) AS thecount
FROM table_name
GROUP BY search_criteria
ORDER BY COUNT(search_criteria) DESC
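
For what it's worth, most database engines accept either the aggregate expression or its column alias in ORDER BY, so a query of this shape should return the ranked list directly. A minimal sketch against the same table; the date/time column name (search_date) and the example date filter are assumptions, not names taken from the actual table:

SELECT search_criteria, COUNT(*) AS thecount
FROM table_name
WHERE search_date >= '2006-06-01'   -- hypothetical date/time column and value for "a given day"
GROUP BY search_criteria
ORDER BY thecount DESC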

Similar Messages

  • Query Group By Question

    I have tons of entries that are each a tiny bit different, but I need to group them together somehow.
    For example: 
    Table: BlogPost
    Field: ByLine
    Records
    1 Press
    2 Press & Staff
    3 Press & Company
    4 Press & News
    5 Press & Radio
    6 Test
    7 Test & Press
    8 Test & Ipad
    I want to group all of these together so that it returns just Press.  There are also different scenarios where I have to do this same thing, but with other items.  Any ideas?

    Hello,
    Please also review the following statement and see if it meets your requirement:
    create table BlogPost
    (
        ByLine INT,
        Records VARCHAR(32)
    )
    INSERT INTO BlogPost VALUES(1, 'Press'), (2, 'Press & Staff'), (3, 'Press & Company'), (4, 'Press & News'), (5, 'Press & Radio'),
    (6, 'Test'), (7, 'Test & Press'), (8, 'Test & Ipad')
    select byline,
           records = case when PATINDEX('%&%', records) = 0 then records
                          else stuff(records, PATINDEX('%&%', records), len(records) - PATINDEX('%&%', records) + 1, '')
                     end
    from BlogPost
    Regards,
    Fanny Liu
    TechNet Community Support
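    If the goal is a single row per leading segment with a count of how many ByLines fall under it, the same CASE expression can be used as the GROUP BY key. A minimal sketch building on the statement above (the rtrim only drops the trailing space that STUFF leaves behind); this is an illustration, not something tested against your real data:
    select case when PATINDEX('%&%', records) = 0 then records
                else rtrim(stuff(records, PATINDEX('%&%', records), len(records) - PATINDEX('%&%', records) + 1, ''))
           end as leading_byline,
           count(*) as byline_count
    from BlogPost
    group by case when PATINDEX('%&%', records) = 0 then records
                  else rtrim(stuff(records, PATINDEX('%&%', records), len(records) - PATINDEX('%&%', records) + 1, ''))
             end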

  • How can I delete a row using a simple query?

    SZSLIFE_SPRIDEN_PIDM     SZSLIFE_SGBSTDN_TERM_CODE_EFF     SZSLIFE_SLRRASG_BLDG_CODE     SZSLIFE_SLRRASG_ROOM_NUMBER     SZSLIFE_SLRRASG_BEGIN_DATE     SZSLIFE_SLRRASG_END_DATE
    48547     199890                    
    48547     199990                    
    48547     199990     BLU     205     09/03/1999     12/23/1999
    48547     200010                    
    48547     200010     BLU     205     01/25/2000     05/25/2000
    48547     200090                    
    48547     200090     MOR     406     09/03/2000     12/23/2000
    48547     200110                    
    48547     200110     MOR     406     01/25/2001     05/25/2001
    48547     200190                    
    48547     200210                    
    48547     200290                    
    48547     200310                    
    48547     200390                    
    48547     200410                    
    48547     200610                    
    Here is what is probably a simple question for some of you; I cannot get it to work. I need to delete all the rows that are duplicates, like row #2: rows with the same SZSLIFE_SGBSTDN_TERM_CODE_EFF but with no
    SZSLIFE_SLRRASG_BLDG_CODE and SZSLIFE_SLRRASG_ROOM_NUMBER.
    I need to write code that counts SZSLIFE_SGBSTDN_TERM_CODE_EFF and, if the same value appears twice,
    deletes the row without SZSLIFE_SLRRASG_BLDG_CODE and SZSLIFE_SLRRASG_ROOM_NUMBER.
    SZSLIFE_SLRRASG_BLDG_CODE cannot be declared NOT NULL, because I do inserts into this table and need to be able to insert null values.
    How can I use a simple query to delete all the duplicate records without bldg_code and room number…
    Here is the table description
    SZSLIFE_SPRIDEN_PIDM NUMBER(8)
    SZSLIFE_SPRIDEN_ID VARCHAR2(10)
    SZSLIFE_SPRIDEN_LAST_NAME VARCHAR2(60)
    SZSLIFE_SPRIDEN_FIRST_NAME VARCHAR2(60)
    SZSLIFE_SPRIDEN_MI VARCHAR2(15)
    SZSLIFE_SGBSTDN_TERM_CODE_EFF VARCHAR2(6)
    SZSLIFE_SGBSTDN_STST_CODE VARCHAR2(2)
    SZSLIFE_STVSTST_DESC VARCHAR2(30)
    SZSLIFE_SGBSTDN_STYP_CODE VARCHAR2(2)
    SZSLIFE_STVSTYP_DESC VARCHAR2(30)
    SZSLIFE_SGBSTDN_LEVL_CODE VARCHAR2(2)
    SZSLIFE_STVLEVL_DESC VARCHAR2(30)
    SZSLIFE_SGBSTDN_RESD_CODE VARCHAR2(10)
    SZSLIFE_STVRESD_DESC VARCHAR2(40)
    SZSLIFE_SLRRASG_BLDG_CODE VARCHAR2(10)
    SZSLIFE_SLRRASG_ROOM_NUMBER VARCHAR2(10)
    SZSLIFE_SLRRASG_BEGIN_DATE VARCHAR2(12)
    SZSLIFE_SLRRASG_END_DATE VARCHAR2(12)
    SLRRASG_ASCD_CODE VARCHAR2(2)
    SLRRASG_ROLL_IND VARCHAR2(2)
    I will appreciate any help!

    Thank you very much Sandeep, this works!
    DELETE SZSLIFE_TEMP2
    WHERE SZSLIFE_SGBSTDN_TERM_CODE_EFF IN
          (SELECT SZSLIFE_SGBSTDN_TERM_CODE_EFF
           FROM SZSLIFE_TEMP2
           GROUP BY SZSLIFE_SGBSTDN_TERM_CODE_EFF
           HAVING COUNT(*) > 1)
    AND SZSLIFE_SLRRASG_BLDG_CODE = ' '
    /
    4 rows deleted.
    The only thing here is that SZSLIFE_SLRRASG_BLDG_CODE is not actually stored as NULL, so I cannot use
    where SZSLIFE_SLRRASG_BLDG_CODE is null
    Here is how those two columns are defined:
    SZSLIFE_SLRRASG_BLDG_CODE VARCHAR2(10)
    SZSLIFE_SLRRASG_ROOM_NUMBER VARCHAR2(10)
    So, my question is: will it be safe to use SZSLIFE_SLRRASG_BLDG_CODE = ' ' ?
    Again, it works; it deleted the rows that I wanted...
    Thank you very much!!!
    Rogelio
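    On the question of whether SZSLIFE_SLRRASG_BLDG_CODE = ' ' is safe: it only matches rows where the column holds exactly one space. If the column can hold NULL, a single space, or several spaces, a TRIM-based predicate covers all of those cases, because in Oracle TRIM of an all-blank or NULL value returns NULL. A rough sketch only, not tested against this schema:
    DELETE FROM SZSLIFE_TEMP2
    WHERE SZSLIFE_SGBSTDN_TERM_CODE_EFF IN
          (SELECT SZSLIFE_SGBSTDN_TERM_CODE_EFF
           FROM SZSLIFE_TEMP2
           GROUP BY SZSLIFE_SGBSTDN_TERM_CODE_EFF
           HAVING COUNT(*) > 1)
    AND TRIM(SZSLIFE_SLRRASG_BLDG_CODE) IS NULL       -- true for NULL and for blank-only values
    AND TRIM(SZSLIFE_SLRRASG_ROOM_NUMBER) IS NULL;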

  • How to write a simple query.

    I have a table with the data shown below. Now, I want to write a simple query that lists each project and the count of distinct effective dates for which data exists.
    Sample data:
    Project Task Effective Date (xx_proj_task_data)
    101 T1 01-Jan-2008
    101 T1 01-Feb-2008
    101 T1 01-Mar-2008
    101 T2 01-Jan-2008
    101 T2 01-Apr-2008
    101 T3 01-Apr-2008
    102 T1 01-Jan-2008
    102 T1 01-Feb-2008
    102 T2 01-Apr-2008
    103 T1 01-Jan-2008
    103 T1 01-Feb-2008
    103 T1 01-Mar-2008
    103 T1 01-Apr-2008
    103 T2 01-May-2008
    103 T3 01-Jun-2008
    103 T1 01-Jan-2008
    103 T1 01-Aug-2008
    103 T2 01-Apr-2008
    Output Reqd:
    Project Count(Distinct Effective Dates)
    101 4
    102 3
    103 7
    I can write a query that says:
    select project_id, count(1)
    from (select distinct project_id, effective_date
    from xx_proj_task_data) x
    group by project_id;
    But is there a way I can achieve the same result, avoiding the inner query (x), with just a simple query?
    Thanks!

    Try the query below:
    select project_id
    , count(distinct effective_date)
    from xx_proj_task_data
    group by project_id;
    --venkata

  • Error in the simple Query

    Dear Experts,
    I am not able to execute this simple query:
    Select T1.JobID , T1.BudgetValue,T1.ActualValue FROM [dbo].[Enprise_JobCost_ActualBudgetView] T1 WHERE T1.TransType = '[%0]'
    Regards

    Hello,
    View - A view, in simple terms, is a subset of the data presented as a virtual table. It can be used to retrieve data from the tables, and to insert, update or delete from the tables. The results of using a view are not permanently stored in the database.
    Stored Procedure - A stored procedure is a group of SQL statements which can be stored in the database and shared over the network with different users.
    http://www.geekinterview.com/question_details/65914
    Better make a UDT for your requirement.
    Thanks
    Manvendra Singh Niranjan

  • Simple Query working on 10G and not working on 11gR2 after upgrade

    Hi Folks,
    This is the first time I am posting a query in this forum.
    I have a small issue which is preventing the UAT sign-off.
    A simple query works fine on 10.2.0.4, but after the upgrade to 11.2.0.1 it errors out.
    10.2.0.4:
    =====
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=1;
    COUNT(*)
    1
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=00001;
    COUNT(*)
    1
    SQL> select ATTRIBUTE1 FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=1;
    ATTRIBUTE1
    00001
    11.2.0.1:
    =====
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=1
    ERROR at line 1:
    ORA-01722: invalid number
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=00001
    ERROR at line 1:
    ORA-01722: invalid number
    SQL> select ATTRIBUTE1 FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1='1';
    no rows selected
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1='00001';
    COUNT(*)
    1
    SQL> select ATTRIBUTE1 FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1='00001';
    ATTRIBUTE1
    00001
    ++++++++++++++++++++++++++++++++++++++++++++++
    SQL > desc APPS.HZ_PARTIES
    Name Type
    ======== ======
    ATTRIBUTE1 VARCHAR2(150)
    ++++++++++++++++++++++++++++++++++++++++++++++
    Changes:
    Recently I upgraded the DB from 10.2.0.4 to 11.2.0.1.
    Questions:
    1. If the type of that column is VARCHAR2, why does it work in 10.2.0.4 and not in 11.2.0.1?
    2. After the upgrade I analyzed the tables with "analyze table" for all AP, AR, GL, HR, BEN, and APPS schemas. Does running analyze table have any impact here?
    Please provide answers to the above two questions, or point me to a document that explains this. Based on the answer, the client will sign off today.
    Thanks,
    P Kumar

    WhiteHat wrote:
    the issue has already been identified: in Oracle versions prior to 11, there was an implicit conversion of numbers to characters. Your database has a character field which you are attempting to compare to a number.
    i.e. the string '000001' is not in any way equivalent to the number 1, but Oracle 10 converts '000001' to a number because you are asking it to compare to the number you have provided.
    Version 11 doesn't do this anymore (and rightly so).
    The issue is with the bad code design. You can either use characters in the predicate (where field = 'parameter') or do a conversion of the field prior to comparing (where TO_NUMBER(field) = parameter).
    I would suggest that you fix your code and don't assume that '000001' = 1.
    I don't think that the above is completely correct, and a simple demonstration will show why. First, a simple table on Oracle Database 10.2.0.4:
    CREATE TABLE T1(C1 VARCHAR2(20));
    INSERT INTO T1 VALUES ('1');
    INSERT INTO T1 VALUES ('0001');
    COMMIT;
    A select from the above table, relying on implicit data type conversion:
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    C1
    1
    0001
    Technically, the second row should not have been returned as an exact match. Why was it returned? Let's take a look at the actual execution plan:
    SELECT
      *
    FROM
      TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    SQL_ID  g6gvbpsgj1dvf, child number 0
    SELECT   * FROM   T1 WHERE   C1=1
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   |     2 |    24 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(TO_NUMBER("C1")=1)
    Note
       - dynamic sampling used for this statement
    Notice that the VARCHAR2 column was converted to a NUMBER, so if there was any data in that column that could not be converted to a number (or NULL), we should receive an error (unless the bad rows are already removed due to another predicate in the WHERE clause). For example:
    INSERT INTO T1 VALUES ('.0001.');
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    SQL> SELECT
      2    *
      3  FROM
      4    T1
      5  WHERE
      6    C1=1;
    ERROR:
    ORA-01722: invalid number
    Now the same test on Oracle Database 11.1.0.7:
    CREATE TABLE T1(C1 VARCHAR2(20));
    INSERT INTO T1 VALUES ('1');
    INSERT INTO T1 VALUES ('0001');
    COMMIT;
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    C1
    1
    0001
    SELECT
      *
    FROM
      TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    SQL_ID  g6gvbpsgj1dvf, child number 0
    SELECT   * FROM   T1 WHERE   C1=1
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   |     2 |    24 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(TO_NUMBER("C1")=1)
    Note
       - dynamic sampling used for this statement
    INSERT INTO T1 VALUES ('.0001.');
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    SQL> SELECT
      2    *
      3  FROM
      4    T1
      5  WHERE
      6    C1=1;
    ERROR:
    ORA-01722: invalid number
    As you can see, exactly the same actual execution plan, and the same end result.
    The OP needs to determine if non-numeric data now exists in the column. Was the database characterset possibly changed during/after the upgrade?
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
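    As a follow-up to that last point, one quick way to look for ATTRIBUTE1 values that contain anything other than digits is a TRANSLATE filter. A rough sketch using the column and category names from the post; it deliberately ignores signs and decimal points, so treat any rows it returns as candidates to inspect rather than proof:
    SELECT ATTRIBUTE1
    FROM APPS.HZ_PARTIES
    WHERE ATTRIBUTE_CATEGORY = 'PROPERTY'
    AND TRANSLATE(ATTRIBUTE1, 'x0123456789', 'x') IS NOT NULL;   -- something is left after stripping the digits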

  • Improving a simple Query

    Following is a simple query. What I want to know: I have added an outer select to get d.name, which is the description of region_code. Can I get the whole result with a single select?
    SELECT t.region_code,d.name, t.emp_contr
    FROM
    (SELECT c.br_region_fo_code as Region_Code,
    SUM(c.employer_contribution) AS emp_contr
    FROM core_business.cb_contr_emp_pmt_slip c
    GROUP BY c.br_region_fo_code ) t,
    general_information.cb_region_fo d
    WHERE t.region_code = d.region_fo_code;

    Boneist wrote:
    malhi wrote:
    Following is a simple query. What I want to know: I have added an outer select to get d.name, which is the description of region_code. Can I get the whole result with a single select?
    SELECT t.region_code,d.name, t.emp_contr
    FROM
    (SELECT c.br_region_fo_code as Region_Code,
    SUM(c.employer_contribution) AS emp_contr
    FROM core_business.cb_contr_emp_pmt_slip c
    GROUP BY c.br_region_fo_code ) t,
    general_information.cb_region_fo d
    WHERE t.region_code = d.region_fo_code;
    To be honest, I wouldn't bother rewriting the above query - it looks like it's filtering early (although Oracle could choose to rewrite it so that it does the join first, I guess), so you're reducing the number of rows that the outer query has to join to. That means less work. If it is being rewritten, I'd stick a no_merge hint on the subquery, to tell Oracle to do the grouping first before joining.
    You could rewrite the above query as:
    SELECT c.br_region_fo_code AS region_code,
    d.name,
    SUM(c.employer_contribution) AS emp_contr
    FROM   core_business.cb_contr_emp_pmt_slip c,
    general_information.cb_region_fo d
    WHERE  c.br_region_fo_code = d.region_fo_code
    GROUP BY c.br_region_fo_code, d.name;
    But whether Oracle will filter early or not is another matter. You would have to test both runs.
    I believe that Jonathan Lewis had a demonstration of execution plans that showed Oracle transforming queries to "push" the GROUP BY clause prior to a join when sufficient constraints were in place to allow that and there was a performance benefit in doing so. I'd certainly be interested in seeing whether this was being done. The optimisation was really aimed at reducing the size of the group by key columns.
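    For completeness, the hinted version mentioned above would look roughly like this; a sketch only, and whether it actually helps has to be tested against real data:
    SELECT t.region_code, d.name, t.emp_contr
    FROM  (SELECT /*+ no_merge */
                  c.br_region_fo_code AS region_code,
                  SUM(c.employer_contribution) AS emp_contr
           FROM   core_business.cb_contr_emp_pmt_slip c
           GROUP  BY c.br_region_fo_code) t,
          general_information.cb_region_fo d
    WHERE t.region_code = d.region_fo_code;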

  • Simple query takes time to run

    Hi,
    I have a simple query which takes about 20 minutes to run. Here is the TKPROF for it:
      SELECT
        SY2.QBAC0,
        sum(decode(SALES_ORDER.SDCRCD,'USD', SALES_ORDER.SDAEXP,'CAD', SALES_ORDER.SDAEXP /1.0452))
      FROM
        JDE.F5542SY2  SY2,
        JDE.F42119  SALES_ORDER,
        JDE.F0116  SHIP_TO,
        JDE.F5542SY1  SY1,
       JDE.F4101  PRODUCT_INFO
    WHERE
        ( SHIP_TO.ALAN8=SALES_ORDER.SDSHAN  )
        AND  ( SY1.QANRAC=SY2.QBNRAC and SY1.QAOTCD=SY2.QBOTCD  )
        AND  ( PRODUCT_INFO.IMITM=SALES_ORDER.SDITM  )
        AND  ( SY2.QBSHAN=SALES_ORDER.SDSHAN  )
        AND  ( SALES_ORDER.SDLNTY NOT IN ('H ','HC','I ')  )
        AND  ( PRODUCT_INFO.IMSRP1 Not In ('   ','000','689')  )
        AND  ( SALES_ORDER.SDDCTO IN  ('CO','CR','SA','SF','SG','SP','SM','SO','SL','SR')  )
        AND  (
        ( SY1.QACTR=SHIP_TO.ALCTR  )
    AND  ( PRODUCT_INFO.IMSRP1=SY1.QASRP1  ) )
      GROUP BY
      SY2.QBAC0
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.07       0.07          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       10     92.40     929.16     798689     838484          0         131
    total       12     92.48     929.24     798689     838484          0         131
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 62 
    Rows     Row Source Operation
        131  SORT GROUP BY
    3535506   HASH JOIN 
    4026100    HASH JOIN 
        922     TABLE ACCESS FULL OBJ#(187309)
    3454198     HASH JOIN 
      80065      INDEX FAST FULL SCAN OBJ#(30492) (object id 30492)
    3489670      HASH JOIN 
      65192       INDEX FAST FULL SCAN OBJ#(30457) (object id 30457)
    3489936       PARTITION RANGE ALL PARTITION: 1 9
    3489936        TABLE ACCESS FULL OBJ#(30530) PARTITION: 1 9
      97152    TABLE ACCESS FULL OBJ#(187308)
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.07       0.07          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch       10     92.40     929.16     798689     838484          0         131
    total       13     92.48     929.24     798689     838484          0         131
    Misses in library cache during parse: 1
    Kindly suggest how to resolve this...
    The OS is Windows and it's a 9i DB...
    Thanks

    > ... you want to get rid of the IN statements.
    They prevent Oracle from using the index.
    SQL> create table mytable (id,num,description)
      2  as
      3   select level
      4        , case level
      5          when 0 then 0
      6          when 1 then 1
      7          else 2
      8          end
      9        , 'description ' || to_char(level)
    10     from dual
    11  connect by level <= 10000
    12  /
    Table created.
    SQL> create index i1 on mytable(num)
      2  /
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'mytable')
    PL/SQL procedure successfully completed.
    SQL> set autotrace on explain
    SQL> select id
      2       , num
      3       , description
      4    from mytable
      5   where num in (0,1)
      6  /
                                        ID                                    NUM DESCRIPTION
                                         1                                      1 description 1
    1 row selected.
    Execution Plan
    Plan hash value: 2172953059
    | Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |         |  5001 |   112K|     2   (0)| 00:00:01 |
    |   1 |  INLIST ITERATOR             |         |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| MYTABLE |  5001 |   112K|     2   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN | I1      |  5001 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("NUM"=0 OR "NUM"=1)Regards,
    Rob.

  • Two or more ProductIDs will be acquired by the same CustomerID, by the same ShipVia, on the same day of the week of the shipped date. I want a simple query for this.

    Consider this situation:
    two or more ProductIDs will be acquired by the same CustomerID, by the same ShipVia, on the same day of the week of the shipped date. I want a simple query for this.
    my tables are  from northwind:
    [orders] = OrderID, CustomerID, EmployeeID, OrderDate, RequiredDate, ShippedDate, ShipVia, Freight, ShipName, ShipAddress, ShipCity, ShipRegion, ShipPostalCode, ShipCountry.
    [orders details] = OrderID, ProductID, UnitPrice, Quantity, Discount.
    I tried the following, but it is not exact; it gives the wrong result.
    select pd.CustomerID, pd.ProductID, pd.no_of_time_purchased, sd.ShipVia, sd.same_ship_count, shipped_day from
    (select ProductID, o.CustomerID, COUNT(productid) as no_of_time_purchased
    from orders o join [Order Details] od on o.OrderID=od.OrderID group by ProductID, o.CustomerID
    having count(od.ProductID) > 1) pd
    join
    (select customerid, shipvia, count(shipvia) as same_ship_count, DATENAME(DW, ShippedDate) as shipped_day from orders
    group by customerid, ShipVia, ShippedDate having COUNT(ShipVia) > 1) sd
    on sd.CustomerID = pd.CustomerID

    Hi,
    I think I have a solution that will at least give you a clue how to go about it. I have simplified the tables you mentioned and created them as temporary tables on my side, with some fake data to test with. I have included the generation of these temporary tables for your review.
    In my example I have included:
    1. A customer which has purchased the same product on the same day, using the same ship 3 times,
    2. Another example the same as the first but the third purchase was on a different day
    3. Another example the same as the first but the third purchase was a different product
    4. Another example the same as the first but the third purchase was using a different "ShipVia".
    You should be able to see that by grouping on all of the columns that you wish to return, you should not need to perform any subselects.
    Please let me know if I have missed any requirements.
    Hope this helps:
    CREATE TABLE #ORDERS
    (
     OrderID INT,
     CustomerID INT,
     OrderDate DATETIME,
     ShipVia VARCHAR(5)
    )
    CREATE TABLE #ORDERS_DETAILS
    (
     OrderID INT,
     ProductID INT
    )
    INSERT INTO #ORDERS
    VALUES
    (1, 1, GETDATE(), 'ABC'),
    (2, 1, GETDATE(), 'ABC'),
    (3, 1, GETDATE(), 'ABC'),
    (4, 2, GETDATE() - 4, 'DEF'),
    (5, 2, GETDATE() - 4, 'DEF'),
    (6, 2, GETDATE() - 5, 'DEF'),
    (7, 3, GETDATE() - 10, 'GHI'),
    (8, 3, GETDATE() - 10, 'GHI'),
    (9, 3, GETDATE() - 10, 'GHI'),
    (10, 4, GETDATE() - 10, 'JKL'),
    (11, 4, GETDATE() - 10, 'JKL'),
    (12, 4, GETDATE() - 10, 'MNO')
    INSERT INTO #ORDERS_DETAILS
    VALUES
    (1, 1),
    (2, 1),
    (3, 1),
    (4, 2),
    (5, 2),
    (6, 2),
    (7, 3),
    (8, 3),
    (9, 4),
    (10, 5),
    (11, 5),
    (12, 5)
    SELECT * FROM #ORDERS
    SELECT * FROM #ORDERS_DETAILS
    SELECT
     O.CustomerID,
     OD.ProductID,
     O.ShipVia,
     COUNT(O.ShipVia),
     DATENAME(DW, O.OrderDate) AS [Shipped Day]
    FROM #ORDERS O
    JOIN #ORDERS_DETAILS OD ON O.orderID = OD.OrderID
    GROUP BY OD.ProductID, O.CustomerID, O.ShipVia, DATENAME(DW, O.OrderDate) HAVING COUNT(OD.ProductID) > 1
    DROP TABLE #ORDERS
    DROP TABLE #ORDERS_DETAILS

  • 11.2.2.4.0 - Problem with temporary space in simple query

    ttVersion
    TimesTen Release 11.2.2.4.0 (64 bit Linux/x86_64) (timesten:53396) 2012-09-24T08:28:05Z
    Instance admin: root
    Instance home directory: /opt/TimesTen/timesten
    World accessible
    Daemon home directory: /var/TimesTen/timesten
    I get "TT0802: Database temporary space exhausted" error in simple query with small data amount; Timesten try to allocate *40000312* bytes
    describe adm.peer
    Table ADM.PEER:
    Name Null Type
    PEER_ID NOT NULL TT_SMALLINT
    CLUSTER_ID NOT NULL TT_TINYINT
    DIALECT NOT NULL TT_INTEGER
    HOST NOT NULL TT_VARCHAR(256 BYTE)
    REALM NOT NULL TT_VARCHAR(256 BYTE)
    ADDRESS TT_VARCHAR(256 BYTE)
    PORT NOT NULL TT_INTEGER
    PROTOCOL NOT NULL TT_INTEGER
    AUTO_CONNECT NOT NULL TT_TINYINT
    ENABLED NOT NULL TT_TINYINT
    PRIORITY NOT NULL TT_TINYINT
    MANDATORY NOT NULL TT_TINYINT
    TSTAMP BINARY(8)
    1 rows selected
    describe adm.session
    Table ADM.SESSION:
    Name Null Type
    SESSION_ID NOT NULL TT_VARCHAR(64 BYTE) inline
    OBJ_ID NOT NULL TT_BIGINT
    PR_OBJ_ID NOT NULL TT_BIGINT
    SUBSCRIBER_ID NOT NULL TT_VARCHAR(32 BYTE) inline
    IP NOT NULL TT_VARCHAR(15 BYTE) inline
    IPV6_PREFIX TT_VARCHAR(39 BYTE) inline
    IPV6_PREFIX_LEN NOT NULL TT_TINYINT
    CREATE_TIME NOT NULL TT_TIMESTAMP
    UPDATE_TIME NOT NULL TT_TIMESTAMP
    RULES_SET_ID NOT NULL TT_BIGINT
    PEER_ID NOT NULL TT_SMALLINT
    MY_PEER_ID NOT NULL TT_SMALLINT
    PROFILE_HASHC NOT NULL TT_BIGINT
    FLAGS NOT NULL TT_INTEGER
    QOS_POLICY_NAME NOT NULL TT_VARCHAR(32 BYTE) inline
    BSID NOT NULL TT_BIGINT
    CONGESTION_FLAG NOT NULL TT_TINYINT
    SERVICE_CATEGORY_ID TT_VARCHAR(32 BYTE) inline
    EVENT_CAUSE NOT NULL TT_TINYINT
    EVENT_TIME TT_TIMESTAMP
    TSTAMP BINARY(8)
    1 rows selected
    select * from adm.peer;
    PEER_ID CLUSTER_ID DIALECT HOST REALM ADDRESS PORT PROTOCOL AUTO_CONNECT ENABLED PRIORITY MANDATORY TSTAMP
    21 2 0 ddf1.server.com diameter.realm ddf1.server.com 3868 6 1 1 0 1 (null)
    22 2 0 ddf2.server.com diameter.realm ddf2.server.com 3868 6 1 1 1 1 (null)
    101 233 0 peer_101 testik.com peer_101.testik.com 3886 0 0 1 101 0 (null)
    102 233 0 peer_102 testik.com peer_102.testik.com 3886 0 0 1 102 0 (null)
    1 1 0 vr-t500.testik.com diameter.realm vr-t500.testik.com 3868 6 1 1 0 1 (null)
    5 rows selected
    select * from adm.session;
    SESSION_ID OBJ_ID PR_OBJ_ID SUBSCRIBER_ID IP IPV6_PREFIX IPV6_PREFIX_LEN CREATE_TIME UPDATE_TIME RULES_SET_ID PEER_ID MY_PEER_ID PROFILE_HASHC FLAGS QOS_POLICY_NAME BSID CONGESTION_FLAG SERVICE_CATEGORY_ID EVENT_CAUSE EVENT_TIME TSTAMP
    TEST_SESSION 13300000000020027 0 TEST_SUBSCRIBER 94.25.209.27 0 2012-10-18 12:56:07.155381000 2012-10-18 12:56:07.155381000 1 101 1 0 0 0 0 DEFAULT 0 (null) (null)
    TEST_SESSION2 13300000000020028 13300000000020027 TEST_SUBSCRIBER 94.25.209.27 0 2012-10-18 12:56:07.155687000 2012-10-18 12:56:07.155687000 1 102 1 0 4 0 0 DEFAULT 0 (null) (null)
    2 rows selected
    SELECT p.address, count(*) as session_count from session s, peer p where p.peer_id = s.peer_id group by p.address
    failed with:
    TT0802: Database temporary space exhausted
    dssize
    PERM_ALLOCATED_SIZE:     307200.0
    PERM_IN_USE_SIZE:     61763.0
    PERM_IN_USE_HIGH_WATER:     69393.0
    TEMP_ALLOCATED_SIZE:     37888.0
    TEMP_IN_USE_SIZE:     13494.0
    TEMP_IN_USE_HIGH_WATER:     21307.0
    This is additional error info when this code exuted inside C code:
    [TimesTen][TimesTen 11.2.2.4.0 ODBC Driver][TimesTen]TT0802: Database temporary space exhausted -- file "blk.c", lineno 3477, procedure "sbBlkAlloc"
    ODBC Error/Warning = S1000, Additional Error/Warning = 802
    [TimesTen][TimesTen 11.2.2.4.0 ODBC Driver][TimesTen]TT6221: Temporary data partition free space insufficient to allocate *40000312* bytes of memory -- file "blk.c", lineno 3477, procedure "sbBlkAlloc"
    ODBC Error/Warning = S1000, Additional Error/Warning = 6221
    Edited by: Vladimir Romanov on 18.10.2012 13:13
    Edited by: Vladimir Romanov on 18.10.2012 13:51

    This may well be
    Bug 14634954 - SELECT WITH GROUP BY REQUESTS LARGE TEMP MEMORY GETS TT0802 / TT6221
    The bug is fixed in 11.2.2.4.1, which is hopefully due before the end of October. Can you run your test on 11.2.1 as well? The problem should not reproduce there, as it is specific to 11.2.2.

  • Query and cfchart question

    I have this simple query:
    SELECT top 10
      SupplierName, supplierNumber,
      SUM(TOTALS) AS TOTAL
      FROM tableName
      GROUP BY SupplierName, supplierNumber
    It will give me a simple output like:
    company1     12345     20
    company2     98881     5
    company3     76512     18
    What I need to do is plot the query in pie chart, with a drilldown report for each pie slice (supplier name) :
    <cfchart
             format="flash"
       chartwidth="350"
       chartheight="450"
       title='"Top 10 Supplier Volume "'
             pieslicestyle="sliced"
             labelformat="number"
             show3d="yes"
       url="../reports/supplierVolumeReport.cfm?supplierName=$itemlabel$&supplierNumber=#qryBuye rVolume.supplier#">
          <cfchartSeries type="pie"
                query="queryName"
                itemcolumn="supplierName"
                valuecolumn="Total"           
                datalabelstyle="value"
                colorlist="##CE1126,##3399CC,##CC5500,##444444,##00CC33,##7C96A1,##DAD9A0">
          </cfchartseries>
          </cfchart>
    Everything works fine. But when I go to the drilldown report, I cannot really search by supplierName because some companies might have the same name, but the supplier numbers are unique, so I have to search by supplier number. That is why I am passing that value also.
    But when I click on the pie slice for company3, it passes the supplier name company3, but the supplier number is always the first one, 12345, regardless of which slice I click on. I have a cfoutput in the report and it shows company3 and 12345.
    How do I get the corresponding supplier number for the pie slice supplier name that I click on ? I need the corresponding supplier number so I can use it to search the query in the drilldown report.
    I tried to combine the name and number in the query and have the chart display 12345 - Company 1 in the legend, but they do not want the number to display, just the name in the legend.

    On your graph, pass the company name and total as a list with a delimiter not likely to appear in the company name.  Like this:
    supplierName=$itemlabel$¿$value$
    On your drill down page, start with a query to get the supplier number.
    select suppliernumber, count(*)
    from yourtable
    where suppliername = '#ListFirst(url.suppliername, "¿")#'
    group by suppliernumber
    having count(*) = #ListLast(url.suppliername, "¿")#

  • Simple query -- tuning

    Hi gurus,
    I have a very simple query
    Select * from emp
    where deptno = 10
    When this query is executed against 10,000 records it is a little slow, but when it is executed against 10,00,000 records it takes a huge amount of time.
    The client is complaining about the time it takes. I really do wonder how I can tune this query. Please help.
    Regards

    Hi guys,
    I really appreciate, from the bottom of my heart, the pains you have all taken in answering my question. Well, that question was asked in an interview; I don't know whether it is a real problem faced by the interviewer or his client.
    He asked me about the query "select * from emp where deptno = 10", where there is already an index on deptno. When it is tested on a very large table of 10,00,000 rows, the client asks for the query to be tuned. How can I achieve that?
    Like some of you, I tried giving different answers, but he wasn't satisfied, so I thought of asking and sharing it here so that I can get some different answers.
    One of the gurus asked me whether these are the same EMP and DEPT tables that we normally use (the demo tables). Yes, they are the same tables.
    Now, any suggestions please?
    Regards
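    For reference, the sort of checks that usually come up with this kind of question are confirming whether the index on deptno is actually being used and whether the optimizer statistics are current. A rough sketch of those checks (standard Oracle tooling, not necessarily what the interviewer was after):
    EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMP', cascade => TRUE);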

  • Simple logic group to operate prior to custom import script?

    Hi all,
    Thanks for taking the time to read my question. I will gladly mark this thread as helpful or answered if you can help me. I'm a novice at FDM, so please bear with me.
    I have a custom import script that assigns ICP None to a specific account (overriding any ICP detail). However, now I need the ICP detail for that account in a second, statistical account. I set up a simple logic group to create the logic account that I can map to the statistical account, but then realized that the import script runs prior to the logic group, so I lose all ICP detail in the logic account as well.
    Is there a way to run the logic group prior to import script or is there a better way to accomplish what I'm trying to do?
    I'm not sure how critical this is but I'm using FDM v11.1.1.3.01 adapter 11x-G5-C
    Edited by: user4591089 on Aug 17, 2011 2:10 PM
    Edited by: user4591089 on Aug 17, 2011 2:50 PM

    Do the following:
    1) Remove the custom import script.
    2) Create a complex logic account and override the ICP dimension in the Group By Column with the value [ICP None]. This will then be what is displayed on the import screen for this logic account.
    3) Map the original source as the statistical account and the logic account as appropriate
    Edited by: SH on Aug 18, 2011 9:48 AM

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
    Any help on this 'phenomenon' would be appreciated.
    Things we've done:
    Checked regional settings, ODBC driver settings, and MS Access settings (as in Tools->Options); we have the latest XP and Office service packs, and we re-linked all Access tables on both the slow and fast machines independently.

    Primarily, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
    Although this might not be the reason, my PC had Oracle 9iR2 installed with Administrative Tools, whereas user machines had the same thing installed but using the Runtime installation. For some reason, my PC did not have 'bind date' etc. as an option in the workarounds, but user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but user queries were not.
    When we set the workaround checkbox accordingly, the queries then run as expected (fast).
    Once again,
    Thanks

  • Multi-query group above report creates more pages

    Hi,
    I have a multi-query group-above report (paper only). The parent group creates 5 rows (subframes), all on the same page, but then creates 4 more IDENTICAL pages!?
    In the end I have 5 repeating frames and 5 pages.
    If I set Maximum Records per Page to 1, I have 5 pages (IDENTICAL) with the first frame only...
    any idea?
    cheers
    Matteo

    Hello,
    You will have to create a counter that tells you the number of students (a summary column, function: count, reset on: course) and create a format trigger on the heading that hides it when the number of students is 0.
    Regards,
    The Oracle Reports Team --pw
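    For reference, such a format trigger is a small PL/SQL function on the heading object that returns FALSE to suppress it. A sketch only: Reports generates the function name itself, and CS_STUDENT_COUNT is a placeholder for whatever the summary column is actually named:
    function heading_format_trigger return boolean is
    begin
      if :CS_STUDENT_COUNT = 0 then
        return (FALSE);   -- hide the heading when the count is 0
      end if;
      return (TRUE);
    end;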
