Query optimization... please provide your thoughts.

Could someone suggest improvements to my query?
SELECT 'B'||','||GC.GEO||','||GC.PROD||','||','||','||','||','||','||
       SUM(GC.FACT_AMT)||','||GC.CURRENCY||','||GC.DUE_PERD||','||GC.MEASR content
FROM  (SELECT FD.GEO, FD.PROD, PROFT_CTR, FD.MEASR,
              (FACT_AMT_16 - FACT_AMT_15) FACT_AMT,
              DECODE(FD.ISO_CRNCY_CODE_CHAR, NULL, 'USD') CURRENCY,
              FD.DUE_PERD
       FROM   FACT FD
       WHERE  FD.PROD IN (SELECT DISTINCT DSEND
                          FROM   PROD
                          WHERE  STRCT_CODE = '4518'
                          AND    CTRL_PERD = (SELECT MAX(CTRL_PERD) FROM PROD WHERE STRCT_CODE = '722')
                          AND    STRCT_CODE = '4518'
                          AND    PARNT = '000000011'
                          AND    DSEND_LVL = 8)
       AND    FD.GEO = '4156'
       AND    FD.MEASR IN ('84858', '54858', '45685', '12345')
       AND    DUE_PERD = (SELECT TRUNC(LAST_DAY(ADD_MONTHS(SYSDATE, -2))) + 1 FROM DUAL)
       AND    FD.TIME_PERD IN (SELECT TIME_PERD
                               FROM   TIME_PERD
                               WHERE  TIME_PERD_TYPE_CODE = 'MONTHS'
                               AND    END_DATE = ADD_MONTHS(TRUNC(LAST_DAY(SYSDATE)), -1))) GC
GROUP BY GC.GEO, GC.PROD, GC.CURRENCY, GC.DUE_PERD, GC.MEASR
ORDER BY GC.GEO, GC.PROD, GC.CURRENCY, GC.DUE_PERD, GC.MEASR;
PLAN_TABLE_OUTPUT
| Id  | Operation                             |  Name                  | Rows  | Bytes | Cost  | Pstart| Pstop |
|   0 | SELECT STATEMENT                      |                        |     8 |   624 |  1715 |       |       |
|   1 |  SORT GROUP BY                        |                        |     8 |   624 |  1715 |       |       |
|   2 |   NESTED LOOPS                        |                        |    81 |  6318 |  1702 |       |       |
|   3 |    NESTED LOOPS                       |                        |    81 |  4779 |  1702 |       |       |
|   4 |     VIEW                              | VW_NSO_1               |     4 |    40 |   380 |       |       |
|   5 |      SORT UNIQUE                      |                        |     4 |   192 |   380 |       |       |
|   6 |       PARTITION RANGE SINGLE          |                        |       |       |       |   KEY |   KEY |
|   7 |        TABLE ACCESS FULL              | PROD                   |     4 |   192 |   367 |   KEY |   KEY |
|   8 |         SORT AGGREGATE                |                        |     1 |    13 |       |       |       |
|   9 |          PARTITION RANGE ITERATOR     |                        |       |       |       |    56 |    62 |
|  10 |           INDEX FAST FULL SCAN        | PROD_IDX2              |  5963K|    73M|   936 |    56 |    62 |
|  11 |     PARTITION RANGE ITERATOR          |                        |       |       |       |     1 |     2 |
|  12 |      TABLE ACCESS BY LOCAL INDEX ROWID| FACT                   |    20 |   980 |  1702 |     1 |     2 |
|  13 |       BITMAP CONVERSION TO ROWIDS     |                        |       |       |       |       |       |
|  14 |        BITMAP AND                     |                        |       |       |       |       |       |
|  15 |         BITMAP INDEX SINGLE VALUE     | FACT_PROD              |       |       |       |     1 |     2 |
|  16 |         BITMAP INDEX SINGLE VALUE     | FACT_GEO               |       |       |       |     1 |     2 |
|  17 |       TABLE ACCESS FULL               | DUAL                   |  8168 |       |     4 |       |       |
|  18 |    INDEX UNIQUE SCAN                  | TIME_PERD_IDX2         |     1 |    19 |       |       |       |

Thanks,
Bhagat

I was day dreaming, it seems.
Now I'm awake; I've changed my query as below:
SELECT 'B'||','||GC.GEO||','||GC.PROD||','||','||','||','||','||','||
       SUM(GC.FACT_AMT)||','||GC.CURRENCY||','||GC.DUE_PERD||','||GC.MEASR content
FROM  (SELECT FD.GEO, FD.PROD, PROFT_CTR, FD.MEASR,
              (FACT_AMT_16 - FACT_AMT_15) FACT_AMT,
              DECODE(FD.ISO_CRNCY_CODE_CHAR, NULL, 'USD') CURRENCY,
              FD.DUE_PERD
       FROM   FACT FD
       WHERE  EXISTS (SELECT DISTINCT DSEND
                      FROM   PROD
                      WHERE  DSEND_ID = FD.PROD_ID
                      AND    STRCT_CODE = '4518'
                      AND    CTRL_PERD = '12-SEP-06'
                      AND    STRCT_CODE = '4518'
                      AND    PARNT = '000000011'
                      AND    DSEND_LVL = 8)
       AND    FD.GEO = '4156'
       AND    FD.MEASR IN ('84858', '54858', '45685', '12345')
       AND    DUE_PERD = TRUNC(LAST_DAY(ADD_MONTHS(SYSDATE, -2))) + 1
       AND    EXISTS (SELECT TIME_PERD
                      FROM   TIME_PERD
                      WHERE  TIME_PERD_TYPE_CODE = 'MONTHS'
                      AND    END_DATE = ADD_MONTHS(TRUNC(LAST_DAY(SYSDATE)), -1))) GC
GROUP BY GC.GEO, GC.PROD, GC.CURRENCY, GC.DUE_PERD, GC.MEASR
ORDER BY GC.GEO, GC.PROD, GC.CURRENCY, GC.DUE_PERD, GC.MEASR;
and the plan is below:
PLAN_TABLE_OUTPUT
| Id  | Operation                            |  Name                  | Rows  | Bytes | Cost  | Pstart| Pstop |
|   0 | SELECT STATEMENT                     |                        | 10066 |   422K| 98146 |       |       |
|   1 |  SORT GROUP BY                       |                        | 10066 |   422K| 98146 |       |       |
|   2 |   FILTER                             |                        |       |       |       |       |       |
|   3 |    PARTITION RANGE ITERATOR          |                        |       |       |       |     1 |     2 |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID| FACT                   |   370K|    15M| 95453 |     1 |     2 |
|   5 |      BITMAP CONVERSION TO ROWIDS     |                        |       |       |       |       |       |
|   6 |       BITMAP AND                     |                        |       |       |       |       |       |
|   7 |        BITMAP INDEX SINGLE VALUE     | FACT_GEO               |       |       |       |     1 |     2 |
|   8 |        BITMAP OR                     |                        |       |       |       |       |       |
|   9 |         BITMAP INDEX SINGLE VALUE    | FACT_MEASR             |       |       |       |     1 |     2 |
|  10 |         BITMAP INDEX SINGLE VALUE    | FACT_MEASR             |       |       |       |     1 |     2 |
|  11 |         BITMAP INDEX SINGLE VALUE    | FACT_MEASR             |       |       |       |     1 |     2 |
|  12 |         BITMAP INDEX SINGLE VALUE    | FACT_MEASR             |       |       |       |     1 |     2 |
|  13 |         BITMAP INDEX SINGLE VALUE    | FACT_MEASR             |       |       |       |     1 |     2 |
|  14 |         BITMAP INDEX SINGLE VALUE    | FACT_MEASR             |       |       |       |     1 |     2 |
|  15 |         BITMAP INDEX SINGLE VALUE    | FACT_MEASR             |       |       |       |     1 |     2 |
|  16 |         BITMAP INDEX SINGLE VALUE    | FACT_MEASR             |       |       |       |     1 |     2 |
|  17 |    PARTITION RANGE SINGLE            |                        |       |       |       |   KEY |   KEY |
|  18 |     TABLE ACCESS BY LOCAL INDEX ROWID| PROD_ASSOC_DNORM       |     4 |   152 | 17954 |   KEY |   KEY |
|  19 |      INDEX RANGE SCAN                | PROD_ASSOC_DNORM_IDX3  | 22849 |       |  1406 |   KEY |   KEY |
|  20 |    INDEX RANGE SCAN                  | TIME_PERD_IDX2         |     1 |    13 |     2 |       |       |
Thanks,
Bhagat
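
Not a recommendation, just for comparison: below is a condensed, untested sketch of the same filters with both checks written as correlated EXISTS subqueries (column names are taken from the two posts; the 'B'||... concatenation and PROFT_CTR are omitted). The rewritten query above has no correlation on TIME_PERD, so its second EXISTS only tests whether such a period exists at all.

SELECT FD.GEO, FD.PROD, FD.MEASR, FD.DUE_PERD,
       DECODE(FD.ISO_CRNCY_CODE_CHAR, NULL, 'USD') CURRENCY,
       SUM(FD.FACT_AMT_16 - FD.FACT_AMT_15) FACT_AMT
FROM   FACT FD
WHERE  FD.GEO = '4156'
AND    FD.MEASR IN ('84858', '54858', '45685', '12345')
AND    FD.DUE_PERD = TRUNC(LAST_DAY(ADD_MONTHS(SYSDATE, -2))) + 1
AND    EXISTS (SELECT NULL                    -- product hierarchy filter, correlated on PROD_ID
               FROM   PROD P
               WHERE  P.DSEND_ID   = FD.PROD_ID
               AND    P.STRCT_CODE = '4518'
               AND    P.CTRL_PERD  = '12-SEP-06'
               AND    P.PARNT      = '000000011'
               AND    P.DSEND_LVL  = 8)
AND    EXISTS (SELECT NULL                    -- month check, correlated on TIME_PERD this time
               FROM   TIME_PERD TP
               WHERE  TP.TIME_PERD           = FD.TIME_PERD
               AND    TP.TIME_PERD_TYPE_CODE = 'MONTHS'
               AND    TP.END_DATE            = ADD_MONTHS(TRUNC(LAST_DAY(SYSDATE)), -1))
GROUP BY FD.GEO, FD.PROD, FD.MEASR, FD.DUE_PERD, DECODE(FD.ISO_CRNCY_CODE_CHAR, NULL, 'USD')
ORDER BY FD.GEO, FD.PROD, FD.MEASR, FD.DUE_PERD;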
Message was edited by:
        Bugs

Similar Messages

  • What is query optimization and how to do it.

    Hi
    What is query optimization?
    Can anyone provide a link so that I can read and learn the techniques?
    Thanks
    Elias Maliackal

    This is an excellent place to start: When your query takes too long ...

  • How to do query Optimization ,plz share any documents with real examples

    Hi All,
    could anyone please provide me some information on how I can do query optimization in Oracle using the third-party tool SQL Developer?
    I am working on the Oracle 10g version; please share with me any documents or PPTs like that.
    Thanks
    Krupa

    879534 wrote:
    Hi All,
    could anyone please provide me some information on how I can do query optimization in Oracle using the third-party tool SQL Developer?
    SQL Developer is Oracle's development tool, not a third party tool.
    Questions about using SQL Developer should go in the SQL Developer forum:
    SQL Developer
    I'll move the question there for you.

  • The query processor ran out of stack space during query optimization. Please simplify the query

    Can you suggest what I should do in this case?
    I have one table that is a master table. I reference this table in more than 300 tables, which means the foreign key of its primary key exists in 300+ tables.
    Because of this I get the following error when deleting any row,
    regardless of whether the data exists in a referencing table or not.
    The error I am getting is
    "The query processor ran out of stack space during query optimization. Please simplify the query"
    Can you suggest what I should do to avoid this error, because I am unable to delete the entry?
    Apart from that, I am also getting a performance problem; is it due to such a huge number of FKs on the table?
    Please advise me on the following points:
    1. Is this the worst way to handle it? If yes, then please suggest a solution.
    2. If it is a correct way, then what should I do when I get an error while deleting a record?
    3. Is it right to create a foreign key on each table where I save data from this master? If not, then how do I manage integrity?
    4. What do people do in a huge database when they want to create a foreign key for a primary key?
    5. Can you suggest how DBAs handle this in big databases with a huge number of tables?

    The most common reason for getting such an error is having more than 253 foreign key constraints on a table.
    The max limit is documented here:
    http://msdn.microsoft.com/en-us/library/ms143432(SQL.90).aspx
    "Although a table can contain an unlimited number of FOREIGN KEY constraints, the recommended maximum is 253. Depending on the hardware configuration hosting SQL Server, specifying additional foreign key constraints may be expensive for the query optimizer to process."
    If you are on 32 bit, then you might want to move to 64 bit to get a slightly bigger stack space, but ultimately having 300+ FKs is not something that will work in the long run. (A catalog query for counting the referencing constraints is sketched below.)
    Balmukund Lakhani | Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    This posting is provided "AS IS" with no warranties, and confers no rights.
    My Blog |
    Team Blog | @Twitter
    Author: SQL Server 2012 AlwaysOn -
    Paperback, Kindle
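    As a rough check of how many constraints are involved (SQL Server; the table name dbo.MasterTable is just a placeholder for your master table):
    -- Count incoming FOREIGN KEY constraints that reference the master table
    SELECT COUNT(*) AS referencing_fks
    FROM   sys.foreign_keys
    WHERE  referenced_object_id = OBJECT_ID(N'dbo.MasterTable');
    -- List the referencing tables, useful if the count is near or above the documented 253
    SELECT OBJECT_NAME(parent_object_id) AS referencing_table, name AS fk_name
    FROM   sys.foreign_keys
    WHERE  referenced_object_id = OBJECT_ID(N'dbo.MasterTable')
    ORDER  BY referencing_table;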

  • Need Query Optimization Info

    Hi Everyone:
    I have a user on my site who's trying to find information on database query optimization. She's willing to pay $3 to someone who will give her some very basic information. I figured those of you who read this forum would have something handy you could just provide to her. It's not the information she's paying for, but the service of someone retrieving it for her. Would someone take a look? You can find her request here:
    http://www.yepic.com/index.php?module=ContReq&hdnRequestID=8
    The one problem is you have to register on the site before you can post your response. You just need to create a username, email and password, though. It's very easy. I'd appreciate someone helping one of my users out.

    . . . she's trying to avoid the search tax
    Search tax... never heard it put that way. I guess you are trying to avoid the "search engine" tax by advertising your site here? Maybe someone will help you out by answering the request from Corey for Search Engine Optimization.

  • Oracle 11g on Linux : Query Optimization issue

    Hi guru,
    I am facing a query optimization problem in a GROUP BY query.
    Table (10 million Records)
    Product(ProductId number,ProductName varchar(100),CategoryId VARCHAR2(38),SubCategoryId VARCHAR2(38))
    Index
    create index idxCategory on Product (CategoryId,SubCategoryId)
    Query1: To find the product count for every CategoryId and SubCategoryId
    select CategoryId,SubCategoryId,count(*) from Product group by CategoryId,SubCategoryId
    The above query is not using the index idxCategory and is doing a table scan, which is very costly.
    When I run Query2: select count(*) from Product group by CategoryId,SubCategoryId
    then it properly uses the index idxCategory and is very fast.
    Even when I specify a hint in Query1, it does not use the hint.
    Can anybody suggest why Oracle is not using the index in Query1, and what I should do so that Query1 will use the index?
    Thanks in advance.

    user644199 wrote:
    I am facing a query optimization problem in a GROUP BY query.
    Query1: To find the product count for every CategoryId and SubCategoryId
    select CategoryId,SubCategoryId,count(*) from Product group by CategoryId,SubCategoryId
    The above query is not using the index idxCategory and is doing a table scan, which is very costly.
    When I run Query2: select count(*) from Product group by CategoryId,SubCategoryId
    then it properly uses the index idxCategory and is very fast.
    Even when I specify a hint in Query1, it does not use the hint.
    Can anybody suggest why Oracle is not using the index in Query1, and what I should do so that Query1 will use the index?
    The most obvious reason that the table needs to be visited would be that the columns "CategoryId" / "SubCategoryId" can be NULL, but then this should apply to both queries. You could try the following to check the NULL issue:
    select CategoryId,SubCategoryId,count(*) from Product where CategoryId is not null and SubCategoryId is not null group by CategoryId,SubCategoryId
    Does this query use the index?
    Can you show us the hint you've used to force the index usage, and the EXPLAIN PLAN output of the two queries including the "Predicate Information" section? Use DBMS_XPLAN.DISPLAY to get a proper output (a minimal example follows below), and wrap the output in the forum's code tags before and after when posting here so it keeps a fixed font. Use the "Quote" button in the message editor to see how I used the tags here.
    Are the above queries representative of the actual queries used, or did you omit some predicates etc. for simplicity?
    By the way, with VARCHAR2(38) and "...Id" names, are these columns storing number values?
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
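    For reference, the plan output Randolf asks for can be captured like this (standard EXPLAIN PLAN / DBMS_XPLAN usage; the statement is just Query1 from the question):
    EXPLAIN PLAN FOR
    SELECT CategoryId, SubCategoryId, COUNT(*)
    FROM   Product
    GROUP  BY CategoryId, SubCategoryId;
    -- With no arguments, DISPLAY reads the last explained statement from PLAN_TABLE
    -- and includes the "Predicate Information" section mentioned above.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);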

  • Creating a query that provides a default transaction type for those transactions not categorized?

    How could I create a query that provides a default transaction type for those transactions not categorized?
    So assuming I have:
    * Transactions table (with transactions)
    * Categories table
    * transactions_categories table - allows multiple categories to be allocated (with a percentage)
     - tranactionID
     - categoryID
     - percentageAllocation
    * Usage is such that only non-personal categories have been applied throughout the data, so there are a lot of transactions with no categories applied
    Aim:
    * Want to create a query that produces a list of all the allocated amounts, so it would include as columns: transaction.tDate, transaction.tTitle, categories.name, allocatedAmount (calculated from percentage * transaction amount)
    BUT:
    * How could I include in the query entries that cover all transactions that haven't been allocated, under a default category "personal", where the allocated amount would be 100% of the transaction value?
    * And also (if it were possible), for transactions that have been categorized but not for the complete transaction value (say only 50% was allocated to a category), how to cover this off too?

    To default the value of the category (a LEFT JOIN version is sketched below):
    select IIf(IsNull(Category),"Personal",Category) as Category, IIf(IsNull(Category),"100%",PercentageAllocation) as PercentageAllocation from [yourtable]
    What do you want to put as the values for these ones:
    And also (if it were possible), for transactions that have been categorized but not for the complete transaction value (say only 50% was allocated to a category), how to cover this off too.
    Fouad Roumieh
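    A dialect-neutral sketch of the same idea using outer joins (table and column names follow the question; t.amount, the Transactions key column, and percentages stored as fractions are assumptions):
    SELECT t.tDate,
           t.tTitle,
           COALESCE(c.name, 'Personal')                      AS category_name,
           -- uncategorized rows default to 100% of the transaction amount
           t.amount * COALESCE(tc.percentageAllocation, 1.0) AS allocatedAmount
    FROM   Transactions t
    LEFT JOIN transactions_categories tc ON tc.tranactionID = t.transactionID
    LEFT JOIN Categories c               ON c.categoryID    = tc.categoryID;
    The partially allocated case would still need an extra row per transaction for the unallocated remainder (for example a UNION of 1 minus the summed allocation per transaction), which this sketch does not cover.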

  • Reg Query Optimization - doubts..

    Hi Experts,
    This is related to the blog by Mr Prakash Darji regarding "Query Optimization" posted on Jan 26 2006. In it, Generation of Report is suggested as a way to optimize the query.
    I tried this, but I am not sure I am analyzing it correctly.
    I collected Stats data before and after Generation of Report. But how can I be sure that this is helping me? Has anyone tried this?
    What should I look for in the Stats data - duration?
    But duration would not be an absolute parameter, as there is the factor of "Wait Time, User", so duration may depend on this.
    Please help me in this.
    Thanks
    Gaurav
    Message was edited by: Gaurav

    Any ideas, experts?

  • OBIEE 11.1.1.7-Ago Function Error-[nQSError: 46008] Internal error: File server\Query\Optimizer\SmartScheduler\PhysicalRequestGenerator\Src\SQOSPMarkMultiTargetSupport.cpp

    Hi All,
    I was performing the steps mentioned in Oracle Tutorial"http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/bi/bi11115/biadmin11g_02/biadmin11g.htm#t10"-BI RPD creation.
    After using the AGO function, the data in the time series metric (Month Ago Revenue) was always null. I updated the DB features in the RPD physical layer by selecting support for time series functions.
    After that the report started to fail with the error below. Please let me know if it's a bug and whether there is any option to fix it.
    Thanks,
    Sreekanth
    Error
    View Display Error
    Odbc driver returned an error (SQLExecDirectW). 
      Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P 
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 46008] Internal error: File server\Query\Optimizer\SmartScheduler\PhysicalRequestGenerator\Src\SQOSPMarkMultiTargetSupport.cpp, line 1680. (HY000) 
    SQL Issued: SELECT 0 s_0, "Sample Sales"."Time"."Year-L1" s_1, "Sample Sales"."Revenue"."Ago-Year Revenue" s_2, "Sample Sales"."Revenue"."Revenue" s_3 FROM "Sample Sales" FETCH FIRST 65001 ROWS ONLY
      Refresh

    I know that there is no relation between "SampleApp Lite"."D3 Orders (Facts Attributes)"."Order Date", "SampleApp Lite"."D0 Time"."Calendar Date", it's also the same thing in my own RPD.
    But as it works with the 11.1.1.6.2 BP1 version, I don't understand why it's not working with 11.1.1.6.9.
    The implicit fact column is not set in my repository, but I don't have any request with only dimensional columns, so if my understanding is correct I don't need to use it. Also, the problem appears during the check of the repository, not in Answers.
    thanks anyway

  • SQL Server 2008R2 SP2 Query optimizer memory leak ?

    It looks like we are facing a SQL Server 2008 R2 query optimizer memory leak.
    We have below version of SQL Server
    Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
     Jun 28 2012 08:36:30
     Copyright (c) Microsoft Corporation
     Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
    The instance has maximum memory set to 20 GB.
    After executing a huge query (2277 kB, generated by IBM SPSS Clementine) with tons of CASE expressions, a lot of AND/OR conditions in the WHERE and CASE clauses, and multiple subqueries, the server stops responding with an out-of-memory error in the 'internal' resource pool,
    and the query optimizer has allocated all the memory.
    From Management Data Warehouse we can find that the query was executed at
    7.11.2014 22:40:57
    Then at 1:22:48 we receive FAIL_PAGE_ALLOCATION 1
    2014-11-08 01:22:48.70 spid75       Failed allocate pages: FAIL_PAGE_ALLOCATION 1
    And then tons of below errors
    2014-11-08 01:24:02.22 spid87      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:02.22 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:02.22 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:02.30 Server      Error: 17312, Severity: 16, State: 1.
    2014-11-08 01:24:02.30 Server      SQL Server is terminating a system or background task Fulltext Host Controller Timer Task due to errors in starting up the task (setup state 1).
    2014-11-08 01:24:02.22 spid74      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:02.22 spid74      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:13.22 spid87      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:13.22 spid87      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 spid63      Error: 701, Severity: 17, State: 130.
    2014-11-08 01:24:13.22 spid63      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 spid57      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:13.22 spid57      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:18.26 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:24.43 spid81      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:24.43 spid81      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:18.25 Server      Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:18.25 Server      BRKR TASK: Operating system error Exception 0x1 encountered.
    2014-11-08 01:24:30.11 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:30.11 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:35.18 spid57      Error: 701, Severity: 17, State: 131.
    2014-11-08 01:24:35.18 spid57      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:35.18 spid71      Error: 701, Severity: 17, State: 193.
    2014-11-08 01:24:35.18 spid71      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:35.18 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:35.41 Server      Error: 17312, Severity: 16, State: 1.
    2014-11-08 01:24:35.41 Server      SQL Server is terminating a system or background task SSB Task due to errors in starting up the task (setup state 1).
    2014-11-08 01:24:35.71 Server      Error: 17053, Severity: 16, State: 1.
    2014-11-08 01:24:35.71 Server      BRKR TASK: Operating system error Exception 0x1 encountered.
    2014-11-08 01:24:35.71 spid73      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:35.71 spid73      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:46.30 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:51.31 Server      Error: 17053, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:51.31 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:51.31 Logon       Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    The last error message is half an hour after the initial out-of-memory, at 2014-11-08 01:52:54.03. Then the instance shuts down completely.
    From the memory information in the error log we can see that all the memory is consumed by the QUERY_OPTIMIZER
    Buffer Pool                                   Value
    Committed                                   2621440
    Target                                      2621440
    Database                                     130726
    Dirty                                          3682
    In IO                                             0
    Latched                                           1
    Free                                            346
    Stolen                                      2490368
    Reserved                                          0
    Visible                                     2621440
    Stolen Potential                                  0
    Limiting Factor                                  17
    Last OOM Factor                                   0
    Last OS Error                                     0
    Page Life Expectancy                             28
    2014-11-08 01:22:48.90 spid75     
    Process/System Counts                         Value
    Available Physical Memory                29361627136
    Available Virtual Memory                 8691842715648
    Available Paging File                    51593969664
    Working Set                               628932608
    Percent of Committed Memory in WS               100
    Page Faults                                48955000
    System physical memory high                       1
    System physical memory low                        0
    Process physical memory low                       1
    Process virtual memory low                        0
    MEMORYCLERK_SQLOPTIMIZER (node 1)                KB
    VM Reserved                                       0
    VM Committed                                      0
    Locked Pages Allocated                            0
    SM Reserved                                       0
    SM Committed                                      0
    SinglePage Allocator                       19419712
    MultiPage Allocator                             128
    Memory Manager                                   KB
    VM Reserved                               100960236
    VM Committed                                 277664
    Locked Pages Allocated                     21483904
    Reserved Memory                                1024
    Reserved Memory In Use                            0
    On the other side, MDW reports that MEMORYCLERK_SQLOPTIMIZER keeps increasing from the execution of the query up to the point of out-of-memory, but the average value is 54.7 MB during that period, as can be seen on the attached graph.
    We have encountered this issue already two times (every time the critical query is executed).

    Hi,
    This does seem to me like a kind of memory leak, and it is from the SQL optimizer, which leaked so much memory from the buffer pool that there was none left to allocate for a new page.
    MEMORYCLERK_SQLOPTIMIZER (node 1)                KB
    VM Reserved                                       0
    VM Committed                                      0
    Locked Pages Allocated                            0
    SM Reserved                                       0
    SM Committed                                      0
    SinglePage Allocator                       19419712
    MultiPage Allocator                             128
    Can you post the complete DBCC MEMORYSTATUS output which was generated in the errorlog? Is this the only message in the errorlog, or are there more messages before and after it?
    select (SUM(single_pages_kb)*1024)/8192 as total_stolen_pages, type
    from sys.dm_os_memory_clerks
    group by type
    order by total_stolen_pages desc
    and
    select sum(pages_allocated_count * page_size_in_bytes)/1024 as allocated_kb, type
    from sys.dm_os_memory_objects
    group by type
    If you can post the output of the above two queries, along with the DBCC MEMORYSTATUS output, on some shared drive and share the location with us here, I will try to find out what is leaking memory.
    You can very well apply SQL Server 2008 R2 SP3 and see if this issue subsides, but I am not sure whether this is fixed or is actually a bug.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • Any software for Query Optimization?

    Hi,
    Is there any software to be used for query optimization?
    Please help me

    Third-party tools suck (if they did not, Oracle would have incorporated them into its optimizer long ago). The best tools are:
    1. Gather table and index statistics
    2. Use CBO (FIRST_ROWS, ALL_ROWS)
    3. SQL*Plus AUTOTRACE
    4. EXPLAIN PLAN
    5. SET SQL_TRACE = TRUE + TKPROF
    6. Statspack
    7. Enterprise manager (new Ora10g, did not try it)
    All from Oracle... (a short SQL*Plus sketch for items 3 to 5 follows below)
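    A short SQL*Plus sketch for items 3 to 5 in that list (the EMP query is only a placeholder):
    -- 3. AUTOTRACE: show the plan and execution statistics without printing the rows
    SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
    SELECT * FROM emp WHERE deptno = 10;
    SET AUTOTRACE OFF
    -- 4. EXPLAIN PLAN plus DBMS_XPLAN for a readable plan
    EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- 5. SQL trace for the session; format the resulting trace file with TKPROF on the server
    ALTER SESSION SET sql_trace = TRUE;
    SELECT * FROM emp WHERE deptno = 10;
    ALTER SESSION SET sql_trace = FALSE;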

  • Regarding Query Optimization

    Hi Gurus,
    I am preparing for an interview. I need to know about Query Optimization, explain plan, Cost analysis.
    Can anyone please give me a reference link or site where I can get step-by-step details for the above?
    It would be really helpful.
    Thanks in advance..
    Ameya.

    RTFM: Performance Tuning Guide and Reference - http://oraclesvca2.oracle.com/docs/cd/B10501_01/server.920/a96533/toc.htm

  • Is there an accurate query to provide a list of all objects grown in the past between 2 intervals?

    Please advise.
    I need an accurate query to provide a list of all objects grown between 2 intervals in the past, as the one provided in Doc ID 1395195.1 is not giving accurate results and DBMS_SPACE.OBJECT_GROWTH_TREND does not give object names.
    Kind Regards

    user13778506 wrote:
    Please advise.
    I need an accurate query to provide a list of all objects grown between 2 intervals in the past, as the one provided in Doc ID 1395195.1 is not giving accurate results and DBMS_SPACE.OBJECT_GROWTH_TREND does not give object names.
    Kind Regards
    This is only possible if you collect object sizes on a regular basis and store the details after each collection (a minimal sketch follows below).
    What unit of measure should be used to quantify the "growth": rows, blocks, segments, extents?
    Some might consider this level of detail to be symptomatic of Compulsive Tuning Disorder.
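    A minimal collection sketch along those lines (table name, job name and interval are made up; bytes per segment is used as the unit):
    -- History table, one row per segment per collection
    CREATE TABLE seg_size_hist AS
    SELECT SYSDATE AS snap_date, owner, segment_name, segment_type, bytes
    FROM   dba_segments
    WHERE  1 = 0;
    -- Collect a snapshot once a day via DBMS_SCHEDULER
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'SEG_SIZE_SNAPSHOT',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'INSERT INTO seg_size_hist SELECT SYSDATE, owner, segment_name, segment_type, bytes FROM dba_segments; COMMIT;',
        repeat_interval => 'FREQ=DAILY;BYHOUR=1',
        enabled         => TRUE);
    END;
    /
    Growth between two intervals is then a join of two snapshot dates on owner and segment_name.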

  • Aggregate Query Optimization (with indexes) for trivial queries

    Table myTable, which is quite large, has an index on the month column.
    "select max(month) from myTable" uses this index and returns quickly.
    "select max(month) from myTable where 1 = 1" does not use this index, falls through to a full table scan, and takes a very long time.
    Can this possibly be a genuine omission in the query optimizer, or is there some setting or another to convince it to perform the latter query more sanely?

    Oracle 11.2.0.1
    SQL> select table_name, num_rows from dba_tables where table_name = 'DWH_ADDRESS_MASTER';
    TABLE_NAME                                 NUM_ROWS
    DWH_ADDRESS_MASTER                        295729948
    SQL> explain plan for select max(last_update_date) from DWH.DWH_ADDRESS_MASTER;
    | Id  | Operation                  | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT           |                        |     1 |     8 |     4   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE            |                        |     1 |     8 |            |          |
    |   2 |   INDEX FULL SCAN (MIN/MAX)| DWH_ADDRESS_MASTER_N1  |     1 |     8 |     4   (0)| 00:00:01 |
    SQL> explain plan for select max(last_update_date) from DWH.DWH_ADDRESS_MASTER where 1 = 1;
    | Id  | Operation                   | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                        |     1 |     8 |     4   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE             |                        |     1 |     8 |            |          |
    |   2 |   FIRST ROW                 |                        |     1 |     8 |     4   (0)| 00:00:01 |
    |   3 |    INDEX FULL SCAN (MIN/MAX)| DWH_ADDRESS_MASTER_N1  |     1 |     8 |     4   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------------------

  • HFR - Data Query Optimization Settings

    hi
    In HFR, in a grid there are Data Query Optimization Settings, can you please help me on the settings property.
    Our report fetches data using a lot of attributes. When we don't use the MDX setting, it fetches very fast, but the aliases don't come through.
    However, when we use the MDX setting, the aliases come through, but the report takes a lot of time to run (approx. 1 min per row of data fetched).
    Can you please guide on the same.
    Thanks

    pm wrote:
    Hi,
    I am working on optimizing/tuning the query below.
    SELECT
    c_date,
    c1,
    c2,
    c3,
    c4,
    c5,
    c6
    FROM tab1
    WHERE
    ROWNUM <= &param
    AND ( c_date BETWEEN &date1 AND &date2 )
    AND c3 in (
    select c3
    from tab2
    where xxx='abc')
    ORDER BY c1, c_date;
    Note: &param, &date1, &date2 are runtime parameters coming from the UI code.
    tab1 has huge data, around 10 lakhs to crores of rows, and it has a range partition on the c_date column and a subpartition on the c1 column.
    To get the best throughput in less time, what do I need to do?
    Please let me know the steps to tune/optimize the SQL query, and also which hints we can use on the query for better results.
    Thanks in advance.
    PM
    Before you start worrying about performance tuning you should worry about the query being incorrect.
             WHERE
                 ROWNUM <= &param
                 AND   ( c_date BETWEEN &date1 AND &date2 )
                 AND c3 in (
                             select c3
                             from tab2
                             where xxx='abc')
             ORDER BY c1, c_date;
    Presumably you want to limit the number of rows with the ROWNUM predicate AFTER the ORDER BY clause is applied (see the sketch below).
    Please read this and learn how queries are actually processed, I can almost guarantee you this query is not doing what you think it is doing at present.
    http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html
    Cheers,
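    Following that reasoning, the usual Oracle top-N pattern puts the ORDER BY in an inline view and applies ROWNUM outside it (a sketch using the posted column names):
    SELECT *
    FROM  (SELECT c_date, c1, c2, c3, c4, c5, c6
           FROM   tab1
           WHERE  c_date BETWEEN &date1 AND &date2
           AND    c3 IN (SELECT c3 FROM tab2 WHERE xxx = 'abc')
           ORDER  BY c1, c_date)
    WHERE  ROWNUM <= &param;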

  • Autonomic Query Optimizer

    Hi All,
    I'm doing a bit of study on the portion of the query optimizer in Oracle9i that is autonomic (I mean dynamic and self-learning). Can anyone give me some pointers on this topic? (In fact I doubt whether something like this exists at all, or whether it is still a research topic.) I need some more info on it; some papers or articles would be handy.
    Greetings,
    Kaustav Ghoshal

    I think you mean the CBO (Cost Based Optimizer). If your database is running in "choose" mode, then yes, the CBO will automatically optimize your query (a small session-level sketch follows below).
    Check the doc here : http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/part1.htm
    Fred
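    For completeness, the optimizer goal can be checked and changed at session level, and the CBO needs statistics to work with (9i-era syntax; schema and table names are placeholders):
    -- Current setting (SQL*Plus)
    SHOW PARAMETER optimizer_mode
    -- Switch this session to a cost-based goal
    ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;
    -- Gather statistics so the CBO has something to base its decisions on
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');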
