Stats collection running long in 11g

Oracle 11.2.0.1
Sun Solaris.
The stats collection auto task runs for almost 12 hours, from 6 PM to 6 AM daily, and the autotask is STOPPED when the maintenance window ends.
DBA_TABLES.LAST_ANALYZED stops being updated after about 2 AM. I believe the job is scanning some big tables, but I am not able to pinpoint the table/index.
We have a bunch of 100 GB tables in the database. Is there a way to find out which table it is gathering stats on between 2 AM and 6 AM without completing?
Thanks for your time!
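
For reference, a minimal sketch of one way to answer the "which table between 2 AM and 6 AM" question from ASH history. It assumes you are licensed for the Diagnostics Pack and that the auto stats sessions carry the default 11g autotask job name prefix ORA$AT_OS_OPT_SY in their ACTION; adjust the filter and the time range to your case.

-- Which objects did the auto stats task touch between 02:00 and 06:00 today?
-- Requires the Diagnostics Pack (AWR/ASH). The ACTION filter assumes the
-- default autotask job names (ORA$AT_OS_OPT_SY_nnn); adjust if yours differ.
SELECT o.owner, o.object_name, o.object_type, COUNT(*) AS ash_samples
FROM   dba_hist_active_sess_history h
       JOIN dba_objects o ON o.object_id = h.current_obj#
WHERE  h.action LIKE 'ORA$AT_OS_OPT_SY%'
AND    h.sample_time >= TRUNC(SYSDATE) + 2/24
AND    h.sample_time <  TRUNC(SYSDATE) + 6/24
GROUP  BY o.owner, o.object_name, o.object_type
ORDER  BY ash_samples DESC;

The object with the most samples is the most likely candidate for the gather that never finishes.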

moreajays wrote:
Hi,
I would suggest finding the large tables whose LAST_ANALYZED is null; those are the likely culprits for the long run.
Only AWR or the other SQL history views can help you get the details.
To fix this, run the stats gathering manually during the day (outside the maintenance window), either with the AUTO sample size or with a small ESTIMATE_PERCENT (5 or 10).
After the first successful manual gather, the maintenance job will pick up new incremental stats from then on.
Ajay,
I don't think your statement that, after gathering the statistics manually, they would next be gathered incrementally is correct. Incremental statistics were introduced only for partitioned tables, AFAIK. Can you post a link to the documentation describing the behaviour you mention?
Aman....
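
For what it's worth, incremental statistics in 11g are a per-table preference that only applies to partitioned tables; a minimal sketch of enabling and checking it, using SCOTT.BIG_PART_TAB as a hypothetical partitioned table:

-- Enable incremental (synopsis-based) stats for one partitioned table
exec dbms_stats.set_table_prefs(ownname => 'SCOTT', tabname => 'BIG_PART_TAB', pname => 'INCREMENTAL', pvalue => 'TRUE');
-- Verify the preference
select dbms_stats.get_prefs('INCREMENTAL', 'SCOTT', 'BIG_PART_TAB') from dual;

A manual gather on a non-partitioned table does not change how the maintenance job gathers it afterwards.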

Similar Messages

  • Is 11g auto stats collection on automatically? Where can I find this job?

    Is 11g auto stats collection on automatically?
    I manually created an 11g database, but I did not see the auto stats collection job. Where can I find this job, and how can I verify that it is on?
    Thank you.

    Found in Automated Maintenance Tasks.
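
    A hedged sketch of verifying the task and, if needed, switching it on (DBMS_AUTO_TASK_ADMIN is the 11g API; on 10g the equivalent is the GATHER_STATS_JOB scheduler job):

    -- Is the auto task enabled?
    select client_name, status
    from   dba_autotask_client
    where  client_name = 'auto optimizer stats collection';
    -- Enable it for all maintenance windows if it shows DISABLED
    exec dbms_auto_task_admin.enable(client_name => 'auto optimizer stats collection', operation => NULL, window_name => NULL);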

  • Auto optimizer stats collection enabled, but not running and not showing up

    I enabled auto optimizer stats collection days ago, but it has never run. The DB version is 11gR2, and the OS is Red Hat 5. The task shows as enabled in DBA_AUTOTASK_CLIENT, but there is no row for it in DBA_AUTOTASK_TASK. Please help with this issue.
    SQL> select client_name, status from dba_autotask_client;
    CLIENT_NAME                     STATUS
    auto optimizer stats collection ENABLED
    auto space advisor              DISABLED
    sql tuning advisor              ENABLED
    SQL> select task_name, status, to_char(last_good_date, 'YYYY-MM-DD HH24:MI:SS') last_good_date, last_good_duration
         from dba_autotask_task
         where client_name = 'auto optimizer stats collection';
    no rows selected
    SQL> select program_action, number_of_arguments, enabled
         from dba_scheduler_programs
         where owner = 'SYS'
         and program_name = 'GATHER_STATS_PROG';
    PROGRAM_ACTION                            NUMBER_OF_ARGUMENTS ENABL
    dbms_stats.gather_database_stats_job_proc                   0 TRUE
    SQL> select w.window_name, c.autotask_status, c.optimizer_stats, w.repeat_interval, w.enabled
         -- , w.duration, w.last_start_date, w.next_start_date
         from dba_autotask_window_clients c, dba_scheduler_windows w
         where c.window_name = w.window_name
         order by last_start_date desc;
    WINDOW_NAME      AUTOTASK OPTIMIZE REPEAT_INTERVAL                                       ENABL
    MONDAY_WINDOW    ENABLED  ENABLED  freq=daily;byday=MON;byhour=22;byminute=0;bysecond=0  TRUE
    SUNDAY_WINDOW    ENABLED  ENABLED  freq=daily;byday=SUN;byhour=6;byminute=0;bysecond=0   TRUE
    SATURDAY_WINDOW  ENABLED  ENABLED  freq=daily;byday=SAT;byhour=6;byminute=0;bysecond=0   TRUE
    FRIDAY_WINDOW    ENABLED  ENABLED  freq=daily;byday=FRI;byhour=22;byminute=0;bysecond=0  TRUE
    THURSDAY_WINDOW  ENABLED  ENABLED  freq=daily;byday=THU;byhour=22;byminute=0;bysecond=0  TRUE
    WEDNESDAY_WINDOW ENABLED  ENABLED  freq=daily;byday=WED;byhour=22;byminute=0;bysecond=0  TRUE
    TUESDAY_WINDOW   ENABLED  ENABLED  freq=daily;byday=TUE;byhour=22;byminute=0;bysecond=0  TRUE
    7 rows selected.
    SQL>

    SQL> select max(last_analyzed) from all_tables where owner='ARAD';
    MAX(LAST_
    26-SEP-12
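
    Since the client shows ENABLED but DBA_AUTOTASK_TASK returns nothing, one thing worth checking is whether any stats job actually executed inside the maintenance windows. A minimal sketch against the 11gR2 autotask history view:

    select window_name, job_name, job_status,
           to_char(job_start_time, 'YYYY-MM-DD HH24:MI:SS') as job_start_time,
           job_duration, job_error
    from   dba_autotask_job_history
    where  client_name = 'auto optimizer stats collection'
    order  by job_start_time desc;

    No rows here would mean the task was never launched in any window, which points at the window/autotask configuration rather than at the job itself.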

  • Regarding stats collection in oracle 11g.

    Just a general question: during stats collection, does the system take a backup of the current statistics? I know we can specify a STATTAB, but does it back up the stats before overwriting them?
    I got this requirement as part of an upgrade. I have already gone through EXPORT_SCHEMA_STATS and IMPORT_SCHEMA_STATS; I am just exploring all the other possible options.
    Regards
    DBA.

    DBA wrote:
    Just a general question: during stats collection, does the system take a backup of the current statistics? I know we can specify a STATTAB, but does it back up the stats before overwriting them?
    A backup to where, exactly?
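
    For reference, 11g already keeps a rolling history of overwritten statistics (default retention around 31 days), and you can also export to a user stat table before the upgrade. A minimal sketch; the schema SCOTT and stat table STATS_BKP are just example names:

    -- How long overwritten stats are kept automatically (days)
    select dbms_stats.get_stats_history_retention from dual;
    -- Explicit backup into a user stat table before the upgrade
    exec dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'STATS_BKP');
    exec dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'STATS_BKP');
    -- Roll the dictionary stats back to a point in time if needed
    exec dbms_stats.restore_schema_stats(ownname => 'SCOTT', as_of_timestamp => systimestamp - interval '1' day);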

  • Delete statements are taking longer time

    Hi All,
    I have an issue with delete statement. below are my oracle DB details.
    SQL>select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE 11.1.0.6.0 Production
    TNS for Linux: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    I have a schema in the database with 120 tables in it. The problem is that when the QA team deletes customers from the customer table, the delete runs for a long time and never completes. The customer table has FK relationships with other tables, and all the right indexes are in place. When I tried to identify why it takes so long, I found that while they run the delete the application is also running; the sessions created by the application hold locks on most of the tables, and at the same time the delete session is also waiting for an exclusive lock on those tables. I gave this information to the QA team and asked them to stop the application while they delete the customers. They said they don't want to stop the application, the delete has to work, and they asked for a different solution. I am not sure what solution to offer them.
    Can you please suggest an approach? Thanks in advance.

    Hari wrote:
    and at the same time the delete session is also waiting for an exclusive lock on those tables.
    The only way the delete session could be waiting for an exclusive lock on the table is if the code to delete from the customer table includes the command: lock table customer in exclusive mode;
    Unless your original description is wrong, either the deletion code has to change or the application has to stop modifying the customer table.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: <b><em>Oracle Core</em></b>
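
    One way to confirm which application sessions are blocking the delete while it runs (a minimal sketch; ordinary row locks, not table locks, are the usual culprit with FK-related deletes):

    -- Sessions currently waiting behind another session's lock
    select sid, serial#, username, blocking_session, event, seconds_in_wait, sql_id
    from   v$session
    where  blocking_session is not null;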

  • Is index range scan the reason for query running long time

    I would like to know whether an index range scan is the reason for the query running a long time. Below is the explain plan. If so, how can I optimise it? Please help.
    Operation                        Object             COST  CARDINALITY  BYTES
    SELECT STATEMENT ()                                  413         1000  265000
    COUNT (STOPKEY)
    FILTER ()
    TABLE ACCESS (BY INDEX ROWID)    ORDERS              413        58720  15560800
    INDEX (RANGE SCAN)               IDX_SERV_PROV_ID     13       411709
    TABLE ACCESS (BY INDEX ROWID)    ADDRESSES              2           1  14
    INDEX (UNIQUE SCAN)              SYS_C004605            1           1
    TABLE ACCESS (BY INDEX ROWID)    ADDRESSES              2           1  14
    INDEX (UNIQUE SCAN)              SYS_C004605            1           1
    TABLE ACCESS (BY INDEX ROWID)    ADDRESSES              2           1  14
    INDEX (UNIQUE SCAN)              SYS_C004605            1           1

    An index range scan means that the optimiser has determined that it is better to read the index than to perform a full table scan. So, in answer to your question: quite possibly, but the alternative might take even longer!
    The best thing to do is to review your query and check that you need every table included in it and that you are accessing the tables via the best route. For example, accessing a table via a primary key index would be better than using a non-unique index. But the best way to reduce the run time is to give the query fewer tables (and indexes) to read.
    John Seaman
    http://www.asktheoracle.net
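
    One way to check whether the range scan really is the problem is to compare the optimizer's estimates with the actual row counts. A minimal sketch using the GATHER_PLAN_STATISTICS hint and DBMS_XPLAN (available from 10g):

    -- 1. Re-run the slow statement once with rowsource statistics, e.g.
    --    select /*+ gather_plan_statistics */ ... rest of the original query ... ;
    -- 2. Then, in the same session, compare E-Rows (estimated) with A-Rows (actual):
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

    Large gaps between E-Rows and A-Rows usually point at stale or missing statistics rather than at the access path itself.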

  • Stats collection

    Hi,
    I am using Oracle 9i. When I ran a refresh for some tables, I found that statistics collection was also running on those tables at the same time, and query performance was poor. However, the LAST_ANALYZED column in USER_TABLES was updated.
    Could you please let me know whether the refresh skips stats collection if both are running in parallel?
    Thanks!

    Please define your terms so we can help you.
    1. What does "refresh" mean? Is this a DML statement, Streams, SQL*Loader, DataPump?
    Surely you don't expect that we can help you by guessing?
    2. What is your version number? 9i is not a version number.
    SELECT * FROM v$version;
    3. What exactly are the two things running in parallel?
    Please provide clarity so someone can help you.

  • Auto optimizer stats collection interval

    Hi,
    I have a question about "auto optimizer stats collection" in 11gR2.
    In the history I see one execution of "auto optimizer stats collection" on Monday, Tuesday, Wednesday, Thursday and Friday, and five executions on Saturday and Sunday.
    I can find the duration of each window (4 hours Mon-Fri, 20 hours Sat and Sun) and the start time (22:00 Mon-Fri, 06:00 Sat and Sun), but I cannot find the interval between executions, i.e. the frequency of execution within each window.
    Can someone tell me if it is possible to change this, and where?
    Regards,

    Thanks for all the replies, but that is not the answer to my question.
    I know how to disable/enable the automatic task and how to add/delete windows, etc., but where can I configure the frequency or interval of the "auto optimizer stats collection" task?
    Let me try to explain what I want to know.
    By default, in the Mon, Tue, Wed, Thu and Fri windows, the task starts at 22:00 and runs once; the window duration is 4 hours. Why one run?
    In the Sat and Sun windows, the task starts at 6:00 and runs 5 times (6:00, 10:00, 14:00, 18:00 and 22:00); the window duration is 20 hours. Why 5 runs? Or why 4 hours between two runs?
    Regards
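
    As far as I can tell, the roughly 4-hour resubmission of the task inside a long window is handled internally by the autotask framework and is not exposed as a configurable interval; what you can change is the window itself. A hedged sketch, using the Saturday window and example values only:

    -- Shorten the Saturday maintenance window to 4 hours
    -- (the change is rejected while the window is open)
    exec dbms_scheduler.set_attribute('SYS.SATURDAY_WINDOW', 'duration', numtodsinterval(4, 'hour'));
    -- Or start it at 22:00 like the weekday windows
    exec dbms_scheduler.set_attribute('SYS.SATURDAY_WINDOW', 'repeat_interval', 'freq=daily;byday=SAT;byhour=22;byminute=0;bysecond=0');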

  • Performance degrades after stats collection

    Oracle 11gR2 OEL 5
    We have several very large tables (40 million rows and up). We recently gathered stats on them and it degraded our performance: queries started doing full table scans instead of using the indexes. The same queries are fine in other environments; the only difference is the stats gathering. Logically, performance should be better after stats collection, but instead it is worse.
    I ran a 10053 trace on the query and I can see that the cardinality and cost are far higher in the poorly performing environment. As a test, I restored the old stats in that environment and everything went back to normal - the query runs fast again. Note that the restored stats were gathered over a year ago. Should we not gather stats regularly on very large tables?
    Thanks.

    I will have to look more into the GATHER_STATS job.
    In the meantime, here is the query with both the good and bad explain plans.
    I'm sure you can see the difference in cost and cardinality.
    Good (Original) Query Explain Plan
    SELECT DL.LCODE, DL.CNUM, DR.DNAME, STPL.FSTE
    FROM   DL, DR_LK, DR, STPL
    WHERE  ( DL.LCODE = DR_LK.LCODE )
    AND    ( DL.CNUM = DR_LK.CNUM )
    AND    ( DR_LK.DBTR_ID = DR.DBTR_ID )
    AND    ( DL.LCODE = STPL.LCODE )
    AND    ( DL.CNUM = STPL.CNUM )
    AND    ( ( DL.DT_ID = :dt_id )
    AND      ( DL.DT_TPE = :dt_tpe )
    AND      ( DR_LK.RLTP = '80' )
    AND      ( STPL.IND = 'T' ) )
    ============
    Plan Table
    ============
    -----------------------------------------------------------------+-----------------------------------+
    | Id  | Operation                        | Name                  | Rows  | Bytes | Cost  | Time      |
    -----------------------------------------------------------------+-----------------------------------+
    | 0   | SELECT STATEMENT                 |                       |       |       |    12 |           |
    | 1   |  NESTED LOOPS                    |                       |       |       |       |           |
    | 2   |   NESTED LOOPS                   |                       |   16K | 2210K |    12 |  00:00:01 |
    | 3   |    NESTED LOOPS                  |                       |     1 |   102 |     9 |  00:00:01 |
    | 4   |     NESTED LOOPS                 |                       |     1 |    65 |     7 |  00:00:01 |
    | 5   |      TABLE ACCESS BY INDEX ROWID | CDL                  |     1 |    32 |     4 |  00:00:01 |
    | 6   |       INDEX RANGE SCAN           | XI1_CDL             |     1 |       |     3 |  00:00:01 |
    | 7   |      TABLE ACCESS BY INDEX ROWID | CDR_LK              |     2 |    48 |     3 |  00:00:01 |
    | 8   |       INDEX RANGE SCAN           | PKEY_21               |     1 |       |     2 |  00:00:01 |
    | 9   |     TABLE ACCESS BY INDEX ROWID  | CDR                 |     1 |    21 |     2 |  00:00:01 |
    | 10  |      INDEX UNIQUE SCAN           | CDR_4398287791      |     1 |       |     1 |  00:00:01 |
    | 11  |    INDEX RANGE SCAN              | XIE1_C_PL         |     1 |       |     2 |  00:00:01 |
    | 12  |   TABLE ACCESS BY INDEX ROWID    | C_PL              | 2568K |   77M |     3 |  00:00:01 |
    -----------------------------------------------------------------+-----------------------------------+
    The bad (after gathering stats) Query Explain Plan
    ============
    Plan Table
    ============
    ------------------------------------------------------------------+-----------------------------------+
    | Id  | Operation                         | Name                  | Rows  | Bytes | Cost  | Time      |
    ------------------------------------------------------------------+-----------------------------------+
    | 0   | SELECT STATEMENT                  |                       |       |       |   58K |           |
    | 1   |  NESTED LOOPS                     |                       |       |       |       |           |
    | 2   |   NESTED LOOPS                    |                       |   30G | 5864G |   58K |  00:10:17 |
    | 3   |    HASH JOIN                      |                       |   11K | 1253K |   16K |  00:03:06 |
    | 4   |     NESTED LOOPS                  |                       |       |       |       |           |
    | 5   |      NESTED LOOPS                 |                       |   11K |  890K |     7 |  00:00:01 |
    | 6   |       TABLE ACCESS BY INDEX ROWID | CL              |     1 |    42 |     4 |  00:00:01 |
    | 7   |        INDEX RANGE SCAN           | XI1_CL               |     1 |       |     3 |  00:00:01 |
    | 8   |       INDEX RANGE SCAN            | PKEY_31               |     1 |       |     2 |  00:00:01 |
    | 9   |      TABLE ACCESS BY INDEX ROWID  | C_LK              | 3432K |  115M |     3 |  00:00:01 |
    | 10  |     TABLE ACCESS FULL             | CDR                 | 3614K |  115M |   16K |  00:03:06 |
    | 11  |    INDEX RANGE SCAN               | XI1_C_PL         |     1 |       |     2 |  00:00:01 |
    | 12  |   TABLE ACCESS BY INDEX ROWID     | C_PL              | 3465K |   88M |     3 |  00:00:01 |
    ------------------------------------------------------------------+-----------------------------------+
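
    Since restoring the old statistics already fixed the plan, two sketches that are commonly useful here (the owner APP and the timestamp are placeholders): rolling one table's stats back to a known-good point in time, and listing the recent gather operations.

    -- Restore one table's statistics to how they looked before the gather
    exec dbms_stats.restore_table_stats(ownname => 'APP', tabname => 'CDR', as_of_timestamp => timestamp '2012-09-20 00:00:00');
    -- Which gather operations ran recently, and when
    select operation, target, start_time, end_time
    from   dba_optstat_operations
    order  by start_time desc;

    Rather than never gathering on the large tables, it is usually safer to keep gathering but review histogram and sample-size choices, and keep the stats history long enough to restore if a plan regresses.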

  • VCP Plans running long

    Hi
    Our VCP plan programs are running long. There is no single program that I can identify as the cause. We run plan executions and ATP plans after collections. Either collections is running long, or the VCP ATP plan is running long, or both. Please suggest.
    The ideal run time for collections is 40 minutes, but it is now taking 100+ minutes. The ATP plan used to complete in 50-60 minutes; now it is taking 120 minutes.
    Thanks,

    Hi,
    On the collections side, can you please plot the time taken by the PDP (and PDP workers), Refresh Snapshots, and ODS load over a few days? If the behaviour is not consistent, you may try running GSS on the ERP instance and the VCP instance at 50%, or maybe at 100%.
    Are there any other activities, such as backups, scheduled at the same time the collections and plans are run?
    You may also monitor the DB sessions and see which other processes/concurrent programs are running at the same time.
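
    For the last suggestion, a minimal sketch of watching what else is active on the database while the collections run (the filter values are illustrative):

    -- Active foreground sessions, longest-running call first
    select sid, serial#, username, program, module, sql_id, last_call_et
    from   v$session
    where  status = 'ACTIVE'
    and    type = 'USER'
    order  by last_call_et desc;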

  • Select query running long time

    Hi,
    DB version : 10g
    platform : sunos
    My select SQL query has been running for a long time (more than 20 hours) and is still running.
    Is there any way to find the approximate completion time of the query (the time remaining)?
    Also, is there any way to increase the speed of a query that is already running, for example by adding hints?
    Please help me on this .
    Thanks

    Hi Sathish, thanks for your reply.
    I have already checked V$SESSION_LONGOPS, but it shows TIME_REMAINING = 0:
    select TOTALWORK, SOFAR, START_TIME, TIME_REMAINING from V$SESSION_LONGOPS where SID = '10';
    TOTALWORK      SOFAR START_TIME      TIME_REMAINING
      1099759    1099759 27-JAN-11                    0
    Any idea?
    Thanks.
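
    Rows with TIME_REMAINING = 0 in V$SESSION_LONGOPS are operations that have already finished, and many row-by-row plans never register there at all, which is why the view is not showing anything useful. A minimal sketch that looks only at work still in flight for that session (SID 10 as in the post):

    select opname, target, sofar, totalwork,
           round(sofar / totalwork * 100, 1) as pct_done,
           time_remaining, elapsed_seconds
    from   v$session_longops
    where  sid = 10
    and    sofar < totalwork;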

  • Change field catalog in VT05 - Selective logs for collective run

    Hi,
    Through transaction VT05 we can check the shipment log, which has the following ALV field catalog:
    Collective run
    Date
    Time
    User
    Transaction code
    Operating mode
    Problem class
    Expiry date
    Keep until expiry
    Processing status
    But if we want to check the delivery number for a particular collective run, we have to click that collective run and only then can we see the delivery number.
    Our actual requirement is to add one more column, Delivery Number, to the ALV field catalog above.
    I have searched for enhancements and BADIs, but I did not find a suitable one for the VT05 log.
    Can anyone help me with adding a Delivery Number column (alongside the ALV fields mentioned above) to the output of the VT05 log report?

    Thanks Nabheet, once again...
    I would like to show you the code I have written in the implemented implicit enhancement.
    The following code is written under the implemented implicit enhancement ZSHIPMENT_DETAIL.
    Properties of Enhancement Implementation           ZSHIPMENT_DETAIL                          (Active)
    Enhancem. Technique:  Source Code Plug-In
    Description           Showing SHIPMENT Detail in output
    Package               $TMP
    Original Language     EN
    Created               MNIKAM       02/09/2012
    Last Changed          MNIKAM       02/19/2012
    Superordinate Enhancement Implementation     ZSHIPMENT
    Enhanced Development Object:   V54X
    SAPLV54X          Static Enhancement Point/Section     \PR:SAPLV54X\FO:LOG_HEADER_DISPLAY\SE:BEGIN\EI
    ENHANCEMENT 1 ZSHIPMENT_DETAIL.    "active version
      TYPE-POOLS: szal.
      DATA: g_l_header_extr  TYPE header_extr.
      FIELD-SYMBOLS: <f_g_l_header_extr> TYPE header_extr.
      DATA: g_l_log_link     TYPE log_link.
      FIELD-SYMBOLS: <f_g_l_log_link> TYPE log_link.
      DATA: g_l_t_lognumbers TYPE szal_lognumbers WITH HEADER LINE.
      FIELD-SYMBOLS: <f_log_nr> TYPE log_nr.
      DATA: messages TYPE TABLE OF balm.
      FIELD-SYMBOLS: <f_l_s_message> TYPE balm.
      DATA: l_tabix TYPE sy-tabix.

      LOOP AT g_header_extr_tab ASSIGNING <f_g_l_header_extr> WHERE status NE 1.
        CLEAR: l_tabix.
        l_tabix = sy-tabix.
        READ TABLE g_log_link_tab ASSIGNING <f_g_l_log_link>
             WITH KEY fccnu = <f_g_l_header_extr>-fccnu
             BINARY SEARCH.
        READ TABLE <f_g_l_log_link>-log_tab ASSIGNING <f_log_nr> INDEX 1.
        CLEAR: g_l_t_lognumbers.
        g_l_t_lognumbers-item = <f_log_nr>-log_nr.
        APPEND g_l_t_lognumbers.
        CLEAR: messages.
        " Read the application log messages for this log number
        CALL FUNCTION 'APPL_LOG_READ_DB_WITH_LOGNO'
          TABLES
            lognumbers = g_l_t_lognumbers[]
            messages   = messages.
        " Keep only the message numbers that carry the shipment number
        DELETE messages WHERE msgno NE 371 AND msgno NE 494 AND msgno NE 491.
        LOOP AT messages ASSIGNING <f_l_s_message>
             WHERE lognumber = <f_log_nr>-log_nr
               AND ( msgty = 'S' OR msgty = 'W' )                   "message type: S-success, W-warning
               AND msgid = 'VW'
               AND ( msgno = 371 OR msgno = 494 OR msgno = 491 ).   "message numbers that carry the shipment number
          <f_g_l_header_extr>-tknum = <f_l_s_message>-msgv1.        "shipment number
          CLEAR: g_l_t_lognumbers[].
          EXIT.
        ENDLOOP.
      ENDLOOP.
    ENDENHANCEMENT.
    Thanks again,
    Mahesh Nikam.

  • Inventory Management Collection run

    We would like to increase the frequency of our Inventory Management collection run to every 5 minutes (it currently runs every hour). Can anyone foresee a problem with this, or has anyone experienced one?
    We have an Unserialized V3 update
    Thanks

    I do not see any issue. The LUW count of the records getting written to the delta queue might increase marginally, but that should not be an issue.

  • 10g Automatic Stats Collection -- AUTO ( METHOD_OPT)

    I have implemented the Oracle 10g auto stats collection feature. To generate new stats for the 10g optimizer, I deleted the schema stats and then scheduled DBMS_SCHEDULER to collect stats with METHOD_OPT set to AUTO.
    I see that the new 10g optimizer collects stats differently in different databases, even though they contain the same data.
    For example, it collected histograms on 10 columns of a table in one environment but generated histograms on only 9 columns in another.
    I am concerned that the stats difference will result in execution plans changing across different instances.
    Can someone please clarify the cause of this? How can I make sure that the 10g optimizer collects stats uniformly across different environments?
    Thank You

    You wrote:
    "I am concerned that the stats difference will result in the execution plan getting changed across different instances."
    and you should be concerned.
    I would recommend not collecting stats the way you are but rather to use a nuanced approach where you collect only when they are stale or only when you know that changes have rendered old statistics invalid.
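
    If the goal is identical histogram behaviour across environments, one option is to pin METHOD_OPT yourself instead of leaving the 10g default of FOR ALL COLUMNS SIZE AUTO, which decides histograms from the column usage recorded in each database and therefore differs between environments. A hedged sketch; the schema, table and column names are examples:

    -- Change the default used by the automatic stats job (10g API)
    exec dbms_stats.set_param('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
    -- Or gather a table with an explicit, repeatable histogram list
    exec dbms_stats.gather_table_stats(ownname => 'APP', tabname => 'ORDERS', method_opt => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS STATUS SIZE 254', cascade => TRUE);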

  • Auto Stats Collection on function based indexes

    In 10g, does the AUTO stats collection job in the default weeknight/weekend window collect stats on function-based indexes?
    Thank You
    I got the reply in another similar post I made some time earlier.
    Thank you.

    SELECT job_name, comments
    FROM dba_scheduler_jobs;
    Then refine your search using the columns in the view.
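
    For reference, a function-based index creates a hidden virtual column on the table; a hedged way to check whether that column and the index received statistics (owner APP and table ORDERS are placeholders):

    -- Hidden columns created by function-based indexes, and their stats
    select column_name, hidden_column, num_distinct, last_analyzed
    from   dba_tab_cols
    where  owner = 'APP'
    and    table_name = 'ORDERS'
    and    hidden_column = 'YES';
    -- Index statistics gathered by the (cascading) stats job
    select index_name, num_rows, last_analyzed
    from   dba_ind_statistics
    where  table_owner = 'APP'
    and    table_name = 'ORDERS';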
