AWR & GATHER_STATS_JOB stats jobs difference

10gR2 version
What is the difference between the optimizer statistics gathered by GATHER_STATS_JOB and the statistics collected by AWR, and what areas of statistics collection does each cover?

GATHER_STATS_JOB:
http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php#automatic_optimizer_statistics_collection
AWR:
http://www.oracle-base.com/articles/10g/AutomaticWorkloadRepository10g.php
HTH
Anantha.

Similar Messages

  • How to exclude tables from default scheduled stats job in 10.2.0.4

    I am using the default Oracle stats job, GATHER_STATS_JOB, in my database, but a business requirement means I need to exclude a few tables from statistics gathering.

    Welcome to the forums.
    Since I am sure you have read the DBMS_STATS docs at tahiti.oracle.com, what is it about the method described there for addressing this that you don't think will work for you?
    Hint: the correct answer to my question is not "I didn't read the docs because I thought someone here would read them for me." ;-)
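One method the DBMS_STATS docs describe for this is to lock statistics on the tables the automatic job should skip, then gather them manually on your own schedule. A minimal sketch, assuming placeholder schema and table names:

```sql
-- Lock stats so GATHER_STATS_JOB skips this table
-- (APP_OWNER / BIG_TABLE are placeholder names).
BEGIN
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'BIG_TABLE');
END;
/

-- Gather manually when it suits you; FORCE => TRUE overrides the lock.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER',
                                tabname => 'BIG_TABLE',
                                force   => TRUE);
END;
/
```

Locked tables are simply skipped by the automatic job, so the rest of the database keeps getting fresh statistics.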

  • SQL slow after weekly stats job, then running ANALYZE TABLE makes it fast again

    Oracle 11.2.0.2:
    I had some slow SQL / reports and noticed that the SQL is obviously slow after the weekend, when the stats job BSLN_MAINTAIN_STATS_JOB and other jobs run weekly under SYS.
    I ran dbms_stats.GATHER_TABLE_STATS on the schema, but it doesn't help.
    But when I run ANALYZE TABLE afterwards on only one or two tables of the schema, the SQL / report performance is fast again.
    In dba_tables I can see the LAST_ANALYZED date, and GLOBAL_STATS = NO (when the table was analyzed with ANALYZE) vs. GLOBAL_STATS = YES (when it was analyzed with DBMS_STATS).
    What does the ANALYZE TABLE command do that makes my SQL run fast again, while dbms_stats.GATHER_TABLE_STATS does not seem to work in this situation?
    regards

    astramare wrote:
    > Oracle 11.2.0.2: the SQL is obviously slow after the weekend, when the stats job BSLN_MAINTAIN_STATS_JOB and other jobs run weekly under SYS. I ran dbms_stats.GATHER_TABLE_STATS on the schema, but it doesn't help.
    What options do you use for the gather stats command?
    Have you also collected system stats?
    > But when I run ANALYZE TABLE afterwards on only one or two tables of the schema, the SQL / report performance is fast again.
    ANALYZE TABLE is deprecated, but still does part of the work. It is not as complete as DBMS_STATS.
    > In dba_tables I can see the LAST_ANALYZED date, and GLOBAL_STATS = NO (when analyzed with ANALYZE) vs. GLOBAL_STATS = YES (when analyzed with DBMS_STATS).
    It must have something to do with the way you use it.
    HTH
    FJFranken
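One thing worth pinning down when DBMS_STATS "doesn't help" is which options the call actually used; a sketch with the options spelled out (the schema and table names are placeholders, not from the thread):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',    -- placeholder schema
    tabname          => 'SLOW_TABLE',   -- placeholder table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE,           -- gather index stats too
    no_invalidate    => FALSE);         -- invalidate dependent cursors now
END;
/
```

One possible explanation for the observed difference: ANALYZE invalidates dependent cursors immediately, while DBMS_STATS defaults to DBMS_STATS.AUTO_INVALIDATE, which can leave old execution plans in use for a while; no_invalidate => FALSE forces immediate re-parsing with the new statistics.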

  • Stats job in Oracle 11g

    During installation, the decision was made to not install the stats gathering job in 11g.
    Post-installation, the job is not running as expected.
    Since then we have done more research and are comfortable with the stats job running only for Oracle dictionary tables.
    My question is
    How to install it and make sure it is run at a certain time?
    I have read the documentation and have issued commands such as:
    DBMS_AUTO_TASK_ADMIN.ENABLE (
    client_name IN VARCHAR2,
    operation IN VARCHAR2,
    window_name IN VARCHAR2);
    I have verified that the stats are not gathered on dictionary tables a.k.a SYS owned objects.
    Any input will be highly appreciated.

    > I have verified that the stats are not gathered on dictionary tables, a.k.a. SYS-owned objects.
    Post SQL & results that prove "the stats are not gathered on dictionary tables".
    LAST_ANALYZED is NOT a reliable indicator to determine if/when statistics were gathered.
    If/when table content does NOT change, then current statistics are still valid & statistics do NOT need to be collected again.
    Handle:      Dbacloud
    Status Level:      Newbie (20)
    Registered:      Nov 18, 2009
    Total Posts:      55
    Total Questions:      13 (13 unresolved)
    so many questions without ANY answers.
    http://forums.oracle.com/forums/ann.jspa?annID=718
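For the original question of re-enabling the task, the DBMS_AUTO_TASK_ADMIN prototype quoted above might be filled in roughly like this (a sketch; with window_name => NULL the task is enabled in all maintenance windows):

```sql
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);  -- NULL = enable in every maintenance window
END;
/

-- Check the result.
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto optimizer stats collection';
```

Controlling *when* it runs is then a matter of which maintenance windows are open, not of the task itself.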

  • Gather Stats Job in 11g

    Hi,
    I am using 11.1.0.7 on IBMAIX Power based 64 bit system.
    In 10g, if I query the dba_scheduler_jobs view, I see GATHER_STATS_JOB for automated statistics collection, but in 11g I don't; instead I see the BSLN_MAINTAIN_STATS_JOB job, which executes the BSLN_MAINTAIN_STATS_PROG program for stats collection.
    If I query DBA_SCHEDULER_PROGRAMS, I also see the GATHER_STATS_PROG program there. Can the gurus help me understand both in 11g? Why are there two different programs, and what is the difference?
    The actual problem is that I am receiving the following error message in my alert log file:
    Mon Aug 16 22:01:42 2010
    GATHER_STATS_JOB encountered errors.  Check the trace file.
    Errors in file /oracle/diag/rdbms/usgdwdbp/usgdwdbp/trace/usgdwdbp_j000_1179854.trc:
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    The trace file shows:
    *** 2010-08-14 22:10:14.449
    *** SESSION ID:(2028.20611) 2010-08-14 22:10:14.449
    *** CLIENT ID:() 2010-08-14 22:10:14.449
    *** SERVICE NAME:(SYS$USERS) 2010-08-14 22:10:14.449
    *** MODULE NAME:(DBMS_SCHEDULER) 2010-08-14 22:10:14.449
    *** ACTION NAME:(ORA$AT_OS_OPT_SY_3407) 2010-08-14 22:10:14.449
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    *** 2010-08-14 22:10:14.450
    GATHER_STATS_JOB: GATHER_TABLE_STATS('"DWDB_ADMIN_SYN"','"TEMP_HIST_HEADER_LIVE"','""', ...)
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    But we don't have GATHER_STATS_JOB in 11g, and the BSLN_MAINTAIN_STATS_JOB job runs only at weekends, yet the above error message appeared last night, 16 August.
    Thanks
    Salman

    Thanks for the people who are contributing.
    I know from this information that the table is locked, but I have tried manually locking a table and executing the gather_table_stats procedure, and it runs fine. I have two questions here:
    Where is GATHER_STATS_JOB in 11g? As you can see, the trace file says that GATHER_STATS_JOB failed, yet I don't see any GATHER_STATS_JOB in 11g.
    The BSLN_MAINTAIN_STATS_JOB job is supposed to gather statistics only on weekend nights, so how come I see this error occurring last night at 22:11 on 16 August, which is not a weekend night?
    Salman
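In 11g the automatic optimizer stats collection runs as an AutoTask (the ORA$AT_OS_OPT_SY_* action name in the trace is the giveaway), not as the old GATHER_STATS_JOB scheduler job. One way to see when it actually ran, sketched here, is to query the AutoTask history view:

```sql
-- When did the auto optimizer stats task actually run?
SELECT client_name, job_name, job_start_time, job_status
FROM   dba_autotask_job_history
WHERE  client_name = 'auto optimizer stats collection'
ORDER  BY job_start_time DESC;
```

That should show runs on weeknights too, since the 11g default maintenance windows open every day, which would explain an error on a Monday 16 August.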

  • How to check whether gather stats job is running or not in OEM

    Hi,
    People in our team say that there is an automatic job running in OEM that gathers table statistics, and that it decides which tables need their stats gathered.
    I don't know OEM (Oracle 10g) well; please let me know how to check the job that gathers table statistics, and where to check it.
    Thanks in advance,
    Mahi

    You may query dba_scheduler_job_log, like this:
    SQL> select JOB_NAME, LOG_DATE, STATUS from dba_scheduler_job_log;
    There you should see GATHER_STATS_JOB and its runs.

  • Can we change GATHER_STATS_PROG (auto gather stat job) behavior

    Hi All,
    Is there any way we can change the default Oracle auto gather job?
    We know that the Oracle gather job (GATHER_STATS_JOB, below 11g) triggers based on the maintenance window, and we have a few huge production environments.
    We are thinking of gathering application schema stats with our own scripts, so we would like to know whether we can keep the Oracle auto gather stats job enabled
    but exclude the application schemas.
    Thanks in advance.
    Regards,
    Klnghau

    Hi All,
    Forgot to mention that my Oracle version is 10.2.0.4; I think I found the solution in My Oracle Support:
    Oracle10g: New DBMS_STATS parameter AUTOSTATS_TARGET [ID 276358.1]
    This is a new parameter in Oracle10g for the DBMS_STATS package.
    According to the documentation for this package in file dbmsstat.sql
    (under ORACLE_HOME/rdbms/admin):
    This parameter is applicable only for auto stats collection.
    The value of this parameter controls the objects considered for stats collection.
    It takes the following values:
    'ALL' -- statistics collected for all objects in system
    'ORACLE' -- statistics collected for all oracle owned objects
    'AUTO' -- oracle decides for which objects to collect stats
    The default is AUTO; I think I can change it to ORACLE in my environment.
    Thanks
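The note's suggestion can be applied with DBMS_STATS.SET_PARAM, the 10g interface (in 11g, SET_GLOBAL_PREFS replaces it); a sketch:

```sql
-- Restrict the automatic job to Oracle-owned objects only;
-- application schemas are then left to your own scripts.
BEGIN
  DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET', 'ORACLE');
END;
/

-- Verify the setting.
SELECT DBMS_STATS.GET_PARAM('AUTOSTATS_TARGET') FROM dual;
```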

  • GATHER STATS JOB

    There is one automatic statistics gathering job, GATHER_STATS_JOB. Is it necessary to leave it enabled (the default), or should we disable it?
    Are there situations where it is not advisable to run it?

    Is there any reason why you want to disable it?
    Does it have any negative impact on your situation? Did you investigate this?
    Or do you just want to disable it, because you don't know what it does?
    This job is quite efficient, and is smarter at getting adequate statistics than any job you set up manually!
    If you disable it, statistics will get stale, and Oracle will potentially generate adverse execution plans!
    If you want to exclude parts of the database for statistic calculation, just lock those statistics!
    Sybrand Bakker
    Senior Oracle DBA
    Experts: those who did read documentation.
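If you take the advice to lock statistics for parts of the database, the locks can be verified afterwards; a sketch (the schema name is a placeholder):

```sql
-- STATTYPE_LOCKED shows which tables the auto job will skip.
SELECT table_name, stattype_locked, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'APP_OWNER'
AND    stattype_locked IS NOT NULL;
```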

  • Cannot disable stats job

    SQL> conn / as sysdba
    Connected.
    SQL> exec DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
    BEGIN DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB'); END;
    ERROR at line 1:
    ORA-27476: "SYS.GATHER_STATS_JOB" does not exist
    ORA-06512: at "SYS.DBMS_ISCHED", line 3429
    ORA-06512: at "SYS.DBMS_SCHEDULER", line 2395
    ORA-06512: at line 1
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production

    SQL> col LOG_USER format a10
    SQL> col WHAT format a50
    SQL> select JOB,LOG_USER,LAST_DATE,BROKEN,FAILURES,WHAT,NEXT_DATE from dba_jobs;
           JOB LOG_USER   LAST_DATE B   FAILURES WHAT                                                NEXT_DATE
            27 SYS        15-FEB-10 N          0 wksys.wk_job.invoke(22,44);                         22-FEB-10
            26 SYS        18-FEB-10 N          0 wksys.wk_job.invoke(22,24);                         18-FEB-10
             1 SYSMAN     18-FEB-10 N          0 EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS();        18-FEB-10
          4001 SYS        18-FEB-10 N          0 wwv_flow_cache.purge_sessions(p_purge_sess_older_then_hrs => 24);  18-FEB-10
          4002 SYS        18-FEB-10 N          0 wwv_flow_mail.push_queue(wwv_flow_platform.get_preference('SMTP_HOST_ADDRESS'),wwv_flow_platform.get_preference('SMTP_HOST_PORT'));  18-FEB-10
            46 SYSMAN     18-FEB-10 N          0 EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS();        18-FEB-10
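The ORA-27476 is consistent with 11g having replaced the GATHER_STATS_JOB scheduler job with the 'auto optimizer stats collection' AutoTask, so disabling it goes through DBMS_AUTO_TASK_ADMIN instead. A sketch:

```sql
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);  -- NULL, NULL disables the task in every window
END;
/

-- Confirm the new status.
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto optimizer stats collection';
```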

  • Audit dml vs noaudit dml - session stat huge difference in redo size DB 9.2

    Hi,
    I've just finished a test comparing AUDIT UPDATE TABLE, DELETE TABLE, INSERT TABLE BY USER BY ACCESS with the NOAUDIT DML statements.
    The DB version is 9.2.0.8; the same test was run twice on the same configuration and data, so the results are comparable.
    What concerns me most is the difference in redo size and redo entries. Here is a table with the results:
    noaudit            audit              statname
    486 439,00     878 484,00     calls to kcmgas
    40 005,00     137 913,00     calls to kcmgcs
    2 917 090,00     5 386 386,00     db block changes
    4 136 305,00     6 709 616,00     db block gets
    116 489,00     285 025,00     deferred (CURRENT) block cleanout applications
    1,00     3 729,00     leaf node splits
    361 723 368,00     773 737 980,00     redo size
    4 235,00     50 752,00     active txn count during cleanout
    Could you explain the differences in these statistics, especially in redo size?
    I'm surprised because in 9.2 auditing DML doesn't log the actual SQL statements, only an indication of usage.
    Regards.
    Greg

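For anyone repeating the comparison, the session statistics in the table above can be snapshotted before and after the workload; a sketch:

```sql
-- Snapshot redo-related statistics for the current session.
SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('redo size', 'redo entries', 'db block changes');

-- Run the audited (or non-audited) workload, re-run the query,
-- and subtract the two values to get the redo generated by the workload.
```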

  • Change Update stats job creteria

    Hi ,
    As per my understanding, the update statistics job updates a table's stats if it has changed by more than 50%.
    Please suggest where we can check the value set for the job, and how we can change it.
    Regards,
    Shivam Mittal

    Hi Shivam,
    You can set the "stats_change_threshold" parameter, in "init<DBSID>.sap" file. Check http://help.sap.com/saphelp_bw30b/helpdata/en/02/0ae0c6395911d5992200508b6b8b11/content.htm
    Best regards,
    Orkun Gedik

  • How AUTO stats job decides on histograms and sample size?

    Hi All,
    I am on Oracle 11.2.0.3 .
    I have a table with 161M records. In fact it is an IOT. We are running the default stats collection job from Oracle (not even the time is changed !!).
    This is the nature of stats on this table
    SQL> select column_name, num_distinct, num_nulls, num_buckets, sample_size, histogram
      2  from dba_tab_col_statistics where table_name='TABLE1';
    COLUMN_NAME                    NUM_DISTINCT  NUM_NULLS NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
    COLUMN1                                   1          0           1   162988917 NONE
    COLUMN2                                   0  162988917           0             NONE
    COLUMN3                           119808000          0         254        5548 HEIGHT BALANCED
    COLUMN4                                3048          0         254        5548 HEIGHT BALANCED
    COLUMN5                                   1          0           1   162988917 NONE
    COLUMN6                                 173          0          77        5549 FREQUENCY
    COLUMN7                            43225088          0           1   162988917 NONE
    7 rows selected.
    I have so many questions about this:
    Why is the sample size so small on the columns where a histogram exists?
    Why is the sample size 100% on the columns where there is no histogram? (I am assuming NONE means there is no histogram on the column.)
    COLUMN6 is a really skewed column. There really are 173 distinct values in there. How did Oracle find that correctly with such a small sample size?
    Thanks in advance

    rahulras wrote:
    SQL> select column_name, num_distinct, num_nulls, num_buckets, sample_size, histogram
    2  from dba_tab_col_statistics where table_name='TABLE1';
    COLUMN_NAME                    NUM_DISTINCT  NUM_NULLS NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
    COLUMN1                                   1          0           1   162988917 NONE
    COLUMN2                                   0  162988917           0             NONE
    COLUMN3                           119808000          0         254        5548 HEIGHT BALANCED
    COLUMN4                                3048          0         254        5548 HEIGHT BALANCED
    COLUMN5                                   1          0           1   162988917 NONE
    COLUMN6                                 173          0          77        5549 FREQUENCY
    COLUMN7                            43225088          0           1   162988917 NONE
    7 rows selected.
    Why is the sample size so small on the columns where a histogram exists?
    Because the code decided that the histogram it generated at that size was sufficiently accurate - but I can't tell how it came to that conclusion.
    Why is the sample size 100% on the columns where there is no histogram? (I am assuming NONE means there is no histogram on the column.)
    COLUMN6 is a really skewed column. There really are 173 distinct values in there. How did Oracle find that correctly with such a small sample size?
    The answer is probably that it didn't. I suspect you have the approximate_ndv mechanism enabled, which applies in the first pass of collecting simple stats, so the num_distinct in every case was very close to perfect with what is reported as a 100% sample size. The histograms would then have been collected in a second pass. You have only 77 buckets in the frequency histogram on COLUMN6, which means Oracle knows about just 77 values for that column despite reporting 173 as the number of distinct values. To my mind, the inconsistency between num_distinct (which should have been pretty accurate) and the number of buckets should have made Oracle collect another, larger sample for the histogram.
    Regards
    Jonathan Lewis
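Whether the approximate-NDV path is in play can be checked from the statistics preferences; a sketch (TABLE1 is the table name from the thread):

```sql
-- AUTO_SAMPLE_SIZE (the 11g default) enables the approximate NDV code path.
SELECT DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT', user, 'TABLE1') AS estimate_percent
FROM   dual;
```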

  • Oracle gather stat jobs

    Hi,
    on 11g R2, on Win 2008.
    How can I see when Oracle runs the default stats gathering job?
    Is there a job table to query?
    Thanks.

    Thank you.
    OK, really, thanks. Fantastic.
    I saw that it ran at 23:21.
    select client_name,last_good_date from DBA_AUTOTASK_TASK
    CLIENT_NAME                                                      LAST_GOOD_DATE
    sql tuning advisor                                               06/05/13 23:00:36,381000000 +02:00
    auto optimizer stats collection                                  06/05/13 23:21:48,319000000 +02:00
    auto space advisor                                               06/05/13 23:03:25,992000000 +02:00
    3 rows selected.
    How can we force the auto optimizer stats collection to execute at another time, for example 5 o'clock in the morning?
    Thanks.
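The collection time follows the maintenance windows, so moving the task to 5 a.m. means moving the window it runs in; a sketch using DBMS_SCHEDULER (the window name and interval are examples; check your own window names in DBA_AUTOTASK_WINDOW_CLIENTS):

```sql
-- Shift one maintenance window so it opens at 05:00.
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'REPEAT_INTERVAL',
    value     => 'freq=daily;byday=MON;byhour=5;byminute=0;bysecond=0');
END;
/
```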

  • Difference between emergency state and single user mode ?

    Hi,
    I want to know the difference between the EMERGENCY state, which we normally use for a database marked suspect, and single-user mode.
    Navakanth

    EMERGENCY/suspect describes the state of the database: the database is not available for user activity. Single-user mode, by contrast, is an access restriction: the database is active and available, but for only one user at a time.
    You can refer to:
    http://msdn.microsoft.com/en-us/library/bb522682.aspx
    EMERGENCY
    The database is marked READ_ONLY, logging is disabled, and access is limited to members of the sysadmin fixed server role. EMERGENCY is primarily used for troubleshooting purposes. For example, a database marked as suspect due to a corrupted log file can be
    set to the EMERGENCY state. This could enable the system administrator read-only access to the database. Only members of the sysadmin fixed server role can set a database to the EMERGENCY state.
    SINGLE_USER
    Specifies that only one user at a time can access the database. If SINGLE_USER is specified and there are other users connected to the database the ALTER DATABASE statement will be blocked until all users disconnect from the specified database. To override
    this behavior, see the WITH <termination> clause.
    The database remains in SINGLE_USER mode even if the user that set the option logs off. At that point, a different user, but only one, can connect to the database.
    Before you set the database to SINGLE_USER, verify the AUTO_UPDATE_STATISTICS_ASYNC option is set to OFF. When set to ON, the background thread used to update statistics takes a connection against the database, and you will be unable to access the database
    in single-user mode. To view the status of this option, query the is_auto_update_stats_async_on column in the sys.databases catalog view.
    If the option is set to ON, perform the following tasks:
    Set AUTO_UPDATE_STATISTICS_ASYNC to OFF.
    Check for active asynchronous statistics jobs by querying the sys.dm_exec_background_job_queue dynamic management view.
    If there are active jobs, either allow the jobs to complete or manually terminate them by using KILL STATS JOB.
    --Prashanth
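The two options from the quoted documentation look like this in practice; a sketch (MyDb is a placeholder database name):

```sql
-- Put a suspect database into EMERGENCY for read-only troubleshooting.
ALTER DATABASE MyDb SET EMERGENCY;

-- Restrict access to one user, rolling back other sessions immediately.
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Return to normal multi-user access afterwards.
ALTER DATABASE MyDb SET ONLINE;
ALTER DATABASE MyDb SET MULTI_USER;
```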

  • Partitioned Incremental Table - no stats gathered on new partitions

    Dear Gurus
    Hoping that someone can point me in the right direction to trouble-shoot. Version Enterprise 11.1.0.7 AIX.
    Range partitioned table with hash sub-partitions.
    Automatic stats gather is on.
    dba_tables shows global stats YES analyzed 06/09/2011 (when first analyzed on migration of data) and dba_tab_partitions shows most partitions analyzed at that date and most others up until 10/10/2011 - done by the automatically by the weekend stats_gather scheduled job.
    46 new partitions were added in the last few months, but no stats were gathered on them per dba_tab_partitions, and dba_tables last_analyzed says 06/09/2011 - the date the table was first analyzed by gathering stats manually rather than using the auto stats gatherer.
    Checked dbms_stats.get_prefs set to incremental and all the default values recommended by Oracle are set including publish = TRUE.
    dba_tab_partitions has no values in num_rows, last_analyzed etc.
    dba_tab_modifications has no values next to the new partitions but shows inserts as being 8 million approx per partition - no deletes or updates.
    dba_tab_statistics has no values next to the new partitions. All other partitions are marked as NO in the stale column.
    Checked the dbms_stats job history - it showed that the stats gathering stopped at the end of the automatically allowed window.
    Looked at Grid Control - the stats gather for the table started at 6am Saturday morning and closed 2am Monday morning.
    Checked the recommended Window - and it stopped analyzing that table at 2am exactly having tried to analyze it since Saturday morning at 6am.
    Had expected that as the table was in incremental mode - it wouldn't have timed out and the new partitions would have been analyzed within the window.
    The job_queue_processes on the database = 1.
    Increased the job_queue_processes on the database = 2.
    Had been told that the original stats had taken 3 days in total to gather so via GRID - scheduled a dbms_scheduler (10.2.0.4) - to gather stats on that table over a bank holiday weekend - but asked management to start it 24 hours earlier to take account of extra time.
    The Oracle defaults were accepted (as recommended in various seminars and white papers) - except CASCADE: although I wanted the indexes to be analyzed, I decided that was icing on the cake I couldn't afford.
    Went to work - 24 hours later - checked dba_scheduler_jobs: job running. Checked the stats in dba_tab_statistics: nothing had changed. I had expected to see partition stats for those not gathered first - but a quick check of Grid showed it was doing a SELECT via full table scan - and still on the first datafile!! Some have suggested to watch out for the DELETE taking a long time - but I only saw evidence of the SELECT - so I ran an AWR report - and sure enough, a full table scan on the whole table. Although the weekend gather stats job was also in operation - it wasn't doing my table - but was definitely running against others.
    So I checked the last_analyzed on other tables - one of them is a partitioned table - and they were getting up-to-date stats. But the tables and partitions are ridiculously small in comparison to the table I was focussed on.
    Next day I came in checked the dba_scheduler_job log and my job had completed within 24 hours and completed successfully.
    Horrors of horrors - none of the stats had changed one bit in any view I looked at.
    I got my Excel spreadsheet out and worked out whether, because less than 10% had changed and I'd accepted the defaults, that was why there was nothing in dba_tables to show it had been analyzed when I asked it to be.
    My rough stats showed the changes were around the 20% mark - so gather_table_stats should have picked that up and gathered stats for the new partitions? There was nothing in evidence in any views at all.
    I scheduled the job via Grid 10.2.0.4 for an Oracle database using the incremental stats introduced in 11.1.0.7 - is there a problem at that level?
    I understand there are bugs with incremental tables and gathering statistics in 11.1.0.7 which are resolved in 11.2.0 - however, we've only applied CPUs up to April of last year - is it possible that, being so far behind, we've missed the fix?
    Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
    I'd rather find a solution than save my reputation!!
    Thanks for anyone who replies - I'm not online at work so can't always give you my exact commands done - but hopefully you'll give me a few pointers of where to look next?
    Thanks!!!!!!!!!!!!!

    Save the attitude for your friends and family - it isn't appropriate on the forum.
    >
    I did exactly what it said on the tin:
    >
    Maybe 'tin' has some meaning for you but I have never heard of it when discussing
    an Oracle issue or problem and I have been doing this for over 25 years.
    >
    but obviously cannot subscribe to individual names:
    >
    Same with this. No idea what 'subscribe to individual names' means.
    >
    When I said defaults - I really did mean the defaults given by Oracle - not some made up defaults by me - I thought that by putting Oracle in my text - there - would enable people to realise what the defaults were.
    If you are suggesting that in all posts - I should put the Oracle defaults in name becuause the gurus on the site do not know them - then please let me know as I have wrongly assumed that I am asking questions to gurus who know this suff inside out.
    Clearly I have got this site wong.
    >
    Yes - you have got this site wrong. Putting 'Oracle' in the text doesn't enable people to realize
    what the defaults in your specific environment are.
    There is not a guru that I know of,
    and that includes Tom Kyte, Jonathan Lewis and many others, that can tell
    you, sight unseen, what default values are in play in your specific environment
    given only the information you provided in your post.
    What is, or isn't, a 'default' can often be changed at either the system or session level.
    Can we make an educated guess about what the default value for a parameter might be?
    Of course - but that IS NOT how you troubleshoot.
    The first rule of troubleshooting is DO NOT MAKE ANY ASSUMPTIONS.
    The second rule is to gather all of the facts possible about the reported problem, its symptoms
    and its possible causes.
    These facts include determining EXACTLY what steps and commands the user performed.
    Next you post the prototype for stats
    DBMS_STATS.GATHER_TABLE_STATS (
    ownname VARCHAR2,
    tabname VARCHAR2,
    partname VARCHAR2 DEFAULT NULL,
    estimate_percent NUMBER DEFAULT to_estimate_percent_type
    (get_param('ESTIMATE_PERCENT')),
    block_sample BOOLEAN DEFAULT FALSE,
    method_opt VARCHAR2 DEFAULT get_param('METHOD_OPT'),
    degree NUMBER DEFAULT to_degree_type(get_param('DEGREE')),
    granularity VARCHAR2 DEFAULT GET_PARAM('GRANULARITY'),
    cascade BOOLEAN DEFAULT to_cascade_type(get_param('CASCADE')),
    stattab VARCHAR2 DEFAULT NULL,
    statid VARCHAR2 DEFAULT NULL,
    statown VARCHAR2 DEFAULT NULL,
    no_invalidate BOOLEAN DEFAULT to_no_invalidate_type (
    get_param('NO_INVALIDATE')),
    So what exactly is the value for GRANULARITY? Do you know?
    Well it can make a big difference. If you don't know you need to find out.
    >
    As mentioned earlier - I accepted all the "defaults".
    >
    Saying 'I used the default' only helps WHEN YOU KNOW WHAT THE DEFAULT VALUES ARE!
    Now can we get back to the issue?
    If you had read the excerpt I provided you should have noticed that the values
    used for GRANULARITY and INCREMENTAL have a significant influence on the stats gathered.
    And you should have noticed that the excerpt mentions full table scans exactly like yours.
    So even though you said this
    >
    Had expected that as the table was in incremental mode
    >
    Why did you expect this? You said you used all default values. The excerpt I provided
    says the default value for INCREMENTAL is FALSE. That doesn't jibe with your expectation.
    So did you check to see what INCREMENTAL was set to? Why not? That is part of troubleshooting.
    You form a hypothesis. You gather the facts; one of which is that you are getting a full table
    scan. One of which is you used default settings; one of which is FALSE for INCREMENTAL which,
    according to the excerpt, causes full table scans which matches what you are getting.
    Conclusion? Your expectation is wrong. So now you need to check out why. The first step
    is to query to see what value of INCREMENTAL is being used.
    You also need to check what value of GRANULARITY is being used.
    And you say this
    >
    Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
    I'd rather find a solution than save my reputation!!
    >
    Yet when I provide an excerpt that seems to match your issue you cop an attitude.
    I gave you a few pointers of where to look next and you fault us for not knowing the default
    values for all parameters for all versions of Oracle for all OSs.
    How disingenuous is that?
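The two values under debate, INCREMENTAL and GRANULARITY, can be checked and set per table rather than assumed; a sketch (the schema and table names are placeholders):

```sql
-- Check what the gather job will actually use for this table.
SELECT DBMS_STATS.GET_PREFS('INCREMENTAL', 'APP_OWNER', 'BIG_PART_TABLE') AS incremental,
       DBMS_STATS.GET_PREFS('GRANULARITY', 'APP_OWNER', 'BIG_PART_TABLE') AS granularity
FROM   dual;

-- Incremental global stats need these table preferences (11.1+);
-- INCREMENTAL defaults to FALSE, so it must be set explicitly.
BEGIN
  DBMS_STATS.SET_TABLE_PREFS('APP_OWNER', 'BIG_PART_TABLE', 'INCREMENTAL', 'TRUE');
  DBMS_STATS.SET_TABLE_PREFS('APP_OWNER', 'BIG_PART_TABLE', 'GRANULARITY', 'AUTO');
  DBMS_STATS.SET_TABLE_PREFS('APP_OWNER', 'BIG_PART_TABLE',
                             'ESTIMATE_PERCENT', 'DBMS_STATS.AUTO_SAMPLE_SIZE');
END;
/
```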
