Automatic Gathering Statistics Script

Hi,
We need to schedule some kind of job in our database to monitor it and regularly gather statistics on database objects (at the moment most of our tables have never been analyzed). We have read the Oracle documentation on the DBMS_STATS package, but we are a bit confused by the enormous number of possibilities (gather_schema_stats, gather_table_stats, ...).
Could someone help us with the basic steps a simple script should contain, so that it can be scheduled daily to keep table statistics fresh in the database?
Thanks

Hmm, those docs refer to the 10g version. We have 9.2.0.7 Standard Edition.

You neglected to mention that in your original post, so one had to assume you were on a supported release of Oracle - which you aren't. The 9.2 documentation is here:
http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_stats2.htm#1012974
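
For what it's worth, a minimal sketch of a nightly job on 9.2 might look like the following. The schema name APPOWNER, the 01:00 start time and the 10% sample are placeholders to adjust, not recommendations:

VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(
    job       => :jobno,
    what      => 'BEGIN
                    DBMS_STATS.GATHER_SCHEMA_STATS(
                      ownname          => ''APPOWNER'',
                      estimate_percent => 10,      -- sample 10% of rows; adjust to taste
                      cascade          => TRUE);   -- also gather index statistics
                  END;',
    next_date => TRUNC(SYSDATE) + 1 + 1/24,        -- tomorrow at 01:00
    interval  => 'TRUNC(SYSDATE) + 1 + 1/24');     -- then every night at 01:00
  COMMIT;
END;
/

Once the backlog of never-analyzed tables has been cleared, the same job could pass options => 'GATHER STALE' (which on 9.2 relies on ALTER TABLE ... MONITORING) so that only tables that actually changed are re-analyzed.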

Similar Messages

  • Gathering statistics on partitioned and non-partitioned tables

    Hi all,
    My DB is 11.1
    I find that gathering statistics on partitioned tables is really slow.
    TABLE_NAME      NUM_ROWS    BLOCKS  SAMPLE_SIZE  LAST_ANALYZED  PARTITIONED  COMPRESSION
    O_FCT_BP1      112123170    843140     11212317  8/30/2011 3:5  NO           DISABLED
    LEON_123456    112096060    521984     11209606  8/30/2011 4:2  NO           ENABLED
    O_FCT          115170000    486556       115170  8/29/2011 6:3  YES
    SQL> SELECT COUNT(*) FROM user_tab_subpartitions
      2  WHERE table_name = 'O_FCT'
      3  ;
      COUNT(*)
           112
    I used the following script:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent => 10,
                                    granularity      => 'ALL',
                                    cascade          => false);
    END;
    /
    It takes about 2 minutes to gather statistics on each of the first two tables, but more than 10 minutes for the partitioned table.
    Statistics collection accounts for a large part of the total batch time.
    Most of the batch jobs are full loads, in which case all partitions and subpartitions are affected, so we can't simply gather statistics for a few specified partitions.
    Does anyone have experience with this? Thank you very much.
    Best regards,
    Leon
    Edited by: user12064076 on Aug 30, 2011 1:45 AM

    Hi Leon
    Why don't you gather stats at the partition level? If a partition's data is not going to change after the day it is loaded (a date-range partition, for example), you can simply gather at partition level, as in the sketch below:
    GRANULARITY => 'PARTITION' for partition level and
    GRANULARITY => 'SUBPARTITION' for subpartition level.
    You are gathering global stats every time, which you may not need.
    Edited by: user12035575 on 30-Aug-2011 01:50
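
    For illustration, a partition-level gather on the example table might look like this (the partition name P_20110830 is a placeholder):

    BEGIN
      -- Gather statistics for one partition only; the partition name is an example.
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    partname         => 'P_20110830',
                                    granularity      => 'PARTITION',
                                    method_opt       => 'for all columns size auto',
                                    estimate_percent => 10,
                                    degree           => 4,
                                    cascade          => false);
    END;
    /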

  • AUTOMATIC UPDATE STATISTICS for VB* tables ON automatically

    Hello All,
    We had our ECC system reviewed with SAP, and they recommended that we turn OFF AUTOMATIC UPDATE STATISTICS for the VBDATA, VBHDR and VBMOD tables.
    We executed EXEC sp_autostats '<tablename>', 'OFF', but the status goes back to ON after a while.
    We checked SAP Note 771352 but did not get a clear picture from it.
    The database is MS SQL Server 2008.
    Can someone suggest and share his/her experience.
    Regards,
    Mohit

    Hi Mohit,
    Did you run the sap_z* script?
    What is your SP level? Did you check SAP Note 1702325 - Alerts appear for VB tables?
    Also, I had asked you to use NORECOMPUTE - have you tried that?
    Regards

  • Automatic Optimizer Statistics Collection Enabled still tables not analyzed

    Hello,
    We have an Oracle 11g R1 database. Automatic Optimizer Statistics Collection is enabled, yet I don't see the tables being analyzed. Any suggestions on what setting I might be missing? All tables do get analyzed if I gather statistics manually.
    SQL> select CLIENT_NAME ,STATUS from DBA_AUTOTASK_CLIENT;
    CLIENT_NAME STATUS
    auto optimizer stats collection ENABLED
    auto space advisor ENABLED
    sql tuning advisor ENABLED
    Thanks,
    SK

    user599845 wrote:
    still I don't see the tables being analyzed
    Post the SQL and the results that lead you to this conclusion.
    Realize that statistics can be "collected" without the LAST_ANALYZED column being updated.
    If the data within a table does not change, then nothing would be gained by "updating" the statistics to the same values as before.
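
    For example (a rough sketch; YOUR_SCHEMA is a placeholder), you could check which tables the database actually considers stale and when the auto task last ran:

    -- Tables with stale or missing statistics
    SELECT table_name, last_analyzed, stale_stats
    FROM   dba_tab_statistics
    WHERE  owner = 'YOUR_SCHEMA'
    AND    (stale_stats = 'YES' OR last_analyzed IS NULL);

    -- Recent runs of the auto optimizer stats task
    SELECT client_name, job_status, job_start_time, job_duration
    FROM   dba_autotask_job_history
    WHERE  client_name = 'auto optimizer stats collection'
    ORDER  BY job_start_time DESC;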

  • Disable Automatic Optimizer Statistics

    Hi there
    I wanted to query user_tab_modifications to track the number of rows updated in a week. Since this view is refreshed automatically when Automatic Optimizer Statistics gathering runs, I disabled Automatic Optimizer Statistics. Now I am executing dbms_stats.FLUSH_DATABASE_MONITORING_INFO manually to get the view populated with the number of rows updated.
    My concern is: will I get the exact number of rows updated in a week from user_tab_modifications by doing this? Also, is there anything else that updates this view apart from the optimizer statistics gathered on a table?
    Thanks

    You could try writing some PLSQL on your own.
    How about :
    SQL> create table count_X_updates (update_count number);
    Table created.
    SQL> insert into count_X_updates values (0);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>
    SQL> create table X (col_1 varchar2(5), col_2 varchar2(5), col_3 number);
    Table created.
    SQL> insert into X values ('a','first',1);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>
    SQL> create or replace trigger count_x_updates_trg
      2  after update of col_1,col_2,col_3
      3  on X
      4  for each row
      5  declare prev_cnt number;
      6  begin
      7  update count_X_updates set update_count = update_count+1;
      8* end;
    SQL> /
    Trigger created.
    SQL>  update x set col_1 = 'b', col_2='secnd',col_3=2;
    1 row updated.
    SQL> commit;
    Commit complete.
    SQL> select * from count_X_updates;
    UPDATE_COUNT
               1
    SQL>  update x set col_1 = 'c' where col_3=2;
    1 row updated.
    SQL> commit;
    Commit complete.
    SQL> select * from count_X_updates;
    UPDATE_COUNT
               2
    SQL> select * from x;
    COL_1 COL_2      COL_3
    c     secnd          2
    SQL>
    Note: This trigger code has to be improved because
    a. Multiple sessions might get the same value
    b. It introduces a point of serialisation -- multiple sessions will wait on a row lock on the table count_X_updates -- effectively meaning that all other sessions attempting to update X will wait (even if they are updating different rows in X) until each preceding one issues a COMMIT.
    So, this demo code is only to show you PL/SQL triggers. It cannot be used in production.
    Practice some PL/SQL and read up on autonomous transactions; a rough sketch of that approach follows below.
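
    For example, a rough sketch of the autonomous-transaction variant (same demo tables as above; note that it will also count updates that are later rolled back):

    create or replace trigger count_x_updates_trg
    after update of col_1, col_2, col_3 on X
    for each row
    declare
      pragma autonomous_transaction;  -- the counter commits independently of the main transaction
    begin
      update count_X_updates set update_count = update_count + 1;
      commit;                         -- a commit is required inside an autonomous transaction
    end;
    /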
    Hemant K Chitale

  • Best practices for gathering statistics in 10g

    I would like to get some opinions on what is considered best practice for gathering statistics in 10g. I know that 10g has auto statistics gathering, but that doesn't seem to be very effective as I see some table stats are way out of date.
    I have recommended that we have at least a weekly job that generates stats for our schema using DBMS_STATS (DBMS_STATS.gather_schema_stats). Is this the right approach to generate object stats for a schema and keep it up to date? Are index stats included in that using CASCADE?
    Is it also necessary to gather system stats? I welcome any thoughts anyone might have. Thanks.

    Hi,
    Is this the right approach to generate object stats for a schema and keep it up to date?
    The choices of execution plans made by the CBO are only as good as the statistics available to it. The old-fashioned analyze table and dbms_utility methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance. As we know, the CBO uses object statistics to choose the best execution plan for every SQL statement.
    I spoke with Andrew Holsworth of the Oracle Corp SQL Tuning group, and he says that Oracle recommends taking a single, deep sample and keeping it, only re-analyzing when a change would make a difference in execution plans (not the default 20% re-analyze threshold).
    I have my detailed notes here:
    http://www.dba-oracle.com/art_otn_cbo.htm
    As to system stats, oh yes!
    By measuring the relative costs of sequential vs. scattered I/O, the CBO can make better decisions. Here are the data items collected by dbms_stats.gather_system_stats:
    No Workload (NW) stats:
    CPUSPEEDNW - CPU speed
    IOSEEKTIM - I/O seek time in milliseconds
    IOTFRSPEED - I/O transfer speed in bytes per millisecond
    I have my notes here:
    http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm
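
    As a rough sketch (the one-hour window is only an example), workload system statistics can be captured like this:

    -- Start capturing workload system statistics, let a representative workload run, then stop:
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'START');
    -- ... run a typical workload for a while ...
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'STOP');
    -- Or gather over a fixed interval in one call (interval is in minutes):
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 60);

    And on the CASCADE question: a schema-level gather with cascade => TRUE does include index statistics, for example (YOUR_SCHEMA is a placeholder):

    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'YOUR_SCHEMA', cascade => TRUE);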
    Hope this helps. . . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • How can I create a simple app that will automatically add folder script

    Hi! I hope I can get a little help on this.  I tried searching online and haven't found anything.
    Is there away I can make a simple "application" that will automatically add a folder script to a users folder?
    Basically I need a folder script to run, but I don't want to explain to a user how they'd have to set it up in Automator.
    I'd like to be able to create an application they can double-click. The app tells them to select a folder, then automatically runs the script.
    Does anyone have any ideas of how I'd do this?

    Oh, I get it. I read your post on one of the other pages and didn't quite understand, but now that you say you're making this for another user, it makes sense.
    So what you want to do is have the computer automatically install a script on a customer's computer, right?
    (I'm using "customer" loosely; i.e., just another user.)
    If that's what you'd like to do, then you'll probably have to write an actual program in Xcode, since I imagine automatically installing folder action scripts is highly discouraged by Apple because it would open a huge security hole in the OS. (You wouldn't want some random person sending you a folder action installer disguised as a regular app, LOL.)
    But I will actually suggest the following, which I think will work great for your users:
    Make a regular Automator app, and drag it to your Dock. Now, have the user click and drag a bunch of photos to the application icon, and it will run the app automatically on those files.
    You could try a work flow like this:
    ask the user Are you sure?
    convert pictures
    save pictures to ConvertedPics folder
    pop up a confirmation message saying that everything was resized
    Hope this helps

  • How to automatically create a script based on XML input

    I was wondering if it is possible to automatically generate a script in Flash itself - something like a plug-in I could build that would take in an XML document and generate scripts on certain frames in certain movie clips.
    Or is it better to create a SWF where users load in an XML file and things just happen from that? (Would that be safe? Could I guarantee that no one would be able to screw things up?)

    Here are the steps for your requirement.
    1) In Design Console, create a rule such as user-type = "end user".
    2) Create a role using the above rule, and also create an access policy to provision these two resources using that role.
    3) In Design Console, go to the resource object --> select resource2 --> Dependency tab --> assign resource1.
    4) Try to provision resource 1; it should provision resource 2 as well.

  • (10g) Automatic Optimizer Statistics Collection

    Product: ORACLE SERVER
    Date written: 2006-07-21
    PURPOSE
    This document introduces the 10g new feature Automatic Optimizer Statistics Collection and describes how it works.
    Explanation
    1. Overview
    Optimizer statistics are gathered automatically by GATHER_STATS_JOB. This job is owned by SYS and its OBJECT_TYPE is JOB.
    It gathers statistics for all objects in the database that have missing or stale statistics.
    2. Configuration and behavior of automatic statistics collection
    1) STATISTICS_LEVEL = TYPICAL | ALL
    2) Statistics are gathered by the predefined GATHER_STATS_JOB.
    3) When the job runs, it determines the following:
    - the objects with missing or stale statistics;
    - the appropriate sampling percentage needed to produce good statistics;
    - the columns that require histograms, and the appropriate histogram sizes;
    - the degree of parallelism for statistics collection;
    - the priority order in which objects should have their statistics gathered.
    3. About GATHER_STATS_JOB
    This job is created at database creation time and is managed by the Scheduler.
    GATHER_STATS_JOB gathers statistics by calling the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure.
    This procedure operates very much like DBMS_STATS.GATHER_DATABASE_STATS with the 'GATHER AUTO' option. The difference is that
    GATHER_DATABASE_STATS_JOB_PROC prioritizes the objects whose statistics need to be gathered and processes them in that order;
    that is, the objects whose statistics most need updating are processed first.
    This ensures that the most needed statistics are gathered before the maintenance window closes.
    4. Statistics on dictionary objects
    1) Starting with Oracle Database 10g, statistics can also be gathered on the dictionary tables to obtain optimal performance.
    At any time, statistics on the dictionary tables can be gathered with the DBMS_STATS.GATHER_DATABASE_STATS procedure,
    with the GATHER_SYS argument set to TRUE (or with DBMS_STATS.GATHER_SCHEMA_STATS('SYS')).
    2) A new procedure called DBMS_STATS.GATHER_DICTIONARY_STATS is also available.
    Using it requires the new ANALYZE ANY DICTIONARY system privilege,
    which allows a user who does not have the SYSDBA privilege to analyze dictionary objects and fixed objects.
    3) The GATHER_DATABASE_STATS procedure has a new argument called GATHER_FIXED, which defaults to FALSE;
    that is, by default no statistics are gathered on the fixed tables.
    Analyzing the fixed tables once, while a typical system workload is running, is sufficient.
    4) Statistics on the fixed tables can also be gathered with the GATHER_FIXED_OBJECTS_STATS procedure.
    It is also possible to delete the statistics on all fixed tables, and to export or import statistics for the fixed tables.
    Example
    none
    Reference Documents
    <Note:266040.1>

    Hi,
    Please see here,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41448
    If the tables are changing very frequently, it is better to gather their stats manually; otherwise these volatile tables will keep coming up in the stats job again and again.
    System stats and data dictionary stats are not collected by default, so there is no choice but to gather them manually.
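
    For example (a sketch only; APPOWNER and VOLATILE_TAB are placeholder names), the manual pieces could look like this:

    -- Dictionary statistics (needs the ANALYZE ANY DICTIONARY privilege unless run as SYSDBA):
    EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
    -- Fixed-object (X$) statistics, ideally while a representative workload is running:
    EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
    -- For a volatile table: gather statistics while it holds a representative volume of data,
    -- then lock them so the automatic job does not overwrite them later:
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPOWNER', tabname => 'VOLATILE_TAB');
    EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APPOWNER', tabname => 'VOLATILE_TAB');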
    Aman....

  • Oracle 11g R1 Automatic Installation shell scripts

    Hi guys,
    Can someone please help me with Oracle 11g R1 automatic installation shell scripts? A guide, a how-to or a link would be welcome.
    Kind Regards
    Easyman
    Edited by: Easyman on Feb 2, 2010 3:58 AM

    Hi Easyman,
    sure, just have a look in the $INSTALL_CONF directory. Files starting with swInst* reference a response file from the $INSTALL_CONF/response directory. Either use a sample configuration file or add your own response file to install Oracle.
    Cheers,
    David
    OCP 9i / 10g / 11g
    http://www.oratoolkit.ch/knowledge/howto/installation/otn.php
    P.S.: If you have more questions about oraToolKit please contact me at: http://www.oratoolkit.ch/faq.php

  • Gathering Statistics in 10.7 NCA with 8i Database

    In preparation for the 11i upgrade we are first migrating our database to 8i
    (strongly recommended Category 1 pre-upgrade step 5).
    I have contacted Oracle Support on whether it is recommended to gather
    statistics in a 10.7 NCA environment (optimizer_mode remains RULE). Oracle
    Support says NOT to gather statistics in a 10.7 environment.
    This is contradictory information to several documents I have found.
    Furthermore, Oracle has provided ADXANLYZ.sql and FND_STATS for the 10.7
    environment.
    The following sources recommend gathering statistics in a 10.7 environment:
    1) 10.7 Installation Manual (A47542-1) page A-16
    2) 11i Upgrade Manual (A69411-01) page 2-5
    3) Metalink note 1065584.6
    Can somebody please clarify? Your feedback is much appreciated.
    Thank you,
    Rich

    Originally posted by Rich Cisneros:
    "We will be running 10.7 NCA in a server-partitioned mode for 6-8 months using 7.3.4.4 with 8.1.6.2.
    Should I gather statistics as part of the 8i database upgrade (still 10.7) or as part of the 11i Application upgrade?"
    Rich,
    Gather Statistics is only relevant to databases running with optimiser mode = COST or CHOOSE. Apps 10.7 runs with optimiser mode = RULE so you don't need to gather statistics until you start your actual upgrade to 11i, which will run with CBO.
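
    If in doubt, you can confirm which optimizer mode the instance is actually running with (a quick check, nothing more):

    SELECT name, value FROM v$parameter WHERE name = 'optimizer_mode';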
    Hope this makes it clear.
    Steve

  • Implications of not gathering statistics

    Hi,
    I have a scenario that is provoking vigorous debate in my workplace and may do so here.
    We have a table with about 70 million rows in it. It grows at a rate of about 3-4 million rows a month, with the rows being loaded daily rather than in one lot at the end of the month. Gathering statistics takes about 6 hours, a significant part of our 9-hour window for completing various batch jobs, including loading the new rows.
    The new rows do have quite different values in certain columns (some indexed) from the existing data. However, as these rows are processed over the course of a week they will come to look just like the existing rows.
    What we're considering is to stop gathering statistics after every large data load and instead do it only every few months, on the basis that the new data should not significantly skew the balance of the existing data. However, this has divided opinions.
    The database is running on Oracle 10g R2.
    Any thoughts?
    Russell.

    Oracle's default collection may or may not be the best for you given the size.
    For a table this large you should have the partitioning option. If you do, then you should write your own stats collection to look only at active partitions (a sketch follows below) and, if possible, set the tablespaces of archival data partitions to READ ONLY.
    Then, with respect to collection, do some research on what estimate percentage gives you stats good enough to support a good plan. In 10gR2 this may be a very high number. With 11g and above the default collection seems sufficiently improved that you can trust it.
    SB advises "let Oracle be Oracle" and I agree. Right up until doing that hurts performance and interferes with the primary purpose of the database: Supporting your organization and its customers.
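
    A rough sketch of that idea, driven off the partitions the daily load actually touched (BIG_TABLE is a placeholder; it assumes table monitoring, which is on by default in 10g):

    BEGIN
      DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;   -- make recent DML visible in *_tab_modifications
      FOR p IN (SELECT DISTINCT partition_name
                FROM   user_tab_modifications
                WHERE  table_name = 'BIG_TABLE'
                AND    partition_name IS NOT NULL
                AND    subpartition_name IS NULL)
      LOOP
        DBMS_STATS.GATHER_TABLE_STATS(ownname     => user,
                                      tabname     => 'BIG_TABLE',
                                      partname    => p.partition_name,
                                      granularity => 'PARTITION');
      END LOOP;
    END;
    /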

  • How important is gathering statistics on SYS objects?

    Hi,
    How important is gathering statistics on the data dictionary tables and other X$ tables in the SYS schema? Is it bad to keep these statistics? Recently our senior DBA deleted all the SYS schema stats, saying that they would adversely affect database performance. Is that true?
    Regards
    Satish

    Hi Satish,
    *10g:*
    A new DBA task in Oracle Database 10g is to generate statistics on data dictionary objects which are contained in the SYS schema. The stored procedures dbms_stats.gather_database_stats and dbms_stats.gather_schema_stats can be used to gather the SYS schema stats. Here is an example of using dbms_stats.gather_schema_stats to gather data dictionary statistics:
    EXEC dbms_stats.gather_schema_stats('SYS');
    *9i*
    While it is supported in 9.2 to gather statistics on the data dictionary and fixed views, doing so isn't the norm.
    There is a bug, fixed only in 10gR2 (and not expected to be back-ported to 9.2), that caused this error. The fix there is: don't generate statistics against SYS, and especially not against the fixed tables.
    For this query, let's see if we can get a better plan by removing statistics or by getting better statistics, or if we need to do something else to tune it. Take the SYS statistics as before, but with gather_fixed => false.
    I would like for you to test first by deleting the statistics on these two X$ tables and see how the query runs (elapsed time, plan).
    EXEC DBMS_STATS.DELETE_TABLE_STATS('SYS','X$KQLFXPL');
    EXEC DBMS_STATS.DELETE_TABLE_STATS('SYS','X$KGLOB');
    Then you can take statistics on them using gather_table_stats and check again (elapsed time, plan).
    EXEC DBMS_STATS.GATHER_TABLE_STATS('SYS','X$KQLFXPL');
    EXEC DBMS_STATS.GATHER_TABLE_STATS('SYS','X$KGLOB');
    The issue with this is that the contents of these fixed views, particularly x$kqlfxpl, can change dramatically. Gathering fixed object statistics may help now and cause problems later as the contents change.
    Warning: this is a bit dangerous due to latch contention; see the following note. I've supported a couple of very busy systems that were completely halted for a time by latch contention on x$kglob caused by monitoring queries (particularly against v$open_cursor).
    Note.4339128.8 Ext/Pub Bug 4339128 - Heavy latch contention from queries against library cache views.
    Hope this answers your question . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference":
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • Gathering statistics on interMedia indexes and tables

    Has anyone found any differences (like which one is better or worse) between using the ANALYZE sql command, dbms_utility package, or dbms_stats package to compute or estimate statistics for interMedia text indexes and tables for 8.1.6? I've read the documentation on the subject, but it is still unclear as to which method should be used. The interMedia text docs say the ANALYZE command should be used, and the dbms_stats docs say that dbms_stats should be used.
    Any help or past experience would be greatly appreciated.
    Thanks,
    jj

    According to the Support Document "Using statistics with Oracle Text" (Doc ID 139979.1), no:
    Q. Should we gather statistics on the underlying DR$/DR# tables? If yes/no, why?
    A. The recommendation is NO. All internal recursive queries have hints to fix the plans that are deemed most optimal. We have seen in the past that statistics on the underlying DR$ tables may cause query plan changes leading to serious query performance problems.
    Q. Should we gather statistics on Text domain indexes ( in our example above, BOOKS_INDEX)? Does it have any effect?
    A: As documented in the reference manual, gathering statistics on a Text domain index will help the CBO estimate selectivity and costs for processing a CONTAINS() predicate. If the Text index does not have statistics collected, default selectivity and cost will be used.
    So: 'no' on the DR$ tables and indexes, 'yes' on the user table being indexed (see the sketch below).
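
    For example (BOOKS follows the naming in the note and is just a placeholder), the 'yes' part might look like:

    -- Gather statistics on the user table being indexed; the internal DR$ tables are left alone.
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => user, tabname => 'BOOKS');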

  • Automatic startup/shutdown script - 3 Oracle Home

    Hi All,
    We have a server which has 3 Oracle Home - 8i/9i/10g.
    So i am confused about configuring automatic startup/shutdown scripts.
    Can you guys please let me know how to setup automatic startup/shutdown for these multiple Oracle Homes.
    Thanks,
    Kumar.

    Below is an HP-UX script that can help. Create three scripts, e.g. oracle8, oracle9 and oracle10, each with the correct path values, then decide on the run levels and create soft links to the scripts you created, for example:
    Under /sbin/rc3.d
    ln -s /sbin/init.d/oracle10 S999oracle10
    Under /sbin/rc1.d and /sbin/rc2.d
    ln -s /sbin/init.d/oracle10 C9oracle10
    AUTOMATIC STARTUP/SHUTDOWN SCRIPT
    $vi /sbin/init.d/oracle10
    #!/sbin/sh
    # NOTE: This script is not configurable! Any changes made to this
    # script will be overwritten when you upgrade to the next
    # release of HP-UX.
    # WARNING: Changing this script in any way may lead to a system that
    # is unbootable. Do not modify this script.
    # NOTE:
    # For ORACLE:
    PATH=/usr/sbin:/usr/bin:/sbin
    export PATH
    ORA_HOME="/oracle/app/oracle/product/10.2.0"
    ORA_OWNR="oracle"
    rval=0
    set_return() {
        x=$?
        if [ $x -ne 0 ]; then
            echo "EXIT CODE: $x"
            rval=1
        fi
    }
    case $1 in
    start)
        # Oracle listener and instance startup
        echo -n "Starting Oracle: "
        su - $ORA_OWNR -c "$ORA_HOME/bin/dbstart /oracle/app/oracle/product/10.2.0"
        set_return
        echo "OK DB started"
        ;;
    stop)
        # Oracle listener and instance shutdown
        echo -n "Shutdown Oracle: "
        su - $ORA_OWNR -c "$ORA_HOME/bin/dbshut /oracle/app/oracle/product/10.2.0"
        set_return
        echo "OK DB shutdown"
        ;;
    reload|restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "usage: $0 {start|stop|restart}"
        rval=1
        ;;
    esac
    exit $rval
