Analyze Vs Gather Table Stats

Hi!
I have some confusion about the ANALYZE and gather table stats commands. Please answer my questions to clear this up:
1 - What is the major difference between analyze and gather table stats?
2 - Which is better?
3 - Suppose I am running the analyze/stats command while transactions are being performed on the table being processed. How will performance be affected?
I hope you will support me.
Regards,
Irfan Ahmad

[email protected] wrote:
1 - What is the major difference between analyze and gather table stats?
2 - Which is better?
3 - Suppose I am running the analyze/stats command while transactions are being performed on the table. How will performance be affected?

1. analyze is older and is probably not being extended with new functionality or support for new features.
2. Overall, dbms_stats is better - it should support everything.
3. Any queries running when the analyze takes place will have to use the old stats.
Although dbms_stats is probably better, I find the analyze syntax LOTS easier to remember! The temptation for me to be lazy and just use analyze in development is very high. For advanced environments, though, dbms_stats will probably work better.
There is one other minor difference between analyze and dbms_stats. There is a column in the user/all/dba_histograms views called endpoint_actual_value that I have seen analyze populate with the data value the histogram was created for, but I have not seen dbms_stats populate it.
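A quick way to see this difference for yourself is to build the same histogram both ways and compare endpoint_actual_value; this is only a sketch, and the table and column names are made up:

```sql
-- Gather stats the old way and inspect the histogram endpoints
ANALYZE TABLE test_t COMPUTE STATISTICS FOR COLUMNS status SIZE 10;

SELECT endpoint_number, endpoint_value, endpoint_actual_value
  FROM user_histograms
 WHERE table_name = 'TEST_T' AND column_name = 'STATUS';

-- Now the supported way; re-run the query above and compare
-- what appears in endpoint_actual_value
BEGIN
  DBMS_STATS.gather_table_stats(
    ownname    => USER,
    tabname    => 'TEST_T',
    method_opt => 'FOR COLUMNS STATUS SIZE 10');
END;
/
```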

Similar Messages

  • How to gather table stats for tables in a different schema

    Hi All,
    I have a table present in one schema and I want to gather stats for the table in a different schema.
    I gave GRANT ALL ON SCHEMA1.T1 TO SCHEMA2;
    And when I tried to execute the command to gather stats using
    DBMS_STATS.GATHER_TABLE_STATS (OWNNAME=>'SCHMEA1',TABNAME=>'T1');
    The function is failing.
Is there any way we can gather table stats for tables in one schema from another schema?
    Thanks,
    MK.

    You need to grant analyze any to schema2.
    SY.
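A minimal sketch of the grant plus the cross-schema call, using the schema and table names from the question as placeholders:

```sql
-- As a privileged user: allow SCHEMA2 to gather stats on other schemas' objects
GRANT ANALYZE ANY TO schema2;

-- Then, connected as SCHEMA2: gather stats on SCHEMA1's table
BEGIN
  DBMS_STATS.gather_table_stats(ownname => 'SCHEMA1', tabname => 'T1');
END;
/
```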

  • Gather table stats

    Hi All,
    DB version:10.2.0.4
    OS:Aix 6.1
I want to gather table stats for a table since a query that uses this table is running slow. I also noticed that this table is being accessed with a full table scan and was last analyzed 2 months ago.
I am planning to execute the query below to gather the stats. The table has about 50 million records (COUNT(*) returns 51364617).
I expect this is going to take a long time if I execute it like below.
    EXEC DBMS_STATS.gather_table_stats('schema_name', 'table_name');
    My doubts specified below.
    1. can i use the estimate_percent parameter also for gathering the stats ?
    2. how much percentage should I specify for the parameter estimate_percent?
    3. what difference will it make if I use the estimate_percent parameter?
    Thanks in advance
    Edited by: user13364377 on Mar 27, 2012 1:28 PM

    If you are worried about the stats gathering process running for a long time, consider gathering stats in parallel.
    1. Can you use estimate_percent? Sure! Go for it.
    2. What % to use? Why not let the database decide with auto_sample_size? Various "rules of thumb" have been thrown around over the years, usually around 10% to 20%.
    3. What difference will it make? Very little, probably. Occasionally you might see where a small sample size makes a difference, but in general it is perfectly ok to estimate stats.
    Perhaps something like this:
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'MY_TABLE',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'for all columns size auto',
    cascade          => true,
    degree           => 8);
END;
/

  • Gather table stats taking longer for Large tables

    Version : 11.2
I've noticed that gathering stats (using dbms_stats.gather_table_stats) takes longer for large tables.
Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (it runs SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from row count and index info, what other information is gathered by gather table stats?
Does table size actually matter for stats collection?

    Max wrote:
    Version : 11.2
    I've noticed that gathers stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
    Since row count needs to be calculated, a big table's stats collection would be understandably slightly longer (Running SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats ? Apart from row count and index info what other information is gathered for gather table stats ?
    09:40:05 SQL> desc user_tables
    Name                            Null?    Type
    TABLE_NAME                       NOT NULL VARCHAR2(30)
    TABLESPACE_NAME                        VARCHAR2(30)
    CLUSTER_NAME                             VARCHAR2(30)
    IOT_NAME                             VARCHAR2(30)
    STATUS                              VARCHAR2(8)
    PCT_FREE                             NUMBER
    PCT_USED                             NUMBER
    INI_TRANS                             NUMBER
    MAX_TRANS                             NUMBER
    INITIAL_EXTENT                         NUMBER
    NEXT_EXTENT                             NUMBER
    MIN_EXTENTS                             NUMBER
    MAX_EXTENTS                             NUMBER
    PCT_INCREASE                             NUMBER
    FREELISTS                             NUMBER
    FREELIST_GROUPS                        NUMBER
    LOGGING                             VARCHAR2(3)
    BACKED_UP                             VARCHAR2(1)
    NUM_ROWS                             NUMBER
    BLOCKS                              NUMBER
    EMPTY_BLOCKS                             NUMBER
    AVG_SPACE                             NUMBER
    CHAIN_CNT                             NUMBER
    AVG_ROW_LEN                             NUMBER
    AVG_SPACE_FREELIST_BLOCKS                   NUMBER
    NUM_FREELIST_BLOCKS                        NUMBER
    DEGREE                              VARCHAR2(10)
    INSTANCES                             VARCHAR2(10)
    CACHE                                  VARCHAR2(5)
    TABLE_LOCK                             VARCHAR2(8)
    SAMPLE_SIZE                             NUMBER
    LAST_ANALYZED                             DATE
    PARTITIONED                             VARCHAR2(3)
    IOT_TYPE                             VARCHAR2(12)
    TEMPORARY                             VARCHAR2(1)
    SECONDARY                             VARCHAR2(1)
    NESTED                              VARCHAR2(3)
    BUFFER_POOL                             VARCHAR2(7)
    FLASH_CACHE                             VARCHAR2(7)
    CELL_FLASH_CACHE                        VARCHAR2(7)
    ROW_MOVEMENT                             VARCHAR2(8)
    GLOBAL_STATS                             VARCHAR2(3)
    USER_STATS                             VARCHAR2(3)
    DURATION                             VARCHAR2(15)
    SKIP_CORRUPT                             VARCHAR2(8)
    MONITORING                             VARCHAR2(3)
    CLUSTER_OWNER                             VARCHAR2(30)
    DEPENDENCIES                             VARCHAR2(8)
    COMPRESSION                             VARCHAR2(8)
    COMPRESS_FOR                             VARCHAR2(12)
    DROPPED                             VARCHAR2(3)
    READ_ONLY                             VARCHAR2(3)
    SEGMENT_CREATED                        VARCHAR2(3)
    RESULT_CACHE                             VARCHAR2(7)
09:40:10 SQL>

Does Table size actually matter for stats collection?
Yes.
    Handle:     Max
    Status Level:     Newbie
    Registered:     Nov 10, 2008
    Total Posts:     155
    Total Questions:     80 (49 unresolved)
    why so many unanswered questions?

  • Gather table stats after shrink?

    Hello,
do I need (or is it useful) to run dbms_stats.gather_table_stats after a table shrink, or does Oracle update the statistics automatically?
    Regards,
    Arndt

    Hi,
I'd suggest you write a script to run from crontab to gather statistics automatically.
You can do the same via Enterprise Manager, scheduling it on a daily/hourly basis as you wish.
    select 'Gather Schema Stats' from dual;
    select 'Started: '|| to_char(sysdate, 'DD-MM-YYYY HH24:MI:SS') from dual;
    -- exec dbms_stats.gather_schema_stats('<my_schema>');
    select '- for all columns size auto' from dual;
    exec DBMS_STATS.gather_schema_stats(ownname => '<my_schema>', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', degree => DBMS_STATS.AUTO_DEGREE, cascade => true, force => true);
    select '- for all indexed columns size skewonly' from dual;
    exec  DBMS_STATS.gather_schema_stats(ownname => '<my_schema>', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL INDEXED COLUMNS SIZE SKEWONLY', degree => DBMS_STATS.AUTO_DEGREE, cascade => true, force => true);
select 'Finished: '|| to_char(sysdate, 'DD-MM-YYYY HH24:MI:SS') from dual;
Good luck!
    http://dba-star.blogspot.com/

  • Temp table, and gather table stats

    One of my developers is generating a report from Oracle. He loads a subset of the data he needs into a temp table, then creates an index on the temp table, and then runs his report from the temp table (which is a lot smaller than the original table).
    My question is: Is it necessary to gather table statistics for the temp table, and the index on the temp table, before querying it ?

It depends. Yesterday I had a very bad experience with stats: one of my tables had NUM_ROWS = 300 while COUNT(*) showed 7 million, and the database version is 9.2.0.6 (bad, with optimizer bugs), so queries started breaking with lots of buffer busy and latch free waits. It took a while to figure out, but I deleted the stats and everything came under control. My point is that statistics can be good or bad. Once you start collecting them, you should keep an eye on them.
    Thanks.
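If you hit the situation described above, where wrong statistics make plans worse, the stats can simply be deleted so the optimizer falls back to defaults (or dynamic sampling, depending on version). A sketch; the table name is a placeholder:

```sql
BEGIN
  -- Remove the misleading statistics for one table
  DBMS_STATS.delete_table_stats(ownname => USER, tabname => 'MY_TABLE');
END;
/
```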

  • Question on gather table stats.

    Guys,
I am trying to gather stats on a table and it is taking a long time to finish (it had been running for about 28 hours). After dropping some partitions and creating a couple of partitions by splitting the lead partition, I executed the command below to gather stats on the table along with the newly created partitions. Our database is on Oracle 10g and the OS is Windows 2003 Server. Can someone please let me know if I am missing any parameters?
    exec dbms_stats.gather_table_stats (-
    ownname => 'TRACK', -
    tabname => 'TRK_DTL_STAGE', -
    estimate_percent => dbms_stats.auto_sample_size, -
    cascade => true, -
    degree => 5);
    Thanks for all your help.

    user12055522 wrote:
    Thanks for the reply. I am not seeing any waits and also I see activity against this session as I looked in the long ops. Will I be better off to kill the current session and just grab the stats for the partitions which I created from the split.How can that be possible that you do not see any waits, it could be that you did not cared to check.
    Killing the existing session would be your decision and i do not see any problem in doing that so far with it.
    Regards
    Anurag

  • Gather table stats takes time for big table

The table has millions of records and is partitioned. When I analyze the table using the following syntax it takes more than 5 hours, and the table has one index.
I tried with auto sample size and also by changing the estimate percentage value (20, 50, 70, etc.), but the time is almost the same.
exec dbms_stats.gather_table_stats(ownname=>'SYSADM',tabname=>'TEST',granularity =>'ALL',ESTIMATE_PERCENT=>100,cascade=>TRUE);
What should I do to reduce the analyze time for big tables? Can anyone help me?

    Hello,
The behaviour of ESTIMATE_PERCENT may change from one release to another.
In some releases, when you specify a "too high" (> 25%, ...) ESTIMATE_PERCENT, you in fact collect the statistics over 100% of the rows, as in COMPUTE mode:
Using DBMS_STATS.GATHER_TABLE_STATS With ESTIMATE_PERCENT Parameter Samples All Rows [ID 223065.1]
For later releases, 10g or 11g, you have the possibility to use the following value:
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
In fact, you may use it even in 9.2, but in that release it is recommended to use a specific estimate value.
Moreover, starting with 10.1 it is possible to schedule the statistics collection by using DBMS_SCHEDULER and to specify a window so that the job doesn't run during production hours.
So, the answer may depend on the Oracle release and also on the application (SAP, PeopleSoft, ...).
    Best regards,
    Jean-Valentin
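The scheduling suggestion can be sketched with DBMS_SCHEDULER; the job name and repeat interval here are placeholder assumptions, reusing the schema/table from the question:

```sql
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'GATHER_BIG_TABLE_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN
                            DBMS_STATS.gather_table_stats(
                              ownname          => 'SYSADM',
                              tabname          => 'TEST',
                              estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                              cascade          => TRUE);
                          END;]',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',  -- run at 02:00, outside production hours
    enabled         => TRUE);
END;
/
```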

  • Question on Exporting Table Stats to another database

    I have a question exporting Table Stats from one schema in database A to another schema in another database B.
    Currently running Oracle 9.0.2.6 for unix in both prod and dev.
    Currently table stats are gathered using the ANALYZE TABLE command. We currently don't use the DBMS_STATS package to gather table statistics.
    Question:
If I execute DBMS_STATS.EXPORT_TABLE_STATS in database A, can I import the stats into database B if I'm only using ANALYZE TABLE to gather table stats? Do I need to execute DBMS_STATS.GATHER_TABLE_STATS in database A prior to executing DBMS_STATS.EXPORT_TABLE_STATS?
    The overall goal is to take table stats from Production in its current state and import them into a Development environment to be used for testing data / processes.
    Yes we will be upgrading to Oracle 10 / 11g in near future.
    Yes we will be changing our method of gathering table stats by using the DBMS_STATS package.
    Thanks,
    Russ D

    Hi,
If I execute DBMS_STATS.EXPORT_TABLE_STATS in database A, can I import them to database B if I'm only using ANALYZE TABLE to gather table stats?
You need to use the DBMS_STATS package for the gather and export steps if you want to migrate the stats to another database.
Do I need to execute DBMS_STATS.GATHER_TABLE_STATS in database A prior to executing DBMS_STATS.EXPORT_TABLE_STATS?
Yes, you need to execute DBMS_STATS.GATHER_TABLE_STATS first.
    Good luck.
    Regards.
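The gather/export/import sequence might look like the sketch below; the schema, table, and stats-table names are placeholders:

```sql
-- In database A: gather, then export into a user stats table
BEGIN
  DBMS_STATS.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP');
  DBMS_STATS.create_stat_table(ownname => 'SCOTT', stattab => 'MY_STATS');
  DBMS_STATS.export_table_stats(ownname => 'SCOTT', tabname => 'EMP',
                                stattab => 'MY_STATS');
END;
/

-- Move the MY_STATS table to database B (e.g. with exp/imp), then:
BEGIN
  DBMS_STATS.import_table_stats(ownname => 'SCOTT', tabname => 'EMP',
                                stattab => 'MY_STATS');
END;
/
```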

  • OMBPlus "Analyze table statements" and "Analyze table sample percentage"

    Hello!
I want to use OMBPlus to fetch the properties a mapping has for "Analyze table statements" and "Analyze table sample percentage".
I have tried to use OMBRETRIEVE MAPPING but I can't find the right GET PROPERTIES.
    I hope YOU can help me.
    Best regards,
    Tina

    Hi Tina,
    try
    OMBRETRIEVE MAPPING 'MAP_DUMMY' GET PROPERTIES (ANALYZE_TABLE_STATEMENTS)
    OMBRETRIEVE MAPPING 'MAP_DUMMY' GET PROPERTIES (ANALYZE_TABLE_SAMPLE_PERCENTAGE)
    Regards,
    Carsten.

  • Different Ways To Gather Table Statistics

    Can anyone tell me the difference(s), if any, between these two ways to refresh table stats:
    1. Execute this procedure which includes everything owned by the specified user
    EXEC DBMS_STATS.gather_schema_stats (ownname => 'USERNAME', degree => dbms_stats.auto_degree);
2. Execute this statement for each table owned by the specified user
    analyze table USERNAME.TABLENAME compute statistics;
    Generally speaking, is one way better than the other? Do they act differently and how?

In Oracle's automatic stats collection, not all objects are included.
Only tables with stale stats are taken for stats collection. I don't remember off the top of my head, but it's either 10% or 20%, i.e. tables where more than 10% (or 20%) of the data has changed are marked as stale. Only those stale objects will be considered for stats collection.
Do you really think each and every object/table has to be analyzed every day? How long does it take when you gather stats for all objects?
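One way to check which tables the automatic job currently considers stale (a sketch for 10g+, where table monitoring is on by default):

```sql
BEGIN
  -- Flush in-memory DML counters so STALE_STATS is up to date
  DBMS_STATS.flush_database_monitoring_info;
END;
/

-- Tables flagged stale are candidates for the next automatic gather
SELECT table_name, last_analyzed, stale_stats
  FROM user_tab_statistics
 WHERE stale_stats = 'YES';
```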

  • How do you analyze your Siebel tables for best performance?

    Do you use any particular script to analyze Siebel tables? We are having some performance degradation...
    Do you use the scripts from Oracle support note 781927.1?
    Thank you.

    Hi,
    We do not follow that document because many years ago we tried following it to the letter and the CBO did not perform very well. Once we saw this we spent many months coming up with a solution. Here is an outline of what we do.
    •     Implement the PPS init.ora settings on top of the ones recommended by Oracle for Siebel.
    •     Gather both schema and system (the Siebel documentation does not even mention these) stats according to our standards. Here we run dbms_stats for the Siebel schema and sample 100% of the rows and only collect high and low values (for all columns size 1).
    •     We always gather our stats in a test environment that is a recent copy of production.
    •     We always test our stats before moving them into prod. We use our SQL Playback utility to do this. Likewise you can purchase Oracle’s Real Application Testing.
    •     Once tested we then backup the current stats and copy the new stats into production.
The key point here is that we always test our stats. Almost 100% of Oracle DBAs don't do this, but look for this trend to start changing. Would you ever move a new SRF into production without testing it?
    If you are having performance problems and are licensed to use it Oracle’s AWR facility provides you everything you need to find and fix your issues.
    HTH,
    R
    Robert Ponder

  • Gather Schema Stat running long in R12

    Hi:
I scheduled Gather Schema Stats (all with 10%) on two PROD instances last night. One completed in about 2 hours (this one normally has more activity) while the other has still been running since 12:00 am. Please give me the steps and commands for how I should troubleshoot. Is it still running? I have checked and there is no blocking. BTW, I did one Gather Schema Stats for AP with 10% yesterday afternoon; it completed in less than 2 minutes. They are on 12.1.3 and 11.1.0.7 on Linux. Thank you in advance.

    I guess I have to do one of these. Thank you.
Another question: why do some tables not have a date under LAST_ANALYZED? When I checked ALL_TABLES sorted on LAST_ANALYZED, I saw about 1/3 of the tables with a blank date. They are from all kinds of schemas, e.g. the two tables below. But some AR or AP tables are analyzed.
    AR_SUBMISSION_CTRL_GT
    AP_PERIOD_CLOSE_EXCPS_GT

  • Did analyzing of a table make faster execution????

Does analyzing a table make execution faster? I have written a package and it is taking 30 minutes to execute. Please help me.

    - I assume you mean "Does analyzing a table improve the performance of queries?"
    - What version of Oracle?
    - Are you using the rule-based optimizer (RBO)? Or the cost-based optimizer (CBO)?
    - Have statistics ever been gathered on the table(s) involved in your procedure? If so, are the statistics still accurate? Or has there been a substantial amount of change to the table since statistics were last gathered?
    - How do you normally gather statistics.
If you are using the CBO and your statistics are missing or out of date, gathering fresh statistics should give the optimizer better information, from which it should generate better query plans. If your code is slow because a particular SQL statement is using a suboptimal plan due to incorrect statistics, gathering stats would improve the performance of your procedure. But there are many other reasons code might be slow, and because we don't know whether the optimizer has made an incorrect decision, whether the existing statistics are misleading, or even whether statistics matter to the optimizer you're using, we can't say anything more definitive than "maybe".
    Justin

  • Why Oracle not using the correct indexes after running table stats

I created an index on the table and ran a SQL statement. I found via the explain plan that the index was being used and the plan was cheaper, which is what I wanted.
Later I gathered stats on all tables and found, again via the explain plan, that the same SQL was now using a different index and a more costly plan. I don't know what is going on. Why is this happening? Any suggestions?
    Thx

I just wanted to know the cost of using the index.
To gather histograms, use the following (method_opt is the parameter that causes the package to collect histograms):
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS (
    ownname          => 'SCHEMA',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    block_sample     => TRUE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    degree           => 4,
    granularity      => 'ALL',
    cascade          => TRUE,
    options          => 'GATHER');
END;
/
