Statistics on tables

Hi,
Does the ANALYZE TABLE command actually delete statistics before regenerating new stats? If a query is issued while the analyze is running, will it have any stats available to it?
Thanks
Vissu

Vissu,
That's an interesting question. I would hope that Oracle offers the same concurrency model to metadata queries as it does to any other query, so a session that is reading statistics should see a consistent view of the data until its statement has finished. I'd be interested in hearing other opinions...
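For anyone worried about that window, one defensive option is to export the current statistics to a user statistics table before regathering, so they can be re-imported if the new stats misbehave. A minimal sketch (the schema, table, and stats-table names are illustrative):

exec dbms_stats.create_stat_table('SCOTT', 'STATS_BACKUP');
exec dbms_stats.export_table_stats('SCOTT', 'EMP', stattab => 'STATS_BACKUP');
exec dbms_stats.gather_table_stats('SCOTT', 'EMP');

If the new stats cause plan regressions, dbms_stats.import_table_stats restores the exported set.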

Similar Messages

  • How to check/verify running sql in lib cache is using updated statistics of table

How can I check/verify that a running SQL statement in the library cache is using the updated statistics of a table used in its FROM clause?
One of my application tables is highly busy, i.e. frequent updates/inserts/deletes.
We gather table stats every 30 minutes.

    Hello, "try dynamic sampling" = think "outside the box", maybe hit two birds with same stone.
    As a matter of fact, I was just backing up your statement: "30 minutes seems pretty extreme"
    cheers
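One way to answer the original question, i.e. whether a cached cursor was parsed before or after the last stats gather, is to compare the table's LAST_ANALYZED with the cursor's LAST_LOAD_TIME. A rough sketch (the owner and table names are illustrative):

select t.last_analyzed
from   dba_tab_statistics t
where  t.owner = 'APP' and t.table_name = 'BUSY_TABLE';

select s.sql_id, s.child_number, s.last_load_time
from   v$sql s
where  s.sql_text like '%BUSY_TABLE%';

A child cursor loaded before LAST_ANALYZED was optimized with the old statistics; gathering stats normally invalidates dependent cursors (subject to the no_invalidate setting), so they get re-parsed with the new stats on their next execution.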

  • How to Gather Statistics of Tables and Indexes

    Hi all,
Please help me with gathering statistics for tables and indexes.
    Thanks

For tables:
exec dbms_stats.gather_table_stats('SCOTT', 'EMPLOYEES');
For indexes:
exec dbms_stats.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
Check this link for details:
    http://nimishgarg.blogspot.com/2010/04/oracle-dbmsstats-gather-statistics-of.html
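If you want the table and its indexes covered in a single call, cascade => TRUE gathers index statistics along with the table; a sketch following the SCOTT example above:

exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMPLOYEES', cascade => TRUE);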

  • RSRV: Oracle statistics info table for table /BIC/FCube1 may be obsolete

    Hi,
    I run RSRV on cube1 and got several yellow lines such as:
    Oracle statistics info table for table /BIC/FCube1 may be OBSOLETE
    Oracle statistics info table for table /BIC/SSalesOrg may be OBSOLETE
    Oracle statistics info table for table /BIC/DCube1P may be OBSOLETE
    DB statistics for Infocube Cube1 must be recreated.
I read here on SDN that running Correct Error in RSRV is only a temporary fix and that the best solution is to fix it at the database level with BRCONNECT.
But the DBA says she has already run BRCONNECT, yet there was no change in the several lines that came out as yellow ... still the same OBSOLETE messages.
1. Any additional suggestions to fix these problems at the database level?
2. In case I decide to fix this with Correct Error in RSRV, what issues can I encounter with the cube?
Can this lead to a failure of the cube?
Will users encounter any issues with reports?
Does fixing the OBSOLETE in the error message in RSRV have any hazards?
    Thanks

Hi,
It is years of data, but how do you decide that the data is large enough to warrant creating a new cube?
You noted that I should
"verify if it makes sense to copy the data into a new cube"
How do I verify that?
Is creating a new cube the only solution to this OBSOLETE problem?
Why is it referring to only particular tables as OBSOLETE, and doesn't that indicate that this is not a problem with the overall cube?
Thanks

  • Null statistics for tables but still got optimizer problem

    Hi,
We have a batch application on a 10.2.0.4 Oracle database that does a lot of deletes and inserts when it runs. Nightly statistics gathering was not enough, so I deleted and locked the statistics for all tables. Now they have null statistics, so the optimizer is supposed to use dynamic sampling instead of statistics. At the beginning the query executions were fine; however, after the application had run for a while, it still chose a very bad execution plan (with a cost over ten times higher than normal).
What is happening here? Are null statistics still not good enough for a highly volatile database? Is there anything else I can do?
    Thanks in advance.
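For reference, the delete-and-lock approach described above (a sketch; the schema and table names are illustrative) is:

exec dbms_stats.delete_table_stats('APP_OWNER', 'TAB1');
exec dbms_stats.lock_table_stats('APP_OWNER', 'TAB1');

With statistics absent and locked, the optimizer falls back on dynamic sampling at hard parse time, so a plan is only as good as the sample taken at that moment; a cursor parsed against an unrepresentative intermediate state of the table can then persist in the library cache.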

Provide information such as the structure of your table (including indexes), your compatibility and optimizer settings, your SQL statement, and a sample explain plan of the SQL.

  • Gather statistics on table slow down after partitioning + 10.2.0.4

    Hi All,
    Oracle 10.2.0.4
    OS: Sun solaris
We have a table with 36 indexes and 60 columns; the total number of rows in the table is 19 million.
The issue: we used to collect statistics after a bulk load (insert) into the table, and it would take at most 3 hours to complete.
Recently we partitioned the table (range partition) based on the year column, like below:
PARTITION BY RANGE (year)
(PARTITION A_2006 VALUES LESS THAN ('2007'),
PARTITION A_2007 VALUES LESS THAN ('2008'),
PARTITION A_2008 VALUES LESS THAN ('2009'),
PARTITION A_2009 VALUES LESS THAN ('2010'),
PARTITION A_MAX VALUES LESS THAN (MAXVALUE));
    All the indexes are local indexes now.
Now, if we collect the statistics, it takes more than 6 hours to complete. Please find the gather stats call below:
exec dbms_stats.gather_table_stats( 'SCHEMANAME','TABLENAME',method_opt=> 'for all indexed columns', cascade => true );
Please advise on solving the issue....
    TIA,

ORCLDB wrote:
Hi All,
Any help...
I won't be able to give you any valid suggestion, as I have not come across many such cases.
If you don't mind, here are a few pointers:
1) You may want to check (from a different session) what the stats collection routine is spending its time on.
2) You may want to collect stats on each partition separately (and then collect global stats) to see if any particular partition is the culprit; see the sketch after this list.
3) You may want to collect stats on the table and on its indexes separately (i.e. not use cascade=>true) to see which one is taking the time.
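A per-partition gather along the lines of pointer 2 might look like this (a sketch; the schema, table, and partition names follow the example above):

exec dbms_stats.gather_table_stats('SCHEMANAME', 'TABLENAME', partname => 'A_2009', granularity => 'PARTITION', cascade => true);
exec dbms_stats.gather_table_stats('SCHEMANAME', 'TABLENAME', granularity => 'GLOBAL');

Timing each call should show whether one partition, the global pass, or the index cascade is what dominates the 6 hours.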

  • BW Statistics - RSDDSTATWHM table

    Hi all,
Does somebody know about the "Action" field in the RSDDSTATWHM table and its possible values?
Some values are sometimes confusing, for example:
- action 10 : process in source system
- action 12 : extractor
What is the difference between these two actions?
    Thanks
    Vincent

The ACTION field (Procedure in WHM) represents the different stages of the data load process, recording the time spent in each stage. For example:
10 represents the total time spent in the source system during the data load process, whereas 12 represents only the time spent by the extractor in selecting the records (not the data transmission), so 10 includes 11 and 12. This field translates to InfoObject 0TCTWHMACT, used in cube 0BWTC_C05. Check out a few queries on the cube mentioned and compare the times to the monitor entries.
    Gopal

  • Oracle table statistics

    Dear All Members,
Is there any way to check whether the current statistics on a table are outdated in Oracle? On OTN I found that using the view "DBA_TAB_MODIFICATIONS" we can gather timestamps of statistics.
But can we use this to compare the quality of statistics in DB tables?
    Regards,
    Shanaka.

    Hi Shanaka,
As far as I know, the DBA_TAB_MODIFICATIONS view is also what SAP uses to assess the quality of statistics. There is one exception: at the first step, without paying attention to the DBA_TAB_MODIFICATIONS records, the system creates new statistics using the "-f stats -t all" parameters. At the second and further steps, new stats are created only for the tables that need to be collected.
In short, because the DB20 transaction reflects the DBA_TAB_MODIFICATIONS view when storing the quality of the table stats, both values should be the same.
    In addition to the information above, you can read the notes, below;
    Note 588668 - FAQ: Database statistics
    Note 408527 - Checking the statistics using DBA_TAB_MODIFICATIONS,
    Best regards,
    Orkun Gedik
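On the Oracle side, a quick way to inspect the modification counters yourself (a sketch; the schema name is illustrative, and the flush call writes the in-memory counters to the view first) is:

exec dbms_stats.flush_database_monitoring_info;
select table_name, inserts, updates, deletes, timestamp
from   dba_tab_modifications
where  table_owner = 'SAPSR3';

Oracle's default staleness rule treats roughly 10% changed rows as stale, which gives a baseline for judging whether the counters you see warrant a regather.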

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL.
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
Original Query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
          (
          SELECT *
          FROM
               TABLE(
                    SELECT
                         CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                              OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                    FROM
                         dual
               )
          )     tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          2  rows processed
Indexes are present in both tables ( TG_FILE, TG_FILE_DATA ) on column FILE_ID.
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
Ideally, the view should have used a NESTED LOOP join to exploit the indexes, since the number of rows coming from the object collection is only 2.
But the optimizer takes the default estimate of 8168 rows, leading to a HASH join between the tables and full table scans.
So my question is: is there any way to change the statistics while using collections in SQL?
I can use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        (
        SELECT *
        FROM
            TABLE(
                 SELECT
                      CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                           OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                 FROM
                      dual
            )
        ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

Thanks Tubby,
While searching I had found that we can use the CARDINALITY hint to set statistics for a TABLE function.
But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it in my first post.
http://www.oracle-developer.net/display.php?id=427
Going through that article, it mentions three hints plus one framework for setting statistics:
1) CARDINALITY (undocumented)
2) OPT_ESTIMATE (undocumented)
3) DYNAMIC_SAMPLING (documented)
4) Extensible Optimizer
I tried it out with the different hints, and it works as expected,
i.e. cardinality and opt_estimate take the value set in the hint,
but the dynamic_sampling hint provides the most accurate estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */*
    FROM
     TABLE(
          SELECT
               CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                    OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
          FROM
               dual
     ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */*
    FROM
     TABLE(
          SELECT
               CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                    OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
          FROM
               dual
     ) e ;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */*
    FROM
     TABLE(
          SELECT
               CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                    OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
          FROM
               dual
     ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
Note
   - dynamic sampling used for this statement (level=2)
I will be testing the last option, the "Extensible Optimizer", and will put my findings here.
I hope Oracle, in future releases, improves statistics gathering for collections that can be used in DML, rather than just using the default derived from the block size.
By the way, do you know why it uses the default block size? Is it because it is the smallest granular unit which Oracle provides?
    Regards,
    B
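For anyone following up on the Extensible Optimizer option, here is a minimal sketch of the ODCIStats2 route, assuming a hypothetical parameterless pipelined function get_attachments returning TABLE_ESC_ATTACH (the type name, function name, and hard-coded cardinality are all illustrative):

CREATE OR REPLACE TYPE attach_stats_ot AS OBJECT (
     dummy NUMBER,
     STATIC FUNCTION ODCIGetInterfaces (p_interfaces OUT SYS.ODCIObjectList)
          RETURN NUMBER,
     STATIC FUNCTION ODCIStatsTableFunction (p_function IN SYS.ODCIFuncInfo,
                                             p_stats    OUT SYS.ODCITabFuncStats,
                                             p_args     IN SYS.ODCIArgDescList)
          RETURN NUMBER
);
/
CREATE OR REPLACE TYPE BODY attach_stats_ot AS
     STATIC FUNCTION ODCIGetInterfaces (p_interfaces OUT SYS.ODCIObjectList)
          RETURN NUMBER IS
     BEGIN
          -- declare which extensibility interface this type implements
          p_interfaces := SYS.ODCIObjectList(SYS.ODCIObject('SYS', 'ODCISTATS2'));
          RETURN ODCIConst.success;
     END;
     STATIC FUNCTION ODCIStatsTableFunction (p_function IN SYS.ODCIFuncInfo,
                                             p_stats    OUT SYS.ODCITabFuncStats,
                                             p_args     IN SYS.ODCIArgDescList)
          RETURN NUMBER IS
     BEGIN
          -- tell the optimizer to expect 2 rows from the table function
          p_stats := SYS.ODCITabFuncStats(2);
          RETURN ODCIConst.success;
     END;
END;
/
ASSOCIATE STATISTICS WITH FUNCTIONS get_attachments USING attach_stats_ot;

Unlike the hints, the association is done once and then applies to every query that calls the function.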

• How do table statistics affect insert statements

I have 2 DBs. I have a process in both DBs which loads records into tab1 using SQL*Loader. Tab1 in DB1 has not been analyzed since 2005, while tab1 in DB2 was recently analyzed. My SQL*Loader process is slow in DB1 compared to DB2.
I am wondering how a SQL*Loader insert relates to the statistics of the table. Please advise.
    Thanks

Hi,
Are you speaking about two identical instances, where the only difference is the statistics?
There are several aspects that can have an impact: the instance configuration, the session configuration, the storage parameters, the DB workload and so on.
Have you first checked what is constant in both instances? That can be a good starting point...

  • Gather statistics or analyze table

    What is the difference between gather statistics for table and analyze table?
    Regards
    Arpit

    Analyzing a table is gathering statistics (whether you're using the old ANALYZE statement or the preferred dbms_stats package).
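To make the equivalence concrete, here are the two forms side by side (a sketch; the SCOTT.EMP names are illustrative):

analyze table scott.emp compute statistics;
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);

For optimizer statistics, ANALYZE ... COMPUTE/ESTIMATE is deprecated; DBMS_STATS supports parallel gathering, histograms via method_opt, and stats locking, while ANALYZE remains for tasks like VALIDATE STRUCTURE and LIST CHAINED ROWS.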

  • Run optimizer statistics for one table

Dear all,
I noticed an error in DB16, namely "Missing statistics for a table": SAPGRP.MC03BF0SETUP.
How do I run/generate the stats for this table?

Dear Somckit,
here is the error message from DB16:
Description         Table: SAPGRP.MC03BF0SETUP # Table or index has no optimizer statistics
Correction Type     D
Corrective Action   Collect optimizer statistics
Check Log           /oracle/GRP/sapcheck/cdwqhqnz.chk
    Single Messages                                                                      
    No.   Description                                                                    
    1     Table: SAPGRP.MC03BF0SETUP # Table or index has no optimizer statistics        
    2     Table: SAPGRP.MC03BX0SETUP # Table or index has no optimizer statistics        
    3     Table: SAPGRP.MC03UM0SETUP # Table or index has no optimizer statistics        
    4     Index: SAPGRP.MC03BF0SETUP~0 # Table or index has no optimizer statistics      
    5     Index: SAPGRP.MC03BX0SETUP~0 # Table or index has no optimizer statistics      
    6     Index: SAPGRP.MC03UM0SETUP~0 # Table or index has no optimizer statistics                                                                               
Thank you.
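For the record, statistics for a single table can be collected at the database level with BRCONNECT (a sketch; run it on the database host as the administration user, with the table name taken from the log above):

brconnect -u / -c -f stats -t sapgrp.mc03bf0setup

Running it with -t all instead checks all tables and refreshes only those whose statistics BRCONNECT considers outdated.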

• Analyze table ... estimate statistics

    I have read in this discussion group that it was a good idea to analyze the spatial index table to optimize performance.
    Should this be done:
    - only once?
    - after loading the layer?
    - each time it is massively updated?
    BTW what does "sample 32 percent" mean?
    Thanks,
    Jean-Pierre

    Hi Jean-Pierre,
    I've never had to run this more than once (after the spatial index was created on an initial data set), but I don't do a lot of updates usually.
    There are two ways to analyze a table:
    analyze table table_name compute statistics;
    analyze table table_name estimate statistics sample N percent; --- where N is an integer
Sometimes I have index tables with millions and millions of rows. If I run compute statistics, it can take an hour or more. If I say estimate statistics sample 1 percent, it completes in seconds/minutes and all of the right things happen with respect to performance. I use sample so I can work with the data faster. BTW, this is only for quadtree indexes. It doesn't hurt to gather stats on an R-tree index table, but it doesn't help either.
    The procedure I posted happens to sample about 32 percent of the rows, but there is nothing magical about this number.

  • Index Statistics Update - Problem

We had a performance problem yesterday with FI report FAGLL03: it timed out in online execution, and in background mode it took 5000+ seconds to execute. The result was no more than 100 records.
With some investigation, the problem drilled down to index usage on table FAGLFLEXA. We then updated the index statistics of the table from DB02. After that, the report worked fine, with an execution time of 10-15 seconds for the same set of input.
However, this morning a user was complaining again about a performance problem with the same report FAGLL03. We ran the update of index statistics again and, as was the case yesterday, it fixed the problem.
Later today I checked SQL Server: the job SAP CCMS_xxx_xxx_Update_Tabstats, which I guess updates index statistics daily at 04:00, is working fine. I can't see any error in its log. The daily job to check database consistency is also not reporting anything.
Any idea what could be going wrong?
Basis consultants are looking into the problem; however, I am posting this case here in case any of you has had the same problem and fixed it.
Thanks,
Pawan.
    Edited by: Pawan Kesari on Dec 11, 2009 4:05 PM

Hi,
It appears the stats are dropped every time the job runs at 04:00.
Have a look at the table DBSTATC in transaction DB21 to see if it is set up to drop the stats.
Mark

  • DB02 view is empty on Table and Index analyses  DB2 9.7 after system copy

    Dear All,
I did a quality refresh using the system copy export/import method: ECC6 on HP-UX, DB2 9.7.
After the import, the runstats status in DB02 for table and index analysis was empty and all values showed '-1', even though
a) all standard background jobs are scheduled in SM36
b) automatic runstats is enabled in the DB2 parameters
c) reorgchk is scheduled periodically from DB13 and has already run twice
d) 'reorgchk update statistics on table all' was also run at the DB2 level.
But the runstats status in DB02 is not getting updated. It is empty.
    Please suggest.
    Regards
    Vinay
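Before digging further into the SAP side, it may be worth confirming at the DB2 level that runstats works and that the catalog reflects it (a sketch; the schema and table names are illustrative):

db2 "runstats on table sapsr3.mytable with distribution and detailed indexes all"
db2 "select stats_time from syscat.tables where tabschema = 'SAPSR3' and tabname = 'MYTABLE'"

If STATS_TIME is updated here but DB02 still shows '-1', the collection/refresh on the SAP side (e.g. the DB02 data collector job) is a more likely culprit than DB2 itself.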

    Hi Deepak,
Yes, that is possible (but only with an offline backup). But for new features like reclaimable tablespaces (to lower the high watermark),
it is better to export/import with a system copy.
Also, with a system copy you can use index compression.
After backup and restore you can also get reclaimable tablespaces, but you have to create new tablespaces
and then work with db6conv and online table move to move the tables online to the new tablespaces.
    Best regards,
    Joachim
