Statistics gathering

Hello everyone,
I am a little confused about "statistics gathering" in EBS, so I have a few questions. Kindly, can anyone clarify this for me? I would really appreciate it.
1. What is statistics gathering?
2. What is the benefit of it?
3. Can our ERP performance improve after doing it?
One more question, outside this subject: if someone wants to become an APPS DBA, must they already be a core DBA (Oracle 10g, 9i etc.), or is it enough to know the basic Oracle DBA concepts such as backup, recovery and cloning?
Regards,
Shahbaz khan

1. What is statistics gathering?
Statistics gathering is a process by which Oracle scans some or all of your database objects (such as tables and indexes) and stores the resulting information in dictionary views such as DBA_TABLES and DBA_TAB_HISTOGRAMS. Oracle uses this information to determine the best execution plan for the statements it has to execute (SELECT, UPDATE and so on).
2. What is the benefit of it?
It helps queries become more efficient.
3. Can our ERP performance improve after doing it?
Typically, if you are experiencing performance issues, this is one of the first remedies.
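For illustration, a minimal sketch of how statistics are gathered with DBMS_STATS (the schema name APPS below is just a placeholder; in E-Business Suite the supported route is the "Gather Schema Statistics" concurrent program, which calls FND_STATS rather than DBMS_STATS directly):
-- Hedged sketch: gather optimizer statistics for one schema.
-- 'APPS' is a placeholder owner; adjust the options to your environment.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'APPS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle choose the sample size
    cascade          => TRUE);                        -- also gather index statistics
END;
/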
As for the other question (whether an aspiring APPS DBA must already be a core Oracle DBA, or whether knowing concepts like backup, recovery and cloning is enough): I will let Hussein or Helios answer that. They can offer a lot of helpful advice. You can also refer to Hussein's recent thread on a similar topic.
See Re: Time Management and planned prep
Hope this helps,
Sandeep Gandhi

Similar Messages

  • Cgi statistics gathering under 6.1 and Solaris 9

    Hello all,
    is it possible to log, for CGI requests, the time spent handling each request?
    I see a lot of editable parameters in the 'Performance, Tuning and Scaling Guide' but can't figure out how to do that.
    Once in a thread I read "...enable statistics gathering, then add %duration% to your access log format line".
    I can't find the term %duration% in the guide; which parameter is meant?
    Regards Nick

    Hello elvin,
    thanks for your reply. Now I think I have managed to get the web server to log the duration of a CGI request, but I'm unsure how to interpret the value, e.g. in the access log I get
    ..."GET /cgi/beenden.cgi ... Gecko/20040113 MultiZilla/1.6.3.1d" 431710"
    ..."GET /pic.gif ... Gecko/20040113 MultiZilla/1.6.3.1d" 670"
    so the last value corresponds to my %duration% in the magnus.conf.
    431710 ... in msec? - makes no sense
    670 ... in msec?
    The complete string in magnus.conf reads as follows:
    Init fn="flex-init" access="$accesslog" format.access="%Ses->client.ip% - %Req->vars.auth-user% [%SYSDATE%] \"%Req->reqpb.clf-request%\" %Req->srvhdrs.clf-status% %Req->srvhdrs.content-length% \"%Req->headers.user-agent%\" %duration%"
    Regards Nick

  • Setting of Optimizer Statistics Gathering

    I'm checking my DB settings, and the database is analyzed each day. But I notice there are a lot of tables whose statistics show the last analysis was about a month ago... Do I have to change some parameters?

    lesak wrote:
    I don't have any data to show you that my idea is good. I'd like to confirm on this forum whether my idea is good or not. I've planned to make some changes to get better performance from queries that read from the most heavily used tables. If this is a bad solution, that is also important information for me.
    One point of view is that your idea is bad. That point of view would be to figure out what the best access path for your query is and set that as a baseline, or to figure out what statistics get you the correct plans on a single query that has multiple plans that are best with different values sent in through bind variables, and lock the statistics.
    Another point of view would be to gather current plans for currently used queries, then do nothing at all unless the optimizer suddenly decides to switch away from one, then figure out why.
    Also note the default statistics gathering is done in a window, if you have a lot of tables changing it could happen that you can't get stats in a timely fashion within the window.
    Whether the statistics gathering is appropriate may depend on how far off the histograms are from describing the actual data distribution you see. What may be an appropriate worry for one app may be obsessive tuning disorder for another. 200K rows out of millions may make no difference at all, or may make a huge difference if the newly added data is way off from what the statistics lead the optimizer to expect.
    One thing you are probably doing right is to recognize that tuning particular queries may be much more useful than obsessing over statistics.
    Note how much I've used the word "may" here.
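    As a concrete illustration of the "lock the statistics" idea mentioned above, a hedged sketch (SCOTT and T1 are placeholder names):
    -- Check which tables the automatic job currently considers stale.
    SELECT table_name, stale_stats, last_analyzed
    FROM   dba_tab_statistics
    WHERE  owner = 'SCOTT';
    -- Freeze the statistics on a table whose plans you want to keep stable.
    BEGIN
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'T1');
    END;
    /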

  • How to check the progress of statistics gathering on a table?

    Hi,
    I have started the statistics gathering on a few big tables in my database.
    How can I check the progress of statistics gathering on a table? Are there any data dictionary views or tables to monitor the progress of stats gathering?
    Regds,
    Kunwar

    Hi all,
    You can check with this small script.
    It lists the SID details for a long-running session, such as:
    when it started
    when it was last updated
    how much time is still left
    session status (ACTIVE/INACTIVE), etc.
    -- Author               : Syed Kaleemuddin_
    -- Script_name          : sid_long_ops.sql
    -- Description          : lists the SID details for a long-running session: when it started, when it was last updated, and how much time is still left.
    set lines 200
    col OPNAME for a25
    Select
    a.sid,
    a.serial#,
    b.status,
    a.opname,
    to_char(a.START_TIME,' dd-Mon-YYYY HH24:mi:ss') START_TIME,
    to_char(a.LAST_UPDATE_TIME,' dd-Mon-YYYY HH24:mi:ss') LAST_UPDATE_TIME,
    a.time_remaining as "Time Remaining Sec" ,
    a.time_remaining/60 as "Time Remaining Min",
    a.time_remaining/60/60 as "Time Remaining HR"
    From v$session_longops a, v$session b
    where a.sid = b.sid
    and a.sid =&sid
    And time_remaining > 0;
    Sample output:
    SQL> @sid_long_ops
    Enter value for sid: 474
    old 13: and a.sid =&sid
    new 13: and a.sid =474
    SID SERIAL# STATUS OPNAME START_TIME LAST_UPDATE_TIME Time Remaining Sec Time Remaining Min Time Remaining HR
    474 2033 ACTIVE Gather Schema Statistics 06-Jun-2012 20:10:49 07-Jun-2012 01:35:24 572 9.53333333 .158888889
    Thanks & Regards
    Syed Kaleemuddin.
    Oracle Apps DBA
    Mobile: +91 9966270072
    Email: [email protected]

  • Understand Oracle statistics gathering

    Hi experts,
    I am new to Oracle performance tuning. Can anyone tell me what "Oracle statistics gathering" means, in simple words? I have read about it on the Oracle site: http://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/stats.htm
    But I do not understand it properly. Does it have any role in Oracle performance tuning? Does it improve the performance of an Oracle DB?
    Reg
    Harshit

    Hi,
    You can check this for an easy introduction: ORACLE-BASE - Oracle Cost-Based Optimizer (CBO) And Statistics (DBMS_STATS)
    >> Does it have any role in Oracle performance tuning? Does it improve the performance of an Oracle DB? Yes.
    HTH
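    If it helps, a minimal sketch for checking whether statistics exist and when they were last gathered (HR is a placeholder schema):
    -- Tables with a NULL last_analyzed have no statistics yet.
    SELECT table_name, num_rows, last_analyzed
    FROM   all_tables
    WHERE  owner = 'HR'
    ORDER  BY last_analyzed NULLS FIRST;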

  • Table Statistics Gathering Query

    Hey there,
    I'm currently getting trained in Oracle, and one of the questions posed to me was to create a table, insert a million rows into it, and find the number of rows in it. I tried the following steps.
    First, table creation:
    SQL> create table t1(id number);
    Table created.
    Data insertion:
    SQL> insert into t1 select level from dual connect by level < 50000000;
    49999999 rows created.
    Gathering statistics:
    SQL> exec dbms_stats.gather_table_stats('HR','T1');
    PL/SQL procedure successfully completed.
    Finally, counting the number of rows:
    SQL> select num_rows from user_tables where table_name='T1';
      NUM_ROWS
      49960410
    SQL> select count(*) from t1;
      COUNT(*)
      49999999
    My database version is:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    I would like to know why there are two different results for the same table when using NUM_ROWS from the view USER_TABLES versus the aggregate function COUNT(*) over the same table. Please keep in mind that I'm studying Oracle and this is from a conceptual point of view only. I would also like to know how gathering table statistics with the DBMS_STATS package works.
    Thank You,
    Vishal

    vishm8 wrote:
    Gathering statistics
    SQL> exec dbms_stats.gather_table_stats('HR','T1');
    PL/SQL procedure successfully completed.
    I would like to know why there are two different results for the same table when using NUM_ROWS from the view USER_TABLES versus the aggregate function COUNT(*) over the same table.
    Because you aren't specifying a value for estimate_percent in the procedure call (to gather_table_stats), Oracle will pick an estimate value for you. If you want to sample the entire table, you need to specify that explicitly in your procedure call.
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_stats.htm#ARPLS68582
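    For example, a hedged sketch of the two choices: a full sample, which makes NUM_ROWS exact as of gather time at the cost of a longer gather, versus letting Oracle pick the sample size:
    -- Full (100%) sample: NUM_ROWS will match COUNT(*) as of gather time.
    SQL> exec dbms_stats.gather_table_stats('HR', 'T1', estimate_percent => 100);
    -- Automatic sample size (the usual recommendation):
    SQL> exec dbms_stats.gather_table_stats('HR', 'T1', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);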

  • Statistics gathering in 10g - Histograms

    I went through some articles on the web, as well as in this forum, regarding stats gathering, which I have posted here.
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
    In the above post author mentions that
    "It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
    Following one is post from Oracle forums.
    Statistics
    In the above post Mr Lewis mentions about adding
    method_opt => 'for all columns size 1' to the DBMS job
    And in the same forum post Mr Richard Foote has mentioned that
    "Not only does it change from 'FOR ALL COLUMNS SIZE 1' (no histograms) to 'FOR ALL COLUMNS SIZE AUTO' (histograms for those tables that Oracle deems necessary based on data distribution and whether sql statements reference the columns), but it also generates a job by default to collect these statistics for you.
    It all sounds like the ideal scenario, just let Oracle worry about it for you, except for the slight disadvantage that Oracle is not particularly "good" at determining which columns really need histograms and will likely generate many many many histograms unnecessarily while managing to still miss out on generating histograms on some of those columns that do need them."
    http://richardfoote.wordpress.com/2008/01/04/dbms_stats-method_opt-default-behaviour-changed-in-10g-be-careful/
    Our environment Windows 2003 server Oracle 10.2.0.3 64bit oracle
    We use the following script for our analyze job.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'username',
        tabname     => 'TABLE_NAME',
        method_opt  => 'FOR ALL COLUMNS SIZE AUTO',
        granularity => 'ALL',
        cascade     => TRUE,
        degree      => DBMS_STATS.DEFAULT_DEGREE);
    END;
    This analyze job runs for a long time (8 hours), and we are also facing performance issues in the production environment.
    Here are my questions
    What is the option I should use for method_opt parameter?
    I am sure there are no hard and fast rules for this and each environment is different.
    But reading all the above posts made me a bit confused, and I want to be sure we are using the correct options.
    I would appreciate any suggestions, insight or further readings regarding the same.
    Appreciate your time
    Thanks
    Niki

    Niki wrote:
    I went through some articles on the web, as well as in this forum, regarding stats gathering, which I have posted here.
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
    In the above post author mentions that
    "It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
    This analyze job runs for a long time (8 hours), and we are also facing performance issues in the production environment.
    Here are my questions
    What is the option I should use for method_opt parameter?
    I am sure there are no hard and fast rules for this and each environment is different.
    But reading all the above posts made me a bit confused, and I want to be sure we are using the correct options.
    As the author of one of the posts cited, let me make some comments. First, I would always recommend starting with the defaults. All too often people "tune" their dbms_stats call only to make it run slower and gather less accurate stats than if they had done absolutely nothing and let the default autostats job gather stats in the maintenance window. With your dbms_stats command I would comment that granularity => 'ALL' is rarely needed and certainly adds to the stats collection times. Also, if the data has not changed enough, why recollect stats? This is the advantage of using options => 'GATHER STALE'. You haven't mentioned what kind of application your database is used for: OLTP or data warehouse. If it is OLTP and the application uses bind variables, then I would recommend disabling or manually collecting histograms (bind peeking and histograms should not be used together in 10g) using SIZE 1 or SIZE REPEAT. Histograms can be very useful in a DW, where skew may be present.
    The one non-default option I find myself using is degree => dbms_stats.auto_degree. This allows dbms_stats to choose a DOP for the gather based on the size of the object. This works well if you don't want to specify a fixed degree, or if you would like dbms_stats to use a different DOP than the one the table is decorated with.
    Hope this helps.
    Regards,
    Greg Rahn
    http://structureddata.org
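    As a concrete version of the 10g advice above, a hedged sketch of changing the default METHOD_OPT used by the automatic job (DBMS_STATS.SET_PARAM is the 10g interface; 11g replaces it with SET_GLOBAL_PREFS):
    -- Keep existing histograms but do not create new ones automatically.
    BEGIN
      DBMS_STATS.SET_PARAM('METHOD_OPT', 'FOR ALL COLUMNS SIZE REPEAT');
    END;
    /
    -- Verify the current default:
    SELECT DBMS_STATS.GET_PARAM('METHOD_OPT') FROM dual;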

  • Statistics gathering error

    Hi all,
    I am running on AIX  version 5.3 with oracle 10.2.0.1 database.
    Since yesterday I have been encountering errors when gathering statistics on table partitions that already have data in them. I was able to gather without errors for years, but then suddenly I got the following errors:
    exec dbms_stats.gather_table_stats('BLP', 'ADJUSTMENT_TRANSACTION', 'ADJUSTMENT_TRANSACTION_P201311', GRANULARITY=>'PARTITION')
    BEGIN dbms_stats.gather_table_stats('BLP', 'ADJUSTMENT_TRANSACTION', 'ADJUSTMENT_TRANSACTION_P201311', GRANULARITY=>'PARTITION'); END;
    ERROR at line 1:
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at "SYS.DBMS_STATS", line 13044
    ORA-00942: table or view does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 13076
    ORA-06512: at line 1
    I also got the following errors in the alert_logs:
    ORA-00600: internal error code, arguments: [KSFD_DECAIOPC], [0x7000004FF189780], [], [], [], [], [], []
    The other day the alert_log generated this error when generating statistics also for another table:
    ORA-01114: IO error writing block to file 1001 (block # 4026567)
    ORA-27063: number of bytes read/written is incorrect
    IBM AIX RISC System/6000 Error: 28: No space left on device
    As I checked, the server has sufficient space.
    Do you guys have any idea what could be the problem? I can't generate table statistics at the moment because of this problem.
    Regards,
    Tim

    Hi Suntrupth,
    BLP@OLSG3DB  > show parameter filesystemio_options
    NAME                                 TYPE        VALUE
    filesystemio_options                 string      asynch
    BLP@OLSG3DB  > show parameter disk_asynch_io
    NAME                                 TYPE        VALUE
    disk_asynch_io                       boolean     TRUE
    No invalid objects were returned either:
    BLP@OLSG3DB  > select object_name from dba_objects where status='INVALID' and owner='SYS';
    no rows selected
    Regards,
    Tim

  • Statistics gathered during import

    After an IMP import, are the tables analyzed automatically?
    When I ran the import process for the TEST user
    imp system/manager file=/home/oracle/test.dmp FROMUSER=TEST TOUSER=TEST
    and then executed the following query
    SELECT table_name, last_analyzed FROM dba_tables WHERE owner='TEST';
    I found that the tables/indexes had been analyzed automatically.
    Does this mean statistics are gathered automatically during import?
    Oracle 10.2.0.1

    Refer to this link please
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#i1020893
    It seems statistics are taken for tables during export.
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
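    For what it's worth, the original imp utility also exposes a STATISTICS parameter (ALWAYS, NONE, SAFE or RECALCULATE), so a hedged sketch of skipping the exported statistics and gathering fresh ones afterwards would be:
    imp system/manager file=/home/oracle/test.dmp fromuser=TEST touser=TEST statistics=none
    -- then, in SQL*Plus:
    SQL> exec dbms_stats.gather_schema_stats('TEST');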

  • AWR Statistics gathering is taking hours...

    Is it normal in 10g to have the following job run for hours?
    EXEC SYS.DBMS_SCHEDULER.RUN_JOB('GATHER_STATS_JOB');
    It sometimes takes about 4 hours to run; we run it once a day. Thank you!

    AWR is the automatic workload repository - which is similar in mechanism to statspack, taking regular snapshots of the dynamic performance views.
    The gather_stats_job has nothing to do with the operation of the AWR, beyond the fact that AWR data is stored in tables, so the gather_stats_job may decide to collect stats on those tables from time to time.
    The default action for gather_stats_job is to collect stats for all tables with missing or stale statistics. The sample size for each table is chosen automatically (effectively by trial and error, starting with a very small sample). Histograms are also collected automatically, based on a check of which columns have historically been used in "where" clauses, combined with a sample to check whether such columns show skewed data patterns.
    If you do a lot of inserts, updates, and deletes on this particular database, you are more likely to end up with table statistics becoming stale more frequently, leading to longer lists of tables that need stats recalculated.
    You may find that Oracle is generating too many histograms, and histograms can take a long time to construct. If this is the case, then you could consider changing the default setting for stats collection to skip the automatic histogram generation and add code to build histograms only on the columns that you think need them.
    [Addendum: you say you are running gather_stats_job daily - but it runs automatically every weekday at 10:00 pm and all weekend; did you disable the standard job, or did you mean that you were just letting the standard job run?]
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The greatest enemy of knowledge is not ignorance,
    it is the illusion of knowledge." Stephen Hawking.
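    If you do decide to run the gather yourself rather than rely on the standard 10g job, a hedged sketch of checking and disabling that job:
    -- See when the standard job last started and whether it is enabled.
    SELECT job_name, enabled, last_start_date
    FROM   dba_scheduler_jobs
    WHERE  job_name = 'GATHER_STATS_JOB';
    -- Disable it if you schedule your own statistics collection instead.
    EXEC DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');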

  • 11g Incremental statistics gathering experiences

    Hi folks,
    I was wondering if people who have configured their 11g DBs to use incremental statistics would share their experiences, good or bad.
    Has anyone set up incremental stats? Was it worthwhile for you or not, and why?
    Any problems / performance issues / bugs hit etc etc?
    I would welcome any posts of experiences encountered or any related comments.
    Thanks,
    firefly
    Edited by: firefly on 10-Mar-2011 06:58

    I was wondering if people who have configured their 11g DBs to use incremental statistics...
    What exactly are "incremental statistics"?
    how are they collected?
    where are they stored?
    how are they used?
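    For anyone landing here, a hedged sketch of how incremental statistics are typically switched on for a partitioned table in 11g (owner and table names are placeholders):
    -- Enable incremental statistics, then gather; global stats are then
    -- maintained from per-partition synopses instead of full re-scans.
    BEGIN
      DBMS_STATS.SET_TABLE_PREFS('SALES_OWNER', 'SALES', 'INCREMENTAL', 'TRUE');
      DBMS_STATS.GATHER_TABLE_STATS('SALES_OWNER', 'SALES');
    END;
    /
    -- Check the preference:
    SELECT DBMS_STATS.GET_PREFS('INCREMENTAL', 'SALES_OWNER', 'SALES') FROM dual;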

  • Automate Statistics Gathering (9i)

    Hi All,
    Which of the following two approaches is recommended for gathering daily schema-level statistics in the DB?
    1. Create a CRONJOB that runs
    exec dbms_stats.gather_schema_stats('MYSCHEMA'); at a scheduled time (Off Peak Hours) daily?
    2. Or use DBMS_JOB to schedule the same script?
    What I basically want to know is: does scheduling jobs via cron have any added benefit?
    Regards,
    Chinmay

    Really, I think it depends on what you are more comfortable with.
    I like putting jobs like this in cron; that way it is very easy for me to see them and transfer them to another system. I like the control that lets me quickly schedule it for different days and times of day. However, it has been pointed out that this can be a security issue depending on how you set things up.
    Regards
    Tim
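    For comparison, a hedged sketch of the DBMS_JOB alternative mentioned in the question (the schema name and the 02:00 run time are placeholders):
    -- Submit a nightly schema-level gather as a database job.
    VARIABLE jobno NUMBER
    BEGIN
      DBMS_JOB.SUBMIT(
        job       => :jobno,
        what      => 'DBMS_STATS.GATHER_SCHEMA_STATS(''MYSCHEMA'');',
        next_date => TRUNC(SYSDATE) + 1 + 2/24,
        interval  => 'TRUNC(SYSDATE) + 1 + 2/24');
      COMMIT;
    END;
    /
    PRINT jobno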

  • Partition Level Statistics gathering

    Hi,
    This is with regards to GRANULARITY option in DBMS_STATS.GATHER_TABLE_STATS procedure.
    If I give 'PARTITION' as the GRANULARITY, would it gather statistics for all the partitions of a table, or can I gather statistics only for a particular partition?
    For example: if I have a table T1 which is partitioned by month, i.e. JAN08, FEB08, ..., DEC08, JAN09, ..., and I apply DML only to, say, DEC08, can I gather statistics only for the DEC08 partition?
    Please advise.
    Thanks in Advance.
    Regards
    Pat

    I do not think it is just because of misspelled word PARTITION:
    SQL> CREATE TABLE "Patza"
      2      ( prod_id        NUMBER(6)
      3      , cust_id        NUMBER
      4      , time_id        DATE
      5      , channel_id     CHAR(1)
      6      , promo_id       NUMBER(6)
      7      , quantity_sold  NUMBER(3)
      8      , amount_sold         NUMBER(10,2)
      9      )
    10  PARTITION BY RANGE (time_id)
    11    (PARTITION "Sales_Q1_1998" VALUES LESS THAN (TO_DATE('01-APR-1998','DD-MON-YYYY')),
    12     PARTITION "Sales_Q2_1998" VALUES LESS THAN (TO_DATE('01-JUL-1998','DD-MON-YYYY')),
    13     PARTITION "Sales_Q3_1998" VALUES LESS THAN (TO_DATE('01-OCT-1998','DD-MON-YYYY')),
    14     PARTITION "Sales_Q4_1998" VALUES LESS THAN (TO_DATE('01-JAN-1999','DD-MON-YYYY')),
    15     PARTITION "Sales_Q1_1999" VALUES LESS THAN (TO_DATE('01-APR-1999','DD-MON-YYYY')),
    16     PARTITION "Sales_Q2_1999" VALUES LESS THAN (TO_DATE('01-JUL-1999','DD-MON-YYYY')),
    17     PARTITION "Sales_Q3_1999" VALUES LESS THAN (TO_DATE('01-OCT-1999','DD-MON-YYYY')),
    18     PARTITION "Sales_Q4_1999" VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY')),
    19     PARTITION "Sales_Q1_2000" VALUES LESS THAN (TO_DATE('01-APR-2000','DD-MON-YYYY')),
    20     PARTITION "Sales_Q2_2000" VALUES LESS THAN (TO_DATE('01-JUL-2000','DD-MON-YYYY')),
    21     PARTITION "Sales_Q3_2000" VALUES LESS THAN (TO_DATE('01-OCT-2000','DD-MON-YYYY')),
    22     PARTITION "Sales_Q4_2000" VALUES LESS THAN (MAXVALUE))
    23  ;
    Table created.
    SQL> BEGIN
      2  DBMS_STATS.GATHER_TABLE_STATS
      3  (
      4  OWNNAME => USER
      5  , TABNAME => 'Patza'
      6  , ESTIMATE_PERCENT => 25
      7  , METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'
      8  , GRANULARITY => 'PATRITION'
      9  , PARTNAME => 'Sales_Q3_2000'
    10  , CASCADE => TRUE
    11  );
    12  END;
    13  /
    BEGIN
    ERROR at line 1:
    ORA-20000: Unable to analyze TABLE "SCOTT"."PATZA" SALES_Q3_2000, insufficient
    privileges or does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 13046
    ORA-06512: at "SYS.DBMS_STATS", line 13076
    ORA-06512: at line 2
    SQL> BEGIN
      2  DBMS_STATS.GATHER_TABLE_STATS
      3  (
      4  OWNNAME => USER
      5  , TABNAME => '"Patza"'
      6  , ESTIMATE_PERCENT => 25
      7  , METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'
      8  , GRANULARITY => 'PATRITION'
      9  , PARTNAME => '"Sales_Q3_2000"'
    10  , CASCADE => TRUE
    11  );
    12  END;
    13  /
    BEGIN
    ERROR at line 1:
    ORA-20001: Illegal granularity PATRITION: must be AUTO | ALL | GLOBAL |
    PARTITION | SUBPARTITION | GLOBAL AND PARTITION
    ORA-06512: at "SYS.DBMS_STATS", line 13056
    ORA-06512: at "SYS.DBMS_STATS", line 13076
    ORA-06512: at line 2
    SQL> BEGIN
      2  DBMS_STATS.GATHER_TABLE_STATS
      3  (
      4  OWNNAME => USER
      5  , TABNAME => 'Patza'
      6  , ESTIMATE_PERCENT => 25
      7  , METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'
      8  , GRANULARITY => 'PARTITION'
      9  , PARTNAME => 'Sales_Q3_2000'
    10  , CASCADE => TRUE
    11  );
    12  END;
    13  /
    BEGIN
    ERROR at line 1:
    ORA-20000: Unable to analyze TABLE "SCOTT"."PATZA" SALES_Q3_2000, insufficient
    privileges or does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 13046
    ORA-06512: at "SYS.DBMS_STATS", line 13076
    ORA-06512: at line 2
    SQL>
    SQL> BEGIN
      2  DBMS_STATS.GATHER_TABLE_STATS
      3  (
      4  OWNNAME => USER
      5  , TABNAME => '"Patza"'
      6  , ESTIMATE_PERCENT => 25
      7  , METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'
      8  , GRANULARITY => 'PARTITION'
      9  , PARTNAME => '"Sales_Q3_2000"'
    10  , CASCADE => TRUE
    11  );
    12  END;
    13  /
    PL/SQL procedure successfully completed.
    SQL>
    As you can see, the table/partition check is done first. So the OP either does not have privileges on the table or, as I mentioned, table/partition names are case-sensitive.
    SY.

  • Best practices for gathering statistics in 10g

    I would like to get some opinions on what is considered best practice for gathering statistics in 10g. I know that 10g has automatic statistics gathering, but that doesn't seem to be very effective, as I see some table stats are way out of date.
    I have recommended that we have at least a weekly job that gathers stats for our schema using DBMS_STATS (DBMS_STATS.gather_schema_stats). Is this the right approach to generate object stats for a schema and keep them up to date? Are index stats included in that via CASCADE?
    Is it also necessary to gather system stats? I welcome any thoughts anyone might have. Thanks.

    Hi,
    Is this the right approach to generate object stats for a schema and keep them up to date?
    The choices of execution plans made by the CBO are only as good as the statistics available to it. The old-fashioned analyze table and dbms_utility methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance. As we may know, the CBO uses object statistics to choose the best execution plan for all SQL statements.
    I spoke with Andrew Holsworth of the Oracle Corp SQL Tuning group, and he says that Oracle recommends taking a single, deep sample and keeping it, only re-analyzing when there is a chance that doing so would make a difference in execution plans (not the default 20% re-analyze threshold).
    I have my detailed notes here:
    http://www.dba-oracle.com/art_otn_cbo.htm
    As to system stats, oh yes!
    By measuring the relative costs of sequential vs. scattered I/O, the CBO can make better decisions. Here are the data items collected by dbms_stats.gather_system_stats:
    No Workload (NW) stats:
    CPUSPEEDNW - CPU speed
    IOSEEKTIM - I/O seek time in milliseconds
    IOTFRSPEED - I/O transfer speed (bytes per millisecond)
    I have my notes here:
    http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm
    Hope this helps. . . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
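    To make the system-statistics point concrete, a hedged sketch (workload mode needs a representative load running between the START and STOP calls):
    -- No-workload system statistics: quick, measures I/O characteristics only.
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');
    -- Workload statistics: bracket a representative period.
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');
    -- ... let a typical workload run for a while ...
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');
    -- Review what was captured:
    SELECT sname, pname, pval1 FROM sys.aux_stats$;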

  • SQL Developer Behavior When Gathering Table/Index Statistics

    Hey All,
    Not sure if this has been posted yet. I did a search and did not find any threads on the topic though.
    I noticed that with SQL Developer 2.x, when you use the context menu to gather table/index statistics for a given table, you get no modal progress/waiting window like you did in 1.x. It just seems to do nothing, even though it did actually execute the DBMS_STATS package. If you press cancel and try to navigate around, you get multiple "Connection is busy" errors. Eventually it comes back and says "Statistics gathered for table <whatever>". In the old versions there was a modal window with an animated progress bar while the DBMS_STATS package ran. What happened to that? Or is this something unique to my install? Has anyone else run into this? Is there a fix, or somewhere I can report this as an official bug? FWIW I'm running 2.1.1.64, and this also occurred in the initial 2.0 release.
    It is very confusing the first time you run into it... I pressed the "apply" button several times thinking it hadn't taken, but it ended up running DBMS_STATS for every click I did.
    Thanks!

    The same happens with all the other dialogs opened from the context menu. Indeed, very confusing and disturbing at first.
    The only official site to report bugs is Metalink/MOS, but you might be lucky if someone from the team picks it up here.
    Regards,
    K.
