Statistics gathering in 11g

Hi,
I used to gather statistics in my 9.2.0.8 database with the command below:
EXEC dbms_stats.gather_schema_stats('SCHEMA_NAME',cascade=>TRUE,estimate_percent=>20,degree=>dbms_stats.default_degree);
Now I have another database, used for the same purpose, on version 11.1.0.7. I am thinking of collecting statistics for this database as well, as performance is getting slower day by day.
I have checked and found the last analyzed date is January 5th.
In 11g there have been some changes in how statistics are collected.
My question is: can I use the above command for my 11.1 database? Or can anyone suggest how to modify the above command so that I can achieve the desired result?
Thanks,

You should still be able to run the same command as before to gather your schema statistics. There are also new features in 11g that help automate the process.
You should review the "Managing Optimizer Statistics" chapter:
Oracle® Database Performance Tuning Guide
11g Release 2 (11.2)
E16638-07
Chapter 13, Managing Optimizer Statistics
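For example, in 11g you can let Oracle choose the sample size rather than fixing it at 20%. A minimal sketch of the same call (SCHEMA_NAME remains a placeholder, as in the original command):

BEGIN
  dbms_stats.gather_schema_stats(
    ownname          => 'SCHEMA_NAME',
    cascade          => TRUE,
    estimate_percent => dbms_stats.auto_sample_size,
    degree           => dbms_stats.default_degree);
END;
/

In 11g, AUTO_SAMPLE_SIZE uses a new one-pass algorithm that typically produces statistics of near-100%-sample quality at a fraction of the cost, which is why the documentation recommends it over fixed percentages.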

Similar Messages

  • Gather Schema Statistics - GATHER AUTO option failing to gather stats

Hi,
We recently upgraded to a 10g database and version 11.5.10 of Oracle EBS. I want to use the GATHER AUTO option when running Gather Schema Statistics.
To test it, I created a test table with 1 million rows, then gathered stats for this table alone using Gather Table Stats. Next, I deleted ~12% of the rows and issued a commit. The view all_tab_statistics shows that the table has stale statistics (STALE_STATS column = YES). After that I ran Gather Schema Stats for that particular schema, but the request did not pick up the test table.
What is the criterion by which Oracle chooses the tables to gather statistics for under the GATHER AUTO option? I am aware of the 10% change-in-data threshold, but how is this 10% calculated? Is it based only on (inserts + updates + deletes)?
Also, what is the difference between GATHER AUTO and GATHER STALE?
    Any help is appreciated.
    Thanks,
    Jithin

Randalf,
FYI, this is what happens inside the concurrent program call. There are a few additional parameters for output/error messages:
procedure GATHER_SCHEMA_STATS(errbuf out varchar2,
                              retcode out varchar2,
                              schemaname in varchar2,
                              estimate_percent in number,
                              degree in number,
                              internal_flag in varchar2,
                              request_id in number,
                              hmode in varchar2 default 'LASTRUN',
                              options in varchar2 default 'GATHER',
                              modpercent in number default 10,
                              invalidate in varchar2 default 'Y')
is
  exist_insufficient exception;
  bad_input exception;
  pragma exception_init(exist_insufficient, -20000);
  pragma exception_init(bad_input, -20001);
  l_message varchar2(1000);
  Error_counter number := 0;
  Errors Error_Out;
  -- num_request_id number(15);
  conc_request_id number(15);
  degree_parallel number(2);
begin
  -- Set the package body variable.
  stathist := hmode;
  -- Default the degree of parallelism if none was passed in.
  if degree is null then
    degree_parallel := def_degree;
  else
    degree_parallel := degree;
  end if;
  l_message := 'In GATHER_SCHEMA_STATS , schema_name= ' || schemaname
               || ' percent= ' || to_char(estimate_percent) || ' degree = '
               || to_char(degree_parallel) || ' internal_flag= ' || internal_flag;
  FND_FILE.put_line(FND_FILE.log, l_message);
  BEGIN
    -- Delegate the actual gathering to FND_STATS.
    FND_STATS.GATHER_SCHEMA_STATS(schemaname, estimate_percent,
                                  degree_parallel, internal_flag, Errors,
                                  request_id, stathist, options, modpercent,
                                  invalidate);
  exception
    when exist_insufficient then
      errbuf := sqlerrm;
      retcode := '2';
      l_message := errbuf;
      FND_FILE.put_line(FND_FILE.log, l_message);
      raise;
    when bad_input then
      errbuf := sqlerrm;
      retcode := '2';
      l_message := errbuf;
      FND_FILE.put_line(FND_FILE.log, l_message);
      raise;
    when others then
      errbuf := sqlerrm;
      retcode := '2';
      l_message := errbuf;
      FND_FILE.put_line(FND_FILE.log, l_message);
      raise;
  END;
  -- Report any per-object errors returned by FND_STATS.
  FOR i in 0..MAX_ERRORS_PRINTED LOOP
    exit when Errors(i) is null;
    Error_counter := i + 1;
    FND_FILE.put_line(FND_FILE.log, 'Error #' || Error_counter || ': ' || Errors(i));
    -- added to send back status to concurrent program manager bug 2625022
    errbuf := sqlerrm;
    retcode := '2';
  END LOOP;
end;
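The staleness test asked about above is driven by the DML monitoring counters, which Oracle tracks in memory and periodically flushes to the *_tab_modifications views; roughly, a table is considered stale once (inserts + updates + deletes) since the last gather exceed about 10% of the row count recorded at that gather. One way to inspect what has been tracked for the test table (a sketch; the table name is a placeholder, and note that the EBS concurrent program goes through FND_STATS, which keeps its own bookkeeping on top of this mechanism):

exec dbms_stats.flush_database_monitoring_info;
select inserts, updates, deletes, truncated
  from all_tab_modifications
 where table_name = 'YOUR_TEST_TABLE';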

  • Statistics gather after 10g upgrade

I am on Apps 11.5.10.2; the database has recently been upgraded from 9.2.0.6 to 10.2.0.4. In the previous release of the db I used to run GATHER SCHEMA STATISTICS (ALL SCHEMA) periodically.
What should be the practice after the upgrade? There are some custom schemas for a third-party tool. Does the Gather Schema Statistics concurrent program gather statistics for all schemas?
    Thanks
    SA

    user593719 wrote:
    Hussein,
To register a custom schema, I have to go through the series of steps like registering the application, creating a TOP etc., right? What if my custom application is on some other Windows server and only the schema is in the Apps database? Do we still need to follow the same steps?
Regards,
SA

I am not sure why you have this custom schema in this Apps database. If your custom schema has nothing to do with Apps, you would be better off creating a separate instance for this custom app. You could then use the automatic stats gathering mechanism in 10g to gather stats for this custom schema.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41282
    HTH
    Srini

  • How often we need to run gather schema statistics etc.. ??

Hi,
I am on 11.5.10.2
RDBMS 9.2.0.6
How often do we need to run the following requests in Production?
1. Gather Schema Statistics
2. Gather Column Statistics
3. Gather Table Statistics
4. Gather All Column Statistics
    Thanks

Hi;
We have discussed this same issue here before. Please check the threads below, which could be helpful:
    How often we need to run gather schema statistics
    Re: Gather schema stats run
    How we can collect custom schema information wiht gather statistics
    gather schema stats for EBS 11.5.10
    gather schema stats conc. program taking too long time
    Re: gather schema stats conc. program taking too long time
    How it runs
    Gather Schema Statistics
    http://oracle-apps-dba.blogspot.com/2007/07/gather-statistics-for-oracle.html
    gather statistict collect which informations
    Gather Schema Statistics...
Regards,
    Helios

  • Statistics Analysis on Tables that are often empty

Right now I'm dealing with a user application that was originally developed on Oracle 10g. Recently the database was upgraded to Oracle 11g, and the schema and data were imported successfully.
However, since the users started on Oracle 11g, some of their applications have been running slower and slower, and I'm wondering if the problem could be due to statistics.
The application has several tables which contain temporary data. Usually these tables are empty, although when a user application runs they are populated and queried against, and at the end the data is deleted. (It's this program that's running slower and slower.)
Could the problem be with the statistics on these tables?
When I look at the 'last_analyzed' field in user_tables, the date goes back to the date of the last import. I know Oracle regularly updates statistics, so what I suspect is happening is that, by luck, Oracle has only been gathering statistics when the tables are empty. (And since the tables are empty, the statistics are of no help in optimizing the queries.)
    Am I on the right track?
    And if so, is there a way to automatically trigger a statistics gather job when a table gets above a certain size?
    System details:
    Oracle: 11gR2 (64 bit) Standard version
    File System: ASM (GRID infrastructure)

Usually these tables are empty, although when a user application runs they are populated and queried against, and at the end the data is deleted.
You have three options (and depending on how the data changes, you might find that not all temporary tables work best with the same option); see the sketch after this list:
1. Load representative data into the temporary table, collect statistics (including any histograms that you identify as necessary) and then lock the statistics.
2. Modify the job to re-gather statistics immediately after a temporary table is populated.
3. Delete the statistics, then lock them, and check the results (execution plan and performance) when the optimizer uses dynamic sampling.
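Option 3 might look like this (a minimal sketch; APP_OWNER and TEMP_DATA are placeholder names for the actual owner and temporary table):

BEGIN
  dbms_stats.delete_table_stats(ownname => 'APP_OWNER', tabname => 'TEMP_DATA');
  dbms_stats.lock_table_stats(ownname => 'APP_OWNER', tabname => 'TEMP_DATA');
END;
/

With no statistics present and the statistics locked, the optimizer falls back to dynamic sampling at parse time (with the default optimizer_dynamic_sampling = 2).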
Note: It is perfectly reasonable to create indexes on temporary tables, provided that you DO create the correct indexes. If jobs are querying the temporary tables for the full data set (all rows), indexes are a hindrance. If there are many separate queries against the temporary table, each query retrieving a small set of rows, an index or two may be beneficial. Also, some designs use unique indexes to enforce uniqueness when the tables are loaded.
    Hemant K Chitale

  • Question on gathering statistics in Enterprise Manager

When in EM, I choose Server -> Query Optimizer -> Manage Optimizer Statistics -> Gather Optimizer Statistics.
My question: when choosing the Object Type "Database", will this gather statistics for all of the other objects below the database (i.e. schemas, tables, indexes, etc.)? If I run the gather against a given schema, will this also compute statistics for all of the tables and indexes in that user's schema?
I tried looking through the documentation set but it is not clear to me. Can someone break this down sesame street style for me?

    934865 wrote:
When in EM, I choose Server -> Query Optimizer -> Manage Optimizer Statistics -> Gather Optimizer Statistics.
My question: when choosing the Object Type "Database", will this gather statistics for all of the other objects below the database (i.e. schemas, tables, indexes, etc.)? If I run the gather against a given schema, will this also compute statistics for all of the tables and indexes in that user's schema?
    yes
I tried looking through the documentation set but it is not clear to me. Can someone break this down sesame street style for me?
http://www.oracle.com/pls/db111/search?word=dbms_stats&partno=
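In DBMS_STATS terms, the database-level gather that EM performs corresponds roughly to the following call (a sketch; cascade => true also gathers statistics on the associated indexes):

exec dbms_stats.gather_database_stats(cascade => true);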

  • EAR4 - Gathering Statistics, unusable indexes and more

    Hi,
    Some feedback on statistics:
1. When choosing a table -> Statistics -> Gather Statistics, the minimum is to have a CASCADE option (so all the indexes will be analyzed too). I think it should be the default! This way there is a chance that the developers will have good statistics...
As a bonus, an advanced tab with the rest of the options might be nice, when you have time.
2. When choosing to gather statistics on an index, you should use dbms_stats and not ALTER INDEX ... COMPUTE STATISTICS, which is deprecated syntax (see the sketch below).
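The non-deprecated equivalent would be something like this (a sketch; the owner and index name are placeholders):

exec dbms_stats.gather_index_stats(ownname => 'MY_OWNER', indname => 'MY_INDEX');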
    And about indexes:
1. When looking at the Indexes tab of a table, unusable indexes should be visibly different - maybe just color the status column. Any color-coding can help convey more information very fast (index types and index status). I guess the same goes for disabled triggers, disabled constraints etc...
2. When right-clicking an index in the Indexes tab of a table, the only option is Export, which makes no sense. Could you replace it with the six relevant index options, just like when we right-click an index in the side bar (drop, rebuild, rename, make unusable...)?
The same goes for the Triggers tab of a table - when right-clicking a trigger, give us the trigger actions (enable/disable, drop...), not Export.
    my two cents,
    Ofir

When I choose a partitioned table from the tables list (tree view on the left), I have many tabs with details (Columns, Data, Indexes, etc).
1. The last tab, SQL, doesn't generate any CREATE TABLE SQL at all for the simple partitioned table I created (10g Release 2 on Windows 2000, Raptor 4.1, a table with a single partition).
2. There is no way to see the partition definitions - for example, the list of partitions and their ranges (or list values). I would like another tab for partitioned tables with that information (from all_tab_partitions). Also, how can I easily see the type of partitioning and the partition key of the table?
3. There is also no way to get that data from the built-in reports. The only report about partitioned tables that I see is Table -> Organization -> Partitioned -> Partitioned Tables, which only provides owner, table_name, maybe tablespace and logging (blank in my case). I think:
a. You should rewrite the report to use dba/all_part_tables - with columns like partitioning_type, subpartitioning_type, partition_count etc.
b. Add a report about the partition key columns per partitioned table from dba_part_key_columns.
4. When adding an index to a partitioned table, I can't choose a local/global index. The index is always created as a global index. For example, I can't create a bitmap index on a partitioned table, because bitmap indexes on partitioned tables must be local.
    Ofir

  • Behaviour of default value of METHOD_OPT

    Hello,
I was trying to test the impact of the extended statistics feature of 11g when I was puzzled by another observation.
I created a table (from the ALL_OBJECTS view). The data in this table was such that it had lots of rows where OWNER = 'PUBLIC'
and lots of rows where OBJECT_TYPE = 'JAVA CLASS', but no rows where OWNER = 'PUBLIC' AND OBJECT_TYPE = 'JAVA CLASS'.
I also created an index on the combination of (OWNER, OBJECT_TYPE).
Now, after collecting statistics on the table and index, I queried the table for the above condition (OWNER = 'PUBLIC' AND OBJECT_TYPE = 'JAVA CLASS').
To my surprise (or not), the query used the index.
Then I recollected the statistics on the table and index, and now the same query started to do a full table scan.
Only the creation of extended statistics ensured that the plan subsequently changed back to indexed access. While this proved the usefulness of extended stats,
I am not sure how Oracle was able to use the indexed access path initially but not afterwards.
Is this due to column usage monitoring info? Can anybody help?
    Here is my test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    5 rows selected.
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_capture_sql_plan_baselines boolean     FALSE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      11.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    optimizer_use_invisible_indexes      boolean     FALSE
    optimizer_use_pending_statistics     boolean     FALSE
    optimizer_use_sql_plan_baselines     boolean     TRUE
    SQL> create table t1 nologging as select * from all_objects ;
    Table created.
    SQL> exec dbms_stats.gather_table_stats(user, 'T1', no_invalidate=>false) ;
    PL/SQL procedure successfully completed.
    SQL> select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS' ;
    no rows selected
    SQL> select * from table(dbms_xplan.display_cursor) ;
    PLAN_TABLE_OUTPUT
    SQL_ID  bnrj3cac3upfd, child number 0
    select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS'
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |   226 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   |   155 | 15190 |   226   (1)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter(("OBJECT_TYPE"='JAVA CLASS' AND "OWNER"='PUBLIC'))
    18 rows selected.
    SQL> create index t1_idx on t1(owner, object_type) nologging ;
    Index created.
    SQL> select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS' ;
    no rows selected
    SQL> select * from table(dbms_xplan.display_cursor) ;
    PLAN_TABLE_OUTPUT
    SQL_ID  bnrj3cac3upfd, child number 0
    select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS'
    Plan hash value: 546753835
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |       |       |    23 (100)|          |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1     |   633 | 62034 |    23   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | T1_IDX |   633 |       |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("OWNER"='PUBLIC' AND "OBJECT_TYPE"='JAVA CLASS')
    19 rows selected.
    SQL> REM This shows that CBO decided to use the index even when there are no extended statistics
    SQL> REM Now, we will gather statistics on the table again and see what happens
    SQL> exec dbms_stats.gather_table_stats(user, 'T1', no_invalidate=>false) ;
    PL/SQL procedure successfully completed.
    SQL> select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS' ;
    no rows selected
    SQL> select * from table(dbms_xplan.display_cursor) ;
    PLAN_TABLE_OUTPUT
    SQL_ID  bnrj3cac3upfd, child number 0
    select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS'
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |   226 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   | 11170 |  1069K|   226   (1)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter(("OBJECT_TYPE"='JAVA CLASS' AND "OWNER"='PUBLIC'))
    18 rows selected.
    SQL> REM And the plan changes to Full Table scan. Why?

user503699 wrote:
Hemant K Chitale wrote:
A change in statistics drives a change in expected cardinality, which drives a change in plan.
In that case, how does one explain the same execution plan but a huge difference in cardinalities between the first and third executions?
1) Oracle can sometimes use index statistics. This most likely explains the difference in cardinality estimates between the 1st and 2nd statements.
2) You didn't specify estimate_percent - and that is not good practice. You can get different sets of statistics from different DBMS_STATS runs even when the data hasn't changed.
3) As already pointed out previously, histograms make the CBO behave quite differently. Most likely you have histograms present for the 3rd statement, quite possibly the result of not specifying estimate_percent.
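For reference, the extended statistics the original poster mentions can be created with DBMS_STATS. A sketch against the same T1 from the test case (the function returns the name of the generated extension; a subsequent gather populates it):

SQL> select dbms_stats.create_extended_stats(user, 'T1', '(OWNER,OBJECT_TYPE)') from dual;
SQL> exec dbms_stats.gather_table_stats(user, 'T1', no_invalidate=>false)

With a column group on (OWNER, OBJECT_TYPE), the optimizer can see that the combination 'PUBLIC' + 'JAVA CLASS' is rare, instead of multiplying the two individual selectivities.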

  • Stats gathering

    Hi All,
    oracle 11gr2
    Linux
I am new to databases. How do I gather statistics in Oracle 11g at the database, schema, and table level?
What is the impact of gathering stats on a daily basis? How do I automate stats gathering database-wide?
Can anyone please suggest and explain with an example if possible?
    thanks,
    Sam.

AUTO_SAMPLE_SIZE lets Oracle Database determine the best sample size necessary for good statistics, based on the statistical property of the object. Because each type of statistics has different requirements, the size of the actual sample taken may not be the same across the table, columns, or indexes.
(From the link posted before by Aman.)
If you omit this clause, a full scan will be done (again from the document Aman attached):
Gathering statistics without sampling requires full table scans and sorts of entire tables. Sampling minimizes the resources necessary to gather statistics.
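To answer the database/schema/table part of the question directly, a minimal sketch (SCOTT and EMP are placeholder names; each procedure also accepts estimate_percent => dbms_stats.auto_sample_size):

exec dbms_stats.gather_database_stats;
exec dbms_stats.gather_schema_stats('SCOTT');
exec dbms_stats.gather_table_stats('SCOTT', 'EMP');

As for automation, 11g already ships with an automatic optimizer statistics collection task that runs in the maintenance windows; query dba_autotask_client to check whether it is enabled.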

  • Performance problems post Oracle 10.2.0.5 upgrade

    Hi All,
We have patched our SAP ECC6 system's Oracle database from 10.2.0.2 to 10.2.0.5 (operating system: Solaris). This was done using the SAP Bundle Patch released in February 2011 (we patched DEV, QA and then Production).
Since patching production, we are experiencing slower performance of our long-running background jobs; e.g. our billing run has increased from 2 hours to 4 hours. The slowdown is constant and has not increased or decreased over a period of two weeks.
    We have so far implemented the following in production without any affect:
    We have double checked that database parameters are set correctly as per note Note 830576 - Parameter recommendations for Oracle 10g.
We have executed the ABAP<->DB crosscheck with db02old to check for missing indexes.
    Note 1020260 - Delivery of Oracle statistics (Oracle 10g, 11g).
It was suggested to look at adding specific indexes on tables and changing ABAP code identified by looking at the most "expensive" SQL statements being executed, but these were all there pre-patching and not within the critical long-running processes. Although a good idea for optimisation, this would not resolve the root cause of the problem introduced by the upgrade to 10.2.0.5. It was thus not implemented in production, although the suggested new indexes were tested in QA without effect, then backed out.
It was also suggested to implement SAP Note 1525673 - Optimizer merge fix for Oracle 10.2.0.5, which was not part of the SAP Bundle Patch released in February 2011 that we implemented. To do this we were required to implement the SAP Bundle Patch released in May 2011. As this also contains other Oracle fixes, we did not want to implement it directly in production. We thus ran baseline tests to measure performance in our QA environment, implemented the SAP Bundle Patch, and ran the same tests again (a simplified version of the implementation route). Result: no improvement in performance; in fact, in some cases we saw degraded performance (double the time). As this had the potential to negatively affect production, we have not yet implemented it there.
    Any suggestions would be greatly appreciated !

    Hello Johan,
well, the first goal should be to get back the original performance, so that you have time to do deeper analysis in your QA system (if the data set is the same).
If the problem is caused by some new optimizer features or bugs, you can try to "force" the optimizer to use the "old" 10.2.0.2 behaviour. Just set the parameter OPTIMIZER_FEATURES_ENABLE to 10.2.0.2 and check your performance.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams142.htm#CHDFABEF
To get more information we would need an AWR report (for an overview) and the problematic SQL statements (with all the supporting information: execution plan, statistics, etc.). This kind of analysis is very hard to do through a forum, so I would suggest opening an SAP SR for this issue.
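The parameter change might look like this (a sketch; trying it at session level first lets you compare plans before changing the whole system):

ALTER SESSION SET optimizer_features_enable = '10.2.0.2';
-- and once the improvement is verified:
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.2' SCOPE = BOTH;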
    Regards
    Stefan

  • Scheduled job does not stop even after the scheduling window closes

We have written a custom stats gathering job whose duration has been set as 14 hours in the schedule. This job starts daily at 6:00 PM and is expected to complete by 8:00 AM the next day. But the problem is that this job continues to execute even after 14 hours. If we look at the dba_scheduler_job_run_details view, its execution duration is about 16-19 hours daily. We do not understand why it is not stopped when the scheduling window closes. Is there any problem with the scheduler configuration?
    select job_name
          ,program_name
          ,schedule_name
          ,schedule_type
          ,stop_on_window_close
      from dba_scheduler_jobs
    where job_name = 'GATHER_STATS_STD_JOB';
Output:
JOB_NAME             : GATHER_STATS_STD_JOB
PROGRAM_NAME         : GATHER_STATS_STD_PROGRAM
SCHEDULE_NAME        : GATHER_STATS_STD_SCHEDULE
SCHEDULE_TYPE        : NAMED
STOP_ON_WINDOW_CLOSE : TRUE
    SELECT window_name
          ,schedule_owner
          ,schedule_name
          ,schedule_type
          ,start_date
          ,repeat_interval
          ,end_date
          ,duration
          ,window_priority
          ,next_start_date
          ,last_start_date
          ,enabled
          ,active
      FROM dba_scheduler_windows;
window_name     : GATHER_STATS_STD_WINDOW
schedule_owner  : sys
schedule_name   : GATHER_STATS_STD_SCHEDULE
schedule_type   : named
start_date      :
repeat_interval :
end_date        :
duration        : +00 14:00:00.000000
window_priority : high
next_start_date : 23-feb-15 06.00.10.000000 pm +02:00
last_start_date : 23-feb-15 06.00.10.095878 pm +02:00
enabled         : TRUE
active          : TRUE
    We are using Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod on Linux platform.

Straight out of the docs (chapter 83, DBMS_SCHEDULER):
    stop_on_window_close
    This attribute only applies if the schedule of a job is a window or a window group. Setting this attribute to TRUE implies that the job should be stopped once the associated window is closed. The job is stopped using the stop_job procedure with force set to FALSE.
Having said that, the force setting of stop_job affects how the job is terminated - you would have to test, but I believe the statistics gather will be terminated in the middle of whatever it's doing.
    Update: force says:
    force
    If force is set to FALSE, the Scheduler tries to gracefully stop the job using an interrupt mechanism. This method gives control back to the slave process, which can update the status of the job in the job queue to stopped. If this fails, an error is returned.
If force is set to TRUE, the Scheduler will immediately terminate the job slave. Oracle recommends that STOP_JOB with force set to TRUE be used only after a STOP_JOB with force set to FALSE has failed.
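If the window close is not stopping the job, it can also be stopped manually. A sketch (graceful attempt first, then forced; the forced stop requires the MANAGE SCHEDULER privilege):

BEGIN
  dbms_scheduler.stop_job(job_name => 'GATHER_STATS_STD_JOB', force => FALSE);
EXCEPTION
  WHEN OTHERS THEN
    -- graceful stop failed; terminate the job slave
    dbms_scheduler.stop_job(job_name => 'GATHER_STATS_STD_JOB', force => TRUE);
END;
/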

  • Check DB Job taking very long time

    Hi All,
    I have BI system SAP EHP 1 for SAP NetWeaver 7.0 with SP level as
    SAP_ABA 701 SAPKA70105
    SAP_BASIS 701 SAPKB70105
    Kernel is 152 and DBSL patch is 148
    Database is Oracle 10.2.0.4.0.
The database size is 4.6 TB and the CheckDB job is taking approx. 9-10 hours to complete. Even for a database this big, CHECKDB should not take this much time.
Every time we have to cancel this job, as it impacts system performance.
There are enough background work processes available in the system.
    Please provide any inputs if it can be helpful.
    Regards
    Vinay

    Hi Vinay,
In order to avoid this unexpected behavior, you need to use the latest BR*Tools, update/adjust individual statistics values, exclude the relevant tables from statistics creation using the BRCONNECT tool (ACTIV=I in DBSTATC), and lock the statistics at the Oracle level (DBMS_STATS.LOCK_TABLE_STATS).
BR*Tools 7.10 is used with Oracle 10g by default. This is also a prerequisite for most of the new features.
We need to have a plan for the following:
1. Update the BR*Tools version from 7.00 (40) to 7.10 (41) [ latest available in SMP ] or to 7.20 (16) [ exceptions for non-ABAP systems ]
    2. Execute the script attached to Note 1020260 - Delivery of Oracle statistics (Oracle 10g, 11g)
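The lock mentioned above would look something like this (a sketch; SAPSR3 and the table name are placeholders for the actual SAP schema and table):

exec dbms_stats.lock_table_stats('SAPSR3', '<TABLE_NAME>');

A locked table is then skipped by regular statistics runs until dbms_stats.unlock_table_stats is called.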
    Br,
    Venky.

  • Stats gathering pl/sql syntax

    Hi All:
I'm trying to write PL/SQL to conditionally either gather or delete table stats (I am on 10.2.0.3).
The table name is a variable, something like this:
exec dbms_stats.delete_table_stats('MYOWNER','v_table_name');
else
exec dbms_stats.gather_table_stats(ownname=>'MYOWNER',tabname=>'v_table_name',estimate_percent=>20, method_opt=>'for all columns size skewonly');
    Two questions:
- What is the syntax to pass the variable in the parameter list?
- Is there any way to combine more than one METHOD_OPT directive, e.g. FOR ALL INDEXED COLUMNS together with SKEWONLY?
    Thanks!

As 10g already gathers statistics automatically, why do this at all?
Apart from that, you would only need:
create or replace procedure foo(p_owner in varchar2, p_table_name in varchar2, p_percent in number, p_opt in varchar2) is
begin
  -- dbms_stats.delete_table_stats(p_owner, p_table_name);
  dbms_stats.gather_table_stats(ownname => p_owner, tabname => p_table_name, estimate_percent => p_percent, method_opt => p_opt);
end;
/
exec foo('MY_OWNER', ...)
Just standard PL/SQL!
You also don't need to delete the statistics first; gather_table_stats will take care of it.
    Sybrand Bakker
    Senior Oracle DBA

  • Cursor Cache

    Hi All,
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    I will not be able to share the query due to company policy.
OEM is showing a merge Cartesian join plan for the query. I know the plan is not correct, as the query has an incorrect cardinality estimate. I have a SQL profile set on this query.
OEM shows:
Data Source : Cursor Cache
Additional Information : 'SYS_SQL_PROFXXXXXX' (X is some number)
Here is what is happening:
1. The table involved in the merge join is purged daily (EOD, i.e. 12 AM), so at that point it has no rows.
2. Around 4 AM a process populates this table and then uses it in a query. The query plan has a merge join Cartesian (MJC), and it completes quickly because the number of rows is very small.
3. Around 6 AM the process is triggered again. This time the table has a huge number of rows, yet the query picks up the same MJC plan, and now it runs for hours because of the incorrect cardinality. When I run the SQL advisor on the query again, it shows an optimized plan; I kill the process and re-run it, and it works fine (the query finishes within 3 seconds).
I guess at 6 AM it still picks up the previous merge join plan, from when the number of rows was small, out of the cursor cache - OEM also shows the data source as Cursor Cache. Can we invalidate the cached cursor if this is the case?
Please help - how can we handle this one?

I think you are describing a common problem in data warehouses: there are staging tables, sometimes empty, sometimes with millions of rows, so the statistics may not be realistic. What is the result of the following query?
select num_rows, last_analyzed from dba_tables where table_name = '<your_table>';
If this is the problem, you should consider one of the following strategies:
1) Analyze the table when it is "full" and ensure that no analyze (or gather_schema_stats) ever runs over this table afterwards. This strategy works fine if the table is populated with similar data every day, but you may need to change a gather_schema_stats job schedule; you should be aware of when and how the statistics are updated.
2) Populate the table, then run gather_table_stats over it, wait for the gather_table_stats to complete, and only then trigger the 6 AM process. You may need to schedule the process before 6 AM to allow time for the statistics gathering.
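Strategy 1 can be hardened against scheduled jobs by locking the statistics once they are representative (a sketch; STG_OWNER and STG_TABLE are placeholder names):

exec dbms_stats.gather_table_stats('STG_OWNER', 'STG_TABLE');
exec dbms_stats.lock_table_stats('STG_OWNER', 'STG_TABLE');

A locked table is skipped by gather_schema_stats unless the lock is overridden with the force parameter.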
    I hope this helps
    Regards,
    Alfonso

  • Dashboard cursor cache query statement

Hi,
I would like to ask if there is a way to get the query statement that was executed to produce a dashboard result. I'm wondering if it is possible to get this programmatically, for example using a session variable.
Basically, I would like to replicate the data in the dashboard, and I would like the query statement to be executed (by another application) via the OBIEE web service.
Thanks.

