Gather Stats on Newly Partitioned Table

I partitioned an existing table containing 92 million rows. The method was dbms_redefinition: I started the redefinition and then added the indexes and constraints last. After partitioning, I did not gather stats on any of the partitions that were created, and I did not analyze any of the indexes. I then loaded an additional 4 million records into one of the partitions of the newly partitioned table. I ran DBMS_STATS against this particular partition and it had still not finished after 15 hours, so I stopped it; normally it takes only about 4 hours to gather stats on an individual partition. While it was running, it appeared to spend most of its time gathering stats on the indexes. Is this normal for a newly partitioned table? Is there something I can do to prevent gather stats from taking so long? Oracle Version 10.2.0.4

-- Gather PARTITION Statistics
SYS.DBMS_STATS.gather_table_stats(ownname => upper(v_table_owner), tabname => upper(v_table_name),
    partname => v_table_partition_name, estimate_percent => 20, cascade => FALSE, granularity => 'PARTITION');

-- Gather GLOBAL (non-partitioned) INDEX Statistics
for i in (select index_name
            from sys.dba_indexes
           where table_owner = upper(v_table_owner)
             and table_name = upper(v_table_name)
             and partitioned = 'NO'
           order by index_name)
loop
  SYS.DBMS_STATS.gather_index_stats(ownname => upper(v_table_owner), indname => i.index_name,
      estimate_percent => 20, degree => NULL);
end loop;

-- Gather SUB-PARTITION Statistics
SYS.DBMS_STATS.gather_table_stats(ownname => upper(v_table_owner), tabname => upper(v_table_name),
    partname => v_table_subpartition_name, estimate_percent => 20, cascade => TRUE, granularity => 'ALL');
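
The long run is most likely caused by the last call: granularity => 'ALL' with cascade => TRUE forces a global pass over all 92 million rows and every global index, not just the freshly loaded partition. A cheaper alternative, sketched below on the assumption that only one partition changed (owner, table and partition names and the degree are placeholders), is to restrict the gather to that partition and its local index partitions and leave the global pass for a quieter window:

-- Sketch only: gather just the changed partition, skip the global pass and the cascade.
BEGIN
  DBMS_STATS.gather_table_stats(
      ownname          => 'MY_OWNER',
      tabname          => 'MY_TABLE',
      partname         => 'MY_PART_2011',      -- only the partition that was loaded
      granularity      => 'PARTITION',         -- no global (all-partition) statistics
      cascade          => FALSE,               -- indexes handled separately below
      estimate_percent => 20,
      degree           => 4);

  -- Local index partitions that belong to the same table partition
  -- (assumes the local index partition names match the table partition name).
  FOR ix IN (SELECT ip.index_owner, ip.index_name, ip.partition_name
               FROM sys.dba_indexes i
               JOIN sys.dba_ind_partitions ip
                 ON ip.index_owner = i.owner
                AND ip.index_name  = i.index_name
              WHERE i.table_owner = 'MY_OWNER'
                AND i.table_name  = 'MY_TABLE'
                AND i.partitioned = 'YES'
                AND ip.partition_name = 'MY_PART_2011')
  LOOP
    DBMS_STATS.gather_index_stats(
        ownname          => ix.index_owner,
        indname          => ix.index_name,
        partname         => ix.partition_name,
        estimate_percent => 20,
        degree           => 4);
  END LOOP;
END;
/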

Similar Messages

  • Skip gather stats for a partition | table from the gather_stats_job in 10g

    Hi all,
    Can we skip gathering statistics for a table, or for one partition of a partitioned table, in the GATHER_STATS_JOB in Oracle 10g?
    (That partition is stored in an offline datafile, so GATHER_STATS_JOB raised errors when it ran on schedule.)
    Thanks.

    GATHER_TABLE_STATS defaults to GRANULARITY 'AUTO', which includes both Global and Partition statistics. Global statistics have to be computed across all the partitions, so Oracle will attempt to read every partition for this!
    You need to run GATHER_TABLE_STATS with GRANULARITY 'PARTITION', naming the partitions explicitly, i.e. run it for each of the online partitions.
    See :
    SQL> create table XYZ (col_1  number, col_2 varchar2(5))
      2  partition by range (col_1)
      3  (partition P1 values less than (10) tablespace HEMANT,
      4  partition P2 values less than (100) tablespace USERS)
      5  /
    Table created.
    SQL> insert into XYZ values (5,'Five');
    1 row created.
    SQL> insert into XYZ values (50,'Fifty');
    1 row created.
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL');
    PL/SQL procedure successfully completed.
    SQL> select partition_name, tablespace_name , num_rows, sample_size from user_tab_partitions
      2  where table_name = 'XYZ'
      3  /
    PARTITION_NAME   TABLESPACE_NAME   NUM_ROWS  SAMPLE_SIZE
    P1               HEMANT                   1            1
    P2               USERS                    1            1
    SQL>
    SQL> exec dbms_stats.lock_table_stats('','XYZ');
    PL/SQL procedure successfully completed.
    SQL> alter tablespace HEMANT offline;
    Tablespace altered.
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL');
    BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL'); END;
    ERROR at line 1:
    ORA-20005: object statistics are locked (stattype = ALL)
    ORA-06512: at "SYS.DBMS_STATS", line 13159
    ORA-06512: at "SYS.DBMS_STATS", line 13179
    ORA-06512: at line 1
    SQL>
    SQL> exec dbms_stats.unlock_table_stats('','XYZ');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.lock_partition_stats('','XYZ','P1');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL');
    BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL'); END;
    ERROR at line 1:
    ORA-00376: file 2 cannot be read at this time
    ORA-01110: data file 2:
    '/usr/oracle/oradata/MONDB/datafile/o1_mf_hemant_7d6m8zkx_.dbf'
    ORA-06512: at "SYS.DBMS_STATS", line 13159
    ORA-06512: at "SYS.DBMS_STATS", line 13179
    ORA-06512: at line 1
    SQL>
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2');
    BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2'); END;
    ERROR at line 1:
    ORA-00376: file 2 cannot be read at this time
    ORA-01110: data file 2:
    '/usr/oracle/oradata/MONDB/datafile/o1_mf_hemant_7d6m8zkx_.dbf'
    ORA-06512: at "SYS.DBMS_STATS", line 13159
    ORA-06512: at "SYS.DBMS_STATS", line 13179
    ORA-06512: at line 1
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2',granularity=>'GLOBAL AND PARTITION');
    BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2',granularity=>'GLOBAL AND PARTITION'); END;
    ERROR at line 1:
    ORA-00376: file 2 cannot be read at this time
    ORA-01110: data file 2:
    '/usr/oracle/oradata/MONDB/datafile/o1_mf_hemant_7d6m8zkx_.dbf'
    ORA-06512: at "SYS.DBMS_STATS", line 13159
    ORA-06512: at "SYS.DBMS_STATS", line 13179
    ORA-06512: at line 1
    SQL>
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2',granularity=>'PARTITION');
    PL/SQL procedure successfully completed.
    SQL>
    Hemant K Chitale

  • How to gather stats on the target table

    Hi
    I am using OWB 10gR2.
    I have created a mapping with a single target table.
    I have checked the mapping configuration 'Analyze Table Statements'.
    I have set target table property 'Statistics Collection' to 'MONITORING'.
    My requirement is to gather stats on the target table, after the target table is loaded/updated.
    According to Oracle's OWB 10gR2 User Document (B28223-03, Page#. 24-5)
    Analyze Table Statements
    If you select this option, Warehouse Builder generates code for analyzing the target
    table after the target is loaded, if the resulting target table is double or half its original
    size.
    My issue is that when my target table does not double or halve in size, the target table DOES NOT get analyzed.
    I am looking for a way or a setting in OWB 10gR2 to gather stats on my target table after it is loaded/updated, regardless of its size.
    Thanks for your help in advance...
    ~Salil

    Hi
    Unfortunately we have to disable the automatic stats gathering on the 10g database.
    My requirement is to extract data from one database, load it into my TEMP tables, process it, and finally load it into my data warehouse tables.
    So I need to make sure my TEMP tables are analyzed after they are truncated, loaded and subsequently updated, before I can process the data and load it into the data warehouse tables.
    I also need to truncate all TEMP tables after the load is completed to save space on my target database.
    If we keep automatic stats gathering ON for my target 10g database, it might gather stats for TEMP tables that are empty at the time the stats are gathered.
    Any ideas to overcome this issue are appreciated.
    Thanks
    Salil
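
    One way around this (sketch only; the TEMP table names below are placeholders) is to lock the statistics on the TEMP tables so the automatic job skips them, and then gather manually right after each load, overriding the lock with force => TRUE:

    -- Lock once so the nightly auto job leaves the staging tables alone:
    EXEC DBMS_STATS.lock_table_stats(USER, 'TEMP_STG_1');
    EXEC DBMS_STATS.lock_table_stats(USER, 'TEMP_STG_2');

    -- After each load, gather manually despite the lock:
    BEGIN
      DBMS_STATS.gather_table_stats(ownname => USER,
                                    tabname => 'TEMP_STG_1',
                                    cascade => TRUE,
                                    force   => TRUE);  -- gather even though stats are locked
    END;
    /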

  • Auto stats gathering for partitioned table

    Hi,
    We are on 10gR2 on Sun Solaris and use automatic stats gathering for our DB. Here is my question:
    I know Oracle gathers statistics on a table if the table changes by more than 10%. How does this work for a partitioned table? If the partitioned table changes by more than 10%, will only the latest partition be analyzed, or the full table? We have partitioned based on insertion date.
    Appreciate your responds.
    Regards,
    Satheesh Shanmugam
    http://borndba.com

    I hope it will be only the current partition with stale statistics that has its statistics gathered, rather than the full table.
    Anil Malkai
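
    In 10gR2 the change counts that drive staleness are tracked per partition as well as per table, so you can check which partitions the automatic job considers changed. A hedged query sketch (the table name is a placeholder):

    -- DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO can be run first so the counters are current.
    SELECT table_name, partition_name, inserts, updates, deletes, truncated
      FROM dba_tab_modifications
     WHERE table_owner = USER
       AND table_name  = 'MY_PART_TABLE'
     ORDER BY partition_name;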

  • Gather stats on every table in a schema

    Hi,
    I have a CRM application running on a 10gR2 DB. It has 5000 tables, of which fewer than 10% are dynamic. The gather stats job runs successfully every day at 2am.
    I was monitoring the statistics (dba_tables, dba_tab_modifications, dba_tab_statistics) and noticed that only 28 tables are updated with fresh stats each day for the CRM schema, and most of them are the same tables. During query tuning I found that some tables have stale stats, yet they are not flagged in the STALE_STATS column of dba_tab_statistics, even though dba_tab_modifications shows rows inserted and updated for them.
    My question is: is there any drawback to gathering stats for all the tables every day, irrespective of whether 10% of the data has changed, while skipping tables with no rows?

    Thanks for the quick response, it was helpful.
    Due to application vendor recommendations, stats were disabled for some tables and optimizer parameters were changed, which keeps dynamic sampling from being used for some queries against the tables with no stats. According to the documentation, the stats should be calculated on the fly when querying tables whose stats have not been gathered.
    For now I am not gathering stats manually for this schema, since the automatic job is scheduled. I will verify whether the stats really are updated once 10% of the data has been loaded; if not, I may manually gather stats for only those tables.
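
    If the goal is simply to refresh whatever the database itself has flagged as stale (and leave unchanged tables alone), one option is the schema-level call with the GATHER STALE option; a minimal sketch:

    BEGIN
      DBMS_STATS.gather_schema_stats(ownname => USER,
                                     options => 'GATHER STALE',
                                     cascade => TRUE);
    END;
    /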

  • Error while creating partition table

    Hi friends, I am getting an error while trying to create a range-partitioned table:
    ORA-00906: missing left parenthesis. I used the following statement to create the partitioned table:
    CREATE TABLE SAMPLE_ORDERS
    (ORDER_NUMBER NUMBER,
    ORDER_DATE DATE,
    CUST_NUM NUMBER,
    TOTAL_PRICE NUMBER,
    TOTAL_TAX NUMBER,
    TOTAL_SHIPPING NUMBER)
    PARTITION BY RANGE(ORDER_DATE)
    PARTITION SO99Q1 VALUES LESS THAN TO_DATE(‘01-APR-1999’, ‘DD-MON-YYYY’),
    PARTITION SO99Q2 VALUES LESS THAN TO_DATE(‘01-JUL-1999’, ‘DD-MON-YYYY’),
    PARTITION SO99Q3 VALUES LESS THAN TO_DATE(‘01-OCT-1999’, ‘DD-MON-YYYY’),
    PARTITION SO99Q4 VALUES LESS THAN TO_DATE(‘01-JAN-2000’, ‘DD-MON-YYYY’),
    PARTITION SO00Q1 VALUES LESS THAN TO_DATE(‘01-APR-2000’, ‘DD-MON-YYYY’),
    PARTITION SO00Q2 VALUES LESS THAN TO_DATE(‘01-JUL-2000’, ‘DD-MON-YYYY’),
    PARTITION SO00Q3 VALUES LESS THAN TO_DATE(‘01-OCT-2000’, ‘DD-MON-YYYY’),
    PARTITION SO00Q4 VALUES LESS THAN TO_DATE(‘01-JAN-2001’, ‘DD-MON-YYYY’)
    ;

    There is more than one missing parenthesis. Try this instead:
    CREATE TABLE SAMPLE_ORDERS
    (ORDER_NUMBER NUMBER,
    ORDER_DATE DATE,
    CUST_NUM NUMBER,
    TOTAL_PRICE NUMBER,
    TOTAL_TAX NUMBER,
    TOTAL_SHIPPING NUMBER)
    PARTITION BY RANGE(ORDER_DATE) (
    PARTITION SO99Q1 VALUES LESS THAN (TO_DATE('01-APR-1999', 'DD-MON-YYYY')),
    PARTITION SO99Q2 VALUES LESS THAN (TO_DATE('01-JUL-1999', 'DD-MON-YYYY')),
    PARTITION SO99Q3 VALUES LESS THAN (TO_DATE('01-OCT-1999', 'DD-MON-YYYY')),
    PARTITION SO99Q4 VALUES LESS THAN (TO_DATE('01-JAN-2000', 'DD-MON-YYYY')),
    PARTITION SO00Q1 VALUES LESS THAN (TO_DATE('01-APR-2000', 'DD-MON-YYYY')),
    PARTITION SO00Q2 VALUES LESS THAN (TO_DATE('01-JUL-2000', 'DD-MON-YYYY')),
    PARTITION SO00Q3 VALUES LESS THAN (TO_DATE('01-OCT-2000', 'DD-MON-YYYY')),
    PARTITION SO00Q4 VALUES LESS THAN (TO_DATE('01-JAN-2001', 'DD-MON-YYYY')));
    In the future, if you are having problems, go to Morgan's Library at www.psoug.org.
    Find a working demo, copy it, then modify it for your purposes.

  • Regarding partition table..pls help

    I am creating a table
    as
    CREATE TABLE T3
    PARTITION BY RANGE(CREATED)
    (PARTITION T1 VALUES LESS THAN (TO_DATE('01-APR-2006','DD-MON-YYYY')),
    PARTITION T2 VALUES LESS THAN (TO_DATE('01-JUN-2006','DD-MON-YYYY')))
    AS SELECT * FROM T1
    I am getting the error:
    ORA-14400: inserted partition key does not map to any partition
    Since I am new to partitioning concepts, please help.
    Thanks.

    Oops :-)
    I referenced the source table name (T1) instead of the newly partitioned table (T3) from your original post, sorry :)
    The select should read:
    SELECT *
    FROM T3 PARTITION (T3);
    Thinking about it, you should give the partition a different name in order to avoid confusion!
    Yoann.
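
    For the ORA-14400 itself: the error means some rows in T1 have a CREATED value at or above the highest partition bound (01-JUN-2006). One common fix, sketched here with renamed partitions and an added catch-all partition (the partition names are only suggestions), is:

    CREATE TABLE T3
    PARTITION BY RANGE (CREATED)
    (PARTITION T3_P1   VALUES LESS THAN (TO_DATE('01-APR-2006','DD-MON-YYYY')),
     PARTITION T3_P2   VALUES LESS THAN (TO_DATE('01-JUN-2006','DD-MON-YYYY')),
     PARTITION T3_PMAX VALUES LESS THAN (MAXVALUE))   -- catches everything later
    AS SELECT * FROM T1;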

  • Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate new Oracle 12c enhancements on SecureFiles, I tried to use PDML statements on a non partitioned table with LOB column, in both Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run in parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
    Is this normal? It is not supposed to be supported in Oracle 11g on a non-partitioned table containing a LOB column....
    Thank you for your help.
    Michael

    Yes, I did. I tried with force parallel dml, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in Parallel.
    I tried with another statement :
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems, but not the INSERT AS SELECT!
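
    One way to confirm which part actually ran in parallel (a sketch; it just repeats the checks already used above) is to look at the "DML Parallelized" row of V$PQ_SESSTAT right after the statement. "Queries Parallelized" only reflects the read side, so a DML count of 0 means the delete itself ran serially:

    ALTER SESSION ENABLE PARALLEL DML;

    DELETE /*+ PARALLEL(t1, 8) */ FROM t1;

    -- DML Parallelized counts parallel DML statements; Queries Parallelized only counts the query side.
    SELECT statistic, last_query
      FROM v$pq_sesstat
     WHERE statistic IN ('DML Parallelized', 'Queries Parallelized');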

  • Insert statement does not insert all records from a partitioned table

    Hi
    I need to insert records into a table from a partitioned table. I set up a job and, to my surprise, found that the insert statement is not inserting all the records from the partitioned table.
    For example, a select statement against the partitioned table returns 400 records, but the insert only inserts 100 records.
    Can anyone help with this matter?

    INSERT INTO TABLENAME (COLUMNS)
    (SELECT *
       FROM SCHEMA1.TABLENAME1
       JOIN SCHEMA2.TABLENAME2 a
         ON CONDITION
       JOIN SCHEMA2.TABLENAME2 b
         ON CONDITION AND CONDITION
      WHERE CONDITION
        AND CONDITION
        AND CONDITION
        AND CONDITION
        AND CONDITION
      GROUP BY COLUMNS
     HAVING SUM(COLUMN) > 0)

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB Version is:
    BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    In our application, users upload files that result in records being inserted into a table. A file could contain anywhere from 10,000 to 1 million records.
    I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables when huge files are uploaded simultaneously. After gathering stats, the cost drops and the queries execute faster.
    We have 2 such tables that grow based on user file uploads. I would like to schedule a job to gather stats on these two tables during a non-peak hour, in addition to the nightly automated Oracle job.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    CREATE OR REPLACE PROCEDURE p_manual_gather_table_stats AS
        TYPE ttab IS TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
        ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(ownname          => USER,
                                          tabname          => ltab(i),
                                          estimate_percent => dbms_stats.auto_sample_size,
                                          method_opt       => 'for all indexed columns size auto',
                                          degree           => dbms_stats.auto_degree,
                                          cascade          => TRUE);
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.
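
    If you do want your own job to be the only thing gathering stats on TAB1 and TAB2, one hedged option (using the names from the post) is to lock their statistics so the autotask skips them, and have the procedure override the lock:

    -- Lock once so the automatic stats job leaves these two tables alone:
    EXEC DBMS_STATS.lock_table_stats(USER, 'TAB1');
    EXEC DBMS_STATS.lock_table_stats(USER, 'TAB2');

    -- Inside p_manual_gather_table_stats, add force => TRUE to the existing call:
    dbms_stats.gather_table_stats(ownname          => USER,
                                  tabname          => ltab(i),
                                  estimate_percent => dbms_stats.auto_sample_size,
                                  method_opt       => 'for all indexed columns size auto',
                                  degree           => dbms_stats.auto_degree,
                                  cascade          => TRUE,
                                  force            => TRUE);  -- gather despite the lock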

  • Keeping stats up to date for partitioned tables

    Hi,
    Oracle version 10.2.0.4
    I have a partitioned table. I would like to keep its stats up to date.
    Can I just run a single command to update the table, index and partition stats, please?
    exec dbms_stats.gather_table_stats(user, 'TABLE', cascade=>true)
    or do I also need to run:
    exec dbms_stats.gather_table_stats(user, 'TABLE', granularity=>'PARTITION')
    thanks,
    Ashok

    thanks
    Yes, there were many indexes on the original non-partitioned table. I have created another table, partitioned, and am now populating it with the data from the original table. The new table is partitioned on a date-range column: one partition for all years before 2012, then one each for 2012, 2013 and so forth.
    The indexes are all created locally, except for a unique index (as on the original table) created globally to enforce uniqueness across the whole table. The searches always look at year to date, say 1st Jan 2012 till today, for risk analysis. The partitioning is on that date column and there is also a local index on that column to avoid a full table scan (tested by disabling that index; it predictably did a full scan and was less efficient).
    In a DW environment I don't see much value in a global index except for the primary key/unique constraint. I do realise that if a query crosses more than one partition, say two, there will be two local b-tree index scans rather than one, but that would be rare (given the way they query the table).
    Therefore my plan is to perform a full table stats gather with cascade=>true, measure how long it takes, and keep doing the same if the maintenance window allows it.
    thanks again for your help

  • Analyse a partitioned table with more than 50 million rows

    Hi,
    I have a partitioned table with more than 50 million rows. The last analyze was on 1/25/2007. Do I need to analyze it? (Queries on this table run very slowly.)
    If I do need to analyze it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
    Justin
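
    If gathering does turn out to be necessary, a hedged sketch of the "back up first, then gather only global stats with a small sample" approach described above (the stat-table and table names are placeholders):

    -- Back up the current statistics so they can be restored if a plan regresses:
    EXEC DBMS_STATS.create_stat_table(USER, 'STATS_BACKUP');
    EXEC DBMS_STATS.export_table_stats(USER, 'BIG_PART_TAB', stattab => 'STATS_BACKUP');

    -- Refresh only the global statistics, leaving partition-level stats untouched:
    BEGIN
      DBMS_STATS.gather_table_stats(ownname          => USER,
                                    tabname          => 'BIG_PART_TAB',
                                    granularity      => 'GLOBAL',
                                    estimate_percent => 1,
                                    cascade          => FALSE);
    END;
    /

    -- To restore the old statistics if needed:
    -- EXEC DBMS_STATS.import_table_stats(USER, 'BIG_PART_TAB', stattab => 'STATS_BACKUP');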

  • Query on partitioned table runs indefinitely

    Hi,
    Customer's Oracle GL implementation has the tables GL_BALANCES and GL_JE_LINES partitioned on PERIOD_NAME, i.e. MON-YY.
    Both GL_BALANCES and GL_JE_LINES have LOCAL INDEXES.
    For GL_JE_LINES it is (PERIOD_NAME, CODE_COMBINATION_ID)
    For GL_BALANCES it is (PERIOD_NAME, CODE_COMBINATION_ID)
    We have had more than one occasion in our Production Instance when queries accessing rows via the above local indexes ran indefinitely for hours. We then gather stats using fnd_stats.gather_table_stats for the specific partition, e.g. NOV-09, and the queries complete as soon as the stats gathering is under way.
    I am really puzzled how gathering stats helps in this case, when the query is already running, and tracing the SQL session shows it is using the LOCAL INDEX.
    Any pointers on what Gather Stats is doing would be great.
    We do gather stats regularly in our Production Instance.
    Also, while a long-running SQL query against GL_BALANCES for a specific period was in flight, I checked the stats and the number of rows in dba_tab_partitions was not much different from the physical count of records for that period.
    Sometimes we have seen TEMP space run out, even up to 48 GB.
    In one instance we saw a huge number of direct path write temp wait events. After gather stats was done, we no longer see this wait event. Note we never used SORT or GROUP BY when we saw the above wait event.
    Regards,
    Prince

    Presumably the optimizer was choosing a poor execution plan because the statistics (data volumes and distribution) were out of date, and gathering the stats enabled it to make some better decisions and arrive at a better execution plan.
    For example, it may have thought the newest partitions were empty (see num_rows in all_tab_partitions) when in fact they had been loaded with millions of rows since you last gathered stats.
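
    A quick way to check for that situation is to compare what the dictionary believes about a partition with what is actually in it; a hedged sketch (the schema and partition names are placeholders):

    -- What the optimizer believes about the partition:
    SELECT partition_name, num_rows, last_analyzed
      FROM all_tab_partitions
     WHERE table_name = 'GL_BALANCES'
       AND partition_name = 'NOV_09';

    -- What is actually there (can be slow on a large partition):
    SELECT COUNT(*) FROM gl.gl_balances PARTITION (NOV_09);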

  • Gathering statistics on partitioned and non-partitioned tables

    Hi all,
    My DB is 11.1
    I find that gathering statistics on partitioned tables is really slow.
    TABLE_NAME        NUM_ROWS    BLOCKS  SAMPLE_SIZE  LAST_ANALYZED  PARTITIONED  COMPRESSION
    O_FCT_BP1        112123170    843140     11212317  8/30/2011 3:5  NO           DISABLED
    LEON_123456      112096060    521984     11209606  8/30/2011 4:2  NO           ENABLED
    O_FCT            115170000    486556       115170  8/29/2011 6:3  YES
    SQL> SELECT COUNT(*) FROM user_tab_subpartitions
      2  WHERE table_name = 'O_FCT'
      3  ;
      COUNT(*)
           112
    I used the following script:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent => 10,
                                    granularity      => 'ALL',
                                    cascade          => false);
    END;
    /
    It takes 2 minutes each to gather statistics on the first two tables, but more than 10 minutes for the partitioned table.
    The time spent collecting statistics accounts for a large part of the total batch time.
    Most jobs in the batch are full loads, in which case all partitions and subpartitions are affected and we can't just gather specific partitions.
    Does anyone have some experiences on this subject? Thank you very much.
    Best regards,
    Leon

    Hi Leon
    Why don't you gather stats at partition level? If a partition's data is not going to change after a day (a date-range partition, for example), you can simply gather at that level:
    GRANULARITY=>'PARTITION' for partition level and
    GRANULARITY=>'SUBPARTITION' for subpartition level.
    You are gathering global stats every time, which you may not require.
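
    A hedged sketch of a partition-level gather for just the partition that changed (the table and partition names are placeholders; on 11.1 the INCREMENTAL table preference is another option worth investigating, since it lets global stats be derived from per-partition synopses instead of a full re-scan):

    BEGIN
      DBMS_STATS.gather_table_stats(ownname          => USER,
                                    tabname          => 'O_FCT',
                                    partname         => 'P_20110830',   -- only the partition just loaded
                                    granularity      => 'PARTITION',
                                    estimate_percent => 10,
                                    degree           => 4,
                                    cascade          => FALSE);
    END;
    /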

  • Can I gather object statistics on large tables at the same time?

    We have large partitioned tables to the tune of 3-4 billion rows, and they have no object statistics. Can I gather object statistics for several of them at the same time, for example 4-5 large tables at once? I need to gather them in batches because we have several such large tables and I have to schedule the gathering carefully, so I want to start with 4 tables. I'm wondering whether gathering statistics on these tables will be intrusive or will impact performance while it is running.
    Alex

    What version are you running? If you are running 11g, statistics are gathered automatically via the autotask job DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC which, depending on your maintenance window, will normally run daily at 10 pm. I'm very surprised that no stats have been gathered, as this collects stats on all new tables and on any table where more than 10% of the rows have changed.
    See the following links:
    http://www.oracle-base.com/articles/11g/automated-database-maintenance-task-management-11gr1.php
    http://docs.oracle.com/cd/B28359_01/server.111/b28274/stats.htm
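
    If the four tables do need to be gathered by hand and side by side, one hedged way (the table names and the degree are placeholders) is to submit one scheduler job per table so the gathers run concurrently rather than serially:

    BEGIN
      FOR t IN (SELECT column_value AS tname
                  FROM TABLE(sys.odcivarchar2list('BIG_T1','BIG_T2','BIG_T3','BIG_T4')))
      LOOP
        DBMS_SCHEDULER.create_job(
            job_name   => 'GATHER_' || t.tname,
            job_type   => 'PLSQL_BLOCK',
            job_action => 'BEGIN DBMS_STATS.gather_table_stats(ownname => USER, ' ||
                          'tabname => ''' || t.tname || ''', ' ||
                          'estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, ' ||
                          'degree => 4, cascade => TRUE); END;',
            enabled    => TRUE);   -- starts immediately, runs once
      END LOOP;
    END;
    /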
