Partitioned Incremental Table - no stats gathered on new partitions

Dear Gurus
Hoping someone can point me in the right direction to troubleshoot. Version: Enterprise Edition 11.1.0.7 on AIX.
Range-partitioned table with hash subpartitions.
Automatic stats gathering is on.
dba_tables shows global stats YES, last_analyzed 06/09/2011 (when the table was first analyzed on migration of the data), and dba_tab_partitions shows most partitions analyzed on that date, with most of the others analyzed up until 10/10/2011 - done automatically by the weekend gather-stats scheduled job.
46 new partitions have been added in the last few months, but dba_tab_partitions shows no stats gathered on them, and dba_tables last_analyzed still says 06/09/2011 - the date stats were first gathered manually rather than by the auto stats gatherer.
Checked dbms_stats.get_prefs: the table is set to incremental, and all the default values recommended by Oracle are in place, including PUBLISH = TRUE.
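From memory, the check was something like this (schema and table names changed here, since I can't copy the exact commands out of work):
SELECT dbms_stats.get_prefs('INCREMENTAL', 'MY_SCHEMA', 'MY_BIG_TABLE') FROM dual;
SELECT dbms_stats.get_prefs('PUBLISH', 'MY_SCHEMA', 'MY_BIG_TABLE') FROM dual;
-- both came back TRUE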
For the new partitions, dba_tab_partitions has no values in num_rows, last_analyzed etc.
dba_tab_modifications shows approximately 8 million inserts per new partition - no deletes or updates.
dba_tab_statistics has no values for the new partitions, and all the other partitions are marked NO in the stale_stats column.
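The dictionary checks were along these lines (again from memory, names changed):
SELECT partition_name, num_rows, last_analyzed
FROM dba_tab_partitions
WHERE table_owner = 'MY_SCHEMA' AND table_name = 'MY_BIG_TABLE';
SELECT partition_name, inserts, updates, deletes
FROM dba_tab_modifications
WHERE table_owner = 'MY_SCHEMA' AND table_name = 'MY_BIG_TABLE';
SELECT partition_name, stale_stats
FROM dba_tab_statistics
WHERE owner = 'MY_SCHEMA' AND table_name = 'MY_BIG_TABLE';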
Checked the dbms_stats job history - it showed that stats gathering stopped within the automatically allowed maintenance window.
Looked at Grid Control - the stats gather for the table started at 6am Saturday morning and closed at 2am Monday morning.
Checked against the maintenance window - it stopped analyzing that table at 2am exactly, having tried to analyze it since 6am Saturday morning, i.e. it ran to the end of the window without finishing.
Had expected that, as the table was in incremental mode, it wouldn't have run out of time and the new partitions would have been analyzed within the window.
The job_queue_processes parameter on the database was 1.
Increased job_queue_processes to 2.
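i.e. something like this (assuming an spfile is in use):
ALTER SYSTEM SET job_queue_processes = 2 SCOPE = BOTH;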
Had been told that the original stats gather had taken 3 days in total, so via Grid Control (10.2.0.4) I scheduled a dbms_scheduler job to gather stats on that table over a bank holiday weekend - and asked management to let it start 24 hours early to allow for extra time.
The Oracle defaults were accepted (as recommended in various seminars and white papers) - except CASCADE: although I wanted the indexes analyzed, I decided that was icing on the cake I couldn't afford.
Went to work 24 hours later and checked dba_scheduler_jobs - the job was running. Checked dba_tab_statistics and dba_tables - nothing had changed. I had expected to see partition stats appear first for the partitions that had none, but a quick check of Grid Control showed the job doing a SELECT via full table scan - and still on the first datafile!! Some have suggested watching out for the DELETE phase taking a long time, but I only saw evidence of the SELECT, so I ran an AWR report - and sure enough, a full table scan of the whole table. Although the weekend gather-stats job was also running, it wasn't touching my table - but it was definitely running against others.
So I checked last_analyzed on other tables - one of them a partitioned table - and they were getting up-to-date stats. But those tables and partitions are ridiculously small in comparison to the table I was focused on.
Next day I came in and checked the dba_scheduler_job_log - my job had completed successfully within 24 hours.
Horror of horrors - none of the stats had changed one bit in any view I looked at.
I got my Excel spreadsheet out and worked out whether, because less than 10% had changed and I'd accepted the defaults, that was why nothing in dba_tables reflected the analyze I'd requested.
My rough calculations showed the changes were around the 20% mark - so gather_table_stats should have picked that up and gathered stats for the new partitions? Yet there was nothing in evidence in any of the views at all.
I scheduled the job via Grid Control 10.2.0.4 against an Oracle database using incremental stats introduced in 11.1.0.7 - could there be a problem at that level?
I understand there are bugs with incremental statistics gathering in 11.1.0.7 which are resolved in 11.2 - we've applied all the CPUs up to April of last year, so it's possible that, being so far behind, we've missed a fix?
Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
I'd rather find a solution than save my reputation!!
Thanks to anyone who replies - I'm not online at work so can't always give you the exact commands I ran, but hopefully you can give me a few pointers on where to look next?
Thanks!!!!!!!!!!!!!

Save the attitude for your friends and family - it isn't appropriate on the forum.
>
I did exactly what it said on the tin:
>
Maybe 'tin' has some meaning for you but I have never heard of it when discussing
an Oracle issue or problem and I have been doing this for over 25 years.
>
but obviously cannot subscribe to individual names:
>
Same with this. No idea what 'subscribe to individual names' means.
>
When I said defaults - I really did mean the defaults given by Oracle, not some defaults made up by me - I thought that by putting 'Oracle' in my text there, people would realise what the defaults were.
If you are suggesting that in all posts I should spell out the Oracle defaults by name because the gurus on the site do not know them, then please let me know, as I have wrongly assumed that I am asking questions of gurus who know this stuff inside out.
Clearly I have got this site wrong.
>
Yes - you have got this site wrong. Putting 'Oracle' in the text doesn't enable people to realize
what the defaults in your specific environment are.
There is not a guru that I know of,
and that includes Tom Kyte, Jonathan Lewis and many others, that can tell
you, sight unseen, what default values are in play in your specific environment
given only the information you provided in your post.
What is, or isn't, a 'default' can often be changed at either the system or session level.
Can we make an educated guess about what the default value for a parameter might be?
Of course - but that IS NOT how you troubleshoot.
The first rule of troubleshooting is DO NOT MAKE ANY ASSUMPTIONS.
The second rule is to gather all of the facts possible about the reported problem, its symptoms
and its possible causes.
These facts include determining EXACTLY what steps and commands the user performed.
Next, here is the prototype for DBMS_STATS.GATHER_TABLE_STATS:
DBMS_STATS.GATHER_TABLE_STATS (
   ownname          VARCHAR2,
   tabname          VARCHAR2,
   partname         VARCHAR2 DEFAULT NULL,
   estimate_percent NUMBER   DEFAULT to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
   block_sample     BOOLEAN  DEFAULT FALSE,
   method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
   degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
   granularity      VARCHAR2 DEFAULT get_param('GRANULARITY'),
   cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
   stattab          VARCHAR2 DEFAULT NULL,
   statid           VARCHAR2 DEFAULT NULL,
   statown          VARCHAR2 DEFAULT NULL,
   no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type(get_param('NO_INVALIDATE')),
   ...);
So what exactly is the value for GRANULARITY? Do you know?
Well it can make a big difference. If you don't know you need to find out.
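You don't have to guess - ask the database. In 11g the supported interface is DBMS_STATS.GET_PREFS (the schema and table names below are placeholders):
SELECT dbms_stats.get_prefs('GRANULARITY') FROM dual;  -- global default
SELECT dbms_stats.get_prefs('GRANULARITY', 'YOUR_SCHEMA', 'YOUR_TABLE') FROM dual;  -- effective value for your table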
>
As mentioned earlier - I accepted all the "defaults".
>
Saying 'I used the default' only helps WHEN YOU KNOW WHAT THE DEFAULT VALUES ARE!
Now can we get back to the issue?
If you had read the excerpt I provided you should have noticed that the values
used for GRANULARITY and INCREMENTAL have a significant influence on the stats gathered.
And you should have noticed that the excerpt mentions full table scans exactly like yours.
So even though you said this
>
Had expected that as the table was in incremental mode
>
Why did you expect this? You said you used all default values. The excerpt I provided
says the default value for INCREMENTAL is FALSE. That doesn't jibe with your expectation.
So did you check to see what INCREMENTAL was set to? Why not? That is part of troubleshooting.
You form a hypothesis. You gather the facts; one of which is that you are getting a full table
scan. One of which is you used default settings; one of which is FALSE for INCREMENTAL which,
according to the excerpt, causes full table scans which matches what you are getting.
Conclusion? Your expectation is wrong. So now you need to check out why. The first step
is to query to see what value of INCREMENTAL is being used.
You also need to check what value of GRANULARITY is being used.
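Something like this would settle it (placeholders again), and if INCREMENTAL really should be TRUE for that table you set it as a table preference rather than relying on the global default:
SELECT dbms_stats.get_prefs('INCREMENTAL', 'YOUR_SCHEMA', 'YOUR_TABLE') FROM dual;
-- if it comes back FALSE and you want incremental behaviour:
EXEC dbms_stats.set_table_prefs('YOUR_SCHEMA', 'YOUR_TABLE', 'INCREMENTAL', 'TRUE');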
And you say this
>
Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
I'd rather find a solution than save my reputation!!
>
Yet when I provide an excerpt that seems to match your issue you cop an attitude.
I gave you a few pointers of where to look next and you fault us for not knowing the default
values for all parameters for all versions of Oracle for all OSs.
How disingenuous is that?

Similar Messages

  • Exclude TAO tables from stat gathering

    Hi,
    On FSCM 9.1, Tools 8.52, on Windows 2008. DB: Oracle 11g R2.
    Automatic stat gathering collects statistics for the temporary tables PS_XXX_TAO. This distorts the explain plan cardinality, so I delete the statistics:
    dbms_stats.delete_table_stats(ownname=>'SYSADM',tabname=>'PS_XXX_TAO');
    But that is not permanent, and the stats are re-collected during the night.
    - how to run a definite delete for those tables? I mean, delete one time but with a permanent result?
    or
    - how to exclude these two tables from automatic stat gathering?
    Thanks.

    Thank you, I applied:
    exec dbms_stats.lock_table_stats('ADAM','B')
    Is there any PeopleSoft recommendation in the documentation?
    Regards.
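    For the PS_XXX_TAO tables above, the equivalent would be something like this - a one-time delete, then a lock so the nightly job skips them from then on:
    exec dbms_stats.delete_table_stats(ownname=>'SYSADM', tabname=>'PS_XXX_TAO');
    exec dbms_stats.lock_table_stats(ownname=>'SYSADM', tabname=>'PS_XXX_TAO');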

  • Quickbooks 2007 Mac doesn't run properly under OS10.6; is there a simpler solution than partitioning HD and installing OS10.5 in new partition?

    Quickbooks 2007 Mac doesn't run properly under OS 10.6 (Reports don't work properly, and entries often are correct when highlighted but incorrect otherwise). Is there a simpler solution than partitioning your HD and installing OS 10.5 on it? This works for me, but is less convenient than I would like. Quickbooks suggests upgrading to a newer version of QB, but this workaround saves nearly $200.

    An external USB hard drive with at least 40 GB should do the trick.

  • Shadow tables that have been created via the new partitioning schema

    Hi,
    Regarding complete partitioning: "In a complete partitioning, the fact tables of the InfoCube are fully converted using shadow tables that have been created via the new partitioning schema."
    In the above explanation, what is the meaning of the shadow tables which perform the partitioning of an InfoCube?

    Hi
    Shadow tables have the namespace /BIC/4F<Name of InfoCube> or /BIC/4E<Name of InfoCube>.
    Complete Partitioning
    Complete Partitioning fully converts the fact tables of the InfoCube. The system creates shadow tables with the new partitioning schema and copies all of the data from the original tables into the shadow tables. As soon as the data is copied, the system creates indexes and the original table replaces the shadow table. After the system has successfully completed the partitioning request, both fact tables exist in the original state (shadow table), as well as in the modified state with the new partitioning schema (original table). You can manually delete the shadow tables after repartitioning has been successfully completed to free up the memory. Shadow tables have the namespace /BIC/4F<Name of InfoCube> or /BIC/4E<Name of InfoCube>.
    You can only use complete repartitioning for InfoCubes. A heterogeneous state is possible. For example, it is possible to have a partitioned InfoCube with non partitioned aggregates. This does not have an adverse effect on functionality. You can automatically modify all of the active aggregates by reactivating them.
    Hope it helps and is clear.

  • Index problem during the creation of a new partition

    We have a range-partitioned table, with a local spatial index on each partition. While trying to use the ALTER TABLE command to add a new partition, we get the following errors:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13249: internal error in Spatial index: [mdidxrbd]
    ORA-13249: Error in Spatial index: index build failed
    ORA-13249: Error in R-tree: [mdrcritbl]
    ORA-13231: failed to create index table [MDRT_D789CC$] during R-tree
    creation
    ORA-29400: data cartridge error
    ORA-01031: insufficient privileges
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_9I", line 7
    ORA-06512: at line 1
    ORA-06512: at "LPDAACECS_PART.UPDATE_METADATA", line 1937
    ORA-06512: at "LPDAACECS_PART.UPDATE_METADATA", line 3625
    ORA-06512: at line 1

    I just wanted to expand on this for the sake of others who may need a bit more detail, having just resolved my similar problem.
    In Oracle Enterprise Manager, in the left-hand tree view, expand Security then Users inside the relevant Databases entry.
    Then select the name of the user/schema which needs to be able to perform the required task (in my case, create a spatial index from within a stored procedure).
    On the System tab in the right-hand pane, highlight
    Create Any Table
    Create Any Sequence
    Create Any Index
    (depending on the task that needs to be performed)
    Click the arrow to move these items into the "Granted" area. Click apply and your prayers have been answered. Mine were, anyway!
    Regards
    Stuart
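    For anyone working from SQL*Plus rather than Enterprise Manager, the equivalent grants would be something like this (the grantee is assumed from the ORA- error stack above):
    GRANT CREATE ANY TABLE TO lpdaacecs_part;
    GRANT CREATE ANY SEQUENCE TO lpdaacecs_part;
    GRANT CREATE ANY INDEX TO lpdaacecs_part;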

  • Status of VIEW after partitioning base table.

    Hi All,
    I would like to know the status of a view when I perform range partitioning on its base table.
    I will be performing these below steps for partitioning:
    1. Rename the Original base table.
    2. Create a new partitioned table, giving it the original table name.
    3. Insert rows from Original table to partitioned table.
    As per the above steps, I am renaming the original base table and creating a partitioned base table with the same original name.
    Will views based on this table refer to the new partitioned table having the same name, or do I need to re-create the views?
    -Yasser

    The view will refer to the NEW partitioned table of the same name - that's not a problem, as long as you didn't rename, delete or add a column. The view goes invalid after the rename (as the describe below shows), but it works again on next use once a matching table exists:
    SQL> create table a1 (id number);
    Table created.
    SQL> create view v_a1 as select * from a1;
    View created.
    SQL> desc v_a1;
    Name                         Null? Type
    ID                                  NUMBER
    SQL> alter table a1 rename to a2;
    Table altered.
    SQL> desc v_a1;
    ERROR:
    ORA-24372: invalid object for describe
    SQL> create table a1 (id varchar2(40))
    2 /
    Table created.
    SQL> desc v_a1;
    Name                         Null? Type
    ID                                  VARCHAR2(40)

  • Can I install Snow Leopard on a new partition on a Macbook Pro (Late 2011)?

    I need to get Pro Tools 9 up and running again after I migrated from PC to Mac, but I know that Pro Tools 9 doesn't work with Lion. I don't have the money to upgrade to PT10, so my thought was to downgrade to Snow Leopard to get it working. But I don't want to leave Lion, so my question is: can I make a new partition, install Snow Leopard on it, and have both OSs bootable?
    The guy in the store I bought my Mac from said Snow Leopard probably wouldn't play nice with the Mac since it's adapted to Lion, but I don't trust people that get money for preaching about the constant need for "the latest". So I thought I'd ask the experts instead, so here I am! What do you guys think?

    Theoretically, it should work - but the guy at the Apple Store is correct... computers that ship with the latest operating system do not support being downgraded.
    You might not get past the spinning beach ball & gray screen if you try to boot from the Snow Leopard install disc.
    It's worth a shot though if you want to try it. Just don't try to 'downgrade' the current Lion installation back to Snow Leopard. Try instead to create a new partition specifically for Snow Leopard. In Disk Utility, select the top HD (probably reads Hitachi something), then click on the Partition tab. Select the top partition, and you should then be able to see the + so you can add a new partition. I would probably make it about 20GB, give or take, depending on how much space you think you will need - but I believe the Snow Leopard installation by itself takes up around 8-10GB.
    Once this partition is created, insert your Snow Leopard installation disc, restart the computer and hold the C key down to start from the install disc. When it walks you through the steps for installation, select the newly created Snow Leopard partition. Install. Be sure to go through all the Software Updates (numerous times) after the installation is done.
    You can select which startup disc you want to boot from by holding the Option button down at startup until you see the gray startup manager that shows your Lion partition, Recovery Disc partition, and your Snow Leopard partition.
    If for whatever reason this doesn't work, simply just erase the partition. It likely will not work but you should be able to just erase that newly created partition without any other problems.

  • Making new partitions after installing Ubuntu 11.1

    Hi everyone
    First of all, sorry for my bad English; I'll do my best to explain my problem.
    I have a 2010 13" MacBook Pro (7,1), with OS X Snow Leopard running on it.
    Recently, I decided to install Ubuntu (11.1 version) by partitioning the main Macintosh HD using Disk Utility as usual, with 3 new partitions: a swap partition, a "storage" partition and a partition dedicated to the OS.
    After the installation, everything was going well: with rEFIt at startup I can choose whether to boot OS X or Ubuntu, and it works with no problems.
    Yesterday I saw a tutorial on YouTube where a MacBook user explained how to install three operating systems, using rEFIt at startup like me.
    So, today I decided to give it a try, but Disk Utility doesn't allow me to modify the Macintosh HD partitions again. I can set up a new partition, deciding the size and the file system format, but on clicking "apply" to make the changes I get this error:
    "MediaKit reports no such partition".
    Does anyone know a fix for this one?
    Sorry if there was already a topic about a similar problem - I didn't notice it.


  • Auto stats gathering for partitioned table

    Hi,
    We are on 10gR2 on Sun Solaris. We are using auto stats gathering for our DB. Here is my question:
    I know Oracle gathers statistics on a table if the table changes by more than 10%. How does this work for a partitioned table? If the partitioned table changes by more than 10%, will only the latest partition be analyzed, or the full table? We have partitioned based on insertion date.
    Appreciate your responses.
    Regards,
    Satheesh Shanmugam
    http://borndba.com

    I hope it will be only the current partition with the stale statistics that has its statistics gathered, instead of the full table.
    Anil Malkai
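    A quick way to see which partitions the auto job currently considers stale (table name is a placeholder):
    SELECT partition_name, stale_stats, last_analyzed
    FROM dba_tab_statistics
    WHERE owner = 'MY_SCHEMA' AND table_name = 'MY_PART_TABLE'
    ORDER BY partition_position;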

  • Partition Maintenance and Stats gathering running together

    Hi All,
    I have a scenario in my production database wherein Partition Maintenance and Stats Gathering jobs begin at the same time (both are run weekly on Sundays only).
    Partition Maintenance is a job which basically rebuilds indexes and creates new partitions on tables in the database. While Stats Gathering job gathers stats on database tables.
    My question is based on this scenario: the maintenance job is rebuilding indexes on a table, and at the same time the stats gathering job is trying to gather stats on that same table. Will any issue be caused by this scenario?
    I would like to know whether there is any problem with them running at the same time.
    Database version: Oracle 10 R2
    Environment: Unix AIX version 5.3
    Thanks in advance.

    Sandyboy036 wrote:
    Thanks for the reply.
    Could you elaborate on what effect it could have on the table, or what issues might arise, if I were to run both at the same time please?
    Thanks.
    I would be concerned that statistics would not reflect reality.
    A partition could be created and populated & the statistics would not reflect this recent activity.
    Why are you regularly rebuilding indexes?

  • Gathering statistics on partitioned and non-partitioned tables

    Hi all,
    My DB is 11.1
    I find that gathering statistics on partitioned tables are really slow.
    TABLE_NAME    NUM_ROWS   BLOCKS  SAMPLE_SIZE  LAST_ANALYZED  PARTITIONED  COMPRESSION
    O_FCT_BP1     112123170  843140  11212317     8/30/2011 3:5  NO           DISABLED
    LEON_123456   112096060  521984  11209606     8/30/2011 4:2  NO           ENABLED
    O_FCT         115170000  486556  115170       8/29/2011 6:3  YES
    SQL> SELECT COUNT(*) FROM user_tab_subpartitions
      2  WHERE table_name = 'O_FCT';
      COUNT(*)
    ----------
           112
    I used the following script:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent => 10,
                                    granularity      => 'ALL',
                                    cascade          => false);
    END;
    /
    It takes about 2 minutes each to gather statistics for the first two tables, but more than 10 minutes for the partitioned table.
    The time of collecting statistics accounts for a large part of total batch time.
    And most jobs of the batch are full loads, in which case all partitions and subpartitions are affected and we can't just gather specified partitions.
    Does anyone have some experiences on this subject? Thank you very much.
    Best regards,
    Leon
    Edited by: user12064076 on Aug 30, 2011 1:45 AM

    Hi Leon
    Why don't you gather stats at partition level? If your partition data is not going to change after a day (date-range partitioning, for example), you can simply gather at partition level:
    GRANULARITY=>'PARTITION' for partition level and
    GRANULARITY=>'SUBPARTITION' for subpartition level
    You are gathering global stats every time, which you may not require (a partition-level example is sketched below).
    Edited by: user12035575 on 30-Aug-2011 01:50
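    For example, a partition-level gather for the table above might look like this (the partition name is hypothetical):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    partname         => 'P_20110830',  -- hypothetical partition name
                                    granularity      => 'PARTITION',
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent => 10,
                                    cascade          => false);
    END;
    /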

  • Migrating a new partition table with transportable tablespace

    I created a partitioned table with 2 partitions (2010 and 2011) and used transportable tablespace to migrate the data over to a new environment. My question is, if I decide to add a partition (2012) in the future, can I simply move that new partition along with the associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?

    user564785 wrote:
    I created a partitioned table with 2 partitions (2010 and 2011) and used transportable tablespace to migrate the data over to a new environment. My question is, if I decide to add a partition (2012) in the future, can I simply move that new partition along with the associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?
    Yes, why not:
    1) Create a table as CTAS from the 2012 partition, in a new tablespace, on the source
    2) Transport the tablespace
    3) Add the partition to the existing partitioned table, or exchange partition (see the sketch below)
    Oracle has also documented this procedure:
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#i1007549
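    A sketch of those steps with hypothetical names (a SALES table range-partitioned by date, and a tablespace TS_2012):
    -- on the source: stage the 2012 data in its own tablespace
    CREATE TABLE sales_2012 TABLESPACE ts_2012 AS
    SELECT * FROM sales PARTITION (p2012);
    -- transport ts_2012 with Data Pump, then on the target:
    ALTER TABLE sales ADD PARTITION p2012 VALUES LESS THAN (TO_DATE('01-JAN-2013','DD-MON-YYYY'));
    ALTER TABLE sales EXCHANGE PARTITION p2012 WITH TABLE sales_2012;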

  • Experiences of Partitioning FACT tables

    Running BPC 7.0 SP3 for MS
    We have two very large FACT tables (195 million records and 105 million records), currently growing at a rate of 2m-5m records per month - we are running an incremental optimize twice per day.
    It has been suggested that we consider partitioning the tables to improve performance, but I have not been able to find any users/customers with experience of doing this.
    Specifically
    1. Does it improve performance?
    2. What additional complexity does it add to regular maintenance?
    3. Have there been any problems encountered implementing partitioned tables?
    4. It would seem that partitioning based on time would make sense - historic data in one partition, current data in another. HOWEVER, many of our reports pull current year and prior year, so will this cause a reporting issue or degrade report performance?

    I don't know if this is still an issue for you. You ask about FACT table partitioning specifically, but you need to be aware that it is possible to partition either the FACT tables or the fact table partition of the cube, or both. We have used (further) partitioning of the fact table partition in the cube with success, and it sounds as if this is what you are really asking about.
    The impacts are on
    1. Processing time: a full optimize without Compress only processes the partitions that have changed, thereby reducing the run time where there is a lot of unchanged data. You mention that you run incremental updates twice daily; this is currently reprocessing the whole database. I would have expected the lite optimize to be more effective, supported by an overnight full optimize if you have an overnight window. You can also run the lite optimize more frequently.
    2. Query time: the filters defined in the partitions provide a more efficient path to data in the reporting processes than the defaults, which have the potential to scan large parts of the database.
    Partitioning is not a panacea. You need to be specific about the areas of performance problem that you have and choose the performance improvement strategy to address these.  Looking at the indexing of the database is also an area where you can improve performance significantly.
    If you partition the cube, it is transparent to the usage of the application, from both user and admin perspective. The greatest complexity comes in the definition of the partitions in the first place, but this is a normal DBA function. The trick is to ensure that the filter statements do not overlap, otherwise you might get a value duplicated in 2 partitions, and to define a catch-all partition to include anything not included in the specific partitions. You should expect to revisit the partitioning from time to time. It is quite straightforward to repartition; you are not doing anything to the underlying data in the FACT tables.
    Time is a common dimension to partition on, and you may partition at different levels of granularity for different periods, e.g. current year by quarter or month, prior and future years by year. This reflects where the most frequent updates will be. It is also possible to define partitions based on combinations of dimensions; we use category and time, so that current year actuals has the most granular partitions and all historic years' budgets go into a single partition.

  • Partitioning large table

    I'm in need of some expert advice on how best to partition the following table in Oracle 9i. It's expected to hold billions of records. This table is expected to receive 10,000,000 new records every hour or so.
    I've tried a composite range-hash partitioning scheme; however, it does not appear to be improving the speed of INSERTs. I've created the tablespaces named "part0" through "part9", which all happen to be on the same disk. I realize that optimally these would be on separate disks; however, I expected to see a larger improvement over the same table without partitions, which doesn't appear to be happening. Here's its current definition:
    CREATE TABLE sp_device_usage
    (id NUMBER(15) NOT NULL,
    device_id NUMBER(9) NOT NULL,
    datastream_id NUMBER(9) NOT NULL,
    timestamp NUMBER(10) NOT NULL,
    usage_counter NUMBER(15) NOT NULL,
    CONSTRAINT sp_device_usage_pk PRIMARY KEY (id) USING INDEX TABLESPACE cni_index,
    CONSTRAINT sp_device_usage_fk1 FOREIGN KEY (device_id) REFERENCES encore_device,
    CONSTRAINT sp_device_usage_fk2 FOREIGN KEY (datastream_id) REFERENCES sp_datastream)
    PARTITION BY RANGE (timestamp)
    SUBPARTITION BY HASH (device_id) SUBPARTITIONS 10 (
    PARTITION sp_device_usage_ptn1 VALUES LESS THAN (100000) TABLESPACE part0,
    PARTITION sp_device_usage_ptn2 VALUES LESS THAN (200000) TABLESPACE part1,
    PARTITION sp_device_usage_ptn3 VALUES LESS THAN (300000) TABLESPACE part2,
    PARTITION sp_device_usage_ptn4 VALUES LESS THAN (400000) TABLESPACE part3,
    PARTITION sp_device_usage_ptn5 VALUES LESS THAN (500000) TABLESPACE part4,
    PARTITION sp_device_usage_ptn6 VALUES LESS THAN (600000) TABLESPACE part5,
    PARTITION sp_device_usage_ptn7 VALUES LESS THAN (700000) TABLESPACE part6,
    PARTITION sp_device_usage_ptn8 VALUES LESS THAN (800000) TABLESPACE part7,
    PARTITION sp_device_usage_ptn9 VALUES LESS THAN (900000) TABLESPACE part8,
    PARTITION sp_device_usage_ptn10 VALUES LESS THAN (maxvalue) TABLESPACE part9);
    CREATE INDEX sp_device_usage_idx1 ON
    sp_device_usage(device_id) TABLESPACE cni_index;
    CREATE INDEX sp_device_usage_idx2 ON
    sp_device_usage(datastream_id) TABLESPACE cni_index;
    CREATE SEQUENCE sp_device_usage_seq START WITH 1000 INCREMENT BY 1 MINVALUE 1 NOCACHE NOCYCLE NOORDER;

    INSERT operations into a partitioned table won't necessarily be any faster than INSERT operations into a non-partitioned table. For partitioned tables, Oracle has to determine which partition to insert each record into, which is an extra step in the partitioned universe. Since you will, for the most part, be inserting into the same partition, you're basically doing the normal table insert with additional partition-key-checking overhead. Partitioning is designed to speed up the retrieval of data and, to a lesser extent, update operations, as well as to improve manageability.
    Justin Cave <[email protected]>
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Stats gathering

    Hi All,
    oracle 11gr2
    Linux
    I am new to databases. How do I gather statistics in Oracle 11g - database-, schema- and table-wise?
    What is the impact of gathering stats on a daily basis? How do I automate stats gathering database-wide?
    Can anyone please suggest and explain with an example if possible?
    thanks,
    Sam.

    "AUTO_SAMPLE_SIZE lets Oracle Database determine the best sample size necessary for good statistics, based on the statistical property of the object. Because each type of statistics has different requirements, the size of the actual sample taken may not be the same across the table, columns, or indexes." - from the link posted before by Aman.
    If you omit this clause, a full scan will be done (again, from the document attached by Aman):
    "Gathering statistics without sampling requires full table scans and sorts of entire tables. Sampling minimizes the resources necessary to gather statistics."
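    For example, a schema-level gather that lets Oracle choose the sample size might look like this (the schema name is a placeholder):
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'SAM_SCHEMA',                 -- placeholder schema
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle pick the sample
        cascade          => DBMS_STATS.AUTO_CASCADE);     -- index stats where Oracle deems it needed
    END;
    /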
