Partitioning a big table

Hi,
I have a table with 95 million records, and the unique column of this table has a function-based index MOD(unique_column, 24) + 1 that a PL/SQL script uses.
Can I benefit from hash partitioning this table on the unique column, given that function-based index?
My objective is to speed up the PL/SQL processing, and I think hash partitioning would make retrieving and updating this table faster.
Many thanks!

This index has only 24 distinct values, so it will not be very helpful in your queries unless it contains other columns. If you are on 11g I suggest using list partitioning on a virtual column and throwing this index away.
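For illustration, a minimal sketch of that idea (the table and column names are invented, and only a few of the 24 list partitions are written out):
CREATE TABLE big_table (
   unique_col NUMBER NOT NULL,
   payload    VARCHAR2(100),
   bucket     NUMBER GENERATED ALWAYS AS (MOD(unique_col, 24) + 1) VIRTUAL
)
PARTITION BY LIST (bucket)
( PARTITION p01 VALUES (1)
, PARTITION p02 VALUES (2)
  -- ... one partition per bucket value ...
, PARTITION p24 VALUES (24)
);
The PL/SQL script can then filter on the virtual column (WHERE bucket = :n) and get partition pruning instead of relying on the function-based index.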

Similar Messages

  • Best approach to do Range partitioning on Huge tables.

    Hi All,
    I am working on an 11gR2 Oracle 3-node RAC database. Below are the db details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    In my environment we have 10 big transaction tables (10 billion rows each) and they keep growing. Management is now planning to range partition them on the CREATED_DT partition key column.
    We tested this partitioning strategy with a few million records in another environment, with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE('2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART1,
      PARTITION DATA2 VALUES LESS THAN (TO_DATE('2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART2,
      PARTITION DATA3 VALUES LESS THAN (TO_DATE('2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART3 )
    AS (SELECT * FROM TRANSACTION WHERE 1=2);
    2. Exchange partition to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take much longer. But the problem is we get only 2 to 3 hrs of downtime in production to implement this change for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    Thanks,
    Hari

    Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
    ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
    ISSUE #2 - You likely cannot move that much data in the 2 to 3 hour window that you have available for down time, even if all you had to do was copy the existing datafiles.
    ISSUE #3 - Even if you can avoid issue #2 you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
    ISSUE #4 - Unless you have conducted full volume performance testing in another environment prior to doing this in production you are taking on a tremendous amount of risk.
    ISSUE #5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system you will have great difficulty overcoming issue #4, since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you equivalent, or better, performance.
    ISSUE #6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
    1. use DBMS_REDEFINITION to do the partitioning online (a minimal sketch is included at the end of this reply). See the Oracle docs and this example from oracle-base for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
    You should review all of the tables to see if you can remove older data from the current system. If you can you could use online redefinition that ignores older data. Then afterwards you can extract this old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
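    To illustrate option 1 (DBMS_REDEFINITION), here is a minimal sketch; the schema name SCOTT and the interim table name are placeholders, and the partition bounds are copied from the test script above:
    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TRANSACTION', DBMS_REDEFINITION.CONS_USE_PK);
    -- interim table with the target partitioning, created empty
    CREATE TABLE transaction_interim
    PARTITION BY RANGE (created_dt)
    ( PARTITION data1 VALUES LESS THAN (TO_DATE('2012-08-01', 'YYYY-MM-DD')),
      PARTITION data2 VALUES LESS THAN (TO_DATE('2012-09-01', 'YYYY-MM-DD')),
      PARTITION data3 VALUES LESS THAN (TO_DATE('2012-10-01', 'YYYY-MM-DD')) )
    AS SELECT * FROM transaction WHERE 1 = 2;
    -- copy the data online while DML continues against the original table
    EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TRANSACTION', 'TRANSACTION_INTERIM');
    -- recreate indexes, constraints, triggers and grants on the interim table
    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname        => 'SCOTT',
        orig_table   => 'TRANSACTION',
        int_table    => 'TRANSACTION_INTERIM',
        copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
        num_errors   => l_errors);
    END;
    /
    EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'TRANSACTION', 'TRANSACTION_INTERIM');
    -- the final swap needs only a brief lock, not an 8-hour outage
    EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TRANSACTION', 'TRANSACTION_INTERIM');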

  • Managing a big table

    Hi All,
    I have a big table in my database. When I say big, I mean both the data stored in it (around 70 million records) and the number of columns (425).
    I do not have any problems with it now, but going forward I assume it will become a bottleneck or very difficult to manage.
    I have a star schema for the application of which this is a master table.
    Apart from partitioning the table, is there any other way of better handling such a table?
    Regards

    Hi,
    Usually fact tables tend to have fewer columns and more records, while dimension tables are the opposite: many columns, which is where the power of a dimension lies, and relatively few records (with some exceptions running into millions). So the high number of columns makes me think that the fact table may be, only may be, improperly designed; I don't have enough information. If that is the case then you may want to revisit that design, and most likely you will find some 'facts' in your fact table that can become attributes of one of the dimension tables it links to.
    Can you say why you are adding new columns to the fact table? A fact table is created for a specific business process, and if done properly there shouldn't be a requirement to keep adding columns. A fact table is usually limited in the number of metrics you can take from it; in fact, the opposite, a factless fact table, is more common.
    In any case, from the point of view of handling this large table with so many columns, I would say that you have to focus on stopping the growth in the number of columns. There is nothing in the database itself, such as partitioning, that will do this for you. So one option is to 'vertically partition' the table: split it into at least two new tables, keeping in the first table the columns that are most frequently used or most critical to you. Then you link these two tables together and to the rest of the dimensions. But, again, if you keep adding new columns it is just a matter of time before you run into the same situation again.
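    For illustration only, a rough sketch of such a vertical split (all table and column names here are invented):
    -- the 'hot' table keeps the key plus the frequently used / critical columns
    CREATE TABLE fact_master_hot AS
      SELECT master_key, col_001, col_002 FROM fact_master;
    ALTER TABLE fact_master_hot ADD CONSTRAINT fact_master_hot_pk PRIMARY KEY (master_key);
    -- the 'cold' table keeps the key plus the rarely used columns, linked 1:1 to the hot table
    CREATE TABLE fact_master_cold AS
      SELECT master_key, col_300, col_301 FROM fact_master;
    ALTER TABLE fact_master_cold ADD CONSTRAINT fact_master_cold_fk
      FOREIGN KEY (master_key) REFERENCES fact_master_hot (master_key);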
    I am sorry but cannot offer better advice than to revisit the design of your fact table. For doing that you may want to have a look at http://www.kimballgroup.com/html/designtips.html
    LW

  • How to use partitioning for big table

    Hi,
    Oracle 10gR2/Redhat4
    RAC database
    ASM
    I have a big table TRACES that will also grow very fast; it currently has 15,000,000 rows.
    TRACES (ID NUMBER,
    COUNTRY_NUM NUMBER,
    Timestampe NUMBER,
    MESSAGE VARCHAR2(300),
    type_of_action VARCHAR(20),
    CREATED_TIME DATE,
    UPDATE_DATE DATE)
    The queries that hit this table are below, and they generate a lot of disk I/O:
    select count(*) as y0_
    from TRACES this_
    where this_.COUNTRY_NUM = :1
    and this_.TIMESTAMP between :2 and :3
    and lower(this_.MESSAGE) like :4;
    SELECT *
    FROM (SELECT this_.id ,
    this_.TIMESTAMP
    FROM traces this_
    WHERE this_.COUNTRY_NUM = :1
    AND this_.TIMESTAMP BETWEEN :2 AND :3
    AND this_.type_of_action = :4
    AND LOWER (this_.MESSAGE) LIKE :5
    ORDER BY this_.TIMESTAMP DESC)
    WHERE ROWNUM <= :6;
    I have 16 distinct COUNTRY_NUM values in the table, and TIMESTAMPE is a number that the application inserts into the table.
    My question: is partitioning this table into smaller parts the best way to tune it?
    I am thinking of list partitioning by COUNTRY_NUM combined with date (year/month); is that the best way to do it?
    NB: for an example of TRACES in my test database
    1 select COUNTRY_NUM,count(*) from traces
    2 group by COUNTRY_NUM
    3* order by COUNTRY_NUM
    SQL> /
    COUNTRY_NUM COUNT(*)
    -1 194716
    3 1796581
    4 1429393
    5 1536092
    6 151820
    7 148431
    8 76452
    9 91456
    10 91044
    11 186370
    13 76
    15 29317
    16 33470

    Hello,
    You can automate adding the monthly partitions using DBMS_SCHEDULER (a sketch of such a job follows the example below). Here is an example of your table partitioned by month:
    CREATE TABLE traces (
       id NUMBER,
       country_num NUMBER,
       timestampe NUMBER,
       message VARCHAR2(300),
       type_of_action VARCHAR2(20),
       created_time DATE,
       update_date DATE
    )
    TABLESPACE test_data   -- your tablespace name
    PARTITION BY RANGE (created_time)
       (PARTITION traces_200901
           VALUES LESS THAN
              (TO_DATE('2009-02-01 00:00:00',
                       'SYYYY-MM-DD HH24:MI:SS',
                       'NLS_CALENDAR=GREGORIAN'))
           TABLESPACE test_data,  -- each partition can go in a different tablespace, i.e. different data files on different disks (reducing I/O contention)
        PARTITION traces_200902
           VALUES LESS THAN
              (TO_DATE('2009-03-01 00:00:00',
                       'SYYYY-MM-DD HH24:MI:SS',
                       'NLS_CALENDAR=GREGORIAN'))
           TABLESPACE test_data);
    Regards
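    A minimal sketch of the DBMS_SCHEDULER automation mentioned above; the job name, the run day (the 25th) and the traces_YYYYMM partition naming are assumptions, not tested code:
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'ADD_MONTHLY_TRACES_PARTITION',
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[
          DECLARE
            v_high DATE := ADD_MONTHS(TRUNC(SYSDATE, 'MM'), 2);  -- upper bound of next month's partition
          BEGIN
            EXECUTE IMMEDIATE
              'ALTER TABLE traces ADD PARTITION traces_'
              || TO_CHAR(ADD_MONTHS(v_high, -1), 'YYYYMM')
              || ' VALUES LESS THAN (TO_DATE('''
              || TO_CHAR(v_high, 'YYYY-MM-DD')
              || ''', ''YYYY-MM-DD'')) TABLESPACE test_data';
          END;]',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=MONTHLY; BYMONTHDAY=25',  -- run before the new month starts
        enabled         => TRUE);
    END;
    /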

  • Performance question - Caching data of a big table

    Hi All,
    I have a general question about caching; I am using an Oracle 11g R2 database.
    I have a big table of about 50 million rows that is accessed very often by my application. Some queries run slow and some are OK. But (obviously) when the data of this table is already in the cache (so basically when a user requests the same thing twice or more) it runs very quickly.
    Does somebody have any recommendations about caching the data / a table of this size?
    Many thanks.

    Chiwatel wrote:
    With better formatting (I hope), sorry I am not used to the new forum !
    Plan hash value: 2501344126
    | Id  | Operation                              | Name           | Starts | E-Rows |E-Bytes| Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |   0 | SELECT STATEMENT                       |                |      1 |        |       |  7232 (100)|       |       |  68539 |00:14:20.06 |    212K |  87545 |       |       |          |
    |   1 |  SORT ORDER BY                         |                |      1 |   7107 |   624K|  7232   (1)|       |       |  68539 |00:14:20.06 |    212K |  87545 |  3242K|   792K| 2881K (0)|
    |   2 |   NESTED LOOPS                         |                |      1 |        |       |            |       |       |  68539 |00:14:19.26 |    212K |  87545 |       |       |          |
    |   3 |    NESTED LOOPS                        |                |      1 |   7107 |   624K|  7230   (1)|       |       |  70492 |00:07:09.08 |    141K |  43779 |       |       |          |
    |*  4 |     INDEX RANGE SCAN                   | CM_MAINT_PK_ID |      1 |   7107 |   284K|    59   (0)|       |       |  70492 |00:00:04.90 |     496 |    453 |       |       |          |
    |   5 |     PARTITION RANGE ITERATOR           |                |  70492 |      1 |       |     1   (0)|   KEY |   KEY |  70492 |00:07:03.32 |    141K |  43326 |       |       |          |
    |*  6 |      INDEX UNIQUE SCAN                 | D1T400P0       |  70492 |      1 |       |     1   (0)|   KEY |   KEY |  70492 |00:07:01.71 |    141K |  43326 |       |       |          |
    |*  7 |    TABLE ACCESS BY GLOBAL INDEX ROWID  | D1_DVC_EVT     |  70492 |      1 |     49|     2   (0)| ROWID | ROWID |  68539 |00:07:09.17 |   70656 |  43766 |       |       |          |
    Predicate Information (identified by operation id):
       4 - access("ERO"."MAINT_OBJ_CD"='D1-DEVICE' AND "ERO"."PK_VALUE1"='461089508922')
       6 - access("ERO"."DVC_EVT_ID"="E"."DVC_EVT_ID")
       7 - filter(("E"."DVC_EVT_TYPE_CD"='END-GSMLOWLEVEL-EXCP-SEV-1' OR "E"."DVC_EVT_TYPE_CD"='STR-GSMLOWLEVEL-EXCP-SEV-1'))
    Your user has executed a query to return 68,000 rows - what type of user is it? A human being cannot possibly cope with that much data, and it's not entirely surprising that it might take quite some time to return it.
    One thing I'd check is whether you're always getting the same execution plan - Oracle's estimates here are out by a factor of about 9.5 (7,100 rows predicted vs. 68,500 returned); perhaps some of your variation in timing relates to plan changes.
    If you check the figures you'll see about half your time came from probing the unique index, and half came from visiting the table. In general it's hard to beat Oracle's caching algorithms, but indexes are often much smaller than the tables they cover, so it's possible that your best strategy is to protect this index at the cost of the table. Rather than trying to create a KEEP cache for the index, though, you MIGHT find that you get some benefit from creating a RECYCLE cache for the table, using a small percentage of the available memory - the target is to fix things so that table blocks you won't revisit don't push index blocks you will revisit from memory.
    Another detail to consider is that if you are visiting the index and table completely randomly (for 68,500 locations) it's possible that you end up re-reading blocks several times in the course of the visit. If you order the intermediate result set from the driving table first you may find that you're walking the index and table in order and don't have to re-read any blocks. This is something only you can know, though. The code would have to change to include an inline view with a no_merge and no_eliminate_oby hint; a rough sketch follows below.
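    For illustration only, a rough sketch of that rewrite; the driving table name ero_events is a placeholder (the table behind index CM_MAINT_PK_ID is not visible in the plan), and only the columns visible in the predicates are used:
    SELECT /*+ leading(v) */
           e.*
    FROM  (SELECT /*+ no_merge no_eliminate_oby */
                  ero.dvc_evt_id
           FROM   ero_events ero
           WHERE  ero.maint_obj_cd = 'D1-DEVICE'
           AND    ero.pk_value1    = '461089508922'
           ORDER  BY ero.dvc_evt_id) v,
          d1_dvc_evt e
    WHERE  e.dvc_evt_id      = v.dvc_evt_id
    AND    e.dvc_evt_type_cd IN ('END-GSMLOWLEVEL-EXCP-SEV-1', 'STR-GSMLOWLEVEL-EXCP-SEV-1');
    Ordering the inline view by dvc_evt_id means the nested-loop probes into D1T400P0 and D1_DVC_EVT arrive in index order, so blocks already read tend to still be in the cache.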
    Regards
    Jonathan Lewis

  • Very Big Table (36 Indexes, 1000000 Records)

    Hi
    I have a very big table (76 columns, 1,000,000 records). These 76 columns include 36 foreign key columns; each FK has an index on the table, and only one of these FK columns has a value at any given time while all the others are NULL. All these FK columns are of type NUMBER(20,0).
    I am facing a performance problem which I want to resolve, taking into consideration that this table is used with DML (Insert, Update, Delete) along with query (Select) operations; all these operations and queries run daily. I want to improve this table's performance, and I am considering these scenarios:
    1- Replace all these 36 FK columns with 2 columns (ID, TABLE_NAME) (ID for master table ID value, and TABLE_NAME for master table name) and create only one index on these 2 columns.
    2- partition the table using its YEAR column, keep all FK columns but drop all indexes on these columns.
    3- partition the table using its YEAR column, and drop all FK columns, create (ID,TABLE_NAME) columns, and create index on (TABLE_NAME,YEAR) columns.
    Which way has more efficiency?
    Do I have to take "master-detail" relations in mind when building Forms on this table?
    Are there any other suggestions?
    I am using Oracle 8.1.7 database.
    Please Help.

    Hi everybody
    I would like to thank you for your cooperation, and I will try to answer your questions, but please note that I am a developer in the first place and I am new to Oracle database administration, so please forgive me if I make any mistakes.
    Q: Have you gathered statistics on the tables in your database?
    A: No I did not. And if I must do it, must I do it for all database tables or only for this big table?
    Q:Actually tracing the session with 10046 level 8 will give some clear idea on where your query is waiting.
    A: Actually I do not know what you mean by "10046 level 8".
    Q: what OS and what kind of server (hardware) are you using
    A: I am using Windows2000 Server operating system, my server has 2 Intel XEON 500MHz + 2.5GB RAM + 4 * 36GB Hard Disks(on RAID 5 controller).
    Q: how many concurrent user do you have an how many transactions per hour
    A: I have 40 concurrent users, and an average of 100 transactions per hour, but the peak can go up to 1,000 transactions per hour.
    Q: How fast should your queries be executed
    A: I want the queries to execute in about 10 to 15 seconds, or else everybody here will complain. Please note that because this table is heavily used, there is a very good chance that 2 or more transactions exist at the same time, one of them performing a query and the other performing a DML operation. Some of these queries are used in reports, and they can be long queries (e.g. retrieving a summary of 50,000 records).
    Q: please show us the explain plan of these queries
    A: If I understand your question, you are asking me to show you the explain plan of those queries. Well, first, I do not know how, and second, I think it is a big question because I cannot collect every kind of query that has been written against this table (some of them exist in server packages, and others are issued by Forms or Reports).
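    As a starting point for the statistics and explain plan questions, a minimal sketch (the schema, table and query names are hypothetical; both DBMS_STATS and EXPLAIN PLAN are available in 8.1.7):
    -- gather statistics on just the one big table and its indexes
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'APP_OWNER',
        tabname          => 'BIG_TABLE',
        estimate_percent => 10,
        cascade          => TRUE);   -- also gathers stats on the 36 FK indexes
    END;
    /
    -- explain plan for one of the slow queries, formatted with the script shipped with the database
    EXPLAIN PLAN FOR
      SELECT COUNT(*) FROM big_table WHERE year_col = 2003;
    @?/rdbms/admin/utlxpls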

  • Gather table stats takes time for big table

    The table has millions of records and is partitioned. When I analyze the table using the following syntax it takes more than 5 hrs, and it has only one index.
    I tried with auto sample size and also by changing the estimate percentage value to 20, 50, 70, etc., but the time is almost the same.
    exec dbms_stats.gather_table_stats(ownname=>'SYSADM',tabname=>'TEST',granularity =>'ALL',ESTIMATE_PERCENT=>100,cascade=>TRUE);
    What should I do to reduce the analyze time for big tables? Can anyone help me?

    Hello,
    The behaviour of ESTIMATE_PERCENT may change from one release to another.
    In some releases, when you specify a 'too high' (>25%, ...) ESTIMATE_PERCENT you in fact collect the statistics over 100% of the rows, as in COMPUTE mode:
    Using DBMS_STATS.GATHER_TABLE_STATS With ESTIMATE_PERCENT Parameter Samples All Rows [ID 223065.1]
    For later releases, *10g* or *11g*, you have the possibility to use the following value:
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
    In fact, you may use it even in *9.2*, but in that release it is recommended to use a specific estimate value.
    Moreover, starting with *10.1* it is possible to schedule the statistics collection using DBMS_SCHEDULER and specify a window so that the job doesn't run during production hours.
    So, the answer may depend on the Oracle release and also on the application (SAP, Peoplesoft, ...).
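    For example, a minimal sketch reusing the SYSADM.TEST call from the question, just swapping the fixed percentage for AUTO_SAMPLE_SIZE (10g/11g):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SYSADM',
        tabname          => 'TEST',
        granularity      => 'ALL',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);
    END;
    /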
    Best regards,
    Jean-Valentin

  • How to UPDATE a big table in Oracle via Bulk Load

    Hi all,
    in a datastore target on Oracle 11g, I have a big table with 300 million records; the structure is one integer key + 10 attribute columns.
    In the IQ source I have the same table with the same size; the structure is one integer key + 1 attribute column.
    What I need to do is UPDATE that single field in Oracle from the values stored in IQ.
    Any idea on how to organize the dataflow efficiently, and which target writing mode to use? Bulk load? API?
    thank you
    Maurizio

    Hi,
    You cannot do a bulk load when you need to UPDATE a field, because all a bulk load does is add records to your table.
    Since you have to UPDATE a field, I would suggest going for SCD with
    source > TC > MO > KG > target
    Arun

  • Partitioning a fact table

    I am curious to hear techniques for partitioning a fact table with OWB. I know HOW to set up the partitioning for the table; what I am curious about is what type of partitioning everyone suggests. Take the following example: let's say we have a sales transaction fact table with dimensions of Date, Product, and Store. An immediate idea is to partition the table by month. But my question is about the method used to partition the fact table. There is no longer a true date field in the fact table to do range partitioning on, and hash partitioning will not distribute the records by month.
    One example I found was to "code" the surrogate key in the date dimension so that it was created in the following manner "YYYYMMDD". Then you could use the range partitioning based on values of the key in the fact table less than 20040200 for Jan. 2004, less than 20040300 for Feb. 2004, and so on.
    Is this a good idea?

    Jason,
    In general, obviously, query performance and scalability benefit from partitioning. Rather than hitting the entire table when retrieving data, you would only hit a part of the table. There are two main ways to decide what partitioning strategy to choose:
    1) Users always query specific parts of the data (e.g. data from a particular month), in which case it makes sense for that part to be the size of a partition. If your end users often query by month or compare data on a month-by-month basis, then partitioning by month may well be the right strategy.
    2) Improve data loading speed by creating partitions. The database supports partition exchange loading, supported by Warehouse Builder as well, which enables you to swap a temporary table and a partition in one operation. In general, your load frequency then decides your partitioning strategy: if you load on a daily basis, perhaps you want daily partitions. Beware that for Warehouse Builder to use the partition exchange loading feature you will have to have a date field in the fact table, so you would change the time dimension.
    In general, your suggestion for the generated surrogate key would work.
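    For illustration, a minimal sketch of that approach (table and column names are invented; the bounds follow the YYYYMMDD coding described above):
    CREATE TABLE sales_fact (
       date_key    NUMBER(8) NOT NULL,   -- surrogate date key coded as YYYYMMDD
       product_key NUMBER    NOT NULL,
       store_key   NUMBER    NOT NULL,
       sales_amt   NUMBER
    )
    PARTITION BY RANGE (date_key)
    ( PARTITION sales_200401 VALUES LESS THAN (20040200)
    , PARTITION sales_200402 VALUES LESS THAN (20040300)
    , PARTITION sales_200403 VALUES LESS THAN (20040400)
    );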
    Thanks,
    Mark.

  • Maximum No. of Partitions in a Table?

    Hi,
    What is the maximum no. of partitions in a table?
    Best Regards,
    Naresh Kumar C.

    It all comes down to what 1K means.
    First option (more popular): 1K = 1024, so 1024K = 1024*1024, giving a maximum of 1024*1024 - 1 = 1,048,575 partitions.
    Second option (less popular): 1K = 1000, so 1024K = 1024*1000.
    http://en.wikipedia.org/wiki/Byte
    Do you really need 1024K - 1 partitions (with either option it is already a huge number of partitions)? Good luck with the maintenance tasks...
    Nicolas.

  • Diagnostic pack, Tuning pack are not in OEM 10g, Add partition to a table

    Hi All,
    I have 2 questions:
    Q.1: In the Oracle 9i Oracle Enterprise Manager Java console, we had the "Diagnostic Pack" and "Tuning Pack", which helped us see performance tuning info (session statistics such as CPU time, PGA memory, physical disk reads, etc.) and provided help for SQL tuning and performance improvements. But in 10g, the same product (Oracle Enterprise Manager Java console) does not include these 2 packs, so we are unable to monitor and control performance tuning information/statistics. My question is: do we need to install these 2 packs separately in 10g? If yes, where can we get them? (I am sure that in 9i these packs came with the OEM console and we didn't need to get them separately.)
    Q.2: I have a partitioned table with 5 partitions based on range partitioning. Our requirements have changed and we need to insert values beyond the 5th partition, so we need a 6th partition. Can someone tell me the command to add a new partition to the table? I tried "Alter table xxx add partition yyy ....", but it didn't work. If anyone can give me the correct syntax, it would be great.
    Thanks in advance.

    The OP is talking about the Java-based EM, not the web-based DBConsole. In fact he/she has to change to DBConsole, because the 10g Java EM no longer supports these tuning features.
    Alter table ... partition syntax depends on the kind of partitioning, see the documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#i2131048
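    For a range-partitioned table, a minimal sketch of the syntax (the partition name and bound are made up). Note this only works when the new bound is higher than the table's current highest bound; if the table already has a MAXVALUE partition you have to use ALTER TABLE ... SPLIT PARTITION instead:
    ALTER TABLE xxx
      ADD PARTITION yyy VALUES LESS THAN (TO_DATE('2011-01-01', 'YYYY-MM-DD'))
      TABLESPACE users;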
    Werner

  • Problem in truncate/drop partitions in a table having nested table columns.

    Hi,
    I have a table that has 2 columns of type nested table. Now in the purge process, when I try to truncate or drop a partition from this table, I get an error saying I can't do this (because the table has nested tables). Can anybody tell me how I can truncate/drop a partition from this table? If I change the column types from nested table to varray, will it help?
    Also, is there any short method of moving existing data from a nested table column to a varray column (having the same fields as the nested table)?
    Thanks in advance.

    >
    I have a table that has 2 columns of type nested table. Now in the purge process, when I try to truncate or drop a partition from this table, I get error that I can't do this (because table has nested tables). Can anybody help me telling how I will be able to truncate/drop partition from this table?
    >
    Unfortunately you can't do those operations when a table has a nested table column. No truncate, no drop, no exchange partition at the partition level.
    A nested table column is stored as a separate table and acts like a 'child' table with foreign keys to the 'parent' table. It is these 'foreign keys' that prevent the truncation (just like normal foreign keys prevent truncating partitions and must be disabled first), but there is no mechanism to 'disable' them.
    Just one excellent example (there are many others) of why you should NOT use object columns at all.
    >
    IF I change column types from nested table to varray type, will it help?
    >
    Yes but I STRONGLY suggest you take this opportunity to change your data model to a standard relational one and put the 'child' (nested table) data into its own table with a foreign key to the parent. You can create a view on the two tables that can make data appear as if you have a nested table type if you want.
    Assuming that you are going to ignore the above advice just create a new VARRAY type and a table with that type as a column. Remember VARRAYs are defined with a maximum size. So the number of nested table records needs to be within the capacity of the VARRAY type for the data to fit.
    >
    Also, is there any short method of moving existing data from a nested table column to a varray column (having same fields as nested table)?
    >
    Sure - just CAST the nested table to the VARRAY type. Here is code for a VARRAY type and a new table that shows how to do it.
    -- new array type
    CREATE OR REPLACE TYPE ARRAY_T AS VARRAY(10) OF VARCHAR2(64);
    /
    -- new table using the new array type - NOTE there is no nested table storage clause - arrays are stored inline
    CREATE TABLE partitioned_table_array
         ( ID_ INT,
           arra_col ARRAY_T )
         PARTITION BY RANGE (ID_)
         ( PARTITION p1 VALUES LESS THAN (40)
         , PARTITION p2 VALUES LESS THAN (80)
         , PARTITION p3 VALUES LESS THAN (100) );
    -- insert the data from the original table, converting the nested table data to the varray type
    INSERT INTO partitioned_table_array
    SELECT ID_, CAST(nested_col AS ARRAY_T) FROM partitioned_table;
    Naturally, since there is no more nested table storage, you can truncate or drop partitions in the above table:
    ALTER TABLE partitioned_table_array TRUNCATE PARTITION p1;
    ALTER TABLE partitioned_table_array DROP PARTITION p1;

  • SELECT query performance : One big table Vs many small tables

    Hello,
    We are using BDB 11g with SQLITE support. I have a query about 'select' query performance when we have one huge table vs. multiple small tables.
    Basically, in our application we need to run the select query multiple times, and today we have one huge table. Do you think breaking it into
    multiple small tables will help?
    For test purposes we tried creating multiple tables, but the performance of the 'select' query was more or less the same. Would that be because all tables map to only one database in the backend as key/value pairs, so a lookup (select query) on a small table or a big table makes no difference?
    Thanks.

    Hello,
    There is some information on this topic in the FAQ at:
    http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
    If this does not address your question, please just let me know.
    Thanks,
    Sandra

  • Table partitioning (intervel partitioning) on existing tables in oracle 11g

    Hi, I'm a newbie to table partitioning. I'm using 11g. I have a table of size 32 GB (which has 22 million records) and I want to apply interval partitioning to that table. I created an empty table with one partition, having the same columns as the source table, took a dump of the source table, and want to import it into the new partitioned table. Can you please suggest how to import the table dump into the new table? Also, is there any better way to do the same?

    Hi,
    imp user/password file=exp.dmp ignore=y
    The ignore=y causes the import to skip the table creation and continue loading all rows.
    On the other hand, you can insert data into the partitioned table with a subquery from the non-partitioned table, such as:
    insert into partitioned_table
    select * from original_table;
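    For reference, a minimal sketch of an 11g interval-partitioned target table (the column names are invented); only the first partition is declared, and Oracle adds monthly partitions automatically as rows arrive, so either of the two loading methods above can be used to fill it:
    CREATE TABLE new_table (
       id         NUMBER,
       created_dt DATE,
       payload    VARCHAR2(100)
    )
    PARTITION BY RANGE (created_dt)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    ( PARTITION p_first VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD')) );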
    Hope it helps,

  • HS ODBC GONE AWAY ON BIG TABLE QRY

    Hello,
    I have an HS ODBC connection set up pointing to a MySQL 5.0 database on Windows using mysql odbc 3.51.12. Oracle XE is on the same box and tnsames, sqlnet.ora, and HS ok is all set up.
    The problem is I have a huge table, 100 million rows, in MySQL, and when I run a query against it in Oracle SQL Developer it runs for about two minutes and then I get errors: ORA-00942, lost connection, or gone away.
    I can run a query against a smaller table in the schema and it returns rows quickly. So I know the HS ODBC connection is working.
    I noticed that on the big table query the HS service running on Windows starts up, uses 1.5 GB of memory, and CPU usage maxes out at 95%, then the connection drops.
    Any advice on what to do here? There don't seem to be any config settings for the HS service to limit or increase the rows, or increase the cache.
    MySQL does have some advanced ODBC driver options that I will try.
    Does anyone have any suggestions on how to handle this overloading problem??
    Thanks for the help,

    FYI, HS is Oracle Heterogeneous Services, used to connect to non-Oracle databases.
    I actually found a workaround. The table is so large that the query crashes, so I broke the table up into 5 MySQL views, and now I am able to query the views and insert the results into an Oracle table from an Oracle stored procedure (sketched below).
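    For example, a rough sketch of that workaround (the database link, view and table names are invented; with HS links the remote object names usually have to be double-quoted exactly as defined in MySQL):
    CREATE OR REPLACE PROCEDURE copy_traces_chunk AS
    BEGIN
      -- each MySQL view exposes a manageable slice of the 100-million-row table
      INSERT INTO local_traces
      SELECT * FROM "traces_view_1"@mysql_hs_link;
      COMMIT;
    END;
    /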
