Best approach to do range partitioning on huge tables

Hi All,
I am working on an Oracle 11gR2 three-node RAC database. Below are the DB details.
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
In my environment we have 10 big transaction tables (10 billion rows each) and they keep growing. Management is now planning to range-partition them on the CREATED_DT column as the partition key.
We tested this partitioning strategy with a few million records in another environment, using the steps below.
1. Create the new partitioned table as an empty copy of the original:
CREATE TABLE TRANSACTION_N
PARTITION BY RANGE ("CREATED_DT")
( PARTITION DATA1 VALUES LESS THAN (TO_DATE('2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART1,
  PARTITION DATA2 VALUES LESS THAN (TO_DATE('2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART2,
  PARTITION DATA3 VALUES LESS THAN (TO_DATE('2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART3 )
AS (SELECT * FROM TRANSACTION WHERE 1=2);
2. Exchange partition to move the data from the old table into the new partitioned table:
ALTER TABLE TRANSACTION_N
EXCHANGE PARTITION DATA1
WITH TABLE TRANSACTION
WITHOUT VALIDATION;
3. Create the required indexes (this took almost 3.5 hours with PARALLEL 16).
4. Rename the tables and drop the old ones.
This took around 8 hours for one table with 70 million records, so tables with billions of records will take far longer. The problem is that we get only 2 to 3 hours of downtime in production to implement this change for all tables.
Can you please suggest the best approach for copying that much data from the existing tables to the newly created partitioned tables and creating the required indexes?
Thanks,
Hari

>
In my environment we have 10 big transaction tables (10 billion rows each)... We tested this partitioning strategy with a few million records in another environment [full steps quoted above]... Can you please suggest the best approach for copying that much data from the existing tables to the newly created partitioned tables and creating the required indexes?
>
Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
See Exchanging Partitions in the VLDB and Partitioning doc
http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
>
When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
>
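In this case that means verifying, before the exchange, that no row in TRANSACTION falls on or past the upper bound of DATA1 (a minimal check using the names and boundary date from the question):
SELECT COUNT(*)
FROM   transaction
WHERE  created_dt >= TO_DATE('2012-08-01', 'YYYY-MM-DD');
-- must return 0 for the DATA1 exchange to be legitimate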
Comments below are limited to working with ONE table only.
ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment, but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments, almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
ISSUE #2 - You likely cannot move that much data in the 2 to 3 hour window that you have available for downtime, even if all you had to do was copy the existing datafiles.
ISSUE #3 - Even if you can avoid issue #2, you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
ISSUE #4 - Unless you have conducted full-volume performance testing in another environment prior to doing this in production, you are taking on a tremendous amount of risk.
ISSUE #5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system, you will have great difficulty overcoming issue #4, since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you equivalent, or better, performance.
ISSUE #6 - Things can, and will, go wrong and cause delays no matter which approach you take.
So assuming you plan to take care of issues #4 and #5, you will probably have three viable alternatives:
1. Use DBMS_REDEFINITION to do the partitioning online (a minimal sketch follows below). See the Oracle docs and this example from oracle-base for more info.
Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
Partitioning an Existing Table using DBMS_REDEFINITION
http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
2. Do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
3. Do the partitioning offline, but remove the oldest data first to minimize the amount of data that has to be worked with.
You should review all of the tables to see if you can remove older data from the current system. If you can, you could use online redefinition that ignores the older data. Afterwards you can extract this old data from the old table for archiving.
If the amount of old data is substantial, you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
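To illustrate option 1, here is a minimal online redefinition sketch. It is only a sketch: it assumes the partitioned copy TRANSACTION_N from step 1 above serves as the interim table, and the schema name SCOTT is a placeholder.
-- Check the table can be redefined using its primary key
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TRANSACTION', DBMS_REDEFINITION.CONS_USE_PK);
END;
/
-- Start the redefinition: rows are copied into the partitioned interim table
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TRANSACTION', 'TRANSACTION_N',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);
END;
/
-- Clone indexes, constraints, triggers and grants onto the interim table
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'TRANSACTION', 'TRANSACTION_N',
    num_errors => l_errors);
END;
/
-- Apply changes made during the copy, then swap the two tables
BEGIN
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'TRANSACTION', 'TRANSACTION_N');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TRANSACTION', 'TRANSACTION_N');
END;
/
The table stays online the whole time; only FINISH_REDEF_TABLE takes a brief exclusive lock, which is what makes this attractive given a 2 to 3 hour window.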

Similar Messages

  • What is the best approach to handle multiple FK with single table.

    If two tables are joined with each other with more than one ways, for example
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
If the above two objects are imported with FKs in an EUL and Discoverer Plus is used to create the above report, then on first inclusion of person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join it will never allow you to pick the person modifier name.
One solution is to create a custom folder with a query like
select col1, col2,...coln,
pc.name, pc.address,.... pc.phone,
pm.name, pm.address,.... pm.phone
    from main m,
    person pc,
    person pm
    where m.person_id_creator = pc.person_id
    and m.person_id_modifier = pm.person_id
Second solution is to import the PERSON folder twice in the EUL (optionally naming one as person_creator and the other as person_modifier) and manually define one join per table, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with PERSON_MODIFIER using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
    Question is, what approach is better OR is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is a EUL design overhead of including same object multiple times and then manually defining all join (or deleting unwanted joins), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if person table is further linked other tables and users want to see that information too. (for instance, if person address is stored in LOCATION table joined with location_id and user want to see both creator address and modifier address....now you will have to create multiple LOCATION folders).
    A third solution could be to register a function in discoverer that return person name when person_id is passed. This will work perfectly for above requirement but a down side is the report will run slower if they need filters on person names (then function will be used in where clause). Also, this solution is very specific to above scenario, it will not work if you want the report developer the freedom to pick any attribute from person table (lets say, person table contain 50 attributes then its not a good idea to register 50 functions).
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
In my opinion, the best approach (although by all means not the only approach - see below) would be to have the object loaded as two folders, with one join going to the first folder and the second join to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (E.g PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to say PERSON_MODIFIED
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
Other ideas that I have used and work well would be to use a database view or create a complex folder. Either will work. In both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael

  • How to find out the min & max partition_id in a range partition?

    Hi we have a table set up by range partition
    In the table creation script. It goes something like:
    PARTITION PARTD_CUST_3_1_NEW VALUES LESS THAN ('39500')
so how can we find out this number '39500' from one of the data dictionary views?
What we want to do is to set up parallel processing (do-it-yourself) based on partitions.
So the number of parallel processes will be based on this ID range.
I can find out how many partitions there are by using something like this:
    SELECT COUNT(*)
    FROM all_tab_partitions WHERE table_owner='OWNER_NAME' AND table_name='TABLE_NAME'
We have 5 partitions now, but the partition structure can change in the near future; this table has more than 130 million rows.
I do not want to hardcode this "39500" just in case the partition structure changes.
    Any idea ??

vxwo0owxv wrote:
so how can we find out this number '39500' from one of the data dictionary views?
Query DBA_TAB_PARTITIONS.HIGH_VALUE.
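For example (HIGH_VALUE is a LONG column, so it comes back as text; the owner and table names are the placeholders from the question, and DBA_TAB_PARTITIONS works the same way):
SELECT partition_name, partition_position, high_value
FROM   all_tab_partitions
WHERE  table_owner = 'OWNER_NAME'
AND    table_name  = 'TABLE_NAME'
ORDER BY partition_position;
The first and last rows by PARTITION_POSITION give the lowest and highest bounds without hardcoding '39500'.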

  • Range Partitioning Existing table

    Hi,
    I have a table with numeric field. I want to range partition the existing table. I used the following query
    alter table dept add partition by range(deptno) (
    partition p1 values less than (20),
    partition p1 values less than (40))
    It gives me ORA-00902: invalid datatype.
    What am I doing wrong? Is it possible to partition an existing table?
    Regards,
    Subbu S.

    Hi,
You cannot partition an existing table with an ALTER TABLE command. There are a couple of techniques with which you can partition an existing table, like:
1) Rename the existing table, create a new partitioned table and move the data.
    2) Using DBMS_REDEFINITION package.
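A minimal sketch of option 1, using the DEPT example from the question (a direct-path CTAS; the degree, partition names, and catch-all partition are arbitrary choices):
CREATE TABLE dept_part NOLOGGING
PARTITION BY RANGE (deptno)
( PARTITION p1   VALUES LESS THAN (20),
  PARTITION p2   VALUES LESS THAN (40),
  PARTITION pmax VALUES LESS THAN (MAXVALUE) )  -- catch-all so no row is rejected
PARALLEL 8
AS SELECT * FROM dept;
-- Recreate indexes, constraints and grants on DEPT_PART, then swap names
ALTER TABLE dept RENAME TO dept_old;
ALTER TABLE dept_part RENAME TO dept;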
    Have a look at the following links for details:
    http://www.oracle-base.com/articles/misc/PartitioningAnExistingTable.php
    http://www.dba-oracle.com/oracle_news/2004_3_9_rittman.htm
    Regards
    Asif Momen
    http://momendba.blogspot.com

  • Range Partitioning of Table

    Hi all,
I need to do range partitioning on a table using the current system timestamp, something like
CREATE TABLE SAMPLE(ID NUMBER, SAMPLE_DATE TIMESTAMP) PARTITION BY RANGE(SAMPLE_DATE) (PARTITION P1 VALUES LESS THAN (LOCALTIMESTAMP));
Is there any way I can assign a dynamic date using a variable, other than a string literal, after the VALUES LESS THAN clause? I tried achieving this with a stored procedure but it always throws an error for this field.
    CREATE OR REPLACE PROCEDURE partition_test
    IS
    x TIMESTAMP;
    comm long;
    BEGIN
    SELECT LOCALTIMESTAMP into x from dual;
    comm := 'CREATE TABLE SAMPLE(ID NUMBER, SAMPLE_DATE TIMESTAMP)PARTITION BY RANGE(SAMPLE_DATE)(PARTITION P1 VALUES LESS THAN (x))';
    execute immediate comm;
    END partition_test;
    error I'm getting is
    ORA-14019: partition bound element must be one of: string, datetime or interval literal, number, or MAXVALUE
    Any clue? Would really appreciate your help.
    Noman

Can you please try with proper quotes, like this:
comm := 'CREATE TABLE SAMPLE(ID NUMBER, SAMPLE_DATE TIMESTAMP)PARTITION BY RANGE(SAMPLE_DATE)(PARTITION P1 VALUES LESS THAN (' || x || '))';
If that doesn't work out, try printing the value of comm before the EXECUTE IMMEDIATE, and if required you can add the proper number of quotes.
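For reference, a variant that builds a valid datetime literal into the bound (a sketch; the explicit format mask is an assumption, but some literal form is required, since ORA-14019 means a partition bound must be a literal, not a variable):
CREATE OR REPLACE PROCEDURE partition_test IS
  comm VARCHAR2(4000);
BEGIN
  comm := 'CREATE TABLE SAMPLE (ID NUMBER, SAMPLE_DATE TIMESTAMP) '
       || 'PARTITION BY RANGE (SAMPLE_DATE) '
       || '(PARTITION P1 VALUES LESS THAN (TO_TIMESTAMP('''
       || TO_CHAR(LOCALTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS')
       || ''', ''YYYY-MM-DD HH24:MI:SS'')))';
  EXECUTE IMMEDIATE comm;
END partition_test;
/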

  • What is the best approach to insert millions of records?

    Hi,
What is the best approach to insert millions of records into a table?
If an error occurs while inserting, how do we know which record failed?
    Thanks & Regards,
    Sunita

Hello 942793,
There isn't a best approach if you do not provide us the requirements and the environment...
It depends on what "best" means for you.
    Questions:
    1.) Can you disable the Constraints / unique Indexes on the table?
    2.) Is there a possibility to run parallel queries?
3.) Do you need to know the rows which cannot be inserted if the constraints are enabled? Or is it not necessary?
    4.) Do you need it fast or you have time to do it?
    What does "best approach" mean for you?
    Regards,
    David
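If the answer to question 3 is yes, DML error logging is one common pattern: failing rows are diverted to an error table instead of aborting the whole load. A sketch (TARGET_TABLE and SOURCE_TABLE are placeholders):
-- One-time setup: creates the error table ERR$_TARGET_TABLE by default
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET_TABLE');
END;
/
-- Rows that violate constraints land in the error log instead of failing the insert
INSERT INTO target_table
SELECT * FROM source_table
LOG ERRORS REJECT LIMIT UNLIMITED;
-- Inspect the failed rows and reasons afterwards
SELECT ora_err_number$, ora_err_mesg$ FROM err$_target_table;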

  • Composite range - range partitioning gives ORA-14202

I am trying to do a composite range-range partitioning, but the table is not created. I get an error which I cannot look up in the ORA codes; it says: value is too high for subpartition, which confuses me...
    CREATE TABLE "NJ_VE_AERIAL_RDT"
    (     "RASTERID" NUMBER NOT NULL ENABLE,
    "BANDBLOCKNUMBER" NUMBER NOT NULL ENABLE,
         "PYRAMIDLEVEL" NUMBER NOT NULL ENABLE,
         "ROWBLOCKNUMBER" NUMBER NOT NULL ENABLE,
         "COLUMNBLOCKNUMBER" NUMBER NOT NULL ENABLE,
         "FILENAME" VARCHAR2(100 CHAR) NOT NULL ENABLE,
         "FILESIZE" NUMBER(12,0) NOT NULL ENABLE,
         "NORTH" NUMBER(19,15) NOT NULL ENABLE,
         "SOUTH" NUMBER(19,15) NOT NULL ENABLE,
         "EAST" NUMBER(19,15) NOT NULL ENABLE,
         "WEST" NUMBER(19,15) NOT NULL ENABLE,
         "BLOCKMBR" "MDSYS"."SDO_GEOMETRY" ,
         "RASTERBLOCK" BLOB,
         "INFO_CREATOR" VARCHAR2(100 BYTE) DEFAULT 'novalue' NOT NULL ENABLE,
         "INFO_CREATED" TIMESTAMP (6) WITH LOCAL TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL ENABLE,
         "INFO_LASTMODIFIER" VARCHAR2(100 CHAR) DEFAULT 'novalue' NOT NULL ENABLE,
         "INFO_LASTMODIEFIER" TIMESTAMP (6) WITH LOCAL TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL ENABLE,
         CONSTRAINT "NJ_VE_RDT_PK" PRIMARY KEY
    ("RASTERID"
    ,"BANDBLOCKNUMBER"
    ,"PYRAMIDLEVEL"
    ,"ROWBLOCKNUMBER"
    ,"COLUMNBLOCKNUMBER"))
PARTITION BY RANGE (WEST) INTERVAL (1)
SUBPARTITION BY RANGE (NORTH)
SUBPARTITION TEMPLATE (
SUBPARTITION s0 VALUES LESS THAN (15)
,SUBPARTITION s0 VALUES LESS THAN (5))
(PARTITION p0 VALUES LESS THAN (0));
What is my problem? My current solution is to fully drop the subpartitioning, but that's a cheesy workaround...
    ingo
    Message was edited by:
    Ingo Jannick

Ah, I see?!
This works, but even switching the subpartition definitions results in the mentioned error. Adding another sub does the same.
My guess is that it is a range-by-interval issue. I have to figure that out...
    Working:
    CREATE TABLE "NJ_VEL_RDT"
    (     "RASTERID" NUMBER NOT NULL ENABLE,
    "BANDBLOCKNUMBER" NUMBER NOT NULL ENABLE,
         "PYRAMIDLEVEL" NUMBER NOT NULL ENABLE,
         "ROWBLOCKNUMBER" NUMBER NOT NULL ENABLE,
         "COLUMNBLOCKNUMBER" NUMBER NOT NULL ENABLE,
         "FILENAME" VARCHAR2(100 CHAR) NOT NULL ENABLE,
         "FILESIZE" NUMBER(12,0) NOT NULL ENABLE,
         "NORTH" NUMBER(19,15) NOT NULL ENABLE,
         "SOUTH" NUMBER(19,15) NOT NULL ENABLE,
         "EAST" NUMBER(19,15) NOT NULL ENABLE,
         "WEST" NUMBER(19,15) NOT NULL ENABLE,
         "BLOCKMBR" "MDSYS"."SDO_GEOMETRY" ,
         "RASTERBLOCK" BLOB,
         "INFO_CREATOR" VARCHAR2(100 BYTE) DEFAULT 'novalue' NOT NULL ENABLE,
         "INFO_CREATED" TIMESTAMP (6) WITH LOCAL TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL ENABLE,
         "INFO_LASTMODIFIER" VARCHAR2(100 CHAR) DEFAULT 'novalue' NOT NULL ENABLE,
         "INFO_LASTMODIEFIER" TIMESTAMP (6) WITH LOCAL TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL ENABLE,
         CONSTRAINT "NJ_VE_PK" PRIMARY KEY
    ( "RASTERID",
    "BANDBLOCKNUMBER",
    "PYRAMIDLEVEL",
    "ROWBLOCKNUMBER",
    "COLUMNBLOCKNUMBER"))
    PARTITION BY RANGE(WEST) INTERVAL (5)
    SUBPARTITION BY RANGE(NORTH)
SUBPARTITION TEMPLATE (
SUBPARTITION s1 VALUES LESS THAN (65)
,SUBPARTITION s0 VALUES LESS THAN (MAXVALUE))
(PARTITION p0 VALUES LESS THAN (15))
;
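For what it's worth, the failing template above lists its bounds in descending order (s0 < 15, then s0 < 5) and reuses the name s0; range subpartition bounds in a template must ascend and names must be unique, which is consistent with the "value is too high for subpartition" message (ORA-14202). A corrected, self-contained sketch of the first attempt (the demo table and names are hypothetical):
CREATE TABLE range_range_demo (
  west  NUMBER NOT NULL,
  north NUMBER NOT NULL)
PARTITION BY RANGE (west) INTERVAL (1)
SUBPARTITION BY RANGE (north)
SUBPARTITION TEMPLATE (
  SUBPARTITION s0 VALUES LESS THAN (5),   -- bounds must be ascending
  SUBPARTITION s1 VALUES LESS THAN (15))  -- and names unique
(PARTITION p0 VALUES LESS THAN (0));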

  • Range partition the table ( containing huge data ) by month

There is a table with a huge amount of data, around 9 GB. This needs to be range-partitioned by month
to improve performance.
Can anyone suggest the best option to implement partitioning for this?

    I have a lot of tables like this. My main tip is to never assign 'MAXVALUE' for your last partition, because it will give you major headaches when you need to add a partition for a future month.
    Here is an example of one of my tables. Lots of columns are omitted, but this is enough to illustrate the partitioning.
    CREATE TABLE "TSER"."TERM_DEPOSITS"
    ( "IDENTITY_CODE" NUMBER(10), "ID_NUMBER" NUMBER(25),
    "GL_ACCOUNT_ID" NUMBER(14) NOT NULL ,
    "ORG_UNIT_ID" NUMBER(14) NOT NULL ,
    "COMMON_COA_ID" NUMBER(14) NOT NULL ,
    "AS_OF_DATE" DATE,
    "ISO_CURRENCY_CD" VARCHAR2(15) DEFAULT 'USD' NOT NULL ,
    "IDENTITY_CODE_CHG" NUMBER(10)
    CONSTRAINT "TERM_DEPOSITS"
    PRIMARY KEY ("IDENTITY_CODE", "ID_NUMBER", "AS_OF_DATE") VALIDATE )
    TABLESPACE "DATA_TS" PCTFREE 10 INITRANS 1 MAXTRANS 255
    STORAGE ( INITIAL 0K BUFFER_POOL DEFAULT)
    LOGGING PARTITION BY RANGE ("AS_OF_DATE")
    (PARTITION "P2008_06" VALUES LESS THAN (TO_DATE(' 2008-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    TABLESPACE "DATA_TS_PART1" PCTFREE 10 INITRANS 1
    MAXTRANS 255 STORAGE ( INITIAL 1024K BUFFER_POOL DEFAULT)
    LOGGING NOCOMPRESS , PARTITION "P2008_07" VALUES LESS THAN (TO_DATE(' 2008-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
    'NLS_CALENDAR=GREGORIAN'))
    TABLESPACE "DATA_TS_PART2" PCTFREE 10 INITRANS 1 MAXTRANS 255
    STORAGE ( INITIAL 1024K BUFFER_POOL DEFAULT) LOGGING NOCOMPRESS )
    PARALLEL 3
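Following that tip, adding next month's partition is then a simple ADD PARTITION instead of a split of a MAXVALUE partition (a sketch; the tablespace name here is hypothetical):
ALTER TABLE tser.term_deposits
  ADD PARTITION p2008_08
  VALUES LESS THAN (TO_DATE('2008-09-01', 'YYYY-MM-DD'))
  TABLESPACE data_ts_part3;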

  • Best way of partitioning the huge size table in oracle 11gr2

    hi,
    OS: linux
    DB: oracle 11gR2
The table size is about 2 TB, in a single tablespace with multiple DBF files in ASM.
Partitioning the table with DBMS_REDEFINITION has been running for the past 2 days (range partition). It is running, but not that fast.
Note: exchange partition was too slow, hence not used.
What is the best way of partitioning a huge table? Please suggest, thanks.

    >
What is the best way of partitioning a huge table?
    >
A few questions:
1. Is that an OLTP or OLAP table?
2. Is all of the data still needed online, or is some of it archivable?
3. What type of partitioning are you doing? RANGE? LIST? COMPOSITE?
4. Why do you say 'exchange partition is too slow' - did you do a test?
For example, if data will be partitioned by create date then one strategy is to initially put all existing data into the root partition of a range-partitioned table. Then you can start using new partitions for new incoming data and take your time to partition the current data.
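That strategy in sketch form (the table name, dates and partition names are placeholders): all existing rows land in one big initial partition, new rows go to fresh monthly partitions, and the big partition is split later outside any critical window:
CREATE TABLE big_table_part (
  id         NUMBER,
  created_dt DATE NOT NULL)
PARTITION BY RANGE (created_dt)
( PARTITION p_hist   VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD')),
  PARTITION p2013_01 VALUES LESS THAN (TO_DATE('2013-02-01', 'YYYY-MM-DD')),
  PARTITION p2013_02 VALUES LESS THAN (TO_DATE('2013-03-01', 'YYYY-MM-DD')) );
-- Later, at leisure, carve months out of the historical partition
ALTER TABLE big_table_part SPLIT PARTITION p_hist
  AT (TO_DATE('2012-12-01', 'YYYY-MM-DD'))
  INTO (PARTITION p_hist, PARTITION p2012_12);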

  • Modify HUGE HASH partition table to RANGE partition and HASH subpartition

    I have a table with 130,000,000 rows hash partitioned as below
    ----RANGE PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)(
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE));
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR, LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
INSERT INTO TEST_PART
VALUES ('2006',NULL,'CM');
    COMMIT;
Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR. I think it will be easier if I create a range partition on the YRMO_NBR field and then create the current hash partition as a sub-partition.
How do I change the current partitioning of the table from HASH to RANGE with a HASH sub-partition, without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSIT PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Pls advice
    Thanks in advance

    Sorry for the confusion in the first part where I had given a RANGE PARTITION instead of HASH partition. Pls read as follows;
    I have a table with 130,000,000 rows hash partitioned as below
    ----HASH PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY HASH (C_NBR)
    PARTITIONS 2
    STORE IN (PCRD_MBR_MR_02, PCRD_MBR_MR_01);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
INSERT INTO TEST_PART
VALUES ('2006',NULL,'CM');
    COMMIT;
Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR. I think it will be easier if I create a range partition on the YRMO_NBR field and then create the current hash partition as a sub-partition.
How do I change the current partitioning of the table from hash to range with a hash sub-partition, without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSIT PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Pls advice
    Thanks in advance
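No reply is included for this one, but an approach consistent with the rest of this thread would be online redefinition into the target composite layout. A sketch only: the schema name is a placeholder, TEST_PART_NEW is assumed to be pre-created with the RANGE-HASH DDL above, and CONS_USE_ROWID is used because TEST_PART has no primary key:
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_NEW',
    options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
  -- Recreates TEST_PART_IX_001 and other dependents on the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'TEST_PART', 'TEST_PART_NEW',
    num_errors => l_errors);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_NEW');
END;
/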

  • Range Partitioning on Varchar2 column???

We have a table with a date column whose type is VARCHAR2.
This column's format is '16021013' ('ddmmyyyy').
We want to range-partition on this column. What would be the best way?
Do you think virtual column partitioning would be efficient?

    >
I'm not a new DBA :-) You can be sure. I asked only about the range partitioning trick with a VARCHAR2 column because we did our examination of this table. We will need to archive this table quarter by quarter to another archive database. Then, of course, nearly all queries come in using this date column first, and they can have different filtering inside the WHERE conditions. This table already has an index on this column, but with a huge volume of data the index performance is not enough for us. This is a 24x7 banking system and we joined this project late. Because of this, we can't change things like the data type of that column at this point.
    >
    You are taking things way too personally. No one said anything about whether you were, or were not, a DBA or made any other comments about your skill or abilities.
What we asked was for you to tell us WHAT the problem was that you were trying to solve and WHY you thought partitioning would solve it.
    And now, after several generic posts, you have finally provided that information. We can only comment based on the information that you post. We can't guess as to what types of queries you use or what kinds of predicates you use in those queries. But we need to know that information in order to provide the best advice.
    Next time you post put the important information in your original question:
    1. A table column is VARCHAR2 but contains a date value. We are unable to change the datatype of this column.
    2. We need to archive data quarterly based on this date value
    3. Nearly all queries use this date value and then also may have additional filter conditions
    4. This date column is indexed but we would like to improve the performance beyond what this index can give us.
    The above is a summary that includes all important information that is needed to know how best to help you.
    And I made a pretty good guess since two replies ago I provided you with example code that shows just how to partition the table.
    >
    Now, our only aim is how to make range partitioning this varchar2 date column.
    >
    As I showed you in my example code earlier you can add a virtual column to the table and partition on it. My example code creates a monthly partitioned table that allows you to archive by month or archive every three months to archive by quarter.
    You can modify that example to use quarterly partitions if you want but I would recommend that you use standard monthly partitions since they will satisfy the widest range of predicates.
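The example code referenced above is not included in this excerpt, but the virtual-column approach would look something like this (a sketch; the table and column names are hypothetical, and the format mask matches the 'ddmmyyyy' strings described in the question):
CREATE TABLE txn_archive (
  txn_id   NUMBER,
  date_str VARCHAR2(8) NOT NULL,
  -- virtual column derived from the stored 'ddmmyyyy' string
  txn_date DATE GENERATED ALWAYS AS (TO_DATE(date_str, 'DDMMYYYY')) VIRTUAL)
PARTITION BY RANGE (txn_date)
( PARTITION p2013_01 VALUES LESS THAN (TO_DATE('2013-02-01', 'YYYY-MM-DD')),
  PARTITION p2013_02 VALUES LESS THAN (TO_DATE('2013-03-01', 'YYYY-MM-DD')) );
Note that predicates must reference the virtual column (or the identical expression) for partition pruning to take effect; filters on the raw VARCHAR2 column alone will not prune.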

  • Creating Local partitioned index on Range-Partitioned table.

    Hi All,
    Database Version: Oracle 8i
    OS Platform: Solaris
I need to create a local partitioned index on a column of a range-partitioned table having 8 million records. Is there a fastest way to do this?
I think we can use the NOLOGGING, PARALLEL, and UNRECOVERABLE options.
But while considering undo and redo, and mainly the time required to perform this activity... which is the best method?
Please guide me to perform it in the fastest way, and online!!!
    -Yasser

    YasserRACDBA wrote:
    3. CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) LOCAL
    NOLOGGING PARALLEL (DEGREE 14) online;
    4. Analyze the table with cascade option.
Do you think this is the only method to perform the operation in the fastest way? As the table contains 8 million records and it's a production database.
Yasser,
    if all partitions should go to the same tablespace then you don't need to specify it for each partition.
    In addition you could use the "COMPUTE STATISTICS" clause then you don't need to analyze, if you want to do it only because of the added index.
    If you want to do it separately, then analyze only the index. Of course, if you want to analyze the table, too, your approach is fine.
    So this is how the statement could look like:
    CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) TABLESPACE CS_BILLING LOCAL NOLOGGING PARALLEL (DEGREE 14) ONLINE COMPUTE STATISTICS;
If this operation exceeds a particular time window... can I kill the process? ...What's the worst that will happen if I kill this process?
Killing an ONLINE operation is a bit of a mess... You're already quite on the edge (parallel, online, possibly compute statistics) with this statement. The ONLINE operation creates an IOT table to record the changes to the underlying table during the build operation. All these things need to be cleaned up if the operation fails or the process dies/gets killed. This cleanup is supposed to be performed by the SMON process if I remember correctly. I remember that I once ran into trouble in 8i after such an operation failed; I may even have gotten an ORA-00600 when I tried to access the table afterwards.
It's not unlikely that your 8.1.7.2 will give you trouble with this kind of statement, so be prepared.
How much time may it take? (Just to be on the safer side)
The time it takes to scan the whole table (if the information can't be read from another index), plus the sorting operation, plus writing the segment, plus any wait time due to concurrent DML / locks, plus the time to process the table that holds the changes that were done to the table while building the index.
You can try to run an EXPLAIN PLAN on your CREATE INDEX statement, which will give you a cost indication if you're using the cost-based optimizer.
Please suggest if any other way exists to perform this in the fastest way.
Since you will need to sort 8 million rows, if you have sufficient memory you could bump up the SORT_AREA_SIZE for your session temporarily to sort as much as possible in RAM.
    -- Use e.g. 100000000 to allow a 100M SORT_AREA_SIZE
    ALTER SESSION SET SORT_AREA_SIZE = <something_large>;
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
I'd like to know the best approaches to incorporate restart options in our process, i.e. handling a failure of a mapping in a process flow.
How do we re-cycle failed rows?
Are there any built-in features/best approaches in OWB to implement the above?
Do the runtime audit tables help us to build a restart process?
If not, do we need to maintain our own (custom) tables to hold such data?
    How did our forum members handled above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
How many mappings (range) do you have in a process flow?
Several hundred (100-300 mappings).
If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?
Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct data, etc.) and then repeat the m2 mapping execution from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the Repeat option.
On restart, will it run m1 again and m2 and so on, or will it restart at row 1 of m2?
You can specify the restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in set-based mode, so in case of error all table updates will roll back;
but there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in post-mapping - you must carefully analyze the results of the error).
What will happen if m3 fails?
The process is suspended and you can restart execution from m3.
By having no failover and max number of errors = 0, you achieve re-cycling of failed rows down to zero (0).
These settings guarantee only two possible return results of a mapping - SUCCESS or ERROR.
What is the impact if we have a large volume of data?
In my opinion, for large volumes set-based mode is the preferred data processing mode.
    Oleg

  • Partition Pruning on Interval Range Partitioned Table not happening when SYSDATE used in Where Clause

    We have tables that are interval range partitioned on a DATE column, with a partition for each day - all very standard and straight out of Oracle doc.
    A 3rd party application queries the tables to find number of rows based on date range that is on the column used for the partition key.
This application specifies the date range relative to the current date - i.e. for the last two days it would be "..startdate > SYSDATE - 2" - but partition pruning does not take place and the explain plan shows that every partition is included.
By presenting the query with the date in a variable, partition pruning does take place, and the query obviously performs much better.
DB is 11.2.0.3 on RHEL6, with default parameters - i.e. nothing changed that would influence optimizer behavior in an unusual way.
I can't work out why this would be so. It is very easy to reproduce with the simple test case below.
    I'd be very interested to hear any thoughts on why it is this way and whether anything can be done to permit the partition pruning to work with a query including SYSDATE as it would be difficult to get the application code changed.
    Furthermore to make a case to change the code I would need an explanation of why querying using SYSDATE is not good practice, and I don't know of any such information.
    1) Create simple partitioned table
CREATE TABLE part_test
       (id                      NUMBER NOT NULL,
        starttime               DATE NOT NULL,
        CONSTRAINT pk_part_test PRIMARY KEY (id))
    PARTITION BY RANGE (starttime) INTERVAL (NUMTODSINTERVAL(1,'day')) (PARTITION p0 VALUES LESS THAN (TO_DATE('01-01-2013','DD-MM-YYYY')));
2) Populate the table with 1 million rows spread across 10 partitions
    BEGIN
        FOR i IN 1..1000000
        LOOP
            INSERT INTO part_test (id, starttime) VALUES (i, SYSDATE - DBMS_RANDOM.value(low => 1, high => 10));
        END LOOP;
END;
/
    EXEC dbms_stats.gather_table_stats('SUPER_CONF','PART_TEST');
3) Query the table for data from the last 2 days using SYSDATE in the WHERE clause
    EXPLAIN PLAN FOR
    SELECT  count(*)
    FROM    part_test
    WHERE   starttime >= SYSDATE - 2;
    | Id  | Operation                 | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT          |           |     1 |     8 |  7895  (1)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE           |           |     1 |     8 |            |          |       |       |
    |   2 |   PARTITION RANGE ITERATOR|           |   111K|   867K|  7895   (1)| 00:00:01 |   KEY |1048575|
    |*  3 |    TABLE ACCESS FULL      | PART_TEST |   111K|   867K|  7895   (1)| 00:00:01 |   KEY |1048575|
    Predicate Information (identified by operation id):
       3 - filter("STARTTIME">=SYSDATE@!-2)
4) Now do the same query but with SYSDATE - 2 presented as a literal value.
This query returns the same answer but with a very different cost.
    EXPLAIN PLAN FOR
    SELECT count(*)
    FROM part_test
    WHERE starttime >= (to_date('23122013:0950','DDMMYYYY:HH24MI'))-2;
    | Id  | Operation                 | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT          |           |     1 |     8 |   131  (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE           |           |     1 |     8 |            |          |       |       |
    |   2 |   PARTITION RANGE ITERATOR|           |   111K|   867K|   131   (0)| 00:00:01 |   356 |1048575|
    |*  3 |    TABLE ACCESS FULL      | PART_TEST |   111K|   867K|   131   (0)| 00:00:01 |   356 |1048575|
    Predicate Information (identified by operation id):
       3 - filter("STARTTIME">=TO_DATE(' 2013-12-21 09:50:00', 'syyyy-mm-dd hh24:mi:ss'))
    thanks in anticipation
    Jim

As Jonathan has already pointed out, there are situations where the CBO knows that partition pruning will occur but is unable to identify those partitions at parse time. The CBO will then use dynamic pruning, which means determining the partitions to eliminate at run time. This is why you see the KEY information instead of a known partition number. This mainly occurs when you compare a function to your partition key, i.e. where partition_key = function. And SYSDATE is a function. For the other bizarre PSTOP number (1048575) see this blog:
    http://hourim.wordpress.com/2013/11/08/interval-partitioning-and-pstop-in-execution-plan/
    Best regards
    Mohamed Houri

  • Range Partitioning a table. Max value to be defined

    Hi,
I am using a range-partitioned table, partitioned on a date, and have defined the max value as 6 months after the creation date.
I have a proc which creates the partitions I want in advance by splitting up the max partition.
- Now what do I do when the max partition is reached after 6 months?
- If I define the max partition as one or two years after the current date, instead of the currently defined 6 months after the creation date, what are the negatives attached to that?
    I can't use Interval Partition and have to use Range only.
    Kindly suggest.
    Thanks..

    >
    I am using a range partitioned table, range partitioned on date, and have defined max value as 6 months after the Creation Date.
    I have a proc which creates the partitions I want in advance by splitting up the max partition.
    - Now what do I do when max partition is reached after 6 months?
    - If I define max partition one year or two year after the current date instead of the currently defined 6 months after creation date. What are the negatives attached with it?
    >
    Any data with a partition key that does NOT match any partition will cause your INSERT query to fail.
    Any partition that has no data to match it will simply remain empty.
    A common partitioning scheme is to define one partition for all old data, one partition with a high max value and then split the max value partition to get the partitions you want in the middle.
    Let's say you want monthly partitions but don't have that much data from before the current year, 2012.
    1. Create one partition for dates < 1/1/2012
    2. One partition each for the 12 months of 2012
    3. One max value partition to be 1/1/4000
    You would just split the max value partition to create each month of 2013. The split could be done ahead of time or a month at a time as you choose.
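For example, carving January 2013 out of the max value partition would look like this (a sketch; the table and partition names are placeholders):
ALTER TABLE my_range_table SPLIT PARTITION p_max
  AT (TO_DATE('2013-02-01', 'YYYY-MM-DD'))
  INTO (PARTITION p2013_01, PARTITION p_max);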
    The only negative is that any data inserted by mistake that has a super-high date will go into the max value partition. But that is going to happen anyway. If you accidentally enter a date of 3/23/3882 it won't be rejected.
    But it is easy to query periodically to see if you have any 'bad' data like that. And the alternative is that an INSERT would fail because of the one bad record and all of your good data would be rejected anyway so it's not really much of a negative.
    Remember - for best management performance each partition should have its own tablespace and the indexes should all be local if possible.
