Bulk text data insertion in Partitioned Table

Hi All,
I want to insert more than 200 crore (2 billion) records from text data files into a single table, say TAB1 (a RANGE-partitioned table).
Currently I am using the EXTERNAL TABLE approach, passing 5 file names at a time (45 crore records in total) in the external table definition's LOCATION clause, separated by commas.
After that using following sql statement:
INSERT /*+ APPEND PARALLEL(6) */ INTO TAB1 SELECT * FROM TAB1_EXT;
The current execution-time statistics for 45 crore records are as follows:
1)     With the /*+ PARALLEL(6) */ hint, execution time: 81 mins
2)     With the /*+ APPEND PARALLEL */ hint, execution time: 73 mins
3)     With a cursor loop, BULK COLLECT and FORALL in batches of 1,000,000 records, execution time: 98 mins
Earlier the execution time was far worse, but after varying the SQL logic, DB parameters and hardware it is now reasonable.
Indexes have now been removed from the table. NOLOGGING is deliberately not being used, as it would not be acceptable in the PROD environment.
Also, the database is not in ARCHIVELOG mode at present.
I have also tried changing DB parameters such as SGA_MAX_SIZE, which is now 16 GB. I am even considering changing DB_BLOCK_SIZE from 8K to 32K.
O/S: LINUX
ORACLE DB: ORACLE 11g
RAM: 256 GB
48 CPU CORES, AMD Opteron (HP DL 585 Model)
If you have any suggestions, please pass them on to me.
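
One detail worth checking against the timings above: in 11g the PARALLEL hint on an INSERT parallelises the query side, but the insert itself only runs as parallel DML if the session enables it first. A minimal sketch, assuming degree 6 as in the tests above:

ALTER SESSION ENABLE PARALLEL DML;   -- without this the insert part stays serial even with the hint
INSERT /*+ APPEND PARALLEL(t, 6) */ INTO tab1 t
SELECT /*+ PARALLEL(x, 6) */ * FROM tab1_ext x;
COMMIT;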

Instead of disturbing parallel coordinators you could just submit n jobs, each doing a simple insert into big_table select * from external_table_n or something similar - all external_table_i having the same structure, just different LOCATION parameters.
Regards
Etbin
Edited by: Etbin on 26.2.2011 12:25
possibly grouping the text data files according to the partitions their data would end up in - a rough sketch follows below
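
A rough sketch of that multi-job idea using DBMS_SCHEDULER, assuming one external table per group of files (all object and job names here are made up):

BEGIN
  FOR i IN 1 .. 5 LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'LOAD_TAB1_' || i,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN INSERT INTO tab1 SELECT * FROM tab1_ext_' || i ||
                    '; COMMIT; END;',
      enabled    => TRUE);   -- starts immediately; each job loads one external table
  END LOOP;
END;
/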

Similar Messages

  • Move data from Non Partitioned Table to Partitioned Table

    Hi Friends,
    I am using Oracle 11.2.0.1 DB
    Please let me know how I can copy/move the data from a non-partitioned Oracle table to the newly created partitioned table.
    Regards,
    DB

    839396 wrote:
    Hi All,
    Created Partitioned table but unable to copy the data from Non Partitioned table:
    SQL> select * from sales;
    SNO YEAR NAME
    1 01-JAN-11 jan2011
    1 01-FEB-11 feb2011
    1 01-JAN-12 jan2012
    1 01-FEB-12 feb2012
    1 01-JAN-13 jan2013
    1 01-FEB-13 feb2013
    Into which partition should the row immediately above ("01-FEB-13") be deposited?
    [oracle@localhost ~]$ oerr  ora 14400
    14400, 00000, "inserted partition key does not map to any partition"
    // *Cause:  An attempt was made to insert a record into, a Range or Composite
    //          Range object, with a concatenated partition key that is beyond
    //          the concatenated partition bound list of the last partition -OR-
    //          An attempt was made to insert a record into a List object with
    //          a partition key that did not match the literal values specified
    //          for any of the partitions.
    // *Action: Do not insert the key. Or, add a partition capable of accepting
    //          the key, Or add values matching the key to a partition specification>
    6 rows selected.
    >
    SQL>
    SQL> create table sales_part(sno number(3),year date,name varchar2(10))
    2 partition by range(year)
    3 (
    4 partition p11 values less than (TO_DATE('01/JAN/2012','DD/MON/YYYY')),
    5 partition p12 values less than (TO_DATE('01/JAN/2013','DD/MON/YYYY'))
    6 );
    Table created.
    SQL> SELECT table_name,partition_name, num_rows FROM user_tab_partitions;
    TABLE_NAME PARTITION_NAME NUM_ROWS
    SALES_PART P11
    SALES_PART P12
    UNPAR_TABLE UNPAR_TABLE_12 776000
    UNPAR_TABLE UNPAR_TABLE_15 5000
    UNPAR_TABLE UNPAR_TABLE_MX 220000
    SQL>
    SQL> insert into sales_part select * from sales;
    insert into sales_part select * from sales
    ERROR at line 1:
    ORA-14400: inserted partition key does not map to any partition
    Regards,
    DB
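
    A minimal sketch of one way past the ORA-14400 above (partition names are made up): add a partition covering the 2013 rows, or a MAXVALUE catch-all, before re-running the insert.

    ALTER TABLE sales_part ADD PARTITION p13 VALUES LESS THAN (TO_DATE('01/JAN/2014','DD/MON/YYYY'));
    -- or, as a catch-all for anything beyond the defined ranges:
    -- ALTER TABLE sales_part ADD PARTITION pmax VALUES LESS THAN (MAXVALUE);
    INSERT INTO sales_part SELECT * FROM sales;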

  • How is the data inserted into CST_INV_QTY_TEMP table?

    Hi All,
    How is the data inserted into CST_INV_QTY_TEMP table ?
    Thanks in advance,
    Mayur
    Edited by: 928178 on 17-Apr-2012 04:29

    How is the data inserted into CST_INV_QTY_TEMP table ?
    TABLE: BOM.CST_INV_QTY_TEMP
    http://etrm.oracle.com/pls/et1211d9/etrm_pnav.show_object?c_name=CST_INV_QTY_TEMP&c_owner=BOM&c_type=TABLE
    Thanks,
    Hussein

  • How to exp a specific partition data from a partitioned table

    Thanks in advance.
    Is it possible to export a specific partition's data from a partitioned table? If so, please describe how, with an example. Thank you.

    You would specify a partition with the table_name:partition_name syntax in the TABLES parameter.
    See examples at
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#CEGCJABJ
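
    For instance (schema, partition and file names below are made up), with the original exp utility and the Data Pump equivalent:

    exp scott/tiger FILE=sales_p1.dmp LOG=sales_p1.log TABLES=sales:p1
    expdp scott/tiger DIRECTORY=dp_dir DUMPFILE=sales_p1.dmp LOGFILE=sales_p1.log TABLES=sales:p1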

  • Using Bulk operations for INSERT into destination table and delete from src

    Hi,
    Is there any way to expedite the process of data movement?
    I have a source set of tables (with its pk-fk relations) and a destination set of tables.
    Currently my code picks up a single record in a cursor from the parent-most table, and then moves the data from the other respective tables. But this is happening one by one... Is there any way I can make this take less time?
    If I use bulk inserts and collections, I will not be able to use the DELETE in the same block for the same source record.
    Thanks
    Regards
    Abhivyakti

    Abhivyakti
    I'm not 100% sure how your code flows from what you've stated, but generally you should try and avoid cursor FOR LOOPS and possibly BULK COLLECTING.
    I always follow the sequence in terms of design:
    1. Attempt to use bulk INSERTS, UPDATES and/or DELETES first and foremost. (include MERGE as well!)
    2. If one cannot possibly do the above then USE BULK COLLECTIONS using a combination of RETURNING INTO's and
    FORALL's.
    However, before you follow this method, and if you are relatively new to Oracle PL/SQL,
    share the reason you cannot follow the first method on this forum, and you're bound to find some
    help with sticking to method one!
    3. If method two is impossible, and there would have to be a seriously good reason for this, then follow the cursor FOR LOOP
    method.
    You can combine BULK COLLECTS with UPDATES and DELETES, but not with INSERTS
    bulk collect into after insert ?
    Another simple example of BULK COLLECTING
    Re: Reading multiple table type objects returned
    P;
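
    A minimal sketch of method 1 applied to a move-and-purge, assuming a parent/child pair and entirely made-up names; the whole batch moves in four set-based statements instead of one row at a time:

    INSERT INTO dest_parent SELECT * FROM src_parent WHERE batch_id = :b;
    INSERT INTO dest_child  SELECT * FROM src_child  WHERE batch_id = :b;
    DELETE FROM src_child   WHERE batch_id = :b;   -- children first, then parents
    DELETE FROM src_parent  WHERE batch_id = :b;
    COMMIT;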

  • Copying data into a partitioned table

    Chaps,
    We have a bunch of data tables that we are partitioning to improve performance. I am trying to find the best way to copy the data from the old tables into the partitioned ones.
    For example, one table has about 45 million rows. Performing a query on the entire table (one column is a timestamp) between two times takes about 6 minutes. I tried to do:
    INSERT INTO NEWTABLE (SELECT * FROM OLDTABLE)
    But it's taking ages, and when I checked on the progress in Enterprise Management Console I saw that it was performing a full table scan and it ended up taking over 45 minutes! We have 2 more tables of this size and 2 others that are 5 times as large, so if I can make this go faster it would be brilliant!
    I'm pretty sure that exporting the old table using 'exp' and then importing into the new partitioned table using 'imp' will not respect the partitioning?
    So can anyone think of a quicker method than this?
    Many thanks in advance,
    Simon

    1. Target table : alter table nologging
    2. Target table : set indexes unusable
    3. ALTER SESSION SET SKIP_UNUSABLE_INDEXES = TRUE
    4. INSERT /*+ APPEND */ INTO TBL_TARGET SELECT * FROM TBL_SOURCE
    4a. The /*+ PARALLEL */ hint will probably also be an option - it depends ...
    5. Target table: rebuild indexes (probably with separate jobs for every partition).
    6. Target table: ALTER TABLE ... LOGGING
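
    A hedged sketch of those steps in one script (the index name is made up; NEWTABLE/OLDTABLE as in the question above):

    ALTER TABLE newtable NOLOGGING;
    ALTER INDEX newtable_idx1 UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;
    INSERT /*+ APPEND */ INTO newtable SELECT * FROM oldtable;
    COMMIT;
    ALTER INDEX newtable_idx1 REBUILD;   -- or REBUILD PARTITION ... per partition
    ALTER TABLE newtable LOGGING;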

  • Importing partitioned table data into non-partitioned table

    Hi Friends,
    SOURCE SERVER
    OS:Linux
    Database Version:10.2.0.2.0
    i have exported one partition of my partitioned table like below..
    expdp system/manager DIRECTORY=DIR4 DUMPFILE=mapping.dmp LOGFILE=mapping_exp.log TABLES=MAPPING.MAPPING:DATASET_NAP
    TARGET SERVER
    OS:Linux
    Database Version:10.2.0.4.0
    Now when i am importing into another server i am getting below error
    Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 17 January, 2012 11:22:32
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MAPPING"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "MAPPING"."SYS_IMPORT_FULL_01":  MAPPING/******** DIRECTORY=DIR3 DUMPFILE=mapping.dmp LOGFILE=mapping_imp.log TABLE_EXISTS_ACTION=APPEND
    Processing object type TABLE_EXPORT/TABLE/TABLE
    ORA-39083: Object type TABLE failed to create with error:
    ORA-00959: tablespace 'MAPPING_ABC' does not exist
    Failing sql is:
    CREATE TABLE "MAPPING"."MAPPING" ("SAP_ID" NUMBER(38,0) NOT NULL ENABLE, "TG_ID" NUMBER(38,0) NOT NULL ENABLE, "TT_ID" NUMBER(38,0) NOT NULL ENABLE, "PARENT_CT_ID" NUMBER(38,0), "MAPPINGTIME" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE, "CLASS" NUMBER(38,0) NOT NULL ENABLE, "TYPE" NUMBER(38,0) NOT NULL ENABLE, "ID" NUMBER(38,0) NOT NULL ENABLE, "UREID"
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_TG_ID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."PK_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_UREID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_V2" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_PARENT_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."CKC_SMAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."PK_MAPPING_ITM" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TG" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_PART" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_TIME_T" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_DAY" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_BTMP" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2_T" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Job "MAPPING"."SYS_IMPORT_FULL_01" completed with 52 error(s) at 11:22:39Please help..!!
    Regards
    Umesh Gupta

    yes, i have tried that option as well.
    but when I write one tablespace name in the REMAP_TABLESPACE clause, it gives an error for the second one, and if I include the 1st and 2nd tablespaces it gives an error for the 3rd one.
    One option I know of is to write all the tablespace names in REMAP_TABLESPACE, but that is a lengthy process. Is there any other way possible?
    Regards
    Umesh
    AFAIK the option you have is the one I would recommend ... though it is lengthy :-(
    Wait for some EXPERT and GURU's review on this issue .........
    Good luck ....
    --neeraj
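
    For what it's worth, REMAP_TABLESPACE can simply be repeated on one impdp command line, one SOURCE:TARGET pair per tablespace (the additional source names and the USERS target below are made up):

    impdp mapping/****** DIRECTORY=DIR3 DUMPFILE=mapping.dmp LOGFILE=mapping_imp.log \
      TABLE_EXISTS_ACTION=APPEND \
      REMAP_TABLESPACE=MAPPING_ABC:USERS \
      REMAP_TABLESPACE=MAPPING_DEF:USERS \
      REMAP_TABLESPACE=MAPPING_GHI:USERS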

  • No data inserted in error table

    hi,
    I developed an ODI interface that inserts data from one table into other table , (both tables are in the same schema of oracle database.)
    But on sending faulty data ,
    I can’t find any records in error table (E$_ , SNP_CHK_TBL)
    I tried these knowledge modules Combination
    CKM SQL - IKM Oracle Incremental Update
    CKM Oracle –IKM SQL Control Append
    Data is inserted in Flow table … but PK Errors records are not inserted in error table.
    query of PK Errors in Operator:
    insert into ODI.E_EMP2
    (
         ROW_ID,
         ERR_TYPE,
         ERR_MESS,
         ORIGIN,
         CHECK_DATE,
         CONS_NAME,
         CONS_TYPE,
         ENO,
         ENAME
    )
    select     distinct
         TRG.rowid,
         'F',
         'The primary key EMP2_PK is not unique.',
         '(7010)ERROR_CHK.EMP1_TO_EMP2',
         sysdate,
         'EMP2_PK',
         'PK',     
         TRG.ENO,
         TRG.ENAME
    from     ODI.I_EMP2 TRG,
         ODI.I_EMP2 TRG2
    where     TRG2.ENO = TRG.ENO and
         TRG2.rowid <> TRG.rowid
    Do I need to modify CKM and IKM.... ?
    Please help....

    Hi, hoping to help.
    Try using IKM SQL Control Append.
    On the Control tab (the second tab) you should see the list of checks. Are they set to "yes"?
    If not, go to the Model and add the constraints.
    Regards
    DecaXD

  • Slow inserts into partitioned table

    I am having trouble inserting into a simple partitioned table after an upgrade to 11.2.0.3. I'm seeing insert speeds of subsecond up to 10 and 12 seconds. We have pre created the partitions for this table (and all children via reference partitioning). We have gathered dictionary and static object stats as well as statistics on all partitions.
    Queries against the dictionary are incredibly slow as well and showing very high io.
    Any help would be greatly appreciated. Thank you for your time.
    Windows 2008 advanced server
    Oracle Enterprise edition 11.2.0.3
    Edited by: user593549 on Mar 26, 2012 11:16 AM

    user593549 wrote:
    I am having trouble inserting into a simple partitioned table after an upgrade to 11.2.0.3. I'm seeing insert speeds of subsecond up to 10 and 12 seconds. We have pre created the partitions for this table (and all children via reference partitioning). We have gathered dictionary and static object stats as well as statistics on all partitions.
    Queries against the dictionary are incredibly slow as well and showing very high io.
    Any help would be greatly appreciated. Thank you for your time.
    Windows 2008 advanced server
    Oracle Enterprise edition 11.2.0.3
    Edited by: user593549 on Mar 26, 2012 11:16 AM
    Thread: HOW TO: Post a SQL statement tuning request - template posting

  • How to use DS's text processing to parse the text data stored in a DB table?

    I want to use DS's text processing to parse the text data stored in a database table, but all the demos I have seen parse text from a flat file. I tried with the blueprint sample and edited the input from a flat file to a database table, but cannot map the column with the base entityExtraction. Can anyone give me a guide or example? Thanks so much!
    regards,
    Bin

    Still looking for some answers. Can anyone help me?

  • Sql* Loader syntax to load data into a partitioned table

    Hi All,
    I was trying to load data from a csv file to a partitioned table.
    The table name is countries_info
    columns :
    country_code ,
    country_name,
    country_language
    The column country_code is list partitioned with partition p1 for values 'EN','DE','IN' etc
    and partition p2 for 'KR','AR','IT' etc.
    I tried to load data into this table, but I was getting some error Message (not mapping to partitioned key).
    I tried syntax
    load data
    infile 'countries.csv'
    append
    into table countries_info
    partition(p1) ,
    partition(p2)
    fields terminated by ','
    country_code ,
    country_name,
    country_language
    What is the correct syntax- I searched a lot but have not been able to find out.

    Peeush_Rediff wrote:
    Hi All,
    I tried to load data into this table, but I was getting some error Message (not mapping to partitioned key).
    It's not "some error message", it's relevant information for resolving problems you encounter while using Oracle.
    In your case, although you didn't specify the exact ORA error you received, it sounds like [ORA-14400|http://forums.oracle.com/forums/search.jspa?threadID=&q=ORA-14400&objID=f61&dateRange=all&userID=&numResults=15&rankBy=10001]
    What is the correct syntax- I searched a lot but have not been able to find out.
    It's not about correct syntax, it's about understanding the message that was trying to tell you that something went wrong.
    That message was (I guess) ORA-14400.
    So, referring to that ORA message, you will need to add a new partition
    into which the data from your csv file that currently doesn't map to any of the specified partitions would fit,
    or add a partition able to accept the range of values for which you received that ORA message.
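
    As a hedged sketch of where that usually leads (the catch-all partition name is made up): drop the per-partition clauses from the control file and let Oracle route each row itself, and give the list-partitioned table a DEFAULT partition so country codes outside the current value lists no longer raise ORA-14400.

    -- catch-all partition for codes not covered by p1 or p2
    ALTER TABLE countries_info ADD PARTITION p_other VALUES (DEFAULT);

    -- countries.ctl
    load data
    infile 'countries.csv'
    append
    into table countries_info
    fields terminated by ','
    (country_code, country_name, country_language)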

  • Archiving old data from a partitioned table

    Hi,
    While sifting through all the options for archiving the old data from a table which is also indexed, i came across a few methods which could be used:
    1. Use a CTAS to create new tables in a different tablespace from the partitions on the existing table. Then swap these new tables with the data from the partitions, drop the partitions on the existing table (or truncate them, although I'm not sure which one is the recommended method), take this tablespace offline and keep it archived. If you require it in the future, again swap these partitions with the data from the archived tables. I am not sure if I got the method correctly.
    2. Keep an export of all the partitions which need to be archived and keep that .dmp file on a storage medium. Once they are exported, truncate the corresponding partitions in the original table. If required in the future, import these partitions.
    But I have one constraint on my DB: I cannot create a new archive tablespace to hold the tables containing the partitioned data, as I have only 1 tablespace allocated to my application on that DB because multiple apps reside on it together. Kindly suggest which option is best suited for me now. Should I go with option 2?
    Thanks in advance.

    Hi,
    Thanks a bunch for all your replies. Yeah, I am planning to go ahead with option 2. Below is the method I have decided upon. Kindly verify whether my line of understanding is correct:
    1. export the partition using the clause:
    exp ID/passwd file=abc.dmp log=abc.log tables=schemaname.tablename:partition_name rows=yes indexes=yes
    2. Then drop this partition on the original table using the statement:
    ALTER TABLE tablename drop PARTITION partition_name UPDATE GLOBAL INDEXES;
    If I now want to import this dump file into my original table again, I will first have to create this partition on my table using the statement:
    3. ALTER TABLE tablename ADD PARTITION partition_name VALUES LESS THAN ( '<<>>' ) ;
    4. Then import the data into that partition using:
    imp ID/passwd FILE=abc.dmp log=xyz.log TABLES=schemaname.tablename:partition_name IGNORE=y
    Now my query here is that this partitioned table has a global index associated with it. So once I create this partition and import the data into it, do I need to drop and recreate the global indexes on this table, or is there another method to update the indexes? Kindly suggest.
    Thanks again!! :)
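
    On the global-index question, a hedged note (owner and index names below are made up): a conventional-path import inserts rows one at a time, so the global index is normally maintained as the rows arrive; a drop/recreate or rebuild should only be needed if the index actually shows up as UNUSABLE.

    SELECT index_name, status FROM dba_indexes WHERE table_name = 'TABLENAME';
    -- only if STATUS = 'UNUSABLE':
    ALTER INDEX schemaname.global_idx REBUILD ONLINE;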

  • Loading data to indexed partitioned table

    Hi,
    My table is partitioned day-wise and a (non-unique) index is created on the first column.
    Is it possible to mark only the last partition's index unusable, to increase the speed of data loading into that last partition?
    create table my_table(sr_no, sr_name, doj)
    partitioned day-wise on the field doj (date of joining).
    My table size is very large (1 TB).

    OraFighter wrote:
    Is it possible to disable only last partition..?
    modify_index_partition
    Use the modify_index_partition clause to modify the real physical attributes, logging attribute, or storage characteristics of index partition partition or its subpartitions. For a hash-partitioned global index, the only subclause of this clause you can specify is UNUSABLE.
    when all else fails Read The Fine Manual
    ALTER INDEX
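
    A hedged sketch of that, assuming the index is a LOCAL partitioned index (one index partition per day) and all names below are made up:

    ALTER INDEX my_table_idx1 MODIFY PARTITION p_last UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;
    -- ... load into the last table partition (e.g. direct-path INSERT /*+ APPEND */ or SQL*Loader) ...
    ALTER INDEX my_table_idx1 REBUILD PARTITION p_last;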

  • Import data from a partitioned table

    I am trying to import data from another server database. I need only data from two partitions from a partitioned table in that database. I have to use IMPORT wizard.
    The new server should also have the partitioned table with those two partitions data.

    I created the database on the destination server with a primary, log, FG1 and FG2 filegroups.
    Next, created the partition function and scheme
    create partition function pfOrders(int)
    as range right
    for values(34);
    create partition scheme psOrders
    as partition pfOrders
    to (FG1,FG2)
    go
    Next, created the table on primary with primary key.
    Next, used import wizard twice to import data related to partition_id 34 and 65.
    The problem is that the data is in the PRIMARY filegroup; FG1 and FG2 do not have any data.
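
    The Import wizard only puts rows where the table itself lives, so a table created on PRIMARY will keep everything on PRIMARY. A hedged sketch (the column names are made up): create the table, and its clustered primary key, on the partition scheme keyed by the partitioning column, then rerun the two imports.

    create table dbo.Orders
    (
        order_id     int not null,
        partition_id int not null,
        order_date   datetime null,
        constraint PK_Orders primary key clustered (partition_id, order_id)
    ) on psOrders(partition_id);
    go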

  • Data transfer into partition table took more time

    Scenario: We have a staging table (consider it Table A) which is refreshed every month. Every month Table A is truncated and inserted with approximately 6 million records. Once the mandatory updates are done on Table A we have to move the records to the main table (Table B). Table B is partitioned month-wise and contains the complete history of records from the year 2010. Table B has 1 clustered index (month column) and 3 non-clustered indexes. Currently Table B has 180,562,235 records. During the transfer I didn't disable the indexes, because enabling (rebuilding) them again was taking more time.
    Question: the data transfer from Table A to Table B took approximately 2.30 hours. Is there any way to reduce the transfer time?
    Any Suggestion to reduce the time will be helpful.

    Try with SWITCH Partition...
    eg:
    http://www.mssqltips.com/sqlservertip/1406/switching-data-in-and-out-of-a-sql-server-2005-data-partition/
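
    A hedged sketch of the SWITCH approach for this monthly load (table, constraint and column names and the partition number are made up): Table A must live on the same filegroup as the target partition, match Table B's structure and indexes, and carry a CHECK constraint restricting it to the month being loaded; the switch itself is then a metadata-only operation.

    -- constrain the staging table to the month being loaded
    alter table dbo.TableA add constraint CK_TableA_Month
        check (month_col >= '20150601' and month_col < '20150701');

    -- metadata-only move of the whole month into the matching partition of Table B
    alter table dbo.TableA switch to dbo.TableB partition 66;   -- 66 = target partition number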
