Use expdp/impdp to reorganize a tablespace to remove an additional datafile?

Oracle 10g (10.2.0.1)
We had a tablespace with a single datafile WORK1. WORK1 filled up, and a colleague added two datafiles, WORK2 and WORK3, instead of resizing the original.
I resized WORK1, increasing it by 500 MB.
I was able to drop WORK3, but not WORK2 (ORA-03262: the file is non-empty).
My proposed solution is to expdp the tablespace, drop the tablespace and datafiles, recreate the tablespace with a correctly sized datafile, and finally impdp the tablespace.
Is this solution valid?
Any hints at syntax would be useful.
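For reference, the round trip proposed above would look roughly like this (directory, tablespace, file names, and sizes are assumptions; a tablespace-mode export captures the segments stored in the tablespace, not grants, sequences, or PL/SQL, so verify the export log before dropping anything):
expdp system/<password> DIRECTORY=dpump_dir DUMPFILE=work.dmp LOGFILE=exp_work.log TABLESPACES=work_ts
-- in SQL*Plus, once the dump is verified:
DROP TABLESPACE work_ts INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE work_ts DATAFILE '/u01/oradata/db/work1.dbf' SIZE 2G;
-- then reload:
impdp system/<password> DIRECTORY=dpump_dir DUMPFILE=work.dmp LOGFILE=imp_work.log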

1. Map your datafile.
2. If there are no segments in the datafile, drop it and go to step 6.
3. Shrink the datafile down to where the data ends.
4. Rebuild/move the last object in the datafile (see the sketch after this list).
5. Go to step 1.
6. Done.
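As a sketch of steps 3-4 (datafile, tablespace, and object names here are made up):
ALTER DATABASE DATAFILE '/u01/oradata/db/work2.dbf' RESIZE 100M;
-- ORA-03297 means used blocks still sit beyond that point, so relocate the last object:
ALTER TABLE scott.big_table MOVE;          -- reallocates the table's extents
ALTER INDEX scott.big_table_pk REBUILD;    -- MOVE leaves the table's indexes unusable
-- once the file is completely empty (possible in 10gR2 and later):
ALTER TABLESPACE work_ts DROP DATAFILE '/u01/oradata/db/work2.dbf';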
To map the data file, run the script below:
ACCEPT file_num CHAR PROMPT 'File ID: '
SET PAGESIZE   70
SET LINESIZE   132
SET NEWPAGE    0
SET VERIFY     OFF
SET ECHO       OFF
SET HEADING    ON
SET FEEDBACK   OFF
SET TERMOUT    ON
COLUMN file_name   FORMAT a50          HEADING 'File Name'
COLUMN owner       FORMAT a10   TRUNC  HEADING 'Owner'
COLUMN object      FORMAT a30   TRUNC  HEADING 'Object'
COLUMN obj_type    FORMAT a2           HEADING ' '
COLUMN block_id    FORMAT 9999999      HEADING 'Block|ID'
COLUMN blocks      FORMAT 999,999      HEADING 'Blocks'
COLUMN mbytes      FORMAT 9,999.99     HEADING 'M-Bytes'
SELECT  'free space'      owner,
        ' '               object,
        ' '               obj_type,
        f.file_name,
        s.file_id,
        s.block_id,
        s.blocks,
        s.bytes/1048576   mbytes
  FROM  dba_free_space s,
        dba_data_files f
WHERE  s.file_id = TO_NUMBER(&file_num)
   AND  s.file_id = f.file_id
UNION
SELECT  owner,
        segment_name,
        DECODE(segment_type, 'TABLE',          'T',
                             'INDEX',          'I',
                             'ROLLBACK',       'RB',
                             'CACHE',          'CH',
                             'CLUSTER',        'CL',
                             'LOBINDEX',       'LI',
                             'LOBSEGMENT',     'LS',
                             'TEMPORARY',      'TY',
                             'NESTED TABLE',   'NT',
                             'TYPE2 UNDO',     'U2',
                             'TABLE PARTITION','TP',
                             'INDEX PARTITION','IP', '?'),
        f.file_name,
        s.file_id,
        s.block_id,
        s.blocks,
        s.bytes/1048576
  FROM  dba_extents s,
        dba_data_files f
WHERE  s.file_id = TO_NUMBER(&file_num)
   AND  s.file_id = f.file_id
ORDER
    BY  file_id,
        block_id;
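If the script is saved as, say, mapfile.sql (name assumed), a run looks like:
SQL> @mapfile
File ID: 4
where the file ID comes from DBA_DATA_FILES for the datafile you want to shrink.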

Similar Messages

  • EXP/IMP of a table having a LOB column - export and import using expdp/impdp

    We have one table with a LOB column; the LOB is now approx 550 GB.
    As per our knowledge, LOB space cannot be reused, so we have already raised an SR on that.
    We have come to the conclusion that we need to take a backup of this table, truncate the table, and then start the import.
    We need help on the below points:
    1) We are taking the backup with expdp using PARALLEL=4. Will this backup complete successfully? Do we need to set any other parameters in expdp while taking the backup?
    2) Once the truncate is done, will the import complete successfully?
    3) Do we need to increase the SGA, PGA, undo tablespace size, or undo retention to complete the import successfully? This is a production-critical database.
    Current SGA: 2 GB
    PGA: 398 MB
    Undo retention: 1800
    Undo tablespace: 6 GB
    Please suggest how to perform this activity without error, and which parameters to keep during expdp/impdp.
    Thanks in advance.

    Hi,
    From my experience, be prepared for a long outage to do this - expdp is pretty quick at getting LOBs out but very slow at getting them back in again - a lot of the speed optimizations that make datapump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first - can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in....
    You might want to consider DBMS_REDEFINITION instead?
    Here you precreate a temporary table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over - giving you minimal downtime. I think this should work fine with LOBs at 10g, but you should do some research and confirm. You'll need a lot of extra tablespace (temporarily) for this approach, though.
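    A minimal sketch of that flow (the schema, table, and interim-table names here are made up; verify LOB support on your exact version first):
    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'LOB_TAB');
    -- precreate an empty interim table with the same definition:
    CREATE TABLE scott.lob_tab_interim AS SELECT * FROM scott.lob_tab WHERE 1 = 0;
    -- start the redefinition (this copies the data across):
    EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'LOB_TAB', 'LOB_TAB_INTERIM');
    -- copy indexes, triggers, constraints and grants onto the interim table:
    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname => 'SCOTT', orig_table => 'LOB_TAB',
        int_table => 'LOB_TAB_INTERIM', num_errors => l_errors);
    END;
    /
    -- dictionary switch - the two tables swap identities:
    EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'LOB_TAB', 'LOB_TAB_INTERIM');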
    Regards,
    Harry

  • Using expdp/impdp to backup schemas to new tablespace

    Hello,
    I have tablespace A for schemas A1 and A2, and I wish to back up these schemas to tablespace B using schema names B1 and B2 (so the contents of schemas A1 and A2 are copied into schemas B1 and B2, respectively, to use as backups in case something happens to schemas A1 or A2 or tablespace A).
    I began by creating tablespace B, and schemas B1 and B2. Then I attempted to populate schemas B1 and B2 by doing the following:
    EXPORT SCHEMAS:
    expdp a1/a1password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:a1_export.log SCHEMAS=a1 COMPRESSION=METADATA_ONLY
    expdp a2/a2password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:a2_export.log SCHEMAS=a2 COMPRESSION=METADATA_ONLY
    IMPORT SCHEMAS:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2
    This resulted in backing up schema A1 into schema B1, and schema A2 into B2, but the tablespaces for schemas B1 and B2 remained tablespace A (when I wanted them to be tablespace B).
    I will drop schemas B1 and B2, create new schemas, and try again. What command should I use to get the tablespace correct this time?
    Reviewing the documentation for data pump import
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL300
    specifically the section titled REMAP_TABLESPACE, I'm thinking that I could just add a switch to the above import commands to remap tablespace, such as:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1 REMAP_TABLESPACE=a:b
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2 REMAP_TABLESPACE=a:b
    Is that correct?
    Also, is it OK to use the same export commands above, or should they change to support the REMAP_TABLESPACE?

    Hi,
    If I understand correctly, you want to import A1:B1 and A2:B2 with the respective tablespace. The ESTIMATE parameter in your expdp only affects the space estimate, so it cannot help you here.
    You can use something like this with one dump file:
    expdp system/password directory=myDpumpDirectory dumpfile=A1_A2_Export.dmp logfile=A1_A2_Export.log schemas=A1,A2
    impdp system/password directory=myDpumpDirectory dumpfile=A1_A2_Export.dmp logfile=A1_A2_Import.log remap_schema=A1:B1,A2:B2 remap_tablespace=A:B
    HTH

  • Can I use expdp/impdp for APEX schemas to setup a new server?

    If setting up a new development server that will replace an old server, can I simply use datapump to export the APEX_040100 and FLOWS_FILES schemas on the old server, and just import the schemas on the new server? That would seem much simpler and faster than exporting all the applications in the APEX console and importing them one by one - far fewer steps involved.

    Please post database and OS versions. Is there a reason you cannot clone the entire database from the old server to the new one?
    HTH
    Srini

  • Impdp operation taking more tablespace size compared to expdp...

    Hi All,
    I have one issue with an impdp operation. I am using an 11gR2 database, and the schema's dump file size is 5 GB. When I start loading data through impdp, the schema's tablespace grows to more than 5 GB. I had to stop the impdp operation because of the growing tablespace size. No COMPRESSION parameter was passed during expdp. Lastly, I gave the tablespace MAXSIZE UNLIMITED, but it seems that is still not sufficient and I had to add one more dbf, so the tablespace size as of now is 60 GB and the impdp operation is still running.
    Can anyone explain how, if the dump file size is 5 GB, the tablespace size can be more than 5 GB? My assumption was that if my dump file is 5 GB, then the tablespace into which I load the data (using impdp) should not need more than 5 GB.
    Thanks in advance.

    I was facing the same problem. After giving the parameter TRANSFORM=SEGMENT_ATTRIBUTES:n, the problem was resolved. (The dump carries the source segments' storage clauses, so oversized INITIAL/NEXT extent settings can make the target allocate far more space than the row data itself; dropping those attributes lets the target tablespace defaults apply.)
    TRANSFORM = transform_name:value[:object_type]
    The transform_name specifies the name of the transform. Some of the possible options are as follows:
    SEGMENT_ATTRIBUTES - If the value is specified as y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is y. ====> IF THIS IS 'N', PHYSICAL STORAGE ATTRIBUTES ARE NOT INCLUDED.
    STORAGE - If the value is specified as y, the storage clauses are included, with appropriate DDL. The default is y. This parameter is ignored if SEGMENT_ATTRIBUTES=n.
    Although this thread is quite old, I am updating it in case someone needs to refer to it in future. My system parameter deferred_segment_creation is set to TRUE.
    Here is the complete syntax, I have used
    impdp vygrdba/******* dumpfile=VYGRVS6I5_25DEC12.dmp logfile=VYGR_PT_09Jan13.log remap_schema= VYGRVS6I5:VYGR_PT TRANSFORM=SEGMENT_ATTRIBUTES:n
    Edited by: 980762 on 9 Jan, 2013 3:58 AM

  • What other things should I consider when using expdp and impdp

    (11g Express) R2 on Windows 2008 and also XP (I installed it in two places for testing).
    I wanted to make a backup of one schema,
    so I used expdp.
    Please tell me: is this OK, or is there anything more reliable for taking a backup of a schema and importing (impdp) it into any Oracle database?
    From the Windows prompt:
    MKDIR c:\oraclexe\app\tmp
    From SQL*Plus:
    sqlplus SYSTEM/password
    CREATE OR REPLACE DIRECTORY dmpdir AS 'c:\oraclexe\app\tmp';
    GRANT READ, WRITE ON DIRECTORY dmpdir TO hr;
    From the Windows command line:
    expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=schema.dmp LOGFILE=expschema.log
    Then the following to import:
    impdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=schema.dmp REMAP_SCHEMA=hr:hrdev EXCLUDE=constraint,ref_constraint,index TABLE_EXISTS_ACTION=replace LOGFILE=impschema.log
    2) Please tell me whether I should consider anything else while exporting or importing.
    3) I have removed EXCLUDE=constraint,ref_constraint,index because I need the constraints, indexes, etc.
    So please tell me why it is written that way in the documentation.
    Yours sincerely
    Edited by: 944768 on Mar 31, 2013 2:02 AM

    Hi,
    Yes, you can (should) remove the EXCLUDE parameter if you want to import all objects from the export.
    Here is an example of how to export a full schema:
    expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=hr.dmp LOGFILE=exp_hr.log
    And import it into another database:
    impdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=hr.dmp LOGFILE=imp_hr.log
    Please note that you need to create the directory dmpdir on the target database as well.
    Regards,
    Jari
    My Blog: http://dbswh.webhop.net/htmldb/f?p=BLOG:HOME:0
    Twitter: http://www.twitter.com/jariolai

  • Using expdp and impdp instead of exp and imp

    Hi All,
    I am trying to use expdp and impdp instead of exp and imp.
    I am facing a few issues while using expdp. I have a job which exports data from one DB server, and the data is then imported into another DB server. The two DB servers run on separate machines. The job runs on various client machines, not on either DB server.
    To use expdp we have to create a DIRECTORY, and as I understand it, it has to be created on the DB server. The problem is that the job cannot access the DB server or files on the DB server. Also, the dump file created is moved by the job to other machines based on requirements (usually it goes to multiple DB servers).
    I need a way to create dump files on the server where the job runs.
    If I am not using expdp correctly, please guide me. I am new to expdp/impdp and imp/exp.
    Regards,

    Thanks for the quick reply.
    The job executing expdp/impdp runs on Red Hat Enterprise Linux Server release 5.6 (Tikanga).
    Oracle server release: 11.2.0.2.0
    The job cannot access the Oracle server as it does not have privileges (in fact, there is no user/password to access the Oracle server machines). Creating the dump on the Oracle server and moving it is not an option for this job; it has to keep the dump with itself.
    Regards,
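    (For what it's worth, one technique that fits this constraint - named here as an alternative to moving dump files around, with the link and schema names assumed - is an import over a database link, which writes no dump file at all:
    impdp hr/password@target_db NETWORK_LINK=source_db_link SCHEMAS=hr LOGFILE=dp_dir:net_imp.log
    A DIRECTORY is still needed on the target for the log file, but nothing has to be copied between machines.)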

  • How to use expdp and impdp in oracle 12c on a pluggable database

    Hi,
    I have a pluggable database in a container DB. I have to use expdp to export a table and later import the table into another pluggable database.
    How can I tell expdp which pluggable database to connect to?
    Thanks,
    Sarayu

    Then:
    1. Add an entry for newpdb to tnsnames.ora.
    2. Connect to the pluggable database and create a directory:
    sqlplus sys/<password>@newpdb as sysdba
    create or replace directory exp_dir as '/u01/app';
    grant read, write on directory exp_dir to scott;
    3. Export the emp table of the scott user - with the sys user:
    expdp directory=exp_dir dumpfile=TDUMP.DMP tables=SCOTT.EMP
    username: sys@newpdb as sysdba
    password: <password>
    Regards
    Mahir M. Quluzade
    Paste the result here.

  • How to use expdp and impdp in a Linux environment

    Hi,
    I want to export data using expdp in a Linux environment. Please give an example of how to do it.

    Here you are: http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
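    In the meantime, a minimal sketch (directory path, user, and schema are assumptions; the OS directory must already exist):
    -- as a DBA in SQL*Plus:
    CREATE OR REPLACE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
    GRANT READ, WRITE ON DIRECTORY dpump_dir TO scott;
    -- then from the Linux shell:
    expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott.dmp LOGFILE=exp_scott.log SCHEMAS=scott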

  • Expdp/impdp :: Constraints in Parent child relationship

    Hi ,
    I have one table, parent1, and tables child1, child2, and child3 have foreign keys created against parent1.
    Now I want to do some deletion on parent1. But since the number of records in parent1 is very high, we are going with expdp/impdp with the QUERY option.
    I have taken a query-level expdp of parent1. Then I dropped parent1 with the CASCADE CONSTRAINTS option, and all the foreign keys on child1, 2, and 3 which reference parent1 were automatically dropped.
    Now, if I run impdp with the query-level dump file, will these foreign key constraints be created automatically on child1, 2, and 3, or do I need to re-create them manually?
    Regards,
    Anu

    Hi,
    The FKs will not be in the dumpfile - see the example code below, where I generate a sqlfile following pretty much the process you would have done. This is because the FK is part of the DDL for the child table, not the parent.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    OPS$ORACLE@EMZA3>create table a (col1 number);
    Table created.
    OPS$ORACLE@EMZA3>alter table a add primary key (col1);
    Table altered.
    OPS$ORACLE@EMZA3>create table b (col1 number);
    Table created.
    OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
    Table altered.
    OPS$ORACLE@EMZA3>
    EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
    Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04":  /******** include=TABLE:"='A'"
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    . . exported "OPS$ORACLE"."A"                                0 KB       0 rows
    Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
    Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
      /oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
    Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
    Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01":  /******** sqlfile=a.sql
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
    -- CONNECT OPS$ORACLE
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: SCHEMA_EXPORT/TABLE/TABLE
    CREATE TABLE "OPS$ORACLE"."A"
       (    "COL1" NUMBER
       ) SEGMENT CREATION IMMEDIATE
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
    NOCOMPRESS LOGGING
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM" ;
    -- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
    -- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    DECLARE I_N VARCHAR2(60);
      I_O VARCHAR2(60);
      NV VARCHAR2(1);
      c DBMS_METADATA.T_VAR_COLL;
      df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
    stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
    BEGIN
      DELETE FROM "SYS"."IMPDP_STATS";
      c(1) := 'COL1';
      DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
      EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
      DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
      DELETE FROM "SYS"."IMPDP_STATS";
    END;
    /
    Regards,
    Harry
    http://dbaharrison.blogspot.com/

  • [ETL] TTS vs expdp/impdp vs ctas (dblink)

    Hi, all.
    The database is Oracle 10gR2 on a Unix machine.
    Assuming that the DB size is about 1 terabyte (tables: 500 GB, indexes: 500 GB),
    how much faster is TTS (transportable tablespace) over expdp/impdp, and over CTAS (dblink)?
    As you know, the speed of ETL depends on the hardware capacity (I/O capacity, network bandwidth, number of CPUs).
    I just would like to hear general guide from your experience.
    Thanks in advance.
    Best Regards.

    869578 wrote:
    how much faster is TTS (transportable tablespace) over expdp/impdp, and over ctas (dblink)?
    http://docs.oracle.com/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01101
    Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.
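    As a rough sketch of the TTS flow (tablespace name, paths, and credentials assumed; the tablespace set must be self-contained):
    ALTER TABLESPACE ts_data READ ONLY;
    expdp system/<password> DIRECTORY=dpump_dir DUMPFILE=tts.dmp LOGFILE=tts_exp.log TRANSPORT_TABLESPACES=ts_data
    -- copy tts.dmp and the tablespace's datafiles to the target machine, then:
    impdp system/<password> DIRECTORY=dpump_dir DUMPFILE=tts.dmp LOGFILE=tts_imp.log TRANSPORT_DATAFILES='/u01/oradata/tgt/ts_data01.dbf'
    ALTER TABLESPACE ts_data READ WRITE;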
    If you really want to know "how much faster", you're going to have to benchmark. Lots of variables come into play, so it is best to determine this in your actual environment.
    Cheers,

  • System generated Index names different on target database after expdp/impdp

    After performing expdp/impdp to move data from one database (A) to another (B), the system-generated indexes have different names on the target database, which caused a major issue with GoldenGate. Could anyone provide any tricks on how to perform the expdp/impdp and have the same system-generated index names on both source and target?
    Thanks in advance.
    JL

    While I do not agree with Sb's choice of wording, his solution is correct. I suggest you drop and recreate the objects using explicit naming; then you will get the same names on the target database after import for constraints, indexes, and FKs.
    A detailed description of the problem this caused with GoldenGate would be interesting. The full Oracle and GoldenGate versions in use might also be important to the solution, if one exists other than explicit naming.
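    A sketch of the explicit-naming approach (table, column, and constraint names assumed):
    ALTER TABLE t DROP PRIMARY KEY DROP INDEX;
    ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY (id)
      USING INDEX (CREATE UNIQUE INDEX t_pk ON t (id));
    With the constraint and its index named explicitly, impdp carries those names across instead of the target generating fresh SYS_C-style names.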
    HTH -- Mark D Powell --
    Edited by: Mark D Powell on May 30, 2012 12:26 PM

  • Reorganization of tablespaces, tables and indexes.

    Hello Experts,
    What is the concept of tablespaces, statistics, tables, and indexes in SAP/Oracle? Where are they used and what are they meant for?
    What is the concept of, and procedure for, performing a reorganization of tablespaces, tables, and indexes? Why do we need it?
    Requested to revert at the earliest as it's urgent. Points guaranteed.
    Regards,
    Somya

    Hello Somya,
    It is probably difficult to explain all of that in this thread, but you will definitely find good information at the following link:
    http://help.sap.com/saphelp_47x200/helpdata/en/0d/d2fafd4a0c11d182b80000e829fbfe/frameset.htm
    Please drill down through the menus, and you will be able to find what you need.
    Also you can check the following SAP notes
    666061     FAQ: Database objects, segments and extents
    912620     FAQ: Oracle indexes
    588668     FAQ: Database statistics
    592393     FAQ: Oracle
    541538     FAQ: Reorganizations
    Regards,
    Madhukar

  • Exporting index with partition export using expdp

    Hi,
    I am using expdp export in 11.1.0.7. Is there a way I can export indexes along with a partition export of a table? With a full table export, indexes are exported, but I don't see this with a single-partition export, because I don't see index creation in the following scenario:
    1. Export a partition from the production table.
    2. In the development environment, drop all indexes on this table.
    3. Drop the same partition in development before importing it afresh from the export file created in the first step.
    4. Import the partition exported in the first step (this does not automatically create the indexes dropped in step 2).
    5. Manually recreate the indexes again.
    Thanks

    When you do a table-mode export, indexes are included unless you specified EXCLUDE=INDEX. Please list your expdp and impdp commands and the log files to show that indexes are not included.
    Thanks
    Dean
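    For reference, a single-partition, table-mode export looks like this (table and partition names assumed):
    expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=part.dmp LOGFILE=exp_part.log TABLES=scott.sales:sales_q1
    Note that if the table already exists on the target, impdp only loads data into it (for example with TABLE_EXISTS_ACTION=append) and does not re-create the table's indexes, which may explain step 4 above.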

  • Expdp & impdp of R12 schema into 11g DB

    Hi,
    I need to take a backup of the GL schema from an R12 instance using expdp and
    import it into an 11g DB (standalone DB) using impdp.
    I have used the following for expdp in R12 (10g DB):
    $expdp system/<pass> schemas=gl directory=dump_dir dumpfile=gl.dmp logfile=gl.log
    The export was successful.
    How do I import into an 11g DB? (This is on a different machine.)
    regards,
    Charan

    Refer to "Oracle® Database Utilities 11g Release 1 (11.1)" Manual.
    Data Pump Import
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_import.htm#i1007653
    Oracle® Database Utilities 11g Release 1 (11.1)
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/toc.htm
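    Broadly, after copying gl.dmp to the 11g machine, the import side would look like this (directory path assumed; importing a 10g dump into 11g is upward compatible):
    -- on the 11g database, in SQL*Plus as SYSTEM:
    CREATE OR REPLACE DIRECTORY dump_dir AS '/u01/dump';
    -- then from the shell:
    impdp system/<pass> schemas=gl directory=dump_dir dumpfile=gl.dmp logfile=imp_gl.log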
