Expdp/impdp a partitioned DB

Hi,
I'm importing a full DB with partitioned tables but have errors like this:
ORA-31693: Table data object "SOMETHING"."SS_TRANSACTION_HIST":"SS_TRANSACTION_HIST_0907" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-14400: inserted partition key does not map to any partition

@Pavol, Yes, it is a RANGE partitioned table on a date column and I'm importing a full DB export. The source DB has these partitioned tables!
@Dean, Here is more of the import.log
ORA-39151: Table "SYSMAN"."MGMT_SEC_INFO" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39151: Table "SYSTEM"."SYS_EXPORT_FULL_01" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "SOMETHING"."SS_TRANSACTION":"CR_TRANSACTION_02"  1.381 GB 2040885 rows
. . imported "SOMETHING"."SS_TRANSACTION":"CR_TRANSACTION_04"  1.382 GB 2041635 rows
. . imported "SOMETHING"."SS_TRANSACTION":"CR_TRANSACTION_01"  1.383 GB 2043191 rows
. . imported "SOMETHING"."SS_TRANSACTION":"CR_TRANSACTION_03"  1.380 GB 2039135 rows
ORA-31693: Table data object "SOMETHING"."SS_TRANSACTION_HIST":"CR_TRANSACTION_HIST_0907" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-14400: inserted partition key does not map to any partition
ORA-31693: Table data object "SOMETHING"."SS_TRANSACTION_HIST":"CR_TRANSACTION_HIST_0908" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-14400: inserted partition key does not map to any partition
ORA-31693: Table data object "SOMETHING"."SS_TRANSACTION_HIST":"CR_TRANSACTION_HIST_0904" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
. . imported "SOMETHING"."SS_TERM_HIST":"CR_TERM_HIST_0511"      0 KB       0 rows
. . imported "SOMETHING"."SS_TERM_HIST":"CR_TERM_HIST_0512"      0 KB       0 rows
. . imported "SOMETHING"."SS_TERM_HIST":"CR_TERM_HIST_0601"      0 KB       0 rows
. . imported "SOMETHING"."SS_TERM_HIST":"CR_TERM_HIST_0602"      0 KB       0 rows
And yes, I have lots of "already exists" messages because I've run the import procedure many times...
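For context on the ORA-14400 above: the inserted key falls outside every partition bound of the *target* table - typically because the table survived an earlier import attempt (table_exists_action of skip leaves its partition layout untouched). A diagnostic sketch, with one possible workaround (the MAXVALUE partition is an assumption - only appropriate if a catch-all partition is acceptable; otherwise drop the table and let impdp recreate it):

```sql
-- Compare partition boundaries between source and target (run on both):
SELECT partition_name, high_value
  FROM dba_tab_partitions
 WHERE table_owner = 'SOMETHING'
   AND table_name  = 'SS_TRANSACTION_HIST'
 ORDER BY partition_position;

-- Possible workaround: a catch-all partition so every key maps somewhere
ALTER TABLE "SOMETHING"."SS_TRANSACTION_HIST"
  ADD PARTITION p_catch_all VALUES LESS THAN (MAXVALUE);
```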

Similar Messages

  • Expdp/impdp :: Constraints in Parent child relationship

    Hi ,
    I have one table, parent1, and tables child1, child2 and child3 have foreign keys created on this parent1.
    Now I want to do some deletion on parent1. But since the number of records is very high on parent1, we are going with expdp/impdp with the query option.
    I have taken a query-level expdp on parent1. Then I dropped parent1 with the cascade constraints option, and all the foreign keys created by child1, 2 and 3 which reference parent1 were automatically dropped.
    Now, if I fire the impdp for the query-level dump file, will these foreign key constraints get created automatically on child1, 2 and 3, or do I need to manually re-create them?
    Regards,
    Anu

    Hi,
    The FKs will not be in the dump file - see the example code below, where I generate a sqlfile following pretty much the process you would have done. This is because the FK is part of the DDL for the child table, not the parent.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    OPS$ORACLE@EMZA3>create table a (col1 number);
    Table created.
    OPS$ORACLE@EMZA3>alter table a add primary key (col1);
    Table altered.
    OPS$ORACLE@EMZA3>create table b (col1 number);
    Table created.
    OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
    Table altered.
    OPS$ORACLE@EMZA3>
    EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
    Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04":  /******** include=TABLE:"='A'"
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    . . exported "OPS$ORACLE"."A"                                0 KB       0 rows
    Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
    Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
      /oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
    Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
    Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01":  /******** sqlfile=a.sql
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
    -- CONNECT OPS$ORACLE
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: SCHEMA_EXPORT/TABLE/TABLE
    CREATE TABLE "OPS$ORACLE"."A"
       (    "COL1" NUMBER
       ) SEGMENT CREATION IMMEDIATE
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
    NOCOMPRESS LOGGING
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM" ;
    -- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
    -- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    DECLARE I_N VARCHAR2(60);
      I_O VARCHAR2(60);
      NV VARCHAR2(1);
      c DBMS_METADATA.T_VAR_COLL;
      df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
    stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
    BEGIN
      DELETE FROM "SYS"."IMPDP_STATS";
      c(1) := 'COL1';
      DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
      EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
      DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
      DELETE FROM "SYS"."IMPDP_STATS";
    END;
    /
    Regards,
    Harry
    http://dbaharrison.blogspot.com/
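    To complement the example above: you can capture the child tables' FK DDL before dropping the parent, then replay it after the import. A sketch using DBMS_METADATA (the child table names come from the question; run once per child and spool the output):

    ```sql
    -- Pull the FK (REF_CONSTRAINT) DDL off each child table before dropping
    -- parent1; replay the spooled ALTER TABLE statements after the impdp.
    SET LONG 100000 PAGESIZE 0 LINESIZE 200
    SELECT DBMS_METADATA.GET_DEPENDENT_DDL('REF_CONSTRAINT', 'CHILD1', USER)
      FROM dual;   -- repeat for CHILD2 and CHILD3
    ```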

  • EXP/IMP..of table having LOB column to export and import using expdp/impdp

    We have one table with a LOB column; the LOB size is now approximately 550 GB.
    As far as we know, LOB space cannot be reused, so we have already raised an SR on that.
    We came to the conclusion that we need to take a backup of this table, truncate it, and then import it back.
    We need help on the points below:
    1) We are taking the backup with expdp using parallel=4. Will this backup complete successfully? Are any other parameters needed in expdp while taking the backup?
    2) Once the truncate is done, will the import complete successfully?
    Do we need to increase SGA, PGA, undo tablespace size, or undo retention to complete the import successfully? This is a production-critical database.
    Current SGA: 2 GB
    PGA: 398 MB
    Undo retention: 1800
    Undo tablespace: 6 GB
    Please suggest how to perform this activity without errors, and which parameters to use during expdp/impdp.
    Thanks in advance.

    Hi,
    From my experience, be prepared for a long outage to do this - expdp is pretty quick at getting LOBs out but very slow at getting them back in again - a lot of the speed optimizations that make Data Pump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first - can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in....
    You might want to consider DBMS_REDEFINITION instead?
    Here you precreate a temporary table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over - giving you minimal downtime. I think this should work fine with LOBs at 10g, but you should do some research and confirm. You'll need a lot of extra tablespace (temporarily) for this approach though.
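    A minimal sketch of that online-redefinition flow (the names MYSCHEMA.T and T_INTERIM are illustrative; the interim table must already exist with the same shape, and copying of dependents is elided):

    ```sql
    BEGIN
      -- verify the table qualifies for PK-based redefinition
      DBMS_REDEFINITION.CAN_REDEF_TABLE('MYSCHEMA', 'T',
                                        DBMS_REDEFINITION.CONS_USE_PK);
      -- start copying data into the interim table
      DBMS_REDEFINITION.START_REDEF_TABLE('MYSCHEMA', 'T', 'T_INTERIM');
      -- copy indexes/constraints/triggers/grants here (COPY_TABLE_DEPENDENTS)
      -- then perform the dictionary switch
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('MYSCHEMA', 'T', 'T_INTERIM');
    END;
    /
    ```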
    Regards,
    Harry

  • Log file format in expdp/impdp

    Hi all,
    I need to set the log file format for the expdp/impdp utility. I have this format for my dump file - dumpfile=<name>%U.dmp - which generates unique names for dump files. How can I generate unique names for log files? It would be better if the dump file and log file names were the same.
    Regards,
    rustam_tj

    Hi Srini, thanks for the advice.
    I read the doc you suggested. The only thing I found there is:
    Log files and SQL files overwrite previously existing files.
    So I can't keep previous log files?
    My OS is HP-UX (11.3) and database version is 10.2.0.4
    Regards,
    rustam
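    One way around this: the %U substitution applies only to DUMPFILE, so generate a shared timestamp in the shell and use it in both names. A sketch (the schema and directory names are illustrative):

    ```shell
    # %U works only in DUMPFILE; build a shared timestamp so the dump and
    # log names match per run, and old logs are never overwritten.
    TS=$(date +%Y%m%d_%H%M%S)
    DUMPFILE="exp_${TS}_%U.dmp"
    LOGFILE="exp_${TS}.log"
    echo "expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=${DUMPFILE} logfile=${LOGFILE}"
    ```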

  • System generated Index names different on target database after expdp/impdp

    After performing expdp/impdp to move data from one database (A) to another (B), the system-generated index names are different on the target database, which caused a major issue with GoldenGate. Could anyone provide any tricks on how to perform the expdp/impdp so that the same system-generated index names appear on both source and target?
    Thanks in advance.
    JL

    While I do not agree with Sb's choice of wording, his solution is correct. I suggest you drop and recreate the objects using explicit naming; then you will get the same names on the target database after import for constraints, indexes, and FKs.
    A detailed description of the problem this caused with GoldenGate would be interesting. The full Oracle and GoldenGate versions in use might also be important to the solution, if one exists other than explicit naming.
    HTH -- Mark D Powell --
    Edited by: Mark D Powell on May 30, 2012 12:26 PM

  • Expdp+Impdp: Does the user have to have DBA privilege?

    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) his own schema?
    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) other (=not his own) schemas ?
    If he is not allowed: Which GRANT is necessary to be able to perform such expdp/impdp operations?
    Peter
    Edited by: user559463 on Feb 28, 2010 7:49 AM

    Hello,
    Is a "normal" user (without DBA privilege) allowed to export and import (with the new expdp/impdp) his own schema?
    Yes, a user can always export his own objects.
    Is a "normal" user allowed to export and import other (not his own) schemas?
    Yes, if this user has the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles.
    So, you can create a user, grant it the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles and, connected as this user, export/import any object from / to any schema.
    On databases with a lot of export/import operations, I always create a special user with these roles.
    NB: with Data Pump you should also GRANT READ, WRITE privileges on the DIRECTORY (if you use a dump file) to the user.
    Also, be accurate in your choice of words: as previously posted, DBA is a role, not a privilege, which has another meaning.
    Hope this helps.
    Best regards,
    Jean-Valentin
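    As a concrete sketch of the setup described above (the user name dp_admin and the password are illustrative):

    ```sql
    CREATE USER dp_admin IDENTIFIED BY "change_me";
    GRANT CREATE SESSION TO dp_admin;
    -- roles needed to export/import schemas other than the user's own
    GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO dp_admin;
    -- Data Pump also needs access to the directory object
    GRANT READ, WRITE ON DIRECTORY data_pump_dir TO dp_admin;
    ```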

  • XE11: expdp/impdp

    Hello,
    i would like to use expdp and impdp.
    As i installed XE11 on Linux, i unlocked the HR account:
    ALTER USER hr ACCOUNT UNLOCK IDENTIFIED BY hr;
    and use the expdp:
    expdp hr/hr DUMPFILE=hrdump.dmp DIRECTORY=DATA_PUMP_DIR SCHEMAS=HR
    LOGFILE=hrdump.log
    This quits with:
    ORA-39006: internal error
    ORA-39213: Metadata processing is not available
    The alert_XE.log reported:
    ORA-12012: error on auto execute of job "SYS"."BSLN_MAINTAIN_STATS_JOB"
    ORA-06550: line 1, column 807:
    PLS-00201: identifier 'DBSNMP.BSLN_INTERNAL' must be declared
    I read some entries here and did:
    sqlplus sys/******* as sysdba @?/rdbms/admin/catnsnmp.sql
    sqlplus sys/******* as sysdba @?/rdbms/admin/catsnmp.sql
    sqlplus sys/******* as sysdba @?/rdbms/admin/catdpb.sql
    sqlplus sys/******* as sysdba @?/rdbms/admin/utlrp.sql
    I restarted the database, but the result of expdp was the same:
    ORA-39006: internal error
    ORA-39213: Metadata processing is not available
    What's wrong with that? What can I do?
    Do I need "BSLN_MAINTAIN_STATS_JOB", or can it be set to FALSE?
    I created the database today on 24.07., and the next run for "BSLN_MAINTAIN_STATS_JOB" is on 29.07.
    In the Windows version it works correctly, but not in the Linux version.
    Best regards

    Hello gentlemen,
    back to the origin:
    'Is expdp/impdp working on XE11?'
    The answer is simply yes.
    After a few days I found out that:
    - no stylesheet installation is required for this operation
    - a simple installation is enough
    And i did:
    SHELL:
    mkdir /u01/app > /dev/null 2>&1
    mkdir /u01/app/oracle > /dev/null 2>&1
    groupadd dba
    useradd -g dba -d /u01/app/oracle oracle > /dev/null 2>&1
    chown -R oracle:dba /u01/app/oracle
    rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
    /etc/init.d/./oracle-xe configure responseFile=xe.rsp
    ./sqlplus sys/********* as sysdba @/u01/app/oracle/product/11.2.0/xe/rdbms/admin/utlfile.sql
    SQLPLUS:
    ALTER USER hr IDENTIFIED BY hr ACCOUNT UNLOCK;
    GRANT CONNECT, RESOURCE to hr;
    GRANT read, write on DIRECTORY DATA_PUMP_DIR TO hr;
    expdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_exp.log
    impdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_imp.log
    This was carried out on:
    OEL5.8, OEL6.3, openSUSE 11.4
    For explanation:
    We did the stylesheet installation for XE10 to get the expdp/impdp functionality.
    Thanks for your assistance
    Best regards
    Achim
    Edited by: oelk on 16.08.2012 10:20

  • [ETL] TTS vs expdp/impdp vs ctas (dblink)

    Hi, all.
    The database is oracle 10gR2 on a unix machine.
    Assuming that the db size is about 1 terabyte (table: 500 GB, index: 500 GB),
    how much faster is TTS (transportable tablespace) over expdp/impdp, and over CTAS (dblink)?
    As you know, the speed of ETL depends on the hardware capacity (I/O capacity, network bandwidth, number of CPUs).
    I just would like to hear general guide from your experience.
    Thanks in advance.
    Best Regards.

    869578 wrote:
    how much faster is TTS (transportable tablespace) over expdp/impdp, and over CTAS (dblink)?
    http://docs.oracle.com/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01101
    Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.
    If you really want to know "how much faster", you're going to have to benchmark. Lots of variables come into play, so it's best to determine this in your actual environment.
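    For reference, a minimal TTS flow looks like this (the tablespace name TS_DATA and the datafile path are illustrative):

    ```sql
    -- On the source: verify the set is self-contained, then make it read-only
    EXEC DBMS_TTS.TRANSPORT_SET_CHECK('TS_DATA', TRUE);
    SELECT * FROM transport_set_violations;   -- must return no rows
    ALTER TABLESPACE ts_data READ ONLY;
    -- expdp system directory=DATA_PUMP_DIR transport_tablespaces=TS_DATA dumpfile=tts.dmp
    -- Copy the datafiles to the target, then:
    -- impdp system directory=DATA_PUMP_DIR dumpfile=tts.dmp
    --   transport_datafiles='/u02/oradata/tgt/ts_data01.dbf'
    ALTER TABLESPACE ts_data READ WRITE;      -- on the source, once copied
    ```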
    Cheers,

  • Expdp impdp fails from 10g to 11g db version

    Hello folks,
    Export DB Version : 10.2.0.4
    Import DB Version : 11.2.0.1
    Export Log File
    Export: Release 10.2.0.4.0 - Production on Wednesday, 03 November, 2010 2:19:20
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, Data Mining and Real Application Testing options
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 45 GB
    . . exported "DYM"."CYCLE_COUNT_MASTER" 39.14 GB 309618922 rows
    Master table "DYM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for DYM.SYS_EXPORT_SCHEMA_01 is:
    Job "DYM"."SYS_EXPORT_SCHEMA_01" successfully completed at 02:56:49
    Import Log File
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "DYM_PRJ4"."CYCLE_COUNT_MASTER" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 10:54:38
    Is impdp of a 10g expdp dump into 11g not allowed? Any thoughts appreciated.

    Nope, I do not see any error file.
    Current log# 2 seq# 908 mem# 0:
    Thu Nov 04 11:58:20 2010
    DM00 started with pid=530, OS id=1659, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 11:58:20 2010
    DW00 started with pid=531, OS id=1661, wid=1, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 11:58:55 2010
    DM00 started with pid=513, OS id=1700, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 11:58:55 2010
    DW00 started with pid=520, OS id=1713, wid=1, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 12:00:54 2010
    Thread 1 cannot allocate new log, sequence 909
    Private strand flush not complete
    Current log# 2 seq# 908 mem# 0: ####################redo02.log
    Thread 1 advanced to log sequence 909 (LGWR switch)
    Current log# 3 seq# 909 mem# 0: ###################redo03.log
    Thu Nov 04 12:01:51 2010
    Thread 1 cannot allocate new log, sequence 910
    Checkpoint not complete
    Current log# 3 seq# 909 mem# 0:###################redo03.log

  • Use expdp/impdp to reorganize a tablespace to remove additional datafile ?

    Oracle 10g (10.2.0.1)
    We had a tablespace with a single datafile, WORK1. WORK1 filled up, and a colleague added two datafiles, WORK2 and WORK3 (instead of resizing the original).
    I resized WORK1, increasing it by 500 MB.
    I was able to drop WORK3, but not WORK2 (ORA-03262: the file is non-empty).
    My proposed solution is to expdp the tablespace, drop the tablespace and datafiles, recreate the tablespace with a correctly sized datafile, and finally impdp the tablespace.
    Is this solution valid?
    Any hints at syntax would be useful.

    1. Map your datafile.
    2. If there are no segments in the datafile, drop it and go to 6.
    3. Shrink the datafile down to where the data ends.
    4. Rebuild/move the last object in the data file.
    5. Go to 1.
    6. Done.
    To map data file...
    accept file_num char prompt 'File ID: ';
    SET PAGESIZE   70
    SET LINESIZE   132
    SET NEWPAGE    0
    SET VERIFY     OFF
    SET ECHO       OFF
    SET HEADING    ON
    SET FEEDBACK   OFF
    SET TERMOUT    ON
    COLUMN file_name   FORMAT a50          HEADING 'File Name'
    COLUMN owner       FORMAT a10   TRUNC  HEADING 'Owner'
    COLUMN object      FORMAT a30   TRUNC  HEADING 'Object'
    COLUMN obj_type    FORMAT a2           HEADING ' '
    COLUMN block_id    FORMAT 9999999      HEADING 'Block|ID'
    COLUMN blocks      FORMAT 999,999      HEADING 'Blocks'
    COLUMN mbytes      FORMAT 9,999.99     HEADING 'M-Bytes'
    SELECT  'free space'      owner,
            ' '               object,
            ' '               obj_type,
            f.file_name,
            s.file_id,
            s.block_id,
            s.blocks,
            s.bytes/1048576   mbytes
      FROM  dba_free_space s,
            dba_data_files f
    WHERE  s.file_id = TO_NUMBER(&file_num)
       AND  s.file_id = f.file_id
    UNION
    SELECT  owner,
            segment_name,
            DECODE(segment_type, 'TABLE',          'T',
                                 'INDEX',          'I',
                                 'ROLLBACK',       'RB',
                                 'CACHE',          'CH',
                                 'CLUSTER',        'CL',
                                 'LOBINDEX',       'LI',
                                 'LOBSEGMENT',     'LS',
                                 'TEMPORARY',      'TY',
                                 'NESTED TABLE',   'NT',
                                 'TYPE2 UNDO',     'U2',
                                 'TABLE PARTITION','TP',
                                 'INDEX PARTITION','IP', '?'),
            f.file_name,
            s.file_id,
            s.block_id,
            s.blocks,
            s.bytes/1048576
      FROM  dba_extents s,
            dba_data_files f
    WHERE  s.file_id = TO_NUMBER(&file_num)
       AND  s.file_id = f.file_id
    ORDER
        BY  file_id,
            block_id;
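    As for the original question: the expdp-based reorganization is also valid if an outage is acceptable. A sketch (the tablespace and file names are illustrative):

    ```sql
    -- 1) Export the tablespace contents first, e.g.:
    --    expdp system directory=DATA_PUMP_DIR dumpfile=work.dmp tablespaces=WORK
    -- 2) Drop and recreate the tablespace with a single, correctly sized file:
    DROP TABLESPACE work INCLUDING CONTENTS AND DATAFILES;
    CREATE TABLESPACE work
      DATAFILE '/u01/oradata/db/work01.dbf' SIZE 4G AUTOEXTEND ON NEXT 256M;
    -- 3) Re-import:
    --    impdp system directory=DATA_PUMP_DIR dumpfile=work.dmp
    ```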

  • Expdp/impdp error

    Hi Aman,
    Sorry about that. Posting it as new one:
    SQL> ALTER USER SCOTT DEFAULT TABLESPACE TEST;
    User altered.
    SQL> ALTER USER TEST DEFAULT TABLESPACE TEST;
    User altered.
    SQL> ALTER TABLESPACE TEST
    2 STORAGE
    3 MAXEXTENTS UNLIMITED;
    STORAGE
    ERROR at line 2:
    ORA-02142: missing or invalid ALTER TABLESPACE option
    SQL> EXIT
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Pr
    oduction
    With the Partitioning, OLAP and Data Mining options
    C:\Documents and Settings\Rafialvi>expdp system/manager directory=MYDIR dumpfile
    =expdpf.dmp schemas=scott
    Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 16:34:57
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
    tion
    With the Partitioning, OLAP and Data Mining options
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 475
    ORA-29283: invalid file operation
    I tried in linux still the same error persist:
    [oracle@dbcl1n1 AUCD1 ~]$ expdp system/system directory=MYDIR dumpfile=expdpf.dmp schemas=adprod
    Export: Release 10.2.0.4.0 - 64bit Production on Monday, 15 February, 2010 3:22:33
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 488
    ORA-29283: invalid file operation
    Thanks,
    Rafi.
    C:\Documents and Settings\Rafialvi>
    Thanks,
    Rafi

    Hi Khaja,
    You were quite right, thanks. I was not creating the directory. But I'm still struggling with the problem below on Windows, while the export works fine on Linux...
    SQL*Plus: Release 10.2.0.1.0 - Production on Mon Feb 15 17:24:09 2010
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Enter user-name: /as sysdba
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> create directory mydir3 as 'C:\oracle\product\10.2.0\expdptest';
    Directory created.
    SQL> grant read,write on mydir3 to public;
    grant read,write on mydir3 to public
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> grant read,write on directory mydir3 to public;
    Grant succeeded.
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Pr
    oduction
    With the Partitioning, OLAP and Data Mining options
    C:\Documents and Settings\Rafialvi>expdp system/manager directory=MYDIR3
    Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:28:47
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
    tion
    With the Partitioning, OLAP and Data Mining options
    Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=MYDIR3
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 320 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
    Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
    Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/PRE_TABLE_ACTION
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
    Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_
    PACKAGE_SPEC
    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
    Processing object type SCHEMA_EXPORT/VIEW/VIEW
    Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/VIEW/COMMENT
    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION
    Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
    . . exported "SYSTEM"."REPCAT$_AUDIT_ATTRIBUTE" 5.960 KB 2 rows
    . . exported "SYSTEM"."REPCAT$_OBJECT_TYPES" 6.515 KB 28 rows
    . . exported "SYSTEM"."REPCAT$_RESOLUTION_METHOD" 5.656 KB 19 rows
    . . exported "SYSTEM"."REPCAT$_TEMPLATE_STATUS" 5.304 KB 3 rows
    . . exported "SYSTEM"."REPCAT$_TEMPLATE_TYPES" 5.921 KB 2 rows
    . . exported "SYSTEM"."DEF$_AQCALL" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_AQERROR" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_CALLDEST" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_DEFAULTDEST" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_DESTINATION" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_ERROR" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_LOB" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_ORIGIN" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_PROPAGATOR" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_PUSHED_TRANSACTIONS" 0 KB 0 rows
    . . exported "SYSTEM"."DEF$_TEMP$LOB" 0 KB 0 rows
    . . exported "SYSTEM"."LOGSTDBY$APPLY_MILESTONE" 0 KB 0 rows
    . . exported "SYSTEM"."LOGSTDBY$APPLY_PROGRESS":"P0" 0 KB 0 rows
    . . exported "SYSTEM"."LOGSTDBY$EVENTS" 0 KB 0 rows
    . . exported "SYSTEM"."LOGSTDBY$HISTORY" 0 KB 0 rows
    . . exported "SYSTEM"."LOGSTDBY$PARAMETERS" 0 KB 0 rows
    . . exported "SYSTEM"."LOGSTDBY$PLSQL" 0 KB 0 rows
    . . exported "SYSTEM"."LOGSTDBY$SCN" 0 KB 0 rows
    . . exported "SYSTEM"."LOGSTDBY$SKIP" 0 KB 0 rows
    . . exported "SYSTEM"."LOGSTDBY$SKIP_TRANSACTION" 0 KB 0 rows
    . . exported "SYSTEM"."MVIEW$_ADV_INDEX" 0 KB 0 rows
    . . exported "SYSTEM"."MVIEW$_ADV_PARTITION" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_AUDIT_COLUMN" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_COLUMN_GROUP" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_CONFLICT" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_DDL" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_EXCEPTIONS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_EXTENSION" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_FLAVORS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_FLAVOR_OBJECTS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_GENERATED" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_GROUPED_COLUMN" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_INSTANTIATION_DDL" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_KEY_COLUMNS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_OBJECT_PARMS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_PARAMETER_COLUMN" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_PRIORITY" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_PRIORITY_GROUP" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_REFRESH_TEMPLATES" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_REPCAT" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_REPCATLOG" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_REPCOLUMN" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_REPGROUP_PRIVS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_REPOBJECT" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_REPPROP" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_REPSCHEMA" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_RESOLUTION" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_RESOLUTION_STATISTICS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_RESOL_STATS_CONTROL" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_RUNTIME_PARMS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_SITES_NEW" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_SITE_OBJECTS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_SNAPGROUP" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_TEMPLATE_OBJECTS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_TEMPLATE_PARMS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_TEMPLATE_REFGROUPS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_TEMPLATE_SITES" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_TEMPLATE_TARGETS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS" 0 KB 0 rows
    . . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES" 0 KB 0 rows
    . . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE" 0 KB 0 rows
    Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
    C:\ORACLE\PRODUCT\10.2.0\EXPDPTEST\EXPDAT.DMP
    Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:29:13
    C:\Documents and Settings\Rafialvi>dumpfile=expdpf.dmp schemas=scott
    'dumpfile' is not recognized as an internal or external command,
    operable program or batch file.
C:\Documents and Settings\Rafialvi>impdp system/manager directory=MYDIR3 dumpfile=expdpf.dmp remap_schema=scott:test
Import: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:30:46
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "C:\oracle\product\10.2.0\expdptest\expdpf.dmp" for read
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
C:\Documents and Settings\Rafialvi>expdp scott/tiger directory=MYDIR3
Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:35:51
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "C:\oracle\product\10.2.0\expdptest\expdat.dmp"
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
C:\Documents and Settings\Rafialvi>dumpfile=expdpf.dmp schemas=scott
'dumpfile' is not recognized as an internal or external command,
operable program or batch file.
(The same sequence repeats four more times with dumpfile=expdptest.dmp, expdptest3.dmp, expdptest13.dmp and expdptest131.dmp: each bare dumpfile= line is rejected by cmd.exe, and each plain "expdp scott/tiger directory=MYDIR3" run fails with ORA-39001, ORA-39000, ORA-31641, ORA-27038 and OSD-04010 because expdat.dmp already exists.)
How do I get rid of the above errors, Khaja?
    Thanks,
    Rafi.
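Two separate problems show in the log above: `dumpfile=...` was typed at the Windows prompt as its own command (so cmd.exe rejected it instead of passing it to expdp), and when expdp then ran with no DUMPFILE parameter it defaulted to expdat.dmp, which already existed in the MYDIR3 directory, hence ORA-27038. A minimal sketch of a working invocation using a parameter file, with the directory object, file name and credentials taken from the log:

```
# export.par -- Data Pump parameter file
directory=MYDIR3
dumpfile=expdpf.dmp
logfile=expdpf.log
schemas=scott
```

Run it as a single command, `expdp scott/tiger parfile=export.par` (or put the same keywords directly on the expdp command line); alternatively, delete or rename the existing expdat.dmp before re-running.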

  • Expdp/impdp on Win 2003, Oracle 11g

Hi Guys,
I did the export from production and I want to restore it on dev. What is the syntax for impdp?
expdp userid=system/system dumpfile=livelink.dmp schemas=livelink logfile=livelink.log
impdp userid=system/system dumpfile=e:/oradata/livelink.dmp ........ Is there a fromuser=livelink touser=livelink equivalent?
Thanks.

Is there a fromuser=livelink touser=livelink? No.
Check out the SCHEMAS parameter:
    bcm@bcm-laptop:~$ impdp help=yes
    Import: Release 11.2.0.1.0 - Production on Thu Oct 7 08:44:28 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    The Data Pump Import utility provides a mechanism for transferring data objects
    between Oracle databases. The utility is invoked with the following command:
         Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
    You can control how Import runs by entering the 'impdp' command followed
    by various parameters. To specify parameters, you use keywords:
         Format:  impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
         Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
    USERID must be the first parameter on the command line.
    The available keywords and their descriptions follow. Default values are listed within square brackets.
    ATTACH
    Attach to an existing job.
    For example, ATTACH=job_name.
    CONTENT
    Specifies data to load.
    Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.
    DATA_OPTIONS
    Data layer option flags.
    Valid keywords are: SKIP_CONSTRAINT_ERRORS.
    DIRECTORY
    Directory object to be used for dump, log and sql files.
    DUMPFILE
    List of dumpfiles to import from [expdat.dmp].
    For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
    ENCRYPTION_PASSWORD
    Password key for accessing encrypted data within a dump file.
    Not valid for network import jobs.
    ESTIMATE
    Calculate job estimates.
    Valid keywords are: [BLOCKS] and STATISTICS.
    EXCLUDE
    Exclude specific object types.
    For example, EXCLUDE=SCHEMA:"='HR'".
    FLASHBACK_SCN
    SCN used to reset session snapshot.
    FLASHBACK_TIME
    Time used to find the closest corresponding SCN value.
    FULL
    Import everything from source [Y].
    HELP
    Display help messages [N].
    INCLUDE
    Include specific object types.
    For example, INCLUDE=TABLE_DATA.
    JOB_NAME
    Name of import job to create.
    LOGFILE
    Log file name [import.log].
    NETWORK_LINK
    Name of remote database link to the source system.
    NOLOGFILE
    Do not write log file [N].
    PARALLEL
    Change the number of active workers for current job.
    PARFILE
    Specify parameter file.
    PARTITION_OPTIONS
    Specify how partitions should be transformed.
    Valid keywords are: DEPARTITION, MERGE and [NONE].
    QUERY
    Predicate clause used to import a subset of a table.
    For example, QUERY=employees:"WHERE department_id > 10".
    REMAP_DATA
    Specify a data conversion function.
    For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
    REMAP_DATAFILE
    Redefine datafile references in all DDL statements.
    REMAP_SCHEMA
    Objects from one schema are loaded into another schema.
    REMAP_TABLE
    Table names are remapped to another table.
    For example, REMAP_TABLE=EMP.EMPNO:REMAPPKG.EMPNO.
    REMAP_TABLESPACE
    Tablespace object are remapped to another tablespace.
    REUSE_DATAFILES
    Tablespace will be initialized if it already exists [N].
    SCHEMAS
    List of schemas to import.
    SKIP_UNUSABLE_INDEXES
    Skip indexes that were set to the Index Unusable state.
    SOURCE_EDITION
    Edition to be used for extracting metadata.
    SQLFILE
    Write all the SQL DDL to a specified file.
    STATUS
    Frequency (secs) job status is to be monitored where
    the default [0] will show new status when available.
    STREAMS_CONFIGURATION
    Enable the loading of Streams metadata
    TABLE_EXISTS_ACTION
    Action to take if imported object already exists.
    Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.
    TABLES
    Identifies a list of tables to import.
    For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.
    TABLESPACES
    Identifies a list of tablespaces to import.
    TARGET_EDITION
    Edition to be used for loading metadata.
    TRANSFORM
    Metadata transform to apply to applicable objects.
    Valid keywords are: OID, PCTSPACE, SEGMENT_ATTRIBUTES and STORAGE.
    TRANSPORTABLE
    Options for choosing transportable data movement.
    Valid keywords are: ALWAYS and [NEVER].
    Only valid in NETWORK_LINK mode import operations.
    TRANSPORT_DATAFILES
    List of datafiles to be imported by transportable mode.
    TRANSPORT_FULL_CHECK
    Verify storage segments of all tables [N].
    TRANSPORT_TABLESPACES
    List of tablespaces from which metadata will be loaded.
    Only valid in NETWORK_LINK mode import operations.
    VERSION
    Version of objects to import.
    Valid keywords are: [COMPATIBLE], LATEST or any valid database version.
    Only valid for NETWORK_LINK and SQLFILE.
    The following commands are valid while in interactive mode.
    Note: abbreviations are allowed.
    CONTINUE_CLIENT
    Return to logging mode. Job will be restarted if idle.
    EXIT_CLIENT
    Quit client session and leave job running.
    HELP
    Summarize interactive commands.
    KILL_JOB
    Detach and delete job.
    PARALLEL
    Change the number of active workers for current job.
    START_JOB
    Start or resume current job.
    Valid keywords are: SKIP_CURRENT.
    STATUS
    Frequency (secs) job status is to be monitored where
    the default [0] will show new status when available.
    STOP_JOB
    Orderly shutdown of job execution and exits the client.
    Valid keywords are: IMMEDIATE.
bcm@bcm-laptop:~$
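Applied to the livelink question above, a sketch of the import in parameter-file form. Note that DUMPFILE takes a file name relative to a DIRECTORY object, not an OS path such as e:/oradata/livelink.dmp; the DPUMP_DIR directory object below is an assumed example, not something from the thread:

```
# imp_livelink.par
directory=DPUMP_DIR
dumpfile=livelink.dmp
logfile=imp_livelink.log
schemas=livelink
```

Run with `impdp system/system parfile=imp_livelink.par`. If the target schema name were different, you would add remap_schema=livelink:<new_user> in place of the old fromuser/touser pair.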

  • An error occurs when running impdp on a file exported with expdp from Oracle 10g R2

To all Oracle DBAs,
On an IBM AIX server, I exported the entire BIS schema with expdp using the following par file:
directory=bis
dumpfile=bis.dmp
logfile=bis.log
schemas=bis
CONTENT=ALL
Then, on Windows 7, I ran impdp with the imp.par file below:
directory=bis
dumpfile=bis.dmp
logfile=imp_bis.log
But I got the following errors:
D:\bis> impdp bis/bis parfile=imp.par
Import: Release 10.2.0.3.0 - Production on Friday, 02 April, 2010 23:51:14
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-31694: master table "BIS"."SYS_IMPORT_FULL_01" failed to load/unload
ORA-31640: unable to open dump file "d:\bis\bis.dmp" for read
ORA-19505: failed to identify file "d:\bis\bis.dmp"
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 1697225879)
Googling hasn't made it easy to find the cause...

The problem was that the file had been downloaded over FTP in ASCII mode.
When I transferred it again from the IBM server in BINARY mode, the import worked.
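The ORA-27046 "file size is not a multiple of logical block size" error is the classic symptom of an ASCII-mode FTP transfer corrupting a binary dump file. A hypothetical session illustrating the fix (host name, login and paths are placeholders; the essential command is `binary`):

```
ftp aix-server
ftp> user oracle
ftp> binary                          # image mode: no CR/LF translation
ftp> get /export/bis/bis.dmp bis.dmp
ftp> bye
```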

  • Error: Temporary Tablespace is Empty when doing expdp/impdp

    Hi all,
I was doing expdp on my Oracle 10.1.0.2.0 DB on Win XP P. Although the user has a default temporary tablespace with a tempfile on autoextend enabled, I got this message:
    ORA-25153: Temporary Tablespace is Empty
    Then I created a new temporary tablespace for the user with 500M tempfile and autoextend enabled, then expdp went through.
    Now I am doing the impdp for the same .dmp file to generate one sqlfile for the DB,
    again I am facing the same error message as...
    ORA-25153: Temporary Tablespace is Empty
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    17FE07EC 13460 package body SYS.KUPW$WORKER
    17FE07EC 5810 package body SYS.KUPW$WORKER
    17FE07EC 3080 package body SYS.KUPW$WORKER
    17FE07EC 3530 package body SYS.KUPW$WORKER
    17FE07EC 6395 package body SYS.KUPW$WORKER
    17FE07EC 1208 package body SYS.KUPW$WORKER
    17ABE058 2 anonymous block
    Job "CHECKUP"."SYS_SQL_FILE_FULL_02" stopped due to fatal error at 10:09
    The message indicates that...
    ORA-25153: Temporary Tablespace is Empty
    Cause: An attempt was made to use space in a temporary tablespace with no files.
    Action: Add files to the tablespace using ADD TEMPFILE command.
So my question is: do I have to add a tempfile to my temporary tablespace every time I run an import/export? Will it not be cleared when the job completes?
Any advice please.

    Hi Sabdar,
    The result of the query is as...
    SQL> SELECT * FROM DATABASE_PROPERTIES where
    2 PROPERTY_NAME='DEFAULT_TEMP_TABLESPACE';
    PROPERTY_NAME
    PROPERTY_VALUE
    DESCRIPTION
    DEFAULT_TEMP_TABLESPACE
    TEMP
    Name of default temporary tablespace
So the default temporary tablespace is TEMP, which has no tempfile because I cloned this DB from the primary DB. But the user I am running impdp as is 'checkup', and the temporary tablespace for 'checkup' is 'checkup_temp1', which does have a tempfile.
So why is the impdp job using the database's default temporary tablespace instead of the user's temporary tablespace?
Is there any way to check whether 'checkup_temp1' is the default temporary tablespace for 'checkup' or not?
Can I create the user specifying a default temporary tablespace? That attempt gives me an error:
    SQL> create user suman identified by suman
    2 default tablespace checkup_dflt
    3 default TEMPORARY TABLESPACE checkup_temp1;
    default TEMPORARY TABLESPACE checkup_temp1
    ERROR at line 3:
    ORA-00921: unexpected end of SQL command
    Then I did ...
    SQL> create user suman identified by suman
    2 default tablespace checkup_dflt
    3 TEMPORARY TABLESPACE checkup_temp1;
    User created.
    Regards
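Two points answer the question above. First, tempfiles persist across jobs: once added, a tempfile stays until you explicitly drop it, so there is no need to re-add one for every export/import. Second, the PL/SQL stack trace (SYS.KUPW$WORKER) shows the work happening in SYS-owned server-side packages, which may explain why the job hits the database default temporary tablespace (TEMP, empty in this clone) rather than the invoking user's. A sketch of the direct fix and the verification query; the file path and size are placeholders for this environment:

```sql
-- Give the empty default temporary tablespace a tempfile
ALTER TABLESPACE temp
  ADD TEMPFILE 'C:\oracle\oradata\mydb\temp01.dbf'
  SIZE 500M AUTOEXTEND ON;

-- Check which temporary tablespace each user is assigned
SELECT username, temporary_tablespace
  FROM dba_users
 WHERE username IN ('CHECKUP', 'SUMAN');
```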

  • Using expdp/impdp to backup schemas to new tablespace

    Hello,
I have tablespace A for schemas A1 and A2, and I wish to back up these schemas to tablespace B using schema names B1 and B2 (so the contents of schemas A1 and A2 are copied into schemas B1 and B2, respectively, to use as backups in case something happens to schemas A1 or A2 or tablespace A).
    I began by creating tablespace B, and schemas B1 and B2. Then I attempted to populate schemas B1 and B2 by doing the following:
    EXPORT SCHEMAS:
    expdp a1/a1password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:a1_export.log SCHEMAS=a1 COMPRESSION=METADATA_ONLY
    expdp a2/a2password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:a2_export.log SCHEMAS=a2 COMPRESSION=METADATA_ONLY
    IMPORT SCHEMAS:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2
    This resulted in backing up schema A1 into schema B1, and schema A2 into B2, but the tablespaces for schemas B1 and B2 remained tablespace A (when I wanted them to be tablespace B).
    I will drop schemas B1 and B2, create new schemas, and try again. What command should I use to get the tablespace correct this time?
    Reviewing the documentation for data pump import
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL300
    specifically the section titled REMAP_TABLESPACE, I'm thinking that I could just add a switch to the above import commands to remap tablespace, such as:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1 REMAP_TABLESPACE=a:b
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2 REMAP_TABLESPACE=a:b
    Is that correct?
    Also, is it OK to use the same export commands above, or should they change to support the REMAP_TABLESPACE?

Hi,
If I understand correctly, you want to import A1 into B1 and A2 into B2 with the tablespace remapped. The ESTIMATE parameter only controls how the job size is estimated; it does not change what is exported, so it is not relevant here. And since REMAP_TABLESPACE is an import-side transformation, your export commands do not need to change.
You can do the whole thing with one dump file, something like:
expdp system/password directory=myDpumpDirectory dumpfile=A1_A2_Export.dmp logfile=A1_A2_Export.log schemas=A1,A2
impdp system/password directory=myDpumpDirectory dumpfile=A1_A2_Export.dmp logfile=A1_A2_Import.log remap_schema=A1:B1,A2:B2 remap_tablespace=A:B
HTH
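The same export/import pair can also be written as parameter files, which sidesteps OS quoting of the comma-separated lists. A sketch using the names from the thread (system/password is a placeholder):

```
# exp_a.par -- export both source schemas into one dump file
directory=myDpumpDirectory
dumpfile=A1_A2_Export.dmp
logfile=A1_A2_Export.log
schemas=A1,A2

# imp_b.par -- remap schemas and the tablespace on import
directory=myDpumpDirectory
dumpfile=A1_A2_Export.dmp
logfile=A1_A2_Import.log
remap_schema=A1:B1,A2:B2
remap_tablespace=A:B
```

Invoke as `expdp system/password parfile=exp_a.par`, then `impdp system/password parfile=imp_b.par`.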
