Data Pump - Tablespace MetaData

Does anyone know how to get the metadata to create only the tablespaces from a full export dump? I am trying to create the necessary SQL to pre-create my tablespaces before a full import.
Q1. Do I have to do this from the expdp side instead of the impdp side?
I am guessing I need to use CONTENT=METADATA_ONLY on the export side.
Q2. How do I restrict the metadata to only tablespaces? Is it simply a question of using
INCLUDE=TABLESPACE
Jim

Hello,
Q1. Do I have to do this from the expdp side instead of the impdp side?
I am guessing I need to use CONTENT=METADATA_ONLY on the export side.
Q2. How do I restrict the metadata to only tablespaces? Is it simply a question of using
INCLUDE=TABLESPACE
Yes to both questions; you may try this:
expdp system/<password> FULL=Y CONTENT=METADATA_ONLY INCLUDE=TABLESPACE DUMPFILE=tbs.dmp
impdp system/<password> DUMPFILE=tbs.dmp SQLFILE=tbs.sql
Then edit the tbs.sql file; in it you'll get all the necessary SQL to pre-create the tablespaces.
Also, as another way: on the target database you may use Oracle Managed Files (by setting the parameter DB_CREATE_FILE_DEST). Then you won't have to specify the datafile. To create a tablespace in such a database you just have to execute the following statement:
CREATE TABLESPACE <tablespace_name>;
http://docs.oracle.com/cd/E11882_01/server.112/e25494/omf001.htm#i1006122
http://docs.oracle.com/cd/E11882_01/server.112/e25494/omf002.htm
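For instance, a minimal sketch of the OMF approach (the destination path is just an illustration):
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata' SCOPE=BOTH;
SQL> CREATE TABLESPACE app_data;
The datafile is then created, named and sized automatically under the destination directory.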
It's faster to pre-create tablespaces that way :-).
Hope this helps.
Best regards,
Jean-Valentin

Similar Messages

  • Data Pump execution time

    I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
    I have some concerns about the time the export took (and therefore the time the corresponding import will take, which is typically considerably longer than the export).
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 19.87 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
    . . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
    Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
    Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
    E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
    Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
    These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be on the line
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Q1. Is that really the line that was taking time, or was the export really working on the export of the table on the following line?
    Q2. Will such table stats be brought in on an import? i.e. are table stats part of the dictionary, and therefore part of the SYS/SYSTEM schemas, and so will not be brought in on the import to my newly created target database?
    Q3. Does anyone know of any performance improvements that can be made to this export / import? I am exporting from the 10.2.0.1 source Data Pump and will be importing on a new target 11gR2 Data Pump. From experiment with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use that on the export side (I should be able to use it on the 11gR2 import side).
    thanks,
    Jim

    Jim,
    > Q1. What difference does it make knowing the difference between how long the metadata and the actual data takes on export? Is this because you could decide to exclude some objects and manually create them before import?
    You asked what was taking so long. This was just a test to see if it was metadata or data. It may help us try to figure out if there is a problem or not, but knowing what is slow helps narrow things down.
    > With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner; however for Data Pump the metadata contains a lot more than just tablespaces and indexes - I couldn't imagine manually creating all the tables and grants, for example. I guess you can be selective about what objects you include / exclude in the export or import (via the INCLUDE & EXCLUDE settings)?
    No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to maybe clear things up: when you say CONTENT=METADATA_ONLY, it exports everything except for data. It will export tablespaces, grants, users, tables, statistics, etc. - everything but the data. When you say CONTENT=DATA_ONLY, it only exports the data. You can use this method to export and import everything, but it's not the best solution for most. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and this will slow down the data-only job.
    > Q2. If I do a DATA_ONLY export I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant etc (not an attractive option)?
    Again, I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run impdp on the metadata-only dump file, then run the import on the data-only dump file.
    > Q3. If I use EXCLUDE=STATISTICS does that mean I can simply regenerate the stats on the target database after the import completes (how would I do that?)
    Yes, you can do that. There are different statistics gathering levels. You can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for dbms_stats.gather...
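    For instance, a minimal sketch of regathering statistics for one schema after the import (the schema name is just an illustration):
    SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SAPPHIRE', cascade => TRUE);
    The cascade option gathers the index statistics along with the table statistics.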
    Dean

  • Tablespace level backup using data pump

    Hi,
    I'm using 10.2.0.4 on RHEL 4.
    I have one doubt: can we take a tablespace-level backup using data pump?
    But I don't want to use it for transportable tablespace.
    thanks.

    Yes, but only for the tables in that tablespace.
    Use the TABLESPACES option to export a list of tablespaces; all the tables in those tablespaces will be exported.
    You must also have the EXP_FULL_DATABASE role to use tablespace mode.
    Have a look at this,
    http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10825/dp_export.htm#i1007519
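    For example, a minimal sketch of a tablespace-mode export (directory and file names are just illustrations):
    expdp system/<password> DIRECTORY=dpump_dir TABLESPACES=users DUMPFILE=tbs_users.dmp LOGFILE=tbs_users.log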
    Thanks

  • Data Pump import to a sql file error :ORA-31655 no data or metadata objects

    Hello,
    I'm using Data Pump to export/import data; one requirement is to import data to a SQL file. The OS is Windows.
    I made the following export:
    expdp system/password directory=dpump_dir dumpfile=tablesdump.dmp content=DATA_ONLY tables=user.tablename
    and it works, I can see the file TABLESDUMP.DMP in the directory path.
    then when I tried to import it to a sql file:
    impdp system/password directory=dpump_dir dumpfile=tablesdump.dmp sqlfile=tables_export.sql
    the log show :
    ORA-31655 no data or metadata objects selected for job
    and the sql file is created empty in the directory path.
    I'm not DBA, I'm a Java developer , Can you help me?
    Thks

    Hi, I added the command line :
    expdp system/system directory=dpump_dir dumpfile=tablesdump.dmp content=DATA_ONLY schemas=ko1 tables=KO1QT01 logfile=capture.log
    The log in the console screen is below (it is in Spanish); no log file was created in the directory path.
    Export: Release 10.2.0.1.0 - Production on Tuesday, 26 January, 2010 12:59:14
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    UDE-00010: se han solicitado varios modos de trabajo, schema y tables.
    (English error)
    UDE-00010: multiple job modes requested, schema and tables.
    This is why I used tables=user.tablename instead - is this right?
    Thks
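    (As the UDE-00010 message says, SCHEMAS and TABLES are mutually exclusive job modes, so only one of them can appear on the command line. A minimal sketch of a valid tables-mode version of the command above, using the schema-qualified table name:)
    expdp system/<password> directory=dpump_dir dumpfile=tablesdump.dmp content=DATA_ONLY tables=ko1.KO1QT01 logfile=capture.log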

  • Data Pump - expdp and slow performance on specific tables

    Hi there
    I have a data pump export of a schema. Most of the 700 tables are exported very quickly (direct path) but a couple of them seem to be extremely slow.
    I have checked:
    - no lobs
    - no long/raw
    - no VPD
    - no partitions
    - no bitmapped index
    - just date, number, varchar2's
    I'm running with trace 400300.
    But I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4??? Can anyone find an explanation for the method in the trace:
    1 > direct path (i think)
    2 > external table (i think)
    4 > ?
    others?
    I have done some stats using v$filestat/v$session_wait (history), and it seems that we always wait for 'db file sequential read', doing lots and lots of SINGLEBLKRDS. No undo is read.
    I have a table of 2.5 GB -> 3 minutes,
    and then this (in my eyes) similar table of 2.4 GB -> 1½ hrs.
    There are 367,000 blocks (8 K) and avg rowlen = 71.
    I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
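    (For reference, a minimal sketch of the kind of wait check described above, assuming the worker session's SID is known:)
    SQL> SELECT event, wait_time, p1, p2 FROM v$session_wait_history WHERE sid = :worker_sid;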
    Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
    System name:  Linux
    Node name:  tiaprod.thi.somethingamt.dk
    Release:  2.6.18-194.el5
    Version:  #1 SMP Mon Mar 29 22:10:29 EDT 2010
    Machine:  x86_64
    VM name:  Xen Version: 3.4 (HVM)
    Instance name: prod
    Redo thread mounted by this instance: 1
    Oracle process number: 222
    Unix process pid: 24268, image: [email protected] (DW00)
    *** 2011-09-20 09:39:39.671
    *** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
    *** CLIENT ID:() 2011-09-20 09:39:39.671
    *** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
    *** MODULE NAME:() 2011-09-20 09:39:39.671
    *** ACTION NAME:() 2011-09-20 09:39:39.671
    KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
    *** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
    *** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
    KUPC:09:39:39.693: Setting remote flag for this process to FALSE
    prvtaqis - Enter
    prvtaqis subtab_name upd
    prvtaqis sys table upd
    KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
    KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
    KUPW:09:39:39.820: 1: worker max message number: 1000
    KUPW:09:39:39.822: 1: Full cluster access allowed
    KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
    KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
    KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
    KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
    KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
    KUPW:09:39:39.998: 1: Max character width: 1
    KUPW:09:39:39.998: 1: Max clob fetch: 32757
    KUPW:09:39:39.998: 1: Max varchar2a size: 32757
    KUPW:09:39:39.998: 1: Max varchar2 size: 7990
    KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
    KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
    KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
    KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
    KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
    KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
    KUPW:09:39:40.005: 1: Master table             : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
    KUPW:09:39:40.005: 1: Metadata job mode        : SCHEMA_EXPORT
    KUPW:09:39:40.005: 1: Debug enable             : TRUE
    KUPW:09:39:40.005: 1: Profile enable           : FALSE
    KUPW:09:39:40.005: 1: Transportable enable     : FALSE
    KUPW:09:39:40.005: 1: Metrics enable           : FALSE
    KUPW:09:39:40.005: 1: db version               : 11.2.0.2.0
    KUPW:09:39:40.005: 1: job version              : 11.2.0.0.0
    KUPW:09:39:40.005: 1: service name             :
    KUPW:09:39:40.005: 1: Current Edition          : ORA$BASE
    KUPW:09:39:40.005: 1: Job Edition              :
    KUPW:09:39:40.005: 1: Abort Step               : 0
    KUPW:09:39:40.005: 1: Access Method            : AUTOMATIC
    KUPW:09:39:40.005: 1: Data Options             : 0
    KUPW:09:39:40.006: 1: Dumper directory         :
    KUPW:09:39:40.006: 1: Master only              : FALSE
    KUPW:09:39:40.006: 1: Data Only                : FALSE
    KUPW:09:39:40.006: 1: Metadata Only            : FALSE
    KUPW:09:39:40.006: 1: Estimate                 : BLOCKS
    KUPW:09:39:40.006: 1: Data error logging table :
    KUPW:09:39:40.006: 1: Remote Link              :
    KUPW:09:39:40.006: 1: Dumpfile present         : TRUE
    KUPW:09:39:40.006: 1: Table Exists Action      :
    KUPW:09:39:40.006: 1: Partition Options        : NONE
    KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
    KUPW:09:39:40.006: 1: Metadata Filter    Index : 1    Count : 10
    KUPW:09:39:40.006: 1:         1           Name - INCLUDE_USER
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:              Object Name - SCHEMA_EXPORT
    KUPW:09:39:40.006: 1:         2           Name - SCHEMA_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TIA')
    KUPW:09:39:40.006: 1:         3           Name - NAME_EXPR
    KUPW:09:39:40.006: 1:                    Value -  ='ACC_PAYMENT_SPECIFICATION'
    KUPW:09:39:40.006: 1:                   Object - TABLE
    KUPW:09:39:40.006: 1:         4           Name - INCLUDE_PATH_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TABLE')
    KUPW:09:39:40.006: 1:         5           Name - ORDERED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE_DATA
    KUPW:09:39:40.006: 1:         6           Name - NO_XML
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:                   Object - XMLSCHEMA/EXP_XMLSCHEMA
    KUPW:09:39:40.006: 1:         7           Name - XML_OUTOFLINE
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TABLE_DATA
    KUPW:09:39:40.006: 1:         8           Name - XDB_GENERATED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TRIGGER
    KUPW:09:39:40.007: 1:         9           Name - XDB_GENERATED
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE/RLS_POLICY
    KUPW:09:39:40.007: 1:         10           Name - PRIVILEGED_USER
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1: MD remap schema    Index : 4    Count : 0
    KUPW:09:39:40.007: 1: MD remap other     Index : 5    Count : 0
    KUPW:09:39:40.007: 1: MD Transform ddl   Index : 2    Count : 11
    KUPW:09:39:40.007: 1:         1           Name - DBA
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - JOB
    KUPW:09:39:40.007: 1:         2           Name - EXPORT
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:         3           Name - PRETTY
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         4           Name - SQLTERMINATOR
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         5           Name - CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         6           Name - REF_CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         7           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         8           Name - RESET_PARALLEL
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INDEX
    KUPW:09:39:40.007: 1:         9           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TYPE
    KUPW:09:39:40.007: 1:         10           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INC_TYPE
    KUPW:09:39:40.007: 1:         11           Name - REVOKE_FROM
    KUPW:09:39:40.008: 1:                    Value - SYSTEM
    KUPW:09:39:40.008: 1:                   Object - ROLE
    KUPW:09:39:40.008: 1: Data Filter        Index : 6    Count : 0
    KUPW:09:39:40.008: 1: Data Remap         Index : 7    Count : 0
    KUPW:09:39:40.008: 1: MD remap name      Index : 8    Count : 0
    KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
    KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
    KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:40.038: 1: Flags: 18
    KUPW:09:39:40.038: 1: Start sequence number:
    KUPW:09:39:40.038: 1: End sequence number:
    KUPW:09:39:40.038: 1: Metadata Parallel: 1
    KUPW:09:39:40.038: 1: Primary worker id: 1
    KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
    KUPW:09:39:40.041: 1: In procedure CREATE_MSG
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
    KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
    KUPW:09:39:40.046: 1: Created type completion for duplicate 62
    KUPW:09:39:40.046: 1: In procedure CREATE_MSG
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name:  Filter Value:
    KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    *** 2011-09-20 09:39:40.325
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    *** 2011-09-20 09:39:42.603
    KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
    KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
    KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
    KUPW:09:39:42.603: 1: Nothing to remap
    KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
    KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
    KUPW:09:39:42.620: 1: flags mask: 0
    KUPW:09:39:42.620: 1: dapi_possible_meth: 1
    KUPW:09:39:42.620: 1: data_size: 3019898880
    KUPW:09:39:42.620: 1: et_parallel: TRUE
    KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"                               <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
    KUPW:09:39:42.648: 1: l_client_bit_mask: 7
    KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12               <<<<< Here is says either (I thought that was method ?)  <<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
    KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
    KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
    KUPW:09:39:42.680: 1: 1 rows fetched
    KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0               <<<<<<<<<<<<<<<<  HERE IT SAYS METHOD = 4  and PARALLEL=12 (I'm not using the parallel parameter ???)  <<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
    KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
    KUPW:09:39:42.684: 1: Send table_data_varray called.  Count: 1
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.695: 1: Send table_data_varray returned.
    KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:42.695: 1: Old Seqno: 62 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:39:42.695: 1: Object count: 1
    KUPW:09:39:42.697: 1: 1 completed for 62
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:39:42.697: 1: In procedure CREATE_MSG
    KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
    KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
    KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
    KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
    *** 2011-09-20 09:40:01.798
    KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:40:01.798: 1: Object seqno fetched:
    KUPW:09:40:01.799: 1: Object path fetched:
    KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
    KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
    KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
    KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:40:01.815: 1: Old Seqno: 226 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:40:01.815: 1: Object count: 1
    KUPW:09:40:01.815: 1: 1 completed for 226
    KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called.  Handle: 200001
    KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
    KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
    KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
    KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:40:01.828: 1: Process order range: 1..1
    KUPW:09:40:01.828: 1: Method: 1
    KUPW:09:40:01.828: 1: Parallel: 1
    KUPW:09:40:01.828: 1: Creation level: 0
    KUPW:09:40:01.830: 1: BULK COLLECT called.
    KUPW:09:40:01.830: 1: BULK COLLECT returned.
    KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"            <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
    KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
    KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
    KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
    This is how I called expdp:
    expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300

    Hi there ...
    I have read the note - that's where I found the link to the trace note 286496.1 - on how to set up a trace.
    But I still need an explanation for the methods (1, 2, 4 etc.).
    regards
    Mette

  • DATA PUMP ISSUE

    Several users belong to the tablespace I want to export.
    I want to weed out only the tablespace and the tables belonging to one user.
    The user also wants some of the tables with all the data in them and some of the tables with no data in them.
    I tried all options with DATA_ONLY and METADATA_ONLY on both the import and export and have issues.
    I have tried exporting, and it gives me all the tables and a size to indicate it works, but on import - kazoom - all hell breaks loose. Junior DBA, need help.
    SQL> select owner, table_name, tablespace_name from dba_tables where tablespace_name='USER_SEG';
    ORAPROBE OSP_ACCOUNTS USER_SEG
    PATSY TRAPP USER_SEG
    PATSY TRAPPO USER_SEG
    PATSY TRAUDIT USER_SEG
    PATSY TRCOURSE USER_SEG
    PATSY TRCOURSEO USER_SEG
    PATSY TRDESC USER_SEG
    PATSY TREMPDATA USER_SEG
    PATSY TRFEE USER_SEG
    PATSY TRNOTES USER_SEG
    PATSY TROPTION USER_SEG
    PATSY TRPART USER_SEG
    PATSY TRPARTO USER_SEG
    PATSY TRPART_OLD USER_SEG
    PATSY TRPERCENT USER_SEG
    PATSY TRSCHOOL USER_SEG
    PATSY TRSUPER USER_SEG
    PATSY TRTRANS USER_SEG
    PATSY TRUSERPW USER_SEG
    PATSY TRUSRDAT USER_SEG
    PATSY TRVARDAT USER_SEG
    PATSY TRVERIFY USER_SEG
    PATSY TRAPPO_RESET USER_SEG
    PATSY TRAPP_RESET USER_SEG
    PATSY TRCOURSEO_RESET USER_SEG
    PATSY TRCOURSE_RESET USER_SEG
    PATSY TRPARTO_RESET USER_SEG
    PATSY TRPART_RESET USER_SEG
    PATSY TRTRANS_RESET USER_SEG
    PATSY TRVERIFY_RESET USER_SEG
    MAFANY TRVERIFY USER_SEG
    MAFANY TRPART USER_SEG
    MAFANY TRPARTO USER_SEG
    MAFANY TRAPP USER_SEG
    MAFANY TRAPPO USER_SEG
    MAFANY TRCOURSE USER_SEG
    MAFANY TRCOURSEO USER_SEG
    MAFANY TRTRANS USER_SEG
    JULIE R_REPOSITORY_LOG USER_SEG
    JULIE R_VERSION USER_SEG
    JULIE R_DATABASE_TYPE USER_SEG
    JULIE R_DATABASE_CONTYPE USER_SEG
    JULIE R_NOTE USER_SEG
    JULIE R_DATABASE USER_SEG
    JULIE R_DATABASE_ATTRIBUTE USER_SEG
    JULIE R_DIRECTORY USER_SEG
    JULIE R_TRANSFORMATION USER_SEG
    JULIE R_TRANS_ATTRIBUTE USER_SEG
    JULIE R_DEPENDENCY USER_SEG
    JULIE R_PARTITION_SCHEMA USER_SEG
    JULIE R_PARTITION USER_SEG
    JULIE R_TRANS_PARTITION_SCHEMA USER_SEG
    JULIE R_CLUSTER USER_SEG
    JULIE R_SLAVE USER_SEG
    JULIE R_CLUSTER_SLAVE USER_SEG
    JULIE R_TRANS_SLAVE USER_SEG
    JULIE R_TRANS_CLUSTER USER_SEG
    JULIE R_TRANS_HOP USER_SEG
    JULIE R_TRANS_STEP_CONDITION USER_SEG
    JULIE R_CONDITION USER_SEG
    JULIE R_VALUE USER_SEG
    JULIE R_STEP_TYPE USER_SEG
    JULIE R_STEP USER_SEG
    JULIE R_STEP_ATTRIBUTE USER_SEG
    JULIE R_STEP_DATABASE USER_SEG
    JULIE R_TRANS_NOTE USER_SEG
    JULIE R_LOGLEVEL USER_SEG
    JULIE R_LOG USER_SEG
    JULIE R_JOB USER_SEG
    JULIE R_JOBENTRY_TYPE USER_SEG
    JULIE R_JOBENTRY USER_SEG
    JULIE R_JOBENTRY_COPY USER_SEG
    JULIE R_JOBENTRY_ATTRIBUTE USER_SEG
    JULIE R_JOB_HOP USER_SEG
    JULIE R_JOB_NOTE USER_SEG
    JULIE R_PROFILE USER_SEG
    JULIE R_USER USER_SEG
    JULIE R_PERMISSION USER_SEG
    JULIE R_PROFILE_PERMISSION USER_SEG
    MAFANY2 TRAPP USER_SEG
    MAFANY2 TRAPPO USER_SEG
    MAFANY2 TRCOURSE USER_SEG
    MAFANY2 TRCOURSEO USER_SEG
    MAFANY2 TRPART USER_SEG
    MAFANY2 TRPARTO USER_SEG
    MAFANY2 TRTRANS USER_SEG
    MAFANY2 TRVERIFY USER_SEG
    MAFANY BIN$ZY3M1IuZyq3gQBCs+AAMzQ==$0 USER_SEG
    MAFANY BIN$ZY3M1Iuhyq3gQBCs+AAMzQ==$0 USER_SEG
    MAFANY MYUSERS USER_SEG
    I only want the tables from PATSY and want to move them to another database for her to use, and I want to keep the same tablespace name.
    The tables below should also have just the metadata and not the data:
    PATSY TRAPP USER_SEG
    PATSY TRAPPO USER_SEG
    PATSY TRAUDIT USER_SEG
    PATSY TRCOURSE USER_SEG
    PATSY TRCOURSEO USER_SEG
    PATSY TRDESC USER_SEG
    PATSY TREMPDATA USER_SEG
    PATSY TRFEE USER_SEG
    PATSY TRNOTES USER_SEG
    PATSY TROPTION USER_SEG
    PATSY TRPART USER_SEG
    PATSY TRPARTO USER_SEG
    PATSY TRPART_OLD USER_SEG
    PATSY TRPERCENT USER_SEG
    PATSY TRSCHOOL USER_SEG
    PATSY TRSUPER USER_SEG
    PATSY TRTRANS USER_SEG
    PATSY TRUSERPW USER_SEG
    PATSY TRUSRDAT USER_SEG
    The following are supposed to have all the data:
    PATSY TRVERIFY USER_SEG
    PATSY TRAPPO_RESET USER_SEG
    PATSY TRAPP_RESET USER_SEG
    PATSY TRCOURSEO_RESET USER_SEG
    PATSY TRCOURSE_RESET USER_SEG
    PATSY TRPARTO_RESET USER_SEG
    PATSY TRPART_RESET USER_SEG
    PATSY TRTRANS_RESET USER_SEG
    PATSY TRVERIFY_RESET USER_SEG
    I have tried all the following, and later got stuck in my little effort to document as I go along:
    Using Data Pump to export data
    First:
    Create a directory object or use one that already exists.
    I created a directory object and granted access to it:
    CREATE DIRECTORY "DATA_PUMP_DIR" AS '/home/oracle/my_dump_dir';
    GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO user;
    where user is the user such as sys, public, oracle etc.
    For example, to create a directory object named expdp_dir located at /u01/backup/exports, enter the following SQL statement:
    SQL> create directory expdp_dir as '/u01/backup/exports';
    then grant read and write permissions to the users who will be performing the data pump export and import:
    SQL> grant read, write on directory expdp_dir to system, user1, user2, user3;
    http://wiki.oracle.com/page/Data+Pump+Export+(expdp)+and+Data+Pump+Import(impdp)?t=anon
    To view directory objects that already exist
    -     use EM: under Administration tab to schema section and select Directory Objects
    -     DESC the views: ALL_DIRECTORIES, DBA_DIRECTORIES
    select * from all_directories;
    select * from dba_directories;
    Export the schema using expdp:
    expdp system/SCHMOE DUMPFILE=patsy_schema.dmp DIRECTORY=DATA_PUMP_DIR SCHEMAS=PATSY
    expdp system/PASS DUMPFILE=METADATA_ONLY_schema.dmp DIRECTORY=DATA_PUMP_DIR TABLESPACES = USER_SEG CONTENT=METADATA_ONLY
    expdp system/PASS DUMPFILE=data_only_schema.dmp DIRECTORY=DATA_PUMP_DIR TABLES=PATSY.TRVERIFY,PATSY.TRAPPO_RESET,PATSY.TRAPP_RESET,PATSY.TRCOURSEO_RESET, PATSY.TRCOURSE_RESET,PATSY.TRPARTO_RESET,PATSY.TRPART_RESET,PATSY.TRTRANS_RESET,PATSY.TRVERIFY_RESET CONTENT=DATA_ONLY

    You are correct: all the PATSY tables reside in the tablespace USER_SEG in a different database, and I want to move them to a new database - same version, 10g. I have created a user named patsy there also. The tablespace does not exist in the target I want to move them to - same thing with the objects and indexes; they don't exist in the target.
    So how can I move the schema and tablespace with all the tables, keeping the tablespace name and the same username patsy that I created to import into?
    Tried again and got some errors: this is better than last time, and that's because I used REMAP_TABLESPACE=USER_SEG:USERS.
    But I want to have the tablespace in the target called USER_SEG too. Do I have to create it first, or?
    [oracle@server1 ~]$ impdp system/blue99 remap_schema=patsy:test REMAP_TABLESPACE=USER_SEG:USERS directory=DATA_PUMP_DIR dumpfile=patsy.dmp logfile=patsy.log
    Import: Release 10.1.0.3.0 - Production on Wednesday, 08 April, 2009 11:10
    Copyright (c) 2003, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Release 10.1.0.3.0 - Production
    Master table "SYSTEM"."SYS_IMPORT_FULL_03" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_03": system/******** remap_schema=patsy:test REMAP_TABLESPACE=USER_SEG:USERS directory=DATA_PUMP_DIR dumpfile=patsy.dmp logfile=patsy.log
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
    . . imported "TEST"."TRTRANS" 10.37 MB 93036 rows
    . . imported "TEST"."TRAPP" 2.376 MB 54124 rows
    . . imported "TEST"."TRAUDIT" 1.857 MB 28153 rows
    . . imported "TEST"."TRPART_OLD" 1.426 MB 7183 rows
    . . imported "TEST"."TRSCHOOL" 476.8 KB 5279 rows
    . . imported "TEST"."TRAPPO" 412.3 KB 9424 rows
    . . imported "TEST"."TRUSERPW" 123.8 KB 3268 rows
    . . imported "TEST"."TRVERIFY_RESET" 58.02 KB 183 rows
    . . imported "TEST"."TRUSRDAT" 54.73 KB 661 rows
    . . imported "TEST"."TRNOTES" 51.5 KB 588 rows
    . . imported "TEST"."TRCOURSE" 49.85 KB 243 rows
    . . imported "TEST"."TRCOURSE_RESET" 47.60 KB 225 rows
    . . imported "TEST"."TRPART" 39.37 KB 63 rows
    . . imported "TEST"."TRPART_RESET" 37.37 KB 53 rows
    . . imported "TEST"."TRTRANS_RESET" 38.94 KB 196 rows
    . . imported "TEST"."TRCOURSEO" 30.93 KB 51 rows
    . . imported "TEST"."TRCOURSEO_RESET" 28.63 KB 36 rows
    . . imported "TEST"."TRPERCENT" 33.72 KB 1044 rows
    . . imported "TEST"."TROPTION" 30.10 KB 433 rows
    . . imported "TEST"."TRPARTO" 24.78 KB 29 rows
    . . imported "TEST"."TRPARTO_RESET" 24.78 KB 29 rows
    . . imported "TEST"."TRVERIFY" 20.97 KB 30 rows
    . . imported "TEST"."TRVARDAT" 14.13 KB 44 rows
    . . imported "TEST"."TRAPP_RESET" 14.17 KB 122 rows
    . . imported "TEST"."TRDESC" 9.843 KB 90 rows
    . . imported "TEST"."TRAPPO_RESET" 8.921 KB 29 rows
    . . imported "TEST"."TRSUPER" 6.117 KB 10 rows
    . . imported "TEST"."TREMPDATA" 0 KB 0 rows
    . . imported "TEST"."TRFEE" 0 KB 0 rows
    Processing object type TABLE_EXPORT/TABLE/GRANT/TBL_OWNER_OBJGRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    ORA-39083: Object type INDEX failed to create with error:
    ORA-00959: tablespace 'INDEX1_SEG' does not exist
    Failing sql is:
    CREATE UNIQUE INDEX "TEST"."TRPART_11" ON "TEST"."TRPART" ("TRPCPID", "TRPSSN") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 32768 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
    ORA-39083: Object type INDEX failed to create with error:
    ORA-00959: tablespace 'INDEX1_SEG' does not exist
    Failing sql is:
    CREATE INDEX "TEST"."TRPART_I2" ON "TEST"."TRPART" ("TRPLAST") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
    ORA-39083: Object type INDEX failed to create with error:
    ORA-00959: tablespace 'INDEX1_SEG' does not exist
    Failing sql is:
    CREATE INDEX "TEST"."TRPART_I3" ON "TEST"."TRPART" ("TRPEMAIL") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
    ORA-39083: Object type INDEX failed to create with error:
    ORA-00959: tablespace 'INDEX1_SEG' does not exist
    Failing sql is:
    CREATE INDEX "TEST"."TRPART_I4" ON "TEST"."TRPART" ("TRPPASSWORD") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "BILLZ"."TRCOURSE_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"BILLZ"', '"TRCOURSE_I1"', NULL, NULL, NULL, 232, 2, 232, 1, 1, 7, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "BILLZ"."TRTRANS_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"BILLZ"', '"TRTRANS_I1"', NULL, NULL, NULL, 93032, 445, 93032, 1, 1, 54063, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRAPP_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRAPP_I1"', NULL, NULL, NULL, 54159, 184, 54159, 1, 1, 4597, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRAPP_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRAPP_I2"', NULL, NULL, NULL, 54159, 182, 17617, 1, 2, 48776, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRAPPO_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRAPPO_I1"', NULL, NULL, NULL, 9280, 29, 9280, 1, 1, 166, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRAPPO_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRAPPO_I2"', NULL, NULL, NULL, 9280, 28, 4062, 1, 2, 8401, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRCOURSEO_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRCOURSEO_I1"', NULL, NULL, NULL, 49, 2, 49, 1, 1, 8, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TREMPDATA_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TREMPDATA_I1"', NULL, NULL, NULL, 0, 0, 0, 0, 0, 0, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TROPTION_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TROPTION_I1"', NULL, NULL, NULL, 433, 3, 433, 1, 1, 187, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_11" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I2" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I3" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I4" creation failed
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRPART_I5" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPART_I5"', NULL, NULL, NULL, 19, 1, 19, 1, 1, 10, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPARTO_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPARTO_I1"', NULL, NULL, NULL, 29, 1, 29, 1, 1, 1, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPARTO_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPARTO_I2"', NULL, NULL, NULL, 29, 14, 26, 1, 1, 1, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPART_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I2"', NULL, NULL, NULL, 7180, 19, 4776, 1, 1, 7048, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPART_I3" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I3"', NULL, NULL, NULL, 2904, 15, 2884, 1, 1, 2879, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPART_I4" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I4"', NULL, NULL, NULL, 363, 1, 362, 1, 1, 359, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPART_I5" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I5"', NULL, NULL, NULL, 363, 1, 363, 1, 1, 353, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRPART_11" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPART_11"', NULL, NULL, NULL, 7183, 29, 7183, 1, 1, 6698, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRPERCENT_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPERCENT_I1"', NULL, NULL, NULL, 1043, 5, 1043, 1, 1, 99, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRSCHOOL_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRSCHOOL_I1"', NULL, NULL, NULL, 5279, 27, 5279, 1, 1, 4819, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRVERIFY_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRVERIFY_I2"', NULL, NULL, NULL, 30, 7, 7, 1, 1, 1, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "STU"."TRVERIFY_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"STU"', '"TRVERIFY_I1"', NULL, NULL, NULL, 30, 12, 30, 1, 1, 1, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Job "SYSTEM"."SYS_IMPORT_FULL_03" completed with 29 error(s) at 11:17

  • 10g to 11gR2 Upgrade using Data Pump Import

    Hi,
    I am intending to move a 10g database from one windows server to another. However there is also a requirement to upgrade this database to 11gR2. Therefore I was going to combine the 2 in one movement by -
    1. take a full data pump export of the source 10g database
    2. create a new empty 11g database on the target environment
    3. import the dump file into the target database
    However I have a couple of queries running over in my mind about this approach -
    Q1. What happens with the SYSTEM and SYS objects from SYSTEM and SYSAUX during the import? Given that I will have in effect already created a new dictionary on the empty target database, will any import of SYSTEM or SYS simply produce error messages which should be ignored?
    Q2. should I use EXCLUDE on SYS and SYSTEM ( is this EXCLUDE better on the export or import side ) ?
    Q3. what happens if there are things such as scheduled jobs etc on the source system - since these would be stored in SYSTEM owned tables, how would I bring these across to the new target 11g database ?
    thanks,
    Jim

    Jim Thompson wrote:
    Hi,
    I am intending to move a 10g database from one windows server to another. However there is also a requirement to upgrade this database to 11gR2. Therefore I was going to combine the 2 in one movement by -
    1. take a full data pump export of the source 10g database
    2. create a new empty 11g database on the target environment
    3. import the dump file into the target database
    However I have a couple of queries running over in my mind about this approach -
    Q1. What happens with the SYSTEM and SYS objects from SYSTEM and SYSAUX during the import? Given that I will have in effect already created a new dictionary on the empty target database, will any import of SYSTEM or SYS simply produce error messages which should be ignored?
    You won't get errors related to the SYSTEM and SYSAUX tablespaces because these won't be exported at all. Schemas like SYS, CTXSYS, MDSYS and ORDSYS are never exported using Data Pump. That's why Oracle recommends not creating any objects under the SYS or SYSTEM schemas.
    Q2. should I use EXCLUDE on SYS and SYSTEM ( is this EXCLUDE better on the export or import side ) ?
    Not required: as the data dictionary schemas won't be exported, specifying EXCLUDE won't do anything.
    Q3. what happens if there are things such as scheduled jobs etc on the source system - since these would be stored in SYSTEM owned tables, how would I bring these across to the new target 11g database ?
    The DDL will get exported and imported into the new database. For example, if you have schema A and you had defined a job whose owner is A, then that information also gets imported. For user-defined jobs there are views like USER_SCHEDULER_JOBS etc., so don't worry about the user jobs; these will be created.
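    (For reference, a minimal sketch of verifying the jobs after the import, assuming the job owner is schema A:)
    SQL> SELECT owner, job_name, enabled FROM dba_scheduler_jobs WHERE owner = 'A';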
    Also see
    MOS note - Schema's CTXSYS, MDSYS and ORDSYS are Not Exported [ID 228482.1]
    Export system or sys schema
    I also ran the following test in my db, which shows I cannot export objects under the SYS schema:
    SQL> show user
    USER is "SYS"
    SQL> create table pump (id number);
    Table created.
    SQL> insert into pump values (1);
    1 row created.
    SQL> insert into pump values (2);
    1 row created.
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    C:\Documents and Settings\rnagi\My Documents\Ranjit Doc\Performance\SQL>expdp tables=sys.pump logfile=test.log
    Export: Release 11.2.0.1.0 - Production on Mon Feb 27 18:11:29 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Username: / as sysdba
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA tables=sys.pump logfi
    le=test.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    ORA-39166: Object SYS.PUMP was not found.
    ORA-31655: no data or metadata objects selected for job
    Job "SYS"."SYS_EXPORT_TABLE_01" completed with 2 error(s) at 18:11:57

  • Problem Exporting 'USER' Tablespace Metadata

    I am having problem using Data Pump Export to export the 'USERS' tablespace metadata.
    When I make the USERS tablespace read only and run the Data Export job, the job aborts with "ORA-01647: tablespace 'USERS' is read only, cannot allocate space in it."
    But then when I make the USERS tablespace read and write, the job aborts with "ORA-29335: tablespace 'USERS' is not read only."
    I've been caught in this CATCH-22 for the 2nd day now, and I could really use some help. I'm on a Windows 32 platform running Oracle 10g, and here is a copy of the Data Pump Export script/command that I am running at the operating system prompt, which works fine for other tablespaces' metadata, like 'EXAMPLE':
    $ expdp myusername/mypassword TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=Y DIRECTORY=dtpump DUMPFILE=expdp_users.dmp LOGFILE=expdp_users.log
    Thanks!

    Hi,
    The problem is that Data Pump creates a table called the master table. It creates this in the schema running the job. If this schema has a default tablespace that you are trying to transport, then this won't work. You will have to use a different schema that does not use the set of tablespaces that you are transporting, or alter the user that is running the Data Pump job to have a different default tablespace.
    So, basically, Data Pump requires write access to the tablespace that the user uses, and transportable requires the tablespace to be read only.
    If it were me, I would do this:
    sqlplus username/password
    alter user username default tablespace system;
    exit
    run your expdp command
    sqlplus username/password
    alter user username default tablespace orig_tablespace;
    exit
    Dean

  • While using data pump (impdp) how to rename references within objects?

    using 10g;
    What I want to accomplish is to change schema & tablespace ownership using the data pump method via the command line; I have had success using the command line for expdp / impdp. The problem is that there are objects that reference the old schemas that do NOT get updated (e.g. a procedure may reference usr1.table1 in its PL/SQL statement), and this is where I have been unsuccessful. Does anyone know of a way to change references from the old schema to the new schema name in objects (procedures, views, etc.) via the command line?
    This is what I currently use, which works to change schema and tablespace but will not change references within my objects:
    expdp system/<pass> schemas=usr1,usr2 DIRECTORY=dp_dir DUMPFILE=dataPump_BothSchemas.dmp LOGFILE=expdpAllSchema.log parallel=2
    impdp system/<pass> DIRECTORY=dp_dir DUMPFILE=dataPump_BothSchemas.dmp LOGFILE=impbothSchToEE.log remap_schema=usr1:newUsr1,usr2:newUsr2 remap_tablespace=old_ts_tables:new_ts_tables full=y
    Thanks!
    p.s. I have accomplished this using Enterprise Manager.

    (e.g. procedure may reference usr1.table1 in the PL/SQL statement) If you hard-coded such references in a stored procedure, you have to correct them manually. Consider using synonyms if your stored procedures reference other schemas' objects.
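    (For reference, a minimal sketch of hunting down the hard-coded references after the import, assuming the old schema was USR1:)
    SQL> SELECT owner, name, type, line FROM dba_source WHERE UPPER(text) LIKE '%USR1.%';
    Any hits can then be fixed by editing the source, or by creating synonyms so the code can use unqualified names.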

  • Data Pump with parallel data ending up in 1st file!

    Oracle 10g 10.2.0.3 EE
    Ran the following command on a 16 core HPUX PA-RISC machine:
    expdp normaluser/password@RDSPOC FULL=y directory=DMPDIR parallel=12 dumpfile=exp_RDSPOC_2nd_%U.dmp logfile=exp_RDSPOC_2nd.log
    Database size, approx 900Gig of data
    All things looked good at first. All cores close to 100% utilized, Disk also at 100% utilized
    1h23m later I had 11 files all about the same size +- 40Gig
    The first dump file continued to grow. After another 60 hours the first file is approx 500 Gig and growing (status says 85% complete)
    One core is running max, and disk utilization is about 15%
    Note, I am not sys but a normal user with full export privilege (If that could make a difference)
    How do I get it to keep the machine running all cores and disks as hard as possible?
    Thanks

    Hi,
    Metadata is never unloaded in parallel, but is sometimes loaded in parallel.
    See Parallel Capabilities of Oracle Data Pump (Doc ID 365459.1)
    Also check the status of expdp: if there is e.g. one big table, only one worker will still have data to pump.
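    For example, a minimal sketch of attaching to the running job to see the per-worker status (assuming the default job name of a full export):
    expdp normaluser/password@RDSPOC attach=SYS_EXPORT_FULL_01
    Export> STATUS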
    HTH,
    Peter

  • Best Practice for data pump or import process?

    We are trying to copy existing schema to another newly created schema. Used export data pump to successfully export schema.
    However, we encountered some errors when importing dump file to new schema. Remapped schema and tablespaces, etc.
    Most errors occur in PL/SQL... For example, we have views like below in original schema:
    CREATE VIEW oldschema.myview AS
    SELECT col1, col2, col3
    FROM oldschema.mytable
    WHERE col1 = 10
    Quite a few functions, procedures, packages and triggers contain "oldschema.mytable" in DML (insert, select, update) statements, for example.
    Getting the following errors in import log:
    ORA-39082: Object type ALTER_FUNCTION:"TEST"."MYFUNCTION" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TEST"."MYPROCEDURE" created with compilation warnings
    ORA-39082: Object type VIEW:"TEST"."MYVIEW" created with compilation warnings
    ORA-39082: Object type PACKAGE_BODY:"TEST"."MYPACKAGE" created with compilation warnings
    ORA-39082: Object type TRIGGER:"TEST"."MYTRIGGER" created with compilation warnings
    A lot of actual errors/invalid objects in new schema are due to:
    ORA-00942: table or view does not exist
    My question is:
    1. What can we do to fix those errors?
    2. Is there a better way to do the import with such condition?
    3. Update PL/SQL and recompile in new schema? Or update in original schema first and export?
    Your help will be greatly appreciated!
    Thank you!

    I routinely get many (MANY) errors as follows and they always compile when I recompile using utlrp.
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_LASTOUTPUNCH" created with compilation warnings
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_REFPERIODENDFOREMP" created with compilation warnings
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_TAILOFFSECS" created with compilation warnings
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."FN_GDAPREPORTGATHERER" created with compilation warnings
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ABSENT_EXCEPTION" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_BAL_PROJ" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_DETAILS" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_SUMMARY" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACTUAL_SCHEDULE" created with compilation warnings
    It works. In all my databases: peoplesoft, kronos, and others...
    I should qualify that it may still be necessary to debug specific problems, but the most common, typical problems are easily resolved using utlrp.sql. The usual leftovers I run into are caused by database links that point to another database - for example, a production database that we firewall our test and development databases from linking to (for obvious reasons).
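    For reference, the typical recompile step after such an import (run on the target as SYSDBA; the query afterwards just confirms that nothing was left invalid):
    sqlplus / as sysdba
    SQL> @?/rdbms/admin/utlrp.sql
    SQL> SELECT owner, object_name, object_type FROM dba_objects WHERE status = 'INVALID';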

  • Migration from 10g to 12c using data pump

    hi there, while I've used data pump at the schema level before, I'm rather new at full database imports.
    we are attempting a full database migration from 10.2.0.4 to 12c using the full database data pump method over db link.
    the DBA has advised that we avoid moving SYSTEM and SYSAUX objects. But when initially reviewing the documentation, it appeared that these objects would not be exported from the source system given TRANSPORTABLE=NEVER. Can someone confirm this? The export/import log refers to objects that I believed would not be targeted:
    23-FEB-15 19:41:11.684:
    Estimated 3718 TABLE_DATA objects in 77 seconds
    23-FEB-15 19:41:12.450: Total estimation using BLOCKS method: 52.93 GB
    23-FEB-15 19:41:14.058: Processing object type DATABASE_EXPORT/TABLESPACE
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"UNDOTBS1" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"SYSAUX" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"TEMP" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"USERS" already exists
    23-FEB-15 20:10:33.200:
    Completed 96 TABLESPACE objects in 1759 seconds
    23-FEB-15 20:10:33.208: Processing object type DATABASE_EXPORT/PROFILE
    23-FEB-15 20:10:33.445:
    Completed 7 PROFILE objects in 1 seconds
    23-FEB-15 20:10:33.453: Processing object type DATABASE_EXPORT/SYS_USER/USER
    23-FEB-15 20:10:33.842:
    Completed 1 USER objects in 0 seconds
    23-FEB-15 20:10:33.852: Processing object type DATABASE_EXPORT/SCHEMA/USER
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OUTLN" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"ANONYMOUS" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OLAPSYS" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"MDDATA" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"SCOTT" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"LLTEST" already exists
    23-FEB-15 20:10:52.372:
    Completed 1140 USER objects in 19 seconds
    23-FEB-15 20:10:52.375: Processing object type DATABASE_EXPORT/ROLE
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"SELECT_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"EXECUTE_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"DELETE_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.256: ORA-31684: Object type ROLE:"RECOVERY_CATALOG_OWNER" already exists
    any insight most appreciated.

    Schemas SYS, CTXSYS, MDSYS and ORDSYS are not exported by exp/expdp - see "Schema's SYS, CTXSYS, MDSYS and ORDSYS are Not Exported using exp/expdp" (Doc ID 228482.1).
    I suppose the 12c software was already installed and the database already created - so when you imported, you would get these "already exists" errors.
    Whenever a database is created after the software install, SYSTEM, SYSAUX, the SYS user and the other built-in users and tablespaces exist by default.
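    If the "already exists" messages bother you, you can suppress most of them by excluding object types the target database already provides - a sketch (the link name and directory are whatever you defined for your job):
    impdp system/<pass> NETWORK_LINK=src10g FULL=Y EXCLUDE=TABLESPACE DIRECTORY=dp_dir LOGFILE=full_imp.log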

  • How to exclude statistic using Data Pump API?

    How to exclude all statistics while exporting data using Oracle Data Pump API (DBMS_DATAPUMP package)?

    You would call the metadata filter API like this (note the named-parameter arrows; the path list is spliced into an IN clause, so the path name needs its own embedded quotes):
    dbms_datapump.metadata_filter(
      handle => your_handle_here,
      name   => 'EXCLUDE_PATH_LIST',
      value  => '''STATISTICS''');
    Hope this helps.
    Dean
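    For context, a fuller sketch of the same call inside a complete export job (the directory object, dump file name and schema here are hypothetical):
    DECLARE
      h  NUMBER;
      js VARCHAR2(30);
    BEGIN
      -- define a schema-mode export job
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott_nostats.dmp', directory => 'DP_DIR');
      -- pick the schema to export
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');
      -- exclude all statistics from the dump
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'EXCLUDE_PATH_LIST', value => '''STATISTICS''');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h, job_state => js);
    END;
    /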

  • Error connecting as DBA using Data Pump Within Transportable Modules

    Hi all
    I am using OWB 10g R2 and trying to set up a Transportable Module to test the Oracle Data Pump utility. I have followed the user manual in terms of the relevant grants and permissions needed to use this functionality, have successfully connected to both my source and target databases, and have created my transportable module, which will extract six tables from a source database on 10.2.0.1.0 to my target schema in my warehouse, also on 10.2.0.1.0. When I come to deploy/execute the transportable module, it fails with the following error:
    RPE-01023: Failed to establish connection to target database as DBA
    Now we have even gone as far as granting the DBA role to the user within our target, but we still get the same error, so I assume it is something to do with the connection of the Transportable Target Module Location and that it needs to connect as DBA somehow in the connect string. Has anyone experienced this issue, and is there a way of creating the location connection that is not documented?
    There is no mention of this anywhere within the manual, and I have even followed the example from http://www.rittman.net/archives/2006_04.html; my target user has the privileges detailed in the manual, as listed below.
    The user must not be SYS. It must have the ALTER TABLESPACE privilege and the IMP_FULL_DATABASE role, must have the CREATE MATERIALIZED VIEW privilege with ADMIN option, and must be a Warehouse Builder repository database user.
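    Those documented requirements translate into grants along these lines (the user name is hypothetical):
    GRANT ALTER TABLESPACE TO owb_target_user;
    GRANT IMP_FULL_DATABASE TO owb_target_user;
    GRANT CREATE MATERIALIZED VIEW TO owb_target_user WITH ADMIN OPTION;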
    Any help would be appreciated before i raise a request with Oracle

    Did you ever find a resolution? We are experiencing the same issue...
    thanks
    OBX

  • Database Upgrade using Data Pump

    Hi,
    I am moving my database from a Windows 2003 server to a Windows 2007 server. At the same time I am upgrading this database from 10g to 11gR2(11.2.0.3).
    Therefore I am using the export/import method of upgrade (via Data Pump, not the old exp/imp).
    I have successfully exported my source database and have created the empty shell database ready to take the import. However, I have a couple of queries.
    Q1. Regarding all the SYSTEM objects from the source database: how will they import, given that the new target database already has a SYSTEM tablespace?
    I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However, should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best?
    Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces, or leave this to the import but use the REMAP_DATAFILE option? What is everyone's experience as to which is the better way to go? Again, if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces?
    Q3. These 2 databases are on the same network, so in theory, instead of a manual export, a copy of the dump file to the new server and then the import, I could use a Network Link for import. I was just wondering whether there are any cons to this method over using an explicit export dump file?
    thanks,
    Jim

    Jim,
    Q1. regarding all the SYSTEM objects from the source database. How will they import given that the new target database already has a SYSTEM tablespace
    I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best?
    If all you have is the base database and nothing else created, then you can do FULL=Y. In fact, this is probably what you want. The SYSTEM tablespace will already be there, so when Data Pump tries to create it, that one CREATE statement will simply fail - nothing else will fail. In most cases your system tables will already be there too, and that is OK as well. If you do schema-mode imports instead, you will miss out on some of the other objects.
    Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces or leave this to the import but use the REMAP_DATAFILE option - what is everyone's experience as to which is the better way to go? Again if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces?
    If the directory structure is different (which it usually is), then there is no easier way. You can run impdp with SQLFILE and INCLUDE=TABLESPACE: this writes all of the CREATE TABLESPACE commands to a text file, which you can edit to change whatever you want. You can then tell Data Pump to skip tablespace creation on the real import by using EXCLUDE=TABLESPACE - see the sketch below.
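    A minimal sketch of that two-step approach (the file and directory names are examples):
    impdp system/<password> DIRECTORY=dp_dir DUMPFILE=full.dmp SQLFILE=tbs_ddl.sql INCLUDE=TABLESPACE
    (edit tbs_ddl.sql to point the datafiles at the new directory structure, run it in the target database, then:)
    impdp system/<password> DIRECTORY=dp_dir DUMPFILE=full.dmp FULL=Y EXCLUDE=TABLESPACE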
    Q3. These 2 databases are on the same network, so in theory, instead of a manual export, copy of the dump file to the new server and then the import, I could use a Network Link for import. I was just wondering whether there are any cons to this method over using the explicit export dump file?
    The only con could be a slow network. That will make it slower, but since you would have to copy the dump file over the same network anyway, you will still see the same basic traffic. The pro is that you don't need the extra disk space. Here is how I look at it:
    1. you need XX GB for the source database
    2. you need YY GB for the source dumpfile
    3. you need YY GB for the target dumpfile that you copy
    4. you need XX GB for the target database.
    By doing a network import you get rid of the 2*YY GB for the dump files.
    Dean
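    For reference, a network-mode import along the lines Dean describes might look like this (the database link and directory names are hypothetical; the link is created in the target database and points back at the source):
    CREATE DATABASE LINK src10g CONNECT TO system IDENTIFIED BY <password> USING 'SOURCE_TNS_ALIAS';
    impdp system/<password> NETWORK_LINK=src10g FULL=Y DIRECTORY=dp_dir LOGFILE=net_full_imp.log
    No DUMPFILE is needed - the data moves straight over the link.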
