Does data pump really replace exp/imp?

hi guys,
I've read some people saying we should be using Data Pump instead of exp/imp. But as far as I can see, if I have a database behind a firewall at some other place, and cannot connect to that database directly but need to get some data across, then Data Pump is useless for me and I can only exp and imp the data.

OracleGuy777 wrote:
...and i guess this means that data pump does not replace exp and imp.
Well, depending on your database version, it does.
"+Original Export is desupported for general use as of Oracle Database 11g. The only supported use of original Export in 11g is backward migration of XMLType data to a database version 10g release 2 (10.2) or earlier. Therefore, Oracle recommends that you use the new Data Pump Export and Import utilities, except in the following situations which require original Export and Import:+
+* You want to import files that were created using the original Export utility (exp).+
+* You want to export files that will be imported using the original Import utility (imp). An example of this would be if you wanted to export data from Oracle Database 10g and then import it into an earlier database release.+"
http://download.oracle.com/docs/cd/E11882_01/server.112/e10701/original_export.htm#SUTIL3634
Coming back to your problem: as already suggested, with the NETWORK_LINK parameter you are able to export data from the source and import it directly into your target db without the need of any intermediate file.
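A minimal sketch of such a network-mode import (assuming a database link named src_db_link pointing at the source and a directory object dpump_dir on the target for the logfile; all names here are placeholders):
impdp system/password SCHEMAS=scott NETWORK_LINK=src_db_link DIRECTORY=dpump_dir LOGFILE=net_imp.log
No dump file is written in this mode; the rows travel over the database link.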
Nicolas.

Similar Messages

  • How to export resource manager consumer groups using Data Pump?

    Hi, there,
    Is there any way to export RM Consumer Groups/Mappings/Plans as part of a Data Pump export/import? I was wondering because I don't fancy doing it manually and I don't see the object in the database_export_objects view. I can create them manually, but was wondering whether there's an easier, less involved way of doing it?
    Mark

    Hi,
I have not tested it, but I think a full db export/import (using Data Pump or traditional exp/imp) may do this (although a full exp/imp might not be feasible for you), because full database mode also exports/imports SYS schema objects, so there is a chance that it will also import the resource groups and resource plans.
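If you want to test that, a minimal sketch of a full export (the directory object and file names are placeholders; the directory must exist and be writable):
expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full.dmp LOGFILE=full_exp.log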
    Salman

  • Data pump issue for oracle 10G in window2003

    Hi Experts,
I am trying to run Data Pump on Oracle 10g on a Windows 2003 server.
I got an error:
    D:\>cd D:\oracle\product\10.2.0\SALE\BIN
    D:\oracle\product\10.2.0\SALE\BIN>expdp system/xxxxl@sale full=Y directory=du
    mpdir dumpfile=expdp_sale_20090302.dmp logfile=exp_sale_20090302.log
    Export: Release 10.2.0.4.0 - Production on Tuesday, 03 March, 2009 8:05:50
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Produc
    tion
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-31626: job does not exist
    ORA-31650: timeout waiting for master process response
However, I can run exp commands and they work well.
What is wrong with my Data Pump?
    Thanks
    JIM

    Hi Anand,
I did not see any error log at that time. Actually, it did not work any more. I will test it again, based on your email, after the exp is done.
Based on new testing, I got the errors below:
    ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 1 with process name "DW01" prematurely terminated
    ORA-31671: Worker process DW01 had an unhandled exception.
    ORA-04030: out of process memory when trying to allocate 4108 bytes (PLS non-lib hp,pdzgM60_Make)
    ORA-06512: at "SYS.KUPC$QUEUE_INT", line 277
    ORA-06512: at "SYS.KUPW$WORKER", line 1366
    ORA-04030: out of process memory when trying to allocate 65036 bytes (callheap,KQL tmpbuf)
    ORA-06508: PL/SQL: could not find program unit being called: "SYS.KUPC$_WORKERERROR"
    ORA-06512: at "SYS.KUPW$WORKER", line 13360
    ORA-06512: at "SYS.KUPW$WORKER", line 15039
    ORA-06512: at "SYS.KUPW$WORKER", line 6372
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.DISPATCH_WORK_ITEMS while calling DBMS_METADATA.FETCH_XML_CLOB [PROCOBJ:"SALE"."SQLSCRIPT_2478179"]
    ORA-06512: at "SYS.KUPW$WORKER", line 7078
    ORA-04030: out of process memory when trying to allocate 4108 bytes (PLS non-lib hp,pdzgM60_Make)
    ORA-06500: PL/SQL: storage error
    ORA-04030: out of process memory when trying to allocate 16396 bytes (koh-kghu sessi,pmucpcon: tds)
    ORA-04030: out of process memory when trying to allocate 16396 bytes (koh-kghu sessi,pmucalm coll)
    Job "SYSTEM"."SYS_EXPORT_FULL_01" stopped due to fatal error at 14:41:36
    ORA-39014: One or more workers have prematurely exited.
The trace file shows:
    *** 2009-03-03 14:20:41.500
    *** ACTION NAME:() 2009-03-03 14:20:41.328
    *** MODULE NAME:(oradim.exe) 2009-03-03 14:20:41.328
    *** SERVICE NAME:() 2009-03-03 14:20:41.328
    *** SESSION ID:(159.1) 2009-03-03 14:20:41.328
    Successfully allocated 7 recovery slaves
    Using 157 overflow buffers per recovery slave
    Thread 1 checkpoint: logseq 12911, block 2, scn 7355467494724
    cache-low rba: logseq 12911, block 251154
    on-disk rba: logseq 12912, block 221351, scn 7355467496281
    start recovery at logseq 12911, block 251154, scn 0
    ----- Redo read statistics for thread 1 -----
    Read rate (ASYNC): 185319Kb in 1.73s => 104.61 Mb/sec
    Total physical reads: 189333Kb
    Longest record: 5Kb, moves: 0/448987 (0%)
    Change moves: 1378/5737 (24%), moved: 0Mb
    Longest LWN: 1032Kb, moves: 45/269 (16%), moved: 41Mb
    Last redo scn: 0x06b0.9406fb58 (7355467496280)
    ----- Recovery Hash Table Statistics ---------
    Hash table buckets = 32768
    Longest hash chain = 3
    Average hash chain = 35384/25746 = 1.4
    Max compares per lookup = 3
    Avg compares per lookup = 847056/876618 = 1.0
    *** 2009-03-03 14:20:46.062
    KCRA: start recovery claims for 35384 data blocks
    *** 2009-03-03 14:21:02.171
    KCRA: blocks processed = 35384/35384, claimed = 35384, eliminated = 0
    *** 2009-03-03 14:21:02.531
    Recovery of Online Redo Log: Thread 1 Group 2 Seq 12911 Reading mem 0
    *** 2009-03-03 14:21:04.718
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 12912 Reading mem 0
    *** 2009-03-03 14:21:16.296
    ----- Recovery Hash Table Statistics ---------
    Hash table buckets = 32768
    Longest hash chain = 3
    Average hash chain = 35384/25746 = 1.4
    Max compares per lookup = 3
    Avg compares per lookup = 849220/841000 = 1.0
    *** 2009-03-03 14:21:28.468
    tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x1)
    tkcrrpa: (WARN) Failed initial attempt to send ARCH message (message:0x1)
    *** 2009-03-03 14:26:25.781
    kwqmnich: current time:: 14: 26: 25
    kwqmnich: instance no 0 check_only flag 1
    kwqmnich: initialized job cache structure
    ktsmgtur(): TUR was not tuned for 360 secs
    Windows Server 2003 Version V5.2 Service Pack 2
    CPU : 8 - type 586, 4 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:7447M/8185M, Ph+PgF:6833M/9984M, VA:385M/3071M
    Instance name: vmsdbsea
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 0
    Windows thread id: 2460, image: ORACLE.EXE (SHAD)
    Dynamic strand is set to TRUE
    Running with 2 shared and 18 private strand(s). Zero-copy redo is FALSE
    *** 2009-03-03 08:06:51.921
    *** ACTION NAME:() 2009-03-03 08:06:51.905
    *** MODULE NAME:(expdp.exe) 2009-03-03 08:06:51.905
    *** SERVICE NAME:(xxxxxxxxxx) 2009-03-03 08:06:51.905
    *** SESSION ID:(118.53238) 2009-03-03 08:06:51.905
    SHDW: Failure to establish initial communication with MCP
    SHDW: Deleting Data Pump job infrastructure
Is it a system memory issue for Data Pump? My exp works well.
How can I fix this issue?
    JIM
    Edited by: user589812 on Mar 3, 2009 5:07 PM
    Edited by: user589812 on Mar 3, 2009 5:22 PM

  • Data pump include exclude statistics

I am new to Oracle. Database statistics are stored in the data dictionary (SYS schema), so how can we include and exclude statistics (SYS objects) when doing a Data Pump export/import?

These statistics are for your schema objects (in most cases tables). You have permission to create/update your schema objects' statistics, i.e. to analyze them.
This link gives you a bit more info on what simple export/import does when you use the STATISTICS parameter.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#sthref2786
When using Data Pump export, statistics are always saved for tables, and if the source table has statistics, they are imported.
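If you do not want them, Data Pump accepts statistics as an exclude filter; a minimal sketch (schema, directory and file names are placeholders):
expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp EXCLUDE=STATISTICS
The same EXCLUDE=STATISTICS filter works on impdp as well.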

  • Migrate exp/imp into data pump

    Hi Experts,
we use exp/imp to export 150 GB of data to support Streams, and it has worked in the past.
As I understand it, Data Pump will speed up the export.
How do I migrate exp/imp syntax into Data Pump?
    my imp/exp as
    exp USERID=SYSTEM/tiger@test OWNER=tiger FILE=D:\Oraclebackup\CLS\exports\test.dmp LOG=D:\Oraclebackup\test\exports\logs\exportTables.log OBJECT_CONSISTENT=Y STATISTICS=NONE
    imp USERID=SYSTEM/tiger FROMUSER=tiger TOUSER=tiger CONSTRAINTS=Y FILE=test.dmp IGNORE=Y COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
    Thanks
    Jim

You are right - expdp is faster and more useful than the classic exp utility.
A few things to note about EXPDP:
- it can write dump files only locally on the current server
- you must create a directory object in the database and
grant read,write privileges to the user
    For Example:
    create directory dump as 'd:\export\hr';
    grant read,write on directory dump to hr;
    Then we may do export:
    expdp hr/hr DIRECTORY=dump DUMPFILE=test.dmp LOGFILE=exportTables.log
    After export we will see two files in directory 'd:\export\hr'
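The corresponding import could look like this (a sketch; SCHEMAS=tiger stands in for your FROMUSER/TOUSER pair, TABLE_EXISTS_ACTION=APPEND is roughly the counterpart of IGNORE=Y, and the directory object is assumed to exist on the target):
impdp system/tiger DIRECTORY=dump DUMPFILE=test.dmp LOGFILE=importTables.log SCHEMAS=tiger TABLE_EXISTS_ACTION=APPEND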
For other features, see expdp help=y and the Oracle documentation.
    Edited by: Vladandr on 15.02.2009 22:07

  • Exp/imp data pump

I exported tables in 10g using the exp (normal export) command.
Can I use impdp (Data Pump) to do the import?

There is no exclude or include option in the traditional export/import (exp/imp) utility. That is only provided in Data Pump (expdp/impdp) to exclude or include database objects while exporting or importing.
All you can do as a workaround is create the structure of the table in the target database/schema and then do the import; during the import, if the table is found in the target schema, that table will not be imported. So don't use ignore=y - if you do, even if the table is found during the import, the rows will be imported anyway. Hope you got what I mean.
    Regards,
    Sabdar Syed.

  • Can we use Data Pump to export data, using a SQL query, doing a join

    Folks,
    I have a quick question.
    Using Oracle 10g R2 on Solaris 10.
    Can Data Pump be used to export data, using a SQL query which is doing a join between 3 tables ?
    Thanks,
    Ashish

    Hello,
No, this is from expdp help=Y:
QUERY                 Predicate clause used to export a subset of a table.
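For reference, QUERY usage looks like this (a sketch; the table, directory and predicate are placeholders, and the quoting depends on your OS shell - putting QUERY in a parameter file via PARFILE avoids the escaping):
expdp hr/hr TABLES=employees DIRECTORY=dpump_dir DUMPFILE=emp.dmp QUERY=employees:\"WHERE department_id > 10\"
It filters rows of a single table; it cannot join tables. A common workaround is to materialize the join into a staging table (e.g. CREATE TABLE ... AS SELECT) and export that instead.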
    Regards

  • HT1386 I have a new computer and need to sync my iPod touch.  I get a message that says "All existing apps and data will be replaced."  Does this mean progress in the games will be lost?

    I have a new computer and need to sync my iPod touch.  I get a message telling me that all existing apps and their data will be replaced with apps from this iTunes library.  Does that mean I lose all game progress?

    XansMamaw wrote:
    ...  Does that mean I lose all game progress?
    Yes.
What you need to do is transfer the iTunes library from your old computer to your new computer...
Copy your ENTIRE iTunes folder to an external drive... and then from the external drive to your new computer...
    Backup iTunes to an External Drive
    http://support.apple.com/kb/HT1751
    An Added Bonus is that you will have a Backup of iTunes.

  • Exp/Imp alternatives for large amounts of data (30GB)

    Hi,
I've come into a new role where various test databases are to be 'refreshed' each night with cleansed copies of production data. They have been using the imp/exp utilities with 10g R2. The export process is ok, but what's killing us is the time it takes to transfer... unzip... and import 32GB .dmp files. I'm looking for suggestions on what we can do to reduce these times. Currently the import takes 4 to 5 hours.
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities. Are 'Transportable Tablespaces' the next logical solution? I've been reading up on them and could start prototyping/testing the process next week. What else is in Oracle's toolbox I should be considering?
    Thanks
    brian

    Hi,
I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities
Data Pump will be faster for a couple of reasons. It uses direct path to unload the data. Data Pump also supports parallel processes, so while one process is exporting metadata, the other processes can be exporting the data. In 11g, you can also compress the dumpfiles as you are exporting. (Both data and metadata compression are available in 11g; I think metadata compression is available in 10.2.) This would remove your zip step.
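A minimal sketch of a parallel, compressed export (names are placeholders; PARALLEL wants %U in the dumpfile template so each worker gets its own file, and COMPRESSION=ALL is 11g - on 10.2 only COMPRESSION=METADATA_ONLY exists):
expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full%U.dmp PARALLEL=4 COMPRESSION=ALL LOGFILE=full_exp.log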
    As far as transportable tablespace, yes, this is an option. There are some requirements, but if it works for you, all you will be exporting will be the metadata and no data. The data is copied from the source to the target by way of datafiles. One of the biggest requirements is that the tablespaces need to be read only while the export job is running. This is true for both exp/imp and expdp/impdp.

  • Database Upgrade using Data Pump

    Hi,
    I am moving my database from a Windows 2003 server to a Windows 2007 server. At the same time I am upgrading this database from 10g to 11gR2(11.2.0.3).
    therefore I am using the export / import method of upgrade ( via Data Pump not the old exp/imp ).
I have successfully exported my source database and have created the empty shell database ready to take the import. However I have a couple of queries
    Q1. regarding all the SYSTEM objects from the source database. How will they import given that the new target database already has a SYSTEM tablespace
    I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best ?
    Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces or leave this to the import but use the REMAP DATAFILE option - what is everyone's experience as to which is the better way to go ? Again if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces
Q3. these 2 databases are on the same network, so in theory instead of a manual export, copy of the dump file to the new server and then the import, I could use a Network Link for Import. I was just wondering whether there are any cons of this method over using the explicit export dump file ?
    thanks,
    Jim

    Jim,
    Q1. regarding all the SYSTEM objects from the source database. How will they import given that the new target database already has a SYSTEM tablespace
I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best ?
If all you have is the base database and nothing created, then you can do the full=y. In fact, this is probably what you want. The system tablespace will be there, so when Data Pump tries to create it, that one create statement will just fail. Nothing else will fail. In most cases, your system tables will already be there, and this is ok too. If you do schema mode imports, you will miss out on some of the other stuff.
Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces or leave this to the import but use the REMAP_DATAFILE option - what is everyone's experience as to which is the better way to go ? Again if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces
If the directory structure is different (which it usually is), then there is no easier way. You can run impdp with sqlfile and say include=tablespace. This will give you all of the create tablespace commands in a text file, and you can edit the text file to change whatever you want to change. You can tell Data Pump to skip the tablespace creation by using exclude=tablespace.
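A sketch of that two-step approach (directory and file names are placeholders):
impdp system/password DIRECTORY=dpump_dir DUMPFILE=full.dmp SQLFILE=create_ts.sql INCLUDE=TABLESPACE
Edit create_ts.sql, run it in the target database, then:
impdp system/password DIRECTORY=dpump_dir DUMPFILE=full.dmp FULL=Y EXCLUDE=TABLESPACE LOGFILE=full_imp.log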
Q3. these 2 databases are on the same network, so in theory instead of a manual export, copy of the dump file to the new server and then the import, I could use a Network Link for Import. I was just wondering whether there are any cons of this method over using the explicit export dump file ?
The only con could be if you have a slow network. This will make it slower, but if you have to copy the dumpfile over the same network, then you will still see the same basic traffic. The pro is that you don't need extra disk space. Here is how I look at it.
    1. you need XX GB for the source database
    2. you need YY GB for the source dumpfile
    3. you need YY GB for the target dumpfile that you copy
4. you need XX GB for the target database.
By going over the network, you get rid of the 2*YY GB for the dumpfiles.
    Dean

  • Exp/Imp In Oracle 10g client

    Hi All,
I want to take an export of one schema from the Oracle 10g client. I am new to Oracle 10g.
In Oracle 9i, we could use the exp/imp commands for export and import from the client itself.
I heard that in Oracle 10g, expdp/impdp can be used on the server only. Is there any possibility to do the export/import from the client as well?
    Pls help me...
    Cheers,
    Moorthy.GS

To add to the above, expdp can be driven from an Oracle client via NETWORK_LINK:
    NETWORK_LINK
    Default: none
    Purpose
    Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
    Syntax and Description
    NETWORK_LINK=source_database_link
    The NETWORK_LINK parameter initiates an export using a database link. This means that the system to which the expdp client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data to a dump file set back on the connected system.
    The source_database_link provided must be the name of a database link to an available database. If the database on that instance does not already have a database link, you or your DBA must create one. For more information about the CREATE DATABASE LINK statement, see Oracle Database SQL Reference.
    If the source database is read-only, then the user on the source database must have a locally managed tablespace assigned as the default temporary tablespace. Otherwise, the job will fail. For further details about this, see the information about creating locally managed temporary tablespaces in the Oracle Database Administrator's Guide.
    Restrictions
    When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be exported (not partitions of tables).
    The only types of database links supported by Data Pump Export are: public, fixed-user, and connected-user. Current-user database links are not supported.
    Example
    The following is an example of using the NETWORK_LINK parameter. The source_database_link would be replaced with the name of a valid database link that must already exist.
expdp hr/hr DIRECTORY=dpump_dir NETWORK_LINK=source_database_link DUMPFILE=network_export.dmp LOGFILE=network_export.log

  • Question for experts/ wrapping - data pump

    Hello All,
I am using Oracle 10.2.0.4.0.
I have an issue when exporting/importing wrapped procedures:
I have some procedures/packages that are wrapped, and when I export their schema and then import it into the database, I get the error below:
    Compilation errors for PROCEDURE PROC_NAME
    Error: PLS-00753: malformed or corrupted wrapped unit
    Line: 1
    Text: CREATE OR REPLACE PROCEDURE "PROC_NAME" wrapped
After many tests on this scenario I noted the following:
The error does not appear for all procedures; some are corrupted after the import and some are not.
After the normal export/import I found that the character set of imp/exp was different from that of the database, so I changed the character set of the database to the one used by the import. I redid my tests and found that this solved the issue for the normal imp/exp; now, if I do a normal exp/imp, the recompilation of procedures no longer fails after the old-style import. But at this stage the imp/exp character set has changed again.
The character set of my db was AR8MSWIN1256, while exp/imp used AR8ISO8859P6, so I changed my db to AR8ISO8859P6. After changing the database character set to AR8ISO8859P6, recompilation worked fine for the normal exp/imp, but now the character set for import/export changed to AR8MSWIN1256!! Why?
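(For reference: the character set that classic exp/imp convert through comes from the client's NLS_LANG environment variable, so it can be pinned per session instead of changing the database; on Windows, something like
set NLS_LANG=AMERICAN_AMERICA.AR8ISO8859P6
before running exp/imp - the exact value here is an assumption to match this case. expdp/impdp, by contrast, run server-side and always use the database character set.)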
The Data Pump export/import was not fixed even after changing the character set of the database; the procedures were imported with errors, and they were only fixed after recompiling them.
My issue is that I have too many clients, and I am not always able to change the character set of the database, because I may face ERROR "ORA-12712: new character set must be a superset of old character set".
Hope the above description is clear.
Any suggestions or ideas to solve the issue of wrapped procedures with expdp/impdp? Can I force a certain character set for expdp/impdp? Note that with release 10.2.0.1.0, expdp/impdp does not work at all with wrapped procedures.

    Any ideas?

  • Data Pump execution time

I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
I have some concerns about the time the export took (and therefore the time the corresponding import will take - which is typically considerably longer than the export).
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 19.87 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
    . . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
    Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
    Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
    E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
    Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be spent on the line
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Q1. Is that really the line that was taking time, or was the export actually working on the export of the table on the following line ?
Q2. Will such table stats be brought in on an import ? i.e. are table stats part of the dictionary, therefore part of the SYS/SYSTEM schemas, and so will not be brought in on the import to my newly created target database ?
Q3. Does anyone know of any performance improvements that can be made to this export / import ? I am exporting with the 10.2.0.1 source Data Pump and will be importing with a new target 11gR2 Data Pump. From experimenting with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use that on the export side ( I should be able to use it on the 11gR2 import side ).
    thanks,
    Jim

    Jim,
Q1. what difference does it make knowing the difference between how long the Meta Data and the actual data takes on export ? Is this because you could decide to exclude some objects and manually create them before import.
You asked what was taking so long. This was just a test to see if it was metadata or data. It may help us figure out whether there is a problem or not; knowing what is slow helps narrow things down.
With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner, however for Data Pump the Meta Data contains a lot more than just tablespaces and indexes - I couldn't imagine manually creating all the tables and grants for example. I guess you can be selective about what objects you include / exclude in the export or import ( via the INCLUDE & EXCLUDE settings ) ?
No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to maybe clear things up - when you say content=metadata_only, it exports everything except for data. It will export tablespaces, grants, users, tables, statistics, etc. Everything but the data. When you say content=data_only, it only exports the data. You can use this method to export and import everything, but it's not the best solution for most. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and this will slow down the data-only job.
Q2. If I do a DATA ONLY export I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant etc ( not an attractive option ) ?
Again - I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run impdp on the metadata-only dumpfile first, then run the import on the data-only dumpfile.
Q3. If I use EXCLUDE=statistics does that mean I can simply regenerate the stats on the target database after the import completes ( how would I do that ? )
Yes, you can do that. There are different statistics gathering levels. You can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for dbms_stats.gather...
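For instance, regathering at schema level after the import might look like this (a sketch, using the schema name from your log; run in SQL*Plus):
exec dbms_stats.gather_schema_stats(ownname => 'SAPPHIRE');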
    Dean

  • Full Database Exp & Imp

    Hi,
    I am trying to Exp & Imp a full database. I am working on Oracle 9i & 10g on Solaris 9. I am not using Data Pump. Can anyone please help me with the following:
    1. I am performing the full export using SYSTEM user.
1.a Does a Full export include (or back up) the DATA DICTIONARY of the database?
    1.b Does a Full export include the backup of SYS and SYSTEM objects?
    I am using the following command to export
    exp system/system@testdb file=$HOME/testdbfullexp.dmp full=y statistics=none
I have tried importing the FULL export into another database and I did see that SYS and SYSTEM objects were also being imported (I got some errors regarding constraints and inconsistencies).
    I would like to ask like what are the ideal steps to follow to copy a database from DB1 to DB2 using EXP and IMP
    Any information will be of a great help
    Thanks,
    Harris.

1 a) No; the data dictionary will be automagically recreated by implicit SQL.
This means any non-dictionary objects under SYS will be lost.
1 b) As above. SYSTEM, however, is a normal user.
Any %SYS user will NOT be exported (CTXSYS, MDSYS, etc.).
On import of SYSTEM there will always be errors, as SYSTEM is non-empty after initial database creation.
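As for the steps, the import counterpart of your export command could be as simple as this (a sketch; @db2 is a placeholder for your target, and ignore=y suppresses the errors from objects that already exist):
imp system/system@db2 FILE=$HOME/testdbfullexp.dmp FULL=Y IGNORE=Y LOG=$HOME/testdbfullimp.log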
    Sybrand Bakker
    Senior Oracle DBA

  • Full database exp/imp  between RAC  and single database

    Hi Experts,
we have a RAC database, Oracle 10gR2 with 4 nodes, on Linux. I am trying to duplicate the RAC database into a single-instance Windows database.
Both databases are the same version. During the import, I need to create 4 undo tablespaces to keep the imp going.
How can I keep just one undo tablespace in the single-instance database?
Does anyone have experience of exp/imp from a RAC database into a single-instance database to share with me?
    Thanks
    Jim
    Edited by: user589812 on Nov 13, 2009 10:35 AM

Jim,
I also want to know: can we add exclude=tablespace on the impdp command for full database exp/imp?
You can't use exclude=tablespace on exp/imp. It is for Data Pump expdp/impdp only.
I am very interested in your recommendation.
But for a full database impdp, how do I exclude a table during the full database imp? May I have an example for this case?
I used expdp for a full database export, but I got an error in the expdp log: ORA-31679: Table data object "SALE"."TOAD_PLAN_TABLE" has long columns, and longs can not be loaded/unloaded using a network link
Having long columns in a table means that it can't be exported/imported over a network link. To exclude such a table, you can use the exclude expression:
expdp user/password exclude=TABLE:"= 'SALES'" ...
This will exclude all tables named SALES. If you have that table in schema scott and then in schema blake, it will exclude both of them. The error that you are getting is not a fatal error, but that table will not be exported/imported.
The final message was:
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
F:\ORACLEBACKUP\SALEFULL091113.DMP
Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at 16:50:26
Yes, the fact that it did not export one table does not make the job fail; it will continue on, exporting all other objects.
I dropped the database that generated the expdp dump file,
then recreated a blank database and ran impdp again.
But I got lots of errors:
    ORA-39151: Table "SYSMAN"."MGMT_ARU_OUI_COMPONENTS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    ORA-39151: Table "SYSMAN"."MGMT_BUG_ADVISORY" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
...ORA-31684: Object type TYPE_BODY:"SYSMAN"."MGMT_THRESHOLD" already exists
ORA-39111: Dependent object type TRIGGER:"SYSMAN"."SEV_ANNOTATION_INSERT_TR" skipped, base object type VIEW:"SYSMAN"."MGMT_SEVERITY_ANNOTATION" already exists
and the last line was:
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 2581 error(s) at 11:54:57
Yes, even though you think you have an empty database, if you have installed any apps or anything, that may create tables that also exist in your dumpfile. If you know that you want the tables from the dumpfile and not the existing ones in the database, then you can use this on the impdp command:
    impdp user/password table_exists_action=replace ...
If a table that is being imported exists, Data Pump will detect this, drop the table, then create it. Then all of the dependent objects will be created. If you don't, then the table and all of its dependent objects will be skipped (which is the default).
    There are 4 options with table_exists_action
    replace - I described above
    skip - default, means skip the table and dependent objects like indexes, index statistics, table statistics, etc
    append - keep the existing table and append the data to it, but skip dependent objects
    truncate - truncate the existing table and add the data from the dumpfile, but skip dependent objects.
    Hope this helps.
    Dean
