Exp/imp data pump

I exported tables in 10g using the exp (normal export) command.
Can I use impdp (Data Pump) when doing the import?

There is no exclude or include option in the traditional export/import (exp/imp) utility. These options exist only in Data Pump (expdp/impdp), to exclude or include database objects while exporting or importing.
As a workaround, create the structure of the table in the target database/schema first and then run the import; if a table is found in the target schema, that table will not be imported. So don't use IGNORE=Y; if you do, the rows will be imported even when the table already exists. Hope you get what I mean.
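For illustration, a minimal sketch of that workaround (the schema, table, and file names below are just placeholders):
-- in the target schema, pre-create the tables whose rows you want to skip, e.g.
create table scott.emp_skip (empno number, ename varchar2(30));
-- then import with the default IGNORE=N, so existing tables raise a creation error and their rows are skipped
imp system/password file=exp_tables.dmp fromuser=scott touser=scott log=imp_tables.log ignore=n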
Regards,
Sabdar Syed.

Similar Messages

  • Does data pump really replace exp/imp?

    hi guys,
    I've read some people saying we should be using Data Pump instead of exp/imp. But as far as I can see, if I have a database behind a firewall somewhere else, and cannot connect to that database directly but need to get some data across, then Data Pump is useless for me and I can only exp and imp the data.

    OracleGuy777 wrote:
    ...and i guess this means that data pump does not replace exp and imp.
    Well, depending on your database version, it does:
    "Original Export is desupported for general use as of Oracle Database 11g. The only supported use of original Export in 11g is backward migration of XMLType data to a database version 10g release 2 (10.2) or earlier. Therefore, Oracle recommends that you use the new Data Pump Export and Import utilities, except in the following situations which require original Export and Import:
    * You want to import files that were created using the original Export utility (exp).
    * You want to export files that will be imported using the original Import utility (imp). An example of this would be if you wanted to export data from Oracle Database 10g and then import it into an earlier database release."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10701/original_export.htm#SUTIL3634
    Coming back to your problem, as already suggested, you have the NETWORK_LINK parameter: you can export data from the source and import it directly into your target DB without the need for any intermediate file.
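    For example, a hedged sketch of what that could look like from the target side (the link name, schema, and directory object are placeholders that must already exist, and it assumes the target database can reach the source over SQL*Net):
    impdp hr/hr NETWORK_LINK=source_db_link SCHEMAS=hr DIRECTORY=dpump_dir1 LOGFILE=net_imp.log
    With NETWORK_LINK on impdp, no dump file is written at all; the rows are pulled straight across the database link.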
    Nicolas.

  • Migrate exp/imp into data pump

    Hi Experts,
    we use exp/imp to export about 150 GB of data, and it has worked to support Streams in the past.
    As I understand it, Data Pump will speed up the export.
    How do I migrate the exp/imp syntax below to Data Pump?
    my imp/exp as
    exp USERID=SYSTEM/tiger@test OWNER=tiger FILE=D:\Oraclebackup\CLS\exports\test.dmp LOG=D:\Oraclebackup\test\exports\logs\exportTables.log OBJECT_CONSISTENT=Y STATISTICS=NONE
    imp USERID=SYSTEM/tiger FROMUSER=tiger TOUSER=tiger CONSTRAINTS=Y FILE=test.dmp IGNORE=Y COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
    Thanks
    Jim

    You are right - expdp is faster and more useful than the classic exp utility.
    A couple of differences with expdp:
    - the dump file is written only on the database server, not on the client
    - you must create a directory object in the database and
    grant read, write privileges on it to the user
    For Example:
    create directory dump as 'd:\export\hr';
    grant read,write on directory dump to hr;
    Then we may do export:
    expdp hr/hr DIRECTORY=dump DUMPFILE=test.dmp LOGFILE=exportTables.log
    After the export you will see two files (the dump file and the log file) in the directory 'd:\export\hr'.
    For other features, see expdp help=y and the Oracle documentation.
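    To complete the migration on the import side, a hedged sketch of how the original imp command might translate (the directory object 'dump' is the one created above, and the importing user is assumed to have read/write on it; adjust names as needed):
    impdp system/tiger@test SCHEMAS=tiger DIRECTORY=dump DUMPFILE=test.dmp LOGFILE=importTables.log TABLE_EXISTS_ACTION=APPEND
    Here SCHEMAS plays the role of FROMUSER/TOUSER (use REMAP_SCHEMA if the target user differs), and TABLE_EXISTS_ACTION=APPEND is roughly the counterpart of IGNORE=Y. Streams instantiation is handled differently in Data Pump, so check the Streams documentation rather than assuming a one-to-one replacement for STREAMS_INSTANTIATION=Y.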

  • Exp/Imp alternatives for large amounts of data (30GB)

    Hi,
    I've come into a new role where various test databases are to be 'refreshed' each night with cleansed copies of production data. They have been using the imp/exp utilities with 10g R2. The export process is OK, but what's killing us is the time it takes to transfer, unzip, and import 32 GB .dmp files. I'm looking for suggestions on what we can do to reduce these times. Currently the import takes 4 to 5 hours.
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities. Are 'Transportable Tablespaces' the next logical solution? I've been reading up on them and could start prototyping/testing the process next week. What else is in Oracle's toolbox I should be considering?
    Thanks
    brian

    Hi,
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities
    Data Pump will be faster for a couple of reasons. It uses direct path to unload the data. Data Pump also supports parallel processes, so while one process is exporting metadata, the other processes can be exporting the data. In 11g, you can also compress the dump files as you are exporting (both data and metadata compression are available in 11g; I think metadata compression is available in 10.2). This will remove your zip step.
    As far as transportable tablespace, yes, this is an option. There are some requirements, but if it works for you, all you will be exporting will be the metadata and no data. The data is copied from the source to the target by way of datafiles. One of the biggest requirements is that the tablespaces need to be read only while the export job is running. This is true for both exp/imp and expdp/impdp.
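    As a rough sketch of what the parallel, compressed Data Pump approach could look like on 11g (the directory name, schema, and parallel degree are just examples; PARALLEL needs Enterprise Edition and COMPRESSION=ALL needs the Advanced Compression option):
    expdp system/password SCHEMAS=app_schema DIRECTORY=dp_dir DUMPFILE=app_%U.dmp PARALLEL=4 COMPRESSION=ALL LOGFILE=app_exp.log
    impdp system/password SCHEMAS=app_schema DIRECTORY=dp_dir DUMPFILE=app_%U.dmp PARALLEL=4 LOGFILE=app_imp.log
    The %U substitution lets each parallel worker write to its own dump file.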

  • Schema export via Oracle data pump with Database Vault enabled question

    Hi,
    I have installed and configured Database Vault on an Oracle 11g-r2-11.2.0.3 to protect a specific schema (SCHEMA_NAME) via a realm. I have followed the following doc:
    http://www.oracle.com/technetwork/database/security/twp-databasevault-dba-bestpractices-199882.pdf
    to ensure that the SYS and SYSTEM users have sufficient rights to complete a scheduled Oracle Data Pump export operation.
    I.e. I have granted to sys and system the following:
    execute dvsys.dbms_macadm.authorize_scheduler_user('sys','SCHEMA_NAME');
    execute dvsys.dbms_macadm.authorize_scheduler_user('system','SCHEMA_NAME');
    execute dvsys.dbms_macadm.authorize_datapump_user('sys','SCHEMA_NAME');
    execute dvsys.dbms_macadm.authorize_datapump_user('system','SCHEMA_NAME');
    I have also created a second realm on the same schema (SCHEMA_NAME) to allow SYS and SYSTEM to maintain indexes for realm-protected tables. This separate realm was created for all of the index types (Index, Index Partition, and Indextype), and SYS and SYSTEM have been authorized as OWNER in this realm.
    However, when I try and complete an Oracle Data Pump export operation on the schema, I get two errors directly after the following line displayed in the export log:
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX:
    ORA-39127: unexpected error from call to export_string :=SYS.DBMS_TRANSFORM_EXIMP.INSTANCE_INFO_EXP('AQ$_MGMT_NOTIFY_QTABLE_S','SYSMAN',1,1,'11.02.00.00.00',newblock)
    ORA-01031: insufficient privileges
    ORA-06512: at "SYS.DBMS_TRANSFORM_EXIMP", line 197
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 9081
    ORA-39127: unexpected error from call to export_string :=SYS.DBMS_TRANSFORM_EXIMP.INSTANCE_INFO_EXP('AQ$_MGMT_LOADER_QTABLE_S','SYSMAN',1,1,'11.02.00.00.00',newblock)
    ORA-01031: insufficient privileges
    ORA-06512: at "SYS.DBMS_TRANSFORM_EXIMP", line 197
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 9081
    The export completes, but with these errors.
    Any help, suggestions, pointers, in fact anything, will be very welcome at this stage.
    Thank you

    Hi Srini,
    Thank you very much for your help. Unfortunately, after following the instructions in the doc I am still getting the same errors.
    Nonetheless, thank you for your input.
    I was also wondering if someone could tell me how to move this thread to the Database Security area of the forum, as I feel I may have posted it in the wrong place; it appears to be a Database Vault issue and not an imp/exp problem.

  • How to export resource manager consumer groups using Data Pump?

    Hi, there,
    Is there any way to export RM Consumer Groups/Mappings/Plans as part of a Data Pump export/import? I was wondering because I don't fancy doing it manually and I don't see the object in the database_export_objects view. I can create them manually, but was wondering whether there's an easier, less involved way of doing it?
    Mark

    Hi,
    I have not tested it, but I think a full database export/import (using Data Pump or traditional exp/imp) may do this, although a full export/import might not be feasible for you. Full database mode also exports/imports SYS schema objects, so there is a chance that it will bring across the resource consumer groups and resource plans as well.
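    A minimal full-mode sketch of what that would look like (directory, file names, and passwords are placeholders; whether the Resource Manager objects actually come across should be verified on a test system first):
    expdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=full_exp.log
    impdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=full_imp.log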
    Salman

  • Database Upgrade using Data Pump

    Hi,
    I am moving my database from a Windows 2003 server to a Windows 2007 server. At the same time I am upgrading this database from 10g to 11gR2(11.2.0.3).
    Therefore I am using the export/import method of upgrade (via Data Pump, not the old exp/imp).
    I have successfully exported my source database and have created the empty shell database ready to take the import. However, I have a couple of queries.
    Q1. Regarding all the SYSTEM objects from the source database: how will they import, given that the new target database already has a SYSTEM tablespace?
    I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However, should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best?
    Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces, or leave this to the import and use the REMAP_DATAFILE option? What is everyone's experience as to which is the better way to go? And if I pre-create the tablespaces, how do I tell the import to skip the creation of the tablespaces?
    Q3. These 2 databases are on the same network, so in theory, instead of a manual export, a copy of the dump file to the new server and then the import, I could use a Network Link for the import. I was just wondering whether there are any cons to this method over using an explicit export dump file?
    thanks,
    Jim

    Jim,
    Q1. regarding all the SYSTEM objects from the source database. How will they import given that the new target database already has a SYSTEM tablespace? I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best?
    If all you have is the base database and nothing created, then you can do the full=y. In fact, this is probably what you want. The SYSTEM tablespace will be there, so when Data Pump tries to create it, it will just fail that create statement. Nothing else will fail. In most cases, your system tables will already be there, and this is OK too. If you do schema-mode imports, you will miss out on some of the other stuff.
    Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces or leave this to the import but use the REMAP_DATAFILE option - what is everyone's experience as to which is the better way to go? Again if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces?
    If the directory structure is different (which it usually is), then there is no easier way. You can run impdp with SQLFILE and INCLUDE=TABLESPACE. This will give you all of the CREATE TABLESPACE commands in a text file, and you can edit the text file to change whatever you want. You can then tell Data Pump to skip the tablespace creation by using EXCLUDE=TABLESPACE (see the sketch just below).
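    For example, a hedged sketch of those two steps (directory, dump file, and script names are placeholders):
    impdp system/password DIRECTORY=dp_dir DUMPFILE=full.dmp SQLFILE=create_ts.sql INCLUDE=TABLESPACE
    (writes the CREATE TABLESPACE statements to create_ts.sql without executing anything, so the paths can be edited and run by hand)
    impdp system/password DIRECTORY=dp_dir DUMPFILE=full.dmp FULL=Y EXCLUDE=TABLESPACE LOGFILE=full_imp.log
    (the real import, skipping tablespace creation)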
    Q3. These 2 databases are on the same network, so in theory instead of a manual export, a copy of the dump file to the new server and then the import, I could use a Network Link for the import. I was just wondering whether there are any cons to this method over using the explicit export dump file?
    The only con could be if you have a slow network. This will make it slower, but if you have to copy the dump file over the same network, then you will still see the same basic traffic. The pro is that you don't need the extra disk space. Here is how I look at it.
    1. you need XX GB for the source database
    2. you need YY GB for the source dumpfile
    3. you need YY GB for the target dumpfile that you copy
    4. you need XX GB for the target database.
    By going over the network you get rid of the 2*YY GB for the dump files.
    Dean

  • Exp/Imp In Oracle 10g client

    Hi All,
    I want to take an export of one schema from an Oracle 10g client. I am new to Oracle 10g.
    In Oracle 9i, we could use the exp/imp commands for export and import from the client itself.
    I have heard that in Oracle 10g, expdp/impdp can only be used on the server. Is there any possibility of taking an export/import from the client as well?
    Please help me...
    Cheers,
    Moorthy.GS

    To add to the above, regarding expdp from an Oracle client:
    NETWORK_LINK
    Default: none
    Purpose
    Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
    Syntax and Description
    NETWORK_LINK=source_database_link
    The NETWORK_LINK parameter initiates an export using a database link. This means that the system to which the expdp client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data to a dump file set back on the connected system.
    The source_database_link provided must be the name of a database link to an available database. If the database on that instance does not already have a database link, you or your DBA must create one. For more information about the CREATE DATABASE LINK statement, see Oracle Database SQL Reference.
    If the source database is read-only, then the user on the source database must have a locally managed tablespace assigned as the default temporary tablespace. Otherwise, the job will fail. For further details about this, see the information about creating locally managed temporary tablespaces in the Oracle Database Administrator's Guide.
    Restrictions
    When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be exported (not partitions of tables).
    The only types of database links supported by Data Pump Export are: public, fixed-user, and connected-user. Current-user database links are not supported.
    Example
    The following is an example of using the NETWORK_LINK parameter. The source_database_link would be replaced with the name of a valid database link that must already exist.
    expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link DUMPFILE=network_export.dmp LOGFILE=network_export.log

  • Full Database Exp & Imp

    Hi,
    I am trying to Exp & Imp a full database. I am working on Oracle 9i & 10g on Solaris 9. I am not using Data Pump. Can anyone please help me with the following:
    1. I am performing the full export using SYSTEM user.
    1.a Does a full export include (or back up) the DATA DICTIONARY of the database?
    1.b Does a Full export include the backup of SYS and SYSTEM objects?
    I am using the following command to export
    exp system/system@testdb file=$HOME/testdbfullexp.dmp full=y statistics=none
    I have tried importing the full export into another database and I did see that SYS and SYSTEM objects were also being imported (I got some errors regarding constraints and inconsistencies).
    I would like to ask what the ideal steps are to copy a database from DB1 to DB2 using exp and imp.
    Any information will be of a great help
    Thanks,
    Harris.

    1 a) No, as the data dictionary will be automagically recreated by implicit SQL.
    This means any non-dictionary objects under SYS will be lost.
    1 b) As above. SYSTEM, however, is a normal user.
    Any %SYS user will NOT be exported (CTXSYS, MDSYS, etc.).
    On import of SYSTEM there will be always errors, as SYSTEM is non-empty after initial database creation.
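    For completeness, a minimal sketch of the corresponding full import on the target (the connect string and log name are placeholders; the dump file is the one produced by the exp command above):
    imp system/system@targetdb file=$HOME/testdbfullexp.dmp full=y ignore=y log=$HOME/testdbfullimp.log
    The target database (with its tablespaces, or at least space for them) has to exist first, and the errors on SYS/SYSTEM objects mentioned above are expected.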
    Sybrand Bakker
    Senior Oracle DBA

  • Question for experts/ wrapping - data pump

    Hello All,
    I am using Oracle 10.2.0.4.0.
    I have an issue when exporting/importing wrapped procedures:
    I have some procedures/packages that are wrapped, and when I export their schema and then import it into the database, I get the error below:
    Compilation errors for PROCEDURE PROC_NAME
    Error: PLS-00753: malformed or corrupted wrapped unit
    Line: 1
    Text: CREATE OR REPLACE PROCEDURE "PROC_NAME" wrapped
    After doing many tests on this scenario I noted the following:
    This error does not appear for all procedures; some are corrupted after the import and some are not.
    After doing the normal export/import, I found that the character set used by exp/imp was different from that of the database, so I changed the database character set to match the one used by the import. I redid my tests and found that this solved the issue for the normal exp/imp: the recompilation of procedures no longer fails after the old-style import. But at this stage the exp/imp character set has changed again.
    The character set of my DB was AR8MSWIN1256, while exp/imp used AR8ISO8859P6, so I changed the database to AR8ISO8859P6. After changing the database character set to AR8ISO8859P6 the recompilation worked fine for the normal exp/imp, but now the character set for import/export has changed to AR8MSWIN1256. Why?
    The Data Pump export/import was not fixed even after changing the database character set; the procedures were imported with errors, and this was only fixed by recompiling them.
    My issue is that I have too many clients and I am not always able to change the database character set, because I may face the error "ORA-12712: new character set must be a superset of old character set".
    Hope the above description is clear,
    Any suggestions or ideas to solve the issue of wrapped procedures with expdp/impdp? Can I force a certain character set for expdp/impdp, knowing that with release 10.2.0.1.0 expdp/impdp does not work with wrapped procedures at all?
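    Not a fix for the wrapped-procedure corruption itself, but for classic exp/imp the client character set comes from the NLS_LANG environment variable rather than from the database, so it can be pinned per session instead of changing the database character set. A sketch (the value and names are only examples):
    set NLS_LANG=AMERICAN_AMERICA.AR8MSWIN1256   (Windows; use export NLS_LANG=... on Unix)
    exp system/password OWNER=app_schema FILE=app.dmp LOG=app_exp.log
    The exp/imp banner then shows which character set the dump file was written in.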

    Any ideas?

  • Migrating from 9i to 10g through exp/imp

    Hi,
    We need to migrate a database from 9i on HP-UX to 10g on IBM AIX. Can you please tell me whether I can export the data from 9i with the 9i export utility and import it into 10g using the 10g Data Pump utility for a faster import?
    We don't have the 10g software installed on the HP-UX platform.
    What other alternatives do we have to reduce the time needed for the migration, as it is a critical production database?
    Thanks,
    Jayanta

    I believe that Justin is correct that if you use exp to unload the data you will need to use imp and not impdp to reload the data.
    To speed up the cross-platform exp/imp process I suggest you consider running multiple concurrent exp/imp jobs. You can generate a TABLES= list into a spool file using SQL. You can give each large table its own exp job and bunch the small tables together.
    Depending on how your database objects are organized, you might also be able to export by owner, or by a combination of owner and tables= exports.
    You can use a full=y export with the rows=n option to grab the public synonyms, non-owning users, packages, etc. not brought in by the prior exp/imp.
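    A hedged sketch of generating such a list with SQL*Plus (the owner name and the 1 GB cut-off are just examples):
    set pagesize 0 feedback off
    spool big_tables.lst
    select owner || '.' || segment_name from dba_segments where owner = 'APP' and segment_type = 'TABLE' and bytes > 1073741824 order by bytes desc;
    spool off
    Each name in big_tables.lst can then get its own exp ... tables=OWNER.TABLE_NAME job, and the remaining small tables can be bunched into one job.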
    HTH -- Mark D Powell --

  • Schema Refresh using exp/imp.

    Hello All,
    I want to perform a schema refresh of the SAMPLE user from the production to the testing environment using export/import.
    Could you please tell me the command to perform it?
    Also, could anyone please tell me whether the same user (SAMPLE) in the test environment gets dropped before the import is done?
    Can I perform the exp/imp as sys/system or as the user SAMPLE?

    tvenkatesh07 wrote:
    Hello All,
    I want to perform a schema refresh of the SAMPLE user from the production to the testing environment using export/import.
    Could you please tell me the command to perform it?
    Also, could anyone please tell me whether the same user (SAMPLE) in the test environment gets dropped before the import is done?
    Can I perform the exp/imp as sys/system or as the user SAMPLE?
    If you're running 10g, then use Data Pump and read the documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#i1007466
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1007653
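    For reference, a hedged sketch of a schema-level refresh with Data Pump (directory object, passwords, and TNS aliases are placeholders; decide deliberately whether to drop the SAMPLE user first or rely on TABLE_EXISTS_ACTION):
    expdp system/prod_pwd@prod SCHEMAS=SAMPLE DIRECTORY=dp_dir DUMPFILE=sample.dmp LOGFILE=sample_exp.log
    impdp system/test_pwd@test SCHEMAS=SAMPLE DIRECTORY=dp_dir DUMPFILE=sample.dmp TABLE_EXISTS_ACTION=REPLACE LOGFILE=sample_imp.log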
    My Oracle Video Tutorials - http://kamranagayev.wordpress.com/oracle-video-tutorials/

  • Data Pump Consistent parameter?

    Hi All,
    Is there a CONSISTENT parameter in Data Pump as there is in exp/imp?
    Because we are using Data Pump for backups and want to disable the consistent parameter.
    Please let me know how I can disable the consistent parameter in Data Pump.
    Thanks

    if it's not a backup method then how do you do the logical full database backup? From my thinking it's called a logical DB backup (when you are using exp or expdp)
    There are many reasons that export shouldn't be used as a backup method:
    1. It's very slow to do an export on a huge database (which you are already experiencing), and the import will take much longer.
    2. You only have a 'snapshot' of your database at the time of the backup; in the event of a disaster, you will lose all data changes made after the backup.
    3. It has a performance impact on a busy database (which you are also experiencing).
    On top of all of this, if you turn CONSISTENT to N, your 'logical' backup is logically corrupted as well.

  • Data Pump execution time

    I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
    I have some concerns about the time the export took, and therefore the time the corresponding import will take, which is typically considerably longer than the export.
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 19.87 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
    . . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
    Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
    Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
    E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
    Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
    These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be on the line
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Q1. Is that really the line that was taking the time, or was the export actually working on the export of the table shown on the following line?
    Q2. Will such table stats be brought in on an import? I.e. are table stats part of the dictionary, and therefore part of the SYS/SYSTEM schemas, and so will not be brought in on the import into my newly created target database?
    Q3. Does anyone know of any performance improvements that can be made to this export/import? I am exporting from the 10.2.0.1 source Data Pump and will be importing with a new target 11gR2 Data Pump. From experimenting with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use that on the export side (I should be able to use it on the 11gR2 import side).
    thanks,
    Jim

    Jim,
    Q1. What difference does it make knowing the difference between how long the metadata and the actual data take on export? Is this because you could decide to exclude some objects and manually create them before import?
    You asked what was taking so long. This was just a test to see if it was metadata or data. It may help us figure out whether there is a problem or not; knowing what is slow helps narrow things down.
    With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner, however for Data Pump the metadata contains a lot more than just tablespaces and indexes - I couldn't imagine manually creating all the tables and grants, for example. I guess you can be selective about what objects you include/exclude in the export or import (via the INCLUDE & EXCLUDE settings)?
    No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to clear things up - when you say CONTENT=METADATA_ONLY, it exports everything except the data. It will export tablespaces, grants, users, tables, statistics, etc. Everything but the data. When you say CONTENT=DATA_ONLY, it exports only the data. You can use this method to export and import everything, but it's not the best solution for most. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and this will slow down the data-only job.
    Q2. If I do a DATA ONLY export I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant etc (not an attractive option)?
    Again - I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run impdp on the metadata-only dump file first, then run the import on the data-only dump file.
    Q3. If I use EXCLUDE=statistics does that mean I can simply regenerate the stats on the target database after the import completes (how would I do that?)
    Yes, you can do that. There are different statistics gathering levels. You can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for dbms_stats.gather...
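    For Q3, a hedged sketch (the SAPPHIRE names come from the log above; the directory and file names are placeholders):
    expdp system/password TABLES=SAPPHIRE.A_SDIDATAITEM,SAPPHIRE.A_SDIDATA EXCLUDE=STATISTICS DIRECTORY=dp_dir DUMPFILE=tables.dmp LOGFILE=tables_exp.log
    Then, after the import on the target:
    exec dbms_stats.gather_schema_stats(ownname => 'SAPPHIRE', cascade => TRUE);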
    Dean

  • Exclude tables list option in exp/imp?

    Hi,
    Is there any option to exclude tables during exp/imp? If so, from which version?
    Thanks.

    Actually, in 10.1 and later, the Data Pump versions of export and import allow you to specify an EXCLUDE parameter to exclude one or more tables.
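    A hedged sketch using a parameter file, which avoids shell quoting issues (all names below are placeholders):
    Contents of exclude_tabs.par:
    schemas=APP
    directory=dp_dir
    dumpfile=app.dmp
    logfile=app_exp.log
    exclude=TABLE:"IN ('AUDIT_LOG','TEMP_DATA')"
    Then run: expdp system/password parfile=exclude_tabs.par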
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
