Oracle Data Pump

What are the prerequisites for using Oracle Data Pump Export and Import? For example, the configuration parameters in the initialization files, permissions, and other to-dos?
Thanks

No, nothing like that.
You need Oracle 10g and a directory object created to hold the dump files.
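For reference, creating the directory object takes just a couple of statements (a minimal sketch; the path, the directory name dp_dir, and the user SCOTT are illustrative, not from this thread):

CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO scott;
-- only needed for full-database export/import jobs:
GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO scott;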

Similar Messages

  • Schema export via Oracle data pump with Database Vault enabled question

    Hi,
    I have installed and configured Database Vault on an Oracle 11gR2 (11.2.0.3) database to protect a specific schema (SCHEMA_NAME) via a realm. I have followed this doc:
    http://www.oracle.com/technetwork/database/security/twp-databasevault-dba-bestpractices-199882.pdf
    to ensure that the SYS and SYSTEM users have sufficient rights to complete a scheduled Oracle Data Pump export operation.
    That is, I have granted the following to SYS and SYSTEM:
    execute dvsys.dbms_macadm.authorize_scheduler_user('sys','SCHEMA_NAME');
    execute dvsys.dbms_macadm.authorize_scheduler_user('system','SCHEMA_NAME');
    execute dvsys.dbms_macadm.authorize_datapump_user('sys','SCHEMA_NAME');
    execute dvsys.dbms_macadm.authorize_datapump_user('system','SCHEMA_NAME');
    I have also created a second realm on the same schema (SCHEMA_NAME) to allow SYS and SYSTEM to maintain indexes for realm-protected tables. This separate realm was created for all the index types (Index, Index Partition, and Indextype), and SYS and SYSTEM have been authorized as OWNER of this realm.
    However, when I try to complete an Oracle Data Pump export operation on the schema, I get two errors directly after the following line in the export log:
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX:
    ORA-39127: unexpected error from call to export_string :=SYS.DBMS_TRANSFORM_EXIMP.INSTANCE_INFO_EXP('AQ$_MGMT_NOTIFY_QTABLE_S','SYSMAN',1,1,'11.02.00.00.00',newblock)
    ORA-01031: insufficient privileges
    ORA-06512: at "SYS.DBMS_TRANSFORM_EXIMP", line 197
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 9081
    ORA-39127: unexpected error from call to export_string :=SYS.DBMS_TRANSFORM_EXIMP.INSTANCE_INFO_EXP('AQ$_MGMT_LOADER_QTABLE_S','SYSMAN',1,1,'11.02.00.00.00',newblock)
    ORA-01031: insufficient privileges
    ORA-06512: at "SYS.DBMS_TRANSFORM_EXIMP", line 197
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 9081
    The export completes, but with these errors.
    Any help, suggestions, or pointers would be very welcome at this stage.
    Thank you

    Hi Srini,
    Thank you very much for your help. Unfortunately, after following the instructions in the doc, I am still getting the same errors.
    Nonetheless, thank you for your input.
    I was also wondering if someone could tell me how to move this thread to the Database Security area of the forum. I feel I may have posted in the wrong place, as this appears to be a Database Vault issue and not an imp/exp problem.

  • Oracle data pump vs import/export utility

    Hello all,
    What is the difference between Oracle Data Pump and the Import/Export utility? Which one is faster?

    What is the difference between Oracle Data Pump and Import/Export utility?
    Unwilling or unable to Read The Fine Manual yourself?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/toc.htm
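    For a quick flavor of the difference (a hedged sketch; the schema and file names are illustrative): classic export runs client-side through a single process, while Data Pump runs as a server-side job that writes to a directory object and can parallelize, which generally makes it faster for large volumes:

    exp scott/tiger FILE=scott.dmp OWNER=scott
    expdp scott/tiger DIRECTORY=dp_dir DUMPFILE=scott.dmp SCHEMAS=scott PARALLEL=4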

  • An Oracle data pump situation that I need help with

    Oracle 10g running on Sun Solaris:
    I have written a Unix shell script that exports data into a dump file (Data Pump export using expdp). Similarly, I have an import script that imports the data from the dump file (import using impdp). These are not schema exports. In other words, we have logically divided our schema into 4 different groups based on their functionality (group1, group2, group3 and group4). Each of these groups consists of about 30-40 tables. For expdp, there are 4 parfiles: group1.par, group2.par, group3.par and group4.par. Depending on what parameter you pass while running the script, the respective par file will be picked and an export will be done.
    For example,
    While running,
    exp_script.ksh group1
    will pick up group1.par and will export into group1.dmp.
    Similarly,
    exp_script.ksh group3
    will pick up group3.par and will export into group3.dmp.
    My import script also needs the parameter to be passed to pick the right dmp file.
    For example,
    imp_script.ksh group3
    will pick up group3.dmp and import every table that group3.dmp has (here, the import process does not need par files; all it needs is the right dmp file).
    I am now faced with a difficulty where I must use Oracle's dbms_datapump API to achieve the same as above (and not expdp and impdp). I haven't used this API before. How best can I use this API to stick with my above strategy? That is, I eventually need group1.dmp, group2.dmp, group3.dmp and group4.dmp, and I will need to pass the table names that each group contains. How do I do this using this API? Can someone point me to an example, or perhaps suggest one?
    Thanks

    Or at least, how do you do a table export using dbms_datapump?
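    For what it's worth, here is a minimal table-mode sketch with DBMS_DATAPUMP (hedged: the directory object DMP_DIR, the job name, and the table names TAB_A/TAB_B are illustrative; substitute each group's table list):

    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      h := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'TABLE',
                              job_name  => 'GROUP1_EXP');
      DBMS_DATAPUMP.add_file(h, 'group1.dmp', 'DMP_DIR');
      DBMS_DATAPUMP.add_file(h, 'group1.log', 'DMP_DIR',
                             filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
      -- restrict the job to the tables that make up this group:
      DBMS_DATAPUMP.metadata_filter(h, 'NAME_EXPR', 'IN (''TAB_A'',''TAB_B'')');
      DBMS_DATAPUMP.start_job(h);
      DBMS_DATAPUMP.wait_for_job(h, state);
    END;
    /

    Your shell script can keep taking group1..group4 as a parameter and pass the matching table list into the NAME_EXPR filter.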

  • Oracle Data Pump - Table Structure change

    Hi,
    We have a daily partitioned table, and for backup we are using Data Pump (expdp). Our policy is to drop each partition after backup (archiving).
    We have archived dump files going back one year. A few days back, a developer changed the table structure by adding one new column.
    Now we are unable to restore old partitions. Is there a way to restore a partition if a new column has been added to (or dropped from) the current table?
    Thanks
    Sachin

    If a new column has been added to the table, you can import only the data from the old structure into the new structure. Use the parameter CONTENT=DATA_ONLY.
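    Something like this (a sketch; the directory object, dump file name, and table name are illustrative):

    impdp system/password DIRECTORY=dp_dir DUMPFILE=part_20120101.dmp \
          TABLES=sachin.daily_tab CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND

    With CONTENT=DATA_ONLY the old dump's rows are loaded into the existing (new-structure) table, and the added column should simply be left NULL for those rows.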

  • How to run Oracle data pump export in silent mode

    When I run the expdp command for a data export, I get the following informational messages:
    Export: Release 10.2.0.1.0 - 64bit Production on Wednesday, 26 March, 2008 15:33:49
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "TEST"."SYS_EXPORT_TRANSPORTABLE_01": test/********@mqa dumpfile=data.dmp directory=export TRANSPORT_TABLESPACES = client_tables,client_indexes logfile=export.log
    Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
    Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
    Master table "TEST"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
    Dump file set for TEST.SYS_EXPORT_TRANSPORTABLE_01 is:
    /export/home/oracle/TQAImport/DataFiles/data.dmp
    Job "TEST"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 15:34:53
    Can I avoid these messages and run it in silent mode?

    I am not really sure about this, but try the LOGFILE option. I can't recall whether the output then goes to the log file only or to both places, but in addition to Paul's solution, which is more accurate, you can give this a try.
    http://download-west.oracle.com/docs/cd/B12037_01/server.101/b10825/dp_export.htm#sthref133
    Aman....
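    Since expdp writes that banner and its progress to standard output, another workaround (a sketch for a Unix shell) is plain redirection, relying on the log file instead:

    expdp test/********@mqa DUMPFILE=data.dmp DIRECTORY=export LOGFILE=export.log > /dev/null 2>&1

    The same messages still land in export.log for later inspection.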

  • Oracle Data Pump (expdp) credentials via cron job

    I have Oracle 10.2 on a Red Hat Linux server. In addition to performing appropriate backups of my database, I also have a cron job that performs a full logical export using expdp every night to export user objects, in the event that a single object needs to be recovered. This is an extra safeguard for object recovery.
    Currently I do my export (expdp) via a cron job run as the Oracle software owner, as a DB user with DB credentials specified. I would, however, like to change this script to essentially run as SYS by doing something like "expdp / as sysdba ...". However, it appears doing so actually requires the password to be supplied and to run expdp as "expdp sys/password as sysdba".
    Does anyone have experience performing an expdp as SYS without specifying the password, essentially being able to do "/ as sysdba"?
    Hope that makes sense.
    Thanks for any suggestions.

    I appreciate all comments. At the expense of being long-winded, I did not get into all the details. But to be more clear I want to say:
    1. I am using a parfile so as to hide the password from the ps command.
    2. I also understand that doing it as SYSDBA is not recommended, but I thought that if I did so I could eliminate the need to store a clear-text password.
    3. This system is on a separate "air-gapped" (a.k.a. "sneaker-net") network from the outside world and is better protected than if it were sitting somewhere near the internet.
    4. Historically we have not been permitted to use OPS$ accounts. This may be a legacy issue, and I will (re-)investigate this as an option.
    5. Just to be clear, I really do not want to do a full export as SYSDBA. In fact, currently I am doing it as a user with the EXP_FULL_DATABASE role. However, that requires the password to be stored in a file (parfile). The file has it in clear text, which is still not optimal because system admins could gain access to this password. (Yes, I know system admins could do larger damage, but we still need to protect the passwords.)
    I am going to look at calling the API directly, and OPS$ if needed.
    Thanks for the suggestions.
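    For the record, one password-free pattern (a sketch, assuming OS authentication is acceptable, the cron job runs as the oracle OS user, and os_authent_prefix is at its default of ops$):

    -- one-time setup as a DBA:
    CREATE USER ops$oracle IDENTIFIED EXTERNALLY;
    GRANT CREATE SESSION, EXP_FULL_DATABASE TO ops$oracle;

    # then in the cron script, with no password stored anywhere:
    expdp / DIRECTORY=dp_dir FULL=y DUMPFILE=nightly_%U.dmp LOGFILE=nightly.log

    The secure external password store (Oracle Wallet, available from 10.2) is the other commonly used alternative.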

  • Help needed with Export Data Pump using API

    Hi All,
    I am trying to do a Data Pump export using the API.
    While both the export and the import work fine from the command line, it fails with the API.
    This is the command line program:
    expdp pxperf/dba@APPN QUERY=dev_pool_data:\"WHERE TIME_NUM > 1204884480100\" DUMPFILE=EXP_DEV.dmp tables=PXPERF.dev_pool_data
    Could you help me with how I should achieve the same as above with the Oracle Data Pump API?
    DECLARE
      h1 NUMBER;
    BEGIN
      h1 := dbms_datapump.open('EXPORT','TABLE',NULL,'DP_EXAMPLE10','LATEST');
      dbms_datapump.add_file(h1,'example3.dmp','DATA_PUMP_TEST',NULL,1);
      dbms_datapump.add_file(h1,'example3_dump.log','DATA_PUMP_TEST',NULL,3);
      dbms_datapump.metadata_filter(h1,'NAME_LIST','(''DEV_POOL_DATA'')');
      -- the block also needs to start and detach from the job:
      dbms_datapump.start_job(h1);
      dbms_datapump.detach(h1);
    END;
    /
    Also, in the API I want to know how to export and import multiple tables (selective tables only) using one single criterion like "WHERE TIME_NUM > 1204884480100".

    Yes, I have read the Oracle doc.
    I was able to proceed as below, but it gives an error.
    ============================================================
    SQL> SET SERVEROUTPUT ON SIZE 1000000
    SQL> DECLARE
    2 l_dp_handle NUMBER;
    3 l_last_job_state VARCHAR2(30) := 'UNDEFINED';
    4 l_job_state VARCHAR2(30) := 'UNDEFINED';
    5 l_sts KU$_STATUS;
    6 BEGIN
    7 l_dp_handle := DBMS_DATAPUMP.open(
    8 operation => 'EXPORT',
    9 job_mode => 'TABLE',
    10 remote_link => NULL,
    11 job_name => '1835_XP_EXPORT',
    12 version => 'LATEST');
    13
    14 DBMS_DATAPUMP.add_file(
    15 handle => l_dp_handle,
    16 filename => 'x1835_XP_EXPORT.dmp',
    17 directory => 'DATA_PUMP_DIR');
    18
    19 DBMS_DATAPUMP.add_file(
    20 handle => l_dp_handle,
    21 filename => 'x1835_XP_EXPORT.log',
    22 directory => 'DATA_PUMP_DIR',
    23 filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    24
    25 DBMS_DATAPUMP.data_filter(
    26 handle => l_dp_handle,
    27 name => 'SUBQUERY',
    28 value => '(where "XP_TIME_NUM > 1204884480100")',
    29 table_name => 'ldev_perf_data',
    30 schema_name => 'XPSLPERF'
    31 );
    32
    33 DBMS_DATAPUMP.start_job(l_dp_handle);
    34
    35 DBMS_DATAPUMP.detach(l_dp_handle);
    36 END;
    37 /
    DECLARE
    ERROR at line 1:
    ORA-39001: invalid argument value
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3043
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3688
    ORA-06512: at line 25
    ============================================================
    i have a table called LDEV_PERF_DATA and its in schema XPSLPERF.
    The value => '(where "XP_TIME_NUM > 1204884480100")' above is the condition on which I want to filter the data.
    However, the below snippet works fine.
    ============================================================
    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
    l_dp_handle NUMBER;
    l_last_job_state VARCHAR2(30) := 'UNDEFINED';
    l_job_state VARCHAR2(30) := 'UNDEFINED';
    l_sts KU$_STATUS;
    BEGIN
    l_dp_handle := DBMS_DATAPUMP.open(
    operation => 'EXPORT',
    job_mode => 'SCHEMA',
    remote_link => NULL,
    job_name => 'ldev_may20',
    version => 'LATEST');
    DBMS_DATAPUMP.add_file(
    handle => l_dp_handle,
    filename => 'ldev_may20.dmp',
    directory => 'DATA_PUMP_DIR');
    DBMS_DATAPUMP.add_file(
    handle => l_dp_handle,
    filename => 'ldev_may20.log',
    directory => 'DATA_PUMP_DIR',
    filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    DBMS_DATAPUMP.start_job(l_dp_handle);
    DBMS_DATAPUMP.detach(l_dp_handle);
    END;
    ============================================================
    I don't want to export everything as above; I want to export data based on some conditions, and only for selected tables.
    Any help is highly appreciated.
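    For what it's worth, the ORA-39001 is most likely the table name: DBMS_DATAPUMP matches names as stored in the dictionary (upper case unless created as quoted identifiers), and the SUBQUERY value is appended to the query Data Pump generates, so it should be a plain WHERE clause. A corrected sketch (hedged, not verified against that system):

    DBMS_DATAPUMP.data_filter(
      handle      => l_dp_handle,
      name        => 'SUBQUERY',
      value       => 'WHERE XP_TIME_NUM > 1204884480100',
      table_name  => 'LDEV_PERF_DATA',   -- upper case, as stored in the dictionary
      schema_name => 'XPSLPERF');

    Omitting table_name applies the same filter to every table in the job, which also answers the selective-tables question: either call data_filter once per table, or leave table_name NULL for a blanket filter.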

  • Data Pump with parallel data ending up in 1st file!

    Oracle 10g 10.2.0.3 EE
    Ran the following command on a 16 core HPUX PA-RISC machine:
    expdp normaluser/password@RDSPOC FULL=y directory=DMPDIR parallel=12 dumpfile=exp_RDSPOC_2nd_%U.dmp logfile=exp_RDSPOC_2nd.log
    Database size: approx. 900 GB of data.
    All things looked good at first: all cores close to 100% utilized, disk also at 100% utilized.
    1h23m later I had 11 files, all about the same size (±40 GB).
    The first dump file continued to grow. After another 60 hours the first file is approx. 500 GB and growing (status says 85% complete).
    One core is running at max, and disk utilization is about 15%.
    Note: I am not SYS but a normal user with the full export privilege (if that could make a difference).
    How do I get it to keep the machine running all cores and disks as hard as possible?
    Thanks

    Hi,
    Metadata is never unloaded in parallel, but is sometimes loaded in parallel.
    See Parallel Capabilities of Oracle Data Pump (Doc ID 365459.1)
    Also check the status of expdp: if there is, for example, one big table, only one worker will still have data to pump.
    HTH,
    Peter
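    To see which worker is stuck on which table (a sketch; the job name is whatever expdp printed at startup, e.g. SYS_EXPORT_FULL_01):

    expdp normaluser/password@RDSPOC ATTACH=SYS_EXPORT_FULL_01
    Export> STATUS

    The per-worker status usually reveals the single large table that is serializing the tail of the job.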

  • Data pump + inserting new values in a table

    Hello Everybody!!
    My question is: is it possible to insert a new row into a table that is included in a running Data Pump export?
    Best regards
    Marcin Migdal

    Have a look here; I've never done it myself, but it seems kind of cool :)
    http://www.pythian.com/blogs/766/oracle-data-pump-11g-little-known-new-feature
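    In general, yes: the export takes no exclusive locks, so concurrent inserts are allowed; whether they show up in the dump depends on when that table happens to be unloaded. To pin the whole export to one point in time, you can use the flashback parameters (a sketch; note that on 10g FLASHBACK_TIME needs an explicit TO_TIMESTAMP string rather than SYSTIMESTAMP):

    expdp scott/tiger DIRECTORY=dp_dir DUMPFILE=scott.dmp FLASHBACK_TIME=SYSTIMESTAMP

    With FLASHBACK_TIME (or FLASHBACK_SCN) the dump is consistent as of that moment, and later inserts simply will not be included.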

  • File name substitution with Data pump

    Hi,
    I'm experimenting with Oracle data pump export, 10.2 on Windows 2003 Server.
    In my current export scripts, I am able to create the dump file name dynamically.
    This name includes the database name, date, and time, such as the
    following: exp_testdb_01192005_1105.dmp.
    When I try to do the same thing with Data Pump, it doesn't work. Has anyone
    had success with this? Thanks.
    ed lewis

    Hi Ed
    This is an example for your issue:
    [oracle@dbservertest backups]$ expdp gsmtest/gsm directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
    Export: Release 10.2.0.1.0 - Production on Thursday, 19 January, 2006 12:23:55
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Starting "GSMTEST"."SYS_EXPORT_TABLE_01": gsmtest/******** directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 64 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "GSMTEST"."BAN_BANCO" 7.718 KB 9 rows
    Master table "GSMTEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for GSMTEST.SYS_EXPORT_TABLE_01 is:
    /megadata/clona/exp_testdb_01192005_1105.dmp
    Job "GSMTEST"."SYS_EXPORT_TABLE_01" successfully completed at 12:24:18
    This works OK.
    Regards,
    Wilson
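    As for the dynamic part of the question, the substitution has to happen in the calling shell rather than in expdp itself. A sketch for a Unix shell (on Windows the same idea works with %date%/%time% in a batch file):

    STAMP=$(date +%m%d%Y_%H%M)
    expdp gsmtest/gsm directory=dpdir dumpfile=exp_testdb_${STAMP}.dmp logfile=exp_testdb_${STAMP}.log tables=ban_banco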

  • How to exclude statistics using Data Pump API?

    How to exclude all statistics while exporting data using Oracle Data Pump API (DBMS_DATAPUMP package)?

    You would call the metadata filter API like this (named parameters use =>, and entries in a *_PATH_LIST value are quoted):
    dbms_datapump.metadata_filter(
      handle => your_handle_here,
      name   => 'EXCLUDE_PATH_LIST',
      value  => '''STATISTICS''');
    Hope this helps.
    Dean

  • Error connecting as DBA using Data Pump Within Transportable Modules

    Hi all
    I am using OWB 10gR2 and trying to set up a Transportable Module to test the Oracle Data Pump utility. I have followed the user manual in terms of the relevant grants and permissions needed to use this functionality, and have successfully connected to both my source and target databases and created my transportable module, which will extract six tables from a source database on 10.2.0.1.0 to my target schema in my warehouse, also on 10.2.0.1.0. When I come to deploy/execute the transportable module, it fails with the following error:
    RPE-01023: Failed to establish connection to target database as DBA
    Now we have even gone as far as granting the DBA role to the user within our target, but we still get the same error, so I assume it is something to do with the connection of the Transportable Target Module Location and that it needs to connect as DBA somehow in the connect string. Has anyone experienced this issue, and is there a way of creating the location connection that is not documented?
    There is no mention of this anywhere within the manual, and I have even followed the example from http://www.rittman.net/archives/2006_04.html. My target user has the privileges detailed in the manual, as below:
    User must not be SYS. Must have ALTER TABLESPACE privilege and IMP_FULL_DATABASE role. Must have CREATE MATERIALIZED VIEW privilege with ADMIN option. Must be a Warehouse Builder repository database user.
    Any help would be appreciated before I raise a request with Oracle.

    Did you ever find a resolution? We are experiencing the same issue...
    thanks
    OBX

  • Migration using Data Pump confusion :(

    Hi,
    I have a task to migrate a database from Windows x64 to Linux x64, from 10.2.0.4 to 11.2.0.4. Please correct my scenario:
    1. Install and configure the binaries on the new Linux host.
    2. Create an empty database on the Linux host.
    3. Perform a Data Pump export from the old Windows database.
    4. Import the dump into the new database on Linux.
    Are the general steps correct? My confusion is about the Data Pump utility. As I understand it, Data Pump doesn't export the SYS/SYSTEM schemas, right? I've performed a test migration with the following parameters:
    expdp ..... full=y
    impdp ..... full=y
    and I got a lot of errors similar to "object already exists" and so on. Am I correct in using full=y in both expdp and impdp? Maybe I should export the full database and then import only specified schemas (application schemas)? What about privileges? Where are they stored in the database? Maybe when I import only application schemas there will be a problem with privileges, grants, etc.?
    Thanks

    Yes, your steps are correct. When you create the new database, the SYS/SYSTEM objects are already populated, so you will see "object already exists" errors.
    Moving Data Using Oracle Data Pump
    HTH
    Srini
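    A common variant that avoids most of those errors (a sketch; the schema names, directory object, and passwords are illustrative): keep the full export, but import only the application schemas:

    expdp system/password FULL=y DIRECTORY=dp_dir DUMPFILE=full_%U.dmp LOGFILE=full_exp.log
    impdp system/password SCHEMAS=app1,app2 DIRECTORY=dp_dir DUMPFILE=full_%U.dmp LOGFILE=app_imp.log

    A schema-mode import from a full dump brings each schema's objects and its grants; roles and other system-wide objects may still need a separate pass (e.g. a FULL=y import with TABLE_EXISTS_ACTION=SKIP, or manual scripts).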

  • Using Data Pump Storage Parameter option

    I am creating a database replica of our production environment; the DB names don't have to be the same.
    My option is to use Oracle Data Pump to move the data from the source database to the target database.
    I performed the same scenario for our Windows 2003 environment with no problem.
    Doing the same for Linux, I am getting a tablespace creation error, as you can see below:
    Linux-x86_64 Error: 2: No such file or directory
    Failing sql is: CREATE TABLESPACE "INQUIRY" DATAFILE '/oraappl/pca/vprod/vproddata/inquiry01.dbf' SIZE 629145600 LOGGING ONLINE PERMANENT BLOCKSIZE 8192 EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO
    ORA-39083: Object type TABLESPACE failed to create with error:
    ORA-01119: error in creating database file '/oraappl/pca/vprod/vproddata/medical01.dbf'
    ORA-27040: file create error, unable to create file
    My question is: do we have to create the tablespaces ourselves, or will Data Pump use the default tablespace location already being used by the new database?
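    In a full import, Data Pump replays the original CREATE TABLESPACE statements with the original datafile paths, so the error just means that '/oraappl/pca/vprod/vproddata' does not exist on the new host. Two common options (a sketch; the target path is illustrative): pre-create the tablespaces yourself, or remap the file names during import (a parfile avoids shell-quoting hassles):

    impdp system/password FULL=y DIRECTORY=dp_dir DUMPFILE=full.dmp \
          REMAP_DATAFILE='/oraappl/pca/vprod/vproddata/inquiry01.dbf':'/u01/oradata/replica/inquiry01.dbf'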

    Hi Richard,
    I am now creating my extra database using the duplicate command, as you suggested.
    I got everything working up until this error:
    channel ORA_AUX_DISK_1: reading from backup piece /oraappl/pca/backups/weekly/vproddata/rman/VPROD/backupset/2013_08_27/o1_mf_nnndf_TAG20130827T083750_91s7dz0r_.bkp
    ORA-19870: error reading backup piece /oraappl/pca/backups/weekly/vproddata/rman/VPROD/backupset/2013_08_27/o1_mf_nnndf_TAG20130827T083750_91s7dz0r_.bkp
    ORA-19505: failed to identify file "/oraappl/pca/backups/weekly/vproddata/rman/VPROD/backupset/2013_08_27/o1_mf_nnndf_TAG20130827T083750_91s7dz0r_.bkp"
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    It is defaulting to the recovery location of the production database, instead of the auxiliary db.
    My next option was to catalog the backup files, but even that is not working. Any suggestions?
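    If the backup pieces were copied to a different path on the auxiliary host, cataloging them under their new location usually fixes this (a sketch; the path is an illustrative placeholder):

    RMAN> CATALOG START WITH '/path/on/aux/host/backups' NOPROMPT;

    After cataloging, DUPLICATE should locate the pieces without falling back to the production paths.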
