Datapump question

Hi,
Does Data Pump create triggers, views, or tables when you issue a full expdp?
I checked the documentation but couldn't find anything; I'm probably not searching in the right place.
Can someone shed some light on this?

Ricard,
The question was whether it creates any objects in the schemas being exported, triggers for example, to perform the operation.
So if I run expdp full=y as the SYS user and the export includes schema SCOTT, will new objects appear in the SCOTT schema and remain there after expdp finishes?
If I understood you correctly, you are asking whether Scott's objects will still be there in the SCOTT schema or will go away once the export finishes, right? What makes you think they would go away, and why? Export "dumps" the information about the objects into the dump file. Its purpose is not to tamper with the existing objects. This is not specific to Data Pump; the classic export utility behaves the same way.
Reading the documentation, it seems that doesn't happen; only a master table is created, in the schema of the user running the export (SYS here), and it is exported too when expdp finishes. And you also told me that expdp doesn't create any other objects to perform the operation.
My only question to you: why did you think it would drop the objects?
Is the master table dropped when expdp finishes, by the way?
Yes, the master table does get created, and it is written into the dump file once the export is over. It is dropped automatically when the job completes successfully.
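For instance, while a full export is running, the master table and the job are visible with queries like these (a sketch; the job-name pattern varies with the job, and the DBA views require appropriate privileges):

-- the master table is created in the exporting user's schema, named after the job
SELECT owner, table_name
FROM dba_tables
WHERE table_name LIKE 'SYS_EXPORT_FULL%';

-- running and stopped Data Pump jobs
SELECT owner_name, job_name, operation, state
FROM dba_datapump_jobs;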
You need to re-read the docs I guess.
Aman....

Similar Messages

  • RMAN and Datapump Question

    Hello,
    Database : 11gR2
I was recently provided access to a new database because users are complaining of poor performance.
While monitoring this database using Statspack reports (we don't have a Diagnostics Pack license), I have seen both RMAN and Data Pump processes running together during peak hours. So I asked the DBA responsible for this server why she was scheduling RMAN and Data Pump together. She replied that she knows nothing about these Data Pump processes and that it could be RMAN using them ("Module: Data Pump Worker")!
This answer surprised me, since I have never read that in the Oracle documentation! Is it correct that the RMAN utility can use Data Pump workers??!
This is her answer: "I think these 'Module: Data Pump Worker' sessions are run by RMAN during backup."
    Thank you and sorry for this stupid question!

RMAN and Data Pump are two different utilities provided by Oracle for taking backups.
Oracle backups fall into two categories:
a) Physical backups: copies of the physical database files, taken with the RMAN utility provided by Oracle.
b) Logical backups: logical data (for example, tables and stored procedures) extracted with the Oracle Data Pump utility and stored in a binary file.
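One way to check directly which sessions are Data Pump workers and which jobs own them (a sketch; requires access to V$SESSION and DBA_DATAPUMP_JOBS):

-- sessions whose module identifies them as Data Pump
SELECT sid, serial#, username, module
FROM v$session
WHERE module LIKE 'Data Pump%';

-- the Data Pump jobs themselves, with their owners
SELECT owner_name, job_name, operation, state
FROM dba_datapump_jobs;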
Refer to this thread; it will help:
    https://forums.oracle.com/forums/thread.jspa?threadID=1133819
    And
    http://myoracleservice.com/?q=node/38

  • DataPump Question for Full Expdp

    Dear All,
How do I export a full database using the Data Pump utility? The client needs the full 11i eBiz database alone for testing purposes.
I have some basic idea from the local exports I usually run:
$ expdp scott/tiger directory=dump dumpfile=table_scott.dmp logfile=table_scott.log tables=emp
That is for a single table.
How do we export the whole database? Before the export, what privileges must the user have, and what are the prerequisites?
$ expdp system/manager full=y directory=dump dumpfile=full.dmp logfile=full.log job_name=fulljob
Am I correct in the above syntax for exporting the full DB?
When I import, what prerequisites do I have to check in the new database? Kindly help me to study this subject deeply.
    Thanks and Regards
    HAMEED
One more question:
Before the export, do we have to take a report? If so, what kind of report, and how do we take it?

So how come it's possible to import each and every tablespace with the remap technique?
AFAIK you have to do that: create a parameter file and use either the REMAP_DATAFILE or REMAP_TABLESPACE parameter. Be aware that the REMAP_DATAFILE value uses quotation marks, so specify it in a parameter file.
    Also check below MOS before using this parameter:-
    Note:429846.1 "Slow Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE parameters"
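For example, a parameter file might look like this (a sketch; all names and paths are placeholders):

# remap.par -- hypothetical import parameter file
directory=DP_DIR
dumpfile=full.dmp
full=y
remap_tablespace=users_old:users_new
remap_datafile="'/u01/olddb/system01.dbf':'/u02/newdb/system01.dbf'"

$ impdp system/manager parfile=remap.par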
    Thanks,
    JD

  • Datapump question with "query"

    Hi,
I'm trying to use expdp to export only certain rows of a table. Am I doing something wrong? (Obviously yes ;-) I'm getting the following error:
    $ expdp user/password directory=DP dumpfile=dunp.dmpdp tables=user.t_calculationhistory query=user.t_calculationhistory:"WHERE DATEOFCALCULATION>TO_DATE('20081111','YYYYMMDD') AND DATEOFCALCULATION<TO_DATE('20081112','YYYYMMDD')"
    **LRM-00116: syntax error at ')' following 'YYYYMMDD'**
    $
Any suggestions on the syntax?

    Hi,
    Try this
    $ expdp user/password directory=DP dumpfile=dunp.dmpdp tables=user.t_calculationhistory query=user.t_calculationhistory:' "WHERE DATEOFCALCULATION>TO_DATE('20081111','YYYYMMDD') AND DATEOFCALCULATION<TO_DATE('20081112','YYYYMMDD')" '
Also, your condition looks wrong: DATEOFCALCULATION greater than the 11th and less than the 12th covers only a single day, so I think you may have given the wrong dates.
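A common way to sidestep shell quoting problems entirely is to put QUERY in a parameter file, where no shell escaping is needed (a sketch; the file name is hypothetical):

# query.par
directory=DP
dumpfile=dump.dmp
tables=user.t_calculationhistory
query=user.t_calculationhistory:"WHERE DATEOFCALCULATION > TO_DATE('20081111','YYYYMMDD') AND DATEOFCALCULATION < TO_DATE('20081112','YYYYMMDD')"

$ expdp user/password parfile=query.par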
    regards
    Jafar

  • Expdp (Datapump) question

    Hi,
I am using the following expdp command to create a database export for one of my schemas. The issue is that when I run it manually in a command window, it creates two files, EXPORT01.dmp and EXPORT02.dmp, which is exactly what I want. However, when I run the same command from a batch file, it creates only one file, EXPORTU.dmp, and runs out of space, as I have limited the file size to 3 MB. Why does the same command not create the two files (EXPORT01.dmp and EXPORT02.dmp) when run from the batch file?
    Here's my command:
    expdp system/password SCHEMAS=myschema LOGFILE=EXPDP_LOG_DIR:EXPORT.log DUMPFILE=EXPDP_DATA_DIR:EXPORT%U.dmp FILESIZE=3m
    Please help.
    Thank you.

Because when you run it from a batch file, the % character is consumed by the command interpreter before expdp ever sees it, so EXPORT%U.dmp becomes EXPORTU.dmp and everything goes into one file. In a Windows batch file you escape it by doubling the character:
DUMPFILE=EXPDP_DATA_DIR:EXPORT%%U.dmp FILESIZE=3m
In Unix shell scripts, a backslash before the % (or quoting the argument) is the usual fix.
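A minimal batch wrapper might look like this (a sketch; credentials and directory objects are placeholders and assumed to exist):

rem export_schema.bat -- %% is the batch-file escape for a literal %
expdp system/password SCHEMAS=myschema LOGFILE=EXPDP_LOG_DIR:EXPORT.log DUMPFILE=EXPDP_DATA_DIR:EXPORT%%U.dmp FILESIZE=3m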

  • Datapump network_link questions

I had a few high-level Data Pump questions that I have been unable to find a straight answer for, and it's been a little while since I last did a large migration. The database will be either 10gR2 or 11gR2.
    - Since you are not using any files, does the Oracle user doing either the export or import even need a directory created and granted read/write on it to the user?
    - Does the user even need any other permission besides create session? (like create table, index, etc) I wasn't sure if any create table statements are executed behind the scenes when the DDL is run.
- Just out of curiosity, is there any other way to use NETWORK_LINK without a public database link? The system guys do not like us creating these; they see them as a security issue...
- Does using NETWORK_LINK lock the tables against writes when doing a single schema-level export/import, or can it be done while the database is online without any consequences (besides some reduced performance, I'd imagine)? I thought I read that transportable tablespaces require the tablespaces to be read-only (not that I'm trying to do that).
    - We have at least 1 TB of raw data in the tablespaces to migrate. If I start the data pump using the NETWORK_LINK, and the network connection gets dropped unexpectedly (router outage, etc), and say I was at 900 GB, would I have to start over or is there any kind of savepoint and resume concept?
    Thanks.

    Hi
    is there any other way to use NETWORK_LINK without using a public database link?
Please see this document:
    http://www.oracle-base.com/articles/10g/oracle-data-pump-10g.php#NetworkExportsImports
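For what it's worth, a network-mode import still needs a directory object on the importing side for its log file, and the database link does not have to be public; a private link owned by the importing user works too. A sketch (the link and directory names are placeholders):

$ impdp system/password tables=scott.emp network_link=SRC_DB_LINK directory=DP_DIR logfile=net_imp.log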
    is there any kind of savepoint and resume concept?
    expdp help=y OR impdp help=y
    START_JOB Start/resume current job.
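For example, an interrupted job can typically be resumed by attaching to it (the job name FULLJOB is hypothetical):

$ expdp system/password attach=FULLJOB
Export> START_JOB
Export> CONTINUE_CLIENT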
    Please refer oracle documentation
    http://docs.oracle.com/cd/B28359_01/server.111/b28319/dp_export.htm
    Best of Luck :)
    Regards
    Hitgon

  • Datapump table import, simple question

    Hi,
As a junior DBA, I am a little bit confused.
Suppose a user wants me to import a table with Data Pump which he exported from another DB with different schemas and tablespaces (he exported with expdp using tables=XX and I don't know the details)...
If I only know the table name, should I ask these three questions: 1) schema, 2) tablespace, 3) Oracle version?
From the docs I know the remapping capabilities of impdp. But are they mandatory when importing just a table?
    thanks in advance

    Hi,
Suppose a user wants me to import a table with Data Pump which he exported from another DB with different schemas and tablespaces (he exported with expdp using tables=XX and I don't know the details)...
If I only know the table name, should I ask these three questions: 1) schema, 2) tablespace, 3) Oracle version?
You can get this information from the dumpfile if you want, just to make sure you get the right information. Run your import command but add:
    sqlfile=mysql.sql
Then you can open that SQL file to see what is in the dumpfile. It won't show data, but it will show all of the metadata (tables, tablespaces, etc.). It is a .sql file containing all of the CREATE statements that would have been executed had you not added the sqlfile parameter.
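For example (the directory, dumpfile, and SQL file names are placeholders):

$ impdp system/password directory=DP_DIR dumpfile=export.dmp sqlfile=mysql.sql

Nothing is imported in this mode; the DDL is only written to mysql.sql in the directory.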
From the docs I know the remapping capabilities of impdp. But are they mandatory when importing just a table?
Thanks in advance
You never have to remap anything, but if the dumpfile contains a table scott.emp, then it will be imported into scott.emp. If you want it to go into blake, you need REMAP_SCHEMA. If it is going into tablespace tbs1 and you want it in tbs2, you need REMAP_TABLESPACE.
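As a sketch (the schema and tablespace names are examples):

$ impdp system/password directory=DP_DIR dumpfile=export.dmp tables=scott.emp remap_schema=scott:blake remap_tablespace=tbs1:tbs2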
Suppose an end user wants me to export a specific table using Data Pump...
Should I also give him the name of the tablespace where the exported table resides?
It would be nice, but see above: you can get the tablespace name from the sqlfile command on import.
    Hope this helps.
    Dean

  • Some question regarding the DATAPUMP parameter

    Hi,
I am trying to perform a database import using the EXCLUDE=GRANT parameter, but I get the following errors:
ORA-39002: invalid operation
ORA-39168: Object path GRANT was not found.
What is the correct syntax for using this parameter?
=====================================
In the old import utility the ANALYZE parameter was available; what is the equivalent parameter (if any) in Data Pump import?
    Thanks,
    Barry

Barry wrote:
Hi,
I am trying to perform a database import using the EXCLUDE=GRANT parameter, but I get the following errors:
ORA-39002: invalid operation
ORA-39168: Object path GRANT was not found.
What is the correct syntax for using this parameter?
=====================================
In the old import utility the ANALYZE parameter was available; what is the equivalent parameter (if any) in Data Pump import?
Thanks,
Barry
Syntax????
    SYNTAX?????
    Did you ever consider that syntax can be looked up in the very fine reference manual?
    =================================================
I don't want to be flippant or rude, but if people want to be professionals in ANY field, the first knowledge they need to acquire is how to locate AND USE the fundamental reference materials for that profession. And the most important trait, the one for which they are really hired, is the ability to do independent research, and a modicum of curiosity that would drive one to do that research.

We don't mind helping newbies, and even the most experienced person on this board will run into something they are not familiar with, or occasionally just require a second set of eyes to look at something. But a professional needs, above all, a willingness and capability to check the docs. A professional isn't necessarily someone who has all the answers at their fingertips or has a full understanding of every arcane subject in their field. It certainly isn't someone who has an encyclopedia full of memorized answers but little understanding of how it all fits together. It's someone who knows where to find the answers when needed, and how to recognize them when he sees them. It's less about knowing than it is about attitude. Everything you asked can be answered in the Oracle Concepts Manual at tahiti.oracle.com. You should bookmark that site.
    =================================================
    Learning how to look things up in the documentation is time well spent investing in your career. To that end, you should drop everything else you are doing and do the following:
    Go to tahiti.oracle.com.
    Drill down to your product and version.
BOOKMARK THAT LOCATION
    Spend a few minutes just getting familiar with what is available here. Take special note of the "books" and "search" tabs. Under the "books" tab you will find the complete documentation library.
Spend a few minutes just getting familiar with what *kind* of documentation is available there by simply browsing the titles under the "Books" tab.
Open the Reference Manual and spend a few minutes looking through the table of contents to get familiar with what *kind* of information is available there.
    Do the same with the SQL Reference Manual.
    Do the same with the Utilities manual.
You don't have to read the above in depth. They are *reference* manuals. Just get familiar with *what* is there to *be* referenced. Ninety percent of the questions asked on this forum can be answered in less than 5 minutes by simply searching one of the above manuals.
    Then set yourself a plan to dig deeper.
    - Read a chapter a day from the Concepts Manual.
    - Take a look in your alert log. One of the first things listed at startup is the initialization parms with non-default values. Read up on each one of them (listed in your alert log) in the Reference Manual.
    - Take a look at your listener.ora, tnsnames.ora, and sqlnet.ora files. Go to the Network Administrators manual and read up on everything you see in those files.
    - When you have finished reading the Concepts Manual, do it again.
    Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.
    =================================

  • Datapump monitoring question

    Hi,
    I've set up two stored procedures:
1. One to do a Data Pump import over a network link using the Data Pump API.
2. Another to read the resulting log file on the server into a CLOB field in a table.
    Individually, both of the above work fine.
The problem is when I try to call stored procedure 2 from procedure 1: since procedure 1 returns before the import completes, the log file is still empty when procedure 2 reads it into the CLOB.
    I know I can monitor user_datapump_sessions for completion, any ideas how to make that work in a stored procedure? I thought maybe chained jobs, but I'm not sure.

I found the answer: use DBMS_DATAPUMP.WAIT_FOR_JOB rather than returning right after START_JOB.
    Thanks.
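The pattern, as a minimal sketch (the database link name MY_DB_LINK is hypothetical, and the file and filter setup is elided):

DECLARE
  h1        NUMBER;
  job_state VARCHAR2(30);
BEGIN
  h1 := DBMS_DATAPUMP.OPEN('IMPORT', 'SCHEMA', 'MY_DB_LINK');
  -- ... ADD_FILE / METADATA_FILTER / log file setup as before ...
  DBMS_DATAPUMP.START_JOB(h1);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h1, job_state);  -- blocks until the job completes or stops
  -- only now is it safe to call the procedure that loads the log file into the CLOB
END;
/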

  • Datapump Export Wipro Question

    Hi,
We have a 50 GB schema, but the dumpfile can be only 5 GB.
Now the question is:
How can we export this 50 GB schema into a 5 GB dumpfile?
Is this possible?
Can you give me the solution, with the command?

Create a compressed export on the fly. Depending on the type of data, you can probably export up to 10 gigabytes into a single file. This example uses gzip; it offers the best compression I know of, but you can also substitute zip, compress, or whatever.
    # create a named pipe
    mknod exp.pipe p
    # read the pipe - output to zip file in the background
    gzip < exp.pipe > scott.exp.gz &
    # feed the pipe
    exp userid=scott/tiger file=exp.pipe ...
    refer the link:
    http://www.orafaq.com/wiki/Import_Export_FAQ#Can_one_export_to_multiple_files.3F.2F_Can_one_beat_the_Unix_2_Gig_limit.3F
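Note that the named-pipe trick above relies on the classic exp utility; Data Pump writes through server-side directory objects and cannot write to a pipe. If you are on 11g, expdp has a native alternative (COMPRESSION=ALL requires the Advanced Compression option):

$ expdp scott/tiger schemas=scott directory=dump dumpfile=scott_comp.dmp compression=all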

  • Editing a datapump .dmp file question

I am running a 10.2.0.4 RDBMS on an HP-UX platform and I am taking a Data Pump export with content=metadata_only. I want to then import this metadata into a new database, which will be running 11.2.0.3.
I want to be able to change the default tablespaces for where the objects live in the export file, and also change the initial extent size, so that the empty objects take up only a small amount of space in the new database and all of the objects are created in one tablespace.
Is it possible to edit the .dmp file and make these kinds of global changes, so that all objects are created in the USERS tablespace with an initial extent of 10 MB?
    Thanks.

    923395 wrote:
I am running a 10.2.0.4 RDBMS on an HP-UX platform and I am taking a Data Pump export with content=metadata_only. I want to then import this metadata into a new database, which will be running 11.2.0.3.
I want to be able to change the default tablespaces for where the objects live in the export file, and also change the initial extent size, so that the empty objects take up only a small amount of space in the new database and all of the objects are created in one tablespace.
Is it possible to edit the .dmp file?
Not possible.
You could use DBMS_METADATA.GET_DDL to spool the DDL to a file, then do a global replace with any text editor.
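A sketch in SQL*Plus (the schema name and object type are examples):

SET LONG 100000 LONGCHUNKSIZE 100000 PAGESIZE 0 LINESIZE 200
SPOOL scott_ddl.sql
SELECT DBMS_METADATA.GET_DDL('TABLE', table_name, owner)
FROM dba_tables
WHERE owner = 'SCOTT';
SPOOL OFF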

  • Datapump API PL/SQL question

I created a procedure. In the procedure, the schema name SCOTT is hardcoded:
DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''SCOTT'')');
If I want to pass the schema name into the procedure, say as inSchema, how do I replace 'IN (''SCOTT'')' with it?
DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR',inSchema);
It compiles but does not run. Thanks

Now it works, by calling: exec test_datapump('IN (''SCOTT'')');
This code (with very little modification) is from the Oracle API documentation.
CREATE OR REPLACE PROCEDURE do_export.test_datapump (inSchema VARCHAR2)
IS
  ind          NUMBER;         -- Loop index
  h1           NUMBER;         -- Data Pump job handle
  percent_done NUMBER;         -- Percentage of job complete
  job_state    VARCHAR2(30);   -- To keep track of job state
  le           ku$_LogEntry;   -- For WIP and error messages
  js           ku$_JobStatus;  -- The job status from get_status
  jd           ku$_JobDesc;    -- The job description from get_status
  sts          ku$_Status;     -- The status object returned by get_status
BEGIN
  -- Create a (user-named) Data Pump job to do a schema export.
  h1 := DBMS_DATAPUMP.OPEN('EXPORT', 'SCHEMA', NULL, 'EXAMPLE907', 'LATEST');

  -- Specify a single dump file for the job (using the handle just returned)
  -- and a directory object, which must already be defined and accessible
  -- to the user running this procedure.
  DBMS_DATAPUMP.ADD_FILE(h1, 'example1.dmp', 'NIGHTLY_DB_EXPORT');

  -- A metadata filter is used to specify the schema that will be exported.
  -- The caller passes the whole filter expression, e.g. 'IN (''SCOTT'')':
  -- DBMS_DATAPUMP.METADATA_FILTER(h1, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  -- DBMS_DATAPUMP.METADATA_FILTER(h1, 'SCHEMA_EXPR', 'IN (''' || inSchema || ''')');
  DBMS_DATAPUMP.METADATA_FILTER(h1, 'SCHEMA_EXPR', inSchema);

  -- Start the job. An exception is raised if something is not set up properly.
  DBMS_DATAPUMP.START_JOB(h1);

  -- The export job should now be running. In the following loop, the job is
  -- monitored until it completes. In the meantime, progress information is
  -- displayed.
  percent_done := 0;
  job_state := 'UNDEFINED';
  WHILE (job_state != 'COMPLETED') AND (job_state != 'STOPPED') LOOP
    DBMS_DATAPUMP.GET_STATUS(h1,
        DBMS_DATAPUMP.ku$_status_job_error +
        DBMS_DATAPUMP.ku$_status_job_status +
        DBMS_DATAPUMP.ku$_status_wip, -1, job_state, sts);
    js := sts.job_status;

    -- If the percentage done changed, display the new value.
    IF js.percent_done != percent_done THEN
      DBMS_OUTPUT.PUT_LINE('*** Job percent done = ' || TO_CHAR(js.percent_done));
      percent_done := js.percent_done;
    END IF;

    -- If any work-in-progress (WIP) or error messages were received for the
    -- job, display them.
    IF (BITAND(sts.mask, DBMS_DATAPUMP.ku$_status_wip) != 0) THEN
      le := sts.wip;
    ELSIF (BITAND(sts.mask, DBMS_DATAPUMP.ku$_status_job_error) != 0) THEN
      le := sts.error;
    ELSE
      le := NULL;
    END IF;

    IF le IS NOT NULL THEN
      ind := le.FIRST;
      WHILE ind IS NOT NULL LOOP
        DBMS_OUTPUT.PUT_LINE(le(ind).LogText);
        ind := le.NEXT(ind);
      END LOOP;
    END IF;
  END LOOP;

  -- Indicate that the job finished and detach from it.
  DBMS_OUTPUT.PUT_LINE('Job has completed');
  DBMS_OUTPUT.PUT_LINE('Final job state = ' || job_state);
  DBMS_DATAPUMP.DETACH(h1);
END;

  • How to load data in a oracle-datapump table ...

I have done the following steps:
1) I created an external table over a .csv file:
CREATE TABLE TB1 (
  COLLECTED_TIME VARCHAR2(8)
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY SRC_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    BADFILE 'TB1.bad'
    LOGFILE 'TB1.log'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '`'
    MISSING FIELD VALUES ARE NULL
    (COLLECTED_TIME)
  )
  LOCATION ('TB.csv')
);
2) I create a Data Pump table and .dmp file using the step-1 table (TB1) as the source:
CREATE TABLE TB2
ORGANIZATION EXTERNAL (
  TYPE oracle_datapump
  DEFAULT DIRECTORY SRC_DIR
  LOCATION ('TB.dmp')
) AS
SELECT TO_DATE(DECODE(COLLECTED_TIME, '24:00:00', '00:00:00', COLLECTED_TIME), 'HH24:MI:SS') COLLECTED_TIME
FROM TB1;
3) Finally, I create a Data Pump table which uses TB.dmp, created in step 2, as its source:
CREATE TABLE TB (
  COLLECTED_TIME DATE
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY SRC_DIR
  LOCATION ('TB.dmp')
);
Question: How do I create the TB.dmp file in SRC_DIR of step 2 using PL/SQL code?
How do I get the data into table TB of step 3 using PL/SQL code?
    Thanks
    abc

    Hi,
Yes, I want to execute the SQL of step 2 (as below) from a PL/SQL package:
CREATE TABLE TB2
ORGANIZATION EXTERNAL (
  TYPE oracle_datapump
  DEFAULT DIRECTORY SRC_DIR
  LOCATION ('TB.dmp')
) AS
SELECT TO_DATE(DECODE(COLLECTED_TIME, '24:00:00', '00:00:00', COLLECTED_TIME), 'HH24:MI:SS') COLLECTED_TIME
FROM TB1;
    Please advise.
    Thanks
    abc
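One way to run that DDL from a PL/SQL package is dynamic SQL; a minimal sketch, assuming SRC_DIR exists and TB1 is populated (note the dump file must not already exist, or the CREATE fails):

BEGIN
  EXECUTE IMMEDIATE q'[
    CREATE TABLE TB2
    ORGANIZATION EXTERNAL (
      TYPE oracle_datapump
      DEFAULT DIRECTORY SRC_DIR
      LOCATION ('TB.dmp')
    ) AS
    SELECT TO_DATE(DECODE(COLLECTED_TIME, '24:00:00', '00:00:00', COLLECTED_TIME),
                   'HH24:MI:SS') COLLECTED_TIME
    FROM TB1]';
END;
/

The step-3 CREATE can be run the same way, after which table TB reads straight from TB.dmp.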

  • Any specific disadvantage of using DataPump for upgrade ?

    Hi,
    Here's my config
    SERVER A
    Source O/S : Win 2003
    Source DB : 10.2.0.4 ( 32 bit )
    DB Size : 100 GB
    SERVER B
    Target O/S : Win 2008 R2 sp1
    Target DB : 11.2.0.3 ( 64 bit )
I have to upgrade the 10g database of Server A by installing 11g on Server B. I have never used the Database Upgrade Assistant, RMAN, or similar utilities to perform a 10g-to-11g upgrade.
    Here is my question ...
a) I was planning to use Data Pump to perform this upgrade (downtime is not an issue). Since I know how to use Data Pump, do you see any potential problem with this approach? OR
b) Based on your experience, would you suggest I avoid option (a) because of potential issues and use the other methods Oracle suggests, like the upgrade assistant?
I am open to both options; it's just that, since I am not an expert at this point, I was hesitating a bit to go with option (b).
The DB is also going from 32-bit to 64-bit. Not sure if that is a deal breaker.
Note: the upgrade is supposed to happen on the second server. Can somebody provide the high-level steps as a pointer?
    -Learner

If downtime is not an issue, Data Pump is certainly an option. How big is the database? The steps are documented:
    http://docs.oracle.com/cd/E11882_01/server.112/e23633/expimp.htm#i262220
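At a high level it is a full export on Server A, a file copy, and a full import into an empty 11.2.0.3 database created on Server B first; a sketch, assuming a directory object DP_DIR exists on both servers (DP_DIR is a placeholder):

rem on Server A (10.2.0.4)
expdp system/password full=y directory=DP_DIR dumpfile=full%U.dmp logfile=full_exp.log filesize=10g

rem copy the dump files to Server B's DP_DIR, then on Server B (11.2.0.3)
impdp system/password full=y directory=DP_DIR dumpfile=full%U.dmp logfile=full_imp.log

The 32-bit to 64-bit change is not a problem for this approach, since Data Pump dump files are platform-independent.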
    HTH
    Srini

  • Can RMAN backup and export datapump executed at the same time?

    Hello,
I have several databases that I back up using RMAN and Data Pump export every night, starting at 6 PM and ending at midnight. The backup maintenance window doesn't give me enough time to run each database at a different time. I am using crontab to schedule my backups. Since I have so many databases that need to be backed up between 6 PM and midnight, some of the export and RMAN backup scripts execute almost at the same time. My question is: can my Data Pump export and RMAN backup scripts run at the same time?
    Thank you in advance.
    John

Needs must. If you don't run expdp in parallel, it doesn't use that much. If it were really killing the system, you could look into setting up a resource plan that throttles that user, but that is a big step.
I would also look into using RMAN incremental backups and block change tracking to minimize your RMAN time.
    Regards
    If your shop needs to do both simultaneously then go for it.
    Chris.
PS: One of my shops has maybe 20-30 RMAN and Data Pump jobs kicking off, some simultaneous, some not, from 00:00 to 01:30. No complaints from users and no problems either. Go for it.
