Datapump dump file view

Hi Experts,
I have some old Data Pump expdp dump files and want to check their contents before importing.
Is it possible to check the contents of a dump file with any impdp or expdp command?
Thanks
Edited by: Nadvi on Nov 22, 2010 1:00 PM

Nadvi wrote:
Hi,
I found the option for the original Import, i.e. IMP show=Y, but that doesn't work with IMPDP.
Has anyone tried viewing the contents of a Data Pump EXPDP dump file before?
Thanks.

For impdp you can use the SQLFILE parameter for this purpose; please refer to
http://download.oracle.com/docs/cd/B13789_01/server.101/b10825/dp_import.htm
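For example, a minimal sketch (the directory object and file names are placeholders):
impdp system@orcl DIRECTORY=dp_dir DUMPFILE=old_export.dmp SQLFILE=contents.sql
This writes the DDL stored in the dump file to contents.sql without importing anything. Note that SQLFILE extracts metadata (DDL) only; it does not show the table data itself.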

Similar Messages

  • IMPORT a VIEW from an export dump file

    I'm trying to import a view from an export dump file. When using the Import utility, do I use the TABLES parameter and give the view name, i.e. TABLES=view1?

    All I want to do is import ONE view from an export dmp file that contains hundreds of tables, many views, and procedures. I do not want to reimport everything from the export file.

  • Create dump file with datapump, with read right for everybody

    Hello,
    I have a problem under Linux: I am creating dump files with Data Pump.
    Those dump files are owned by the dba group, with no read access for users outside the dba group.
    Is there a way to make the Data Pump utility create dump files with read access for any user?
    Franck

    Unlike "exp", when using "expdp", the dumpfile is created by the server process. The server process is forked from the database instance. It inherits the umask settings that are present when the database instance is started.
    (Therefore, the only way to change the permissions would be to change the umask for the oracle database server id and restart the database instance --- which is NOT what I would recommend).
    umask is set so that all database files created (e.g. with CREATE TABLESPACE or ALTER TABLESPACE ADD DATAFILE) are created with "secure" permissions preventing others from overwriting them -- of course, this is relevant if your database files are on FileSystem.
    Hemant K Chitale
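
    A simpler workaround, not from the original reply and assuming you can run OS commands on the server, is to relax the permissions after the export finishes (dp_dir is assumed to map to /u01/dumpfiles):
    expdp system@orcl SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott.log
    chmod 644 /u01/dumpfiles/scott.dmp   # grant read to group and others once the file exists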

  • How to view database core dump file

    I have an Oracle server on Windows 2000. I found that the server produces a core dump file in the ..\cdump directory. But this file is only a system memory image, so it's hard to analyze the reason for the core dump. Could you tell me how to view this file? Thanks

    There's not much useful information for you to see. You can open an SR with Oracle Support and upload the file if you want to investigate the cause of the core dump. Also check for errors in your alert<SID>.log and in core_dump_dest.

  • Encountering ORA-39000: bad dump file specification while using datapump

    Hello,
    I am trying to use Data Pump to take an export of a schema (metadata only). However, I want the dump file to be named after the date and time of the export.
    When I use the following command, the job runs perfectly:
    expdp system@***** dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`.dmp directory=EXP_DP logfile=expdp-`date '+%d%m%Y_%H%M%S'`.log SCHEMAS=MARTMGR CONTENT=METADATA_ONLY
    However, I want to run the export using a parfile, but with the parfile below I encounter the following errors:
    USERID=system@*****
    DIRECTORY=EXP_DP
    SCHEMAS=TEST
    dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`
    LOGFILE=MARTMGR.log
    CONTENT=METADATA_ONLY
    expdp parfile=martmgr.par
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning option
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-39157: error appending extension to file "expdp-`date '+%d%m%Y_%H%M%S'`"
    ORA-07225: sldext: translation error, unable to expand file name.
    Additional information: 7217
    How do I append the date and time to the dump file name when using a parfile?
    Thanks
    Rohit

    I got the error below when using the dumpfile parameter as dumpfile=dump_$export_date.dmp:
    Export: Release 11.2.0.2.0 - Production on Thu Feb 7 16:46:22 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning option
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-39157: error appending extension to file "dump_$export_date.dmp"
    ORA-07225: sldext: translation error, unable to expand file name.
    Additional information: 7217
    The script I used is as follows:
    export ORACLE_HOME=/orcl01/app/oracle/product/11.2.0.2
    export ORACLE_SID=$1
    export PATH=$PATH:$ORACLE_HOME/bin
    echo $ORACLE_HOME
    export export_date=`date '+%d%m%Y_%H%M%S'`
    echo $export_date
    expdp parfile=$2
    The parfile is:
    DIRECTORY=EXP_DP
    SCHEMAS=MARTMGR
    dumpfile=dump_$export_date.dmp
    LOGFILE=MARTMGR.log
    CONTENT=METADATA_ONLY
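
    The root cause: expdp reads a parfile verbatim, and the shell never sees its contents, so backticks and $variables inside a parfile are not expanded; that is exactly what ORA-39157 and ORA-07225 are complaining about. The usual workaround, as a sketch (file locations are placeholders), is to have the script generate the parfile so the shell expands the variables first:
    export_date=`date '+%d%m%Y_%H%M%S'`
    cat > /tmp/martmgr.par <<EOF
    DIRECTORY=EXP_DP
    SCHEMAS=MARTMGR
    DUMPFILE=dump_${export_date}.dmp
    LOGFILE=MARTMGR_${export_date}.log
    CONTENT=METADATA_ONLY
    EOF
    expdp parfile=/tmp/martmgr.par
    Because the heredoc is expanded by the shell before expdp ever runs, the DUMPFILE line that Data Pump sees contains a plain file name.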

  • Getting Datapump Export Dump file to the local machine

    I apologize to everyone as this is a duplicate post.
    Re: Getting Datapump Export Dump file to the local machine
    My initial thread (started yesterday) was in 'Database General' and didn't get much response. Where do I post questions on the EXPORT/IMPORT utilities?
    Anyway, here is my problem:
    I want to take an export dump of the itemrep schema in the orcl database (on a remote machine). I have an Oracle server (10g Release 2) running on my local Windows machine. I have created a user john with the necessary EXPORT/IMPORT privileges in my local DB. Then I created a directory object, i.e. a folder named datapump on my local hard drive, and granted READ and WRITE privileges to john.
    So john, who is a user in my local machine's Oracle DB, is going to run the expdp utility:
    expdp john/jhendrix@my_local_db_alias SCHEMAS=itemrep directory=datapump logfile=itemrepexp.log
    The above command will fail because it will look for the itemrep schema inside my local DB, not the remote DB where itemrep is actually located. And you can't qualify the schema name with its DB in the SCHEMAS parameter (like SCHEMAS=itemrep@orcl).
    Can anyone provide me a solution for this?

    I think you can initiate the Data Pump export utility from your client machine to export a schema in a remote database, but upon execution Oracle looks for the directory in the remote database, not on your local machine.
    You're invoking expdp from a client (the local DB) to export data from a remote DB.
    So, with this method, you can create the dump files only on the remote server, not on the local machine.
    Instead of an export, you can perform a direct import using the NETWORK_LINK option:
    Create a DB link from your local DB to the remote DB and verify the connection.
    Then initiate impdp from your local machine's DB using the parameter network_link=<db_link of the remote DB> to import the schema.
    The advantage of this option is that it eliminates dump file creation on the server side.
    There are no dump files during the import process; the data is imported directly into the target schema.
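
    A minimal sketch of that approach (the link name, password, and TNS alias are placeholders, and john is assumed to have the privileges needed to import another schema). First create and test the database link on the local DB:
    CREATE DATABASE LINK orcl_link CONNECT TO itemrep IDENTIFIED BY password USING 'orcl_remote';
    SELECT COUNT(*) FROM user_tables@orcl_link;   -- verifies the link works
    Then run the import from the local machine:
    impdp john/jhendrix@my_local_db_alias NETWORK_LINK=orcl_link SCHEMAS=itemrep DIRECTORY=datapump LOGFILE=itemrep_net.log
    The DIRECTORY here is used only for the log file; no dump file is ever written.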

  • Estimate the Import Time by viewing the DUMP File Size

    It's a very generic question, and I know it can vary from DB to DB.
    But is there any standard way to estimate the Oracle dump import time from the export dump file size?

    Well, it's going to be vaguely linear, in that it probably takes about twice as long to load a 2 GB dump file as to load a 1 GB dump file, all else being equal. Of course, all things rarely are equal.
    For example,
    - Since only index DDL is stored in the dump file, dumps from systems with a lot of indexes will take longer to load than dumps from systems with few indexes, all else being equal.
    - Doing a direct path load will generally be more efficient than doing a conventional path load
    Justin

  • Datapump expdp dump file missing permissions

    I need help with this:
    After the export, my dump file is missing read permission for others:
    -rw-r----- abc.dmp
    After the export it should be readable by everyone, but that is not working. I need it like below:
    -rw-r--r-- abc.dmp
    I am using the following commands before expdp (Data Pump):
    setfacl -m mask:rwx,u:${ORACLE_OWNER}:rwx ${HOME}/dumpfiles
    setfacl -m other:rwx ${HOME}/dumpfiles
    Thanks in Advance,

    If UMASK were set to 022, you could not get this:
    -rw-r----- abc.dmp
    That corresponds to a umask of 137. Also remember, as noted in the permissions thread above, that the dump file is created by the database server process, which inherits the umask the instance was started with, so ACLs set by the client session do not control it.
    I would look again: what are the user and group permissions under which it was created?

  • Is it possible to view/get the table data from a dump file

    I have dmp files generated using expdp on Oracle 11g:
    expdp_schemas_18MAY2013_1.dmp
    expdp_schemas_18MAY2013_2.dmp
    expdp_schemas_18MAY2013_3.dmp
    Can I use the parameter file given below to get the table data into the SQL file, or is impdp the only option to load the table data into the database?
    vi test1.par
    USERID="/ as sysdba"
    DIRECTORY=DATA
    dumpfile=expdp_schemas_18MAY2013%S.dmp
    SCHEMAS=USER1,USER2
    LOGFILE=user_dump_data.log
    SQLFILE=user_dump_data.sql
    and then run impdp parfile=test1.par.

    Hi,
    To explain more about my situation:
    The target database has the dump files, whereas the source database (cloud) doesn't have access to the target database.
    However, I can ask the target DB's DBA team to run the parfiles and provide me the output, like a SQL file which I can take and run in my source database.
    I got the metadata that way, but is there any way I could get the data from the dump files on the target DB without actually accessing it? My question might sound weird, but please let me know.
    <
    1. Export from the source into a dumpfile. You can do this on the source database and then copy the file over to your local database, or you can do this from a local database if you have a database link defined on the local database that points to the source database. In the second case, your dumpfile will be created on your local database.
    >
    How can I access the dump using the database link?
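
    The quoted suggestion refers to Data Pump's NETWORK_LINK parameter: if a database you can reach has a database link to the source, running expdp against that database pulls the rows across the link and writes the dump file on that server, so no access to the source file system is needed. A sketch with placeholder names:
    expdp user/pass@local_db DIRECTORY=DATA NETWORK_LINK=source_link SCHEMAS=USER1,USER2 DUMPFILE=from_source.dmp LOGFILE=from_source.log
    Also note that SQLFILE produces DDL only; impdp cannot write table data into a SQL file, so to move the rows you need either the dump file itself or a database link.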

  • Datapump exp with network_link generated crashed dump file

    Hi Experts,
    I am using an Oracle 10.2.0.4 DB on 32-bit Windows. I am trying to export the full source DB into the target DB with Data Pump over a network_link.
    But I got a dump file that Windows Explorer shows as a crash-dump file type.
    The syntax was: expdp system/test@targetDB FULL=y DIRECTORY=dataload NETWORK_LINK=sale DUMPFILE=salefull091115.dmp LOGFILE=salefulllog091115.log
    What kind of issue causes this?
    Thanks
    JIm
    Edited by: user589812 on Nov 15, 2009 3:48 AM

    Please post the last 50 lines of the export log file, along with any errors recorded in the Windows system log.
    See your duplicate post here: data pump export network_link with dblink issue
    Srini

  • Can DataPump Export Compress the Dump file?

    hi everyone,
    This is 10g.
    I use the expdp utility to export one schema at month-end. The size of the .dmp file is 9 GB.
    Can expdp perform compression to reduce the size of the dump file?
    Thanks, John

    Thanks Srini and Dean.
    I don't have 11g, so I can only benefit from 10g's ability to compress metadata.
    My monthly export is for one user schema. I assume the schema contains both data and metadata, which means that if I requested compression I would get a smaller dump file. Is that a good assumption? And would I still get only one dump file, or do you get more than one when compression is used?
    The 10g documentation I have read about expdp does not mention how to request compression.
    Thanks, John
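
    For what it's worth, the 10g parameter is COMPRESSION, and in 10g it can only compress metadata; compressing table data (COMPRESSION=ALL) arrived in 11g. A sketch with placeholder names:
    expdp john@orcl SCHEMAS=john DIRECTORY=dp_dir DUMPFILE=monthly.dmp LOGFILE=monthly.log COMPRESSION=METADATA_ONLY
    Compression does not change the number of dump files; you still get one file unless you specify multiple DUMPFILE entries or FILESIZE. And since most of a 9 GB schema dump is table data, metadata-only compression will probably not shrink it much; a common alternative is to gzip the finished dump file.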

  • Can we create export dump files on client side??

    Hi,
    Please see this link
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_overview.htm#i1010293
    Check the "NOTE" which says that
    All Data Pump Export and Import processing, including the reading and writing of dump files, is done on the system (server) selected by the specified database connect string. This means that, for nonprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on that server file system. For privileged users, a default directory object is available
    So does this mean the dump files can be created only on the server?
    What if I want to create the dump file on my client's C:\ drive?
    Is it possible?
    As far as I know, this was possible in 9i.
    Thanks

    Yes in the "orignal" export ,it was possible to have a client side dump file.But the Export DataPump uses the Directory parameter which maps to a directory on the server side only and the option to give an absolute path for the dump file on the client side is removed,the files can be only on server side.
    Aman....
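
    For contrast, a minimal sketch of that original client-side behaviour (connect string and paths are placeholders):
    exp scott/tiger@remote_db OWNER=scott FILE=c:\dumps\scott.dmp LOG=c:\dumps\scott.log
    Because original exp does all its work in the client process, FILE is a client-side path; Data Pump moved that work into server processes, which is why only server-side directory objects are allowed.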

  • Problem in data import from dump file on NAS device

    I am using Oracle 10g on Solaris 10. I have mounted an NAS device on my Solaris machine.
    I have created a directory object DIR pointing to my NAS storage, in which I have placed the dump file.
    Now when I execute the following command to import the data:
    impdp user/pass DIRECTORY=DIR DUMPFILE=MQA.DMP logfile=import.log
    then i get the following error message
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 475
    ORA-29283: invalid file operation
    Can anybody help?

    Also, Data Pump from an NAS device can perform slowly.
    Take a look at these:
    10gR2 "database backup" to nfs mounted dir results in ORA-27054 NFS error
    Note:356508.1 - NFS Changes In 10gR2 Slow Performance (RMAN, Export, Datapump), from Metalink
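
    Before chasing NFS tuning, it's worth confirming that the oracle OS user on the database server can write to the path behind the directory object, since ORA-39070/ORA-29283 on the log file usually indicates a permission, path, or mount-option problem. A quick check, as a sketch (the path is a placeholder):
    # as the oracle software owner on the database server
    ls -ld /nas_mount/dump_dir          # must be readable and writable by oracle
    touch /nas_mount/dump_dir/test.txt  # verifies write access on the NFS mount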

  • How to list out the tablenames alone from EXP Dump File

    Hi,
    We are creating a dump file by using the EXP command with a parameter file.
    Now, from this exp_backup.dmp file, I just want to find the list of tables that got exported.
    I don't have access to the parameter file used for exporting the data. Please share your views and suggestions. Thank you.

    Hi,
    Did you check the log file created for your export dump?
    Also try strings on your dump file:
    strings <dumpfilename>
    Regards
    Veda
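
    As a sketch of both suggestions (the credentials and the grep pattern are just examples):
    strings exp_backup.dmp | grep -i "CREATE TABLE"   # crude, but surfaces table DDL fragments
    imp user/pass FILE=exp_backup.dmp SHOW=y FULL=y LOG=exp_contents.log
    The SHOW=y run lists the DDL contained in the dump file in the log without importing anything.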

  • Import Oracle DataBase dump file Into Sql Server With out connection oracle using c# code.

    Hi All,
    I would like to ask how I can import my Oracle database dump file (data and structure) into SQL Server 2005 without making a connection to Oracle.
    best regards.
    Hakim. 

    Hi Hakim.
    Based on your title, what I understand is that there is an issue with transferring an Oracle DB dump into a SQL Server database using C# code. Am I right? If so, I am afraid there is no built-in method in the .NET Framework that meets your requirements. You would need an OraDump Export API to let developers integrate the corresponding capabilities into their applications. These capabilities are about exporting data from Oracle dump files into popular databases like Microsoft SQL Server. A developer with an OraDump Export API can write just a few lines of C# or Visual Basic code to implement the appropriate export process.
    I searched for a third-party option, but it's a paid service.
    http://www.convert-in.com/data-migration-service.htm
    If you don't want to write code, please refer to: Migrating Oracle Databases to SQL Server 2000
    http://technet.microsoft.com/en-us/library/cc966513.aspx
    Best regards,
    Kristin
