Datapump expdp dump file missing permissions

I need help with this:
After the export, my dump file is missing read permission for others:
-rw-r----- abc.dmp
After the export it should be readable by everyone, but that is not happening. I need it like this:
-rw-r--r-- abc.dmp
I am using the following commands before expdp (Data Pump):
setfacl -m mask:rwx,u:${ORACLE_OWNER}:rwx ${HOME}/dumpfiles
setfacl -m other:rwx ${HOME}/dumpfiles
Thanks in Advance,

If UMASK is set to 022 you could not generate this:
-rw-r----- abc.dmp which corresponds to a umask of 137.
I would look again. What are the user and group permissions under which it was created?
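A possible fix, as a sketch (assuming Linux setfacl and that ${HOME}/dumpfiles is the path behind the Data Pump DIRECTORY object): the two setfacl calls above only change the access ACL of the directory itself; files created later only inherit entries from a default ACL (the d: entries). Even then, each inherited entry is ANDed with the mode the server process requests when it creates the file, so verify the result on a freshly written dump file.
setfacl -m d:u:${ORACLE_OWNER}:rwx,d:g::r-x,d:o::r-- ${HOME}/dumpfiles   # default entries are inherited by new files
getfacl ${HOME}/dumpfiles                    # the "default:" lines should now be listed
getfacl ${HOME}/dumpfiles/abc.dmp            # check the next dump file the export writes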

Similar Messages

  • Getting Datapump Export Dump file to the local machine

    I apologize to everyone as this is a duplicate post.
    Re: Getting Datapump Export Dump file to the local machine
    My initial thread (started yesterday) was in 'Database General' and didn't get much response today. Where do I post questions on the EXPORT/IMPORT utilities?
    Anyway, here is my problem:
    I want to take an export dump of the itemrep schema in the orcl database (on a remote machine). I have an Oracle server (10g Rel2) running on my local Windows machine. I have created a user john with the necessary EXPORT/IMPORT privileges in my local DB. Then I created a directory object, i.e. a folder named datapump on my local hard drive, and granted READ and WRITE privileges to john.
    So john, who is a user in my local machine's Oracle DB, is going to run the expdp utility.
    expdp john/jhendrix@my_local_db_alias SCHEMAS=itemrep directory=datapump logfile=itemrepexp.log
    The above command will fail because it will look for the itemrep schema inside my local DB, not the remote DB where itemrep is actually located. And you can't qualify the schema name with its DB in the SCHEMAS parameter (like SCHEMAS=itemrep@orcl).
    Can anyone provide me a solution for this?

    I think you can initiate the Data Pump export utility from your client machine to export a schema in a remote database. But upon execution, Oracle looks for the directory in the remote database and not on your local machine.
    You're invoking expdp from a client (local DB) to export data from a remote DB.
    So, with this method, you can create the dump files only on the remote server and not on the local machine.
    You can perform a direct import instead of an export using the NETWORK_LINK option.
    Create a DB link from your local DB to the remote DB and verify the connection.
    Then initiate impdp from your local machine's DB using the parameter network_link=<db_link of the remote DB> to import the schema.
    The advantage of this option is that it eliminates dump file creation on the server side.
    There are no dump files during the import process; the data is imported directly into the target schema.
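    A rough sketch of that approach (the link name orcl_link and the TNS alias orcl_remote are placeholders, not names from the post). After creating and testing a database link in the local DB, e.g. CREATE DATABASE LINK orcl_link CONNECT TO itemrep IDENTIFIED BY <password> USING 'orcl_remote', the import becomes a single command; DIRECTORY is still needed for the log file, but no dump file is written on either machine:
    impdp john/jhendrix@my_local_db_alias SCHEMAS=itemrep NETWORK_LINK=orcl_link DIRECTORY=datapump LOGFILE=itemrepimp.log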

  • SELECTing expdp dump files

    Dear Experts,
    Can we read export dumps created with expdp using the ORACLE_DATAPUMP driver (11.2.0.3)?
    For example:
    1) Export a table using expdp
    expdp testuser/testuser DIRECTORY=TEST_DUR tables=TABLE_1 DUMPFILE=test.dmp  LOGFILE=test.log
    2) Read the dump file created in step-1
    CREATE TABLE my_test(
    id number)
      ORGANIZATION EXTERNAL (
         TYPE ORACLE_DATAPUMP
         DEFAULT DIRECTORY test_dir
         LOCATION ('test.dmp')
      )REJECT LIMIT UNLIMITED;
    select * from my_test;
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-31619: invalid dump file "/u08/export/test.dmp"
    Here's the version info:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    Is it possible to read the dump file like this?
    Appreciate your answer in this regard.
    Thanks
    P

    correct - from the oracle docs:
    Note:
    When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement. One of the reasons for this is that a manually created external table unloads only data (no metadata), whereas Data Pump maintains both data and metadata information for all objects involved.
    Cheers,
    Harry
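    For completeness, the dump format the ORACLE_DATAPUMP access driver can read back is the one produced by an external-table unload (CREATE TABLE ... ORGANIZATION EXTERNAL ... AS SELECT), not the one produced by expdp. A rough sketch, assuming TABLE_1 has a single NUMBER column ID as in the my_test definition above:
    sqlplus testuser/testuser << EOF
    -- unload: writes a dump in the format the access driver understands
    CREATE TABLE table_1_unload
      ORGANIZATION EXTERNAL (
        TYPE ORACLE_DATAPUMP
        DEFAULT DIRECTORY test_dir
        LOCATION ('table_1_unload.dmp'))
      AS SELECT id FROM table_1;
    -- read it back; the column list must match what was unloaded
    CREATE TABLE table_1_read (id NUMBER)
      ORGANIZATION EXTERNAL (
        TYPE ORACLE_DATAPUMP
        DEFAULT DIRECTORY test_dir
        LOCATION ('table_1_unload.dmp'))
      REJECT LIMIT UNLIMITED;
    SELECT * FROM table_1_read;
    exit
    EOF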

  • Expdp dump file

    Hello experts,
    can we compress the expdp dump file?
    Actually, I have tried to do so: I copied the expdp dump file to a Windows machine and then used the WinRAR tool to compress it.
    When I uncompressed the dump on another Windows machine, it gave an error that the file is corrupted.
    thanks

    I came across this problem, or one like it, on Windows Server 2003 (32-bit). My research indicated that older versions of Winzip (I was using 8.2) do not have support for large files (greater than 2GB). The solution was to install the newest version (at least 11.2) which effectively eliminates zip file size restrictions.
    See also:
    http://www.winzip.com/prodpagecl.htm
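    If a newer WinZip is not an option, one hedged server-side alternative (assuming the dump is created on a Unix/Linux host with enough free space) is to compress, and if necessary split, the file before it ever reaches a Windows tool:
    gzip -9 abc.dmp                                # produces abc.dmp.gz
    split -b 1024m abc.dmp.gz abc.dmp.gz.part_     # optional: 1 GB pieces for the transfer
    # reassemble on the target with: cat abc.dmp.gz.part_* > abc.dmp.gz && gunzip abc.dmp.gz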

  • Create dump file with datapump, with read right for everybody

    Hello,
    I have a problem under Linux: I am creating dump files with Data Pump.
    Those dump files are owned by the dba group, with no read access for users not in the dba group.
    Is there a way to make the Data Pump utility create dump files with read access for any user?
    Franck

    Unlike "exp", when using "expdp", the dumpfile is created by the server process. The server process is forked from the database instance. It inherits the umask settings that are present when the database instance is started.
    (Therefore, the only way to change the permissions would be to change the umask for the oracle database server id and restart the database instance --- which is NOT what I would recommend).
    umask is set so that all database files created (e.g. with CREATE TABLESPACE or ALTER TABLESPACE ADD DATAFILE) are created with "secure" permissions preventing others from overwriting them -- of course, this is relevant if your database files are on FileSystem.
    Hemant K Chitale
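    A minimal sketch of the usual workaround (directory path and names are placeholders): leave the instance umask alone and open the file up from the shell script that drives the export, run as the oracle software owner so it is allowed to chmod the file:
    expdp franck DIRECTORY=dump_dir SCHEMAS=franck DUMPFILE=franck.dmp LOGFILE=franck.log
    chmod o+r /path/to/dump_dir/franck.dmp    # grant read to everyone once expdp has finished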

  • Dbms_datapump dump file permissions

    Oracle 11.2.0.2
    Using the dbms_datapump APIs I can successfully create the dump file. However, that file does not have read or write permissions for the UNIX user, so the user cannot zip and ftp the file as required, nor can it chmod the permissions. Please advise. Thanks.

    Use ACLs. For example:
    hpux > # whoami
    hpux > whoami
    oradba
    hpux > # Create directory /tmp/acl_test for datapump files
    hpux > mkdir /tmp/acl_test
    hpux > # set directory /tmp/acl_test access to rwx for owner (Unix user oradba) and no access to group and other
    hpux > chmod 700 /tmp/acl_test
    hpux > # set ACL access to directory /tmp/acl_test file itself to rwx for user oracle
    hpux > setacl  -m u:oracle:rwx /tmp/acl_test
    hpux > # set ACL access to any file created in directory /tmp/acl_test to rwx for user oracle
    hpux > setacl  -m d:u:oracle:rwx /tmp/acl_test
    hpux > # set ACL access to any file created in directory /tmp/acl_test to rwx for user oradba
    hpux > setacl  -m d:u:oradba:rwx /tmp/acl_test
    hpux > # show directory /tmp/acl_test ACLs
    hpux > getacl /tmp/acl_test
    # file: /tmp/acl_test
    # owner: oradba
    # group: appdba
    user::rwx
    user:oracle:rwx
    group::---
    class:rwx
    other:---
    default:user:oracle:rwx
    default:user:oradba:rwx
    hpux > # create Oracle directory object
    hpux > sqlplus / << EOF
    create directory acl_test as '/tmp/acl_test';
    exit
    EOF
    SQL*Plus: Release 11.1.0.7.0 - Production on Mon Aug 22 15:27:56 2011
    Copyright (c) 1982, 2008, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL>
    Directory created.
    SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
    - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    hpux > # datapump export
    hpux > expdp / JOB_NAME=acl_test TABLES=T_INDEX_USAGE PARALLEL=1 COMPRESSION=ALL REUSE_DUMPFILES=Y DIRECTORY=ACL_TEST dumpfile=acl_test_%U.dmp logfile=acl_test.log
    Export: Release 11.1.0.7.0 - 64bit Production on Monday, 22 August, 2011 15:28:07
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
    Production
    With the Partitioning, OLAP and Data Mining options
    Starting "OPS$ORADBA"."ACL_TEST":  /******** JOB_NAME=acl_test TABLES=T_INDEX_USAGE PARALLEL=1 COMPRESSION=ALL REUSE_DUMPFILES=Y DIRECTORY=ACL_TEST dumpfile=acl_test_%U.dmp logfile=acl_test.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "OPS$ORADBA"."T_INDEX_USAGE"                    0 KB       0 rows
    Master table "OPS$ORADBA"."ACL_TEST" successfully loaded/unloaded
    Dump file set for OPS$ORADBA.ACL_TEST is:
      /tmp/acl_test/acl_test_01.dmp
    Job "OPS$ORADBA"."ACL_TEST" successfully completed at 15:28:40
    hpux > # directory /tmp/acl_test listing
    hpux > ls -l /tmp/acl_test
    total 64
    -rw-r-----+  1 oracle     dba           1036 Aug 22 15:28 acl_test.log
    -rw-r-----+  1 oracle     dba          20480 Aug 22 15:28 acl_test_01.dmp
    hpux > # copy datapump files (to prove we can read them)
    hpux > cp /tmp/acl_test/acl_test_01.dmp /tmp/acl_test/acl_test_01_copy.dmp
    hpux > cp /tmp/acl_test/acl_test.log /tmp/acl_test/acl_test_copy.log
    hpux > # delete files
    hpux > rm /tmp/acl_test/*
    /tmp/acl_test/acl_test.log: 640+ mode ? (y/n) y
    /tmp/acl_test/acl_test_01.dmp: 640+ mode ? (y/n) y
    hpux > # delete directory
    hpux > rmdir /tmp/acl_test
    hpux >
    But based on "Oracle does have rights for the directory" and "UNIX user does have rights for the directory too", all you need is an ACL for the non-oracle UNIX user:
    setacl -m d:u:unix_user:rwx directory_path
    SY.

  • Encountering ORA-39000: bad dump file specification while using datapump

    Hello,
    I am trying to use Data Pump to take an export of a schema (metadata only). However, I want the dump file to be named after the date and time of the export.
    When I use the following command, the job runs perfectly.
    expdp system@***** dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`.dmp directory=EXP_DP logfile=expdp-`date '+%d%m%Y_%H%M%S'`.log SCHEMAS=MARTMGR CONTENT=METADATA_ONLY
    However, I want to run the export using a parfile, but if I use the below parfile, I am encountering the following errors.
    USERID=system@*****
    DIRECTORY=EXP_DP
    SCHEMAS=TEST
    dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`
    LOGFILE=MARTMGR.log
    CONTENT=METADATA_ONLY
    expdp parfile=martmgr.par
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning option
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-39157: error appending extension to file "expdp-`date '+%d%m%Y_%H%M%S'`"
    ORA-07225: sldext: translation error, unable to expand file name.
    Additional information: 7217
    How do I append the date and time to the dump file name when using a parfile?
    Thanks
    Rohit

    I got the below error while using the dumpfile parameter as dumpfile=dump_$export_date.dmp
    Export: Release 11.2.0.2.0 - Production on Thu Feb 7 16:46:22 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning option
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-39157: error appending extension to file "dump_$export_date.dmp"
    ORA-07225: sldext: translation error, unable to expand file name.
    Additional information: 7217
    Script i used is as follows
    export ORACLE_HOME=/orcl01/app/oracle/product/11.2.0.2
    export ORACLE_SID=$1
    export PATH=$PATH:$ORACLE_HOME/bin
    echo $ORACLE_HOME
    export export_date=`date '+%d%m%Y_%H%M%S'`
    echo $export_date
    expdp parfile=$2
    parfile is
    DIRECTORY=EXP_DP
    SCHEMAS=MARTMGR
    dumpfile=dump_$export_date.dmp
    LOGFILE=MARTMGR.log
    CONTENT=METADATA_ONLY
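    The underlying problem is that expdp reads a parfile literally: backquotes and shell variables are only expanded by the shell on the command line, never inside the parfile, which is why both attempts above fail with ORA-39157. A sketch of one workaround, generating the parfile from the wrapper script so the timestamp is already resolved before expdp sees it (paths are placeholders):
    #!/bin/bash
    export_date=`date '+%d%m%Y_%H%M%S'`
    cat > /tmp/martmgr_$export_date.par << EOF
    DIRECTORY=EXP_DP
    SCHEMAS=MARTMGR
    DUMPFILE=expdp-$export_date.dmp
    LOGFILE=expdp-$export_date.log
    CONTENT=METADATA_ONLY
    EOF
    expdp system@***** parfile=/tmp/martmgr_$export_date.par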

  • Datapump dump file view

    Hi Experts,
    I have some old Data Pump expdp dump files and wanted to check their contents prior to importing.
    Do you think it's possible to check the contents of a dump file with any impdp or expdp command?
    Thanks
    Edited by: Nadvi on Nov 22, 2010 1:00 PM

    Nadvi wrote:
    Hi,
    I found the option for the original import, i.e. IMP show=Y, but that doesn't work with IMPDP.
    Did anyone try viewing the contents of a Data Pump EXPDP dump file before?
    Thanks.
    For impdp you can use the SQLFILE parameter for your purpose; please refer to
    http://download.oracle.com/docs/cd/B13789_01/server.101/b10825/dp_import.htm
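    A rough example of that (directory object and file names are placeholders): with SQLFILE, impdp writes the DDL it would have run into a script instead of executing it, so nothing is imported and you can browse what the dump contains.
    impdp system DIRECTORY=dump_dir DUMPFILE=old_export.dmp FULL=Y SQLFILE=dump_contents.sql
    # dump_contents.sql is created in dump_dir and lists the objects and DDL held in the dump file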

  • How to find whether a dump file was taken using normal exp or expdp

    Hi All,
    How can I find out whether a dump file was taken using the conventional exp or the expdp utility?
    OS: HPUX
    DB: 10.2.0.4

    Hi,
    I go with Helios's reply. We cannot just predict whether it was taken by expdp or exp.
    Since your DB version is 10, both could be possible.
    The simplest way would be: just run imp, and if it throws an error, then it was created through expdp,
    because a dump from expdp cannot be used with imp and vice versa. So that could help you find out.
    Otherwise, try to get the syntax by which the dump was created.
    If you have any doubts , wait for experts to reply.
    HTH
    Kk
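    Another quick, unofficial check is to look at the start of the file; this relies on typical dump layouts rather than anything documented, so treat it only as a hint. A conventional exp dump usually starts with a readable header such as EXPORT:V10.02.01, whereas a Data Pump dump is binary but normally contains the Data Pump job / master table name (e.g. SYS_EXPORT_SCHEMA_01).
    head -c 4096 /path/to/file.dmp | strings    # look for an EXPORT:Vxx.xx.xx header vs. a SYS_EXPORT_% name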

  • Datapump exp with network_link generated crashed dump file

    Hi Experts,
    I am using an Oracle 10.2.0.4 DB on 32-bit Windows. I am trying to export the full source DB into the target DB with Data Pump over a network link.
    But the dump file I got shows up as a crash dump file type in Windows Explorer.
    The syntax is: expdp system/test@targetDB FULL=y DIRECTORY=dataload NETWORK_LINK=sale DUMPFILE=salefull091115.dmp LOGFILE=salefulllog091115.log
    What kind of issue causes this?
    Thanks
    Jim
    Edited by: user589812 on Nov 15, 2009 3:48 AM

    Pl post the last 50 lines of the export log file, along with any errors recorded in the Windows system log.
    See your duplicate post here - data pump export network_link with dblink issue
    Srini

  • Can DataPump Export Compress the Dump file?

    hi everyone,
    This is 10g.
    I use the expdp utility to export 1 schema at month-end. The size of the .dmp file is 9 GB.
    Can expdp perform compression to reduce the size of the dump file?
    Thanks, John

    Thanks Srini and Dean.
    I don't have 11g, so I could only benefit from 10g's ability to compress metadata.
    My monthly export is for 1 user schema. I assume the schema contains both data and metadata, which means that if I requested compression, I would get a smaller dump file. Is that a good assumption? Would I still get only 1 dump file, or do you get more than one file when compression is used?
    The 10g documentation I have read about expdp does not mention how to request compression.
    Thanks, John
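    For what it's worth, the 10g syntax is COMPRESSION=METADATA_ONLY (compressing the table data inside the dump file only arrived in 11g), and it still produces a single dump file unless you use a %U template or FILESIZE. A sketch with placeholder names:
    expdp john SCHEMAS=myschema DIRECTORY=dump_dir DUMPFILE=myschema.dmp LOGFILE=myschema.log COMPRESSION=METADATA_ONLY
    # for a 9 GB dump most of the size is table data, so gzip-ing the finished file will usually save far more than metadata compression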

  • Some objects didn't come from dump file

    Peace be upon you,
    Hi guys, this is my first question here in this forum and I will appreciate your answers.
    When I imported a dump file into my database via the impdp command, I found that some objects
    did not come across, like triggers and some functions and procedures. I don't know what the reason is. Could anybody help me?
    By the way, my database is 10g and my operating system is Windows XP.
    This is the command I ran: " impdp user_name/password@host_string directory=directory_name dumpfile=name.dmp logfile=name.log full=y "
    thank you and God bless you :)

    Pl post exact 10g version, along with the complete impdp log file. Are the objects that are missing definitely in the dump file ? Can you post the export command used and the expdp log file ?
    HTH
    Srini

  • DUMP FILES UNABLE TO DELETE IN LINUX

    Hi
    Our company currently runs the databases on RAC (Oracle 11gR2). We generally take a backup of the schemas every day using Data Pump. My issue here is that I am unable to delete the dump files older than 3 days using the below script:
    00 14 * * * find /nbds2/orabkp/*.dmp.gz -mtime +3 -exec rm -f {} \;
    but I am able to delete the files using the below command:
    rm -f abcb.dmp.gz. I verified the file permissions etc. on the dump file that is generated, and everything looks fine.
    Even I tried using the below script
    #################cut here - rmdumpfiles #############
    #!/bin/bash
    echo "starting at `date`"
    for i in `find /nbds2/orabkp -mtime +3 -name "*.dmp.gz" `
    do
    echo "deleting $i "
    rm -f $i
    done
    echo "done at `date` "
    chmod 750 rmdumpfiles
    Crontab entry:
    00 14 * * * /home/oracle/rmdumpfiles >> /home/oracle/rmdump.log
    But the files didn't get deleted and below is the information I got from the log file
    "starting at Mon Feb 18 17:59:01 PST 2013
    done at Mon Feb 18 17:59:01 PST 2013"
    Can someone help me please
    Thank you
    Karren

    Hi Dude,
    Thank you very much for your quick response. Actually I had to delete a lot of files manually due to a storage issue (I am left with only the dump files from the 17th, 18th, and 19th). But my results are weird, and I am not able to find the issue. Can you help me?
    stat /nbds2/orabkp/FULL_EXP_FSDEV_expdp_Feb_17_2013.dmp.gz
    File: `/nbds2/orabkp/FULL_EXP_FSDEV_expdp_Feb_17_2013.dmp.gz'
    Size: 7013910756 Blocks: 13753000 IO Block: 65536 regular file
    Device: 16h/22d Inode: 21604 Links: 1
    Access: (0640/-rw-r-----) Uid: ( 1001/ oracle) Gid: ( 1001/oinstall)
    Access: 2013-02-17 18:00:04.000000000 -0800
    Modify: 2013-02-17 18:21:11.000000000 -0800
    Change: 2013-02-17 18:26:41.988556000 -0800
    find /nbds2/orabkp/* -iname "*.dmp.gz" -mtime +3
    Ans: None
    find /nbds2/orabkp/* -iname "*.dmp.gz" -mtime +2
    Ans: None
    find /nbds2/orabkp/* -iname "*.dmp.gz" -mtime +1
    Ans: /nbds2/orabkp/FULL_EXP_FSDEV_expdp_Feb_17_2013.dmp.gz
    ls -l /nbds2/orabkp/FULL_EXP_FSDEV_*
    -rw-r----- 1 oracle oinstall 7013910756 Feb 17 18:21 /nbds2/orabkp/FULL_EXP_FSDEV_expdp_Feb_17_2013.dmp.gz
    -rw-r--r-- 1 oracle asmadmin 75805 Feb 17 18:21 /nbds2/orabkp/FULL_EXP_FSDEV_expdp_Feb_17_2013.log
    -rw-r--r-- 1 oracle asmadmin 75805 Feb 18 18:21 /nbds2/orabkp/FULL_EXP_FSDEV_expdp_Feb_18_2013.log
    -rw-r----- 1 oracle oinstall 7014081297 Feb 19 18:20 /nbds2/orabkp/FULL_EXP_FSDEV_expdp_Feb_19_2013.dmp.gz
    -rw-r--r-- 1 oracle asmadmin 75805 Feb 19 18:20 /nbds2/orabkp/FULL_EXP_FSDEV_expdp_Feb_19_2013.log
    date
    Ans: Wed Feb 20 09:27:02 PST 2013
    Now I am able to delete the files; thank you very much for your help. This is the first time I am having this issue and I am not sure of the reason behind it. "date" and "mtime" seem to be conflicting, and that is causing the issue. Can you please enlighten me regarding this?
    Edited by: 988802 on Feb 20, 2013 9:42 AM
    Edited by: 988802 on Feb 20, 2013 10:04 AM
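    For what it's worth, this looks like find's rounding rather than a real conflict between "date" and "mtime": -mtime +3 means "more than 3 whole 24-hour periods old", and the fractional part of the age is discarded first. On Feb 20 09:27 the Feb 17 18:21 file was roughly 2.6 days old, which truncates to 2, so it matched -mtime +1 but not +2 or +3. If the cut-off needs to be in hours rather than whole days, -mmin is more predictable:
    find /nbds2/orabkp -name "*.dmp.gz" -mtime +3                # only files at least 4 full days old after truncation
    find /nbds2/orabkp -name "*.dmp.gz" -mmin +$((3*24*60))      # anything strictly older than 72 hours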

  • Error while restoring a dump file

    Hello,
    While importing a dump file I receive the error below. Even when I try to restore this table separately, I get the same issue.
    Can anyone comment on this?
    ORA-31693: Table data object "A_DDAPTE1"."FQQQ_CCCC001" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00217: invalid character 23 (U+0017)
    Error at line 1
    Thanks,
    Vinodh

    Yes, it looks similar; the client recently upgraded from 10gR2 to 11gR2.
    In fact I found a fix for this similar issue, but even with this fix installed I had the same problem.
    Patch: 11877267 – EXPDP/IMPDP ARE NOT ABLE TO HANDLE HEXADECIMAL CHARACTER REFERENCES IN XMLTYPE
    Thanks,
    Vinodh

  • Corrupt dump file, foreign keys not generated on import

    Hi,
    In our refresh job, we get the dump file from production and use it to refresh the schema in UAT.
    The dump file we thus got from production seems to be corrupt, as it is giving the error "IMP-00008: unrecognized statement in the export file".
    But the error we are getting is only for a few users, while other users are being imported fine. For the users which got imported fine, the foreign keys are missing!
    There is no "error" in the import log regarding the foreign keys.
    We tried to get the indexfile and it has all "Create Table" statements, along with indexes, primary key, check constraints etc .... only the FOREIGN KEYS are missing.
    How can this be explained ?
    With Regards,
    Shalini.

    Some suggestions:
    1. Check Note:111384.1 if you are trying to import from the tape directly.
    2. You can set a bigger buffer size and check whether the error reappears.
    3. If you are trying to perform a full import and the error always seems to happen when it tries to import the 56th user, set the init.ora parameter license_max_users (which is set to 55) to a bigger value.
    These are apart from the main reason for this error, i.e., corrupted export file or Import internal bug.
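    If the foreign key DDL is actually present in the dump (with a corrupt export file it may simply be missing), one hedged way to retrofit just the constraints for the users that imported cleanly is a second pass of imp with rows switched off; user and file names below are placeholders:
    imp system/password FILE=prod_export.dmp FROMUSER=app_user TOUSER=app_user ROWS=N IGNORE=Y CONSTRAINTS=Y LOG=constraints_retry.log
    # IGNORE=Y lets imp skip the "table already exists" errors and carry on to apply the constraints; ROWS=N avoids reloading data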
