Compressed dump file

Hi,
Is there any problem if we compress the export (.dmp) file with gzip on Linux, or with WinZip or 7-Zip on Windows machines?
Is the unzipped file still OK for an import?
I think I heard once that this is not recommended, but I'm not sure.
Thanks!

Any lossless archive tool (i.e. zip, 7-Zip, gzip, rar, etc.) is fine. Those tools are designed to return the file to its exact original state. There is nothing special about an export file that would cause them to corrupt it, and any corruption should be reported as a bug to the tool's developers.
Windows directory compression does not compress as tightly as these tools, because it is designed to allow random access to the file even in its compressed state. You will get smaller files if you use the dedicated tools.
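As a quick sanity check you can verify the round trip yourself; a minimal sketch (the file name is a placeholder):
cksum my_export.dmp          # note the checksum of the original file
gzip my_export.dmp           # produces my_export.dmp.gz
gzip -t my_export.dmp.gz     # test the archive's integrity
gunzip my_export.dmp.gz      # restore the original .dmp
cksum my_export.dmp          # the checksum matches, so the file is byte-identical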

Similar Messages

  • Compressed dump file while export on Windows!

    Hi All
    Could someone suggest how we can produce a compressed dump file while exporting in a Windows environment with Oracle 9.2.0.6? Please specify the exact syntax for how to proceed.
    Thanks
    Bala.

    I don't think the exp tool can compress the export file it is creating (the COMPRESS parameter has nothing to do with the export file; it controls the way the database objects are going to be created in the target database).
    If you run the export under Unix, you can use a named pipe to compress the export file during the export with Unix commands (compress or gzip, for example); see the sketch below. I don't know how to do something similar under Windows, and I have some doubts about this possibility.
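    A minimal sketch of the named-pipe approach on Unix (the credentials, file names and export mode are placeholders):
    mknod exp_pipe p                         # create a named pipe
    gzip < exp_pipe > full_exp.dmp.gz &      # compress whatever exp writes into the pipe
    exp system/password FULL=Y FILE=exp_pipe LOG=full_exp.log
    rm exp_pipe                              # remove the pipe afterwards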

  • Can DataPump Export Compress the Dump file?

    hi everyone,
    This is 10g.
    I use the expdp utility to export one schema at month-end. The size of the .dmp file is 9 GB.
    Can expdp perform compression to reduce the size of the dump file?
    Thanks, John

    Thanks Srini and Dean.
    I don't have 11g, so I can only benefit from 10g's ability to compress metadata.
    My monthly export is for one user schema. I assume the schema contains both data and metadata, which means that if I requested compression I would get a smaller dump file. Is that a good assumption? Would I still get only one dump file, or do you get more than one file when compression is used?
    The 10g documentation I have read about expdp does not mention how to request compression.
    Thanks, John
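    For reference, on 10gR2 only the metadata in the dump can be compressed, and the export is still written as a single dump file unless you ask for more. A minimal sketch (the credentials, schema, directory and file names are placeholders):
    expdp system/password SCHEMAS=myschema DIRECTORY=dpump_dir1 DUMPFILE=myschema_monthly.dmp COMPRESSION=METADATA_ONLY LOGFILE=myschema_monthly.log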

  • Compress a dump file

    Hi All
    I exported a schema in Oracle using expdp and put it on disk as a dump file (a_exp_schema.dmp); its size is 100 GB.
    After that I decided to compress this dump file.
    Can anyone help me with that?
    Oracle : 11gR2
    OS : Solaris

    COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}
    Also see this thread, mentioned before:
    how to compress dmp file in expdp..
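    Since the 100 GB dump already exists, there are two options; a sketch (the credentials, schema and directory names are placeholders, and COMPRESSION=ALL on 11g generally requires the Advanced Compression option):
    # 1) compress the file that is already on disk
    gzip a_exp_schema.dmp                    # produces a_exp_schema.dmp.gz
    # 2) or re-run the export with Data Pump compression
    expdp system/password SCHEMAS=myschema DIRECTORY=dp_dir DUMPFILE=a_exp_schema.dmp COMPRESSION=ALL REUSE_DUMPFILES=Y LOGFILE=a_exp_schema.log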

  • Compress large oracle dump file

    Hi all,
    I need to bring a large export dump file across the network, which takes a very long time. Is there any way within Oracle 10g to compress this large dump file?
    Thanks in Advance.

    You can use any compression tool to zip your dump file: gzip, bzip2, etc.
    Oracle dump files usually yield a pretty good compression ratio, most of the time close to 90%.
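    One way to avoid staging a second copy is to compress on the fly while copying over the network; a sketch (the host and path names are placeholders):
    gzip -c big_export.dmp | ssh user@remotehost 'cat > /backup/big_export.dmp.gz'
    # or compress first and then copy
    gzip big_export.dmp && scp big_export.dmp.gz user@remotehost:/backup/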

  • Dbms_datapump, dump file permissions

    Oracle 11.2.0.2
    Using the dbms_datapump APIs I can successfully create the dump file. However, that file does not grant the UNIX user read or write permissions, so the UNIX user cannot zip and ftp the file as required, nor can it chmod the permissions. Please advise. Thanks.

    Use ACLs. For example:
    hpux > # whoami
    hpux > whoami
    oradba
    hpux > # Create directory /tmp/acl_test for datapump files
    hpux > mkdir /tmp/acl_test
    hpux > # set directory /tmp/acl_test access to rwx for owner (Unix user oradba) and no access to group and other
    hpux > chmod 700 /tmp/acl_test
    hpux > # set ACL access to directory /tmp/acl_test file itself to rwx for user oracle
    hpux > setacl  -m u:oracle:rwx /tmp/acl_test
    hpux > # set ACL access to any file created in directory /tmp/acl_test to rwx for user oracle
    hpux > setacl  -m d:u:oracle:rwx /tmp/acl_test
    hpux > # set ACL access to any file created in directory /tmp/acl_test to rwx for user oradba
    hpux > setacl  -m d:u:oradba:rwx /tmp/acl_test
    hpux > # show directory /tmp/acl_test ACLs
    hpux > getacl /tmp/acl_test
    # file: /tmp/acl_test
    # owner: oradba
    # group: appdba
    user::rwx
    user:oracle:rwx
    group::---
    class:rwx
    other:---
    default:user:oracle:rwx
    default:user:oradba:rwx
    hpux > # create Oracle directory object
    hpux > sqlplus / << EOF
    create directory acl_test as '/tmp/acl_test';
    exit
    EOF
    SQL*Plus: Release 11.1.0.7.0 - Production on Mon Aug 22 15:27:56 2011
    Copyright (c) 1982, 2008, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL>
    Directory created.
    SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    hpux > # datapump export
    hpux > expdp / JOB_NAME=acl_test TABLES=T_INDEX_USAGE PARALLEL=1 COMPRESSION=ALL REUSE_DUMPFILES=Y DIRECTORY=ACL_TEST dumpfile=acl_test_%U.dmp logfile=acl_test.log
    Export: Release 11.1.0.7.0 - 64bit Production on Monday, 22 August, 2011 15:28:07
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "OPS$ORADBA"."ACL_TEST":  /******** JOB_NAME=acl_test TABLES=T_INDEX_USAGE PARALLEL=1 COMPRESSION=ALL REUSE_DUMPFILES=Y DIRECTORY=ACL_TEST dumpfile=acl_test_%U.dmp logfile=acl_test.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "OPS$ORADBA"."T_INDEX_USAGE"                    0 KB       0 rows
    Master table "OPS$ORADBA"."ACL_TEST" successfully loaded/unloaded
    Dump file set for OPS$ORADBA.ACL_TEST is:
      /tmp/acl_test/acl_test_01.dmp
    Job "OPS$ORADBA"."ACL_TEST" successfully completed at 15:28:40
    hpux > # directory /tmp/acl_test listing
    hpux > ls -l /tmp/acl_test
    total 64
    -rw-r-----+  1 oracle     dba           1036 Aug 22 15:28 acl_test.log
    -rw-r-----+  1 oracle     dba          20480 Aug 22 15:28 acl_test_01.dmp
    hpux > # copy datapump files (to prove we can read them)
    hpux > cp /tmp/acl_test/acl_test_01.dmp /tmp/acl_test/acl_test_01_copy.dmp
    hpux > cp /tmp/acl_test/acl_test.log /tmp/acl_test/acl_test_copy.log
    hpux > # delete files
    hpux > rm /tmp/acl_test/*
    /tmp/acl_test/acl_test.log: 640+ mode ? (y/n) y
    /tmp/acl_test/acl_test_01.dmp: 640+ mode ? (y/n) y
    hpux > # delete directory
    hpux > rmdir /tmp/acl_test
    But based on "Oracle does have rights for the directory" and "UNIX user does have rights for the directory too", all you need is an ACL for the non-oracle UNIX user:
    setacl -m d:u:unix_user:rwx directory_path
    SY.

  • How to compress a file expdp?

    Hi.
    I am testing Data Pump export, and when I try to compress the dump file it doesn't work.
    The Oracle Database is 10G R2.
    I have read some documents and posts about this, and the "gurus" say: migrate to 11g and this is solved.
    I don't think I will migrate to 11g soon, and I need to compress the dump file.
    Does somebody know how to do that?
    Regards,
    Milton

    Hi, you can only compress the metadata; please review the following link.
    Data Pump Export
    If you want to compress the entire file, you must use a compression utility (compress, gzip, etc.) after the export command has finished.
    Regards.
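    A minimal sketch of the post-export approach on 10gR2 (the credentials, schema and file names are placeholders); note that the file has to be uncompressed again before impdp can read it:
    expdp scott/tiger SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott.dmp COMPRESSION=METADATA_ONLY LOGFILE=scott_exp.log
    gzip scott.dmp                  # compress for storage or transfer
    # ...later, on the import side...
    gunzip scott.dmp.gz             # impdp cannot read a gzipped dump directly
    impdp scott/tiger SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott_imp.log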

  • Dump file size

    Hi,
    On 10gR2, on AIX 6.1, I use the following expdp command:
    expdp system@DB SCHEMAS=USER1 DIRECTORY=dpump_dir1 DUMPFILE=exp_USER LOGFILE=log_expdpuser
    which results in a 3 GB dump file. Is there any option to decrease the dump file size?
    I saw in documentation :
    COMPRESSION=(METADATA_ONLY | NONE)
    but it seems to me that COMPRESSION=METADATA_ONLY is already used since it is the default value, and COMPRESSION=NONE cannot reduce the size.
    Thank you.

    You can use the FILESIZE parameter and specify multiple dump files; see the sketch below.
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm
    Otherwise you can use RMAN Compressed backup sets.
    Thanks
    Edited by: Cj on Dec 28, 2010 2:15 AM
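    A sketch of splitting the export into fixed-size pieces with FILESIZE (the schema and directory names are taken from the question; the 1G piece size is illustrative, and FILESIZE by itself does not shrink the total volume):
    expdp system@DB SCHEMAS=USER1 DIRECTORY=dpump_dir1 DUMPFILE=exp_USER_%U.dmp FILESIZE=1G LOGFILE=log_expdpuser.log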

  • Import issue - split and compressed dump

    Hi,
    I received a 15 GB export dump file from a site as below; it was split and compressed:
    1. xaa 4gb
    2. xab 4gb
    3. xac 4gb
    4. xad 3gb
    I have to import these dump files here on a Unix server. I found some documentation on importing a split and compressed dump,
    and I followed the steps below:
    1. copy all 4 files into a directory
    2. then i used commands
    rm -f import_pipe
    mknod import_pipe p
    chmod 666 import_pipe
    (the import_pipe file is created in the current directory)
    nohup cat xaa xab xac xad | uncompress - > import_pipe & (a process number like 23901 is created. Do we need to wait until the background process completes before giving the import command?)
    Then I run:
    imp userid=<connection string> file=import_pipe full=yes ignore=yes log=dumplog.txt
    Then it shows the IMP-00009 error.
    Please help me resolve this issue.
    Thanks in advance.

    Please post details of the OS and database versions of the source and target. You will have to contact the source to determine how these files were created. It is quite possible that they were created using the FILESIZE parameter (http://docs.oracle.com/cd/E11882_01/server.112/e22490/original_export.htm#autoId25), in which case the import process can read these multiple files without you having to manipulate them further.
    HTH
    Srini
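    If the pieces turn out to be multi-part dumps created with FILESIZE rather than OS-level split files, original imp can read them directly without any reassembly; a sketch (the file names are assumptions):
    imp userid=<connection string> file=(exp_01.dmp,exp_02.dmp,exp_03.dmp,exp_04.dmp) full=yes ignore=yes log=dumplog.txt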

  • "EXP" For Dump File

    Sorry, but why does Oracle 10g XE respond with an error when I use "exp" or "export" to export a dump file? I tried other commands found on the internet, but Oracle always responds that the command does not exist.
    I looked at the help list and there is nothing about this command.
    (Sorry for my poor english ^^)

    I want to know whether we can compress a Data Pump dump with this method.
    No. See MOS Doc ID 276521.1. If you are on 10g you have to compress the file once it has been generated.
    On 11g you can use COMPRESSION=ALL (no need for pipes).

  • Export larger dump file  in small drive

    Hello Everybody,
    I want to take an export of a 100 GB database (the dump file will be approximately 50 to 60 GB) to a drive with 40 GB of space. Is that possible?
    Thanks in Advance
    Regards
    Hamid
    Message was edited by:
    Hamid

    No version, no platform... Why? Too difficult? Strain on your fingers?
    The answer is platform and version dependent!
    Anyway: on 9i and before, on Windows, set the compression attribute of the directory or drive you plan to export to, and make sure this attribute is inherited.
    On Unix: export to a pipe and compress the input of this pipe (see the named-pipe sketch earlier in this thread).
    Scripts are floating around on this forum and everywhere on the Internet (also on Metalink); everyone should be able to find them with little effort.
    On 10g: expdp can compress the metadata in its exports.
    Sybrand Bakker
    Senior Oracle DBA

  • Expdp dump file

    hello experts ,
    Can we compress the expdp dump file?
    Actually, I have tried to do so: I copied the expdp dump file to a Windows machine and then used the WinRAR tool to compress it.
    When I uncompressed the expdp dump on another Windows machine, it gave an error that the file is corrupted.
    thanks

    I came across this problem, or one like it, on Windows Server 2003 (32-bit). My research indicated that older versions of Winzip (I was using 8.2) do not have support for large files (greater than 2GB). The solution was to install the newest version (at least 11.2) which effectively eliminates zip file size restrictions.
    See also:
    http://www.winzip.com/prodpagecl.htm

  • Error while importing a dump file in Oracle 10g R1

    Hi all,
    While trying to import a schema using Data Pump, I am facing the following issue:
    UDI-00018 - Import utility version cannot be more recent than the Data Pump server.
    Following is the version information of the source and target DB and the utilities :
    Source DB server : 10.1.0.2.0
    Export utility : 10.1.0.2.0
    Import utility : 10.1.0.2.0
    Target DB server : 10.1.0.2.0
    Export utility : 10.2.0.1.0
    Import utility : 10.2.0.1.0
    I can figure out the cause for the problem, but don't know how to resolve it.
    Any help will be appreciated.
    Thanks in advance.
    Gitika Khurana

    How did you get the DMP file created, and how are you trying to import the dump file? Could you post the commands you're using, please?

  • How to import external table, which exist in export dump file.

    My export dump file has one external table. When I started importing it into my development instance, I got the error "ORA-00911: invalid character".
    The original definition of the external table is given below:
    CREATE TABLE EXT_TABLE_EV02_PRICEMARTDATA
    EGORDERNUMBER VARCHAR2(255 BYTE),
    EGINVOICENUMBER VARCHAR2(255 BYTE),
    EGLINEITEMNUMBER VARCHAR2(255 BYTE),
    EGUID VARCHAR2(255 BYTE),
    EGBRAND VARCHAR2(255 BYTE),
    EGPRODUCTLINE VARCHAR2(255 BYTE),
    EGPRODUCTGROUP VARCHAR2(255 BYTE),
    EGPRODUCTSUBGROUP VARCHAR2(255 BYTE),
    EGMARKETCLASS VARCHAR2(255 BYTE),
    EGSKU VARCHAR2(255 BYTE),
    EGDISCOUNTGROUP VARCHAR2(255 BYTE),
    EGREGION VARCHAR2(255 BYTE),
    EGAREA VARCHAR2(255 BYTE),
    EGSALESREP VARCHAR2(255 BYTE),
    EGDISTRIBUTORCODE VARCHAR2(255 BYTE),
    EGDISTRIBUTOR VARCHAR2(255 BYTE),
    EGECMTIER VARCHAR2(255 BYTE),
    EGECM VARCHAR2(255 BYTE),
    EGSOLATIER VARCHAR2(255 BYTE),
    EGSOLA VARCHAR2(255 BYTE),
    EGTRANSACTIONTYPE VARCHAR2(255 BYTE),
    EGQUOTENUMBER VARCHAR2(255 BYTE),
    EGACCOUNTTYPE VARCHAR2(255 BYTE),
    EGFINANCIALENTITY VARCHAR2(255 BYTE),
    C25 VARCHAR2(255 BYTE),
    EGFINANCIALENTITYCODE VARCHAR2(255 BYTE),
    C27 VARCHAR2(255 BYTE),
    EGBUYINGGROUP VARCHAR2(255 BYTE),
    QTY NUMBER,
    EGTRXDATE DATE,
    EGLISTPRICE NUMBER,
    EGUOM NUMBER,
    EGUNITLISTPRICE NUMBER,
    EGMULTIPLIER NUMBER,
    EGUNITDISCOUNT NUMBER,
    EGCUSTOMERNETPRICE NUMBER,
    EGFREIGHTOUTBOUNDCHARGES NUMBER,
    EGMINIMUMORDERCHARGES NUMBER,
    EGRESTOCKINGCHARGES NUMBER,
    EGINVOICEPRICE NUMBER,
    EGCOMMISSIONS NUMBER,
    EGCASHDISCOUNTS NUMBER,
    EGBUYINGGROUPREBATES NUMBER,
    EGINCENTIVEREBATES NUMBER,
    EGRETURNS NUMBER,
    EGOTHERCREDITS NUMBER,
    EGCOOP NUMBER,
    EGPOCKETPRICE NUMBER,
    EGFREIGHTCOSTS NUMBER,
    EGJOURNALBILLINGCOSTS NUMBER,
    EGMINIMUMORDERCOSTS NUMBER,
    EGORDERENTRYCOSTS NUMBER,
    EGRESTOCKINGCOSTSWAREHOUSE NUMBER,
    EGRETURNSCOSTADMIN NUMBER,
    EGMATERIALCOSTS NUMBER,
    EGLABORCOSTS NUMBER,
    EGOVERHEADCOSTS NUMBER,
    EGPRICEADMINISTRATIONCOSTS NUMBER,
    EGSHORTPAYMENTCOSTS NUMBER,
    EGTERMCOSTS NUMBER,
    EGPOCKETMARGIN NUMBER,
    EGPOCKETMARGINGP NUMBER,
    EGWEIGHTEDAVEMULTIPLIER NUMBER
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
    DEFAULT DIRECTORY EV02_PRICEMARTDATA_CSV_CON
    ACCESS PARAMETERS
    LOCATION (EV02_PRICEMARTDATA_CSV_CON:'VPA.csv')
    REJECT LIMIT UNLIMITED
    NOPARALLEL
    NOMONITORING;
    While importing, when I looked at the log file, I saw it was failing to create the external table with the error "ORA-00911: invalid character".
    Can someone suggest how to import external tables?
    Help with this issue will be highly appreciated.
    Naveen

    Hi Srinath,
    When I looked at the external table's CREATE TABLE syntax in the import dump log file, it showed a few lines as below. I could not understand these special characters, and the CREATE TABLE definition is failing on a special character with ORA-00911: invalid character:
    ACCESS PARAMETERS
    LOCATION (EV02_PRICEMARTDATA_CSV_CON:'VPA.csv').
    I also looked at the CREATE TABLE DDL from TOAD. It is the same as I mentioned earlier.
    Naveen
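    For comparison, a well-formed external table definition needs parentheses around the column list, around the ORGANIZATION EXTERNAL clause, and around the ACCESS PARAMETERS; a minimal skeleton is sketched below (the access parameters are assumptions and the column list is abbreviated):
    CREATE TABLE ext_table_ev02_pricemartdata (
      egordernumber VARCHAR2(255 BYTE),
      qty           NUMBER
      -- ...remaining columns as in the original definition...
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ev02_pricemartdata_csv_con
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION (ev02_pricemartdata_csv_con:'VPA.csv')
    )
    REJECT LIMIT UNLIMITED
    NOPARALLEL;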

  • Heap dump file - Generate to a different folder

    Hello,
    When the AS Java is generating the heap dump file, is it possible to generate it to a different folder rather than the standard one: /usr/sap// ?
    Best regards,
    Gonçalo  Mouro Vaz

    Hello Gonçalo
    I don't think this is possible.
    As per SAP Note 1004255, on the first occurrence (only) of an OutOfMemoryError the JVM will write a heap dump in the /usr/sap/ directory.
    Can I ask why you would like it in a different folder?
    Is it a space issue?
    Thanks
    Kenny
