Dump file size in exp

Hi all,
I exported a database of around 900 MB in size using the parameters
exp file=full.dmp log=full.log full=y direct=y
but the exported file size is 137 MB.
Why is the file size so small? Even if it were compressed, I would expect it to be at least 70% of the database size.
Does this file hold the entire data of the database?
Please let me know.
Thank you!

Hi,
Is the 900 MB size the allocated size or the actual database size?
select (select sum(bytes)/1024/1024 from dba_data_files)+
(select sum(bytes)/1024/1024 from dba_temp_files) "Size in MB" from dual;
select sum(bytes/1024/1024) from dba_free_space;
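To see how much of the allocated space actually holds data (which is roughly what exp writes out, since indexes are exported only as DDL), a sketch:
select sum(bytes)/1024/1024 "Used MB" from dba_segments;
A 900 MB database with plenty of free space inside its datafiles can easily produce a 137 MB dump.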
Anyway, since the dump file is small, you can try importing it into a test environment first and verify that all the schema objects match.
Thanks and Regards,
Rajesh K.

Similar Messages

  • *** DUMP FILE SIZE IS LIMITED TO 0 BYTES ***

    I always get a trace file with
    *** DUMP FILE SIZE IS LIMITED TO 0 BYTES ***

    Do you have a cleardown script on your dump directory (probably the user dump directory) that deletes dump files while sessions are still running? When that happens, you get a trace in the background dump directory (from PMON, I think) containing your message.
    OTOH it might be something completely different.
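    One cheap thing to rule out is the MAX_DUMP_FILE_SIZE initialization parameter, since as far as I know the banner at the top of a trace file reflects its value; a sketch in SQL*Plus:
    show parameter max_dump_file_size
    alter system set max_dump_file_size = 'UNLIMITED';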

  • Dump file size

    Hi,
    On 10g R2 on AIX 6.1, I use the following expdp command:
    expdp system@DB SCHEMAS=USER1 DIRECTORY=dpump_dir1 DUMPFILE=exp_USER LOGFILE=log_expdpuser
    which results in a 3 GB dump file. Is there any option to decrease the dump file size?
    I saw in the documentation:
    COMPRESSION=(METADATA_ONLY | NONE)
    but it seems to me that COMPRESSION=METADATA_ONLY is already in effect, since it is the default value, and COMPRESSION=NONE cannot reduce the size.
    Thank you.

    You can use the FILESIZE parameter and specify multiple dump files.
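    For example, with the %U substitution variable so Data Pump numbers the pieces itself (the 500M value is just an example):
    expdp system@DB SCHEMAS=USER1 DIRECTORY=dpump_dir1 DUMPFILE=exp_USER_%U.dmp FILESIZE=500M LOGFILE=log_expdpuser
    Note this splits the dump into several smaller files; it does not reduce the total size.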
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm
    Otherwise you can use RMAN Compressed backup sets.
    Thanks
    Edited by: Cj on Dec 28, 2010 2:15 AM

  • Estimate the Import Time by viewing the DUMP File Size

    It's a very generic question and I know it can vary from DB to DB.
    But is there any standard way to estimate the Oracle dump import time from the size of the export dump file?

    Well, it's going to be vaguely linear, in that it probably takes about twice as long to load a 2 GB dump file as to load a 1 GB dump file, all else being equal. Of course, all things rarely are equal.
    For example,
    - Since only index DDL is stored in the dump file, dumps from systems with a lot of indexes will take longer to load than dumps from systems with few indexes, all else being equal.
    - Doing a direct path load will generally be more efficient than doing a conventional path load
    Justin

  • Heap dump file size vs heap size

    Hi,
    I'd like to clarify my doubts.
    At the moment we're analyzing Sun JVM heap dumps from Solaris platform.
    The observation is that the heap dump file is around 1.1 GB, while after loading it into SAP Memory Analyzer the statistics show "Heap: 193,656,968", which as I understand it is the size of the heap.
    After I run:
    jmap -heap <PID>
    I get the following information:
    using thread-local object allocation
    Parallel GC with 8 thread(s)
    Heap Configuration:
       MinHeapFreeRatio = 40
       MaxHeapFreeRatio = 70
       MaxHeapSize      = 3221225472 (3072.0MB)
       NewSize          = 2228224 (2.125MB)
       MaxNewSize       = 4294901760 (4095.9375MB)
       OldSize          = 1441792 (1.375MB)
       NewRatio         = 2
       SurvivorRatio    = 32
       PermSize         = 16777216 (16.0MB)
       MaxPermSize      = 67108864 (64.0MB)
    Heap Usage:
    PS Young Generation
    Eden Space:
       capacity = 288620544 (275.25MB)
       used     = 26593352 (25.36139678955078MB)
       free     = 262027192 (249.88860321044922MB)
       9.213949787302736% used
    From Space:
       capacity = 2555904 (2.4375MB)
       used     = 467176 (0.44553375244140625MB)
       free     = 2088728 (1.9919662475585938MB)
       18.27830779246795% used
    To Space:
       capacity = 2490368 (2.375MB)
       used     = 0 (0.0MB)
       free     = 2490368 (2.375MB)
       0.0% used
    PS Old Generation
       capacity = 1568669696 (1496.0MB)
       used     = 1101274224 (1050.2569427490234MB)
       free     = 467395472 (445.74305725097656MB)
       70.20434109284916% used
    PS Perm Generation
       capacity = 67108864 (64.0MB)
       used     = 40103200 (38.245391845703125MB)
       free     = 27005664 (25.754608154296875MB)
       59.75842475891113% used
    So I'm just wondering what this "Heap" value shown in the Statistic Information field of SAP Memory Analyzer actually is.
    When I go to the Dominator Tree view and look at the Retained Heap column, the values roughly sum up to 193,656,968.
    Could someone shed some more light on this?
    thanks
    Michal

    Hi Michal,
    That looks indeed very odd. First, let me ask which version you are using. We had a problem in the past where classes loaded by the system class loader were not marked as garbage collection roots and hence were removed. This problem is fixed in the current version (1.1). If it is version 1.1, then I would love to have a look at the heap dump and find out if it is us.
    Having said that, this is what we do: After parsing the heap dump, we remove objects which are not reachable from garbage collection roots. This is necessary, because the heap dump can contain garbage. For example, the mark-sweep-compact of the old/perm generation leaves some dead space in the form of int arrays or java.lang.Object to win time during the compacting phase: by leaving behind dead objects, not every live object has to be moved which means not every object needs a new address. This is the kind of garbage we remove.
    Of course, we do not remove objects kept alive only by weak or soft references. To see what memory is kept alive only through weak or soft references, one can run the "Soft Reference Statistics" from the menu.
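    If the process is still running, a quick cross-check from the JVM side is jmap -histo:live <PID>; the :live option forces a full GC first, so its total should land much closer to the retained size the tool reports than to the raw dump file size.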
    Kind regards,
       - Andreas.
    Edited by: Andreas Buchen on Feb 14, 2008 6:23 PM

  • Dump File Size vs Database Table Size

    Hi all!
    Hope you're all well. If Datapump estimates that 18 million records will produce a 2.5GB dumpfile, does this mean that 2.5GB will also be consumed on the database table when this dump file is imported into a database?
    Many thanks in advance!
    Regards
    AC

    does this mean that 2.5GB will also be consumed on the database table when this dump file is imported into a database?
    No, since the size after import depends on various factors like block size, block storage parameters, etc.
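    Incidentally, Data Pump can give you that size estimate without writing a dump at all; a sketch (the schema name is a placeholder):
    expdp system SCHEMAS=scott ESTIMATE_ONLY=YES ESTIMATE=BLOCKS
    ESTIMATE=STATISTICS is usually more accurate, provided the table statistics are current.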

  • What is the dump file size limit?

    Hello all,
    Can somebody tell me the maximum size of a dump file created by the export utility in Oracle? Is there any maximum size applicable to dump files?
    Thanks in advance
    Himanshu

    Hi,
    Since Oracle 8.1.6 there is no longer an Oracle-imposed limit on dump file size; the old 2 GB restriction applied to fully 32-bit systems (a 32-bit OS with 32-bit files). The maximum file size now depends on your operating system.
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10825/exp_imp.htm#g1040908
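    If you are on a platform that does still enforce a 2 GB file limit, exp itself can split the dump across several files; a sketch (credentials and file names are placeholders):
    exp system/manager FULL=y FILE=full1.dmp,full2.dmp,full3.dmp FILESIZE=2G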
    Nicolas.

  • Export dump file size

    I am exporting one schema. The total size of all the objects in that schema is 70 GB. When I export that schema, will the size of the dmp file be 70 GB or less? Can anyone out there help me?
    OS:Solaris
    DB Version:9.2.0.6
    Thank you..

    Hi,
    When you do an export, you are only exporting the data; other objects, such as indexes, are exported only as a description, so they take up virtually no space. A general rule of thumb is to work out the actual amount of used data blocks you have, remembering that tables can have allocated space that is empty; this will give you a size that is probably slightly larger than the export, maybe by about 10%. Also, export files are highly compressible, so if you need to save space you will usually get about 80-90% compression ratios.
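    Since the files compress so well, on Unix you can even compress on the fly through a named pipe, so the dump never hits disk uncompressed; a sketch (credentials, schema, and paths are illustrative):
    mknod /tmp/exp_pipe p
    gzip -c < /tmp/exp_pipe > /backup/schema_exp.dmp.gz &
    exp scott/tiger OWNER=scott FILE=/tmp/exp_pipe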
    Andre

  • How to list out the tablenames alone from EXP Dump File

    Hi,
    We are creating a dump file by using EXP command through a Parameter file.
    Now, from this exp_backup.dmp file, I just want to find out the list of tables that got exported.
    I don't have access to the parameter file used for exporting the data. Please share your views and suggestions, thank you.

    Hi,
    Did you try the detailed log file created for your export dump?
    Or try strings on your dump file:
    strings <dumpfilename>
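    Alternatively, imp can list the dump's contents without importing anything; a sketch (credentials are placeholders):
    imp system/manager FILE=exp_backup.dmp FULL=y SHOW=y LOG=contents.log
    The resulting log contains the DDL, so the CREATE TABLE statements give you the table list.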
    Regards
    Veda

  • 8 MB dump file is becoming more than 10GB after importing

    Hi,
    I have got a dump to load into the database. The dump file size was only about 8 MB,
    but while importing it into a database it grows to more than 10 GB.
    I have allowed the tablespace to extend up to 10 GB, so the import command terminated with an error.
    What may be the problem, and how can I rectify it?
    Kindly suggest.
    Thanks in advance
    Regards
    Gatha

    Hi,
    The Oracle version is 10.2.0.1.
    The data was imported using the IMP command.
    Since the tablespace filled up, it shows an error:
    IMP-00017: following statement failed with ORACLE error 1658:
    "CREATE TABLE "USTTAXTRANSACTION" ("ACCOUNT_NUM" VARCHAR2(20) NOT NULL ENABL"
    "E, "BILL_SEQ" NUMBER(9, 0) NOT NULL ENABLE, "BILL_VERSION" NUMBER(3, 0) NOT"
    " NULL ENABLE, "TAX_TRANSACTION_ID" NUMBER(9, 0) NOT NULL ENABLE, "UST_TAX_T"
    "RANSACTION_TYPE" NUMBER(1, 0) NOT NULL ENABLE, "CUSTOMER_REF" VARCHAR2(20) "
    "NOT NULL ENABLE, "BILL_TO" VARCHAR2(18), "BILL_TO_TYPE" NUMBER(9, 0), "ORIG"
    "IN" VARCHAR2(18), "ORIGIN_TYPE" NUMBER(9, 0), "TERM" VARCHAR2(18), "TERM_TY"
    "PE" NUMBER(9, 0), "SHIP_TO" VARCHAR2(18), "SHIP_TO_TYPE" NUMBER(9, 0), "SHI"
    "P_FROM" VARCHAR2(18), "SHIP_FROM_TYPE" NUMBER(9, 0), "ORDER_ACCEPT" VARCHAR"
    "2(18), "ORDER_ACCEPT_TYPE" NUMBER(9, 0), "TAXATION_DAT" DATE NOT NULL ENABL"
    "E, "TAXABLE_AMOUNT_MNY" NUMBER(18, 0), "EVENT_MINUTES" NUMBER(9, 0), "UST_C"
    "ATEGORY_ID" NUMBER(9, 0) NOT NULL ENABLE, "UST_CODE_ID" NUMBER(9, 0) NOT NU"
    "LL ENABLE, "UST_CHARGE_GROUP_ID" NUMBER(9, 0) NOT NULL ENABLE, "UST_INCITY_"
    "BOO" VARCHAR2(1) NOT NULL ENABLE, "BUSINESS_BOO" VARCHAR2(1), "ENDCUST_BOO""
    " VARCHAR2(1), "REGULATED_BOO" VARCHAR2(1), "DEBIT_CARD_BOO" VARCHAR2(1), "N"
    "UMBER_OF_LINES" NUMBER(9, 0), "NUMBER_OF_LOCATIONS" NUMBER(9, 0), "FEDERAL_"
    "EXEMPT_BOO" VARCHAR2(1), "STATE_EXEMPT_BOO" VARCHAR2(1), "COUNTY_EXEMPT_BOO"
    "" VARCHAR2(1), "CITY_EXEMPT_BOO" VARCHAR2(1), "EVENT_SOURCE" VARCHAR2(40), "
    ""LEVYING_PERIOD_NUM" NUMBER(3, 0), "EXTERNAL_TRANSACTION_ATTRS" VARCHAR2(40"
    ")) PCTFREE 10 PCTUSED 40 INITRANS 19 MAXTRANS 255 STORAGE(INITIAL 65536 FR"
    "EELISTS 23 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) "
    " LOGGING NOCOMPRESS"
    IMP-00003: ORACLE error 1658 encountered
    ORA-01658: unable to create INITIAL extent for segment in tablespace GENEVA
    Import terminated successfully with warnings.
    Kindly Help me out.
    Thanks.
    Regards
    Gatha
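    For what it's worth, ORA-01658 means the GENEVA tablespace has no contiguous free space left for the INITIAL extent; since the failing statement above only asks for a 64 KB INITIAL, the tablespace is simply full by this point. A tiny dump demanding gigabytes on import is often the result of the export having been taken with exp's default COMPRESS=Y, which folds all allocated extents of a source table into one huge INITIAL. If re-exporting with COMPRESS=N is not an option, one fix is to give the tablespace more room; a sketch (the datafile path is hypothetical):
    ALTER TABLESPACE GENEVA ADD DATAFILE '/u01/oradata/geneva02.dbf' SIZE 2G AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;
    Alternatively, pre-create the table with a sensible storage clause and run imp with IGNORE=Y so the failing CREATE TABLE is skipped.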

  • Analyse large heap dump file

    Hi,
    I have to analyse a large heap dump file (3.6 GB) from a production environment. However, if I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried increasing the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller pieces? Or is there any way to set a maximum heap dump file size in the JVM options, so that we collect heap dumps of a reasonable size?
    Thanks,
    Prasad

    Hi, Prasad
    Have you tried opening it in 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing Java heap dump files. If it can't cope, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant, also with a large heap size.
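    For reference, the heap settings live at the end of MemoryAnalyzer.ini, after the -vmargs line; a sketch (the 8g value is only an example, size it to the dump):
    -vmargs
    -Xmx8g
    -XX:+UseConcMarkSweepGC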

  • Parse dump file

    Hi all.
    Maybe this question has been asked many times already, but I have a task to develop an app that parses a dump file (created by the exp tool) and shows the user the version, all the users, and the objects that were exported. I've already built an app that reads the log created by "imp show=y log=..." and interprets that content.
    Maybe there are some other possibilities or ideas.
    Please don't ask me for the reason for this task.
    Thank you.

    Vike wrote:
    Hi all.
    Maybe this question has been asked many times already, but I have a task to develop an app that parses a dump file (created by the exp tool) and shows the user the version, all the users, and the objects that were exported. I've already built an app that reads the log created by "imp show=y log=..." and interprets that content.
    Maybe there are some other possibilities or ideas.
    Please don't ask me for the reason for this task.
    Thank you.
    imp user/pass show=YES

  • Ddl triggers froma dump file

    I have a dump file made with the exp command. It is a schema dump.
    I would like to extract the DDL of the triggers. How can I do this?
    I tried imp with INDEXFILE,
    but the file it created contains no triggers.

    Why not use dbms_metadata.get_ddl('TRIGGER','<TRIGGER_NAME>') to generate the DDL and recreate the trigger?
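    Note that dbms_metadata works against a live database, so the dump has to be imported somewhere first; then a sketch like this pulls all trigger DDL for a schema (SCOTT is a placeholder):
    SET LONG 100000
    SET PAGESIZE 0
    SELECT dbms_metadata.get_ddl('TRIGGER', trigger_name, owner)
      FROM dba_triggers
     WHERE owner = 'SCOTT';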

  • Size of export dump file

    I read an article saying that an EXPORT dump file cannot exceed a 2 GB limit, because prior to 8.1.3 there was no large-file support in the Oracle Import, Export, or SQL*Loader utilities.
    How about the current versions, 9i or 10g? Can the exp dump file be larger than 2 GB?

    Under AIX, there is an fsize parameter for every user in the /etc/security/limits file. If fsize = -1, the user is allowed to create files of any size. To make this change, you need root access on the Unix environment.
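    You can also check the effective limit from the shell before running exp; for example:
    ulimit -f
    which reports the maximum file size the current session may create (either unlimited or a block count).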

  • How to find if a dump file was taken using normal exp or expdp

    Hi All,
    How can I find out whether a dump file was taken using the conventional exp utility or the expdp utility?
    OS: HPUX
    DB: 10.2.0.4

    Hi ,
    I go with Helios's reply. We cannot just tell whether it was taken by expdp or exp.
    Since your DB version is 10, both are possible.
    The simplest way would be: just run imp, and if it throws an error, it was created through expdp,
    because a dump from expdp cannot be used with imp and vice versa. That could help you find out.
    Otherwise, try to get the syntax by which the dump was created.
    If you have any doubts, wait for the experts to reply.
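    A non-destructive check is also possible from the shell (assuming a Unix-like OS, as here on HP-UX); a sketch (the file name is a placeholder):
    strings yourdump.dmp | head -3
    A conventional exp dump starts with a readable banner such as EXPORT:V10.02.01, followed by the exporting user and character set; a Data Pump dump file does not begin with that banner.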
    HTH
    Kk
