Export Backup Size is different

Hi Gurus,
We are using Oracle 10g (10.2.0.1.0) on Solaris 10 and take an export backup on a daily basis. When we take the export with 'exp', the .dmp file is 30.3 GB, whereas with 'expdp' it is 26.1 GB. I've checked both log files and the number of tables and their row counts are the same, so I'm confused by the size difference, and as a result we are hesitant to move to 'expdp'. Can anybody tell me why the sizes differ and whether we can rely on expdp?

user606947 wrote:
Can anybody tell me why this size difference and whether can we rely on expdp or not?
Oracle recommends that you use the newer Data Pump Export and Import utilities, because they support all Oracle Database 10g features.
Look at the following documentation:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_overview.htm#sthref14
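For reference, the two utilities are invoked quite differently; a minimal sketch of equivalent full-database exports might look like this (the connect strings, directory object name, and file paths are hypothetical):

```text
# Classic export -- writes wherever the client-side path points
exp system/*** full=y file=/backup/full.dmp log=/backup/full.log

# Data Pump export -- writes via a server-side directory object
expdp system/*** full=y directory=DUMP_DIR dumpfile=full.dmp logfile=full.log
```

The two tools also use different dump-file formats internally (imp cannot read an expdp dump and vice versa), so their .dmp files are not comparable byte for byte; a size difference alone does not indicate missing data.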

Similar Messages

  • Is it possible in 9i to take export backup in two different mount point

    Hello Team,
Is it possible in 9i to take an export across two different mount points with a file size of 22 GB?
    exp owner=PERFSTAT FILE =/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp FILESIZE=22528
I tried the above but had no luck, so I later killed the session.
    prs72919-oracle:/global/nvishome5/oradata/jlrvista$ exp owner=SLENTON FILE =/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp FILESIZE=2048
    Export: Release 9.2.0.8.0 - Production on Thu Nov 14 13:25:54 2013
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Username: / as sysdba
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.8.0 - Production
    Export done in US7ASCII character set and UTF8 NCHAR character set
    server uses UTF8 character set (possible charset conversion)
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user SLENTON
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user SLENTON
    continuing export into file /global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    About to export SLENTON's objects ...
    . exporting database links
    . exporting sequence numbers
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    . exporting cluster definitions
    . about to export SLENTON's tables via Conventional Path ...
    . . exporting table                      G_AUTHORS
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp > ps -ef | grep exp
    continuing export into file ps -ef | grep exp.dmp
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    EXP-00056: ORACLE error 1013 encountered
    ORA-01013: user requested cancel of current operation
    . . exporting table                        G_BOOKS
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    EXP-00056: ORACLE error 1013 encountered
    ORA-01013: user requested cancel of current operation
    . . exporting table                 G_BOOK_AUTHORS
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    Export file: expdat.dmp > Killed

See the repeated "Export file: expdat.dmp >" prompts in the log above: if you do not specify sufficient export file names, Export prompts you to provide additional file names. So for your 22 GB you either need to supply enough file names up front (e.g. 11 file names if each FILESIZE is 2 GB) or enter each name when prompted. Note also that FILESIZE is interpreted in bytes unless you add a unit suffix, so FILESIZE=2048 means 2,048 bytes, which is why Export asks for a new file almost immediately.
    FILE
    Default: expdat.dmp
Specifies the names of the export files. The default extension is .dmp, but you can specify any extension. Because Export supports multiple export files, you can specify multiple file names to be used.
When Export reaches the value you have specified for the maximum FILESIZE, it stops writing to the current file, opens another export file with the next name specified by the FILE parameter, and continues until the export completes or the maximum FILESIZE is again reached. If you do not specify sufficient export file names to complete the export, Export prompts you to provide additional file names.
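Following the documentation quoted above, an invocation that pre-allocates enough file names for a 22 GB export might look like this (paths are illustrative; note the explicit unit on FILESIZE, without which the value is taken as bytes):

```text
exp owner=PERFSTAT \
    file=/u01/exp/perfstat01.dmp,/u02/exp/perfstat02.dmp \
    filesize=11GB log=/u01/exp/perfstat.log
```

With two 11 GB files on two mount points, the 22 GB fits without Export ever prompting for a further name.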

  • Export file size is different from the original raw size

Could someone explain why, when the original size of a raw file in a folder is about 11 MB, Lightroom shows it as about 7.1 MB, and when I export it to a JPEG the size becomes around 3.5 MB?
I know the sRAW1 file is 7.1 MB, as stated in the book. But why does it go down to about 3.5 MB when exported to JPEG?
Is there a way to export it at a higher size, like 7.1 MB?
I used sRAW1 on a Canon 50D.
    Thanks,
    Ray

The "actual" size of your image file is given by this basic formula: 8 bits is one byte, and each pixel has one byte for the Red, one byte for the Green, and one byte for the Blue colour channel, so we have 3 bytes per pixel. Multiply the total number of megapixels of your camera's sensor by 3 and you have the true size of an 8-bit image file.
How this image ends up as a final file size depends upon the amount of compression you choose to apply.
The more complex the image, the harder it is to compress without some degradation of image quality, so the original size of an image is not always a guide to its size when compressed. For instance, an image with little or no sharpening applied will compress to a much smaller size than the same image with a large amount of sharpening, even though the same compression settings were used on export.
If you wish to resize your images on export from LR to a specific number of megapixels (and therefore MBs), then LR/Mogrify has this functionality.
    There is a nice simple explanation of this here
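The arithmetic above can be sketched in a couple of lines (the 7.1-megapixel figure is an assumption for illustration, not a camera specification):

```python
def uncompressed_size_mb(megapixels: float, bytes_per_pixel: int = 3) -> float:
    """Approximate size of an uncompressed 8-bit RGB image: 3 bytes per pixel."""
    return megapixels * bytes_per_pixel

# A ~7.1 MP image works out to roughly 21 MB uncompressed, so a 3.5 MB
# JPEG of it reflects heavy lossy compression rather than lost pixels.
print(round(uncompressed_size_mb(7.1), 1))  # → 21.3
```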

  • How much(in size) export backup can be used

    hi,
I want to know whether there is any limit on export backup size, i.e. a size beyond which it becomes difficult or problematic to import.
Thanks

Can you tell me how we can split the export dump file into multiple files?
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#sthref2270

  • Icloud-different backup sizes for whatsapp under the documents

The option to back up WhatsApp appears in two different places in the iCloud settings menu.
Under Documents and Settings it shows the backup size as 39.1 MB, and under Backup Options it shows 112 MB. (I have attached screenshots of both sizes.) Could someone please explain this discrepancy, and do I need to keep both backups?
    Thank you,
    Akshay

Is iCloud a backup tool for your computer? Apple Care suggested using iCloud for a backup of my documents.
    Not really. You can store "Documents in the Cloud" but this applies only to documents created by Pages, Numbers, Preview, Keynote, and Microsoft Office documents created on your computer stored in iCloud's "iWork". Within that limited purpose, iCloud works very well.
My question is: if I purchase backup from a cloud provider other than Apple, back up my files in Documents to their cloud (making sure I can open them), and then delete the link to iCloud, would I preserve my data?
    I'm not quite sure I understand what you mean by "purchase backup from a cloud provider" but if you're considering a service such as Carbonite, I don't advocate any of them. To back up your Mac, use your own backup device - Time Capsule or an external hard disk. It's inexpensive and reliable: two elements essential for a backup strategy, that no subscription service can offer.

  • Time Machine gives different full backup sizes

    After this recent Time Machine update I am no longer able to back up. When Time Machine computes the backup it tells me that it is 320GB, which is larger than my backup drive, so it can't back up. However, when I go into the Time Machine preferences it computes the estimated backup size as 272.52GB. Before this Time Machine update was applied my backups were happening with no issues.

    Mark Trolley wrote:
    Thank you. They must have changed the amount of extra space required with this latest patch
    It's been 20% since the early days of Leopard.
    because it was working fine up until that point.
    It may be a combination of things: the drive is clearly too small; while it varies widely depending on how you use your Mac, our general "rule of thumb" is that TM needs at least twice the space of the data it's backing-up.
    If you're like most of us, the amount of data on your system has been growing.
    And apparently Time Machine is doing a full backup, so it just got beyond the capacity of the disk to hold it all.
    Looks like I need a larger Time Machine disk.
    Yup. The good thing is, they continue to get less and less expensive.
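As a rough illustration of that 20% figure (treating the 272.52 GB from the preferences pane as the raw estimate; Time Machine's exact bookkeeping may differ):

```python
raw_estimate_gb = 272.52            # figure shown in Time Machine preferences
padded_gb = raw_estimate_gb * 1.2   # plus the ~20% working space
print(round(padded_gb, 1))  # → 327.0, the same ballpark as the 320 GB reported
```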

  • Different full backup size of identical databases

    Hello,
    I am on Oracle 10GR2.
I have one database instance approximately 50 GB in size. Today I created a second instance with the RMAN duplication process from the first instance, so now I have two similar DB instances of the same 50 GB size.
What is strange to me is the size of the FULL LEVEL 0 backups of these databases.
The backup of the original database is approximately 22 GB; the backup of the second (duplicated) instance is 7 GB.
Can you explain why? Or what should I do to the original database to get the same small backup size?
    Executed RMAN command for backup is: BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
    Thank you

    select sum(bytes)/1024/1024/1024 GB from dba_segments;
This select gives me 6.79 GB in both instances.
I did not use UNTIL TIME for duplication.
The RMAN settings are the same for both instances and I don't use any compression.
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
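Extending the dba_segments query above, comparing the space allocated to datafiles with the space actually used by segments in each instance can show whether the original database simply has more allocated (once-used) space for RMAN to back up (standard dictionary views; run in each instance):

```sql
-- space allocated to datafiles vs. space actually used by segments
select sum(bytes)/1024/1024/1024 as allocated_gb from dba_data_files;
select sum(bytes)/1024/1024/1024 as used_gb      from dba_segments;
```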

  • Limiting Time Machine backup Size with WD MyBookLive and 10.8

I cannot take credit for any part of this solution; I merely merged and clarified how the solutions discovered by two Apple Support Communities contributors much smarter than I (namely "Pondini" - Florida and "himynameismarek") worked perfectly for my situation. All kudos to these two!
    I have about average or better PC skills, but am an absolute newbie with Apple. This week I got a new iMac. Having a number of home PC’s all sharing files and back up space on a Western Digital MyBookLive (“WD MBL”) 3TB network drive (NAS), naturally I wanted to use it to backup the new Mac rather than rushing out to buy an Apple Time Capsule.
    There are hundreds of threads on limiting size of a Time Machine (“TM”) backup, many of which required entries in “Terminal” or were devised on older versions of OSX. I’m running OSX Mountain Lion 10.8, so was concerned they may not work.
    The issues I wanted to resolve were:
Time Machine will use up all of the space on my WD MBL if left to its own devices.
The WD MBL is compatible with Macs and PCs, which is good, but unlike a backup in Windows 7 Pro, which will allow you to make backups in a mapped "share" you create yourself, Apple TM backups will not; they end up in a hidden folder on the NAS (much like PC backups done with WD SmartWare).
At first I thought maybe I could limit the size of a share created on the MBL, but that is not possible, at least not that I've seen, and I have searched for days.
    The solutions:
    First make sure you have the latest firmware for the WD MBL as of today it is MyBookLive 02.11.09-053. From what I’ve read Western Digital fixed the compatibility issues with 10.8 Mountain Lion just recently.
Next you need to start TM so that it starts to create a backup. You can stop the backup once you see files being copied. Do this before you walk through the video tutorial by Marek below. The WD MBL will create the hidden folder you need to find for TM backups. This folder is called "TimeMachine" but it is not visible even in the "MBL_NAME-backup" folder in Finder.
Open Safari and type "afp://xxx.xxx.x.xxx", but use the IP address of your own MBL. Mine was 192.168.1.120; yours will be different.
It will ask how you want to connect. CHOOSE AS A GUEST even if your MBL is protected. I'm not sure why it works, but it does. Then a window will come up asking which share you'd like to mount. You will see all of your own shares, plus one called "software" and now one called "TimeMachine". Choose that one.
Now in Finder you will see a mounted shared item called "YOUR_MBL_NAME-" (the same as the one that is probably already there, but with a dash (-) at the end). You'll also see a new "device" in the device list called "Time Machine Backups". (If you have already watched the video tutorial by Marek, you know you are looking for a file called "YOUR_MACHINE_NAME.sparsebundle". If you browse the folder "Backups.backupdb" in the Time Machine Backups device you won't find it; again, I don't know why, but you won't. It resides in the hidden folder called "TimeMachine" that is now visible in the share you just mounted in step 4.)
    NOW watch this video tutorial http://youtu.be/Nq7mSizqUSI and follow it step by step.
    Voila... issues resolved. Thank you Pondini and Marek!

Try using Terminal to limit the Time Machine sparsebundle size on a Time Capsule;
it should work to limit the Time Machine backup size on any NAS or external disk (or not...):
sudo defaults write /Library/Preferences/com.apple.TimeMachine MaxSize 500000
To return to unlimited:
sudo defaults delete /Library/Preferences/com.apple.TimeMachine MaxSize
If you want to reclaim deleted-file space and shrink the bundle, use:
hdiutil resize -size 500g -shrinkonly /Volumes/TimeMachineYOURNAME/YOURNAME.sparsebundle/
Regards

  • How to ZIP Oracle Datapump export backup file

    Hello All,
My customer is asking me to put the production data dump at the following path: \\138.90.17.56\OMNISAFE.
I don't really understand his requirement, and he also wants me to zip the export backup file. How do I do that? Do you know any Unix command to zip backup files?
thanks and regards
cherry

    1013498 wrote:
Well, thanks for your reply. My Oracle version is 11.2.0.3.b, and if we have the compression option, can you please elaborate on how to do that?
    It's in the documentation.  See Data Pump Export
Let us say my expdp file is abc.dmp. Should I give the command gzip abc.dmp, or something different?
    Let me google that for you
One more question: what does the customer mean by "production data dump to the following path \\138.90.17.56\OMNISAFE"? Please explain.
    How do we know what the customer means?  Why don't you ask him?
    That said, it looks like a url to an ip address and a defined folder at that ip address.  Again, if the customer wants you to send them a file, you need to be working with said customer on the mechanics of accessing their system.
    All that said ....
    Learning how to look things up in the documentation is time well spent investing in your career.  To that end, you should drop everything else you are doing and do the following:
    Go to tahiti.oracle.com.
    Locate the link for your Oracle product and version, and click on it.
    You are now at the entire documentation set for your selected Oracle product and version.
    BOOKMARK THAT LOCATION
    Spend a few minutes just getting familiar with what is available here. Take special note of the "books" and "search" tabs. Under the "books" tab (for 10.x) or the "Master Book List" link (for 11.x) you will find the complete documentation library.
    Spend a few minutes just getting familiar with what kind  of documentation is available there by simply browsing the titles under the "Books" tab.
    Open the Reference Manual and spend a few minutes looking through the table of contents to get familiar with what kind of information is available there.
    Do the same with the SQL Reference Manual.
    Do the same with the Utilities manual.
    You don't have to read the above in depth.  They are reference manuals.  Just get familiar with what is there to be referenced. Ninety percent of the questions asked on this forum can be answered in less than 5 minutes by simply searching one of the above manuals.
    Then set yourself a plan to dig deeper.
    - *Read a chapter a day from the Concepts Manual*.
    - Take a look in your alert log.  One of the first things listed at startup is the initialization parms with non-default values. Read up on each one of them (listed in your alert log) in the Reference Manual.
    - Take a look at your listener.ora, tnsnames.ora, and sqlnet.ora files. Go to the Network Administrators manual and read up on everything you see in those files.
    - *When you have finished reading the Concepts Manual, do it again*.
    Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.
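A minimal sketch of the two approaches discussed in this thread (the file name abc.dmp is from the question; the expdp line is shown as a comment because it needs a live database, and on 11g COMPRESSION=ALL requires the Advanced Compression option):

```shell
# Server-side compression at export time (11g, Advanced Compression option):
# expdp scott DIRECTORY=dp_dir DUMPFILE=abc.dmp COMPRESSION=ALL

# OS-level compression after a normal export:
echo "stand-in for real dump contents" > abc.dmp
gzip abc.dmp        # replaces abc.dmp with abc.dmp.gz
ls abc.dmp.gz
```

Remember that a gzipped dump must be gunzipped before impdp can read it, whereas a COMPRESSION=ALL dump is imported directly.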

  • Setting backup on two different locations

Hello, people. I am a newbie DBA. At my workplace, using RMAN for my backup strategy, I want to set up a backup to two different locations (e.g. A:\backup and B:\backup). I have configured two channels and also set CONFIGURE DEVICE TYPE DISK PARALLELISM 2, but I don't seem to be getting the result I need.
What does CONFIGURE DEVICE TYPE DISK PARALLELISM actually do?
What do I need to do to proceed? Thank you.

    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'D:\oracle\backup\data_%U', '\\atapp\data\olu\backup\data_%U';
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'D:\ORACLE\ORACLE9I\DATABASE\SNCFTESTER.ORA'; # default
This is my configuration and it works fine: a copy of the backup is on both servers, the same size, and there is only one copy of the archive log and controlfile, on one of the servers, which is fine.
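For what it's worth, the duplexing in the configuration above comes from these two settings working together; PARALLELISM only controls how many channels run concurrently, not how many copies are made:

```text
# each datafile backup piece is written twice, once per FORMAT destination
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'D:\oracle\backup\data_%U', '\\atapp\data\olu\backup\data_%U';
```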

  • Exported file size

I've finished my first project in iMovie '11. It's 8 minutes long. I've exported using QuickTime with a couple of different settings. The files I'm getting are 80 MB; I'm guessing this isn't right for uploading to a website. I made a similar video a couple of years ago in an earlier version of iMovie. I have no idea how I exported it and didn't spend any time learning about compression, but I have it on my website as a .mov file and it is only 2.2 MB (granted, the video quality is lousy). Am I supposed to be doing something with optimizing or compressing before export?

I'm assuming you mean Share/Export using QuickTime? I'm not seeing "movie inspector", but here's what's under Options. I've been playing around with this, so this may be from the last export I did, which was 23 MB.
Not sure what "codec" is.
VIDEO FORMAT: H.264
DATA RATE: 256 kbps
IMAGE SIZE: 320x240
FRAME RATE: 30 fps
KEY FRAME: every 24 frames
AUDIO FORMAT: AAC-LC
DATA RATE: 128 kbps
STEREO, 44.1 kHz
ENCODING: GOOD
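Those settings account for the file size directly: total data rate times duration (the 8-minute length is taken from the question above):

```python
video_kbps = 256           # video data rate from the export settings
audio_kbps = 128           # audio data rate
duration_s = 8 * 60        # 8-minute project

# kilobits/s -> kilobytes/s -> total kilobytes -> megabytes
size_mb = (video_kbps + audio_kbps) / 8 * duration_s / 1024
print(round(size_mb, 1))  # → 22.5, matching the ~23 MB export mentioned above
```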

  • It seems like Ethernet backup and Airport backup goes in different ways

For about a month now, I've been having problems with Time Machine backups over a Time Capsule.
When I run Time Machine through Ethernet, there is no problem. Fast and clean, and no error messages in the Console.
When I unplug the Ethernet cable and start AirPort, the next copy shows "Bulk settings attributes failed" several times. A few backups later, problems appear with a "file path", and a few after that, "deep traversal".
I plug the Ethernet cable back in, disconnect AirPort, and am forced to re-index the image bundle. A few backups are correct with Ethernet; unplug Ethernet, connect AirPort, and again the same story.
It seems like Ethernet backups and AirPort backups go in different ways.
Permissions have been repaired and First Aid finds no problems on any disc.
Any ideas? Thanks in advance.


  • SSRS report Excel Export file size is huge

    Hello everyone,
I am facing an issue with an SSRS report exporting to Excel.
Issue: the exported file size is too big.
I have a matrix report (a parameterized report) that fetches data from multiple tables.
Initially, when I
1) select all values for all parameters,
2) load the report, and then
3) export to Excel,
the exported file size was about 10 MB to 12 MB.
Now, suddenly, from this month the exported file size is around 62 MB for the same data.
I checked different forums and tried the solutions, but they didn't work.
Can anyone please suggest something?
My environment is:
Visual Studio (BIDS) 2008
SQL Server 2008 R2
Excel 2007
Thanks

    Hi UdayKGR,
According to your description, when you export a report to an Excel file, the exported file size is huge, and you want to reduce it. Right?
In this scenario, do you export the report to an Excel 2003 file? When reports are first exported and saved to Excel 2003, they do not benefit from the file optimization that Excel automatically applies to its *.xls workbook files. The larger file size can cause problems for e-mail subscriptions and attachments. To reduce the size of the *.xls files for exported reports, open the *.xls files and then resave the workbooks; resaving typically reduces their file sizes by 40 to 50 percent. Also, please try installing Reporting Services 2008 R2 Service Pack 2 and reducing the merged columns in the exported Excel file. For more information, please see the links below:
Exporting to Microsoft Excel (Report Builder and SSRS)
Problem regarding the size of exported report from Reporting Services to Excel format
SSRS Export to Excel - Performance
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • Estimated Backup size

    Hi,
I'm trying to use TM for the first time to back up the internal disk of my MacBook Pro to a newly bought external one.
The internal drive (the source for the backup) has a total capacity of 232.9 GB (250,059,350,016 bytes) according to Disk Utility, of which 98.0 GB (105,187,246,080 bytes) are used.
Time Machine, however, claims the initial backup will back up 422.42 GB, which is about 4.3 times the amount of data on my drive!
Is this normal? Why is it that far off in the weeds?
This doesn't give me a very good feeling about the reliability of Time Machine.
Needless to say, the initial backup runs endlessly...
BTW, forgot to mention: this is 10.5.8, and the external drive is 1.5 TB, USB 2.0.

    I don't use File Vault, and it does some strange things vs. Time Machine. As I understand it, you should have both a normal home folder and a sparse image (the encrypted version). That's the "horst" volume in the second UUID message.
    Right. That's what I found out in the meantime.
    Your "normal" home folder should have been excluded automatically. If you now exclude it manually, however, it doesn't sound like that will get to the correct size.
It was, when I checked initially, but now it no longer is.
    And yes, it's FileVault confusing not only Time Machine but rather the entire system:
When I select my home folder in the Finder sidebar and do a Get Info on it, the Finder tells me my home folder has a capacity of 465.13 GB, of which 330.56 GB are used (which is nonsense, since my entire hard disk is only about 233 GB and the amount of data in my home folder is about 48 GB).
But then it adds up fairly well: 330 GB (of assumed data) plus 98 GB of actual data on disk is about the estimated backup size TM is reporting.
The Unix command df reports the same false figures, so there is obviously something wrong in Leopard's calculation of FileVault sizes:
    Horsts-MBP:~ horst$ df
    Filesystem 512-blocks Used Available Capacity Mounted on
    /dev/disk0s2 487725344 204928248 282285096 43% /
    devfs 218 218 0 100% /dev
    fdesc 2 2 0 100% /dev
    map -hosts 0 0 0 100% /net
    map auto_home 0 0 0 100% /home
    /dev/disk1s2 975450624 693231064 282219560 72% /Users/horst
    /dev/disk2s2 2929605344 249137376 2680467968 9% /Volumes/Time Machine Backups
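A quick check confirms the df figures match what Finder reported (block counts taken verbatim from the df output above; df counts 512-byte blocks, and the Finder's "GB" here are binary GiB):

```python
def blocks_to_gib(blocks_512: int) -> float:
    """Convert df's 512-byte block counts to GiB."""
    return blocks_512 * 512 / 2**30

# /dev/disk1s2 (the FileVault home volume) from the df output:
print(round(blocks_to_gib(975450624), 2))  # → 465.13, Finder's "capacity"
print(round(blocks_to_gib(693231064), 2))  # → 330.56, Finder's "used"
```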
Do you really need your entire home folder encrypted? It's not going to play well with TM, even under the best of circumstances. You won't be able to see the details of what was backed up, and I don't believe you can restore individual items. Plus, TM will only back it up when you log out, not on its normal hourly schedule.
That's what I tried, but after logging out, the previous backup stopped. I see Time Machine and FileVault aren't really good friends, possibly even mutually exclusive. If I have to choose, I'll stick with FileVault, because seamless data encryption is currently more important to me than hourly backups.
Maybe I'll have to investigate a different backup strategy.

  • Minimum backup size

What are your minimum backup sizes? Even if I do nothing and immediately manually repeat a backup, mine is 320 MB (and takes about 7 minutes).
If you are able to show your invisible system files, there is a hidden log file (.backup.log) in each backup folder which shows the size, the time, and the length of that backup. Here is the download page of a little script (not written by me) that enables invisible files: http://www.scriptbuilders.net/files/showhideinvisiblefiles1.6.html
Caveat: use at your own risk. Please don't move or delete any invisible files.
Also, I am using OS X Server.

    BastiChina wrote:
    After some research I found Pondini's website (http://web.me.com/pondini/Time_Machine/Home.html)
    Just for reference, there are links to it in the *User Contributed Tips* box at the top of this forum, too.
    that the Time Machine's minimum backup size seems to be around 160MB - even when backups are only 5 or 10 minutes apart.
    That's not a lot, and shouldn't take long.
    And they shouldn't be 5-10 minutes apart, unless you mean one runs for 50 or 55 minutes, then the next one starts. If it's taking that long to back up that little, something is terribly wrong.
    Full backups are another story. There are a few things that can cause them, such as doing a full restore. Are you sure they're full backups of your entire system, or "only" backups of something very large, like your home folder? Those may have different causes, and fixes, so try to narrow it down.
    See #A1 in [Time Machine - Troubleshooting|http://web.me.com/pondini/Time_Machine/Troubleshooting.html] for a handy widget that will display the backup messages from your logs; that may help clarify what's been going on with those.
    I had the theory that Carbonite somewhat screws up the backup. To be sure, I paused Carbonite and suddenly the incremental backups dropped from 160MB to a few KB.
    See #A2 in Troubleshooting for a couple of 3rd-party apps that will show exactly what's being backed-up.
    Also see #D2 there -- your backups sound awfully slow.
