Backup size growing more than expected

I have about 180GB of data on my system and a 500GB volume dedicated to Time Machine. For the past few weeks I've been scanning a lot of old photos, which go into a folder before being imported into iPhoto, and are then deleted from the folder. As expected, the free space on the Time Machine volume steadily decreased as the photos were backed up from the folder and the iPhoto library.
When I started running out of space on the Time Machine volume I deleted all backups of the temp folder, but I only got a fraction of the expected space back. I know I can exclude the above folder from backups if I want, but how do I tell what is taking up the additional space?
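One way to dig in (a hedged sketch; the volume, machine, and folder paths are placeholders for yours, and du counts hard-linked files only once per run, so each snapshot shows roughly the space unique to it):
    # List how much each snapshot's copy of the Pictures folder adds.
    du -sh "/Volumes/TM Volume/Backups.backupdb/MyMac/"*"/Macintosh HD/Users/me/Pictures"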

There's been some confusion in this discussion because, prior to version '08, the iPhoto library wasn't a package, it was just a folder. Everyone who has indicated that Time Machine backs up only new photos has turned out not to be using the newest version. I'll be honest, I've not yet looked at how iPhoto '08 behaves with Time Machine - I'll put that on my (long) list of things to do.
The link you posted suggests that Time Machine backs up only the changes that occur within a package, not the entire package. It is possible that the engineers have hard-coded special treatment for iPhoto, but it makes little sense for TM to back up only the changes within packages generally; that would make rewinding to previous application versions, among other things, more difficult than it needs to be.

Similar Messages

  • Aggregation growing more than expected

    Hi All,
    I have loaded the cube with data and it takes up 37 GB. I ran aggregation on the cube using the following statement:
    execute aggregate process on database "APP123"."DATA123" stopping when total_size exceeds 1.5;
    I expected the cube to grow to about 55 GB (considering the aggregation factor of 1.5), but the cube grew to 117 GB.
    My understanding is that since I set the factor at 1.5, the cube should aggregate until it reaches about 55 GB and then stop. Any reason why it's growing to 117 GB? Any help/pointers are appreciated.
    Thanks

    I'm also seeing this behavior in several cubes I've created. I have an SR open and have spent a significant amount of time conversing with developers. The problem is the estimation algorithm. Essbase needs an estimate of each view's size to determine what will fit inside the space indicated by your stop value, but for cubes of a significant enough size, and with a skewed distribution (certain outline members associated with much more data than others), the estimate Essbase comes up with can be wildly off.
    There are one or more views that Essbase is building that make up a significant percentage of the overall level-0 data size.
    The only workaround is to look at the generated views and manually remove the ones that are too large from the .csc file. There is no way to see what the size of an aggregate view actually is (even after the build, querying for the size still returns an estimate), so you'll need to build the aggregates one at a time to figure out which ones are too big.
    Thanks,
    Michael
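    For reference, a hedged MaxL sketch of that workaround: dump the selected views to a file, edit out the oversized ones, then rebuild from the edited file (database names reused from the question above; the view_file name is a placeholder):
    execute aggregate selection on database "APP123"."DATA123" dump to view_file 'views';
    /* edit views.csc here, removing the views that are too large */
    execute aggregate build on database "APP123"."DATA123" using view_file 'views';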

  • Unable to view image if the size is more than 3KB using XML Publisher.

    Hello,
    We are printing PO approver signature using xml publisher (rtf) on a pdf.
    If the size of the image is 3KB or less, the image gets printed.
    But, if the size is more than 3KB the image does not get printed.
    Additional Info:
    1. The signature is stored as a jpg image in the fnd_lobs table.
    2. The following code is used in the RTF:
    <fo:instream-foreign-object content-type="image/jpg">
    <xsl:value-of select="IMG_SIGNATURE"/>
    </fo:instream-foreign-object>
    3. We are using the following function, which converts a BLOB to a base64-encoded CLOB.
    CREATE OR REPLACE FUNCTION XX_BLOBTOBASE64 (
      b IN BLOB
    ) RETURN CLOB
    IS
      sizeb  PLS_INTEGER := 4080; -- chunk size; a multiple of 3 keeps base64 chunks concatenable
      buffer RAW(4080);
      offset PLS_INTEGER DEFAULT 1;
      result CLOB;
    BEGIN
      /* Earlier manual version, kept for reference:
      dbms_lob.createtemporary(lob_loc => result, cache => FALSE, dur => dbms_lob.call);
      LOOP
        BEGIN
          dbms_lob.read(lob_loc => b, amount => sizeb, offset => offset, buffer => buffer);
        EXCEPTION
          WHEN no_data_found THEN EXIT;
        END;
        offset := offset + sizeb;
        dbms_lob.append(
          dest_lob => result,
          src_lob  => to_clob(utl_raw.cast_to_varchar2(utl_encode.base64_encode(buffer))));
      END LOOP;
      */
      DBMS_LOB.createtemporary(lob_loc => result, cache => FALSE, dur => DBMS_LOB.call);
      Wf_Mail_Util.EncodeBLOB(b, result);
      RETURN result;
    END;
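    For context, a hedged example of how such a function might be referenced in the data-source query feeding IMG_SIGNATURE (FND_LOBS column names per standard EBS; the bind variable is hypothetical):
    SELECT XX_BLOBTOBASE64(file_data) AS IMG_SIGNATURE
      FROM fnd_lobs
     WHERE file_id = :p_signature_file_id;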
    Please let us know if there is any method to resolve this issue.
    Thanks,
    Angelica.

    Hi,
    Are you using Outlook.com to send/receive emails? Based on my research, we can only add an image/picture to an e-mail signature if it is Web-based (a picture that is available on an existing website or stored in online storage). See:
    http://answers.microsoft.com/en-us/outlook_com/forum/osettings-oemailset/add-logo-to-outlookcom-signature/4455facf-0926-42a6-aad7-756de662a865
    Since this forum is for general questions and feedback related to Outlook desktop application, if you are using Outlook.com, I'd recommend you post your question in the Outlook.com forum:
    http://answers.microsoft.com/en-us/outlook_com/forum?tab=Threads
    The reason why we recommend posting appropriately is that you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Steve Fan
    TechNet Community Support
    It's recommended to download and install the Office Configuration Analyzer Tool (OffCAT), which is developed by Microsoft Support teams. Once the tool is installed, you can run it at any time to scan for hundreds of known issues in Office programs.

  • Creating form - want to change font style and size in more than one field at a time HOW??

    creating form - want to adjust font style and size in more than one field at a time - HOW??

    Select the fields with the mouse, right-click one of them, go to Properties
    and set the settings you'd like them all to have.

  • Can't we create a custom PA infotype whose size is more than 1000 chars?

    Can we create a custom PA infotype whose size is more than 1000 characters?
    I have tried it this way: in transaction PM01 I created 4 fields, each with a data element of length 255, so the total size is 1020, and it throws an error. Please help...

    Hi,
    as described in the following documentation, the length of the data can be up to 1500 bytes.
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/4f/d526be575e11d189270000e8322f96/content.htm
    Regards
    Bernd

  • How we will know that dimension size is more than the fact table size?

    how we will know that dimension size is more than the fact table size?

    Hi,
    Let us assume that we put Division and Distribution Channel in one dimension, and that Division has 20 distinct values in R/3 and Distribution Channel has 30. Then the dimension table can hold at most 20 × 30 = 600 records, and we can make a rough estimate of the number of records in the cube by observing the raw data in the source system.
    With rgds,
    Anil Kumar Sharma .P

  • Profile Manager backup size growing quickly

    Over the past 2 weeks Profile Manager (or PostgreSQL) has been creating backup files at a very high rate (in total more than 800 backups since March 30th). Each of these backup files is 16.7 MB in size (by now more than 14 GB in total) and has a name of the form '000000010000000x000000nn' (with x a capital hex character and nn a hexadecimal sequence number). The location of the backups is /Library/Server/ProfileManager/Data/backup.
    Questions:
    What purpose do these backups serve?
    Is something wrong (since it only started 2 weeks ago)?
    Can I stop it?
    Can I delete older ones safely?
    What changed recently on my server:
    Installed Server 3.1 (March 22nd) and Server 3.1.1 (April 4th)
    Renewed my server certificate (it was about to expire April 25th, hence the warnings since March 26th or 27th)
    I did make some changes to profiles March 22nd. Also enrolled a new device
    Thanks for the help

    The files are created by PostgreSQL and contain WAL (write-ahead logging) data. It's still not clear why these files should be growing at more than 33 MB a day.
    Why is there no snapshot of the database taken, with past WAL files deleted and new ones started, every day/week/month? Can I do this myself? I'm now wasting 16 GB of storage as an inefficient backup for PostgreSQL.
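    If you do prune them yourself, a hedged sketch using PostgreSQL's pg_archivecleanup utility (the WAL segment name below is a placeholder; take a fresh base backup first and keep every segment that backup still needs):
    # Placeholder segment name; pg_archivecleanup ships with PostgreSQL.
    # Deletes archived WAL segments older than the named one.
    pg_archivecleanup /Library/Server/ProfileManager/Data/backup 000000010000000A00000042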

  • Data size increases more than 5-fold after Essbase service hangs

    I have a situation where data explodes by more than 5-fold after Essbase hangs. This data explosion occurs not just at level 0 but at the parent levels also. The logs do not really tell me anything, but it really seems to be a consistent process.
    Any thoughts?
    Thanks

    If you export the level 0 data in column format, I bet you will see a lot of either zero data or repetitive data; I've never seen Essbase just spontaneously make data without a calculation or an incorrectly set up member formula causing it.
    What are the circumstances around the hanging condition? How do you get out of the hang? When was it last stable, and what changes have been made to the outline and/or calculations since this behavior began?
    Regards,
    John A. Booth
    http://www.metavero.com
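    For reference, a hedged MaxL sketch of the level-0 column export suggested above (application/database names and the file path are placeholders):
    export database Sample.Basic level0 data in columns to data_file '/tmp/lev0.txt';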

  • Backup VHD size is much more than expected and other backup related questions.

    Hello, I have a few Windows 2008 servers and I have scheduled a weekly backup (Windows Backup) which runs on Saturday.
    1. I recently noticed that the actual size of the data drive is only 30 GB, but the weekly backup creates a VHD of 65 GB. This is not happening for all servers, but for most of them. Why is it so, and how can I get the correct VHD size? A 60 GB VHD doesn't make sense for 30 GB of data.
    2. If at any moment I have to restore the entire Active Directory on Windows 2008 R2, is the process the same as on Windows 2003 (going to DSRM mode, restoring the backup, authoritative restore), or is there any difference?
    3. I also noticed that if I have a backup VHD of one server (let's say A) and I copy that backup to another server (let's say B), then Windows 2008 only gives me the option to attach the VHD on server B. Is there any way to restore the data from the VHD through the backup utility? Currently I am copying and pasting from the VHD to server B's data drive, but that is not the correct way of doing it. Is it a limitation of Windows 2008?
    Senior System Engineer.

    Hi,
    If a large number of files have been deleted on the data drive, the backup image can have large holes. You can compact the VHD image using the diskpart command to restore the correct VHD size.
    For more detailed information, please refer to the thread below:
    My Exchange backup is bigger than the used space on the Exchange drive.
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/3ccdcb6c-e26a-4577-ad4b-f31f9cef5dd7/my-exchange-backup-is-bigger-than-the-used-space-on-the-exchange-drive?forum=windowsbackup
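    A hedged diskpart sketch of that compaction (the VHD path is a placeholder; compact requires the vdisk to be detached or attached read-only):
    rem Run from an elevated prompt; substitute the real VHD path.
    diskpart
    select vdisk file="D:\Backups\ServerA.vhd"
    attach vdisk readonly
    compact vdisk
    detach vdisk
    exit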
    For the second question, the answer is yes. If you want to restore the entire Active Directory on Windows 2008 R2, you need to start the domain controller in Directory Services Restore Mode to perform a nonauthoritative restore from backup.
    Performing Nonauthoritative Restore of Active Directory Domain Services
    http://technet.microsoft.com/en-us/library/cc816627(v=ws.10).aspx
    If you want to restore a backup on server B which was created on server A, you need to copy the WindowsImageBackup folder to server B.
    For more detailed information, please see:
    Restore After Copying the WindowsImageBackup Folder
    http://standalonelabs.wordpress.com/2011/05/30/restore-after-copying-the-windowsimagebackup-folder/
    Best Regards,
    Mandy
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • TimeMachine backups more than expected ?

    Hi, I have a little issue with Time Machine. I only back up one user, for whom I excluded the Desktop and some other folders that I don't need to back up. Somehow, every time it does a backup, it finds about 6000 files totalling about 700 MB to back up, although I did almost nothing on my Mac. I would like to check which files exactly are in the backup, so I can determine whether I can exclude extra folders. I have not found any way to get this information so far. The system.log does contain information about how much, how many files, when, etc., but that is all. I wanted to try TimeTracker, which has been recommended in this forum, but it is an alpha version that has already expired and cannot be used anymore. Any idea is welcome.

    You can use TimeTracker to see exactly what is being backed up every time. I don't know what version you were using, but the one they officially offer works and is certainly not alpha:
    http://www.charlessoft.com/
    But in general, TM is not a good tool if you want to back up only a few folders. TM tries to back up everything, and it's very difficult to exclude everything but those folders. I would also recommend you back up the whole system; this provides by far the best security, and given the prices and sizes of hard drives these days, there is no reason not to do it.

  • Copy to Clipboard Method (Table) copies one additional empty column more than expected

    Hi,
    I'm using the Copy to Clipboard method for a table, to copy, for example, 4 rows with 3 columns. When I paste it into Excel I get 4 rows with the 3 columns plus an extra empty column, so the real size is 4x4.
    Is this a LabVIEW error, or can someone explain why this is happening? Or even better, how can I fix it?
    I have isolated the problem into a separate VI so you can reproduce the error. Just let the VI run once and then paste the clipboard into Microsoft Excel.
    My LabVIEW version is 11.0 32-bit, Microsoft Office 2010, WinXP SP3.
    Regards
    Marcel
    Solved!
    Go to Solution.
    Attachments:
    LabVIEW2011_Tablebug.vi ‏11 KB

    Snippets apparently hate property and invoke nodes.
    See the attached VI for a proposed workaround using the Clipboard.Write method.
    Attachments:
    LabVIEW2011_Tablebug mod.vi ‏13 KB

  • Input required - when line size exceeds more than 1023

    hi all,
    In my output I have to display 64 fields, which exceeds the line length of 1023. If I declare a line size greater than this value, I get a syntax error saying the max line size is 1023. What should I do in this case?
    thanks,
    maheedhar.t

    Hi
    You have to split the line.
    If the list is one you create yourself, it would be better to use the ALV functions; they automatically arrange the layout to produce a list no longer than 1023 characters, and the user can choose which fields to hide.
    If you need to show all fields, you have to split the line and show the fields across several lines (perhaps 2 lines are enough).
    Max
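    As an alternative to splitting the line, a hedged ABAP sketch of the ALV approach above (lt_data and its 64-field row type are placeholders):
    * Hedged sketch: SALV displays wide tables without the 1023-char list limit.
    DATA lo_alv TYPE REF TO cl_salv_table.
    TRY.
        cl_salv_table=>factory(
          IMPORTING r_salv_table = lo_alv
          CHANGING  t_table      = lt_data ).
        lo_alv->display( ).
      CATCH cx_salv_msg.
        " handle the error as appropriate
    ENDTRY.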

  • RAC Data Guard switchover taking more than the expected time

    I have a Data Guard setup in a RAC environment; it is configured and working fine.
    Our goal is to do the switchover using DGMGRL within 5 minutes. We have followed the proper setup and the MAA tuning document, and everything works, but the switchover timing is 8 to 10 minutes. It varies depending on some parameters, but it does not meet our goal of less than 5 minutes.
    The only observation we have is as follows. After the 'switchover to <db_name>' command in DGMGRL:
    1) it shuts down (abort) the 2nd instance
    2) it transfers all the archive logs (using LGWR in ASYNC mode) of instance 1
    3) it then looks for the archive logs of the 2nd instance; this step takes up to 4 minutes, and we do not know why it takes that much time or how to tune it
    4) it converts the primary to a standby
    5) it starts the old standby as the new primary
    All steps here are tuned except step 3; that is where a lot of our time is going. Any idea or explanation
    why it takes such a long time to find the exact archive logs of the 2nd (aborted) instance to transfer to the standby site?
    Can anyone give an explanation or a solution for tuning this?
    Regards
    Bhushan
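    A hedged diagnostic sketch for step 3 (standard v$archived_log columns; run on the standby):
    -- Highest applied sequence per thread; a lagging thread 2 points at
    -- the aborted instance's logs described in step 3 above.
    SELECT thread#, MAX(sequence#) AS max_applied
      FROM v$archived_log
     WHERE applied = 'YES'
     GROUP BY thread#;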

    Hi Robert,
    I am on 10.2.0.4 and we have used "MAA_WP_10gR2_DataGuardNetworkBestPractices.pdf", which is available on the Oracle site.
    Here are my configuration details:
    DGMGRL> connect sys@dv01aix
    Password:
    Connected.
    DGMGRL> show configuration;
    Configuration
    Name: dv00aix_dg
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    dv00aix - Physical standby database
    dv01aix - Primary database
    Current status for "dv00aix_dg":
    SUCCESS
    DGMGRL> show database verbose dv00aix
    Database
    Name: dv00aix
    Role: PHYSICAL STANDBY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    dv00aix1 (apply instance)
    dv00aix2
    Properties:
    InitialConnectIdentifier = 'dv00aix'
    ObserverConnectIdentifier = ''
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '4'
    ReopenSecs = '300'
    NetTimeout = '60'
    LogShipping = 'ON'
    PreferredApplyInstance = 'dv00aix1'
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '900'
    LogArchiveMaxProcesses = '5'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = ''
    LogFileNameConvert = '+SPARE1/dv01aix/,+SPARE/dv00aix/'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName(*)
    SidName(*)
    LocalListenerAddress(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    LatestLog(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value
    Current status for "dv00aix":
    SUCCESS
    DGMGRL> show database verbose dv01aix
    Database
    Name: dv01aix
    Role: PRIMARY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    dv01aix1
    dv01aix2
    Properties:
    InitialConnectIdentifier = 'dv01aix'
    ObserverConnectIdentifier = ''
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '4'
    ReopenSecs = '300'
    NetTimeout = '60'
    LogShipping = 'ON'
    PreferredApplyInstance = 'dv01aix1'
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '2'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = '+SPARE/dv00aix/, +SPARE1/dv01aix/'
    LogFileNameConvert = '+SPARE/dv00aix/,+SPARE1/dv01aix/'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName(*)
    SidName(*)
    LocalListenerAddress(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    LatestLog(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value
    Current status for "dv01aix":
    SUCCESS
    DGMGRL>
    log_archive_dest_2 string service="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=dv00aix_XPT)(INSTANCE_NAME=dv00aix1)(SERVER=dedicated)))", LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=0 max_connections=4 reopen=300 db_unique_name="dv00aix" register net_timeout=60 valid_for=(online_logfile,primary_role)
    fal_client string (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=dv01aix_XPT)(INSTANCE_NAME=dv01aix1)(SERVER=dedicated)))
    fal_server string (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527))(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=dv00aix_XPT)(SERVER=dedicated)))
    db_recovery_file_dest string +SPARE1
    db_recovery_file_dest_size big integer 100G
    recovery_parallelism integer 0
    fast_start_parallel_rollback string LOW
    parallel_adaptive_multi_user boolean TRUE
    parallel_automatic_tuning boolean FALSE
    parallel_execution_message_size integer 2152
    parallel_instance_group string
    parallel_max_servers integer 8
    parallel_min_percent integer 0
    parallel_min_servers integer 0
    parallel_server boolean TRUE
    parallel_server_instances integer 2
    parallel_threads_per_cpu integer 2

  • Forecast Engine forecasting 2.5 to 3 times more than expected

    Is there a system parameter which can be used to reduce the forecast qty?
    Is it possible that 2 instances with the same data (one instance cloned from the other) have different forecasts after running the engine on both instances?
    Please advise
    Thanks
    D

    Hi,
    Compare the value of GLOB_PROP in UAT and PROD for a particular combination where you see this difference in forecast. If the value is 2-3 times higher in PROD than in UAT, you can try the steps below:
    1. Update mdp_matrix set prop_changes=1;
    2. Commit;
    3. Exec proport;
    4. Run the engine in BATCH mode
    Cheers
    Raj
    (http://oracledemantra.blogspot.com/)
    (http://www.demandgyan.com/)
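    A hedged SQL*Plus sketch of steps 1-3 above (assumes the standard Demantra schema, where PROPORT is the procedure that recalculates proportions):
    -- Flag all combinations for proportion recalculation, then rerun proport.
    UPDATE mdp_matrix SET prop_changes = 1;
    COMMIT;
    EXEC proport;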

  • After upgrade, stderr3 file is growing beyond the capacity of the file system

    The stderr3 file, located in the /usr/sap/C11/DVEBMGS00/work/ folder, was growing rapidly. How can I reduce the file size? How do I truncate it?

    Hi,
    1. Could you check SAP note 1140307? Update the kernel patch stated in that note if yours is old.
    2. Normally, what I would do is check the content of stderr3 to see what information keeps being written to it repeatedly.
    3. Check whether you have a lot of inconsistencies in the TemSe. See SAP note 48400.
    For the moment, delete the stderr file that is not active, at the operating system level.
    Kindly refer to SAP note 16513 (File system is full - what to do).
    Hope this helps.
    Regards,
    Vincent
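    A hedged shell sketch of that cleanup, truncating in place instead of deleting (a common alternative when a process may still hold the file open; path from the question):
    # Empty the inactive stderr file without removing it, so any process
    # still holding the handle keeps a valid file.
    cd /usr/sap/C11/DVEBMGS00/work
    cat /dev/null > stderr3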
