Folder Size Exceeds Disk Size

I am using a Western Digital 2TB external HDD for Time Machine backups for 3 Macs. The remaining disk space is now 733GB. I am confused: one of the backup folders occupies more than 2TB, and the other 2 folders are still calculating, six hours after I started.
My 2 questions are:
1) Why is the folder size 3.13TB, which is more than the physical size of the hard disk?
2) Why is it taking such a long time to compute the disk space for the other two folders?

Hi,
Actually, it may be a display error; in any case, I will record this issue.
In the meantime, here is a short description of what is going on.
A file has two size-related properties:
Size: the actual size of the file in bytes.
Size on disk: the actual amount of space taken up on the disk.
I believe Windows Explorer invokes the GetCompressedFileSize function to retrieve the actual number of bytes of disk storage used to store a specified file. If the file is located on a volume that supports compression and the file is compressed, the value obtained is the compressed size of the specified file.
Everything works as expected in Windows 8 and previous versions. To prove this is a misread issue, let's call the GetCompressedFileSize function ourselves and see whether we can retrieve the right value for the compressed file size.
The first screenshot shows the result in Windows 8; the second shows Windows 8.1, where a PowerShell script retrieves the correct value.
For the whole process, you can refer to these articles:
Get the size of a file on disk
http://www.powershellmagazine.com/2013/10/31/pstip-get-the-size-of-a-file-on-disk/
GetCompressedFileSize function
http://msdn.microsoft.com/en-us/library/windows/desktop/aa364930(v=vs.85).aspx
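For reference, here is a minimal PowerShell sketch along the lines of the PowerShell Magazine article above. It is only an illustration: the file path is a placeholder, and the wrapper type name (Win32Demo.SizeOnDisk) is made up for this example.
    # Sketch only: P/Invoke GetCompressedFileSizeW to read one file's size on disk.
    $signature = '[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
                  public static extern uint GetCompressedFileSizeW(string lpFileName, out uint lpFileSizeHigh);'
    $win32 = Add-Type -MemberDefinition $signature -Name SizeOnDisk -Namespace Win32Demo -PassThru
    $file  = 'C:\temp\example.txt'                      # placeholder path
    [uint32]$high = 0
    $low  = $win32::GetCompressedFileSizeW($file, [ref]$high)
    $sizeOnDisk = [uint64]$high * 4GB + $low            # high DWORD * 2^32 + low DWORD
    '{0} : {1:N0} bytes on disk' -f $file, $sizeOnDisk
If the returned value is smaller than the file's Size, compression (or sparse allocation) is in play; if Explorer instead shows the plain Size, you are seeing the display issue described above.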
So I think Explorer in Windows 8.1 simply fails to read the compressed file size and falls back to using the "Size" value directly when it calculates "Size on disk".
Hope this makes it clear.
Alex Zhao
TechNet Community Support

Similar Messages

  • iPad photos sort by size, largest disk size to smallest?

    iPad photos sort by size, largest disk size to smallest?

    PS. I searched for the exact phrase "sort iPad photos by size" on Google and I am aghast that there were ZERO RESULTS.
    https://www.google.com/search?site=webhp&ei=yEsuU9nNGYzkoASmmILoBQ&q=%22sort+iPad+photos+by+size%22&oq=%22sort+iPad+photos+by+size%22&gs_l=mobile-gws-serp.3..0i22i30.19488.31073.0.31755.19.19.0.0.0.2.350.3389.0j15j3j1.19.0....0...1c.1.37.mobile-gws-serp..2.17.2771.LsiZBTPRvQk

  • How the Performance data depends on Number of CPU/ RAM size/ Hard Disk size

    Hi,
    I have started Performance testing (web test and Load tests) using VSTS 2012. I was analysing the VSTS results summary page.
    I could not find a way to correlate the performance result data with the server's configuration, such as:
    1- number of CPUs/cores
    2- RAM size
    3- hard disk capacity
    Could you help me work out whether we can reach a point where we can say, for example, that server performance would improve if we increased any of these hardware resources, and how the hardware configuration impacts performance?
    Thanks.
    Thanks, Anoop Singh

    The results show the performance with the hardware and software configuration in place when the test is run. The results may show that some parts of the configuration are lightly loaded (and hence have plenty of spare capacity) whereas other parts are heavily loaded (and hence may be limiting system performance).
    To see the effect of changing the hardware or software configuration, you would need to run exactly the same test with that changed configuration. Then the results of the two (or more) different runs can be compared. Microsoft has some details on how to compare load test results. See
    https://msdn.microsoft.com/en-us/library/dd728091(v=vs.120).aspx
    Regards
    Adrian

  • Problem in rolling to a new log file only when it exceeds max size (log4net library)

    Hello,
    I am using log4net library to create log files.
    My requirement is to roll to a new log file, with the file name appended with a timestamp, only when the file size exceeds the max size (file name example: log_2014_12_11_12:34:45, etc.).
    My config is as follows:
    <appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender">
      <param name="File" value="logging\log.txt" />
      <param name="AppendToFile" value="true" />
      <rollingStyle value="Size" />
      <maxSizeRollBackups value="2" />
      <maximumFileSize value="2MB" />
      <staticLogFileName value="true" />
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
      <layout type="log4net.Layout.PatternLayout">
        <param name="ConversionPattern" value="%-5p%d{yyyy-MM-dd hh:mm:ss} – %m%n" />
        <conversionPattern value="%newline%newline%date %newline%logger [%property{NDC}] %newline>> %message%newline" />
      </layout>
    </appender>
    The issue is that the date/time is not appended to the file name.
    But if I set the rolling style to "Date" or "Composite", the file name gets appended with the timestamp, but a new file gets created before reaching the max file size (because a file gets created whenever the date changes, which I don't want).
    Please help me solve this issue.
    Thanks

    Hello,
    I'd ask the log4net people: http://logging.apache.org/log4net/
    Or search on CodeProject - there may be some tutorials that would help you.
    http://www.codeproject.com/Articles/140911/log-net-Tutorial
    http://www.codeproject.com/Articles/14819/How-to-use-log-net
    Karl
    When you see answers and helpful posts, please click Vote As Helpful, Propose As Answer, and/or Mark As Answer.
    My Blog: Unlock PowerShell
    My Book:
    Windows PowerShell 2.0 Bible
    My E-mail: -join ('6F6C646B61726C406F75746C6F6F6B2E636F6D'-split'(?<=\G.{2})'|%{if($_){[char][int]"0x$_"}})

  • Building a dvd with custom disk size (disk, ISO, or dvd folder)

    My goal at the end of all this is to get DVD-format video at 6 Mbps, with menus, burned to a 25GB Blu-ray disk.
    Using Encore CS6 on Windows 7 (4.7GHz 8-core, 16GB RAM @ 2400MHz).
    1st attempt: I used a Blu-ray disk and tried to burn a DVD format; it ejected my disk and said please insert a blank DVD. (I used a custom disk size of 25GB, showing extra space available.)
    2nd attempt: I thought perhaps I would build it to an ISO (I also tried a DVD folder) to burn later. It responded with an error message saying, "disk capacity is too small - building navigation failed".
    I don't care how I get this to work, but I would like to get all the video and menus onto a single disk. Encore limits the "custom disk size" to 99GB, so I assume it should work for 25GB (23.2GB).
    The reason I don't want to make a Blu-ray format is that it requires a high bitrate, and the video is so old it would be a waste of space. I don't need 720p for video from 1935 (unless I can make a Blu-ray format at 6 Mbps).
    Thank you for any help you can provide
    bullfrogberry

    You can do this in Encore.
    I am assuming you are only picking presets, and not customizing one. You pick presets in the Transcode Settings dialog. Do you see the "Edit Quality Presets" button? Click it, customize a preset by setting the bitrates to get the results you want, then SAVE IT as your own. Then pick that preset in the transcode setting. (In the transcode settings image, you can see my custom example "Stan Custom 4Mb CBR".) And yes, you can select all your video assets and apply this custom preset to all of them at once. I would do one short one first to see if you are getting in the ballpark. I would do 2 samples: one in MPEG2 Blu-ray and one in H.264 Blu-ray. (I'd follow Richard's recommendation to you and use H.264.)

  • Essbase Error: Set is too large to be processed. Set size exceeds 2^64 tuples

    Hi,
    we are using OBIEE 11.1.1.6 with Essbase 9.3.3 as a data source. When I try to run a report in OBIEE, I get the error below:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 96002] Essbase Error: Internal error: Set is too large to be processed. Set size exceeds 2^64 tuples (HY000)
    but if I run the same query in the Excel add-in, I get just 20 records. I am wondering why I get this error in OBIEE. Has anyone encountered the same issue?
    Thanks in advance,

    Well, if you want to export it, I think you have to do it manually.
    The workaround is to open your Aperture library by right-clicking it and choosing Show Contents...
    Then go into your project, right-click, Show Contents...
    In there are subfolders named for the dates the pictures were added to those projects. If you open the subfolder and search for your picture's name, it should be in that main folder.
    You can just copy it out, as you would any normal file, to any other location.
    Voila, you have manually exported your file.
    There is a very similar post that has been closed, but again you can't export the original file that you are working on - FYI http://discussions.apple.com/thread.jspa?threadID=2075419

  • Save for Web... the image exceeds the size... issue with AI on iMac i7

    Hi,
    I just upgraded from a MacBook Pro to an iMac Intel Core i7 with 4GB. I use Illustrator CS3 for high-resolution documents, i.e. saving a design, say for an iPhone & iPod skin: a quality JPG that is 1300px (width) by 2000px (height).
    General information
    The issue is I keep getting this pop-up:
    The image exceeds the size Save for Web was designed for. You may experience out of memory errors and slow performance. Are you sure you want to continue?
    I click yes, but get another pop-up:
    Could not complete this operation. The rasterized image exceeded the maximum bounds for save for web.
    And last weekend, after NOT being able to save anything, I went and bought 16GB of memory thinking this issue would be solved, but nada.
    I have researched around here and Google, only to find advice about the scratch disk: I went to Preferences, then to Plug-ins & Scratch Disks, but I seem NOT to get to the bottom of this at all.
    And this morning I was trying to create a pattern design and could not even get the design into the color swatches, as I have done before on the MacBook Pro with 4GB. IT'S NOT FUN, because I am NOT doing it right. Do you care to help me out?
    Thanks a whole bunch in advance
    //VfromB

    Are you including a color profile?  What metadata are you including with it?
    How many pixels (h x v)?
    50 kb is not all that big.  Does your image have a lot of detail in it?  Content can affect final compressed size.
    -Noel

  • [SOLVED] Trying to understand the "size on disk" concept

    Hi all,
    I was trying to understand the difference between "size" and "size on disk".
    A google search gave plenty of results and I thought I got a clear idea about
    it.. All data is stored in small chunks of a fixed size depending on the
    filesystem and the last chunk is going to have some wasted space which
    will *have* to be allocated.. Thus the extra space on disk.
    However I'm still confused.. When I look at my home folder, the size on disk
    is more than 320 GB, whereas my partition is actually less than 80 GB, so
    I guess I'm missing something.. Could somebody explain to me what
    320 GB of 'size on disk' means?
    Thanks a lot in advance..
    Last edited by geo909 (2011-12-15 23:17:25)

    Hi all,
    Thank you for your replies.. My file manager is indeed PCManFM and
    it does indeed seem to be an issue.. In b4data's link the last post reads:
    B-Con wrote:
    FWIW, I found the problem. (This bug is still persistent in v0.9.9.)
    My (new) PCManFM bug report is here: http://sourceforge.net/tracker/index.ph … tid=801864
    I submitted a potential patch here: http://sourceforge.net/tracker/?func=de … tid=801866
    Bottom line is that the file's block size and block count data from the file's inode wasn't being interpreted and used properly. The bug is in PCManFM, not any dependent libraries. Details are in the bug report.
    Since PCManFM is highly used by the Arch community, I figured I should post an update here so that there's some record of it in our community. Hopefully this gets addressed by the developer(s) relatively soon. :-)
    I guess that pretty much explains things.. And I think I did understand the 'size on disk' concept
    anyway
    Thanks again!
    Last edited by geo909 (2011-12-15 23:17:10)
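    As an aside, the block-rounding idea from the question can be sketched in a few lines (PowerShell here, for consistency with the rest of this page; the 4 KB cluster size is an assumed common default, and real size-on-disk is also affected by sparse files, compression, and, as this thread shows, file-manager bugs).
      # Round a file's logical size up to whole allocation units (clusters).
      $clusterSize = 4KB                                # assumed cluster size
      $file = Get-Item 'C:\temp\example.txt'            # placeholder path
      $sizeOnDisk  = [math]::Ceiling($file.Length / $clusterSize) * $clusterSize
      '{0:N0} bytes logical, {1:N0} bytes on disk' -f $file.Length, $sizeOnDisk
    With correct accounting, "size on disk" can only exceed "size" by a fraction of a cluster per file, which is why a 320 GB figure on an 80 GB partition points to a reporting bug rather than real allocation.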

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications started performing badly last week.
    After some analysis, it was found that this was due to a data increase in a table that was stored in the KEEP pool.
    After the data increase, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases the KEEP pool would still be used, with the remaining data brought in from the table as needed.
    But I ran some tests and found that is not the case: if the table size exceeds db_keep_cache_size, the KEEP pool is not used at all.
    Is my inference correct?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.
    DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Only with a 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
           992
    I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end of the table flushing out the start of the table.
    Rgds,
    Gokul

  • I was copying files to my time capsule when it crashed when it was almost finished. Now I am stuck with a light grey folder I can not delete. I wouldn't care if it wasn't for the fact that the folder is 1 TB in size and takes up too much space.

    I was copying files to my Time Capsule when it crashed, when it was almost finished. Now I am stuck with a light grey folder I cannot delete. I wouldn't care if it wasn't for the fact that the folder is 1 TB in size and takes up too much space. I have tried turning Hiding Files in Finder on and off, and what not, with no success. So, how do I delete this folder? Remember, I am not extremely familiar with technical terms.

    I recommend you erase the TC; it is the easiest and fastest solution.
    In AirPort Utility, open the Disk tab and select Erase; choose Quick Erase. It will take less than a minute.
    Start again, but copy the files in small amounts.

  • How to create a folder of a certain size

    hi all
    I have a 140 gig hard drive. While trying to install Sun Studio I receive this message.
    There is not enough free disk space to extract installation data
    806 MB of free disk space is required in a temporary folder.
    Clean up the disk space and run installer again. You can specify a temporary folder with sufficient disk space using --tempdir installer argument.
    The install file is in Documents. How do you create a temp folder with 1 or 2 GB for this purpose with the ZFS file system?
    Thank you Kevin

    df -kh
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c1d0s0 6.9G 3.5G 3.4G 51% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 864M 968K 863M 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    sharefs 0K 0K 0K 0% /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1
    6.9G 3.5G 3.4G 51% /lib/libc.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 863M 144K 863M 1% /tmp
    swap 863M 32K 863M 1% /var/run
    /dev/dsk/c1d1s7 27G 28M 27G 1% /export/home
    /dev/dsk/c1d0s7 139G 64M 138G 1% /export/home0
    Thanks for the response, Kevin

  • Time Machine backup disk size - total capacity of disk or just files used?

    Hi folks,
    After upgrading to Leopard, I'm trying to set up my Time Machine. My main HD is 175 Gig and all the OS and other files take up 37 Gig of that. The drive I want to use for Time Machine (a spare internal hard drive) is a 75 Gig drive with 74 Gig of space available. Time machine says this drive is too small to use.
    According to the Time Machine documentation, Time Machine takes the _total size of the files_ to be backed up and multiplies that by 1.2. So in my case, since the total files on my 175 Gig drive take up 37 Gig, then I would need only 42 Gig for my Time Machine back up. So, in theory, my 75 Gig spare drive should work just fine.
    The problem is that Time Machine is taking the total size of the entire HD and using that to calculate the size of the back up drive, which would be 210 Gig. Does anyone know why this problem is occurring? It seems like Time Machine is not calculating the needed back up disk size properly and is incorrectly including the unused disk space on my main HD.
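    For what it's worth, the arithmetic described above can be written out as a quick sketch (the 1.2 factor is the rule of thumb quoted from the Time Machine documentation; the figures are the ones given in this question):
      # Estimated backup size = size of the files to be backed up * 1.2
      $usedGB  = 37      # space actually used on the 175 GB drive
      $totalGB = 175     # total capacity of the drive
      'Estimate from used space : {0} GB' -f ($usedGB  * 1.2)   # ~44 GB, would fit a 75 GB drive
      'Estimate from total size : {0} GB' -f ($totalGB * 1.2)   # 210 GB, the size Time Machine asked for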

    Not sure exactly, but your drive really is too small. Yes, 37 gb plus workspace would do for your initial Full Backup, but subsequent incrementals could fill it up pretty fast. That would depend, of course, on how you use your Mac -- how often you add or update files of what sizes.
    If you change your habits and, say, download a multi-gb video, then work on editing it for a few hours, you could eat up the remaining space very, very quickly.
    Just to be sure, how are you determining space used? Via right-click (or control-click) and Get Info on your HD icon?
    Also, do you have any other HDs connected? If so, exclude it/them, as TM will include them by default.
    Three possible workarounds:
    First, get a bigger drive. HDs have gotten ridiculously cheap -- 500 gb (or even some 1 tb) for not much over $100.
    Second, use CarbonCopyCloner, SuperDuper, or a similar product instead of TM. CCC is donationware, SuperDuper about $30, I think. Either can make a full bootable "clone", and CCC has an option to either archive previous versions of changed files or delete them. CCC can be set to run automatically hourly, daily, etc. (I suspect SD can, too, but I don't know its details). An advantage is, of course, if your HD fails you can boot and run from the "clone" until you get it replaced, then reverse the process and clone the external to the internal.
    Note that these will take considerably longer, as unlike TM, they don't use the OSX internals to figure out what's been added or changed, but must look at every file and folder. In my case, even smaller than yours, TM's hourly backup rarely runs over 30 seconds; CCC's at least 15 minutes (so I have it run automatically at 3 am). And, if you don't keep previous versions, of course, you lose the ability to recover something that you deleted or changed in error, or got corrupted before the last backup.
    Third (and NOT recommended), continue with TM but limit it to your home folder. This means if you lose your HD, you can't restore your whole system from the last TM backup. You'd have to reload from your Leopard disk, then apply all OS updates, reload any 3rd-party settings, and then restore from TM. As a friend of mine used to say, "un-good"!

  • Hi I currently have about 20,000 photos in iPhoto, which the 'info' displays as being 68GB's worth. However the iPhoto Library within the Pictures folder displays its size as 341.21GB ... Why would that be? How do I look at what's in the iPhoto Library?

    Hi I currently have about 20,000 photos in iPhoto, which 'info' displays as being 68GB's worth. However the iPhoto Library within the Pictures folder displays its size as 341.21GB ... Why would that be? How do I look at what's in the iPhoto Library to figure out what's happening, and whether there's stuff in there I need to delete?

    Best to keep the terms clear.
    You export photos from iPhoto.
    You move the iPhoto Library to an external disk.
    They are quite different processes. Exporting means that at the end you have a file outside iPhoto. Moving means the files are still inside iPhoto.
    So if you say:
    I'm currently exporting the  library to an external drive
    That reads like: I've selected all the photos in iPhoto and am exporting them to a folder on this external drive, and at the end I'll have a folder full of photos outside iPhoto. Is that what you mean?
    Is it best to have a 'master' iPhoto library on an external drive for all the thousands of pics I'm never really going to look at often - and a smaller one kept on the HD for day-to-day usage ...
    Best? Hard to know. It might make sense on a portable. But if you're on an iMac, why do you need a smaller one on the internal? Why not just have the whole library on the external? There is no performance hit.

  • Zen Micro Disk Size 5GB

    Hi,
    I just bought a Zen Micro MP3 player, 5GB. With Creative MediaSource I copied all my MP3 and WMA music to the Zen Micro. If I right-click my Music folder in XP, Windows tells me the total size on disk is 2.9GB, 595 files.
    BUT, after 500 files I get a warning saying that the destination is full. But the Zen Micro is 5GB!?
    How is this possible? I checked some file sizes on the Zen Micro and compared them with the XP file sizes, but they are the same!?
    What is going on here?
    Thanks
    Roger Luijten

    It also depends on what you have your bit rate set at.
    For example, if it is set at WMA 60kbps you will only be able to fit about 800 - 000 songs on it.
    The only way the Zen Micro can hold 2,500 songs is if you set the bit rate to WMA at 64kbps.
    PS.
    I have all my songs set to WMA 60kbps and I'm already taking up 40% of my hard drive space, with 384 songs on it so far.
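    As a rough sanity check on the bit-rate point above, here is a back-of-envelope sketch (the 4-minute average song length is an assumption, and nominal capacity is used):
      # Roughly how many songs fit on a 5 GB player at a given WMA bit rate?
      $capacityBytes = 5GB               # nominal capacity; usable space is somewhat less
      $bitRateKbps   = 64
      $songSeconds   = 4 * 60            # assumed average song length
      $bytesPerSong  = $bitRateKbps * 1000 / 8 * $songSeconds
      [int]($capacityBytes / $bytesPerSong)   # ~2,800 songs, in the ballpark of the 2,500 quoted above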

  • Cisco wave virtual blade disk size shrunk, unable to restore vblade

    Hi All,
    I applied the workaround listed by Cisco for bug CSCsy47235 on our WAVE device, which is also mentioned numerous times on the forum here: disk delete-data-partitions and reload. After the reload, we are unable to restore the backed-up blade image file; we get the error: '162GiB image is too large! 160GiB image is allowed'. When I try to create a virtual blade, the maximum disk size has been reduced from 162GB (before the change) to 160GB (after the change).
    Any idea or solutions would be greatly appreciated.
    Thank you
    Jack

    Hi Tim,
    These disk sizes are fixed and cannot be altered, hence you will not be able to allocate the unused CIFS/DRE space to the VB disk, or the unused VB disk space to CIFS and DRE.
    There is no way to increase this either, as most of these sizes are determined when you load the software and the partition table is fixed; hence you won't find any maximum numbers. 32 GB is what you get as part of the WAVE 274 and WAVE 474, and that is the maximum disk space. You can split this across two virtual blades, allocating them 16 GB each, but the total still cannot exceed the maximum of 32 GB for all the virtual machines.
    Regards
    Abijith

    we have created a few modules on some new automated HR forms. Each form has a demo module and a scenario-based quiz module. I do not score either one. With that said I want to be able to award credit to those who have completed either module. Can I d