Compressed and uncompressed data size

Hi,
I have checked the total used size from dba_segments, but I need to check the compressed and uncompressed data size separately.
The DB is 10.2.0.4.

Unless you have actually performed BULK inserts of data, NONE of your data is compressed.
You haven't posted ANYTHING that suggests that ANY of your data might be compressed. In 10g, compression is only performed on NEW data, and ONLY when that data is inserted using BULK INSERTS. See this white paper:
http://www.oracle.com/technetwork/database/options/partitioning/twp-data-compression-10gr2-0505-128172.pdf
However, data which is modified without using bulk insertion or bulk loading techniques will not be compressed.
1. Who compressed the data?
2. How was it compressed?
3. Have you actually performed any BULK INSERTS of data?
SB already gave you the answer - if data is currently 'uncompressed' it will NOT have a 'compressed size'. And if data is currently 'compressed' it will NOT have an 'uncompressed size'.
Now our management wants to know how much of the data is compressed and how much is uncompressed.
1. Did that 'management' approve the use of compression?
2. Did 'management' review the tests that were performed BEFORE compression was done? Those tests would have reported the expected compression and any expected DML performance changes that compression might cause.
The time for testing the possible benefits of compression is BEFORE you actually implement it. Shame on management if they did not do that testing already.
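If the immediate goal is just to see which table segments have the compression attribute enabled and how much space they use, a minimal sketch against the 10.2 dictionary views could look like the following. Note it only reports the segment attribute; it cannot tell you how many rows inside a segment were actually loaded in compressed form.

SELECT t.compression, SUM(s.bytes)/1024/1024 AS size_mb
FROM   dba_segments s
       JOIN dba_tables t
         ON t.owner = s.owner
        AND t.table_name = s.segment_name
WHERE  s.segment_type = 'TABLE'
GROUP  BY t.compression;
-- For partitioned tables, join DBA_TAB_PARTITIONS instead
-- (segment_type 'TABLE PARTITION'), which has its own COMPRESSION column.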

Similar Messages

  • K3b compressed dvd and uncompressed data

    Using k3b to burn an .iso to DVD R/W media results in a usable DVD of the Knoppix Live CD type.
    Can the same media be burned again with uncompressed data in the unused portion of the media using k3b?
    If so, what setup is required to do so?
    If this be possible, then it would seem possible to symlink to the uncompressed files and extend the compressed file capability to include the files on the uncompressed portion.
    Alternatively, the symlink could address additional compressed-format files and thereby extend the basic Live CD to full-size DVD capability. This would be faster in execution (with on-the-fly decompression).
    A symlink seems the useful tool here.

    Hi,
    Yes, both compressed and uncompressed data will be displayed in reports. Compression is done to save space in your database.
    It's not mandatory to compress all requests to date in the cube.
    You can specify the desired request in the Collapse tab and release it for compression.
    Suppose you have requests from 1st July to date and you specify the 9th July request in the Collapse tab; then the requests from 1st July to 9th July will be compressed. It will not consider everything from 1st July to date.
    Regards,
    Suman

  • GeoRaster performance: Compressed vs Uncompressed

    I tried reading compressed and uncompressed GeoRaster. The difference in performance confused me. I expected better performance for the compressed raster, because Oracle needs to read several times less data from the hard drive (1:5 in my case). However, reading uncompressed data is approximately twice as fast. I understand Oracle needs to use more CPU to uncompress the data, but I thought the time saved reading data would be more than the time spent uncompressing a raster.
    Did anybody compare the performance?
    Thanks,
    Dmitry.

    Dmitry,
    You can try it for yourself. QGIS is free, open-source software.
    QGIS uses GDAL to access raster and vector data and there is a plugin called "Oracle Spatial GeoRaster", or just oracle-raster, to deal with GeoRaster. To access Geometries you don't need to activate the plugin, just select Oracle as your database "type" in the Add Vector Layer dialog box.
    Displaying GeoRaster works pretty fast, as long as you have created pyramids. Yes, there is a little delay when the GeoRaster is compressed, but that is because GDAL requests the data to be uncompressed and QGIS has no clue about it.
    Wouldn't it be nice to have a viewer that used the JPEG as it is?
    Regards,
    Ivan

  • Initramfs - compressed vs. uncompressed

    I just recently came to think about this. It's common practice to compress the initramfs, but an uncompressed initramfs is also an option.
    But what about the pros and cons? How time consuming is the decompression actually and what will the extra size of an uncompressed image mean?
    Personally I've got an SSD, so that helps with the extra size, and as for the decompression, I am using a rather limited Atom CPU, so in theory it seems to work out for me at least.
    So, what do you think, that is theoretically, what is the best way to go? In reality of course, the difference is negligible, maybe not even measurable, but that's not really the point.
    I'll look forward to your answers.
    Best regards.

    blackout23 wrote:
    zacariaz wrote:
    blackout23 wrote:
    I have done 5 runs of uncompressed and gzip initramfs.
    CPU is Core i7 2600K 4,5 Ghz
    archbox% sudo systemd-analyze
    Startup finished in 1328ms (kernel) + 311ms (userspace) = 1640ms
    archbox% sudo systemd-analyze
    Startup finished in 1338ms (kernel) + 277ms (userspace) = 1616ms
    archbox% sudo systemd-analyze
    Startup finished in 1305ms (kernel) + 274ms (userspace) = 1580ms
    archbox% sudo systemd-analyze
    Startup finished in 1305ms (kernel) + 304ms (userspace) = 1610ms
    archbox% sudo systemd-analyze
    Startup finished in 1302ms (kernel) + 287ms (userspace) = 1590ms
    Gzip:
    archbox% sudo systemd-analyze
    Startup finished in 1375ms (kernel) + 347ms (userspace) = 1723ms
    archbox% sudo systemd-analyze
    Startup finished in 1375ms (kernel) + 331ms (userspace) = 1706ms
    archbox% sudo systemd-analyze
    Startup finished in 1368ms (kernel) + 351ms (userspace) = 1720ms
    archbox% sudo systemd-analyze
    Startup finished in 1385ms (kernel) + 340ms (userspace) = 1725ms
    archbox% sudo systemd-analyze
    Startup finished in 1402ms (kernel) + 351ms (userspace) = 1753ms
    Even on a faster CPU you can measure a difference between compressed and uncompressed.
    If you add the timestamp hook to your HOOKS array in /etc/mkinitcpio.conf and rebuild your initramfs, systemd-analyze will also be able to show you how much time was spent in the initramfs alone. Without it, the kernel figure is basically both combined.
    I dare say the difference is smaller than on a pitiful Atom CPU, but it is still interesting that the difference is there.
    I am, however, done timing the initramfs. I know now which is best for me.
    One thing that catches my eye is the userspace time. The kernel time is not too different between our two setups, but the userspace time apparently is. Is that simply because the CPU plays a greater role, or do you use some fancy optimization technique I haven't heard of?
    Try systemd-analyze blame and see what takes the most time. If it's NetworkManager try setting ipv6 to "Ignore" in your settings and assign a static ip.
    Other than that I don't run any extra services and mount only my SSD no external drives.
    Good point, always forget about ipv6. Isn't there a kernel param to disable it btw?
    Edit:
    WICD is responsible for 922ms
    Last edited by zacariaz (2012-09-03 20:07:04)

  • Help needed to find the schema/application data size

    Hi,
    I would like to request your help in measuring the schema size / (APEX) application data size.
    I have 3 applications running on the same schema and now I want to move one application to a new server and a new schema.
    Now I need to know how much space is required to host this application on the new server, so I need to find the application size and application data size on the current server. Your help is appreciated; thanks in advance.
    Regards

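    A minimal sketch for sizing one schema from the dictionary (the schema name is hypothetical; this measures the schema's segments only, not the APEX application definition itself, which is exported separately):

    SELECT owner, SUM(bytes)/1024/1024 AS size_mb
    FROM   dba_segments
    WHERE  owner = 'MY_APP_SCHEMA'   -- hypothetical schema name
    GROUP  BY owner;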

  • Compression and query performance in data warehouses

    Hi,
    Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
    I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further.
    Is this correct?
    I wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
    ETL speed is fine; I just want to increase report performance.
    Thoughts - has anyone seen significant gains in data warehouse report performance with compression?
    Also, the current PCTFREE on the table is 10%.
    As we only insert into the table, I am considering making this 1% to improve report performance.
    Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    Mark bitmap indexes unusable.
    Set the compression attribute.
    Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.
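    As a minimal sketch of the three steps quoted above, using hypothetical names (a SALES fact table with partition SALES_Q1 and a local bitmap index SALES_CUST_BIX):

    ALTER INDEX sales_cust_bix UNUSABLE;
    -- MOVE ... COMPRESS also compresses the rows already in the partition;
    -- ALTER TABLE ... COMPRESS alone only sets the attribute for future
    -- direct-path loads.
    ALTER TABLE sales MOVE PARTITION sales_q1 COMPRESS;
    ALTER INDEX sales_cust_bix REBUILD PARTITION sales_q1;
    -- rebuild any other index partitions that were marked UNUSABLE as well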

  • How to find data compression and speed

    1. What's the command/way to view how much space the data takes in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
    2. The time taken for execution, as seen from executing the same SQL on HANA, varies (I see that when I am F8-ing the same query repeatedly), so it is not given in terms of pure CPU cycles, which would have been more absolute.
    I always thought there must be a better way of checking the execution speed, like checking a log which gives all data regarding executions, rather than just looking at the query output window.

    Rajarshi Muhuri wrote:
    1. whats the command/way for viewing how much space the data has taken on its HANA tables as oposed to the same data in disk . I mean how do people measure that there has been a  10:1 data compression.
    The data is stored the same way in memory as it is on disk. In fact, scans, joins etc. are performed on compressed data.
    To calculate the compression factor, we check the required storage after compression and compare it to what would be required to store the same amount of data uncompressed (you know, length of data x number of occurrences for each distinct value of a column).
    One thing to note here: compression factors must always be seen for one column at a time. There is no such measure as a "table compression factor". (A per-column query is sketched after this reply.)
    > 2. The time taken for execution, as per seen from executing a same SQL on HANA varies ( i see that when i am F8 ing the same query repeatedly) , so its not given in terms of pure cpu cycles , which would have been more absolute .
    >
    > I always thought  that there must a better way of checking the speed of execution  like checking the log which gives all data regarding executions , than just seeing the output window query executions.
    Well, CPU cycles wouldn't be an absolute measure either.
    Think about the time that is not spent on the CPU.
    Wait time for locks, for example.
    Or time lost because other processes used the CPU.
    In reality you're usually not interested so much in the perfect execution of one query that has all resources of the system bound to it; instead you strive to get the best performance when the system has its typical workload.
    In the end, the actual response time is what means money to business processes.
    So that's what we're looking at.
    And there are some tools available for that. The performance trace for example.
    And yes, query runtimes will always differ and never be totally stable all the time.
    That is why performance benchmarks take averages for multiple runs.
    regards,
    Lars
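    If you want to see those per-column numbers for a specific table, a sketch against the column-store monitoring view could look like the following (assuming your revision exposes M_CS_COLUMNS with the UNCOMPRESSED_SIZE and MEMORY_SIZE_IN_TOTAL columns; schema and table names are placeholders):

    SELECT column_name,
           uncompressed_size,
           memory_size_in_total,
           ROUND(uncompressed_size / memory_size_in_total, 1) AS compression_factor
    FROM   m_cs_columns
    WHERE  schema_name = 'MY_SCHEMA'       -- placeholder
    AND    table_name  = 'MY_FACT_TABLE';  -- placeholder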

  • File and/or vocal size compression

    New to Mac and GB (got a new MacBook for Christmas)...
    I have a school assignment: to record a 30-minute speech, and I'd like to do it using the onboard mic in my new MacBook. I've noodled with GB and have figured out how to record it, but is it possible to break down or compress the file size and/or vocal size so that it's not pushing 10 GB? I tested a few options (turning the mic from stereo to mono, etc.), but even this yields a one-minute file that's 13 - 15 MB. Any suggestions? Is GB even the best program with which to tackle this?
    jeff

    I would probably lean towards an Audio editor rather than GB
    http://thehangtime.com/gb/gbfaq2.html#audioeditors
    but GB will work fine.
    GB records audio at the highest quality that it can: 44.1kHz/16-bit AIFF audio. This sample rate and bit depth create a file that is just a bit over 10MB per minute of recording, and this will also be reflected in the export when you Send to iTunes (if your project does not have a podcast track in it) ... it will export a high-quality 44.1kHz/16-bit uncompressed AIFF file. (The arithmetic is sketched at the end of this reply.)
    Once in iTunes you can convert the file to an mp3 which is more like 1MB/minute (though you can choose the quality/size)
    http://thehangtime.com/gb/gbfaq2.html#converttomp3
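    For reference, the arithmetic behind the roughly 10MB/minute figure:

    44,100 samples/s x 2 bytes/sample x 2 channels x 60 s = 10,584,000 bytes
    = roughly 10 MB per minute of stereo audio (mono is about half that)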

  • Get total DB size, total DB free space, total data & log file sizes, and total data & log file free sizes from a list of servers

    How do I get the SQL Server total DB size, total DB free space, total data & log file sizes, and total data & log file free sizes from a list of servers?

    Hi Shivanq,
    To get a list of databases, their sizes and the space available in each on the local SQL instance:
    dir SQLSERVER:\SQL\localhost\default\databases | Select Name, Size, SpaceAvailable | ft -auto
    This article is also helpful for you to get DB and Log File size information:
    Checking Database Space With PowerShell
    I hope this helps.
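    If plain T-SQL is preferred over PowerShell, a minimal sketch of the size part against sys.master_files might be the following (free space still needs FILEPROPERTY(name, 'SpaceUsed') per database file, which this does not cover):

    SELECT DB_NAME(database_id) AS database_name,
           SUM(CASE WHEN type_desc = 'ROWS' THEN size END) * 8 / 1024 AS data_mb,
           SUM(CASE WHEN type_desc = 'LOG'  THEN size END) * 8 / 1024 AS log_mb
    FROM   sys.master_files      -- size is in 8-KB pages
    GROUP  BY database_id;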

  • How can I retrieve the col data size and Null information from table using labview connectivity toolset

    Hi, there,
    I am wondering how to get table information using the LabVIEW Database Connectivity Toolset. The Table List VI that comes with the toolset can get only the column name, data type and data size. And I found that the data size always gives back -1, even though it is a string type. Does somebody have some idea about it?
    Thanks.
    JJ

    JJ,
    Go into the diagrams of DBTools List Columns and DBTools Get Properties respectively. When you inspect these diagrams, you will see the raw ActiveX properties and methods called to get the size information. The value of -1 means the requested recordset is already closed. This is the sort of thing that is controlled by the driver (ODBC, OLE DB, Jet, etc.) you are using. Notice that you can right-click on the property and invoke nodes and get more information about these specific items directly from the ADO online help.
    Crystal

  • BTREE and duplicate data items: over 300 people read this, nobody answers?

    I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
    Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such a small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with an 8192-byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if, in my case, it would be more efficient to have a b-tree whose key is the combined (4-byte integer, 8-byte integer) and whose data is a zero-length or 1-byte dummy item (in case zero-length is not an option).
    I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking if my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
     while (i < hcp->dup_tlen) {
          memcpy(&len, data, sizeof(db_indx_t));
          data += sizeof(db_indx_t);
          DB_SET_DBT(cur, data, len);
          /*
           * If we find an exact match, we're done. If in a sorted
           * duplicate set and the item is larger than our test item,
           * we're done. In the latter case, if permitting partial
           * matches, it's not a failure.
           */
          *cmpp = func(dbp, dbt, &cur);
          if (*cmpp == 0)
               break;
          if (*cmpp < 0 && dbp->dup_compare != NULL) {
               if (flags == DB_GET_BOTH_RANGE)
                    *cmpp = 0;
               break;
          }
    What's the expert opinion on this subject?
    Vincent
    Message was edited by:
    user552628

    Hi,
    The special thing about it is that with a given key,
    there can be a LOT of associated data, thousands to
    tens of thousands. To illustrate, a btree with a 8192
    byte page size has 3 levels, 0 overflow pages and
    35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note
    that I wrote "can", since some keys only have a few
    dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default
    lexical ordering with set_dup_compare is OK, so I
    don't touch that. I'm getting the data items sorted
    as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA)
    performance", due to a lot of disk read operations.In general, the performance would slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method lookups and inserts have a O(log n) complexity (which implies that the search time is dependent on the number of keys stored in the underlying db tree). When doing put's with DB_NODUPDATA leaf pages have to be searched in order to determine whether the data is not a duplicate. Thus, giving the fact that for each given key (in most of the cases) there is a large number of data items associated (up to thousands, tens of thousands) an impressive amount of pages have to be brought into the cache to check against the duplicate criteria.
    Of course, the problem of sizing the cache and the database's pages arises here. Your settings for these should tend toward large values; this way the cache will be able to accommodate large pages (in which hundreds of records can be hosted).
    Setting the cache and the page size to their ideal values is a process of experimentation.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
    While there may be a lot of reasons for this anomaly,
    I suspect BDB spends a lot of time tracking down
    duplicate data items.
    I wonder if in my case it would be more efficient to
    have a b-tree with as key the combined (4 byte
    integer, 8 byte integer) and a zero-length or
    1-length dummy data (in case zero-length is not an
    option).
    Indeed, this should be the best alternative, but testing must be done first. Try this approach and provide us with feedback.
    You can have records with a zero-length data portion.
    Also, you could provide more information on whether or not you're using an environment and, if so, how you configured it, etc. Have you thought of using multiple threads to load the data?
    Another possibility would be to just add all the
    data integers as a single big giant data blob item
    associated with a single (unique) key. But maybe this
    is just doing what BDB does... and would probably
    exchange "duplicate pages" for "overflow pages"This is a terrible approach since bringing an overflow page into the cache is more time consuming than bringing a regular page, and thus performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
    Or, the slowdown is a BTREE thing and I could use a
    hash table instead. In fact, what I don't know is how
    duplicate pages influence insertion speed. But the
    BDB source code indicates that in contrast to BTREE
    the duplicate search in a hash table is LINEAR (!!!)
    which is a no-no (from hash_dup.c):
    The Hash access method has, as you observed, a linear search within a duplicate set (and thus a search and lookup time proportional to the number of items in the bucket). Combined with the fact that you don't want duplicate data, using the Hash access method may not improve performance.
    This is a performance/tuning problem and it involves a lot of resources on our part to investigate. If you have a support contract with Oracle, then please don't hesitate to put up your issue on Metalink or indicate that you want this issue to be taken private, and we will create an SR for you.
    Regards,
    Andrei

  • Why is Data Size in QT inspector twice the size of the file?

    I am not asking this to save space; I am just curious. When playing a DV clip (captured from camera), I notice that the QT Inspector shows a data size about twice the size of the DV clip in the file system. Is it because the DV is compressed and QT decompresses it?
    Actually, one clip in QT shows 22GB, which is impossible to fit on a miniDV tape.

    You'll notice that a DV Stream file has two tracks (audio and video) of the same size when viewed in QuickTime Movie Inspector or the Movie Properties window of QuickTime Player Pro.
    Obviously the audio track shouldn't be the same size as the video track. They are separated from the single stream to allow editing.
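    For illustration with round numbers: if the .dv file is about 11GB on disk, the Inspector reports roughly 11GB for the video track plus another 11GB for the audio track, which is how a clip that fits on a miniDV tape can show about 22GB as its Data Size.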

  • Why is Modified photo smaller in data size than Original after red-eye reduction?

    I'm using iPhoto 6 and was just about to go through and delete some unnecessary originals that are taking up lots of disk space, but then I noticed, all my originals are larger in file size than the modified images (example: Original = 1.2 MB, Modified = 872 KB).
    The very minor modification that I've applied in most of these is the red-eye reduction in iPhoto.
    Now why would the file sizes be smaller? And is it safe then to delete the originals? Am I losing image size, quality, data? If anything, I'd expect the modified image to be larger in size, so I'm pretty confused at this point.
    I really appreciate the help anyone can offer.
    Quad-core 2.5 G5   Mac OS X (10.4.5)   30" Apple Cinema Display

    Brian:
    Any time a JPEG file is edited and saved it goes through some JPEG compression, often very minimal. Red-eye removal removes some color data and thus reduces the file size. However, some edits, like Retouch, increase the file size since they change and add data to the file.
    Do you Twango?

  • Data Size in QT inspector is vastly different than Finder

    Hi all,
    I am looking at the size of my QT movie files in the finder, and comparing the size to the Data Size in the QT Inspector window.
    I notice that the average video has a difference of a few percentage points between the Inspector and the Finder.  For example, the Inspector might think the video is 3.52GB in size, and the Finder will say that it is 3.3GB in size.  Fine, this can be rounding differences that add up.
    That said, I recently copied and compressed 27 DVDs for a client (home movies on mini-DVDs) to DVCPRO50 movie files, and I notice that the discrepancy between the Finder and the QT Inspector is much larger. For example, the QT Inspector thinks the file is 23.1GB, but the Finder thinks it is 12.4GB.
    This kind of difference is roughly the same for all of the videos that I just compressed.
    Any thoughts?
    Notes:
    1) I ran Disk First Aid (repairs, and permissions). No problems found.
    2) The transcoded files are DVCPRO50.
    Thanks,
    matt

    So, setting aside the irregularities in the reporting of file size, is there anything I should be aware of now that I DID transcode a full days worth of files using DVCPRO50 as a DV file?
    Not really. The DVCPRO50 video is the same whether in the DV or the MOV file container. Always rely on the Finder when judging file size, as it reflects the actual amount of space the data + unused space in the container takes up on your drive. Just remember that the Finder reports in 2^10 magnitudes of storage as opposed to "true" decimal numbers.
    Besides it being curious that all of my files have a .dv file type, I notice that they play back poorly in Final Cut Pro.  They seem to be hitting the processor with a double-whammy.
    Not sure what you are comparing to here. Remember DVCPRO50 has twice the data rate of DVCPRO25/DV25 and half the data rate of a DV100 codec. Playback, of course, is related to dropped frames, which depends on the CPU power of your system as well as the number and type of processes that are simultaneously active/sharing the CPU(s). Also remember that DVCPRO50 is not a delivery/distribution compression format and is not normally used for the playback of content. It is, instead, a source/editing/archiving format for the retention of optimum quality at the expense of increased data rate/file size.
    Any other potential gotchas that I should be terrified about as I edit this video, export it, and prep it to go back on a DVD?
    Again, not really. Depending on the sourcing of your content and the original color space, it may or may not be the most suitable compression format but is not likely to do harm either at an amateur/prosumer work flow level (which is the category I would use to describe my typical projects). As long as you have sufficient file space and can be satisfied with the final result, you should be fine.

  • Data size discrepancy

    I've been doing some compression tests for a clip destined for the web. I noticed when looking at the finished versions that the file sizes shown in the Movie Info box in QT Pro are different from those shown in the finder. For one file QT Pro shows 2.9MB as opposed to the finder's 2.2MB, for another 2.9 as opposed to 2.1, and for a third QT shows 1.76 as opposed to the finder's 1.8 (not a lot but it's curious). Anybody know why I'm seeing these discrepancies?
    thanks-

    I've been doing some compression tests for a clip destined for the web. I noticed when looking at the finished versions that the file sizes shown in the Movie Info box in QT Pro are different from those shown in the finder.
    Never gave it serious thought before, but since no one else wants to take a shot here, I'll give it a try. There are a number of aspects at work here. Not sure if all file types are the same, but it seemed that some of your numbers may have been reversed in your examples.
    Finder File Size:
    The Finder file size represents the file's total potential for storage and includes the actual audio/video data stored, allocated but unused storage, and file overhead, which includes the header, file type, creator, metadata/tags, etc. In addition, this information is given in both binary units and binary-power (2^10) equivalents where: total bytes = file size in KB x 1024^1, total bytes = file size in MB x 1024^2, total bytes = file size in GB x 1024^3, etc. So here we have our first discrepancy.
    QT Movie Info File Size:
    This entry represents the actual amount of audio/video data contained/stored in the media file. (I.e., the amount of "sequential" data contained, as if it were stored in a single-dimension or "linear" array.) It does not include file overhead or allocated space not used by the particular packing algorithm employed by a specific codec. File overhead is probably the most "consistent" discrepancy and can probably be considered a constant with respect to most, if not all, common QT multimedia file types. Storage allocation or packing algorithms, on the other hand, vary widely, with the least efficient being the most linear in nature and the most efficient probably best described as multidimensional arrays whose block/superblock allocation may vary as a constant, or linearly (i.e., in direct proportion to), or as a power of the current file size. Thus, the discrepancy between Finder and QT file size can vary greatly here, dependent on the data density of the most recent storage block allocation.
    It is likely that there are other considerations which have been overlooked here. However, these are probably the most significant. Hope this helps.
