On exponential file size growth

I can see why they've done this. The presentation is far more portable if the images are embedded in the .key file. No-one has to keep track of file locations, relative paths, etc. Though for those of us who prefer another image-editing tool, Keynote should have a preference to embed or not, and in the case of not embedding, it should maintain a live link to the referenced media files. I suspect this embedded TIFF generation is what's causing problems with QuickTime .mov playback.
I've found that doing "Save As..." seems to force Keynote to ditch redundant internal TIFFs, thereby making for a smaller .key file.

Well, that certainly explains why a Pages test I sent Brian had blossomed to 8MB with only six photos. I had used the adjustment tool on one, and it went from a 988KB photo to a 'filtered' TIFF that was 7.3MB.
That's quite a bit of hard drive real estate used up for something that should be fairly small.
I had thought that the Adjustments were non-destructive, and I guess they are. I had originally thought they would only be done on the fly using a bit of the Core Image goodies in 10.4, but it appears a completely new photo is being generated, with somewhat greedy tendencies.
Send this to feedback so they know about it. There should definitely be an option to save as a JPEG instead of as a TIFF.
Just ran a test.
Dragging an image and saving the presentation resulted in one image in the file.
Adjusting it and saving resulted in two images, one of them a TIFF seven times as large.
Resetting the adjustment pane, closing it, and saving still resulted in two images.
Deleting the image and saving got rid of both files.
Dragging the image in again and saving gave only one file. I deselected 'resize images to fit on slide', just in case. (That used to generate large TIFFs as well.)
There is seriously something wrong with this. I honestly thought that it did its image manipulations on the fly as the slide was displayed. In fact, I was hoping it did that. What I do know is that if an image has an adjustment applied and is then turned into a placeholder image (which makes it easy to add a replacement image), the replacement image will retain the adjustment. I guess this is only a potential problem with Pages, since Keynote does not use placeholder images.
More play is required before figuring this out.
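
For anyone who wants to check which embedded images are eating the space, here is a minimal sketch for listing media entries by size. It assumes the .key file is a zip archive (as in newer Keynote versions); older versions store it as a package folder, where a plain directory walk works instead. The extension filter and the invocation shown are only illustrative.

# Minimal sketch: list embedded media inside a presentation file, largest first.
# Assumes the .key file is a zip archive; the extension filter is illustrative.
import sys
import zipfile

def list_media_sizes(path):
    with zipfile.ZipFile(path) as zf:
        media = [(info.file_size, info.filename)
                 for info in zf.infolist()
                 if info.filename.lower().endswith((".tif", ".tiff", ".jpg", ".jpeg", ".png"))]
        for size, name in sorted(media, reverse=True):
            print(f"{size / 1024:10.1f} KB  {name}")

if __name__ == "__main__":
    list_media_sizes(sys.argv[1])   # e.g. python list_media.py MyTalk.key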

Similar Messages

  • Exponential File Size Reduction

    Hi All,
    Attached is a .vi that I received from this forum (Altenbach, I believe).
    Currently this VI copies every xth line from a source file to a new file. It works great; however, now I am interested in reducing the sampling rate. For example, I would like to grab every line initially, then grab every other line, then every third line, etc. I was thinking that the easiest way to do this is to introduce a feature that counts up (i.e. grab one line, then skip a line, then skip 2, then skip 3, then skip 4... etc., to the end of the file).
    Any thoughts or help would be greatly appreciated. (Note: I am a very new user to LabVIEW and it may be a simple change that I am missing.)
    Attachments:
    Reduce File Size.vi (12 KB)

    Christian_M wrote:
    Attachment is compiled for 8.5.1
    I don't think the calculation for N is still correct. We probably need a while loop to see when we run out of elements (or do some more fancy math).
    Ideally, the decimation rate should depend on how fast the important values change with each line: Skip many lines if the value is nearly constant, but retain most lines if the signal changes significantly between lines, for example. 
    Message Edited by altenbach on 10-16-2008 10:04 AM
    LabVIEW Champion. Do more with less code and in less time.
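
    The triangular skip pattern described above (keep a line, skip one, then skip two, then three, ...) can be sketched outside LabVIEW like this; the plain-text line format and the file names are assumptions, and the same index arithmetic would drive the loop in the VI.

    # Sketch of the "keep one line, then skip 1, 2, 3, ..." decimation described above.
    # Assumes a plain-text source file; file names are placeholders.
    def triangular_decimate(lines):
        kept = []
        i = 0       # index of the next line to keep
        skip = 0    # how many lines to skip after the kept line
        while i < len(lines):
            kept.append(lines[i])
            skip += 1
            i += skip + 1   # jump past the kept line plus the growing skip
        return kept

    with open("source.txt") as src:
        decimated = triangular_decimate(src.readlines())
    with open("decimated.txt", "w") as out:
        out.writelines(decimated)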

  • File size growth fr Aperture to Photoshop and back. Why?

    When I take a 15MB file from Aperture to Photoshop, work on it in Photoshop, then save back into Aperture, it has grown to about 56MB back in Aperture. BUT why, when I open the Adjustments HUD for this new file in Aperture, does it make a new copy of the image and make it 150MB in size in Aperture?

    This is because Aperture converts the file to a TIFF wrapped as a PSD file to import into Photoshop, which gives you the initial size increase. When Photoshop saves the file, it is also saving (in the PSD) any layers you have created, so several layers will increase the size of the file several times.
    Nothing wrong with Aperture - just a fact of life when storing native Photoshop files (PSDs) to disk - they're huge!
    Hope that helps.
    Dave.

  • Lookout causing system file size growth in W2000

    When running Lookout 4.5, my WINNT/system32/config/system file is growing. The file should be around 2 to 3 MB but has grown to over 6 MB in just 4 days. Previously, when this file got to 20 MB, the system fried and I had to restore the above-mentioned system file to its original state and also reinstall Lookout 4.5. I am afraid I am headed back to the same place. What is the problem?
    Thanks for your input.
    Dave

    Dave,
    I do not know why Lookout should increase the system hive when Lookout is running regularly with a process application.
    The only part of Lookout that is somehow related to the system hive is the installation. The Lookout installer registers its services (LKAds, LKCitadel, LKTime, and in Lookout 5 probably Lookout used as an NT service). This installation would increase the system hive a little bit. However, I've installed Lookout 5 and 4.5.1b18 and my system hive is, as you said, around 2-3 MB, and I have not seen it growing while I was using Lookout.
    If you can reproduce this and see that the system file is growing, it would be interesting to know whether you do something special in your process application (e.g. DDE exchange, OPC servers...).
    Let me know if you can attach an example that reproduces the system hive growth.
    Regards
    Roland
    PS: More about this growing file: e.g.: http://scilnet.fortlewis.edu/tech/NT-Server/registry.htm#Hives
    PPS: If you encounter this problem - for sure it would be good to know why - but maybe you could increase the maximum registry size setting so the system would not freeze. Or what do you mean by "fried" - a crash? (Would you have a stack dump?)

  • Exponential File Growth?

    There's some semblance of reason in resizing a project causing it to increase in size (from 200 to 300MB in my case). It is a long project (the full demonstration is about 40 minutes), so I deleted a number of slides and resaved a handful of them as a new partial project, hoping to improve overall performance in Captivate; the file size was then down to about 100MB at a 10-minute length.
    After a fair bit of editing and corrections, it now takes almost 10 minutes to open the project file. It has grown from a mere 112MB on Friday morning to over 1GB. This was after re-syncing the audio and deleting some slides. The project is approximately 11 minutes long and narrated, but captured at a relatively high resolution [the program being captured doesn't rescale itself very well, and the target is CD/DVD distribution, so that's not much of an issue in itself].
    Still, there is NO EXCUSE for a 10x increase in file size after editing. Resizing the project, deleting unused items from the library and/or resaving does nothing. This is not performance I expect from a commercial application.
    Is there any way to reduce the project file to a manageable size?
    This seems like an issue that Adobe needs to patch, yet other than these forums I see no easy way of contacting their tech support to report what (at best) seems to be an obvious bug. Perhaps after using Flash, Premiere, and Photoshop I'm expecting too much...

    Thanks for the quick response. I've been publishing to HTML - to be precise, to HTML in a ZIP archive, since oddly enough that combination seems to be the fastest and most reliable of the publishing choices - and that isn't affecting my file size.
    I'll try creating a new project and copying the slides over and see if that helps, if I can. Besides the fact that it doesn't let you easily copy by slide group, I discovered that I can't actually create a new project of the correct dimensions without doing a test capture of the application first. My capture resolution is 1604x1170 (a large widescreen monitor minus the start menu), but apparently Captivate doesn't support blank-project resolutions larger than 2960x1050 [so it accepts my width, but not my height...].
    So many bugs, and no place to report them and no patches to fix them...

  • PDF file size grows with each save if .access property set on a field

    We are seeing an odd form behavior and have isolated the apparent trigger to something we are doing in the form script.  I'm hoping someone can confirm they see the same problem, details follow:
    We have a form generated in LiveCycle. It contains a text field. In the docReady event for that field we have JavaScript which sets the field to be read-only (TextField1.access = "readOnly"). We Reader-extend the form so we can save it from Reader and/or from the plug-in/control used by a browser when reading PDFs.
    With that simple form, open the form via the browser (we've tested with both IE and Chrome) and without doing anything else, just save the form (with a new name).  When we do that, the saved copy of the form is significantly bigger than the copy we started with.  If we then repeat the process using the newly saved file, the third copy is bigger than the second.
    This file growth does not happen if you open the file in Adobe Reader (instead of in the browser).
    When we look at the file contents via a text editor, what we have found is that each save via the browser is tacking on a chunk of data to the end of the file AFTER the %%EOF mark.  This new section appears to be one or more object definitions and ends with another %%EOF.  The first portion of the file (prior to the first %%EOF) is identical in all versions of the file.
    If you take a copy of the file that has these extra sections added, then open and save it in Adobe Reader, it eliminates those extra sections and you get a 'clean', small version of the file again. So those extra sections are clearly erroneous and unnecessary.
    Another thing worth noting, we took the script for setting the field access property out of the docReady event and put it as the click event on a button added to the form.  If you then open the form, press the button and save it you see the file growth (but not if you don't press the button.)  So it doesn't appear related to the docReady event, or timing of when we set the access property of the field.
    On the small test form described above, the growth of the file is around 13 KB. But in testing with our real forms we've found that the amount of growth seems to be tied to the size/complexity of the form and the form data. In our most complex form, with multiple pages, hundreds of fields, and a large amount of XML data (form size is 2+MB), we are getting a file size increase of 700 KB on each save. This is, therefore, a significant issue for us, particularly since the process in which the form is used requires the users to save it a couple of times a day over the period of a month or more.

    I would start by exporting the XML data from the form before and after it grows, to see whether it is the underlying data that is growing and where. Did you define a schema and bind your form fields to a data connection created from the schema? That is always my first step when creating a form. It makes for a smaller saved file, since multiple levels of subforms are not included in the data structure.
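
    As a quick way to confirm the incremental-save behaviour described above, the number of %%EOF markers and the bytes trailing the first one can be counted outside Acrobat. This is only a diagnostic sketch; the file name is a placeholder.

    # Diagnostic sketch: count %%EOF markers and measure how much data follows the
    # first one, corresponding to the appended incremental-save sections described above.
    def incremental_save_report(path):
        with open(path, "rb") as f:
            data = f.read()
        marker = b"%%EOF"
        positions = []
        start = 0
        while True:
            pos = data.find(marker, start)
            if pos == -1:
                break
            positions.append(pos)
            start = pos + len(marker)
        print(f"{path}: {len(positions)} %%EOF marker(s), total size {len(data)} bytes")
        if positions:
            appended = len(data) - (positions[0] + len(marker))
            print(f"bytes after the first %%EOF: {appended}")

    incremental_save_report("form_saved_from_browser.pdf")   # placeholder file name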

  • Index file increase with no corresponding increase in block numbers or Pag file size

    Hi All,
    Just wondering if anyone else has experienced this issue and/or can help explain why it is happening....
    I have a BSO cube fronted by a Hyperion Planning app, in version 11.1.2.1.000
    The cube is in its infancy, but already contains 24M blocks, with a PAG file size of 12GB. We expect this to grow fairly rapidly over the next 12 months or so.
    After performing a simple agg of the aggregating sparse dimensions, the Index file sits at 1.6GB.
    When I then perform a dense restructure, the index file reduces to 0.6GB.  The PAG file remains around 12GB (a minor reduction of 0.4GB occurs).  The number of blocks remains exactly the same.
    If I then run the Agg script again, the number of blocks again remains exactly the same, the PAG file increases by about 0.4GB, but the index file size leaps back to 1.6GB.
    If I then immediately re-run the Agg script, the # blocks still remains the same, the PAG file increases marginally (less than 0.1GB) and the Index remains exactly the same at 1.6GB.
    Subsequent passes of the Agg script have the same effect - a slight increase in the PAG file only.
    Performing another dense restructure reverts the Index file to 0.6GB (exactly the same number of bytes as before).
    I have tried running the Aggs using parallel calcs, and also as in series (ie single thread) and get exactly the same results.
    I figured there must be some kind of fragmentation happening on the Index, but can't think of a way to prove it.  At all stages of the above test, the Average Clustering Ratio remains at 1.00, but I believe this just relates to the data, rather than the Index.
    After a bit of research, it seems older versions of Essbase used to suffer from this Index 'leakage', but that it was fixed way before 11.1.2.1. 
    I also found the following thread which indicates that the Index tags may be duplicated during a calc to allow a read of the data during the calc;
    http://www.network54.com/Forum/58296/thread/1038502076/1038565646/index+file+size+grows+with+same+data+-
    However, even if all the Index tags are duplicated, I would expect the maximum growth of the Index file to be 100%, right?  But I am getting more than 160% growth (1.6GB / 0.6GB).
    And what I haven't mentioned is that I am only aggregating a subset of the database, as my Agg script fixes on only certain members of my non-aggregating sparse dimensions (ie only 1 Scenario & Version)
    The Index file growth in itself is not a problem.  But the knock-on effect is that calc times increase - if I run back-to-back Aggs as above, the 2nd Agg calc takes 20% longer than the 1st.  And with the expected growth of the model, this will likely get much worse.
    Anyone have any explanation as to what is occurring, and how to prevent it...?
    Happy to add any other details that might help with troubleshooting, but thought I'd see if I get any bites first.
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.
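
    As a quick sanity check on the growth figures mentioned above (the numbers are taken directly from the post):

    # Quick check of the index growth described above.
    before_gb, after_gb = 0.6, 1.6
    growth_pct = (after_gb / before_gb - 1) * 100
    print(f"index grew by about {growth_pct:.0f}%")   # ~167%, well past the 100% that a
                                                      # simple one-for-one duplication of
                                                      # index tags could explain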

    alan.d wrote:
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.
    I haven't tried Direct I/O for quite a while, but I never got it to work properly. Not exactly the same issue that you have, but it would spawn tons of .pag files in the past. You might try duplicating your cube, changing it to buffered I/O, and running the same processes to see if it does the same thing.
    Sabrina

  • File size bloat still evident in CC 2014.1 (or just stabilization?)

    In the last couple of nights, the project I am working on (originally imported from 2014.0) has gone from 3MB to 16MB to 61MB without any new importing or explicit rendering.
    It's a small project: a single 10-minute sequence with perhaps 16 master clips, most used 4-5 times or less. There's a lot of primary and secondary color correction (half of it applied to master clips) and vignettes, but not an excessive amount. I am new to Premiere Pro (but very, very smitten with it).
    I have done most of my warp stabilizing over the last few nights (corresponding, perhaps coincidentally, with the 2014.1 update). Several people on these forums have suggested that the project file storage of warp stabilization data is to blame, but I have not heard this from Adobe and, as a software developer, that doesn't make sense to me. (I would think that the product of warp stabilization needn't be more than the stabilization "solution", an overall zoom value and an x,y for each frame.)
    If additional "hint" data from the stabilization is also stored in the project file to help with future re-stabilizations, this would explain the growing file size but the optimization seems to be hurting more than helping. The clips took only 5-10 seconds each to stabilize initially and I lose that time with every auto-save. In addition, the performance of unrendered playback has suffered significantly on my Win7 i7 16Gb with 1100 CUDA cores.
    If it is warp stabilization data, it would be good to know. I can disable auto-saves for now and, in the future, ensure that stabilization is done much later in the workflow.
    I also want to thank you folks for all for the answers you provide here -- I have learned a huge amount from this community!

    To be fair, the stabilization solution is more than just a 2-D offset; it's a warp stabilizer, after all.
    I did a quick test using 3 of the 6 unstable master shots in a new project and made 6 clips overall to make a small representation of my larger project. As warp stabilization was added, the file size grew from 150k to 4MB.
    Scaling this growth up to match the relative number of clips in my full project, it roughly matches. So question answered, I suppose.
    The performance hit on adjustment visual feedback and playback has me seriously considering removing all the stabilization and repeating it closer to end of editing.
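
    For what it's worth, a rough back-of-envelope calculation (all figures below are assumed for illustration, not taken from Adobe) shows why a simple per-frame transform can't account for megabytes of growth, which supports the idea that much richer per-frame tracking data is being stored:

    # Back-of-envelope estimate; every figure here is an assumption for illustration only.
    frames = 10 * 60 * 30                  # ~10-minute sequence at ~30 fps
    simple_transform = 3 * 4               # zoom + x + y as 4-byte floats per frame
    print(frames * simple_transform)       # ~216,000 bytes -- nowhere near tens of MB

    # If the stabilizer instead keeps, say, 100 tracked points (x, y) per frame:
    tracked_points = 100 * 2 * 4
    print(frames * tracked_points)         # ~14,400,000 bytes -- much closer to what was observed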

  • Huge binary file size difference LV 6.1 to LV 8.2

    I have an application that was first developed under LabVIEW 4.2 and has been periodically upgraded as necessary. It is normally run as a built application, so LabVIEW version changes only come in when something in the application has to be changed. That occasion occurred and we changed from LabVIEW 6.1 to LabVIEW 8.2 for the build environment. No changes were made to any of the file-handling VIs (other than what LabVIEW did in the conversion, which I didn't check at all).
    The problem is that this application writes a large structure (4320 elements in each of 13 single-precision variables) into a binary file once a day.  When I started the new version of the application the size of the files went from 242956 bytes to 775334 bytes.
    The vi to read these files still works for the old files as well as the new files and a set of C routines that I use to read the files seems to also work correctly with the new files - at least the data looks like it is supposed to when it is plotted.
    The change in file size is a concern since this application is supposed to keep writing these files every day for many years into the future.
    Is there any known way to return to the original behavior with the smaller files?
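
    A quick back-of-envelope check (assuming the structure is written as flat binary arrays with negligible headers) shows the old size is close to the raw single-precision payload, while the new size isn't explained even by a silent promotion to double precision, so something about how the data is flattened has likely changed:

    # Rough size check, assuming flat binary arrays with negligible headers.
    elements = 4320 * 13
    print(elements * 4)   # 224,640 bytes as single precision -- close to the old 242,956
    print(elements * 8)   # 449,280 bytes as double precision -- still well short of 775,334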

    Hi Bryan,
    you say: "all changes are made with rotates and insert elements to prevent the arrays from growing"!
    You should use "replace array elements" to prevent array growth!!! Please check this with your application!
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • Downloading Excel file shows increase in file size

    Hi,
    I have a XLS file, available in my SharePoint site.
    When I click on the file, I get the "Open", "Save" and "Save As" options. On top of this, I can see the file size, which shows as 400 KB.
    When I click Save and the file is downloaded, the file size is shown as 900 KB. How does that happen? Why has the file size more than doubled? How do I resolve this?
    Thanks

    This is true, but not necessarily related just to versions. All Office documents retain the list info from the site they're downloaded from (such as metadata) as XML. You could try clearing this data out as a test and then re-comparing the file sizes.
    Steven Andrews
    SharePoint Business Analyst: LiveNation Entertainment
    Blog: baron72.wordpress.com
    Twitter: Follow @backpackerd00d
    My Wiki Articles:
    CodePlex Corner Series
    Please remember to mark your question as "answered" if this solves (or helps) your problem.

  • Unable to set max log file size to unlimited

    Hi all,
    Hoping someone can give me an explanation of an oddity I've had. I have a series of fairly large databases. I wanted to make the database log files 8GB in size, with 8GB growth increments and an unlimited maximum file size. So, I wrote the script below. It seems to have worked, but the log's max size doesn't show as unlimited; it shows as 2,097,152MB and cannot be set to unlimited using a script or in SSMS by clicking the unlimited radio button.
    2TB is effectively unlimited anyway, but why show that rather than actually setting it to unlimited?
    USE [master]
    GO
    -- Note: this only works for SIMPLE recovery mode. For FULL / BULK_LOGGED recovery
    -- models you need to back up the transaction log instead of issuing a CHECKPOINT.
    DECLARE @debug varchar(1)
    SET @debug = 'Y'
    DECLARE @database varchar(255)
    DECLARE @logicalname varchar(255)
    DECLARE @command varchar(8000)
    DECLARE database_cursor CURSOR LOCAL FAST_FORWARD FOR
        select DB_NAME(database_id) as DatabaseName,
               name                 as LogicalName
        from master.sys.master_files
        where file_id = 2
          AND type_desc = 'LOG'
          AND physical_name LIKE '%_log.ldf'
          AND DB_NAME(database_id) NOT IN ('master', 'model', 'msdb', 'tempdb')
    OPEN database_cursor
    FETCH NEXT FROM database_cursor INTO @database, @logicalname
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Step 1: checkpoint and truncate free space from the end of the log file
        SET @command = '
            USE [' + @database + ']
            CHECKPOINT
            DBCC SHRINKFILE(''' + @logicalname + ''', TRUNCATEONLY)'
        IF (@debug = 'Y')
        BEGIN
            PRINT @command
        END
        exec (@command)
        -- Step 2: resize the log file and set the growth increment and unlimited max size
        SET @command = '
            USE master
            ALTER DATABASE [' + @database + '] MODIFY FILE
                (NAME = ''' + @logicalname + ''', SIZE = 8000MB)
            ALTER DATABASE [' + @database + '] MODIFY FILE
                (NAME = ''' + @logicalname + ''', MAXSIZE = UNLIMITED, FILEGROWTH = 8000MB)'
        IF (@debug = 'Y')
        BEGIN
            PRINT @command
        END
        exec (@command)
        FETCH NEXT FROM database_cursor INTO @database, @logicalname
    END
    CLOSE database_cursor
    DEALLOCATE database_cursor

    Hi,
    http://technet.microsoft.com/en-us/library/ms143432.aspx
    File size (data): 16 terabytes
    File size (log): 2 terabytes
    Thanks, Andrew
    My blog...
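
    The 2,097,152MB figure in SSMS is simply the documented 2 TB log-file cap expressed in megabytes, which a quick calculation confirms:

    # 2 TB expressed in MB matches the value SSMS displays instead of "unlimited".
    print(2 * 1024 * 1024)   # -> 2097152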

  • Please suggest on Autoextend on and file size

    Hi Experts,
    We have installed an SAP ECC 6.0 system on Oracle 10.2.0. Our file system and tablespace setting is autoextend on.
    1. Some files in the database have reached a size above 30GB. Will it be a problem if datafiles grow above 30GB?
    2. If we have sufficient space in the file system, is it advisable to keep the autoextend-on setting for the Oracle database files?
    Kindly advise me.
    Thanks in advance
    Regards
    Veera

    Hi,
    OS ?
    Maximum file size varies from OS to OS. Refer to [SAP Note 129439 - Maximum file sizes with Oracle|https://service.sap.com/sap/support/notes/129439] for more detailed information. With HP-UX, a 64GB file size with a DB block size of 16KB is theoretically possible.
    I would suggest you maintain the main tablespaces (PSAPSR3, PSAPSR3DB, PSAPSR3USER, PSAPSR3700) manually, with small datafile sizes (10 to 20 GB), based on their daily/weekly/monthly growth rates.
    Regards,
    Bhavik G. Shroff
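
    As a rough rule of thumb (an assumption to verify against the SAP note above and the Oracle documentation for your platform), a smallfile datafile is capped at about 4 million (2^22) blocks, so the limit scales with the DB block size:

    # Rough rule of thumb; verify against SAP Note 129439 / Oracle docs for your platform.
    max_blocks = 2 ** 22   # approximate smallfile datafile block limit
    for block_size_kb in (8, 16, 32):
        print(f"{block_size_kb} KB blocks -> ~{max_blocks * block_size_kb // (1024 * 1024)} GB per datafile")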

  • File size doesn't update

    On my Win 2008 SP2 Enterprise x64 system I have an FTP application running. It is downloading a file. When I look at the file in Windows Explorer or with the DOS dir command, the file size and last-modified time aren't current and can be many hours old.
    If I then look at the file properties via right-click and Properties, I see the same. Close the Properties box, exit the folder, then go back to the folder, and the file size/modified time are correct.
    The drive is set to "optimize for quick removal" (cache disabled), and if I'm downloading several files the size discrepancy can be several GB - more than my system RAM. The folder isn't shared. Shadow Copies is disabled.
    Event logs show no problem and saving to a different disk doesn't resolve it.
    Thoughts?

    Hi Dennies,
    Here is the result I got:
    It is not a bug, but rather just the way that NTFS works.
    The file IS getting updated, but the file Size is just not being updated along with it until some outside event forces a refresh of the file size on disk (F5 or file completion)
    Following Blog resources provide more information about this:
    http://blogs.msdn.com/ntdebugging/archive/2008/07/03/ntfs-misreports-free-space.aspx
    http://blogs.technet.com/b/askcore/archive/2009/10/16/the-four-stages-of-ntfs-file-growth.aspx
    Shaon Shan |TechNet Subscriber Support in forum |If you have any feedback on our support, please contact [email protected]
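
    A minimal way to see the behaviour described in those posts is to compare the size reported by a directory scan with the size seen through an open handle. The path below is a placeholder, and the assumption that the directory scan reflects the cached NTFS directory entry while the open handle reads the live end-of-file is exactly what the linked articles describe.

    # Sketch: compare the directory-entry size with the size seen through an open handle.
    # Run it while another process is still writing the file; the path is a placeholder.
    import os

    path = r"D:\downloads\bigfile.dat"   # placeholder

    # Size as reported by a directory scan (cached directory-entry data on Windows)
    with os.scandir(os.path.dirname(path)) as entries:
        for entry in entries:
            if entry.name == os.path.basename(path):
                print(f"directory scan: {entry.stat().st_size:,} bytes")

    # Size as seen through an open handle (live end-of-file position)
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        print(f"open handle   : {f.tell():,} bytes")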

  • Reduce the Production Log file size(.LDF)

    Hi Everybody,
    We are using an R/3 ECC 6.0 system with a SQL 2005 database. For the past two days our production server performance has been very slow due to the size of the production log file (.LDF); it has crossed 17 GB. I want to reduce this log file size but I don't know how. Please can someone help me do this job? Otherwise this will become a serious issue.
    Points will be rewarded
    Thanks
    Siva

    How did you trace the slowness back to the log file? A 17 GB log file is on the small side for a production system. I don't think a hotfix is going to fix your log growth.
    Is the log on the same physical disk as your data files? Is it on a very slow hard drive, or is the drive having an I/O problem? That is the only way it would impact performance to a noticeable degree. A large or small log file will have no real effect on performance, since it is just appended to and not read during writes, and in most production environments it is on a separate disk or part of a SAN.
    You can decrease its growth by backing up the log more frequently. Do you back it up now? You can probably set your backup software to shrink the file when it finishes backing up. You should consult your DBA team and ask for their advice; they can quickly point you in the right direction.

  • Reducing main file size

    Hi,
    We had a large .mdf file of around 135 GB. We split the single data file into a primary file plus three secondary files [.mdf, .ndf1, .ndf2, .ndf3] so as to increase file I/O. We were advised that reindexing the tables would move data to the secondary files, thereby distributing the data equally among all data files. We have been able to reduce the main .mdf file from 135 to around 95 GB, but we are unable to reduce it further.
    I often run a reindex-all command to move data to the secondary files, but to no avail. I have stopped file growth on the main file to encourage data distribution to the secondary files, but the .mdf file is stuck at 95 GB.
    Is there any way to distribute data by moving it from the .mdf file to the .ndf files?
    Shady

    Do you have any LOB data? 
    SQL Server uses a proportional fill algorithm in a round-robin mechanism.
    http://sqlmag.com/blog/rebalancing-data-across-files-filegroup
    Identify the huge tables; moving their clustered indexes to a new filegroup will actually spread the data across different filegroups.
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
    CREATE CLUSTERED INDEX <IndexName> ... WITH (DROP_EXISTING = ON, ONLINE = ON, ...) ON newfilegroup
    To find the biggest table you can use the below query.
    -- Temp tables to capture sp_spaceused output and its numeric conversion
    create table #TableSize (
        Name varchar(255),
        [rows] int,
        reserved varchar(255),
        data varchar(255),
        index_size varchar(255),
        unused varchar(255))
    create table #ConvertedSizes (
        Name varchar(255),
        [rows] int,
        reservedKb int,
        dataKb int,
        reservedIndexSize int,
        reservedUnused int)
    -- Collect space usage for every user table
    EXEC sp_MSforeachtable @command1 = "insert into #TableSize EXEC sp_spaceused '?'"
    -- Strip the trailing ' KB' so the figures sort numerically
    insert into #ConvertedSizes (Name, [rows], reservedKb, dataKb, reservedIndexSize, reservedUnused)
    select name, [rows],
        SUBSTRING(reserved, 0, LEN(reserved)-2),
        SUBSTRING(data, 0, LEN(data)-2),
        SUBSTRING(index_size, 0, LEN(index_size)-2),
        SUBSTRING(unused, 0, LEN(unused)-2)
    from #TableSize
    -- Biggest tables first
    select * from #ConvertedSizes
    order by reservedKb desc
    drop table #TableSize
    drop table #ConvertedSizes
    --Prashanth
