Very large PSD files created in library?

When I finish processing a NEF file in Lightroom and send to Photoshop CS3 to make a lens distortion correction then save the revised file as a PSD file, the resulting file size saved to my Lightroom library is more than 5 times the size of my original NEF (11MB-62MB).
This doesn't seem right and really discourages me from using CS3 - or maybe Lightroom(?). Am I doing something wrong? Is there a workflow that will allow me to use CS3 for corrections not possible in Lightroom without generating such giant files in my library?
In my pre-Lightroom workflow I would import the original NEF to a photo library folder, open the NEF file in Bridge to make raw adjustments, then open it in Photoshop to make the distortion corrections, etc., then save as a JPG. The resulting file sizes in my library folder would be 11 MB for the original NEF and 3-4 MB for the final JPG.
I really prefer sorting, developing and viewing photos in Lightroom rather than Photoshop, but am reluctant to accept an additional 58MB of storage requirement for each photo that requires Photoshop revisions.
Would appreciate any suggestions or ideas!
Nikon D300, Mac OS X 10.4.11, Lightroom 1.3.1, Photoshop CS3

That's normal. A RAW file is usually 12-bit, with only one greyscale value per pixel; an individual color filter sits in front of each photosite, in a pattern called a Bayer mosaic. A RAW converter demosaics this into an image with full RGB values at every pixel. If, for example, you have a 10 MP camera, your uncompressed RAW file will be at least 15 MB, or about 10 MB compressed. Now demosaic that into a 16-bit file: 10,000,000 × 3 × 16 / 8 = 60 MB uncompressed. Compression gets you down to about 30-50 MB depending on the complexity of the picture. There is little you can do about this, short of saving in 8-bit instead of 16-bit, which halves the file size.
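The arithmetic above is easy to sanity-check in a few lines (a rough sketch; the 10 MP figure and 12-bit depth are the example values from this answer, not any specific camera's specs):

```python
# Rough size arithmetic for the 10 MP example above.
pixels = 10_000_000                 # 10 megapixel sensor
raw_bytes = pixels * 12 // 8        # one 12-bit value per photosite
rgb16_bytes = pixels * 3 * 16 // 8  # three 16-bit channels per pixel

print(raw_bytes)    # 15000000  (~15 MB uncompressed RAW)
print(rgb16_bytes)  # 60000000  (~60 MB uncompressed 16-bit RGB)
```

The roughly 4x jump from compressed RAW to a layered 16-bit PSD is therefore expected, not a workflow mistake.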

Similar Messages

  • How to open very large PSD files?

    I am unable to open a PSD file - "could not open file as result will be too long"

    PFF! WTF? Why wouldn't Illy CS3 be able to open Illy 1.1 files? That is **RETARDED**. Who at Adobe thinks "Gee, no one would Ever need to open old files any more!"? Some day I'm gonna want to open some old files just for nostalgia, and I'll be using Illy 2025 trying to open an Illy 6 file that I made in college.
    If anyone has any input at Adobe, please tell them that we should **always** be able to open older versions of files in new applications. This goes for Ai, InD, PS, Acro, AE.
    [/crazy rant]

  • Today, I randomly happened to have less than 1GB of hard drive space left. I found very large "frame" files, what are they?

    I found very large "frame" files, what are they and can I delete them? (See screenshot.) I'm a (17 today)-year-old film-maker and can't edit in FCP X anymore because I "don't have enough space". Every time I try to delete one, another identical file creates itself...
    If it helps: I just upgraded to FCP 10.0.4, and every time I launch it, it asks to convert my current projects (I know it would do that at least once) and I accept, but every time I have to do it AGAIN. My computer is slower than ever and I have a deadline this Friday.
    I also just upgraded to Mac OS X 10.7.4, and the problem hasn't been here for long, so it may be linked...
    Please help me!
    Alex

    The first thing you should do is to back up your personal data. It is possible that your hard drive is failing. If you are using Time Machine, that part is already done.
    Then, I think it would be easiest to reformat the drive and restore. If you ARE using Time Machine, you can start up from your Leopard installation disc:
    1. At the first Installer screen, go up to the menu bar and, from the Utilities menu, select Disk Utility.
    2. Completely erase the internal drive using the Erase tab. Make sure you have the internal DRIVE (not the volume) selected in the sidebar, and make sure you are NOT erasing your Time Machine drive by mistake.
    3. After erasing, quit Disk Utility and select the command to restore from backup from the same Utilities menu.
    4. Using that Time Machine volume restore utility, restore to a time and date immediately before you went on vacation, when things were working.
    If you are not using Time Machine, you can erase and reinstall the OS (after you have backed up your personal data). After restarting from the new installation and installing all the updates using Software Update, you can restore your personal data from the backup you just made.

  • HELP!! Very Large Spooling / File Size after Data Merge

    My question is: If the image is the same and only the text is different why not use the same image over and over again?
    Here is what happens...
    Using CS3 and XP (P4 2.4Ghz, 1GB Ram, 256MB Video Card) I have taken a postcard pdf (the backside) placed it in a document, then I draw a text box. Then I select a data source and put the fields I wish to print (Name, address, zip, etc.) in the text box.
    Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview multiple records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
    Now that my merged document is created, with four postcards per page and the mailing data on each card, I go to print. When I print, the file spools up huge! The PDF I originally placed in the document is 2.48 MB, but when it spools I can only print 25 pages at a time, and that still takes FOREVER. So again my question is: if the image is the same and only the text is different, why not use the same image over and over again?
    How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document, with the same result. I have also tried creating a merged document with just the addresses and then adding the PDF on the master page afterward, but again, a huge file size while spooling. Am I missing something? Any help is appreciated :)

    The size of the EMF spool file may become very large when you print a document that contains lots of raster data
    Article ID : 919543
    Last Review : June 7, 2006
    Revision : 2.0
    SYMPTOMS
    When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
    CAUSE
    This problem occurs because the Graphics Device Interface (GDI) does not compress raster data when it processes and generates EMF spool files.
    This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
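    As a quick illustration of that quadratic growth (a sketch; the page size and dpi values here are made up, not taken from the article):

```python
def raster_pixels(width_in, height_in, dpi):
    """Pixels needed to rasterize a page of the given size at a given resolution."""
    return int(width_in * dpi) * int(height_in * dpi)

# Doubling the dpi quadruples the raster data for the same physical page.
assert raster_pixels(8.5, 11, 600) == 4 * raster_pixels(8.5, 11, 300)
```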
    RESOLUTION
    To resolve this problem, bypass EMF spooling. To do this, follow these steps:
    1. Open the properties dialog box for the printer.
    2. Click the Advanced tab.
    3. Click the Print directly to the printer option.
    Note: This will disable all print-processor-based features, such as:
    - N-up
    - Watermark
    - Booklet printing
    - Driver collation
    - Scale-to-fit
    STATUS
    Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
    MORE INFORMATION
    Steps to reproduce the problem
    1. Open the properties dialog box for any inbox printer.
    2. Click the Advanced tab.
    3. Make sure that the Print directly to the printer option is not selected.
    4. Click to select the Keep printed documents check box.
    5. Print an Adobe .pdf document that contains many groups of raster data.
    6. Check the size of the EMF spool file.

  • I received a PSD file created in CS6 for Mac, and I can't find the layers when I open it in Photoshop on Windows. What can I do to edit the file?

    I received a PSD file created in CS6 for Mac, and I can't find the layers when I open it in Photoshop on Windows. What can I do to edit the file? What can be done so that I can open it and see the layers, or how can the sender save it so that the layers aren't "merged" down to one?

    You could try saving as TIFF, provided Layers and Transparency are checked at the time of saving. But it's hard to give a definitive answer, as it depends on the final usage; for example, PSDs tend to work better than TIFFs in applications like InDesign.

  • Can I open psd files created in Photoshop CC with lower version? Actually CS5.

    Can I open psd files created in Photoshop CC with lower version? Actually CS5.
    My home and office have different version each.

    Yes, PROVIDED that the file has not been saved with any layers that use features introduced in Photoshop CC or later.
    If the file has been flattened, or was flat to begin with, there will be no problems.

  • Import error when importing .psd file created in Photoshop CS3

    I have a file that was created in Photoshop CS3. I am trying to import this file into Catalyst CS5, but I keep getting an import error. I created a file in Photoshop CS5 and was able to import that .psd file into Catalyst successfully. So I guess my question is this: does Catalyst support only .psd files created in Photoshop CS5? If so, how can I convert the CS3 .psd to a CS5 .psd?

    I am now building my files using photoshop CS5. A few challenges I have experienced while trying to import into Catalyst:
    - If I do a straight import into Catalyst, I lose all my colors. That is, Catalyst imports all my vectors without any of the colors I had in Photoshop. My understanding is that Catalyst has libraries to support Photoshop files, so an import from Photoshop should not look any different in Catalyst.
    Workaround: I have had to rasterize/merge all my image layers for the different components I built in Photoshop CS5. Then, when I start the import process, I click the Advanced button on the Catalyst import dialog. From the advanced dialog, I have to flatten (to bitmap) all the images I intend to import.
    This process takes about two hours because I have a lot of components on the page.
    Is there something I'm not doing right here? I'm not seeing the value Catalyst adds if it takes me this long just to import .psd files.

  • I need to sort very large Excel files and perform other operations.  How much faster would this be on a MacPro rather than my MacBook Pro i7, 2.6, 15R?

    I am a scientist and run my own business. Money is tight. I have some very large Excel files (~200 MB) that I need to sort and perform logic operations on. I currently use a MacBook Pro (i7 core, 2.6 GHz, 16 GB 1600 MHz DDR3) and am thinking about buying a multicore Mac Pro. Some of the operations take half an hour to perform. How much faster should I expect these operations to run on a new Mac Pro? Is there a significant speed advantage in the 6-core vs 4-core? Practically speaking, what are the features I should look at, and what speed bump should I expect if I go to 32 GB or 64 GB? Related to this, I am using a 32-bit version of Excel. Is there a 64-bit spreadsheet I can use on a Mac that has no limit on row and column counts?

    Grant Bennet-Alder,
    It's funny you mentioned using Activity Monitor. I use it all the time to watch when a computation cycle is finished so I can avoid a crash. I keep it up in the corner of my screen while I respond to email or work on a grant. Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding, in red) but will almost always complete the cycle if I let it go for 30 minutes or so. As long as I leave Excel alone while it is working, it will not crash. I had not thought of using Activity Monitor as you suggested. Also, I did not realize that using a 32-bit application limited me to 4 GB of memory per application. That is clearly a problem for this kind of work. Is there any workaround for this? It seems like a 64-bit spreadsheet would help. I would love to use the new 64-bit Numbers, but the current version limits the number of rows and columns. I tried it out on my MacBook Pro but my files don't fit.
    The hatter,
    This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products.  When I started computing this was the sort of thing computers were designed to do.  Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple?  Excel is only 64-bit on their machines.
    Many thanks to both of you for your quick and on point answers!

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, may be as a tree structure. Visitors should be able to go to any level depth nodes and access the children elements or text element of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and is hence space-consuming. SAX doesn't work for me either, since every time there is a click on any of the nodes, my SAX parser parses the whole document to find that node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to be used for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it, then you should. It's possible to waste a lot of time considering alternatives; the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.
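    If you do stick with parsing rather than a database, a streaming parse keeps memory flat no matter how big the file is. A sketch (in Python for brevity, though the thread discusses Java; the function and tag names are illustrative):

```python
import xml.etree.ElementTree as ET

def iter_tags(path, tag):
    """Yield the text of each <tag> element without loading the whole tree."""
    for _event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == tag:
            yield elem.text
            elem.clear()  # free the subtree we just consumed
```

    Java's StAX (`XMLStreamReader`) offers the same pull-style streaming, and is a better fit here than SAX callbacks or a full DOM.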

  • Have a very large text file, and need to read lines in the middle.

    I have very large txt files (around several hundred megabytes), and I want to be able to skip ahead and read specific lines. More specifically, say the file looks like:
    scan 1
    scan 2
    scan 3
    scan 100,000
    I want to be able to move the file reader immediately to scan 50,000, rather than having to read through scans 1-49,999.
    Thanks for any help.

    If the lines are all different lengths (as in your example) then there is nothing you can do except to read and ignore the lines you want to skip over.
    If you are going to be doing this repeatedly, you should consider reformatting those text files into something that supports random access.
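    One common middle ground, if the file itself can't be reformatted: scan it once to record each line's byte offset, then seek directly on later reads. A sketch (Python for brevity, though the thread is about Java; function names are illustrative):

```python
def build_line_index(path):
    """One full pass over the file: return the byte offset where each line starts."""
    offsets = [0]
    with open(path, "rb") as f:
        for line in f:
            offsets.append(offsets[-1] + len(line))
    return offsets[:-1]

def read_line(path, offsets, n):
    """Jump straight to line n (0-based) without reading the lines before it."""
    with open(path, "rb") as f:
        f.seek(offsets[n])
        return f.readline().decode().rstrip("\n")
```

    Build the index once per file; every later lookup is then a single seek plus one line read. In Java, `RandomAccessFile.seek` plays the same role.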

  • How can NI FBUS Monitor display very large recorded files

    NI FBUS Monitor version 3.0.1 outputs an error message "Out of memory" if I try to load a large recorded file of 272 MB. Is there any combination of operating system (Vista 32-bit or 64-bit) and/or physical memory size where NI FBUS Monitor can display such large recordings? Are there any patches, workarounds, or tools for displaying very large recorded files?

    Hi,
    NI-FBUS Monitor does not impose a limit on the maximum record file size. The amount of physical memory in the system is one of the most important factors affecting the loading of large record files: Monitor tries to load the entire file into memory when opening it.
    272 MB is a really large file. To open it, your system must have sufficient physical memory available; otherwise an "Out of memory" error will occur.
    I would recommend not using Monitor to open a file larger than 100 MB. Loading too large a file will consume system memory quickly and degrade performance.
    Message Edited by Vince Shen on 11-30-2009 09:38 PM
    Feilian (Vince) Shen

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler: operations like copying a column, eliminating rows, performing some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading them into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amounts of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, and it needs an external file which would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

    How much internal memory (RAM) is in the computer where your program will run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
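    On the memory point: many column-wise operations can also be done one row at a time, so the whole file never has to fit in memory at all. A sketch (in Python's csv module for brevity, though the thread is about Java; the column name is illustrative):

```python
import csv

def column_sum(path, column):
    """Accumulate one numeric column while streaming the CSV row by row."""
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += float(row[column])
    return total
```

    Writes can stream the same way: read a row, transform it, append it to an output file, so no RandomAccessFile gymnastics are needed.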

  • What are the best tools for opening very large XML files and examining the tree and confirming they are valid?

    I am generating some very large XML files (600,000+ lines, 50+ MB). I finally have them all valid XML and valid UTF-8.
    But the files are so large Safari and Chrome will often not open them. FireFox will though.
    Instead of these browsers, I was wondering if there are any other recommended Mac apps for opening and viewing the XML, examining the XML tree, and getting an error message if a file is invalid for some reason.
    I opened the file in the default app for XML, which is Xcode, but that is just like opening it in a plain text editor: you can't expand/collapse the XML tree like you can in a browser, and it doesn't report errors.
    Thanks,
    Doug

    Hi Tom,
    I had not seen that list. I'll look it over.
    I'm also in touch with the developer of BBEdit (they are quite responsive) and they are willing to look at the file in question and see why it is not reporting UTF-8 errors while Chrome is.
    For now I have all the invalid characters quashed and things are working. But it would be useful in the future.
    By the by, some of those editors are quite pricey!
    doug

  • Bridge locks briefly when updating metadata on large PSD files

    When updating metadata on a large PSD file, Bridge "locks" up for a brief period of time. This is apparently when it's rewriting that large PSD file to update the metadata. I'd like to be able to use Bridge in an uninterrupted fashion while metadata is updated in the background. This normally works for smaller files, just not for large PSD files. You can reproduce this easily by adding a keyword to a couple of 100 MB PSD files.
    --John

    Chris,
    How large are the PSD files? If they are over 400MB, I think I know what's wrong.
    Bridge has a file-size preference and won't process files that are larger than your setting. You'll find this preference on the Thumbnails panel in the Bridge Preferences dialog. The default is 400 MB; try bumping it up to a size larger than your largest PSD, then purge the cache for the folder containing those PSDs.
    If that does not work, I'd like to have a copy of one of the PSDs that is not working for you.
    -David Franzen
    Quality Engineer,
    Adobe Systems, Inc.

  • When using Lightroom Book module for Blurb book making, why do I keep getting a low image quality message if it's supposedly accessing my large raw files in my library?

    When using Lightroom Book module for Blurb book making, why do I keep getting a low image quality message if it's supposedly accessing my large raw files in my library?

    I think I've solved my problem with a Google search. I came across a free slide show generator (contributions requested) that shows much higher quality slide shows than either iPhoto or Aperture 3. You click on a folder of JPEGs and it almost immediately generates thumbnails, and within a few seconds I can be viewing a full-screen, tack-sharp slideshow of all of the files in the folder. Much sharper than I'm used to seeing.
    I think I'll keep Aperture 3 and use it for the purpose it's intended for in the future. I'll also redo the image preview files to the small size they started with, and then I'll copy all of the files I'm interested in from iPhoto into a separate folder on another disk. I'll use Aperture to catalog and to perform image manipulations, but I won't try to use it as an iPhoto replacement. I don't think I'll be using iPhoto much as an image viewer in the future either, after I finish moving my favorite pictures to the Phoenix Slides folder.
    The name of the free program is Phoenix Slides. It's free to download and try, free to keep (though I think you'd want to pay the small amount requested), and fast. My pictures have never looked so good before.
    http://blyt.net/phxslides/
    Message was edited by: Jimbo2001
