From dng_image to pixel buffer?

Various parts of the DNG SDK return a dng_image: the stage 1, stage 2 and stage 3 images, for example, and if you use dng_render you also get a dng_image.
I'd like to get a buffer of pixels (with, for example, R, G and B values), but I haven't found any way in the API to get such a buffer.
For example, dng_image has member functions to get its width and height, but nothing to get the pixel buffer, or even the value of a single pixel.
I also experimented a bit with dng_image's "Get" method: if I don't put anything in the fArea of the buffer parameter, I don't get anything useful, while if I do define a rectangle there, the program corrupts memory and crashes shortly after the call to dng_image::Get.
So, basically, my question is: how can I get the pixel buffer from a dng_image?
This is for displaying it on a computer monitor BTW.
Unfortunately the dng_validate sample doesn't help me out: it never reads individual pixel values, it just calls an API function that converts the dng_image to a TIFF file, and I need RGB pixels on a screen, not a TIFF file on disk.
Thanks.

I found the answer in the meantime, here it is: it's done with a dng_const_tile_buffer. You pass the dng_image to its constructor, and then you can read pixels with dng_const_tile_buffer::ConstPixel.

Similar Messages

  • Create Image from Array of Pixels

    Greetings:
    I am trying to create an industry-standard image (JPEG, TIFF) from a raw data format. I am getting mixed up as to which classes to use. The data itself is binary little-endian. I am using a ByteBuffer to readInt()s in the correct byte order. It appears that I am reading all of the input data OK, but I can't get the image-creation part of the code working. I would appreciate any insight, as I suspect this is likely not the best approach to the problem. Thank you for your time.
    My code is as follows:
    import java.awt.*;
    import java.awt.color.*;
    import java.awt.image.*;
    import java.io.*;
    import java.util.*;
    import javax.imageio.*;
    import javax.swing.*;
    import java.nio.*;
    public class Main {
        public static void main(String[] args) throws IOException {
            final int H = 2304;
            final int W = 3200;
            byte[] pixels = createPixels(W*H);
            BufferedImage image = toImage(pixels, W, H);
            display(image);
            ImageIO.write(image, "jpeg", new File("static.jpeg"));
        }
        public static void _main(String[] args) throws IOException {
            BufferedImage m = new BufferedImage(1, 1, BufferedImage.TYPE_BYTE_GRAY);
            WritableRaster r = m.getRaster();
            System.out.println(r.getClass());
        }
        static byte[] createPixels(int size) throws IOException {
            int newsize = size*4;
            byte[] pixels = new byte[newsize];
            String filename = "Run 43.raw";
            InputStream inputStream = new FileInputStream(filename);
            int offset = 0;
            int numRead = 0;
            while (offset < pixels.length
                    && (numRead = inputStream.read(pixels, offset, pixels.length-offset)) >= 0) {
                offset += numRead;
            }
            return pixels;
        }
        static BufferedImage toImage(byte[] pixels, int w, int h) {
            ByteBuffer byteBuffer = ByteBuffer.wrap(pixels);
            byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
            int trash = byteBuffer.getInt(); // strip width
            trash = byteBuffer.getInt(); // strip height
            int[] fixedOrder = new int[2304*3200];
            for (int i = 0; i < (2303*3199); i++) {
                fixedOrder[i] = byteBuffer.getInt();
            }
            System.out.println(fixedOrder.length);
            DataBuffer db = new DataBufferInt(fixedOrder, w*h);
            WritableRaster raster = Raster.createPackedRaster(db,
                w, h, 1, new int[]{0}, null);
            ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY);
            ColorModel cm = new ComponentColorModel(cs, false, false,
                Transparency.OPAQUE, DataBuffer.TYPE_INT);
            return new BufferedImage(cm, raster, false, null);
        }
        static void display(BufferedImage image) {
            final JFrame f = new JFrame("");
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.getContentPane().add(new JLabel(new ImageIcon(image)));
            f.pack();
            SwingUtilities.invokeLater(new Runnable(){
                public void run() {
                    f.setLocationRelativeTo(null);
                    f.setVisible(true);
                }
            });
        }
    }

    I have modified my code to be easier to read and it now compiles and builds without errors; however, the image it produces is all black and the JPEG it writes is 0 bytes.
    Note: This code was modified from a forum post by user DrLaszloJamf. Thank you for your starting point.
    import java.io.*;
    import java.nio.*;
    import java.awt.*;
    import java.awt.color.*;
    import java.awt.image.*;
    import java.util.*;
    import javax.swing.*;
    import javax.imageio.*;
    public class Main {
        /** Creates a new instance of Main */
        public Main() {
        }
        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) throws IOException {
            final int width = 2304;
            final int height = 3200;
            final int bytesInHeader = 4;
            final int bytesPerPixel = 2;
            final String filename = "Run 43.raw";
            InputStream input = new FileInputStream(filename);
            byte[] buffer = new byte[(width*height*bytesPerPixel)+bytesInHeader];
            System.out.println("Input file opened. Buffer created of size " + buffer.length);
            int numBytesRead = input.read(buffer);
            System.out.println("Input file read. Buffer has " + numBytesRead + " bytes");
            long numBytesSkipped = input.skip(4);
            System.out.println("Stripped header. " + numBytesSkipped + " bytes skipped.");
            ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
            byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
            System.out.println("Created ByteBuffer to fix little endian.");
            short[] shortBuffer = new short[width*height];
            for (int i = 0; i < (width-1)*(height-1); i++) {
                shortBuffer[i] = byteBuffer.getShort();
            }
            System.out.println("Short buffer now " + shortBuffer.length);
            DataBuffer db = new DataBufferUShort(shortBuffer, width*height);
            System.out.println("Created DataBuffer");
            WritableRaster raster = Raster.createInterleavedRaster(db,
                width, height, width, 1, new int[]{0}, null);
            System.out.println("Created WritableRaster");
            ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY);
            System.out.println("Created ColorSpace");
            ColorModel cm = new ComponentColorModel(cs, false, false,
                Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
            System.out.println("Created ColorModel");
            BufferedImage bi = new BufferedImage(cm, raster, false, null);
            System.out.println("Combined DataBuffer, WritableRaster, ColorSpace, ColorModel" +
                    " into BufferedImage");
            ImageIO.write(bi, "jpeg", new File("test.jpeg"));
            display(bi);
            System.out.println("Displaying image...");
        }
        static void display(BufferedImage image) {
            final JFrame f = new JFrame("");
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.getContentPane().add(new JLabel(new ImageIcon(image)));
            f.pack();
            SwingUtilities.invokeLater(new Runnable(){
                public void run() {
                    f.setLocationRelativeTo(null);
                    f.setVisible(true);
                }
            });
        }
    }
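    The all-black image is consistent with two issues in the listing above: the 4-byte header is never consumed from the ByteBuffer before decoding (so every sample is misaligned by 4 bytes), and the loop fills only (width-1)*(height-1) shorts. A hedged sketch of a corrected read path follows; the file name, header size and dimensions are the poster's, and the 16-bit grayscale interpretation of the data is an assumption:

    ```java
    import java.awt.Transparency;
    import java.awt.color.ColorSpace;
    import java.awt.image.*;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class RawGray16 {
        // Sketch: read little-endian 16-bit samples (after a headerBytes-long
        // header) into a TYPE_USHORT grayscale BufferedImage.
        static BufferedImage load(InputStream in, int w, int h, int headerBytes)
                throws IOException {
            byte[] buf = new byte[headerBytes + w * h * 2];
            // Loop until the buffer is full: a single read() may return early.
            int off = 0, n;
            while (off < buf.length && (n = in.read(buf, off, buf.length - off)) >= 0) {
                off += n;
            }
            ByteBuffer bb = ByteBuffer.wrap(buf).order(ByteOrder.LITTLE_ENDIAN);
            bb.position(headerBytes);            // skip the header *before* decoding
            short[] samples = new short[w * h];  // fill every pixel, not (w-1)*(h-1)
            for (int i = 0; i < samples.length; i++) {
                samples[i] = bb.getShort();
            }
            DataBuffer db = new DataBufferUShort(samples, samples.length);
            WritableRaster raster = Raster.createInterleavedRaster(
                db, w, h, w, 1, new int[]{0}, null);
            ColorModel cm = new ComponentColorModel(
                ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false,
                Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
            return new BufferedImage(cm, raster, false, null);
        }
    }
    ```

    Note also that the stock ImageIO JPEG writer generally cannot encode 16-bit grayscale images, which would explain the 0-byte test.jpeg; the usual workaround is to convert to an 8-bit TYPE_BYTE_GRAY image before calling ImageIO.write.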

  • After importing a Lightroom catalog, the files I export come out smaller than the originals, with every file format. I want to export them as DNGs, but the files are reduced from around 5000 pixels to about 1700 pixels in length.

    Hi All ~ Thanks in advance for your help! 
    I have been on a support chat with Adobe for 2 hours now and they can't figure this out but hopefully one of you can! 
    I exported my files as a LightRoom Catalog
    I zipped the catalog and the smart previews together
    I sent the zipped folder via Dropbox to a photo editor
    Editor edited the photos
    Editor exported the files as a Lightroom Catalog and zipped them
    Editor sent the LR Catalog back to me via Dropbox
    I opened the Lightroom Catalog in LR (the pixel dimensions are the same as the originals)
    I exported the photos as DNGs (which does not allow for any file size changes - I also tried JPGs and the same thing happened)
         Camera Raw 7.1 and later (I tried them all just to see - and they all rendered the same result)
         Embed Fast Load Data (checked on)
         Embed Original Raw File (checked on)
    I checked the pixel dimensions on my exported photos and they are smaller.
    The original pixel dimensions are 4912x7360 and the new size (after exporting) is 1709x2560.
    But here's the real kicker!  WHEN I UPLOAD A PHOTO TO LIGHTROOM FROM MY COMPUTER DIRECTLY (not from a LR catalog), AND EXPORT THAT FILE, IT EXPORTS AT FULL SIZE with no change in the pixel dimensions!

  • How do I change font size from points into pixels?

    I used to be able to set font size in points but CS6 has removed this? Please don't say that points = pixels. They do not. See for yourself: a Photoshop doc and an InDesign doc have the same pixel size, yet a 20 point character "A" in InDesign will be larger than a 20 pixel character "A" in Photoshop. This creates hours of extra work when translating text-heavy InDesign documents into Photoshop PSDs. Is there any workaround? Help!

    OK, I know why you don't understand my question. I should have said that points in InDesign do not equal pixels in HTML (and I should have left Photoshop out of it).
    My problem is going directly from InDesign to HTML/CSS code. I used to be able to look at the type in my layout sketch in InDesign, select pixels in the units preferences and (assuming I am using the same webfont, thanks to Google Fonts and the @font-face style) the font size would match exactly when I viewed the page in a browser. But now, when I try to code my webpage fonts in pixels using the value from InDesign's points, the font sizes don't match when viewed in a browser. This may seem like splitting hairs, but it makes a big difference if I can just look at my InDesign sketch and see the exact size of the type without having to export a tracing/guide image and match it visually. It just worked when I could see font size in pixels in InDesign in CS 5.5, but it does not work in CS6.
    Try a test: make a div in a webpage with the same pixel size as an InDesign doc. Use a default font like Helvetica that is available to both InDesign and your browser. A 20 point word will not match a 20 pixel word in the same font.
    Also: I believe I ruled out any font-replacement issues. For my designs, I am using the same font in both the InDesign doc and the webpage code. I am using Google Fonts (downloaded for InDesign, and "link href" in the head of the webpage code and/or the @font-face style).
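    For reference, CSS defines 1in = 96px and 1in = 72pt, so converting a point size to CSS pixels is just a factor of 96/72; a minimal sketch (class and method names are illustrative):

    ```java
    public class PtToPx {
        // CSS reference units: 1in = 96px and 1in = 72pt, so 1pt = 96/72 px.
        static double ptToPx(double pt) {
            return pt * 96.0 / 72.0;
        }

        public static void main(String[] args) {
            // 20pt type corresponds to roughly 26.67 CSS px, not 20px,
            // which matches the mismatch described above.
            System.out.println(ptToPx(20));
        }
    }
    ```

    This is why a 20pt word in InDesign looks larger than a 20px word in a browser: at CSS reference resolution, points are one third larger than pixels.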

  • Captivate 5 - Inserted PNG images from Fireworks are pixelated

    The PNG images I inserted from Fireworks are showing up pixelated when I play the file in a browser, but look fine while I'm working in Captivate.  Any thoughts?
    THANKS!
    Birgit

    Hello,
    Did you try changing the publish quality? Because Captivate still applies some compression when converting to SWF, I would give that a try first of all.
    PNG allows even partial transparency, and certainly its 24-bit version. That is one of the reasons that it is superior to JPEG (no transparency) and GIF (only 100% transparency).  But I'm not really used to Fireworks, always using Photoshop for graphical assets.
    Lilybiri

  • Imported graphic from Photoshop is pixelated

    I have a problem when trying to import any graphic file, even a .psd file, into Flash Pro. When imported, the graphic gets pixelated and the edges of the graphic do not look smooth (not curves but pixelated lines). Interestingly, the problem only occurs when I test the movie; while editing, the graphic looks fine.
    I am using windows and have flash and photoshop cs5. How can I transfer my graphic from photoshop to flash without the pixelation of the curved lines? Thanks.

    To add to what Adninjastrator said, you could also think about creating vectors in Photoshop; most games use vectors because they don't pixelate no matter what size they are scaled to.
    Learn about vectors at the following link..
    http://www.melissaevans.com/tutorials/vector-art-with-photoshop

  • AME Export from HDV source: Pixel aspect ratio squeezed in Queue; ok in direct export.

    I have some existing HDV footage that was converted to Cineform codec (CFHD) in an .avi file that I need to export.  My target requires Windows media format (.wmv Window Media video 9).
    My output settings are the following:
    640x480, square pixels, 29.97, lower, VBR, 2 pass unconstrained, 800 kbps.
    When I export these files in a sequence directly from Premiere Pro CC (Export button), the pixel aspect ratio looks correct. (It looks ok in Premiere and in the Premiere Export window, too.)
    When I export using the Queue button in Premiere to AME CC, the video renders out squeezed, as if the aspect ratio is ignored.
    I can also see the squeezed video in the AME preview window when the file is rendering.
    This worked OK previously in Premiere 5.5.
    Is there a setting that controls this and allows normal rendering from the Queue?
    Using the Queue is much preferred to avoid bottlenecks in workflow.
    System: Windows 7 enterprise 64 bit, Lenovo W510, Core i7, 16GB ram.
    Nvidia Quadro FX 880M
    Premiere Pro CC: 7.2.1(4)
    Adobe Media Encoder: 7.2.0.43 (64-bit)
    Not sure if I am using the Mercury Playback Engine GPU Acceleration.
    Thanks.

    I'm using CC, updated today. (see system info at top).
    I'm starting to wonder if the codec info is not read correctly when using the Queue (Cineform codec).  I took your suggestions and created a new sequence with one clip and used the same settings.  I opened the export settings window in two separate ways:
    Export settings in Premiere (Ctrl M). All is good in the Source tab. PAR is interpreted correctly:
    I sent the export to the queue, and from the job in AME clicked the "Custom" link to retrieve the same settings window.
    However, the Source video PAR is not interpreted correctly:
    This must be related to why the render from the AME queue is not correct.

  • When I export photos from Aperture, the pixel dimensions are halved thus losing data.  I have set the export preferences to export at original size, to no avail.  This happens even if I just drag to the desktop and then back, or if I export into my iphoto

    When I export photos from Aperture to desktop or iphoto I lose pixels.  The pixel dimensions are halved, despite setting the export preference to export at original size.  Anyone know why or how to correct this?

    Dragging an Image from Aperture exports the Preview.  Preview parameters are set on the Previews tab of Aperture preferences.
    Are you seeing the same results when you export using one of the export commands?
    If so, confirm that the settings in the selected Image Export Preset ("Aperture➞Presets➞Image Export") truly match those implied by the preset's name.
    HTH,
    --Kirby.

  • Data retrieval from cluster tables using BUFFER?

    Hi,
    I would like to retrieve payroll results from their allocated cluster tables using a buffer.
    I do not know how to use the BUFFER concept and its statements.
    could anyone guide me in this topic?
    thanks
    theresita.j

    Hi theresita,
    "I would like to retrieve payroll results from their allocated cluster tables."
    1. You want the remuneration (monthly salary).
    2. You won't get it DIRECTLY from any table
    (it's stored in cluster format).
    3. Use this logic and FM:
    DATA: myseqnr LIKE hrpy_rgdir-seqnr.
    DATA: mypy TYPE payin_result.
    DATA: myrt LIKE TABLE OF pc207 WITH HEADER LINE.

    SELECT SINGLE seqnr FROM hrpy_rgdir
      INTO myseqnr
      WHERE pernr = mypernr
        AND fpper = '200409'
        AND srtza = 'A'.

    IF sy-subrc = 0.
      CALL FUNCTION 'PYXX_READ_PAYROLL_RESULT'
        EXPORTING
          clusterid                    = 'IN'
          employeenumber               = mypernr
          sequencenumber               = myseqnr
        CHANGING
          payroll_result               = mypy
        EXCEPTIONS
          illegal_isocode_or_clusterid = 1
          error_generating_import      = 2
          import_mismatch_error        = 3
          subpool_dir_full             = 4
          no_read_authority            = 5
          no_record_found              = 6
          versions_do_not_match        = 7
          error_reading_archive        = 8
          error_reading_relid          = 9
          OTHERS                       = 10.
      myrt[] = mypy-inter-rt.
      READ TABLE myrt WITH KEY lgart = '1899'.
    ENDIF.
    4. The internal table myrt will contain what you require.
    I don't think there is any buffering concept involved in cluster tables.
    regards,
    amit m.

  • Construct image from Array of pixels

    Here's my problem:
    I've managed to call PixelGrabber on an image, so I've got a 1D array of pixels. Now I want to use that array to create the image within the internal frame, so that the image can be updated quickly when the contents of the array change. I was wondering whether overriding the paint method in some way is the way to go, or is there a much better way to do it?

    "i'm guessing that no one knows an answer?" Bad guess. I know the answer.
    The solution lies in the correct use of the following classes:
    BufferedImage
    WritableRaster
    DataBuffer
    SampleModel
    ColorModel
    It goes something like this:
    1) Wrap a raw array (probably bytes or ints) in a DataBuffer.
    2) Create a SampleModel that describes the layout of the raw data.
    3) Now create a WritableRaster that wraps the DataBuffer and SampleModel
    4) Create a ColorModel that describes how the raw data maps to the color of a pixel.
    5) A BufferedImage can be created from the WritableRaster and the ColorModel.
    Drawing operations on the BufferedImage's Graphics2D objects will be reflected in the raw data array. And modifications to the raw data array will be shown when the BufferedImage is drawn onto other Graphics.
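    A minimal, self-contained sketch of those five steps for an 8-bit grayscale array (the class and method names here are illustrative, not from the original post):

    ```java
    import java.awt.Transparency;
    import java.awt.color.ColorSpace;
    import java.awt.image.*;

    public class RawToImage {
        // Wraps a raw 8-bit grayscale pixel array in a BufferedImage without
        // copying: edits to `raw` show up in the image, and drawing on the
        // image's Graphics2D edits `raw`.
        static BufferedImage wrapGray8(byte[] raw, int w, int h) {
            // 1) Wrap the raw array in a DataBuffer.
            DataBuffer db = new DataBufferByte(raw, raw.length);
            // 2) A SampleModel describing one 8-bit band, w samples per scanline.
            SampleModel sm = new PixelInterleavedSampleModel(
                DataBuffer.TYPE_BYTE, w, h, 1, w, new int[]{0});
            // 3) A WritableRaster tying the buffer and the layout together.
            WritableRaster raster = Raster.createWritableRaster(sm, db, null);
            // 4) A ColorModel mapping each sample to a gray level.
            ColorModel cm = new ComponentColorModel(
                ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false,
                Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
            // 5) Combine the Raster and ColorModel into a BufferedImage.
            return new BufferedImage(cm, raster, false, null);
        }

        public static void main(String[] args) {
            int w = 4, h = 2;
            byte[] raw = new byte[w * h];
            raw[5] = (byte) 200;                       // pixel (x=1, y=1)
            BufferedImage img = wrapGray8(raw, w, h);
            System.out.println(img.getRaster().getSample(1, 1, 0)); // prints 200
        }
    }
    ```

    Because the BufferedImage wraps the array rather than copying it, writing into the array and repainting the component is enough to update the displayed image, which is exactly the fast-update behavior the question asks for.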

  • How to recover from 'INSTALLER LOG SHARED BUFFER IS FUL' during Yosemite install/upgrade?

    My MacBook Pro is currently out of commission as it will only boot into the Yosemite Install process which currently cannot be completed successfully.
    I learned (after the fact) of the install issues related to homebrew but from that helpful article I found out about Cmd-L to open the Log Viewer.
    I've had two unsuccessful installs now. The first took 5+ hours. Since it was taking so long, I left it running overnight with the Log Viewer open. I awoke to a failed install with 'Line 128,493 - INSTALLER LOG SHARED BUFFER IS FUL' as the last output in the log viewer. I figured that maybe the Log Viewer itself was causing the issue, but alas my second attempt failed too, and that time I only had the log viewer open briefly partway through the process. Even without the Log Viewer running, it hit the same error.
    Is there a way I can reenable Mavericks to move some files around in preparation for the install?  I don't have the hardware to mount the laptop as a target disk - if I brought the machine to the Apple Store would they let me mount my MBP and move stuff around so I could minimize the install process? Anyone else have any suggestions?

    As for question 1), I can answer that myself: yes, the installer is still running.
    My installation just finished now and seems to be fine. However, if someone has a clue about question 2) it would be great to hear the answer.

  • How do i stop PS from deleting transparent Pixels on layers?

    Hello community,
    I have a problem concerning the Place function of Photoshop CS6. I want to include hundreds of PNG files in a PSD file and I want them to keep their original size. Unfortunately they have transparent borders, and PS trims these borders when you place the images in the PSD file. For example: I have a square of 50px by 50px positioned in a PNG with dimensions 100px by 100px. I need these files to keep their full dimensions in my PSD file, because my graphic specifications relate to the 100px/100px size and I need it later in my workflow.
    How do I stop PS from cutting off the transparent borders of my images?

    Yes, Free Transform keeps what was set during the place, and you can see in the options bar whether the layer has been scaled or is at 100% width and height. You can also drag out guide lines to the transform's bounding box and later measure the size, or drop a low-opacity pixel into two diagonal corners to force the bounds out to the PNG's canvas size. However, if you don't add those pixels, the bounds a script gets are the bounds of the non-transparent pixels, and in an action Ctrl/Cmd-clicking the layer's content in the Layers palette selects the non-transparent pixels.
    My Photoshop photo-collage populating scripts do not always fill an image area when resizing odd-shaped PNG images. However, if the PNG is rectangular with transparent borders, the script will fill the area with pixels, because the script gets the bounds of the non-transparent pixels, not the PNG canvas size, so resizing works well. There is a difference between layer size and canvas size in a PNG file. Layers can also be larger than the canvas in PSD files; Photoshop supports any size layer, and the canvas acts like a cropping mask on the document.
    Message was edited by: JJMack

  • Image Processing Algorithms - From Matlab to Pixel Bender

    Hello.
    I've got a few image processing (mainly image enhancement) algorithms I created in Matlab.
    I would like to make them run in Photoshop, i.e. create a plug-in out of them.
    Would Pixel Bender be the right way to do it?
    The Algorithms mainly use Convolutions and Fourier Domain operations.
    All I need is a simple Preview Window and few Sliders, Dropbox and Buttons.
    I'd appreciate your help.

    pixel vs float - I couldn't figure out what exactly the difference is, if there is one at all. I assume pixel always gets clipped into [0, 1] and float doesn't, until it gets shown on the screen as output?
    There is no difference between them. At one stage of development we had some ideas about the way the pixel type should work that would make it different to float, but the ideas never came to anything and by the time we realized that it was too late to change. It's #1 on my list of "mistakes we made when developing Pixel Bender".
    Regions - Let me see if I have it straight. Take for example a Gaussian blur kernel of radius 5 (not the STD, but the radius: an 11x11 matrix). I should use "needed()" to define the support of each output pixel in the input image, to make sure no one changes those values before the output pixel is calculated.
    Now, in the documentation it goes needed(region outputRegion, imageRef inputIndex). Should I assume that by default the outputRegion is actually the sampled pixel in the input? Now I use outset(outputRegion, float2(x, y)) to enlarge the "safe zone". I don't get this float2 number. Let's say it's (4, 3) and the current pixel is (10, 10). Now the safe zone goes 4 pixels to the left, 4 to the right, 3 up and 3 down? I assume it actually creates a rectangular area, right? Back to our example, I should set outset(outputRegion, float2(5.0, 5.0)), right?
    Needed is the function the system calls to answer the question "what area of the input do I need in order to calculate a particular area of the output?".
    I should do it to make sure no one changes those values before the output pixel is calculated.
    No, you should do it to make sure the input pixel values needed to compute the output pixel values have been calculated and stored.
    Should I assume that at default the outputRegion is actually the sampled pixel in the input?
    No. When "the system" (i.e. After Effects, PB toolkit or the Photoshop plugin) decides it wants to display a particular area of the output, it will call the needed function with that area in the outputRegion parameter. The job of the needed function is to take whatever output region it is given and work out what input area is required to compute it correctly.
    Let's say it's (4, 3) and the current pixel is (10, 10).
    Don't think in terms of "current pixel" when you're looking at the needed function. The region functions are not called on a per-pixel basis, they are called once at the start of computing the frame, before we do the computation for each pixel.
    Back to our example I should set outset(outputRegion, float2(5.0, 5.0)) right?
    Yes - you're correct. Whatever size the output region is, you require an input region that has an additional border of 5 pixels all round to calculate it correctly.
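    The needed()/outset() logic described above can be mimicked in plain Java (the class and method names here are illustrative): given an output rectangle and a kernel radius, the required input region is simply the output region grown by the radius on every side.

    ```java
    import java.awt.Rectangle;

    public class NeededRegion {
        // Mimics Pixel Bender's needed(): the input area required to compute
        // `outputRegion` with a convolution kernel of the given radius is the
        // output area grown by `radius` pixels on every side (an outset).
        static Rectangle needed(Rectangle outputRegion, int radius) {
            return new Rectangle(
                outputRegion.x - radius, outputRegion.y - radius,
                outputRegion.width + 2 * radius, outputRegion.height + 2 * radius);
        }

        public static void main(String[] args) {
            // An 11x11 Gaussian kernel has radius 5, so a 100x100 output tile
            // needs a 110x110 input tile starting 5 pixels up and to the left.
            Rectangle out = new Rectangle(0, 0, 100, 100);
            System.out.println(needed(out, 5));
            // prints java.awt.Rectangle[x=-5,y=-5,width=110,height=110]
        }
    }
    ```

    Note that, as the answer says, this is computed once per region before the per-pixel loop runs, not once per pixel.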

  • Is there a way to prevent LR from reducing size (pixels) of files transfered to Photoshop as a psd file for editing?

    The subject line says it all.  I have discovered that LR reduces the size of files transferred to Photoshop for further editing when compared to the same RAW file processed in ACR in Photoshop.  For example, a psd file created by LR from a 36 MB RAW file will typically result in a 100 MB psd file.  The same RAW file processed similarly in Photoshop will result in a file around 300 MB.  This makes a huge difference when the files are converted to JPEGs for printing.  The smaller LR psd files result in JPEGs typically under 500 KB where the Photoshop JPEGs are typically around 1.5 MB.   

    Whew! I didn't think this would turn into such a long discussion! OK - my error, spatial data not color. But that is losing focus on my question. My objective is to get the best print possible from the final JPEG file used for printing. As previously noted, for a given RAW file with similar adjustments, but one copy processed in LR and the other in PS, the larger of the two resulting JPEG files (the one from PS ACR processing only, i.e. no LR) will produce the better print. Elie-d appears to have answered my basic question: if LR will always pass a 103 MB file, then I assume there is no way to change this size. In case anyone is interested, here is some additional info for consideration. Both LR and PS are using 8-bit mode and 240 ppi. Procedure in LR: 1) download RAW files from the camera card; 2) make adjustments to the photo; 3) using Edit in External Editor, transfer the photo to PS in PSD format, 8 bit; 4) file size is around 100 MB (103 MB per elie-d). Procedure in PS: 1) download the RAW file to the hard drive with Nikon View2; 2) open the file in PS - it auto-opens in ACR; 3) apply essentially the same adjustments (e.g., exposure, highlight, shadow, etc.) as applied in LR; 4) open the file in the PS editor; 5) the resulting PSD file size with no further processing (i.e., no layers, etc.) is 310 MB. The resulting JPEG file created from the LR PSD is typically less than 200 KB, whereas the JPEG from the PS PSD will generally be around 1.5 MB. As an academic question it would still be interesting to know why one gets different-size PSD files from LR and PS, and what effect, if any, this has on print quality. For now I will not use LR to process RAW files; I prefer the larger PSD and JPEG files I get using just PS.

  • How do i change the default image rulers from cm to pixels?

    I'm having an issue with PS CC 2014. I am not able to change the default setting when I create a new image. It is currently set to cm and I would like it in pixels. I have gone to
    Photoshop > Preferences > Units & Rulers and changed the settings, but it always defaults back to cm. Is this a known glitch or am I doing something wrong?
    Thanks
    Dizl

    Are you talking about the units in the File - New dialog, then?  Seems to me that one will remember whatever you last successfully used.
    If you do File - New, then set the units to Pixels, and actually complete the dialog (creating a new document), I believe that's what you'll get next time.  Is that not happening for you?  I just tested it here and it puts up whatever units I used last.
    I don't know if Mac vs. PC might make a difference.  I'm on PC.
    -Noel
