16-bit image depth (aerial imagery) histogram stretching

I am wondering if there is a way to view the full statistics of a 16-bit image. The sensor is 13-bit, so when the data sits in a 16-bit container the image displays as nearly black. When converting to 8-bit I am worried that I may not be using the entire available range of the histogram. Pointers would be greatly appreciated...

Thanks for the quick answer.
I believe I'm being consistent. I get the image type from the properties value of the IMAQ session, and that's the value I use for the Grab Acquire VI and the Extract Tetragon VI. I think the type is grayscale U16.
I was doing something wrong with the way I checked the values, though. The maximum value of the histogram is the number of times a pixel value is repeated, so that 52k wasn't the maximum pixel value after all.
Now, if I use the ImageToArray VI, I see the maximum value within the 12-bit range: 4090.
So in light of this, should I take it that LabVIEW is doing the padding on the left?
That is, a 111111111111 pixel value is stored as 0000111111111111, thus preserving the actual value of the pixel?
I'd then expect the image to look dimmer, because the maximum value would be 4090 out of 65k levels, yet the actual image is bright.
So in order to preserve the fidelity of the image I would expect the padding to be on the right, storing a 111111111111 pixel value as 1111111111110000.
As for how MATLAB deals with it, my guess is that MATLAB just recognises it as a uint16 image and stores the values. But to be certain I need to know how LabVIEW handles the padding, because in MATLAB I'm getting values close to the maximum (2^16-1), which suggests padding on the right, but they are odd, which would suggest padding on the left.
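To make the question concrete, here is a small standalone C++ sketch (not LabVIEW or IMAQ code, just the three padding conventions I can imagine, with made-up names). Only the last one produces values that are both close to 2^16-1 and odd, which would fit what MATLAB is reporting:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Hypothetical 12-bit sample near full scale (the 4090 seen via ImageToArray).
        const uint16_t raw12 = 4090;                                               // 0x0FFA

        // Left padding: the value sits in the low bits; the image displays as dark.
        uint16_t leftPadded  = raw12;                                              // 0x0FFA = 4090

        // Right padding: shift into the high bits; bright, but the low 4 bits are
        // always zero, so every padded value is even.
        uint16_t rightPadded = static_cast<uint16_t>(raw12 << 4);                  // 0xFFA0 = 65440

        // Bit-replication scaling: copy the top bits into the padding so that full
        // scale maps to 65535; values near the maximum can come out odd.
        uint16_t replicated  = static_cast<uint16_t>((raw12 << 4) | (raw12 >> 8)); // 0xFFAF = 65455

        printf("left %d  right %d  replicated %d\n", leftPadded, rightPadded, replicated);
        return 0;
    }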

Similar Messages

  • Filter plugin: problem after changing image depth

    Hi All!
    I have already written a filter plugin and it works fine, but only for 8-bit image depth; after I change the image depth to 16 or 32 bits I get an error message box from Photoshop.
    I have tried 'destination.colBits = 8', 'destination.colBits = pChannel->depth' and '(pChannel->bounds.bottom - pChannel->bounds.top) * pChannel->depth;', but the result is always the same.
    PixelMemoryDesc destination;
    destination.data = data; //*pixel
    destination.depth = pChannel->depth;
    destination.rowBits = (pChannel->bounds.right - pChannel->bounds.left) * pChannel->depth;
    destination.colBits = 8;
    destination.bitOffset = 0 ;
    Please, can someone help?
    Many thanks in advance!
    All code below:
    //  Gauss.cpp
    //  gauss
    //  Created by Dmitry Volkov on 30.12.14.
    //  Copyright (c) 2014 Automatic System Metering. All rights reserved.
    #include "Gauss.h"
    #include "GaussUI.h"
    #include "FilterBigDocument.h"
    #include <fstream>
    using namespace std;
    SPBasicSuite* sSPBasic = NULL;
    FilterRecord* gFilterRecord = NULL;
    PSChannelPortsSuite1* sPSChannelPortsSuite = NULL;
    PSBufferSuite2* sPSBufferSuite64 = NULL;
    int16* gResult = NULL;
    void DoParameters ();
    void DoPrepare ();
    void DoStart ();
    void DoFinish ();
    void DoEffect();
    void GaussianBlurEffect(ReadChannelDesc* pChannel, char* data);
    void ReadLayerData(ReadChannelDesc* pChannel, char* pLayerData);
    void WriteLayerData(ReadChannelDesc* pChannel, char* pLayerData);
    DLLExport MACPASCAL void PluginMain(const int16 selector,
                                        FilterRecordPtr filterRecord,
                                        intptr_t * data,
                                        int16 * result)
    {
        sSPBasic = filterRecord->sSPBasic;
        gFilterRecord = filterRecord;
        gResult = result;
        try
        {
            if (sSPBasic->AcquireSuite(kPSChannelPortsSuite,
                                       kPSChannelPortsSuiteVersion3,
                                       (const void **)&sPSChannelPortsSuite))
                *gResult = errPlugInHostInsufficient;
            if (sSPBasic->AcquireSuite(kPSBufferSuite,
                                       kPSBufferSuiteVersion2,
                                       (const void **)&sPSBufferSuite64))
                *gResult = errPlugInHostInsufficient;
            if (sPSChannelPortsSuite == NULL || sPSBufferSuite64 == NULL)
            {
                *result = errPlugInHostInsufficient;
                return;
            }
            switch (selector)
            {
                case filterSelectorParameters:
                    DoParameters();
                    break;
                case filterSelectorPrepare:
                    DoPrepare();
                    break;
                case filterSelectorStart:
                    DoStart();
                    break;
                case filterSelectorFinish:
                    DoFinish();
                    break;
            }
        }
        catch (...)
        {
            if (NULL != result)
                *result = -1;
        }
    }
    void DoParameters () {}
    void DoPrepare () {}
    void DoStart ()
    {
        if (*gResult == noErr)
            if (doUi())
                DoEffect();
    }
    void DoFinish () {}
    #define defColBits 8
    void DoEffect()
    {
        // Start with the first target composite channel
        ReadChannelDesc *pChannel = gFilterRecord->documentInfo->targetCompositeChannels;
        // Calculate the width and height of our filter window
        int32 width = pChannel->bounds.right - pChannel->bounds.left;
        int32 height = pChannel->bounds.bottom - pChannel->bounds.top;
        fstream logFile ("/Volumes/Macintosh Media/GaussLogFile.txt", ios::out);
        logFile << endl << "top " << pChannel->bounds.top;
        logFile << endl << "bottom " << pChannel->bounds.bottom;
        logFile << endl << "left " << pChannel->bounds.left;
        logFile << endl << "right " << pChannel->bounds.right;
        logFile << endl << "depth " << pChannel->depth;
        logFile << endl << "vRes " << gFilterRecord->documentInfo->vResolution;
        logFile << endl << "hRes " << gFilterRecord->documentInfo->hResolution;
        // Get a buffer to hold each channel as we process it. We could use the standard
        // malloc(size_t) or operator new(size_t), but Adobe recommends sPSBufferSuite64->New()
        // for memory allocation.
        char *pLayerData = sPSBufferSuite64->New(NULL, width*height*pChannel->depth/8);
        if (pLayerData == NULL)
            return;
        // we may have a multichannel document
        if (pChannel == NULL)
            pChannel = gFilterRecord->documentInfo->alphaChannels;
        // Loop through each of the channels
        while (pChannel != NULL && *gResult == noErr)
        {
            ReadLayerData(pChannel, pLayerData);
            GaussianBlurEffect(pChannel, pLayerData);
            WriteLayerData(pChannel, pLayerData);
            // off to the next channel
            pChannel = pChannel->next;
        }
        pChannel = gFilterRecord->documentInfo->targetTransparency;
        // Delete pLayerData
        sPSBufferSuite64->Dispose((char**)&pLayerData);
    }
    void GaussianBlurEffect(ReadChannelDesc* pChannel, char *data)
    {
        // Make sure Photoshop supports the Gaussian Blur operation
        Boolean supported;
        if (sPSChannelPortsSuite->SupportsOperation(PSChannelPortGaussianBlurFilter,
                                                    &supported))
            return;
        if (!supported)
            return;
        // Set up a local rect for the size of our port
        VRect writeRect = pChannel->bounds;
        PIChannelPort inPort, outPort;
        // Photoshop will make us a new port and manage the memory for us
        if (sPSChannelPortsSuite->New(&inPort,
                                      &writeRect,
                                      pChannel->depth,
                                      true))
            return;
        if (sPSChannelPortsSuite->New(&outPort,
                                      &writeRect,
                                      pChannel->depth,
                                      true))
            return;
        // Set up a PixelMemoryDesc to tell Photoshop how our channel data is laid out
        PixelMemoryDesc destination;
        destination.data = data; //*pixel
        destination.depth = pChannel->depth;
        destination.rowBits = (pChannel->bounds.right - pChannel->bounds.left) * pChannel->depth;
        destination.colBits = defColBits;
        destination.bitOffset = 0;
        // Write the current effect we have into this port
        if (sPSChannelPortsSuite->WritePixelsToBaseLevel(inPort,
                                                         &writeRect,
                                                         &destination))
            return;
        // Set up the parameters for the Gaussian Blur
        PSGaussianBlurParameters gbp;
        int inRadius = 1;
        Fixed what = inRadius << 16;
        gbp.radius = what;
        gbp.padding = -1;
        sPSChannelPortsSuite->ApplyOperation(PSChannelPortGaussianBlurFilter,
                                             inPort,
                                             outPort,
                                             NULL,
                                             (void*)&gbp,
                                             &writeRect);
        if (sPSChannelPortsSuite->ReadPixelsFromLevel(outPort,
                                                      0,
                                                      &writeRect,
                                                      &destination))
            return;
        // Delete the temp ports in use
        sPSChannelPortsSuite->Dispose(&inPort);
        sPSChannelPortsSuite->Dispose(&outPort);
    }
    void ReadLayerData(ReadChannelDesc *pChannel, char *pLayerData)
    {
        // Make sure there is something for me to read from
        Boolean canRead;
        if (pChannel == NULL)
            canRead = false;
        else if (pChannel->port == NULL)
            canRead = false;
        else if (sPSChannelPortsSuite->CanRead(pChannel->port, &canRead))
        {
            // this function should not error, tell the host accordingly
            *gResult = errPlugInHostInsufficient;
            return;
        }
        // if everything is still ok we will continue
        if (!canRead || pLayerData == NULL)
            return;
        // some local variables to play with
        VRect readRect = pChannel->bounds;
        PixelMemoryDesc destination;
        // set up the PixelMemoryDesc
        destination.data = pLayerData;
        destination.depth = pChannel->depth;
        destination.rowBits = pChannel->depth * (readRect.right - readRect.left);
        destination.colBits = defColBits;
        destination.bitOffset = 0;
        // Read this data into our buffer; you could check readRect to see if
        // you got everything you desired
        if (sPSChannelPortsSuite->ReadPixelsFromLevel(
                                                      pChannel->port,
                                                      0,
                                                      &readRect,
                                                      &destination))
        {
            *gResult = errPlugInHostInsufficient;
            return;
        }
    }
    void WriteLayerData(ReadChannelDesc *pChannel, char *pLayerData)
    {
        Boolean canWrite = true;
        if (pChannel == NULL || pLayerData == NULL)
            canWrite = false;
        else if (pChannel->writePort == NULL)
            canWrite = false;
        else if (sPSChannelPortsSuite->CanWrite(pChannel->writePort, &canWrite))
        {
            *gResult = errPlugInHostInsufficient;
            return;
        }
        if (!canWrite)
            return;
        VRect writeRect = pChannel->bounds;
        PixelMemoryDesc destination;
        destination.data = pLayerData;
        destination.depth = pChannel->depth;
        destination.rowBits = pChannel->depth * (writeRect.right - writeRect.left); //HSIZE * pChannel->depth * gXFactor*2;
        destination.colBits = defColBits;
        destination.bitOffset = 0;
        if (sPSChannelPortsSuite->WritePixelsToBaseLevel(
                                                         pChannel->writePort,
                                                         &writeRect,
                                                         &destination))
        {
            *gResult = errPlugInHostInsufficient;
            return;
        }
    }

    Have you reviewed your code vs the Dissolve example? It is enabled for other bit depths as well.
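    For comparison, a possible depth-agnostic setup: in my reading of the SDK samples, the PixelMemoryDesc fields are bit strides, so colBits would be the per-pixel stride rather than a fixed 8. The fragment below is a sketch based on that assumption (a drop-in for GaussianBlurEffect/ReadLayerData/WriteLayerData), not something I have tested against this plugin:
    // Assumption: rowBits = bits from one row to the next, colBits = bits from one
    // pixel to the next. Hard-coding colBits to 8 would only describe 8-bit documents.
    int32 width = pChannel->bounds.right - pChannel->bounds.left;
    PixelMemoryDesc destination;
    destination.data = data;                        // start of the channel buffer
    destination.depth = pChannel->depth;            // 8, 16 or 32 bits per pixel
    destination.rowBits = width * pChannel->depth;  // row stride in bits
    destination.colBits = pChannel->depth;          // pixel stride in bits, not defColBits
    destination.bitOffset = 0;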

  • LV 8.0 PDA Image Depth Error

    Attached is an 8-bit image.  Why does the PDA Load Image File.vi in the LV 8.0 PocketPC module give me an image depth of 24 bits when it's actually 8 bits?!
    This is a complete show-stopper.

    Dear members,
    I have tried to use Load Image File too, but I can't load a 10 KB JPG which expands to a 24-bit 400x240 bitmap.
    I always get the message that there is not enough memory available (error code 2). If I use BMP
    files of 288 KB (3 x 400 x 240 = 288 KB) there is no problem loading them.
    Is there another explanation for this error? I am using Win CE 5 on an ARM-based 300 MHz board with 32 MB
    FLASH and 32 MB RAM. The kernel has a size of 12 MB, which leaves me 11-20 MB of free RAM.
    The main problem is the huge amount of memory I need to load from the FLASH; it takes more than a
    second for such a BMP.
    Could a kernel module for JPG decompression be missing, or what does this error mean exactly?
    With kind regards
    Martin Kunze
    KDI Digital Instrumentation.com
    e-mail: [email protected]
    Tel: +49 (0)441 9490852

  • How to convert color image(24 bit) YUV 4:2:2 to gray scale 8 bit image

    I am using a Sony DFW-X700 color camera for one of my vision applications. Does the NI Compact Vision System (CVS) support the YUV 4:2:2 format (8 bits each)? I want to do grayscale processing, so I need to convert the YUV color into grayscale (8-bit) in software (e.g. LabVIEW). Please suggest how to do this conversion for the best grayscale image clarity from color.

    In the YUV color space, Y represents the gray scale; in the RGB color space, R=G=B represents gray scale. You can simply set R=G=B=Y to convert YUV to RGB. If the original color depth is 24 bit, then the result is 24 bit too.
    You can create a gray scale color table like this:
    array size = 256;
    [0] = 0x000000;
    [1] = 0x010101;
    [2] = 0x020202;
    ...
    [255] = 0xFFFFFF;
    To convert 24-bit gray scale to 8-bit, check every pixel in the 24-bit image, find the array index according to the color table, and replace the pixel with the array index.
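    A rough C++ sketch of that table lookup (hypothetical buffers; since a gray pixel has R = G = B, the table index is simply any one channel byte, so no search is actually needed):

    #include <cstdint>
    #include <vector>

    int main()
    {
        // Build the 256-entry gray color table described above: 0x000000, 0x010101, ..., 0xFFFFFF.
        std::vector<uint32_t> grayTable(256);
        for (uint32_t i = 0; i < 256; ++i)
            grayTable[i] = (i << 16) | (i << 8) | i;

        // Hypothetical 24-bit gray image packed as 0x00RRGGBB, one pixel per entry.
        std::vector<uint32_t> rgbPixels = { 0x000000, 0x7F7F7F, 0xFFFFFF };

        // Convert to 8-bit indices. Because R = G = B for gray pixels, the table
        // index equals any one channel byte of the pixel.
        std::vector<uint8_t> indexed(rgbPixels.size());
        for (size_t p = 0; p < rgbPixels.size(); ++p)
            indexed[p] = static_cast<uint8_t>(rgbPixels[p] & 0xFF);   // blue byte == gray level

        return 0;   // indexed now holds the 8-bit image
    }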
    George Zou
    http://gtoolbox.yeah.net
    George Zou
    http://webspace.webring.com/people/og/gtoolbox

  • Roundtrip mismatch for 8-bit to 32bit to 8-bit images. Gamma appears to not be reversible.

    I am trying to debug a slight difference in gamma correction between the 8-bit to 32-bit and 32-bit to 8-bit conversions.
    The observed issue can be seen in darker, shadowed areas of the image. To recreate it, try the following:
    1. Find a good 8-bit test image and open it in Photoshop.
    2. Create an empty file of the same resolution with a bit depth of 32-bit.
    3. Select all in the 8-bit image and copy.
    4. Paste into the 32-bit image.
    The image should look a bit washed out (as if a gamma of 2.2 has been applied).
    5. Select the pasted layer and add an exposure adjustment layer. Set gamma to 0.454545 (i.e. 1/2.2).
    6. Convert back to 8-bit: select Image > Mode > 8-bit and choose Merge.
    7. Under type select 'exposure and gamma'. Leave exposure at 0 and gamma at 1.0.
    This should look close to the original image.
    8. Flip between the original and this round-tripped image -- they are slightly different, especially in the shadow areas.
    A gamma correction should be reversible, and since we promote to float there shouldn't really be any precision issues.
    Any thoughts?
    cl

    Well, do you actually use color management? Unless you work perfectly linearized and can be sure the source image doesn't already have a built-in color skew and/or PS is applying a correction based on color profiles, your values will never match. Also, the conversion back from 32-bit will never yield a 100% exact match due to simple quantization issues when going from float to integer values...
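    If it helps to separate the raw math from color management, a small C++ check like the one below (assuming a plain 2.2 power curve and full float precision, which is not exactly what Photoshop's color-managed pipeline does) runs every 8-bit level up to linear float and back and reports the worst roundtrip error. Anything Photoshop adds beyond that would point at profile conversions or the HDR toning step rather than the gamma itself:

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        // Assumption: a plain 2.2 power curve, not Photoshop's color-managed transforms.
        int worst = 0;
        for (int v = 0; v < 256; ++v)
        {
            float linear   = std::pow(v / 255.0f, 2.2f);     // 8-bit -> linear float (the "32-bit" step)
            float restored = std::pow(linear, 1.0f / 2.2f);  // apply gamma 1/2.2 on the way back
            int v8         = static_cast<int>(std::lround(restored * 255.0f));
            if (std::abs(v8 - v) > worst)
                worst = std::abs(v8 - v);
        }
        printf("worst roundtrip error: %d level(s)\n", worst);
        return 0;
    }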
    Mylenium

  • Converting a Signed 16 bit colour Image to 32 bit Image without Losing the Colour

    Hi,
    I am using IMAQColourHistogram, which only accepts 32-bit image inputs. I changed the image from 16-bit to 32-bit using 'Cast Image'. The program worked, but after inserting an indicator I realised that Cast Image actually changes the colour image into a black-and-white image. Is there a way I can convert the 16-bit image to 32-bit without losing the colour?
    Any suggestions on this would be very helpful as I am currently stuck in my project.
    Thanks.

    Hi,
    Is your 16-bit image a real color image, or only a grayscale image with a false color palette?
    If your image is only a grayscale image, simply use the IMAQHistogram function (located in the Analysis palette), which accepts 8- and 16-bit images.
    Color images are 32-bit because you have 3 planes of 8 bits (Red, Green, Blue).
    16-bit images are not usually color images, but grayscale images with extended pixel depth compared to traditional 8-bit images.
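    If it does turn out to be 16-bit grayscale and you still need a 32-bit RGB input, the per-pixel conversion is just scaling to 8 bits and copying the value into the three planes. A rough C++ sketch with hypothetical buffers (this keeps the image gray, it only changes the container):

    #include <cstdint>
    #include <vector>

    int main()
    {
        // Hypothetical 16-bit grayscale pixels.
        std::vector<uint16_t> gray16 = { 0, 1024, 40960, 65535 };

        // Convert to 32-bit RGB by scaling each pixel to 8 bits and copying it
        // into the R, G and B planes.
        std::vector<uint32_t> rgb32(gray16.size());
        for (size_t i = 0; i < gray16.size(); ++i)
        {
            uint8_t g8 = static_cast<uint8_t>(gray16[i] >> 8);   // 16-bit -> 8-bit
            rgb32[i] = (static_cast<uint32_t>(g8) << 16) |
                       (static_cast<uint32_t>(g8) << 8)  |
                        g8;
        }
        return 0;
    }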
    Regards

  • Image placeholder - how not to stretch images by default

    Captivate 8.0.1.242 - Windows 7
    How do I make the image placeholder not stretch images by default, so that when I add an image using the + symbol it maintains the aspect ratio and resizes based on either width or height alone?
    Or, alternatively, still maintain the aspect ratio of the image but crop off the excess beyond the current placeholder size.
    I want this as the default because my images vary quite a bit in aspect ratio and size across my courses, so investing time in each image through "edit image" in the properties isn't efficient.

    Do you have any suggestions for what I can do instead? I just don't want to spend a couple of minutes on each picture, which obviously gets multiplied across the 3 different responsive views. This leads to hours wasted on each course. Am I forced to use only pictures of the same aspect ratio and size in the future, or is there a workaround?

  • Layer won't unlock for 32 bit image in CS4

    I'm running Photoshop CS4 11.0.1 on Windows XP SP3. When I generate a 32 bit image from FILE > AUTOMATE > MERGE TO HDR, the resulting image has a single layer that cannot be unlocked. Consequently, when converting to 16 bit, I can't manually adjust the sliders in the Toning Curve & Histogram area of the HDR Conversion dialog box. What am I missing?

    Ah, that's why I've been pulling my hair out! Thanks for preventing me from losing any more, Chris. I'm not using Extended, so that explains it. I did find the sliders, and they were helpful.

  • Quicktime converting 24 bit images to 16 bit images on paste

    I am doing image processing of a QuickTime (7.1.5 pro) movie in Photoshop (CS2) using Applescript. When I copy an image (either with Applescript or manually) and paste it into a new QuickTime Player movie the image gets converted to a video depth of 16 (thousands) rather than 24. I have tried:
    set DestMovie to make new movie with properties {data rate:SourceRate, video depth:24, high quality:true} -- I also tried video depth:0
    paste
    I also try:
    tell movie 1
    paste given video depth:24 -- I've also tried just paste
    end tell
    and still it converts my images to thousands. The image gets to photoshop perfectly. If I paste it into a Pages document the image depth is correct. I have tried both RGB and CMYK in 8 and 16 bit depths in Photoshop. If I save the file as JPEG in high quality mode and then open the file with Quicktime and copy and paste it, it works great, the only problem is I then have to make a new file for every image I process because if I close the file, QuickTime keeps it open so Photoshop can't then save to the same file.....
    Will someone please help me figure out how to keep my image quality on pasting into Quicktime???
    Thanks
    Powerbook G4   Mac OS X (10.4.9)  

    I open my original movie in Quicktime. I make a new movie in Quicktime for the modified images to be pasted into.
    I copy the current frame from my source movie and paste it into Photoshop. I do my image processing and then save as a jpeg high quality and close my file (which strips out the audio from a muxed mpeg-1 file) then reopen the file in Photoshop. Now I have a good quality image in Photoshop. I have been using RGB 8 bit as my mode in Photoshop. I only tried CMYK to see if it would work better. It was exactly the same.
    I then do a select all and copy in Photoshop, then go to Quicktime and do a paste. This is when it gets converted to 16 bit depth. If I do the same exact sequence and paste into a Pages document the image stays 24 bit, so I am pretty sure it is something in Quicktime.
    Again, if I save the file in Photoshop and then open the same file in Quicktime, copy the image and then paste it into my destination movie, it works great. The only problem with this method is I have 15 minutes of video to process with is 27,000 jpeg files in the end. I prefer to use one file if I can. The reason for the 27,000 files is that when I open an image in Quicktime and paste it into a movie, it saves it as a track in the movie and even when I close the original file, Quicktime keeps it as in use. When I look at the movie properties, it keeps the original file in the list until I do a save as movie and it flattens the movie. If I flatten the movie once, then when I do another save, it keeps the original file open again. I would have to do a save as movie... again 27,000 movie files now. Geez...
    Thanks!

  • I want to convert pictures to 1 bit image

    Dear sir,
    I want to make a program with which I can upload a real image and then convert it to a 1-bit image.
    Can I use Java to do that?
    If so, which methods, packages and functions would help me do that?
    If you can provide simple example code I would be thankful.
    Best regards.

    Hi,
    if you have Vision, you can use the function IMAQ ImageToArray to get a 2D array of the pixel values.
    You can then compare pixel by pixel; if your images come from a camera, I would recommend setting an acceptance threshold.
    This is a time-consuming solution anyway.
    Alternative methods:
    1) Subtract the two images; the resulting image will be their difference.
    2) Use the IMAQ LogDiff function (Operators palette).
    3) Calculate the histogram of both images and compare the histogram reports.
    Good luck,
    Alberto

  • 16-bit color depth for gradients?

    Hello,
    I have created gradient backgrounds inside InDesign CS6 but see some color banding in the offset printer's color proofs. Is there a setting/option to make InDesign generate its color gradients at 16-bit color depth?
    Thanks in advance.
    Jeffrey

    Then open and rasterize in Photoshop, where you can specify both the resolution and bit depth.
    Jeffrey, it seems like that should work, but if you look closely at the histograms it doesn't look like you get 16 bits of gray levels. Here's a black-to-white blend exported to PDF/X-4 and opened as 8- and 16-bit CMYK. The black channel histograms are the same:
    If there were more than 256 levels of gray in the 16-bit version I should be able to make a correction without gaps showing in the histogram, but that doesn't happen:
    If I make the black channel blend in Photoshop I can make the same correction without gaps:
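    One way to test that outside Photoshop is to dump the 16-bit black channel to a raw buffer and count the distinct values. A rough C++ sketch with a hypothetical buffer standing in for the channel data (more than 256 distinct levels would mean the export really carries extra precision):

    #include <cstdint>
    #include <cstdio>
    #include <set>
    #include <vector>

    int main()
    {
        // Hypothetical 16-bit single-channel ramp standing in for the exported black channel.
        std::vector<uint16_t> channel(4096);
        for (size_t i = 0; i < channel.size(); ++i)
            channel[i] = static_cast<uint16_t>(i * 16);

        // Count distinct gray levels; a gradient that only ever had 8-bit precision
        // will show at most 256 of them, whatever container it is stored in.
        std::set<uint16_t> levels(channel.begin(), channel.end());
        printf("distinct levels: %zu\n", levels.size());
        return 0;
    }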

  • Image depth on clipboard

    When a RAW image file is opened using Preview, what is its image depth (8 bits, 12 bits, 16 bits)?? Assuming it's more than 8 bits, if the image is copied to the clipboard and pasted into a new Photoshop document, is the original image depth maintained? Or does clipboard limit the image depth of a copied image?
    Thanks in advance,
    rr

    You can use this code (fileName is the path of the file you want to inspect):
    import java.io.File;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReader;
    import javax.imageio.ImageTypeSpecifier;
    import javax.imageio.stream.ImageInputStream;
    ImageInputStream is = ImageIO.createImageInputStream(new File(fileName));
    ImageReader ir = (ImageReader) ImageIO.getImageReaders(is).next();
    ir.setInput(is);
    ImageTypeSpecifier its = (ImageTypeSpecifier) ir.getImageTypes(0).next();
    System.out.println("BufferedImageType = " + its.getBufferedImageType());
    // Assumes 8 bits per component; multiply the component count by 8 for the total depth.
    System.out.println("Color depth = " + (its.getNumComponents() * 8));
    You can also work out the color depth from the BufferedImage type if you need to detect b/w (1-bit) images.

  • 32 bit color depth support

    I'm working on a Samsung Galaxy Tab 7.7, which natively supports 32-bit color depth. Is Photoshop Touch rendering at this level, or is it 16-24-bit color? I'm not sure I could tell without comparing two images on actual screens.
    Basically, from what I gather, the answer is that Photoshop Touch, in its ability to render PSD, JPG and PNG, is using a color space providing at least 24- or 30-bit color depth, while the device's color space is capable of rendering 32-bit color depth.

    David,
    My web site address is:
    http://www.geocities.com/gzou999/index.html
    There is no login needed.
    George Zou
    http://gtoolbox.yeah.net
    George Zou
    http://webspace.webring.com/people/og/gtoolbox

  • Using Photoshop Elements as external editor with Aperture 3: What are issues with requiring 8-bit image?

    While I would do most of my photo edits in Aperture, there may be times when I would like to apply special effects that PSE might offer with layers.  I am considering purchasing PSE 10 and am concerned about having to lose information by saving as an 8-bit image.  Can anyone enlighten me as to the circumstances when this would be a concern?
    Thanks.

    Can anyone enlighten me of the circumstances when this would be of concern?
    If you had to do high quality printing larger than 8x10.
    Before buying PSE, try the Pixelmator trial and GIMP. Note that GIMP (now v2.6) will get 16-bit per channel support with v3.0.
    -Allen

  • Using Labview to save image from PCO camera(12 bit images)

    Hello,
    I have LabVIEW 8.5 on Windows XP and a PCO camera (pixelfly). As far as I know it delivers 12-bit images. I used the normal LabVIEW save options: PNG, TIFF or JPEG. As .png it saves the images as 32-bit, and as BMP it takes 8-bit images, but the picture quality is not good. I used IMAQ to take a single picture, and to save it I used IMAQ Write File 2. I used the following mechanism:
    1) IMAQ ImageToArray --> To Unsigned Word Integer --> IMAQ ArrayToImage --> IMAQ Write File 2 --> save it as a .png file.
    Please tell me where I went wrong. Why is the picture not like the 12-bit images from the PCO camera?
    Thanks,
    Stuttgart University, Germany.

    IMAQ Create.vi supports Grayscale (I16) and RGB (U64). Both should be suitable for 12-bit grayscale.
    The documentation of "IMAQ Write File 2.vi" says:
    IMAQ Write TIFF File 2
    Writes an image to a file in TIFF format.
    Note  16-bit monochrome images and 64-bit RGB images are nonstandard extensions of the TIFF standard. Most third-party applications cannot read 16-bit monochrome or 64-bit RGB TIFF files. For compatibility with most applications, write 16-bit monochrome images or 64-bit RGB images into PNG files.
    Best regards
    chris
    CL(A)Dly bending G-Force with LabVIEW
    famous last words: "oh my god, it is full of stars!"
