16-bit greyscale colormap

Is there an easy way to create a 16-bit greyscale colormap?
I can do it for an 8-bit using the RGB to color.vi, but I'm not sure how to do it for 16-bit.

I know all that.  The bottom line is: LabVIEW can't display 16-bit grayscale, which is what the original post asks for.
George Zou
http://gtoolbox.yeah.net
David S.  wrote:
Actually George, the display of images is limited to 8-bit grayscale and 32-bit RGBA.  (This is currently standard because most architectures are based on 32-bit word data.) However, an image can contain any arbitrary bit depth, and industrial cameras customarily acquire 10-, 12-, 14- and 16-bit grayscale images, as well as 40- and 48-bit color images. The NI-Vision toolkit is equipped to handle and manipulate such image data as easily as 8-bit data.
When displaying an image with a bit depth greater than 8, the image data is scaled down to 8 bits so Windows can handle it.  The original image isn't altered when doing this, though, so you don't actually lose any resolution in your data.
George Zou
http://webspace.webring.com/people/og/gtoolbox
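The scale-down described above can be sketched in a few lines. This is only an illustration (Java, with hypothetical names; LabVIEW itself offers several mapping options): stretch the data's actual min..max range onto 0..255 for display, leaving the original 16-bit buffer untouched.

```java
// Illustrative sketch only: map a 16-bit greyscale buffer onto 0..255 for an
// 8-bit display, without modifying the original data. Names are hypothetical.
public class Gray16To8 {
    public static int[] scaleTo8(int[] gray16) {
        int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
        for (int v : gray16) {                 // find the range actually used
            if (v < min) min = v;
            if (v > max) max = v;
        }
        int range = Math.max(1, max - min);    // guard against flat images
        int[] gray8 = new int[gray16.length];
        for (int i = 0; i < gray16.length; i++) {
            gray8[i] = (gray16[i] - min) * 255 / range;
        }
        return gray8;                          // original gray16 is untouched
    }
}
```

Because the full range is remapped rather than truncated, the display keeps maximum contrast while the underlying 16-bit values stay intact.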

Similar Messages

  • Is it possible to convert a 1-bit B&W Tiff image to a 16-bit Greyscale Tiff image?

    Hi there,
    I am trying to convert a 1-bit black & white TIFF image to a 16-bit greyscale TIFF image. Does anyone out there know if it is possible and how it may be done? We are using Adobe Acrobat 9 Pro Extended.
    Any help would be appreciated.
    Cheers, Chris

    Almost any graphic editor can do it, but it doesn't make sense: the picture information remains the same, 1 bit. It has nothing to do with Acrobat.

  • Why do 16 bit greyscale images look significantly worse than 32 bit?

    Howdy,
    I am trying to display a greyscale image from a camera. Parts of the program were written by someone else, and the image comes in as a U16 array. I am using IMAQ Create and IMAQ ArrayToImage. The image displays perfectly if I use the 'float' inputs on the above two sub-VIs, but if I use 16-bit or 8-bit, the image quality is terrible. I would like to be able to display the image as 16-bit.
    If I save the 16-bit image and reopen it with another program, it's still just as bad. I have also tried converting the array to I16 and U8, but it makes no difference to the image quality.
    From what I understand, there should be very little visible difference between 8-, 16- and 32-bit greyscale images. Does anyone have any ideas where the problem might be? My next guess is the camera settings, but I'd love it to be something in my code.
    Cheers,
    Andy

    Andy,
    Thank you for contacting National Instruments.  The key thing to note is that the image data type LabVIEW uses has a signed interpretation, so you need to do some extra conversion to get an unsigned 16-bit array to display properly as an NI-IMAQ image.  Refer to the Knowledge Base: 16-bit Images in NI-Vision for more information on how to do this.  Thanks and have a great day.
    Regards,
    Mark T
    Applications Engineering | National Instruments
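The signed-interpretation pitfall Mark points to can be illustrated in a few lines (Java here, purely as a sketch; the class name is hypothetical and this is not the NI-IMAQ API): a raw unsigned 16-bit sample above 32767 wraps negative when read as a signed I16, and subtracting a 32768 offset is one common way to preserve the ordering in the signed range.

```java
// Hypothetical illustration of the unsigned-vs-signed 16-bit pitfall.
public class U16Display {
    // Shift an unsigned 0..65535 sample into the signed -32768..32767 range,
    // preserving order (a common convention; not the NI-IMAQ API itself).
    public static short toSigned(int u16) {
        return (short) (u16 - 32768);
    }

    // Recover the raw unsigned value from a signed 16-bit sample.
    public static int toUnsigned(short i16) {
        return i16 & 0xFFFF;
    }
}
```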

  • Are 16-bit greyscale images in IMAQ Vision true 16-bit?

    Does a 16-bit image in the Vision control display true 16-bit grayscale (or is it really mapped to 8-bit)?
    thanks,
    Don

    Most displays only provide 8 bits for each color (RGB). These can be combined to create millions of colors, but that still gives only 8 bits of grayscale. Very few displays are capable of 16-bit output, so 16-bit images must be converted to 8-bit. It is useful that LabVIEW provides several options for the conversion.
    Bruce
    Bruce Ammons
    Ammons Engineering

  • Should I modify the camera file to change the greyscale resolution of my Photonfocus camera from 8 to 12 bits?

    I have some difficulties to use my Photonfocus camera with 12 bits greyscale resolution.
    I have a Photonfocus camera MV-D1024E-160-CL-12, a NI PCIe-1429 frame grabber, PFRemote 2.0 and Measurement & Automation Explorer 4.2.1.3001.
    If I work with 8-bit greyscale resolution (PFRemote -> Data Output: Resolution: 8 bit) and look at the image in MAX (Système -> Périphériques et interfaces -> NI-IMAQ Devices -> img0: NI PCIe-1429 -> Channel 0: PhotonFocus MV-D1024-160; file in attachment), everything works.
    Now I'd like to work with 12 bits and I don't know exactly what I have to change. I've tried changing only the resolution in PFRemote (PFRemote -> Data Output: Resolution: 12 bit), but then the image in MAX stays at 8-bit resolution. I've also tried modifying the camera file (file in attachment): I set "BitDepth" and "BitsPerComponent" to 12 instead of 8, and "BinaryThreshold" to (0,4095). But when trying to grab an image in MAX, an error occurred: "Error 0xBFF600FE Le nombre de bits par pixel requis est invalide" (the requested number of bits per pixel is invalid).
    Should I really modify the camera file in order to grab 12 bits images? Or what should I do?
    Thank you
    Mélanie
    Attachments:
    PhotonFocus MV-D1024-160.txt ‏5 KB

    Hi toto26,
    Thank you very much for the information. I did exactly the same thing as you suggested for 8-bit, and I was able to change the resolution (number of pixels), exposure time and frame rate without any problem. I understand that I need to coordinate the MAX icd file and the settings of PFRemote.
    When I tried to switch to 12-bit, I found that I need the camera file (.icd) for 12-bit. I tried the icd attached in the previous discussion (Photonfocus MV-D1024E-160-CL-12 12-bit 1429 X4 only.zip 2 KB). There was an IMAQ error: Error 0xBFF6002C, FIFO overflow caused acquisition to halt. Then I thought maybe this camera file is not right for me because I have a PCIe-1427 instead of a PCIe-1429.
    So, I downloaded the camera file generator (from the same link you suggested) to create my own camera file. When I run the generator, the first window that pops up is for camera settings, and there is no Photonfocus choice under Manufacturer. Is it true that I cannot create camera files for Photonfocus cameras, or do I simply not know how to use the generator?
    Thanks again for the help, and I look forward to more advice.
    Regards,
    Hasi

  • Jpg greyscale file not importing

    I wonder if there is a bug in the new iPhoto '09? I used to be able to import 8-bit greyscale images saved from Photoshop with an ICC profile; now I get a message that the file format is not supported. I also have noticed that at least one other such photo, previously imported into iPhoto, will not display. (In the past, the previews were always black, but double-clicking the preview would show the image as expected.)

    That's too bad that grayscale is not supported. I have used grayscale primarily to save file size. I scan my kids school reports and their pencil drawings and import them. Since so many of Apple's customers are graphic designers, I'll bet many of us do the same thing. And, because of that, I'll bet it would be helpful to support CMYK, too.

  • Retouching large 16-bit tiff images causes endless processing.

    I have a few high-resolution 16-bit greyscale TIFF scans of antique family photos in my Aperture library. The largest file is 420MB. Since upgrading my library to AP3, editing these images causes Aperture to process endlessly. Aperture doesn't hang or crash, and I'm able to close the application. If I re-open Aperture, it's no longer processing, but as soon as I view one of the problem images, it says "loading" and "processing" and never finishes. I've tried repairing permissions, repairing the library and rebuilding the library, to no effect. If I export the master and re-import it, everything is fine until I start retouching. After 10-20 brushstrokes, the image displays with strange artifacts and the endless processing begins again. These images work fine in Photoshop CS3. I'm using the latest 3.0.2 build of Aperture. Has anyone else had this problem and fixed it?

    I've had this problem with images saved as 8-bit scans.
    Are you using any compression in the TIFFs? I scanned using Nikon Scan 4, cleaned up a bit in Photoshop 7, reduced the bit depth to 8-bit, and saved as TIFFs with an embedded colour profile using ZIP compression, Macintosh byte order.
    This used to work with Aperture 1.5.6, but with Aperture 2.0 I see a kind of offset pattern in vertical lines.
    If I export the master TIF, open it in Photoshop (where it looks normal) and save it with no compression, Aperture displays it properly.
    (I'm still using Tiger, so I don't think this is an OS issue.)

  • Creating a 16-bit grayscale JPEG?

    I am trying to create a 16-bit grayscale JPEG in Java. The code below works for TIFF, and it also works for 8-bit grayscale JPEGs, but not for 16-bit. When I try, it throws a runtime exception saying I can only create 1- or 3-banded images.
    What is my problem?
                int[] bits = {16};
                ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY),
                        bits, false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
                SampleModel samp = new BandedSampleModel(DataBuffer.TYPE_USHORT, width, height, 1);
                Point p = new Point();
                WritableRaster WR = Raster.createWritableRaster(samp, p);
                for (int x = 0; x < width; x++) {
                    double intensity = (double) x * 65000 / (double) width;
                    // intensity -= 32500;
                    for (int y = 0; y < height; y++) {
                        WR.setSample(x, y, 0, intensity);
                    }
                }
                BufferedImage test = new BufferedImage(colorModel, WR, false, null);
                RenderedImage ri = test;
                String filename = "JPEG/16Bitgray.jpeg";
                OutputStream out = new FileOutputStream(filename);
                JPEGEncodeParam param = new JPEGEncodeParam();
                param.setQuality(1.0f);
                ImageEncoder encoder = ImageCodec.createImageEncoder("JPEG", out, param);
                encoder.encode(ri); 

    As far as I know, the notion of 16-bit greyscale images is not present in the JPEG specification, but it is available in JPEG 2000. At the moment the JDK supports JPEG but not JPEG 2000 (both in ImageIO and Toolkit), and it does not try to convert images automatically.
    One possibility is to try the JAI JPEG2000 ImageIO plugin (https://jai-imageio-core.dev.java.net/).
    Another option is to convert the image to something that is supported by the JDK (you can test it using ImageWriterSpi.canEncode()) and then save it.
    For instance:
       BufferedImage gray16;   // the 16-bit greyscale source (initialised elsewhere)
       BufferedImage bi = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
       Graphics2D g = bi.createGraphics();
       g.drawImage(gray16, 0, 0, null);
       g.dispose();
       ImageIO.write(bi, "jpeg", out_file);

  • Proper picture display for 16 bits/ 24 bits image

    Hi,
    I want to use a 24-bit display for data that is saved as 8 bits. I also want to display an image which is stored as 16 bits.
    Please open the attached Word document if you are willing to help me.
    This document contains the details of what I am trying to do.
    I have also attached two VIs; "PD_Display_From_Spreadsheet" is the main VI and the other one is a subVI.
    Thanks,
    DK
    Attachments:
    Display Coord to Motor Coord_XY.vi ‏47 KB
    Labview Help.doc ‏52 KB
    PD_Display_From_Spreadsheet.vi ‏186 KB

    Ok, here is what I am seeing.  You have a file that contains an image.  The data in that file seems to be 10-bit greyscale format (from the data and what you have described).  The image is approximately 100x300 pixels in size.
    Now, some background.  The graphic VIs expect color data to be in the format 0x00RRGGBB (in hex) in a 32-bit word.  Each color gets 8 bits of that word, and 8 bits are unused.
    If you try to display your image, which only fills 0x00000XXX, you will get a mostly blue/black image (which you have already observed).  If you copy the XX data up to the red and green bits you will get a greyscale image (actually a color image with only grey pixels).  See the attached image.  The downside is that you will lose data because you have a 10-bit (greyscale) image.  It may be beneficial to copy the most significant bits of the image (i.e. bits 9:2, not 7:0).  Using the Flatten Pixmap VI with the 8-bit option will also lose the extra 2 bits.
    If this doesn't satisfy you, you may have to find a different way to display this 10-bit greyscale image.  Off the top of my head I know that the Vision software would do this quickly and easily.
    Hope this helps
    Message Edited by paulmw on 11-14-2006 07:46 AM
    Attachments:
    grVScl.JPG ‏21 KB
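The bit copy described above can be sketched as follows (Java for illustration; names are hypothetical): keep bits 9:2 of each 10-bit sample and replicate them into the R, G and B fields of a 0x00RRGGBB word, so the graphic VIs draw grey instead of blue/black.

```java
// Illustrative sketch: pack a 10-bit greyscale sample into a 0x00RRGGBB word.
public class Gray10Pack {
    public static int pack(int gray10) {
        int g8 = (gray10 >> 2) & 0xFF;       // bits 9:2, the top 8 of 10 bits
        return (g8 << 16) | (g8 << 8) | g8;  // same value in R, G and B
    }
}
```

Taking the top 8 bits rather than the bottom 8 preserves the image's overall brightness, at the cost of the 2 least significant bits.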

  • Problem of ragged shadows etc in 3D images/meshes created from greyscale

    I am having a problem creating smooth 3D images.
    I have been taking a 16-bit greyscale image with various domes and other shapes with systematic gradations of greyness 'impressed' onto it. From this, I use "create 3D mesh from greyscale" to create a surface whose depth is based on the values in the 16-bit greyscale image: this greyscale image becomes the "Background Depth" layer in the 3D image. I then add a Background layer (that exactly matches the greyscale image) to the 3D image, which determines how the 3D shape will be coloured.
    I then force ray tracing using "Quality:" "Ray Trace Draft" or "Ray Trace Final", and the result is a 3D surface with some shadows. Okay so far, BUT the surface is a bit pitted, a bit ragged, not smooth, and the shadows have jagged bits in them; it's just not very good. In fact, it's unusably poor.  Even if I apply some filtering to the greyscale image to smooth out any nasty discontinuities, the final 3D surface is still pitted, with jagged shadows etc.
    If I look at a wireframe version of the 3D image, it is clear that the resolution of the starting image is better than that of the wireframe image: there are several points in the original image for each point on the wireframe image (that is to say, each vertex in the triangles that make up the wireframe is several points away from each other vertex).
    When I look at the sample images in tutorials etc. they look incredibly smooth and appealing. Why can't I produce the same quality of image?  Are there some settings in the Edit->Preferences->3D panel that I need to reset?  Some way of increasing the number of triangles in the wireframe model?
    I have posted a sample of the greyscale image, and a sample of the 3D image that this greyscale image produces when converted into a mesh. (I've saved them as JPG files, which forces the 16-bit greyscale into 8-bit mode, but the original conversion to a mesh was performed using a 16-bit greyscale image.)
    I hope I have described my problem clearly enough.  Can anyone help me?
    I am using windows 7 on an HP Pavilion Laptop, using OpenGL to draw the graphics.
    Many thanks in anticipation.
    Marc

    The main reason for the jagged edges and shadows in your posted rendering was the relatively low number of polygons in the regular array mesh generated from the heightmap image containing sharp edges. To make matters worse, the edges were jaggier than the pixel grid in many places.
    To get non-jagged edges and shadows in your rendering of a heightmap with sharp edges, each polygon of the mesh will need to be smaller than a pixel of the rendered image. Photoshop creates a mesh containing far fewer polygons than the count of pixels in a heightmap. Therefore the resolution of the heightmap image will need to be higher than the resolution of the rendered image unless the mesh is sufficiently distant from the camera to result in it filling only a small region of the rendered image.
    Here is a simplistic example where it was necessary to use a heightmap image with 8 times the horizontal and vertical resolution of the rendered image.
    First, the result when using the black and white heightmap with the same resolution as the rendered image.
    Next is a rendering, at the same resolution as before, but the heightmap and mesh were created in a document with 4 times the vertical and horizontal resolution of the initial attempt. Still slightly jaggy.
    Finally, the same rendering resolution again, but the heightmap and mesh were created in a document with 8 times the vertical and horizontal resolution of the initial attempt. This time the mesh has sufficient density to be rendered with no jaggies.

  • Canon MP560 prints wirelessly ok but scanner is not recognized

    Something is going on with the latest Mac OS. It seems to affect Intel Macs only. Image Capture does not recognize either scanner I have (Canoscan 8400F also).
    I see others have broken connections, and not only with Canon.
    Has Apple given an official release about this problem? Some have workarounds at the programming level, but I'm not going anywhere near that fix. And TWAIN SANE isn't much of a fix, as it doesn't give the full scanning capability of the original program used to run the scanner. I don't want to go through a whole number of steps to get it to work. I don't have Photoshop to make a workaround.
    Here is what my friend had to do to get his new and old scanners to work:
    "I am okay now, as I have the older CS3 Photoshop now sitting next to the CS4 version, and I've got the older one running in Rosetta compatibility mode and this lets me run the Canon scanner driver in full 16 bit colour mode. If I only wanted 8 bit colour files, I could just use the Canon utility but for working with restorations of old images, you want 16 bit greyscale (for old B&W pictures) or 16 bit colour mode."

    Ross W wrote:
    Something is going on with the latest Mac OS. It seems to affect Intel Macs only. Image Capture does not recognize either scanner I have (Canoscan 8400F also).
    In order to use Image Capture for scanning via USB from either Canon device you may need to update the Canon ICA drivers included with Snow Leopard. The Canon v1.5.1 update can be downloaded via the following link.
    http://support-au.canon.com.au/contents/AU/EN/0100213301.html
    This will update the Canon IJScanner1 package located within the core Image Capture > Devices folder.
    For the wireless connected MP560 I don't believe Image Capture can be used. But you can use Preview or even the MP560 Scanner queue in Print & Fax. You will need to install the Canon MP560 scanner driver listed below.
    http://support-au.canon.com.au/contents/AU/EN/0100204302.html
    With regards to using Preview, to see the networked scanner you will have to click on File > Import from Scanner > Included Networked Devices. Then when you click on File > Import from Scanner again, the MP560 should be shown as an available device.
    Has Apple given an official release about this problem?
    I don't believe there is a known problem with OS X 10.6.4 Image Capture, so I would say there is no official release from Apple.
    Some have workarounds at the programming level, but I'm not going anywhere near that fix.
    I would suggest that the workaround you have seen, especially at a programming level, would be meant for older models of scanners that have no official support from the vendor. But this is not needed for your scanners as Canon has provided drivers that function correctly on 10.6 for the majority of their scanners and AIO's.
    And TWAIN SANE isn't much of a fix, as it doesn't give the full scanning capability of the original program used to run the scanner.
    There is no need for SANE. The following link is the Canon-provided scanner driver for your CS8400F.
    http://support-au.canon.com.au/contents/AU/EN/0900334701.htm
    I don't want to go through a whole number of steps to get it to work. I don't have Photoshop to make a workaround.
    Like SANE, Photoshop is not needed. I am scanning over a wireless network at home with my new MP990 using the Canon MP990 scanner queue without a problem.
    And my LiDE200 connected via USB to my Mac works fine also.
    Pahu

  • I reinstalled my plug-ins into Photoshop CC (2014) on Mac OSX 10.9.4, and they are greyed out in the Filters menu. How do I get them to work? I have tried everything including repairing permissions.

    I reinstalled my plug-ins into Photoshop CC (2014) on Mac OSX 10.9.4, and they are greyed out in the Filters menu. How do I get them to work? I have tried everything including repairing permissions.

    My apologies. Your post jogged my little grey cells to the point that I did a few experiments. In essence, you are right. My plug-ins do not work with 8-bit greyscale images. I like to deliberately shoot B&W and have a Wratten #12 minus-blue filter on the lens--as I used to do with film cameras.
    The raw files ( Fujifilm's .RAF) were converted in Adobe CR by checking off "Convert to Grayscale" under the fourth tab. In the future, I will do the B&W conversion in Photoshop with either Nik or Topaz plug-ins.
    Thanks for your input!

  • Rendering of an 8Bit-tiff with java.awt produces crap

    Hi folks,
    I do a downsampling of TIFF images. The source TIFFs are 8-bit indexed greyscale with a 256-entry color table. Now I have two possibilities to resample with java.awt:
    1. The target TIFF is 24-bit true color:
    BufferedImage objDownsample = new BufferedImage(iTargetWidth, iTargetHeight, BufferedImage.TYPE_INT_RGB);
    The result is a perfect-looking but oversized TIFF with 16 million possible colors per pixel.
    2. The target TIFF is 8-bit like the source:
    BufferedImage objDownsample = new BufferedImage(iTargetWidth, iTargetHeight, BufferedImage.TYPE_BYTE_INDEXED, (IndexColorModel) image.getColorModel());
    The result is a small 8-bit TIFF image. Problem: it now has visible vertical stripes of two colors (which, composed, give the source color, I assume).
    Does anybody know what's wrong here and how to get the 8-bit color image without stripes?
    Here comes the whole source:
        private BufferedImage resize(int newHeight, int newWidth, BufferedImage image) {
            int iTargetHeight = newHeight;
            int iTargetWidth = newWidth;
            // Create a BufferedImage that fits
            BufferedImage objDownsample = new BufferedImage(iTargetWidth, iTargetHeight,
                    BufferedImage.TYPE_BYTE_INDEXED, (IndexColorModel) image.getColorModel());
            // A map with all necessary rendering hints to optimize the quality of the image
            Map<java.awt.RenderingHints.Key, java.lang.Object> obj_Map = new HashMap<java.awt.RenderingHints.Key, java.lang.Object>();
            obj_Map.put(RenderingHints.KEY_DITHERING, RenderingHints.VALUE_DITHER_ENABLE);
            obj_Map.put(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
            obj_Map.put(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            // Draw the scaled image
            Graphics2D graphics2D = objDownsample.createGraphics();
            graphics2D.addRenderingHints(obj_Map);
            graphics2D.drawImage(image, 0, 0, iTargetWidth, iTargetHeight, null);
            return objDownsample;
        }
    Thanks
    Albrecht
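One possible workaround, sketched below (a hedged sketch only, not tested against these actual TIFFs; names are illustrative): resample into a true-colour intermediate first, then collapse back to 8 bits in a second draw, so the bilinear filter never blends palette indices directly. TYPE_BYTE_GRAY is used here because the palette in question is a 256-level grey ramp; for an arbitrary palette the second image would be TYPE_BYTE_INDEXED built from the source's IndexColorModel.

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class TwoStepDownsample {
    public static BufferedImage resize(BufferedImage src, int w, int h) {
        // Step 1: smooth bilinear resample into a true-colour intermediate.
        BufferedImage rgb = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = rgb.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        // Step 2: collapse back to one byte per pixel in a plain draw.
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g2 = out.createGraphics();
        g2.drawImage(rgb, 0, 0, null);
        g2.dispose();
        return out;
    }
}
```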

    As far as I can tell, this solution only allows compositing within the component that is currently being painted. What would be nice is if I could add a custom composite to, say, a JLabel, which paints based on the colors of the underlying pixel data (i.e. from its parent container). In this way you could make a JLabel that always inverts the color of what's underneath.
    Here, painting to the screen using a composite allows me to get the pixels underneath as the destination raster, the pixels in the JLabel as the source raster, and what actually gets painted which is the destination output raster.
    If I paint to a BufferedImage instead, the composite's 'destination raster' is the new BufferedImage's default values which are all black. Thus compositing over this would not have the desired effects described above (inverting the underlying image for instance). The missing piece when rendering to the BufferedImage is first copying the underlying screen data into a BufferedImage before painting to it. How do I do that? Something tells me it's very difficult.

  • Annoying copyPixels problem, please help

    Hi,
    I have a problem using the copyPixels command. My task is the following:
    1) I have an image castmember with an imported PNG that contains a number of image objects (for example 10 letters). This image has alpha channels.
    2) I need to create another image containing just some of these letters, for example the 2nd, 4th and 8th letter.
    3) I create a new image object and use a copyPixels command to copy the letters I need from the original to the destination image.
    4) But here is the problem: if I copy the letters onto an empty image (with no background image) I get a problem with the alpha channels. They become "broken", I mean the alpha channels are no longer transparent as they should be, but are shown as white (grey) tattered spots under the letters.
    5) When, before copying the letters, I copy some background image onto the new image and then copy the letters, they are shown correctly: the alpha channels are OK.
    6) When I copy a background image that is smaller than the letters, then the part where the copied letter lies over the background is correct, and the part of the letter copied over empty space is "not beautiful".
    I tried to play with the parameters of copyPixels (maskImage, maskOffset, something with extractAlpha, useAlpha) but with no success.
    Can anyone help me with this? I need just a line of correct Lingo code. Or is this a problem of Director?
    Thanks in advance. ANY HELP WILL BE APPRECIATED
    Jorg

    Hi Ben,
    Thank you very much for your reply, but I still can't get it. Here is the code:
    on testAlphaChannels sourceImage, cNewWidth, cNewHeight, pRects
      cSourceAlphaImage = sourceImage.extractAlpha()
      newImage = image(cNewWidth, cNewHeight, 32)
      newImage.useAlpha = FALSE
      newAlphaImage = image(cNewWidth, cNewHeight, 8)
      repeat with i = 1 to pRects.count
        destRect = ......
        newImage.copyPixels(sourceImage, destRect, pRects[i])
        newAlphaImage.copyPixels(cSourceAlphaImage, destRect, pRects[i], [#ink: #darkest])
      end repeat
      newImage.useAlpha = TRUE
      newImage.setAlpha(newAlphaImage)
      textMember = new(#bitmap)
      textMember.image = newImage
    end
    But the result is not correct. In my example
    http://www.lvivmedia.com/fontPr/Fontproblems3.jpg
    the image on the left was created on a background image, and the image on the right with the code above.
    What is wrong in the code I quoted above?
    Any help will be appreciated.
    Jorg
    "duckets" <[email protected]> wrote in message news:ekhekq$c6g$[email protected]..
    > I think this is what you'll have to do:
    >
    > Do the copyPixels command as per your 2nd result example (where "no background image is used"), using destImage.useAlpha = false.
    >
    > Create a new image as a blank alpha channel image (8 bit, #greyscale).
    >
    > Repeat the same copyPixels commands for each number, but this time the source image is 'sourceAlphaImage', and the dest image is this new alpha image. And the crucial part: use [#ink: #darkest] for these operations. This is because you are merging greyscale images which represent the alpha channels of each letter. The darker parts are more opaque, and the lighter parts are more transparent, so you always want to keep the darkest pixels with each copyPixels command.
    >
    > hope this helps!
    >
    > - Ben
  • MOVED: [Athlon64] Annoying little problem! PLease help

    This topic has been moved to Operating Systems.
    [Athlon64] Annoying little problem! PLease help

