Add alpha channel to a BufferedImage

Hi, I am making a small jigsaw-puzzle-type game. I read an image from a file into a BufferedImage using ImageIO.read(). From this image I cut out rectangular pieces using getSubimage(). Then I would like to mask out the small figure cuts that should be along the edges of each piece. I'm thinking of doing this with alpha transparency -- letting the background show through the irregular edges. So, my problem is that the images returned by ImageIO.read() do not necessarily have an alpha channel.
Is there a simple way to add an alpha channel to an existing BufferedImage?
Or does anyone have a better solution?
P.S. Currently I intend to use a single circle at each edge, either inside the piece as a hole or filled outside the edge. These circles can easily be drawn, but I plan to eventually expand this to be able to make arbitrarily shaped pieces.

Is there a simple way to add an alpha channel to an existing BufferedImage?

Not for the general case. For example, I used ImageIO.read to load a JPEG image. It's 50x60 pixels, and as an image it has 3 channels: red, green and blue -- no alpha. The pixel data is stored in a DataBufferByte, which has a single array of 9,000 bytes (3 channels x W x H). Given the packed structure of that data, there is no efficient way to slide in alpha values -- there's no room.
One thing you can always do is copy the data from this BufferedImage into another BufferedImage that has an alpha channel. Here's some quick and dirty code to do it:
static BufferedImage addAlpha(BufferedImage bi) {
     int w = bi.getWidth();
     int h = bi.getHeight();
     // TYPE_INT_ARGB gives the copy the alpha channel we need
     BufferedImage bia = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
     Graphics2D g = bia.createGraphics();
     // a null ImageObserver is fine: a BufferedImage's pixels are already in memory
     g.drawImage(bi, 0, 0, null);
     g.dispose();
     return bia;
}
Or does anyone have a better solution?

P.S. Currently I intend to use a single circle at each edge, either inside the piece as a hole or filled outside the edge. These circles can easily be drawn, but I plan to eventually expand this to be able to make arbitrarily shaped pieces.

1. The code given above is sufficient to take the original image and create the same image, but with an alpha channel. Don't worry about the fact that you're copying pixel data. Are you generating 100 jigsaw puzzles per second?
2. Another possible solution is to start with the entire image, perhaps with no alpha channel, and draw individual jigsaw pieces by adjusting the Clip property before drawing each piece. This may be a little slow, because calculating a fancy clipping shape with half-circles etc. can complicate the rendering.
3. My Conclusion. Because a jigsaw application would involve lots of repainting, rotating pieces and dragging them, I'd do the following:
a. Create buffered images (with alpha) for each piece. I'm not using subimages because I'm trading more space for faster redrawing time. To be rigorous, one should do it both ways and compare the results.
b. Just once, for each of these piece images, using a clipping shape, draw from the original image (with or without alpha) onto the piece image to generate the image of one piece. If your jigsaw game allows pieces to be rotated 90/180/270 degrees you may want to pre-generate those rotated piece images to speed up drawing the pieces in rotated orientations. Again, one should do it both ways and compare the results.
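To make 3b concrete, here is a minimal sketch of cutting one piece with a clipping shape. The particular outline (a rectangle plus a circular "tab", minus a circular "hole") and the name cutPiece are my own illustration, not code from this thread:

```java
import java.awt.Graphics2D;
import java.awt.geom.Area;
import java.awt.geom.Ellipse2D;
import java.awt.geom.Rectangle2D;
import java.awt.image.BufferedImage;

public class PieceCutter {

    // Copies the region of the source covered by the piece outline into a new
    // ARGB image; every pixel outside the outline stays fully transparent.
    public static BufferedImage cutPiece(BufferedImage source, int x, int y, int w, int h) {
        double d = h / 3.0;                       // diameter of the tab/hole circles
        Area outline = new Area(new Rectangle2D.Double(x, y, w, h));
        // A tab bulging out of the right edge...
        outline.add(new Area(new Ellipse2D.Double(x + w - d / 2, y + (h - d) / 2, d, d)));
        // ...and a hole cut into the left edge.
        outline.subtract(new Area(new Ellipse2D.Double(x - d / 2, y + (h - d) / 2, d, d)));

        BufferedImage piece = new BufferedImage(source.getWidth(), source.getHeight(),
                                                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = piece.createGraphics();
        g.setClip(outline);               // only pixels inside the outline get drawn
        g.drawImage(source, 0, 0, null);
        g.dispose();
        return piece;
    }
}
```

(For a real game you would translate the outline and render into a piece-sized image rather than a full-size canvas; the full-size canvas just keeps the sketch short.)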
--Nax

Similar Messages

  • BufferedImage from PNG : Alpha Channel disregarded

    I'm trying to load a PNG with an alpha channel into a BufferedImage, and
    then sample the BufferedImage at various pixels for some operations.
    However, the alpha channel seems to get lost upon creation of the BufferedImage.
    For example, I'm using a PNG that is 100% transparent, and when I
    load it into a BufferedImage and display it on a panel all I see is the panel's background color.
    So far so good. Now, the problem: when I try to sample a pixel, the alpha is always 255, or 100% opaque,
    which is clearly not right. Also, when I print out the BufferedImage's type, I get 0, which means the image
    type is "Custom". Care to shed any light on how I can accurately sample an image with an alpha channel?
    import javax.swing.*;
    import java.awt.*;
    import java.io.*;
    import javax.imageio.*;
    import java.awt.image.*;
    public class PNGTest extends JFrame {
        public PNGTest() {
            setLocation(500,500);
            BufferedImage img = new BufferedImage(640,480,BufferedImage.TYPE_INT_RGB);
            try {
                img = ImageIO.read(new File("C:/folder/folder/myPNG.png"));
            } catch (Exception e) {
                e.printStackTrace(); // a catch block should at least report the failure
            }
            setSize(img.getWidth(), img.getHeight());
            getContentPane().setBackground(Color.white);
            getContentPane().add(new JLabel(new ImageIcon(img)));
            setVisible(true);
            //Sample top left pixel of image and pass it to a new color
            Color color = new Color(img.getRGB(0,0));
            //print the alpha of the color
            System.out.println(color.getAlpha());
            //print the type of the bufferedimage
            System.out.println(img.getType());
        }
        public static void main(String[] args) {
            new PNGTest();
        }
    }
    Edited by: robbie.26 on May 20, 2010 4:26 PM
    Edited by: robbie.26 on May 20, 2010 4:29 PM

    Here ya go robbie, it seems to work for the rest of the world, but not for you:
    import java.awt.*;
    import java.awt.image.*;
    import java.net.URL;
    import javax.swing.*;
    import javax.imageio.*;
    public class JTransPix{
      JTransPix(){
        JFrame f = new JFrame("Forum Junk");
        JPanel p = new MyJPanel();
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.add(p);
        f.pack();
        f.setVisible(true);
      }
      public static void main(String[] args) {
        new JTransPix();
      }
      class MyJPanel extends JPanel{
        BufferedImage bi = null;
        MyJPanel(){
          try{
            bi = ImageIO.read(new URL("http://upload.wikimedia.org/wikipedia/commons/archive/4/47/20100130232511!PNG_transparency_demonstration_1.png"));  //here ya go--one liner
            this.setPreferredSize(new Dimension(bi.getWidth(), bi.getHeight()));
          }catch(java.io.IOException e){
            System.out.println(e.toString());
          }
        }
        public void paintComponent(Graphics g){
          super.paintComponent(g);
          g.setColor(Color.BLUE);
          g.fillRect(0, 0, bi.getWidth(), bi.getHeight());
          g.drawImage(bi,0, 0, this);
        }
      }
    }
    Please notice how the BufferedImage is declared and set to NULL, then allow ImageIO to set the type--just as I said before. If you have any question about the PNG producing an ARGB image with ImageIO, then change the color painted onto the background in paintComponent to whatever you want and you'll see it show through.
    BTW: even in the short nobrainers--you still need to have VALID CATCH BLOCKS.
    To get what you are looking for, you can extract the alpha component by dividing each getRGB value by 16777216 (256^3) or doing a binary shift right 24 places (use Java's unsigned >>> shift so the sign bit doesn't leak in).
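    As a sketch of both approaches (the class and method names are mine, assuming img is the BufferedImage in question):

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class AlphaSample {

    // getRGB packs a pixel as 0xAARRGGBB; the unsigned shift (>>>)
    // drops the RGB bytes and leaves a value in 0..255.
    public static int alphaByShift(BufferedImage img, int x, int y) {
        return img.getRGB(x, y) >>> 24;
    }

    // The two-argument Color constructor must be told to keep alpha;
    // new Color(rgb) alone forces alpha to 255, which is the bug in
    // the sampling code above.
    public static int alphaByColor(BufferedImage img, int x, int y) {
        return new Color(img.getRGB(x, y), true).getAlpha();
    }
}
```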

  • Keying with an alpha channel

    I have a client that needs some footage that we shot over green screen delivered to them with the green keyed out and an alpha channel already in place. The end deliverable that they are looking for is the talent over an "invisible" background that they can simply drag and drop into FCP without having to key it themselves.
    So my question is, how do I key out the green and add an alpha channel to preserve the "invisibility" of what was once green?
    I hope that question makes some sense!
    Thanks!

    The most straightforward solution is to export your keyed sequences
    to Animation, PNG, or JPEG 2000 using millions of colors +.
    Make sure the sequence is NOT RENDERED before you export.
    (In other words, if you want to render and preview to make sure it's good,
    do that, but delete your render file before exporting.) The only problem
    is that the resulting files are huge and will really bog down your client's
    edit.
    Another solution is to export two files per clip, with the second file
    being a black-and-white version of the alpha channel.
    To create the second clip, add the Channel Swap filter with
    red, green, and blue all coming from alpha.
    Then your client places the two files over each other, with
    the "alpha" clip below the regular version, and the Composite
    Mode of the upper clip set to Travel Matte - Luma.
    In fact, if your client already has the clips, you can just
    send them the alpha versions.
    The big advantage of the second solution is that you can
    stay in the same codec, which should make your client's
    editing go a lot faster.

  • How do I add a new alpha channel in pse9?

    where would I find the channel tab for pse3? I would like to use a black-to-white gradient and the lens blur to give a picture depth. I would like to make things in the background look farther away with a gradual blending effect of the lens blur. thank you in advance!

    You make a selection, then save it as a new selection.
    However, I'm not at all clear as to why you need to do this. For what you described in your first post, grouped/clipped layers would work better.
    I myself only make alpha channels per se when other programs require them. (If you load the saved selection, then save the file with the selection active, it will be seen as an alpha channel by many programs).

  • How can I see the alpha channel in the channels palette?

    Hello, my format plugin loads an RGBA image. I see it with transparency, that's OK, but when I go to the channels tab I only see 4 items (RGB, Red, Green and Blue).
    How can I see the alpha channel of my file in the channel tab?
    Thanks!

    OK, something must be wrong... but I can't find it!
    That's my whole code (abridged). I omit some code -- the file-saving code (not used) and the main function, where I only call the "DoSomething" functions. You can see that I use layers. The DoReadContinue function is only used to show the preview.
    In the DoReadStart function I set the parameters for the layers (and the preview), and I fill the "data" and "layerName" params in the DoReadLayerContinue function. I hope you can understand the code!
    const int32 IMAGE_DEPTH = 32;
    SPBasicSuite * sSPBasic = NULL;
    SPPluginRef gPluginRef = NULL;
    FormatRecord * gFormatRecord = NULL;
    intptr_t * gMxiInfoHandle = NULL;
    MXIInfo* gMxiInfo = NULL;
    int16 * gResult = NULL;
    #define gCountResources gFormatRecord->resourceProcs->countProc
    #define gGetResources   gFormatRecord->resourceProcs->getProc
    #define gAddResource    gFormatRecord->resourceProcs->addProc
    CmaxwellMXI* cMax;
    static void DoReadPrepare (void){
        gFormatRecord->maxData = 0;
    }
    static void DoReadStart(void){
        char header[2];
        ReadScriptParamsOnRead (); // override params here
      if (*gResult != noErr) return;
        // Read the file header
        *gResult = SetFPos (gFormatRecord->dataFork, fsFromStart, 0);
        if (*gResult != noErr) return;
        ReadSome (sizeof( header ) * 2, &header);
        if (*gResult != noErr) return;
      // Check the magic number for avoid no-mxi files
        int headerID = CheckIdentifier (header);
        if( headerID != HEADER_MXI ) *gResult = formatCannotRead;
      if (*gResult != noErr) return;
      // The file is OK. Let's continue to obtain the data of the image.
      cMax = new CmaxwellMXI( 0 );
      strlen((char*)gFormatRecord->fileSpec->name);
      gMxiInfo->filename = _strdup((char *)gFormatRecord->fileSpec->name + 1);
      bool res = cMax->getMXIIInfo(
                    (const char*)gMxiInfo->filename,
                    gMxiInfo->width, gMxiInfo->height,
                    gMxiInfo->burn, gMxiInfo->monitorGamma, gMxiInfo->iso,
                    gMxiInfo->shutter, gMxiInfo->fStop, gMxiInfo->intensity,
                    gMxiInfo->scattering,
                    gMxiInfo->nMultilightChannels, gMxiInfo->lightNamesList,
                    gMxiInfo->availableBuffersMask,
                    gMxiInfo->widthPreview, gMxiInfo->heightPreview,
                    gMxiInfo->bufferPreview);
      if(!res) return;
      // Check the available extra buffers
      int count = 0;
      if( gMxiInfo->availableBuffersMask & CmaxwellMXI::ALPHA_BUFFER ){
        // We will use that string to obtain later the desired extra buffer.
        gMxiInfo->extraBuffersList[count] = "ALPHA";
        gMxiInfo->hasAlpha = true;
        count++;
      }
      else{
        gMxiInfo->hasAlpha = false;
      }
      gMxiInfo->nExtraBuffers = count;
      switch( IMAGE_DEPTH ){
      case 8:
          gMxiInfo->mode = plugInModeRGBColor;
          break;
      case 16:
          gMxiInfo->mode = plugInModeRGB48;
          break;
      case 32:
          gMxiInfo->mode = plugInModeRGB48; //96 gives me an error
          break;
      }
      // SET UP THE DOCUMENT BASIC PARAMETERS.
      VPoint imageSize;
      if( gFormatRecord->openForPreview ){
        // Preview always RGB8.
        imageSize.v = gMxiInfo->heightPreview;
        imageSize.h = gMxiInfo->widthPreview;
        gFormatRecord->depth = 8;
        gFormatRecord->imageMode = plugInModeRGBColor;
        gFormatRecord->planes = 3;
        gFormatRecord->loPlane = 0;
        gFormatRecord->hiPlane = 2;
        gFormatRecord->colBytes = 3;
        gFormatRecord->rowBytes = imageSize.h * gFormatRecord->planes;
        gFormatRecord->planeBytes = 1;
      }
      else{
        // Configure the layers. All RGBA32.
        imageSize.v = gMxiInfo->height;
        imageSize.h = gMxiInfo->width;
        gFormatRecord->depth = IMAGE_DEPTH;
        gFormatRecord->imageMode = gMxiInfo->mode;
        gFormatRecord->layerData =
            2 + gMxiInfo->nMultilightChannels + gMxiInfo->nExtraBuffers;
        gFormatRecord->planes = 4; // RGBA.
        gFormatRecord->loPlane = 0;
        gFormatRecord->hiPlane = 3;
        gFormatRecord->planeBytes = IMAGE_DEPTH >> 3;
        gFormatRecord->rowBytes = imageSize.h * gFormatRecord->planes * ( IMAGE_DEPTH >> 3 );
        gFormatRecord->colBytes = gFormatRecord->planes * ( IMAGE_DEPTH >> 3 );
        gFormatRecord->transparencyPlane = 3;
        gFormatRecord->transparencyMatting = 1;
        gFormatRecord->blendMode = PIBlendLinearDodge;
        gFormatRecord->isVisible = true;
      }
      SetFormatImageSize(imageSize);
      gFormatRecord->imageHRes = FixRatio(72, 1);
      gFormatRecord->imageVRes = FixRatio(72, 1);
      VRect theRect;
      theRect.left = 0;
      theRect.right = imageSize.h;
      theRect.top = 0;
      theRect.bottom = imageSize.v;
      SetFormatTheRect(theRect);
      // No resources for now.
      if (sPSHandle->New != NULL) gFormatRecord->imageRsrcData = sPSHandle->New(0);
      gFormatRecord->imageRsrcSize = 0;
        return;
    }
    /// Called for preview only.
    static void DoReadContinue (void){
        // Dispose of the image resource data if it exists.
        DisposeImageResources ();
      if( gFormatRecord->openForPreview ){   
        VPoint imageSize = GetFormatImageSize();
        gFormatRecord->data = gMxiInfo->bufferPreview;
          if (*gResult == noErr) *gResult = gFormatRecord->advanceState();
        if( gFormatRecord->data != NULL ){
          delete[] (Crgb8*)gMxiInfo->bufferPreview;
          gMxiInfo->bufferPreview = NULL;
          gFormatRecord->data = NULL;
        }
      }
      // For now we ignore the ICC profiles [TODO]
      //DoReadICCProfile ();
    }
    static void DoReadFinish (void){
        // Dispose of the image resource data if it exists.
        DisposeImageResources ();
        WriteScriptParamsOnRead (); // should be different for read/write
      // write a history comment
        AddComment ();
      // Clean some memory.
      if( gMxiInfo->lightNamesList != NULL ){
        for( unsigned int i = 0; i < gMxiInfo->nMultilightChannels; i++){
          if( gMxiInfo->lightNamesList[i] != NULL ){
            delete[] gMxiInfo->lightNamesList[i];
            gMxiInfo->lightNamesList[i] = NULL;
          }
        }
        delete[] gMxiInfo->lightNamesList;
        gMxiInfo->lightNamesList = NULL;
      }
      if( gMxiInfo->bufferPreview != NULL ){
        delete[] gMxiInfo->bufferPreview;
        gMxiInfo->bufferPreview = NULL;
      }
      if( gMxiInfo->filename != NULL ){
        delete[] gMxiInfo->filename;
        gMxiInfo->filename = NULL;
      }
      if( cMax != NULL ){
        delete cMax;
        cMax = NULL;
      }
    }
    static void DoReadLayerStart(void){
      // empty
    }
    static void DoReadLayerContinue (void){
      int32 done;
        int32 total;
      VPoint imageSize = GetFormatImageSize();
      // Set the progress bar data
      done = gFormatRecord->layerData + 1;
      total = gMxiInfo->nMultilightChannels + gMxiInfo->nExtraBuffers + 2;
      // Dispose of the image resource data if it exists.
      DisposeImageResources ();
      uint32 bufferSize = imageSize.v * gFormatRecord->rowBytes;
      int nPixels = gMxiInfo->width * gMxiInfo->height;
      char* lightName = NULL;
      // SET THE BLACK BACKGROUND
      if( gFormatRecord->layerData == 0 ){
        gFormatRecord->data = (void*)new byte[bufferSize];
        for( int i = 0; i < nPixels; i++ ){
          ((float*)gFormatRecord->data)[ i * 4 ]     =
          ((float*)gFormatRecord->data)[ i * 4 + 1 ] =
          ((float*)gFormatRecord->data)[ i * 4 + 2 ] = 0.0;
          ((float*)gFormatRecord->data)[ i * 4 + 3 ] = 1.0;
        }
        // Set the layer name.
        gFormatRecord->layerName = new uint16[64];
        gFormatRecord->layerName[0] = 'B';
        gFormatRecord->layerName[1] = 'a';
        gFormatRecord->layerName[2] = 'c';
        gFormatRecord->layerName[3] = 'k';
        gFormatRecord->layerName[4] = 'g';
        gFormatRecord->layerName[5] = 'r';
        gFormatRecord->layerName[6] = 'o';
        gFormatRecord->layerName[7] = 'u';
        gFormatRecord->layerName[8] = 'n';
        gFormatRecord->layerName[9] = 'd';
        gFormatRecord->layerName[10] = '\0';
      }
      // LOAD THE LIGHT LAYERS
      else if( gFormatRecord->layerData < gMxiInfo->nMultilightChannels + 1 ){
        void* lightBuffer = NULL;
        void* alphaBuffer = NULL;
        byte foob;
        dword food;
        // Get the light buffer.
        bool res = cMax->getLightBuffer(
                               (char*)gMxiInfo->filename,
                               gFormatRecord->layerData - 1, IMAGE_DEPTH,
                               lightBuffer,
                               gMxiInfo->width, gMxiInfo->height, lightName);
        if(!res){
          *gResult = readErr;
          return;
        }
        if( gMxiInfo->hasAlpha ){
          // Get the alpha buffer.
          res = cMax->getExtraBuffer(
                                (char*)gMxiInfo->filename,
                                "ALPHA", IMAGE_DEPTH, alphaBuffer,
                                food, food, foob);
          if(!res){
            *gResult = readErr;
            return;
          }
        }
        else{
          alphaBuffer = (void*)new float[ gMxiInfo->width * gMxiInfo->height * 3 ];
          for( int i = 0; i < nPixels; i++ ){
            // Only need to set the red channel.
            ((float*)alphaBuffer)[ i * 3 ] = 1.0;
          }
        }
        // Put them together.
        gFormatRecord->data = (void*)new byte[bufferSize];
        for( int i = 0; i < nPixels; i++ ){
          ((float*)gFormatRecord->data)[ i * 4 ]     = ((float*)lightBuffer)[ i * 3 ];
          ((float*)gFormatRecord->data)[ i * 4 + 1 ] = ((float*)lightBuffer)[ i * 3 + 1 ];
          ((float*)gFormatRecord->data)[ i * 4 + 2 ] = ((float*)lightBuffer)[ i * 3 + 2 ];
          ((float*)gFormatRecord->data)[ i * 4 + 3 ] = ((float*)alphaBuffer)[ i * 3 ];
        }
        delete[] (float*)lightBuffer;
        delete[] (float*)alphaBuffer;
        // Set the layer name.
      }
      //LOAD THE EXTRA CHANNELS
      if( ... ){
      }
      //READ THE RENDER BUFFER
      if( ... ){
      }
      // User can abort.
      if (gFormatRecord->abortProc()){
          *gResult = userCanceledErr;
          return;
      }
      // Commit the layer.
      if (*gResult == noErr) *gResult = gFormatRecord->advanceState();
      // Update the progress bar.
      (*gFormatRecord->progressProc)( done, total );
      // Free memory.
      if( gFormatRecord->data != NULL ){
        delete[] (float*)gFormatRecord->data;
        gFormatRecord->data = NULL;
      }
      if( lightName != NULL ){
        delete[] lightName;
        lightName = NULL;
      }
    }
    static void DoReadLayerFinish (void){
      // Nothing to do.
    }
    And that's the image that I obtain loading an 8-layer image:
    The layers have transparency (when I set "transparencyPlane" to -1, or 0, or 1, or 2, or 3, or 4, ... I get the same result!). The blending mode is still "normal", although I had set it to "linear dodge". The "isVisible" param works OK.
    Alpha 1 is still black.
    Is it possible that I need to set something in the .r file? I had to add "FormatLayerSupport { doesSupportFormatLayers }," to manage layers, for instance.

  • ImageIO PNG Writing Slow With Alpha Channel

    I'm writing a project that generates images with alpha channels, which I want to save in PNG format. Currently I'm using javax.ImageIO to do this, using statements such as:
    ImageIO.write(image, "png", file);
    I'm using JDK 1.5.0_06, on Windows XP.
    The problem is that writing PNG files is very slow. It can take 9 or 10 seconds to write a 640x512 pixel image, ending up at around 300kb! I have read endless documentation and forum threads today, some of which detail similar problems. This would be an example:
    http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6215304
    This surely must be resolvable, but after much searching I've yet to find a solution. If it makes any difference, I ONLY want to write png image, and ONLY with an alpha channel (not ever without), in case there are optimisations that that makes possible.
    If anyone can tell me how to address this problem, I'd be very grateful.
    Many thanks, Robert Redwood.

    This isn't a solution, but rather a refinement of the issue.
    Some of the sources I was reading were implying that the long save time might be due to a CPU heavy conversion process that had to take place before the BufferedImage could be saved. I decided to investigate:
    I loaded back in one of the (slowly) saved PNG images using ImageIO.read(file). Sure enough, the BufferedImage returned differed from the BufferedImage I had created. The biggest difference was the color model, which was DirectColorModel on the image I was generating, and was ComponentColorModel on the image I was loading back in.
    So I decided to manually convert the image to be the same as how it seemed to end up anyway. I wrote the following code:
    /**
     * Takes a BufferedImage object, and if the color model is DirectColorModel,
     * converts it to be a ComponentColorModel suitable for fast PNG writing. If
     * the color model is any other color model than DirectColorModel, a
     * reference to the original image is simply returned.
     * @param source The source image.
     * @return The converted image.
     */
    public static BufferedImage convertColorModelPNG(BufferedImage source)
    {
        if (!(source.getColorModel() instanceof DirectColorModel))
            return source;
        ICC_Profile newProfile = ICC_Profile.getInstance(ColorSpace.CS_sRGB);
        ICC_ColorSpace newSpace = new ICC_ColorSpace(newProfile);
        ComponentColorModel newModel = new ComponentColorModel(newSpace, true, false, ComponentColorModel.TRANSLUCENT, DataBuffer.TYPE_BYTE);
        PixelInterleavedSampleModel newSampleModel = new PixelInterleavedSampleModel(DataBuffer.TYPE_BYTE, source.getWidth(), source.getHeight(), 4, source.getWidth() * 4, new int[] { 0, 1, 2, 3 });
        DataBufferByte newDataBuffer = new DataBufferByte(source.getWidth() * source.getHeight() * 4);
        ByteInterleavedRaster newRaster = new ByteInterleavedRaster(newSampleModel, newDataBuffer, new Point(0, 0));
        BufferedImage dest = new BufferedImage(newModel, newRaster, false, new Hashtable());
        int[] srcData = ((DataBufferInt)source.getRaster().getDataBuffer()).getData();
        byte[] destData = newDataBuffer.getData();
        int j = 0;
        byte alpha = 0;
        for (int i = 0; i < srcData.length; i++)
        {
            j = i * 4;
            alpha = (byte)(srcData[i] >> 24);
            // band offsets are {R,G,B,A}, so the alpha byte belongs at j + 3
            destData[j + 3] = alpha;
            destData[j] = 0;
            destData[j + 1] = 0;
            destData[j + 2] = 0;
        }
        //Graphics2D g2 = dest.createGraphics();
        //g2.drawImage(source, 0, 0, null);
        //g2.dispose();
        return dest;
    }
    My apologies if that doesn't display correctly in the post.
    Basically, I create a BufferedImage the hard way, matching all the parameters of the image I get when I load in a PNG with alpha channel.
    The last bit, (for simplicity), just makes sure I copy over the alpha channel of old image to the new image, and assumes the color was black. This doesn't make any real speed difference.
    Now that runs lightning quick, but interestingly, see the bit I've commented out? The alternative to setting the ARGB values was to just draw the old image onto the new image. For a 640x512 image, this command (drawImage) took a whopping 36 SECONDS to complete! This may hint that the problem is to do with conversion.
    Anyhow, I got rather excited. The conversion went quickly. Here's the rub though, the image took 9 seconds to save using ImageIO.write, just the same as if I had never converted it. :(
    SOOOOOOOOOOOO... Why have I told you all this?
    Well, I guess I think it narrows down the problem, but eliminates some solutions (to save people suggesting them).
    Bottom line, I still need to know why saving PNGs using ImageIO is so slow. Is there any other way to fix this, short of writing my own PNG writer, and indeed would THAT fix the issue?
    For the record, I have a piece of C code that does this in well under a second, so it can't JUST be a case of 'too much number-crunching'.
    I really would appreciate any help you can give on this. It's very frustrating.
    Thanks again. Robert Redwood.
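    One thing worth measuring here (a sketch, not a guaranteed fix, since the linked bug report concerns the encoder itself): drive the PNG ImageWriter directly and disable ImageIO's disk-backed cache, so no temp-file I/O is added on top of the encoding. The class name PngSave is made up for the example:

```java
import java.awt.image.BufferedImage;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class PngSave {

    public static void save(BufferedImage image, File file) throws IOException {
        ImageIO.setUseCache(false); // stream to memory, not a temp file on disk
        ImageWriter writer = ImageIO.getImageWritersByFormatName("png").next();
        ImageOutputStream ios = ImageIO.createImageOutputStream(
                new BufferedOutputStream(new FileOutputStream(file)));
        try {
            writer.setOutput(ios);
            writer.write(new IIOImage(image, null, null));
        } finally {
            ios.close();
            writer.dispose();
        }
    }
}
```

    If the timing doesn't change, that points the finger squarely at the encoder, matching the bug report above.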

  • Photoshop CS6 using javaScript to truncate alpha channel name

    Hello,
    I'm a production artist and I work with PSD files that were created in Adobe Scene7 Image Authoring Tool. These PSDs contain a background layer along with 1-20 alpha channels. My script has to make a new blank layer for every alpha channel in the document. Then it fills the new layer with light gray. So far, my code accomplishes this. However, I'd like to apply the name of the alpha channel to the layer, but I need the name to be truncated. Every alpha channel starts with one or more characters followed by a backslash and then finishes with one or more characters. Here's an example:
    An alpha channel might be named:  Floor\floor
    In this example I need my layer name to be just:  floor. This means all characters to the left of the backslash, including the backslash itself, need to be discarded. I was using the subString() statement to do this. When I try to step through the code, line by line in ExtendScript, I immediately get an error that says Unterminated String Constant and Line 31 of my code is highlighted. I suspect it doesn't like the way I wrote the backslash character, although I surrounded it in double quotes to define it as a string.
    Can anyone tell me why I'm getting this error?
    Below is my code with lots of comments to walk you through the process. I wrote where the error occurs in red type.
    I'm new to JavaScript so I'm not sure my while loop is accurate.
    #target photoshop
    // The #target photoshop makes the script run in PS.
    // declare variable to contain the active document
    var myDoc=app.activeDocument;
    // declare variable to contain the number of alpha channels, excluding the RGB channels
    var alphaChan = myDoc.channels.length - 3;
    alert(alphaChan + " alpha channels exist");
    // create loop to make new layers based on number of alpha channels, fill layer with gray and apply alpha channel name to new layer
    for (a=0 ; a<alphaChan ; a+=1){
    // make new blank layer
    myDoc.artLayers.add();
    // fill blank layer with gray
    var color = new SolidColor();
    color.rgb.red = 161;
    color.rgb.green = 161;
    color.rgb.blue= 161;
    myDoc.selection.fill(color);
    //variable stores alpha channel name
    var alphaName = myDoc.channels[3+a];
    // variable stores lenght of alpha channel name
    var lz = alphaName.length;
    // declare index variable to initialize position of 1st  character of alpha channel name
    var x= 0 ;
    // truncate alpha channel name by removing all characters preceding the "\" symbol
    while (alphaName.subString(x) != "\"){          (ExtendScript gives an error for this line and highlights the backslash and surrounding quotation marks)
        alphaName = alphaName.subString((x+1),z);
        x+=1;
        z-=1;
    return alphaName;
    // remove the backslash from alpha channel name
    alphaName = alphaName.subString((x+1),z);
    //  apply truncated alpha channel name to corresponding layer
    myDoc.artLayers[a].name = alphaName;

    while (alphaName.subString(x) != "\"){ 
    should be
    while (alphaName.subString(x) != "\\"){ 
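    Beyond the escaping fix, two more things will bite in that loop: subString should be substring (JavaScript method names are case-sensitive), and myDoc.channels[3+a] is a Channel object, so you want its .name property. The whole character-by-character loop can also be replaced; a minimal sketch (truncateAlphaName is a made-up helper name):

```javascript
// Everything after the last backslash, or the whole name if there is none.
// "\\" in a string literal is a single literal backslash character.
function truncateAlphaName(fullName) {
    var idx = fullName.lastIndexOf("\\");
    return idx >= 0 ? fullName.substring(idx + 1) : fullName;
}

// Inside the loop, something like:
// myDoc.artLayers[a].name = truncateAlphaName(myDoc.channels[3 + a].name);
```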

  • Are Quicktimes with alpha channel gone in Keynote 6.0?

    Say it aint so. This is a very crucial feature of the app for my company.

    While presenting from an iOS device is certainly useful, I want to be able to do the best presentations possible. Looks like Keynote 5.3 will be the tool of choice for some time yet.
    Keynote 6, while a step forward for maybe 99% of users (that don't want to care about what works and what doesn't on various platforms), is a major leap backwards for the professional presentation business.
    Time for Apple - or perhaps better yet, some third party - to make a Keynote Pro and make it desktop only, if that's what it takes. Bring back the larger choice of transitions, support for QuickTime alpha channels and dynamic Quartz Compositions, interaction with master slide objects and mixing of themes in the same presentation. And don't stop there:
    Add parallel layers or tracks, so that some visuals or audio can play uninterrupted over several slides.
    Add dynamic inputs, so text and media fields can be populated by tweets or other RSS feeds.
    Add a proper theme editor, so you easily can change various defaults and maybe add custom border styles and stuff.
    Make the presenter's display more flexible, and allow for (simple) editing of slides while running the show.
    Phil Schiller may have the time and resources to let 3D animators render full frame wow!-inducing movies-as-transitions, but isn't it better to have more power within the application itself? And with Keynote 6, some of that power is gone.
    Maybe it'll be like with FCP X, where some of the lost features of the previous version got added back in an update. But as with anything Apple, who knows?

  • Flash Player for FLV files with alpha channel encoded

    My goal is to play the transparent-background Flash video in the bottom right-hand corner, similar to the video on this website: http://www.dropshipblueprint.com/
    I already have the FLV file with alpha channel encoded. I was made to understand I will need a special Flash player that can read the alpha channel in the FLV file to make the background transparent. Is this correct? If yes, then how do I accomplish this, or where do I get that player -- maybe an open-source player? If the player is not the solution, then how do I accomplish my end objective, taking into consideration I have the FLV file with the alpha channel encoded. Thanks for your help. Sam

    Sam,
    Welcome to the forum.
    Where do you need to play this "sprite" (the name for such a Flash video)?
    If you need to add that to a Video, then there could be a few challenges in PrE.
    If you need to add it to a Web site, then Flash Player (free from Adobe) should be able to display that.
    Can you please give us just a bit more info, on how you wish to use the sprite?
    Good luck,
    Hunt

  • ImageIO.read stripping alpha channel

    I've been chasing this one for the last few hours and haven't gotten anywhere.
    I've been trying to load a 32 bit bitmap (I'm using photoshop and I put a gradient on the alpha channel), and no matter what I do, I keep getting a 24 bit image from the api. I could really use some help on this one. This is the toString() of what should be a 32 bit image: BufferedImage@1f44f8a: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0 IntegerInterleavedRaster: width = 72 height = 72 #Bands = 3 xOff = 0 yOff = 0 dataOffset[0] 0
    And here's the code I'm using to load the image:
    public Image Fetch(String path) throws CommonException {
         BufferedImage image = null;
         try {
             image = ImageIO.read(new File(path));
             System.out.println(image.toString());
         } catch (IOException ex) {
             Error(ex.getMessage());
         }
         return image;
    }Thanks in advance.
    EDIT
    Here's the image I'm using: [http://img.photobucket.com/albums/v461/vaine0/thebitmap.jpg]
    Edited by: Ax.Xaein on Aug 1, 2009 2:03 AM

    I can probably help you, but you need to save the actual 32-bit BMP image on an image hosting site so I can take a look at it.
    If the image data contained 4 channels of information, but the BMPImageReader used a destination image of only 3 channels, then you would get an IOException complaining that the source bands don't match the destination bands.
    And I'm pretty sure the BMPImageReader can read version 4 and version 5 BMPs (the ones that support an alpha channel).
    Are you absolutely sure you have a BMP with an alpha channel? Not many applications support writing v4 and v5 BMPs.
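    A quick way to confirm what the reader actually handed back is to inspect the ColorModel of the returned image. Here's a minimal round-trip sketch: it writes a small ARGB image out as PNG (which always preserves alpha) and reads it back, just to illustrate the check; with your BMP you'd point read() at your own file instead.

    ```java
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class AlphaCheck {
        public static void main(String[] args) throws Exception {
            BufferedImage src = new BufferedImage(4, 4, BufferedImage.TYPE_INT_ARGB);
            src.setRGB(0, 0, 0x80FF0000); // one half-transparent red pixel

            File tmp = File.createTempFile("alphacheck", ".png");
            tmp.deleteOnExit();
            ImageIO.write(src, "png", tmp);

            BufferedImage back = ImageIO.read(tmp);
            // hasAlpha() tells you whether the channel survived the read;
            // getNumComponents() is 4 for ARGB, 3 for plain RGB.
            System.out.println("hasAlpha = " + back.getColorModel().hasAlpha());
            System.out.println("components = " + back.getColorModel().getNumComponents());
        }
    }
    ```

    In your toString() dump, the 3-band DirectColorModel with amask=0 means the reader either discarded the alpha data or never recognized it in the first place.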

  • No alpha channel when BCC lens flare applied

    Hello
    With the help of Andy Neil, I created a light sweep for a lighthouse logo. I added a BCC RT Lens Flare Adv filter to enhance the effect of the light sweeping past the viewer. It looked great.
    Then I got to thinking that the lighthouse logo with light sweep would make a great lower third, so I scaled it down to logo bug size and superimposed it over a lower third background.
    The problem is that the inclusion of the BCC Lens Flare filter seems to add a black layer over the entire movie so that the lower third cannot superimpose over my talking head. I know it's the BCC Lens Flare because when I deactivate the filter and export, the alpha channel is present and the lower third is superimposed over the footage.
    I've tried exporting with Lossless+Movie Alpha and with Animation, with the same result.
    My question: Can anyone tell me why this is happening and how I might be able to remedy?
    Thank you,
    Blake

    Yes, the lens flare is the problem. The generator is rendered as a blend mode which blends only with the elements in the project. The blend doesn't export as transparency because a blend mode is only transparent when combined with another layer.
    Here are your solutions:
    1) Composite the lower thirds with the video in Motion. If the bkg layer is in Motion, then the ADD blend mode applied to the lens flare will work properly.
    2) Create an alpha matte layer. This requires you to create a black and white image of your project that defines transparency for your lower 3rd. Export that as well, and then do a Composite Mode-Luma in FCP.
    3) Apply a Luma Key to your lens flare before export. This works, but is imperfect because the filter will inevitably key out some of the not-so-bright aspects of the lens flare, leaving you with a diminished result.
    4) (My Suggestion): Apply the lens flare in FCP. FCP contains the SAME lens flare generator used by Motion. This will give you the best possible result with the least amount of extra work on your part.
    Andy

  • Unexpected Alpha channel behavior in Premiere

    Is there no way to interpret alphas as straight versus premultiplied in Premiere?  Why do my transparent layers and clips with alpha channels display differently with a Black Video layer below them?
    Background info:
    I'm exporting my film to files with codecs that support alpha channels, not because I want an alpha in the result but because I need full RGB / 444 color coding for digital cinema theaters. I'm using ProRes444, Targa and PNG Image sequences to preserve as much quality as possible in my master exports.
    I've discovered that there is no option to disable the alpha transparency in the Export Media step while preserving at least 10 bits per color channel (Trillions of colors in AE).
    To solve this problem, I've had to add a "Black Video" clip to the first video track, underneath the film's content. When this is done, the pre-rendered text elements with alphas look different. They are movie trailer style text treatments that are light text over pure black. (I'm on my ipad right now and don't see a way to upload screen shots to show examples)
    The result with the black video layer applied at the bottom is similar to the way QuickTime 7 displays straight alphas incorrectly. This problem occurs with both straight and premultiplied renders. The rendered text is from after effects and is stored using the animation codec.
    I'm aware of how to work around this, by maintaining the alpha transparency in Premiere and then removing the alpha channel of my export by running it through After Effects, but that is a ridiculous, time consuming step.
    I'm working with the GPU Mercury Engine on with a K5000 from NVIDIA.  Can anyone explain this behavior? Is there no way to interpret alphas as straight versus premultiplied in Premiere?  Why do my transparent layers display differently with a Black Video layer below them?
    Here are the screen shots
    UNDESIRED EFFECT using Black Video layer at bottom: Notice the fuzzy glow surrounding the text. This is happening with both straight and premultiplied renders.
    INTENDED LOOK when composited over nothing: The glowing edges are displayed the same way in Premiere as in After Effects.

    In my understanding, the reason 444 is indicated is because they are not RGB codecs. They simply sample the same amount of color as is possible in true RGB codecs. They still require a conversion from RGB to YUV for coding to a file, which almost always introduces a very slight shift in values, though usually imperceptible or acceptable. If the codec is truly RGB there is no need to indicate luminance vs color sampling.
    ...I think.
    Again thank you for your awesome input! You've been a huge help here on my day off. :)

  • Alpha channel basics from Canon 7D video

    Maybe I'm missing something basic here, as I am pretty new to messing with alpha layers/channels on VIDEO. I otherwise have a pretty good understanding of alpha when it comes to animations, photos, etc.
    I am trying to make the black in a video of some flames into an alpha layer so I can use them as particles in After Effects.
    ( like in this video http://www.videocopilot.net/tutorial/green_smoke/ )
    I understand the codec/format restrictions for alpha channels, but the problem is I can't seem to get the black to come out as an alpha layer at all.
    I've tried Quicktime with the None codec and FLV and the other formats that are supposed to support alpha channels.
    I THINK what's wrong is that the format the Canon shoots in doesn't have an alpha channel natively, so exporting the video into different formats has no effect. If this is the case, how does one go about making the black parts of a video into an alpha channel?
    I know there are ways to make green into an alpha channel (greenscreening). I can't for the life of me figure out how to get Premiere OR After Effects to recognize the straight black as an alpha layer, though.
    I've looked at some 30 tutorials trying to figure this out, but it would appear I am incapable of asking the right question to get the answer I want.
    There's the export settings.
    Perhaps someone can point out my gross misunderstanding of what I'm trying to do, or otherwise set me on the right track.
    Below is a clip from within After Effects... the arrow shows that the clip SHOULD have alpha properties... I think, but when I turn off the alpha channel EVERYTHING disappears.
    Do I need to do something in After Effects?  I dunno, I'm lost.
    If someone thinks they can help I am willing to make a youtube video showing what I'm doing.

    awesome... exactly what I was looking for
    you'd think that after spending an hour or more asking the question "how do you add an alpha channel to a video," this would have come up
    having a weird issue now with it rendering a box in the lower right hand corner though... it bugs me that I don't know why it's there... but i'll just crop it out
    again... thank you!

  • Alpha channel issues

    Greetings,
    I'm trying to create a lower third using a still image that I created from a video clip using QuickTime Conversions. I then import it into FCP, drag it to a sequence and then add LiveType text to it. Playback in preview mode on my main sequence is fine, but when I render it the alpha channel disappears and a black background replaces my video, with the lower-third material still there. What happened to my alpha channel? Is there a problem or a bug in FCP 5.1.1? I'm bewildered.
    Thanks,
    Jordan

    Editmojo,
    I exported just the text from LiveType. I'm trying to create a customized lower third, but every time I render the lower-third nested sequence after I have added it to the main sequence, a black background takes over with the lower-third media still there. Ordinarily, when you follow the instructions that I did, it automatically keys over the video behind it. What happened? I did not have this problem with FCP 4.5. Is there an issue with FCP 5.1? I just spoke to a tech at Digital Juice regarding the use of their animations, and he said that there was an issue with FCP 5.1 when it came to alpha channels. Is that the case? Would the update solve the problem?
    Thanks,
    Jordan

  • Graphics, ImageIO, and 32-bit PNG images with alpha-channels

    I have a series of 32-bit PNG images, all with alpha channels. I'm using ImageIO.read(File) : BufferedImage to read the PNG image into memory.
    When I call graphics.drawImage( image, 0, 0, null ); I see the image drawn, however all semi-transparent pixels have a black background, only 100% transparent pixels in the source image are transparent in the drawn image.
    The Graphics2D instance I'm drawing to is obtained from a BufferStrategy instance (I'm painting onto an AWT Canvas).
    Here's my code:
    Loading the image:
    public static BufferedImage getEntityImage(String nom, String state) {
              if( _entityImages.containsKey(nom) ) return _entityImages.get( nom );
              String path = "Entities\\" + nom + "_" + state + ".png";
              try {
                   BufferedImage image = read( path );
                   if( image != null ) _entityImages.put( nom, image );
                   return image;
              } catch(IOException iex) {
                   iex.printStackTrace();
                   return null;
              }
         }

         private static BufferedImage read(String fileName) throws IOException {
              fileName = Program.contentPath + fileName;
              File file = new File( fileName );
              if( !file.exists() ) return null;
              return ImageIO.read( file );
         }Using the image:
    Graphics2D g = (Graphics2D)_bs.getDrawGraphics();
    g.setRenderingHint( RenderingHints.KEY_ANTIALIASING , RenderingHints.VALUE_ANTIALIAS_ON);
    public @Override void render(RenderContext r) {
              Point p = r.v.translateWorldPointToViewportPoint( getLoc() );
              int rad = getRadius();
              int x = (int) p.x - (rad / 2);
              int y = (int) p.y - (rad / 2);
              BufferedImage image = Images.getEntityImage( getCls(), "F" );
              r.g.drawImage( image, x, y, null );
         }
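    One way to separate "bad image data" from "bad drawing surface" is to composite entirely off-screen, away from the Canvas and BufferStrategy. Here's a hypothetical check along those lines; it substitutes a hand-built half-transparent pixel, but you'd draw your loaded entity PNG the same way.

    ```java
    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    public class BlendCheck {
        public static void main(String[] args) {
            // A 1x1 "overlay": roughly 50% transparent white.
            BufferedImage overlay = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
            overlay.setRGB(0, 0, 0x80FFFFFF);

            // An opaque blue "background" to composite onto.
            BufferedImage dst = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            g.setColor(Color.BLUE);
            g.fillRect(0, 0, 1, 1);
            g.drawImage(overlay, 0, 0, null); // default composite is SrcOver
            g.dispose();

            // A correct blend comes out a light blue (around 0x8080ff);
            // pure black here would mean the alpha was lost before drawing.
            System.out.printf("blended pixel = %06x%n", dst.getRGB(0, 0) & 0xFFFFFF);
        }
    }
    ```

    If the PNG blends correctly here but still shows black fringes on the Canvas, the problem is in the BufferStrategy surface or the draw order, not in ImageIO.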

    You may want to check what ImageReaders are available on your system; it could be that ImageIO is just not picking the best one. If so, you can use getImageReaders to get an iterator of image readers, then choose the more appropriate one.
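    A minimal sketch of that suggestion, listing the readers registered for the "png" suffix (the class names printed will vary by JRE and by any extra plug-ins installed, such as JAI Image I/O):

    ```java
    import java.util.Iterator;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReader;

    public class ListReaders {
        public static void main(String[] args) {
            // ImageIO.read() uses the first reader this iterator returns;
            // listing them shows which plug-in actually handles your files.
            Iterator<ImageReader> it = ImageIO.getImageReadersBySuffix("png");
            while (it.hasNext()) {
                System.out.println(it.next().getClass().getName());
            }
        }
    }
    ```

    Swapping in a different suffix ("bmp", "jpg", ...) shows the candidates for each format.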
