OpenGL Blending

How do I do this using OpenGL - how do I use the various blending modes to accomplish the following? I am using a texture as a "brush": its background is black with a white circle in the middle. After I "paint" on screen, I would like the black pixels on screen to remain black, and all other colors to become white.
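
One common way to get that behavior - assuming painting should only ever brighten the canvas - is additive blending, i.e. glBlendFunc(GL_ONE, GL_ONE): the black parts of the brush add nothing, while the white circle drives whatever is underneath toward white. A minimal Python sketch of the per-channel arithmetic the blend stage performs (not actual OpenGL calls):

```python
def additive_blend(dst, src):
    """Simulate glBlendFunc(GL_ONE, GL_ONE): result = clamp(dst + src, 0, 255)."""
    return tuple(min(d + s, 255) for d, s in zip(dst, src))

# The black part of the brush leaves the canvas untouched...
print(additive_blend((0, 0, 0), (0, 0, 0)))            # -> (0, 0, 0)
# ...while the white circle drives any underlying color to full white.
print(additive_blend((40, 120, 200), (255, 255, 255)))  # -> (255, 255, 255)
```

Painting the white circle over any color saturates it to white, while the brush's black background, and any unpainted black screen pixels, stay at (0, 0, 0).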

Apple has dropped support for pre-3GS devices because, to pull off the features and performance in iOS 4.3, they had to develop just for the ARMv7 chip and not the ARMv6. They probably have a good sense of how many people still have the older devices, and think it's OK not to update the software on those devices.
The same may be true for Adobe.
I am trying to find out how to make AIR 2.6 and AIR 2.0 live together in Flash CS5. If I work it out I'll let you know.

Similar Messages

  • Gateway computer not able to activate opengl in after effects?

    I got a Gateway computer (bad, bad idea...) and it came with an ATI Radeon HD 4650 with 1GB of VRAM, an AMD Phenom quad 9750, and 8GB of DDR2 RAM, yet After Effects won't let me use my card to render in OpenGL. I have the latest OpenGL drivers, and the latest drivers Gateway has published (the normal drivers can't finish installing because of something I'm sure Gateway is behind). On PS it wouldn't allow it either, but I did the registry hack and it runs fine there now. I'd like it working in After Effects too, so I can do a faster render than 10 minutes for 10 seconds of preview.

    but it should be a one-time render, then just composites, right?
    Sure, AE will only read the frame from the source file once, but if the comp is used 30 times, 30 individual comp buffers need to be calculated, possibly involving complex blending operations with other layers or with the footage itself if it's used as a nested comp. Furthermore, since you mention 100fps, there may be temporal operations involved when time-stretching/time-remapping, resulting in look-ups of multiple source frames per single comp frame, which can take a while, especially with compressed sources that require decoding larger chunks since they are GOP-based. Could be perfectly normal, if you ask me, but there may be room to improve performance by checking your comps and optimizing things here and there.
    Mylenium

  • Firefox 9 crashes with other opengl apps open

    With Blender 2.6 and the latest FF9 windows open, either one is prone to crash intermittently. This does not happen when working with multiple opengl applications, such as Lightwave, Cinema4d and Blender - only when Firefox is added into the mix will it crash the video card, or crash Blender, or crash Firefox. This is repeatable.
    It also occurred with FF8 at times, but FF9 is much worse.
    It means I cannot use Firefox while working in any other application with an opengl viewport.
    Video driver: Catalyst 11.8 /OpenGL Version 6.14.10.11005
    My machine specs:
    Win7 64bit - i7 [email protected], p6t Deluxe v1, 48gb (6x8gb RipjawsX), ATI 5870 1gb, Revodrive X2 240gb, e-mu 1820

    Do you have any crash IDs?
    Did you try to update all plugs (e.g. Flash and Silverlight)?
    If you have submitted Breakpad crash reports then post the IDs of one or more Breakpad crash reports (bp-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).
    You can find the IDs of the submitted crash reports on the about:crashes page.
    You can open the about:crashes page via the location bar, like you open a website.
    See:
    *http://kb.mozillazine.org/Mozilla_Crash_Reporter
    *https://support.mozilla.org/kb/Mozilla+Crash+Reporter

  • OpenGL issue 10.4.3 + Nvidia

    I have a blender (www.blender.org) issue with 10.4.3. Blender stopped working after upgrade on my nVIDIA based Powerbook. I've seen topics on an openGL issue about the Simpsons which seems to indicate this direction.
    Any hint, workaround, similar experience out there?
    Thomas
    Powerbook G4 12"   Mac OS X (10.4.3)

    It has been solved with 10.4.4

  • OpenGL inaccuracies and black levels

    Here is an sRGB file consisting of 0,0,0 background, with five overlaid squares at one level increments, from 1 to 5.
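
Such a file is easy to regenerate for testing. Here is a rough pure-Python sketch that writes a binary PPM with a 0,0,0 background and five squares at levels 1 through 5 (the filename, square size, and layout are my assumptions, and a PPM carries no embedded sRGB profile, so assign one after opening):

```python
def write_levels_test(path="levels_test.ppm", square=64, gap=16):
    """Write a binary PPM: black background, five squares at levels 1..5."""
    levels = [1, 2, 3, 4, 5]
    w = gap + len(levels) * (square + gap)
    h = square + 2 * gap
    pixels = bytearray(w * h * 3)                  # starts all (0, 0, 0)
    for i, level in enumerate(levels):
        x0 = gap + i * (square + gap)
        for y in range(gap, gap + square):
            for x in range(x0, x0 + square):
                off = (y * w + x) * 3
                pixels[off:off + 3] = bytes((level, level, level))
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (w, h))
        f.write(pixels)
    return w, h

write_levels_test()
```

Viewed against the black background, all five squares should be just barely discernible on a well-calibrated display.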
    On my Eizo CG246, calibrated with Eizo ColorNavigator, all five squares are discernible against the black background, as long as Graphics Processor > Advanced is set to "Basic" mode. This means that the color management logic (the conversion to the display profile) is performed in the CPU.
    In "Normal" and "Advanced" modes, color management is shifted to the GPU. With these settings, black levels disappear. It's not dramatic, but it's there. The difference can be illustrated in a screenshot, as long as the display profile is assigned and the screenshot is pasted in the Basic mode. Here's how it looks straight up (should be viewed against a dark background):
    Here I put a Curves layer on top to exaggerate:
    And here I read out the numbers from this exaggerated version:
    So far it seems this can not be reproduced on another monitor I have, an NEC P232W calibrated with Spectraview II. It also seemed a bit more pronounced on a different Eizo I no longer use, an S2243W, calibrated with Eizo EasyPix.
    This suggests that the problem is in the interaction between the display profile and the Open GL engine that does the conversion. I think this is related to the ProPhoto cyan banding issue previously reported, because that also seems to behave differently between these two systems (I'll do some more testing on that).
    In all cases and all scenarios, all irregularities disappear with Open GL in the "Basic" setting.

    You neglected to include your test image with black and levels (1,1,1) through (5,5,5), but no matter, I have the one I made...
    http://Noel.ProDigitalSoftware.com/ForumPosts/DarkGrayColorLevels16Bit.png
    I've found that here I'm not seeing a visible difference between GPU and CPU color-management with the image in the sRGB color space.  In other words, no crush.  With the image in the ProPhoto RGB color space, some slight crush was apparent, along with color shifts almost exclusively toward red.
    However, that's not to say there are no differences between the two.  They're just more subtle than what you're seeing.  What I did to test was this:
    Open the image I linked to above using Advanced mode OpenGL.
    Float it in a window.
    Screen grab it.
    Pick it up and start to drag it.
    While still dragging, screen grab it again.
    Overlay the two images, and pixel align them.
    Set the top image to "Difference" blending.
    Add a couple of Curves layers over the top to greatly enhance the differences to make them more visible.
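
The overlay-and-difference comparison in the steps above is easy to reason about numerically: "Difference" blending is per-channel absolute difference, so two identical renders cancel to pure black, and anything left over is a genuine GPU/CPU discrepancy. A small Python sketch (the gain factor standing in for the Curves layers is arbitrary, and the pixel values are hypothetical):

```python
def difference(a, b):
    """Photoshop's "Difference" blend: per-channel absolute difference."""
    return tuple(abs(x - y) for x, y in zip(a, b))

def boost(px, gain=50):
    """Crude stand-in for stacked Curves layers: amplify tiny residues."""
    return tuple(min(c * gain, 255) for c in px)

gpu_render = (3, 3, 3)   # hypothetical pixel from the GPU-managed path
cpu_render = (2, 3, 3)   # hypothetical pixel from the CPU-managed path
print(boost(difference(gpu_render, cpu_render)))  # -> (50, 0, 0)
```

A one-level residue is invisible on its own, which is why the Curves boost is needed to make the GPU/CPU discrepancy show up in the screenshots.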
    Since my test image has a dark grayscale gradient expressed in 16 bits/channel, it's not only testing for accuracy at the visible level, but also for very subtle changes.  Lo and behold, changes are revealed.  Note:
    Enhanced differences between GPU and CPU rendering of the image in the sRGB color space:
    Assigning the ProPhoto RGB color space to my test image, then comparing GPU vs. CPU rendering...
    -Noel

  • Problems with blending modes

    Hi,
    I have a layer with a solid color (212º,93,78) on top of which I have another layer with a solid color (205º,100,74).  Call this second layer the blend layer. I then choose for it a blending mode of Color, and the resulting color in the top layer is (205º,100, 74). If instead I choose the luminosity blending mode, the resulting color in the top layer is (212º,93,78).
    ---all these are HSB coordinates.
    According to the documentation, color and luminosity blending modes should behave differently:
    Color
    Creates a result color with the luminance of the base color and the hue and saturation of the blend color. This preserves the gray levels in the image and is useful for coloring monochrome images and for tinting color images.
                              ------ the luminance of the base color is 78 (at least the brightness anyway - I know of no method to get the luminance)
    Luminosity
    Creates a result color with the hue and saturation of the base color and the luminance of the blend color. This mode creates the inverse effect of Color mode.
                   -------  in this case everything is equal, which seems to me is not right.
    Some light?
    Juan Dent
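
For what it's worth, the documented Color/Luminosity behavior matches the standard non-separable blend mode definitions (as published in the PDF/ISO 32000 compositing spec); Photoshop's exact luma weights aren't documented, so the Rec. 601 weights below are an assumption. A Python sketch on normalized 0-1 RGB:

```python
# Non-separable blend modes per the PDF/ISO 32000 compositing spec.
# The 0.3/0.59/0.11 luma weights (Rec. 601) are an assumption;
# Photoshop's exact weights may differ.
def lum(c):
    r, g, b = c
    return 0.3 * r + 0.59 * g + 0.11 * b

def clip_color(c):
    """Pull out-of-gamut channels back toward the luminance."""
    l, n, x = lum(c), min(c), max(c)
    if n < 0:
        c = [l + (ci - l) * l / (l - n) for ci in c]
    if x > 1:
        c = [l + (ci - l) * (1 - l) / (x - l) for ci in c]
    return c

def set_lum(c, l):
    """Shift c so its luminance becomes l, clipping to gamut."""
    d = l - lum(c)
    return clip_color([ci + d for ci in c])

def blend_color(base, blend):       # hue + saturation of blend, luminance of base
    return set_lum(blend, lum(base))

def blend_luminosity(base, blend):  # hue + saturation of base, luminance of blend
    return set_lum(base, lum(blend))
```

Note that if your two colors happen to have (nearly) the same luminance - which brightness values of 78 and 74 make quite plausible - then Color mode returns (nearly) the blend color and Luminosity returns (nearly) the base color, which is consistent with what you observed.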

    Generally speaking, Photoshop works fine with Windows 8 but the display drivers aren't all up to the task yet, and Photoshop uses the GPU for a lot of stuff.
    What video card do you have?
    You might find you can go into Edit - Preferences - Performance in Photoshop and disable the usage of the GPU (the setting is called [  ] Use Graphics Processor in Photoshop CS6, and [  ] Enable OpenGL Drawing in earlier versions).  It's also possible that you might find things work better if you leave the GPU enabled but change its advanced mode to a lower setting, such as Basic.
    In a general answer to your question, yes, I imagine there will be updates through Windows Update and from Adobe that will correct the problems.  You may be able to accelerate the fix by visiting the web site of the maker of your video card and downloading their recent driver release (though I caution against this with ATI drivers right now - their latest aren't really too good).
    -Noel

  • Blending Modes ("vivid light") in After Effects?

    I discovered that in After Effects CS6 (Win7) some blending modes seem to not work as expected.
    I have a bitmap with transparency on top of another bitmap and want to blend it with "vivid light" - but the result is the same as if no blending mode were selected at all. The same goes for "hard light", "linear light" and "hard mix".
    Other blending modes like "multiply", "soft light" and "overlay" work just fine.
    Does anyone have an idea what is wrong with my After Effects?
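
For reference, "vivid light" is commonly described as color burn where the blend channel is below 50% and color dodge above it (this is the widely cited formula, not one Adobe publishes), so a 50% gray blend pixel should leave the base untouched. A Python sketch on normalized 0-1 channels:

```python
def vivid_light(a, b):
    """a = base channel, b = blend channel, both normalized to 0..1.

    Color burn for b <= 0.5, color dodge for b > 0.5 (widely cited
    formula; Adobe does not publish the exact one)."""
    if b <= 0.5:
        return 0.0 if b == 0 else max(0.0, 1 - (1 - a) / (2 * b))
    return 1.0 if b == 1 else min(1.0, a / (2 * (1 - b)))

print(vivid_light(0.4, 0.5))  # 50% gray blend leaves the base channel unchanged
```

A practical check: fill a patch of the pattern layer with exactly 50% gray; if vivid light is really being evaluated, the background should show through that patch unchanged.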

    I'm sorry for not being as specific as requested in the FAQ. I thought it might be more helpful to reduce the problem description to the most relevant facts first, and see if it can be solved in a simple way.
    My former experience with providing all available information in the first post is that it's usually met with no answers at all, because people think: WOAH! Too much information. ;-)
    Okay, I designed my video in Photoshop CS6 first. This is how the design was supposed to look - as it was composed in Photoshop.
    There are two layers:
    1. a pattern of semi-transparent patches on top and
    2. a blue gradient background
    the pattern (1) is composited with blending mode "vivid light" on the background (2).
    After I exported both layers as PNGs (24-bit with alpha), imported them into After Effects, and used the same settings as in Photoshop, I got the following:
    As you can see, I used "vivid light" as the blending mode, but it looks exactly as if "normal" were used.
    The render quality is set to final quality.
    The same happens with the blending modes "hard light", "linear light" and "hard mix" (I guess you don't need screenshots, because they all look the same)
    But other blending modes like "multiply", "soft light" and "overlay" work fine (in this case I used "overlay"):
    So, the problem is: Why do some blending modes not work, while others work?
    BTW: I also tried to import the PSD directly into After Effects with all layers retained, in case something was wrong with the PNGs that I exported - but the imported PSD produced exactly the same error. So, no difference there.
    Now for all the questions from the FAQ - I think most of them will not help to identify the issue, but I will try to answer all of them:
    What version of After Effects? Include the minor version number (e.g., After Effects CS5.5 with the 10.5.1 update).
    CS6 (11.0.2)
    Have you installed the recent updates? (If not, you should. They fix a lot of problems.)
    yes, I have
    What operating system? This should include specific minor version numbers, like "Mac OSX v10.6.8"---not just "Mac".
    Windows 7, 64bit latest version, all updates installed
    What kind(s) of source footage? When telling about your source footage, tell us about the codecs, not just the container types. For example, "H.264 in a .mov container", not just "QuickTime".
    Any footage; it doesn't matter what footage I use. In this specific case it was 2 PNGs (24-bit with alpha) - see above.
    If you are getting error message(s), what is the full text of the error message(s)?
    there is no error message
    What were you doing when the problem occurred?
    see above
    Has this ever worked before? If this worked before but doesn't work now, what has changed on your computer in the intervening time?
    it's the first time I tried
    What other software are you running?
    mainly software like Google Chrome or Microsoft Office - I'm not sure they have anything to do with the problem
    Do you have any third-party effects or codecs installed?
    I have Magic bullet looks installed, but since I don't use it in this project, I don't think it's got anything to do with the problem
    Tell us about your computer hardware. Include CPU type and number of processor cores, amount of RAM installed, GPU, number of disk drives, and how any external drives are connected.
    Intel something, 6 cores, 12GB RAM, and NVIDIA GTX 590 graphics card, (driver 306.97)
    Do you have any third-party I/O hardware (e.g., AJA, Matrox, Blackmagic, MOTU)?
    no
    Are you using OpenGL features in After Effects?
    I don't know... I thought OpenGL in CS6 is all automatic and cannot be turned off or on?
    But I switched quality to "final render".
    Does the problem only happen with your final output, RAM preview, or both?
    everywhere
    Are you using Render Multiple Frames Simultaneously multiprocessing?
    yes. but if I turn it off, nothing changes
    What is the exact sequence of steps that you are taking?
    see above
    hope this helps

  • Blend Mode is not working

    Ok, so randomly blend mode just completely stopped working on the image I was working with, I tried other images with the same result. I called Adobe and they had me go to %appdata% and extract a folder called Photoshop CS5 settings on to my desktop. It worked for a second, but stopped working as soon as I closed the image I had tested it on. Has anyone had this problem? Any help would be greatly appreciated!
    Thanks,
    Nolan

    Did they advise you to update your video drivers from the web site of the maker of your video card?
    What happens if you disable the OpenGL Drawing setting in Edit - Preferences - Performance?
    -Noel

  • Leopard 10.5.2, Blender, and MacBook

    I had been eagerly waiting for Leopard to be updated to 10.5.2 to see if the bug that Blender is suffering from on the MacBook that causes horrible UI performance was fixed on Apple's end. Unfortunately it looks like it is still there. The Blender devs in their forums claim that it is an OpenGL bug with the Intel GMA950 driver that Apple needs to deal with so they are refusing to even try to fix it on their end.
    Poking around /System/Libraries/Extensions showed that after 10.5.2 was installed and the Graphics Update was applied, I had two new Intel related kext files, one IntelIntegratedFrameBuffer.kext and one for the X3100. All of the GMA950 kexts appear to still be the 10.5.1 versions. (Their last date of being modified are still Oct of 2007.)
    Can anybody confirm that the MacBook saw any fixes for graphics and is indeed using an updated driver after 10.5.2 and the Graphics Update? Anybody from Apple care to comment? It would be nice to be able to actually try to use Blender and get my head around its UI if not for this annoying problem.

    DJ_Boxer wrote:
    On the website it says that the minimum required is 144MB of video RAM, and that is not available on current or previous MacBooks because the integrated graphics card shares video memory with the OS. One thing to remember: Blender may be free and open source, but it is a real program and needs good hardware. I was also wondering what kind of RAM you guys have on your MacBooks?
    DJ
    Update to previous post:
    Go to the website: http://www.apple.com/support/downloads and download and reapply the updates from there.
    Operating Systems
    Windows 98, ME, 2000, XP or Vista
    Mac OS X 10.2 and later
    Linux 2.2.5 i386
    Linux 2.3.2 PPC
    FreeBSD 6.2 i386
    Irix 6.5 mips3
    Solaris 2.8 sparc
    Minimal specs for Hardware
    300 MHz CPU
    128 MB Ram
    20 MB free hard disk Space
    1024 x 768 px Display with 16 bit color
    3 Button Mouse
    Open GL Graphics Card with 16 MB Ram
    Optimal specs for Hardware
    2 Ghz dual CPU
    2 GB Ram
    1920 x 1200 px Display with 24 bit color
    3 Button Mouse
    Open GL Graphics Card with 128 or 256 MB Ram
    *Here are my recommendations*:
    1: Enough Video Memory and Ram
    2: Make sure you are rendering w/o other open applications
    3: repair permissions and make sure you have all updates including the graphic update
    Message was edited by: DJ_Boxer

  • HELP!  OpenGL Drawing Disabled in CS5...

    I'm running dual Nvidia GeForce 8800 GTs w/ SLI enabled on a Windows 7 32-bit machine.  The checkbox to enable OpenGL drawing is disabled (as is the Advanced options button), and where it says "Detected Video Card:" it's just blank.  I should note - Photoshop CS4 was working just fine with OpenGL Drawing enabled and full GPU acceleration.  The Adobe documentation states that the 8800 GT is fully supported for CS5...
    I've scoured the support forums on both Nvidia and Adobe and no one seems to be able to answer this.  Yep - I'm running the latest Nvidia drivers (v. 258.96), I've uninstalled and reinstalled CS5, and totally cleaned my registry.
    I'm aware of the option to download the test tools from Adobe and force hardware emulation mode to get the OpenGL options to work, but that's really not a solution in my opinion.  That's basically bypassing the acceleration I should be getting from Nvidia.  It's pretty clearly stated on the download page that Adobe doesn't recommend it as a solution either.
    Is anyone else having this issue?  I'm incredibly frustrated and would greatly appreciate any/all information anyone has...

    My video cards are dual GeForce 8800 GTs w/ SLI.  According to Nvidia  and Adobe's documentation, they are fully capable of driving Photoshop  CS5.  I run ZBrush, SoftImage, Blender, and a slew of other much more  graphically intensive apps with no issues.
    I installed the latest version of the Nvidia drivers; after that didn't work, I ran the latest beta drivers, which also didn't work.  There's got to be a fix from either Nvidia or Adobe for this.  The cards are fully capable and are listed in the index of supported cards on Adobe's site.

  • Photoshop CS4/Vista Blending Options "Mouse" Frozen

    Odd issue that I may have caused while working fast and not paying attention. I MAY have clicked "do you want to remove all your prefs?" when it came up while using CTRL+SHIFT+ALT+S (Save for Web & Devices). Since then, my mouse won't work in the Blending Options box. Example: I can choose Gradient but cannot click the default gradient to get the list of my gradients. I also cannot switch from Gradient to (e.g.) Bevel in the same dialog box with my mouse; I have to go back to the layer, open Blending Options, choose the one I want, and can use only SOME of the options. The mouse just won't work in the Blending Options box.
    I renamed my prefs to "old" and let Photoshop rebuild them. Same issue.
    Does anyone have any ideas? Many thanks in advance.

    I am having the same problem using CS2. I cannot click on any of the checkboxes in Blending Options.
    I don't think CS2 uses OpenGL.
    Any suggestions are welcome.

  • Layer-Blending Options is frozen in CS3?

    When I navigate to Layer > Blending Options, all the blending options are frozen. For example, say I want to change 'Drop Shadow': I hover over it and click it, but it does not take me to the drop shadow options; and if I want to change the master opacity by sliding the bar, I can't do so. Blending Options is the only tool in Photoshop that is freezing; all other tools/brushes/actions/presets are working properly. Using:
    OS-Windows XP SP2.
    Adobe Photoshop- CS3

    From the sound of your post, it's probably not user error (e.g., not having the proper layer selected, etc.) but some kind of system problem.
    If you have intermittent display issues, you might want to check to see if you can update your display driver - which implements OpenGL for Photoshop to use.
    Go to the web site of the maker of your video card, locate the latest driver package that matches your hardware and OS, and download/install it.  Driver updates solve a surprising number of problems.
    -Noel

  • IPhone: OpenGL ES Texturing

    I just began writing an iPhone game and I'm having difficulties displaying multiple textures.
    Only the last texture I generate is appearing and all the textures I generated previously appear white.
    Here are the major rendering functions so far; is there anything that stands out as incorrect?
    //Generates textures for a game object
    - (void)GenerateTextures:(Hairball::Objects)Obj
    {
        CGImageRef spriteImage;
        CGContextRef spriteContext;
        GLubyte *spriteData;
        size_t width, height;
        GLuint *tempTex;
        // Create a Core Graphics image from an image file
        switch (Obj)
        {
            case Hairball::PLAYER:
                spriteImage = [UIImage imageNamed:@"Sprite.png"].CGImage;
                tempTex = &(TextureArray[0]);
                break;
            case Hairball::BACKGROUND:
                spriteImage = [UIImage imageNamed:@"BG1.png"].CGImage;
                tempTex = &(TextureArray[1]);
                break;
            case Hairball::HUD:
                spriteImage = [UIImage imageNamed:@"Icon.png"].CGImage;
                tempTex = &(TextureArray[2]);
                break;
            default:
                break;
        }
        // Get the width and height of the image
        width = CGImageGetWidth(spriteImage);
        height = CGImageGetHeight(spriteImage);
        if (spriteImage)
        {
            // Allocate the memory needed for the bitmap context
            spriteData = (GLubyte *) malloc(width * height * 4);
            // Use the bitmap creation function provided by the Core Graphics framework.
            spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
            // After you create the context, you can draw the sprite image to the context.
            CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), spriteImage);
            // You don't need the context at this point, so release it to avoid memory leaks.
            CGContextRelease(spriteContext);
            // Use OpenGL ES to generate a name for the texture.
            glGenTextures(1, tempTex);
            // Bind the texture name.
            glBindTexture(GL_TEXTURE_2D, *tempTex);
            // Specify a 2D texture image, providing a pointer to the image data in memory
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
            // Release the image data
            free(spriteData);
        }
    }

    //Inits OpenGL for drawing
    - (void)setupView
    {
        //Create an object manager to handle the game objects
        ObjectMgr = new ObjectManager();
        //Create objects
        GameObject* Object1 = new GameObject(Hairball::BACKGROUND, 0.0f, 0.0f, 0.0f, 0.0f, 3.2f, 3.2f);
        ObjectMgr->AddToList(Object1);
        GameObject* Object2 = new GameObject(Hairball::PLAYER, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f);
        ObjectMgr->AddToList(Object2);
        //Generate all textures
        [self GenerateTextures:Hairball::PLAYER];
        [self GenerateTextures:Hairball::BACKGROUND];
        //For each object assign a texture
        std::list<GameObject*>::iterator Object = ObjectMgr->mObjectList.begin();
        std::list<GameObject*>::iterator End = ObjectMgr->mObjectList.end();
        //For each game object
        while (Object != End)
        {
            //Apply a texture to each object
            switch ((*Object)->mObjectType)
            {
                case Hairball::PLAYER: (*Object)->mSpriteTexture = TextureArray[0]; break;
                case Hairball::BACKGROUND: (*Object)->mSpriteTexture = TextureArray[1]; break;
                case Hairball::HUD: (*Object)->mSpriteTexture = TextureArray[2]; break;
                default:
                    break;
            }
            ++Object;
        }
        // Clear the view with grey
        glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Enable use of the texture
        glEnable(GL_TEXTURE_2D);
        // Set a blending function to use
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        // Enable blending
        glEnable(GL_BLEND);
    }

    - (void)drawView
    {
        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
        glViewport(0, 0, backingWidth, backingHeight);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
        glMatrixMode(GL_MODELVIEW);
        //Clear the backbuffer
        glClear(GL_COLOR_BUFFER_BIT);
        std::list<GameObject*>::iterator Object = ObjectMgr->mObjectList.begin();
        std::list<GameObject*>::iterator End = ObjectMgr->mObjectList.end();
        //For each game object
        while (Object != End)
        {
            // Bind the texture name.
            glBindTexture(GL_TEXTURE_2D, (*Object)->mSpriteTexture);
            //Apply transformations
            glPushMatrix();
            glTranslatef((*Object)->mPosition.x, (*Object)->mPosition.y, (*Object)->mPosition.z);
            glRotatef((*Object)->mRotation, 0.0f, 0.0f, 1.0f);
            (*Object)->mRotation += 0.5f; //Rotate
            //Get the current sprite's vertices
            glVertexPointer(2, GL_FLOAT, 0, (*Object)->mSpriteVertices);
            glEnableClientState(GL_VERTEX_ARRAY);
            //Get the current sprite's tex coords
            glTexCoordPointer(2, GL_SHORT, 0, (*Object)->mSpriteTexcoords);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            //Render the vertex array
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
            //Pop the transformation matrix
            glPopMatrix();
            ++Object;
        }
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    }

    The value generated by glGenTextures() in (*Object)->mSpriteTexture is 1 for the BACKGROUND object and 2 for the sprite object, which appear to be correct values.
    If I exchange:
    [self GenerateTextures:Hairball::PLAYER];
    [self GenerateTextures:Hairball::BACKGROUND];
    To
    [self GenerateTextures:Hairball::BACKGROUND];
    [self GenerateTextures:Hairball::PLAYER];
    then my scene goes from this:
    http://img207.imageshack.us/img207/8229/picture1nf6.png
    to this:
    http://img253.imageshack.us/img253/5282/picture2gr8.png
    So both textures draw, just not at the same time?
    Oh, here it is formatted better:
    //Generates textures for a game object
    - (void)GenerateTextures:(Hairball::Objects)Obj
    {
        CGImageRef spriteImage;
        CGContextRef spriteContext;
        GLubyte *spriteData;
        size_t width, height;
        int texIndex = 0;
        // Create a Core Graphics image from an image file
        switch (Obj)
        {
            case Hairball::PLAYER:
                spriteImage = [UIImage imageNamed:@"Sprite.png"].CGImage;
                texIndex = 0;
                break;
            case Hairball::BACKGROUND:
                spriteImage = [UIImage imageNamed:@"BG1.png"].CGImage;
                texIndex = 1;
                break;
            case Hairball::HUD:
                spriteImage = [UIImage imageNamed:@"Icon.png"].CGImage;
                texIndex = 2;
                break;
            default:
                break;
        }
        // Get the width and height of the image
        width = CGImageGetWidth(spriteImage);
        height = CGImageGetHeight(spriteImage);
        if (spriteImage)
        {
            // Allocate the memory needed for the bitmap context
            spriteData = (GLubyte *) malloc(width * height * 4);
            // Use the bitmap creation function provided by the Core Graphics framework.
            spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
            // After you create the context, you can draw the sprite image to the context.
            CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), spriteImage);
            // You don't need the context at this point, so release it to avoid memory leaks.
            CGContextRelease(spriteContext);
            // Bind the texture name.
            glBindTexture(GL_TEXTURE_2D, TextureArray[texIndex]);
            // Specify a 2D texture image, providing a pointer to the image data in memory
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
            // Release the image data
            free(spriteData);
        }
    }

    //Inits OpenGL for drawing
    - (void)setupView
    {
        //Create an object manager to handle the game objects
        ObjectMgr = new ObjectManager();
        //Create objects
        GameObject* Object1 = new GameObject(Hairball::BACKGROUND, 0.0f, 0.0f, 0.0f, 0.0f, 3.2f, 3.2f);
        ObjectMgr->AddToList(Object1);
        GameObject* Object2 = new GameObject(Hairball::PLAYER, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f);
        ObjectMgr->AddToList(Object2);
        //Generate all textures
        glGenTextures(NUM_TEXTURES, TextureArray); // Use OpenGL ES to generate names for the textures.
        [self GenerateTextures:Hairball::BACKGROUND];
        [self GenerateTextures:Hairball::PLAYER];
        //For each object assign a texture
        std::list<GameObject*>::iterator Object = ObjectMgr->mObjectList.begin();
        std::list<GameObject*>::iterator End = ObjectMgr->mObjectList.end();
        //For each game object
        while (Object != End)
        {
            //Apply a texture to each object
            switch ((*Object)->mObjectType)
            {
                case Hairball::PLAYER: (*Object)->mSpriteTexture = TextureArray[0]; break;
                case Hairball::BACKGROUND: (*Object)->mSpriteTexture = TextureArray[1]; break;
                case Hairball::HUD: (*Object)->mSpriteTexture = TextureArray[2]; break;
                default:
                    break;
            }
            ++Object;
        }
        // Clear the view with grey
        glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Enable use of the texture
        glEnable(GL_TEXTURE_2D);
        // Set a blending function to use
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        // Enable blending
        glEnable(GL_BLEND);
    }

    - (void)drawView
    {
        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
        glViewport(0, 0, backingWidth, backingHeight);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
        glMatrixMode(GL_MODELVIEW);
        //Clear the backbuffer
        glClear(GL_COLOR_BUFFER_BIT);
        std::list<GameObject*>::iterator Object = ObjectMgr->mObjectList.begin();
        std::list<GameObject*>::iterator End = ObjectMgr->mObjectList.end();
        //For each game object
        while (Object != End)
        {
            // Bind the texture name.
            glBindTexture(GL_TEXTURE_2D, (*Object)->mSpriteTexture);
            //Apply transformations
            glPushMatrix();
            glTranslatef((*Object)->mPosition.x, (*Object)->mPosition.y, (*Object)->mPosition.z);
            glRotatef((*Object)->mRotation, 0.0f, 0.0f, 1.0f);
            //(*Object)->mRotation += 0.5f; //Rotate
            //Get the current sprite's vertices
            glVertexPointer(2, GL_FLOAT, 0, (*Object)->mSpriteVertices);
            glEnableClientState(GL_VERTEX_ARRAY);
            //Get the current sprite's tex coords
            glTexCoordPointer(2, GL_SHORT, 0, (*Object)->mSpriteTexcoords);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            //Render the vertex array
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
            //Pop the transformation matrix
            glPopMatrix();
            ++Object;
        }
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    }

  • Has anyone been able to combine OpenGL ES with another View controller?

    Looking at all the sample code, I still can't find any example of combining OpenGL ES with another view controller - so that one view lets the user input values, and another draws the OpenGL ES content. Is this possible, or does one have to draw the UI in OpenGL? I haven't downloaded anything from the iPhone App Store that uses OpenGL drawing.
    Thanks for any suggestions....

    I have been able to add a second UIView to the app's window. I set the bg color for the view to be transparent, and I use the view for drawing Quartz objects over the OpenGL ES scene. It's a little heads-up UI that is not too hard to deal with, and it blends in well as an iPhone app.
    You could easily add views and force them in size to be side by side, or top/bottom, whatever works.
    Have fun,
    ceder

  • CS5 - Layer Blending Options randomly stops working completely

    Okay, been trying to figure this one out for a while and it's driving me nuts. It seems like it's just some kind of crazy bug because it sometimes works, sometimes doesn't work, and I'm not really doing anything differently either way.
    Right clicking on a layer and going into blend options sometimes simply doesn't work. The options come up, but stroke, gradient, drop shadow, inner/outer glow, etc... do absolutely nothing. The preview box is checked and it shows the effects properly in the little preview picture -- but it doesn't show any effects on the actual image, and when I click OK it doesn't apply them.
    Any idea what could be wrong? This was happening today, then I changed nothing, went outside for a cigarette, and came back and tried the same exact thing and it worked fine. Almost positive that I did nothing else between attempts. Makes no sense...

    From the sound of your post, it's probably not user error (e.g., not having the proper layer selected, etc.) but some kind of system problem.
    If you have intermittent display issues, you might want to check to see if you can update your display driver - which implements OpenGL for Photoshop to use.
    Go to the web site of the maker of your video card, locate the latest driver package that matches your hardware and OS, and download/install it.  Driver updates solve a surprising number of problems.
    -Noel
