iOS GPU mode artifacting

I've searched around the forums and Google extensively but haven't seen anyone with a similar issue. I'm creating a simple iOS app using Box2d. When I enable GPU rendering, the objects on screen get a lot of visual artifacting and often disappear completely. See the attached video below. The issue doesn't exist when rendering in CPU mode.
Configuration:
AIR 2.7, iPad2, OSX Lion, CS5.5
Things I've tried without luck:
1.) CacheAsBitmap + CacheAsBitmapMatrix for some or all objects.
2.) Choosing the render option "export as bitmap" for all graphical objects.
3.) Recreating the FLA.
4.) Restarting the iPad.
Any help or suggestions would be greatly appreciated. Having a blast with the iOS exporter, can't wait to get back to it!
Thanks,
–Scott

Hi,
This looks like a bug. Could you please file it at http://bugbase.adobe.com so that we can work on it? Please make sure to include the sources of your project. If you are not comfortable sharing the sources there, you could also mail them to me at [email protected]
Regards,
Sanika

Similar Messages

  • How to reduce stuttering in Air for iOS GPU mode?

    My game currently runs at 60FPS on iPad 2 and has no problem with rendering speed, but I'm rather annoyed by constant stuttering which causes the game to stop for 0.1-0.5 sec once in a while. The stuttering behavior is similar to when the garbage collector runs, and I suspect it is caused by GPU memory swapping, as my game uses lots of BitmapData.
    The problem is that my game transitions from one scene to another seamlessly, without stopping animations in the game screen, by letting the old scene slide out of the screen while the new scene slides in. So there's no time to preload/precache the graphics assets used in the new scene. After transitioning between scenes a few times, the game starts stuttering when trying to show new images.
    My game works fine without stuttering on old PCs, but on iPad 2 it is quite obvious. Could anyone tell me some tips to reduce stuttering when using Air for iOS? In the game, all vector graphics are pre-drawn to BitmapData (so no vector graphics are shown), and the size of the graphic assets in each scene is about 2048*1024 pixels. There are about 10 scenes. On top of that, there are common interface graphic assets used in all scenes, about 30 images of 400*400 pixels.
    I know the game uses quite a lot of graphic resources. Making the game preload the assets before transitioning to a new scene would eliminate the stutter, but I'd still like to see if I can keep the seamless scene transition on iOS.
    I'm currently using Air3.5 + Flash CS6.
    * By preloading/precaching I mean actually displaying them on stage (addChild and visible=true) and making time for the GPU to cache. All the actual graphic data is already loaded.

    Some things that might help:
    I've heard that textures are uploaded as square textures on iOS. I'm not certain that really happens, but if it does then having two 1024x1024 textures would be better than one 2048x1024, because that would end up actually taking 2048x2048.
    Bitmaps are sent to the GPU when they first hit the stage, and it takes a significant amount of time for them to get there. If you are timeline-animating a transition to the next scene, stagger the graphics that are going to be appearing next. That is, before you start to move on to the next area, have the biggest bitmap of the next area already touching the edge of the stage. It can be invisible, or underneath another graphic; it will still get uploaded. But if you tween in all of the graphics exactly when they are needed, they will take a while to upload.
    Dispose of the bitmaps that are no longer in the scene. I think your stuttering is because you're going into a new area that has a lot of new graphics, and the GPU still has the bitmaps from the current scene and the one before that, and so has to spend time freeing up the memory before taking the new bitmaps. If you had already disposed of them the GPU might not need to clear memory for you.
    There is System.gc(). That will force the system to garbage collect, which you could do at moments that there isn't anything animating.
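
    The pre-upload and cleanup advice above can be sketched as follows. This is only a sketch under assumptions: `nextSceneBitmap` and `oldSceneBitmapDatas` are hypothetical names standing in for your own assets, and the upload-on-first-stage-contact behavior is as described in the reply, not verified here.

    ```actionscript
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.system.System;

    // Pre-upload: put the next scene's biggest bitmap on stage early, invisible.
    // In GPU render mode a BitmapData is reportedly uploaded to texture memory
    // the first time its display object touches the stage, so this hides the
    // upload cost before the transition tween begins.
    nextSceneBitmap.visible = false;
    addChild(nextSceneBitmap);   // upload happens here, not mid-tween

    // Later, when the transition actually starts:
    nextSceneBitmap.visible = true;

    // Cleanup: free the old scene's textures as soon as it leaves the stage.
    for each (var bmd:BitmapData in oldSceneBitmapDatas) {
        bmd.dispose();           // releases the pixel data (and its texture)
    }

    // Optionally force a collection at a quiet moment when nothing is animating:
    System.gc();
    ```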

  • IOS 6: problem with sprites rotation in gpu mode

    Hi,
    I have a game published on iTunes and today I found a serious problem with the latest iOS 6 GM.
    Sprite rotation around X and Y axes doesn't work with gpu render mode.
    Here is my code:
    <mx:UIComponent id="gameCanvas" mouseChildren="false" mouseEnabled="false" width="100%" height="100%" />
    var gridSprite:Sprite=new Sprite();
    gameCanvas.addChildAt(gridSprite,0);
    // then I draw stuff on this gridSprite
    It works fine if this sprite is not rotated.
    But when I change its rotationX or rotationY properties, the sprite disappears from the screen!
    My game works fine on older versions of iOS (5.0, 5.1.1 and 6 beta 3).
    This problem only happens on iOS 6 GM and since iOS 6 will be officially released in a few days, I am really worried...
    I tried Air 3.3 and Air 3.4 - both have this problem.
    In "cpu" and "direct" render modes sprite rotation works, but graphics performance is terrible. So "gpu" is my only option...
    I'll really appreciate any help.

    I performed some other tests, and it seems like everything is 2D in gpu mode. When I try to do 3D transformations, objects disappear from the view.
    For example out of these 3 images only image1 is displayed:
    <s:BitmapImage id="image1"  source="logo.png" height="25%" width="100%" />
    <s:BitmapImage id="image2"  source="logo.png" height="25%" width="100%"  rotationY="5"/>
    <s:BitmapImage id="image3"  source="logo.png" height="25%" width="100%"  z="5"/>
    Maybe there is some setting that enables 3D (something like zBufferEnabled=true) that needs to be explicitly set for iOS 6 in gpu mode?
    UPD: Ok, I'm pretty sure it's a bug, I reported it to Adobe - Bug 3330901

  • Visuals appear blurred under GPU mode.

    Hi folks,
    How do I avoid the blurry effect when rendering in GPU mode? I attempted to set quality to "LOW" but still get the same "filtered" look. By contrast, in CPU mode things appear far clearer and sharper; bitmaps tend to be pixelated, but this is just what I wanted.
    Any help is appreciated!

    I guess that stage.quality = "low" ought to do that, but I tried, and it didn't help.
    About blitting, I just did a test. Using the code below I timed how long it takes to copyPixels a 512x512 square a thousand times. I had the frame rate set to 100 fps, to make the arithmetic easier. On a test movie for Android it took about 10.7 seconds (10 would be perfect). On Android set to CPU it was about 20 seconds, and set to GPU it was 30 seconds. That makes the frame rate about 50 fps for CPU and 33 fps for GPU.
    Those tests were on a high end dual core Honeycomb tablet, and I used Advanced Task Manager to force quit all apps before running the test to give it the best chance. I also tried the test on my iPad 2, and the results weren't quite what I expected. For the iPad, the CPU time was about 19 seconds, and GPU was 17 seconds. That's about 52 fps and 59 fps, so both better than on the Android, but strangely the GPU mode worked slightly better.
    So, imperfections in my theory aside, blitting with GPU is about 50% slower than CPU on Android, and only slightly quicker on iOS.
    Here's my test code ("tf" is a text field already on the stage):
    import flash.display.BitmapData;
    import flash.display.Bitmap;
    import flash.events.Event;
    import flash.geom.Point;
    import flash.geom.Rectangle;
    import flash.utils.getTimer;
    var bmd0:BitmapData = new BitmapData(512,512,false,0x000000);
    var bmd1:BitmapData = new BitmapData(512,512,false,0xff0000);
    var bmd2:BitmapData = new BitmapData(512,512,false,0xffff00);
    var bnum:int = 1;
    var p:Point = new Point(0,0);
    var r:Rectangle = new Rectangle(0,0,512,512);
    var c:int = 0;
    var b:Bitmap = new Bitmap(bmd0);
    addChild(b);
    var t:int = getTimer();
    addEventListener(Event.ENTER_FRAME,blit);
    function blit(e:Event):void {
        c++;
        bnum = 1 - bnum;
        if (bnum) {
            bmd0.copyPixels(bmd1,r,p);
        } else {
            bmd0.copyPixels(bmd2,r,p);
        }
        // report the elapsed time after 1000 blits, then stop
        if (c==1000) {
            tf.text = String(getTimer()-t);
            removeEventListener(Event.ENTER_FRAME,blit);
        }
    }

  • Is there a way to use a single bitmapdata for multiple images with GPU mode?

    With GPU mode, is there a way to bring in a single 1024 X 1024 png containing all my sprites and then slice it up into multiple display objects all referring to the original BitmapData?
    I have an app that runs in GPU mode - but I want to optimize the image management.
    I am making three sets of images in the orders of 2048, 1024 and 512 px.
    The app is a book app and each page has around 4 to 5 bitmaps. I would like to bring in one single image and slice it up - but just refer to the original BitmapData in memory.
    Is there a way to do this - e.g. using a masking technique?
    I think it is possible using textures in direct mode - but that is not an ideal solution for me - as the app is already in the appstore - and I would have to entirely refactor it for stage3D. Also I use very large bitmaps which have some masking animations applied to them dynamically: http://youtu.be/Nm0T1jLCTz8?t=42s
    Currently, I use jpgs and a jpeg mask file for each image which I composite to get the alpha - then I scale them.
    PNGs may be better for GPU - (no compositing) but they make for a huge app file.
    Now I am converting it to use different sized assets depending on the device, then scaling them if needed, and then compositing them for alpha. What I was hoping was to find a technique that could reduce the number of BitmapData objects used and reduce the operations in general.
    Any thoughts on optimizing would be appreciated.  I can add my code here if it helps.

    Tell Apple:
    http://www.apple.com/feedback/iphone.html
    We're all users here, just like you.
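
    Back to the original question about referring to a single BitmapData from multiple display objects: one hedged sketch, untested on a device, is to wrap the same BitmapData in several Bitmap objects and use scrollRect to clip each one to a different region of the shared sheet. The region coordinates below are made-up examples; whether GPU render mode keeps a single texture for the shared BitmapData or re-caches it per display object is exactly the open question here.

    ```actionscript
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.geom.Rectangle;

    var sheet:BitmapData = new BitmapData(1024, 1024, true, 0); // your loaded PNG sheet

    // Each Bitmap references the SAME BitmapData; scrollRect clips the view
    // to one region of the shared sheet without copying any pixels.
    function makeSlice(region:Rectangle):Bitmap {
        var b:Bitmap = new Bitmap(sheet);
        b.scrollRect = region; // only this region is shown
        return b;
    }

    var hero:Bitmap = makeSlice(new Rectangle(0, 0, 256, 256));
    var tree:Bitmap = makeSlice(new Rectangle(256, 0, 128, 512));
    addChild(hero);
    addChild(tree);
    ```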

  • Adobe AIR for Android - GPU Mode - Bitmap Auto-Smoothing Issue

    Hi everyone
    I'm having a bit of an issue with the AS3 bitmap object. I'm currently developing an 8-bit style pixel game for AIR for Android.
    This game is being developed at a very low resolution and is being scaled up to maintain the charm of an old retro game.
    One of the methods I'm using is drawing pixels directly to bitmap objects and scaling the object up to create the old look.
    When testing on a mobile device, this works beautifully when you set the rendering method to Direct but when you change
    the render method to GPU the visuals go all blurry and anti-aliased (it's as if the bitmap is being smoothed out). The mini map
    for example is rendered using the setPixel method and then scaled up 9 times. Looks great on my PC but once I export it to my phone
    it looks absolutely awful! This is no good as I want to keep the clean, solid pixel look to maintain the old 8-bit feel and obviously
    I'd like to stick to GPU mode due to its speed.
    Like I said, this only happens once you test on a mobile device in GPU mode - it doesn't do it on my main desktop machine or
    in Direct mode. I already have the stage quality set to low and I've tried setting the bitmap's smoothing property to false but
    it does nothing.
    Does anyone have any suggestions as to how I can get around this?

    How about first blit your image to a small bitmapData, then draw it on a large bitmapData (9X larger)?
    Like,
    var small_bmd:BitmapData = new BitmapData(SMALL_WIDTH, SMALL_HEIGHT, false);
    var large_bmd:BitmapData = new BitmapData(SMALL_WIDTH * 9, SMALL_HEIGHT * 9, false);
    var bm:Bitmap = new Bitmap(large_bmd, PixelSnapping.NEVER, false);
    var blitRect:Rectangle = new Rectangle(0, 0, 9, 9);
    var i:uint, j:uint, blitColor:uint;
    small_bmd.draw(SOURCE_IMAGE);
    large_bmd.lock();
    for(j = 0; j < SMALL_HEIGHT; j++){
         for(i = 0; i < SMALL_WIDTH; i++){
              blitColor = small_bmd.getPixel(i, j);
              blitRect.x = i * 9;
              blitRect.y = j * 9;
              large_bmd.fillRect(blitRect, blitColor);
         }
    }
    large_bmd.unlock();
    Not sure if the code works or not, but hopefully this helps.

  • Kindle Fire & Adobe Air issues with scaling & bottom menu - GPU Mode only

    Hi,
    Long post ahead, please buckle down for a while.
    I'm having issues with the Kindle Fire and the bottom 20 pixel menu in GPU mode, CPU mode behaves correctly. The project was created with flash cs5.5, the stage size is 1024x600 and I've set the NO_SCALE flag. When the app is run on the Kindle Fire the Kindle scales it to 1024x580 for the bottom 20 pixel menu. If I create the project as 1024x580 it still scales it down and then centers it vertically introducing a 10pixel black bar at the top and I would assume a 10 pixel black bar at the bottom under their menu. In CPU mode the app is not scaled and the bottom 20 pixels is obscured.
    The problems I have encountered so far are incorrect mouse locations as reported in various mouse events. Messed up scrollRects when I try and clip something for display. Not being able to press buttons because of the incorrect mouse coordinates, the further down on the screen the more incorrect the buttons are. You can click below a button near the bottom of the screen and have it activate whereas a button at the top acts perfectly. This is due to the accumulated error value from the yPos * scaleValue.
    The mouse positions are off exactly by a scale factor of 600 / 580 (1.034).
    The video below shows the problems, sorry for the poor video handling. The red line in the video is drawn in the center of the y position detected on a mouse move and mouse down event, but shifted to the left for visibility. The center of that stylus is where it should be at all times, as you can see it isn't. You can also see displayed the buttons not working correctly, they are clearly being activated when the press on the screen is not over the button (indicated by the red line).
    The circle in the middle of the screen has its scrollRect set to the width and height of the circle, and as you can see it is clipped.
    At the end of the video I press the power button then press it again to unlock, you can see the screen looks good for a split second then gets resized. Once resized the circle becomes clipped again. If I were fast enough I would assume the buttons would work correctly as well. Watch closely, it happens fast.
    Oh, and the app works fine on the desktop in either GPU or CPU mode. The app also works fine in GPU and CPU mode on my iPad2, Google HTC Nexus One, Nook Color and my 4th gen iPod Touch.
    Now whose issue is this? Adobe's, Amazon's, or mine? I know what the problem is but don't know how to fix it.
    Youtube video link:
    http://youtu.be/660O3YMK9k8
    Thanks for your time!
    PS: if anyone wants the .fla for testing themselves, just ask and I will post a link to it.
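
    Until the scaling itself is fixed, one hedged workaround is to compensate pointer coordinates in the event handlers by the factor measured above. `SCALE` here is an assumption derived from the observed 600/580 behavior on this one device, not a documented constant, so it would need to be gated to the Kindle Fire.

    ```actionscript
    import flash.events.MouseEvent;

    const SCALE:Number = 600 / 580; // observed Kindle Fire scale factor (~1.034)

    stage.addEventListener(MouseEvent.MOUSE_DOWN, onDown);

    function onDown(e:MouseEvent):void {
        // Undo the system's vertical squeeze before doing any hit-testing.
        var correctedY:Number = e.stageY * SCALE;
        trace("reported:", e.stageY, "corrected:", correctedY);
    }
    ```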

    I'm going to forward this along to our android team for their review.  In the meantime, could you open a bug on this over at bugbase.adobe.com?  Please post back with the URL so that others affected can visit the bug and cast their votes.
    Thanks,
    Chris

  • Air 2.6 in GPU-mode: Filters possible?

    Hi,
    I am converting my app from air 2.5 to air 2.6 and noticed that it doesn't render filters anymore when using GPU-mode. It renders filters in CPU-mode, but rendering is much slower than GPU.
    Does anyone know if it's possible to use filters (a glow filter in my case) in GPU mode in Air 2.6?
    Thanx!

    Just had the same issue: in GPU mode, any CPU-intense rendering (like applyFilter does) slows the app down heavily. It dropped from 35fps to 5fps here. Let us know if you get the same result or if I was wrong.

  • Disable auto smoothing on Android (GPU mode)

    I am making an 8 bit style game for android, where the canvas (Bitmap) is half the size of the screen and is scaled x2.
    When using GPU mode, the canvas is automatically smoothed, even if I set smoothing to false,
    but when using CPU mode it is not smoothed.
    So is there a way to disable smoothing in GPU mode?
    thanks

    This question has not been answered...
    I'm rather disappointed that we don't have the control to turn this off for a number of reasons:
    GPU mode is faster for sprite based games
    Turning this off "may" improve performance which is already really good
    For game designs like 8bit retro, it completely ruins the graphics when they are scaled up
    Can we please get control in Air 2.7 SDK?

  • GPU Mode crash when using 1024x1024px images on iPad Retina with cacheAsBitmapMatrix

    So I am building a menu for a mobile app and I have created a sliding menu with tiles, I have done this before, all works great normally.
    This time, my tiles were really close to the power of 2 ( 1024x1024px ) so I decided to use that size and resize my tiles to that for performance gains.
    So the setup is this:
    GPU Mode active, 7 movieclips with 3 frames each, PNG images in those movieclips sized at 1024x1024px. So that's 21 png images.
    I need to use the cacheAsBitmapMatrix feature for full smooth performance and when I do, it crashes on the Retina device when loading 1024x1024px images.
    On non-retina devices I load the lower resolution assets ( 512x512px) and that works perfectly fine with the cacheAsBitmapMatrix on. On the iPad 1 it runs super smooth.
    So I started playing around with the different render modes:
    No crashing on CPU mode and no crashing on DIRECT mode, only on GPU setting.
    The retina device is NOT out of memory, it does not do anything really heavy. It just loads in these movieclips in a sliding menu, and tweens their alpha, scaleX and scaleY ( which is not part of the issue as it crashes without this too ).
    In summary:
    1024x1024px images inside of movieclip containers crash an iPad Retina device when using cacheAsBitmapMatrix.
    The fix I have found is to just lower the image size a tiny bit: from 1024px to 1016px, it resolves the crash and makes everything work perfect.
    So, what do you guys think this could be? I tried with previous versions of AIR (3.7,3.8) and it happens on those versions too. I am using the latest beta of AIR 3.9.

    I have not had that happen to me, so I can only speculate, but here are some questions that might help us narrow down the problem.
    1. Were the slides created on your ipad, or did you create the presentation on your mac/pc & then import it onto the ipad for presenting?
    2. You said "When not using Keynote projected images are sharp." Does that mean that the same images used in the presentation display fine on the same projector when you open them in Photos or another app?
    3. Is it only the images on the slide that are compressed, or does it impact text as well?
    4. If you run the presentation on your ipad, not connected to any outside screens, do the pictures show up normally, or are they compressed then as well?
    Like I said, I haven't had this happen to me. My first guess (without knowing any of the details) is that maybe the presentation was created on another device and was not set to display at 1024 x 768 but at some other aspect ratio, and since that is what the iPad displays at, it is compressing the slides to fit.

  • What affects GPU mode rendering?

    Hi all,
    I use bitmap based rendering in GPU mode for performance.  All vectors and filters etc are converted to a bitmap at runtime.  It's working great.
    I understand that rotation and scale will not affect the performance - this is all handled by the GPU.
    1) Does ColorTransform affect GPU performance (will the bitmap need to be rerendered and pushed into texture memory again)?
    2) What about masking the bitmap with a shape, or with another bitmap?
    Cheers,
    Peter

    After I fixed all the orientation issues on my PC (by deleting all thumbnails after the photo is correctly oriented), and deleting the ipod photo cache, I resync my ipod and all my photos now have the correct orientation on both PC and ipod.
    Except now two of the album covers (icon) have the wrong orientation! Even though the same photos (first photo) in the album displays correctly on the ipod.
    I'm running out of ideas. Any suggestions?

  • GPU mode not functioning, using ATI HD3870 or ATI HD3200.

    When I run Pixel Bender in GPU mode, on either my desktop computer or my laptop, image loading fails with the following message:
    "Error loading file: mandelbrot.png" (same problem with the other example files, and also for other jpg and png files).
    and when I try to exit the program after failing to load an image, Windows 7 tells me it stopped responding.
    When I run it in CPU mode, this problem disappears.
    I think this is strange because I thought the HD3200(onboard graphics chip) or at least the HD3870 were supported.
    Both my Desktop computer and my laptop are running Windows 7 64-bit.

    I forgot to include Dxdiag information.
    DirectX Version: DirectX 11
    Card name: ATI Radeon HD 3800 Series
           Manufacturer: ATI Technologies Inc.
              Chip type: ATI display adapter (0x9501)
               DAC type: Internal DAC(400MHz)
             Device Key: Enum\PCI\VEN_1002&DEV_9501&SUBSYS_12001462&REV_00
         Display Memory: 2297 MB
       Dedicated Memory: 506 MB
          Shared Memory: 1791 MB
           Current Mode: 1440 x 900 (32 bit) (60Hz)
            Output Type: HD15
            Driver Name: atiumd64.dll,atidxx64.dll,atiumdag,atidxx32,atiumdva,atiumd6a.cap,atitmm64.dll
    Driver File Version: 8.14.0010.0716 (English)
         Driver Version: 8.681.0.0
            DDI Version: 10.1
           Driver Model: WDDM 1.1
      Driver Attributes: Final Retail

  • Error occurred while packaging the application (IOS - standard mode)

    When compiling for iOS in Standard mode an error occurred:
    Error occurred while packaging the application:
    Exception in thread "main" com.adobe.air.ipa.ProcessError: Assembler failed
              at com.adobe.air.ipa.AOTCompiler.launchProcess(AOTCompiler.java:263)
              at com.adobe.air.ipa.AOTCompiler.compileBitcode(AOTCompiler.java:935)
              at com.adobe.air.ipa.AOTCompiler.trimAndCompileBitcode(AOTCompiler.java:763)
              at com.adobe.air.ipa.ASMGenerator.main(ASMGenerator.java:72)
    Compilation failed while executing : ADT
    Any idea?

    In fact I resolved it. It happened while using the FlashDevelop IDE: one of my SWC libraries, the assets one, was included completely (not as a dynamic library nor an external one). The packager didn't seem to like that, so I changed it to dynamic and added a simple Array containing all my asset instance names. It worked. Thanks anyway, Neh

  • IDSM on catalyst 6500 to provide IOS Inline mode support

    I am currently evaluating what method to apply on my 6500. Does IOS version 12.2(33)SXI2a support inline mode and inline VLAN pair mode with the IDSM-2, and what configuration should be done on the switch for multiple-VLAN traffic to flow through an inline interface of the IDSM-2? In my case I have 16 user VLANs and 1 server VLAN on the Catalyst 6500. The task is to protect the servers from the users, so the requirement is to configure inline mode to monitor the traffic from these 16 VLANs when they access the servers. But as we know, the IDSM-2 has only two logical sensing ports. So my question is: how would you configure the switch to forward the traffic from these 16 VLANs to the IDSM-2 module via only ONE sensing port, since the other sensing port will be configured in the server VLAN? As far as I know, when you configure inline mode on IOS you have to configure the sensing ports in access mode (while in CatOS you configure them as TRUNK ports), but that only works when you have two VLANs, and in my case I have 16 VLANs to monitor in inline mode. Please suggest a solution.
    Any urgent reply will be much grateful...
    Many Thanks in advance

    Hi Mubin,
       If you're looking to monitor all the traffic from the user VLANs to the server VLANs then the simplest way to configure the IDSM-2 would be inline on the server VLAN segment.  All traffic destined to the servers (from the users or anywhere else) has to traverse that VLAN.  Assuming you have something like this to start:
    VLAN 100-120 (users) ====== Switch ------ VLAN 200 (servers)
    you'd drop the IDSM-2 inline on VLAN 200 by using a helper VLAN:
    VLAN 100-120 (users) ====== Switch ----- VLAN 201 (server gateway) ----- IDSM-2 (bridging 201 to 200) ----- VLAN 200 (servers)
    To do this you'll need to perform the following steps:
    1.  Designate a new VLAN to use as a helper VLAN for your current server VLAN.  I'll use 201 for this example and assume your current server VLAN is 200.
    Create the helper VLAN on the switch:
    switch# conf t
    switch(config)# vlan 201
    2.  Configure the IDSM-2 to bridge the helper VLAN and the server VLAN (200-201)
    sensor# conf t
    sensor(config)# service interface
    sensor(config-int)# physical-interface GigabitEthernet0/7
    sensor(config-int-phy)# admin-state enabled
    sensor(config-int-phy)# subinterface-type inline-vlan-pair
    sensor(config-int-phy-inl)# subinterface 1
    sensor(config-int-phy-inl-sub)# vlan1 200
    sensor(config-int-phy-inl-sub)# vlan2 201
    sensor(config-int-phy-inl-sub)# description Server-Helper pair
    sensor(config-int-phy-inl-sub)# exit
    sensor(config-int-phy-inl)# exit
    sensor(config-int-phy)# exit
    sensor(config-int)# exit
    Apply Changes:?[yes]:
    3.  Configure the switch to trunk the helper and server VLANs to the IDSM-2 module.  I assume the module is in slot 5 in the example.  Replace the 5 with the correct slot for your deployment:
    switch# conf t
    switch(config)# intrusion-detection module 5 data-port 1 trunk allowed-vlan 200,201
    switch(config)# intrusion-detection module 5 data-port 1 autostate include
    *Warning! This next step may cause an outage if everything is configured correctly.  You'll probably want to schedule a window to do this.*
    4.  Finally, force the traffic from the server VLAN through the IDSM-2 by moving the server VLAN gateway from VLAN 200 (where it is currently) to the helper VLAN you created.  To do this, remove the SVI from VLAN 200 and apply the same IP address to VLAN 201.  I assume the current server gateway is 192.168.1.1/24
    switch# conf t
    switch(config)#int vlan 200
    switch(config-int)#no ip addr
    switch(config-int)#int vlan 201
    switch(config-int)#ip addr 192.168.1.1 255.255.255.0
    switch(config-int)#exit
    switch(config)#exit
    switch# wr mem
    Now, when the servers try to contact 192.168.1.1 (their gateway) they'll have to be bridged through the IDSM-2 to reach VLAN 201 and in the process all traffic destined to them or sourced from them will be inspected.  Do not put any hosts or servers in the helper VLAN (201) or they will not be inspected.
    Best Regards,
    Justin

  • IDSM2 on 6500-IOS inline mode support?

    Hi,
    I have an IDSM-2 running IPS5.1(1d) software (recently upgraded from 4.x) that is sitting on a 6500 IOS.
    The IPS device manager shows gi0/7 and gi0/8 as both in Promiscuous mode. There is no option to change the mode to inline and pair them.
    Is it so that IDSM-2 currently supports only Promiscuous mode?
    If so, then this module is still acting as an IDS despite running IPS5.1. Isn't it? What is the advantage that I get after upgrading it from 4.x to 5.1?
    -- Vasanth

    There are 2 pieces to the puzzle.
    There is the IDSM-2 version and what it supports, but also the Cat 6K Native IOS version and what it supports.
    IDSM-2 v5.1(1d) supports
    a) Promiscuous mode,
    b) InLine Interface Pair mode (2 interfaces are paired for inline monitoring), and also
    c) InLine Vlan Pair mode (2 vlans on a single interface are paired for inline monitoring, you will also see it called inline-on-a-stick)
    But for these features to be used, the switch code must also support configuring the switch side of the IDSM-2 for each of these 3 features.
    Native IOS Versions prior to 12.2(18)SXE will support only Promiscuous mode on the IDSM-2.
    12.2(18)SXE and later versions will support InLine Interface Pair mode on the IDSM-2.
    No Native IOS versions currently support InLine Vlan Pair mode on the IDSM-2 (a new Native IOS versions with this support is currently in development).
    So to get Inline (IPS) functionality you need to be running a Native IOS version 12.2(18)SXE or later, and on the IDSM-2 run IPS versions 5.1 (or even the older 5.0).
    (NOTE: Cat OS 8.5(1) does support all 3 modes of the IDSM-2. So if you are using Cat OS instead of Native IOS, then run version 8.5(1) to have access to all of the features of IPS 5.1(1) on the IDSM-2)
    If you are running a Native IOS version prior to 12.2(18)SXE then the IDSM-2 can only be operated in Promiscuous mode even if 5.1(1) is loaded on the IDSM-2.
    However, even in promiscuous mode the IPS 5.1(1) software does have a few advantages.
    There are several engines, and engine parameters that are only supported in the 5.1 version and not the 4.0 version. So there are several signatures that are either a) not even created for 4.x sensors, or b) the 4.x signature is not as precise as the 5.x signature in the new engines.
    (These new engines have proved invaluable in writing signatures to detect some of the new attacks that have come out over the past year.)
    There are of course other advantages as well:
    For example:
    1) Risk Rating to better aid in prioritization of alerts.
    2) More flexible filtering mechanism for alerts that allows for filtering individual actions
    The 2 features above are just 2 of the new features that have been added in 5.0 and 5.1 that apply to both promiscuous and inline modes.
