copyPixels large image killing FPS on iOS

So, the problem is pretty simple (and limited to the iPhone 3GS; the 4 and up seem to handle it OK). I've got a game that uses sprite blitting, so the rendering method looks something like this:
background.copyPixels(background, rect, point);
// then, for each sprite:
sprite.copyPixels(spriteSheet, rect, point);
Pretty standard blitting stuff. The problem is that when I blit the background image (one large 960x640 image) the iPhone just chokes and cuts my FPS in half.
As an experiment I tried using fillRect to clear the bitmap between frames instead of recopying the BG image, and fillRect took just as long (13 FPS!).
Obviously the iPhone 3GS is choking on blitting such a large image. Does anyone have any thoughts on how to optimize this? I'd much rather fix it than have to drop all those devices.
thanks guys

UPDATE:
I wrote up a quick test to try 'damage mapping'; see the explanation here: http://www.8bitrocket.com/2009/05/03/tutorial-clearing-a-blit-canvas-by-erasing-only-the-portions-that-have-changed-using-damage-maps-or-a-dirty-rect/
Basically, it means only blitting the background back over the spots where the sprites were, as opposed to the entire thing (rough sketch below).
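A simplified sketch of the per-frame clear (the names canvas, background, and dirtyRects are just placeholders, not the actual test code):

import flash.display.BitmapData;
import flash.geom.Point;
import flash.geom.Rectangle;

// Restore only the patches of background that sprites covered last frame,
// instead of recopying the whole 960x640 image.
function clearDamage(canvas:BitmapData, background:BitmapData, dirtyRects:Array):void
{
    for each (var dirty:Rectangle in dirtyRects)
    {
        canvas.copyPixels(background, dirty, new Point(dirty.x, dirty.y));
    }
    dirtyRects.length = 0; // rects get re-added as sprites are blitted this frame
}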
Yes, it is faster than copying the whole BG every frame. Unfortunately, as soon as the number of sprites on screen increases to a decent level, it's still having to process almost as much information as copying the entire background, and the FPS drops right back down.
I guess I'm boned here as far as 3GS support goes. Which is unfortunate, as I'm sure that accounts for about 20 or 30 million devices. I'm also guessing that if all my assets were based on 480x320 instead of 960x640 I wouldn't have this issue. Anyone know of a quick and easy way to down-rez all my bitmaps by 2x?
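One way to halve a BitmapData at load time, rather than re-exporting every asset, is to draw it through a 0.5-scale Matrix. A rough sketch (the function name is made up):

import flash.display.BitmapData;
import flash.geom.Matrix;

// Returns a half-resolution copy of src, e.g. 960x640 -> 480x320.
function downRezByHalf(src:BitmapData):BitmapData
{
    var half:BitmapData = new BitmapData(int(src.width / 2), int(src.height / 2), src.transparent, 0);
    var m:Matrix = new Matrix();
    m.scale(0.5, 0.5);
    half.draw(src, m, null, null, null, true); // smoothing = true for a cleaner downscale
    return half;
}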
I'm not sure if I can make separate builds for the 3GS and the 4 (a low-res and a high-res); maybe someone more knowledgeable than I am can help me out there.
-dis

Similar Messages

  • Resized and shrunk-down image is distorted on iOS/Android

    It should be something very simple that I just don't know. When I proportionally resize a large image and display it in my mobile apps (iOS & Android), the image is slightly distorted and some of the smaller text is a little hard to read, or just doesn't look good. AS3 on the desktop doesn't show it; it seems to happen only in the mobile apps. I am using remote JPEG images loaded as Bitmaps and changing the width and height. Is there something I can do to make them resize more correctly? Thanks

    You could try applying smoothing to the image.
    _urlRequest = new URLRequest("pathtofile");
    _loader = new Loader();
    // listen on contentLoaderInfo; it is the object that dispatches COMPLETE and IO_ERROR
    _loader.contentLoaderInfo.addEventListener(Event.COMPLETE, smooth);
    _loader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR,
        function(e:IOErrorEvent):void { trace(e); });
    _loader.load(_urlRequest);

    private function smooth(e:Event):void {
        var bit:Bitmap = _loader.content as Bitmap;
        if (bit != null) {
            bit.smoothing = true;
        }
    }

  • Compress image in keynote for iOS

    How do I compress images in Keynote for iOS? Does anyone have any idea?
    I have added a few images taken from my new iPad, and the Keynote file became so large that iCloud takes ages to upload it. Hence I am looking for a way to compress each image in every slide to a lower resolution, so that I can also email my Keynote to my colleagues.
    Does anyone know how, or a workaround?

    I think I just found out what is wrong with the file sizes of KeyNote for iOS. In the Mac version of this app, there is a preference setting called "Copy theme images into document" which if checked includes about 7 MB of images for every theme used in the presentation(!). Normally this setting is kept off (thank god). However, in the iOS version of KeyNote, there is no such setting. Instead, that function is always ON(!!!). This means that every time you create a presentation in there, or edit one from the Mac version, it will grow by N times 7 MB, where N is the number of themes used in your presentation. Now, if you store your presentations on iCloud — like a good user should — then every single file will be sure to explode in size. And consequently take forever to upload and download from the cloud  :-(
    Any solution, anybody?

  • Need to read/paint large images fast. Help!

    Hello Java community,
    I am having trouble writing a program which can read large image files from disk (~ 1-2 MB) and paint them to the screen while maintaining a frame rate of 15 or 20 fps. Right now I am doing this through a Producer/Consumer scheme. I have one thread which constantly creates ImageIcons (using the constructor which accepts a String file path), and places them on a blocking queue (production) with a maximum capacity of 5 or 10. I have a second thread which removes an image from the queue (consumption) and paints it to the JFrame every so many milliseconds to honor the frame rate.
    My problem is that this approach is not fast enough or smooth enough. With smaller images it works fine, but with larger images it cannot maintain a high frame rate. It also seems to consume more memory than it should. I am assuming that once I paint a new ImageIcon, the old one goes out of scope and is freed from memory by the Garbage Collector. However, even with a queue capacity of 5 or 10, I am getting out-of-memory exceptions. I plan on trying the flush() method of the Image class to see if that helps to recover resources. After searching around a bit, I see that there are many different ways to read/load an image, but giving a local path to an ImageIcon and letting it load the image seems to be a safe way to go, especially because in the documentation for ImageIcon, it says that it pre-loads the image using Media Tracker.
    Any help, ideas, or links would be appreciated!
    Thanks,
    Elliot

    Thanks for another insightful response.
    I've played a bit more, so let me update you. I am preloading images; the blocking queue (FIFO) is full when the animation begins. Right now the queue capacity is 10, but for smaller images it can be closer to 40 or 50. The image reader thread reads images and places them in this queue. Oddly, the problem does not seem to be that the reader thread can't keep up. I added a print statement which displays the size of the queue before each removal, and it remains very close to capacity, at least for my test images, which are only 400 x 400 and approximately 100 KB each.
    I've tried animating the images in two distinct ways. My first approach, as I mentioned in the original question, was to animate the images using purely swing components and no manual painting of any kind. This is always nice because it frames the image perfectly and the code is minimal. To accomplish this I simply had the image reader thread pass in the image path to the constructor of ImageIcon and put the resulting ImageIcon in the queue. The animator thread had a simple swing timer which fired an ActionEvent every so-many milliseconds (in the event dispatch thread). In the actionPerformed method I simply wrote a few lines to remove the head of the queue and I used the JLabel setIcon(ImageIcon) to update the display. The code for this was very nice, however the animation was choppy (not flickering).
    In my second approach, which was a bit longer, I created a class called AnimationPanel which extended JPanel and implemented Runnable. I simply overrode paintComponent and inside I painted a member Image which was set in the thread loop. Rather than storing ImageIcons, the queue stored Images. The reader thread used ImageIO.read(...) to generate an Image. I used simple logic in the thread loop to monitor the fps and to fire repaints. This approach suffered from the same problem as the one above. The animation was choppy.
    I found the following things to be true:
    - the reader can keep up with the animator
    - if I run the image reader and perform a basic polygon animation, both of my approaches animate smoothly at high frame rates. In other words, it's not the work (disk reads) being done by the reader that is causing the lag
    - I believe the slowness can be attributed to the following calls: label.setIcon(imageIcon) or g.drawImage(image, 0, 0, this). This leads me to believe that the images are not fully processed as they are being placed on the queue. Perhaps Java is waiting until they are being used to fully read/process the images.
    - I need to find a way to process the images fully before trying to paint them, but I felt that my approaches above were doing that. I considered having the reader thread actually generate a JLabel or some other object that can be displayed without additional processing.
    - your idea about AWT components is a good one and I plan on trying that.
    Elliot

  • Intermittent image link behaviour in iOS mail client

    We use image links with a mailto reference for mobile approvals. For some reason the links occasionally do not behave as links but are instead treated only as an image, meaning the save image / copy menu pops up as opposed to an email draft. I have stripped the HTML email down to the bare minimum, but it still happens. The same mail works fine on some phones/iPads. Any thoughts much appreciated.

    I have the same problem. iPhone 4 is on iOS 5.1 (9B176) and iPad 2 on iOS 5.1 (9B176). I have seen someone with an iPhone 4S that apparently did not have that problem (there the menu also has an "open" option). Also I noticed that if you touch the icon before the image has finished loading (so with a very large image), it actually does open the URL, but once the image has been loaded, it assumes that you want to save or copy the image.

  • When a pop up window comes up it is - search bookmarks and history window!

    When a pop-up window comes up, it is the Search Bookmarks and History window! I cannot log into my bank, as the login button should open a new window to log in, but I get the search page instead. I cannot see larger images, as again I get the Search Bookmarks and History page, etc. This happens with all options that should open a new page. I am so frustrated; this has been happening since Firefox updated itself 2 days ago to Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8 ( .NET CLR 3.5.30729; .NET4.0C) and it was fine before that. I am using Windows Vista. Can you please advise what I should do? Also, can you go back to the previous version? Error console e.g.:
    Warning: Error in parsing value for 'cursor'. Declaration dropped.
    Source File: https://ib.nab.com.au/nabib/styles/menu_nab.css?id=009
    Line: 116
    ib.nab.com.au : server does not support RFC 5746, see CVE-2009-3555
    Warning: Selector expected. Ruleset ignored due to bad selector.
    Source File: https://ib.nab.com.au/nabib/styles/nabstyle.css?id=014
    Line: 837
    This happened: every time Firefox opened, starting 2 days ago after the update.

    Do you have that problem when running in the Firefox SafeMode?
    [http://support.mozilla.com/en-US/kb/Safe+Mode]
    ''Don't select anything right now, just use "Continue in SafeMode."''
    If not, see this:
    [http://support.mozilla.com/en-US/kb/troubleshooting+extensions+and+themes]

  • Open Dialog box: disable image preview (For large images, it's SLOOOWWWW)

    Open Dialog box in column view:
    Can I disable image preview?
    For large images, it's SLOOOWWWW !!!!!
    ( I often work with 8000 x 3500 panoramic images )

    William
    Mostly this is a problem in Photoshop.
    OK, my Photoshop is still a Classic version. I was looking at one or two others, like GraphicConverter, but the only pref there seems to be to generate a Preview if one doesn't exist.
    I'll look some more—it's irritating me now

  • Tiling or slicing large images in tables? Completely outdated?

    I'm working through the XHTML tutorials on Lynda.com
    The instructor does a demo of slicing a large image with sections that have animated .gifs. He slices the static portions and slices the animated portions and then combines them in a 3 slice x 3 slice table.
    He says "I still prefer to do this with tables, since it's what it was designed for."
    I know tables are outdated, but I'm not sure if it's in all respects. Like if I was going to do a data table, I certainly would use tables...
    I'm not sure when this video was made, but is this still in practice, and is it a good practice?
    He then shows a similar demo of how to combine slices using CSS. Is this outdated too? Are people still slicing large images?

    Unlike print design where everything is static and unchanging, web pages need to be flexible and web accessible to accommodate all users, displays and devices.
    Image slices have their plusses and minuses.  Occasionally, you may need them to create a flexible container that resizes to content.
         3 image slices in a CSS layout ~
         http://alt-web.com/DEMOS/Image-slices-in-a-CSS-based-layout.shtml
    That said, you can add visual interest to web pages without a lot of images using CSS.
         2-image web page design ~
         http://alt-web.com/TEMPLATES/2-image-web-design.shtml
    Finally, have a look at CSS Zen Garden where the power of CSS is demonstrated.
    Each page contains identical HTML markup but with wildly different styles.
    Hopefully this will inspire you to move away from tables and use CSS for primary layouts.
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    http://alt-web.com/
    http://twitter.com/altweb

  • How to load a large image for display on the Palm PDA

    Hello:
    I have a large image I would like to display in LV for Palm PDA. The documentation states there is a 64K chunk limitation for VIs to compile on the Palm. Therefore, I decided to break this image up into 2D arrays of less than 64K per subVI. My 3.2 MB image therefore has about 224 2D-array subVIs. My motivation is to somehow use the Build Array function (with concatenate inputs) to reconstruct the image at runtime. I can compile each 2D-array subVI as a test, but my attempt to reassemble and compile is futile. When I try to use Build Array with only 8 of these subVIs in a top-level VI, the compiler complains it is still too big a VI. Using Get Info, memory for this top-level VI states a total of ~23K of memory usage.
    So can someone help me with this problem? Is there another way to load an image of this size into memory so I can display it on a picture control?
    BTW, this can build on LV for PocketPC.
    Thank you
    Robert

    Hi Robert,
    You are correct in that one of the limitations of the Palm OS is that files cannot be larger than 64 KB in size (as stated on page 5-3 of the LabVIEW PDA Module User Manual). The suggested workaround is indeed to have a top-level PDA VI that calls subVIs, and this top-level VI can then have a total file size larger than 64 KB.
    My suspicion is that with the picture control on the Palm OS, even if you are breaking up the image into smaller subVIs, the process of building the array to reassemble the picture is still forcing the Palm to allocate memory for each of those 2D array subVIs to place on the top-level VI's picture control
    and thus causing the Palm to complain about not having enough memory. The LabVIEW PDA User Manual also states that "There is a total limit of 64 KB on all front panel array data". Thus, when you are trying to use the build array on the top-level VI, the 64 KB limit on front panel array data is still being exceeded -- even with only 8 of these subVIs.
    The following KnowledgeBase article KB 38EJRHFQ: How Do I Put a Picture or Image on My LabVIEW PDA for Palm Front Panel? states that as a workaround, you can shrink the size of your image by using 4-bit or 1-bit data instead of the standard 8-bit pixmap representation.
    This is really just a limitation of the Palm OS and the fact that there is simply no way of displaying that much front panel information for your picture. Either way, you would need to build up the array of pixels for your bitmap, and for a 3.2 MB image, I don't believe there is any way for the Palm OS PDA to display the image in its entirety at once.
    Sorry about that...
    Kileen C.
    National Instruments

  • Unable to create large image file in iphoto

    I make a lot of panoramic images by stitching together overlapping images with Hugin in TIFF format, then edit them in iPhoto and use it to export them in JPEG format for greater versatility of use. So far I haven't had any trouble, even with large images as big as ~25,000x2000 pixels and ~90MB file size.
    However, I have recently come across a problem trying to export an edited version of a particularly large one: 38,062x1799 pixels, 136MB file size. I can export an unedited version to JPEG without trouble, but when I tried to export an edited version it gave me an error saying it couldn't create the file. I looked around here for solutions and found the suggestion to check the file size in Finder; it was zero. After experimenting I've found that the file size goes to zero as soon as any changes are made to the file. The edited image can still be viewed in iPhoto, but does not display if you try to zoom in. I have tried making the TIFF file again with Hugin to see if it is a problem with the file, but I experience the same problem with the newly made file.
    I am working on a MacBook Air (1.7 GHz Intel Core i5, 4 GB 1333 MHz DDR3 memory, Intel HD Graphics 3000 384 MB, OS X 10.7.5) in iPhoto '11 version 9.2.2 (629.40). Any suggestions for getting around this issue?

    I thought of something that might help with working out what's going on.
    After I make any edits to the image, if I move on to the info tab the image initially goes a bit blurry as it often does with large images, but instead of coming up as a clear image after a few seconds like normal, I get the image I've attached here and the file size changes to zero. To be able to see the thumbnail of the edited image I need to come back out to the library again, but the file size is still zero. If I revert to original the file size is restored.

  • [BUG] Flash plug-in crash with large images, sometimes

    Hello,
    I've been working with using Pixel Bender shaders in Flash and have encountered plugin / browser crashes when using shaders on large images, but only on certain machines.
    I've been unable to reproduce the crash on any machine I have physical access to, but I was able to determine that it was caused by executing a shader on an image over 1028x1028 in size. I created a test Flash file (http://www.saltgames.com/?page_id=283) that runs a simple shader on gradually increasing image sizes. On most machines it will run fine, but on a few the Flash plugin (and possibly the browser too, depending on the browser) will lock up once it tries to render the 1500x1500 image.
    One user reported that although it crashed when using Firefox or Chrome, in Internet Explorer on the same machine it was fine.  I haven't been able to see a common feature in the set-ups that suffer this crash.
    I had approximately 50 people test a Flash game that featured shaders operating on large images, and 4 reported a crash.  Changing it to only run shaders on smaller image sizes seems to have fixed the crashes.
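    A rough sketch of that workaround, i.e. scaling the input down before running the shader (applyShaderCapped and the 1024 cap are made-up names/values, not from the original project):

    import flash.display.BitmapData;
    import flash.display.Shader;
    import flash.filters.ShaderFilter;
    import flash.geom.Matrix;
    import flash.geom.Point;

    // Scale src down so neither side exceeds maxSide, then run the shader on the smaller copy.
    function applyShaderCapped(src:BitmapData, shader:Shader, maxSide:int = 1024):BitmapData
    {
        var scale:Number = Math.min(1, maxSide / Math.max(src.width, src.height));
        var safe:BitmapData = src;
        if (scale < 1)
        {
            safe = new BitmapData(int(src.width * scale), int(src.height * scale), src.transparent, 0);
            var m:Matrix = new Matrix();
            m.scale(scale, scale);
            safe.draw(src, m, null, null, null, true);
        }
        safe.applyFilter(safe, safe.rect, new Point(), new ShaderFilter(shader));
        return safe;
    }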

    Hi,
    0xe9000 - 0xebfff +net.culater.SIMBL 0.8.2 (8) /Library/InputManagers/SIMBL/SIMBL.bundle/Contents/MacOS/SIMBL
    Follow the path: /Library/Input Managers. Move all the files in the Input Managers folder to the Trash.
    Instant Hijack Server.hermesmodule/Contents/MacOS/Instant Hijack Server
    If you have any browser enhancers like Unsanity's haxies, move them to the Trash.
    Relaunch Safari.
    Carolyn

  • How to view large images ?

    I just updated my phone (C601) from Symbian Anna to Belle. May I know how to view large images in the Nokia internet browser? I can't change it.

    Sorry, since I have just updated my mobile I can't show you how large it is,
    but I can show you how small it is...
    Even though I click 'More > View image',
    the image it shows me is still just that small...
    Attachments:
    scr000001.jpg (102 KB)

  • Button to b clicked and large image shown on the stage(link to swf file here)

    I have a button that, when clicked, is supposed to show a larger image of the button. I am new to the code and think that if I could see an example it would help. This is a school project, and I am failing as of now.
    How do I upload my files here so I can show them?   http://threadcontent.next.ecollege.com/(NEXT(54b34a37e8))/Main/CourseMode/Thread/DownloadAttachment.ed?virtualFileId=961613945&GoldenTicketParams=_u=8538800;_dt=634657009144447173;virtualFileIDs=956864518,956864621,956864990,960296914,960355361,960465721,960465764,960379092,961379343,960908613,961013978,961014291,961613945,961035853,961618981,961042519,961444490,961569920,961629398,961629452;&GoldenTicketSignature=35-AA-34-CE-10-A6-FD-4F-D1-DD-73-4C-A9-EF-8D-9D-63-E5-87-88-FB-D4-4D-15-06-BB-82-8A-9E-F2-36-DC
    I believe this is the download link to the SWF I have. You can see the buttons have click, over, and sound states, and when pressed they are supposed to show the larger version of the button's original image that you see before you hover.

    stop();
    trail_btn.addEventListener(MouseEvent.CLICK, trail);
    function trail(event:MouseEvent):void
    {
        gotoAndStop(10);
    }
    And the link I gave is the download link from my class at AI Online; it's perfectly fine.
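    If the goal is to actually display a larger image on click rather than just jumping to frame 10, here is a rough sketch of one approach, assuming the big image is a library bitmap exported for ActionScript as BigButtonImage (a made-up linkage name):

    import flash.display.Bitmap;
    import flash.events.MouseEvent;

    trail_btn.addEventListener(MouseEvent.CLICK, showLarge);

    function showLarge(event:MouseEvent):void
    {
        // BigButtonImage is the assumed BitmapData subclass generated by the library linkage.
        var big:Bitmap = new Bitmap(new BigButtonImage(0, 0));
        big.x = 100; // arbitrary placement
        big.y = 50;
        addChild(big);
    }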

  • Is it possible to downsize a large image on the server side (before serving it to the user)?

    One of the banes of my existence is resizing the same image several times because of the various contexts it appears in. For example, the main image in an article will appear bigger than it will on the index page, where many article teasers are placed. Then there's the image that goes in the Facebook "share" og. Then there's... you get the idea.
    Same image, different contexts, lots of Photoshop resizing, lots of files to keep track of... but I save on bandwidth.
    On the flip side, I can target the same (large) image and simply downsize via traditional HTML (width/height) on the browser end, but that will mean downloading a file that is 50-75% larger than what is actually needed. If the front page teaser displays a 1280px image at 500px, that's a huge (and potentially costly) waste of bandwidth.
    HOWEVER...
    If I could do the same thing on the SERVER side... in other words, tell the server to take only the pixels it needs from the (large) image and serve only THOSE to the end user... then the same image could be used in each and every context, and only the necessary bandwidth is used.
    Is this do-able? If so, what is the process formally called, and where would I begin to learn how to do it?
    Thanks!

    That's amazing. I didn't think it was possible without saving files on the server first.
    This will suit my needs just fine, but allow me to push my luck even further with the following "hail mary" question : would it be possible for the server to serve the PNG with a width of 100% of the container? In other words, the behavior would mimic width=100% but the server would still only serve as many bytes as are needed to fill that space.
    Not only would I not have to create separate files for every resize (we fixed that problem with your suggestion) but I wouldn't even have to customize every link to a specific size.
    I'm not expecting an affirmative on this one, but it was worth asking. =)

  • How to increase performance speed of Photoshop CS6 v13.0.6 with transformations on LARGE image files (25,000 x 50,000 pixels) on an iMac 3.4 GHz Intel Core i7, 16 GB memory, Mac OS 10.7.5? Should I purchase a Mac Pro?

    I have hundreds of these large image files to process.  I frequently use the SKEW, WARP, and IMAGE ROTATION features in Photoshop.  I process the files ONE AT A TIME and have only Photoshop running on my iMac.  Most of the time I am watching SLOW progress bars to complete the transformations.
    I have allocated as much memory as I have (about 14GB) exclusively to Photoshop using the performance preferences panel.  Does anyone have a trick for speeding up the processing of these files--or is my only solution to buy a faster Mac--like the new Mac Pro?

