Optimization techniques: cacheAsBitmap

I read that cacheAsBitmap is advantageous when used on display objects. I have a bitmap, not a vector, in my game which I convert to a movie clip - a hero ship, for example. Do I use cacheAsBitmap on that too, even though it is already a bitmap (well, a PNG)? I also read that scaling and rotating when using cacheAsBitmap is OK.
edit:
http://forums.adobe.com/thread/758774
Just read this post. The information I could glean was:
a. cacheAsBitmapMatrix is needed (or preferred) if you want to rotate and scale movie clips.
b. You DO use cacheAsBitmapMatrix even if the movie clip is a bitmap (PNG).
c. Even static background images should be cached.
However, at the end of the post it says that if you are using a large background then just add it from the library as a bitmap:
var myLibraryBitmap:Bitmap = new Bitmap(new LibraryBitmapSymbol());
Nothing to cacheAsBitmap, and no memory overhead from your movie clip.
It would be incredibly useful if an expert could confirm the above, as everybody making games should be optimizing properly.

Everything mentioned is true but I'd add a few notes.
cacheAsBitmapMatrix is good when you're not literally rotating/scaling/etc the object constantly. It's for an object that is merely adjusting x/y properties most of the time but may occasionally rotate or scale. If it's a ship of some sort that is constantly on the move I wouldn't even bother with cacheAsBitmapMatrix because it's just going to re-draw the clip constantly anyhow. 
Static backgrounds / bitmaps (buttons, graphics, etc) should always be cached to keep the redraw down.
Huge backgrounds should use a blitting technique to keep the display list simplified. Bitmaps for backgrounds will indeed remove a little overhead. Just as important, backgrounds and all other non-interactive objects should also have mouseChildren=false set so events don't pass through them. Every single object that has no interactive purpose should set this to drastically reduce event processing.
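As a minimal sketch of what that looks like in AS3 (the clip names here are placeholders, not from your project; most people switch off both mouseEnabled and mouseChildren on non-interactive clips):
// Hypothetical non-interactive clips: a background and a HUD overlay.
backgroundMc.mouseEnabled = false;   // the clip itself stops receiving mouse events
backgroundMc.mouseChildren = false;  // and the event system no longer walks its children
hudMc.mouseEnabled = false;
hudMc.mouseChildren = false;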
Lastly keep in mind that cacheAsBitmap is a toggle and works best when you have several objects in a single clip. Caching a single object inside a clip isn't really a big advantage unless it's a vector. But as you animate, if you know a complex object isn't going to change for a while you can enable cacheAsBitmap. Then when the object is going to transform, simply turn it off until you're finished and then re-enable it, like a toggle. 
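A rough sketch of that toggle idea, assuming a MovieClip named complexClip that sits still most of the time and only occasionally transforms (the names and the AIR/GPU note are my assumptions, not from the original post):
import flash.geom.Matrix;

// Cache while the clip is static so the renderer can reuse the rasterized surface.
complexClip.cacheAsBitmap = true;
// On AIR/mobile (GPU rendering) you can also assign a cacheAsBitmapMatrix so the
// cached surface can be reused under rotation/scaling instead of being regenerated:
complexClip.cacheAsBitmapMatrix = new Matrix();

function startTransforming():void {
    // Turn the cache off before a stretch of per-frame scaling/rotation;
    // otherwise the bitmap cache would be redrawn every frame anyway.
    complexClip.cacheAsBitmap = false;
}

function stopTransforming():void {
    // Once the clip settles down again, re-enable the cache.
    complexClip.cacheAsBitmap = true;
}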

Similar Messages

  • Criticism of new data "optimization" techniques

    On February 3, Verizon announced two new network practices in an attempt to reduce bandwidth usage:
    Throttling data speeds for the top 5% of new users, and
    Employing "optimization" techniques on certain file types for all users, in certain parts of the 3G network.
    These were two separate changes, and this post only talks about (2), the "optimization" techniques.
    I would like to criticize the optimization techniques as being harmful to Internet users and contrary to long-standing principles of how the Internet operates. This optimization can lead to web sites appearing to contain incorrect data, web sites appearing to be out-of-date, and depending on how optimization is implemented, privacy and security issues. I'll explain below.
    I hope Verizon will consider reversing this decision, or if not, making some changes to reduce the scope and breadth of the optimization.
    First, I'd like to thank Verizon for posting an in-depth technical description of how optimization works, available here:
    http://support.vzw.com/terms/network_optimization.html
    This transparency helps increase confidence that Verizon is trying to make the best decisions for their users. However, I believe they have erred in those decisions.
    Optimization Contrary to Internet Operating Principles
    The Internet has long been built around the idea that two distant servers exchange data with each other by transmitting "packets" using the IP protocol. The headers of these packets contain the information required such that all the Internet routers located between these servers can deliver the packets. One of the Internet's operating principles is that when two servers set up an IP connection, the routers connecting them do not modify the data. They may route the data differently, modify the headers in some cases (like network address translation), or possibly, in some cases, even block the data--but not modify it.
    What these new optimization techniques do is intercept a device's connection to a distant server, inspect the data, determine that the device is downloading a file, and in some cases, to attempt to reduce bandwidth used, modify the packets so that when the file is received by the device, it is a file containing different (smaller) contents than what the web server sent.
    I believe that modifying the contents of the file in this manner should be off-limits to any Internet service provider, regardless of whether they are trying to save bandwidth or achieve other goals. An Internet service provider should be a common carrier, billing for service and bandwidth used but not interfering in any way with the content served by a web server, the size or content of the files transferred, or the choices of how much data their customers are willing to use and pay for by way of the sites they choose to visit.
    Old or Incorrect Data
    Verizon's description of the optimization techniques explains that many common file types, including web pages, text files, images, and video files will be cached. This means that when a device visits a web page, it may be loading the cached copy from Verizon. This means that the user may be viewing a copy of the web site that is older than what the web site is currently serving. Additionally, if some files in the cache for a single web site were added at different times, such as CSS files or images relative to some of the web pages containing them, this may even cause web pages to render incorrectly.
    It is true that many users already experience caching because many devices and nearly all computer browsers have a personal cache. However, the user is in control of the browser cache. The user can click "reload" in the browser to bypass it, clear the cache at any time, or change the caching options. There is no indication with Verizon's optimization that the user will have any control over caching, or even knowledge as to whether a particular web page is cached.
    Potential Security and Privacy Violations
    The nature of the security or privacy violations that might occur depends on how carefully Verizon has implemented optimization. But as an example of the risk, look at what happened with Google Web Accelerator. Google Web Accelerator was a now-discontinued product that users installed as add-ons to their browsers which used centralized caches stored on Google's servers to speed up web requests. However, some users found that on web sites where they logged on, they were served personalized pages that actually belonged to different users, containing their private data. This is because Google's caching technology was initially unable to distinguish between public and private pages, and different people received pages that were cached by other users. This can be fixed or prevented with very careful engineering, but caching adds a big level of risk that these type of privacy problems will occur.
    However, Verizon's explanation of how video caching works suggests that these problems with mixed-up files will indeed occur. Verizon says that their caching technology works by examining "the first few frames (8 KB) of the video". This means that if multiple videos are identical at the start, that the cache will treat them the same, even if they differ later on in the file.
    Although it may not happen very frequently, this could mean that if two videos are encoded in the same manner except for the fact that they have edits later in the file, that some users may be viewing a completely different version of the video than what the web server transmitted. This could be true even if the differing videos are stored at completely separate servers, as Verizon's explanation states that the cataloguing process caches videos the same based on the 8KB analysis even if they are from different URLs.
    Questions about Tethering and Different Devices
    Verizon's explanation says near the beginning that "The form and extent of optimization [...] does not depend on [...] the user's device". However, elsewhere in the document, the explanation states that transcoding may be done differently depending on the capabilities of the user's device. Perhaps a clarification in this document is needed.
    The reason this is an important issue is that many people may wish to know if optimization happens when tethering on a laptop. I think some people would view optimization very differently depending on whether it is done on a phone, or on a laptop. For example, many people, for, say, business reasons, may have a strong requirement that a file they downloaded from a server is really the exact file they think they downloaded, and not one that has been optimized by Verizon.
    What I would Like Verizon To Do
    With respect to Verizon's need to limit bandwidth usage or provide incentives for users to limit their bandwidth usage, I hope Verizon reverses the decision to deploy optimization and chooses alternate, less intrusive means to achieve their bandwidth goals.
    However, if Verizon still decides to proceed with optimization, I hope they will consider:
    Allowing individual customers to disable optimization completely. (Some users may choose to keep it enabled, for faster Internet browsing on their devices, so this is a compromise that will achieve some bandwidth savings.)
    Only optimizing or caching video files, instead of more frequent file types such as web pages, text files, and image files.
    Disabling optimization when tethering or using a Wi-Fi personal hotspot.
    Finally, I hope Verizon publishes more information about any changes they may make to optimization to address these and other concerns, and commits to customers and potential customers about their future plans, because many customers are in 1- or 2-year contracts, or considering entering such contracts, and do not wish to be impacted by sudden changes that negatively impact them.
    Verizon, if you are reading, thank you for considering these concerns.

    A very well written and thought out article. And, you're absolutely right - this "optimization" is exactly the reason Verizon is fighting the new net neutrality rules. Of course, Verizon itself (and its most ardent supporters on the forums) will fail to see the irony of requiring users to obtain an "unlimited" data plan, then complaining about data usage and trying to limit it artificially. It's like a hotel renting you a room for a week, then complaining you stayed 7 days.
    Of course, it was all part of the plan to begin with - people weren't buying the data plans (because they were such a poor value), so the decision was made to start requiring them. To make it more palatable, they called the plans "unlimited" (even though at one point unlimited meant limited to 5GB, though this was later dropped). Then, once the idea of mandatory data settles in, implement data caps with overages, which is what they were shooting for all along. AT&T has already leapt; Verizon has said they will, too.

  • Performance Optimization - Evaluation & Optimization techniques

    Hello,
    Does something like this exist? Methods/Best practices of evaluating or optimizing performance of BPC NW?
    Thanks.

    Hi Zack,
    Please check the [Performance Analysis and Tuning Guide|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e08c2aa2-6c58-2e10-3588-e6ed2e7c04f8?QuickLink=index&overridelayout=true] and also [Improve your Reporting Performance|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e085456b-2685-2e10-2fa4-dfb1a49243ec?QuickLink=index&overridelayout=true] guide.
    You can also go through the BPC 7.5 Admin and Installation Guides for Optimization techniques.
    Hope it helps.
    Regards,
    Raghu

  • What are the Optimization Techniques?

    What are the optimization techniques? Can anyone send a sample program that demonstrates good optimization techniques?
    Phani

    Hi Phani Kumar Durusoju,
    ABAP/4 programs can take a very long time to execute, and can make other processes have to wait before executing. Here are some tips to speed up your programs and reduce the load your programs put on the system:
    Use the GET RUN TIME command to help evaluate performance. It's hard to know whether an optimization technique REALLY helps unless you test it out. Using this tool can help you know what is effective, under what kinds of conditions. GET RUN TIME has problems under multiple CPUs, so you should use it to test small pieces of your program, rather than the whole program.
    Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read/write to hard disk are always the most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which therefore increases your I/O reads/writes to disk. CPU activity can be reduced by careful program design, and by using commands such as SUM (SQL) and COLLECT (ABAP/4).
    Avoid SELECT *, especially in tables that have a lot of fields. Use SELECT A B C INTO instead, so that fields are only read if they are used. This can make a very big difference.
    Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging space, rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing large lists (e.g. over 50,000 records). If you have large lists, you should work with the systems administrator to decide the maximum amount of RAM your program should use, and from that, calculate how much space your lists will use. Then you can decide whether to write the data to memory or swap space. See the field-groups ABAP example.
    Use as many table keys as possible in the WHERE part of your SELECT statements.
    Whenever possible, design the program to access a relatively constant number of records (for instance, if you only access the transactions for one month, then there probably will be a reasonable range, like 1200-1800, for the number of transactions entered within that month). Then use a SELECT A B C INTO TABLE ITAB statement.
    Get a good idea of how many records you will be accessing. Log into your productive system, use SE80 -> Dictionary Objects (press Edit), enter the table name you want to see, and press Display. Go to Utilities -> Table Contents to query the table contents and see the number of records. This is extremely useful in optimizing a program's memory allocation.
    Try to make the user interface such that the program gradually unfolds more information to the user, rather than giving a huge list of information all at once.
    Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to be accessing. If the number of records exceeds NUM_RECS, the data will be kept in swap space (not memory).
    Use SELECT A B C INTO TABLE ITAB whenever possible. This will read all of the records into the itab in one operation, rather than the repeated operations that result from a SELECT A B C INTO ITAB ... ENDSELECT statement. Make sure that ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access.
    If the number of records you are reading is constantly growing, you may be able to break it into chunks of relatively constant size. For instance, if you have to read all records from 1991 to present, you can break it into quarters, and read all records one quarter at a time. This will reduce I/O operations. Test extensively with GET RUN TIME when using this method.
    Know how to use the COLLECT command. It can be very efficient.
    Use the SELECT SINGLE command whenever possible.
    Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by calculating a total that has already been calculated and stored.
    These are good websites which will help you:
    Performance tuning
    http://www.sapbrainsonline.com/ARTICLES/TECHNICAL/optimization/optimization.html
    http://www.geocities.com/SiliconValley/Grid/4858/sap/ABAPCode/Optimize.htm
    http://www.abapmaster.com/cgi-bin/SAP-ABAP-performance-tuning.cgi
    http://abapcode.blogspot.com/2007/05/abap-performance-factor.html
    cheers!
    gyanaraj
    Please reward points if you find this helpful.

  • Query Optimization Techniques

    Dear All,
    Kindly list down how a query can be optimized in terms of using indexes, joins, predicates, cursors, inline views, scalar functions, etc.
    In short, I would like people to list down query optimization techniques.
    Regards,
    Hassan

    Hi,
    The MetaLink note 398838.1 (FAQ: Query Tuning Frequently Asked Questions) is a very good resource for this.
    Regards,
    S.K.

  • Phenomenal optimization technique!

    I just discovered an amazing way of optimizing a script, which I thought
    I'd share.
    I have a script that adds line numbers to an InDesign document
    (www.freelancebookdesign.com under the scripting tab).
    It works by adding a text frame alongside each InDesign frame and adds
    numbers in that frame.
    I've done quite a lot of optimization on it already, and the script
    starts at a very nice pace, but it soon slows down.
    So on a book with 100 pages, it's pretty quick. But adding line numbers
    to a 500-page book becomes very slow, because by the last 1/3 or so of
    pages the script has slowed to a crawl.
    Now, many of you will probably recognize the symptoms: with each page, a
    new text frame + contents has been created, so after 200 or so
    operations, the undo queue has become very long.
    The question then becomes: how to flush the undo queue.
    Now, I remember reading once a suggestion to do a "save as". Thing is, I
    don't want to "save as" the user's document -- they won't thank me if
    they need to undo a few steps before they ran the script!
    Of course, the script already uses a doScript call with
    UndoModes.ENTIRE_SCRIPT so it's all a single step. And we know that
    FAST_ENTIRE_SCRIPT isn't safe to use -- it's quite buggy.
    What I figured out, and am quite proud of, is to break up the loop
    that goes through those 500 pages into 10 loops of around 50 pages each
    -- and run each loop with a separate doScript (ENTIRE_SCRIPT) call. So
    we have a nested doScript.
    The thing about UndoModes.ENTIRE_SCRIPT seems to be that the undo queue
    is still written to, and when the doScript call ends, they are all
    deleted and turned into one step. So each time a doScript call finishes,
    even if your call involved a thousand steps, they will all be reduced to
    a single undo step when it finishes -- and this is the equivalent of a
    "save as".
    And since it seems to take exponentially longer to execute a command the
    longer the undo queue is, by dividing the queue into 10 chunks of 50
    (instead of a single chunk of 500), a huge amount of time is saved.
    Every 50 iterations, the undo queue is flushed, and the script therefore
    continues at the same pace as when it was first run. (Obviously, if
    there are thousands of iterations, it is probably a good idea to add
    another nested doScript call).
    So, case in point: Experiments with a 500-page book have shown a 3.6x speed-up: what used to take 288 seconds now takes 80 seconds!
    I'm pretty impressed!
    Do you have a better way of dealing with undo slowness?
    Ariel

    Thanks. @Pickory: Yes, a nested doScript.
    Here's a test script. The script creates a new document and adds 1000
    pages, each with a text frame on it. It does this twice: First time,
    with a single doScript call, second time with a nested doScript (ie 10 x
    100 pages).
    The results I get are 48 seconds for the first run, 31 seconds for
    the second -- only 2/3 of the time it takes the first loop!
    And this is for a relatively simple operation: The more the script does,
    the more the advantage is noticeable (as I mentioned, in my Line Number
    script, it took 1/4 of the time for a long document!).
    Ariel
    // TEST 1: Single doScript to create 1000 pages with text frame
    var myDoc = app.documents.add();
    alert("Starting Test 1");
    $.hiresTimer; // reading $.hiresTimer resets the high-resolution timer
    app.doScript(main, undefined, undefined, UndoModes.ENTIRE_SCRIPT, "test");
    alert("Single doScript call took " + $.hiresTimer / 1000000 + " seconds");
    function main(){
        for (var i = 0; i < 1000; i++){
            myPage = myDoc.pages.add();
            myPage.textFrames.add();
        }
    }
    myDoc.close(SaveOptions.NO);
    // TEST 2: Nested doScript to create 1000 pages with text frame
    myDoc = app.documents.add();
    alert("Starting Test 2");
    $.hiresTimer; // reset the timer again
    app.doScript(main2, undefined, undefined, UndoModes.ENTIRE_SCRIPT, "test 2");
    alert("Nested doScript version took " + $.hiresTimer / 1000000 + " seconds.");
    function main2(){
        for (var i = 0; i < 10; i++){
            // Each inner doScript call collapses its steps into one undo step,
            // which keeps the undo queue short.
            app.doScript(nestedDoScript, undefined, undefined, UndoModes.ENTIRE_SCRIPT, "The user will never see this");
        }
    }
    function nestedDoScript(){
        for (var j = 0; j < 100; j++){
            myPage = myDoc.pages.add();
            myPage.textFrames.add();
        }
    }

  • Optimization techniques for JSC2 and its AppServer

    There have been several forums posts about the lagging performance of JSC2 and the bundled App Server.
    Could anyone suggest some steps to optimize the IDE and app server without any hardware changes?

    check out:
    http://developers.sun.com/prodtech/javatools/jscreator/reference/faqs/technical/tshooting/hot_fix_2.html
    NetBeans Performance FAQs:
    http://wiki.netbeans.org/wiki/view/NetBeansUserFAQ#section-NetBeansUserFAQ-Performance
    HTH,
    Sakthi

  • Optimization techniques when animating?

    Hi,
    I am a newbie when it comes to animations and was wondering if there are techniques/procedures that should or shouldn't be done when using EA to animate.
    I have been playing around with animations and I clearly noticed tearing (I think that is the word) in one animation: an image that moves diagonally (initially not on the stage) to the top left corner. There is not much going on in the animation, which has confused me.
    So are there certain things I should avoid when making animations?
    I have a Core i7 laptop with 16GB of RAM, so I wasn't expecting to see tearing in such a simple animation.
    Thanks.

    Hi,
    I have been working with Edge Animate for some time, and some animations that worked on desktop did not work on mobile.
    If that is your case, the solution for me was to use GreenSock (GSAP).
    resdesign sent me this nice sample:
    https://app.box.com/s/xxqqnx25i6p5vbn5mzya
    regards,
    Paul

  • Optimization techniques

    Why will retrieval time be high if you tag a dense dimension member as dynamic calc and store, and can anybody elaborate on how Essbase allocates storage space in this situation?

    Essbase works best when all of the data for a given data block is stored contiguously. When you export your data and then reload the exported file, the data is contiguous, meaning that the disk heads only need to be positioned one time to complete a logical read. Essbase also compresses the block when it can to reduce space consumption. Dynamic calc and store needs to write the value somewhere, but there is no room contiguously, so the disk system creates an extent to point to the value stored elsewhere. Now, to perform a logical read of the block, the disk heads must be positioned twice, or as many times as there are extents. Calc and store is probably the least used of the storage options; experienced developers rarely use it.

  • How to optimize the performance of this code ?

    I have two movie clips in a Flash project. One of them is fixed and the other can be moved with the arrow keys on the keyboard. The two movie clips have irregular shapes, so hitTestObject and hitTestPoint don't work very well. I have a function that detects collision of two movie clips using bitmaps. I wanted to update the position of the movable movie clip, so I put the collision detection function inside an ENTER_FRAME event listener. It works very well, but when I add many fixed movie clips (about 10 fixed movie clips in one frame), the game (.swf file) becomes slower and the performance of the PC drops. I thought that my collision detection function had a negative effect on PC performance, so I used the class on this page: https://forums.adobe.com/thread/873737
    but the same thing happens.
    Would you tell me how to speed up the execution of my code?
    Here is part of my code :
    stage.addEventListener(Event.ENTER_FRAME, myOnEnterFrame);
    function myOnEnterFrame(event:Event):void
    {
        if (doThisFn) // doThisFn allows or prevents the movable movie clip from being moved with the keyboard arrows
        {
            if (left && !right) {
                player.x -= speed;
                player.rotation = player.rotation - speed;
            }
            if (right && !left) {
                player.x += speed;
                player.rotation = player.rotation + speed;
            }
            if (up && !down) {
                player.y -= speed;
            }
            if (down && !up) {
                player.y += speed;
            }
        }
        // The fixed movie clips are wall1, wall2, wall3, wall4, ... and so on.
        // The following code checks how many walls exist on the frame and pushes them into the wallA array.
        for (var i:int = 0; i < 1000; i++) // We can put up to 1000 wall objects into the wallA array
        {
            if (this["wall" + i]) // If the wall object exists, push it into the wallA array
            {
                wallA.push(this["wall" + i]);
            }
        }
        for (i = 0; i < wallA.length; i++)
        {
            if (h.hitF(player, wallA[i]) || gameOverTest) // This checks whether the player (the movable movie clip) hits a wall
            {
                trace("second try");
                gameOver.visible = true;
                doThisFn = false;
            }
        }
        // I think the following code is easy to execute and run. I think the performance issue is due to the previous code.
        if (player.hitTestObject(door))
        {
            win.visible = true;
            doThisFn = false;
        }
        if (key) // if there is a key on the frame
        {
            if (player.hitTestObject(key))
            {
                key.visible = false;
            }
        }
        switch (currentFrame)
        {
            case 4:
                wallA[0].visible = false;
                wallA[0].x = 50000;
                break;
            case 5:
                wall14.play();
                wall8.x = 430;
                break;
        }
    }

    It's a simple question that usually has no simple answer.
    Here's an excerpt from a book I wrote (Flash Game Development: In a Social, Mobile and 3D World).
    Optimization Techniques
    Unfortunately, I know of no completely satisfactory way to organize this information. In what follows, I discuss memory management first with sub-topics listed in alphabetical order. Then I discuss CPU/GPU management with sub-topics listed in alphabetical order.
    That may seem logical but there are, at least, two problems with that organization.
    1. I do not believe it is the most helpful way to organize this information.
    2. Memory management affects CPU/GPU usage, so everything in the Memory Management section could also be listed in the CPU/GPU section.
    Anyway, I am going to also list the information two other ways, from easiest to hardest to implement and from greatest to least benefit.
    Both of those later listings are subjective and are dependent on developer experience and capabilities, as well as the test situation and test environment. I very much doubt there would be a consensus on the ordering of these lists. Nevertheless, I think they are still worthwhile.
    Easiest to Hardest to Implement
    1.  Do not use Filters.
    2.  Always use reverse for-loops and avoid do-loops and avoid while-loops.
    3.  Explicitly stop Timers to ready them for gc (garbage collection).
    4.  Use weak event listeners and remove listeners.
    5.  Strictly type variables whenever possible.
    6.  Explicitly disable mouse interactivity when mouse interactivity not needed.
    7.  Replace dispatchEvents with callback functions whenever possible.
    8.  Stop Sounds to enable Sounds and SoundChannels to be gc'd.
    9.  Use the most basic DisplayObject needed.
    10. Always use cacheAsBitmap and cacheAsBitmapMatrix with air apps (i.e., mobile devices).
    11. Reuse Objects whenever possible.
    12. Event.ENTER_FRAME loops: Use different listeners and different listener functions applied to as few DisplayObjects as possible.
    13. Pool Objects instead of creating and gc'ing Objects.
    14. Use partial blitting.
    15. Use stage blitting.
    16. Use Stage3D
    Greatest to Least Benefit
    Use stage blitting (if there is enough system memory).
    Use Stage3D.
    Use partial blitting.
    Use cacheAsBitmap and cacheAsBitmapMatrix with mobile devices.
    Explicitly disable mouse interactivity when mouse interactivity not needed.
    Do not use Filters.
    Use the most basic DisplayObject needed.
    Reuse Objects whenever possible.
    Event.ENTER_FRAME loops: Use different listeners and different listener functions applied to as few DisplayObjects as possible.
    Use reverse for-loops and avoid do-loops and while-loops.
    Pool Objects instead of creating and gc'ing Objects.
    Strictly type variables whenever possible.
    Use weak event listeners and remove listeners.
    Replace dispatchEvents with callback functions whenever possible.
    Explicitly stop Timers to ready for gc.
    Stop Sounds to enable Sounds and SoundChannels to be gc'd.
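    To make one of the items above concrete ("Pool Objects instead of creating and gc'ing Objects"), here is a minimal, hypothetical AS3 pool; the Bullet class and function names are invented for illustration and are not from the book:
    // Reuse Bullet instances instead of creating one per shot and letting gc collect them.
    var bulletPool:Vector.<Bullet> = new Vector.<Bullet>();

    function getBullet():Bullet {
        // Hand back a pooled instance when one is available, otherwise create a new one.
        return bulletPool.length > 0 ? bulletPool.pop() : new Bullet();
    }

    function releaseBullet(b:Bullet):void {
        // Reset state and park the instance in the pool instead of dropping the reference.
        b.visible = false;
        bulletPool.push(b);
    }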

  • Optimization of Drill Down time to Base level members

    Hi All,
    I have built a Balance Sheet Lead Sheet in which I need to show, in the rows, all the Entities that have values for a particular Balance Sheet account. What I am doing is providing the Entity structure at the consolidated level and then allowing users to drill down to base-level entities, but the problem is that drilling down to the last level of entity members takes a whole lot of time. I can understand that the application will take time to return the response on the fly after suppressing the zero rows, but is there any way of applying some optimization techniques to this, either at the FR level or the HFM level?
    Thanks in advance.
    Regards
    AG

    I think you double-posted. I just replied to the other question: Re: Defining Drilling Level Restrictions
    Cheers,
    Mehmet

  • Code Optimization for handling large volume of data

    Hi All,
    We are facing a problem when executing a report: it takes a lot of time to execute, and many times the program is terminated with the dump "Timeout: Program terminated because of endless loop".
    The internal table which has to be looped over has more than 8.5 lakh (850,000) records,
    and for each pass of the loop there are two READ statements and one SELECT statement (unavoidable).
    (We have followed almost all the optimization techniques.)
    Please suggest if you have any idea as to ... what can be done in such situation....
    Thanks and Regards,
    Sushil Hadge.

    Hi Martin,
    Following is the piece of code.....
    SELECT bukrs gpart hkont waers
      FROM dfkkop
      INTO TABLE it_dfkkop
      WHERE bukrs = p_bukrs AND bldat IN so_bldat AND hkont IN so_hkont.
    SORT it_dfkkop BY gpart.
    LOOP AT it_dfkkop INTO wa_dfkkop.
      <Read statement>
      <Read statement>
      ON CHANGE OF wa_dfkkop-gpart.
        SELECT gpart hkont waers betrw FROM dfkkop INTO TABLE it_subtot
          WHERE hkont = wa_dfkkop-hkont AND gpart = wa_dfkkop-gpart.
        IF it_subtot IS NOT INITIAL.
          LOOP AT it_subtot INTO wa_subtot.
            v_sum = v_sum + wa_subtot-betrw.
          ENDLOOP.
        ENDIF.
      ENDON.
    ENDLOOP.
    Please suggest if this can be improved in some way....
    Thanks ,
    Sushil

  • Training invitation - SAP DB2 Migration Optimization workshop -- SAP DB2 Migration Optimization (free)

    Dear customer,
    To help SAP customers carry out heterogeneous system migrations more effectively and to improve their ability to migrate and administer SAP in a DB2 database environment, on behalf of IBM we would like to invite you to the "SAP DB2 Migration Optimization" training provided by IBM.
    This free training will be held in Beijing (April 12 to April 14). It is aimed at SAP system administrators, DBAs and technical consultants who have some SAP administration experience, want to gain a deeper understanding of DB2 LUW, and want to learn how to perform heterogeneous migrations effectively in an SAP environment. Each participant is provided with a lab environment. The course objectives are:
    •     Understand DB2 fundamentals;
    •     Master the use of SAP and DB2 tools for efficient and safe heterogeneous system migration;
    •     Understand migration monitoring tools;
    •     Understand migration-related optimization tools and methods;
    •     Understand migration optimization for SAP ERP and BW;
    •     Acquire the ability to analyze basic problems during the migration process.
    Please see the agenda below for training details.
    If you decide to attend, please fill in the reply form below and confirm by email before April 5, 2011.
    Contact: Guo Yi Qun (郭亦群)
    Tel:  86-10 63614570         Email: guoyiq at cn.ibm.com
    Mobile: 86-13701235290
    Thank you for your cooperation!
    (Note: transportation and accommodation are at your own expense.)
    SAP DB2 Migration Optimization Training Agenda
    Location: Room 605, 和盛嘉业大厦 / 北京易智康达科技有限公司
         No. 32 Zhongguancun Street, Haidian District, Beijing
    Dates: April 12 to 14
    (Daily class hours: 9:30-17:30)
    Day 1
    1.1    DB2 & SAP Overview
    1.2    Migration Overview
    1.3    Migration Tools Usage and Optimization
    1.4    Hands On Lab 1
    Day 2
    2.1    Advanced Optimization Techniques
    2.2    Hands On Lab 2
    2.3     Monitoring Tools
    2.4     DB2 Layout and Configuration Options
    2.5     Hands On Lab 3
    2.6     Import Optimization (part 1)
    2.7     Hands On Lab 4
    Day 3
    3.1   Import Optimization (part 2)
    3.2     Hands On Lab 5
    3.3     Import Optimization (part 3)
    3.4     Hands On Lab 6
    3.5     Special Considerations for Migrating BI systems
    3.6     SAP/DB2 Migration Optimization - Summary
    3.7     Q&A
    Reply Form
    Company name: ___________________________________________________________
    Address: ___________________________________________________________
    Name:          Title:
    Email:          Tel:
    Name:          Title:
    Email:          Tel:

    Not bad, it's worth going.

  • General optimization suggestions

    Hi
    I love Arch. It's fast, convenient and easy to understand. However, I sometimes feel the system should be faster.
    So maybe I'm missing something... I'd like to hear system optimization techniques in this thread.
    Let me start:
    * compile your own kernel
    * make sure DMA is turned on for hard drives
    * use light desktop env/wm

    Dusty wrote:
    kensai wrote:Still a custom compiled archck kernel uses less memory and is more cpu efficient than archck from repo.
    The point is, the time it takes to compile that kernel (7-20 minutes, I've heard) will never be recovered in terms of *noticeable* responsiveness to the user. You can quote numbers saying its more efficient or uses less memory, but these resources are certainly not scarce in modern systems.
    Dusty
    Well, that's what I thought, so I've been using the standard kernel with an adapted mkinitrd.conf for almost a year with great success. Also, when I bought a new computer, I made sure I got plenty of (inexpensive) DDR400 SDRAM plus the fastest rather than the biggest SATA(II) HDD, along with an AMD64 and a cool mobo.
    Now I successfully compiled my first Iphitus archck kernel without initrd/~ramfs on my old laptop last month, and I found it eating as little as 29MB RAM with Founy's e17-cvs fully launched, which I liked (less RAM for the system is always good, and better still on a laptop in terms of usability and autonomy).
    It seems that the archck kernel improved, or maybe my knowledge of compiling an appropriate kernel did, because last week I compiled it on my Dell Latitude L400 along with Suspend and Speedstep support. Rebooted and: GASP! Memory usage before starting X was 9 MB!!! I just didn't even dream of such a result.
    And yesterday on my AMD64 main PC (1024 MB RAM with lots of FS and accessories like a TV tuner and an external HDD) --> 19MB without X, and an amazing 60MB with f@h, rTorrent, e16 (X) and aterm.
    Those would have been postponed without Wain's great kernel26-archck PKGBUILD (FR), which has improved _a lot_ recently and allows any AL user to take care of only the modules (s)he needs.

  • Load Optimization

    Hi Experts,
    Can you please tell me the load optimization techniques in BW? I need to enhance the performance of a process chain which used to take less time earlier, but now takes more time even though there are only a few extra records in addition to the previous ones.
    Please suggest if I can do something.

    Hi,
    Please find below some of the performance optimization techniques which may help you:
    1) Increase the parallel processing during extraction.
    2) Selective loading.
    3) Every job will have a priority (A, B, C - A being the highest and C being the lowest); choose this based on your scenario.
    4) Check with BASIS for the sizing of your server.
    5) We can increase the number of background processes during data loads. This can be done by making dialog processes behave as background processes.
    For this you need BASIS inputs. (This is done in some profile settings by making the system behave differently during loads, something like day mode/night mode.)
    6) There are some maintenance jobs that should run regularly in any SAP box to ensure proper functioning. Check them.
    7) Use of start routines is preferred instead of update routines.
    Regards,
    KK.
