Optimization techniques when animating?

Hi,
I am a newbie when it comes to animations and was wondering if there are techniques/procedures that should or shouldn't be followed when using Edge Animate (EA) to animate.
I have been playing around with animations and clearly noticed tearing (I think that is the word) in one of them: an image that starts off the stage and moves diagonally to the top left corner. There is not much going on in the animation, which has confused me.
So are there certain things I should avoid when making animations?
I have a Core i7 laptop with 16 GB of RAM, so I wasn't expecting to see tearing in such a simple animation.
Thanks.

Hi,
I have been working with Edge Animate for some time, and some animations that worked on desktop did not work on mobile.
If that is your case, the solution for me was to use GreenSock (GSAP).
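For example, a minimal GSAP sketch might look like the snippet below (GSAP 3 syntax; the "#hero" selector and the pixel offsets are made up for illustration, and the GSAP library is assumed to be loaded on the page). Animating x/y drives CSS transforms, which browsers usually composite more smoothly than animating left/top:

// Minimal GSAP 3 sketch -- assumes gsap is loaded and an element
// with the (hypothetical) id "hero" exists on the stage.
// x/y animate the transform, which tends to tear less than left/top.
gsap.fromTo("#hero",
  { x: 400, y: 400 },                              // start off-stage, down and to the right
  { x: 0, y: 0, duration: 2, ease: "power2.out" }  // glide to the top left corner
);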
Resdesign sent me this nice sample:
https://app.box.com/s/xxqqnx25i6p5vbn5mzya
regards,
Paul

Similar Messages

  • Criticism of new data "optimization" techniques

    On February 3, Verizon announced two new network practices in an attempt to reduce bandwidth usage:
    1. Throttling data speeds for the top 5% of new users, and
    2. Employing "optimization" techniques on certain file types for all users, in certain parts of the 3G network.
    These were two separate changes, and this post only talks about (2), the "optimization" techniques.
    I would like to criticize the optimization techniques as being harmful to Internet users and contrary to long-standing principles of how the Internet operates. This optimization can lead to web sites appearing to contain incorrect data, web sites appearing to be out-of-date, and depending on how optimization is implemented, privacy and security issues. I'll explain below.
    I hope Verizon will consider reversing this decision, or if not, making some changes to reduce the scope and breadth of the optimization.
    First, I'd like to thank Verizon for posting an in-depth technical description of how optimization works, available here:
    http://support.vzw.com/terms/network_optimization.html
    This transparency helps increase confidence that Verizon is trying to make the best decisions for their users. However, I believe they have erred in those decisions.
    Optimization Contrary to Internet Operating Principles
    The Internet has long been built around the idea that two distant servers exchange data with each other by transmitting "packets" using the IP protocol. The headers of these packets contain the information required such that all the Internet routers located between these servers can deliver the packets. One of the Internet's operating principles is that when two servers set up an IP connection, the routers connecting them do not modify the data. They may route the data differently, modify the headers in some cases (like network address translation), or possibly, in some cases, even block the data--but not modify it.
    What these new optimization techniques do is intercept a device's connection to a distant server, inspect the data, determine that the device is downloading a file, and in some cases, to attempt to reduce bandwidth used, modify the packets so that when the file is received by the device, it is a file containing different (smaller) contents than what the web server sent.
    I believe that modifying the contents of the file in this manner should be off-limits to any Internet service provider, regardless of whether they are trying to save bandwidth or achieve other goals. An Internet service provider should be a common carrier, billing for service and bandwidth used but not interfering in any way with the content served by a web server, the size or content of the files transferred, or the choices of how much data their customers are willing to use and pay for by way of the sites they choose to visit.
    Old or Incorrect Data
    Verizon's description of the optimization techniques explains that many common file types, including web pages, text files, images, and video files will be cached. This means that when a device visits a web page, it may be loading the cached copy from Verizon. This means that the user may be viewing a copy of the web site that is older than what the web site is currently serving. Additionally, if some files in the cache for a single web site were added at different times, such as CSS files or images relative to some of the web pages containing them, this may even cause web pages to render incorrectly.
    It is true that many users already experience caching because many devices and nearly all computer browsers have a personal cache. However, the user is in control of the browser cache. The user can click "reload" in the browser to bypass it, clear the cache at any time, or change the caching options. There is no indication with Verizon's optimization that the user will have any control over caching, or even knowledge as to whether a particular web page is cached.
    Potential Security and Privacy Violations
    The nature of the security or privacy violations that might occur depends on how carefully Verizon has implemented optimization. But as an example of the risk, look at what happened with Google Web Accelerator. Google Web Accelerator was a now-discontinued product that users installed as an add-on to their browsers, which used centralized caches stored on Google's servers to speed up web requests. However, some users found that on web sites where they logged on, they were served personalized pages that actually belonged to different users, containing their private data. This is because Google's caching technology was initially unable to distinguish between public and private pages, and different people received pages that were cached by other users. This can be fixed or prevented with very careful engineering, but caching adds a big level of risk that these types of privacy problems will occur.
    However, Verizon's explanation of how video caching works suggests that these problems with mixed-up files will indeed occur. Verizon says that their caching technology works by examining "the first few frames (8 KB) of the video". This means that if multiple videos are identical at the start, the cache will treat them the same, even if they differ later in the file.
    Although it may not happen very frequently, this could mean that if two videos are encoded in the same manner except for the fact that they have edits later in the file, some users may be viewing a completely different version of the video than what the web server transmitted. This could be true even if the differing videos are stored on completely separate servers, as Verizon's explanation states that the cataloguing process caches videos the same based on the 8KB analysis even if they are from different URLs.
    Questions about Tethering and Different Devices
    Verizon's explanation says near the beginning that "The form and extent of optimization [...] does not depend on [...] the user's device". However, elsewhere in the document, the explanation states that transcoding may be done differently depending on the capabilities of the user's device. Perhaps a clarification in this document is needed.
    The reason this is an important issue is that many people may wish to know if optimization happens when tethering on a laptop. I think some people would view optimization very differently depending on whether it is done on a phone, or on a laptop. For example, many people, for, say, business reasons, may have a strong requirement that a file they downloaded from a server is really the exact file they think they downloaded, and not one that has been optimized by Verizon.
    What I would Like Verizon To Do
    With respect to Verizon's need to limit bandwidth usage or provide incentives for users to limit their bandwidth usage, I hope Verizon reverses the decision to deploy optimization and chooses alternate, less intrusive means to achieve their bandwidth goals.
    However, if Verizon still decides to proceed with optimization, I hope they will consider:
    Allowing individual customers to disable optimization completely. (Some users may choose to keep it enabled, for faster Internet browsing on their devices, so this is a compromise that will achieve some bandwidth savings.)
    Only optimizing or caching video files, instead of more frequent file types such as web pages, text files, and image files.
    Disabling optimization when tethering or using a Wi-Fi personal hotspot.
    Finally, I hope Verizon publishes more information about any changes they may make to optimization to address these and other concerns, and commits to customers and potential customers about its future plans, because many customers are in 1- or 2-year contracts, or considering entering such contracts, and do not wish to be hit by sudden changes that negatively impact them.
    Verizon, if you are reading, thank you for considering these concerns.

    A very well written and thought-out article. And, you're absolutely right - this "optimization" is exactly the reason Verizon is fighting the new net neutrality rules. Of course, Verizon itself (and its most ardent supporters on the forums) will fail to see the irony of requiring users to obtain an "unlimited" data plan, then complaining about data usage and trying to limit it artificially. It's like a hotel renting you a room for a week, then complaining you stayed 7 days.
    Of course, it was all part of the plan to begin with - people weren't buying the data plans (because they were such a poor value), so the decision was made to start requiring them. To make it more palatable, they called the plans "unlimited" (even though at one point unlimited meant limited to 5GB, but this was later dropped). Then, once the idea of mandatory data settles in, implement data caps with overages, which is what they were shooting for all along. AT&T has already leapt; Verizon has said they will, too.

  • What are the Optimization Techniques?

    What are the optimization techniques? Can anyone send a sample program that uses good optimization techniques?
    Phani

    Hi Phani Kumar Durusoju,
    ABAP/4 programs can take a very long time to execute, and can make other processes have to wait before executing. Here are some tips to speed up your programs and reduce the load they put on the system:
    • Use the GET RUN TIME command to help evaluate performance. It's hard to know whether an optimization technique REALLY helps unless you test it out. Using this tool can help you know what is effective, under what kinds of conditions. GET RUN TIME has problems under multiple CPUs, so you should use it to test small pieces of your program, rather than the whole program.
    • Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read/write to hard disk are always the most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which therefore increases your I/O reads/writes to disk. CPU activity can be reduced by careful program design, and by using commands such as SUM (SQL) and COLLECT (ABAP/4).
    • Avoid 'SELECT *', especially in tables that have a lot of fields. Use SELECT A B C INTO instead, so that fields are only read if they are used. This can make a very big difference.
    • Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging space, rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing large lists (e.g. over 50,000 records). If you have large lists, you should work with the system administrator to decide the maximum amount of RAM your program should use, and from that, calculate how much space your lists will use. Then you can decide whether to write the data to memory or swap space. See the Fieldgroups ABAP example.
    • Use as many table keys as possible in the WHERE part of your SELECT statements.
    • Whenever possible, design the program to access a relatively constant number of records (for instance, if you only access the transactions for one month, then there probably will be a reasonable range, like 1200-1800, for the number of transactions input within that month). Then use a SELECT A B C INTO TABLE ITAB statement.
    • Get a good idea of how many records you will be accessing. Log into your productive system, and use SE80 -> Dictionary Objects (press Edit), enter the table name you want to see, and press Display. Go to Utilities -> Table Contents to query the table contents and see the number of records. This is extremely useful in optimizing a program's memory allocation.
    • Try to make the user interface such that the program gradually unfolds more information to the user, rather than giving a huge list of information all at once.
    • Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to be accessing. If the number of records exceeds NUM_RECS, the data will be kept in swap space (not memory).
    • Use SELECT A B C INTO TABLE ITAB whenever possible. This will read all of the records into the itab in one operation, rather than the repeated operations that result from a SELECT A B C INTO ITAB... ENDSELECT statement. Make sure that ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access.
    • If the number of records you are reading is constantly growing, you may be able to break it into chunks of relatively constant size. For instance, if you have to read all records from 1991 to present, you can break it into quarters, and read all records one quarter at a time. This will reduce I/O operations. Test extensively with GET RUN TIME when using this method.
    • Know how to use the COLLECT command. It can be very efficient.
    • Use the SELECT SINGLE command whenever possible.
    • Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by calculating a total that has already been calculated and stored.
    These are good websites which will help you:
    Performance tuning
    http://www.sapbrainsonline.com/ARTICLES/TECHNICAL/optimization/optimization.html
    http://www.geocities.com/SiliconValley/Grid/4858/sap/ABAPCode/Optimize.htm
    http://www.abapmaster.com/cgi-bin/SAP-ABAP-performance-tuning.cgi
    http://abapcode.blogspot.com/2007/05/abap-performance-factor.html
    Cheers!
    Gyanaraj
    Please reward points if you find this helpful.

  • Performance Optimization - Evaluation & Optimization techniques

    Hello,
    Does something like this exist: methods/best practices for evaluating or optimizing the performance of BPC NW?
    Thanks.

    Hi Zack,
    Please check the [Performance Analysis and Tuning Guide|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e08c2aa2-6c58-2e10-3588-e6ed2e7c04f8?QuickLink=index&overridelayout=true] and also [Improve your Reporting Performance|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e085456b-2685-2e10-2fa4-dfb1a49243ec?QuickLink=index&overridelayout=true] guide.
    You can also go through the BPC 7.5 Admin and Installation Guides for Optimization techniques.
    Hope it helps.
    Regards,
    Raghu

  • Query Optimization Techniques

    Dear All,
    Kindly list how a query can be optimized in terms of using indexes, joins, predicates, cursors, inline views, scalar functions, etc.
    In short, I would like people to list query optimization techniques.
    Regards,
    Hassan

    Hi,
    The MetaLink note 398838.1, "FAQ: Query Tuning Frequently Asked Questions", is a very good resource for this.
    Regards,
    S.K.

  • Why no Motion Blur when animating a camera?

    I think I'm in trouble. I'm finishing a project where a logo (let's call it a box) was built by creating all the sides using solids, then making these solid layers 3D. I then move the CAMERA around the box. That seemed to be a lot easier than trying to move all the 3D layers in front of a camera.
    I cannot get any motion blur effect when the camera moves fast. I've tried numerous settings (and of course have the layers enabled for motion blur). Is it not working because the layers aren't in motion? Can I achieve motion blur by animating a camera around 3D layers?

    Thanks, but I do have motion blur on for the composition and the layers and the render queue. There is no "switch" to turn motion blur on for a camera layer, and of course no 3D switch for a camera because it's inherently 3D.
    I used various motion blur (advanced) composition settings, as well as adjusting the camera's aperture, f-stop, focal length, focal distance, etc. Can't seem to get any motion blur effect. The camera occasionally whip-pans, slows down, speeds up - so it's certainly moving fast enough.
    Just wondering if I should have created a Null layer, parented the camera to it, and animated the Null instead, because a Null does have switches for 3D and motion blur.
    Why wouldn't motion blur show when animating a camera around an object?

  • Graphical artifacts when animating scrollRect property in ENTER_FRAME handler?

    I tried to animate a drop down menu by assigning a rectangle to the scrollRect property of the menu.
    The rectangle's width is fixed, and the height of the rectangle is animated from zero to the final height of the menu over a 250ms interval.
    The alterations to the scrollRect originate from a master handler for ENTER_FRAME, which runs a series of registered task functions, one of which is my menu animation function.
    As the menu drops down, various areas of the child DisplayObjects, including the background, fail to draw completely.  If I resize the stage at all, then everything is redrawn properly.
    Are there any known problems with animating the scrollRect property when:
    altering a scrollRect from within an ENTER_FRAME handler
    when child objects also have a scrollRect set?
    The exact regions that fail to draw are inconsistent each time the menu drops down, but they occur in the standalone player as well as in web browsers such as Firefox and Internet Explorer.
    See video here: glitches2.mp4 - Google Drive
    It is clearly doing something wrong, as evidenced by the redraw regions at the end of the animation (see image). At the end of the animation, the scrollRect is set to null. The bottom of the menu is still not updated, and neither is the drop shadow filter on the menu.
    Update: The artifacts go away if I set cacheAsBitmap to true: glitchesRemoved.mp4 - Google Drive
    Interestingly, they also go away if I remove the dropShadow filter from the background child control and leave cacheAsBitmap off.
    Anyway, this problem seems to manifest itself specifically when animating the scrollRect property of an object with cacheAsBitmap set to false that contains a child control with a filter applied.
    Here is a final video with everything working.  The drop shadow filter is on the object itself, rather than the background, which sets cacheAsBitmap to true as a side effect:
    glitchesFixed.mp4 - Google Drive

    I'm just animating the scrollRect property to achieve a simple clipping effect that reveals the menu. Once per frame, it updates the scrollRect property, assigning a new rectangle with a height of "p * height", where p is the percentage through the animation (0 to 1). There is nothing, otherwise, happening to the DisplayObject during the animation of that property. The DisplayObject's height is not changing, nor are any other properties changing (besides scrollRect), so it's essentially static while it's being revealed. When the scrollRect's size is increased frame-by-frame, Flash Player is simply failing to render the entire clipped area, almost as though there is a frame lag such that it's rendering the clipped area from the last frame, rather than the new area exposed by the latest value of scrollRect. When I simply turn on cacheAsBitmap, it renders correctly and completely. The only "interesting" thing about the menu would be the DropShadowFilter applied to a child, so my best guess is that there's some sort of bug in Flash Player's rendering system involving scrollRects whose size changes when one or more child objects have filters applied (just a guess from what I'm seeing). To fix it, I moved the DropShadowFilter off the child object and onto the menu itself, which kills two birds by forcing cacheAsBitmap on (removes the artifacts) and also ensures the drop shadow is applied to the clipped area rather than being clipped by the scrollRect. I'd imagine I could replicate this easily in a simple FLA that I could post, so I'll try that.
    As far as the "sophisticated" nature of the framework, I was primarily referring to its layout engine, in the sense that it's non-trivial, finely-tuned, and powerful compared to existing offerings. There are no Stage3D effects; I ruled out using that years ago when I was seeing graphical artifacts throughout the display list whenever any 3D effects were applied (even setting DisplayObject.z to zero), because it seemed to change something about the overall rendering mode.  For example, I took these screenshots in 2010 and posted them to the bugbase:
    What started off as a slight offset that appeared when 3D rendering was active, turned into very obvious glitches at the edges of objects, as revealed in the latter two images; the random speckles of color and the yellow lines at the bottom and right edges of the button are not supposed to be there. (The striped lines in the overall background are supposed to be there; they are part of a "circle splash" shader filter that collapsed in from the edges of the screen making it look like the background is being eaten by the event horizon of a black hole, haha.)
    Anyway, the MenuTest.mp4 video I posted is actually using off-screen rendering to BitmapData via DisplayObject.draw, along with the application of a blur and tinting for the glass effect.  The real-time ripple effect is rendered with an animated ShaderFilter I wrote with PixelBender.
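    For reference, here is a stripped-down ActionScript 3 sketch of the pattern described in this thread (the menu sprite, its size, and the duration are stand-ins, not taken from the actual project); setting cacheAsBitmap = true is the workaround that removed the artifacts:
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.geom.Rectangle;
    import flash.utils.getTimer;
    // Hypothetical stand-in for the real menu: a plain 200x150 sprite.
    var menuWidth:Number = 200, menuHeight:Number = 150;
    var menu:Sprite = new Sprite();
    menu.graphics.beginFill(0x333333);
    menu.graphics.drawRect(0, 0, menuWidth, menuHeight);
    menu.graphics.endFill();
    menu.cacheAsBitmap = true;   // workaround: without this, parts of the clipped area failed to redraw
    addChild(menu);
    var duration:int = 250;      // ms, as in the original animation
    var startTime:int = getTimer();
    addEventListener(Event.ENTER_FRAME, animateMenu);
    function animateMenu(e:Event):void {
        var p:Number = Math.min((getTimer() - startTime) / duration, 1);
        // Reveal the menu by growing the scrollRect height from 0 to the full height.
        menu.scrollRect = new Rectangle(0, 0, menuWidth, p * menuHeight);
        if (p >= 1) {
            menu.scrollRect = null;   // finished: drop the clip, as in the original post
            removeEventListener(Event.ENTER_FRAME, animateMenu);
        }
    }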

  • Rotoscoping techniques: Frame Animation vs Video Timeline

    Hi. I am using Photoshop CS6 and CC.
    I am looking for an easy workflow for rotoscoping a video (for the purpose of drawing on it to create an animated look)
    and came across two different techniques for achieving the same purpose:
    1. Frame Animation, as explained here
    2. Video Timeline, as explained here
    It seems to me the first technique is much more error-prone.
    I am a bit confused about the proper use of these two features and when to use which.
    As I see it, Frame Animation is for:
    1. making GIFs from your current layer stack
    2. scanning a hand-drawn element that is supposed to appear in a stop-motion fashion.
    And Video Timeline is for:
    1. real editing with layers moving at different points in time (though I really can't see why you would do that in Photoshop instead of AE or Premiere)
    2. cel animation with onion skin
    3. making a blank video layer over a real one to create a live drawing.
    Any insights will be greatly appreciated.

    Thanks for the reply, JJ.
    I think I have a better idea now of what each tool is all about,
    so allow me to sum it up for anyone who looks at this, and for me too.
    This is a clarification of these different tools, especially as related to rotoscoping, but not only that.
    Contributions would be appreciated.
    Frame Animation
    Useful for controlling each specific frame in a sequence of animated frames.
    In rotoscoping this is useful for instances where you want to work with many layers,
    preserving the ability to change your work, with different layers containing different data,
    for instance: hair, face, legs, etc., like the following example here.
    Also efficient for making GIFs, like here.
    Advantages:
    1. more layers, more control
    2. easy to make GIFs by changing each layer's transformations, like here
    or here, which would be cumbersome to do in Video Timeline
    Limitations:
    1. the layout requires a bit of getting used to; for instance, you set seconds for each frame rather than an overall frame rate, so you have to select all of the frames to change them globally (there is a panel option for that, though),
    and also each frame contains all of the layers, with only the specific ones visible and positioned differently (like layer comps).
    If you are not careful you could make a mess of things by changing the visibility of the wrong layers, or worse,
    so order, color labeling and grouping are paramount. An example of a good workflow is here.
    2. you have to make your own shortcuts, since next frame/previous frame/first frame don't have shortcuts.
    3. there is no audio
    Video Timeline
    Useful for "classic" video editing: you get a timeline with layers
    that you can trim and make transitions on.
    Great for drawing on video or classic drawn cel animation.
    You can also insert a blank video layer and draw on it,
    as explained here.
    Advantages:
    1. familiar timeline structure, easy to understand and not mess up; you can change the frame rate and work like a real video workflow.
    2. great for rotoscoping on video: you can insert a blank video layer and draw frame by frame on the same layer, making it easy.
    3. great for cel animation because onion skin features are an option
    4. keyboard shortcuts are a click away by changing the panel options to "enable timeline shortcut keys"
    5. there is audio
    Limitations:
    1. not as efficient as Frame Animation for GIF animations or animations that require a lot of layers or different kinds of layers.

  • Acrobat 9 Pro - PDF Optimizer fails when selecting overwrite file during save process

    Hello Everyone,
    I am wondering if Acrobat 9 Pro has changed the way it handles the PDF Optimizer when trying to overwrite the file you are attempting to optimize. In Acrobat 8 Professional, it overwrites the file just fine when it gets to the save process. It does work when you modify the name during the save process; however, it would be nice to know whether this is "the way it works" or the program has an issue.
    Thoughts?
    Message received when trying to overwrite the file,
    Conversion Warnings > "An error was encountered while saving the document."
    Logically, it seems as though the application cannot save over an "opened" file thus dumping that message on me.
    Regards. D.

    > When using Save As to save a PDF, is it possible to have Commenting and Analysis enabled for that saved PDF by default?
    No.

  • Optimization techniques: cacheAsBitmap

    I read that cacheAsBitmap is advantageous when used on display objects. I have a bitmap, not a vector, in my game which I convert to a movie clip - a hero ship, for example. Do I use cacheAsBitmap on that too, even though it is already a bitmap (well, PNG)? I also read that scaling and rotating when using cacheAsBitmap is OK.
    edit:
    http://forums.adobe.com/thread/758774
    Just read this post. The information I could glean was:
    a. cacheAsBitmapMatrix is needed or preferred if you want to rotate and scale MCs
    b. You DO use cacheAsBitmapMatrix even if the MC is a bitmap (PNG)
    c. Even static background images should be cached
    However, at the end of the post it says that if you are using a large background then just add it from the library as a bitmap:
    var myLibraryBitmap:Bitmap = new Bitmap(new LibraryBitmapSymbol());
    Nothing to cache as Bitmap, also no memory overhead of your movieclip
    It would be incredibly useful if an expert could confirm the above, as everybody making games should be optimizing properly.

    Everything mentioned is true but I'd add a few notes.
    cacheAsBitmapMatrix is good when you're not literally rotating/scaling/etc the object constantly. It's for an object that is merely adjusting x/y properties most of the time but may occasionally rotate or scale. If it's a ship of some sort that is constantly on the move I wouldn't even bother with cacheAsBitmapMatrix because it's just going to re-draw the clip constantly anyhow. 
    Static backgrounds / bitmaps (buttons, graphics, etc) should always be cached to keep the redraw down.
    Huge backgrounds should use a blitting technique to keep the display list simplified. Bitmaps for backgrounds will indeed remove a little overhead. As important, backgrounds and all other non-interactive objects should also have
    mouseChildren=false set so events don't phase through them. Every single object that has no interactive purpose should set this to drastically reduce events.
    Lastly keep in mind that cacheAsBitmap is a toggle and works best when you have several objects in a single clip. Caching a single object inside a clip isn't really a big advantage unless it's a vector. But as you animate, if you know a complex object isn't going to change for a while you can enable cacheAsBitmap. Then when the object is going to transform, simply turn it off until you're finished and then re-enable it, like a toggle. 
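    To make the toggle idea concrete, here is a small ActionScript 3 sketch (the background and complexClip instance names are hypothetical stand-ins for clips already placed on the stage, not from the thread):
    import flash.display.MovieClip;
    // Hypothetical instances already on the stage.
    var background:MovieClip = getChildByName("background") as MovieClip;
    var complexClip:MovieClip = getChildByName("complexClip") as MovieClip;
    // Static, non-interactive art: cache it and stop mouse events reaching its children.
    background.cacheAsBitmap = true;
    background.mouseEnabled = false;
    background.mouseChildren = false;
    // "complexClip" stands for a clip with several children that stays unchanged
    // for stretches of time: keep the cache on while it only moves in x/y,
    // switch it off while it is actively rotating or scaling, then re-enable it.
    complexClip.cacheAsBitmap = true;
    function beginTransform():void { complexClip.cacheAsBitmap = false; }
    function endTransform():void   { complexClip.cacheAsBitmap = true; }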

  • Phenomenal optimization technique!

    I just discovered an amazing way of optimizing a script, which I thought
    I'd share.
    I have a script that adds line numbers to an InDesign document
    (www.freelancebookdesign.com under the scripting tab).
    It works by adding a text frame alongside each InDesign frame and adds
    numbers in that frame.
    I've done quite a lot of optimization on it already, and the script
    starts at a very nice pace, but it soon slows down.
    So on a book with 100 pages, it's pretty quick. But adding line numbers
    to a 500-page book becomes very slow, because by the last 1/3 or so of
    pages the script has slowed to a crawl.
    Now, many of you will probably recognize the symptoms: with each page, a
    new text frame + contents has been created, so after 200 or so
    operations, the undo queue has become very long.
    The question then becomes: how to flush the undo queue.
    Now, I remember reading once a suggestion to do a "save as". Thing is, I
    don't want to "save as" the user's document -- they won't thank me if
    they need to undo a few steps before they ran the script!
    Of course, the script already uses a doScript call with
    UndoModes.ENTIRE_SCRIPT so it's all a single step. And we know that
    FAST_ENTIRE_SCRIPT isn't safe to use -- it's quite buggy.
    What I figured out, and am quite proud of, is to break up the loop
    that goes through those 500 pages into 10 loops of around 50 pages each
    -- and run each loop with a separate doScript (ENTIRE_SCRIPT) call. So
    we have a nested doScript.
    The thing about UndoModes.ENTIRE_SCRIPT seems to be that the undo queue
    is still written to, and when the doScript call ends, they are all
    deleted and turned into one step. So each time a doScript call finishes,
    even if your call involved a thousand steps, they will all be reduced to
    a single undo step when it finishes -- and this is the equivalent of a
    "save as".
    And since it seems to take exponentially longer to execute a command the
    longer the undo queue is, by dividing the queue into 10 chunks of 50
    (instead of a single chunk of 500), a huge amount of time is saved.
    Every 50 iterations, the undo queue is flushed, and the script therefore
    continues at the same pace as when it was first run. (Obviously, if
    there are thousands of iterations, it is probably a good idea to add
    another nested doScript call).
    So, case in point: experiments with a 500-page book have shown roughly a 3.6x speedup. What used to take 288 seconds now takes 80 seconds!
    I'm pretty impressed!
    Do you have a better way of dealing with undo slowness?
    Ariel

    Thanks. @Pickory: Yes, a nested doScript.
    Here's a test script. The script creates a new document and adds 1000
    pages, each with a text frame on it. It does this twice: First time,
    with a single doScript call, second time with a nested doScript (ie 10 x
    100 pages).
    The results I get are 48 seconds for the first run, 31 seconds for
    the second -- only 2/3 of the time it takes the first loop!
    And this is for a relatively simple operation: The more the script does,
    the more the advantage is noticeable (as I mentioned, in my Line Number
    script, it took 1/4 of the time for a long document!).
    Ariel
    // TEST 1: Single doScript to create 1000 pages, each with a text frame
    var myDoc = app.documents.add();
    alert("Starting Test 1");
    $.hiresTimer; // reading the timer resets it
    app.doScript(main, undefined, undefined, UndoModes.ENTIRE_SCRIPT, "test");
    alert("Single doScript call took " + $.hiresTimer/1000000 + " seconds");
    myDoc.close(SaveOptions.NO);
    function main(){
        for (var i = 0; i < 1000; i++){
            myPage = myDoc.pages.add();
            myPage.textFrames.add();
        }
    }
    // TEST 2: Nested doScript to create 1000 pages, each with a text frame
    myDoc = app.documents.add();
    alert("Starting Test 2");
    $.hiresTimer; // reset the timer again
    app.doScript(main2, undefined, undefined, UndoModes.ENTIRE_SCRIPT, "test 2");
    alert("Nested doScript version took " + $.hiresTimer/1000000 + " seconds.");
    function main2(){
        for (var i = 0; i < 10; i++){
            // each inner doScript collapses its ~100 steps into one undo step,
            // which flushes the undo queue before the next chunk starts
            app.doScript(nestedDoScript, undefined, undefined,
                UndoModes.ENTIRE_SCRIPT, "The user will never see this");
        }
    }
    function nestedDoScript(){
        for (var j = 0; j < 100; j++){
            myPage = myDoc.pages.add();
            myPage.textFrames.add();
        }
    }

  • Optimizer Crashes when downsampling images below 200dpi

    Hi,
    I have a number of large PDFs that I need to optimize. They are all currently between 80 MB and 170 MB. I am trying to use the PDF Optimizer tool, but it just keeps crashing right after it says "Optimizing Images". I have tried playing around with different settings, and the only thing that allows the optimizer to run with bicubic downsampling is setting the "Color Images" section of the optimizer to at least 200 dpi... While this does greatly reduce the file size, because these are to be used on the web, I really don't need more than 72 dpi. Can anyone help me understand why going under 200 dpi is crashing Acrobat, or provide some alternatives?
    It will go lower than 200 dpi if I choose Average Downsampling instead of Bicubic.
    I really don't know what the difference is between the different kinds of downsampling (Bicubic, Average Downsampling, Subsampling...). Is there a reason to use some over others?
    Any help is appreciated.
    Thanks!

    To figure out the differences, it might be worth creating a PDF from a TIFF file, saving it, then optimizing it to different names using the various techniques. What is suggested by the names are various curve-fitting techniques: an average within a cell, a double quadratic fit to adjacent points, etc. The performance likely depends on the kind of graphic. A line drawing may do better under one form, while a landscape might be better with another. You might check the manual and see what it has to say about the differences from an end-result viewpoint. Unfortunately I am not at the right machine to check right now.

  • Playback control changes when animation is inserted into another movie

    Hi,
    I have created a couple of movies and used a specific playback control. When these movies (saved in SWF format) are run separately, everything is fine and the appropriate playback control is shown. However, when I insert these animations (SWF) into another project, the playback controls of these animations are different! Any idea?

    Hi,
    The problem still exists even if I re-create the child movies (SWF version 8) with a different playback control.
    I removed the playback control from the child movies and then re-imported them into the master movie. The master movie was then published to EXE and it is fine (the child movies no longer show the playback control).
    Then I re-inserted the playback control into the child movies, re-imported them into the master, and published the master to EXE. And the problem is back: the child movies are showing a different playback control (the same as dpierre described - the skin used is dark grey, aligned top left, and the buttons don't actually work).
    Is there a way to clear any cache that exists in the files (if the problem is the cache)? I have a huge project with 20-25 child movies to import into the master movie. It looks like a bug?

  • Is there a known bug when animating images with edge and dps?

    Hi, I've seen some oddities when using Edge to create HTML5 elements for InDesign. Essentially I have spent hours trying to get an image to scroll from left to right. It works fine in a browser and worked fine in DPS and on the iPad yesterday. Using the same setup: the stage in Edge is 768x1024, I animate a large image on the x axis from 0 to -500 over 6 seconds, and I apply an ease in and out. I then save the HTML and publish for DPS. Within DPS I go to File > Place and select the published file that Edge generates for DPS. I then update and preview either on the desktop or on the iPad; the file loads but the animation does not play.
    Weirdly, if I follow the exact same steps but add some other bits of animation, i.e. drawing a square and moving it, the animation plays. If I open an existing file, i.e. your built-in tutorial files, delete the content and redo the image animation, it works. It's only in a new project that it doesn't preview or play on the iPad.
    Another issue I have is that when it does load, the animation doesn't load in that well: it shows most of the bottom and then, after about 0.5 seconds, the rest of it loads, so it's a bit scrappy. Any suggestions?

    Thanks,
    To give you a bit more info, we used my boss's computer and he was able to recreate the same issue with the animation not loading. However, if we change the process by using the rectangular frame tool with the Web Content folio overlay and inserting the HTMLResources.zip, it works. However, we noticed that the animation did not run smoothly: it was very jerky, like the iPad was struggling. I've now corrected this by changing the properties of the image within Edge from div to img. This seems to have worked nicely.
    So there are some weird things going on, mostly around the publishing function within Edge.
    Many thanks

  • Full optimize Failed when scheduled

    Hi everyone,
    I have an issue with a full optimize package in SAP BPC 7.0 MS when it is scheduled.
    The admin_Optimize package I use is the standard one. There are no overlapping packages scheduled at the same time.
    When I manually run the package for a Full optimize on one application, it works.
    When I manually run the package for a Lite optimize on one application, it works.
    When I schedule the package for a Lite optimize on one application, it works.
    When I schedule the package for a Full optimize on one application, the package fails and stays stuck. Then the application stays unavailable.
    This issue takes place in the production environment, so I can't run a test during the day, and some people use the application very early in the morning, so they are stuck in the morning.
    Have you ever met this issue? What can I do? What could be the test?
    Best regards,
    Paul

    When I start the full optimize by hand it works fine (but we are supposed to launch it at 4:15 in the morning, so I would prefer to schedule it!).
    Are you receiving any error message, or does the status of the package show that it was not completed with no error message provided?
    The package stays stuck; it never ends, even after 4 hours. We are obliged to kill it.
    Did you check tbllogs during the time when the package is running? It can provide very useful information.
    Yes we did; nothing appears in this table. All the other packages are correctly recorded, even the LiteOptimize that runs 30 minutes after. By the way, that Lite optimize doesn't fail.
    Also please check the event viewer on the BPC servers during the period when the full optimize was running.
    No error related to BPC. But I had an error 45 minutes before, related to Osoft data manager.
