Limitation of BitmapData loaded into memory through Loader.load()

I am creating a photo gallery that uses Loader.load() to load
pictures into Flash Player 9. I want to preload the pictures in
advance, so that when users turn the page they see the picture on
the new page right away without waiting. The pics are therefore
loaded one after another automatically in the background, and when
a user requests a new page the bitmap objects are simply added to
that page's display list.
I published the code and tested it in IE 6 and Firefox 1.5. Only
about 50 pics can be preloaded in FF and about 70 pics in IE. I
have read some articles about a memory problem with Loader in Flash
8.5 alpha. So is it still a problem in the "Flash 9 ActionScript 3
Preview Alpha"? Check the article -
http://www.jessewarden.com/archives/..._battlefi.html
Please see my attached code for the preloading job. No error is
thrown.
Please advise. Thank you very much
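For what it's worth, the usual way around a per-player memory ceiling like this is to bound how many decoded pages are held at once instead of preloading everything. A sketch of the idea in Java (the AS3 code is attachment-only here): evict the least-recently-used page once a cap is hit; in AS3 the evicted entry is where you would call Loader.unload() / BitmapData.dispose().

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch, not the poster's actual code: keep only a bounded number of
// decoded "bitmaps" and evict the least-recently-used one. The eviction
// hook is where the AS3 version would free the evicted BitmapData.
public class PageCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxPages;

    public PageCache(int maxPages) {
        super(16, 0.75f, true); // access-order: the LRU page is evicted first
        this.maxPages = maxPages;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxPages; // free eldest.getValue() here before returning true
    }

    public static void main(String[] args) {
        PageCache<Integer, String> cache = new PageCache<>(3);
        for (int page = 1; page <= 5; page++) cache.put(page, "bitmap-" + page);
        System.out.println(cache.keySet()); // only the 3 most recent pages remain
    }
}
```

With a cap of, say, the current page plus a few neighbours, turning a page blocks only when its bitmap was evicted, and total memory stays flat regardless of gallery size.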

Hi,
if it's an Interactive Report, the limit on the number of columns you can show at the same time is 100;
check http://docs.oracle.com/cd/E17556_01/doc/user.40/e15517/limits.htm for more limitations.
Regards
Bas

Similar Messages

  • How do I find out the exact path of each and every file that LabVIEW finds and loads into memory for a given top level vi?

    How do I find out the exact path of each and every file that LabVIEW finds and loads into memory for a given top level vi? There is probably a trivial, easy way to get this info, but I have not yet found it!  Thanks..

    Or, if you want to grab all the paths programmatically, try the attached VI.
    Open the top-level VI that you want all the paths from and close all others, then open the
    attached VI and run it. It will return an array of all the VIs that the VI
    in question uses, including vi.lib VIs. You can filter these as well if
    you like.
    Ed
    Message Edited by Ed Dickens on 08-01-2005 07:01 PM
    Ed Dickens - Certified LabVIEW Architect - DISTek Integration, Inc. - NI Certified Alliance Partner
    Using the Abort button to stop your VI is like using a tree to stop your car. It works, but there may be consequences.
    Attachments:
    Get all paths.vi (29 KB)

  • Why does the entire rt.jar get loaded into memory?

    We're developing under JDK 1.4-beta2 and are close to releasing our application for in-house use. One thing we noticed is that the entire rt.jar and other jars get loaded into memory, totalling ~45 MB!! I'd be amazed if this is 'normal' behaviour. Are we missing some tweak or option that is grossly obvious or well known?
    Even a simple Hello World test program loads in the entire rt.jar.
    HELP!!

    cross post
    http://forum.java.sun.com/thread.jsp?thread=184491&forum=37&message=588369

  • Help! Help! Help! Why are DPV and LACS not loading into memory?

    Postalsoft Desktop Mailer - "load into memory" is checked, but it does not load into memory.
    I tried it unchecked and it runs at the same speed.
    How do I fix this problem?

    Tim,
    Please make sure to post any "classic Firstlogic software" questions under the Business Objects Enterprise Information Management (EIM) forum and you will get a much faster response.  Also you can log a case for support by going to the Help and Support tab on the SAP website, then click on Report A Product Error to log a case for support.  Just make sure to choose BOJ-EIM-COM for the component when logging a case for DeskTop Mailer or Business Edition. 
    Steve is correct.... we are aware of the slow speed issue with address correction when using DPV and LACS in DeskTop Mailer and Business Edition 7.90.  Rev 4 should be out by the end of this week.  This will correct the speed issue.  Please make sure to have the auto update option turned on in the software so you get this update.
    Thanks,
    Kendra

  • Why does LabVIEW sometimes hang when DLL loads into memory?

    I'm calling a third-party DLL from LabVIEW 2010.  LV occasionally hangs (Not Responding) either when loading the DLL into memory or when closing my main VI.  When it doesn't hang, it communicates with the DLL seamlessly.  When I try to build an application (exe), LV always hangs during the build at the point where it saves the main VI (the progress bar in the builder moves until it says "Saving main.vi").  Any insight into what needs to be done to the DLL (or VI) to resolve this issue?

    What does the DLL do? One cause of this could be a DLL that tries to load or unload other DLLs in PROCESS_ATTACH or PROCESS_DETACH of DllMain. Microsoft has said in many places that doing this is highly unsafe and asking for all kinds of trouble, since DLL loading is not fully reentrant.
    Another possibility is the incorporation of ActiveX components that use some form of RPC mechanism to communicate with out-of-process ActiveX/OLE components. The necessary RPC proxy hooks into the calling process's message loop, and that is a delicate piece of code in LabVIEW. Even when the DLL does not use ActiveX itself, it might employ some message hooking of its own and mess things up in a way that confuses Windows and/or LabVIEW.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Exporting from Motion Loads Into Memory

    I completed a slideshow in Motion, and am trying to export it out for a DVD.
    First I tried exporting from Motion to DV NTSC format and, after watching my memory and CPU, it appears that Motion loads the project into memory for the export.
    So I quit the process, quit Motion, loaded the Motion project file into Compressor, and submitted a job to Batch Monitor to compress my Motion project for DVD.
    The process is taking 5 hours for a 10-minute piece. The CPUs are running at around 10% and my memory is full. I was hoping Compressor would not load the project into memory and would instead use the CPU to render.
    I suspect my slowdown is from the project paging in and out of memory during compression?
    Does anyone know how to force Compressor to use the CPU for rendering and not load a Motion project file into memory?
    Thank you.

    I do slideshows regularly. I render out of Motion to DV. Clean, fast, slick.
    I pull that DV movie file into DVDSP (if I'm going that direction) and let the defaults do their job. Clean, fast, predictable results. Your output efficiency depends on your image sizes, effects and nesting, so I carefully reduce the size of my stills, preplan nests, and prerender/reimport where possible.
    I don't see where you are having a problem unless you are simply misinformed. You must render out of Motion, or you must render the .motn project file from within another app like FCP, but it's going to be processed in exactly the same way as if you had rendered out of Motion.
    You CAN set up Qmaster to run batches using all of your Mac's cores as separate rendering engines, but that usually gains you nothing in Motion projects, since you only have one graphics card. And Qmaster, despite a few success stories on that forum, remains a cruel joke.
    bogiesan

  • Unable to load clob data through sql loader

    Hi Experts ,
    My ctl file is:

        LOAD DATA infile '$di_top/conversion/devtrack_notes.csv'
        truncate into table xxdi_proj
        fields terminated by ','
        optionally enclosed by '"'
        trailing nullcols (bugid, note clob)

    The problem is that the note column is a CLOB, and one of the
    values contains line breaks, like this:

        Hi Sir,
        Would you please inform when the reports are scheduled for automatic processing?
        Maria will stop his process to avoid duplication.
        Please inform asap
        With Regards,
        Ronaldinho

    When the data gets loaded, the first column gets the sentence 'Would you please inform...' - i.e. the data of the second column spills into the first column of the next record, because the embedded newlines are recognized as record terminators.
    How can I overcome this problem?
    Thanks

    Please post your exact OS and database versions, along with your complete sqlldr command, the table description and a sample of your input csv file.
    HTH
    Srini
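    Until those details are posted, one common workaround for embedded newlines is worth sketching: regenerate the extract with an explicit record terminator and name it in the INFILE clause, so SQL*Loader stops treating every newline as end-of-record. The '|EOR|' marker and the CHAR size below are assumptions, not tested against this table:

```
LOAD DATA
INFILE 'devtrack_notes.csv' "str '|EOR|'"
TRUNCATE INTO TABLE xxdi_proj
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(bugid,
 note CHAR(100000))
```

    With "str", a record ends only at the literal '|EOR|' marker, so the note value can safely contain line breaks, and CHAR(100000) lets SQL*Loader move a large inline value into the CLOB column.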

  • Delete and full load into ODS versus Full load into ODS

    Can someone please advise whether, for a DSO populated with the overwrite function, the FULL LOAD in the following two options (1 and 2) would take the same amount of time to execute, or whether one would take less time than the other. Please also explain the rationale behind your answer.
    1. Delete of data target contents of ODS A, followed by FULL LOAD into ODS A, followed by activation of the ODS.
    2. Plain FULL LOAD into ODS A (i.e. without the preceding delete-data-target-contents step), followed by activation of the ODS.
    Repeating the question - will the FULL LOAD in 1 take the same time as the FULL LOAD in 2, and if one takes less time than the other, why?
    Context: normal 3.5 ODS with separate change log, new and active record tables.

    Hi,
    I think the main difference is that with "delete and full load" the DSO will end up containing only the data from that full load. With a plain full load, the system uses the modify logic: overwrite for all datasets already in the DSO, and insert for all new datasets which are not yet in the DSO.
    Regards
    Juergen

  • Issue with batch load into bpel through MQ    Urgent

    Hi Guys,
    I am stuck with an issue and need your help and suggestions.
    I am working on a project with a synchronous BPEL process in which an MQ adapter picks up messages from a source queue populated by an external application. This BPEL process calls another BPEL process which makes a synchronous call to an external webservice that updates the payload; BPEL then puts that payload on an output queue.
    The input queue is continuously populated with a lot of messages - around 100k messages in 1 hr (in 2 or 3 bursts) - and the MQ adapter picks up all the messages, even 100k in a second, and hands them to BPEL.
    Everything runs fine while the external webservice is up and running, or when it is completely down. The issue is that sometimes the webservice is up but unresponsive for a while (it does garbage collection or a dump or something we have no control over); then all the BPEL instances sit in running and pending state until the response arrives. Meanwhile the MQ adapter keeps polling the source queue and feeding BPEL, so the number of pending requests keeps growing and SOA becomes unstable. Eventually, when the external service is back, all these instances complete, but it takes a long time.
    ---> One solution that came to me was to block the MQ adapter from sending messages to BPEL while the external service is doing garbage collection, but we don't seem to have any control of that sort over the MQ adapter. If there is any, please suggest.
    There are two properties for the MQ adapter -- inboundThreadCount and DelayBetweenMessages -- which could be tried, but they would affect the overall throughput, which is not desired.
    ---> Another idea was to make the call to the external service asynchronous, but we cannot do that: the external service is a synchronous process and we cannot call it asynchronously.
    ---> The timeout value for the external webservice is 10 sec, but the issue is that when it is busy doing other work it does not take the input from BPEL at all, so no timeout fault is raised. We also do not get a remote fault, since the service is reachable and active.
    ---> There is one other property, SyncMaxWaitTime, in the Oracle BPEL properties in the enterprise console, which sets the maximum time an instance waits for the response before timing out at BPEL level; that timeout can be handled in BPEL and the flow continued. The default value is 45 sec, and decreasing it may give good results, but setting it at domain level affects all the composites (and I don't think the client will be OK with that).
    (If there is any such property which we can set at composite level, please suggest.)
    (One thing is we do not have control over the external webservice or the application which puts messages into the source queue.)
    --> Another thought was to notify prod support when the pending requests get high, so they can shut off the first BPEL process and we stop accepting messages from the queue, keeping SOA stable. But that doesn't seem feasible: after an e-mail reaches support they would need to shut it off instantly, otherwise the same issue occurs, and they would have to be available all the time to monitor it.
    Please suggest your solutions, guys... it will be very helpful for me... thanks all


  • Why does the Flash container service load into memory whenever I manually clear all history? It's always set to 'ask before activate'.

    I am running Windows 7 SP-1 Ultimate and I always keep tabs on my processes with Task Manager. It gets very old always having to stop that process tree after manually clearing Firefox history from within the browser. Is this a glitch, or intended? Thank you for any answer.

    I am not certain, but I believe it is likely due to the fact that Flash Player use results in Local Shared Objects and possibly other cached items, so the service needs to start in order to clear them.
    I know a couple of years ago there was a glitch* in the process, and plugin-container opened unnecessarily multiple times and stayed open for the session.
    * <sub> Bug 633427 - Clearing cookies launches instance of plugin-container for each plugin installed </sub>
    As an aside, you can choose not to save History, or to use Private Browsing on those occasions when it is important not to save the History. There should be no need to clear History manually to improve Firefox performance.
    * [[Private Browsing - Browse the web without saving information about the sites you visit]]

  • Extract Data from XML and Load into table using SQL*Loader

    Hi All,
    We have an XML file (sample.xml) which contains credit-card transaction information. We have a standard SQL*Loader control file which loads data from a flat file; the control file is written using the position-based method. Our requirement is to reuse this control file (i.e. load the data into the table from our XML file), but we need help converting the XML to a flat file, or extracting the data from the XML tags and passing it to the control file so that it loads the table.
    Your suggestion is highly appreciated.
    Thanks in advance
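    One way to do the XML-to-flat-file step is a small StAX converter. This is a sketch under assumed element names (<txn>, <card> and <amount> are made up - substitute the real tags from sample.xml): it flattens each record element into one comma-separated line that the existing control file can load.

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

// Sketch: stream the XML and emit one CSV row per <txn> record element.
// Element names are assumptions; adapt them to the real sample.xml tags.
public class XmlToCsv {
    public static String flatten(String xml) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        StringBuilder csv = new StringBuilder();
        StringBuilder row = new StringBuilder();
        String text = "";
        while (r.hasNext()) {
            int ev = r.next();
            if (ev == XMLStreamConstants.CHARACTERS) {
                text = r.getText().trim();
            } else if (ev == XMLStreamConstants.END_ELEMENT) {
                if (r.getLocalName().equals("txn")) {   // one record per <txn>
                    csv.append(row).append('\n');
                    row.setLength(0);
                } else if (!text.isEmpty()) {           // one field per child element
                    if (row.length() > 0) row.append(',');
                    row.append(text);
                    text = "";
                }
            }
        }
        return csv.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<txns><txn><card>1234</card><amount>9.99</amount></txn></txns>";
        System.out.print(flatten(xml)); // 1234,9.99
    }
}
```

    The resulting flat file can then be fed to the existing control file unchanged (fields would need to be position-padded first if the control file is strictly positional).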

    Hi,
    First of all, go to PSA maintenance (where you will see the PSA records).
    Go to List ---> Save ---> File ---> Spreadsheet (choose the radio button),
    give the proper file name where you want to download, and then ---> Generate.
    You will get your PSA data in Excel format.
    Thanks
    Mayank

  • Loading time into memory for a large datastore ?

    Are there any analyses/statistics on what the loading time for a TimesTen data store should be, according to the size of the data store?
    We have a problem with one of our clients where loading the datastore into memory takes a long time, but only in certain instances. The maximum size for the data store is set to 8 GB (64-bit AIX with 45 GB physical memory). Is it something to do with transactions which were not committed?
    Also, is it advisable to have multiple smaller datastores or one single large datastore?

    When a TimesTen datastore is loaded into memory it has to go through the following steps. If the datastore was shut down (unloaded from memory) cleanly, then the recovery steps essentially are no-ops; if not then they may take a considerable time:
    1. Allocate appropriately sized shared memory segment from the O/S (on some O/S this can take a significant time if the segment is large)
    2. Read the most recent checkpoint file into the shared memory segment from disk. The time for this step depends on the size of the checkpoint file and the sustained read performance of the storage subsystem; a large datastore, slow disks or a lot of I/O contention on the disks can all slow down this step.
    3. Replay all outstanding transaction log files from the point corresponding to the checkpoint until the end of the log stream is reached, then roll back any still-open transactions. If there is a very large amount of log data to replay then this can take quite some time. This step is skipped if the datastore was shut down cleanly.
    4. Any indices that would have been modified during the log replay are dropped and rebuilt. If there are many indices, on large tables, that need to be rebuilt then this step can also take some time. This phase can be done in parallel (see the RecoveryThreads DSN attribute).
    Once these 4 steps have been done the datastore is usable, but if recovery had to be done then we will immediately take a checkpoint which will happen in the background.
    As you can see from the above there are several variables and so it is hard to give general metrics. For a clean restart (no recovery) then the time should be very close to size of datastore divided by disk sustained read rate.
    The best ways to minimise restart times are to (a) ensure that checkpoints are occurring frequently enough and (b) ensure that the datastore(s) are always shutdown cleanly before e.g. stopping the TimesTen main daemon or rebooting the machine.
    As to whether it is better to have multiple smaller stores or one large one - that depends on several factors.
    - A single large datastore may be more convenient for the application (since all the data is in one place). If the data is split across multiple datastores then transactions cannot span the datastores, and if cross-datastore queries/joins are needed they must be coded in the application.
    - Smaller datastores can be loaded/unloaded/recovered faster than larger datastores but the increased number of datastores could make system management more complex and/or error prone.
    - For very intensive workloads (especially write workloads) on large SMP machines overall better throughput and scalability will be seen from multiple small datastores compared to a single large datastore.
    I hope that helps.
    Chris
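    As a back-of-the-envelope check of the clean-restart rule above (time ≈ datastore size / sustained disk read rate), a sketch with illustrative, not measured, numbers:

```java
// Rough clean-restart estimate for a TimesTen datastore: checkpoint size
// divided by sustained disk read rate. Numbers are illustrative only.
public class RestartEstimate {
    static double restartSeconds(double sizeGB, double readMBps) {
        return sizeGB * 1024.0 / readMBps; // GB -> MB, then divide by MB/s
    }

    public static void main(String[] args) {
        // The 8 GB store from the question, on a disk sustaining 200 MB/s:
        System.out.printf("~%.0f s to reload%n", restartSeconds(8, 200));
    }
}
```

    So a clean reload of the 8 GB store should be on the order of a minute; restarts taking much longer than that point at log replay and index rebuild (steps 3 and 4), i.e. an unclean shutdown or infrequent checkpoints.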

  • Question to load the entire database into memory.

    I am planning to load the whole database into memory. Suppose mydb is 10 GB. Then I plan max memory for 10 GB, create a named cache of 10 GB and bind mydb to this cache. Is this the best way to load the entire db into memory?
    If the whole db can be loaded into memory, do the procedure cache, the cache for tempdb and all the other params no longer matter? Or do I still need to follow common practice when configuring the memory params?

    Hi Kent,
    12-15GB sounds reasonable.
    I recommend always including your version with your initial posting (unless the version simply doesn't apply to the question), particularly when running an unusual version - and 12.5.x has been end-of-lifed long enough to be unusual now.  Are you running SAP applications on this system?  If not, please post questions to the SAP Adaptive Server Enterprise (SAP ASE) for Custom Applications space instead; it is the space for general-purpose ASE questions.
    Cheers,
    -bret
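    For reference, the named-cache setup described in the question is usually done roughly like this (ASE syntax from memory, and 'mydb_cache' is a made-up name - verify against the system procedure docs for your version):

```
-- create a 10 GB named cache, then bind the database to it
sp_cacheconfig 'mydb_cache', '10G'
go
sp_bindcache 'mydb_cache', 'mydb'
go
```

    Note that the procedure cache and tempdb configuration are separate from named data caches, so the usual sizing practice for those still applies even when all the data fits in the named cache.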

  • Loading images into memory

    I have an applet that needs to load images into memory. For instance, let's say I am in the first section of the applet. While that section is being shown, I want to load the background image of the second section into memory.
    To load the initial image into memory, I am just using a MediaTracker. No problem there. But I don't want to load all 20 backgrounds into memory at the same time, as they take a lot of time. My understanding is that if I create a new MediaTracker while the first chapter is running, it will potentially cause some chaos, as it will stop my thread from running while an image is loading.
    Somebody told me perhaps I could create a new thread and have that thread load the background into memory? Perhaps something like this?
    import java.awt.Image;
    import java.awt.MediaTracker;
    import javax.swing.JApplet;

    public class TestClass extends JApplet {
         private TestClass thisClass;
         public void init() {
              thisClass = this;
              Runnable r = new Runnable() {
                   public void run() {
                        // Load the next background off the UI thread so the
                        // applet keeps running while the image downloads.
                        MediaTracker tracker = new MediaTracker(thisClass);
                        Image nextImage = getImage(getDocumentBase(), getImagePath() + "img1.jpg");
                        tracker.addImage(nextImage, 0);
                        try {
                             tracker.waitForID(0);
                        } catch (InterruptedException ie) {
                             System.out.println(ie.toString());
                        }
                   }
              };
              Thread t = new Thread(r);
              t.setDaemon(false);
              t.start();
              // Don't busy-wait on t.isAlive() or call the deprecated
              // t.stop()/t.destroy() here - that would block init() and
              // defeat the purpose; just let the thread finish on its own.
         }
    }
    No idea if I am on the right track or not? Another friend told me something about Swing helpers but couldn't tell me much more?
    Thanks in advance!

    I use MediaTracker when I need information about what percentage of the image has loaded. You could use a JLabel to load the images, since it has its own image observer built in.
    Hope that deals with it - the easiest way I can offer :)
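    For what it's worth, the "Swing helpers" the other friend mentioned are things like SwingWorker; the same idea can be sketched with a plain ExecutorService, which avoids the busy-wait entirely. The Supplier here stands in for the applet's getImage()/MediaTracker work - it is a placeholder, not the poster's loading code:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Supplier;

// Sketch: start the next background load on a single worker thread and never
// block the UI thread; call Future.get() only when the page is actually needed.
public class BackgroundPreloader<T> {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public Future<T> preload(Supplier<T> loader) {
        return worker.submit((Callable<T>) loader::get); // starts loading immediately
    }

    public void shutdown() {
        worker.shutdown();
    }

    public static void main(String[] args) throws Exception {
        BackgroundPreloader<String> p = new BackgroundPreloader<>();
        Future<String> next = p.preload(() -> "img1.jpg decoded"); // stand-in load
        // ... the UI keeps running; block only when the user turns the page:
        System.out.println(next.get()); // prints "img1.jpg decoded"
        p.shutdown();
    }
}
```

    If the image isn't ready when the page is turned, get() waits just for that one image; everything already loaded is returned instantly.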
