Batch sequence memory leak?

I wrote a JavaScript batch sequence and ran it in Acrobat 9 Pro on a folder of PDFs. Acrobat's memory use steadily increased until it hit 1.1 GB. Then an Acrobat dialog said no files were processed, memory use dropped back to normal, and CPU usage dropped to zero. So it looks like the problem includes a memory leak. As a test, I replaced my batch sequence with a simple one-line script:
var r = test;
It has the same behavior. Any ideas?

Thanks for your reply.
I have gone through the APIs you mentioned and understand that we can run any custom command or existing global command with them.
My requirement is to launch the sequence file I created. Is there any way to launch the sequence file directly, or do I need to parse the sequence file to find the command and parameters?
If I do need to process the sequence file, the command will always be "Recognize Text Using OCR"; only the parameters may differ. What command do I need to execute for this?
I am new to this environment, so please bear with me.
Thanks in advance.
Regards

Similar Messages

  • TestStand 2010 Memory Leak when calling sequence in New Thread or New Execution

    Version:  TestStand 4.5.0.310
    OS:  Windows XP
    Steps to reproduce:
    1) Unzip 2 attached sequences into this folder:  C:\New Thread Memory Leak
    2) Open "New Thread Memory Leak - Client" SEQ file in TestStand 2010
    3) Open Task Manager, click the Processes tab, sort A-Z (important), and highlight the "SeqEdit.exe" process. Note the memory usage.
    4) Be ready to click Terminate All in TestStand after you see the memory start jumping.
    5) Run the "New Thread Memory Leak - Client" sequence.
    6) After seeing the memory consumption increase rapidly in Task Manager, press Terminate All in TestStand.
    7) Right click the "While Loop - No Wait (New Thread)" step and set Run Mode » Skip
    8) Right click the "While Loop - No Wait (New Execution)" step and set Run Mode » Normal
    9) Repeat steps 3 through 6
    I've removed all steps from the While Loop to isolate the problem. I've also tried the other methods you'll see in the ZIP file, but all of them cause the memory leak (with the exception of the Message Popup).
    I have not installed the f1 patch, but none of the bug fixes listed appears to address this issue. NI Applications Engineering has been able to reproduce the issue (on Windows 7) and is working on it in parallel. That said, are we missing something?
    Any ideas?
    Certified LabVIEW Architect
    Wait for Flag / Set Flag
    Separate Views from Implementation for Strict Type Defs
    Attachments:
    New Thread Memory Leak.zip ‏14 KB

    Good point, Doug. In this case the parallel sequences are launched at the beginning of the sequential process model, but I'll keep that in mind for later. Takeaway: be intentional about when to wait at the end of the sequence for threads to complete.

  • DataSocket memory leak problem (2VO0SF00) -- more info?

    When upgrading to LabVIEW 8.5 recently, I noticed the following known issue in the readme file:
    "ID: 2VO0SF00
    DataSocket/OPC Leaks Memory using ActiveX VIs to perform open-write-close repeatedly
    If you call the DataSocket Open, DataSocket Write, and DataSocket Close functions in succession repeatedly, LabVIEW leaks memory. Workaround — To correct this problem, call the DataSocket Open function once, use the DataSocket Write function to write multiple times, and then use the DataSocket Close function."
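    Since LabVIEW diagrams can't be shown inline here, the leaky pattern and the workaround can be sketched in Java against a hypothetical DataSocket-like class. FakeSocket and the dstp:// URL are illustrative stand-ins, not a real NI API; the sketch only mirrors the wiring pattern.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a DataSocket-style connection (LabVIEW is
// graphical, so this only mirrors the wiring pattern, not a real NI API).
class FakeSocket {
    static final AtomicInteger opens = new AtomicInteger();

    static FakeSocket open(String url) {
        opens.incrementAndGet();          // each open costs native resources
        return new FakeSocket();
    }
    void write(double value) { /* send one value */ }
    void close() { /* release the connection */ }
}

public class DsPattern {
    // Leaky shape (example 1): open/write/close on every loop iteration.
    static void leaky(int n) {
        for (int i = 0; i < n; i++) {
            FakeSocket s = FakeSocket.open("dstp://max/x1a");
            s.write(i);
            s.close();
        }
    }

    // Workaround (example 2): open once, write many times, close once.
    static void fixed(int n) {
        FakeSocket s = FakeSocket.open("dstp://max/x1a");
        for (int i = 0; i < n; i++) {
            s.write(i);
        }
        s.close();
    }

    public static void main(String[] args) {
        leaky(1000);                      // 1000 opens
        fixed(1000);                      // 1 more open
        System.out.println("total opens: " + FakeSocket.opens.get());
    }
}
```

    In LabVIEW terms, the second shape corresponds to wiring DS Open before the loop, DS Write inside it, and DS Close after it.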
    Looking back, I think this problem may have been present in previous LabVIEW releases as well, and it might be giving rise to a problem that's been dogging me for quite some time (see my thread, "Error 66 with DataSockets", http://forums.ni.com/ni/board/message?board.id=170&thread.id=187206), in addition to general slow/glitchy behaviour when my VIs have been running continuously for a long time. But in order to determine whether this issue affects me, and how I should go about fixing it in my own programs, I need a bit more information about the nature of the issue and the inner workings of the DataSocket VIs. Any help or insight the community can provide would be greatly appreciated!
    Here are my questions:
    It is my understanding from the "known issue" description above that the memory leak happens when you have a DS Open wired to a DS Write wired to a DS Close, all inside a loop (example 1), and that the suggested workaround would be to move the DS Open and DS Close functions out of the loop on opposite sides, wired to the DS Write which remains inside the loop (example 2). Is this correct?
    Does this leak also happen when performing DS open-read-close's repeatedly (example 3)?
    What happens when a DS Write (or DS Read) is called without a corresponding DS Open and DS Close (examples 4a and 4b)? Does it implicitly do a DS open before doing the write operation and a DS close afterwards? What I'm getting at is this: would having an isolated DS Write (or DS Read) inside a loop, not connected to any DS Open or DS Close functions at all, cause this same memory leak?
    If one computer is running the DS server and a second computer is running the VI with the repeated open-write-close's, on which computer does the memory leak occur?
    In my question #1 workaround (example 2), the DS Open and DS Close outside the loop are routed through a shift register and in to and out of the DS Write inside the loop. If the DS connection id goes into the DS Write "connection in" and then splits and goes around the DS Write and out to the DS Close, without coming out of the DS Write "connection out" (example 5), will the memory leak still be avoided? I.e. if the DS Write function doesn't have anything connected to its "connection out", will it try to do an implicit DS Close?
    If the VI causing the memory leak is stopped, but LabVIEW stays running, will the leaked memory be reclaimed? What if the VI is closed? What if all of LabVIEW is closed?
    FYI, in the examples above "x1a" is a statically-defined DataSocket on the DS server running on the computer Max, to which the computer running the example VI's has read/write access. My actual application has numerous VI's and hundreds of DataSocket items, many of which are written to / read from every 50-100 ms in the style of examples 4a and 4b.
    Does anyone have any idea about this stuff?
    Thanks in advance,
    Patrick
    Attachments:
    examples_jpg1.zip ‏63 KB
    examples_vi1.zip ‏40 KB

    Hi Meghan,
    Yes, some of the larger VIs in my application do write to / read from several hundred DataSockets, so it's not feasible to use a shift register for each one individually, which is why I'm passing the references into an array, etc.
    Your Alternate Solution 2 is more along the lines of something that would work for me. However, my actual code has a lot of nested loops, sequences, and DataSocket items which are not all written to in the same frame, so this solution would still be difficult to implement: it would be cumbersome to unpack the entire 500-element reference array and build a new one (maintaining the positions and values of the unaffected elements) every time I write to some small subset of the DataSockets.
    I think I have a solution which solves the problem and is also scalable to the size of my application -- I've attached it as Example 7. Do you think this will avoid the memory leak? It's the same as your Alternate Solution 2, except that instead of building a new array out of the DS Write reference outs, each reference out replaces the appropriate element of the original array.
    If I understand you correctly, in order to avoid implicit reference opens and closes, a DS Write needs to have both its reference in and reference out wired to something. Thus, even though my Example 7 replaces an element of the array with an identical value, and therefore doesn't actually change the array (which would be a silly thing to do normally), the DS Writes have their reference outs wired to something, eventually in a roundabout way to a DS Close, so it should avoid the memory leak.
    Just out of curiosity (I don't think anything like this would apply to my application or any fixes I implement), when would the implicit reference close happen in the attached Example 8? The DS Write has its reference in and reference out both connected to temporally "adjacent" DS Writes via the shift register, so perhaps it wouldn't try to close the reference on each loop iteration? Or would it look into the future and see that there is no DS Close and decide to implicitly do that itself? Or maybe only the DS Write on the last loop iteration does this?
    Thanks for bearing with me through this,
    Patrick
    Attachments:
    example73.JPG ‏40 KB
    example83.JPG ‏14 KB

  • How to fix huge iTunes memory leak in 64-bit Windows 7?

    iTunes likes to allocate as much as 1.6GB of memory on my dual-quad XEON 8GB 64-Bit Windows computer and then becomes unresponsive.
    This can happen several times a day and has been going on for as long as I can remember. No other software I use does this, only Apple's iTunes. Every version of iTunes I have installed appears to have this same memory leak. Currently I am running version 10.7.0.21.
    I love iTunes when it works.  But having to constantly kill and relaunch the app throughout the day is bringing me down.
    Searching for a fix for this on the internet just surfaces more and more complaints about this problem - but without a solution.
    Having written shrink-wrapped software for end users as well as for large corporations and governments for more than 25 years, I know a thing or two about software. A leak like this should take no more than a day or two to locate using modern tools, and double that to fix. So why does this problem persist with each new version of iTunes? iTunes for Windows is the flagship software product Apple makes for non-Mac users, yet with each new release they continue to pass up the opportunity to fix this issue. Why is this?
    Either the software engineers are not that good, or they have been told NOT to spend time on this issue. I personally believe that the engineers at Apple are very good, and therefore am left thinking the latter is more likely. Maybe this is to coax people into purchasing a Mac so that they can finally run iTunes without these egregious memory leaks. I would like to offer another issue to consider.
    Just as Amazon sold Kindles and Google sold Nexus tablets at low cost, not counting on margin for profit but wanting to saturate the marketplace with tools that make future content purchases almost trivial, Apple counts on this model with their pricier hardware; but they also have iTunes. Instead of trying to get people to switch to a Mac by continuing to avoid fixing this glaring issue in iTunes for Windows, I would suggest that by allowing their engineers to address it, Apple will help keep Windows users from jumping ship to another music app. The profit to be made by keeping those Windows users happy and wedded to the iTunes Store is obvious.
    Keeping this leak in iTunes for Windows only lowers my esteem for the company and makes me wonder if the software is just as buggy on Macs.

    I have the same issue. It has been ongoing for more than a year, and I'm currently running iTunes 11.3.
    My PC is a Dell OptiPlex 990, i7 processor, 8GB RAM, Windows 7 64-bit (always patched up to the latest OS updates, etc.).
    I use this iTunes install to stream music, videos, etc. to multiple Apple TVs, iPads, and iPhones via Home Sharing.
    I store all my media, including music, videos, and apps, on a separate NAS, so the iTunes instance running on the PC is only playing the traffic-cop role, streaming files stored on the NAS; this creates lots of IO across my network.
    Previous troubleshooting suggests possible contributing causes include:
    a) Podcast updates. Until recently I had auto-update on for multiple podcast subscriptions; presumably iTunes would flow these from the PC to the NAS across the network. If the memory leak is in the iTunes network communication layer (Bonjour?), it may be sensitive to IO that would not occur if iTunes saved files locally on the same PC.
    b) App updates. I have 200+ apps in my library and there is always a batch of updates, some hundreds of MB in size; I routinely see 500MB to 1GB of updates in a single update run. All my apps are
    c) Streaming music/movies. When we ramp up streaming of music or movies, the memory leak grows faster, i.e. within hours of a clean start.
    d) Large syncs of music or videos to iPads or iPhones. I've noticed big problems when I rebuild an iPad; I typically have 60+ GB of apps/music/videos to load, and have to do the rebuild in phases due to periodic lockups.

  • Memory leak in tagsrv.exe

    I'm seeing a memory leak in tagsrv.exe on my laptop, where I use DSC v8.2. When I close the screen (suspend the laptop) and then open it again (resume), the tagsrv.exe memory usage jumps dramatically, often to over 125MB. What's more, it appears each time I do this the memory jumps again by about this amount (I'm not completely sure how consistent the incremental increase is, but I've often seen over 250MB of memory usage, and as high as 500MB). Has anyone else encountered this, or does anyone have a solution?
    Thanks.
    David Moerman
    TruView Technology Integration Ltd.

    Attached are some images of memory usage. Interestingly, the problem does NOT occur if I simply hibernate and then resume over a short period of time. The images you see are the result of hibernating OVERNIGHT. I'm not sure what to make of that. I did not have LabVIEW running when the memory jump occurred.
    The LV DSC project I'm working on has about 40 shared variables, many of which are supposed to be connected to a Modbus/TCP client via OPC. However, during this development period I do not have the hardware, so some shared-variable errors result, which I ignore.
    Also, I'm using a dataset I/O server to create batch-oriented datasets.
    -Dave
    Attachments:
    memory1.GIF ‏49 KB
    memory2.GIF ‏49 KB

  • HTMLEditorKit memory leak

    We are using the HTMLEditorKit class to parse documents in a Java program, using Java version 1.4.1_02. While the class seems to work correctly, we eventually run out of memory when processing large numbers of documents. The problem worsens when we process big documents. We had the same problem with Java version 1.4.0_03. In our class constructor we have the following code:
    // create the HTML document
    kit = new HTMLEditorKit();
    doc = kit.createDefaultDocument();
    Inside our processing loop we do the following:
    if (doc.getLength() > 0) {
        doc.remove(0, doc.getLength());
    }
    // Create a reader on the HTML content.
    Reader rd = getReader(inFilePath);
    // Parse the HTML. -- Here is the memory leak:
    kit.read(rd, doc, 0);
    // Close the reader.
    rd.close();
    Any help will be much appreciated.
    Michael Sperling
    516-998-4803

    Hi,
    If you process HTML files in a loop, I assume you do not actually want to show them on screen. It sounds as if you are doing something with the HTML, such as reading the code and changing / storing / transforming it, in a batch manner over multiple files.
    If so, I would not recommend using HTMLEditorKit.read at all, because it will not only parse the HTML, it will also build an HTMLDocument with the full element structure each time. That structure is only needed for showing the HTML file in a text component and working on the document interactively.
    Depending on the actual purpose of your processing loop, you might want to simply parse the HTML file using class HTMLEditorKit.ParserCallback. By creating a subclass of HTMLEditorKit.ParserCallback you can implement / customize the way you would like to process HTML files.
    An instance of your customized subclass can then be used by calling
    new ParserDelegator().parse(new FileReader(srcFileName), myParserCallback, true);
    HTH
    Ulrich
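    A minimal sketch of this suggestion: subclass HTMLEditorKit.ParserCallback and hand an instance to ParserDelegator, so no HTMLDocument is ever built per file. The class name and sample markup here are illustrative only.

```java
import java.io.Reader;
import java.io.StringReader;
import javax.swing.text.html.HTMLEditorKit;
import javax.swing.text.html.parser.ParserDelegator;

public class TextExtractor extends HTMLEditorKit.ParserCallback {
    private final StringBuilder text = new StringBuilder();

    @Override
    public void handleText(char[] data, int pos) {
        // Called once per run of character data; tags are skipped entirely,
        // so no HTMLDocument (and its element structure) is ever built.
        text.append(data).append(' ');
    }

    public String getText() {
        return text.toString().trim();
    }

    public static void main(String[] args) throws Exception {
        TextExtractor callback = new TextExtractor();
        Reader rd = new StringReader("<html><body><p>Hello <b>world</b></p></body></html>");
        new ParserDelegator().parse(rd, callback, true);
        rd.close();
        System.out.println(callback.getText());
    }
}
```

    Overriding handleStartTag / handleEndTag in the same subclass lets you react to markup as well, still without building a document per file.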

  • Memory Leak With Spatial queries

    We are using 8.1.6 on NT 4.0 for spatial data queries, and we are facing memory-leak problems. At the start our job runs very fast, and after some time it starts slowing down. I'm monitoring PGA size from v$sesstat/v$sysstat and it is steadily increasing; the same goes for memory on the NT machine when I monitor through Performance Monitor. I have already applied the Spatial patch available for 8.1.6, but there is no improvement.
    Please let me know if there is any workaround. When I submit my job in parts and shut down the database in between, it releases all the memory and works fine. Without shutting down the database, the memory is not released even after I stop my spatial batch job.

    Hi,
    Thanks for your responses.
    This is the query:
    SELECT a.geo_id, mdsys.sdo_geom.sdo_length(
        mdsys.sdo_cs.transform(
            mdsys.sdo_geometry(2002, 8307, null,
                mdsys.sdo_elem_info_array(1,2,1),
                mdsys.sdo_ordinate_array(' || longi || ', ' || lati || ',
                    a.geo_geometry.sdo_point.x,
                    a.geo_geometry.sdo_point.y)),
            mdsys.sdo_dim_array(
                mdsys.sdo_dim_element(' || '''' || 'X' || '''' || ', -180, 180, .00000005),
                mdsys.sdo_dim_element(' || '''' || 'Y' || '''' || ', -90, 90, .00000005)),
            41004),
        .00000005) * 6.213712e-04 distance_in_miles
    FROM ' || t_name || ' a
    WHERE mdsys.sdo_nn(a.geo_geometry,
        mdsys.sdo_geometry(1, 8307,
            mdsys.sdo_point_type(' || longi || ', ' || lati || ', null),
            null, null),
        ' || '''' || 'SDO_NUM_RES=5' || '''' || ') = ' || '''' || 'TRUE' || '''' || '
    AND a.geo_id ' || filter || '
    ORDER BY 2';
    Here we are passing t_name and the filter dynamically based on certain conditions, and the memory leak is roughly 100K to 200K per query.
    First I tried just closing the session, but that didn't work; only a database shutdown releases the memory. I'm monitoring v$sysstat/v$sesstat and the size of oracle.exe in the NT Performance Monitor. Please let me know if something else needs to be monitored.
    Thanks.
    Sandeep

  • Memory leak with fieldpoint and labview

    I have an application which is showing signs of a memory leak. The application does several things, but the part that seems to be causing the trouble is related to the use of FieldPoint VIs. The application reads individual AI channels on a FieldPoint AI-110 (10 channels, with the full set measured once per second). I have attached the code related to this. The memory leak is quite large (~1.5GB in 24 hours of operation).
    I am using LabView 7.1, and Fieldpoint 4.1.  The parent application which uses the attached code is a stand-alone application.  The operating system is Windows 2000.  Fieldpoint communication occurs over a RS-232 link.
    Thanks in advance,
    Andy
    Attachments:
    FPAI100_meas_voltage.vi ‏62 KB

    Hi Andy,
    I did not see anything fundamentally wrong with what you wrote, but there were a few things I modified that might make a bit of a difference. In your application you were using sequence structures and a number of local variables. Since LabVIEW is based on data flow, you can control the order of execution by creating data dependencies, simply wiring one thing to the next. By using LabVIEW the way it is meant to run, I was able to remove the sequence structure completely and eliminate all of the local variables, while keeping the exact same execution order.
    It could be that the local variables were causing the memory leak you noticed, but I doubt they could be the cause of such a large one. I suspect something else is going on in the application, because from what I saw of this bit of code there is really no way it would cause problems this large. Users call the FieldPoint VIs daily without any problems, so they are most likely not the root of the problem.
    Go ahead and try the modified code, and see if you can apply similar local- and global-variable-reducing techniques throughout your application. Hopefully that will reduce some of the memory growth you are seeing. The largest cause of an apparent memory leak is typically building an array within a loop, so make sure you don't have any situations where that occurs in your code either.
    Regards,
    Otis
    Training and Certification
    Product Support Engineer
    National Instruments
    Attachments:
    724727-FPAI100_meas_voltage.vi ‏56 KB
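    The array-growth point above is hard to show as a LabVIEW diagram in text, but the same pattern can be sketched in Java as a rough analogue (not NI code). In LabVIEW terms, the first shape corresponds to Build Array inside a While Loop; the second to Initialize Array plus Replace Array Subset, or a For Loop auto-indexing its output.

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayGrowth {
    // Growing pattern: the backing storage is repeatedly reallocated and
    // copied as the list expands, which looks like steadily rising memory.
    static List<Double> growing(int n) {
        List<Double> out = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            out.add((double) i);
        }
        return out;
    }

    // Preallocated pattern: one allocation up front, elements replaced
    // in place, so memory use stays flat while the loop runs.
    static double[] preallocated(int n) {
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            out[i] = i;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(growing(5).size() + " / " + preallocated(5).length);
    }
}
```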

  • Memory Leak with GotoAndStop()

    I am trying to track down a solution to a memory leak within an embedded video. If I embed a video on the timeline and then run gotoAndStop or gotoAndPlay to a specific frame, every time the frame changes the player takes more memory until it crashes at about 1.7GB; however, if I run a straight play() on the same clip, memory usage remains constant. I have also tried System.gc() (it is an AIR app) and unloadAndStop() with the embedded video loaded into a Loader, and neither reclaims any memory.
    I am looking for a way to resolve the memory leak, or another way to rapidly jump to specific frames of video; as far as I know this is not possible with external FLVs.
    Thanks
    Dave

    Video is much like a .gif file: if you start at frame 0 and jump to frame 30, the player needs to redraw what it doesn't have at frame 30. Telling it to start at frame 30 means it must gather all the information from the last point of change, which might be frame 27, to complete what should display on frame 30.
    This is why it's slow to play backwards, and best to play an FLV forward.
    If you do not need video per se, I would suggest using PNG sequences; but if your videos are long, it is probably a greater advantage to keep them compressed.
    What is the quality of the video, its size, and your frame rate?
    You mention that you are creating a 3D view; how is that done? Do you use more than one SWF?
    I still think this does not sound like a playhead issue, since the video is already compiled.

  • Memory Leak in QTPlugin ? (QT7, and QT 6.5+)

    Trying to track this down: there is a memory leak when playing an endless-loop QuickTime sequence of images. To get the loop, I've created SMIL files with the images, then an HTML page that uses the EMBED tag to reference these SMIL files via QTNEXTxx, with a final GOTO0 to get back to the start.
    It works, but if you watch the process with Activity Monitor (or top) you can see that memory consumption grows over time and will eventually crash whatever browser you are using. This happens whether I open the HTML file with Safari, iCab, Opera, Firefox, RealPlayer, etc.
    A very simple example with even one image will leak memory, albeit at a slower pace:
    HTML:
    <html>
    <head>
    <title>radarLoops.html</title>
    </head>
    <body bgcolor="#000000">
    <center>
    <embed src="file1.smil" BGCOLOR="#000000" autoplay="true" controller="false" width="800" height="620" pluginspage="http://www.apple.com/quicktime/download/" QTNEXT1="GOTO0">
    </center>
    </body>
    </html>
    file1.smil:
    SMILtext
    <smil xmlns:qt="http://www.apple.com/quicktime/resources/smilextensions" qt:autoplay="true" qt:time-slider="false">
    <head>
    <layout>
    <root-layout width="800" height="620" background-color="black" />
    <region id="region_1" left="0" top="0" width="800" height="620" fit="fill" />
    </layout>
    </head>
    <body>
    <seq>
    <img src="TEST.JPG" region="region_1" dur="1s" />
    </seq>
    </body>
    </smil>
    To work around this issue, we had to revert to a 10.3.x system with QT 6.4.x on it; reverting a 10.3.9 system to QT 6.5.x didn't work. A Tiger installation with 7.0.3 doesn't work either; it leaks like a sieve.
    How can I debug this?

    Write a main() that runs the polling in a tight loop 1000 times, then System.gc(); Thread.sleep(1000); System.exit(). Run it with -Xrunhprof and observe the "live bytes" and "live objs" columns in the generated java.hprof.txt file. Does anything strike you as suspicious? If every polling round leaks one object, there are likely to be 1000 (or N*1000) live instances of something. Make sure the leak really is in the polling routine by running it 1,000,000 rounds and getting an OutOfMemoryError.
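    That advice can be sketched as a self-contained harness. This version measures retained memory with Runtime instead of parsing java.hprof.txt, and pollOnce() is a hypothetical stand-in for the suspect routine (here it deliberately retains ~1 KB per round to show what a leak looks like):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakHarness {
    // Simulated leak: pollOnce() is a hypothetical stand-in for the
    // suspect polling routine; here it retains ~1 KB per round.
    static final List<byte[]> retained = new ArrayList<>();

    static void pollOnce() {
        retained.add(new byte[1024]);
    }

    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        Thread.sleep(200);
        long before = rt.totalMemory() - rt.freeMemory();

        for (int i = 0; i < 1000; i++) {
            pollOnce();
        }

        System.gc();
        Thread.sleep(1000);
        long after = rt.totalMemory() - rt.freeMemory();
        // If each round leaks one object, roughly N * objectSize bytes
        // survive the final GC; with -Xrunhprof you would instead look
        // for ~1000 live instances of something in java.hprof.txt.
        System.out.println("retained bytes: " + (after - before));
    }
}
```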

  • Memory leak with FP closed

    Well this is an odd one I must say.
    I have an application where I would like the front panel to disappear at the user's discretion, so it's not in the way while the application processes information in the background. To process the information I have a custom DLL, and to check the progress of the custom DLL I have a routine I can call that returns the status. The information processor is called and runs in the background; the status routine is then called in a parallel loop until it determines that the processing is finished. When the status routine says it's done, the parallel loop exits and another routine sets a flag for the processor to exit (so everything quits).
    In order to make the DLL calls work right as far as execution sequencing, they are called as re-entrant. I stumbled across that originally because things were locking up when I tried to call them in parallel.
    I have no memory leaks in the external DLL calls; this is verifiable by running the processing routine indefinitely while checking the status indefinitely. The problem is when I open a VI reference to the main VI and then make its front panel not visible. At that point, memory usage starts increasing by 400k/sec (which is larger than the entire DLL). It will eventually just crash the system. If I set the front panel to be closed on a timer (for several seconds, say), the memory usage will increase for that amount of time and then stop once the FP is opened again.
    I know this is really difficult without some code to look at, and it would be really hard for me to boil the code down to a simple set of VIs that shows the problem. I'm wondering, however, if the re-entrant DLL calls are having some weird interaction with Windows, since I'm guessing that at a lower level the application's window handle is invalidated when the front panel is closed. I'm also wondering if the re-entrant DLL status call is having new memory allocated for it every time it is called in the status loop; I just can't figure out what that would have to do with the front panel being closed.
    Any suggestions on things to try or look at?

    Well, after tweaking around: running the DLL calls in the UI thread instead of as re-entrant fixes the massive memory leak. I can't think of a good reason why that isn't a bug, but whatever. Perhaps it's not crashing the program now when running in the UI thread because each DLL call is actually in its own re-entrant VI wrapper. Hmmm...

  • Memory leak - Node clean up?

    Hey guys,
    I'm running into a memory leak and was reading some other threads where people were having their own memory leaks. Is there a way through NetBeans to track memory usage or something to figure out what is consuming memory? My memory leak sounds similar to this gentleman's: [http://forums.sun.com/thread.jspa?threadID=5409788] and [http://forums.sun.com/thread.jspa?threadID=5357401]
    Setup:
    I have a MySQL database call that returns a bunch of results and, in the end, converts each record into its own custom node, which then gets put into a sequence, which is then given to a listEventsNode (a custom VBox node) and finally put on the stage. Note that each custom node has its own animations, events, images, and shapes.
    listEventsNode gets its list of custom nodes from a publicly accessible sequence variable whose contents are simply cleared, with new nodes added by the next MySQL search. I cleared the sequence using delete myCustomNodeSequence.
    I even went as far as setting the same sequence to null (myCustomNodeSequence = null;). This unfortunately makes no difference in memory usage; java.exe reaches 100MB and then crashes.
    listEventsNode is bound to eventSequence; this ensures that changes to the sequence of custom nodes are immediately reflected in the VBox (listEventsNode).
    listEventsNode is on the main stage and there is only one instance of it; what changes is the content of eventSequence. Even if I clear the contents of eventSequence, it doesn't appear to "clean up" the memory.
    The way I'm doing it is probably breaking every rule in the book. I should probably make listEventsNode its own object which isn't bound to any external variables (such as eventSequence); when a new search takes place I would simply delete the listEventsNode, and when the search completes, re-add the node to the scene. Am I on the right track here?
    Is there a good "best practices" guide for JavaFX? For example, a typical mistake that causes a node to be recreated over and over again in memory when the programmer thought it was simply being overwritten? Does this make sense? I have a feeling that my application is not deleting the custom nodes created for eventSequence, and even if eventSequence is deleted and reassigned with new custom nodes, the previous custom nodes still reside in memory.
    Let me know if you need to see the source or any logs/readouts from NetBeans during execution if this will help.
    Thanks for taking the time to read this.
    Cheers,
    Nik.

    Your heap usage looks pretty typical. In scenario 5, I think you are simply running out of memory. I doubt its a leak inside the javafx runtime (although it could be).+
    I think you might be right. It's running out of memory and I may have to increase the heap size.
    Say that my application legitimately needs more memory to operate, and I need to increase the heap size. How does this work if you have a fully deployed application? Is the heap size information built into the application itself so that when the jvm runs the application, it allocates the appropriate amount of memory? I've increased the heap size from 64mb to 128mb, and I added many many nodes and I still crapped out when I hit the 128mb ceiling. I changed it to 512 and I added a TON of nodes (when you click a node from the VBox, it adds a miniature version of the node to the scene graph) and I'm just under 200MB. I plan on setting a cap to how many concurrent additional nodes can be placed on the scene graph, which will help.
    If you deploy this as is, how does the application utilize memory if you've adjusted the heap size? Or is this specific to the IDE only?
    Do you know what objects are on the heap? Can you compare what objects are on the heap from one scenario to the next?+
    Where can I find this information in NetBeans profiler?
    Do you have a lot of images? Are they thumbnails or are they images scaled down to thumbnail size?
    Actually, yes, I am using a scaled-down thumbnail of the original image. The original image files are PNG format, 60x60 pixels, and about 8 KB in size. I simply use the "FitWidth:" property to scale the image down. I was doing some more reading before I went to bed and was going to use an alternative way to scale the image down. By simply doing this, the initial heap usage of the 500-node search went down from 44 MB to 39 MB. It's still slower on consecutive searches versus the first, but it's stable.
    Edit: I've used the width: property to downsize the image, and it looks like I'm not hitting that heap crash as fast, but this poses a problem: I need the full size of the image available when a custom node is selected. What's the best way of doing this? I should probably store the image location in a string and recreate the image when I need it at full size, since there is only one full-size version of it on screen at a given time. I've also completely disabled the addition of a picture in my custom node; it appears these images don't take up a lot of space since they are very small. I save an additional 3-5 MB in heap space if I completely disable the pictures and have just the nodes themselves. Each node has animation effects (i.e. fading in/out of colors, changing of color if selected), although the class itself is pretty long in comparison with any other classes I have.
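    Sketching the "keep the path, rebuild the image on demand" idea in plain Java (LazyImage and the loader function are hypothetical names, not from my app; in JavaFX the loader would construct the full-size Image from the stored location):

```java
import java.lang.ref.SoftReference;
import java.util.function.Function;

// Keep only a lightweight key (the image path) per node; materialize the
// heavy full-size object on demand and let the GC reclaim it under memory
// pressure (SoftReferences are cleared before an OutOfMemoryError).
public class LazyImage<T> {
    private final String path;                 // cheap to keep per node
    private final Function<String, T> loader;  // e.g. p -> new Image("file:" + p)
    private SoftReference<T> cached = new SoftReference<>(null);

    public LazyImage(String path, Function<String, T> loader) {
        this.path = path;
        this.loader = loader;
    }

    public synchronized T get() {
        T img = cached.get();
        if (img == null) {              // not loaded yet, or reclaimed by GC
            img = loader.apply(path);
            cached = new SoftReference<>(img);
        }
        return img;
    }

    public static void main(String[] args) {
        LazyImage<String> demo = new LazyImage<>("photos/a.png", p -> "IMG:" + p);
        System.out.println(demo.get());
    }
}
```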
    Are you clearing the nodes from your scene content before the search or after? If after, are you creating the new nodes before clearing out the old?
    Yes, I have a function that reassigns the stage.scene.content sequence, omitting the custom VBox that houses the list of custom nodes, prior to the next search. The cleanUp() function is called prior to the insertion of the new custom VBox.
    It might be useful to turn on verbose garbage collection (-verbose:gc on the java command line) just to see what's happening in gc.
    What is this exactly? I tried putting in System.gc() but I'm not sure if I'm seeing any difference yet.
    Edit: Actually, I've placed System.gc() after my cleanUp() function, and I'm noticing the heap usage is more conservative. It seems to collect more often than it did before. But yes, the underlying problem of running out of memory still needs to be looked at.
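    To be clear on what I learned: -verbose:gc is a JVM launch flag (it prints a line per collection to the console), not something you call from code, and System.gc() is only a hint the JVM may ignore or defer. A minimal self-contained sketch (class name and allocation sizes are made up for illustration):

```java
// Illustrative only: allocate ~64 MB, drop the reference, then hint the GC.
// Run with  java -verbose:gc GcProbe  to see collection events on stdout.
public class GcProbe {
    public static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        byte[][] junk = new byte[64][];
        for (int i = 0; i < junk.length; i++) {
            junk[i] = new byte[1 << 20];   // 64 chunks of 1 MB each
        }
        System.out.println("before clear: " + usedMb() + " MB");
        junk = null;       // drop the only reference to the chunks
        System.gc();       // a *hint*: the JVM may collect now, or later
        System.out.println("after gc hint: " + usedMb() + " MB");
    }
}
```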
    You might also (just as an experiment) force garbage collection between your searches.
    This seems to work well with smaller result sets. However, a search that produces over 500 custom nodes in a custom VBox uses more than half of the available heap, so the next time it tries the same search it just crashes, as you mentioned in your first point. The memory simply runs out. What I don't get is that if I "delete" the custom VBox prior to inserting the new one, the memory doesn't seem to be released immediately. From what I'm reading about the garbage collector, it doesn't exactly do things in a prompt fashion.

  • Batch sequence to 'flatten' xfa form to static non-interactive AcroForm

    Hi,
    I'm looking for a way to flatten a bunch of XFA forms in a batch sequence, the result being exactly the same as printing to PDF.
    I believe I used to just add optimizer settings in the sequence editor and that would do the trick. But now I'm getting an error saying optimization settings cannot be applied to an XFA form.
    Any help would be much appreciated.
    Thanks.
    Kyle

    Actually, George, I found a solution. They are dynamic XFA forms. I realized that the PDF printer settings have an option to suppress the Save As dialog and default the output flat PDFs to a location of my choosing. I just set the PDF printer as my default printer and run a batch sequence to print the dynamic PDFs.
    It requires a decent system with lots of memory, since an instance of Acrobat is opened for every PDF processed, but it does the trick!
    I appreciate your response.
    Kyle

  • TestStand 3.1 Report Memory Leak

    Hi everyone,
    I've been looking at the memory usage of my TestStand code lately. I noticed I have a memory leak with On The Fly Reporting using an XML report file. I'm currently using LabVIEW 7.1.1 and TestStand 3.1f for development. I came across this problem after seeing a test station slow to a snail's crawl. The program starts out around 100 MB and will keep growing, using up all the system's memory; the largest I've seen was 500 MB. What is happening is that the testers like to keep the on-the-fly report screen up during the test and not view the execution window. At the end of each run the program grows 10 MB to 30 MB. I know the problem can be solved by turning off on-the-fly reporting, but I would like to find a programming option first.
    I have looked at the other forum entries about:
    Turning off database collection
    Editing model options: ModelOptions.DiscardUnusedResults = True and Parameters.ModelOptions.NumTestSockets = 1
    Adjusted the array size of the report to 300
    None of the options seem to have worked for me; most of the posts say to move to TestStand 3.1 because it's a 3.0 issue.
    I was wondering if there are any other options to try in TestStand, or any way to turn off/close IE to release memory between tests.
    I'm currently running the code on Windows XP SP2
    IE6/IE7 depending on the test station
    P4 or Centrino with 512 MB to 1 GB of RAM
    The GUI window is a modified example of the basic VI EXE that calls TestStand
    The code is installed using the LabVIEW and TestStand deployment builders, so the test machines are using runtime LabVIEW 7.1.1 and runtime TestStand 3.1
    Thanks to anyone who can offer a little extra help on the topic

    Actually, the reports are XML, not HTML.
    There are a lot of passing steps; usually after a couple of failed steps, the sequence will terminate.
    The memory leak seems to be exacerbated because the technicians like to view the Reports tab instead of the Execution tab when running the program. If the operator keeps the view on the Execution tab during the program, the report screen is not being updated, so less memory is taken up. Since the techs like to keep the report screen up during the entire test, the XML file is constantly being updated and the program grows by 50+ MB after the first run. With on-the-fly reporting turned off, they cannot view the report during the test and the memory usage stays low.
    I do not see a leak at all if I do not click on the Reports tab, and the leak seems to only happen with XML files; TXT and HTML reports do not cause a memory leak, even when viewed on the fly for the entire test. I wanted to keep XML-expandable because it's easier to see which sequences failed than scanning through the entire report.
    I'm using Test UUT to call up the test. The executable terminates if you press the Exit button. After the test is finished, the standard Pass/Fail banner comes up, then the standard Enter UUT Serial Number prompt comes up.
    I'm including the report options INI I use.
    Thanks for your help in this matter; sorry for being a little late with replying.
    Attachments:
    report.ini ‏4 KB

  • Help needed: Memory leak causing system crashing...

    Hello guys,
    As helped and suggested by Ben and Guenter, I am opening a new post in order to get help from more people here. A little background first...  
    We are doing LabVIEW DAQ using a cDAQ9714 module (with AI card 9203 and AO card 9265) at a customer site. We run the executable on an NI PC (PPC-2115), and a couple of times (3 so far) the PC has just frozen (it's back to normal after rebooting). After monitoring the code running on my own PC for 2 days, I noticed there is a memory leak (memory usage increased 6% after one day's run). Now the question is, where is the leak?
    As a newbie in LabVIEW, I tried to figure it out by myself, but not very successfully so far. So I think it's probably better to post my code here so you experts can help me with some suggestions. (Ben, I also attached the block diagram in PDF for you.) Please forgive me that my code is not written in good manner; I'm not really a trained programmer but more of a self-educated user. I put all the sequence structures in flat layout as I think this might be easier to read, which makes it quite wide. Really wide.
    This is the only VI for my program. Basically what I am doing is the following:
    1. Initialization of all parameters
    2. Read seven 4-20mA current inputs from the 9203 card
    3. Process the raw data and calculate the "corrected" values (I used a few formula nodes)
    4. Output seven 4-20 mA currents via the 9265 card (then to the customer's DCS)
    5. Data collection/calculation/outputting are done in a big while loop. I set the wait time to 5 seconds to save the CPU some juice
    6. There is a configuration file I read/save every cycle in case the system reboots. I also do data logging to a file (every 10 min by default).
    7. Some other small things like local display and stuff.
    Again, I know my code is probably a mess and hard to read, but I truly appreciate any comments you provide! Thanks in advance!
    Rgds,
    Harry
    Attachments:
    Debug-Harry_0921.vi ‏379 KB
    Debug-Harry_0921 BD.pdf ‏842 KB

    Well, I'll at least give you points for neatness. However, that is about it.
    I didn't really look through all of your logic, but I would highly recommend that you check out the examples for implementing state machines. Your application suffers greatly in that once you start, you've basically jumped off a cliff: there is no way to alter your flow. Once in the sequence structure, you MUST execute every frame. If you use a state machine architecture, you can take advantage of shift registers and eliminate most of your local variables. You will also be able to stop execution if necessary, such as on a user abort or an error. Definitely look at using subVIs. Try to avoid implementing most of your program in formula nodes; you have basically written most of your processing there. While formula nodes are easier for very complex equations, most of what you have can easily be done in native LabVIEW code. Also, if you create subVIs, you can iterate over the data sets. You don't need to duplicate the code for every data set.
    I tell this to new folks all the time: take some time to get comfortable with dataflow programming. It is a different paradigm than sequential text-based languages, but once you learn it, it is extremely powerful. Let your data flow control execution rather than relying on the sequence frame structure. A state machine will also help quite a bit.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
