Memory leak in Waveform Graph?

Either there is a huge memory leak in the waveform graph, or I am really doing something wrong.
I created an example app with a waveform graph and a button; the constructor looks as follows:
Form1(void)
{
    InitializeComponent();
    vals = gcnew array<double>(60000);
    for (int i = 0; i < 60000; i++)
        vals[i] = Math::Sin(Math::PI * 2 * 60 / 6000.0 * i);
}
and the click event looks like this:
System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
{
    this->waveformGraph1->Plots->Clear();
    NationalInstruments::UI::WaveformPlot^ plot = gcnew NationalInstruments::UI::WaveformPlot(xAxis1, yAxis1);
    plot->PlotY(vals);
    plot->LineColor = Color::Red;
    this->waveformGraph1->Plots->Add(plot);
}
Every time I click the button, the memory used on my system goes up by about 10 MB. I also tried this using this->waveformGraph1->PlotY(vals) instead, and the memory usage stays solid as a rock.
Am I doing something wrong, or what is causing the leak so I can work around it? My program plots four arrays of this size on one graph per test result.

The plot uses some unmanaged resources (GDI objects and other handles), which is why it implements IDisposable. Because of the GC, resource cleanup is not deterministic; cleanup occurs when the GC deems it necessary. Calling delete (which is C++/CLI syntax) ultimately ends up calling Dispose, and this forces the object to release any handles it might have immediately. See the documentation for the .NET Dispose pattern for more information.
If you don't call delete, what would end up happening is that eventually, at some point in the application, the GC would fire and clean up all the objects and handles, and you would see a drop in the application's memory footprint; but you would need to run the application for a while before that might happen. In a long application run, things would end up stabilizing.
Bilal Durrani
NI
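
For concreteness, here is a minimal C++/CLI sketch of the two remedies discussed above: reuse a single plot across clicks (which the original poster already observed keeps memory flat when going through waveformGraph1->PlotY), and explicitly delete any plot that is discarded, since delete invokes Dispose. The member field plotRed is hypothetical; the other names follow the original post.

// Sketch only: plotRed is a hypothetical NationalInstruments::UI::WaveformPlot^
// member added to Form1 so the plot can be reused instead of reallocated.
Form1(void)
{
    InitializeComponent();
    vals = gcnew array<double>(60000);
    for (int i = 0; i < 60000; i++)
        vals[i] = Math::Sin(Math::PI * 2 * 60 / 6000.0 * i);

    // Create the plot once, up front.
    plotRed = gcnew NationalInstruments::UI::WaveformPlot(xAxis1, yAxis1);
    plotRed->LineColor = Color::Red;
    this->waveformGraph1->Plots->Add(plotRed);
}

System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
{
    // Replot into the existing plot instead of allocating a new one per click.
    plotRed->PlotY(vals);
}

// If a plot really must be discarded, dispose it explicitly so its GDI
// handles are released immediately rather than at the GC's discretion:
//     this->waveformGraph1->Plots->Clear();
//     delete oldPlot;   // delete calls Dispose in C++/CLI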

Similar Messages

  • Memory leak in Digital Waveform

    I have a program with a pretty serious memory leak that uses up all my system RAM and crashes my computer within a few hours of running the program.
    The program takes an array of U16s where each bit represents a digital signal. The VI converts each U16 to a digital array and groups the resulting 16 digital signals into different busses for display on a Digital Waveform Graph. The profiler doesn't show any excessive memory usage in the VI. I put the whole VI into a Diagram Disable structure and moved a few pieces out at a time, and eventually the only thing inside the disable structure was the Digital Waveform Graph indicator. When this indicator is enabled, the memory usage of my system rises slowly and steadily until it uses all available RAM and crashes the system.
    If I replace the Digital Waveform Graph indicator with a cluster, the memory leak still occurs (but much more slowly). I thought using the cluster fixed the leak until I reran the VI overnight while using the cluster instead of the Graph.
    If I stop the VI before all the RAM is used, the RAM is not released until I close LabVIEW entirely. Once LabVIEW closes, the memory is released slowly and exponentially, unless I use the "End Process" option in Task Manager.
    This is a continuation of a previous post I made where I thought the memory leak was due to problems transferring data from an FPGA for display.
    I ran the MemLeak vi (attached) on two separate systems, both running LV 2013 SP1, and got the same results. The memory leak is noticeably fast when using the enable structure connected to the Digital Waveform Graph but still present when using the cluster of Digital Waveforms.
    Attachments:
    MemLeak.vi 33 KB
    LV shutdown.PNG 101 KB

    Thanks for the replies.
    In response to John's points:
    1. The attached VI is a simplification of an FPGA VI that read a fixed number of samples from a DMA FIFO using an FPGA Interface Invoke Method approach. I'm using a card (PXI-7842R) that doesn't allow use of the Acquire Read Region method. In order to allow people without an FPGA card to hopefully see the issue, I replaced it with the for loop. Assuming that this for loop does leak (which I don't believe it does; as altenbach said, it's a fixed size allocation that LV should be able to reuse), why would I see a difference in the leak magnitude depending on which indicator I connect to the array?
    2. I've previously reviewed the document you referenced, and I don't see any errors from it present in my code; do you? I have no global/local variables, strings/arrays displayed on front panel, property nodes, coercion dots, altered memory sizes, resizing/reallocations, etc. I don't see any weird buffer allocations. I used to have the conversion from U16 array to digital waveforms in a subVI but placed it on the same diagram to allow incremental use of the Diagram Disable structure.
    3. The forum post you referenced had many of the items discussed above, plus it was solved using an RT FIFO. I'm not passing data from a producer to a consumer; I'm just displaying acquisition results. I guess you could say I'm processing the data, but I'm really only converting it to a format that the indicator will take; I'm not operating on the data.
    It's good that the leak doesn't show up in 2014, but my SSP runs out in a couple of days; I never got an upgrade to 2014. This is the last item remaining on the development path, and we've already spent ~$4k to upgrade the controllers enough to display the acquisition without dragging down the CPU. I will be in hot water if I spent all that money and then end up having to scrap the display...

  • Memory Leak in cwui.ocx (graph control)

    Hi!
    I'm using the CWGraph control (cwui.ocx V2.0.3.413) in VB6. Once I have assigned the graph properties to set the UI appearance, I send 2D arrays to update the plot only (via either .PlotY or .ChartY; it doesn't matter which). The application then consumes more and more memory. When I comment out the .PlotY or .ChartY code line, the problem disappears. So it seems to be a memory leak in the cwui control. Who can help?

    Keep in mind that the graph keeps its own copy of the data that you pass in via the Plot/Chart methods. It has to do this for several reasons, like if it needs to repaint or you want to pan the data. This could explain what you're seeing if the memory usage is comparable to the amount of data that you're passing in via the Plot/Chart methods. If that's not the case, please post a small test project that demonstrates the problem. Thanks.
    - Elton

  • What causes memory use of my program to increase? (Write to Spreadsheet? Running in LV environment? External DLLs? Waveform graphs?)

    Hi 
      I have attached a plot for the discussion here...
      I am monitoring the memory usage of my VI by calling a Windows DLL to keep checking it (many thanks to Matt).
      I saw rising slopes and flat regions (wow! First time I've caught this; that's what I was expecting).
      I am trying to work out where the memory growth could be coming from.
    What I did:
    1. Mostly use queues (all with a limited number of elements) for parameter delivery between loops and between subVIs.
    2. I release each obtained queue reference as soon as I finish with it (only leaving a few alive so they won't be killed).
    3. For all arrays, I initialize with a fixed-size array constant and do all work with Replace Array Subset, Index Array, and the In Place Element structure.
    Above are the measures I intended to use to save memory.
      However, there are a few points that I think might cause memory growth, and I hope someone can share their experience or give me an answer...
    1. Write to Spreadsheet File: I use this to log data and events to the hard disk. I always append new logs/data to the existing file, but I keep doing it throughout the run.
    2. Running my program in the LabVIEW environment: I haven't compiled it yet. However, while the plot's data was being taken, my PC was left with no one using it.
    3. A couple of external DLLs are running: from what others have said, an external DLL's resources don't count toward LabVIEW.exe. Since I am monitoring LabVIEW's memory use, that couldn't be the source of the rise I see in this plot, right?
    4. Waveform graphs: I am not sure whether this can be a problem. Every time I feed data into a waveform control, I initialize a constant array and then replace elements in it, so I don't think my data source is causing any problem.
      Can someone comment on my description of my program above?
    Raymond

    vgbraymond wrote:
    1. Write to spread file:-- I keep using this to log data, events into harddisk, in my use I always append new logs/data to the existing file, however I keep doing it all the time throughout the run-time.
    Write to Spreadsheet File is a high-level VI that opens and closes the file with every call. I would recommend opening the file once at the start of the program, then appending using low-level file functions, as sketched below. Close the file after the program is done.
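    LabVIEW code is graphical and can't be quoted inline here, but the open-once/append-many/close-once pattern is language-independent; here is a rough C++ sketch of the idea (file name is hypothetical):
        // Open the log once, append many times, close once at shutdown,
        // instead of reopening the file for every entry the way
        // Write To Spreadsheet File does.
        #include <fstream>
        int main()
        {
            std::ofstream log("measurements.log", std::ios::app); // opened once
            for (int i = 0; i < 1000; ++i)
                log << i << '\t' << i * 0.5 << '\n';              // cheap appends
            return 0; // RAII closes the file once, when log goes out of scope
        }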
    Have you done any profiling to see which subVIs show the bulk of the memory use?
    You might also turn off debugging to see if it makes a difference. Don't open the front panel of subVIs (and avoid functions that force the front panel to be in memory) unless they need to show something important to the user.
    It would really help if we could see some actual code. Can you strip it down to the essentials that still show the problem?
    LabVIEW Champion. Do more with less code and in less time.

  • Free memory of waveform graph

    Hello,
    I'm logging temperature data once a minute and display 4 plots with 1440 points (data of one day) in a waveform graph.
    The data of the waveform graph is updated every 2 minutes. Over time, the application continuously allocates more memory (perhaps until the system crashes).
    Could anyone tell me how I can free the memory used by the waveform graph or its history?
    Thanks for any help,
    Ralf.

    The example "How to Clear Charts & Graphs.vi" (found via "Search Examples" in the LabVIEW Help) demonstrates ways to clear charts before, during, and after execution. This may provide you with some clues that you can apply to your VI.

  • Windows Audio Device Graph Isolation [audiodg.exe] MEMORY LEAK - EATS OUT 60-85% of CPU

    Hello,
    Two days ago I updated from Win 8 to Win 8.1 Pro, and this memory leak problem, which I had seen previously in other Windows versions, has arrived again. I didn't upgrade from boot, just from the desktop. I cleaned up the previous Windows installations and all that jazz that is inherent to Windows updates. I tried to fix the problem with the normal method of disabling the enhancements on the playback device, but they are gone.
    My drivers are all up to date, and I have a Conexant SmartAudio HD sound platform.
    Thanks in advance for any sort of help.

    I may have a solution for this... (worked for me):
    Right-click the speaker icon in the lower right corner.
    Select Playback Devices from the menu. A list of devices should appear on the screen.
    Double-click the device that has a green checkmark. The Properties window for that device should open.
    Click the Enhancements tab at the top.
    From the list of enhancements, uncheck all of them, or click the Disable all enhancements checkbox.
    Click the OK button to save your changes and close the window.
    Click OK to close the Playback Devices window.
    Please reply if that helped you :-)

  • Memory leak using CWGraph

    I have a memory leak problem using the CWGraph control.
    I have an SDI application (MFC using Measurement Studio), and I dynamically generate a dialog containing a 2D graph. I use the dialog's OnTimer() to generate data and update the graph, with a 50 ms timer. In the OnTimer() function I have a loop to generate and update two plots on the graph. When I call a method of the graph (for example, to change the color of a plot or to update a plot using PlotXvsY), memory periodically increases by a fixed amount (4 KB). In the same OnTimer() function I also update some CWSlide controls, without memory leaks.
    If I comment out the line that calls a method of the graph (e.g. m_Graph.Plots.Item(1)...), the code works without memory leaks.
    I'd appreciate any suggestions about this problem.

    I had the same memory leak problem with my program as well. I do not think it is because of using CWGraph. Memory leaks occur when you allocate memory and do not free it. The problem accumulates and the program crashes randomly (sometimes it crashes when you just move the mouse around). Try this: if the program does not crash (a memory-leak crash) the first time it compiles and runs, the problem probably has nothing to do with the CWGraph. On the second and third run, if you did not free your variables, the program will usually crash. If your program crashes the first time it runs, the problem might be something else.
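    To make the point above concrete: the classic failure mode in a timer-driven UI like this one is allocating a fresh buffer on every tick without freeing it. A minimal C++ illustration (hypothetical names, not tied to CWGraph internals):
        #include <vector>

        class PlotUpdater {
        public:
            // Leaky version: a new buffer is allocated on every 50 ms tick
            // and never freed, so the process grows by 8 KB per tick.
            void onTimerLeaky() {
                double* samples = new double[1024];
                // ... fill samples and hand them to the graph ...
                // missing: delete[] samples;
            }

            // Fixed version: one owned buffer is reused on every tick and
            // released automatically when the updater is destroyed.
            void onTimerFixed() {
                samples_.assign(1024, 0.0);
                // ... fill samples_ and hand it to the graph ...
            }

        private:
            std::vector<double> samples_;
        };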

  • Maximize server uptime - memory leak?

    Since we stabilized the newly introduced WLI 8.1 application, we are now fine-tuning the JVM. We are facing some kind of memory leak which forces us to reboot the WLS instances daily.
    I'm now asked to identify some strategies for how we could let the WLI instances run for longer than one day. My goal is 7 days, so that the machines only need to be touched once a week. The relevant JVM settings are:
    -Xms2048m -Xmx2048m -Xmanagement -Djrockit.managementserver.port=30011 -XgcPrio:throughput
    I've chosen the "throughput" strategy as we have a system here which acts asynchronously most of the time (WLI). I have attached two JRA recordings. The first ("1day") shows a system which now has an uptime of 16 hours. The heap utilization is almost always between 90% and 100%. Things get worse after a while. As a consequence, we reboot the server when we see more and more timeout exceptions in our WLI layer and heavy GC activity ("average time spent in GC").
    The other recording ("leakdetector") shows a system which has now been running for almost 3 hours. Here I connected the memory leak detector. The graph looks a bit better / more balanced, meaning that after 3 hours the average heap utilization remains between 40% and 50%. The trend of the bottom margin, however, clearly indicates an increasing memory footprint.
    To understand this in more detail I started using the memory leak analyzer. So far I have gotten no benefit from the tool, though it looks very impressive. However, when I connect the memory leak analyzer I can observe some strange heap graph changes. I suppose the memory leak detector has some impact on the GC strategy, doesn't it? My heap graph looks totally different with no leak detector attached. What can be the reason for this?
    Also, I'm quite a bit confused why the "Growth" indicator always stays at 0 bytes/sec, even when I can see that objects are getting bigger and bigger. What is the secret here?
    I appreciate every comment you have on my case,
    thanks a lot

    You might also look at approaching the problem from a different perspective. Which transactions, requests, and components are creating the most amount of memory? What concurrent requests were active when the memory jumped or an OOME was thrown? I have found this approach effective in getting a good idea of where to focus memory diagnostics, after first ruling out a resource capacity issue.
    The following article discusses the difference between a leak and a capacity problem.
    http://www.jinspired.com/products/jxinsight/outofmemoryexceptions.html
    Also, you should try high-level monitoring to see whether there are global patterns in the metric data that provide clues. This blog entry shows what is possible with professional performance management tools, with visualizations going beyond pie charts and table views.
    Beautiful Evidence: Metric Monitoring
    http://blog.jinspired.com/?p=33
    There are also many low-level memory inspection tools on the market that might help quickly navigate the heap and identify the problem, though I think the JRA tool probably has most of the same features.
    Regards,
    William Louth
    JXInsight Product Architect
    CTO, JInspired
    "Java EE tuning, testing, tracing, and monitoring with JXInsight"
    http://www.jinspired.com

  • I think I've got a memory leak and could use some advice

    We've got ourselves a sick server/application and I'd like to gather a little community advice if I may. I believe the evidence supports a memory leak in my application somewhere and would love to hear a second opinion and/or suggestions.
    The issue has been that used memory (as seen by FusionReactor) will climb up to about 90%+ and then the service will start to queue requests and eventually stop processing them altogether. A service restart will bring everything back up again, and it could run for 2 days or 2 hours before the issue repeats itself. Due to the inconsistent uptime, I can't be sure whether it's some troublesome bit of code that runs only occasionally or something that's a core part of the application. My current plan is to review the heap graph on the "sick" server and look for sudden jumps in memory usage, then review the IIS logs for requests at those times to try to establish a pattern. If anyone has better suggestions, though, I'm all ears! The following are some facts about this situation that may be useful.
    The "sick" server:
    - CF 9.0.1.274733 Standard
    - FusionReactor 4.0.9
    - Win2k8 Web R2 (IIS7.5)
    - Dual Xeon 2.8GHz CPUs
    - 4GB RAM
    JVM Config (same on "sick" and "good" servers):
    - Initial and Max heap: 1536
    -server -Xss10m -Dsun.io.useCanonCaches=false -XX:PermSize=192m  -XX:MaxPermSize=256m -XX:+UseParNewGC -Xincgc -Xbatch -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib -Dcoldfusion.dotnet.disableautoconversion=true
    What I believe a "healthy" server graph should look like (from "good" server):
    And the "sick" server graph looks like this:

    @AmericanWebDesign, I would concur with BKBK (in his subsequent reply) that a more reasonable explanation for what you’re seeing (in the growth of heap) is something using and holding memory, which is not unusual for the shared variable scopes: session, application, and/or server. And the most common is sessions.
    If that’s enough to get you going, great. But I suspect most people need a little more info. If this matter were easy and straightforward, it could be solved in a tweet, but it’s not, so it can’t.
    Following are some more thoughts, addressing some of your concerns and hopefully pointing you in some new directions to find resolution. (I help people do it all the time, so the good news is that it can be done, and answers are out there for you.)
    Tracking Session Counts
    First, as for the observation we’re making about the potential impact of sessions, you may be inclined to say “but I don’t put that much in the session scope”. The real question to start with, though, is “how many sessions do you have”, especially when memory use is high like that (which may be different than how many you have right now). I’ve helped many people solve such problems when we found they had tens or hundreds of thousands of sessions.  How can you tell?
    a) Well, if you were on CF Enterprise, you could look at the Server Monitor. But since you’re not, you have a couple of choices.
    b) First, any CF shop could use a free tool called ServerStats, from Mark Lynch, which uses the undocumented servicefactory objects in CF to report a count of sessions, overall and per application, within an instance. Get it here: http://www.learnosity.com/techblog/index.cfm/2006/11/9/Hacking-CFMX--pulling-it-all-together-serverStats. You just drop the files (within the zip) into a web-accessible directory and run the one CFM page to get the answer instantly.
    c) Since you mention using FusionReactor 4.0.9, here’s another option: those using FR 4 (or 4.5, a free update for you since you’re on FR 4) can use its available (but separately installed) FusionReactor Extensions for CF, a free plugin (for FR, at http://www.fusion-reactor.com/fr/plugins/frec.cfm). It causes FR to grab that session count (among many other really useful things about CF) to log it every 5 seconds, which can be amazingly helpful. And yes, FREC can grab that info whether one is on CF Standard or Enterprise.
    And let’s say you find you do have tens of thousands of sessions (or more). You may wonder, "how does that happen?" The most common explanation is spiders and bots hitting your site (from legit or unexpected search engines and others). Some of these visit your site perhaps daily to gather up the content of all the pages of your site, crawling through every page. Each such page hit will create a new session. For more on why and how (and some mitigation), see:
    http://www.carehart.org/blog/client/index.cfm/2006/10/4/bots_and_spiders_and_poor_CF_performance
    About “high memory”
    All that said, I’d not necessarily conclude so readily that your “bad” memory graph is “bad”. It could just be “different”.
    Indeed, you say you plan to "look for sudden jumps in memory usage", but if you look at your "bad" graph, it simply builds very slowly. I'd think this supports the notion that BKBK and I are asserting: that this is not some one request that "goes crazy" and uses lots of memory, but instead is the "death by a thousand cuts" as memory use builds slowly. Even then, I'd not jump at a concern that "memory was high".
    What really matters, when memory is "high", is whether you (or the JVM) can do a GC (garbage collection) to recover some (or perhaps much) of that "high, used memory". Because it's possible that while it "was" in use in the past (as the graph shows), it might no longer be "in use" at the moment.
    Since you have FR, you can use its “System Metrics page” to do a GC, using the trash can in the top left corner of the top right-most memory graph. (Those with the CFSM can do a GC on its “Memory Usage Summary” page, and SeeFusion users can do it on its front page.)
    If you do a GC, and memory drops a lot, then you had memory that "had been" but no longer "still was" in use, and so the high memory shown was not a problem. And the JVM can sometimes be lazy (because it's busy) about getting around to doing a GC, so this is not that unusual. (That said, I see you have added the Xincgc arg to your JVM. Do you realize that tells the JVM not to do incremental GCs? Do you really want that? I understand that people trade JVM args like baseball cards, trying to solve problems for each other, but I'd argue that's not the place to start. In fact, rarely do I find that any new JVM args are needed to solve most problems.)
    (Speaking of which, why did you set the -Xss value? And do you know whether you were raising or lowering it from the default?)
    Are you really getting “outofmemory” errors?
    But certainly, if you do hit a problem where (as you say) you find requests hanging, etc., then you will want to get to the bottom of that. And if indeed you are getting "outofmemory" problems, you need to solve those. To confirm whether that's the case, you'll really want to look at the CF logs (specifically the console or "out" logs). For more on finding those logs, as well as a general discussion of memory issues (understanding/resolving them), see:
    http://www.carehart.org/blog/client/index.cfm/2010/11/3/when_memory_problems_arent_what_they_seem_part_1
    This is the first of a planned series of blog entries (which I’ve not yet finished) on memory issues which you may find additionally helpful.
    But I’ll note that you could have other explanations for “hanging requests” which may not necessarily be related to memory.
    Are you really getting “queued” requests?
    You also say that "the service will start to queue requests and eventually stop processing them altogether". I'm curious: do you really mean "queuing", in the sense of watching something in CF that tells you that? You can find a count of queued requests with tools like CFSTAT, jrun metrics, the CF Server Monitor, or again FREC. Are you seeing one of those? Or do you just mean that you find that requests no longer run?
    I address matters related to requests hanging, and some ways to address them, in other entries:
    http://www.carehart.org/blog/client/index.cfm/2010/10/15/Lies_damned_lies_and_CF_timeouts
    http://www.carehart.org/blog/client/index.cfm/2009/6/24/easier_thread_dumps
    Other server differences
    You presented us a discussion of two servers, but you’ve left us in the dark on potential differences between them. First, you showed the specs for the “sick” server, but not the “good” one. Should we assume perhaps you mean that they are identical, like you said the JVM.config is?
    Also, is there any difference in the pattern of traffic (and/or the sites themselves) on the two servers? If they differ, then that could be where the explanation lies. Perhaps the sites on one are more inclined to be visited often by search engine spiders and bots (if the sites are more popular or have just become well known to search engines). There are still other potential differences that could explain things, but these are all enough to hopefully get you started.
    I do hope that this is helpful. I know it’s a lot to take in. Again, if it was easier to understand and explain, there wouldn’t be so much confusion. I do realize that many don’t like to read long emails (let alone write them), which only exacerbates the problem. Since all I do each day is help people resolve such problems (as an independent consultant, more at carehart.org/consulting), I like to share this info when I can (and when I have time to elaborate like this), especially when I think it may help someone facing these (very common) challenges.
    Let us know if it helps or raises more questions. :-)
    /charlie

  • Memory Leak on SpoolSv.exe

    Hi
    I have a Windows 7 x64 (Ultimate) laptop. Weeks ago I started noticing weird errors like "memory low" and "cannot start virtual machine", even though I wasn't running any especially memory-consuming programs. "A reboot a day keeps the troubles away" was my temporary workaround.
    When I started digging into the problem, I noticed that the memory usage of the process spoolsv.exe was rather high. I have a screenshot of my Task Manager showing that the "Private Working Set" and also the Commit Size are very large.
    In the meantime, I read in another post about a general memory leak in Windows 7 where they suggested using the Driver Verifier, but it didn't give me any clue. I uploaded the screenshot of the Driver Verifier.
    When the process is sky-high in memory usage, I can kill it, but a few minutes later it is high again. I have taken these screenshots from Process Explorer:
    1) SpoolSv is using CPU. Apparently it is not only consuming RAM but also CPU power.
    2) spoolsv.exe graph
    I am not printing and have not printed for days (and several reboots).
    Normally I wouldn't care about (temporarily) high memory usage, but it really never drops... Other programs crash because they lack memory (like the video driver or Skype).
    Kind regards
    Please click 'Mark as Answer' on the post that helped you.

    Hi DamPee,
    In addition, I would like to suggest you perform the following steps to check the issue.
    Clear Printer Spooler Files and Restart the Spooler Service
    =================================
    1. Click Start, type "Services.msc" (without the quotation marks) in the Start Search box and press Enter.
    2. Double-click "Print Spooler" in the Services list.
    3. Click Stop and click OK.
    4. Click Start, type "%WINDIR%\system32\spool\printers" in the Start Search box and press Enter, delete all files in this folder.
    5. Click Start, type "Services.msc" (without the quotation marks) in the Start Search box and press Enter.
    6. Double-click "Print Spooler" in the Services list.
    7. Click on Start. In the Startup Type list, make sure that "Automatic" is selected and click OK.
    What’s the result?
    Arthur Li - MSFT

  • Memory leak - Node clean up?

    Hey guys,
    I'm running into a memory leak and was reading some other threads where people were having their own memory leaks. Is there a way through NetBeans to track memory usage or something to figure out what is consuming memory? My memory leak sounds similar to this gentleman's: [http://forums.sun.com/thread.jspa?threadID=5409788] and [http://forums.sun.com/thread.jspa?threadID=5357401]
    Setup:
    I have a MySQL database call that returns a bunch of results and, in the end, converts each record into its own custom node, which then gets put into a sequence, which is then given to a listEventsNode (a custom node VBox) and finally put on the stage. As a note, each custom node has its own animations, events, images, and shapes.
    listEventsNode gets its list of custom nodes from a publicly accessible sequence variable whose contents are simply cleared, and new nodes are added by the next MySQL search. I cleared the eventSequence by using delete myCustomNodeSequence.
    I even go as far as setting the same sequence to null (myCustomNodeSequence = null;). Unfortunately, this doesn't make any difference in terms of memory usage. java.exe will reach 100MB then crash.
    The listEventsNode is bound to eventSequence; this is to ensure that changes to the sequence of custom nodes are immediately reflected in the VBox (listEventsNode).
    ListEventsNode is on the main stage and there is only one instance of it, but what changes is the content of the eventSequence. Even if I clear the contents of the eventSequence, it doesn't appear to "clean up" the memory.
    The way I'm doing it is probably breaking every rule in the book. I should probably make listEventsNode its own object which isn't bound to any external variables (such as eventSequence); in the event a new search takes place, I simply delete the listEventsNode, and when the new search is complete, I re-add the node to the scene. Am I on the right track here?
    Is there a good "best practices" guide for JavaFX? For example, a typical mistake that would cause a node to "recreate" itself over and over again in memory when the programmer may have thought it was simply being "written over"? Does this make sense? I have a feeling that my application is not deleting the custom nodes that were created for the eventSequence, and even if the eventSequence is "deleted" and reassigned with new custom nodes, the original or previous custom nodes are still residing in memory.
    Let me know if you need to see the source or any logs/readouts from NetBeans during execution if this will help.
    Thanks for taking the time to read this.
    Cheers,
    Nik.

    Your heap usage looks pretty typical. In scenario 5, I think you are simply running out of memory. I doubt it's a leak inside the JavaFX runtime (although it could be).
    I think you might be right. It's running out of memory and I may have to increase the heap size.
    Say that my application legitimately needs more memory to operate, and I need to increase the heap size. How does this work if you have a fully deployed application? Is the heap size information built into the application itself so that when the JVM runs the application, it allocates the appropriate amount of memory? I've increased the heap size from 64 MB to 128 MB, and I added many many nodes and still crapped out when I hit the 128 MB ceiling. I changed it to 512 MB and added a TON of nodes (when you click a node from the VBox, it adds a miniature version of the node to the scene graph) and I'm just under 200 MB. I plan on setting a cap on how many concurrent additional nodes can be placed on the scene graph, which will help.
    If you deploy this as is, how does the application utilize memory if you've adjusted the heap size? Or is this specific to the IDE only?
    Do you know what objects are on the heap? Can you compare what objects are on the heap from one scenario to the next?
    Where can I find this information in NetBeans profiler?
    Do you have a lot of images? Are they thumbnails or are they images scaled down to thumbnail size?
    Actually, yes, I am using a scaled-down thumbnail of the original image. The original image files are PNG format, 60x60 pixels, and about 8 KB in size. I simply use the "FitWidth:" property to scale the image down. I was doing some more reading before I went to bed, and I was going to use an alternative way to scale the image down. By simply doing this, the initial heap usage of the 500-node search went down from 44 MB to 39 MB. It's still slower on consecutive searches versus the first, but it's stable.
    Edit: I've used the width: property to downsize the image, and it looks like I'm not running into that heap crash as fast, but this poses a problem: I need to have the full size of the image available when a custom node is selected. What's the best way of doing this? I should probably store the image location in a string and recreate the image when I need it at full size, since there is only one full-size version of it on the screen at a given time. I've also completely disabled the addition of a picture in my custom node; it appears these images don't take up a lot of space since they are very small. I save an additional 3-5 MB of heap space if I completely disable the pictures and have just the nodes themselves. Each node has animation effects (i.e. fading in/out of colors, changing color if selected). Although the class itself is pretty dang long in comparison with any other classes I have.
    Are you clearing the nodes from your scene content before the search or after? If after, are you creating the new nodes before clearing out the old?
    Yes, I have a function that reassigns the stage.scene.content sequence omitting the custom vbox that houses the list of custom nodes prior to the next search. The "cleanUp()" function is called prior to the insertion of the new custom vbox.
    It might be useful to turn on verbose garbage collection (-verbose:gc on the java command line) just to see what's happening in GC.
    What is this exactly? I tried putting in System.gc() but I'm not sure if I'm seeing any difference yet.
    Edit: Actually, I've placed System.gc() after I run my cleanUp() function and I'm noticing the heap usage is more conservative. Seems to clear more often than it did before. But yes, the underlying problem of my running out of memory is to be looked at.
    You might also (just as an experiment) force garbage collection between your searches.
    This seems to work well with smaller result sets. However, a search that produces over 500 custom nodes in a custom VBox uses more than half of the available heap size, so the next time it tries to do the same search it just crashes like you mentioned in your first point. The memory simply runs out. What I don't get is if I "delete" the custom vbox prior to inserting it, the memory doesn't seem to be released immediately. From what I'm reading about the garbage collector, it doesn't exactly do things in a prompt fashion.

  • Memory Leak in 8.1 sp5

    Has anyone experienced a memory leak when a web service (JWS) returns a large string (5-15 MB)? The string itself is XML. When the web service is called numerous times, it eventually runs out of memory.

    We have seen exactly the same behaviour you describe, also related to web services that return large XML response documents. Initially we had the test console enabled, which consumed all the memory rapidly. Once this was disabled, we started seeing the exception messages that you've been getting. We did have an issue where the JVM was configured with too much memory (sounds counterintuitive), and we found the max JVM heap size should be less than 2 GB (we're now using 1.3 GB). This reduced the frequency of the error occurring, but it does still occasionally happen.
    I did some testing where I watched the memory used by weblogic when returning a large XML document. The XML document size was around 8MB when saved as a txt file. From the weblogic console server->monitor->performance page, the memory consumed by weblogic when returning this XML document was much larger than I expected. In fact weblogic used about 100MB of memory and the memory graph went up very steeply when returning this document. I adjusted my JVM memory settings and found that if I had less than approximately 80MB of free memory, then my webservice couldn't return this large XML response document without getting the OutOfMemory error. This test was done using a weblogic server that had just booted up and so had no scope for garbage collection of memory.

  • Memory leak in JDK5

    Folks,
    I develop a Java application (non-J2EE) which is highly multithreaded and runs on server class machines. One of the recent changes we made to the application was to replace a bunch of JNI code with pure Java code.
    Load tests of our application show a very pretty Java-side picture (as seen from JConsole). We see the familiar sawtooth heap usage graph with no rising trend. When we switch to Windows perfmon, however, the picture is very different. The process's private bytes show a steady increase of about 4 MB/hour. The handle count also shows a monotonically rising curve.
    So far we have tried the following:
    - Updated JDK from JDK 5u5 to JDK 5u10: no impact.
    - Changed garbage collectors from the default (concurrent mark-sweep compact) to ParallelGC: this changed the trend slightly, but there is still a rising trend.
    Through all of the above, the Java-side things still look pretty.
    I could attach graphs from perfmon showing the private byte and handle trends.
    Does anyone have ideas on what is the best way to proceed? How do I narrow it down further?
    One of the recent changes was to integrate the NIST SIP stack. Is that code known to trigger a JVM memory leak?
    Thanks.
    -Raj

    It looks like we found the bug which is the root cause: http://bugs.sun.com/bugdatabase/view_bug.do;jsessionid=12d672291c95e52a1f6916c59d7:WuuT?bug_id=6434648
    Thanks.
    -Raj

  • SQL Server 2008R2 SP2 Query optimizer memory leak ?

    It looks like we are facing a SQL Server 2008 R2 query optimizer memory leak.
    We have the following version of SQL Server:
    Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
     Jun 28 2012 08:36:30
     Copyright (c) Microsoft Corporation
     Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
    The instance has maximum memory set to 20 GB.
    After executing a huge query (2277 kB, generated by IBM SPSS Clementine) with tons of CASE statements, a lot of AND/OR conditions in the WHERE and CASE clauses, and multiple subqueries, the server stops responding with an out-of-memory error in the internal pool, and the query optimizer has allocated all the memory.
    From Management Data Warehouse we can see that the query was executed at
    7.11.2014 22:40:57
    Then at 01:22:48 we receive FAIL_PAGE_ALLOCATION 1:
    2014-11-08 01:22:48.70 spid75       Failed allocate pages: FAIL_PAGE_ALLOCATION 1
    And then tons of errors like the ones below:
    2014-11-08 01:24:02.22 spid87      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:02.22 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:02.22 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:02.30 Server      Error: 17312, Severity: 16, State: 1.
    2014-11-08 01:24:02.30 Server      SQL Server is terminating a system or background task Fulltext Host Controller Timer Task due to errors in starting up the task (setup state 1).
    2014-11-08 01:24:02.22 spid74      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:02.22 spid74      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:13.22 spid87      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:13.22 spid87      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 spid63      Error: 701, Severity: 17, State: 130.
    2014-11-08 01:24:13.22 spid63      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 spid57      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:13.22 spid57      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:18.26 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:24.43 spid81      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:24.43 spid81      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:18.25 Server      Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:18.25 Server      BRKR TASK: Operating system error Exception 0x1 encountered.
    2014-11-08 01:24:30.11 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:30.11 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:35.18 spid57      Error: 701, Severity: 17, State: 131.
    2014-11-08 01:24:35.18 spid57      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:35.18 spid71      Error: 701, Severity: 17, State: 193.
    2014-11-08 01:24:35.18 spid71      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:35.18 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:35.41 Server      Error: 17312, Severity: 16, State: 1.
    2014-11-08 01:24:35.41 Server      SQL Server is terminating a system or background task SSB Task due to errors in starting up the task (setup state 1).
    2014-11-08 01:24:35.71 Server      Error: 17053, Severity: 16, State: 1.
    2014-11-08 01:24:35.71 Server      BRKR TASK: Operating system error Exception 0x1 encountered.
    2014-11-08 01:24:35.71 spid73      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:35.71 spid73      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:46.30 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:51.31 Server      Error: 17053, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:51.31 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:51.31 Logon       Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    The last error message comes half an hour after the initial out-of-memory, at 2014-11-08 01:52:54.03. Then the instance shuts down completely.
    From the memory information in the error log we can see that all the memory is consumed by the QUERY_OPTIMIZER
    Buffer Pool                                   Value
    Committed                                   2621440
    Target                                      2621440
    Database                                     130726
    Dirty                                          3682
    In IO                                             0
    Latched                                           1
    Free                                            346
    Stolen                                      2490368
    Reserved                                          0
    Visible                                     2621440
    Stolen Potential                                  0
    Limiting Factor                                  17
    Last OOM Factor                                   0
    Last OS Error                                     0
    Page Life Expectancy                             28
    2014-11-08 01:22:48.90 spid75     
    Process/System Counts                         Value
    Available Physical Memory                29361627136
    Available Virtual Memory                 8691842715648
    Available Paging File                    51593969664
    Working Set                               628932608
    Percent of Committed Memory in WS               100
    Page Faults                                48955000
    System physical memory high                       1
    System physical memory low                        0
    Process physical memory low                       1
    Process virtual memory low                        0
    MEMORYCLERK_SQLOPTIMIZER (node 1)                KB
    VM Reserved                                       0
    VM Committed                                      0
    Locked Pages Allocated                            0
    SM Reserved                                       0
    SM Committed                                      0
    SinglePage Allocator                       19419712
    MultiPage Allocator                             128
    Memory Manager                                   KB
    VM Reserved                               100960236
    VM Committed                                 277664
    Locked Pages Allocated                     21483904
    Reserved Memory                                1024
    Reserved Memory In Use                            0
    On the other side, MDW reports that MEMORYCLERK_SQLOPTIMIZER increases from the execution of the query up to the point of out-of-memory, but the average value is 54.7 MB during that period, as can be seen on the attached graph.
    We have encountered this issue already two times (every time the critical query is executed).

    Hi,
    This does seem to me like some kind of memory leak, and it is from the SQL optimizer, which leaked so much memory from the buffer pool that there was no memory left to allocate a new page.
    MEMORYCLERK_SQLOPTIMIZER (node 1)                KB
    VM Reserved                                       0
    VM Committed                                      0
    Locked Pages Allocated                            0
    SM Reserved                                       0
    SM Committed                                      0
    SinglePage Allocator                       19419712
    MultiPage Allocator                             128
    Can you post the complete DBCC MEMORYSTATUS output that was generated in the error log? Is this the only message in the error log, or are there more messages before and after it?
    select (SUM(single_pages_kb)*1024)/8192 as total_stolen_pages, type
    from sys.dm_os_memory_clerks
    group by type
    order by total_stolen_pages desc
    and
    select sum(pages_allocated_count * page_size_in_bytes)/1024,type from sys.dm_os_memory_objects
    group by type
    If you can post the output of the above two queries, along with the DBCC MEMORYSTATUS output, on some shared drive and share the location with us here, I will try to find out what is leaking memory.
    You can very well apply SQL Server 2008 R2 SP3 and see if the issue subsides, but I am not sure whether this is fixed there or whether it is actually a bug.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • WARNING TDMS memory leak in LV 2010

    Hopefully this will save someone the headache that I've been through over the last couple of days. I have a very large application that runs a final verification test on a production line. In my testing I noticed a memory leak in the application, and after 2 days of debugging discovered that the TDMS logging is the culprit. This is very disappointing since I am (I mean was) a huge fan of the TDMS file format and the LabVIEW functions. The attached VI reproduces the leak, which you can toggle by checking and unchecking the Memory Leak checkbox. It's a bit ugly, but I was just copying and pasting the sections from my application and trying to reproduce the issue. Luckily for me, in this particular application I was only writing to the TDMS log files, so I was able to eliminate the problem by switching to the gTDMS versions of the write functions. I found these referenced in another post about a TDMS memory leak, but in that case the leak was caused by the indexing and the fact that the SAME file was continuously written to over a very long period. As you can see, in my case a log file is opened and closed for each "Test".
    gTDMS link
    Thanks,
    Brian
    Brian Gangloff
    DataAct Incorporated
    Attachments:
    TDMS Memory testing.vi 31 KB

    YongqingYe wrote:
    Hi Brian,
    I'm one of the developers of TDMS in NI R&D. Well, this is a problem with TDMS that some customers have complained about. The reason you see the "memory leak", or the memory usage increment, is that TDMS needs to keep some bookkeeping information in memory, and as you write more and more data values, the information we keep in memory keeps growing.
    There are some workarounds; gTDMS is probably one of them, although the original purpose of creating gTDMS was to support writing TDMS files on Linux, Mac, and other platforms:
    Use the "NI_MinimumBufferSize" property on channels; you can find the details in the help documentation for TDMS Set Property. It cannot eliminate the problem, but it reduces the memory usage significantly. Normally we would set it to between 1,000 and 10,000.
    As of LV 2009, if you always write to the file with the same layout (same channels, same number of data values), you will not see the memory increase.
    If you are using LV 2010 or later, you can try the TDMS Advanced API; this API does not have the memory growth problem at all.
    Thank you!
    Yongqing Ye
    NI R&D
    Hello Yongqing,
    Apparently you did not bother to look at the examples that I provided, or read any of the description either. As Hooovahh has already pointed out, INDEXING is NOT the issue. The example writes an array to multiple channels ONE time, and then the reference is CLOSED. In the case that does NOT leak, there are multiple waveform arrays written to the file, which would require some indexing, but the memory does NOT increase. The problem is when an array of strings is written to multiple channels and the reference is CLOSED. Unfortunately, this type of quick assumption about the problem is why the real issue was overlooked back in 2009.
    Thanks,
    Brian
    Brian Gangloff
    DataAct Incorporated
