Request deallocation

I understand LV "manages" its own memory. So what is the purpose of the Request Deallocation VI?
PaulG.
"I enjoy talking to you. Your mind appeals to me. It resembles my own mind except that you happen to be insane." -- George Orwell

LabVIEW is a managed-memory language, which makes programming very easy, but you do lose some fine control over the process. An analogy:

Imagine used memory as garbage. You place the trash (allocated memory that is no longer needed) on the curb every night; in an unmanaged system it builds up until there is no more space on the curb (out-of-memory errors, slow system response, crashes). In C++ and other unmanaged languages, when you allocate memory you must explicitly free it; memory is a finite resource, so if you use a lot of it, make sure you return it. So in an unmanaged system you must take the garbage to the local dump yourself (deallocate used memory), a tedious task best left to a trash collection service. This is where managed systems like LabVIEW come in (Java and C# do this too, but LV did it many years before it became cool to be managed). LabVIEW has a garbage service which periodically picks up your trash so it doesn't build up.

So you wonder, "Why Request Deallocation?" Let's say you had a party last weekend and put 10 huge bags of garbage on the curb. Garbage service doesn't come until Wednesday, so you call for a special pickup to collect the excess trash. I use Request Deallocation in the same situations: a subVI that uses a huge amount of memory (large arrays) for temporary processing and is called often; you can explicitly deallocate memory after such calls. If your application seems to slow down after many repeated calls, it can help to let LV know you won't need the large chunks of memory a subVI used for temporary buffers. I don't know the inner workings of LV garbage collection, but Request Deallocation has helped me out of a few difficult situations. As for deallocating memory while a VI or subVI is still running (as is possible in C/C++), I haven't figured out how to do that, if it is even possible.
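To make the unmanaged half of the analogy concrete, here is a minimal C sketch (an illustration only, since LabVIEW itself has no textual equivalent): in C every allocation must be paired with an explicit free, whereas LabVIEW's memory manager does the freeing for you, and Request Deallocation merely asks it to do so sooner.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Unmanaged model: we take the trash to the dump ourselves. */
    double *big = malloc(1000000 * sizeof *big);  /* allocate a large buffer */
    if (big == NULL)
        return 1;                                 /* out of memory */

    big[0] = 42.0;                                /* ...temporary processing... */
    printf("%f\n", big[0]);

    /* Forget this call and the garbage piles up (a memory leak). In
       LabVIEW the runtime frees buffers for you; Request Deallocation
       just asks for an earlier pickup. */
    free(big);
    return 0;
}
```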
Paul
Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
LabVIEW 4.0 - 2013, RT, Vision, FPGA

Similar Messages

  • Request deallocation function with reentrant clones

    I have a standalone application (LabVIEW version 12) that is processing very large chunks of data. Each batch run can take hours to complete. I am storing all intermediate data in files to avoid running out of memory, but I am still having occasional out-of-memory issues. I never get an error on the first batch, only on the 2nd or 3rd. I am experimenting with using the Request Deallocation function at the end of each batch, but I am not clear on how/when it takes effect.
    There are 2 subVIs that do all the work, so I have placed the Request Deallocation in these, with a Boolean input set to true the last time each is called. After the last call, the main application is idle, waiting for the user to request another batch, so this seems like the logical time to deallocate. These subVIs are configured as shared-clone reentrant. They also have subVIs 2-3 levels deep. How does the request for deallocation take effect for reentrant VIs? Are all clones deallocated, or just one? What about VIs called within a deallocated subVI? Are they included in the garbage collection, or does each called subVI have to be deallocated separately?

    Hi chiraldude
    I think that if the subVI is not dynamic (being part of the application), it will be kept in memory while the top-level VI is running. If you load the VI dynamically, the deallocation will be done when all references are closed.
    The last time I poked around with the deallocate, it only came into play when the VI in question was marked for removal from memory. If the subVI is part of the app (not dynamic), it will not be marked for removal while the top-level VI is running.
    However, I would like to recommend another tool that might come in very handy in this case, the In Place Element structure.
    http://zone.ni.com/reference/en-XX/help/371361G-01/glang/in_place_element_structure/
    Regarding memory administration, these links might be useful as well:
     How Can I Optimize the Memory Use in My LabVIEW VI?
    http://digital.ni.com/public.nsf/allkb/771AC793114A5CB986256CAB00079F57?OpenDocument
    Determining When and Where LabVIEW Allocates a New Buffer
    http://digital.ni.com/public.nsf/allkb/C18189E84E2E415286256D330072364A?OpenDocument
    Warm Regards
    Fabián M.
    Internal Sales Engineer
    National Instruments

  • "Request Deallocation" breaks "Current Path" constant in LV8

    When a subVI includes the "request deallocation" block and the "current path" constant, and is called multiple times from another VI, only the first call will yield the subVI's path - subsequent calls return an empty path.  Attached is an example.  Note that the error only occurs when request deallocation is true.  This error is unique to LabVIEW 8.0... the same process under 7.1 worked fine.
    Attachments:
    RequestDeallocationError.llb 21 KB

    Hello,
    This problem has been reported to LabVIEW R&D.  For now, the workaround is to remove the Request Deallocation function, or set its input to False.
    -D
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman

  • "Request Deallocation" VI

    Hi, I have some doubts.
    I have a reentrant VI which I call through the Open VI Reference function 5 times, so it makes 5 clone VIs, and each clone gets its own space in RAM.
    1. Suppose I close a clone VI while the top-level VI is not yet closed: does LabVIEW clear the memory space of the old clone VI, or is it still retained in RAM?
    2. If the old clone VI's memory is retained in RAM, is the "Request Deallocation" VI useful in this case?
    If so, how can I implement it with the "Request Deallocation" VI?
    Raj

    Request Deallocation is only effective when the VI is removed from memory. This normally happens when the reference to the subVI is closed.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Request Deallocation?

    I am looking at a couple of subVIs that pull and process a large 2D array. At the end of each subVI, Request Deallocation is used. Is that necessary? Is it good practice?
    Kudos and Accepted as Solution are welcome!

    jyang72211 wrote:
    [...]  My question is
    How do I prevent LabVIEW from allocating multiple copies of the same array when the array is being passed in and out of different subVIs?
    Passing data in and out usually does not force LabVIEW to copy the data: you can verify with Tools >> Profile >> Show Buffer Allocations... that LabVIEW reuses the data on the wire and no copy will be made -> this is not a use case for "Request Deallocation".
    jyang72211 wrote:  [...]
    If I have a state machine, [...] If I have 3 states with subVIs plus the main state machine with a shift register, would I have 4 copies of the same array in memory?
    It depends on how the subVIs allocate buffers. Again, use Show Buffer Allocations... in all subVIs. If no allocations take place, you do not have to worry. On the other hand, if you are unsure whether a subVI allocates a buffer (e.g. because you might make a small change to it that forces LabVIEW to copy, without considering this at wiring time), you could place a "Request Deallocation" on the subVI's block diagram. If there is a chance to free any memory, LabVIEW will do so when the VI finishes. If there is nothing to deallocate, "Request Deallocation" should be merely a "no operation".

  • Avoiding data memory duplication in subVI calls

    Hi,
    I am on a Quest to better understand some of the subtle ways of the LabVIEW memory manager. Overall, I want to (as much as practically possible) eliminate calls to the memory manager while the code is running.
    (I mainly write RT code that is expected to run "forever"; the more static and "quiet" the memory manager activity is, the faster and simpler it is to prove beyond reasonable doubt that your application does not have memory leaks and that it will not run into memory fragmentation (out of memory) issues etc. What I like to see, as much as possible, are near-static "used memory" and "largest contiguous block available" stats over days and weeks of deployed RT code.)
    In my first example (attached, "IPE vs non-IPE.png"), I compared buffer allocations (black dots) when doing some of the operations in an IPE structure vs. "the old way". I see fewer dots the old way, so I removed the IPE structure.
    Next I went from initializing an array of size x with values y to using a constant array (of 0 values) with an "array add" to get an array with the same values as in the first version of the code. ("constant array.png")
    The length of the constant array is set to my "worst case" of 25 elements (in the example). Since "replace sub-array" does not change the size of the input array even when the sub-array is "too long", this saves me from constantly creating small, variable-sized arrays at run time. (I am not sure what the run-time CPU/memory hit would be if you tried to replace the last 4 elements with a sub-array that is 25 elements long...??)
    Once I arrived at this point, I found myself wondering how exactly the constant array is handled at run time. Is it allocated the first time this subVI is called and then kept in memory until the main/top VI terminates, or is it unloaded every time the subVI finishes execution? (I think Macs could unload, while on Windows and Linux/Unix it remains in memory until the top level closes?) When thinking (and hopefully answering), consider that the code is compiled to an RTEXE running on a cRIO-9014 (VxWorks OS).
    In this case, I could make the constant array a control, place the constant on the diagram of the caller, and pipe the constant all the way up to the top-level VI, but this seems cumbersome, and I'm not convinced that the compiler would properly recognize that at the end of a long chain of sub-sub-sub VIs all those "controls" are actually always tied off to a single constant. Another way would perhaps be to initialize an FG with this constant array and always "read it" out from the FG (using this cool trick on creating large arrays on a shift register with only one copy, which avoids the dual copy: one for the shift register, one from the "initialize array" function).
    This is just one example of many cases where I'm trying to avoid creating memory manager activity by making LabVIEW assign memory space once, then only operating on that data "in place" as much as possible. In another discussion on "in-place element" structures (here), I got the distinct sense that in-place very rarely adds any advantage, as the compiler can pick up on and do "in place" automatically in pretty much any situation. I find the NI documentation on IPEs lacking in that it doesn't really show good examples of when it works and when it doesn't. In particular, this already great article would vastly benefit from updates showing good/bad use of IPEs.
    I've read the following NI links to try and self-help (all links should open in new window/tab):
    cool trick on creating large arrays on a shift register with only one copy
    somewhat dated but good article on memory optimization
    IPE caveats and recommendations
    How Can I Optimize the Memory Use in My LabVIEW VI?
    Determining When and Where LabVIEW Allocates a New Buffer
    I do have the memory profiler tool, but it shows min/max/average allocations; it doesn't really tell me (or I don't know how to read it properly) how many times blocks are allocated or re-allocated.
    Thanks, and I hope to build on this thread with other examples, so that by the end of the thread hopefully everyone has found one or two neat things they can use to memory-optimize their own applications. Next on my list are probably the handling of large strings, and lots of array math operations on various input arrays to create a result output array, etc.
    -Q
    QFang
    CLD LabVIEW 7.1 to 2013
    Attachments:
    IPE vs non-IPE.png 4 KB
    constant array.png 3 KB

    I sense a hint of frustration on your part; I'm not trying to be dense or difficult, but do realize that this is more toward the "philosophical" side than the "practical" side. Code clarity and practicalities are not necessarily the objectives here.
    Also, I have greatly appreciated all your time and input on this and the other thread!
    The answer to your first question is actually "yes, sort of". I had an RT application that developed a small memory leak (through a bug in NI's "Get Volume Info.vi"), but isolating and proving it took a very long time, because the constant large allocations/deallocations would mask the leak. (Traces didn't work out either, since it was a very, very slow leak and the traces would bomb out before showing anything conclusive.) The leak was a few bytes, but in addition to short-term memory oscillations and long-term (days) cyclical "saw-tooth" ramps in memory usage, it was very hard to see. A more "static" memory landscape would possibly have made this simpler to narrow down and diagnose. Or maybe not.
    Also, you are missing my point entirely: this is not about "running out of memory" (and the size of 25 in my screenshot may or may not be what that array (and others) end up being). This is about having things allocated in memory ONCE, then never deallocated or moved, and how/when this is possible to accomplish. Also, this is a quest (meaning something I'm undertaking to improve and expand my knowledge; who said it has to be practical?).
    You may find this document really interesting; it's the sort of thing you could end up being forced to code to. Albeit I don't see how 100% compliance with this document would ever be possible in LabVIEW, that's not to say it's worthless: JPL Institutional Coding Standard for the C Programming Language (while it is directed at C, it makes a lot of valid general points).
    Yes, you are right that the IPE would grow the output if the length of my replacement array is not the same, and since I can't share the full VIs it's a bit of a stretch to expect people to infer from the small screen dump that the I32 wires on the right guarantee the lengths will match up in the IPE example.
    Once, on the recommendation of NI support, I actually did use the Request Deallocation primitive during the hunt for what was going on in that RT app I was debugging last year. At that particular time, the symptom was constant fragmentation of memory, until the largest contiguous block would be less than a couple of kB and the app would terminate with 60+ MB of free memory space (AKA a memory leak, though we could not yet prove that from the diagnostic memory-consumption statistics, due to the constant dynamic behavior of the program). I later removed them; they would also run counter to the "allocate once, re-use forever" goal I'm chasing. And again, I'm chasing this more as a way to learn than because all my code MUST run this way.
    I'm not sure I see what you mean by "copying data in and out of some temporary array". Previously (before the constant array), at every call to the containing subVI I used to "initialize array" with x elements of value y (where x depends to a large degree on a configuration parameter, and y is determined by the input data array). Since I would initialize a new array each time the code was called, and the size of the array could change, I looked for a way to get rid of the dynamic size and to stop dynamically creating the array from scratch each time the subVI was called. What I came up with is perhaps not as clear as the old way I did it, but with some comments I think it's clear enough. In the new way, the array is created as a constant, so I would think that causes less "movement" in memory, as it should prevent the "source" array from (potentially) moving around in memory. Considering the alternative of always re-creating a new array, how is this adding an "extra" copy that creating new ones would not create?
    How would you accomplish the task of creating an array of n elements, all of value y, without creating "extra" copies? Auto-indexing in a For Loop is certainly a good option, but again, is that sure to reuse the same memory location on each call? And would that not, in a nit-picking way, use more CPU cycles, since you are building the array one element at a time instead of just using a primitive array-add operation (which I have found to be wickedly fast) and operating on a constant data structure?
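    (A loose C analogue of the pattern being discussed, since the VIs can't be shown here: one buffer allocated once for the life of the program, with every "call" only overwriting values in place, so the memory manager is never involved after startup. The 25-element size mirrors the discussion above; the code itself is hypothetical, not LabVIEW.)

    ```c
    #include <stddef.h>

    #define WORST_CASE 25   /* mirrors the 25-element worst case above */

    /* One buffer, allocated once (statically here), reused forever. */
    static double g_buf[WORST_CASE];

    /* Fill the first n elements with value y, in place: no allocation,
       no resizing, just overwriting the same memory on every call. */
    void fill_constant(size_t n, double y)
    {
        for (size_t i = 0; i < n && i < WORST_CASE; i++)
            g_buf[i] = y;
    }
    ```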
    I cannot provide full VI's without further isolation, maybe down the road (once my weekends clear up a bit). Again, I appreciate your attention and your time!
    QFang
    CLD LabVIEW 7.1 to 2013

  • What happens to the array built inside a subvi

    Hi
    My operation inside a subVI goes like this: I acquire data from a source continuously in a loop, build an array in the loop, and pass it out of the loop. I complete the processing of the built array in the same subVI and then come out of it.
    I have the following doubts:
    1. Does the memory allocated for building the array get cleared when I come out of the subVI?

    The answer is no: the memory allocated by the subVI will not be released; it will remain in use until the next time the subVI is called... then it might shrink or grow depending on how the subVI works. You can improve the performance by ensuring that the VI always works on the same memory (do not build arrays; initialize a shift register only on the first run and then use the Replace Array Subset function instead, for example). Building arrays in a loop is a no-no.
    If the VI only runs now and then, you can force the memory to be released either by loading the VI dynamically and closing all references to it when you are finished with it for that run, or by using the Request Deallocation function, which you can find on the Advanced -> Data Manipulation menu in LV7. If the VI runs all the time, you're better off leaving it in memory.
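    (A loose C analogue of the advice above, for readers who think in text: the first function grows the array on every loop iteration, forcing repeated reallocation - the equivalent of Build Array in a loop - while the second allocates once and then only replaces elements in place, which is what the initialized-shift-register-plus-Replace pattern achieves.)

    ```c
    #include <stdlib.h>

    #define N 100000

    /* Anti-pattern: grow the array inside the loop (repeated realloc,
       analogous to Build Array in a While Loop). */
    double *build_in_loop(void)
    {
        double *a = NULL;
        for (size_t i = 0; i < N; i++) {
            double *tmp = realloc(a, (i + 1) * sizeof *a);
            if (tmp == NULL) { free(a); return NULL; }
            a = tmp;
            a[i] = (double)i;            /* append one element per pass */
        }
        return a;
    }

    /* Preferred: allocate once up front, then only replace elements
       (analogous to a preallocated shift register + Replace Array Subset). */
    double *preallocate_and_replace(void)
    {
        double *a = malloc(N * sizeof *a);   /* one allocation */
        if (a == NULL)
            return NULL;
        for (size_t i = 0; i < N; i++)
            a[i] = (double)i;                /* overwrite in place */
        return a;
    }
    ```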
    MTO

  • Thoughts on Stream-to-Disk Application and Memory Fragmentation

    I've been working on a LabVIEW 8.2 app on Windows NT that performs high-speed streaming to disk of data acquired by PXI modules. I'm running the PXI-8186 controller with 1 GB of RAM and a Seagate 5400.2 120 GB HD. My current implementation creates a separate DAQmx task for each DAQ module in the 8-slot chassis. I was initially trying to provide semaphore-protected Write to Binary File access to a single log file to record the data from each module, but I had problems with this once I reached the upper sampling rates of my 6120s, which are 1 MS/sec, 16-bit, 4 channels per board. At the higher sampling rates, I was not able to 'start off' the file streaming without causing the DAQmx input buffers to reach their limit. I think this might have to do with the larger initial memory allocations that are required. I have the distinct impression that making an initial request for a bunch of large memory blocks causes a large initial delay, which doesn't work well with a real-time streaming app.
    In an effort to see if I could improve performance, I tried replacing my reentrant file writing VI with a reentrant VI that flattened each module's data record to string and added it to a named queue.  In a parallel loop on the main VI, I am extracting the elements from that queue and writing the flattened strings to the binary file.  This approach seems to give me better throughput than doing the semaphore-controlled write from each module's data acq task, which makes sense, because each task is able to get back to acquiring the data more quickly.
    I am able to achieve a streaming rate of about 25MB/sec, running 3 6120s at 1MS/sec and two 4472s at 1KS/sec.  I have the program set up where I can run multiple data collections in sequence, i.e. acquire for 5 minutes, stop, restart, acquire for 5 minutes, etc.  This keeps the file sizes to a reasonable limit.  When I run in this mode, I can perform a couple of runs, but at some point the memory in Task Manager starts running away.  I have monitored the memory use of the VIs in the profiler, and do not see any of my VIs increasing their memory requirements.  What I am seeing is that the number of elements in the queue starts creeping up, which is probably what eventually causes failure.
    Because this works for multiple iterations before the memory starts to increase, I am left with only theories as to why it happens, and am looking for suggestions for improvement.
    Here are my theories:
    1) As the streaming process continues, the disk writes are occurring on the inner portion of the disk, resulting in less throughput. If this is what is happening, there is no solution other than a HW upgrade.  But how to tell if this is the reason?
    2) As the program continues to run, lots of memory is being allocated/reallocated/deallocated. The streaming queue, for instance, is shrinking and growing. Perhaps memory is being fragmented too much, and it's taking longer to handle the large block sizes. My block size is 1 second of data, which can be up to a 1M x 4 x 16-bit array from each 6120's DAQmx task. I tried adding a Request Deallocation VI for when each DAQmx VI finishes, and this seemed to help between successive collections. Before I added the VI, Task Manager would show about 7 MB more memory usage than after the previous data collection. Now it is running about the same each time (until it starts blowing up). To complicate matters, each flattened string can be a different size, because I am able to acquire data from each DAQ board at a different rate, so I'm not sure preallocating the queue would even matter.
    3) There is a memory leak in part of the system that I cannot monitor (such as DAQmx).  I would think this would manifest itself from the very first collection, though.
    4) There is some threading/threadlocking relationship that changes over time.
    Does anyone have any other theories, or comments about one of the above theories?  If memory fragmentation appears to be the culprit, how can I collect the garbage in a predictable way?

    It sounds like the write is not keeping up with the read, as you suspect.  Your queues can grow in an unbounded fashion, which will eventually fail.  The root cause is that your disk is not keeping up.  At 24MBytes/sec, you may be pushing the hardware performance line.  However, you are not far off, so there are some things you can do to help.
    Fastest disk performance is achieved if the size of the chunks you write to disk is 65,000 bytes (see the sketch after these suggestions). This may require you to add some double-buffering code. Note that fastest performance may also mean a 300 kbyte chunk size from your data acquisition devices. You will need to optimize and double buffer as necessary.
    Defragment your disk free space before running.  Unfortunately, the native Windows disk defragmentor only defragments the files, leaving them scattered all over the disk.  Norton's disk utilities do a good job of defragmenting the free space, as well.  There are probably other utilities which also do a good job for this.
    Put a monitor on your queues to check the size and alarm if they get too big.  Use the queue status primitive to get this information.  This can tell you how the queues are growing with time.
    Do you really need to flatten to string?  Unless your data acquisition types are different, use the native data array as the queue element.  You can also use multiple queues for multiple data types.  A flatten to string causes an extra memory copy and costs processing time.
    You can use a single-element queue as a semaphore.  The semaphore VIs are implemented with an old technology which causes a switch to the UI thread every time they are invoked.  This makes them somewhat slow.  A single-element queue does not have this problem.  Only use this if you need to go back to a semaphore model.
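    (A loose C sketch of the fixed-chunk idea above, since the thread itself is LabVIEW-only: records of arbitrary size are staged in a buffer and written to disk only in fixed 65,000-byte chunks, the size suggested above. All names are hypothetical, and this is an illustration rather than the poster's double-buffering code.)

    ```c
    #include <stdio.h>
    #include <string.h>

    #define CHUNK 65000                      /* chunk size suggested above */

    static unsigned char stage[2 * CHUNK];   /* staging buffer */
    static size_t staged = 0;                /* bytes currently staged */

    /* Append one record (len <= CHUNK); flush in fixed CHUNK-sized writes. */
    void stream_write(FILE *f, const void *rec, size_t len)
    {
        memcpy(stage + staged, rec, len);
        staged += len;
        while (staged >= CHUNK) {
            fwrite(stage, 1, CHUNK, f);             /* one fixed-size write */
            staged -= CHUNK;
            memmove(stage, stage + CHUNK, staged);  /* keep the remainder */
        }
    }
    ```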
    Good luck.  Let us know if we can help more.
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • LV memory leak - How to use windows API SetProcessWorkingSetSize (from Kernel32.dll)

    Hi fellow LV'ers
    Okay - this is a bit tricky, but I'll try to explain the problem and then ask for the solution, because it may be that someone knows a better way to deal with this. This might get a bit long, sorry - if a solution comes up, it will enable all of us to write more memory-efficient LV code, so please read on.
    Here is the deal:
    When building even a very simple LV executable, the Windows Task Manager will show a rather large amount of memory allocated for such a small program, and the only way to free it up is by physically clicking the window's minimize button: suddenly the amount drops to only a few MB, and upon maximizing the window again the memory consumption increases somewhat, but for a simple VI built to an exe this move may change the consumption from 70+ MB to less than 15 MB. This is regardless of the code you put in the VI, so no coding example in this post, as it is simply how LV works. You can even test it with the development environment: look at the Task Manager and check LabVIEW's memory consumption, minimize ALL open NI windows (incl. project explorer etc.), and you will see a significant decrease in memory usage even after maximizing again. This has annoyed me since day one, but since RAM is near zero-cost these days, it is not something I stay awake at night thinking about. However, I have now moved into the "publish to web" tools, wanting to add a remote monitoring part to my application, for my customers to experience increased usability from the software I sell them.
    All is well; publishing is really easy (I use the monitor function, NOT the embedded one, as customers need not have the LabVIEW Run-Time Engine installed, because they might look at it from a non-RTE-supported platform such as a mobile-phone web browser).
    Everything is also working fine in the built application. However, I have noticed that once users start to remotely monitor the running application, the memory consumption of the running LV application starts to increase, and it keeps doing so, to such an extent that you can drain the computer completely and run off the cliff with a Windows error... This is of course not very productive for me, being specialized in measurement applications that usually run for a long period of time. I initially thought that I had done some poor programming in the VI used to display on the web page, but it turns out that I can reproduce this behaviour with a simple Boolean on an empty front panel.
    NI support has been informed, and they admit there is a problem, but so far the solutions from them have been a bit too exotic for my taste, and thus I'm seeking the help of fellow LV programmers...
    You see, the method to solve the increasing memory consumption is the exact same as mentioned above: minimize the running application with the minimize button and all memory will be freed; as soon as you maximize the application and users are viewing it remotely, the memory usage rises again, and history repeats... As previously mentioned, minimizing the window via normal LV calls to property nodes does not yield the same result, nor does a Request Deallocation of a VI (when you profile the project, there are no VIs increasing in memory; it is the LV process itself doing it).
    After many many hours googling I stumbled upon this:
    http://support.microsoft.com/?kbid=293215
    I believe trimming the process with SetProcessWorkingSetSize would solve this problem, and now I would really like to be able to do this in my program, so that users are not forced to minimize the program every X hours depending on their system size...
    However, I have absolutely NO experience in calling the Windows API from LV; I need someone with that knowledge to provide an example of how to call this. I've looked at examples of calls to the Windows API - there is an example in this forum with some llb's in it, and I have gained a fair understanding of how parameters are passed between the calls, but none of them include the "hProcess" handle that is apparently needed for this specific WinAPI call to work. Is there anyone in this forum with the knowledge of how to obtain this handle from a VI, if at all possible, who could provide an example VI for me to use - or even better, someone who knows how to do this within LV itself??
    Your help is much appreciated
    Best Regards
    Jacob
    LV8.6.1 patch something
    Win XP 
    Solved!
    Go to Solution.

    Hi Enrico
    Finally I can give something back to the community that has given me so much :-)
    The "official" statement is "yes, we know it is a problem"... Not sure what that will do for the future...
    I have the problem on 8.6.1 as well - and in fact it is a general LV problem that I first reported to NI with LV8.2, as I was pissed by the fact that even the smallest exe file would consume 50+ MB of memory until you manually minimized the window. Well, thanks to the feedback from Cosmin, I seem to have solved the problem.
    I must warn that starting to "empty process" once in a while has led to occasional program crashes with the lethal "app.exe performed an illegal action and is closed" Windows dialog. However, what I did was move the webserver to a separate exe file and communicate the data that I want to use via DataSocket in a cluster. It works like a charm, and I simply stall the single thread that the webserver runs in whenever Empty Process is called; I have not seen a crash since then. (The initial implementation was done in the main app with 4 parallel loops running, and I guess that was a disaster waiting to happen.)
    Either way, what I have done is make a VI that calls Empty Process at a user-defined interval, simply by getting the calling program's .exe name as it appears in the Task Manager. It is simple and very effective. I call it every 5 minutes; needless to say, flushing too often will most likely kill the performance of the system. I have not noticed problems with VM - are you sure you are not storing large arrays or moving around copies of data that are not used frequently?
    For future reference in this forum, it is attached here, including the .dll required for the call. It is a LV8.6.1 file, as I have not yet had the time to test every single function of my program again for new problems that could occur with upgrading to LV2009.
    I hope this solves your problem..
    best regards
    Jacob
    www.10EaZy.com 
    Attachments:
    EmptyProcess.zip 32 KB
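    (For reference, a minimal C sketch of the Win32 call discussed in this thread. SetProcessWorkingSetSize with both sizes set to (SIZE_T)-1 asks Windows to trim the process working set - the same effect as minimizing the window. Built into a small DLL, a function like this could be invoked from LabVIEW through a Call Library Function Node; this is a sketch of the underlying call, not the attached EmptyProcess implementation.)

    ```c
    #include <windows.h>

    /* Trim the calling process's working set. Passing (SIZE_T)-1 for both
       the minimum and maximum sizes tells Windows to remove as many pages
       as possible. GetCurrentProcess() returns a pseudo-handle, so no
       CloseHandle is needed. */
    __declspec(dllexport) BOOL TrimWorkingSet(void)
    {
        return SetProcessWorkingSetSize(GetCurrentProcess(),
                                        (SIZE_T)-1, (SIZE_T)-1);
    }
    ```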

  • Memory issue LabVIEW won't deallocte

    Hello,
    I was wondering if anyone could help. I am running LabVIEW 2012 and playing with manipulating data from a reasonably large file (40 MB) using a lot of string arrays.
    Everything runs OK; the problem I am having is that every time I run the VIs associated with the program, even when I shut them down, LabVIEW keeps them in memory, so my RAM usage goes up to about 3 GB each time, and I cannot get it to reduce without shutting down LabVIEW, which is really starting to prove frustrating. I have put in a few deallocation elements, but that doesn't seem to solve it. I was just wondering if anyone knows a magic command to deallocate all VIs in LabVIEW's memory without me having to shut down completely.
    I know I could make the whole thing more memory-efficient by not using string arrays and instead just using a single string, but that is not the problem, as each VI individually handles it fine, and I don't have any indicators on my top-level VI which would be holding it up.
    Any ideas welcome.
    Thanks!

    Sorry, I will have to study this later, but just glancing at it shows quite a few questionable things:
    in snipped.png:
    the "time taken" will most likely show always zero, because both frames execute in immediate succession, in parallel to the rest of the code. what is the purpose?
    reshaping a 2D array to a 1D array could be done with the "reshape" primitive. Doing a sucessive "array to spreadsheet string" followed by "spreadsheet string to array" seems very Rube Goldberg. 
    The entire code is a constant dance between different representations (1D array, 2D array, cluster, cluster array, spreadsheet string, etc.) Are you sure this could not be done a bit simpler?
    In snippet 2.png:
    "request deallocation" does not care about execution order, so placing it inside a sequence frame is meaningless.
    LabVIEW Champion. Do more with less code and in less time.

  • How to deallocate memory used by the labview program?

    Hi,
    I have built a large application in LabVIEW 2012 that uses a couple of subVIs, local and global variables, some uninitialized shift registers (functional global variables), and some C++ and .NET DLLs as well. When I open my application, the memory usage shown in the Windows Task Manager is around 1.4 GB. After running the application and enabling all processes used in the application, the memory usage goes up to 1.55 GB, but when I stop the application, the memory is never released/deallocated until I close the application and exit LabVIEW. Can you suggest how to deallocate this memory, and how I can use the Request Deallocation function in this application? The LabVIEW help says I have to place it inside the subVI whose memory I want to deallocate, but I use a lot of subVIs in my application. I tried placing it in the top-level VI and called it after stopping all processes, but it didn't work... I am also closing the references to all of the .NET DLLs at the end. Any workarounds??
    Thanks

    sandee wrote:
    When I open my application, the memory usage shown in the Windows Task Manager is around 1.4 GB. After running the application and enabling all processes used in the application, the memory usage goes up to 1.55 GB, but when I stop the application, the memory is never released/deallocated until I close the application and exit LabVIEW.
    You already got some good advice. One thing that was not clear was how you are measuring memory. Since the Task Manager is capable of showing the memory used by LabVIEW alone (you simply need to look elsewhere), and you said that the memory gets released when you exit LabVIEW, you gave the impression that the 1.4 GB was the LabVIEW portion.
    OK, so a couple of hundred MB used by LabVIEW is really nothing to worry about. Are you running into memory or other performance problems? What are the symptoms?
    sandee wrote:
    I have built a large application in LabVIEW 2012 that uses a couple of subVIs, local and global variables, some uninitialized shift registers (functional global variables), and some C++ and .NET DLLs as well.
    We really need to see some code. It is very well possible that you have a lot of unnecessary data copies in memory due to sloppy programming. How big are the data structures? Do you use local variables for big data structures? What does the program actually do?
    LabVIEW Champion. Do more with less code and in less time.

  • How to analyses what takes memory

    Hi,
    I have a library with several functions written in LabVIEW and compiled into a shared DLL. The library provides functions to be used in test cases for Production (init, write, read, change, deinit, etc.). In the init function some files are created, connections are established, and some functional globals are created to pass data between functions.
    This library is used in the end from TestStand. I test it first in CVI.
    When I start the prepared CVI interface, it takes about 55 MB at start. When I hit init for the first time, it increases by ~10 MB. Deinit frees only some kB. Another init adds ~1-2 MB, and every subsequent init adds ~1-2 MB. It seems like some memory is not released. That is not a problem in the test phase, but Production does not do restarts; they just run it on and on, and suddenly they might face a lack of memory.
    There are several global variables passed via shift registers, but the same part of memory should always be used. I read an XML file of about 400 kB into an internal structure, but even if it is read at every init, it should be overwritten.
    However, how do I deallocate the space for such a global?
    I was trying to use the Desktop Execution Trace Toolkit, but I am unable to analyze 400k lines.
    Is there any other way to analyze what takes the memory, and what keeps it after the VI ends?

    Hi Piotr,
    thank you for the hints.
    I used the "Request Deallocation" function before, but it did not give any better results.
    Filtering the results from the Desktop Execution Trace Toolkit still left me thousands of lines to analyze. It is impossible.
    I have never used the In Place Element structure, but I can check whether it brings some positive results.
    As a clarification:
    The INIT function reads some files (and closes them after reading) and stores some of this info in functional globals (in registers): an ini file, an XML file, a log file. It also establishes connections to targets. These refnums are the only ones that are not closed until the DEINIT function is used. The deinit function mainly closes the connections. I made a mistake in my first post: the DEINIT function does not free any memory; it just frees only some kB.
    I ran some tests.
    I removed the reading of the XML file. The memory consumption decreased a lot, as expected. But memory was still allocated and not freed on every start.
    I then removed almost all functional globals (except the one responsible for holding the refnums to the targets), but still some kB was taken and not released.
    I don't really understand why the memory that should be deallocated automatically is not. I write to some registers, and when I write again, it seems like other components are created in memory even though I still talk to the same register.
    Or maybe there is a problem in the CVI environment? It cannot be right that the memory is not released when the VI finishes execution.

  • How long does subVI stay in memory

    Hi,
    I would like to know how long a subVI stays in memory. If it is not used anymore, will LabVIEW automatically close it? Thanks!
          Tom
    Solved!
    Go to Solution.

    Overall, we don't have enough information to fully answer the question, because the question is not very specific.
    For example:
    if the subVI is called dynamically, it probably can leave memory once it is closed.
    If the subVI is reentrant, It can have multiple instances in memory.
    To have a subVI in memory is typically irrelevant in terms of the memory the code alone occupies. More serious is the amount of data structures allocated by the subVI. Here we have some tools, e.g. the "request deallocation" primitive. This can be useful if a subVI is called only once, and then never again, but needs gigantic data structures. Typically, subVIs are called multiple times with similar data structures, so it would be a mistake to constantly deallocate, only to reallocate a few nanoseconds later.
    As Smercurio_fc already mentioned, another important question for performance tuning is whether the front panel is in memory or not. A subVI that does not need to have its FP in memory often executes much faster and takes less memory. It is thus important not to have the FPs of subVIs open unless there is a need for interaction. Certain coding habits (e.g. the use of some invoke nodes or property nodes) also force the FP into memory, even if the FP is not shown. This is documented in the help for each method/property, so it is important to avoid these if they are not really needed.
    In summary, it would be interesting to know what exact concerns the OP had when asking "how long does a subVI stay in memory?"
    There has to be more to the question...
    LabVIEW Champion. Do more with less code and in less time.

  • Memory Management in LabView / DLL

    Hi all,
    I have a problem concerning the memory management of LabVIEW. If my data is bigger than 1 GB, LabVIEW crashes with an "Out of Memory" error message. (As LabVIEW passes data only by value and not by reference, 1 GB can easily be reached.) My idea is to divide the data structure into smaller structures and stream them from the hard disk as they are needed. To do so, I have to access a DLL which reads this data from disk. As a hard disk is very slow in comparison to RAM, the LabVIEW program gets very slow.
    Another approach was to allocate memory in the DLL and pass the pointer back to LabVIEW, like creating a RAM disk and reading the data from that disk. But the memory is allocated in the context of LabVIEW, so LabVIEW crashes because the memory was corrupted by C++. Allocating memory with the LabVIEW header files included doesn't help, because the memory is still allocated in the LabVIEW context. So does anybody know if it's possible to allocate memory in a C++ DLL outside the LabVIEW context, so that I can read my data with a DLL by passing the pointer to this DLL from LabVIEW? It should work the following way:
    - Start LabVIEW program --> allocate an amount of memory for the data, pass the pointer back to LabVIEW
    - Work with the program and the data. When some data is needed, a DLL reads from the memory space the pointer points at
    - Stop LabVIEW program --> memory is freed
    Remember: the data structure should be usable like a global variable in a DLL, or like a RAM disk!
    Hope you can understand my problem
    Thanks in advance
    Christian
    THINK G!! ;-)
    Using LabView 2010 and 2011 on Mac and Win
    Programming in Microsoft Visual C++ (Win), XCode (Mac)

    If you have multiple subvis grabbing 200MB each you might try using the "Request Deallocation" function so that once a vi is done processing it releases the memory.
    LabVIEW Help: "When a top-level VI calls a subVI, LabVIEW allocates a data space
    of memory in which that subVI runs. When the subVI finishes running, LabVIEW
    usually does not deallocate the data space until the top-level VI finishes
    running or until the entire application stops, which can result in out-of-memory
    conditions and degradation of performance. Use this function to deallocate the
    data space immediately after the VI completes execution."
    Programming >> Application Control >> Memory Control >> Request Deallocation
    I think it first appeared in LabVIEW 7.1.
    Message Edited by Troy K on 07-14-2008 09:36 AM
    Troy
    CLD
    Each snowflake in an avalanche pleads not guilty. - Stanislaw J. Lec
    I haven't failed, I've found 10,000 ways that don't work - Thomas Edison
    Beware of the man who won't be bothered with details. - William Feather
    The greatest of faults is to be conscious of none. - Thomas Carlyle
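    (Back to the original DLL question above: a minimal C sketch of the approach the poster describes - a DLL that owns a buffer on its own heap, outside LabVIEW's memory manager, while LabVIEW holds only an opaque handle. All names are hypothetical; LabVIEW would call these exports through Call Library Function Nodes and store the handle as a uInt64.)

    ```c
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Allocate a buffer on the DLL's heap; returns 0 on failure.
       LabVIEW stores the returned value as an opaque uInt64 "handle". */
    __declspec(dllexport) uint64_t AllocBuffer(size_t bytes)
    {
        return (uint64_t)(uintptr_t)calloc(1, bytes);
    }

    /* Copy a chunk out of the DLL-owned buffer into a LabVIEW-owned
       destination array (preallocated by the caller). */
    __declspec(dllexport) void ReadChunk(uint64_t handle, size_t offset,
                                         void *dest, size_t bytes)
    {
        const char *buf = (const char *)(uintptr_t)handle;
        memcpy(dest, buf + offset, bytes);
    }

    /* Release the buffer when the LabVIEW program stops. */
    __declspec(dllexport) void FreeBuffer(uint64_t handle)
    {
        free((void *)(uintptr_t)handle);
    }
    ```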

