LabVIEW memory

Hi all,
I am using the following VI to convert a binary file into a CSV file with specific formatting.
I am facing problems with deallocation of memory in this VI. I check the PF usage under the Performance tab of the Windows Task Manager before and after the VI runs. It holds on to about 0.23 GB of memory even after I have closed the VI, and consecutive runs of the same VI add roughly another 0.23 GB (or less) each time.
Is there a way to allocate memory and recover all of it when the VI shuts down?
Thanks
Vivan 
Vivan Sachdeva
Lasers for Science Facility
Rutherford Appleton Laboratory
STFC
Oxford
United Kingdom
Attachments:
test_4.vi ‏21 KB

Hi,
Thank you all for your help.
Sorry about not posting the other convert array VI earlier.
The binary file is about 40 MB; this one is just a test file and could be bigger in the future, reaching up to 1 GB.
About 'after I have shut the VI': I check the PF usage under the Performance tab in the Windows Task Manager. It is about 850 MB before I run the VI for the first time and builds up to 1.15 GB during the run. I assumed the Request Deallocation VI would return all of the memory and bring the PF usage back to 850 MB, but it only returns 0.15 GB, so the PF usage is about 1 GB after the run. It remains around 1 GB after each successive run.
Is there a way to recover this 150 MB as well after each VI run?
Request Deallocation VI - I tried using it in several ways. I made the sequences into subVIs and put the deallocation VI in each one of them; that didn't help either. I read somewhere that using a lot of deallocation VIs could cause LabVIEW to use more memory than required, so I just put one in the last sequence frame of the main VI.
I could try using local variables and post the results.
Thanks
Vivan
Vivan Sachdeva
Lasers for Science Facility
Rutherford Appleton Laboratory
STFC
Oxford
United Kingdom
Attachments:
convert aray test.vi ‏17 KB

Similar Messages

  • LabVIEW Memory full while fetching data from database

    Hi,
    In my program I need to sync some data from the client PC as per the selected time frame.
    But while fetching the data from the client database, my application hangs, and when I run the code I get the 'LabVIEW memory full' error message.
    Kindly suggest how to overcome this problem.

    Fetching the entire database is probably not a good idea.  You should narrow down how much you read at a time.
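    For example, instead of one query that pulls every row, the read can be issued in fixed-size chunks. Below is a minimal sketch using SQLite's C API; the database file, table, and column names are made up for illustration, and the same chunking idea applies to whichever database and toolkit you actually use.

        /* Sketch: read rows in chunks of 1000 instead of fetching the whole table.
         * "client.db", "samples", "ts" and "value" are placeholder names. */
        #include <stdio.h>
        #include <sqlite3.h>

        int main(void)
        {
            sqlite3 *db;
            sqlite3_stmt *stmt;
            const int chunk = 1000;               /* rows per fetch */
            int offset = 0, rows;

            if (sqlite3_open("client.db", &db) != SQLITE_OK)
                return 1;

            do {
                rows = 0;
                sqlite3_prepare_v2(db,
                    "SELECT ts, value FROM samples ORDER BY ts LIMIT ? OFFSET ?",
                    -1, &stmt, NULL);
                sqlite3_bind_int(stmt, 1, chunk);
                sqlite3_bind_int(stmt, 2, offset);

                while (sqlite3_step(stmt) == SQLITE_ROW) {
                    /* process one row here instead of buffering everything */
                    rows++;
                }
                sqlite3_finalize(stmt);
                offset += rows;
            } while (rows == chunk);              /* a short chunk means we are done */

            sqlite3_close(db);
            return 0;
        }

    Processing (or discarding) each chunk before requesting the next one keeps the application's working set roughly constant instead of proportional to the size of the database.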
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • LabVIEW: Memory full

    Hello guys, I have this situation:
    I was running an executable file (I don't have access to the code) and this error appeared: LabVIEW: Memory is Full. VI "Nameofmyvi.vi" was stopped at unknown "" at call to "Nameofmyvi". I've read that this can happen when dealing with large data sets and that a possible solution is modifying the code, but as I said, I don't have access to it.
    So, I'll be very pleased if you guys can help me.
    Cheers

    As already said, the problem is likely with the code having poor memory management.  Without the code, there's not much you can do.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • Labview memory problem

    I have an application generated by IOtech for using their analog I/O card with LabVIEW 7.1. When I start the application, Task Manager shows LabVIEW.exe at about 30,000 K, and it then keeps increasing with no apparent limit. This causes my PC to slow down and sometimes crash. I don't understand the reason for such a huge memory increase. The file is attached here in the hope of getting a solution.
    Thanks.
    Attachments:
    DaqBoard 1000 High Stress_v71.zip ‏1056 KB

    That is chewing through 1.8 MB a minute, so that would be a problem.  Your options are:
    Add more memory to the point where you can complete a test.
    Set breakpoints in the program to identify which step or steps are allocating the memory.  Once identified, you can decide to limit the total memory consumed (delete from the array once it reaches a certain size) or FIFO the data to disk to prevent the PC from crashing.
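    One way to picture the "cap it at a certain size" option is a fixed-size ring buffer that is allocated once and simply overwrites (or flushes to disk) the oldest data when full. A rough C sketch, with the capacity chosen arbitrarily:

        /* Sketch: a fixed-capacity ring buffer so memory use stays bounded.
         * CAPACITY is an arbitrary placeholder; size it to your worst case. */
        #include <stddef.h>

        #define CAPACITY 100000

        typedef struct {
            double data[CAPACITY];
            size_t head;                 /* index of the next write */
            size_t count;                /* number of valid samples */
        } RingBuffer;

        static void ring_push(RingBuffer *rb, double sample)
        {
            rb->data[rb->head] = sample;             /* overwrite in place */
            rb->head = (rb->head + 1) % CAPACITY;
            if (rb->count < CAPACITY)
                rb->count++;
            /* else: the oldest sample was just overwritten; flush to disk
               here instead if the data must not be lost */
        }

        int main(void)
        {
            static RingBuffer rb;        /* static: allocated once, ~800 KB */
            for (int i = 0; i < 250000; i++)
                ring_push(&rb, (double)i);
            return 0;
        }

    However many samples arrive, the process never holds more than CAPACITY of them in RAM.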
    Hope this helps,
    Matthew Fitzsimons
    Certified LabVIEW Architect
    LabVIEW 6.1 ... 2013, LVOOP, GOOP, TestStand, DAQ, and Vision

  • LabVIEW memory management changes in 2009-2011?

    I'm upgrading a project that was running in LV 8.6.  As part of this, I need to import a customer database and fix it.  The DB has no relationships in it, and the new software does, so I import the old DB, create the relationships, fix any broken ones, and write to the new DB.
    I started getting memory crashes in the program, so I started looking at Task Manager.  The LabVIEW 8.6 code on my machine will peak at 630 MB of memory when the database is fully loaded.  In LabVIEW 2011, it varies.  The lowest I have gotten it is 1.2 GB, but it will go up to 1.5 GB and crash.  I tried LV 2010 and LV 2009 and see the same behavior.
    I thought it may be the DB toolkit, as it looks like it had some changes made to it after 8.6, but that wasn't it (I copied the LV8.6 version into 2011 and saw the same problems).  I'm pretty sure it is now a difference in how LabVIEW is handling memory in these subVIs.  I modified the code to still do the DB SELECTS, but do nothing with the data, and there is still a huge difference in memory usage.
    I have started dropping memory deallocation VIs into the subVIs and that is helping, but I still cannot get back to the LV 8.6 numbers.  The biggest savings was by dropping one in the DB toolkit's fetch subVI.
    What changed in LabVIEW 2009 to cause this change in memory handling?  Is there a way to address it?

    I created a couple of VIs which will demonstrate the issue.
    For Memory Test 1, here's the memory (according to Task Manager):
                    Pre-run    Run 1      Run 2      Run 3
    LabVIEW 8.6     55504      246060     248900     248900
    LabVIEW 2011    93120      705408     1101260    1101260
    This gives me the relative memory increase of:
                    Delta Run 1   Delta Run 2   Delta Run 3
    LabVIEW 8.6     190556        193396        193396
    LabVIEW 2011    612288        1008140       1008140
    For Memory Test 2, it's the same except drop the array of variants:
                    Pre-run    Run 1      Run 2      Run 3
    LabVIEW 8.6     57244      89864      92060      92060
    LabVIEW 2011    90432      612348     617872     621852
    This gives us deltas of:
                    Delta Run 1   Delta Run 2   Delta Run 3
    LabVIEW 8.6     32620         34816         34816
    LabVIEW 2011    521916        527440        531420
    What I found interesting in Memory Test #1 was that LabVIEW used more memory for the second run in LV 2011 before it stopped.  I started with Test 1 because it more closely resembled what the DB toolkit was doing, since it passes out variants that I then convert.  I thought maybe LabVIEW no longer stored variants internally the same way.  I dropped the indicator thinking it would make a huge difference in Memory Test 2, and it didn't make a huge difference.
    So what is happening?  I see similar behavior in LV 2009 and LV 2010.  LV 2009 was the worst (significantly), LV 2010 was slightly better than 2011, but still significantly worse than 8.6.
    Attachments:
    Memory Test.vi ‏8 KB
    Memory Test2.vi ‏8 KB

  • LabVIEW Memory Allocation

    Hey,
    Is it possible to allocate a predefined amount of RAM and accumulate data into it?
    Before going into detail – I am currently looking to write inspection results to a database for statistical analysis. Writing to the database for each component/iteration will always take some time, so I decided to accumulate all the data in memory and write it in one shot.
    In detail, the user inputs the memory size via a front panel control. Let us assume writing 1 row of string information occupies "XX" bytes. (I am not yet sure how to calculate the memory size of a 1D string array of 10 elements, with a maximum of 20 characters in each string.) Dividing the user-input memory size by the memory size of 1 row gives how many rows we can buffer at maximum, say "N".
    Use a For Loop with "N" iterations to accumulate the 1D info into a 2D array (auto-indexing) and write it to the database in one shot.
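    A rough back-of-the-envelope sketch in C of that arithmetic (the sizes below are the example numbers from this post, and real LabVIEW strings carry extra handle/length overhead, so treat the result as an estimate only):

        /* Sketch: how many rows fit in a user-supplied memory budget. */
        #include <stdio.h>

        int main(void)
        {
            const size_t strings_per_row  = 10;
            const size_t bytes_per_string = 20;             /* max characters per string */
            const size_t row_bytes = strings_per_row * bytes_per_string;

            size_t budget_bytes = 1024 * 1024;              /* front panel input, e.g. 1 MB */
            size_t max_rows = budget_bytes / row_bytes;     /* "N" in the post */

            printf("%zu bytes per row -> about %zu rows per batch\n", row_bytes, max_rows);
            /* accumulate up to max_rows rows, then write the whole batch to
               the database in a single call */
            return 0;
        }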
    Any help or direction would help a lot.
    Waiting for the reply 
    Sasi.
    Certified LabVIEW Associate Developer
    If you can DREAM it, You can DO it - Walt Disney

    As far as I know, LabVIEW handles memory allocation internally and we don't have an option to allocate it ourselves. There might be a way using a Windows DLL, but there is no direct function, at least.
    As you said, you are going to use a For Loop; in this case LabVIEW pre-allocates the memory depending on the data type, and you don't have to worry about that. For details about memory use according to data type you can check this link.
    The best solution is the one you find it by yourself

  • LV8.5.1 Labview memory is full error message Windows Vista 64bit

    I have a small application where the top-level VI will no longer load into either LV 8.5.1 or LV 8.6 (see attachment) when using the Project Explorer.  Has anyone else had this problem?
    Hardware:
    ASUS G1 4Gig memory
    Windows Vista 64bit
    Regards,
    Karl
    Attachments:
    LV load error code 14.jpg ‏24 KB

    Hi Jon,
    Here is the code,
    Regards,
    Kal
    Attachments:
    SUbvis.zip ‏499 KB

  • Memory issue - LabVIEW won't deallocate

    Hello,
    I was wondering if anyone could help. I am running LabVIEW 2012 and playing with manipulating data from a reasonably large file (40 MB) using a lot of string arrays.
    Everything runs OK; the problem I am having is that every time I run the VIs associated with the program, even when I shut them down, LabVIEW keeps them in memory, so my RAM usage goes up to about 3 GB each time and I cannot get it to reduce without shutting down LabVIEW, which is really starting to prove frustrating. I have put in a few deallocation elements but that doesn't seem to be solving it. I was just wondering if anyone knows a magic command to deallocate all VIs in LabVIEW's memory without me having to shut down completely.
    I know I could make the whole thing more memory efficient by not using string arrays and instead just using a single string, but that is not the problem, as each VI individually handles it fine, and I don't have any indicators on my top-level VI which would be holding it up.
    Any ideas welcome.
    Thanks!

    Sorry, I will have to study this later, but just glancing at it shows quite a few questionable things:
    in snipped.png:
    the "time taken" will most likely show always zero, because both frames execute in immediate succession, in parallel to the rest of the code. what is the purpose?
    reshaping a 2D array to a 1D array could be done with the "reshape" primitive. Doing a sucessive "array to spreadsheet string" followed by "spreadsheet string to array" seems very Rube Goldberg. 
    The entire code is a constant dance between different representations (1D array, 2D array, cluster, cluster array, spreadsheet string, etc.) Are you sure this could not be done a bit simpler?
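    For what it's worth, the reason the string round trip is unnecessary: the data in a 2D array already sits in one contiguous block, so going to 1D is purely a matter of how the block is indexed, not a conversion. A conceptual C sketch (not the LabVIEW primitive itself):

        /* Sketch: the same contiguous block viewed as 2D and as 1D. */
        #include <stdio.h>

        int main(void)
        {
            double grid[3][4] = {
                { 0, 1,  2,  3 },
                { 4, 5,  6,  7 },
                { 8, 9, 10, 11 },
            };
            double *flat = &grid[0][0];      /* no copying, no text conversion */

            for (size_t i = 0; i < 3 * 4; i++)
                printf("%g ", flat[i]);
            printf("\n");
            return 0;
        }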
    In snippet 2.png:
    "request deallocation" does not care about execution order, so placing it inside a sequence frame is meaningless.
    LabVIEW Champion . Do more with less code and in less time .

  • What's the maximum size of an array in labview? LV run out of memory

    I want to create a 1D array of 40M elements of double, so I use Initialize Array with element = 1 (double) and dimension size = 40M (and put a Sum at the end to make sure LV actually attempts to create the array). Supposedly this only needs 320 MB of memory, and my system has over 1 GB of free memory available as shown in Task Manager, but when LabVIEW runs this VI it tells me "Not enough memory to complete this operation" and "LabVIEW: memory is full, top level VI is stopped at Initialize Array". So it seems LV is unable to handle such an array, even though there is apparently more than enough free memory in the system.
    Does anyone know why LabVIEW fails to allocate memory for this VI? Is there any way to solve this problem?

    This question has been asked many times. Have you tried to do a search? Arrays require contiguous memory. It doesn't matter if you have 40 GB of RAM. Unless you have a contiguous block for your array, the allocation will fail. You should read the chapter in the LabVIEW Help on managing large data sets. There are also a couple of KnowledgeBase articles.
    Of course, the obvious question is: why do you need to create such a large array in the first place?
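    To put numbers on it: 40,000,000 doubles at 8 bytes each is 320 MB, and it must be one unbroken block of address space. A small C sketch of the same situation (the failure mode, not LabVIEW's internals):

        /* Sketch: a large array needs one contiguous block, so the request can
         * fail even when the total free memory is much larger than 320 MB. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            size_t n = 40000000;                        /* 40 M doubles */
            double *arr = malloc(n * sizeof *arr);      /* ~320 MB, contiguous */

            if (arr == NULL) {
                /* the equivalent of "memory is full" at Initialize Array */
                fprintf(stderr, "no contiguous 320 MB block available\n");
                return 1;
            }
            free(arr);
            return 0;
        }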
    Message Edited by smercurio_fc on 01-18-2010 10:25 PM

  • How can I detect a memory leak in large LabVIEW projects?

    Hi,
    I have a huge LabVIEW application that runs out of memory after running continuously for some time. I am not able to find the VI that is hogging memory. Is there any tool that dynamically detects which VI is leaking memory?
    Or is there a tool or a way to identify the critical areas that could be the potential culprits leaking memory?
    Regards
    Bharath

    Bdev wrote:
    Thanks Dennis.
    I think Desktop Execution toolkit should solve the problem. 
    Wayne Wrote
    Have you tried Tools»Profile»Performance and Memory ?  http://zone.ni.com/reference/en-XX/help/371361F-01/lvdialog/profile/
    But this will just give me the amount of memory used by the VIs and not the amount of memory that is not getting released.
    And what is the problem with that? Just try to find which VIs keep increasing in memory size. Those are the culprits. If you have real memory leaks, meaning there is memory that is not managed by LabVIEW directly but for instance by a DLL somewhere, and that DLL loses references to memory so it is really lost, then the only way to find that is by successively excluding functionality in your application until you find the culprit.
    There is no other simple way to find out who is losing memory references than debugging by exclusion until the problem disappears. The only way to speed this up, which quite often works for me, is to make an educated guess about which components are most likely to show this misbehaviour.
    Not knowing anything about your application, or whether you are talking about memory hogs (fairly easily identifiable with the mentioned Performance and Memory monitor) or actual memory leaks, it is hard to tell how to go about it. Memory hogs are usually the first thing I suspect, especially with software I inherit from people of whom I'm not sure they know all the ins and outs of LabVIEW programming.
    If a leak seems likely, the first culprits usually are custom DLLs (yes, even DLLs I have written myself), then NI DLLs such as DAQmx, etc., and last come leaks in LabVIEW itself. This last category is very seldom, but it has happened to me. However, before you go and scream about LabVIEW having a memory leak, you really, really should make sure you have very intensively researched all the other possibilities. The chance that you run into a memory leak in LabVIEW, while not impossible, is so small compared to the other ways of causing either a memory hog or a leak in a component external to LabVIEW that in 99.9% of the cases where someone screams about a LabVIEW memory leak, they are simply wrong.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Avoiding data memory duplication in subVI calls

    Hi,
    I am on a Quest to better understand some of the subtle ways of the LabVIEW memory manager. Overall, I want to (as much as practically possible) eliminate calls to the memory manager while the code is running.
    (I mainly do RT code that is expected to run "forever"; the more static and "quiet" the memory manager activity is, the faster and simpler it is to prove beyond reasonable doubt that your application does not have memory leaks and that it will not run into memory fragmentation (out-of-memory) issues, etc. What I like to see, as much as possible, are near-static "used memory" and "largest contiguous block available" stats over days and weeks of deployed RT code.)
    In my first example (attached, "IPE vs non-IPE.png"), I compared IPE buffer allocation (black dots) for doing some of the operations in an IPE structure vs. "the old way". I see fewer dots the old way, and removed the IPE structure.
    Next I went from initializing an array of size x to values y to using a constant array (0 values) with an "array add" to get an array with the same values as my first version of the code. ("constant array.png")
    The length of the constant array is set to my "worst case" of 25 elements (in example). Since "replace sub-array" does not change the size of the input array even when the sub-array is "too long", this saves me from constantly creating small, variable sized arrays at run-time. (not sure what the run-time cpu/memory hit is if you tried to replace the last 4 elements with a sub-array that is 25 elements long...??)
    Once I arrived at this point, I found myself wondering how exactly the constant array is handled at run time. Is it allocated the first time this subVI is called and then kept in memory until the main/top-level VI terminates, or is it unloaded every time the subVI finishes execution? (I think Macs could unload it, while on Windows and Linux/Unix it remains in memory until the top level closes?) When thinking (and hopefully answering), consider that the code is compiled to an RTEXE running on a cRIO-9014 (VxWorks OS).
    In this case, I could make the constant array a control, place the constant on the diagram of the caller, and pipe it all the way up to the top-level VI, but this seems cumbersome, and I'm not convinced that the compiler would properly recognize that at the end of a long chain of sub-sub-subVIs all those "controls" are actually always tied off to a single constant. Another way would perhaps be to initialize an FG with this constant array and always "read" it out from the FG (using this cool trick on creating large arrays on a shift register with only one copy, which avoids the dual copy: one for the shift register, one from the "initialize array" function).
    This is just one example of many cases where I'm trying to avoid creating memory manager activity by making LabVIEW assign memory space once, then only operate on that data "in-place" as much as possible. In another discussion on "in-place element" structures (here), I got the distinct sense that in-place very rarely adds any advantage as the compiler can pick up on and do "in-place" automatically in pretty much any situation. I find the NI documentation on IPE's lacking in that it doesn't really show good examples of when it works and when it doesn't. In particular, this already great article would vastly benefit from updates showing good/bad use of IPE's.
    I've read the following NI links to try and self-help (all links should open in new window/tab):
    cool trick on creating large arrays on a shift register with only one copy
    somewhat dated but good article on memory optimization
    IPE caveats and recommendations
    How Can I Optimize the Memory Use in My LabVIEW VI?
    Determining When and Where LabVIEW Allocates a New Buffer
    I do have the memory profiler tool, but it shows min/max/average allocations, it doesn't really tell me (or I don't know how to read it properly) how many times blocks are allocated or re-allocated.
    Thanks, and I hope to build on this thread with other examples; by the end of the thread, hopefully everyone will have found one or two neat things that they can use to memory-optimize their own applications.  Next on my list are probably handling of large strings, lots of array math operations on various input arrays to create a result output array, etc.
    -Q
    QFang
    CLD LabVIEW 7.1 to 2013
    Attachments:
    IPE vs non-IPE.png ‏4 KB
    constant array.png ‏3 KB

    I sense a hint of frustration on your part, I'm not trying to be dense or difficult, but do realize that this is more towards the "philosophical" side than "practical" side. Code clarity and practicalities are not necessarily the objectives here.
    Also, I have greatly appreciated all your time and input on this and the other thread!
    The answer to your first question is actually "yes, sort of". I had an RT application that developed a small memory leak (through a bug in NI's "Get Volume Info.vi"), but isolating it and proving it out took a very long time because the constant large allocations/deallocations would mask the leak. (Traces didn't work out either, since it was a very, very slow leak and the traces would bomb out before showing anything conclusive.) The leak was a few bytes, but the short-term memory oscillations and long-term (days) cyclical "saw-tooth" ramps in memory usage made it very hard to see. A more "static" memory landscape would possibly have made this simpler to narrow down and diagnose. Or maybe not.
    Also, you are missing my point entirely; this is not about "running out of memory" (and the size of 25 in my screenshot may or may not be what that array (and others) ends up being). This is about having things allocated in memory ONCE, then not deallocated or moved, and how/when this is possible to accomplish.  Also, this is a quest (meaning something I'm undertaking to improve and expand my knowledge; who said it has to be practical?).
    You may find this document really interesting; it's the sort of thing you could end up being forced to code to, although I don't see how 100% compliance with it would ever be possible in LabVIEW. That's not to say it's worthless: JPL Institutional Coding Standard for the C Programming Language (while it is directed at C, it has a lot of valid general points).
    Yes, you are right that the IPE would grow the output if the length of my replacement array is not the same, and since I can't share the full VIs, it's a bit of a stretch to expect people to infer from the small screen dump that the I32 wires on the right guarantee the lengths will match up in the IPE example.
    Once, on the recommendation of NI support, I actually did use the Request Deallocation primitive during the hunt for what was going on in that RT app I was debugging last year. At that particular time the symptom was constant fragmentation of memory, until the largest contiguous block would be less than a couple of kB and the app would terminate with 60+ MB of free memory space (a.k.a. a memory leak, though we could not yet prove that from diagnostic memory consumption statistics due to the constant dynamic behavior of the program). I later removed them. They also ran counter to the goal of "allocate once, reuse forever" that I'm chasing. And again, I'm chasing this more as a way to learn than because all my code MUST run this way.
    I'm not sure I see what you mean by "copying data in and out of some temporary array". Previously (before the constant array), at every call to the containing subVI I used to "initialize array" with x elements of value y (where x depends to a large degree on a configuration parameter, and y is determined by the input data array). Since I would initialize a new array each time the code was called, and the size of the array could change, I looked for a way to get rid of the dynamic size and of dynamically creating the array from scratch each time the subVI was called. What I came up with is perhaps not as clear as the old way I did it, but with some comments, I think it's clear enough. In the new way, the array is created as a constant, so I would think that would cause less "movement" in memory, as it should at that point prevent the "source" array from (potentially) moving around in memory. Considering the alternative of always re-creating a new array, how is this adding an "extra" copy that creating new ones would not create?
    How would you accomplish the task of creating an array of "n" elements, all of value "y", without creating "extra" copies? Auto-indexing in a For Loop is certainly a good option, but again, is that guaranteed to reuse the same memory location on each call? Would it not, in a nit-picking way, use more CPU cycles, since you are building the array one element at a time instead of just using a primitive array add (which I have found to be a wickedly fast operation) on a constant data structure?
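    As a text-language analogue of the "allocate once, overwrite in place" idea being discussed (WORST_CASE mirrors the 25-element worst case mentioned above; this is a sketch of the pattern, not of what the LabVIEW compiler actually does):

        /* Sketch: one worst-case-sized buffer, allocated once, only ever
         * overwritten in place on later calls. */
        #include <stddef.h>

        #define WORST_CASE 25

        /* fill the first n elements with y; the buffer never moves or resizes */
        static void fill_in_place(double buf[WORST_CASE], size_t n, double y)
        {
            if (n > WORST_CASE)
                n = WORST_CASE;
            for (size_t i = 0; i < n; i++)
                buf[i] = y;
        }

        int main(void)
        {
            static double scratch[WORST_CASE];   /* lives for the whole run */
            fill_in_place(scratch, 4, 1.0);      /* later calls reuse the same memory */
            fill_in_place(scratch, 25, 0.5);
            return 0;
        }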
    I cannot provide full VI's without further isolation, maybe down the road (once my weekends clear up a bit). Again, I appreciate your attention and your time!
    QFang
    CLD LabVIEW 7.1 to 2013

  • I am a newer about LabVIEW.How can I realize a remote control in a Web brower.

    "To transform your application into a remote laboratory, make sure the VI that you want to publish is loaded into LabVIEW memory. Next, select the Web Publishing Tool option from the Tools menu. This window is the main window for interactively creating and publishing your remote laboratory
    The Web publishing tool will automatically load in the Document Title and VI Name text fields. As the sample image in Figure 6 illustrates, the Document Title, Text 1, and Text 2 are all text fields that you can use to customize the Web page created with the publishing tool.
    The second step necessary to enable a remote laboratory is to select the Start Web Server button. When pressed, this button activates the built-in LabVIEW Web server, which will publish and control your front panel images from the Internet.
    Once the Web server is activated, the actual HTML document needs to be created and saved so it can be accessed remotely. Clicking on Save to Disk places an HTML file called Document Title.htm into the LabVIEW file folder called WWW by default. Saving your Remote Panels HTML documents into this folder will ensure that the LabVIEW Web server can find them. Either keep the default name, or assign a new name and save the file. Once saved, a new panel entitled Document URL pops up with a message box containing the URL address of your enabled LabVIEW application.
    Click on OK in the Document URL window and then click on Done in the Web Publishing Tool window. Your lab is now ready for remote visitors
    Required Software
    To operate a LabVIEW program using remote panels, it is necessary to have the free LabVIEW run-time engine installed on the client computer. When a remote viewer logs onto the lab with the appropriate URL address, the LabVIEW front panel will appear in the browser, or reroute the user to install the run-time engine from the National Instruments Web site.
    Application Control
    Once connected to the remote laboratory, the client connection will automatically be in a monitor state. If another client is controlling the remote laboratory, the user will be able to monitor the actions of the controlling client. To request control of the program, right click on the front panel and select Request Control. Once selected, one of two possible messages will appear. Either the user will be granted control (Control Granted), or the user will see a message indicating that control is currently granted to another user (Waiting for control: Either the server is locked or another client has control). If another client has control, the controlling client will be notified that control time has now become limited. Once the timeout occurs or the controlling client has released control, application control is automatically switched to the requesting client (Control Granted). Once the user has been granted control, all icons and controls will become active and running the LabVIEW application is exactly like running the application from the local environment.
    Releasing Control
    When the remote viewer either moves on to a different URL address or relinquishes control by right clicking and selecting (Release Control), or when the remote laboratory times out, the remote laboratory is available to the next visitor."
    The above is what I read in a document on ni.com,
    but I cannot obtain that result.
    Thanks a lot!
    ^^

    Hi
    I don't see which LV version you are using. It's important to know that you need the FDS or PDS package of LV 6.1 to work with remote control; this feature is not supported in the LV 6.1 Base package.
    If you have LV 6.1 FDS or PDS, which of the described steps are not working?
    Luca P.
    Regards,
    Luca

  • Referenced memory could not be read - error

    Hey, in an application, on the stop command I exit from LabVIEW, and on exit I get an error: "The referenced memory could not be read. Click OK to terminate the program." Can anyone suggest why this error is seen
    and how it can be resolved?
    A screenshot of the error is enclosed.
    Regards
    anil
    Attachments:
    error log.PNG ‏6 KB

    Hi AndreasC,
    This error message occurs when the LabVIEW memory space becomes corrupted, and it is often due to DLL or CIN code.
    There can be two reasons for this:
    1. Try removing all DLL calls in your code and see if it works. Most of the time, the corruption is traced to a call to a DLL function that has incorrectly passed inputs to the Call Library Function node, often by passing an uninitialized string or array, or by writing past the bounds of the string or array in the DLL function. Some DLL functions assume that a string buffer is presized to 256 bytes, 1 KB, or some other size. If a smaller sized string buffer is passed, the DLL can write past the buffer and corrupt the dataspace that follows.
    2. This error can also be caused when running a LabVIEW built executable. If the VI calls a WinAPI DLL function and uses the full path to the DLL in the Call Library Function Node, the LabVIEW Application Builder will create a copy of the DLL in the data directory of the executable. Some DLLs such as WinAPI DLLs should only reside in one location, such as C:\WINDOWS\system32, otherwise errors/crashes can occur when called. To prevent this, remove the DLL path in the Call Library Function Node when calling WinAPI DLLs.
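    As an illustration of point 1, here is what the undersized-buffer case looks like in C. "GetDeviceName" is a made-up stand-in for a DLL function that expects a 256-byte caller-supplied string:

        /* Sketch: a DLL function that writes into a caller-supplied buffer.
         * Handing it a buffer smaller than it assumes writes past the end and
         * corrupts whatever memory follows. */
        #include <stdio.h>
        #include <string.h>

        /* stand-in for the DLL export; it assumes buf has room for 256 bytes */
        void GetDeviceName(char *buf)
        {
            strcpy(buf, "Some device name that can be up to 255 characters long");
        }

        int main(void)
        {
            char too_small[16];     /* undersized string wired to the Call Library node */
            char correct[256];      /* buffer pre-sized to what the DLL expects */

            (void)too_small;
            /* GetDeviceName(too_small);   would overrun and corrupt the dataspace */
            GetDeviceName(correct);        /* safe: buffer matches the DLL's assumption */
            printf("%s\n", correct);
            return 0;
        }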
    Some users have faced similar problems and the above solutions have worked for them.
    Here are the links to those discussions:
    http://forums.ni.com/ni/board/message?board.id=170&message.id=67230&requireLogin=False
    http://forums.ni.com/ni/board/message?board.id=170&thread.id=229434&view=by_date_ascending&page=1
    Regards,
    Ujjval

  • What can I do about "LabVIEW load error code 38: Failed to uncompress part of the VI."

    While attempting to load an executable LabVIEW application for LabVIEW 2009 SP1 on a Windows XP machine, the following pop-up message occurs: "LabVIEW: Memory or data structure corrupt. An error occurs in loading VI 'NI_Gmath.lblib: Backward Bracket Search.VI'. LabVIEW load error code 38: Failed to uncompress part of the VI. The VI is most likely corrupt." What seems odd is that the same LabVIEW application loads fine when logged on with a privileged user account, but fails to load under a private user account.
    Attachments:
    2012-07-18 LabVIEW Load error code 38.jpg ‏1314 KB

    Here's a thought:
    So when something is decompressed, a temp folder is often used. 
    I have no idea why LabVIEW would be decompressing anything, but I suspect it is trying to put the decompressed file into a temp folder where the user does not have write permissions.
    In the .ini file for your executable, you can add a line that specifies the location of the temp folder to use:
    tmpdir=C:\Temp
    On my Win7 machine, the location is:
    C:\Users\MyUserName\AppData\Local\Temp
    On WinXP, it is probably:
    C:\Documents And Settings\YourUserName\local settings\temp
    Try changing the tmpdir key in your INI file to something like C:\Temp and see if that helps.
    - john

  • Event handler eats up memory. Bad programming and/or bug.

    Hello
    I've been programming a GUI for a project. The basic structure of the program is a sampling routine that updates an array once a second. Once the array is updated, I use an event handler to plot the new array in a graph.
    When I wrote this GUI, I think I stumbled upon a bug in LabVIEW's memory allocation.
    If you have two loops, one that builds an array and then signals (via a user event) a second loop whose event handler reads the array, and the event handler is stopped for a few seconds (by opening a subVI or something inside the event handler), the memory goes berserk. When the event handler is free again after the stop, the memory is still allocated and does not return.
    I could not find any information on this problem in the forum, so I thought I would share it with everyone. I managed to reproduce this phenomenon in a small example (attached), if anyone is interested. The problem is simple to fix once you recognize it; however, it was not the simplest problem to find (imho, that is).
    Regards
    Andreas Beckman
    Attachments:
    bug.zip ‏23 KB

    Where are you seeing the memory being allocated? What tool are you using, Task Manager or LabVIEW profiling? I'm not seeing what you describe (LV 7.1.1 on a 3.4 GHz Pentium 4 with 1 GB of memory).
    Thanks,
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion
