Memory leaks in MFC while using CDatabase::OpenEx()

Hi,
This question has been asked before, but the explanation was not detailed and I could never get to the bottom of it.
I hope that by elaborating on the problem statement I can get a resolution or explanation from the experts here.
Consider the following two code samples:
1.
int count = 200;
for (int i = 0; i < count; ++i)
{
    try
    {
        CDatabase *db = new CDatabase;
        BOOL bRes = db->OpenEx(_T("DSN=MyData;UID=anon;PWD=pass"));
        db->Close();
        delete db;
        db = NULL;
    }
    catch (CDBException *e)
    {
        e->Delete();
    }
}
2.
CDatabase *db = new CDatabase;
int count = 200;
for (int i = 0; i < count; ++i)
{
    try
    {
        BOOL bRes = db->OpenEx(_T("DSN=MyData;UID=anon;PWD=pass"));
        db->Close();
    }
    catch (CDBException *e)
    {
        e->Delete();
        delete db;
        db = NULL;
        return 0;
    }
}
delete db;
db = NULL;
The first sample leaks a lot of memory; this is easy to observe in Task Manager, where the memory usage keeps growing.
The second sample does not leak any memory when I watch the usage in Task Manager.
To find the cause of the leak I ran the code through Rational Purify. According to Purify, both samples leak memory, though the first leaks significantly more than the second. The DLLs Purify points to are MFC DLLs (inserting the screenshot below).
Is this a known issue with the MFC DLLs, or am I doing something wrong?
I have a server application that has to create CDatabase objects many times, and it ends up leaking a lot of memory over time.
I can provide more information about this issue if required. Thanks in advance.
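One thing worth ruling out in the first sample: if OpenEx() throws a CDBException, control jumps to the catch handler before the delete, so that iteration's CDatabase object itself is never freed. Below is a minimal sketch of a scope-bound variant that deletes the object on every path (same connection string as above; this only removes that particular leak path, it says nothing about any MFC-internal leak Purify may report):

#include <afxdb.h>   // CDatabase, CDBException
#include <memory>    // std::unique_ptr

int count = 200;
for (int i = 0; i < count; ++i)
{
    std::unique_ptr<CDatabase> db(new CDatabase);
    try
    {
        if (db->OpenEx(_T("DSN=MyData;UID=anon;PWD=pass")))
            db->Close();
    }
    catch (CDBException *e)
    {
        e->Delete();   // MFC exception objects are heap-allocated
    }
    // db is deleted here on every path, including after an exception
}

A plain stack-allocated CDatabase declared inside the loop would achieve the same thing without the heap round trip.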

I tried to reproduce this issue on my side, but there seems to be no memory leak in my simple sample. I used _CrtDumpMemoryLeaks() to test for leaks:
https://msdn.microsoft.com/en-us/library/d41t22sb.aspx
If I comment out this line: db->Close();
I can detect a memory leak at: CDatabase *db = new CDatabase;
See the screenshot.
If I follow your sample code, there is a memory leak message in the Output pane.
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
#ifdef _DEBUG
#define new DEBUG_NEW
#endif
.....
int CMFCCdbMLTestApp::ExitInstance()
{
    // TODO: Add your specialized code here and/or call the base class
    _CrtDumpMemoryLeaks();
    return CWinApp::ExitInstance();
}
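As an aside, instead of calling _CrtDumpMemoryLeaks() manually in ExitInstance(), the CRT debug heap can be asked to dump leaks automatically at process exit; a minimal sketch (debug builds only):

// e.g. at the top of InitInstance(): report any blocks still
// allocated when the process exits
_CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_LEAK_CHECK_DF);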
You could also use a tool such as WinDbg to find more information about this issue.

Similar Messages

  • Memory leak in OCI while using AQ

    There seems to be a serious memory leak in the OCI driver (9.2.0.1) when using a Java client to dequeue from a database queue (Advanced Queuing).
    Continuous dequeuing causes the heap memory to increase, and this memory never gets freed, which leads me to suspect a memory leak in the OCI components (as the memory allocated to the JVM is constant). The heap memory increases by 3-4 MB after a dequeue of 1000 RAW messages.
    Has anyone come across this problem before, and if so, are there any solutions? Changing to the thin driver is not a solution for me due to other requirements.
    I'm using Oracle client v9.2.0.1 libraries running on Solaris 8.
    The source code for my JAVA test client is as below:
    /* Java dequeue */
    package com.ubsw.risk.pce.eventqueues.test;
    import oracle.AQ.*;
    import java.sql.*;
    import oracle.jdbc.*;
    public class testRawDequeue {
        public testRawDequeue() { }
        public static void main(String[] args) {
            Connection conn = null;
            AQSession aq_sess = null;
            try {
                Class.forName("oracle.jdbc.driver.OracleDriver");
                // Use OCI connection
                conn = DriverManager.getConnection("jdbc:oracle:oci:@DB_NAME.world", "user", "password");
                conn.setAutoCommit(false);
                Class.forName("oracle.AQ.AQOracleDriver");
                while (true) {
                    aq_sess = AQDriverManager.createAQSession(conn);
                    runTest(aq_sess);
                    aq_sess.close();
                    aq_sess = null;
                    System.gc();
                }
            } catch (Exception e) {
                e.printStackTrace();
                System.out.println(e.toString());
                try {
                    if (aq_sess != null) {
                        aq_sess.close();
                    }
                    if (conn != null) {
                        conn.close();
                    }
                } catch (SQLException sqle) {
                }
            }
        }
        public static void runTest(AQSession aq_sess) throws AQException, SQLException {
            AQQueueTable q_table;
            AQQueue queue;
            AQMessage message;
            AQRawPayload raw_payload;
            AQEnqueueOption enq_option;
            String test_data = "new message";
            AQDequeueOption deq_option;
            byte[] b_array;
            /* Get a handle to a queue - in aquser schema: */
            queue = aq_sess.getQueue("user", "raw_msg_queue");
            System.out.println("Successful getQueue");
            /* Create an AQDequeueOption object with default options: */
            deq_option = new AQDequeueOption();
            /* Dequeue a message: */
            message = queue.dequeue(deq_option);
            System.out.println("Successful dequeue");
            /* Retrieve raw data from the message: */
            raw_payload = message.getRawPayload();
            b_array = raw_payload.getBytes();
            System.out.println("bytes:" + b_array.toString());
            queue.close();
            ((AQOracleSession) aq_sess).getDBConnection().commit();
        }
    }

    This sounds very similar to the memory leak I have in Oracle 9i using Pro*C++. Every time a connection is made, memory appears to leak, and it only happens in multithreaded mode, not default mode. There is a thread about this under the Oracle C++ Call Interface forum. Under 9i it appears to leak about 60 KB per connect rather than 60 bytes.
    Paul

  • Memory leak in a client using EJBs deployed in a Bea Weblogic 10.0.0 cluster

    Hi all,
    We are having a memory leak in a client using stateless EJBs deployed in a cluster. The client is a Tomcat 6.0.18 with Java 6, but the leak is also reproduced using Tomcat 5 with Java 5. The client calls a Weblogic Server 10.0, making
    calls to an EJB deployed in a cluster that has two instances installed on two different machines.
    The client works fine if we shut down one of the server instances, i.e. when the client is using only one instance.
    Summarizing the environment:
    Client side:
    1 HP Itanium machine with HP-UX
    1 Tomcat 6 with Java 6 (reproduced with Java 5)
    Bea Weblogic client (wlclient.jar) for Weblogic 10.0.0
    Server side:
    2 HP Itanium machines with HP-UX
    Bea Weblogic Server 10.0.0 installed on both machines
    A single domain
    Two Bea instances (one per machine) associated to a Bea cluster
    EJBs deployed in both instances
    We have monitored the memory consumed in Tomcat and noticed that the VM memory PS OLD GEN grows continuously when we run tests with both server-side Bea instances up. We have increased
    the VM memory parameters in the Tomcat client up to 1 GB, but that only delays the end: free memory runs out, the GC cannot free any more bytes, and the CPU is 100% consumed by GC work. In the end the Tomcat client
    stops accepting HTTP requests and must be restarted.
    Besides this, we have studied the VM memory in Tomcat using jmap and importing the dump into Eclipse Memory Analyzer. We see some strange memory blocks of several MB that keep growing and that are stored
    under data structures in the com.sun.corba package:
    com.sun.corba.se.impl.legacy.connection.SocketFactoryConnectionImpl (4.5Mb)
    |
    -> com.sun.corba.se.impl.transport.CorbaResponseWaitingRoomImpl
    |
    -> java.util.Hashtable
    |
    -> java.util.Hashtable$Entry
    |
    -> java.util.Hashtable$Entry
    -> java.util.Hashtable$Entry
    -> java.util.Hashtable$Entry
    Has anybody any idea about this problem?
    Thanks in advance.


  • Memory leaks in NSURLConnection when using SSL

    Hi,
    I am experiencing a memory leak while using NSURLConnection with an HTTPS request.
    I have a web service over both HTTP and HTTPS. When I call the web service over HTTP I don't get any memory leak, but when I call it over HTTPS I do; the library responsible for the leak is CFNetwork and the responsible caller is NSURLRequest::setSSLRequest.
    I just want to make sure: do we have to set any property on the NSURL for an HTTPS request?
    It would be really helpful if someone could help me out.
    thanks in advance.

    I get the same (or similar) issue... 48 bytes are leaked on every call. Instruments says it's NSURLConnection that's leaking an NSString somewhere... Blowed if I can find why; Build and Analyze doesn't complain about that piece of code (except that I'm releasing the URL request late, i.e. by the caller instead of the method that allocates it. Naughty, I know, but autoreleasing the object had the same problem).
    This is on the simulator BTW...
    H

  • Will a queue leak memory when used in producer/consumer mode in DAQ to transfer different-sized arrays?

    In my data acquisition, I use one loop to poll data from the hardware and another loop to receive the data sent from the polling loop through a queue.
    But the size of the transferred array may differ each time, so the system may allocate arrays of different sizes and recycle them very frequently.
    Will this cause a memory leak? Or will it slow down performance, since the array size is not fixed and a new array has to be created each time?
    Any suggestions or a better method?

    As I understand your description, your DAQ loop acquires data with '-1' set for samples to read at the DAQmx Read function, which results in the different array sizes.
    Passing those arrays directly to a queue is valid; it has no significant performance drawback (at least as far as I know), and it definitely does not leak memory.
    So the question is more or less:
    Is it valid that your consumer receives different array sizes for analysis? How does your consumer handle those arrays? 
    hope this helps,
    Norbert 
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • MEGA memory leak!! cannot use PS ideas please?! deadline looming!!

    Hello PS community!!
    I'm running PS 5.5 on a MacBook Pro Retina with OS X 10.7.5.
    Despite having more than 30 GB free on my disk, within 1 minute of opening my file (Doc 199.1M/655.6M) all 30 GB is consumed, leaving just 10 MB, and I cannot work any more once the 'scratch disks are full' message appears. Then, as soon as I close PS, all of the space is restored!
    The scratch disks are set to the default Mac HD, as they always have been with no problems, and I'm working in other programs such as Premiere Pro with huge files without any problem.
    It's like there's a black hole just for PS. Please help if anybody has any ideas; I have a deadline looming and I can't do anything in PS at the moment!
    Many thanks
    ALLAN.

    Thanks c.pfaffenbichler
    Yes, it's a top-spec Retina 15" with a solid-state drive.
    My setup has always been sweet and worked brilliantly; this is a new problem. I was handling huge 1.5 GB PS files with no issue and haven't changed any settings since then.
    My HD is just much fuller now.
    Around the same time, a message began popping up when I open Premiere Pro: "The scratch disks are write protected or unavailable. To open this project, the scratch disks will be set to your Documents folder. Would you like to continue?"
    (Then I just click yes and it all works, no problem.)
    I suppose these issues are related.
    I don't generally use Smart Objects, but that's never been an issue with regard to the extra weight of files.
    Cheers

  • Why do I get a "Fatal Run Time Error: Out of Memory" after an hour while using RT on a PXI 8186?

    When I run this code with high-speed DAQ + processing + control for over an hour in RT on an 8186, I get the above error. There were some arrays being built really quickly, etc., so I replaced these with array initializations followed by "Replace Array Subset" blocks. Still no luck.
    I'd really appreciate it if anyone could help me. The code can be posted here as well if required.
    I'm running LV RT 7.1, and the target is a PXI-8186.
    Also, is there any way of allocating extra virtual memory in RT?

    Let me second everything that Aaron mentioned.
    Please post some code that demonstrates your trouble and we will take a look.
    Generally, I have to say that RT applications are rather demanding in the area of memory.
    I find myself avoiding data types that can vary in size, like strings. These can demand increases in buffer allocations such that the buffers previously allocated are insufficient, get tossed on the heap, and another larger block is allocated. Too much of this happening can kill you in RT. Building strings is bad.
    Building arrays is just not allowed.
    LV2 globals (when properly coded) are useful in keeping memory usage static or diminishing. This dovetails with Aaron's circular buffer.
    Review all code that is executed repeatedly using
    Tools >>> Advanced >>> Show Buffer Allocations
    In a nutshell, one or more of these is killing you.
    Post your code so we can make specific suggestions.
    Trying to help,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Client-side Memory leak while executing PL/SQL and reading from a view

    I am noticing memory leaks in OCCI while performing the following:
    Sample function()
    1. Obtain a connection
    2. Create a statement to execute a PL/SQL procedure
    3. Execute the statement created in step #2
    4. Terminate the statement created in step #2
    5. Create a statement to read from a view which was populated
    by executing the stored procedure in step #3
    6. Execute the statement created in step #5
    7. Terminate the statement created in step #5
    8. Release the connection
    The PL/SQL populates a view with a fixed 65,000 records on every execution: it opens a cursor, loads the 65,000 records, populates the target view, and closes the cursor at the end. If I invoke the above function, it results in a memory leak of 4 MB per call. I tried several variants, such as:
    1. Disabling statement caching
    2. Using setSQL instead of creating the second SQL statement from scratch
    3. Obtaining two separate connections for these two activities (PL/SQL exec and view read)
    4. Breaking the sample function into two, one for each of these activities (PL/SQL exec and view read)
    All the combinations result in the same behaviour: a 4 MB leak per call.
    I am using Oracle 10g Client/Server 10.2.0.1.0.
    Are there any known limitations in this area?
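    For reference, a minimal OCCI sketch of the steps above, with every create paired with a terminate (my_proc and my_view are placeholder names, and the connection handling is simplified; this mirrors the flow described in the post, not the poster's actual wrapper code):

    #include <occi.h>
    using namespace oracle::occi;

    void sampleFunction(Environment *env)
    {
        // Step 1: obtain a connection.
        Connection *conn = env->createConnection("user", "password", "db");

        // Steps 2-4: create, execute, and terminate the PL/SQL statement.
        Statement *exec = conn->createStatement("BEGIN my_proc; END;");
        exec->executeUpdate();
        conn->terminateStatement(exec);

        // Steps 5-7: read from the view the procedure populated.
        Statement *read = conn->createStatement("SELECT col1 FROM my_view");
        ResultSet *rs = read->executeQuery();
        while (rs->next())
        {
            // ... consume rs->getString(1) ...
        }
        read->closeResultSet(rs);   // close the result set before the statement
        conn->terminateStatement(read);

        // Step 8: release the connection.
        env->terminateConnection(conn);
    }

    A leak of this shape often comes down to an unmatched create/close pair somewhere in the wrapper layers, so it may be worth auditing that every createStatement and executeQuery has its terminate/close counterpart.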

    Yes, I am closing the result set and terminating the statement.
    My program contains layers of in-house wrapper classes, so it will take me some time to
    translate it into pure OCCI calls to post here.
    After some more debugging, I found that if connection-level statement caching is set to
    0, the memory leak is much smaller than before.
    Thanks.

  • Memory leak when using DB_DBT_MALLOC in CDB.

    Hello!
    Recently, using Berkeley DB CDB with a Dbt value flagged DB_DBT_MALLOC, I noticed that the app's memory keeps growing inside the db.get() loop.
    I found that much of the code on the internet uses free(value.get_data()), but when I tried to free value.get_data(), I always got an interruption error.
    It's OK when I use the DB_DBT_USERMEM flag, so I am quite confused about DB_DBT_MALLOC and DB_DBT_REALLOC.
    Also, I wonder why there is no memory leak problem when we use single-threaded BDB. BDB cannot free the value.data pointer in single-threaded mode either before the app finishes using it, so why don't our apps need to free the data afterwards?
    Thanks a lot in advance!

    You should use free() to free the DBT memory allocated via DB_DBT_MALLOC. You should do that after each Db::get() operation (and of course after finishing processing/using the data); otherwise you will lose the pointer to the memory previously allocated if you happen to reuse the DBT with DB_DBT_MALLOC (the data field will point to a new memory address).
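    A minimal sketch of that pattern with the C++ API (assuming an already-open Db handle; error handling is omitted, since the C++ API throws on failure by default):

    #include <db_cxx.h>
    #include <cstdlib>

    void fetchOne(Db &db, Dbt &key)
    {
        Dbt data;
        data.set_flags(DB_DBT_MALLOC);   // BDB malloc()s the result buffer
        if (db.get(NULL, &key, &data, 0) == 0)
        {
            // ... process data.get_data() / data.get_size() ...
            free(data.get_data());       // caller frees after each get()
            data.set_data(NULL);         // avoid a dangling pointer on reuse
        }
    }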
    Alternatively you could use DB_DBT_REALLOC or DB_DBT_USERMEM.
    Note that there are small structures that BDB creates in the environment regions that only get freed/cleaned when the environment is closed.
    If you suspect that there is a memory leak inside the BDB code, make sure you rebuild Berkeley DB using the following configuration options (along with the others you use) when building: enable-debug, enable-umrw. Then run the program under a memory leak detection utility such as Valgrind (allow the application to open and close the BDB environment) and see if any leaks are reported.
    If memory leaks are reported, then put together a small stand-alone test case program that demonstrates the leaks, and post it here.
    Regards,
    Andrei

  • Memory leak when using Threads?

    I did an experiment and noticed a memory leak when I was using threads. Here's what I did.
    ======================================
    while (true) {
        Scanner sc = new Scanner(System.in);
        String answer;
        System.out.print("Press Enter to continue...");
        answer = sc.next();
        new TestThread();
    }
    ========================================
    And TestThread is the following
    ========================================
    import java.io.*;
    import java.net.*;
    public class TestThread extends Thread {
        public TestThread() { start(); }
        public void run() { }
    }
    =====================================
    When I open Windows Task Manager, every time a new thread starts and stops, java.exe's Mem Usage increases. Is it a memory leak? What is going on in this situation? If I start a thread and it stops, and then I start a new thread, why does it use more memory?
    -Brian

    Move "Scanner sc = new Scanner(System.in);" out of the loop:
    Scanner sc = new Scanner(System.in);
    while (true) {
    That won't matter in any meaningful way.
    Every loop iteration creates a new Scanner, but it also makes a Scanner eligible for GC, so the net memory requirement of the program is constant.
    Now, of course, it's possible that the VM won't bother GCing until 64 MB worth of Scanners have been created, but we don't care about that. If we're allowing the GC 64 MB, then we don't care how it uses it or when it cleans it up.

  • Memory Leak in .swf

    Memory Leak issue with CS4
    Using CS4, we have a memory leak and I cannot find the source of the problem.
    Here is the link to the testing site: http://tiny.cc/O7D3e. If you take a look at your Task Manager, you will see that your RAM continues to increase even after two or three cycles. It does not stabilize.
    We are using the FlashEff | Flash Effects Component to generate the smooth transitions. However, I have done some debugging and even completely deleted the plugin from the file, and it still continually leaks memory. Does anyone have any possible solutions or causes for this?

    One idea: there is a separate stack of memory in the Flash Player where classes loaded into separate application domains live, and these classes are not being garbage collected. However, there is a line in Adobe's documentation here:
    http://livedocs.adobe.com/flash/9.0/main/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00000327.html
    under "Usage C":
    quote:
    Having a new application domain also allows you to unload all
    the class definitions for garbage collection, if you can ensure
    that you do not continue to have references to the child SWF.
    Given that, as far as I can see from this code, no reference to the loaded .swf is maintained, it seems to me that the loaded data (graphical assets AND classes) should be garbage collected. But while you WILL see a slight drop in memory after the removal of the SWF, the overall memory continues to increase the more you do it. Could Adobe be mistaken?

  • SQL toolkit 2.2 Stored Procedure Memory Leaks

    Hi
    We are using CVI 2012 and SQL Toolkit 2.2. My DB is MySQL 5.5.28 and I use MySQL Connector/ODBC 5.2(w).
    I use only stored procedures (with and without output parameters). If I continuously call a stored procedure
    with SQL Toolkit code, I get memory leaks!
    My code (without error handler) is:
    // iDbConnId is the handle of DBConnect() called when program starts
    iStmt = DBPrepareSQL(iDbConnId, "spGetPrData");
    DBSetStatementAttribute(iStmt, ATTR_DB_STMT_COMMAND_TYPE, DB_COMMAND_STORED_PROC);
    DBExecutePreparedSQL(iStmt);
    DBBindColLongLong(iStmt, 1, &llPrId, &lStatus1);
    DBBindColInt(iStmt, 2, &iIpPort, &lStatus2);
    while (DBFetchNext(iStmt) != DB_EOF)
    {
        // get data
    }
    DBClosePreparedSQL(iStmt);
    DBDiscardSQLStatement(iStmt);
    If I call the same stored procedure via SQL code ("CALL spProcedure()"),
    I don't get memory leaks!
    The code is (without error handler):
    // iDbConnId is the handle of DBConnect() called when program starts
    iStmt = DBActivateSQL(iDbConnId, "CALL spGetPrData();");
    DBBindColLongLong(iStmt, 1, &llPrId, &lStatus1);
    DBBindColInt(iStmt, 2, &iIpPort, &lStatus2);
    while (DBFetchNext(iStmt) != DB_EOF)
    {
        // get data
    }
    DBDeactivateSQL(iStmt);
    It seems to me that DBDeactivateSQL frees the memory correctly,
    while DBClosePreparedSQL and DBDiscardSQLStatement do not release the memory properly
    (or I am using the library functions improperly!).
    I also have a question to ask:
    Can I open and close the connection every time I connect to the database, or is it mandatory to open and close it just once?
    If it is mandatory to open and close the connection just once, how can I check the status of the database connection?
    Thanks for your help.
    Best Regards
    Patrizio

    Hi,
    The behavior of the functions DBClosePreparedSQL and DBDiscardSQLStatement is a known problem. In fact, there is a CAR (Corrective Action Request 91490) reporting it. What version of the toolkit are you using?
    These functions are described in the manual:
    http://digital.ni.com/manuals.nsf/websearch/D3DF4019471D3A9386256B3C006CDC78
    To avoid the memory leak, DBDeactivateSQL must be used. DBDeactivateSQL is equivalent to calling DBCloseSQLStatement and then DBDiscardSQLStatement, and it works for parameterized SQL as well.
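    Applied to the snippet from the original post, the teardown would look roughly like this (untested sketch; it simply replaces the last two calls with DBDeactivateSQL):

    iStmt = DBPrepareSQL(iDbConnId, "spGetPrData");
    DBSetStatementAttribute(iStmt, ATTR_DB_STMT_COMMAND_TYPE, DB_COMMAND_STORED_PROC);
    DBExecutePreparedSQL(iStmt);
    DBBindColLongLong(iStmt, 1, &llPrId, &lStatus1);
    DBBindColInt(iStmt, 2, &iIpPort, &lStatus2);
    while (DBFetchNext(iStmt) != DB_EOF)
    {
        // get data
    }
    DBDeactivateSQL(iStmt);   // replaces DBClosePreparedSQL + DBDiscardSQLStatement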
    Regarding the connection, my advice is to open it once and close it at the end of your operations. What do you mean by "status of the database connection"? Remember that if you have connection problems or something goes wrong, the errors will appear at whatever point in your code you query the database. Bye
    Mario

  • Finding memory leaks in java

    Hello,
    I have a memory leak in my application. I have found other memory leaks in the software using JProfiler, but I have a problem getting JProfiler to find this one. My biggest problem is JProfiler itself. If I use short runs, JProfiler works fine. If I run my application for more than half a day, JProfiler eats up memory and after a while it stops responding.
    I tried JProbe, but after profiling my application for 18 hours straight, it got buggy and wouldn't work correctly. It also ate up a lot of memory and couldn't process my snapshot after the 18-hour run. I saved all snapshots and restarted JProbe. The initial snapshot was read correctly, but the final snapshot threw a NegativeArraySizeException.
    So now I am looking for other tools that can find memory leaks without hanging or crashing after running for 24 hours. Any recommendations?

    I don't know any other profiler you could use, but how about the divide-and-conquer approach?
    Disable a significant section of your application and let it run for a day. If the leak occurs, it's in the non-disabled part.
    If it doesn't, it's in the disabled part. Next, enable all the code. Then disable about half of the half that failed and try that. Continue doing so until
    you isolate the code enough that you can look at it and hopefully find the problem. Note that you will need to keep track of what you previously enabled and disabled as a test (I put comments in the code).
    Alternatively, you can look at functions that perform complex functionality that you suspect might be the problem and have the function return
    dummy data rather than perform the functionality. If the leak goes away, it's in that function (or functions).
    Example:
    public String getComplexData(String arg1, String arg2) {
        // this section bypasses the complex code and returns dummy data
        boolean x1 = true;
        if (x1 == true) {
            return "some dummy data";
        }
        // complex functionality goes here.
    }
    It would also help to list everything that can cause a memory leak (I assume it's a memory leak and not a resource leak, such as not closing connections).
    Here are some ideas:
    1) Persistent collection-typed variables that have objects added to them but never removed so they can be garbage collected,
    such as static class variables, session-scope variables, or application-scope variables.
    2) An object tree where one of its nodes is never set to null to be garbage collected (such as a static class variable).
    If you are running a web application in a clustered environment, try running it on only one server.

  • How can I detect memory leaks in large LabVIEW projects?

    Hi,
    I have a huge LabVIEW application that runs out of memory after running continuously for some time. I am not able to find the VI that is hogging memory. Is there any tool that dynamically detects which VI is leaking memory?
    Or is there a tool or a way to identify the critical areas that are the potential culprits for the leak?
    Regards
    Bharath

    Bdev wrote:
    Thanks Dennis.
    I think Desktop Execution toolkit should solve the problem. 
    Wayne wrote:
    Have you tried Tools»Profile»Performance and Memory? http://zone.ni.com/reference/en-XX/help/371361F-01/lvdialog/profile/
    But this will just give me the amount of memory used by the VIs and not the amount of memory that is not getting released.
    And what is the problem with that? Just try to find which VIs keep increasing in memory size; those are the culprits. If you have a real memory leak, meaning there is memory that is not managed by LabVIEW directly but for instance by a DLL somewhere, and that DLL loses its references to the memory so it is really lost, then the only way to find it is to successively exclude functionality from your application until you find the culprit.
    There is no simpler way to find out who is losing memory references than debugging by exclusion until the problem disappears. The only way to speed this up, which quite often works for me, is an educated guess about which components are most likely to misbehave.
    Not knowing anything about your application, or whether you are talking about memory hogs (fairly easily identifiable with the mentioned Performance and Memory monitor) or actual memory leaks, it is hard to say how to go about it. Memory hogs are usually the first thing I suspect, especially with software I inherit from people who may not know all the ins and outs of LabVIEW programming.
    If a leak seems likely, the first culprits usually are custom DLLs (yes, even DLLs I have written myself), then NI DLLs such as DAQmx, etc., and last come leaks in LabVIEW itself. This last category is very rare, but it has happened to me. However, before screaming about LabVIEW having a memory leak you really, really should make sure you have intensively researched all the other possibilities. The chance that you have run into a memory leak in LabVIEW, while not impossible, is so small compared to the other ways of causing either a memory hog or a leak in a component external to LabVIEW that in 99.9% of the cases where someone screams about a LabVIEW memory leak, he is simply wrong.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • How to determine memory leaks?

    I tried, in Xcode, Run / Start with Performance Tool, and tried out the various options. I was running my app and looking to see if it would report increasing memory use, but it seemed to be looking at my total system (I was running under the Simulator). In general, what is the recommended procedure for finding memory leaks, which tool should I use, and what tracing can I use?
    How does one look at the retain count of an object? Are there system routines that have known leaks?

    You took the right path. Once Instruments comes up, select the Leaks tool. Turn off automatic leak detection. In your app, start off at some known state, do something, and come back to the known state and check for leaks. For instance, start off in a view, do something that brings up another view, then come back to the original view and check for leaks. Leaks will show you if you leaked. Since you took a very deterministic path and then checked, it should be straightforward to go to the code and find and fix the leaks. Leaks shows you the code where the leak was generated.
