Memory leak in terminateConnection() function. How can I resolve it?

Hi,
We are facing a memory leak in the terminateConnection() function. Here is a sample program that reproduces it. After running continuously for about 1.5 hours, this simple program was consuming around 250 MB. How can I resolve this problem?
#include <occi.h>
#include <cstdio>
#include <string>
using namespace oracle::occi;
using namespace std;

bool test()
{
    string userName = "mapserver";
    string password = "a";
    string connectString = "//localhost:1521/orcl";
    string query = "SELECT GEOMETRY AS GEOM FROM AG_WATER";

    Environment *env = Environment::createEnvironment(Environment::OBJECT);
    Connection *conn = NULL;
    Statement *stmt = NULL;
    ResultSet *rs = NULL;
    try {
        conn = env->createConnection(userName, password, "");
    } catch (SQLException &ex) {
        printf("ORACLE execution error: [Error Code : %d] : %s\n", ex.getErrorCode(), ex.getMessage().c_str());
        return false;
    }
    env->terminateConnection(conn);
    stmt = NULL;
    conn = NULL;
    rs = NULL;
    Environment::terminateEnvironment(env);
    return true;
}

int main()
{
    // OracleResultSet pResultSet;   // unused in this sample
    while (true) {
        test();
    }
    return 0;
}
Can anybody help me find out what is causing this problem?
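For reference, here is a minimal sketch (not a verified fix for the 250 MB growth) of the cleanup pattern OCCI expects: the environment is created once and reused, the statement and result set are actually created and then released in reverse order, and the connection is terminated even when the query throws. The connection details are simply the ones from the post above.
#include <occi.h>
#include <cstdio>
using namespace oracle::occi;

int main()
{
    // Create the (expensive) environment once, outside the loop.
    Environment *env = Environment::createEnvironment(Environment::OBJECT);
    for (int i = 0; i < 1000; ++i) {
        Connection *conn = NULL;
        try {
            conn = env->createConnection("mapserver", "a", "//localhost:1521/orcl");
            Statement *stmt = conn->createStatement("SELECT GEOMETRY AS GEOM FROM AG_WATER");
            ResultSet *rs = stmt->executeQuery();
            while (rs->next()) {
                // consume the row
            }
            // Release in reverse order of creation; skipping any of these
            // keeps client- and server-side resources alive.
            stmt->closeResultSet(rs);
            conn->terminateStatement(stmt);
            env->terminateConnection(conn);
        } catch (SQLException &ex) {
            printf("ORACLE execution error: [Error Code : %d] : %s\n",
                   ex.getErrorCode(), ex.getMessage().c_str());
            if (conn != NULL)
                env->terminateConnection(conn);  // for brevity, only the connection is cleaned up on errors
        }
    }
    Environment::terminateEnvironment(env);
    return 0;
}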

This is not a web dynpro related question.  Please restrict the questions in this forum to the Web Dynpro ABAP topic.

Similar Messages

  • Memory leak with callback function

    Hi,
    I am fairly new to LabWindows and the ninetv library; I have mostly been working with LabVIEW.
    I am trying to create a basic (no GUI) C++ client that sets up subscriptions to several network variables publishing DAQ data from a PXI.
    The data for each variable is sent in a cluster and contains various datatypes along with a large int16 2D array for the acquired data (the average array size is 100k in total, and the average time between sends is 10 ms). I have on average 10 of these DAQ variables.
    I am passing the same callback function as an argument to all of these subscriptions (CNVCreateSubscription).
    It reads all the correct data, but I have one problem: I am experiencing a memory leak in the callback function that I pass to CNVCreateSubscription.
    I have reduced the code line by line and found the call that actually causes the memory leak, which is CNVGetStructFields(). At this point in the program the data has still not been passed to the client's variables.
    This is a simplified version of the callback function, where I just unpack the cluster and get the data (only showing one field of the cluster in the example, and not showing the declarations).
    The function is passed to the subscribe function like so:
    static void CNVCALLBACK SubscriberCallback(void * handle, CNVData data, void * callbackData);
    CNVCreateSubscriber (url.c_str(), SubscriberCallback, NULL, 0, CNVWaitForever, 0 , &subscriber);
    static void CNVCALLBACK SubscriberCallback(void * handle, CNVData data, void * callbackData)
    {
        int16_t daqValue[100000];
        long unsigned int nDims;
        long unsigned int daqDims[2];
        CNVData fields[1];
        CNVDataType type;
        unsigned short int numFields;

        CNVGetDataType(data, &type, &nDims);
        CNVGetNumberOfStructFields(data, &numFields);
        CNVGetStructFields(data, fields, numFields); // <------- HERE IS THE PROBLEM: I can comment out the code after this point and it still causes a memory leak.
        CNVGetDataType(fields[0], &type, &nDims);
        CNVGetArrayDataDimensions(fields[0], nDims, daqDims);
        CNVGetArrayDataValue(fields[0], type, daqValue, daqDims[0] * daqDims[1]);
        CNVDisposeData(data);
    }
    At the average settings I use up all my system's memory (4 GB) within one hour.
    My question is: has anyone else experienced this, and what could the problem/solution be?
    Thanks.

    Of course... if there is something I hate more than mistakes, it is obvious mistakes.
    Thank you for pointing it out, now everything works.
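    For readers hitting the same issue: the accepted reply does not spell the fix out, but the usual cause with this API is that each CNVData handle returned by CNVGetStructFields must itself be disposed, not just the top-level handle. A minimal sketch of the callback under that assumption (header name assumed):
    #include <cvinetv.h>   /* LabWindows/CVI Network Variable library (header name assumed) */

    static void CNVCALLBACK SubscriberCallback(void *handle, CNVData data, void *callbackData)
    {
        CNVData fields[1];
        unsigned short int numFields;

        CNVGetNumberOfStructFields(data, &numFields);
        CNVGetStructFields(data, fields, numFields);

        /* ... unpack the array from fields[0] exactly as in the original code ... */

        /* Each field handle is its own CNVData object and must be disposed
           in addition to the top-level handle; otherwise every callback leaks. */
        CNVDisposeData(fields[0]);
        CNVDisposeData(data);
    }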

  • Memory leak in a function registered using set_restore_function()

    I am experiencing a memory leak caused by the following function:
    void RestorePhonemesSet(PhonemesSetStructType &phonemesSet, const void *src) {
        const char *p = (const char *) src;
        memcpy(&phonemesSet.len, p, sizeof (int));
        p += sizeof (int);
        memcpy(&phonemesSet.whichFile, p, sizeof (int));
        p += sizeof (int);
        memcpy(&phonemesSet.whichPosition, p, sizeof (int));
        p += sizeof (int);
        phonemesSet.phonemes = (int *) malloc(sizeof (int) * phonemesSet.len);
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ here is the problematic code
        memcpy(phonemesSet.phonemes, p, sizeof (int) * phonemesSet.len);
    }
    This function is registered using the following call: DbstlElemTraits<PhonemesSetStructType>::instance()->set_restore_function(RestorePhonemesSet);
    The culprit is the malloc memory allocation. If I leave out the malloc, the program crashes. If I free the phonemesSet.phonemes memory segment at the end of the restore function, I lose data, and if I keep the malloc there is a large memory leak while reading every record from the database.
    What should I do to prevent the memory leak?
    Regards,
    markur
    Edited by: 904259 on 2011-12-24 05:42

    The solution is to use the memory already pointed to by p; there is no need to allocate new memory for the variable. The problematic line should look like: phonemesSet.phonemes = (int *)p;
    The memcpy call on the following line is then superfluous.
    Edited by: 904259 on 2011-12-24 05:43
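    For completeness, a sketch of the restore function with the fix described above applied; it assumes, as the reply implies, that the buffer handed to the restore callback stays valid for as long as the record is used:
    void RestorePhonemesSet(PhonemesSetStructType &phonemesSet, const void *src) {
        const char *p = (const char *) src;

        memcpy(&phonemesSet.len, p, sizeof (int));
        p += sizeof (int);
        memcpy(&phonemesSet.whichFile, p, sizeof (int));
        p += sizeof (int);
        memcpy(&phonemesSet.whichPosition, p, sizeof (int));
        p += sizeof (int);

        /* Reuse the marshalled buffer directly instead of malloc'ing a copy,
           so there is nothing left to leak and no extra memcpy is needed. */
        phonemesSet.phonemes = (int *) p;
    }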

  • Serious Memory Leak Problem. Needs to be resolved ASAP

    Ok, I have been researching this problem all night and I cannot figure it out!
    I am running Final Cut Studio 2 / Final Cut Pro 6. I edited a piece today and I have been trying to render it for 3 hours. I have cleared out the preferences, I have gone through all of the picture files, and I have deleted the render files and started over. No matter what, I watch in Activity Monitor as it bleeds memory and stops with an out-of-memory error message. I would really appreciate it if anyone has another solution I could try.

    I don't think the extensions are the problem. As David said, if any of those images were saved in a color mode other than RGB, you'll have trouble. This usually only occurs when you put the shots in the timeline, but you'll want to get rid of them from the browser as well, just to be sure.
    "I basically imported a disc of random pics at the beginning of the project"
    I assume that you copied the contents of the disk to your hard drive first otherwise FCP will be forever reading those files from the CD.
    rh

  • Every week Production is Non-Functional - How to resolve this?

    Dear Legends,
    Our ENV:
    OS - Redhat Linux 64 bit
    DB - 11.2.0.3 RAC
    Weblogic - 10.3.6
    SOA Suite - 11.1.1.6 - we have 2 managed soa_servers and 1 admin server
    JVM - Sun JDK 1.6.0_29
    RAM - 24G
    For the past month we have been facing the following issues very often:
    1. EM is extremely slow, due to low space in the DB (total 47 G, used 42 G, free 5 G).
    2. If we click any of the processes, it sometimes comes up with a FABRIC WARNING status lasting 30 minutes. Is this due to the dehydration store space? We purge every week, but we still have more than 50000 records older than 90 days.
    3. Nowadays we often get stuck threads that fully utilize the JVM and make the servers react badly. Is there a way to UNSTUCK a STUCK THREAD MANUALLY?
    Please help me keep the servers from going down until our maintenance window.
    Thanks,
    Karthik

    Hi,
    Thanks, I expected that purge would be the issue, but the purge scripts only look for instances that are in a completed state. They do not cover RUNNING and FAULTED, FAULTED, FAULT and COMPLETE, FAULT and RUNNING.
    "To solve the third issue, you'll have to take some thread dumps while you have a stuck thread on your JVM. If you post them here, I can help you analyze them and detect what is causing this behavior." Currently I have 2 stuck threads in soa_server1; they have been there for 3 days and show WARNING status. Thread dumps are here.
    "And no, there is no way to unstick a thread. The only way to solve it is to set up a work manager to restart the JVM when a thread becomes stuck. But the right way is to investigate the thread issues." Won't restarting the JVM affect other web service calls? What will the server's reaction be when a work manager restarts the JVM?

  • Memory leak versus inactive memory

    Hi:
    My iMac was becoming painfully slow a year or so ago. This seemed to be due to a lack of RAM, so I upgraded (2 GB to 4 GB total). This worked great, but now I seem to be running out of memory again, even when there are not a lot of programs open.
    I’ve started tracking memory usage with Activity Monitor, and typically a huge chunk of my memory is “Inactive”. I.e., at this time I have 215 MB free, 450 MB Wired, 1.6 GB active, and 1.7 GB inactive. This despite only having Word, Outlook, OmniOutliner, Safari and iTunes open. Under System Memory (in Activity Monitor) it says Finder at 250 MB, Word at 131 MB, iTunes at 120 MB, Outlook at 80 MB, OO at 46 MB, and Dropbox at 34 MB (then a bunch of smaller usages). Even being very generous, the total memory usage only seems to add up to ~1GB, far short of the 4 GB installed.
    I have no idea what a real “memory leak” actually is, but I’ve heard the term bandied about. I’ve had some weird, nonreproducible system crashes in the last few months where the system just totally freezes, often (but not always) putting a nice colour pattern on the monitors. Looking around, some folks have said that this might be due to a memory problem since everything else seems to check out AOK.
    Thus, several questions:
    Might I have a “memory leak”? If so, how do I diagnose and fix it?
    What is the 1.7 GB of “inactive” memory being used for? Why is my Free memory so small when this big chunk of inactive memory is sitting there?
    Thank you very much!
    OS 10.8.2, 2.8 GHz Intel Core 2 Duo with 4 GB 800 MHz DDR2 SDRAM

    The way Safari accumulates memory is normal. The way it is trying to page that memory to disc and erroring out is not. I think the integrity of your disc volume / catalog and directories may be to blame.
    Try a bit of basic maintenance:
    Repairing permissions is important, and should always be carried out both before and after any software installation or update.
    Go to Disk Utility (this is in your Utilities Folder in your Application folder) and click on the icon of your hard disk (not the one with all the numbers).
    In First Aid, click on Repair Permissions.
    This only takes a minute or two in Tiger, but much longer in Leopard.
    Background information here:
    http://docs.info.apple.com/article.html?artnum=25751
    and here:
    http://docs.info.apple.com/article.html?artnum=302672
    An article on troubleshooting Permissions can be found here:
    http://support.apple.com/kb/HT2963
    By the way, you can ignore any messages about SUID or ACL file permissions, as explained here:
    http://support.apple.com/kb/TS1448?viewlocale=en_US
    If you were having any serious problems with your Mac you might as well complete the exercise by repairing your hard disk as well. You cannot do this from the same start-up disk. Reboot from your install disk (holding down the C key). Once it opens, select your language, and then go to Disk Utility from the Utilities menu. Select your hard disk as before and click Repair.
    Once that is complete reboot again from your usual start-up disk.
    More useful reading here:
    Resolve startup issues and perform disk maintenance with Disk Utility and fsck
    http://support.apple.com/kb/TS1417?viewlocale=en_US
    For a full description of how to resolve Disk, Permission and Cache Corruption, you should read this FAQ from the X Lab:
    http://www.thexlab.com/faqs/repairprocess.html

  • Memory leak with CryptGetProvParam

    I suspect there is a memory leak in the function CryptGetProvParam. I made a Win32Console app in Visual Studio 2008 to recreate the problem. With the example app running it is possible to use Task Manager to see the memory usage of the .exe steadily increase
    over time.
    I called the app TestSecurityDescriptorWin32Console. Below I have pasted the contents of TestSecurityDescriptorWin32Console.cpp.
    I am looking to verify that the problem is with the CryptGetProvParam function, and if this is the case then to get the bug fixed.
    Thank you for any help
    // TestSecurityDescriptorWin32Console.cpp : Defines the entry point for the console application.
    #include "stdafx.h"
    #include <windows.h>
    #include <stdio.h>

    HCRYPTPROV GetContext();
    void ModifyCryptProvSecurity(HCRYPTPROV hProv);

    int _tmain(int argc, _TCHAR* argv[])
    {
        DWORD dwSDLen = 0;
        HCRYPTPROV hProv = GetContext();

        // get dwSDLen
        printf("First CryptGetProvParam: %d\n", CryptGetProvParam(hProv, PP_KEYSET_SEC_DESCR, NULL, &dwSDLen, DACL_SECURITY_INFORMATION));

        while (1) {
            BYTE* pbSD = (BYTE*)LocalAlloc(LHND, dwSDLen);
            PSECURITY_DESCRIPTOR pSD = (PSECURITY_DESCRIPTOR)pbSD;
            InitializeSecurityDescriptor(pSD, SECURITY_DESCRIPTOR_REVISION);

            // make pbSD size dwSDLen
            printf("Second CryptGetProvParam: %d\n", CryptGetProvParam(hProv, PP_KEYSET_SEC_DESCR, pbSD, &dwSDLen, DACL_SECURITY_INFORMATION));

            SECURITY_DESCRIPTOR* pSecurityDescriptor = (SECURITY_DESCRIPTOR*)pbSD;
            if (pSecurityDescriptor->Owner != NULL) FreeSid(pSecurityDescriptor->Owner);
            if (pSecurityDescriptor->Group != NULL) FreeSid(pSecurityDescriptor->Group);
            if (pSecurityDescriptor->Sacl != NULL) LocalFree(pSecurityDescriptor->Sacl);
            if (pSecurityDescriptor->Dacl != NULL) LocalFree(pSecurityDescriptor->Dacl);
            LocalFree(pbSD);

            Sleep(100);
        }
        return 0;
    }

    HCRYPTPROV GetContext()
    {
        HCRYPTPROV hProv = NULL;
        HCRYPTKEY hKey = NULL;
        CHAR szPPName[100];
        DWORD dwPPNameLen = 100;
        CHAR szPPContainer[100];
        DWORD dwPPContainerLen = 100;

        // Attempt to acquire a handle to the default key container.
        if (CryptAcquireContext(&hProv, NULL, MS_DEF_PROV, PROV_RSA_FULL, CRYPT_MACHINE_KEYSET)) {
            printf("Opened default key container for crypto usage");
        } else {
            // Some sort of error occurred...
            printf("(Allowable) Error acquiring default key container for crypto usage - ");
            // ...try to create the default key container.
            if (CryptAcquireContext(&hProv, NULL, MS_DEF_PROV, PROV_RSA_FULL, CRYPT_MACHINE_KEYSET | CRYPT_NEWKEYSET)) {
                printf("Created key container for crypto usage");
            } else {
                printf("Error creating key container for crypto usage - ");
                return (NULL);
            }
        }

        // Get CSP name.
        if (CryptGetProvParam(hProv, PP_NAME, (BYTE *)szPPName, &dwPPNameLen, 0)) {
            printf("Key CSP name is '%s'", szPPName);
        } else {
            // Error getting CSP name.
            printf("Failed to retrieve CSP name - ");
        }

        // Get name of key container.
        if (CryptGetProvParam(hProv, PP_CONTAINER, (BYTE *)szPPContainer, &dwPPContainerLen, 0)) {
            printf("Key container provider name is '%s'", szPPContainer);
        } else {
            // Error getting key container name.
            printf("Failed to retrieve key container name - \n");
        }

        return hProv;
    }

    "BYTE* pbSD = (BYTE*)LocalAlloc(LHND,dwSDLen);"
    LHND? It's surprising that this doesn't crash the program. Use LPTR.
    More generally, stay away from LocalAlloc/LocalFree; use malloc/new/HeapAlloc. Use LocalAlloc only if the API you're working with specifically requires LocalAlloc; that's rare.
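    A small sketch of the allocation the reply suggests (the helper name is just for illustration; the buffer would be passed to CryptGetProvParam as in the loop above):
    #include <windows.h>
    #include <stdlib.h>

    void AllocateSdBuffer(DWORD dwSDLen)
    {
        /* LPTR returns a zero-initialised fixed pointer, unlike LHND, which
           returns a moveable handle that would normally need LocalLock. */
        BYTE *pbSD = (BYTE *)LocalAlloc(LPTR, dwSDLen);
        /* ... pass pbSD to CryptGetProvParam ... */
        LocalFree(pbSD);

        /* Or avoid LocalAlloc entirely, as recommended: */
        BYTE *pbSD2 = (BYTE *)malloc(dwSDLen);
        /* ... */
        free(pbSD2);
    }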

  • Free memory after using GetRS232ErrorString() to avoid memory leak?

    Hello,
    Is it necessary to free memory after calling the function GetRS232ErrorString() in order to avoid a memory leak?
    Example 1:
    int main(void)
    {
        char *strError = NULL;
        strError = GetRS232ErrorString(55); /* just an example error code */
        free(strError); /* Do I need to free this pointer? */
        return 0;
    }
    Example 2:
    int main(void)
    {
        MessagePopup("Error", GetRS232ErrorString(55)); /* Will I get a memory leak with this function call? */
        return 0;
    }
    BR
    Frank

    It's a pity that the documentation is indeed so poor in this case, but testing shows that it always returns the same pointer no matter the error code, so it seems to be using an internal buffer and you are not supposed to free the string (but you need to copy it before the next call to GetRS232ErrorString if you want to keep the text). It does, however, return a different pointer for every thread, so at least it seems to be thread safe.
    Cheers, Marcel 
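    Based on the observation above, a minimal sketch of keeping the text around without freeing the library's buffer (header name and buffer size are assumptions):
    #include <rs232.h>     /* LabWindows/CVI RS-232 library header (name assumed) */
    #include <string.h>

    static char lastErrorText[256];

    void RememberRs232Error(int errorCode)
    {
        const char *msg = GetRS232ErrorString(errorCode);
        /* Copy before the next GetRS232ErrorString call reuses the internal buffer;
           do not free() the returned pointer. */
        strncpy(lastErrorText, msg, sizeof(lastErrorText) - 1);
        lastErrorText[sizeof(lastErrorText) - 1] = '\0';
    }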

  • Memory Leak on General Block when using Java Script

    Hi,
    I have a web view that runs HTML. Within the HTML there is JavaScript that uses a third-party object; this object displays some data and grows dynamically. Every time I invoke this object to grow, I see in the Instruments monitor a whole bunch of GeneralBlock-256 (256 byte) memory leaks. I was wondering how I can manage this memory. I know that in JavaScript, if we assign null to an object it will be released, but the leak is already reported before I assign the object to null.
    Is there any way I can use, let's say, NSAutoreleasePool?
    Thanks.

    The memory leak is pretty much gone with the new OS 2.2.1 release, as Apple has improved memory management for Safari.

  • Memory leak in JCO when calling an ABAP function that returns large tables

    Hello everybody,
    I think I have discovered a memory leak in JCo when calling functions that have exporting tables with large datasets. For example the ABAP function RFC_READ_TABLE, which in this example I use to retrieve data from a table called "RSZELTTXT", which contains ~120000 datasets. RFC_READ_TABLE exports the data as the table "DATA".
    Here a simple JUnit test:
    http://pastebin.ca/1420451
    When running it with Sun Java 1.6 with standard heap size of 64mb I get a heapsize OutOfMemory error:
    http://pastebin.ca/1420472
    Looking at the heap dump (which I unfortunately cannot post here because of its size), I can see that I have 65000 char[512] array objects in my heap which don't get cleaned up. I think each char[512] array stands for one dataset in the exporting table "DATA"; since the table contains 120000 datasets, the heap is full after the first 65000 datasets are parsed. Apparently, JCo tries to read all datasets into memory instead of just reading the dataset to which the pointer (JCoTable.setRow(i)) currently points and releasing it from memory after the pointer moves forward ...
    Did anybody else experience this?
    Is SAP going to fix this issue in upcoming versions of JCo?
    regards Samir

    Hi,
    Check the links below:
    1) How To Analyze Performance Problems JCO
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/3fbea790-0201-0010-6481-8370ebc3c17d
    2) How to Avoid Memory Leaks 
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c3e598fe-0601-0010-e990-b8622840c8c2
    Salil...
    Edited by: salil chavan on Jun 2, 2009 5:21 AM

  • How to root out memory leak with  Java JNI & Native BDB 11g ?

    We are testing a web application using the 32-bit compiled native 11g version of BDB (with replication) under a 32-bit IBM 1.5 JVM via JNI on 64-bit Red Hat Linux. We are experiencing what appears to be a memory leak without a commensurate increase in Java heap size. Basically, the process size continues to grow until the maximum 32-bit process size (4 GB) is reached and the process eventually stops running (no core). The Java heap is set to 2 GB min/max. GCs are nominal, so the leak appears to be native and outside Java bytecode.
    We need to determine whether there is a memory leak in BDB, or in the IBM JVM, or simply a misuse of BDB in the Java code. What tools, instrumentation, or DB statistics should be used to help get to the root cause? Do you recommend using SystemTap (with some particular script)? What DB stats should we capture to get to the bottom of this memory leak? What troubleshooting steps can you recommend?
    Thanks ahead of time.
    JE.
    Edited by: 787930 on Aug 12, 2010 5:42 PM

    That's troublesome... DB itself doesn't have stats that track VM in any useful way. I am not familiar with SystemTap but a quick look at it seems to imply that it's better for kernel monitoring than user space. It's pretty hard to get DB to leak significant amounts of memory. The reason is that it mostly uses shared memory carved from the environment. Also if you are neglecting to close or delete some object DB generally complains about it somewhere.
    I don't see how pmap would help if it's a heap leak but maybe I'm missing something.
    One way to rule DB out is to replace its internal memory allocation functions with your own that are instrumented to track how much VM has been allocated (and freed). This is very easy to do using the interfaces:
    db_env_set_func_malloc()
    db_env_set_func_free()
    These are global to your process and your functions will be used where DB would otherwise call malloc() and free(). How you get usage information out of the system is an exercise left to the reader :-) If it turns out DB is the culprit then there is more thinking to do to isolate the problem.
    Other ideas that can provide information if not actual smoking guns:
    -- accelerate reproduction of the problem by allocating nearly all of the VM to the JVM and the DB cache (or otherwise limit the allowable VM in your process)
    -- change the VM allocated to the JVM in various ways
    Regards,
    George
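    To make the db_env_set_func_malloc()/db_env_set_func_free() suggestion concrete, here is a rough sketch of such instrumented allocators. The running byte counter and the size-prefix trick are illustrative assumptions, and a realloc hook is added as well so that DB's reallocations stay consistent with the prefixed blocks:
    #include <db.h>
    #include <stdlib.h>

    static volatile long g_db_live_bytes = 0;   /* net bytes currently allocated by DB */

    static void *counting_malloc(size_t size)
    {
        /* Over-allocate so the size can be recovered when the block is freed. */
        size_t *p = (size_t *)malloc(size + sizeof(size_t));
        if (p == NULL)
            return NULL;
        *p = size;
        g_db_live_bytes += (long)size;
        return p + 1;
    }

    static void counting_free(void *ptr)
    {
        if (ptr == NULL)
            return;
        size_t *p = (size_t *)ptr - 1;
        g_db_live_bytes -= (long)*p;
        free(p);
    }

    static void *counting_realloc(void *ptr, size_t size)
    {
        if (ptr == NULL)
            return counting_malloc(size);
        size_t *old = (size_t *)ptr - 1;
        long old_size = (long)*old;
        size_t *p = (size_t *)realloc(old, size + sizeof(size_t));
        if (p == NULL)
            return NULL;
        *p = size;
        g_db_live_bytes += (long)size - old_size;
        return p + 1;
    }

    int install_db_alloc_hooks(void)   /* call before any DB environment is created */
    {
        int ret;
        if ((ret = db_env_set_func_malloc(counting_malloc)) != 0)
            return ret;
        if ((ret = db_env_set_func_realloc(counting_realloc)) != 0)
            return ret;
        return db_env_set_func_free(counting_free);
    }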

  • How can I detect a "memory leak" in large LabVIEW projects?

    Hi,
    I have a huge LabVIEW application that runs out of memory after running continuously for some time. I am not able to find out which VI is hogging memory. Is there any tool that dynamically detects the VI that is leaking memory?
    Or is there a tool or a way to identify the critical areas that are potential memory-leak culprits?
    Regards
    Bharath

    Bdev wrote:
    Thanks Dennis.
    I think Desktop Execution toolkit should solve the problem. 
    Wayne wrote:
    Have you tried Tools»Profile»Performance and Memory? http://zone.ni.com/reference/en-XX/help/371361F-01/lvdialog/profile/
    But this will just give me the amount of memory used by the VIs, not the amount of memory that is not getting released.
    And where is the problem with that? Just try to find which VIs keep increasing in memory size. Those are the culprits. If you have real memory leaks, meaning there is memory that is not managed by LabVIEW directly but for instance by a DLL somewhere, and that DLL loses references to memory so it is really lost, then the only way to find that is by successively excluding functionality from your application until you can find the culprit.
    There is no other simple way to find out who is losing memory references than debugging by exclusion until the problem disappears. The only way to speed this up, which quite often works for me, is to make an educated guess about which components are most likely to show this misbehaviour.
    Not knowing anything about your application, or whether you are talking about memory hogs (fairly easily identifiable with the mentioned Performance and Memory monitor) or actual memory leaks, it is hard to tell how to go about it. Memory hogs are usually the first thing I suspect, especially with software I inherit somehow from people whom I'm not sure know all the ins and outs of LabVIEW programming.
    If a leak seems likely, the first culprits usually are custom DLLs (yes, even DLLs I have written myself), then NI DLLs such as DAQmx, etc., and last come leaks in LabVIEW itself. This last category is very rare, but it has happened to me. However, before screaming about LabVIEW having a memory leak you really, really should make sure you have very intensively researched all the other possibilities. The chance that you have run into a memory leak in LabVIEW, while not impossible, is so small compared to the other ways of causing either a memory hog or a leak in a component external to LabVIEW, that in 99.9% of the cases where someone screams about a LabVIEW memory leak, he is simply wrong.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • How to determine memory leaks?

    In Xcode I tried Run / Start with Performance Tool and tried out the various options. I was running my app and looking to see if it would report increasing memory use, but it seemed to be looking at my total system (I was running under the simulator). In general, what is the recommended procedure for determining memory leaks, which tool should I use, and what tracing can I use?
    How does one look at the retain count of an object? Are there system routines that have known leaks?

    You took the right path. Once Instruments comes up, select the Leaks tool. Turn off automatic leak detection. In your app, start off at some known state, do something, and come back to the known state and check for leaks. For instance, start off in a view, do something that brings up another view, then come back to the original view and check for leaks. Leaks will show you if you leaked. Since you took a very deterministic path and then checked, it should be straightforward to go to the code and find/fix the leaks. Leaks shows you the code where the leak was generated.

  • How to resolve this Error ORA-04030: out of process memory when trying to a

    Hi
    I am connecting as SYSDBA and trying to execute a query on V$Logmnr_contents, but I am getting the following error:
    ORA-04030: out of process memory when trying to allocate 408 bytes (T-LCR
    structs,krvuinl_InitNewLcr)
    Can anyone guide me on how to resolve this issue?
    Thanks

    Hi,
    As the root user, edit the /etc/sysconfigtab file, set the udp_recvspace parameter to 262144, and reboot the machine:
    inet:
    udp_recvspace = 262144
    Metalink note 297030.1 Ora-04030 During Execution Of LogMiner Query
    Nicolas.

  • How to resolve the error "low paging memory"

    Hi all,
    I developed a report which runs fine in development and in quality, but when it runs in production it shows the dump 'low paging memory'. How can I resolve this? Can anyone please suggest?

    Hi,
    1) Run an ST05 trace in your quality system and fine-tune the memory-consuming SELECT queries.
    2) Discuss this error with the Basis team; they can probably help you out by increasing some space in PRODN.
    Regards
    Subbu
