Memory leak using 10.2.0.3 OCCI client on Solaris 10

Hi,
We are using OCCI client libraries to connect our C++ program to the Oracle Database. The program does a lot of selects, inserts and SP calls.
Oracle client and Oracle server both are 10.2.0.3 on Solaris 10.
We have been observing a memory leak of about 4 MB in the C++ program every few minutes for the last few days. After debugging with Purify, libumem, and Sun Studio 12, we finally managed to narrow the problem down to the Oracle client library OCI calls.
The Sun Studio leak check shows the following:
Leak #37, Instances = 157, Bytes Leaked = 655004
kpummapg + 0x00000098
kghgex + 0x00000648
kghfnd + 0x000005BC
kghalo + 0x00000A6C
kghgex + 0x000003BC
kghfnd + 0x000005BC
kghalo + 0x00000A6C
kghgex + 0x000003BC
kghfnd + 0x000005BC
kghalo + 0x00000A6C
kpuhhalo + 0x00000558
kpugdesc + 0x00000AD4
kpugparm + 0x00000374
COCIResultSet::InterpretData() + 0x000001B4
COCIResultSet::COCIResultSet(COCIStatement*,OCIStmt*,OCIError*) + 0x000000A4
COCIStatement::PrepareResult() + 0x00000190
A select is executed, a result set is fetched, and the result set is immediately closed. The same piece of code has been running on various production systems without any problems. Most of the other sites are either 10.2.0.4 or 9i.
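To illustrate, the access pattern is roughly the following (a minimal OCCI sketch, not the actual application code; the connection handling, SQL text and column index are placeholders):

#include <occi.h>
#include <string>
using namespace oracle::occi;

// Sketch only: execute a select, fetch the rows, then immediately release
// the result set and the statement, as described above.
void runSelect(Connection *conn)
{
    Statement *stmt = conn->createStatement("SELECT col1 FROM some_table");
    ResultSet *rs = stmt->executeQuery();
    while (rs->next()) {
        std::string v = rs->getString(1);   // read the row data
    }
    stmt->closeResultSet(rs);               // result set closed immediately
    conn->terminateStatement(stmt);         // statement released as well
}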
On searching Metalink and various other forums, I found similar issues faced in 10.2.0.1.
Could someone advise whether there are any corresponding bugs that have since been closed? Would upgrading to 10.2.0.4 solve the problem?
Thanks.

Please ... one post and one post only in the group most appropriate to your inquiry. Please open an SR at metalink.

Similar Messages

  • Memory leak using 10.2.0.3 OCI client on Solaris

    Hi,
    We are using OCI client libraries to connect our C++ program to the Oracle Database. The program does a lot of selects, inserts and SP calls.
    Oracle client and Oracle server both are 10.2.0.3.
    We have been observing a memory leak of about 4 MB in the C++ program every few minutes for the last few days. After debugging with Purify, libumem, and Sun Studio 12, we finally managed to narrow the problem down to the Oracle client library OCI calls.
    The Sun Studio leak check shows the following:
    Leak #37, Instances = 157, Bytes Leaked = 655004
    kpummapg + 0x00000098
    kghgex + 0x00000648
    kghfnd + 0x000005BC
    kghalo + 0x00000A6C
    kghgex + 0x000003BC
    kghfnd + 0x000005BC
    kghalo + 0x00000A6C
    kghgex + 0x000003BC
    kghfnd + 0x000005BC
    kghalo + 0x00000A6C
    kpuhhalo + 0x00000558
    kpugdesc + 0x00000AD4
    kpugparm + 0x00000374
    COCIResultSet::InterpretData() + 0x000001B4
    COCIResultSet::COCIResultSet(COCIStatement*,OCIStmt*,OCIError*) + 0x000000A4
    COCIStatement::PrepareResult() + 0x00000190
    A select is executed, a result set is fetched, and the result set is immediately closed. The same piece of code has been running on various production systems without any problems. Most of the other sites are either 10.2.0.4 or 9i.
    On searching Metalink and various other forums, I found similar issues faced in 10.2.0.1.
    Could someone advise whether there are any corresponding bugs that have since been closed? Would upgrading to 10.2.0.4 solve the problem?
    Thanks.

    Hi,
    Apparently a similar issue is being discussed over here:
    Re: Memory Leak
    Hope it helps.
    Regards,
    Naveed.

  • Bug:4705928 PLSQL: Memory leak using small varrays

    We have Oracle version 10.2.0.1.0
    We have a problem with a procedure.
    In our scenario we make use of VARRAY in the procedure to pass some filter parameters to a select distinct querying a view made on three tables.
    Unfortunately, execution is not always successful.
    Sometimes it returns a wrong value (0 for the count parameter), and sometimes (rarely) the server stops working.
    We suspect that this is caused by a bug fixed in version 10.2.0.3.0:
    Bug:4705928 PLSQL: Memory leak using small varrays when trimming the whole collection and inserting into it in a loop
    We suspect this because we made two procedures: the first (spProductCount) uses a function (fnProductFilter) to calculate the values of a varray and passes them into the select,
    while in the second procedure (spProductCount2) parameters are passed directly into the statement without a varray,
    and there are failures only in the first procedure.
    On the other hand, on another 10.2.0.1.0 server we never have this problem.
    The instance manifesting the bug runs in shared server mode, while the other runs in dedicated mode.
    Turning the first one to dedicated mode makes the bug disappear.
    Unfortunately this is not a solution.
    In the sample there are the three tables with all constraints, the view, the varray custom type, the function and the two procedures.
    Is there someone who can examine our sample and tell us whether the PL/SQL code corresponds to the bug description?
    We also want to know whether it's possible that the same server running in different modes (SHARED/DEDICATED) doesn't behave the same way.
    The tables:
    --Products
    CREATE TABLE "Products" (
         "Image" BLOB
         , "CatalogId" RAW(16)
         , "ProductId" RAW(16)
         , "MnemonicId" NVARCHAR2(50) DEFAULT ''
         , "ProductParentId" RAW(16)
    ALTER TABLE "Products"
         ADD CONSTRAINT "NN_Products_M04" CHECK ("CatalogId" IS NOT NULL)
    ALTER TABLE "Products"
         ADD CONSTRAINT "NN_Products_M05" CHECK ("ProductId" IS NOT NULL)
    ALTER TABLE "Products"
    ADD CONSTRAINT "PK_Products"
    PRIMARY KEY ("ProductId")
    CREATE INDEX "IX_Products"
    ON "Products" ("CatalogId", "MnemonicId")
    CREATE UNIQUE INDEX "UK_Products"
    ON "Products" (DECODE("MnemonicId", NULL, NULL, RAWTOHEX("CatalogId") || "MnemonicId"))
    --Languages
    CREATE TABLE "Languages" (
         "Description" NVARCHAR2(250)
         , "IsStandard" NUMBER(1)
         , "LanguageId" RAW(16)
         , "MnemonicId" NVARCHAR2(12)
    ALTER TABLE "Languages"
         ADD CONSTRAINT "NN_Languages_M01" CHECK ("LanguageId" IS NOT NULL)
    ALTER TABLE "Languages"
         ADD CONSTRAINT "NN_Languages_M05" CHECK ("MnemonicId" IS NOT NULL)
    ALTER TABLE "Languages"
    ADD CONSTRAINT "PK_Languages"
    PRIMARY KEY ("LanguageId")
    ALTER TABLE "Languages"
    ADD CONSTRAINT "UK_Languages"
    UNIQUE ("MnemonicId")
    --ProductDesc
    CREATE TABLE "ProductDesc" (
         "Comment" NCLOB
         , "PlainComment" NCLOB
         , "Description" NVARCHAR2(250)
         , "DescriptionText" NCLOB
         , "PlainDescriptionText" NCLOB
         , "LanguageId" NVARCHAR2(12)
         , "ProductId" RAW(16)
    ALTER TABLE "ProductDesc"
         ADD CONSTRAINT "NN_ProductDescM01" CHECK ("LanguageId" IS NOT NULL)
    ALTER TABLE "ProductDesc"
         ADD CONSTRAINT "NN_ProductDescM02" CHECK ("ProductId" IS NOT NULL)
    ALTER TABLE "ProductDesc"
    ADD CONSTRAINT "PK_ProductDesc"
    PRIMARY KEY ("ProductId", "LanguageId")
    ALTER TABLE "ProductDesc"
    ADD CONSTRAINT "FK_ProductDesc1"
    FOREIGN KEY("ProductId") REFERENCES "Products" ("ProductId")
    ALTER TABLE "ProductDesc"
    ADD CONSTRAINT "FK_ProductDesc2"
    FOREIGN KEY("LanguageId") REFERENCES "Languages" ("MnemonicId")
    /
    The view:
    --ProductView
    CREATE OR REPLACE VIEW "vwProducts"
    AS
         SELECT
               "Products"."CatalogId"
              , "ProductDesc"."Comment"
              , "ProductDesc"."PlainComment"
              , "ProductDesc"."Description"
              , "ProductDesc"."DescriptionText"
              , "ProductDesc"."PlainDescriptionText"
              , "Products"."Image"
              , "Languages"."MnemonicId" "LanguageId"
              , "Products"."MnemonicId"
              , "Products"."ProductId"
              , "Products"."ProductParentId"
              , TRIM(NVL("ProductDesc"."Description" || ' ', '') || NVL("ParentDescriptions"."Description", '')) "FullDescription"
         FROM "Products"
         CROSS JOIN "Languages"
         LEFT OUTER JOIN "ProductDesc"
         ON "Products"."ProductId" = "ProductDesc"."ProductId"
         AND "ProductDesc"."LanguageId" = "Languages"."MnemonicId"
         LEFT OUTER JOIN "ProductDesc" "ParentDescriptions"
         ON "Products"."ProductParentId" = "ParentDescriptions"."ProductId"
         AND ("ParentDescriptions"."LanguageId" = "Languages"."MnemonicId")
    /
    The varray:
    --CustomType VARRAY
    CREATE OR REPLACE TYPE Varray_Params IS VARRAY(100) OF NVARCHAR2(1000);
    /
    The function:
    --FilterFunction
    CREATE OR REPLACE FUNCTION "fnProductFilter" (
         parCatalogId "Products"."CatalogId"%TYPE,
         parLanguageId                    NVARCHAR2 := N'it-IT',
         parFilterValues                    OUT Varray_Params
    )
    RETURN INTEGER
    AS
         varSqlCondition                    VARCHAR2(32000);
         varSqlConditionValues          NVARCHAR2(32000);
         varSql                              NVARCHAR2(32000);
         varDbmsCursor                    INTEGER;
         varDbmsResult                    INTEGER;
         varSeparator                    VARCHAR2(2);
         varFilterValue                    NVARCHAR2(1000);
         varCount                         INTEGER;
    BEGIN
         varSqlCondition := '(T_Product."CatalogId" = HEXTORAW(:parentId)) AND (T_Product."LanguageId" = :languageId )';
         varSqlConditionValues := CHR(39) || TO_CHAR(parCatalogId) || CHR(39) || N', ' || CHR(39 USING NCHAR_CS) || parLanguageId || CHR(39 USING NCHAR_CS);
         parFilterValues := Varray_Params();
         varSql := N'SELECT FilterValues.column_value FilterValue FROM TABLE(Varray_Params(' || varSqlConditionValues || N')) FilterValues';
         BEGIN
              varDbmsCursor := dbms_sql.open_cursor;
              dbms_sql.parse(varDbmsCursor, varSql, dbms_sql.native);
              dbms_sql.define_column(varDbmsCursor, 1, varFilterValue, 1000);
              varDbmsResult := dbms_sql.execute(varDbmsCursor);
              varCount := 0;
              LOOP
                   IF (dbms_sql.fetch_rows(varDbmsCursor) > 0) THEN
                        varCount := varCount + 1;
                        dbms_sql.column_value(varDbmsCursor, 1, varFilterValue);
                        parFilterValues.extend(1);
                        parFilterValues(varCount) := varFilterValue;
                   ELSE
                        -- No more rows to copy
                        EXIT;
                   END IF;
              END LOOP;
              dbms_sql.close_cursor(varDbmsCursor);
         EXCEPTION WHEN OTHERS THEN
              dbms_sql.close_cursor(varDbmsCursor);
              RETURN 0;
         END;
         FOR i in parFilterValues.first .. parFilterValues.last LOOP
              varSeparator := ', ';
         END LOOP;
         RETURN 1;
    END;
    /
    The procedures:
    --Procedure presenting anomaly\bug
    CREATE OR REPLACE PROCEDURE "spProductCount" (
         parCatalogId "Products"."CatalogId"%TYPE,
         parLanguageId NVARCHAR2 := N'it-IT',
         parRecords OUT NUMBER
    )
    AS
         varFilterValues Varray_Params;
         varResult INTEGER;
         varSqlTotal VARCHAR2(32000);
    BEGIN
         parRecords := 0;
         varResult := "fnProductFilter"(parCatalogId, parLanguageId, varFilterValues);
         varSqlTotal := 'BEGIN
         SELECT count(DISTINCT T_Product."ProductId") INTO :parCount FROM "vwProducts" T_Product
              WHERE ((T_Product."CatalogId" = HEXTORAW(:parentId)) AND (T_Product."LanguageId" = :languageId ));
    END;';
         EXECUTE IMMEDIATE varSqlTotal USING OUT parRecords, varFilterValues(1), varFilterValues(2);
    END;
    --Procedure NOT presenting anomaly\bug
    CREATE OR REPLACE PROCEDURE "spProductCount2" (
         parCatalogId "Products"."CatalogId"%TYPE,
         parLanguageId NVARCHAR2 := N'it-IT',
         parRecords OUT NUMBER
    )
    AS
         varFilterValues Varray_Params;
         varResult INTEGER;
         varSqlTotal VARCHAR2(32000);
    BEGIN
         parRecords := 0;
         varSqlTotal := 'BEGIN
         SELECT count(DISTINCT T_Product."ProductId") INTO :parCount FROM "vwProducts" T_Product
              WHERE ((T_Product."CatalogId" = HEXTORAW(:parentId)) AND (T_Product."LanguageId" = :languageId ));
    END;';
         EXECUTE IMMEDIATE varSqlTotal USING OUT parRecords, parCatalogId, parLanguageId;
    END;
    Edited by: 835125 on 2011-2-9 1:31

    835125 wrote:
    Using VARRAY was the only way I found to transform comma-separated text values (e.g. "'abc', 'def', '123'") into a collection of strings.
    A varray is just a functionally crippled version of a nested table collection type, with a defined limit you probably don't need. (Why 100 specifically?) Instead of
    CREATE OR REPLACE TYPE varray_params AS VARRAY(100) OF NVARCHAR2(1000);
    try
    CREATE OR REPLACE TYPE array_params AS TABLE OF NVARCHAR2(1000);
    I don't know whether that will solve the problem but at least it'll be a slightly more useful type.
    What makes you think it's a memory leak specifically? Do you observe session PGA memory use going up more than it should?
    btw good luck with all those quoted column names. I wouldn't like to have to work with those, although they do make the forum more colourful.
    Edited by: William Robertson on Feb 11, 2011 7:54 AM

  • Memory leak using repeater tag

    Hi,
    I am trying to show a report that has 33113 rows using a repeater tag, but I get the following error: "An error has occurred: java.lang.OutOfMemoryError".
    I am using the pageFlow to bind an ArrayList that already contains the rows.
    Any ideas?
    Thanks a lot.

    Link to Microsoft Connect issue
    https://connect.microsoft.com/VisualStudio/feedback/details/1045459/memory-leak-using-windows-accessibility-tools-microsoft-narrator-windows-speech-recognition-with-net-applications
    Link to Microsoft Community post
    http://answers.microsoft.com/en-us/windows/forum/windows_7-performance/memory-leak-using-windows-accessibility-tools/8b32a81c-f828-415c-aec8-34e3106f9cb0?tm=1420469245606

  • Nasty memory leak using sockets inside threads

    In short, any time I create or use a socket inside a worker thread, that Thread object instance will never be garbage collected. The socket does not need to be connected or bound, it doesn't matter whether I close the socket or not, and it doesn't matter whether I create the Socket object inside the thread or not. As I said, it's nasty.
    I initially encountered this memory leak using httpclient (which creates a worker thread for every connect() call), but it is very easy to reproduce with a small amount of stand-alone code. I'm running this test on Windows, and I encounter this memory leak using the latest 1.5 and 1.6 JDKs. I'm using the NetBeans 5.5 profiler to verify which objects are not being freed.
    Here's how to reproduce it with an unbound socket created inside a worker thread:
    import java.net.Socket;

    public class Test {
         public static class TestRun extends Thread {
              public void run() {
                   new Socket();
              }
         }
         public static void main(String[] strArgs) throws Exception {
              for(;;) {
                   (new TestRun()).start();
                   Thread.sleep(10);
              }
         }
    }
    Here's how to reproduce it with a socket created outside the thread and used inside the worker thread:
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class Test {
         public static class TestRun extends Thread {
              Socket s;
              public TestRun(Socket s) { this.s = s; }
              public void run() {
                   try {
                        s.bind(new InetSocketAddress(0));
                        s.close();
                   } catch(Exception e) {}
              }
         }
         public static void main(String[] strArgs) throws Exception {
              for(;;) {
                   (new TestRun(new Socket())).start();
                   Thread.sleep(10);
              }
         }
    }
    Here's how to reproduce it implementing Runnable instead of extending Thread:
    import java.net.Socket;

    public class Test {
         public static class TestRun implements Runnable {
              public void run() {
                   Socket s = new Socket();
                   try { s.close(); } catch(Exception e) {}
              }
         }
         public static void main(String[] strArgs) throws Exception {
              for(;;) {
                   (new Thread(new TestRun())).start();
                   Thread.sleep(10);
              }
         }
    }
    I've played with this a lot, and no matter what I do the Thread instance leaks if I create/use a socket inside it. The Socket instance gets cleaned up properly, as well as the TestRun instance when it's implementing Runnable, but the Thread instance never gets cleaned up. I can't see anything that would be holding a reference to it, so I can only imagine it's a problem with the JVM.
    Please let me know if you can help me out with this,
    Sean

    Find out what is being leaked. In the sample programs, add something like this:
        static int loop_count;
        ...
        while (true) {
            ...
            if (++loop_count >= 1000) {
                System.gc();
                Thread.sleep(500); // In case gc is async
                System.gc();
                Thread.sleep(500);
                System.exit(0);
            }
        }
    Then run with java -Xrunhprof:heap=sites YourProgram
    At program exit you get the file java.hprof.txt which contains something like this towards the end of the file:
              percent          live          alloc'ed  stack class
    rank   self  accum     bytes objs     bytes  objs trace name
        1  0.47%  0.47%       736    5       736     5 300036 char[]
        2  0.39%  0.87%       616    3       616     3 300000 java.lang.Thread
        3  0.30%  1.17%       472    2       472     2 300011 java.lang.ThreadLocal$ThreadLocalMap$Entry[]
        4  0.27%  1.43%       416    2       416     2 300225 java.net.InetAddress$Cache$Type[]
    See, only three live Thread objects (the JVM allocates a few threads internally, plus there is the thread running main()). No leak there. Your application probably has some type of object that it's retaining. Look at "live bytes" and "live objs" to see where your memory is going. The "stack trace" column refers to the "TRACE nnnnn"'s earlier in the file, look at those to see where the leaked objects are allocated.
    Other quickies to track allocation:
    Print stats at program exit:
    java -Xaprof YourProgram
    If your JDK comes with the demo "heapViewer.dll" (or
    heapViewer.so or whatever dynamic libraries are called on your system):
    java -agentpath:"c:\Program Files\Java\jdk1.6.0\demo\jvmti\heapViewer\lib\heapViewer.dll" YourProgram
    Will print out statistics at program exit or when you hit Control-\ (Unix) or Control-Break (Windows).

  • Memory leak using xslprocessor.valueof in 11.1.0.6.0 - 64bit ??

    My company has made the decision to do all of our internal inter-system communication using XML. Often we may need to transfer thousands of records from one system to another and due to this (and the 32K limit in prior versions) we're implementing it in 11g. Currently we have Oracle 11g Enterprise Edition Release 11.1.0.6.0 on 64 bit Linux.
    This is a completely network/memory setup - the XML data comes in using UTL_HTTP and is stored in a CLOB in memory and then converted to a DOMDocument variable and finally the relevant data is extracted using xslprocessor.valueof calls.
    While this is working fine for smaller datasets, I've discovered that repeated calls with very large documents cause the xslprocessor to run out of memory with the following message:
    ERROR at line 1:
    ORA-04030: out of process memory when trying to allocate 21256 bytes
    (qmxdContextEnc,)
    ORA-06512: at "XDB.DBMS_XSLPROCESSOR", line 1010
    ORA-06512: at "XDB.DBMS_XSLPROCESSOR", line 1036
    ORA-06512: at "XDB.DBMS_XSLPROCESSOR", line 1044
    ORA-06512: at "SCOTT.UTL_INTERFACE_PKG", line 206
    ORA-06512: at line 28
    Elapsed: 00:03:32.45
    SQL>
    From further testing, it appears that the failure occurs after approximately 161,500 calls to xslprocessor.valueof however I'm sure this is dependent on the amount of server memory available (6 GB in my case).
    I expect that we will try and log a TAR on this, but my DBA is on vacation right now. Has anyone else tried calling the xslprocessor 200,000 times in a single session?
    I've tried to make my test code as simple as possible in order to track down the problem. This first block simply iterates through all of our offices asking for all of the employees at that office (there are 140 offices in the table).
    DECLARE
         CURSOR c_offices IS
              SELECT office_id
              FROM offices
              ORDER BY office_id;
         r_offices c_offices%ROWTYPE;
    BEGIN
         OPEN c_offices;
         LOOP
              FETCH c_offices INTO r_offices;
              EXIT WHEN c_offices%NOTFOUND;
              utl_interface_pkg.get_employees(r_offices.office_id);
         END LOOP;
         CLOSE c_offices;
    END;
    Normally I'd be returning a collection of result data from this procedure, however I'm trying to make things as simple as possible and make sure I'm not causing the memory leak myself.
    Below is what makes the SOAP calls (using the widely circulated UTL_SOAP_API) to get our data and then extracts the relevant parts. Each office (call) should return between 200 and 1200 employee records.
    PROCEDURE get_employees (p_office_id IN VARCHAR2) IS
    l_request utl_soap_api.t_request;
    l_response utl_soap_api.t_response;
    l_data_clob CLOB;
    l_xml_namespace VARCHAR2(100) := 'xmlns="' || G_XMLNS_PREFIX || 'EMP.wsGetEmployees"';
    l_xml_doc xmldom.DOMDocument;
    l_node_list xmldom.DOMNodeList;
    l_node xmldom.DOMNode;
    parser xmlparser.Parser;
    l_emp_id NUMBER;
    l_emp_first_name VARCHAR2(100);
    l_emp_last_name VARCHAR2(100);
    BEGIN
    --Set our authentication information.
    utl_soap_api.set_proxy_authentication(p_username => G_AUTH_USER, p_password => G_AUTH_PASS);
    l_request := utl_soap_api.new_request(p_method => 'wsGetEmployees',
    p_namespace => l_xml_namespace);
    utl_soap_api.add_parameter(p_request => l_request,
    p_name => 'officeId',
    p_type => 'xsd:string',
    p_value => p_office_id);
    l_response := utl_soap_api.invoke(p_request => l_request,
    p_url => G_SOAP_URL,
    p_action => 'wsGetEmployees');
    dbms_lob.createtemporary(l_data_clob, cache=>FALSE);
    l_data_clob := utl_soap_api.get_return_clob_value(p_response => l_response,
    p_name => '*',
    p_namespace => l_xml_namespace);
    l_data_clob := DBMS_XMLGEN.CONVERT(l_data_clob, 1); --Storing in the CLOB converted symbols (<, ", >) into escaped values (&lt;, &quot;, &gt;).  We need to CONVERT them back.
    parser := xmlparser.newParser;
    xmlparser.parseClob(parser, l_data_clob);
    dbms_lob.freetemporary(l_data_clob);
    l_xml_doc := xmlparser.getDocument(parser);
    xmlparser.freeparser(parser);
    l_node_list := xslprocessor.selectNodes(xmldom.makeNode(l_xml_doc),'/employees/employee');
    FOR i_emp IN 0 .. (xmldom.getLength(l_node_list) - 1)
    LOOP
    l_node := xmldom.item(l_node_list, i_emp);
    l_emp_id := dbms_xslprocessor.valueOf(l_node, 'EMPLOYEEID');
    l_emp_first_name := dbms_xslprocessor.valueOf(l_node, 'FIRSTNAME');
    l_emp_last_name := dbms_xslprocessor.valueOf(l_node, 'LASTNAME');
    END LOOP;
    xmldom.freeDocument(l_xml_doc);
    END get_employees;
    All of this works just fine for smaller result sets, or fewer iterations (only the first two or three offices). Even up to the point of failure the data is being extracted correctly - it just eventually runs out of memory. Is there any way to free up the xslprocessor? I've even tried issuing DBMS_SESSION.FREE_UNUSED_USER_MEMORY but it makes no difference.

    Replying to both of you -
    Line 206 is the first call to xslprocessor.valueof:
    LINE TEXT
    206 l_emp_id := dbms_xslprocessor.valueOf(l_node, 'EMPLOYEEID');
    This is one function inside of a larger package (the UTL_INTERFACE_PKG). The package is just a grouping of these functions - one for each type of SOAP interface we're using. None of the others exhibited this problem, but then none of them return anywhere near this much data either.
    Here is the contents of V$TEMPORARY_LOBS immediately after the crash:
    SID CACHE_LOBS NOCACHE_LOBS ABSTRACT_LOBS
    132 0 0 0
    148 19 1 0
    SID 132 is a SYS session and SID 148 is mine.
    I've discovered with further testing that if I comment out all of the xslprocessor.valueof calls except for the first one the code will complete successfully. It executes the valueof call 99,463 times. If I then uncomment one of those additional calls, we double the number of executions to a theoretical 198,926 (which is greater than the 161,500 point where it usually crashes) and it runs out of memory again.

  • Memory Leak using CertOpenStore on Windows 2008 R2

    I have a program (built both 32- and 64-bit) that, when using CertOpenStore, produces a memory leak on Windows 2008 R2 only (I haven't tried 2012, but this code has run for years on 2000, 2003 and 2008 without issues).
    Sample code to reproduce the issue:
    #include <windows.h>
    #include <wincrypt.h>
    #include <stdio.h>

    int main( int argc, char** argv )
    {
        HCERTSTORE store;
        int go = 1;
        while( go ){
            store = CertOpenStore(
                CERT_STORE_PROV_SYSTEM_A,
                0,
                0,
                CERT_SYSTEM_STORE_LOCAL_MACHINE | CERT_STORE_OPEN_EXISTING_FLAG,
                "MY" );
            if( store ){
                //if( !CertCloseStore( store , CERT_CLOSE_STORE_CHECK_FLAG ) ){
                if( !CertCloseStore( store , CERT_CLOSE_STORE_FORCE_FLAG ) ){
                    if( GetLastError() == CRYPT_E_PENDING_CLOSE ){
                        printf( "!" );
                    }
                    else{
                        go = 0;
                    }
                }
            }
            else{
                go = 0;
            }
            Sleep( 10 );
        }
        return 0;
    }
    The callstack for allocations pretty much are all like this  (this is from a 32 bit process on a 2008R2 box)
          HEAP_ENTRY Size Prev Flags    UserPtr UserSize - state
            02e75af0 000f 0000  [00]   02e75b08    0005e - (busy)
            7724dff2 ntdll!RtlAllocateHeap+0x00000274
            745f6017 AcLayers!malloc+0x00000079
            7460dc96 AcLayers!NS_VirtualRegistry::MakePath+0x00000056
            7460e817 AcLayers!NS_VirtualRegistry::CVirtualRegistry::OpenKeyW+0x000000a9
            7460f21a AcLayers!NS_VirtualRegistry::APIHook_RegOpenKeyExW+0x00000036
            743d2641 AcGenral!NS_WRPMitigation::APIHook_RegOpenKeyExW+0x00000024
            7527a246 crypt32!RegOpenHKCUKeyExU+0x00000055
            7527de26 crypt32!OpenSubKeyEx+0x00000108
            7527e4d8 crypt32!OpenSubKey+0x00000015
            7527ed88 crypt32!OpenSystemRegPathKey+0x00000033
            75281a2f crypt32!EnumPhysicalStore+0x00000162
            75281d0e crypt32!I_CertDllOpenSystemStoreProvW+0x0000015c
            752c9fce crypt32!I_CertDllOpenSystemStoreProvA+0x0000006c
            7527e49a crypt32!CertOpenStore+0x0000010e
    Anyone have any ideas of a hotfix available for this?
    thanks

    Might ask them here about this.
    Windows Desktop Dev forums on MSDN
    Regards, Dave Patrick ....
    Microsoft Certified Professional
    Microsoft MVP [Windows]
    Disclaimer: This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.

  • Memory leak using CWGraph

    I have a memory leak problem using the CWGraph control.
    I have an SDI application (MFC using Measurement Studio) and I dynamically generate a dialog containing a 2D graph, using the dialog's OnTimer() to generate data and update the graph with a 50 ms timer. In the OnTimer() function I have a loop to generate and update two plots on the graph. When I call a method of the graph (for example to change the color of a plot or to update a plot using PlotXvsY), memory periodically grows by a fixed amount (4k). In the same OnTimer() function I also update some CWSlide controls without memory leaks.
    If I comment out the line that calls a method of the graph (e.g. m_Graph.Plots.Item(1)....), the code works without memory leaks.
    I'd appreciate any suggestion about this problem.

    I had the same memory leak problem with my program as well. I do not think it is because of using CWGraph. Memory leaks occur when you allocate memory and do not free it; the problem accumulates and the program crashes randomly (sometimes it crashes when you just move the mouse around). Try this: if the program does not crash (a memory-leak crash) the first time it compiles and runs, the problem probably has nothing to do with CWGraph. On the second and third runs, if you did not free a variable, the program will usually crash. If your program crashes the first time it runs, the problem might be something else.

  • Memory leak using Oracle thin driver on wls6.1...

    Hi, I've been attempting to find a memory leak in an application that
    runs on WLS 6.1 SP2, on Solaris 8 and accessing an Oracle 9i db. We
    are using the Type 4 (Thin) driver and JProbe reports that hundreds of
    oracle.jdbc.* objects are left on the heap after my test case
    completes. Specifically oracle.jdbc.ttc7.TTCItem is the most common
    loiterer on the heap. I have verified that after each database access
    the resources are released correctly (i.e. ResultSet, Connection,
    PreparedStatement, etc.)
    Has anyone encountered similar problems? or does anyone know how to
    fix this?
    Thanks,
    Tim Watson

    Hi Tim!
    We have seen a problem using the Oracle 817 client that was resolved by using the 901 client for the Type 2 (OCI) driver, but I am not aware of a thin driver problem. You should check with Oracle whether they have found any customers with this problem.
    Thanks,
    Mitesh

  • Memory leak using GWT 1.4 and JDK 1.5

    We are running the following:
    OS : Solaris 5.10
    WebLogic version: 10.0
    JDK : Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_14-b03)
    Java HotSpot(TM) Server VM (build 1.5.0_14-b03, mixed mode)
    GWT : 1.4
    Oracle : 10g
    We have found a memory leak with the above configuration.
    After running 1 session we see a memory leak: the used Java heap is 4% higher than the heap used after we conduct our memory tests for 1 user.
    Similarly, after running 5 concurrent sessions we also see a memory leak, with Java heap usage up by about 4%.
    I have used JRockit JDK 1.5 to investigate the leak and have not found a memory leak in any of the modules developed by us.
    We think the memory leak issue is related to the versions of the JDK, WebLogic, and SunOS.
    Can somebody please suggest whether we can use the versions mentioned above?
    Any help on this front will be appreciated.

    gc log:
    #log information
    JAVA_OPTS="$JAVA_OPTS -verbose:gc "
    JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDetails "
    JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCTimeStamps "
    JAVA_OPTS="$JAVA_OPTS -XX:+DisableExplicitGC "
    JAVA_OPTS="$JAVA_OPTS -Xloggc:/path/to/gclog`date +%Y.%m.%d-%H:%M:%S`.log "Check sun papers for garbage collecting tips.
    > Is there any other way we can detect a memory leak?
    You have to profile your Application Server like you did with your own code.
    regards
    slowfly

  • Memory leak using thin client

    Subject: Memory Leak while using thin client.
    From: [email protected] (Alon Albert)
    Newsgroups: weblogic.developer.interest.jndi
    I have a small test program that causes a memory leak while connecting
    to a weblogic server. The code is appended to the end of the msg. The
    problem is definitely caused by the InitialContext creation.
    Even if I explicitly call the InitialContext.close() method, the
    problem persists. I realize I can avoid the problem by using a one
    time connection outside the loop but keep in mind that this is only a
    test program. The full application REQUIRES me to do a full
    reconnection each time it is accessed.
    I have read in a previous similar post that the problem might be RMI
    related. Is there anything I can do about this?
    package jmxleak;
    import java.net.*;
    import java.util.*;
    import javax.management.*;
    import javax.naming.*;
    public class Main {
         String host;
         String username;
         String password;
         MBeanServer mbs;

         public static void main(String[] args) {
              while (true) {
                   Main main = new Main(args[0], args[1], args[2]);
                   main.connect();
                   main = null;
                   System.out.println("Collecting garbage.");
                   System.gc();
              }
         }

         public Main(String host, String username, String password) {
              this.host = host;
              this.username = username;
              this.password = password;
         }

         private void connect() {
              try {
                   System.out.print("Connecting..., ");
                   Thread.currentThread().setContextClassLoader(new
                        URLClassLoader(new URL[] { new URL("http://" + host + "/classes/")}));
                   Hashtable env = new Hashtable();
                   env.put("java.naming.factory.initial",
                        "weblogic.jndi.WLInitialContextFactory");
                   env.put("java.naming.provider.url", "t3://" + host);
                   if (username.length() > 0 && password.length() > 0) {
                        env.put("java.naming.security.principal", username);
                        env.put("java.naming.security.credentials", password);
                   }
                   System.out.print("Creating context..., ");
                   InitialContext ctx = new InitialContext(env);
              } catch (Exception e) {
                   e.printStackTrace();
              }
         }
    }


  • Memory leak using pthreads on Solaris 7

    I'm on Solaris 7
    uname -a:
    SunOS zelda 5.7 Generic_106541-18 sun4u sparc SUNW,Ultra-250
    compiling with g++ (2.95.2)
    purify 5.2
    I consistently get memory leaks related (apparently) to the pthreads lib. I've set the error reporting chain length to 30 (way big), and in every case the chain starts with threadstart [ libthread.so.1 ].
    When I end my process, I join the threads before deleting them. What else can I do to get rid of these leaks?
    thanks,
    rich
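    For reference, the two standard ways to let the thread library reclaim per-thread resources look like this (a generic sketch, not the poster's code; whether it removes the Purify reports depends on whether they are real leaks or known libthread false positives):

    // Generic sketch: a thread must either be joined or created detached;
    // otherwise its bookkeeping stays allocated and is reported against libthread.
    #include <pthread.h>

    static void* worker(void*) { return 0; }

    int main()
    {
         pthread_t tid;

         // Option 1: join the thread so its resources are reclaimed.
         pthread_create(&tid, 0, worker, 0);
         pthread_join(tid, 0);

         // Option 2: create it detached so cleanup happens when it exits.
         pthread_attr_t attr;
         pthread_attr_init(&attr);
         pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
         pthread_create(&tid, &attr, worker, 0);
         pthread_attr_destroy(&attr);

         return 0;
    }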

    Is it worth upgrading to a 64-bit OS with more RAM?
    Well, you're talking a fairly hefty investment; in fact your best bet is buying a new machine at that point, since the motherboard would need to be upgraded as well as the memory and OS.
    I run a 64-bit OS with 12 GB of RAM, but I love my plug-ins and most are 32-bit, so you would still be using the 32-bit Photoshop if you want the majority of plug-ins to work. While the 32-bit version still has its limit on how much memory it can use, because it's running on a 64-bit OS the application has the full amount of that limit available to it.
    I can run several memory-intensive applications at one time and normally not have an issue. I say normally because sometimes I will crash my graphics driver if I open one too many 3D apps hooked into the Nvidia drivers.
    I normally only reboot maybe once a week.
    So in short, would it help you to go 64-bit with more RAM? Most certainly, even more so if you don't care about plug-ins and want to use the 64-bit version. Should you go to 16 GB of RAM? That depends on your budget really.
    Personally I always plan to upgrade when I build my systems, putting in the largest memory modules I can without filling all the slots, leaving room for upgrading if needed. That way you're not filling all your slots with cheaper, lower-capacity RAM that you would have to replace entirely in order to upgrade.
    Hope this helps a bit.

  • Check memory leak using sun studio 10

    Hi,
    I'm using Sun Studio 10 to find a memory leak. I preloaded librtc.so and attached to the process. Then I set a breakpoint at the beginning of the scenario and one at the end of the scenario. When I trigger the event and program flow hits the breakpoint at the beginning of the scenario, I enable "Memory Check". Then I continue my application. But I don't see anything show up in the "Memory Check" tab in the debug window.
    Does anyone know what the problem is here? Appreciate your help.
    Li

    Can you help me? Novice.
    mail [email protected]

  • Help with memory leak using DAQmx & VB6

    I'm using DAQmx 8 with VB6 to control an NI 6251 analog input board. The application calls for changing channel configurations as the system is running, thus there are many multiple create task/clear task calls. I've been trying to track a memory leak, so I've created a very simple VB app to help identify the source. I have a loop with the following calls in it:
    DAQmxCreateTask
    DAQmxAddGlobalChansToTask
    DAQmxCfgSampClkTiming
    DAQmxStartTask
    DAQmxReadAnalogF64
    DAQmxClearTask
    I see a little over 1 KB leaked per loop iteration. If I skip everything except the create task, add channels and clear task calls, it still shows a leak.
    Am I doing something wrong? Should I not be calling CreateTask/ClearTask multiple times?
    Thanks for any help.
    mfrazer

    Mfrazer,
    You definitely don’t want to have CreateTask and subsequently
    ClearTask in a loop, nor does it look like you need to for your application.
    You only need one task; you just need to change the task, so there is no need to
    create multiple tasks. So leave the CreateTask and ClearTask on the outsides of
    your loop, and keep everything else on the inside of the loop and make the
    changes you need to the task each iteration. Also you are going to want to have
    a DAQmxStopTask after your read to make sure the task is stopped and in the
    proper state to be reconfigured. I hope that helps.
    -GDE
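    In C API terms, the restructuring described above would look roughly like this (a sketch only; the global channel name, timing values and buffer sizes are placeholders, and error checking is omitted):

    #include <NIDAQmx.h>

    // Sketch: create the task once, reconfigure/run/stop it inside the loop,
    // and clear it once at the end, instead of creating and clearing per iteration.
    int main(void)
    {
         TaskHandle task = 0;
         float64    data[1000];
         int32      nRead = 0;

         DAQmxCreateTask("", &task);                         // once, before the loop
         DAQmxAddGlobalChansToTask(task, "MyGlobalChannel"); // placeholder channel name

         for (int i = 0; i < 100; ++i) {
              DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                                    DAQmx_Val_FiniteSamps, 1000);
              DAQmxStartTask(task);
              DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                                 data, 1000, &nRead, NULL);
              DAQmxStopTask(task);                           // back to a reconfigurable state
         }

         DAQmxClearTask(task);                               // once, after the loop
         return 0;
    }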

  • Memory leak using Oracle ODBC connection. Works perfect with MSSQL.

    Hello,
    What could cause memory leaks that are not persistent? Sometimes they appear on a different OS and sometimes on different hardware; the common factor in my issue is Oracle.

    A memory leak that is not persistent is not a memory leak.
    You'll have to be way more specific for a more meaningful answer.
    Have you tried memory profiling tools for your operating system to locate the "leak"?
    Yours,
    Laurenz Albe
