ASP to Oracle - memory leaks?
I am using the Oracle OLE DB provider to connect to an Oracle 8i database from ASP.
It seems that after a few days we start getting a "ROW-00001 cannot allocate memory" error on all our ASP pages. Restarting IIS fixes the problem, but I would like to know whether anybody else out there has encountered this, and what you would suggest.
Thanks very much for your help,
-V
For Oracle 9.2 client, we applied three patches successfully:
1. Apply patchset 92021
2. Apply patch 2814865
3. Apply patch 2533353:
Create empty file tnsnames.ora in same directory as aspnet_wp.exe,
typically C:\WINNT\Microsoft.NET\Framework\v1.0.3705.
Regards,
Armin
Similar Messages
-
Memory leak in weblogic 6.0 sp2 oracle 8.1.7 thin driver
Hi,
I have a simple client that opens a database connection, selects from
a table containing five rows of data (with four columns in each row)
and then closes all connections. On running this in a loop, I get the
following error after some time:
<Nov 28, 2001 5:57:40 PM GMT+06:00> <Error> <Adapter>
<OutOfMemoryError in
Adapter
java.lang.OutOfMemoryError
<<no stack trace available>>
>
<Nov 28, 2001 5:57:40 PM GMT+06:00> <Error> <Kernel> <ExecuteRequest
failed
java.lang.OutOfMemoryError
I am running with a heap size of 64 MB. The java command that runs
the client is:
java -ms64m -mx64m -cp .:/opt/bea/wlserver6.0/lib/weblogic.jar \
  -Djava.naming.factory.initial=weblogic.jndi.WLInitialContextFactory \
  -Djava.naming.provider.url=t3://garlic:7001 -verbose:gc Test
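Before chasing a leak, it can help to confirm that the -ms/-mx settings actually took effect in the running JVM. A minimal sketch (the class name and output format are mine, not from the thread):

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -mx/-Xmx; totalMemory() is the heap currently reserved
        System.out.println("max heap   : " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap : " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap  : " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```

Printing these figures inside the query loop (alongside -verbose:gc) shows whether the heap genuinely fills up or the limit was simply lower than expected.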
The following is the client code that opens the db connection and does
the select:
import java.util.*;
import java.sql.*;
import javax.naming.*;
import javax.sql.*;

public class Test {
    private static final String strQuery = "SELECT * from tblPromotion";

    public static void main(String argv[]) throws Exception {
        String ctxFactory = System.getProperty("java.naming.factory.initial");
        String providerUrl = System.getProperty("java.naming.provider.url");
        Properties jndiEnv = System.getProperties();
        System.out.println("ctxFactory : " + ctxFactory);
        System.out.println("ProviderURL : " + providerUrl);
        Context ctx = new InitialContext(jndiEnv);
        for (int i = 0; i < 1000000; i++) {
            System.out.println("Running query for the " + i + " time");
            Connection con = null;
            Statement stmnt = null;
            ResultSet rs = null;
            try {
                DataSource ds = (DataSource) ctx.lookup(
                        System.getProperty("eaMDataStore", "jdbc/eaMarket"));
                con = ds.getConnection();
                stmnt = con.createStatement();
                rs = stmnt.executeQuery(strQuery);
                while (rs.next()) {
                    //System.out.print(".");
                }
                ds = null;
            } catch (java.sql.SQLException sqle) {
                System.out.println("SQL Exception : " + sqle.getMessage());
            } finally {
                try {
                    rs.close();
                    rs = null;
                } catch (Exception e) {
                    System.out.println("Exception closing result set");
                }
                try {
                    stmnt.close();
                    stmnt = null;
                } catch (Exception e) {
                    System.out.println("Exception closing statement");
                }
                try {
                    con.close();
                    con = null;
                } catch (Exception e) {
                    System.out.println("Exception closing connection");
                }
            }
        }
    }
}
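The cleanup pattern in the code above, with each close() in its own try/catch, matters because a failure closing the ResultSet must not prevent the Statement and Connection from being closed. A toy demonstration with stand-in resources (the class and method names are mine, not from the thread):

```java
import java.util.ArrayList;
import java.util.List;

public class CleanupDemo {
    // Stand-in for a JDBC resource whose close() may throw
    static class Resource {
        final String name;
        final boolean failOnClose;
        final List<String> closed;
        Resource(String name, boolean failOnClose, List<String> closed) {
            this.name = name; this.failOnClose = failOnClose; this.closed = closed;
        }
        void close() throws Exception {
            if (failOnClose) throw new Exception("close failed: " + name);
            closed.add(name);
        }
    }

    public static void main(String[] args) {
        List<String> closed = new ArrayList<>();
        Resource rs   = new Resource("ResultSet", true,  closed); // simulate a failing close
        Resource stmt = new Resource("Statement", false, closed);
        Resource con  = new Resource("Connection", false, closed);
        // Each close gets its own try/catch, so the later closes still run
        try { rs.close(); }   catch (Exception e) { System.out.println(e.getMessage()); }
        try { stmt.close(); } catch (Exception e) { System.out.println(e.getMessage()); }
        try { con.close(); }  catch (Exception e) { System.out.println(e.getMessage()); }
        System.out.println("closed: " + closed); // Statement and Connection were still closed
    }
}
```

If everything were closed in one try block instead, an exception from rs.close() would skip stmnt.close() and con.close(), leaking a connection on every failing iteration.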
I am using the Oracle 8.1.7 thin driver. Please let me know if this
memory leak is a known issue or if it's something I am doing.
thanks,
rudy

Repost in the JDBC section ... very serious issue, but it may be due to Oracle or
to WL ... does it happen if you test inside WL itself?
How many iterations does it take to blow? How long? Does changing to a
different driver (maybe Cloudscape) have the same result?
Peace,
Cameron Purdy
Tangosol Inc.
<< Tangosol Server: How Weblogic applications are customized >>
<< Download now from http://www.tangosol.com/download.jsp >>
"R.C." <[email protected]> wrote in message
news:[email protected]...
[quoted original message snipped]
rudy -
Memory leak issue with link server between SQL Server 2012 and Oracle
Hi,
We are trying to use the linked server feature with SQL Server 2012 to connect SQL server and Oracle database. We are concerned about the existing memory leak issue. For more context please refer to the link.
http://blogs.msdn.com/b/psssql/archive/2009/09/22/if-you-use-linked-server-queries-you-need-to-read-this.aspx
The above link talks about the issues with SQL Server versions 2005 and 2008; I am not sure whether this is still the case in 2012. I could not find any article that says whether this issue was fixed by Microsoft in a later version.
We know that SQL Server process crashes because of the third-party linked server provider which is loaded inside SQL Server process. If the third-party linked server provider is enabled together with the
Allow inprocess option, the SQL Server process crashes when this third-party linked server experiences internal problems.
We wanted to know if this is fixed in SQL Server 2012?

So is your question informational, or are you actually facing an OOM issue?
There can be two causes for OOM:
1. There is a bug in SQL Server causing the issue, which might be fixed in 2012.
2. The linked server provider used to connect to Oracle is not up to date and some patch is missing, or a more recent version should be used. Did you make sure you are using the latest version?
What Oracle version are you trying to connect to (9i, 10g, R2...)?
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Oracle 8.1.5 Linux Memory Leak?
We have been running Oracle 8.1.5 Server and Client on Redhat 6.1
kernel 2.2.12-20
glibc-2.1.2-11
Blackdown's JDK 1.2 RC3
We are having bad memory leaks and we have verified that they are
not in our source code using a memory profiler for Java-linux.
We are using the JDBC calls and the OCI JDBC driver.
We think that the memory leak is in libocijdbc8.so but we're not
sure. It could also be in glibc but the 2.1.2-11 version should
be pretty stable. It could also be in the Blackdown jvm.
Anyone have any ideas?
Oracle8i R2's JDBC works fine with Oracle 8.1.5.
You can download the new JDBC driver. -
Memory leak in oracle.exe and mds.exe
We are facing a memory leak on our MDM server. Our environment details
are as follows:
MDM 5.5 SP5 ( Build 5.5.41.70)
Oracle 10.2 patch 2
windows server 2003 SP1
XI 7.0 SP 9
If the server runs continuously for 3-4 days, the nonpaged memory
gets exhausted and the server does not respond. We then have to restart the
Windows server manually.
Task Manager shows more than 200,000 handles for
oracle.exe and more than 100,000 handles for mds.exe:
1: oracle.exe -- more than 200,000 handles (approx. >5,000 is already a problem)
2: mds.exe -- more than 100,000 handles (approx. >5,000 is already a problem)
Since these applications are not releasing their handles properly, all
nonpaged memory gets exhausted and the server stops responding.
If we restart the MDM server, the database and OracleserviceMDMD, the
nonpaged memory is released. But sometimes the nonpaged memory is not
released even if we restart these services, so we have to restart
the Windows server.
Please help if anyone else has faced the same problem.
regards
Saurabh

Closing, as the question is answered in the MDM forum.
-
Memory leak using Oracle thin driver on wls6.1...
Hi, I've been attempting to find a memory leak in an application that
runs on WLS 6.1 SP2, on Solaris 8 and accessing an Oracle 9i db. We
are using the Type 4 (Thin) driver and JProbe reports that hundreds of
oracle.jdbc.* objects are left on the heap after my test case
completes. Specifically oracle.jdbc.ttc7.TTCItem is the most common
loiterer on the heap. I have verified that after each database access
the resources are released correctly (i.e. ResultSet, Connection,
PreparedStatement, etc.)
Has anyone encountered similar problems? or does anyone know how to
fix this?
Thanks,
Tim Watson

Hi Tim!
We have seen a problem using the Oracle 817 client that was resolved by
using the 901 client for the Type 2 (OCI) driver, but I am not aware of a
thin driver problem. You should check with Oracle whether they have found
any customers with this problem.
Thanks,
Mitesh
Tim Watson wrote:
[quoted message snipped]
Tim Watson -
Memory Leak with Oracle ODBC Driver for Long Raw columns
Oracle version : 8.1.7
Platform : Win2K
Oracle ODBC Driver version : 8.0.1.7.5.0.0
Hi,
I've got an Oracle database upgraded from
V8.0.5 to V8.1.7 which has a table having one long raw +
normal columns. I was able to observe distinct memory
leaks (approx. 80K) when using ODBC interface calls (through C++ code) that referenced a combination of normal and long raw columns in a select statement. However, this leak was not observed when only normal columns were present in the
select statement. Is there any known restriction on using
long raw columns with other columns? Or do long raw columns have a known memory leak problem through ODBC?
Thanks!
Regards
Sanchayan

Did you ever get an answer on this issue?
Thanks in advance -
Oracle JDBC Thin Driver Memory leak in scrollable result set
Hi,
I am using Oracle 8.1.7 with the Oracle thin JDBC driver (classes12.zip) and JRE 1.2.2. When I use a scrollable result set and fetch records with the default fetch size, I run into memory leaks. When the number of records fetched is large (10,000 records), over a period of use I get an "out of memory" error because of the leak. There is no point increasing the heap size, as the leak remains.
I tried using OptimizeIt and found there is a huge amount of memory leaked for each execution of a scrollable result set, and this leak is proportional to the number of records fetched. The memory is not released even when I set the result set and statement objects to null. When I use methods like ScrollableResultSet.last(), the leak increases.
So is this a problem with the driver, or am I doing something wrong?
If some of you can help me with a solution it would be appreciated. If needed I can provide statistics of these memory leaks from OptimizeIt and share the code.
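The proportionality described here is what you would expect if a scrollable cursor buffers every row it has seen so that last() and absolute() can revisit them. The following toy model in plain Java (invented class names, not Oracle driver code) contrasts the two cursor styles:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class CursorModel {
    static final int ROWS = 10000;

    // "Scrollable" style: every row is retained so last()/absolute() can revisit it
    static List<String> scrollableFetch() {
        List<String> buffer = new ArrayList<>();
        for (int i = 0; i < ROWS; i++) buffer.add("row-" + i);
        return buffer; // retained memory grows in proportion to ROWS
    }

    // Forward-only style: one row live at a time, nothing retained
    static int forwardOnlyFetch() {
        Iterator<String> rows = new Iterator<String>() {
            int i = 0;
            public boolean hasNext() { return i < ROWS; }
            public String next() { return "row-" + (i++); }
        };
        int count = 0;
        while (rows.hasNext()) { rows.next(); count++; }
        return count; // same rows processed, O(1) rows retained
    }

    public static void main(String[] args) {
        System.out.println("scrollable rows buffered : " + scrollableFetch().size());
        System.out.println("forward-only rows seen   : " + forwardOnlyFetch());
    }
}
```

With the real driver, one workaround along these lines would be to use a forward-only result set plus an explicit Statement.setFetchSize(), at the cost of losing last() and absolute(); whether that avoids the leak on the 8.1.7 driver would need to be verified.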
Thanks
Rajesh

This thread is ancient and the original was about the 8.1.7 drivers. Please start a new thread. Be sure to include driver and database versions, stack traces, sample code, and why you think there is a memory leak.
Douglas -
Memory leak in Oracle Text under Oracle 8.1.7!!!
When I wanted to use a USER_DATASTORE preference to apply my own formatting tags in Oracle Text, a memory leak occurred in Oracle Text!
The memory is only freed when I close the SQL*Plus session.
My formatting procedure is very simple:
PROCEDURE format_tag(r IN ROWID, tlob IN OUT NOCOPY CLOB) IS
buf VARCHAR2(32767);
BEGIN
SELECT '<C>' || catalog_id || '</C><T>' || tag_id || '</T><V>' || tag_value || '</V>' INTO buf
FROM tbl_catalog WHERE ROWID=r;
dbms_lob.trim(tlob, 0); -- set LOB's size to zero
dbms_lob.write(tlob, length(buf), 1, buf);
END;
The table typically holds about 100,000 rows (the actual record count is in the tens of millions).
How can I solve this problem?

Thanks for your reply.
I'm using Oracle 8.1.7.0.0 on Windows 2000
The preferences are as follows:
DECLARE
ds VARCHAR2(30):= 'dts_catalog';
grp VARCHAR2(30):= 'grp_catalog';
lxr VARCHAR2(30):= 'lxr_catalog';
wrd VARCHAR2(30):= 'wrd_catalog';
BEGIN
ctx_ddl.create_preference(wrd, 'BASIC_WORDLIST');
ctx_ddl.set_attribute(wrd, 'STEMMER', 'NULL');
ctx_ddl.create_preference(lxr, 'BASIC_LEXER');
ctx_ddl.set_attribute(lxr, 'INDEX_TEXT', 'TRUE');
ctx_ddl.set_attribute(lxr, 'INDEX_THEMES', 'FALSE');
ctx_ddl.create_preference(ds, 'USER_DATASTORE');
ctx_ddl.set_attribute(ds, 'OUTPUT_TYPE', 'VARCHAR2');
ctx_ddl.set_attribute(ds, 'PROCEDURE', 'pkg_catalog.format');
ctx_ddl.create_section_group(grp, 'BASIC_SECTION_GROUP');
ctx_ddl.add_field_section(grp, 'catalog', 'C', TRUE);
ctx_ddl.add_field_section(grp, 'tag', 'T', TRUE);
ctx_ddl.add_field_section(grp, 'value', 'V', TRUE);
END;
The STORAGE preferences are not shown here (because they are long).
The index statement I used for indexing:
CREATE INDEX idx_catalog_info ON tbl_catalog_info(val)
INDEXTYPE IS ctxsys.context
PARAMETERS(
'STORAGE stg_catalog
DATASTORE dts_catalog
SECTION GROUP grp_catalog
LEXER lxr_catalog
WORDLIST wrd_catalog
MEMORY 24M');
I changed the last version of my procedure to use a VARCHAR2 parameter type instead of CLOB, but the memory leak persisted during index building.
So I used a simple trick: I wrote a package that counts the records being indexed and, after each 100,000 records, calls the DBMS_SESSION.FREE_UNUSED_USER_MEMORY procedure to free unused session memory.
the package is as follows:
CREATE OR REPLACE PACKAGE pkg_catalog IS
PROCEDURE format(r ROWID, buf IN OUT NOCOPY VARCHAR2);
END pkg_catalog;
CREATE OR REPLACE PACKAGE BODY pkg_catalog IS
recs PLS_INTEGER:= 0;
PROCEDURE format(r ROWID, buf IN OUT NOCOPY VARCHAR2) IS
BEGIN
SELECT '<C>'||catalog_id||'</C><T>'||tag_id||subfield_id||'</T><V>'||val||'</V>' INTO buf FROM &usr.tbl_catalog_info WHERE ROWID=r;
recs:= recs + 1;
IF recs>100000 THEN
recs:= 0;
dbms_session.free_unused_user_memory; -- clean-out memory garbage
END IF;
END format;
END pkg_catalog;
My problem is solved, but it is still puzzling: why does Oracle not free unused session memory? -
Memory leak using Oracle ODBC connection. Works perfect with MSSQL.
Hello,
what could cause memory leaks that are not persistent? Sometimes they appear on a different OS and sometimes on different hardware. The only common factor in my issue is Oracle.

A memory leak that is not persistent is not a memory leak.
You'll have to be way more specific for a more meaningful answer.
Have you tried memory profiling tools for your operating system to locate the "leak"?
Yours,
Laurenz Albe -
Possible memory leak in Oracle 12.1.0 C client
Dear Oracle Users and Professionals,
I want to report an Oracle 12.1.0 C client memory leak when the reconnect feature is in place. I used the Valgrind/massif tool to diagnose our components, and there was a small memory leak in libclntsh.so.12.1, which calls the libc function getaddrinfo(). This memory seems not to be freed when the connection is closed, while my application keeps running and reconnects when needed.
I searched the internet and the Oracle portals for this and did not find any report that someone else has detected this particular issue.
Attached is the trace back from massif: a comparison of two different time slots.
We are developers and use only the freely available Oracle client versions. Our customer, who will operate the system, has access to full Oracle Support.
If you can give me advice on how to reach a state with no memory leak, it would be helpful.
Thank you very much
Jan Kianicka
([email protected])

Hi Jan,
This forum is for questions about connecting to non-Oracle databases. For questions about the Oracle client connecting to Oracle databases then try either one of these forums - I am not sure which will be best -
ODBC
or
General Database Discussions
Regards,
Mike -
Avoid Bug 7146375 ORA-4030 MEMORY LEAK IN (Xml generation) in oracle 10g
Hi All,
I have to generate an XML document from the database containing around 4 lakh (400,000) records. I wrote a query using XmlSerialize and XmlElement.
It runs properly for fewer than 2 lakh (200,000) records.
But when the record count goes above 2 lakh, it throws the following error -
{ ORA-04030: out of process memory when trying to allocate 1032 bytes (qmxlu subheap,qmemNextBuf:alloc)
ORA-06512: at "SYS.XMLTYPE", line 111!}
For the above error I tried increasing the PGA from 480M to 800M, but we still get the same error.
After researching I found:
Cause
This is caused by the following bug:
Bug 7146375 ORA-4030 AND MEMORY LEAK IN SESSION HEAP: "KOH DUR HEAP D"
Solution
Bug 7146375 is fixed in 11.2
So I tried the query on another db which has 11g installed, and my query runs perfectly for up to 4 lakh records.
But since we have Oracle 10g on our client's machine, are there other ways to achieve this XML generation?
Thanks.

913389 wrote:
[quoted message snipped]

I doubt it. If Oracle has investigated and created a bug report that says the solution is to upgrade to 11.2, then that's the answer; otherwise they would have indicated that a particular 10g patch set can also be used. -
Oracle 9.2 and memory leak detection
Hi All.
I have found the following error in a trace file
after shutting down the database.
=================================================================
/.../udump/cpaw_ora_4427.trc
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
ORACLE_HOME = /.../product/9.2.0
System name: Linux
Node name: host.com
Release: 2.4.18-10bigmem
Version: #1 SMP Wed Aug 7 10:26:52 EDT 2002
Machine: i686
Instance name: orcl
Redo thread mounted by this instance: 0 <none>
Oracle process number: 10
Unix process pid: 4427, image: [email protected] (TNS V1-V3)
*** SESSION ID:(9.3) 2002-10-10 14:17:14.265
Archiving is disabled
Archiving is disabled
******** ERROR: SGA memory leak detected 16 ********
KGH Latch Directory Information
ldir state: 2 next slot: 39
Slot [ 1] Latch: 0x50005be8 Index: 1 Flags: 3 State: 2 next: (nil)
<...>
=====================================================================
Linux RedHat 7.3 (db works in archivelog mode).
How can I correct this?
Thanks in advance.
Best regards,
Andrey Demchenko.

Thanks for the answers. The oci8.dll extension is uncommented, naturally. Otherwise it would start up just fine, but I couldn't use the database functions.
I got it working by installing 5.1.6 but replacing the oci8.dll with the one from the 5.1.0 version. It's a very... desperate... solution, but at least it works.
I'm going to have to try to sell the idea of using the 10g client to our DBA. I don't think, though, that he'll be very enthusiastic about setting it up on our production servers. We'll see. -
Memory Leak - Oracle 9.2 ADO/OLE DB Select Distinct
I'm using ADO (MDAC 2.8) and Oracle OLE DB (9.2.0.1.0) to access an Oracle 9.2.0.1.0 database. All queries run fine, but when I issue a query with the distinct keyword (e.g. Select distinct...), the application leaks memory. The memory leak does not occur when issuing the same query to MS SQL Server 2000. I also installed the latest 9.2.0.2.0 Oracle OLE DB update, but it didn't fix the problem.
The same problem appears to have been fixed in the Oracle ODBC drivers: "Fixed memory leak when using 'select distinct'. (Bug2685365)"
http://otn.oracle.com/software/tech/windows/odbc/htdocs/whatsnew.htm
I've also seen the same problem reported on DBForums.com.
"...select distinct query made through ADO causes a memory leak in that object... This issue is known by Oracle and I think that there may be a patch available for Oracle 9."
http://dbforums.com/arch/210/2003/3/733498
This is a critical problem for the product we are developing.
Is there a fix available for this problem?
Bob

Thanks, I looked for the update yesterday, but all Oracle had posted was the update for 9.2.0.2.0.
Luckily, as of this morning there is a new update available from Oracle, 9.2.0.4.0. I installed it and it fixed the memory leak with OLE DB and Select Distinct queries.
The installer for 9.2.0.4.0 is a bit rough. It doesn't stop the Distributed Transaction Coordinator or Oracle MT Service on your computer, so you must stop them before installing. Also, you can't install all the products at once, and you must install the Oracle uninstaller first. -
ADO memory leak when getting Recordset from an Oracle stored procedure?
I am programming in C++ (VC6) and using ADO 2.7 to
access an Oracle 9i database. My connection string looks
like this:
Provider=MSDAORA.1;Persist Security Info=True;User ID=scott;Password=tiger;Data Source=blahblah
I have Oracle stored procedure that returns data in a
REF CURSOR output parameter. Since the stored procedure
takes input parameters, I prepare a Command object with
Parameters initialized and attached to it. I use the
Recordset Open method to execute the call. This approach
works because I get correct data back from the call in
the Recordset, but the problem is when I do this in a
infinite loop and watch the process in Windows Taks
Manager, I see 4k or 8k memory delta all the time and
the Peak Memory Usage of the process keeping going up.
I hope someone knows something in this scenario and points
me to the right direction.
Thanks, please see the following code for specifics.
HRESULT CallSP3Params(VARIANT vp1, VARIANT vp2, int spretcode, LPDISPATCH *ppRSet, char *pCmdLine)
{
    _RecordsetPtr pRs;
    _CommandPtr pCmd;
    _ParameterPtr paramVProfiler[3];
    _bstr_t strMissing(L"");
    *ppRSet = NULL;
    _variant_t ErrConn;
    ErrConn.vt = VT_ERROR;
    ErrConn.scode = DISP_E_PARAMNOTFOUND;
    try {
        // Create instances of the command and recordset objects
        pCmd.CreateInstance(__uuidof(Command));
        pRs.CreateInstance(__uuidof(Recordset));
        if (vp1.vt == VT_BSTR) {
            paramVProfiler[0] = pCmd->CreateParameter("P1", adVarChar, adParamInput, SysStringLen(vp1.bstrVal) + 10, strMissing);
            paramVProfiler[0]->Value = vp1;
        } else if (vp1.vt == VT_I4) {
            paramVProfiler[0] = pCmd->CreateParameter("P1", adNumeric, adParamInput, 15, vp1);
        } else {
            TESTHR(PARAMETER_OPERATION_ERROR);
        }
        pCmd->Parameters->Append(paramVProfiler[0]);
        if (vp2.vt == VT_BSTR) {
            paramVProfiler[1] = pCmd->CreateParameter("P2", adVarChar, adParamInput, SysStringLen(vp2.bstrVal) + 10, strMissing);
            paramVProfiler[1]->Value = vp2;
        } else if (vp2.vt == VT_I4) {
            paramVProfiler[1] = pCmd->CreateParameter("P2", adNumeric, adParamInput, 15, vp2);
        } else {
            TESTHR(PARAMETER_OPERATION_ERROR);
        }
        pCmd->Parameters->Append(paramVProfiler[1]);
        paramVProfiler[2] = pCmd->CreateParameter("RETCODE", adNumeric, adParamOutput, 10);
        pCmd->Parameters->Append(paramVProfiler[2]);
    }
    // Catch COM errors
    catch (_com_error &e) {
    }
    try {
        // I manage my connection through this little C++ class of my own
        CCUsage myconnection(&Connectionkeeper[0]);
        // Set the active connection property of the command object to the open connection
        pCmd->ActiveConnection = myconnection.m_conn;
        // The command type is text
        pCmd->CommandType = adCmdText;
        // Set the command text to call the stored procedure
        pCmd->CommandText = pCmdLine;
        // Open the Recordset to get the result
        pRs->Open(_variant_t((IDispatch *)pCmd, true), ErrConn, adOpenStatic, adLockReadOnly, adOptionUnspecified);
        // Disconnect the command object
        pCmd->PutRefActiveConnection(NULL);
        if (GetSPRetCode(pCmd, "RETCODE", spretcode) != S_OK)
            TESTHR(DB_OBJECT_OPERATION_ERROR);
        // pRs->QueryInterface(IID_IDispatch, (void**) ppRSet);
        // I return the Recordset by calling QueryInterface, but even without that,
        // closing the Recordset right here still shows the memory leak.
        pRs->Close();
        pRs = NULL;
    }
    // Catch COM errors
    catch (_com_error e) {
    }
    return S_OK;
}
Whenever large numbers of BSTRs are allocated and freed quickly, the process memory will continue to climb towards a stabilizing value. BSTRs are not freed until the system frees them. You can see this by making many calls that allocate and free BSTRs: memory will climb, but when you stop for a while the garbage collection of the sys strings will take place. I've done much research showing that a server doing many queries very rapidly is not leaking memory but outpacing the garbage collection; it will stabilize, and when the process gets some "rest time" its memory usage will decline.
In my research, a suspected memory leak turned out not to be one.