Potential memory leak in 11.2.0.1.0 cluster stack components
Hi All,
We are running 11.2.0.1.0 RAC on OEL 5.5. On all of our boxes we have observed swap space gradually filling up: swap usage slowly but steadily increases to the point where either the node reboots on its own or we have to restart it manually.
To get more insight into the issue, we recently upgraded one of the nodes to the Oracle Unbreakable Enterprise Kernel and installed smem to profile the memory consumption pattern. smem reports the resident set size (RSS), the unique set size (USS) and the proportional set size (PSS), which is the unique set size plus a proportional share of the memory the process shares with others.
It appears that there is a gradual increase in the memory/swap consumption of some clusterware components. The odd part is that the processes are filling up swap rather than using the unallocated RAM.
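For readers unfamiliar with smem's metrics, the relationship between USS and PSS can be illustrated with a little arithmetic. This is only a sketch of the accounting idea (the page sizes and sharer counts below are made up), not smem's actual implementation:

```java
// Sketch of smem's accounting: USS counts only private pages; PSS adds each
// shared page's size divided by the number of processes mapping it.
// The page list here is invented for illustration.
public class PssSketch {
    // pages[i] = { sizeKb, numSharers }
    static int uss(int[][] pages) {
        int uss = 0;
        for (int[] p : pages) if (p[1] == 1) uss += p[0]; // private pages only
        return uss;
    }
    static int pss(int[][] pages) {
        int pss = 0;
        for (int[] p : pages) pss += p[0] / p[1]; // shared pages split evenly
        return pss;
    }
    public static void main(String[] args) {
        int[][] pages = { {8, 1}, {4, 1}, {12, 4} }; // two private pages, one shared by 4
        System.out.println("USS=" + uss(pages) + " PSS=" + pss(pages)); // USS=12 PSS=15
    }
}
```

So PSS always sits between USS and RSS, which matches the ordering in the tables below.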
Memory footprint at the Node start...
PID User Command Swap USS PSS RSS
4001 root /u01/app/11.2.0/grid/bin/ohasd.bin 16236 18128 18829 30284
5200 oracle /u01/app/11.2.0/grid/bin/oraagent.bin 13536 74512 75587 87156
4255 grid /u01/app/11.2.0/grid/bin/oraagent.bin 13484 2644 3628 15056
4688 root /u01/app/11.2.0/grid/bin/crsd.bin 8240 71584 72628 85692
Memory footprint after 24 hrs...
PID User Command Swap USS PSS RSS
5200 oracle /u01/app/11.2.0/grid/bin/oraagent.bin 94952 121168 122161 133488
4688 root /u01/app/11.2.0/grid/bin/crsd.bin 66220 76684 77723 90776
4001 root /u01/app/11.2.0/grid/bin/ohasd.bin 21448 24708 25410 36892
4255 grid /u01/app/11.2.0/grid/bin/oraagent.bin 13840 2372 3202 14316
#free -m
total used free shared buffers cached
Mem: 3964 3856 108 0 5 1846
-/+ buffers/cache: 2004 1959
Swap: 4094 617 3477
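Reading that free output: the "-/+ buffers/cache" row subtracts reclaimable buffer/page cache from "used" and adds it to "free". A quick check against the figures above (the 1 MB difference is rounding in free's MB display):

```java
// Derive the "-/+ buffers/cache" row of free -m from the first row.
// Figures are the ones shown above, in MB.
public class FreeInterp {
    static int reallyUsed(int used, int buffers, int cached) {
        return used - buffers - cached;   // memory applications actually hold
    }
    static int reallyFree(int free, int buffers, int cached) {
        return free + buffers + cached;   // free plus reclaimable cache
    }
    public static void main(String[] args) {
        System.out.println(reallyUsed(3856, 5, 1846));  // 2005, vs. the 2004 shown (rounding)
        System.out.println(reallyFree(108, 5, 1846));   // 1959
    }
}
```

In other words, roughly half the RAM here is cache rather than application memory, which is why the swap growth (rather than cache growth) is the suspicious part.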
Has anyone experienced a similar situation? I searched Google as well as Metalink, but did not find anything useful.
Any thoughts/suggestions are welcome.
Thanks,
-Sanjeev
Edited by: user12219014 on Jan 9, 2011 5:58 AM
Thanks for pointing to the MOS notes; they were quite helpful. Sometimes on our system, though, ohasd.bin consumes more resources. Is it safe to kill it?
Also, we have observed that there are multiple oraagents belonging to different users, such as root, grid and oracle.
grid 14620 1 0 20:32 ? 00:00:14 /u01/app/11.2.0/grid/bin/oraagent.bin
root 14625 1 0 20:32 ? 00:00:02 /u01/app/11.2.0/grid/bin/orarootagent.bin
root 14627 1 0 20:32 ? 00:00:00 /u01/app/11.2.0/grid/bin/cssdagent
grid 14803 1 0 20:32 ? 00:00:06 /u01/app/11.2.0/grid/bin/oraagent.bin
oracle 14807 1 0 20:32 ? 00:01:53 /u01/app/11.2.0/grid/bin/oraagent.bin
root 14811 1 0 20:32 ? 00:00:38 /u01/app/11.2.0/grid/bin/orarootagent.bin
When these are killed, not all are re-spawned automatically - typically the oraagent belonging to the "oracle" user is left out. Is this expected behaviour, or will it cause some instability in the clusterware?
Thanks
Similar Messages
-
Potential Memory Leak during Marshelling of a Web Service Response
I believe I have found a memory leak when using the configuration below.
The leak occurs when calling a web service: while the web service function is marshalling the response of the function call, a "500 Internal Server Error ... java.lang.OutOfMemoryError" is returned from OC4J. The error can be seen via the TCP Packet Monitor in JDeveloper.
Unfortunately, no exception dump is written to the OC4J log.
Configuration:
Windows 2000 with 1 gig ram
JDeveloper 9.0.5.2 with JAX/RPC extension installed
OC4J 10.0.3
Sun JVM version 1.4.2_03-b02
To demonstrate the error I created a simple web service and client; see below for the client and the web service function that demonstrate it.
The web service is made up of a single function called "queryTestOutput".
It returns an object of class "TestOutputQueryResult", which contains an int and an array.
The function accepts one int input parameter, which is used to vary the size of the array in the returned object.
For a small int (less than 100), the web service function returns successfully.
For a larger int, depending on the memory configuration OC4J is launched with, the OutOfMemoryError is returned.
The package "ws_issue.service" contains the web service.
I used Generate JAX-RPC Proxy to build the client (found in package "ws_issue.client"); package "types" was also created by Generate JAX-RPC Proxy.
To test the web service call, execute the class runClient and vary the int "atestValue" until the error is returned.
I have tried this with all three encodings (RPC/Encoded, RPC/Literal, Document/Literal); they all have the same issue.
The OutOfMemory Error is raised fairly consistently using the java settings -Xms386m -Xmx386m for OC4J when 750 is specified for the input parameter.
I also noticed that when 600 is specified, the client seems to hang. According to the TCP Packet Monitor the response is returned, but the client seems unable to unmarshal the message.
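Since the failure scales with the size of the single marshalled array, one hedged workaround (a sketch only - the service above has no such parameters, so fetchPage/pageSize are invented here) is to page the result across several smaller calls:

```java
// Hypothetical paging workaround: fetch a large result set in fixed-size
// chunks instead of marshalling one huge array in a single response.
public class PagedFetch {
    // Stand-in for a paged variant of the web service; not part of the original API.
    interface PagedService {
        int[] fetchPage(int offset, int pageSize);
    }

    static int fetchAll(PagedService svc, int total, int pageSize) {
        int fetched = 0;
        for (int off = 0; off < total; off += pageSize) {
            int[] page = svc.fetchPage(off, Math.min(pageSize, total - off));
            fetched += page.length; // process each chunk, then let it be GC'd
        }
        return fetched;
    }

    public static void main(String[] args) {
        PagedService fake = (off, n) -> new int[n]; // dummy service for illustration
        System.out.println(fetchAll(fake, 750, 100)); // 750 records over 8 calls
    }
}
```

Each response then stays small enough to marshal and unmarshal within the same heap settings.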
** file runClient.java
// -- this client is using Document/Literal
package ws_issue.client;

public class runClient
{
    public runClient()
    {
    }

    /**
     * @param args
     * Test out the web service.
     * Play with the atestValue variable until the exception appears.
     */
    public static void main(String[] args)
    {
        //runClient runClient = new runClient();
        long startTime;
        int atestValue = 1;
        atestValue = 2;
        //atestValue = 105; // last one to work with default memory settings in oc4j
        //atestValue = 106; // out of memory error as seen in TCP Packet Monitor
                            // fails with default memory settings in oc4j
        //atestValue = 600; // hangs client (TCP Packet Monitor shows response)
                            // when oc4j memory settings are -Xms386m -Xmx386m
        atestValue = 750;   // out of memory error as seen in TCP Packet Monitor
                            // when oc4j memory settings are -Xms386m -Xmx386m
        try
        {
            startTime = System.currentTimeMillis();
            Ws_issueInterface ws = (Ws_issueInterface) (new Ws_issue_Impl().getWs_issueInterfacePort());
            System.out.println("Time to obtain port: " + (System.currentTimeMillis() - startTime));

            // call the web service function
            startTime = System.currentTimeMillis();
            types.QueryTestOutputResponse qr = ws.queryTestOutput(new types.QueryTestOutput(atestValue));
            System.out.println("Time to call queryTestOutput: " + (System.currentTimeMillis() - startTime));

            startTime = System.currentTimeMillis();
            types.TestOutputQueryResult r = qr.getResult();
            System.out.println("Time to call getResult: " + (System.currentTimeMillis() - startTime));
            System.out.println("records returned: " + r.getRecordsReturned());
            for (int i = 0; i < atestValue; i++)
            {
                types.TestOutput t = r.getTestOutputResults()[i]; // index into the returned array
                System.out.println(t.getTestGroup() + ", " + t.getUnitNumber());
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
** file wsmain.java
package ws_issue.service;

import java.rmi.RemoteException;
import javax.xml.rpc.ServiceException;
import javax.xml.rpc.server.ServiceLifecycle;

public class wsmain implements ServiceLifecycle, ws_issueInterface
{
    public wsmain()
    {
    }

    public void init(Object p0) throws ServiceException
    {
    }

    public void destroy()
    {
        System.out.println("inside ws destroy");
    }

    // create an element of the array with some hardcoded values
    private TestOutput createTestOutput(int cnt)
    {
        TestOutput t = new TestOutput();
        t.setComments("here are some comments");
        t.setConfigRevisionNo("1");
        t.setItemNumber("123123123");
        t.setItemRevision("arev" + cnt);
        t.setTestGroup(cnt);
        t.setTestedItemNumber("123123123");
        t.setTestedItemRevision("arev" + cnt);
        t.setTestResult("testResult");
        t.setSoftwareVersion("version");
        t.setTestConditions("conditions");
        t.setStageName("world's a stage");
        t.setTestMode("Test");
        t.setTestName("test name");
        t.setUnitNumber("UnitNumber" + cnt);
        return t;
    }

    // Web service function that is called.
    // Create recCnt number of "records" to be returned.
    public TestOutputQueryResult queryTestOutput(int recCnt) throws RemoteException
    {
        System.out.println("Inside web service function queryTestOutput");
        TestOutputQueryResult r = new TestOutputQueryResult();
        TestOutput[] TOArray = new TestOutput[recCnt];
        for (int i = 0; i < recCnt; i++)
        {
            TOArray[i] = createTestOutput(i);
        }
        r.setRecordsReturned(recCnt);
        r.setTestOutputResults(TOArray);
        System.out.println("End of web service function call");
        return r;
    }

    // @param args
    public static void main(String[] args)
    {
        wsmain wsmain = new wsmain();
        int aval = 5;
        try
        {
            TestOutputQueryResult r = wsmain.queryTestOutput(aval);
            for (int i = 0; i < aval; i++)
            {
                TestOutput t = r.getTestOutputResults()[i];
                System.out.println(t.getTestGroup() + ", " + t.getUnitNumber());
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
** file ws_issueInterface.java
package ws_issue.service;

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface ws_issueInterface extends java.rmi.Remote
{
    public TestOutputQueryResult queryTestOutput(int recCnt) throws java.rmi.RemoteException;
}
** file TestOutputQueryResult.java
package ws_issue.service;

public class TestOutputQueryResult
{
    private long recordsReturned;
    private TestOutput[] testOutputResults;

    public TestOutputQueryResult()
    {
    }

    public long getRecordsReturned() { return recordsReturned; }
    public void setRecordsReturned(long recordsReturned) { this.recordsReturned = recordsReturned; }

    public TestOutput[] getTestOutputResults() { return testOutputResults; }
    public void setTestOutputResults(TestOutput[] testOutputResults) { this.testOutputResults = testOutputResults; }
}
** file TestOutput.java
package ws_issue.service;

public class TestOutput
{
    private String itemNumber;
    private String itemRevision;
    private String configRevisionNo;
    private String testName;
    private String testConditions;
    private String stageName;
    private String testedItemNumber;
    private String testedItemRevision;
    private String unitNumber;
    private String testStation;
    private String testResult;
    private String softwareVersion;
    private String operatorID;
    private String testDate; // to be datetime
    private String comments;
    private int testGroup;
    private String testMode;

    public TestOutput()
    {
    }

    public String getComments() { return comments; }
    public void setComments(String comments) { this.comments = comments; }

    public String getConfigRevisionNo() { return configRevisionNo; }
    public void setConfigRevisionNo(String configRevisionNo) { this.configRevisionNo = configRevisionNo; }

    public String getItemNumber() { return itemNumber; }
    public void setItemNumber(String itemNumber) { this.itemNumber = itemNumber; }

    public String getItemRevision() { return itemRevision; }
    public void setItemRevision(String itemRevision) { this.itemRevision = itemRevision; }

    public String getOperatorID() { return operatorID; }
    public void setOperatorID(String operatorID) { this.operatorID = operatorID; }

    public String getSoftwareVersion() { return softwareVersion; }
    public void setSoftwareVersion(String softwareVersion) { this.softwareVersion = softwareVersion; }

    public String getStageName() { return stageName; }
    public void setStageName(String stageName) { this.stageName = stageName; }

    public String getTestConditions() { return testConditions; }
    public void setTestConditions(String testConditions) { this.testConditions = testConditions; }

    public String getTestDate() { return testDate; }
    public void setTestDate(String testDate) { this.testDate = testDate; }

    public String getTestName() { return testName; }
    public void setTestName(String testName) { this.testName = testName; }

    public String getTestResult() { return testResult; }
    public void setTestResult(String testResult) { this.testResult = testResult; }

    public String getTestStation() { return testStation; }
    public void setTestStation(String testStation) { this.testStation = testStation; }

    public String getTestedItemNumber() { return testedItemNumber; }
    public void setTestedItemNumber(String testedItemNumber) { this.testedItemNumber = testedItemNumber; }

    public String getTestedItemRevision() { return testedItemRevision; }
    public void setTestedItemRevision(String testedItemRevision) { this.testedItemRevision = testedItemRevision; }

    public String getUnitNumber() { return unitNumber; }
    public void setUnitNumber(String unitNumber) { this.unitNumber = unitNumber; }

    public int getTestGroup() { return testGroup; }
    public void setTestGroup(int testGroup) { this.testGroup = testGroup; }

    public String getTestMode() { return testMode; }
    public void setTestMode(String testMode) { this.testMode = testMode; }
}

I use web services a lot and I sympathize with your issue. I struggle with similar issues, and I found this great utility that will help you confirm whether your web service is returning the data correctly to Flex. I know you said it works in other applications, but who knows if Flex is calling it correctly, etc. This utility has been the most amazing tool in helping me resolve web service issues.
http://www.charlesproxy.com/
Once you can confirm the data being returned is good, you can try several things in Flex. Try changing your result format to object or e4x, etc., and see how that plays out. Not sure where you're tapping in to look at your debugger; you might want to catch it right at the result handler, before converting to any collections.
If nothing here helps, maybe post some code to look at.
-
We have a service that is being monitored via JMX. The JVM heap usage is growing and even major collections are not able to remove the garbage. Inspecting the heap shows garbage consisting of RMI related references (mostly, if not all, related class loaders). The only way to alleviate the issue is to issue explicit gc call through JMX (that removes all accumulated garbage). Our gc related options are:
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly
And we have not touched either of: DisableExplicitGC or sun.rmi.dgc.server.gcInterval
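For what it's worth, those two DGC intervals can be set explicitly at JVM startup (the JDK default for both is one hour); a minimal sketch with an illustrative value:

```java
// Shorten the RMI DGC-driven full-collection interval. The property names are
// the standard JDK ones mentioned above; the 15-minute value is illustrative.
public class DgcTuning {
    public static void main(String[] args) {
        // Must be set before any remote object (including JMX) is exported;
        // in practice these are usually passed as -D flags on the command line.
        System.setProperty("sun.rmi.dgc.server.gcInterval", "900000"); // 15 min
        System.setProperty("sun.rmi.dgc.client.gcInterval", "900000");
        System.out.println(System.getProperty("sun.rmi.dgc.server.gcInterval"));
    }
}
```

Equivalently, java -Dsun.rmi.dgc.server.gcInterval=900000 ... on the command line. Note also that -XX:+DisableExplicitGC would suppress exactly the System.gc() this mechanism relies on.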
I believe the problem is supposed to be addressed by the code in sun.misc.GC.Daemon:
public void run() {
    for (;;) {
        long l;
        synchronized (lock) {
            l = latencyTarget;
            if (l == NO_TARGET) { /* No latency target, so exit */
                GC.daemon = null;
                return;
            }
            long d = maxObjectInspectionAge();
            if (d >= l) {
                /* Do a full collection.  There is a remote possibility
                 * that a full collection will occurr between the time
                 * we sample the inspection age and the time the GC
                 * actually starts, but this is sufficiently unlikely
                 * that it doesn't seem worth the more expensive JVM
                 * interface that would be required.
                 */
                System.gc();
                d = 0;
            }
            /* Wait for the latency period to expire,
             * or for notification that the period has changed */
            try {
                lock.wait(l - d);
            } catch (InterruptedException x) {
                continue;
            }
        }
    }
}
For some reason the above System.gc is not being invoked (this has been verified by looking at the gc logs). Does anyone have a suggestion as to how to address the issue? -
I've been writing a multi-threaded, non-blocking I/O game server and I'm kinda taking a break from hammering out code at the moment to analyze how efficient the server is. With 100-300 clients each transmitting data and receiving a proper response once every second or so, the server has almost 0% CPU load (on an AMD 4000+ 64bit CPU) so I am very happy with this.
However, I find that with -verbose:gc on, I can watch the memory ever so slowly leak away. For instance, I will lose around 3 kb/minute with an initial heap of 128mb (256mb max) and a few clients, and this can get as high as 100 kb/minute with 200 clients hammering the server. I have looked quite hard at the code and I cannot see anywhere that I am continuously allocating memory and retaining references to it, so everything should be getting GC'd, but some additional kilobytes remain after each GC.
What I am wondering is: how can I profile the memory usage to, for instance, keep an up-to-date count of how many objects of type X (including Strings and ArrayLists and such) are currently in existence at any given time? This would at least give me a better idea of where this potential memory leak is.
Thanks in advance,
James
You may find the JVM is still warming up. It attempts to optimise as it goes and can still be consuming more memory as it re-optimises.
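On the per-type object counts James asks about: the practical answer on a modern JDK is a heap histogram (e.g. jmap -histo <pid>) or a profiler, but the idea can be sketched in plain Java with a weak-reference registry. This is a hand-rolled illustration, not a real profiler, and it only counts objects you explicitly register:

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy instance tracker: register objects of interest, then count how many are
// still strongly reachable. WeakReferences let the GC collect them normally.
public class InstanceTracker {
    private static final Map<String, List<WeakReference<Object>>> REG = new HashMap<>();

    static synchronized void register(Object o) {
        REG.computeIfAbsent(o.getClass().getName(), k -> new ArrayList<>())
           .add(new WeakReference<>(o));
    }

    static synchronized int liveCount(String className) {
        int live = 0;
        for (WeakReference<Object> r : REG.getOrDefault(className, Collections.emptyList()))
            if (r.get() != null) live++; // cleared refs belonged to collected objects
        return live;
    }

    public static void main(String[] args) {
        Object a = new StringBuilder("x"), b = new StringBuilder("y"); // strongly held
        register(a);
        register(b);
        System.out.println(liveCount("java.lang.StringBuilder")); // 2
    }
}
```

Comparing liveCount before and after a forced GC shows which registered types are truly retained; for unmodified code, a jmap histogram gives the same information without instrumentation.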
-
URGENT: Memory leak in UIX 2.1.7?????
We are using UIX 2.1.7 and when searching for potential memory leaks we found a situation where UIX does not release some of our dataObjects after a page has been rendered. Analyzing the problem we found out that there are CompositeRenderingContext objects which seem to be reused but they always maintain a reference to the dataObjects that have been used when rendering the last page.
The references are: CompositeRenderingContext holds CompositeRenderingContext holds Comp...... holds TableRenderingContext holds CustomDataObject
In our case this is a very bad behaviour because some of our dataObjects are quite large and maintain references to lots of other objects, which can never be released (even after all users logged out) unless the RenderingContext releases our dataObjects.
So my question:
Are these observations correct?
Is there a way to make the renderingContext release the dataObjects after rendering the page - or to explicitly say remove all unused renderingContext objects?
How many such CompositeRenderingContext trees can the UIXFramework at most hold?
Please help - this is very urgent as we have a customer that thinks the memory consumption of our application grows indefinitely and they cannot go into production with such a problem.
Thanks,
Guido
Hi Guido,
This was fixed in UIX 2.1.16 and UIX 2.2.0. This is not an unbounded memory leak; it will eventually peak. Patching to UIX 2.1.16 or later will resolve the problem. You'll have to contact your Oracle support team to get this release. I'm not sure how that works.
JDeveloper 10g Preview has UIX 2.2.
Thanks,
Jeanne -
Memory leaks in NI-DAQ 6.9.1
Can anyone tell me if the API for NI-DAQ 6.9.1 has been purified to eliminate all memory leaks? I'm using DIG_Block_In() etc. with PCI-653X DIO cards. My Win2K MSVC++ 6 Purify (tm) reports many potential leaks similar to the following:
[I] MPK: Potential memory leak of 11550 bytes from 350 blocks allocated in RegistrySession::~RegistrySession(void)
Distribution of potentially leaked blocks
11550 bytes from 350 blocks of 33 bytes (first block: 0x04a34ae8)
Allocation location
malloc [msvcrt.DLL]
RegistrySession::~RegistrySession(void) [nipsm.dll]
moot::moot(basic_string,allocator> const&) [nipsm.dll]
moot::moot(basic_string,allocator> const&) [nipsm.dll]
moot::load(PSMQueue&) [nipsm.dll]
BinaryFileProxy::Load(PSMQueue&) [nipsm.dll]
BinaryFileProxy::Begin(void) [nipsm.dll]
KeyProxy::Open(moot const&,DWORD) [nipsm.dll]
CfqCloseConnectionToServer [nicfq32.dll]
CfqQueryDigitalAvailability [nicfq32.dll]
moot::moot(basic_string,allocator> const&) [nipsm.dll]
John,
Thanks for your reply. I'll include another Purify "Potential Leak" report and try to annotate it more.
This is the biggest reported leak at 11550 bytes from the destructor of an object called RegistrySession found in nipsm.dll.
The lines below "RegistrySession::~RegistrySession(void) [nipsm.dll]" form a call stack reading down, i.e. object moot called RegistrySession, BinaryFileProxy::Load() called that, etc. The RegistrySession destructor is really the only culprit for allocating memory and then losing it. The whole chain of events starts, however, at a call to CfqQueryDigitalAvailability in nicfq32.dll.
Now, my application certainly didn't call this and I have no idea what it's trying to do, but I suspect that it's called sometime
during the loading and initialization of nidaq32.dll which I link against.
If you indeed Purify your libraries prior to release then you should be able to duplicate my results.
I can send you my code if you like.
Thanks for your help,
Dan Stine
MPK: Potential memory leak of 11550 bytes from 350 blocks allocated in RegistrySession::~RegistrySession(void)
Distribution of potentially leaked blocks
Allocation location
malloc [msvcrt.dll]
RegistrySession::~RegistrySession(void) [nipsm.dll]
moot::moot(basic_string,allocator> const&) [nipsm.dll]
moot::moot(basic_string,allocator> const&) [nipsm.dll]
moot::load(PSMQueue&) [nipsm.dll]
BinaryFileProxy::Load(PSMQueue&) [nipsm.dll]
BinaryFileProxy::Begin(void) [nipsm.dll]
KeyProxy::Open(moot const&,DWORD) [nipsm.dll]
CfqCloseConnectionToServer [nicfq32.dll]
CfqQueryDigitalAvailability [nicfq32.dll] -
Memory leak in query preparation in dbxml-2.3.10
Hi,
We are using dbxml-2.3.10 in our production. I have run valgrind and see big memory leaks in two categories:
Definitely lost:
The complete stack trace is as below:
==25482== 13,932 bytes in 129 blocks are definitely lost in loss record 32 of 35
==25482==    at 0x4004790: operator new(unsigned) (vg_replace_malloc.c:164)
==25482==    by 0x4144131: XQSort::SortSpec::staticResolution(StaticContext*, StaticResolutionContext&) (in /usr/netscreen/GuiSvr/utils/dbxml-2.3.10/lib/libxqilla.so.1.0.0)
==25482==    by 0x4144BEC: XQSort::staticResolution(StaticContext*, StaticResolutionContext&) (in /usr/netscreen/GuiSvr/utils/dbxml-2.3.10/lib/libxqilla.so.1.0.0)
==25482==    by 0x4145A2A: XQFLWOR::staticResolutionImpl(StaticContext*) (in /usr/netscreen/GuiSvr/utils/dbxml-2.3.10/lib/libxqilla.so.1.0.0)
==25482==    by 0x4146018: XQFLWOR::staticResolution(StaticContext*) (in /usr/netscreen/GuiSvr/utils/dbxml-2.3.10/lib/libxqilla.so.1.0.0)
==25482==    by 0x41780CC: XQQuery::staticResolution(StaticContext*) (in /usr/netscreen/GuiSvr/utils/dbxml-2.3.10/lib/libxqilla.so.1.0.0)
==25482==    by 0x4563D6E: DbXml::StaticResolver::optimize(XQQuery*) (Optimizer.cpp:64)
==25482==    by 0x4563C42: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:42)
==25482==    by 0x4563C5B: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:39)
==25482==    by 0x4563C5B: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:39)
==25482==    by 0x4563C5B: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:39)
==25482==    by 0x4563C5B: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:39)
==25482==    by 0x4563C5B: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:39)
==25482==    by 0x4563C5B: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:39)
==25482==    by 0x4563C5B: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:39)
==25482==    by 0x4563C5B: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:39)
==25482==    by 0x4563C5B: DbXml::Optimizer::startOptimize(XQQuery*) (Optimizer.cpp:39)
==25482==    by 0x4446CE9: DbXml::QueryExpression::QueryExpression(std::string const&, DbXml::XmlQueryContext&, DbXml::Transaction*) (ScopedPtr.hpp:41)
==25482==    by 0x44A3F63: DbXml::XmlManager::prepare(std::string const&, DbXml::XmlQueryContext&) (XmlManager.cpp:601)
==25482==    by 0x82B3152: XQuery::prepare(unsigned, unsigned short, char const*, char const*, char const*, char const*, char const*, RefCountedAutoPtr<XdbQueryContext>, unsigned) (XQuery.cpp:152)
We see another huge leak in the "possibly lost" category:
371,895 bytes in 121 blocks are possibly lost in loss record 33 of 35
==25482==    at 0x4004405: malloc (vg_replace_malloc.c:149)
==25482==    by 0x818C330: malloc (guiDaemon.c:783)
==25482==    by 0x44A5B0C: DbXml::SimpleMemoryManager::allocate(unsigned) (Globals.cpp:67)
==25482==    by 0x497CCFC: xercesc_2_7::XMemory::operator new(unsigned, xercesc_2_7::MemoryManager*) (in /usr/netscreen/GuiSvr/utils/dbxml-2.3.10/lib/libxerces-c.so.27.0)
==25482==    by 0x48D681A: xercesc_2_7::XMLPlatformUtils::makeMutex(xercesc_2_7::MemoryManager*) (in /usr/netscreen/GuiSvr/utils/dbxml-2.3.10/lib/libxerces-c.so.27.0)
==25482==    by 0x44A61B6: DbXml::Globals::initialize(DbEnv*) (Globals.cpp:78)
==25482==    by 0x44A766C: DbXml::Manager::initialize(DbEnv*) (Manager.cpp:167)
==25482==    by 0x44A8CBB: DbXml::Manager::Manager(DbEnv*, unsigned) (Manager.cpp:98)
==25482==    by 0x44A2EAD: DbXml::XmlManager::XmlManager(DbEnv*, unsigned) (XmlManager.cpp:58)
==25482==    by 0x83398EA: XdbImpl::initDb(bool, int) (XdbImpl.cpp:478)
==25482==    by 0x8337407: XdbImpl::start(char const*, int) (XdbImpl.cpp:159)
==25482==    by 0x8321123: Xdb::start(char const*, int) (Xdb.cpp:56)
Are these leaks addressed in some 2.3.10 patch? Please suggest a way to resolve them.
PS: We are trying to upgrade to 2.5.16, but there are certain issues, already reported in another thread, due to which we are not able to migrate.
Have you tried turning on diagnostic logging in BDB XML and trying to parse the output? The library gives out some pretty detailed output. It might be helpful to see what the query optimizer is trying to do, as well as what the XQuery you're running looks like, to see if we can either pinpoint the bug or find a suitable workaround that doesn't trigger the memory leak.
-
Hi all,
I have a problem with 2 SunFire 240s (4Gb of RAM) running Solaris 10 in a Veritas cluster.
These nodes are 2 NFS servers and they have 10 NFS clients.
We have a memory leak on these servers: the memory utilization increases day by day.
The memory seems to be allocated by the kernel and not by any process.
So I would like to know if this is a common issue (NFS?) or an isolated case.
Thanks in advance for your help
Regards
Daniele
Edited by: Danx on Jan 2, 2008 5:23 PM
That message relates to how the application deals with its threads, which for the most part isn't actually an issue. However, since it does have the potential to cause a leak under certain circumstances, we did make a change in 10.3 to address that issue, so I suggest you upgrade to that release.
-
Hello,
For my work I need to log in with the Cisco VPN client. This works fine, but sometimes I get a memory leak, and then my Mac gets a grey screen of death. The error log gives the following error:
+Mon Jun 2 13:56:05 2008+
+panic(cpu 1 caller 0x001A8C8A): Kernel trap at 0x00197e36, type 14=page fault, registers:+
+CR0: 0x8001003b, CR2: 0x03667004, CR3: 0x01177000, CR4: 0x00000660+
+EAX: 0x12da7020, EBX: 0x00000014, ECX: 0x00000025, EDX: 0x00000094+
+CR2: 0x03667004, EBP: 0x20e27e68, ESI: 0x03667004, EDI: 0x12da7020+
+EFL: 0x00010212, EIP: 0x00197e36, CS: 0x00000008, DS: 0x00000010+
+Error code: 0x00000000+
+Backtrace, Format - Frame : Return Address (4 potential args on stack)+
+0x20e27bf8 : 0x12b0f7 (0x4581f4 0x20e27c2c 0x133230 0x0)+
+0x20e27c48 : 0x1a8c8a (0x461720 0x197e36 0xe 0x460ed0)+
+0x20e27d28 : 0x19ece5 (0x20e27d40 0x50 0x20e27e68 0x197e36)+
+0x20e27d38 : 0x197e36 (0xe 0x20e20048 0x10 0x21260010)+
+0x20e27e68 : 0x2126a3c9 (0x20e27ed0 0x20e27ecc 0x20e27ed4 0x20e27ed8)+
+0x20e27ef8 : 0x2154b4 (0x0 0x3260404 0x2 0x20e27f74)+
+0x20e27f68 : 0x2158bb (0x0 0x1cf61700 0x0 0x31bc2ac)+
+0x20e27fc8 : 0x19eadc (0x31bc284 0x0 0x1a20b5 0x2afe128)+
+Backtrace terminated-invalid frame pointer 0+
+Kernel loadable modules in backtrace (with dependencies):+
com.cisco.nke.ipsec(2.0.1)@0x21268000->0x212d6fff
+BSD process name corresponding to current thread: kernel_task+
+Mac OS version:+
9C7010
+Kernel version:+
+Darwin Kernel Version 9.2.2: Tue Mar 4 21:17:34 PST 2008; root:xnu-1228.4.31~1/RELEASE_I386+
+System model name: MacBook3,1 (Mac-F22788C8)+
Is this a bug in Mac OS X or in Cisco?
Why do you think it is a memory leak? It sounds like just a Cisco bug. There is a reason Cisco version numbers are 6 digits long. Try getting a newer version.
-
I think I've got a memory leak and could use some advice
We've got ourselves a sick server/application and I'd like to gather a little community advice if I may. I believe the evidence supports a memory leak in my application somewhere and would love to hear a second opinion and/or suggestions.
The issue has been that used memory (as seen by FusionReactor) will climb up to about 90%+, and then the service will start to queue requests and eventually stop processing them altogether. A service restart will bring everything back up again, and it could run for 2 days or 2 hours before the issue repeats itself. Due to the inconsistent uptime, I can't be sure whether it's some troublesome bit of code that runs only occasionally or something that's a core part of the application. My current plan is to review the heap graph on the "sick" server, look for sudden jumps in memory usage, then review the IIS logs for requests at those times to try to establish a pattern. If anyone has some better suggestions, though, I'm all ears! The following are some facts about this situation that may be useful.
The "sick" server:
- CF 9.0.1.274733 Standard
- FusionReactor 4.0.9
- Win2k8 Web R2 (IIS7.5)
- Dual Xeon 2.8GHz CPUs
- 4GB RAM
JVM Config (same on "sick" and "good" servers):
- Initial and Max heap: 1536
-server -Xss10m -Dsun.io.useCanonCaches=false -XX:PermSize=192m -XX:MaxPermSize=256m -XX:+UseParNewGC -Xincgc -Xbatch -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib -Dcoldfusion.dotnet.disableautoconversion=true
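One thing worth checking with those settings: the total process footprint is more than heap. A rough back-of-the-envelope (the thread count is purely illustrative; the other numbers come from the flags above):

```java
// Rough JVM footprint arithmetic: heap + permgen + thread stacks.
// -Xss10m is unusually large, so stacks dominate quickly as threads grow.
public class FootprintEstimate {
    static int totalMb(int heapMb, int permMb, int stackMb, int threads) {
        return heapMb + permMb + stackMb * threads;
    }
    public static void main(String[] args) {
        // 1536 MB heap, 256 MB MaxPermSize, 10 MB stacks, hypothetical 100 threads
        System.out.println(totalMb(1536, 256, 10, 100)); // 2792 (MB) on a 4 GB box
    }
}
```

Stack memory is only committed as it is used, so this is an upper bound, but it shows why -Xss10m deserves a second look on a 4 GB server.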
What I believe a "healthy" server graph should look like (from "good" server):
And the "sick" server graph looks like this:
@AmericanWebDesign, I would concur with BKBK (in his subsequent reply) that a more reasonable explanation for what you're seeing (in the growth of heap) is something using and holding memory, which is not unusual for the shared variable scopes: session, application, and/or server. And the most common is sessions.
If that’s enough to get you going, great. But I suspect most people need a little more info. If this matter were easy and straightforward, it could be solved in a tweet, but it’s not, so it can’t.
Following are some more thoughts, addressing some of your concerns and hopefully pointing you in some new directions to find resolution. (I help people do it all the time, so the good news is that it can be done, and answers are out there for you.)
Tracking Session Counts
First, as for the observation we’re making about the potential impact of sessions, you may be inclined to say “but I don’t put that much in the session scope”. The real question to start with, though, is “how many sessions do you have”, especially when memory use is high like that (which may be different than how many you have right now). I’ve helped many people solve such problems when we found they had tens or hundreds of thousands of sessions. How can you tell?
a) Well, if you were on CF Enterprise, you could look at the Server Monitor. But since you’re not, you have a couple of choices.
b) First, any CF shop could use a free tool called ServerStats, from Mark Lynch, which uses the undocumented servicefactory objects in CF to report a count of sessions, overall and per application, within an instance. Get it here: http://www.learnosity.com/techblog/index.cfm/2006/11/9/Hacking-CFMX--pulling-it-all-together-serverStats . You just drop the files (within the zip) into a web-accessible directory and run the one CFM page to get the answer instantly.
c) Since you mention using FusionReactor 4.0.9, here’s another option: those using FR 4 (or 4.5, a free update for you since you’re on FR 4) can use its available (but separately installed) FusionReactor Extensions for CF, a free plugin (for FR, at http://www.fusion-reactor.com/fr/plugins/frec.cfm). It causes FR to grab that session count (among many other really useful things about CF) to log it every 5 seconds, which can be amazingly helpful. And yes, FREC can grab that info whether one is on CF Standard or Enterprise.
And let's say you find you do have tens of thousands of sessions (or more). You may wonder, "how does that happen?" The most common explanation is spiders and bots hitting your site (from legit or unexpected search engines and others). Some of these visit your site perhaps daily to gather up the content of all the pages of your site, crawling through every page. Each such page hit will create a new session. For more on why and how (and some mitigation), see:
http://www.carehart.org/blog/client/index.cfm/2006/10/4/bots_and_spiders_and_poor_CF_performance
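One quick way to test the bot theory is to mine your web server's access log for requests that arrive without a session cookie, since each of those typically spins up a brand-new session. The Python sketch below is illustrative only: the log line layout and the `cookie:` field are my assumptions, so you'd adapt the regex to whatever your access log actually records.

```python
import re
from collections import Counter

# Hypothetical access-log lines; the cookie field here is an assumption,
# adjust the pattern to your own log format.
SAMPLE_LOG = """\
66.249.66.1 "Googlebot/2.1" cookie:-
66.249.66.1 "Googlebot/2.1" cookie:-
192.168.1.10 "Mozilla/5.0" cookie:CFID=123
66.249.66.1 "Googlebot/2.1" cookie:-
"""

def estimate_new_sessions(log_text):
    """Count requests that arrived with no session cookie, per user agent.

    Each cookieless request is a rough proxy for a brand-new session being
    created, since bots typically never send the session cookie back.
    """
    counts = Counter()
    for line in log_text.splitlines():
        m = re.match(r'\S+ "([^"]+)" cookie:(\S+)', line)
        if m and m.group(2) == "-":
            counts[m.group(1)] += 1
    return counts
```

Run against a day's log, a breakdown like this usually makes it obvious whether one or two crawlers account for most of the session creation.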
About “high memory”
All that said, I’d not necessarily conclude so readily that your “bad” memory graph is “bad”. It could just be “different”.
Indeed, you say you plan to “look for sudden jumps in memory usage“, but if you look at your “bad” graph, it simply builds very slowly. I’d think this supports the notion that BKBK and I are asserting: that this is not some one request that “goes crazy” and uses lots of memory, but instead is the “death by a thousand cuts” as memory use builds slowly. Even then, I’d not jump at a concern that “memory was high”.
What really matters, when memory is "high", is whether you (or the JVM) can do a GC (garbage collection) to recover some (or perhaps much) of that "high, used memory". Because it's possible that while it "was" in use in the past (as the graph shows), it might no longer be "in use" at the moment.
Since you have FR, you can use its “System Metrics page” to do a GC, using the trash can in the top left corner of the top right-most memory graph. (Those with the CFSM can do a GC on its “Memory Usage Summary” page, and SeeFusion users can do it on its front page.)
If you do a GC, and memory drops a lot, then you had memory that "had been" but no longer "still was" in use, and so the high memory shown was not a problem. And the JVM can sometimes be lazy (because it's busy) about getting to doing a GC, so this is not that unusual. (That said, I see you have added the -Xincgc arg to your JVM. Do you realize that switches the JVM to its incremental collector, which is rarely what you want? I understand that people trade JVM args like baseball cards, trying to solve problems for each other, but I'd argue that's not the place to start. In fact, I rarely find that any new JVM args are needed to solve most problems.)
(Speaking of which, why did you set the -Xss value? And do you know whether you were raising or lowering it from the default?)
Are you really getting “outofmemory” errors?
But certainly, if you do hit a problem where (as you say) you find requests hanging, etc., then you will want to get to the bottom of that. And if indeed you are getting “outofmemory” problems, you need to solve those. To confirm if that’s the case, you’ll really want to look at the CF logs (specifically the console or “out” logs). For more on finding those logs, as well as a general discussion of memory issues (understanding/resolving them), see:
http://www.carehart.org/blog/client/index.cfm/2010/11/3/when_memory_problems_arent_what_they_seem_part_1
This is the first of a planned series of blog entries (which I’ve not yet finished) on memory issues which you may find additionally helpful.
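If you'd rather scan those "out" logs in bulk than eyeball them, a small script can tally the OutOfMemoryError variants, which matters because "Java heap space" vs. "GC overhead limit exceeded" vs. "PermGen space" point at quite different fixes. This is just a sketch in Python; the sample lines and their layout are my assumption, so adapt the matching to whatever your console log actually looks like.

```python
from collections import Counter

# A few sample lines in the spirit of a CF/JRun console ("-out") log; the
# exact format varies by install, so treat this layout as an assumption.
SAMPLE_OUT_LOG = """\
01/09 14:02:11 Information [jrpp-12] - Starting logging
01/09 14:05:03 Error [jrpp-7] - java.lang.OutOfMemoryError: Java heap space
01/09 14:05:04 Error [jrpp-9] - java.lang.OutOfMemoryError: GC overhead limit exceeded
"""

def count_oom_kinds(log_text):
    """Tally OutOfMemoryError lines by their message suffix."""
    kinds = Counter()
    marker = "java.lang.OutOfMemoryError:"
    for line in log_text.splitlines():
        if marker in line:
            # keep only the text after the marker, e.g. "Java heap space"
            kinds[line.split(marker, 1)[1].strip()] += 1
    return kinds
```

If the tally comes back empty across all the out logs, that's a strong hint the hangs are not actually memory errors at all.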
But I’ll note that you could have other explanations for “hanging requests” which may not necessarily be related to memory.
Are you really getting “queued” requests?
You also say that "the service will start to queue requests and eventually stop processing them altogether". I'm curious: do you really mean "queuing", in the sense of watching something in CF that tells you that? You can find a count of queued requests with tools like CFSTAT, JRun metrics, the CF Server Monitor, or again FREC. Are you seeing one of those? Or do you just mean that you find that requests no longer run?
I address matters related to requests hanging, and some ways to address them, in other entries:
http://www.carehart.org/blog/client/index.cfm/2010/10/15/Lies_damned_lies_and_CF_timeouts
http://www.carehart.org/blog/client/index.cfm/2009/6/24/easier_thread_dumps
Other server differences
You presented us a discussion of two servers, but you've left us in the dark on potential differences between them. First, you showed the specs for the "sick" server, but not the "good" one. Should we assume they are identical, as you said the jvm.config is?
Also, is there any difference in the pattern of traffic (and/or the sites themselves) on the two servers? If they differ, then that could be where the explanation lies. Perhaps the sites on one are more inclined to be visited often by search engine spiders and bots (if the sites are more popular or have simply become well known to search engines). There are still other potential differences that could explain things, but these are all enough to hopefully get you started.
I do hope that this is helpful. I know it’s a lot to take in. Again, if it was easier to understand and explain, there wouldn’t be so much confusion. I do realize that many don’t like to read long emails (let alone write them), which only exacerbates the problem. Since all I do each day is help people resolve such problems (as an independent consultant, more at carehart.org/consulting), I like to share this info when I can (and when I have time to elaborate like this), especially when I think it may help someone facing these (very common) challenges.
Let us know if it helps or raises more questions. :-)
/charlie -
SQL Server 2008R2 SP2 Query optimizer memory leak ?
It looks like we are facing a SQL Server 2008R2 query optimizer memory leak.
We have below version of SQL Server
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
Jun 28 2012 08:36:30
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
The instance has maximum memory set to 20 GB.
After executing a huge query (2,277 kB of SQL generated by IBM SPSS Clementine) with tons of CASE expressions, a lot of AND/OR conditions in the WHERE and CASE clauses, and multiple subqueries, the server stops responding with an out-of-memory error in the 'internal' resource pool, and the query optimizer has allocated all the memory.
From Management Data Warehouse we can find that the query was executed at
7.11.2014 22:40:57
Then at 1:22:48 we receive FAIL_PAGE_ALLOCATION 1
2014-11-08 01:22:48.70 spid75 Failed allocate pages: FAIL_PAGE_ALLOCATION 1
And then tons of below errors
2014-11-08 01:24:02.22 spid87 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:02.22 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:02.22 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:02.30 Server Error: 17312, Severity: 16, State: 1.
2014-11-08 01:24:02.30 Server SQL Server is terminating a system or background task Fulltext Host Controller Timer Task due to errors in starting up the task (setup state 1).
2014-11-08 01:24:02.22 spid74 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:02.22 spid74 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:13.22 Server Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:13.22 spid87 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:13.22 spid87 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:13.22 spid63 Error: 701, Severity: 17, State: 130.
2014-11-08 01:24:13.22 spid63 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:13.22 spid57 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:13.22 spid57 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:13.22 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:18.26 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:24.43 spid81 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:24.43 spid81 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:18.25 Server Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:18.25 Server BRKR TASK: Operating system error Exception 0x1 encountered.
2014-11-08 01:24:30.11 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:30.11 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:35.18 spid57 Error: 701, Severity: 17, State: 131.
2014-11-08 01:24:35.18 spid57 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:35.18 spid71 Error: 701, Severity: 17, State: 193.
2014-11-08 01:24:35.18 spid71 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:35.18 Server Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:35.41 Server Error: 17312, Severity: 16, State: 1.
2014-11-08 01:24:35.41 Server SQL Server is terminating a system or background task SSB Task due to errors in starting up the task (setup state 1).
2014-11-08 01:24:35.71 Server Error: 17053, Severity: 16, State: 1.
2014-11-08 01:24:35.71 Server BRKR TASK: Operating system error Exception 0x1 encountered.
2014-11-08 01:24:35.71 spid73 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:35.71 spid73 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:46.30 Server Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:51.31 Server Error: 17053, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:51.31 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:51.31 Logon Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
The last error message is half an hour after the initial out of memory, at 2014-11-08 01:52:54.03. Then the instance shuts down completely.
From the memory information in the error log we can see that all the memory is consumed by the QUERY_OPTIMIZER
Buffer Pool              Value
Committed                2621440
Target                   2621440
Database                 130726
Dirty                    3682
In IO                    0
Latched                  1
Free                     346
Stolen                   2490368
Reserved                 0
Visible                  2621440
Stolen Potential         0
Limiting Factor          17
Last OOM Factor          0
Last OS Error            0
Page Life Expectancy     28
2014-11-08 01:22:48.90 spid75
Process/System Counts Value
Available Physical Memory 29361627136
Available Virtual Memory 8691842715648
Available Paging File 51593969664
Working Set 628932608
Percent of Committed Memory in WS 100
Page Faults 48955000
System physical memory high 1
System physical memory low 0
Process physical memory low 1
Process virtual memory low 0
MEMORYCLERK_SQLOPTIMIZER (node 1) KB
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
SinglePage Allocator 19419712
MultiPage Allocator 128
Memory Manager KB
VM Reserved 100960236
VM Committed 277664
Locked Pages Allocated 21483904
Reserved Memory 1024
Reserved Memory In Use 0
On the other side, MDW reports that MEMORYCLERK_SQLOPTIMIZER increases from the execution of the query up to the point of out of memory, but the average value is 54.7 MB during that period, as can be seen on the attached graph.
We have encountered this issue two times already (every time the critical query is executed).
Hi,
This does seem to me like a kind of memory leak, and it appears to come from the SQL optimizer, which stole so much memory from the buffer pool that there was no memory left to allocate a new page.
MEMORYCLERK_SQLOPTIMIZER (node 1) KB
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
SinglePage Allocator 19419712
MultiPage Allocator 128
Can you post the complete DBCC MEMORYSTATUS output that was generated in the errorlog? Is this the only message in the errorlog, or are there more messages before and after it?
select (sum(single_pages_kb) * 1024) / 8192 as total_stolen_pages, type
from sys.dm_os_memory_clerks
group by type
order by total_stolen_pages desc
and
select sum(pages_allocated_count * page_size_in_bytes) / 1024 as allocated_kb, type
from sys.dm_os_memory_objects
group by type
If you can post the output of the above two queries, along with the DBCC MEMORYSTATUS output, on some shared drive and share the location with us here, I will try to find out what is leaking memory.
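While gathering that, it can also help to take several DBCC MEMORYSTATUS captures over time and diff them to watch which clerk is growing. Below is a small Python sketch that parses one clerk section out of a saved text capture; the sample text mirrors the MEMORYCLERK_SQLOPTIMIZER block quoted above, and the whitespace-based parsing is an assumption about how your capture is formatted.

```python
# Sketch: pull the per-counter KB figures out of one clerk section of a
# saved DBCC MEMORYSTATUS text capture, so successive captures can be
# compared to see which counter is climbing.
SAMPLE = """\
MEMORYCLERK_SQLOPTIMIZER (node 1)        KB
VM Reserved                              0
VM Committed                             0
SinglePage Allocator                     19419712
MultiPage Allocator                      128
"""

def parse_clerk(text):
    """Return {counter_name: kb} for one clerk section."""
    values = {}
    for line in text.splitlines()[1:]:   # skip the "... KB" header row
        parts = line.rsplit(None, 1)     # counter name, then the KB figure
        if len(parts) == 2 and parts[1].lstrip("-").isdigit():
            values[parts[0].strip()] = int(parts[1])
    return values
```

Diffing two such dicts taken an hour apart would show directly whether the SinglePage Allocator figure is the one growing toward the OOM.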
You can certainly apply SQL Server 2008 R2 SP3 and see if the issue subsides, but I am not sure whether this is fixed there or is actually an open bug.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
The Security ID is not valid causes memory leak in Ldap
Hi, all:
We are using the Novell LDAP Provider to allow our server application
to be configured in LDAP mode. One of our clients is experiencing a memory
leak, and we believe the problem could be related to a "The Security
ID is not valid" error. When he changes to native Active Directory mode
the memory leak disappears (he still gets the "Security ID" error, but
all works fine). So we think the problem caused by the "Security
ID" error is affecting the Novell LDAP Provider library. He is using a
Windows 2008 R2 platform.
My question is: do you know if these kinds of errors are properly
handled so that resources are released?
We are using the 2.1.10.1 version of the library.
Many thanks for your help,
Luis
luixrodix
luixrodix's Profile: http://forums.novell.com/member.php?userid=107647
View this thread: http://forums.novell.com/showthread.php?t=435894
ab;2091346 Wrote:
>
> Have the exact error message? Is it safe to assume you are querying
> MAD and not eDirectory? I make that assumption because you mentioned SIDs.
>
> Good luck.
>
> Yes, the message is: "The Security ID structure is invalid". It is
> Windows error 1337 and cannot be fixed manually, since SIDs cannot be
> modified manually.
>
>
> When the client has configured our tool as Active Directory via LDAP
> (using the Novell LDAP library), the error is not logged anywhere (that's
> another reason for thinking that the library is not handling the error
> correctly) and the application leaks memory every time the AD server
> is queried. But when they use the same Active Directory in a pure AD
> configuration (using System.DirectoryServices and the Windows API directly),
> the error is logged and the memory of the application remains stable.
>
> I asked the client to fix the Security ID problem, trying to find the
> user(s) whose SIDs are wrong and re-creating them, but if the Novell.Ldap
> library is not handling this error correctly the potential problem is
> still there.
>
>
>
>
>
> On 03/30/2011 08:36 AM, luixrodix wrote:
> >
> > I mean, every time a query is sent to the LDAP server an Invalid SID is
> > reported, and some resources are not released. We think that the problem
> > could be in the LDAP Novell library.
> >
> >
luixrodix
luixrodix's Profile: http://forums.novell.com/member.php?userid=107647
View this thread: http://forums.novell.com/showthread.php?t=435894 -
I'm running Ubuntu Linux, and it looks like I'm seeing a memory leak that I wasn't seeing when I ran it on my PC.
I eventually get a heap exception. I was using the same maximum heap size of 500M.
Is there anything I have to do special for linux?
Thanks
morgalr wrote:
Yes, my crystal ball says to fix the leak you observe in Linux, and it will fix any potential problem you have not viewed in Windows yet.
(chuckle) Your comment expresses something that flicked through my mind even on reading the title. The thought that flicked through my mind did not quite have the eloquence of your words. -
After 5 Releases, Why Doesn't Mozilla Care About Memory Leaks?
I'm baffled as to why this is still a problem. After a few hours of use, my memory usage while using Firefox 5.0 is at 1.5GB. I've followed every FAQ, disabled every extension, done everything to stop the memory leaks when using this browser and I'm growing very tired.
I have a Core i7 MacBook Pro with 4GB RAM and Firefox still manages to bring it to it knees. I've been a devoted Firefox user since 2002-3 and I've just about lost my faith in this browser.
Chrome is a featureless, ugly, dinky browser that I hate to use, but it leaves Firefox in its dust performance-wise. Where is the happy medium? I don't get it.
My favorite answer is always, "disable your extensions." Here are the problems with that:
1. Without extensions, Firefox is nothing. I might as well use Chrome.
2. It never seems to help, and when it does a little, it is difficult to figure out which extensions are doing the most damage. Why doesn't Firefox provide a way to look at which extensions are using the most memory?
3. Firefox should lay that smack down on extensions that could potentially leak memory, and yet, nothing. It should at least steal memory back when it gets out of control, regardless of what extension is using it.
4. Mozilla recommends some extensions that are supposed to help reduce memory usage, but none of them work on OS X.
I'm exhausted. I shouldn't have to restart my browser a million times a day to get anything done. Where are the real solutions? How do years go by with problems like this still getting worse? Firefox 5 was supposed to be better at handling memory, but it's only gotten worse for me.
When will the madness end? We don't want new features, we want performance! I've always loved this browser, but is it really a surprise that Chrome is taking over?
To sum it up, if your browser is slower than Internet Explorer, you need to hurry up and fix the problem or pack it up and go home.
My sentiments exactly!! I have all the exact same complaints and concerns, and I've also tried the solutions provided, to no avail.
This the only beef I have with FireFox, but it's a bad one and I've been shopping around for a better browser. Chrome is the best alternative I've found, but it still isn't quite at parity yet.
Please fix this issue or at least make an attempt at it to let your users know it's somewhat of a future priority.
Attached is a screenshot of memory usage after 1 hour, and this is with the new FF 5 update. -
ListOfValues Cancel button creates memory leak
I believe one nuance of the listOfValues element (in an LOV page) gives rise to a memory leak. The examples and listOfValues documentation show storing data needed by listOfValues events in the session. This is all fine and good if the user clicks on the "Select" button, as you get both a lovSelect and lovUpdate event to work with and clean out session data. But in the case of a user clicking the "Cancel" button, no event is fired, nor is a forward to another DataAction done. The window is simply closed. This strands all the data for the LOV (which could be quite sizable) in the user's session. You can't send this data on the request, because the LOV data must exist across several requests.
Am I completely missing something? How does one clean out a user's session when the Cancel button is clicked on a listOfValues component?
Brad
I am using JDeveloper 9.0.5.2. The restrictions of the project prevent using a newer version of JDeveloper. I am using ADF/Struts, and pure UIX (no JSPs).
The functionality I am speaking of is the standard behavior of the <listOfValues> component (lov for short). The lov component generates events for various behaviors (lovFilter, lovUpdate, lovSelect, etc.). It also completely encapsulates the Cancel and Select buttons, so you have no direct access to those. In order to manage an lov web page (presumably launched from an <lovInput> component on a previous page), the events need access to a certain collection of data (data for the table, max size, starting row, selection, validity, etc.). Because use of the lov page will result in potentially multiple submits, the request is not a good place to store this data, as the data needs to persist for the life of the lov page.
If you look at some of the lovInput/listOfValues examples on the OTN, you'll see that this data is persisted in the user's session. In and of itself, this is fine.
The problem is introduced by the fact that the lov's Cancel button (or window close) does not generate any events, and you don't have direct access to those controls to add an event of your own. When the cancel button is clicked, the window just closes, and in the words of the lov documentation "no other events occur."
This is very problematic -- your session is still stuffed full of data to support the lov. I am looking for a way to remove that data.
Frank -- in your post, you say:
"why can't you add an event to clean the session from the data?"
If you know how to add such an event -- one that fires when the Cancel button is clicked, please enlighten me. I would greatly appreciate it!
Thanks,
Brad