High Java memory consumption.
Hello,
We are developing a solution using Servoy, a database server framework built in Java. We now have two Servoy instances running on Debian Linux servers, one of them 64-bit, both running Java 1.6 update 26. The server crashes frequently with this error log:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32776 bytes for Chunk::new
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
# Out of Memory Error (allocation.cpp:317), pid=12359, tid=1777961872
# JRE version: 6.0_26-b03
# Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
--------------- T H R E A D ---------------
Current thread (0x092c8400): JavaThread "C2 CompilerThread1" daemon [_thread_in_native, id=12368, stack(0x69f18000,0x69f99000)]
Stack: [0x69f18000,0x69f99000], sp=0x69f95fe0, free space=503k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x7248b0]
We checked both the heap memory and the memory used by the Java process in this case, and we noticed a large increase in total Java memory, while the heap memory is reasonable:
JVM Information
java.vm.name=Java HotSpot(TM) Server VM
java.version=1.6.0_26
java.vm.info=mixed mode
java.vm.vendor=Sun Microsystems Inc.
Operating System Information
os.name=Linux
os.version=2.6.29-xs5.5.0.17
os.arch=i386
System Information
Heap memory: allocated=141440K, used=105555K, max=699072K
None Heap memory: allocated=49312K, used=49018K, max=180224K
root 16388 2.5 43.0 2829220 1811772 ? Sl 20:11 3:43 java -Djava.awt.headless=true -Xmx768m -Xms128m -XX:MaxPermSize=128m -classpath .:lib/ohj-jewt.jar:lib/MRJAdapter.jar:lib/compat141.ja
Right now we are running it on the 64-bit machine with -Xmx256m -Xms64m, and the memory usage is the same as on the 32-bit one.
We also tried different memory configurations, but the result is the same. Java climbs to more than 2GB of memory used, while the heap stays around 100MB - 400MB. In the output above you can see 1.8GB used, but that is at startup; after a few hours it reaches 2GB - 2.5GB, and then it crashes within a few days at most, sometimes within a few hours or even less.
Can the profiler see more than the heap? OK, I'll try to profile the server.
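In the meantime, here is a minimal sketch (using the standard java.lang.management API) of logging what the JVM itself reports. Note that even the non-heap figure covers only managed areas like PermGen and the code cache; it still excludes thread stacks, direct buffers, and the JIT compiler's own native allocations (such as the failing Chunk::new above), so it will never add up to the process size ps shows:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemorySnapshot {
    // Used heap bytes as reported by the JVM itself.
    static long usedHeap() {
        MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
        return mx.getHeapMemoryUsage().getUsed();
    }

    // Used non-heap bytes (PermGen, code cache, ...). This does NOT
    // include thread stacks, direct buffers, or JIT working memory.
    static long usedNonHeap() {
        MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
        return mx.getNonHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        System.out.println("heap used:     " + usedHeap() / 1024 + "K");
        System.out.println("non-heap used: " + usedNonHeap() / 1024 + "K");
    }
}
```

Logging these two numbers periodically alongside the ps output should show whether the growth is in an area the JVM tracks at all.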
Servoy uses Tomcat 5.
The OS we run on is Debian. The 64-bit machine is this:
JVM Information
java.vm.name=Java HotSpot(TM) 64-Bit Server VM
java.version=1.6.0_26
java.vm.info=mixed mode
java.vm.vendor=Sun Microsystems Inc.
Operating System Information
os.name=Linux
os.version=2.6.32-5-amd64
os.arch=amd64
and the current memory usage is this:
System Information
Heap memory: allocated=253440K, used=228114K, max=253440K
None Heap memory: allocated=112512K, used=112223K, max=180224K
, while the Java process memory is this:
root 11629 6.9 44.4 2105640 1829808 hvc0 Sl 11:42 14:18 java -Djava.awt.headless=true -Xmx256m -Xms64m -XX:MaxPermSize=128m -classpath .:lib/ohj-jewt.jar:lib/MRJAdapter.jar:lib/compat141.jar:lib/commons-codec.jar:lib/commons-httpclient.jar:lib/activation.jar:lib/antlr.jar:lib/commons-collections.jar:lib/commons-dbcp.jar:lib/commons-fileupload-1.2.1.jar:lib/commons-io-1.4.jar:lib/commons-logging.jar:lib/commons-pool.jar:lib/dom4j.jar:lib/help.jar:lib/jabsorb.jar:lib/hibernate3.jar:lib/j2db.jar:lib/j2dbdev.jar:lib/jdbc2_0-stdext.jar:lib/jmx.jar:lib/jndi.jar:lib/js.jar:lib/jta.jar:lib/BrowserLauncher2.jar:lib/jug.jar:lib/log4j.jar:lib/mail.jar:lib/ohj-jewt.jar:lib/oracle_ice.jar:lib/server-bootstrap.jar:lib/servlet-api.jar:lib/wicket-extentions.jar:lib/wicket.jar:lib/wicket-calendar.jar:lib/slf4j-api.jar:lib/slf4j-log4j.jar:lib/joda-time.jar:lib/rmitnl.jar:lib/networktnl.jar com.servoy.j2db.server.ApplicationServer
About another app: I'm confused, because we do have another Java app that uses much less:
root 1149 0.3 2.1 843312 86988 ? Sl 10:16 0:58 java -Xmx512m -Xms128m -cp noaa_server.zip server.WebServer
and for this one, the heap is this:
Memory: total: 129892352, free: 114666408, maximum: 518979584
So yes, this one looks better: it uses 15MB of heap and about 80MB of RAM in total.
But I still don't get it: how can the process memory grow so much while the heap stays so low?
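One way this can happen, as a minimal illustrative sketch (the sizes here are arbitrary, not taken from your setup): direct NIO buffers are malloc'd outside the Java heap, so a process can grow by hundreds of megabytes of native memory while the heap barely moves. Native libraries, thread stacks, and the JIT compiler behave similarly:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class OffHeapDemo {
    // Allocates `count` direct buffers of `size` bytes each. The backing
    // memory is allocated natively, outside the Java heap, so heap usage
    // barely changes while the process RSS grows by roughly count * size.
    static List<ByteBuffer> allocateDirect(int count, int size) {
        List<ByteBuffer> keep = new ArrayList<ByteBuffer>();
        for (int i = 0; i < count; i++) {
            keep.add(ByteBuffer.allocateDirect(size));
        }
        return keep;
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();
        // 16 MB of native memory, held alive by the list reference.
        List<ByteBuffer> buffers = allocateDirect(16, 1024 * 1024);
        long heapAfter = rt.totalMemory() - rt.freeMemory();
        System.out.println("heap delta: " + (heapAfter - heapBefore) / 1024
                + "K for 16384K of native memory in " + buffers.size() + " buffers");
    }
}
```

The heap only holds the small ByteBuffer wrapper objects; the payload lives in native memory, which a heap profiler will not show.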
Edited by: 897090 on Nov 14, 2011 6:35 AM
Similar Messages
-
Hi
I want to know how much Java memory is used.
How can I find out the memory consumption?
regards

Hai,
Please check the below link.....
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d0eaafd5-6ffd-2910-019c-9007a92b392f
Regards,
Yoganand.V -
High memory consumption in XSL transformations (XSLT)
Hello colleagues!
We have the problem of a very high memory consumption when transforming XML
files with CALL TRANSFORMATION.
Code example:
CALL TRANSFORMATION /ipro/wml_translate_cls_ilfo
SOURCE XML lx_clause_text
RESULT XML lx_temp.
lx_clause_text is a WordML xstring (i.e. it is a Microsoft Word file in XML
format) and can therefore not be easily splitted into several parts.
Unfortunately this string can get very huge (e.g. 50MB). The problem is that
it seems that CALL TRANSFORMATION allocates memory for the source and result
xstrings but doesn't free them after the transformation.
So in this example this would mean that the transformation allocates ~100MB
memory (50MB for source, ~50MB for result) and doesn't free it. Multiply
this with a couple of transformations and a good amount of users and you see
we get in trouble.
I found this note regarding the problem: 1081257
But we couldn't figure out how this problem could be solved in our case. The
note proposes to "use several short-running programs". What is meant with
this? By the way, our application is done with Web Dynpro for ABAP.
Thank you very much!
With best regards,
Mario Düssel

Hi,
q1. How come the RAM consumption increased to 99% on all three boxes?
If we continue with the theory that network connectivity was lost between the hosts, the Coherence servers on the local hosts would form their own clusters. Prior to the "split", each cache server would hold 1/12 of the primary and 1/12 of the backup (assuming you have one backup). Since Coherence avoids selecting a backup on the same host as the primary when possible, the 4 servers on each host would hold 2/3 of the cache. After the split, each server would hold 1/6 of the primary and 1/6 of the backup, i.e., twice the memory it previously consumed for the cache. It is also possible that a substantial portion of the missing 1/3 of the cache may be restored from the near caches, in which case each server would then hold 1/4 of the primary and 1/4 of the backup, i.e., thrice the memory it previously consumed for the cache.
q2: Where is the cache data stored in the Coherence servers? In which memory?
The cache data is typically stored in the JVM's heap memory area.
Have you reviewed the logs?
Regards,
Harv -
Environment:
MAC OSX 10.9.5
Firefox 32.0.3
Firefox keeps consuming a lot of memory when you keep refreshing a tab at an interval.
I opened a single tab in my Firefox and logged into my Gmail account on that. At this stage the memory consumption was about 400MB. I refreshed the page after 10 seconds and it went to 580MB. I refreshed again after 10 seconds and this time it was 690MB. Finally, when I refreshed a 3rd time after 10 seconds, it was showing 800MB.
Nothing had changed on the page (no new email, chat conversation, etc.). Somehow I feel that Firefox is not doing a good job at garbage collection. I tested this use case with a lot of other applications and websites and got similar results. Other browsers like Google Chrome, Safari, etc. work just fine.
For one of my applications with three tabs open, Firefox literally crashed after high memory consumption (around 2GB).
Can someone tell me if this is a known issue in Firefox? And is Firefox planning to fix it? Right now, is there any workaround or fix for this?

Hi FredMcD,
Thanks for the reply. Unfortunately, I don't see any crash reports in about:crashes. I am trying to reproduce the issue that makes the browser crash, but somehow it's not happening anymore; the browser just gets stuck at a point. Here is what I am doing:
- 3 tabs are open with the same page of my application. The page has several panels with charts, and the JavaScript libraries used for this page are backbone.js, underscore.js, require.js, highcharts.js
- The page automatically reloads every 30 seconds
- After the first load of these three tabs, the memory consumption is 600MB. But after 5 minutes, the memory consumption goes to 1.6GB and stays at that rate.
- After some time, the page won't load completely in any of the tabs. At this stage the browser becomes very slow and I have to either hard-refresh the tabs or restart the browser. -
Memory consumption too high!!!
Hi SDN,
we're on SAP ECC 6.0 and we have already applied SAP Zero Administration for Windows (SQL2005). When we start the SAP system, it occupies about 8GB of memory, but 2 to 4 hours later it is consuming about 12GB (!!!!!) even if the system is not used by any user; it seems that memory is not released for other processes.
Does someone know how I can reduce the memory consumption of a SAP system?
Thanks in advance, best Regards,
Pedro Rodrigues

Hi,
thanks for your answers. This is Win2003; I think MS KB 931308 is applied but I'm not able to verify it now. This is an ABAP stack only.
Regards,
Pedro -
Very high memory consumption of B1i and cockpit widgets
Hi all,
finally I have managed to install B1i successfully, but I think something is wrong though.
Memory consumption in my test environment (Win2003, 1024 MB RAM), while no other applications and no SAP addons are started:
tomcat5.exe 305 MB
SAP B1 client 315 MB
SAP B1DIProxy.exe 115 MB
sqlservr.exe 40 MB
SAPB1iEventSender.exe 15 MB
others less than 6 MB and almost only system based processes...
For each widget I open (3 default widgets, one on each standard cockpit), the tomcat grows bigger and leaves less for the sql server, which has to fetch all the data (several seconds on 100% of CPU usage).
Is this heavy memory consumption normal? What happens if several users are logged into SAP B1 using widgets?
Thanks in advance
Regards
Sebastian

Hi Gordon,
so this is normal? Then I guess the dashboards are not suitable for many customers, especially those working on a terminal server infrastructure. Even if the Tomcat server has this memory consumption only on the SAP server, when each client needs about 300 MB (plus some hundred more for the several addons they need!), I could not activate the widgets. And generally SAP B1 is not the only application running at the customer's site. Suggesting to buy more memory for some Xcelsius dashboards won't convince the customer.
I hope that this feature will be improved in the future, otherwise the cockpit is just an extension of the old user menu (except for the brilliant quickfinder on top of the screen).
Regards
Sebastian -
MAIL Version 7.2 (1874) High Memory Consumption
My MacBook Air has lately increased its memory consumption (it has 4 GB RAM) up to the point that it is getting very slow, operationally speaking. Is there something I can do to improve or control it?
Have you installed anything recently? Open up Activity Monitor and check what applications are running and see what's consuming the most resources.
-
Integration Builder Memory Consumption
Hello,
we are experiencing very high memory consumption in the Java IR designer (not the directory), especially when loading normal graphical IDoc-to-EDI mappings, but also normal IDoc-to-IDoc mappings. Examples (RAM on client side):
- open normal idoc to idoc mapping: + 40 MB
- idoc to edi orders d93a: + 70 MB
- a second idoc to edi orders d93a: + 70 MB
- Execute those mappings: no additional consumption
- third edi to edi orders d93a: + 100 MB
(all mappings in the same namespace)
After three more mappings, RAM on the client side reaches 580 MB and then a Java heap error occurs. Sometimes also OutOfMemory; then you have to terminate the application.
Obviously the mapping editor is not well optimized for RAM usage. It seems not to cache the in/out message structures, or it loads a lot of dedicated functionality for every mapping.
So we cannot really call that fun. Working is very slow.
Do you have similar experiences ? Are there workarounds ? I know the JNLP mem setting parameters, but the problem is the high load of each mapping, not only the overall maximum memory.
And we are using only graphical mappings, no XSLT !
We are on XI 3.0 SP 21
CSY

Hi,
Apart from raising tablespace..
Note 425207 - SAP memory management, current parameter ranges
you have to configure operation modes to change work processes dynamically using RZ03, RZ04.
Please see the below link
http://help.sap.com/saphelp_nw04s/helpdata/en/c4/3a7f53505211d189550000e829fbbd/frameset.htm
You can contact your Basis administrator for necessary action -
SetTransform() memory consumption
Hi,
I'm currently working on an application which needs to move a sphere very quickly. The position is calculated every 40 ms and set via the TransformGroup.setTransform() method. This raises a problem, as this statement rapidly consumes huge amounts of memory, especially when called in short time intervals.
I also tested it with the java3d example program "AWTInteraction" by simply putting the statement in a for loop and watching the memory climb:
for (int i = 0; i < 1e+6; i++)
    objTrans.setTransform(trans);
The result is a java.lang.OutOfMemoryError.
Is there a solution or workaround for this kind of problem?
Any hints appreciated. (Project has to be finished on Monday. It's really urgent.)
TIA
Erich

Erich,
I've never had any memory problems when dealing with Transforms. Is it perhaps possible that the leakage results from another instruction? So far I have only seen textures responsible for high memory consumption. In order to find the problem, I would check three things in your code:
1. Are you working with textures and if yes, how does the program behave, if the textures are omitted?
2. Are there any "new" instructions inside your loop? If yes, try to reuse objects and eliminate all "new" commands inside the loop.
3. Did you consider the Mantra to do all changes on a live scene graph within a behavior (and from the behavior scheduler)? It seems unusual to me to change transforms inside a loop.
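For point 2, a minimal sketch of the reuse pattern in plain Java — the Matrix class here is a hypothetical stand-in for Transform3D, just to show the shape of the fix:

```java
public class ReuseDemo {
    // Hypothetical value object standing in for Transform3D.
    static final class Matrix {
        final double[] m = new double[16];
        void setIdentity() {
            java.util.Arrays.fill(m, 0.0);
            m[0] = m[5] = m[10] = m[15] = 1.0;
        }
    }

    // Bad: allocates a fresh Matrix on every iteration, producing
    // garbage at the loop's rate.
    static void churn(int n) {
        for (int i = 0; i < n; i++) {
            Matrix t = new Matrix();
            t.setIdentity();
        }
    }

    // Better: one instance hoisted out of the loop and mutated in place,
    // producing no per-iteration garbage.
    static Matrix reuse(int n) {
        Matrix t = new Matrix();
        for (int i = 0; i < n; i++) {
            t.setIdentity();
        }
        return t;
    }
}
```

The same hoisting applies to the Transform3D you pass to setTransform(): create it once, mutate it each frame.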
Good luck,
Oliver -
J2EE Engine memory consumption (Usage)
Dear experts,
We have a J2EE Engine (a Java stack). When I run routine monitoring via the browser and look at the memory consumption, I see a sawtooth-like graph. Every hour from 19:00 to 02:00 the memory consumption rises by approx. 200 MB; after 7 hours, all of a sudden, the memory consumption drops back to the normal idle level and starts over again. I can confirm that at that time there are no users on the system.
My question is: what is the J2EE Engine doing, since there is no user activity? Is the J2EE Engine running some system applications? Is it filling up the log files and then emptying (storing) them?
I hope some of the experts can answer.
I just want to understand what's going on in the system. If there is some documentation/white paper on how to interpret/read the J2EE monitor, I would be grateful if you dropped the information or a link here.
Mike

Hi Mike
To understand what exactly is being executed in Java engine, I'd suggest you perform Thread dump analysis as per:
http://help.sap.com/saphelp_smehp1/helpdata/en/10/3ca29d9ace4b68ac324d217ba7833f/frameset.htm
Generally 4-5 thread dumps are triggered at the interval of 20-25 seconds for better analysis.
Here's some useful SAP notes related to thread dump analysis:
710154 - How to create a thread dump for the J2EE Engine 6.40/7.0
1020246 - Thread Dump Viewer for SAP Java Engine
742395 - Analyzing High CPU usage by the J2EE Engine
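If you want a quick programmatic look before setting up full thread dump analysis, here is a minimal plain-Java sketch that captures every live thread's stack, similar in spirit to what kill -3 produces:

```java
import java.util.Map;

public class ThreadDump {
    // Builds a plain-text dump of every live thread's stack using the
    // standard Thread.getAllStackTraces() API.
    static String dump() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            sb.append('"').append(t.getName()).append("\" ")
              .append(t.isDaemon() ? "daemon " : "")
              .append("state=").append(t.getState()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dump());
    }
}
```

Run from a servlet or a scheduled job, a few such dumps 20-25 seconds apart give the same comparison material the SAP notes describe.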
Kind regards,
Ved -
How to determine the Java memory consumption
Hi.
In our system, NetWeaver 7.1 (on Windows),
I want to know the Java heap memory consumption.
We can see the memory consumption in the Windows Task Manager, but AS Java grabs the heap memory during startup,
so that figure isn't accurate.
In NWA there are many performance monitors, but I don't know which tool is useful.
I want to size the memory with the following logic.
8:00~9:00 50% load
The Java memory consumed is 3GB.
11:00~12:00 100% load
The Java memory "may" be consuming 6GB.
regards,

I found the directory with java.exe on my XP client. After updating my Path and then typing 'java -version' I still see a 'java not found' message. No problem though - a README.TXT says that I have JRE 1.1.7B.
One final question - a co-worker who also has XP just started seeing a pop-up window saying 'Runtime error' when running a Java applet. His java.exe is in a path that includes the sub-directory 'JRE'. On my XP client, java.exe is in a path which includes a 'JRE11' sub-directory. We therefore seem to have different versions of the JRE. Since I don't see the Runtime error when running the same applet, should my co-worker try upgrading his JRE?
Thank you. -
Query on Memory consumption of an object
Hi,
I am able to get information on the number of instances loaded, the memory occupied by those instances using heap histogram.
Class                     Instance Count   Total Size
class [C                  10965            557404
class [B                  2690             379634
class [S                  3780             220838
class java.lang.String    10807            172912
Is there a way to get detailed info, like which class's String objects consume the most memory?
In other words,
The memory consumption of String is 172912. Can I have a split like:
String Objects of Class A - 10%
String Objects of Class B - 90%
Thanks

I don't know what profiler you are using, but many memory profilers can tell you where the strings are allocated.
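If all you have is the histogram, here is a rough sketch of producing the kind of split you describe. Both the per-String byte estimate and the owner-to-strings map are assumptions for illustration; a real profiler derives the owners from the heap dump's reference graph:

```java
import java.util.HashMap;
import java.util.Map;

public class StringFootprint {
    // Rough cost of a String: ~40 bytes of object header/fields plus
    // 2 bytes per character. Hypothetical estimate for illustration;
    // exact numbers vary by VM and pointer size.
    static long approxBytes(String s) {
        return 40 + 2L * s.length();
    }

    // Sums the approximate footprint of each owner's strings, so a split
    // like "Class A - 10%, Class B - 90%" can be read off directly.
    static Map<String, Long> footprintByOwner(Map<String, String[]> byOwner) {
        Map<String, Long> totals = new HashMap<String, Long>();
        for (Map.Entry<String, String[]> e : byOwner.entrySet()) {
            long sum = 0;
            for (String s : e.getValue()) sum += approxBytes(s);
            totals.put(e.getKey(), sum);
        }
        return totals;
    }
}
```

Feeding it the strings reachable from each suspect class (obtained from the profiler) gives the per-class percentages directly.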
-
Memory Consumption: Start A Petition!
I am using SQL Developer 4.0.0.13 Build MAIN 13.80. I was praying that SQL Developer 4.0 would no longer use so much memory and, when doing so, slow to a crawl. But that is not the case.
Is there a way to start a "petition" to have the SQL Developer team focus on the product's memory usage? This problem has been there for years now, with many posts and no real answer.
If there isn't a place to start a "petition" let's do something here that Oracle will respond to.
Thank you

Yes, at this point (after restarting) SQL Developer is functioning fine. Windows reports 1+ GB of free memory. I have 3 worksheets open, all connected to two different DB connections. Each worksheet has 1 to 3 pinned query results. My problem is that after working in SQL Developer for a day or so with perhaps 10 worksheets open across 3 database connections, and having queried large data sets and performed large exports, it becomes unresponsive even after closing worksheets. It appears to me that it does not clean up after itself.
I will use Java VisualVM to compare memory consumption and see if it reports that SQL Developer is releasing memory but in the end I don't care about that. I just need a responsive SQL Developer and if I need to close some worksheets at times I can understand doing so but at this time that does not help. -
BW data model and impacts to HANA memory consumption
Hi All,
As I consider how to create BW models where HANA is the DB for a BW application, it makes sense moving the reporting target from Cubes to DSOs. Now the next logical progression of thought is that the DSO should store the lowest granularity of data(document level). So a consolidated data model that reports on cross functional data would combine sales, inventory and purchasing data all being stored at document level. In this scenario:
Will a single report execution that requires data from all 3 DSOs use more memory vs. the 3 DSOs aggregated, say, at site/day/material?
Lower Granularity Data = Higher Memory Consumption per report execution
I'm thinking that more memory is required to aggregate the data in HANA before sending to BW. Is aggregation still necessary to manage execution memory usage?
Regards,
Dae Jin

Let me rephrase.
I got an EarlyWatch that said my dimensions on one of cube were too big. I ran SAP_INFOCUBE_DESIGNS in SE38 in my development box and that confirmed it.
So, I redesigned the cube, reactivated it and reloaded it. I then ran SAP_INFOCUBE_DESIGNS again. The cube doesn't even show up on it. I suspect I have to trigger something in BW to make it populate for that cube. How do I make that happen manually?
Thanks.
Dave -
Portal Session Memory Consumption
Dear All,
I want to see the user sessions' memory consumption for Portal 7.0, i.e. if a portal user opens a session, how much memory is consumed by him/her. How can I check this? Is there any default value associated with this?
Will the backend system memory load get added to the portal consumption, or to that specific backend system's memory consumption?
Thanks in Advance......
Vinayak

I'm seeing the exact same thing with our setup (it's essentially the same
as yours). The WLS5.1 documentation indicates that java objects that
aren't serializeable aren't supported with in-memory replication. My
testing has indicated that the <web_context>._SERVLET_AUTHENTICATION_
session value (which is of class type
weblogic.servlet.security.ServletAuthentication) is not being
replicated. From what I can tell in the WLS5.1 API Javadocs, this class
is a subclass of java.lang.object (doesn't mention serializeable) as of
SP9.
When <web_context>._SERVLET_AUTHENTICATION_ doesn't come up in the
SECONDARY cluster instance, the <web_context>.SERVICEMANAGER.LOGGED.IN
gets set to false.
I'm wondering if WLCS3.2 can only use file or JDBC for failover.
Either way, if you learn anything more about this, will you keep me
informed? I'd really appreciate it.
>
Hi,
We have clustered two instances of WLCS in our development environment with
properties file configured for "in memory replication" of session data. Both the
instances come up properly and join the cluster properly. But, the problem is
with the in-memory replication. It looks like the session data of the portal is not
getting replicated.
We tried with the simplesession.jsp in this cluster and its session data is properly
replicated.
So, the problem seems to be with the session data put by Portal
(and that is the reason why I am posting it here). Everytime the "logged in "
check fails with the removal of one of the instances, serving the request. Is
there known bug/patch for the session data serialization of WLCS? We are using
3.2 with Apache as the proxy.
Your help is very much appreciated.--
Greg
GREGORY K. CRIDER, Emerging Digital Concepts
Systems Integration/Enterprise Solutions/Web & Telephony Integration
(e-mail) gcrider@[NO_SPAM]EmergingDigital.com
(web) http://www.EmergingDigital.com