Java Memory Problem

Hi,
We are using the Apache and Tomcat web servers for our site (built on Unix), and we are experiencing a serious problem with Java memory.
Roughly every hour (at around 1,000-1,500 page hits) the site goes down with the error 'java.lang.OutOfMemoryError'.
We observed that the memory consumed by the Java process is not being garbage collected. It builds up gradually, reaches its maximum limit (128 MB), and then the site goes down, throwing 'java.lang.OutOfMemoryError'.
I don't understand what to do.
Can you please suggest how to fix the problem?
Thank you,
Mo

Test-run your server with the java -Xloggc:&lt;file&gt; option, which logs GC activity; an example invocation is sketched below. This should give you a clue as to how often GC runs and how much memory it reclaims in each cycle. This is just an initial test. Once you know for sure that enough memory is not being reclaimed, use a memory profiler to identify the leak and fix the code to release the unwanted references.
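For example (a sketch; the exact log format varies by JVM version, and with Tomcat the flags typically go into the JAVA_OPTS/CATALINA_OPTS environment variables):
java -verbose:gc -Xloggc:gc.log -Xmx128m com.example.YourServer
Here com.example.YourServer stands in for however you start the JVM. Each log line shows the heap occupancy before and after a collection; if the "after" number keeps climbing across full GCs, memory is genuinely not being reclaimed and you are looking at a leak rather than an undersized heap.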

Similar Messages

  • Memory problems with PreparedStatements

    Driver: 9.0.1 JDBC Thin
    I am having memory problems using PreparedStatement via JDBC.
    After profiling our application, we found that a large number of oracle.jdbc.ttc7.TTCItem objects were being created but not released, even though we were closing the ResultSets of the prepared statements.
    Tracing through the application, it appears that most of these TTCItem objects are created when the statement is executed (not when it is prepared), so I would have assumed they would be released when the ResultSet is closed, but this does not seem to be the case.
    We tend to have a large number of PreparedStatement objects in use (over 100, most with closed ResultSets) and find that our application uses huge amounts of memory compared to the same code with the PreparedStatement closed at the same time as its ResultSet.
    Has anyone else seen similar problems? If so, does anyone have a work-around, or know if this is something Oracle is looking at fixing?
    Thanks
    Bruce Crosgrove
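    As a side note, the work-around described above (closing the statement together with its ResultSet) looks like this in plain JDBC - a minimal sketch, with the connection and query assumed:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CloseBothSketch {
        // illustrative helper: close the ResultSet and the PreparedStatement together
        static void runQuery(Connection conn) throws SQLException {
            PreparedStatement ps = conn.prepareStatement("SELECT 1 FROM DUAL"); // query is illustrative
            try {
                ResultSet rs = ps.executeQuery();
                try {
                    while (rs.next()) {
                        // process the row
                    }
                } finally {
                    rs.close();
                }
            } finally {
                // per the poster's observation, closing the statement (not just
                // the ResultSet) is what releases the driver-side buffers
                ps.close();
            }
        }
    }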

    From your mail, it is not very clear:
    a) whether your session is an HttpSession or an application-defined session;
    b) what is meant by saying the JSP/servlet process is growing.
    However, some pointers:
    a) Are there any timeouts associated with the sessions? (A sketch of setting one follows below.)
    b) Try profiling your code to see what is causing the memory leak.
    c) Are there references to stale data in your application code?
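    For pointer a), a session that never times out keeps everything it references reachable; a minimal sketch of putting an explicit cap on an HttpSession (the 30-minute value is illustrative):
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class SessionTimeoutSketch {
        // illustrative helper: expired sessions are invalidated by the container,
        // which makes their attributes eligible for garbage collection
        static void capSession(HttpServletRequest request) {
            HttpSession session = request.getSession();
            session.setMaxInactiveInterval(30 * 60); // timeout in seconds
        }
    }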
    Marilla Bax wrote:
    hi,
    we have some memory problems with the WebLogic Application Server 4.5.1 on Sun Solaris.
    In our customer projects we are working with EJBs; for each customer transaction we create a session to the WebLogic application server.
    Now there are some urgent problems with the Java process on the server: 200-500 KB of memory are allocated for each session, the Java process on our server grows with every session over the course of a day, and the memory reserved for old sessions is never released. As a work-around we now restart the server every night.
    How can we solve this problem? Is it a problem with the operating system, the application server, or the EJBs? Have you seen problems like this before?
    greetings from germany,

  • Memory problem with jdk/jre 1.1.8

    My name is BERGMANN Yannick.
    I'm working for IRM in Liège, and we developed an application (a user interface for an industrial measurement system) in Java (JDK/JRE version: 1.1.8).
    We have a big memory problem with this application:
    - The user interface runs on a Windows NT PC with 128 MB.
    - This is the command used to launch our application:
    "C:\Program Files\JavaSoft\JRE\1.1\bin\jrew.exe" -ms32m -mx32m -cp "\Program Files\HMI\HMI.zip;\Velocis\Add_On\Jdbc\raima.jar;\Program Files\Swing-1.1.1\swingall.jar" be.irm.hmi.kernel.HMI -t15 -d"Velocis rdstcp" -newdb -mf1m -mr20
    - While our application is running, everything seems to be OK in memory for it. The garbage collector seems to work properly and our application always has at least 5 MB of free memory (we use "Runtime.getRuntime().freeMemory()" to check this).
    - But when we look at the "jrew" process in the Windows NT Task Manager, its memory ALWAYS increases.
    - After 5 days our application is completely frozen and blocked.
    - Here is a memory map of our Windows NT PC (values in KB):
         "jre.exe"     "commit total"     "commit limit"     "commit peak"     "physical total"     "physical available"     "physical file cache"     
    Monday     92264     109256     194944     109424     130484     19492     6216     
    Thuesday     106196     123072     194944     123348     130484     6072     5840     
    Wednesday     110836     132288     194944     132416     130484     4408     5140     
    Thursday     108200     144980     194944     145140     130484     4888     5148     
    Friday     109440     158319     194944     161334     130484     4911     4992     
    Monday     111600     209060     228548     209148     130484     5184     3484     
    Have you any idea of what is happening with "jrew" in memory?
    We have had this problem for six months and we are totally out of ideas.
    If you can give us any idea, we'll appreciate it a lot.
    Thanks in advance,
    BERGMANN Yannick
    IRM SA - Software Engineer
    Tel. (32)4/239.90.10
    Tel. (32)4/239.90.74 (direct)
    Fax (32)4/263.40.97
    E-mail [email protected]

    We had a memory problem with a Swing applet in our company. The major reason was that we added new components to a JTree and later removed them again, but the removed components were never garbage collected. This was because we had registered various listeners with these components and never removed the listeners once the components were no longer needed. After we corrected this, the components were garbage collected.
    Perhaps you have a similar problem, or perhaps not; check that you remove ActionListeners, MouseListeners, etc. from components you want to be garbage collected.
    You could also test your application with OptimizeIt to see which objects you create and how many of them accumulate over time: http://www.vmgear.com
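    A minimal sketch of that pattern, with illustrative names: as long as a long-lived event source still holds the listener, everything the listener references stays reachable and cannot be collected:
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;

    public class ListenerLeakSketch {
        public static void main(String[] args) {
            JButton longLived = new JButton("source"); // stands in for any long-lived event source

            // 'big' is reachable only through the listener below
            final byte[] big = new byte[10 * 1024 * 1024];
            ActionListener leaky = new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    System.out.println("payload: " + big.length + " bytes");
                }
            };
            longLived.addActionListener(leaky);

            // the cleanup that was missing in our applet: without this line,
            // 'big' stays reachable via the button's listener list for as
            // long as the button itself is alive
            longLived.removeActionListener(leaky);
        }
    }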

  • How to determine the Java memory consumption

    Hi.
    In our system, NetWeaver 7.1 (on Windows),
    I want to know the Java heap memory consumption.
    We can see memory consumption in the Windows Task Manager, but AS Java reserves its heap memory during startup,
    so that figure isn't correct.
    There are many performance monitors in NWA, but I don't know which tool is useful.
    I want to size the memory with the following logic:
    8:00~9:00, 50% load: 3 GB of Java memory is consumed.
    11:00~12:00, 100% load: 6 GB of Java memory "may" be consumed.
    regards,
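    For reference, one way to read the real heap numbers from inside the JVM rather than from the Task Manager - a sketch using the standard java.lang.management API (available since Java 5):
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapProbe {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            // 'used' is the real consumption; 'committed' is roughly what the OS
            // task manager sees, since AS Java reserves the heap during startup
            System.out.println("used=" + heap.getUsed()
                    + " committed=" + heap.getCommitted()
                    + " max=" + heap.getMax());
        }
    }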

    I found the directory with java.exe on my XP client. After updating my Path and then typing 'java -version' I still see a 'java not found' message. No problem though - a README.TXT says that I have JRE 1.1.7B.
    One final question: a co-worker who also has XP just started seeing a pop-up window saying 'Runtime error' when running a Java applet. His java.exe is in a path that includes the sub-directory 'JRE'. On my XP client, java.exe is in a path which includes a 'JRE11' sub-directory. We therefore seem to have different versions of the JRE. Since I don't see the Runtime error when running the same applet, should my co-worker try upgrading his JRE?
    Thank you.

  • JAVA UFL java-heap problem

    Hi,
    Outline
    I have developed a User Function Library (UFL) in Java to internationalize the reports. I have followed the guide provided by BusinessObjects: http://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/20d050fc-6464-2b10-88aa-a31e24c4febf&overridelayout=true
    System Specifications:
    Windows XP
    Intel Pentium 4 CPU 3.00GHz
    Memory: 3 GB (RAM) 
    Crystal Reports: 2008  version 12.1.3.1028
    Java: JDK 1.6.0.13
    Problem
    I run into a Java heap space error when the report is refreshing.
    Description
    I open the report and click the refresh button. The report starts to fetch data quickly but then begins to slow down, to the point where it stops fetching data entirely. After a few seconds the Formula Editor pops up, highlights the internationalization formula, and then another small window titled "Crystal Reports" pops up with the message "Java heap space".
    Comments
    I know that I am running into a memory problem. I have checked my memory in the Windows Task Manager under the Performance tab and I see that it never reaches the maximum amount I have. I have also tried changing Java versions, tried the UFL with Crystal Reports XI, and tried different computers, all with the same result.
    Has anyone encountered the same problem? If so, how were you able to fix it?
    Thank you very much for your help.
    Valentine

    Valentine,
    How big is the report? I had a similar problem when trying to produce a large report, although it did not have any UFL reference.
    I increased the JVM heap size with the "-Xms32m -Xmx256m" flags.
    But for the life of me I cannot remember where I set this when running within Eclipse. I will have a look around and see if I can remember; in the meantime it might help.
    Darren
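    For what it's worth, in Eclipse such flags normally go into the launch configuration under Run > Run Configurations... > Arguments > VM arguments; outside Eclipse they are passed straight on the command line, e.g. (the jar name is illustrative):
    java -Xms32m -Xmx256m -jar report-tool.jar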

  • Java Memory Management/Out of Memory

    Hi Guys,
    I have a few questions about Java memory management,
    because I keep encountering out-of-memory errors, and I think Java does not handle Vector/ArrayList re-initialisation automatically.
    Assume I have 2 million records in the database and I process them 80,000 at a time, storing each batch in a Vector:
    while (true) {
        list = new Vector();
        list = GetResultFromDatabase(); // fetches the next 80,000 records
        if (list.size() > 0) {          // my Vector now contains 80,000 entries
            // loop over the 80,000 records
            // process some logic and data
        }
        list.clear();
        list = null;
    }
    As you can see, I need to call list.clear() and list = null on every pass so it won't cause an out-of-memory error.
    Before I put in those 2 lines, I always hit an out-of-memory exception.
    It seems the garbage collector cannot reclaim the memory if I don't do this.
    Is the memory occupied by the Vector unrecoverable unless we explicitly clear it and set it to null?
    Logic-wise it shouldn't cause a problem if, after each pass, I just do list = new Vector(), which re-instantiates the object.
    Thanks.

    Damn, I should have read your post again.
    Look here:
    while (true) {
        list = new Vector();
    What you're doing is creating a new Vector object on every iteration of the while loop, so after 40,000 iterations there will be 40,000 new objects in your memory.
    I suggest moving the declaration outside the while loop:
    list = new Vector();
    while (true) {
        // ...rest of loop
    }
    This could also be a problem.
    Hope it helps :-)
    werns
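    For reference, a sketch of the batching loop with the list scoped inside the loop body: each iteration's Vector becomes unreachable as soon as the next iteration starts, so the garbage collector can reclaim it without any explicit clear()/null (getResultFromDatabase stands in for the poster's fetch method):
    import java.util.List;
    import java.util.Vector;

    public class BatchSketch {
        public static void main(String[] args) {
            while (true) {
                // scoped to one iteration: the previous batch is already
                // unreachable (and collectable) when the next one is fetched
                List batch = getResultFromDatabase(); // next 80,000 rows, or empty when done
                if (batch.isEmpty()) {
                    break;
                }
                for (int i = 0; i < batch.size(); i++) {
                    // process batch.get(i)
                }
            }
        }

        // stand-in for the poster's GetResultFromDatabase()
        private static List getResultFromDatabase() {
            return new Vector();
        }
    }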

  • Analysing SAP Java Memory Usage in Unix/Linux

    Hi,
    I need to analyze the SAP Java memory usage of a Unix/Linux machine running NW 7.0.
    Please guide me through the commands and steps - the complete procedure.
    Based on this I should decide whether to create a new server node or to increase the heap size.
    Thanks in advance

    Hi,
    Do you have performance problems?
    How many CPUs are in the server?
    Did you check the log configuration for delays or errors?
    Did you tune any existing parameters?
    You should add nodes only if there are performance problems; you may think of adding one node to start with.
    Proper number of server nodes within an instance:
    #ServerNodes = availableMemory / (JavaHeap + PermSpace + Stack)
    You can calculate the server nodes based on the formula below:
    No. of server nodes = (RAM you want to assign, or available RAM in GB) / 2.5 ==> for a 64-bit system
    No. of server nodes = (RAM you want to assign, or available RAM in GB) / 1.5 ==> for a 32-bit system
    Hence, as per the above discussion, if we go with 5 server nodes, that means
    5 = RAM / 2.5 (assuming you are on a 64-bit platform),
    i.e. RAM = 12.5 GB.
    2) Configure the JVM heap according to SAP Note 723909 and Note 1008311 - Recommended Settings for NW 7.0 >= SR2 for the AIX JVM (J9).

  • Out of memory problem using the API

    Hi all,
    I need your assistance: we are working with CDB 10.2, making searches and retrieving the documents with all their attributes.
    In our actual scenario we have a single user (which represents an application) accessing CDB. This user uses several persistent sessions simultaneously; I mean, several thousand end users connect to an application that uses one CDB user to connect to CDB over several persistent sessions.
    To simulate this scenario we wrote Java code that opens five threads and makes several searches (requesting all the attributes) using the same user on CDB.
    When retrieving a considerable amount of data found by the search (~5000 documents), we hit an "Out of memory" problem in these tests:
    - 5 threads obtaining 100 documents (and all their attributes) per search
    - 1 thread obtaining 500 documents (and all their attributes) per search
    - We also have the same problem if we make several searches with fewer results
    We suppose it's a configuration or code issue, so we ask for your assistance and experience in solving it.
    Thanks for your help,
    Dani
    import java.sql.Connection;
    import oracle.ifs.examples.api.constants.AttributeRequests;
    import oracle.ifs.examples.api.util.CommonUtils;
    import oracle.ifs.fdk.Attributes;
    import oracle.ifs.fdk.ClientUtils;
    import oracle.ifs.fdk.FdkConstants;
    import oracle.ifs.fdk.FdkCredential;
    import oracle.ifs.fdk.ManagersFactory;
    import oracle.ifs.fdk.NamedValue;
    import oracle.ifs.fdk.Options;
    import oracle.ifs.fdk.SearchExpression;
    import oracle.ifs.fdk.SearchManager;
    import oracle.ifs.fdk.SimpleFdkCredential;
    import oracle.jdbc.pool.OracleDataSource;
    public class Prueba {
        public static void main(String args[]) {
            Thread thread = new BasicThread1();
            Thread thread1 = new BasicThread1();
            Thread thread2 = new BasicThread1();
            Thread thread3 = new BasicThread1();
            Thread thread4 = new BasicThread1();
            thread.start();
            thread1.start();
            thread2.start();
            thread3.start();
            thread4.start();
        }
    }
    class BasicThread1 extends Thread {
        public void run() {
            ManagersFactory session = null;
            try {
                System.out.println(this.getName() + "-->init");
                session = getSession();
                SearchManager sManager = session.getSearchManager();
                SearchExpression srchExpr = new SearchExpression(Attributes.SIZE,
                        new Integer(20000000), FdkConstants.OPERATOR_LESS_THAN);
                NamedValue[] res = null;
                for (int i = 0; i < 100000; i++) {
                    res = sManager.search(srchExpr, basicSearchOptions2,
                            AttributeRequests.DOCUMENT_CATEGORY_ATTRIBUTES);
                    System.out.println(this.getName() + " --> finished without error: " + res.length);
                }
            } catch (Throwable t) {
                t.printStackTrace();
                System.out.println("<--" + this.getName());
            } finally {
                CommonUtils.bestEffortLogout(session);
            }
        }
        static NamedValue[] basicSearchOptions2 = new NamedValue[] {
                ClientUtils.newNamedValue(Options.MULTILEVEL_FOLDER_RESTRICTION,
                        Boolean.TRUE),
                ClientUtils.newNamedValue(Options.SEARCH_FOR_DOCUMENTS,
                        Boolean.TRUE),
                ClientUtils.newNamedValue(Options.SEARCH_FOR_FOLDERS,
                        Boolean.FALSE),
                ClientUtils.newNamedValue(Options.RETURN_COUNT,
                        new Integer(500)) // maximum number of elements
        };
        private static ManagersFactory getSession() throws Exception {
            OracleDataSource ods = new OracleDataSource();
            ods.setURL("URL");
            Connection conn = ods.getConnection();
            FdkCredential credential = new SimpleFdkCredential("USER", "PSW");
            return ManagersFactory.login(credential, "SERVER");
        }
    }

    re-Post

  • Diagnostics Workload Analysis - Java Memory Usage gives BI query input

    Dears
    I have set up diagnostics (aka root cause analysis) at a customer site, and I'm bumping into the problem that the Java Memory Usage tab in Workload Analysis presents the BI query input screen instead of the data.
    Sol Man 7.0 EHP1 SPS20 (ST component SP19)
    Wily Introscope 8.2.3.5
    Introscope Agent 8.2.3.5
    Diagnostics Agent 7.20
    When I click on the check button there I get the following:
    Value "JAVA MEMORY USAGE" for variable "E2E Metric Type Variable" is invalid
    I already checked multiple SAP Notes, including the implementation of the latest EWA EA WA xml file for the Solution Manager stack version.
    I already reactivated the BI content using report CCMS_BI_SETUP_E2E, and it gave no errors.
    The content is being filled in Wily Introscope, and the extractors on Solution Manager are running and capturing records (>0).
    Did anyone come across this issue already?
    ERROR MESSAGE:
    Diagnosis
    Characteristic value "JAVA MEMORY USAGE" is not valid for variable E2E Metric Type Variable.
    Procedure
    Enter a valid value for the characteristic. The value help, for example, provides you with suggestions. If no information is available here, then perhaps no characteristic values exist for the characteristic.
    If the variable for 0DATE or 0CALDAY has been created and is being used as a key date for a hierarchy, check whether the hierarchies used are valid for this characteristic. The same is valid for variables that refer to the hierarchy version.
      Notification Number BRAIN 643 
    Kind regards
    Tom
    Edited by: Tom Cenens on Mar 10, 2011 2:30 PM

    Hello Paul
    I checked the guide earlier today. I also asked someone with more BI knowledge to take a look with me, but it seems the root cause analysis data fetching isn't really the same as what is normally done in BI with BI cubes, so it's hard to determine why the data fetch is not working properly.
    The extractors are running fine, I couldn't find any more errors in the diagnostics agent log files (in debug mode), and I don't find other errors for the SAP system.
    I tried reactivating the BI content but it seems to be fine (no errors). I reran the managed system setup, which also works.
    One of the problems I did notice is that the managed SAP systems are half virtualized. They aren't completely virtualized (no separate IP address) but they are using virtual hostnames, which also causes issues with Root Cause Analysis: I cannot install only one agent, because then I cannot assign it to the managed systems, and when I install one agent per SAP system I get the message that there are already agents reporting to the Enterprise Manager residing on the same host. I don't know if this could influence the data extractor; I doubt it, because the data in Wily is being fetched fine.
    The only thing that is not working at the moment is the Workload Analysis - Java Memory Analysis tab. It holds the key performance indicators for the J2EE engine (garbage collection %). I can see them in Wily Introscope, where they are available and fine.
    When I looked at the InfoCubes together with a BI team member, it seemed the InfoCube for daily performance stats was getting filled properly (through RSA1) but the InfoCube for hourly stats wasn't. This is also visible in Workload Analysis: data from yesterday displays fine in the Workload Analysis overview, for example, but data from an hour ago doesn't.
    I do have to state that the Solution Manager doesn't meet the prerequisites (post-processing notes are not present after the SP-stack update, SLD content is not up to date), but I could not push through those changes within a short timeframe, as the Solution Manager is also used for other scenarios and it would be too disruptive at this moment.
    If I can't fix it, I will have to explain to the customer why some parts are not working and ask them to handle the missing items so the prerequisites are met.
    One of the notes I found described a similar issue and noted it could be caused by an old XML file structure, so I updated the XML file to the latest version.
    Strangely enough, SAPOsCol also threw errors in the beginning. I had the Host Agent installed and updated, and the SAPOsCol service was running properly through the Host Agent as a service. The diagnostics agent tries to start SAPOsCol in /usr/sap/<SID>/SMDA<instance number>/exe, which does not hold the SAPOsCol executable. I suppose it's a bug from SAP? After copying SAPOsCol from the Host Agent to the location of the SMD Agent, the error disappeared. Instead, the agent tries to start SAPOsCol, notices SAPOsCol is already running, and writes in the log that SAPOsCol is already running properly and a startup is not necessary.
    To me it comes down to the point where I have little faith in the scenario if the Solution Manager and the managed SAP systems are not maintained and 100% up to date. I could open a customer message, but the first advice will be to patch the Solution Manager and meet the prerequisites.
    Another pain point is that if the managed SAP systems are not 100% correct in transaction SMSY, it also causes heaps of issues. Changing the SAP system there isn't a fast operation, as it can be included in numerous logical components, projects, and scenarios (ChaRM), and it causes disruption to daily work.
    All in all I have mixed feelings about the implementation; I want to deliver a fully working scenario, but it's near impossible because the prerequisites are not met. I hope the customer will still be happy with what is delivered.
    I sure do hope some of these issues are handled in Solution Manager 7.1. I will certainly mail my concerns to the development team and hope they can handle some or all of them.
    Kind regards
    Tom

  • DAC Service shuts down with a Java memory error when regenerating indexes

    We have set up the DAC to run as a Windows service. We have just set up a new execution plan comprising subject areas from Financials and Inventory. When we run the first full load, the execution plan executes all steps, but when it starts to run the last task, QUERY_INDEX_CREATION, the DAC service shuts down, subsequently failing the execution plan. In stderr.log we see the following:
    11-11-2008 16:44:59 global
    SEVERE:
    ANOMALY INFO:::
    MESSAGE:::Database Object should be specified for sql commands.
    EXCEPTION CLASS::: com.siebel.analytics.etl.etltask.TaskInitializationException
    com.siebel.analytics.etl.etltask.SQLTask.doInit(SQLTask.java:86)
    com.siebel.analytics.etl.etltask.CountTableTask.doInit(CountTableTask.java:59)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.init(GenericTaskImpl.java:129)
    com.siebel.etl.engine.core.Session.getTargetTableRowCounts(Session.java:3057)
    com.siebel.etl.engine.core.Session.run(Session.java:2972)
    java.lang.Thread.run(Thread.java:619)
    11-11-2008 16:45:04 global
    SEVERE: MESSAGE:::Java heap space
    EXCEPTION CLASS::: java.lang.OutOfMemoryError
    java.util.Arrays.copyOfRange(Arrays.java:3209)
    java.lang.String.<init>(String.java:216)
    java.lang.StringBuilder.toString(StringBuilder.java:430)
    com.siebel.etl.engine.core.IndexPropertyFactory.createIndexProperty(IndexPropertyFactory.java:28)
    com.siebel.etl.engine.core.Index.<init>(Index.java:71)
    com.siebel.etl.engine.core.TableIndexHandler.loadIndexes(TableIndexHandler.java:329)
    com.siebel.etl.engine.core.TableIndexHandler.populate(TableIndexHandler.java:96)
    com.siebel.etl.command.IndexCreationCommand.doExecute(IndexCreationCommand.java:64)
    com.siebel.etl.command.SqlCommand.doExecute(SqlCommand.java:9)
    com.siebel.etl.command.MultiSourceSqlCommand.execute(MultiSourceSqlCommand.java:82)
    com.siebel.etl.database.AsyncDatabaseCall.run(AsyncDatabaseCall.java:34)
    java.lang.Thread.run(Thread.java:619)
    11-11-2008 16:45:04 global
    SEVERE: MESSAGE:::Java heap space
    EXCEPTION CLASS::: java.lang.OutOfMemoryError
    java.util.Arrays.copyOfRange(Arrays.java:3209)
    java.lang.String.<init>(String.java:216)
    oracle.jdbc.driver.CharCommonAccessor.getString(CharCommonAccessor.java:385)
    oracle.jdbc.driver.T4CVarcharAccessor.getString(T4CVarcharAccessor.java:411)
    oracle.jdbc.driver.OracleResultSetImpl.getString(OracleResultSetImpl.java:397)
    oracle.jdbc.driver.OracleResultSet.getString(OracleResultSet.java:1515)
    com.siebel.etl.database.DAWResultSet.getString(DAWResultSet.java:597)
    com.siebel.etl.engine.core.TableIndexHandler.loadIndexes(TableIndexHandler.java:301)
    com.siebel.etl.engine.core.TableIndexHandler.populate(TableIndexHandler.java:96)
    com.siebel.etl.command.IndexCreationCommand.doExecute(IndexCreationCommand.java:64)
    com.siebel.etl.command.SqlCommand.doExecute(SqlCommand.java:9)
    com.siebel.etl.command.MultiSourceSqlCommand.execute(MultiSourceSqlCommand.java:82)
    com.siebel.etl.database.AsyncDatabaseCall.run(AsyncDatabaseCall.java:34)
    java.lang.Thread.run(Thread.java:619)
    11-11-2008 16:45:05 global
    SEVERE: Failed due to the following reason: Java heap space
    11-11-2008 16:45:05 global
    SEVERE: Failed due to the following reason: Java heap space
    11-11-2008 16:45:10 global
    SEVERE: MESSAGE:::Java heap space
    EXCEPTION CLASS::: java.lang.OutOfMemoryError
    This is repeated many times in the log. It manages to create some indexes, but eventually it shuts down the DAC Windows service.
    In the Windows Event Viewer we then see this at 11-11-2008 16:
    "The Java Virtual Machine has exited with a code of 10, the service is being stopped."
    My guess is that we need to allocate more memory to the DAC Java VM, but how/where?
    Any ideas?
    best regards,
    Henrik Verup
    Edited by: [email protected] on Nov 20, 2008 10:34 AM
    We have managed to work around the problem by changing the DAC System Property (under Setup) CreateQueryIndexesAtTheEnd from true to false. This way, indexes are rebuilt during the load rather than at the end. This has helped in getting the execution plan to finish successfully. We are now working on how to increase the Java heap memory of the DAC Server when it is started as a Windows service. We have installed a Windows service to stop and start the DAC server, using a document by Olivier Lemaire from Oracle.
    Does anyone have experience with this?

    Java heap memory in the DAC can be increased from the client side.
    Open the startclient file for editing:
    echo off
    title Siebel DAC Client
    call config.bat
    Rem Uncomment the line below if you want to see a DOS window with messages,
    Rem and comment out the JAVAW line.
    Rem
    Rem %JAVA% -Xms256m -Xmx1024m -cp %DACCLASSPATH% com.siebel.analytics.etl.client.view.EtlViewsInitializer
    Rem
    start %JAVAW% -Xms256m -Xmx1024m -cp %DACCLASSPATH% com.siebel.analytics.etl.client.view.EtlViewsInitializer
    Edit the 1024 and increase the heap memory size; this should work. We faced a similar issue and increasing the Java memory size solved it.
    Let me know if this solved it.
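    For example, the JAVAW line from the script above edited for a 2 GB ceiling (the value is illustrative; it has to fit within the machine's physical RAM):
    start %JAVAW% -Xms256m -Xmx2048m -cp %DACCLASSPATH% com.siebel.analytics.etl.client.view.EtlViewsInitializer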

  • JVM memory problem

    I have a memory problem: memory use exceeds a maximum limit and my application hangs until I restart its service; after that, everything is okay.
    So I need to ask:
    How can I know whether my garbage collection is working?
    How can I force garbage collection to run?
    How can I force garbage collection to stop?
    Are there additional ways to trace memory performance besides the Runtime class?
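    For what it's worth, a sketch of the standard answers: -verbose:gc prints a line per collection, System.gc() only requests a collection (it cannot truly be forced, and there is no way to stop the collector), and java.lang.management gives more detail than the Runtime class:
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcProbe {
        public static void main(String[] args) {
            System.gc(); // a hint only: the JVM is free to ignore it

            // per-collector statistics; growing counts show that GC is running
            for (GarbageCollectorMXBean gc
                    : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName()
                        + ": count=" + gc.getCollectionCount()
                        + " timeMs=" + gc.getCollectionTime());
            }
        }
    }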

    Cross post:
    http://forum.java.sun.com/thread.jspa?threadID=693125

  • Learn about Java memory spaces?

    Hi, I'm a student, and I need to do some research on and write a small paper about the "three or more Java memory spaces": what they do and how they differ.
    The only problem is that after spending half an hour searching Google and reading quite a bit of documentation, I can't find any significant reference to or classification of these memory spaces. Can anyone provide me with a link to documentation that describes these memory spaces and their uses?
    Thanks for your help!

    Also
    http://java.sun.com/products/hotspot/docs/whitepaper/Java_HotSpot_WP_Final_4_30_01.html
    http://java.sun.com/developer/technicalArticles/Programming/turbo/ and the documents referenced at the end of the article
    http://java.sun.com/docs/hotspot/gc1.4.2/faq.html
    http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
    These will lead you to additional documentation.

  • Java Memory Monitoring in Web Application

    Hi All,
    Please review the suggestion below and provide your input.
    Over the years I have been involved in several projects involving web development in J2EE, and Java memory usage is an issue common to all of them.
    The following are some of the questions that come to a developer regarding Java memory:
    Memory usage statistics.
    Trending of memory statistics.
    Memory leaks.
    Performance optimization when memory leaks occur.
    When it comes to answering the above, the most common suggestion is to enable heap dumps and analyze them with a heap analyzer tool. However, there are times and projects where these options are not approved, and the developer is instead asked to review the code again and again. That is frustrating for someone who has just joined a maintenance project, where reading through the code is not a feasible option. It happened to me, and I did the following to solve some of my problems, and eventually all of them.
    Instead of analyzing heap dumps, I decided to do the following (a sketch of the filter appears below):
    Add a request filter to my J2EE application.
    Add the following log statements in the filter:
    URL fired.
    Runtime.getRuntime().freeMemory()
    Runtime.getRuntime().totalMemory()
    Runtime.getRuntime().maxMemory()
    Gather data from daily app usage and build some trending statistics.
    Not only were we able to decide on an optimum memory setting for our server, we were able to detect leaks as well. However, I agree that detecting leaks wasn't as simple as with dedicated tools, considering the debugging effort involved. It is not a conventional approach, but it comes in handy when projects don't want to incur costs and need to maintain equilibrium on production systems.
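    A minimal sketch of such a filter, assuming the standard javax.servlet API (class name and log format are illustrative):
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    public class MemoryLoggingFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            Runtime rt = Runtime.getRuntime();
            // log the URL fired together with the three memory figures
            System.out.println(((HttpServletRequest) req).getRequestURI()
                    + " free=" + rt.freeMemory()
                    + " total=" + rt.totalMemory()
                    + " max=" + rt.maxMemory());
            chain.doFilter(req, res);
        }
    }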

    Hi,
    A few questions!
    1> Have you tweaked your JVM?
    2> What values are given for -Xms and -Xmx?
    3> What is the size of -XX:MaxPermSize?
    4> How much RAM is available on the system where you have deployed your app?
    5> Are you using pre-compiled JSPs for faster response?
    6> Which JDK are you using?
    7> Have you tried using the latest version of Tomcat?
    8> If none of these help, use a profiler to find the leak (JProfiler, JVMTI, YourKit profiler, etc.).
    I hope answering these questions would help you :)
    njoy!
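    As a concrete illustration of questions 2 and 3: these are JVM startup options, the values below are purely illustrative, and for Tomcat they typically go into the JAVA_OPTS environment variable:
    java -Xms512m -Xmx1024m -XX:MaxPermSize=256m -jar yourapp.jar
    (yourapp.jar stands in for the actual launch target.)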

  • Memory Problem with SET and GET parameter

    hi,
    I'm working on exits. I have one exit for importing and another one for changing a parameter.
    The SET PARAMETER exit code is:
    data: v_nba like eban-bsart,
          v_nbc like eban-bsart,
          v_nbo like eban-bsart.
    v_nbc = 'CAPX'.
    v_nbo = 'OPEX'.
    v_nba = 'OVH'.
    if im_data_new-werks is initial.
      if im_data_new-knttp is initial.
        if im_data_new-bsart = 'NBC' or im_data_new-bsart = 'SERC' or im_data_new-bsart = 'SERI'
           or im_data_new-bsart = 'SER' or im_data_new-bsart = 'SERM' or im_data_new-bsart = 'NBI'.
          set parameter id 'ZC1' field v_nbc.
        elseif im_data_new-bsart = 'NBO' or im_data_new-bsart = 'NBM' or im_data_new-bsart = 'SERO'.
          set parameter id 'ZC2' field v_nbo.
        elseif im_data_new-bsart = 'NBA' or im_data_new-bsart = 'SERA'.
          set parameter id 'ZC3' field v_nba.
        endif.
      endif.
    endif.
    and the GET PARAMETER code is:
    get parameter id 'ZC1' field c_fmderive-fund.
    get parameter id 'ZC2' field c_fmderive-fund.
    get parameter id 'ZC3' field c_fmderive-fund.
    free memory id 'ZC1'.
    free memory id 'ZC2'.
    free memory id 'ZC3'.
    With this code I am facing a memory problem: the memory is not refreshed every time.
    So please give me a proper solution.
    It's urgent.
    Thanks
    Ranveer

    Hi,
    I suppose you are trying to store a particular value in memory in one program and then retrieve it in another.
    If so, try using EXPORT data TO MEMORY ID 'ZC1' and IMPORT data FROM MEMORY ID 'ZC1'.
    To use SET PARAMETER/GET PARAMETER the specified parameter name should be in table TPARA, which I don't think is the case here.
    Sample code:
    * Data declarations for the function codes to be transferred
    DATA : v_first  TYPE syucomm,
           v_second TYPE syucomm.
    CONSTANTS : c_memid TYPE char10 VALUE 'ZCCBPR1'.
    * Move the function codes to the program variables
      v_first  = gv_bdt_fcode.
      v_second = sy-ucomm.
    * Export the function codes to the memory ID
    EXPORT v_first
           v_second TO MEMORY ID c_memid.        "ZCCBPR1 --- here you send the values to memory
    Then retrieve it:
    * Retrieve the function codes from the memory ID
      IMPORT v_first  TO v_fcode_1
             v_second TO v_fcode_2
      FROM MEMORY ID c_memid.                    "ZCCBPR1
      FREE MEMORY ID c_memid.                    "ZCCBPR1
    After reading the values from the memory ID, free it; your problem should be solved.
    Thanks
    Barada
    Edited by: Baradakanta Swain on May 27, 2008 10:20 AM

  • Memory Problems with Adobe PDF iFilter for 64-bit

    In preparation to rebuild my Windows Search index, I installed the Adobe PDF iFilter for 64-bit on my system (Vista Business 64). When I finally rebuilt the index, I wasn't too surprised by what I saw happen: the SearchFilter.exe process would kick in whenever I wasn't using the system and just eat RAM. One time I turned it on and it had allocated over 4,000 MB (and my system only has 4,030 MB available), so of course it was forcing all the other processes to hard fault (i.e. everything was moving like molasses; for example, it took 20 minutes to put the machine to sleep). But I just let it do its work, figuring that perhaps this was to be expected relative to the small library of PDFs I've accumulated on my computer, ranging from LaTeX-generated text files to containers for hi-res scans. So, after a day and a half of basically not using my laptop, everything finally calmed down and I enjoyed the benefits of searching the content of my library from the Windows Start menu - for a short while.
    However, to my dismay, I found that this freezing of my computer now occurred every time I downloaded a new PDF (in this particular case Google Books scans) and then left the computer to idle. Again, SearchFilter.exe would allocate all of my RAM for itself and push everything else into virtual memory, which means the slowest possible fetching. I had to uninstall, as this was making my computer unusable for 15-30 minutes after each idle. Everything is back in working order without the iFilter, but I would like to know if anyone has reported such problems on x64 systems. Obviously I will also report the problem to Microsoft, since the search engine should certainly have the precaution to handle such memory problems. However, it is a problem created by the Adobe PDF iFilter interacting with the Windows Search engine.

    Hello,
    We believe we have figured this out. It looks like it has to do with the length of the default folder location for the Adobe iFilter.
    I was able to reproduce the issue, and the following resolved it for me. See if this resolves it for you all as well.
    Here is how to get the Adobe version 11 PDF iFilter to work:
    1. If you haven't already, run the following in SQL Server:
    sp_fulltext_service 'load_os_resources', 1
    GO
    -- you might also need to run:
    -- sp_fulltext_service 'verify_signature', 0  -- this is used to validate trusted iFilters; 0 disables the check, so use with caution
    -- GO
    2. Stop SQL Server. (Make sure FDHost.exe stops.)
    3. Uninstall the Adobe iFilter (because its default path contains spaces, or the folder name is too long).
    4. Reinstall the Adobe iFilter, and when it prompts for where to install it, change the location to: C:\Program Files\Adobe\PDFiFilter
    5. Once the installation finishes, go to the computer's environment variables and add the following to PATH:
    C:\Program Files\Adobe\PDFiFilter\BIN
    NOTE: it must include the BIN folder.
    NOTE: if the old location that included spaces is in the PATH environment variable, remove it.
    6. Start SQL Server.
    7. If you had an existing full-text index on PDFs, drop the full-text index and recreate it.
    8. You should now get results when you run sys.dm_fts_index_keywords('db','tblname')  -- note: change db to the actual database name and tblname to the actual table name.
    Give this a try and see if this fixes yours.
    Sincerely,
    Rob Beene, MSFT
