High Eden Java Memory Usage/Garbage Collection

Hi,
I am trying to make sure that my ColdFusion server is optimised as much as possible, and to find out what the normal limits are.
Basically, it looks like my servers can run slow at times, but it is possible that this is caused by a very old, bloated code base.
JRun can sometimes have very high CPU usage, so I purchased FusionReactor to see what is going on under the hood.
Here are my current Java settings (running v6u24):
java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
With regard to memory, the only space that seems to be running a lot of garbage collection is the Eden space. It climbs to nearly 1.2GB in total just under every minute, at which point GC kicks in and the usage drops to about 100MB.
Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB, with small steps (~50MB) up or down when the full GC runs every 10 minutes.
I initially had the heap set to 2GB in total, giving about 600MB to the Eden space. When I looked at the graphs from FusionReactor I could see that there was a (minor) garbage collection about 2-3 times a minute, whenever memory usage maxed out the entire 600MB, which seemed a high frequency to my untrained eye. I then upped the heap to 4GB in total (~1.2GB automatically given to Eden) to see the difference, and saw that GC happened 1-2 times per minute.
Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
Any other advice for performance improvements would be much appreciated.
Note: These graphs are not from a period where jrun had high CPU.
Here are the graphs:
PS Eden Space Graph
PS Survivor Space Graph
PS Old Gen Graph
PS Perm Gen Graph
Heap Memory Graph
Heap/Non Heap Memory Graph
CPU Graph
Request Average Execution Time Graph
Request Activity Graph
Code Cache Graph

Hi,
>Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
Yes, it is normal to garbage collect Eden often. That is a minor garbage collection.
>Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
Sometimes it is good to set Eden (Eden and its two Survivor spaces combined make up the New or Young Generation part of the JVM heap) to a smaller size. I know what you're thinking - why make it smaller when I want to make it bigger. Give less a try (sometimes less = more, and bigger is not always better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I had better mention: make a backup copy of jvm.config before applying changes. Having said that, now you know how you can set the size bigger if you want.
I think the JVM is perhaps making some poor decisions in sizing the heap. With Eden growing to 1.2GB and then being evacuated, not many objects are surviving and therefore not being promoted to the Old Generation. This ultimately means an object will need to be loaded into Eden again later, rather than being referenced in the Old Generation part of the heap. That adds up to poor performance.
>Any other advice for performance improvements would be much appreciated.
You are using the Parallel garbage collector. Perhaps you could enable it to run multi-threaded, reducing the duration of the garbage collections: jvm args ...-XX:+UseParallelGC -XX:ParallelGCThreads=N etc., where N = CPU cores (e.g. quad core = 4).
HTH, Carl.
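As a rough illustration of Carl's last suggestion, here is a small sketch (a hypothetical helper, not part of ColdFusion or JRun) that derives the -XX:ParallelGCThreads value from the machine's visible core count:

```java
public class GcThreads {
    // Carl suggests N = number of CPU cores; availableProcessors()
    // reports the cores visible to this JVM.
    static int recommendedGcThreads() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        // Print a ready-to-paste fragment for java.args.
        System.out.println("-XX:+UseParallelGC -XX:ParallelGCThreads="
                + recommendedGcThreads());
    }
}
```

On a quad-core box this would print -XX:ParallelGCThreads=4, matching the example in the reply.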

Similar Messages

  • Diagnostics Workload Analysis - Java Memory Usage gives BI query input

    Dears
I have set up diagnostics (aka Root Cause Analysis) at a customer site, and I'm running into the problem that on the Java Memory Usage tab in Workload Analysis, a BI query input screen is shown instead of the data.
    Sol Man 7.0 EHP1 SPS20 (ST component SP19)
    Wily Introscope 8.2.3.5
    Introscope Agent 8.2.3.5
    Diagnostics Agent 7.20
    When I click on the check button there I get the following:
    Value "JAVA MEMORY USAGE" for variable "E2E Metric Type Variable" is invalid
    I already checked multiple SAP Notes like the implementation of the latest EWA EA WA xml file for the Sol Man stack version.
    I already reactivated BI content using report CCMS_BI_SETUP_E2E and it gave no errors.
    The content is getting filled in Wily Introscope, extractors on Solution Manager are running and capturing records (>0).
Did anyone come across this issue already?
    ERROR MESSAGE:
    Diagnosis
    Characteristic value "JAVA MEMORY USAGE" is not valid for variable E2E Metric Type Variable.
    Procedure
    Enter a valid value for the characteristic. The value help, for example, provides you with suggestions. If no information is available here, then perhaps no characteristic values exist for the characteristic.
    If the variable for 0DATE or 0CALDAY has been created and is being used as a key date for a hierarchy, check whether the hierarchies used are valid for this characteristic. The same is valid for variables that refer to the hierarchy version.
      Notification Number BRAIN 643 
    Kind regards
    Tom
    Edited by: Tom Cenens on Mar 10, 2011 2:30 PM

    Hello Paul
I checked the guide earlier today. I also asked someone with more BI knowledge to take a look with me, but it seems the Root Cause Analysis data fetching isn't really the same as what is normally done in BI with BI cubes, so it's hard to determine why the data fetch is not working properly.
    The extractors are running fine, I couldn't find any more errors in the diagnostics agent log files (in debug mode) and I don't find other errors for the SAP system.
    I tried reactivating the BI content but it seems to be fine (no errors). I reran the managed system setup which also works.
One of the problems I did notice is the fact that the managed SAP systems are half virtualized. They aren't completely virtualized (no separate IP address), but they are using virtual hostnames, which also causes issues with Root Cause Analysis: I cannot install only one agent, because I cannot assign it to the managed systems, and when I install one agent per SAP system I get the message that there are already agents reporting to the Enterprise Manager residing on the same host. I don't know if this could influence the data extractor. I doubt it, because in Wily the data is being fetched fine.
The only thing that is not working at the moment is the Workload Analysis - Java Memory Analysis tab. It holds the key performance indicators for the J2EE engine (garbage collection %). I can see them in Wily Introscope, where they are available and fine.
    When I looked at the infocubes together with a BI team member, it seemed the infocube for daily stats on performance was getting filled properly (through RSA1) but the infocube for hourly stats wasn't getting filled properly. This is also visible in the workload analysis, data from yesterday displays fine in workload analysis overview for example but data from an hour ago doesn't.
    I do have to state the Solution Manager doesn't match the prerequisites (post processing notes are not present after SP-stack update, SLD content is not up to date) but I could not push through those changes within a short timeframe as the Solution Manager is also used for other scenarios and it would be too disruptive at this moment.
    If I can't fix it I will have to explain to the customer why some parts are not working and request them to handle the missing items so the prerequisites are met.
    One of the notes I found described a similar issue and noted it could be caused due to an old XML file structure so I updated the XML file to the latest version.
SAPOscol also threw errors in the beginning, strangely enough. I had the Host Agent installed and updated, and the SAPOscol service was running properly through the Host Agent as a service. The diagnostics agent tries to start SAPOscol in /usr/sap/<SID>/SMDA<instance number>/exe, which does not hold the SAPOscol executable. I suppose it's a bug from SAP? After copying SAPOscol from the Host Agent to the location of the SMD Agent, the error disappeared. Instead, the agent tries to start SAPOscol, then notices SAPOscol is already running, and writes in the log that SAPOscol is already running properly and a startup is not necessary.
To me it comes down to the point where I have little faith in the scenario if the Solution Manager and the managed SAP systems are not maintained and 100% up to date. I could open a customer message, but the first advice will be to patch the Solution Manager and meet the prerequisites.
Another pain point is the fact that if the managed SAP systems are not 100% correct in transaction SMSY, it also causes heaps of issues. Changing the SAP system there isn't a fast operation, as it can be included in numerous logical components, projects, and scenarios (ChaRM), and it causes disruption to daily work.
All in all I have mixed feelings about the implementation; I want to deliver a fully working scenario, but it's near impossible because the prerequisites are not met. I hope the customer will still be happy with what is delivered.
    I sure do hope some of these issues are handled in Solution Manager 7.1. I will certainly mail my concerns to the development team and hope they can handle some or all of them.
    Kind regards
    Tom

  • High uccx engine memory usage in uccx8.2 su4

    Hi all,
We are facing high UCCX engine memory usage. Our system version is UCCX 8.2 SU4. Whenever this problem happens, we also face failure of all agent desktops. Is anyone facing this same issue? Kindly see the attached RTMT screenshot.
    Thanks.

    Hi Renji,
This is a known defect, CSCtn87921, and it's caused by a memory leak in the BIPPA service.
    -Please follow the workaround of restarting the BIPPA service from serviceability section on both servers if present
    -Check if the alert disappears
    -It should have been fixed in SU4 but apparently not
    Keep me posted
    Thanks,
    Prashanth

  • Analysing SAP Java Memory Usage in Unix/Linux

    Hi,
I need to analyze the SAP Java memory usage of a Unix/Linux machine... NW 7.0.
Please guide me with the commands and steps... the complete procedure.
Based on that, I should decide whether to create a new server node or increase the heap size.
    Thanks in advance

    Hi,
    Do you have performance problems?
    How many CPU's are in the server?
    Did you check Log Configuration for delays or errors?
Did you tune any existing parameters?
You can add nodes only if there are performance problems. You may think of adding one node to start with.
Proper number of server nodes within an instance:
– #ServerNodes = availableMemory / (JavaHeap + PermSpace + Stack)
You can calculate the server nodes based on the formula below:
No. of server nodes = (RAM you want to assign, or available RAM in GB) / 2.5 ==> for a 64-bit system
No. of server nodes = (RAM you want to assign, or available RAM in GB) / 1.5 ==> for a 32-bit system
Hence, as per the above discussion, if we go with 5 server nodes, that means:
5 = RAM / 2.5 (assuming you are on a 64-bit platform)
i.e. RAM = 12.5 GB
2) Configure the JVM heap according to Note 723909 and Note 1008311 - Recommended Settings for NW 7.0 >= SR2 for the AIX JVM (J9)
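The rule of thumb above can be sketched as a small calculation (a hedged illustration of the formula in this reply, not an official SAP tool; the class and method names are my own):

```java
public class ServerNodes {
    // Rule of thumb from the reply: divide assignable RAM (GB) by 2.5
    // on a 64-bit system, or by 1.5 on a 32-bit system, and round down.
    static int serverNodes(double ramGb, boolean is64bit) {
        return (int) Math.floor(ramGb / (is64bit ? 2.5 : 1.5));
    }

    public static void main(String[] args) {
        // 12.5 GB of RAM on a 64-bit platform -> 5 server nodes,
        // matching the worked example in the reply.
        System.out.println(serverNodes(12.5, true));
    }
}
```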

  • Unnaturally high cpu and memory usage

    Hello.
I have installed WL 6.1 and WL Portal 4.0 on a W2K machine. It has an 800 MHz CPU (I think), and 512 MB of RAM.
    What happens is: After server startup, everything is low and nice. But
    after a few jsp compilations, the cpu jumps to 100% and stays there,
    even after the page has been returned and the browser says "done".
    Actually, memory usage isn't that high; the java process is using
    about 50 megs of memory. But it has exceeded this a couple of times,
    and used 200+ MB.
    The database is also running on another machine.
    I tried deploying the same application on a locally installed
    WL/Portal, and the same thing happened, only with much more memory
    usage, about 200 - 250 megs. My machine became useless, and I had to
    shut down the server.
    What is causing this? Is the server's configuration totally screwed,
    or can some code be doing this? Btw, I know I am the only user on this
    server...
    On other threads here, I have seen people supplying server dumps of
    processes etc. How do I see this dump, or what processes within the
    server are running?
    I am very grateful for any help with this.
    Christer

    Take a thread dump of the server. You should at least be able to see
    what it's doing.
    On UNIX, you can send a SIGQUIT. (ie kill -3 the process)
    On Windows, you can CTRL-BREAK in the window.
    If you search for thread dump on edocs.bea.com, you should see a full
    explanation.
    Also, these groups can be searched on groups.google.com.
    -- Rob

  • High Page Pool Memory Usage on a Windows 2012 R2 Hyper-V Cluster

    Hi, 
maybe someone has a similar problem or can give me a helping hand.
I'm running a 9-node cluster (Windows 2012 R2, fully patched from the RTM version on).
The cluster is connected to a SAN, an EqualLogic (firmware 6.11), HIT Kit driver 4.7.1.
The system ran clean and without any event log entries until the paged pool usage of the nodes went over 15 GB.
I monitor that via a performance counter (\Memory\Pool Paged Bytes).
Otherwise I have no indication of it, for example in the process view; the calculated sum in the overview (Task Manager / Memory) does, however, show the 15 GB of memory usage.
I downloaded the RAMMap tool from Sysinternals, but it doesn't show me any useful information; in fact the figures differ between RAMMap and the performance counter (RAMMap = 251 MB, performance counter = 15 GB).
Just to make that point clear: nothing besides the page file sum on the system page and the perf counter shows that so much paged pool memory is in use by any process. Therefore it is hidden which process "needs" it.
I tried many things, like searching for a memory hole in the drivers, the Hyper-V stack, or the EqualLogic, and hoped for a resolution with every new patch that had "memory leak" in its description, but so far none has made a difference. Maybe someone has the same problem.
My cluster has rather large VMs on it compared to the NUMA size (VMs 4-128 GB, on Dell R720s (2x 8-core CPUs, 384 GB RAM)).
(The VMs also have high IO usage, as their RAM is used for an in-memory database (e.g. MongoDB).)
    Regards
    M

    Hi Alex, 
there is something wrong here, and not a "normal" memory leak, because in that case I would see the amount of paged pool memory assigned to a given process.
If that were the case, I could use RAMMap or even VMMap to address the paged pool usage, but those tools don't reflect the usage the way the perf counter does.
The Windows Driver Kit for a 2012 R2 system is WDK 8.1, which requires VS2013 ("Important: Before installing WDK 8.1 Update, you need to install Visual Studio 2013. See the Visual Studio links on this page."). My last hope was to get poolmon.exe and identify the source of the paged pool usage.
We have had this problem since the early days of Windows 2012 R2, and because the usage is quite "hidden" we only found out when we ran out of memory without knowing why.
The worst usage rates were up to 150 GB of paged pool!! Given 384 GB of RAM (Dell R720), that is quite a number.
I'm wondering if no one else with a cluster sees this (it seems that the number of nodes in the cluster and the RAM size of the VMs (>32 GB) have a tremendous impact on this problem; with a 4-node cluster the problem isn't as big).

  • Getting to know used memory without garbage collectable objects

    Hi all,
    I would like to know what is the currently used memory without garbage, so only the objects that are still referenced.
    Is there a way to do that? Preferably using JConsole?
    Thanks,
    Kristof

That is indeed a way, but the problem is that you are never sure that all garbage will get collected... (JConsole does indeed have a Force GC button.)
I suspect that the JVM only knows the amount of garbage when it performs a GC. And as this is an expensive operation, it would not be effective from a performance point of view to constantly keep track of the amount of garbage in the heap. That is probably also the reason why there is no profiler that supports this.
It makes it difficult to find out the actual memory gain of one implementation versus another. The most reliable way to go, I think, is to run the application several times, lowering the max heap each time until it fails with an OutOfMemoryError.
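A minimal sketch of the approach discussed here (with the caveat from this reply in mind: System.gc()/Runtime.gc() is only a hint, so the number is an estimate, and the class and method names are my own):

```java
public class LiveSetEstimate {
    // Ask for a collection first, then read used = total - free.
    // The JVM may ignore the GC request, so treat the result as a
    // rough upper bound on the live (still-referenced) set.
    static long usedAfterGc() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.out.println("Approx. live heap bytes: " + usedAfterGc());
    }
}
```

This is essentially what JConsole's Force GC button plus the heap gauge gives you, but callable from inside the application.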

Which algorithm is used by the Java runtime for garbage collection?

On what basis does Java reclaim objects using garbage collection?
And can you explain the method, i.e. step by step, how it is done?

    There are various whitepapers and the like on this.
    As far as which objects may be collected, it's just any object that isn't referred to - directly or indirectly - by one of the root set of threads in the JVM.
    Collecting algorithms vary from jvm to jvm. You'd do better to search the web for a whitepaper on the subject.
    ~Cheers

  • How to monitor java memory usage in enterprise manager

I am running SQL*Plus to execute a SQL package, which generates XML.
When processing 2000+ rows, it gives an out of memory error.
Where in Enterprise Manager can I see this memory usage?
    Thanks.

    Hello,
it depends a little on what you want to do. If you use the pure CCMS monitoring with the table ALTRAMONI, you get the average response time per instance, and you only get new measurements once the status changes from green to yellow or red.
In order to get continuous measurements, you should look into Business Process Monitoring and the different documentation under https://service.sap.com/bpm --> Media Library --> Technical Information. E.g. the PDF "Setup Guide for Application Monitoring" describes this "newer" dialog performance monitor. You probably have to click on the calendar sheet in the Media Library to also see older documents. As Business Process Monitoring integrates with BW (there is also a BI Setup Guide in the Media Library), you can get trend lines there. This BW integration also integrates back with SL Reporting.
Some guidance for SL Reporting is probably given under https://service.sap.com/rkt-solman, but I am not 100% sure.
    Best Regards
    Volker

  • Java memory usage

    Dear All,
We have got quite a big Java application, and we have tried the code below towards the end; it keeps showing around 2MB. Is that considered high, and is this the right way to do it? We have closed all the statements and result sets immediately after using them. How do we know if there is any memory leakage if the value goes above 2MB?
    Runtime runtime = Runtime.getRuntime();
    // Run the garbage collector
    runtime.gc();
    // Calculate the used memory
    long memory = runtime.totalMemory() - runtime.freeMemory();
    System.out.println("Used memory is bytes: " + memory);

935486 wrote:
> I have googled and found many profiling tools, but then again you said there are no hard rules, so they won't really be much help. So in my case, say we have now closed all the result sets and statements properly; that should not worry me much I guess, right? Previously it kept growing, which I guess was because the resources were not closed properly.
At least you won't have to worry about leaks being introduced through database stuff, no.
> Anything else to be done to avoid an out of memory exception? Thank you.
Write proper code. Which means you have to write code with care. And have it reviewed by other people; that's something that people don't do enough anymore - let other people sniff through your stuff. They're bound to find things you just overlook.
@EJP, morgalr and Dr. Clap (and all other regulars who happen to read this but have not replied yet) - right? I'm not alone in thinking that about the code reviewing, am I?

  • High (and increasing) memory usage: Windows Driver Foundation

Hello! I updated my Z50-70 to Windows 10 a few days ago, using the clean install method. Everything works great, but I noticed one problem. There is one process, "Windows Driver Foundation - user-mode driver framework", which consumes great amounts of RAM. The worrying thing is that when I start the computer the process doesn't use too much memory, but it keeps increasing, slowly but surely, and it doesn't seem to stop. I have done some research, and apparently it could be caused by some faulty driver. Does anybody have the same problem? Is there any way to identify what is causing this? Thanks.

Read this: https://msdn.microsoft.com/en-us/library/windows/hardware/ff550442(v=vs.85).aspx and this: https://msdn.microsoft.com/en-us/library/windows/hardware/ff557573(v=vs.85).aspx. These will help you discover the driver with high usage or leakage.

  • High CPU and Memory Usage

    Hi
I have CPU usage at 100%. If I don't play games and am just surfing the internet it's like 15-30%, but when I start a game like GTA IV it goes to 100%.
I have an HP P6 2490eo and Windows 10 Build 9926. I have almost 200 GB of free space on my C drive, and this problem started about 3 days ago. I haven't done a full virus scan, but I think my PC does it automatically. And I have done a "not full" test with Windows Defender; by that I mean a quick scan.
Could it be W10? Or is my CPU broken?
    Thanks For Help

    Late98, welcome to the forum.
    I haven't used Windows 10.  However, I believe that your problem is caused by it.  I suggest running Windows Update if it is still available.  Also, if you haven't done so, you should install any patches that are available for the game.  The game is old enough that it shouldn't be causing problems for the components in your computer.
    Please click the "Thumbs up + button" if I have helped you and click "Accept as Solution" if your problem is solved.

  • Java memory usage/management

    Hi,
I am trying to give my program as much memory as possible. I have a machine with over 6GB of RAM. However, when I try
java -Xmx4096M
which is significantly less than what's available, I get this error:
Invalid maximum heap size: -Xmx4096M
Could not create the Java virtual machine.
How come?
Secondly, let's say I try a smaller number, like 3.8 GB:
java -Xmx3800M
and things work perfectly.
Now, if I try 3.9 GB:
java -Xmx3900M
I get this error:
    Exception in thread "main" java.lang.OutOfMemoryError
            at java.util.zip.ZipFile.open(Native Method)
            at java.util.zip.ZipFile.<init>(ZipFile.java:112)
            at java.util.jar.JarFile.<init>(JarFile.java:127)
            at java.util.jar.JarFile.<init>(JarFile.java:65)
            at sun.misc.URLClassPath$JarLoader.getJarFile(URLClassPath.java:575)
            at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:542)
            at sun.misc.URLClassPath$3.run(URLClassPath.java:320)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.misc.URLClassPath.getLoader(URLClassPath.java:309)
            at sun.misc.URLClassPath.getLoader(URLClassPath.java:286)
            at sun.misc.URLClassPath.getResource(URLClassPath.java:156)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:191)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:187)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:274)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)
How come?
    I don't mind the fact that Java can't give me 4096M. I can live with that. But what I would like to know is why I get this last error and also what is the MAXIMUM that I can use for the -Xmx option? I have some serious testing to do and I can't just write that "-Xmx3900M didn't seem to work and so I went with -Xmx3800." People will not like that sentence.
    Thanks,
    Jeff

> OutOfMemoryError. My goal right now is to make sure that I let Java have as much memory as the JVM can handle. It seems like giving it 3800M is ok, but I would like to know if there is a good reason that 3900M doesn't work.
Being able to set the heap size to 3800M does not mean that your JVM is actually using all of the 3800M.
On 32-bit processor machines, the largest contiguous memory address space the operating system can allocate to a process is 1.8GB. Because of this, the maximum heap size can only be set up to 1.8GB. On 64-bit processor machines, the 1.8GB limit does not apply, as they have a larger memory address space. So you need to see what processor you have, and if it is a 32-bit processor, then no matter whether you set it to 3800M or 7800M, the max size limit is 1.8GB.
> Thanks guys for the help so far.
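One quick way to see what heap ceiling the running JVM actually accepted, whatever -Xmx you passed, is to ask the VM itself (a small sketch, not from the original thread; the class and method names are my own):

```java
public class MaxHeapCheck {
    // Runtime.maxMemory() reports the heap ceiling the running JVM
    // actually took, so you can verify what a given -Xmx flag gave you.
    static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Max heap (MB): " + maxHeapMb());
    }
}
```

Running it once per candidate -Xmx value is less ad hoc than "3900M didn't seem to work, so I went with 3800M".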

  • Garbage collection Java Virtual Machine : Hewlett-Packard Hotspot release 1.3.1.01

Hi,
I am trying to understand the garbage collection mechanism of the Java Virtual Machine: Hewlett-Packard Hotspot release 1.3.1.01.
There is a description of this mechanism in the PDF file "memory management and garbage collection", available under the "Java performance tuning tutorial" paragraph at the page:
    http://h21007.www2.hp.com/dspp/tech/tech_TechDocumentDetailPage_IDX/1,1701,1607,00.html
    Regarding my question :
    Below is an extract of the log file of garbage collections. This extract has 2 consecutive garbage collections.
    (each begins with "<GC:").
    <GC: 1 387875.630047 554 1258496 1 161087488 0 161087488 20119552 0 20119552
    334758064 238778016 335544320
    46294096 46294096 46399488 5.319209 >
    <GC: 5 387926.615209 555 1258496 1 161087488 0 161087488 0 0 20119552
    240036512 242217264 335544320
    46317184 46317184 46399488 5.206192 >
There are 2 "full garbage collections", one of reason "1" and one of reason "5".
For the first one, "Old generation After" = 238778016.
For the second, "Old generation Before" = 240036512.
Thus, "Old generation Before" of the second garbage collection is higher than "Old generation After" of the first. Why?
    I expected all objects to be allocated in the "Eden" space. And therefore I did not expect to s

I agree, but my current HP support is not very good on JVM issues.
    Rob Woollen <[email protected]> wrote:
    You'd probably be better off asking this question to HP.
    -- Rob
Martial wrote:
The object of this mail is the Hewlett-Packard 1.3.1.01 Hotspot Java Virtual Machine release and its garbage collection mechanism.
I am interested in the "-Xverbosegc" option for garbage collection monitoring.
I have been through the online document:
http://www.hp.com/products1/unix/java/infolibrary/prog_guide/java1_3/hotspot.html#-Xverbosegc
I would like to find out more about the garbage collection mechanism, and need further information to understand the result of the log file generated with "-Xverbosegc".
For example, here is an extract of a garbage collection log file generated with the Hewlett-Packard Hotspot Java Virtual Machine, release 1.3.1.01.
These are 2 consecutive rows of the file:
<GC: 5 385565.750251 543 48 1 161087488 0 161087488 0 0 20119552 264184480 255179792 335544320 46118384 46118384 46137344 5.514721 >
<GC: 1 385876.530728 544 1258496 1 161087488 0 161087488 20119552 0 20119552 334969696 255530640 335544320 46121664 46106304 46137344 6.768760 >
We have 2 full garbage collections, one of reason 5 and the next one of reason 1.
What happened between these 2 garbage collections, given that "Old generation Before" of row 2 is higher than "Old generation After" of row 1? I expected objects to be initially allocated in Eden, and so we could not get the old generation modified between the end of one garbage collection and the start of the next one.
Could you please clarify this issue and/or give more information about garbage collection mechanisms with the Hewlett-Packard Hotspot Java Virtual Machine, release 1.3.1.01.

  • Ultra high memory usage when deleting a slide show

    Hallo,
I have a DVD project with several menus and slideshows and a huge flowchart. I wanted to delete one of the slide shows. What happened then was that the memory usage rose to over 2 GB. Not the usage of the exe shown in the Windows Task Manager under "Processes", but the swap file usage.
    The first time I waited for like 5 minutes, and then Encore closed itself. Without any error message.
    The next time I waited as long again, but then the swap file usage went down slowly again. After that Encore refreshed its window. The flow chart was what you could best describe as "broken". Encore then stated some abnormal condition and offered me to save the file, and recommended to save it under a different file name. I clicked on "ok", but instead of a file save dialog a C++ runtime error box appeared. I click "ok" for that, then the "you should save but under a different name" box appeared again. I clicked on "ok" again, the C++ error reappeared, and after clicking on "ok" there Encore closed itself.
    I made a screenshot on which you can see the high swap file memory usage and the broken flowchart. You can download the screenshot under
    http://www.digitale-bibliothek.de/Downloads/ScreenEncoreDVD.jpg
I will now try to first delete all references to the slideshow in the flow chart. But since I cannot delete an asset that is still used in a slideshow or a timeline, Encore DVD should also prevent me from deleting objects that are used in the flowchart, if deleting the object is such a problem for Encore.
    Regards,
    Christian Kirchhoff

    I'm a little confused as to where you tried to delete the slideshow from.
    Was it from the actual Project Browser, where all the assets are listed?
    Can you try deleting the surplus slideshow from there, and let us know what happens please?
