What is a heap dump?

Dear Gurus,
We are using EP 7.0 patch level 14 on AIX 5.3 with Oracle 10.2.
For the past week we have been getting heap dumps in cluster/server0 and server1:
server0
2345692 Sep 18 14:30 Snap0001.20080918.090052.1249392.trc
635944052 Sep 18 14:31 heapdump.20080918.090052.1249392.phd
6673042 Sep 18 14:31 javacore.20080918.090052.1249392.txt
server1
  1682064 Sep 19 07:05 Snap0001.20080919.013517.1495290.trc
  1508514 Sep 19 07:05 javacore.20080919.013517.1495290.txt
30549145 Sep 19 07:05 heapdump.20080919.013517.1495290.phd
1690256 Sep 21 05:25 Snap0001.20080920.235513.1167568.trc
1516201 Sep 21 05:25 javacore.20080920.235513.1167568.txt
30526141 Sep 21 05:25 heapdump.20080920.235513.1167568.phd
We currently have 2 server processes.
What is a heap dump, under which circumstances does it happen, and how does its size matter?
How can we avoid it (for example, should the number of server processes be increased, or the Max Heap Size)?
Please guide me.
Balaji Nampally

Hi,
as a start, you can have a look at https://www.sdn.sap.com/irj/sdn/wiki?path=/pages/viewpage.action?pageId=33456
Fabien.
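
For illustration: the heapdump*.phd and javacore*.txt files above are what the IBM JVM on AIX writes by default when the Java heap is exhausted. A minimal sketch of the kind of code that eventually triggers such dumps (the class name and allocation size are made up for the example; IBM JVM default dump settings are assumed):

import java.util.ArrayList;
import java.util.List;

public class LeakUntilDump {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<byte[]>();
        while (true) {
            // Each iteration retains another 1 MB; when the heap is exhausted,
            // the IBM JVM writes heapdump*.phd and javacore*.txt by default.
            retained.add(new byte[1024 * 1024]);
        }
    }
}

Increasing the Max Heap Size or adding server processes only delays dumps like this; the durable fix is to find what retains the memory, for example by opening the .phd file in a tool that understands IBM heap dumps.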

Similar Messages

  • JVMPI_GC_ROOT_MONITOR_USED - what does this mean in a heap dump?

    I'm having some OutOfMemory errors in my application, so I turned on a profiler, and took a heap dump before and after an operation that is blowing up the memory.
    What changes after the operation is that I get an enormous amount of data that is reported under the node JVMPI_GC_ROOT_MONITOR_USED. This includes some Oracle PreparedStatements which are holding a lot of data.
    I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection? If so, they are not being garbage collected, but I'm getting OutOfMemoryError instead (I thought the JVM was supposed to guarantee GC would be run before OutOfMemory occurred).
    Any help on how to interpret what it means for objects to be reported under JVMPI_GC_ROOT_MONITOR_USED and any ways to eliminate those objects, will be greatly appreciated!
    Thanks

    I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection?
    Disclaimer: I haven't written code to use JVMPI, so anything here is speculation.
    However, after reading this: http://java.sun.com/j2se/1.4.2/docs/guide/jvmpi/jvmpi.html
    It appears that the "ROOT" flags in a level-2 dump are used with objects that are considered a "root reference" for GC (those references that are undeniably alive). Most descriptions of "roots" cover static class members and variables in a stack frame. My interpretation of this doc is that objects used in a synchronized statement are also considered roots, at least for the life of the synchronized block (which makes a lot of sense when you think about it).
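    A small sketch of that idea (illustrative only; whether a profiler labels the root exactly JVMPI_GC_ROOT_MONITOR_USED depends on the tool): while a thread is inside a synchronized block, the locked object stays reachable even if every ordinary reference to it is dropped.

    public class MonitorRootDemo {
        static Object makeLock() {
            return new byte[8 * 1024 * 1024]; // an 8 MB array used as a monitor
        }

        public static void main(String[] args) throws InterruptedException {
            Object lock = makeLock();
            synchronized (lock) {
                lock = null;         // drop the last ordinary reference
                System.gc();         // the array is still not collectable here: the
                                     // held monitor keeps it alive as a GC root
                Thread.sleep(60000); // a heap dump taken now shows the 8 MB array
            }
        }
    }

    For the original problem, PreparedStatements showing up under this root usually means some thread is still synchronized on them (or on something retaining them) at the moment of the dump, so they are alive by definition and will not be collected before the OutOfMemoryError.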

  • Heap Dump file generation problem

    Hi,
    I've configured configtool to have these 2 parameters:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:+HeapDumpOnCtrlBreak
    In my understanding, with these 2 parameters, the heap dump files will only be generated in 2 situations: when an OutOfMemoryError occurs, or when the user manually presses CTRL + BREAK in the MMC.
    1) Unfortunately, there are many heap dump files (9 in total) generated when none of the above situations occurred. I couldn't find "OutOfMemoryError" in the default trace, nor is the shallow heap size of those heap dump files anywhere near the memory limit. As a consequence, our server ran out of disk space.
    My question is, what are the other possibilities that heap dump file will be generated?
    2) In the Memory Consumption graph (NWA (http://host:port/nwa) -> System Management -> Monitoring -> Java Systems Reports), the out of memory error occurred when the memory usage reached about 80% of the allocated memory. What is the remaining 20% or so reserved for?
    Any help would be much appreciated.
    Thanks.

    Hi,
    Having the -XX:+HeapDumpOnCtrlBreak option makes the VM trigger a heap dump whenever a CTRL_BREAK event appears. The same event is also used to trigger a thread dump, an action you can perform manually from the SAP Management Console; I think it is called "Dump stacks". So if someone was triggering thread dumps to analyze other types of problems, this had the side effect of also writing a heap dump.
    Additionally, the server itself may trigger a thread dump (and by this also a heap dump if the option is present). It does this for example when a timeout appears during the start or stop of the server. A thread dump from such a moment allows us to see for example which service is unable to start.
    Therefore, I would recommend that you leave only the -XX:+HeapDumpOnOutOfMemoryError, as long as you don't plan to trigger any heap dumps on your own. The latter will cause the VM to write a heap dump only once - on the first appearance of an OutOfMemoryError.
    In case you need to trigger the heap dumps manually, leave the -XX:+HeapDumpOnCtrlBreak option for the moment of troubleshooting, but consider if you want to keep it afterwards.
    If heap dumps were written because of an OutOfMemoryError you should be able to see this in the dev_server file in /usr/sap/<SID>/<inst>/work/ . Also there you should be able to see if indeed thread dumps were triggered (just search for "Full Thread ").
    I hope this helps.
    Regards,
    Krum
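    If you occasionally need an on-demand heap dump without keeping -XX:+HeapDumpOnCtrlBreak enabled, one alternative on HotSpot-based JVMs is to trigger the dump programmatically. A sketch, assuming the com.sun.management.HotSpotDiagnosticMXBean API is available in your JDK (it is not present in every vendor's or every older JVM):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class ManualHeapDump {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // live=true dumps only reachable objects (and forces a GC first)
            diag.dumpHeap("manual_dump.hprof", true);
        }
    }

    The same MBean can also be reached remotely over JMX, which avoids running extra code inside the server process itself.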

  • AD4J: Unable to start Heap Dump due to OS Error

    When a Java 'Heap Dump' is requested, the following message is shown:
    Unable to start Heap Dump due to OS Error.
    Details of the monitored JDK and application server are given below:
    JDK: Sun JDK 1.4.2_08 on Solaris 9
    Application Server: WebLogic 8.1 SP4
    1. What could be the possible cause? No errors are logged in jamserv/logs/error_log. Is there any way to enable detailed logging?
    2. Each time the heap dump is requested, a file of the format heapdump<n> gets created in /tmp (e.g. /tmp/heapdump12.txt). Looking at the file, it contains the following:
    a) a line containing summary of the heap usage, and
    b) stack traces of all the threads
    Thanks!

    Wrong Forum?

  • ERROR while analysing 2.13 GB heap dump using HAT and JHAT.

    I am trying to analyse a 2.13 GB heap dump taken from Java 1.4 "IBM HotSpot" running on Solaris 8, but I get an error: java.io.IOException: Bad record length of -2021068228 at byte 0x00da146e. Is there any way I can read this dump to analyse the objects?

    I have no idea what "IBM HotSpot" is, but it sounds like you are running into this bug:
    http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6614052
    Grab the latest build of JDK 6 update 10 or JDK 7 and use that to examine the dump file.

  • Thread Dump vs Heap Dump

    Hi,
    The below questions are related to J2EE 7.00.
    - What is the difference between a thread dump and a heap dump?
    - When I go to the Config tool, I go to the Instance ID.
    There are two nodes under the Instance ID, Dispatcher and Server (there could be multiple servers).
    I see different tabs on Instance ID: Message Server and BootStrap, and Server General. There are lots of settings under the Server General tab.
    My question is: what is the usage of the "Server General" tab?
    Please let me know.
    I will appreciate your reply.
    Regards.
    Sume

    A Java engine runs all its work in 'threads', much like WPs in SM50, but unfortunately not as standalone in 7.0.
    A thread dump is a bit like taking a photo of SM50; that is why you will be asked to take 2 or 3, so as to see what sticks in the threads for a while.
    A Java engine uses memory to store all its objects. Most of that memory is a single area called the heap. A heap dump lists all objects in memory at that point in time.
    Thread dump = activities working/waiting
    Heap Dump = objects used
    As for the configtool, I do not have one in front of me to double-check, but I believe you are looking at the general settings as opposed to the local instance settings. So you can set parameters at a local level using an instance number, or for all instances by setting them as a general setting. Quite a few settings that are set at the local level are set globally anyway, so be aware that that may be the case.
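    To make the "photo of SM50" analogy concrete: a thread dump is essentially this information for every thread. A minimal sketch using only the standard API (the printed format is simplified compared to a real thread dump):

    import java.util.Map;

    public class ThreadSnapshot {
        public static void main(String[] args) {
            // Roughly what a thread dump contains: every thread plus its current stack
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                System.out.println("\"" + e.getKey().getName() + "\" state=" + e.getKey().getState());
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }

    Comparing two or three such snapshots shows which threads stay stuck on the same frame, which is exactly why several consecutive thread dumps are usually requested.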

  • Java heap dump location

    Hi Everyone.
    How do I configure where the Java heap dumps are written?
    Right now they're dumped in /tmp automatically.
    Thanks

    Hi Khaled,
    It depends on the operating system and the JDK. For the Sun JDK:
    -XX:HeapDumpPath=<directory where to save the heap dumps>
    For the IBM JDK:
    the IBM_HEAPDUMPDIR environment variable should be set to point to the desired location.
    Greetings, Myriana
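    On Sun/Oracle-derived JVMs, HeapDumpPath is a "manageable" flag, so it can also be inspected or changed on a live process without a restart. A sketch, assuming JDK 6 or later with the com.sun.management diagnostic MBean (the /var/dumps path is just an example):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumpLocation {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            System.out.println(diag.getVMOption("HeapDumpPath")); // current value
            diag.setVMOption("HeapDumpPath", "/var/dumps");       // takes effect for future dumps
        }
    }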

  • EM Agent gets restarted on heap dump generation with the jmap command

    Hi all,
    To find the cause of an OutOfMemory error, we are generating a heap dump of a particular JVM process by using the command "jmap -dump:format=b,file=heap.bin <pid>".
    But when we execute this command, all our other JVM processes crash, and it also crashes the emagent process.
    I would like to know the reason why these processes, and especially the emagent process, crash.
    Regards,
    $

  • Error while taking GC heap Dump using Microsoft PerfView

    Hello,
    When I try to take a heap dump for an application process using Microsoft PerfView, an error is observed; the error log is given below. Can you please let me know the root cause of this issue?
    Steps followed
    From the PerfView UI, choose “Take Heap Snapshot,” located on the Memory menu.
    And choose the process to capture
    Click the “Dump GC Heap” button or simply double click on the process name.
    Error Log
    Completed: Dumping GC Heap to C:\Install\TM\PerfView\TestProcess.gcDump   (Elapsed Time: 1.156 sec)
    Error: HeapDump failed with exit code 1
    Directory TestProcess.gcdump does not exist
    Started: Dumping GC Heap to C:\Install\TM\PerfView\TestProcess.1.gcDump
    Collecting a GC Heap SnapShot for process 1704
    [Taking heap snapshot of process '1704' ID 1704 to TestProcess.1.gcdump.  This can take 10s of seconds to minutes.]
    During the dump the process will be frozen.   If the dump is aborted, the process being dumped will need to be killed.
    Starting dump at 8/08/2014 3:55:15 PM
    Starting Heap dump on Process 1704 running architecture AMD64.
    set _NT_SYMBOL_PATH=SRV*C:\Users\UserId\AppData\Local\Temp\3\symbols*http://msdl.microsoft.com/download/symbols
    Exec: "C:\Users\UserId\AppData\Roaming\PerfView\VER.2014-08-08.13.49.17.346\AMD64\HeapDump.exe"  /MaxDumpCountK=250 "1704" "TestProcess.1.gcdump"
    Looking for C:\Users\UserId\AppData\Roaming\PerfView\VER.2014-08-08.13.49.17.346\Microsoft.Diagnostics.FastSerialization.dll
    Dumping process 1704 with id 1704.
    Process Has DotNet: False Has JScript: False Has ClrDll: False
    HeapDump Error: Could not dump either a .NET or JavaScript Heap.  See log file for details
    Completed: Dumping GC Heap to C:\Install\TM\PerfView\TestProcess.1.gcDump   (Elapsed Time: 1.172 sec)
    Error: HeapDump failed with exit code 1
    Directory TestProcess.1.gcdump does not exist

    Hi,
    Below is the DDL taken from a different database. Will this be enough? One more thing, please: what should the password be? Should it be DMSYS..... since this will not be used by me but by the system.
    CREATE USER "DMSYS" PROFILE "DEFAULT" IDENTIFIED BY "*******" PASSWORD EXPIRE DEFAULT TABLESPACE "SYSAUX" TEMPORARY TABLESPACE "TEMP" QUOTA 204800 K ON "SYSAUX" ACCOUNT LOCK
    GRANT ALTER SESSION TO "DMSYS"
    GRANT ALTER SYSTEM TO "DMSYS"
    GRANT CREATE JOB TO "DMSYS"
    GRANT CREATE LIBRARY TO "DMSYS"
    GRANT CREATE PROCEDURE TO "DMSYS"
    GRANT CREATE PUBLIC SYNONYM TO "DMSYS"
    GRANT CREATE SEQUENCE TO "DMSYS"
    GRANT CREATE SESSION TO "DMSYS"
    GRANT CREATE SYNONYM TO "DMSYS"
    GRANT CREATE TABLE TO "DMSYS"
    GRANT CREATE TRIGGER TO "DMSYS"
    GRANT CREATE TYPE TO "DMSYS"
    GRANT CREATE VIEW TO "DMSYS"
    GRANT DROP PUBLIC SYNONYM TO "DMSYS"
    GRANT QUERY REWRITE TO "DMSYS"
    GRANT SELECT ON "SYS"."DBA_JOBS_RUNNING" TO "DMSYS"
    GRANT SELECT ON "SYS"."DBA_REGISTRY" TO "DMSYS"
    GRANT SELECT ON "SYS"."DBA_SYS_PRIVS" TO "DMSYS"
    GRANT SELECT ON "SYS"."DBA_TAB_PRIVS" TO "DMSYS"
    GRANT SELECT ON "SYS"."DBA_TEMP_FILES" TO "DMSYS"
    GRANT EXECUTE ON "SYS"."DBMS_LOCK" TO "DMSYS"
    GRANT EXECUTE ON "SYS"."DBMS_REGISTRY" TO "DMSYS"
    GRANT EXECUTE ON "SYS"."DBMS_SYSTEM" TO "DMSYS"
    GRANT EXECUTE ON "SYS"."DBMS_SYS_ERROR" TO "DMSYS"
    GRANT DELETE ON "SYS"."EXPDEPACT$" TO "DMSYS"
    GRANT INSERT ON "SYS"."EXPDEPACT$" TO "DMSYS"
    GRANT SELECT ON "SYS"."EXPDEPACT$" TO "DMSYS"
    GRANT UPDATE ON "SYS"."EXPDEPACT$" TO "DMSYS"
    GRANT SELECT ON "SYS"."V_$PARAMETER" TO "DMSYS"
    GRANT SELECT ON "SYS"."V_$SESSION" TO "DMSYS"
    The other database has the DMSYS user and its status is EXPIRED & LOCKED, but I'm still able to take the dump using Data Pump??

  • Heap dump file size vs heap size

    Hi,
    I'd like to clarify my doubts.
    At the moment we're analyzing Sun JVM heap dumps from the Solaris platform.
    The observation is that the heap dump file is around 1.1 GB, while after loading it into SAP Memory Analyzer the statistics display "Heap: 193,656,968", which as I understood is the size of the heap.
    After I run:
    jmap -heap <PID>
    I get the following information:
    using thread-local object allocation
    Parallel GC with 8 thread(s)
    Heap Configuration:
       MinHeapFreeRatio = 40
       MaxHeapFreeRatio = 70
       MaxHeapSize      = 3221225472 (3072.0MB)
       NewSize          = 2228224 (2.125MB)
       MaxNewSize       = 4294901760 (4095.9375MB)
       OldSize          = 1441792 (1.375MB)
       NewRatio         = 2
       SurvivorRatio    = 32
       PermSize         = 16777216 (16.0MB)
       MaxPermSize      = 67108864 (64.0MB)
    Heap Usage:
    PS Young Generation
    Eden Space:
       capacity = 288620544 (275.25MB)
       used     = 26593352 (25.36139678955078MB)
       free     = 262027192 (249.88860321044922MB)
       9.213949787302736% used
    From Space:
       capacity = 2555904 (2.4375MB)
       used     = 467176 (0.44553375244140625MB)
       free     = 2088728 (1.9919662475585938MB)
       18.27830779246795% used
    To Space:
       capacity = 2490368 (2.375MB)
       used     = 0 (0.0MB)
       free     = 2490368 (2.375MB)
       0.0% used
    PS Old Generation
       capacity = 1568669696 (1496.0MB)
       used     = 1101274224 (1050.2569427490234MB)
       free     = 467395472 (445.74305725097656MB)
       70.20434109284916% used
    PS Perm Generation
       capacity = 67108864 (64.0MB)
       used     = 40103200 (38.245391845703125MB)
       free     = 27005664 (25.754608154296875MB)
       59.75842475891113% used
    So I'm just wondering what this "Heap" in the Statistic Information field visible in SAP Memory Analyzer actually is.
    When I go to the Dominator Tree view and look at the Retained Heap column, I see that the values roughly sum up to 193,656,968.
    Could someone shed some more light on it?
    thanks
    Michal

    Hi Michal,
    that looks indeed very odd. First, let me ask which version do you use? We had a problem in the past where classes loaded by the system class loader were not marked as garbage collection roots and hence were removed. This problem is fixed in the current version (1.1). If it is version 1.1, then I would love to have a look at the heap dump and find out if it is us.
    Having said that, this is what we do: After parsing the heap dump, we remove objects which are not reachable from garbage collection roots. This is necessary, because the heap dump can contain garbage. For example, the mark-sweep-compact of the old/perm generation leaves some dead space in the form of int arrays or java.lang.Object to win time during the compacting phase: by leaving behind dead objects, not every live object has to be moved which means not every object needs a new address. This is the kind of garbage we remove.
    Of course, we do not remove objects kept alive only by weak or soft references. To see what memory is kept alive only through weak or soft references, one can run the "Soft Reference Statistics" from the menu.
    Kind regards,
       - Andreas.
    Edited by: Andreas Buchen on Feb 14, 2008 6:23 PM
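    The size gap (a 1.1 GB file versus roughly 193 MB of reachable heap) can be reproduced in miniature: a dump of all objects includes dead ones, while a live-only dump approximates what the analyzer keeps after removing objects unreachable from GC roots. A sketch, assuming a HotSpot-style JVM that exposes HotSpotDiagnosticMXBean (file names are arbitrary):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class GarbageInDump {
        public static void main(String[] args) throws Exception {
            byte[][] junk = new byte[1000][];
            for (int i = 0; i < junk.length; i++) {
                junk[i] = new byte[10000];
            }
            junk = null; // ~10 MB is now garbage, but may still sit in the old generation

            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            diag.dumpHeap("all.hprof", false); // everything, dead objects included: bigger file
            diag.dumpHeap("live.hprof", true); // forces a GC, reachable objects only: closer to
                                               // the "Heap" number the analyzer reports
        }
    }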

  • How to take regular heap dumps using HPROF

    Hi Folks,
    I am using Oracle App Server as my application server. I found that the memory grows gradually and maxes out within 1 hour. I am using 1 GB of heap.
    I definitely feel this is a memory leak issue. Once the heap usage reaches 100%, I start getting full GCs, my whole server hangs, and nothing works. Sometimes the JVM even crashes and restarts again.
    I didn't find an OutOfMemory exception in any of my logs either.
    I came to know that we can use HPROF to deal with this.
    I use the below as my JVM args:
    -agentlib:hprof=heap=all,format=b,depth=10,file=$ORACLE_HOME\hprof\Data.hprof
    I run my load test for 10 mins; by then my heap usage has grown to some extent.
    My Questions:
    1. Why are there 2 files generated, one with the name Data.hprof and another with Data.hprof.tmp? Which is which?
    2. How do I get the dump at 2 different points in time, so that I can compare the 2 dumps and say which object is growing more?
    I downloaded HAT, and if I open this Data.hprof file with HAT, I get the error below. This error comes if I open the file without stopping the JVM process.
    java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at hat.parser.HprofReader.read(HprofReader.java:202)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    If I stop the JVM process and then open the file through HAT, I get this error:
    Started HTTP server on port 7000
    Reading from hprofData.hprof...
    Dump file created Wed Dec 13 02:35:03 MST 2006
    Warning: Weird stack frame line number: -688113664
    java.io.IOException: Bad record length of -1551478782 at byte 0x0008ffab of file.
    at hat.parser.HprofReader.read(HprofReader.java:193)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    The JVM version I am using is: Sun JVM 1.5.0_06.
    I am seriously fed up with this memory leak issue... Please help me out, folks... I need this as early as possible...
    I hope I get early replies...
    Thanks in advance...

    First, the suggestion of using jmap is an excellent one, you should try it. On large applications, using the hprof agent you have to restart your VM, and hprof can disturb your JVM process, so you may not be able to see the problem as quickly. With jmap, you can get a heap snapshot of a running JVM when it is in the state you want to understand more of, and it's really fast compared to using the hprof agent. The hprof dump file you get from jmap will not have the stack traces of where objects were allocated, which was a concern of mine a while back, but all indications are that these stack traces are not critical to finding memory leak problems. The allocation sites can usually be found with a good IDE or search tool, like the NetBeans 'Find Usages' feature.
    On hprof, there is a temp file created during the heap dump creation; ignore the tmp file.
    The HAT utility has been added to JDK6 (as jhat) and many problems have been fixed. But most importantly, this JDK6 jhat can read ANY hprof dump file, from JDK5 or even JDK1.4.2. So even though the JDK6 jhat is using JDK6 itself, the hprof dump file it is given could have come from pretty much anywhere, including jmap. As long as it's a valid hprof binary dump file.
    So even if it's just to have jhat handy, you should get JDK6.
    Also, the Netbeans profiler (http://www.netbeans.org) might be helpful too. But it will require a restart of the VM.
    -kto
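    For question 2 (dumps at two different points in time), one low-effort approach is to schedule jmap from a small utility. A hedged sketch, assuming a JDK 6+ jmap on the PATH and the target PID passed as an argument (the 10-minute interval and file naming are arbitrary):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PeriodicJmap {
        public static void main(String[] args) {
            final String pid = args[0]; // PID of the JVM to snapshot
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    String file = "heap_" + System.currentTimeMillis() + ".hprof";
                    try {
                        // The same jmap invocation discussed above, run on a schedule
                        new ProcessBuilder("jmap", "-dump:format=b,file=" + file, pid)
                                .start().waitFor();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }, 0, 10, TimeUnit.MINUTES);
        }
    }

    Two such snapshots can then be compared in jhat or a memory analyzer to see which object counts grow between them.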

  • Why does hprof=heap=dump have so much overhead?

    I understand why the HPROF option heap=sites incurs a massive performance overhead; it has to intercept every allocation and record the current call stack.
    However, I don't understand why the HPROF option heap=dump incurs so much of a performance overhead. Presumably it could do nothing until invoked, and only then trace the entire heap from the system roots.
    Can anyone speak to why it doesn't work that way?
    - Gordon @ IA

    Traditionally agents like hprof had to be loaded into the virtual machine at startup, and this was the only way to capture these object allocations. The new hprof in the JDK 5.0 release (Tiger) was written using the newer VM interface JVM TI and this new hprof was mostly meant to reproduce the functionality of the old hprof from JDK 1.4.2 that used JVMPI. (Just FYI: run 'java -Xrunhprof:help' for help on hprof).
    The JDK 5.0 hprof will, at startup, instrument java.lang.Object.<init>() and all classes and methods that use the newarray bytecodes. This instrumentation doesn't take long and is just an initial startup cost; it's the run time and what happens then that is the performance bottleneck. At run time, as any object is allocated, the instrumented methods trigger an extra call into a Java tracker class, which in turn makes a JNI call into the hprof agent and native code. At that point, hprof needs to track all the objects that are live (the JVM TI free event tells it when an object is freed), which takes a table inside the hprof agent and memory space. So if the machine you are using is low on RAM, using hprof will cause drastic slowdowns; you might try heap=sites, which uses less memory but just tracks allocations based on the site of allocation, not individual objects.
    The more likely run time performance issue is that at each allocation, hprof wants to get the stack trace; this can be expensive, depending on how many objects are allocated. You could try using depth=0 and see if the stack trace samples are a serious issue for your situation. If you don't need stack traces, then you would be better off looking at the jmap command that gets you an hprof binary dump on the fly, with no overhead; then you can use jhat (or HAT) to browse the heap. This may require use of the JDK 6 (Mustang) release for this experiment, see http://mustang.dev.java.net for the free downloads of JDK 6 (Mustang).
    There is an RFE for hprof to allow the tracking of allocations to be turned on/off in the Java tracker methods that were injected, at the Java source level. But this would require adding some Java APIs to control sun/tools/hprof/Tracker which is in rt.jar. This is very possible and more with the JVM TI interfaces.
    If you haven't tried the NetBeans Profiler (http://www.netbeans.org) you may want to look at it. It does take an incremental approach to instrumentation and tries to focus in on the areas of interest and allows you to limit the overhead of the profiler. It works with the latest JDK 5 (Tiger) update release, see http://java.sun.com/j2se.
    Oh yes, also look at some of the JVM TI demos that come with the JDK 5 download. Look in the demo/jvmti directory and try the small agents HeapTracker and HeapViewer, they have much lower overhead and the binaries and all the source is right there for you to just use or modify and customize for yourself.
    Hope this helps.
    -kto
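    To see why the per-allocation work dominates, here is a toy model of the injected tracking; this is an assumption for illustration, not the real sun/tools/hprof/Tracker or the native agent, but it shows the two costs every allocation pays: a table insert, and (with depth > 0) a stack walk.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    final class ToyTracker {
        static final Map<Object, StackTraceElement[]> LIVE =
                new ConcurrentHashMap<Object, StackTraceElement[]>();

        static void objectAllocated(Object o) {
            // Per-allocation cost: record the object plus its allocation stack
            LIVE.put(o, Thread.currentThread().getStackTrace());
        }
    }

    public class ToyTrackerDemo {
        public static void main(String[] args) {
            long start = System.nanoTime();
            for (int i = 0; i < 100000; i++) {
                Object o = new Object();
                ToyTracker.objectAllocated(o); // what the injected bytecode effectively does
            }
            System.out.println("100k tracked allocations took "
                    + (System.nanoTime() - start) / 1000000 + " ms");
        }
    }

    Dropping the getStackTrace() call (the equivalent of depth=0) makes the loop dramatically cheaper, which matches the advice above.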

  • Heap dumps on 32bit and 64 bit JVM

    We are experiencing memory leak issues with one of our applications deployed on JBoss (Sun JVM 1.5, Win32 OS). The application is already memory intensive and consumes the maximum heap (1.5 GB) allowed on a 32-bit JVM on Win32.
    This leaves very little memory for the heap dump, and the JVM crashes whenever we try adding the heap dump flag (-agentlib:), with a "malloc error".
    Has anyone faced a scenario like this?
    Alternatively, for investigation purposes, we are trying to deploy it on Windows x64, but the vendor advises running it only on a 32-bit JVM. Here are my questions:
    1) Can we run a 32-bit JVM on Windows x64? Even if we can, can I allocate more than 2 GB for heap memory?
    2) I don't see the rationale for why we cannot run on a 64-bit JVM: Java programs are supposed to be 'platform-independent', and the application, in the form of byte code, should run no matter whether it is a 32-bit or a 64-bit JVM.
    3) Do we have any other, better tools (besides HPROF heap dumps) to analyze memory leaks? We tried using profiling tools, but they too fail because very little memory is available.
    Any help is really appreciated! :-)
    Anush

    Originally Posted by gleach1
    I've done this a few times with ZCM 11.2.
    Take special note that for each action in a bundle you can set a requirement; this is where you should be specifying the appropriate architecture for the action. It takes a bit longer to do, but it saves on the number of bundles and avoids clogging up event logs with junk.
    Modifying the existing actions in your bundles would probably be easier and make things cleaner to manage in the long term; this is more so the case when you have software that might only differ in the launch action depending on which Program Files folder it ends up in.
    Sorry to dig up this old topic, but I've created a bundle which has an action that needs to be executed on 32- and 64-bit machines, each by a specific executable. That is:
    (32 bits) ipxfer -s officescan.server -m 1 -p 8080 -c 63016
    (64 bits) ipxfer_x64 -s officescan.server -m 1 -p 8080 -c 63016
    So I added these two actions, marked each to have the correct architecture ("Architecture / = / 32" or "Architecture / = / 64"), with no failure on the prerequisite.
    But now, every time a 32-bit machine executes the bundle, it gets an error:
    Error starting "ipXfer_x64 -s officescan.server -m 1 -p 8080 -c 63016". Windows Error: This version of %1 is not compatible with the windows version in use. Please verify in your computer's system information if you need a x86 (32 bit) or x64 (64 bit) version of the program and contact software vendor
    All the machines' CPUs are 64-bit, but some of them have 32-bit Windows 7. These are the ones that are raising these errors.
    I have other bundles restricted to 32/64 architectures that work correctly, but there the restriction was created at the bundle level, not at the action level.
    Sorry for the bad English.

  • Problems when doing heap dumps

    Hirt,
    Thanks for your help. With the changes you suggested, it is now working with JMX authentication as well.
    However, I had another question: I cannot make JOverflow work. I did install the plugin, and every time I right-click on my remote JVM tab and click on Heap Dump, it opens a system popup for me to select where to save the dump. But as soon as I select a location and click OK, it throws this error:
    Problems when doing Heap Dump for JOverflow analysis

    Hi there!
    There can be multiple different reasons for this that I can think of: incorrect permissions granted to the user, the effective user of the Java process running Mission Control not having write permissions in the folder you are trying to save to, etc.
    What does it say if you run with the debug flags?
    http://hirt.se/blog/?p=281
    Kind regards,
    Marcus
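    As a quick check of the write-permission hypothesis, a tiny probe run as the same user as Mission Control can tell you immediately whether the target folder is writable (the class name is made up; pass the folder you tried to save the dump into as the argument):

    import java.io.File;
    import java.io.IOException;

    public class WriteCheck {
        public static void main(String[] args) throws IOException {
            File dir = new File(args[0]); // e.g. the directory chosen in the save dialog
            System.out.println("exists=" + dir.exists() + " canWrite=" + dir.canWrite());
            File probe = File.createTempFile("dump-probe", ".tmp", dir); // fails if not writable
            System.out.println("created " + probe.getAbsolutePath());
            probe.delete(); // clean up the probe file
        }
    }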

  • JVM heap dump options

    We have a memory leak occurring on our appserver (GlassFish) and are having trouble pinpointing its cause, so I want to turn on some of the JVM heap dump options. I have turned on the following two:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/heap_dump.hprof
    Are there any other options I can use to help resolve this? Also, what will the dump output contain? Plain text?
    Thanks

    A profiler comes to mind. But how do you know it is an actual memory leak? Are you getting OutOfMemory exceptions?
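    One way to answer that question before reaching for heap dumps is to watch the heap usage after forced collections: if the used-after-GC floor keeps rising under a steady load, a leak is likely; if it plateaus, the growth was ordinary garbage. A sketch using only standard java.lang.management APIs (the interval and iteration count are arbitrary):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class LeakCheck {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            for (int i = 0; i < 60; i++) {
                mem.gc(); // only a hint, with the same caveats as System.gc()
                long usedMb = mem.getHeapMemoryUsage().getUsed() / (1024 * 1024);
                System.out.println("used after GC = " + usedMb + " MB");
                Thread.sleep(10000);
            }
        }
    }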
