Diagnostics Workload Analysis - Java Memory Usage gives BI query input

Dear all,
I have set up diagnostics (aka root cause analysis) at a customer site and I'm running into a problem: on the Java Memory Usage tab in Workload Analysis, only the BI query input screen is shown.
Sol Man 7.0 EHP1 SPS20 (ST component SP19)
Wily Introscope 8.2.3.5
Introscope Agent 8.2.3.5
Diagnostics Agent 7.20
When I click on the check button there I get the following:
Value "JAVA MEMORY USAGE" for variable "E2E Metric Type Variable" is invalid
I already checked multiple SAP Notes, including implementing the latest EWA EA WA XML file for the Solution Manager stack version.
I already reactivated BI content using report CCMS_BI_SETUP_E2E and it gave no errors.
The content is getting filled in Wily Introscope, extractors on Solution Manager are running and capturing records (>0).
Has anyone come across this issue already?
ERROR MESSAGE:
Diagnosis
Characteristic value "JAVA MEMORY USAGE" is not valid for variable E2E Metric Type Variable.
Procedure
Enter a valid value for the characteristic. The value help, for example, provides you with suggestions. If no information is available here, then perhaps no characteristic values exist for the characteristic.
If the variable for 0DATE or 0CALDAY has been created and is being used as a key date for a hierarchy, check whether the hierarchies used are valid for this characteristic. The same is valid for variables that refer to the hierarchy version.
  Notification Number BRAIN 643 
Kind regards
Tom
Edited by: Tom Cenens on Mar 10, 2011 2:30 PM

Hello Paul
I checked the guide earlier today. I also asked someone with more BI knowledge to take a look with me, but the root cause analysis data fetching isn't really the same as what is normally done in BI with BI cubes, so it's hard to determine why the data fetch is not working properly.
The extractors are running fine, I couldn't find any more errors in the diagnostics agent log files (in debug mode) and I don't find other errors for the SAP system.
I tried reactivating the BI content but it seems to be fine (no errors). I reran the managed system setup which also works.
One of the problems I did notice is that the managed SAP systems are half virtualized. They aren't completely virtualized (no separate IP address) but they are using virtual hostnames, which also causes issues with Root Cause Analysis: I cannot install only one agent, because then I cannot assign it to the managed systems, and when I install one agent per SAP system I get the message that there are already agents reporting to the Enterprise Manager residing on the same host. I don't know if this could influence the data extractor; I doubt it, because in Wily the data is being fetched fine.
The only thing that is not working at the moment is the Workload Analysis - Java Memory Analysis tab. It holds the Key Performance Indicators for the J2EE engine (garbage collection %). I can see them in Wily Introscope, where they are available and fine.
When I looked at the infocubes together with a BI team member, it seemed the infocube for daily performance stats was getting filled properly (checked through RSA1) but the infocube for hourly stats wasn't. This is also visible in workload analysis: data from yesterday displays fine in the workload analysis overview, for example, but data from an hour ago doesn't.
I do have to state that the Solution Manager doesn't meet the prerequisites (post-processing notes are not present after the SP-stack update, SLD content is not up to date), but I could not push through those changes within a short timeframe as the Solution Manager is also used for other scenarios and it would be too disruptive at this moment.
If I can't fix it I will have to explain to the customer why some parts are not working and request them to handle the missing items so the prerequisites are met.
One of the notes I found described a similar issue and noted it could be caused due to an old XML file structure so I updated the XML file to the latest version.
Strangely enough, SAPOscol also threw errors in the beginning. I had the Host Agent installed and updated, and the SAPOscol service was running properly through the Host Agent as a service. The diagnostics agent tries to start SAPOscol in /usr/sap/<SID>/SMDA<instance number>/exe, which does not hold the SAPOscol executable. I suppose it's a bug from SAP? After copying SAPOscol from the Host Agent to the location of the SMD Agent, the error disappeared. Instead, the agent tries to start SAPOscol, notices SAPOscol is already running, and writes in the log that SAPOscol is already running properly and a startup is not necessary.
To me it comes down to the point where I have little faith in the scenario if the Solution Manager and the managed SAP systems are not 100% maintained and up to date. I could open a customer message, but the first advice will be to patch the Solution Manager and meet the prerequisites.
Another pain point is that if the managed SAP systems are not 100% correct in transaction SMSY, it also causes heaps of issues. Changing the SAP system there isn't a fast operation, as it can be included in numerous logical components, projects and scenarios (ChaRM), and it causes disruption to daily work.
All in all I have mixed feelings about the implementation: I want to deliver a fully working scenario, but that is near impossible because the prerequisites are not met. I hope the customer will still be happy with what is delivered.
I sure do hope some of these issues are handled in Solution Manager 7.1. I will certainly mail my concerns to the development team and hope they can handle some or all of them.
Kind regards
Tom

Similar Messages

  • Analysing SAP Java Memory Usage in Unix/Linux

    Hi,
    I need to analyze SAP Java memory usage on a Unix/Linux machine (NW 7.0).
    Please guide me with the commands and steps - the complete procedure.
    Based on it I should decide whether to create a new server node or increase the heap size.
    Thanks in advance

    Hi,
    Do you have performance problems?
    How many CPU's are in the server?
    Did you check Log Configuration for delays or errors?
    Did you tune any existing parameters?
    You can add nodes only if there are performance problems. You may think of adding one node to start with.
    Proper number of server nodes within an instance:
    - #ServerNodes = availableMemory / (JavaHeap + PermSpace + Stack)
    You can calculate the server nodes based on the formula below:
    No. of server nodes = (RAM you want to assign or available RAM in GB) / 2.5 ============> for 64-bit systems
    No. of server nodes = (RAM you want to assign or available RAM in GB) / 1.5 ============> for 32-bit systems
    Hence, as per the above discussion, if we go with 5 server nodes that means:
    5 = RAM / 2.5 (assuming you are on a 64-bit platform)
    i.e. RAM = 12.5 GB
    2) Configure the JVM heap according to Note 723909 and Note 1008311 (Recommended Settings for NW 7.0 >= SR2 for the AIX JVM (J9)).
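    As a rough illustration of the sizing rule of thumb above (a sketch only; the class and method names are mine, and the 2.5 GB / 1.5 GB divisors are the rule of thumb quoted in this thread, not an official formula):

```java
// Rule-of-thumb server-node sizing as discussed above.
// The divisors (2.5 GB per node on 64-bit, 1.5 GB on 32-bit) are the
// thread's rule of thumb, not an official SAP sizing formula.
public class ServerNodeSizing {
    static int serverNodes(double availableRamGb, boolean is64Bit) {
        double gbPerNode = is64Bit ? 2.5 : 1.5;
        return (int) Math.floor(availableRamGb / gbPerNode);
    }

    public static void main(String[] args) {
        // 12.5 GB of RAM on a 64-bit host gives 5 server nodes
        System.out.println(serverNodes(12.5, true));
    }
}
```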

  • High Eden Java Memory Usage/Garbage Collection

    Hi,
    I am trying to make sure that my ColdFusion server is optimised to the max and to find out what the normal limits are.
    Basically it looks like at times my servers can run slow but it is possible that this is caused by a very old bloated code base.
    Jrun can sometimes have very high CPU usage so I purchased Fusion Reactor to see what is going on under the hood.
    Here are my current Java settings (running v6u24):
    java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
    With regards Memory, the only memory that seems to be running a lot of Garbage Collection is the Eden Memory Space. It climbs to nearly 1.2GB in total just under every minute at which time it looks like GC kicks in and the usage drops to about 100MB.
    Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
    I had the heap set to 2GB in total initially, giving about 600MB to the Eden Space. When I looked at the graphs from Fusion Reactor I could see that there was (minor) Garbage Collection about 2-3 times a minute when the memory usage maxed out the entire 600MB, which seemed a high frequency to my untrained eye. I then upped the memory to 4GB in total (~1.2GB automatically given to Eden space) to see the difference and saw that GC happened 1-2 times per minute.
    Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
    Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Any other advice for performance improvements would be much appreciated.
    Note: These graphs are not from a period where jrun had high CPU.
    Here are the graphs:
    PS Eden Space Graph
    PS Survivor Space Graph
    PS Old Gen Graph
    PS Perm Gen Graph
    Heap Memory Graph
    Heap/Non Heap Memory Graph
    CPU Graph
    Request Average Execution Time Graph
    Request Activity Graph
    Code Cache Graph

    Hi,
    >Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
    Yes, it is normal to garbage collect Eden often. That is a minor garbage collection.
    >Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Sometimes it is good to set Eden (Eden and its two Survivor Spaces combined make up the New or Young Generation part of the JVM heap) to a smaller size. I know what you're thinking: why make it less when I want to make it bigger? Give less a try (sometimes less = more, bigger not = better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I had better mention: make a backup copy of jvm.config before applying changes. Having said that, now you know how you can set the size bigger if you want.
    I think the JVM is perhaps making some poor decisions when sizing the heap. With Eden growing to 1GB and then being evacuated, not many objects are surviving and therefore not being promoted to the Old Generation. This ultimately means an object will need to be loaded into Eden again later rather than being referenced in the Old Generation part of the heap. That adds up to poor performance.
    >Any other advice for performance improvements would be much appreciated.
    You are using the Parallel garbage collector. Perhaps you could enable it to run multi-threaded, reducing the duration of the garbage collections, with jvm args ...-XX:+UseParallelGC -XX:ParallelGCThreads=N etc., where N = CPU cores (e.g. quad core = 4).
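    If you experiment with -Xmn as suggested, it is worth logging what the collector actually does before and after the change. A possible jvm.config line (an example only: the -Xmx/-Xms/-Xmn values are the ones discussed above, the logging flags are standard HotSpot options for Java 6, and you should back up jvm.config first):

```
java.args=-server -Xmx4096m -Xms4096m -Xmn172m -XX:MaxPermSize=256m -XX:PermSize=256m -XX:+UseParallelGC -XX:ParallelGCThreads=4 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log
```

    Comparing the minor-collection frequency and pause times in gc.log between settings gives you actual data instead of guesswork.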
    HTH, Carl.

  • How to monitor java memory usage in enterprise manager

    I am running sqlplus to execute a sql package, which generates XML.
    When processing 2000+ rows, it gives an out-of-memory error.
    Where in enterprise manger can I see this memory usage?
    Thanks.

    Hello,
    it depends a little on what you want to do. If you use the pure CCMS monitoring with the table ALTRAMONI you get average response time per instance and you only get new measurements once the status changes from green to yellow or red.
    In order to get continuous measurements you should look into Business Process Monitoring and the different documentation under https://service.sap.com/bpm --> Media Library --> Technical Information. E.g. the PDF Setup Guide for Application Monitoring describes this "newer" dialog performance monitor. You probably have to click on the calendar sheet in the Media Library to also see older documents. As Business Process Monitoring integrates with BW (there is also a BI Setup Guide in the Media Library) you can get trendlines there. This BW integration also integrates back with SL Reporting.
    Some guidance for SL Reporting is probably given under https://service.sap.com/rkt-solman but I am not 100% sure.
    Best Regards
    Volker

  • Java memory usage/management

    Hi,
    I am trying to give my program as much memory as possible. I have a machine with over 6GB of RAM. However, when I try
    java -Xmx4096M
    which is significantly less than what's available, I get this error:
    Invalid maximum heap size: -Xmx4096M
    Could not create the Java virtual machine.
    How come?
    Secondly, let's say I try a smaller number, like 3.8 GB:
    java -Xmx3800M
    and things work perfectly.
    Now, if I try 3.9 GB:
    java -Xmx3900M
    I get this error:
    Exception in thread "main" java.lang.OutOfMemoryError
            at java.util.zip.ZipFile.open(Native Method)
            at java.util.zip.ZipFile.<init>(ZipFile.java:112)
            at java.util.jar.JarFile.<init>(JarFile.java:127)
            at java.util.jar.JarFile.<init>(JarFile.java:65)
            at sun.misc.URLClassPath$JarLoader.getJarFile(URLClassPath.java:575)
            at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:542)
            at sun.misc.URLClassPath$3.run(URLClassPath.java:320)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.misc.URLClassPath.getLoader(URLClassPath.java:309)
            at sun.misc.URLClassPath.getLoader(URLClassPath.java:286)
            at sun.misc.URLClassPath.getResource(URLClassPath.java:156)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:191)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:187)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:274)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)
    How come?
    I don't mind the fact that Java can't give me 4096M. I can live with that. But what I would like to know is why I get this last error and also what is the MAXIMUM that I can use for the -Xmx option? I have some serious testing to do and I can't just write that "-Xmx3900M didn't seem to work and so I went with -Xmx3800." People will not like that sentence.
    Thanks,
    Jeff

    >OutOfMemoryError. My goal right now is to make sure that I let Java have as much memory as the JVM can handle. It seems like giving it 3800M is ok, but I would like to know if there is a good reason that 3900M doesn't work.
    Being able to set the heap size to 3800M does not mean that your JVM is actually using up to 3800M.
    On 32-bit processor machines, the largest contiguous memory address space the operating system can allocate to a process is about 1.8GB. Because of this, the maximum heap size can only be set up to about 1.8GB. On 64-bit processor machines this limit does not apply, as they have a much larger memory address space. So you need to check which processor you have; if it is a 32-bit processor, then no matter whether you set 3800M or 7800M, the maximum limit is about 1.8GB.
    Thanks guys for the help so far.
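    If you are unsure which kind of JVM you are actually running, a quick check along these lines can help (a sketch; sun.arch.data.model is a HotSpot-specific property and may be absent on other VMs):

```java
// Prints a hint about whether the running JVM is 32-bit or 64-bit.
// sun.arch.data.model is HotSpot-specific; os.arch is always set but
// describes the JVM build rather than the hardware.
public class JvmBitness {
    static String bitness() {
        String model = System.getProperty("sun.arch.data.model");
        if (model != null) {
            return model + "-bit";
        }
        // Fall back to the architecture string of the JVM build.
        return System.getProperty("os.arch");
    }

    public static void main(String[] args) {
        System.out.println("JVM data model: " + bitness());
    }
}
```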

  • Java memory usage

    Dear All,
    We have got quite a big Java application, and we have tried the code below towards the end; it keeps showing around 2MB. Is that considered high, and is this the right way to do it? We have closed all the statements and resultsets immediately after using them. How do we know if there is any memory leakage, if the value goes above 2MB?
    Runtime runtime = Runtime.getRuntime();
    // Run the garbage collector
    runtime.gc();
    // Calculate the used memory
    long memory = runtime.totalMemory() - runtime.freeMemory();
    System.out.println("Used memory is bytes: " + memory);

    935486 wrote:
    >I have googled and found many profiler tools, but then again you said there are no hard rules, so that won't really be much help. So in my case, say we have now closed all the resultsets and statements properly; that should not worry me much, I guess, right? Previously it kept growing, which I guess was because the resources were not closed properly.
    At least you won't have to worry about leaks being introduced through database stuff, no.
    >Anything else to be done to avoid an out of memory exception? Thank you.
    Write proper code. Which means you have to write code with care. And have it reviewed by other people; that's something that people don't do enough anymore - let other people sniff through your stuff. They're bound to find things you just overlooked.
    @EJP, morgalr and Dr. Clap (and all other regulars who happen to read this but have not replied yet) - right? I'm not alone in thinking that about the code reviewing, am I?
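    If you keep using the Runtime-based snippet from the question, the readings are a bit steadier if you hint the collector a few times first and then compare the trend across identical workloads (a sketch; the class and method names are made up):

```java
// Takes a used-heap reading after suggesting GC a few times.
// A single reading (e.g. "around 2MB") says little by itself; a value
// that keeps rising across identical workloads is the leak indicator.
public class UsedHeap {
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 3; i++) {
            rt.gc(); // gc() is only a hint, hence the repetition
        }
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.out.println("Used bytes: " + usedBytes());
    }
}
```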

  • Java Memory Monitoring in Web Application

    Hi All,
    Request you to please review the below mentioned suggestion and provide inputs:
    Over the years, I have been involved in some projects involving web development in J2EE. Java memory usage is an issue common to all of them.
    Following are some of the questions that come across to a developer regarding the JAVA memory:
    Memory Usage Statistics.
    Trending of Memory statistics.
    Memory Leak.
    Performance optimization in case memory leaks occur.
    When it comes to answering the above, the most common suggestion is to enable heap dumps and analyze them using a heap analyzer tool. However, there are times and projects where these options are not approved, and the developer is asked to review the code again and again. That is frustrating for someone who has just joined a maintenance project, where reading through all the code is not feasible. It happened to me, and I did the following to solve some of my problems and eventually all of them.
    Instead of analyzing heap dumps, I decided to do the following:
    Add a request filter to my J2EE application.
    Add following log statements in the filter:
    URL fired.
    Runtime.getRuntime().freeMemory()
    Runtime.getRuntime().totalMemory()
    Runtime.getRuntime().maxMemory()
    Gather data from daily app usage and build some trending statistics.
    Not only were we able to decide on an optimum memory setting for our server, we were able to detect leaks as well. However, I agree detecting leaks wasn't as simple as it is with other tools, considering the debugging effort involved. It is not a conventional approach, but it comes in handy when projects don't want to incur extra cost and want to keep production systems stable.
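    The logging idea above can be sketched as a tiny helper (illustrative only; in a real J2EE application this line would be built inside the filter's doFilter(), with the URL taken from the incoming request):

```java
// Builds the per-request memory log line described above.
// In a real filter, logLine() would be called once per request and the
// output sent to the application log for later trending.
public class MemorySnapshot {
    static String logLine(String url) {
        Runtime rt = Runtime.getRuntime();
        return url
                + " free=" + rt.freeMemory()
                + " total=" + rt.totalMemory()
                + " max=" + rt.maxMemory();
    }

    public static void main(String[] args) {
        System.out.println(logLine("/app/example"));
    }
}
```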

    Hi,
    Few questions!
    1> Have you tweaked your jvm?
    2> What are the values given for Xms and Xmx?
    3> What is the size of XX:MaxPermGen?
    4> How much RAM is available on the system where you have deployed your app?
    5> Are you using pre-compiled JSPs for faster response?
    6> Which JDK are you using?
    7> Have you tried using latest version of Tomcat?
    8> If these don't help, use a profiler to find the leak (JProfiler, JVMTI, YourKit profiler, etc.)
    I hope answering these questions would help you :)
    njoy!

  • How to analyse the main memory usage in SAP ERP systems?

    Dear expert,
    I'm doing research work on analysing main memory usage in SAP ERP systems.
    I would like to find out what is loaded into the buffers and when - that is, which processes have control of these memories and which are always performing something, which tables are loaded, and so on. I tried to isolate the space needed by a simple webservice call (creating one material) in my test system, but even after a $SYN there is still something stored in the buffers. I use a BAPI to avoid executing the SAPGUI and its repercussions on the system (I know the called BAPI uses resources too, but when I run this BAPI to get the statistics, like ST02, I get different values). Could someone help me or recommend something specific to read? Thanks a lot in advance.

    Dear expert,
    Thanks a lot for your answer. The point now is that I want to isolate the memory used by a webservice that I call; I mean, I would like to know how much memory this webservice is using in each buffer. And could you tell me where I could read something about the order in which things happen in the SAP system when a webservice is called (always memory-related), i.e. which steps are taken to store data in the buffers and so on? Thanks in advance.

  • SQL 2012 Memory usage analysis

    We have this clustered SQL Server, win 2008 R2 + SQL 2012 Standard SP2,
    Server RAM is 64GB, maximum memory is set at 50GB, and minimum memory is set at 10GB.
    When I check memory usage, it seems CLR takes more memory than the Buffer Pool; is that normal? Or does it indicate a memory allocation issue? Thanks a lot!
    The "total server memory" and "target server memory" performance counters are both 50GB.
    Please kindly see the screenshots below:

    Hi Vivian,
    I assume you are asking about SQL Server 2012 memory; the whole discussion would change if it were SQL Server 2008 R2 memory.
    You are not drawing the correct conclusion from the first query.
    CLR has reserved 6 GB but only committed 12 MB. On top of that, you are looking at virtual memory, which is 6TB for the SQL Server process, so of course CLR can reserve that much; there is actually no issue with it. Committed memory is what SQL Server is actually using; reserved, in simple language, is what SQL Server thinks it might need in the near future.
    As you said, target and total memory are the same, which is good.
    For both 2012 and 2008 R2, the query below gives total memory utilization by the SQL Server instance:
    select
        (physical_memory_in_use_kb/1024) Memory_usedby_Sqlserver_MB,
        (locked_page_allocations_kb/1024) Locked_pages_used_Sqlserver_MB,
        (total_virtual_address_space_kb/1024) Total_VAS_in_MB,
        process_physical_memory_low,
        process_virtual_memory_low
    from sys.dm_os_process_memory
    If you want to see a breakdown of memory utilized by the various clerks in 2008 R2:
    select
        type,
        (SUM(single_pages_kb)/1024) Single_page_allocator_memory,
        (SUM(multi_pages_kb)/1024) multi_page_allocator_memory,
        (sum(awe_allocated_kb)/1024) [AWE API memory]
    from sys.dm_os_memory_clerks
    group by type
    order by Single_page_allocator_memory desc
    For 2012:
    select
        type,
        (SUM(pages_kb)/1024) as Memory_Utilized,
        (sum(awe_allocated_kb)/1024) as [Memory Allocated by AWE API]
    from sys.dm_os_memory_clerks
    group by type
    order by Memory_Utilized desc
    Starting from 2012, max server memory controls much more than the buffer pool.
    Max server memory controls SQL Server memory allocation, including the buffer pool, compile memory, all caches, QE memory grants, lock manager memory, and CLR memory (basically any "clerk" found in dm_os_memory_clerks). Memory for thread stacks, heaps, linked server providers other than SQL Server, or any memory allocated by a "non SQL Server" DLL is not controlled by max server memory.

  • How to specify maximum memory usage for Java VM in Tomcat?

    Does anyone know how to set up memory usage for the Java VM, such as the "-Xmx256m" parameter, in Tomcat?
    I'm using Tomcat 3.x in Apache web server on Sun Solaris platform. I already tried to add the following line into tomcat.properties, like:
    wrapper.bin.parameters=-Xmx512m
    However, it seems to me that this doesn't work. So what if my servlet consumes a large amount of memory that exceeds the default 64M memory boundary of the Java VM?
    Any idea will be appreciated.
    Haohua

    With some help we found the fix. You have to set the -Xms and -Xmx at installation time when you install Tomcat 4.x as a service. Services do not read system variables. Go to the command prompt in windows, and in the directory where tomcat.exe resides, type "tomcat.exe /?". You will see jvm_options as part of the installation. Put the -Xms and -Xmx variables in the proper place during the install and it will work.
    If you can't uninstall and reinstall, you can apply this registry hack that dfortae sent to me on another thread.
    =-=-=-=-=-=
    You can change the parameters in the Windows registry. If your service name is "Apache Tomcat" The location is:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Apache Tomcat\Parameters
    Change the JVM Option Count value to the new value with the number of parameters it will now have. In my case, I added two parameters -Xms100m and -Xmx256m and it was 3 before so I bumped it to 5.
    Then I created two more String values. I called the first one I added 'JVM Option Number 4' and the second 'JVM Option Number 5'. Then I set the value inside each. The first one I set to '-Xms100m' and the second I set to '-Xmx256m'. Then I restarted Tomcat and observed when I did big processing the memory limit was now 256 MB, so it worked. Hope this helps!
    =-=-=-=-=
    I tried this and it worked. I did not want to have to go through the whole reinstallation process, so this was best for me.
    Thanks to all who helped on this.
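    For reference, the registry change described above could be captured in a .reg file along these lines (a sketch only; the service name, the option count, and the exact value types depend on your installation - export the key as a backup before touching anything):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Apache Tomcat\Parameters]
"JVM Option Count"=dword:00000005
"JVM Option Number 4"="-Xms100m"
"JVM Option Number 5"="-Xmx256m"
```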

  • Memory usage in Analysis Services tabular model

    Hello,
    I've been researching and investigating, trying to understand what is consuming memory resources in a tabular model that I'm working with. Using SQL Server Management Studio, the Estimated Size of the database is reported as 7768.34 MBs. Using Kasper de Jonge's BISM Server Memory Report, the database is reported as 15,465.13 MBs. However, a majority of the fields in the BISM Server Memory Report are empty, and so I cannot determine what is consuming the memory. The data source for this particular workbook is $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE.
    For example: I drill down to an individual column (ColumnA) in the BISM Server Memory Report (Database > Dimensions > Table > In-Memory Table > Columns > Column) and the reported memory usage is 706.97 MBs. Underneath ColumnA, I see a blank level with a reported memory usage of 623.59 MBs and a Segments level with a reported memory usage of 83.39 MBs. Looking at $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS, if I SUM the USED_SIZE for ColumnA, it totals roughly 83 MBs, which matches what is reported in the BISM Server Memory Report for the segment size. How do I determine what the other 623.59 MBs is being used for? Again, this discrepancy occurs for all columns in the model and not just this one example.
    Thanks!

    Follow-up to my original question. It appears that for the blank levels (at least under the column level), Kasper de Jonge's BISM Server Memory Report reports the dictionary size of the column. The memory usage size matches the DICTIONARY_SIZE attribute in $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS. I made a mis-assumption about what information $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS was providing.
    In my original post I referenced one database in particular where the Estimated Size property in the Database Properties dialog is listed as 7,768.34 MBs and $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE reports 15,465.13 MBs. Thoughts, comments, or opinions on why the Estimated Size property does not match what's reported in $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE?
    Thanks!

  • Calculate Oracle Applications memory usage

    Is there a way, or does anyone have a script, to track Oracle Applications memory usage over a period of 30 days?

    Hi
    By implementing Statspack you can get most of the statistics information, including memory usage of the apps instance at time intervals. Since there is no GUI, performance analysis with Statspack is a bit complicated. If you are on 10g, there is a new feature called AWR (Automatic Workload Repository), which gives you statistics and metrics in an HTML-format report.
    There are some tools to monitor the Oracle Applications services (CPU, memory consumption, ...), but I prefer to go with your own customized scripts to monitor the instance.
    You can find plenty of scripts on the net.
    http://www.orafaq.com/scripts/
    I have a little experience with BMC Software, which monitors Oracle Applications, but I am not sure whether it gives statistics for a given time period. See the BMC whitepaper below.
    http://documents.bmc.com/products/documents/66/54/56654/56654.pdf
    Regards
    Srinath

  • High memory usage on JDBC 10.2.0.1.0 driver on Prepared/Callable Statements

    We are observing high memory usage for each callable/prepared statement using the 10.2.0.1.0 JDBC driver. The char[] in oracle/jdbc/driver/T4CVarcharAccessor was allotted 64K to 320K and grows with usage. This problem is worse with the 10.1.0.2 driver, which allotted 720K bytes of memory for each statement right at the start.
    We found this by doing a JVM heap dump and analyzing the heap dump using IBM's heap analyser. Here is a snapshot of the heap dump for this object:
    321,240 [216] 11 oracle/jdbc/driver/T4CVarcharAccessor 0x72752968
    - 320,616 [320,616] 0 char[] 0x72761028
    - 216 [216] 0 short[] 0x727527d8
    - 72 [32] 1 java/lang/String 0x727530a0
    - 24 [24] 0 int[] 0x72752938
    - 24 [24] 0 int[] 0x72752948
    - 24 [24] 0 int[] 0x72752958
    - 16 [16] 0 bool[] 0x72752928
    - 16 [16] 0 byte[] 0x727528b0
    - 16 [16] 0 bool[] 0x72752918
    - 10,336 [88] 15 oracle/jdbc/driver/T4CMAREngine 0x712e7128
    - 1,544 [1,032] 79 oracle/jdbc/driver/T4CPreparedStatement 0x72754c58
    It is repeated many times for each prepared/callable stmt call.
    Details of our platform is:
    Database - Oracle Database 10g Release 10.2.0.1.0 - 64bit Production
    JDBC Driver - Oracle Database 10g Release 2 (10.2.0.1.0) JDBC Drivers
    JDK - [Classic VM, Version 1.4.2] from [IBM Corporation]
    Our callable statements are not using any of the Oracle caching facilities. It is a simple call statement with OUT parameters, and the statement is closed after each execution. However, we implement our own connection pooling and do not close the connection after each statement.
    Is there a workaround to this? Would appreciate any feedback.

    What is happening is that each new CallableStatement you create allocates a new char[]. I would strongly encourage you to use the implicit statement cache if at all possible. That way instead of creating a new statement each time with a new char[] you will get an already existing statement and reuse the existing char[]. Closing a statement releases the char[] so if you really are closing the statements the char[]s should be GC'd.
    Douglas

  • JDK1.4.0.2 Memory usage on Win 2K.

    By default Win2K does not yield much of the requested Java heap space in physical memory. It appears to give a small fraction to physical memory and the rest is kept in virtual memory.
    I've tuned the platform to have only absolutely necessary system applications running, minimizing other apps' memory usage. It always appears that the heap could be allocated in physical memory, but the OS just doesn't do it.
    My server has 2 Gb of ram, and my java app requests 1 Gb of heap space.
    I see quite a bit of page faults when my app executes under load.
    Looking at the memory meter in Task Manager, there appears to be easily more than 1GB of free RAM available, yet my process continues to page fault.
    Is there a way to get the JVM to lock/retain more physical memory and reduce the need to use virtual memory so much? It really does not appear to be a memory contention issue with other apps. I'm using JVM 1.4.0_02 on a dual-CPU Dell 1.5 GHz with 2GB RAM.
    Regards.

    I am having the same problem - did you get a solution yet?
    Thanks
    -Aaron

  • XML parser memory usage

    I am trying to prove an advantage of SAX (and StAX) parsers, namely that memory usage stays very low and roughly constant over time while parsing a large XML file.
    DOM APIs build a DOM tree in memory, so memory usage is at least the file size.
    To analyse the SAX heap usage over time I used the following source:
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.xml.sax.InputSource;
    import org.xml.sax.XMLReader;
    import org.xml.sax.helpers.XMLReaderFactory;
    public class ParserMemTest {
        public static void main(String[] args) {
            System.out.println("Start");
            try {
                InputStream xmlIS = new FileInputStream(
                        new File("xmlTestFile-0.xml") );
                HeapAnalyser ha = new HeapAnalyser(xmlIS);
                InputSource insource = new InputSource(ha);
                XMLReader SAX2parser = XMLReaderFactory.createXMLReader();
                //SAX2EventHandler handler = new SAX2EventHandler();
                //SAX2parser.setContentHandler(handler);
                SAX2parser.parse(insource);
                System.out.println("Finished.");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }and the HeapAnalyser class:
    import java.io.IOException;
    import java.io.InputStream;
    public class HeapAnalyser extends InputStream {
        private InputStream is = null;
        private int byteCounter = 0;
        private int lastByteCounter = 0;
        private int byteStepLogging = 200000; // bytes between logged measurements

        public HeapAnalyser(InputStream is) {
            this.is = is;
        }

        @Override
        public int read() throws IOException {
            int b = is.read();
            if (b != -1) {
                byteCounter++;
            }
            return b;
        }

        @Override
        public int read(byte b[]) throws IOException {
            int i = is.read(b);
            if (i != -1) {
                byteCounter += i;
            }
            // LOG
            if ((byteCounter - lastByteCounter) > byteStepLogging) {
                lastByteCounter = byteCounter;
                System.out.println(byteCounter + ": " + getHeapSize() + " bytes.");
            }
            return i;
        }

        @Override
        public int read(byte b[], int off, int len) throws IOException {
            int i = is.read(b, off, len);
            if (i != -1) {
                byteCounter += i;
            }
            // LOG
            if ((byteCounter - lastByteCounter) > byteStepLogging) {
                lastByteCounter = byteCounter;
                System.out.println(byteCounter + ": " + getHeapSize() + " bytes.");
            }
            return i;
        }

        public static String getHeapSize() {
            Runtime.getRuntime().gc();
            // note: divides by 1000, so the logged value is in KB despite the "bytes" label
            return Long.toString((Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / 1000);
        }
    }and these are the results:
    Start
    204728: 1013 bytes.
    409415: 1713 bytes.
    614073: 2400 bytes.
    818763: 3085 bytes.
    1023449: 3772 bytes.
    1228130: 4458 bytes.
    1432802: 5145 bytes.
    1637473: 5832 bytes.
    1842118: 6519 bytes.
    2046789: 7206 bytes.
    2251470: 7894 bytes.
    2456134: 8580 bytes.
    2660814: 9268 bytes.
    2865496: 9955 bytes.
    3070177: 10625 bytes.
    3274775: 11287 bytes.
    3479418: 11950 bytes.
    3684031: 12612 bytes.
    3888695: 13275 bytes.
    4093364: 13937 bytes.
    4298027: 14600 bytes.
    4502694: 15262 bytes.
    4707372: 15925 bytes.
    4912040: 16586 bytes.
    5116662: 17249 bytes.
    5321331: 17912 bytes.
    5525975: 18574 bytes.
    5730640: 19237 bytes.
    5935308: 19898 bytes.
    Finished.
    As you can see, while parsing the XML file (200k elements, about 6 MB) the heap usage rises steadily. I would expect this result with a DOM API, but not with SAX.
    What could be the reason? The Runtime-based measurement, the SAX implementation, or something else?
    thanks!
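
    For reference, a self-contained SAX parse that does register a ContentHandler (a minimal stand-in for the commented-out SAX2EventHandler, using an in-memory document instead of the poster's xmlTestFile-0.xml; the class and method names here are hypothetical) can be sketched like this:

    ```java
    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class SaxCountDemo {

        // Parse the given XML with SAX and count startElement events.
        // SAX only holds the current event, never the whole tree.
        static int countElements(String xml) throws Exception {
            final int[] count = {0};
            DefaultHandler handler = new DefaultHandler() {
                @Override
                public void startElement(String uri, String localName,
                                         String qName, Attributes atts) {
                    count[0]++;
                }
            };
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
            return count[0];
        }

        public static void main(String[] args) throws Exception {
            // Build a small test document in memory (hypothetical input).
            StringBuilder sb = new StringBuilder("<root>");
            for (int i = 0; i < 1000; i++) {
                sb.append("<item/>");
            }
            sb.append("</root>");
            System.out.println(countElements(sb.toString())); // prints 1001
        }
    }
    ```

    Even with a handler attached, per-element state lives only for the duration of the event callbacks, which is the constant-memory behaviour the test above is trying to demonstrate.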

    http://img214.imageshack.us/img214/7277/jprobeparser.jpg
    Test with jProbe while parsing the 64MB XML file.
    Testsystem: Windows 7 64bit, java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02), Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01, mixed mode), Xerces 2.10.0
    Eclipse Console System output:
    25818828: 116752 bytes.
    26018980: 117948 bytes.
    26219154: 99503 bytes.
    26419322: 100852 bytes.
    26619463: 102275 bytes.
    26819642: 103624 bytes.
    27019805: 104974 bytes.
    27220008: 105649 bytes.
    27420115: 106998 bytes.
    27620234: 108348 bytes.
    27820330: 109697 bytes.
    Exception in thread "main" java.lang.OutOfMemoryError: PermGen space
    at java.lang.String.intern(Native Method)
    at org.apache.xerces.util.SymbolTable$Entry.<init>(Unknown Source)
    at org.apache.xerces.util.SymbolTable.addSymbol(Unknown Source)
    at org.apache.xerces.impl.XMLEntityScanner.scanQName(Unknown Source)
    at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
    at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
    at ParserMemTest.main(ParserMemTest.java:47)
    Edited by: SUNMrFlipp on Sep 13, 2010 11:48 AM
    Edited by: SUNMrFlipp on Sep 13, 2010 11:50 AM
