Memory Leakage with Tomcat

Hi,
I have created an application that uses a single JSP and makes multiple calls to a database. It populates a couple of arrays with values and then uses those values to fetch other values from the database.
I am noticing through Windows Task Manager that when I initially start Tomcat, memory usage goes up to about 36 MB. It then continues to grow by about 3 MB per user that makes a new connection. It can reach about 200 MB, and then I am forced to restart Tomcat.
I'm sure this is a memory leak issue. Are there any suggestions as to what I should do to get rid of this?
My connections are all closed and released, both database connections and result sets. I am performing garbage collection.
I am using sessions when users log in (I don't know whether this has anything to do with it).
Anyway, any help is greatly appreciated.
Cheers
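For anyone hitting the same symptom: the classic JSP-era leak is a Statement or ResultSet that is only closed on the happy path. Below is a minimal, self-contained sketch of the guaranteed-close pattern. The TrackedResource class is a hypothetical stand-in for Connection/Statement/ResultSet, just so the pattern runs without a database; on older JVMs without try-with-resources, the same guarantee needs a finally block.

```java
// Hypothetical stand-in for a JDBC resource, so the pattern runs without a DB.
class TrackedResource implements AutoCloseable {
    static int openCount = 0;
    TrackedResource() { openCount++; }
    @Override public void close() { openCount--; }
}

public class CloseDemo {
    public static void main(String[] args) {
        // try-with-resources closes all three even if the body throws --
        // the same guarantee a finally block gives Connection/Statement/ResultSet.
        try (TrackedResource conn = new TrackedResource();
             TrackedResource stmt = new TrackedResource();
             TrackedResource rset = new TrackedResource()) {
            // ... run the query and read the results here ...
        }
        System.out.println("still open: " + TrackedResource.openCount); // prints 0
    }
}
```

If any close path is skipped when an exception is thrown mid-query, each request can strand a Statement (and its driver-side buffers), which looks exactly like a steady per-user memory climb.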

Thanks for the help guys,
I run a query the first time the site loads that loads everything into the session. I then check whether the user is logged in. If the user is logged in, it loads an instance of that single session.
If not, it just loads the not-logged-in session page.
Thanks for that bit of connection architecture, ronaldharing.
My connection class is similar, though I will make some modifications with your changes in place and see how it goes.
This is the connection class I'm using:
package Vidz;

import java.io.*;
import java.util.*;
import java.sql.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class DBConnection {
     public String status;
     public int vendorStatus;
     public Connection conn;
     public Statement stmt;
     public CallableStatement callStmt;
     public PreparedStatement preparedStmt;
     public ResultSet rset;
     public SQLWarning thisWarning;
     public boolean thisConnClosed;

     /** Establishes a connection to the database. */
     public DBConnection() {
          try {
               Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
               conn = DriverManager.getConnection("jdbc:odbc:databaseName", "", "");
               thisWarning = conn.getWarnings();
               conn.setAutoCommit(true);
               status = "open";
          } catch (Exception e) {
               status = "DBConnection construction error: " + e.toString();
          }
     }

     /** Returns a java.sql.ResultSet. */
     public synchronized ResultSet executeQuery(String command) {
          try {
               status = command;
               stmt = conn.createStatement();
               status += "*";
               rset = stmt.executeQuery(command);
               status += "executeQuery OK: " + command;
               return rset;
          } catch (Exception e) {
               status = e.toString();
               return null;
          }
     }

     /** Returns an int indicator showing whether the statement was successful. */
     public synchronized int executeUpdate(String command) {
          try {
               stmt = conn.createStatement();
               status = "executeUpdate OK: " + command;
               return stmt.executeUpdate(command);
          } catch (Exception e) {
               status = e.toString();
               return -1;
          }
     }

     public void close() throws java.sql.SQLException {
          if (conn != null && !conn.isClosed())
               conn.close();
     }

     /** On garbage collection, close the db connection. */
     protected void finalize() throws IOException {
          try {
               if (conn != null && !conn.isClosed())
                    conn.close();
          } catch (java.sql.SQLException e) {
          }
     }
}
Sridharranganathan: I store the result of the query in the session, along with any images etc. I want each user to load just the one page, to improve performance.
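For what it's worth, the growth described in the first post is consistent with per-session caching rather than a classic leak. A back-of-envelope check (the 36 MB baseline and ~3 MB-per-user figures are from the first post; the user count is hypothetical):

```java
public class SessionCost {
    public static void main(String[] args) {
        long baselineMb = 36;   // observed Tomcat startup footprint
        long perSessionMb = 3;  // observed growth per new logged-in user
        int users = 55;         // hypothetical concurrent (or unexpired) sessions
        long totalMb = baselineMb + users * perSessionMb;
        System.out.println(totalMb + " MB"); // prints "201 MB"
    }
}
```

If each session caches query results and images, roughly 55 sessions that have not yet timed out is enough to reach the ~200 MB ceiling described above; shortening the session timeout or caching shared data at application scope instead of session scope would flatten that line.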
Thanks for all your help people!
Any more suggestions would be much appreciated

Similar Messages

  • Memory leakage with swing

    We have developed an applet using Swing components. We have used dispose(), called gc(), set references to null, and removed the listeners to make the components eligible for garbage collection. We are using Windows NT. When we watch Task Manager, the memory usage only increases as we work on the applet. When we minimize the browser and maximize it again, the extra memory is released, but as long as we work in the browser, however much we do, the memory usage in Task Manager only increases. Can you help us out? What are the reasons for the memory leakage? Why is the memory not released by the JVM to the OS? Is it designed that way? What precautions should we take? We use JDK 1.2 for development and run the applet with the JRE 1.2 and JRE 1.3 plug-ins installed on the Windows NT system.
    Thanks in advance

    Hi there,
    This topic has been discussed several times before. Basically, this is it:
    The GC knows it is sluggish (well, those who built it do, anyway), therefore it will not clean up until it is necessary, and therefore the memory for your Java app will grow (as long as there is more room, why clean?).
    When you start or close another program, the OS will demand more memory and the GC will run a cleanup. How effective this cleanup really is depends on the situation.
    To make things easier for the GC you should, as you state, clear all references etc.
    If you want to start another memory-consuming program, you should start it before the Java app; then the memory that is available for the Java VM is limited.
    There are also ways to set the maximum/minimum memory for the Java VM.
    Markus
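One way to see Markus's point from inside the JVM: Task Manager shows roughly the memory the JVM has reserved from the OS, not what live objects actually use. A small sketch using the standard Runtime API:

```java
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // ceiling set by -Xmx
        long total = rt.totalMemory(); // heap currently reserved from the OS
        long free = rt.freeMemory();   // unused part of that reservation
        long used = total - free;      // what live (and not-yet-collected) objects occupy
        System.out.println("used=" + used + " total=" + total + " max=" + max);
        // Task Manager tracks roughly 'total' plus JVM overhead. The GC lets
        // 'total' grow toward 'max' and rarely shrinks it back to the OS, so a
        // climbing process size is not by itself evidence of a leak.
    }
}
```

Comparing `used` before and after forcing load (and a gc) is a more honest leak signal than watching the process size.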

  • Memory leakage with oracle oci driver

    I have developed a Solaris 8 client/server application using the Java IDL CORBA implementation. The client sends requests to the server to update the database (the database is Oracle 8.1.7, and I connect to it using the Oracle OCI drivers). Requests are sent one at a time; there are no concurrent connections. I have a static connection that I establish with the database once I start the server. If that connection is lost for any reason (timeout or database failure), the application automatically tries to reconnect. I have noticed that if the new connection to the database fails and an SQLException is thrown, the memory used by the application process increases. This memory is not garbage collected, so the application hangs. I tried similar behaviour with the Oracle thin driver and things went fine; there was no memory leakage.
    I would really appreciate it if you can help me with this, since I can't use the thin driver because of failover limitations.

    I have noticed that if the new connection to database fails and an sql exception is thrown, memory used by the application process increases.
    How have you noticed this?
    I noticed this using the pmap command under the Solaris operating system. Every time I test reconnecting to the database, I check the memory used by the application before and after attempting to reconnect:
    /usr/proc/bin/pmap [myapp pid] | tail -1
    If I'm using a normal connection, the memory increases by 100 KB. If I'm using the OraclePooledConnection class, the increase is something like 500 KB. Again, this is only while there is a problem connecting to the database. If the connection to the database is okay, there is no memory increase at all.
    This memory is not garbage collected so application hangs.
    Then it isn't a Java problem. When Java runs out of memory it throws an OutOfMemoryError.
    Well, I'm not saying it is a Java problem for sure. I suspect it might be an Oracle OCI8 driver problem. I would appreciate it if anyone can help pinpoint the source of the error.
    I tried similar behaviour with the oracle thin driver and things went fine. There was no memory leakage. I would really appreciate, if you can help me in this since I can't use the thin driver because of failover limitations.
    I don't understand that last sentence at all.
    What I mean here is that instead of using the OCI8 driver to connect to the database, I used the thin driver and kept everything else the same. I simulated the failure to reconnect to the database, and based on the pmap observations there was no memory leakage.
    I want to know what needs to be done to get normal behavior when using the OCI8 drivers.

  • Memory Error with Tomcat 4.1

    I have a Tomcat 4.1 installation on a Linux 7.2 box. Tomcat uses
    mod_jk with Apache. We are currently in a development phase and change a lot of JSPs on a daily basis. Eventually Tomcat seems to run out of memory for the compilations and gives the following message:
    org.apache.jasper.JasperException: Unable to compile class for JSP
    An error occurred at line: -1 in the jsp file: null
    Generated servlet error:
    [javac] Compiling 1 source file
    The system is out of resources.
    Consult the following stack trace for details.
    java.lang.OutOfMemoryError
    After Tomcat is restarted, everything appears to be okay for a time, but eventually the problem comes back. It appears only when JSP files are changed; JSPs which were previously compiled and are unchanged run just fine.
    In the /var/tomcat4/conf/tomcat4.conf file I have the following command uncommented:
    JAVACMD="$JAVA_HOME/bin/java -Xms6m -Xmx100m"
    I am running java 1.4.1 on the Linux box.

    I was looking at the Jakarta web site, and the Tomcat 4.1 documentation gives a description of what is new in 4.1. It states:
    Rewritten Jasper JSP page compiler
    Performance and memory efficiency improvements
    among other things. Could they have a memory leak?

  • Memory Leak with Tomcat version update 3.2 to 6.0

    Hi, I've been trying to update Tomcat from 3.2 to 6.0. My issue is that I have a memory leak (or leaks?) that makes the web application unusable. Currently my setup uses these components:
    Tomcat 6.0, Sun JDK 1.6.0_01, MSSQL 2005, Microsoft SQL Server 2005 JDBC Driver 1.2, Xalan 2.7.0, log4j 1.0.4 (which should be the only out-of-date component)
    It is a fairly large application that uses XSLT with Xalan and Java servlets to display web pages. There was no issue with memory leaks before the update from Tomcat 3.2, Sun JDK 1.4.2 and the old Xalan and JDBC (for MSSQL 2000) components.
    My question for the community is: where should I be looking for my memory leak? Are there known issues with my setup?
    thanks for your help,
    Matt

    Just in case someone goes down the same road as me, my problem was actually the one listed on the page below. My threads are not being released after a StandardContext reload. I'm not sure whether this leak also applies to Tomcat versions before 4.
    http://opensource.atlassian.com/confluence/spring/pages/viewpage.action?pageId=2669
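If the leaked threads are ones the webapp starts itself, the usual fix is to make them interruptible and stop them on context shutdown (e.g. from ServletContextListener.contextDestroyed). A minimal stand-alone sketch of that shutdown handshake, using plain threads so it runs outside a container; the listener wiring is assumed, not shown:

```java
public class StoppableWorker {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // Poll the interrupt flag instead of looping forever unconditionally.
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(100); // stand-in for the real periodic work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // restore the flag and exit
                }
            }
        });
        worker.start();
        // On context shutdown (contextDestroyed in a real webapp):
        worker.interrupt();
        worker.join(2000);
        System.out.println("worker alive: " + worker.isAlive()); // prints false
    }
}
```

A thread left running across a reload keeps its context classloader, and everything that classloader loaded, reachable, which matches the StandardContext symptom on that page.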

  • Memory Leakage in Conky with Lua

    Hi there, I've been experiencing ongoing memory leakage with my Conky + Lua scripts. It keeps accumulating memory until I kill the process and restart it. The highest I've seen it go is 10% of my 4 GB of RAM, so it does get substantial if unchecked.
    I did google it, and it mentioned something about cairo_destroy(cr), so I inserted it at the end of functions randomly where it made sense (to my limited scripting skills) and where it didn't (just in case), but it didn't seem to make any difference.
    Here is the Lua script - it draws rings as percentage bars. I believe it was taken from somewhere on the Arch forums months ago, where it had also been modified.
    --[[
    Ring Meters by londonali1010 (2009)
    This script draws percentage meters as rings. It is fully customisable; all options are described in the script.
    To call this script in Conky, use the following (assuming that you save this script to ~/scripts/rings.lua):
        lua_load ~/scripts/rings-v1.2.1.lua
        lua_draw_hook_pre ring_stats
    ]]
    -- Background settings
    corner_r=20
    main_bg_colour=0x060606
    main_bg_alpha=0.4
    -- Ring color settings
    ring_background_color = 0x000000
    ring_background_alpha = 0.6
    ring_foreground_color = 0x909090
    ring_foreground_alpha = 1
    -- Rings settings
    settings_table = {
        {
            name='cpu',
            arg='cpu2',
            max=100,
            bg_colour=ring_background_color,
            bg_alpha=ring_background_alpha,
            fg_colour=ring_foreground_color,
            fg_alpha=ring_foreground_alpha,
            x=50, y=55,
            radius=31,
            thickness=3,
            start_angle=-180,
            end_angle=0
        },
        {
            name='cpu',
            arg='cpu1',
            max=100,
            bg_colour=ring_background_color,
            bg_alpha=ring_background_alpha,
            fg_colour=ring_foreground_color,
            fg_alpha=ring_foreground_alpha,
            x=50, y=55,
            radius=35,
            thickness=3,
            start_angle=-180,
            end_angle=0
        },
        {
            name='memperc',
            arg='',
            max=100,
            bg_colour=ring_background_color,
            bg_alpha=ring_background_alpha,
            fg_colour=ring_foreground_color,
            fg_alpha=ring_foreground_alpha,
            x=205, y=55,
            radius=32,
            thickness=10,
            start_angle=-180,
            end_angle=-0
        },
    }
    require 'cairo'
    local function rgb_to_r_g_b(colour,alpha)
    return ((colour / 0x10000) % 0x100) / 255., ((colour / 0x100) % 0x100) / 255., (colour % 0x100) / 255., alpha
    end
    local function draw_ring(cr,t,pt)
    local w,h=conky_window.width,conky_window.height
    local xc,yc,ring_r,ring_w,sa,ea=pt['x'],pt['y'],pt['radius'],pt['thickness'],pt['start_angle'],pt['end_angle']
    local bgc, bga, fgc, fga=pt['bg_colour'], pt['bg_alpha'], pt['fg_colour'], pt['fg_alpha']
    local angle_0=sa*(2*math.pi/360)-math.pi/2
    local angle_f=ea*(2*math.pi/360)-math.pi/2
    local t_arc=t*(angle_f-angle_0)
    -- Draw background ring
    cairo_arc(cr,xc,yc,ring_r,angle_0,angle_f)
    cairo_set_source_rgba(cr,rgb_to_r_g_b(bgc,bga))
    cairo_set_line_width(cr,ring_w)
    cairo_stroke(cr)
    -- Draw indicator ring
    cairo_arc(cr,xc,yc,ring_r,angle_0,angle_0+t_arc)
    cairo_set_source_rgba(cr,rgb_to_r_g_b(fgc,fga))
    cairo_stroke(cr)
    end
    local function conky_ring_stats()
    local function setup_rings(cr,pt)
    local str=''
    local value=0
    str=string.format('${%s %s}',pt['name'],pt['arg'])
    str=conky_parse(str)
    value=tonumber(str)
    if value == nil then value = 0 end
    pct=value/pt['max']
    draw_ring(cr,pct,pt)
    end
    if conky_window==nil then return end
    local cs=cairo_xlib_surface_create(conky_window.display,conky_window.drawable,conky_window.visual, conky_window.width,conky_window.height)
    local cr=cairo_create(cs)
    local updates=conky_parse('${updates}')
    update_num=tonumber(updates)
    if update_num>1 then
    for i in pairs(settings_table) do
    setup_rings(cr,settings_table[i])
    end
    end
    cairo_destroy(cr)
    end
    --[[ This script draws a transparent background for conky ]]
    local function conky_draw_bg()
    if conky_window==nil then return end
    local w=conky_window.width
    local h=conky_window.height
    local cs=cairo_xlib_surface_create(conky_window.display, conky_window.drawable, conky_window.visual, w, h)
    local cr=cairo_create(cs)
    -- local thick=2
    cairo_move_to(cr,corner_r,0)
    cairo_line_to(cr,w-corner_r,0)
    cairo_curve_to(cr,w,0,w,0,w,corner_r)
    cairo_line_to(cr,w,h-corner_r)
    cairo_curve_to(cr,w,h,w,h,w-corner_r,h)
    cairo_line_to(cr,corner_r,h)
    cairo_curve_to(cr,0,h,0,h,0,h-corner_r)
    cairo_line_to(cr,0,corner_r)
    cairo_curve_to(cr,0,0,0,0,corner_r,0)
    cairo_close_path(cr)
    cairo_set_source_rgba(cr,rgb_to_r_g_b(main_bg_colour,main_bg_alpha))
    --cairo_set_line_width(cr,thick)
    --cairo_stroke(cr)
    cairo_fill(cr)
    cairo_destroy(cr)
    end
    function conky_main()
    conky_draw_bg()
    conky_ring_stats()
    cairo_destroy(cr)
    end
    And this is called into conky via:
    background no
    override_utf8_locale no
    use_xft yes
    xftfont Monospace:size=8
    ## orig font cure
    text_buffer_size 2048
    update_interval 1.0
    total_run_times 0
    own_window yes
    own_window_transparent yes
    own_window_type desktop
    own_window_colour 191919
    own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
    double_buffer yes
    minimum_size 600 90
    maximum_width 320
    draw_shades no
    draw_outline no
    draw_borders no
    draw_graph_borders no
    default_color 909090
    default_shade_color fed053
    default_outline_color 7f8f9f
    alignment br
    gap_x 30
    gap_y 50
    no_buffers yes
    uppercase no
    cpu_avg_samples 2
    override_utf8_locale no
    color1 fff
    border_inner_margin 5
    border_outer_margin 5
    own_window_argb_visual no
    own_window_argb_value 200
    lua_load ~/.conky/rings.lua
    lua_draw_hook_pre main
    Left out the conky TEXT section unless anybody desperately needs to see that too.
    If anybody can point me in the right direction with this silly thing, that would be appreciated. Thanks!
    Last edited by ugugii (2011-11-16 17:42:00)

    No, I meant that the destroy functions should not be in conky_main at all. Why? Because you are using cr as an argument when you have no local cr defined there. You are passing an undefined value (nil) to these destroy functions, so they do nothing.
    Like I said, I don't use conky or cairo, but it is generally a good idea to destroy a resource you create. If you don't, you will get memory leaks, because "creating" is usually vague language that comes down to allocating memory, and "destroy" deallocates that memory.
    This line from your own post creates a surface and stores it in the cs variable:
    local cs=cairo_xlib_surface_create(conky_window.display,conky_window.drawable,conky_window.visual, conky_window.width,conky_window.height)
    Yet you forget to deallocate the surface stored in cs. So add a line like this after each cairo_destroy(cr):
    cairo_surface_destroy(cs)
    I hope that helps.

  • Are there any good tool for checking security risks, Code review, memory leakages for SharePoint projects?

    Are there any good tools for checking security risks, code review, and memory leakages for SharePoint projects?
    I found one such tool, "Fortify", at the link below. Are there any similar tools available which support SharePoint?
    Reference: http://www.securityresearch.at/en/development/fortify/
    Amalaraja Fernando,
    SharePoint Architect
    Please Mark As Answer if my post solves your problem or Vote As Helpful if a post has been helpful for you. This post is provided "AS IS" with no warranties and confers no rights.

    Hi Amalaraja Fernando,
    I'm not sure that there is one tool that combines all these features, but you may take a look at these solutions:
    SharePoint diagnostic manager
    SharePoint enterprise manager
    What is SPCop SharePoint Code Analysis?
    Dmitry
    Lightning Tools | Check out our SharePoint tools and web parts | Lightning Tools Blog | My Blog

  • Does making objects equal null help the gc handle memory leakage problems

    hi all,
    does setting objects to null help the GC handle memory leakage problems?
    does that help the GC collect unwanted objects?
    and how can I free memory and avoid memory leakage problems on devices?
    best regards,
    Message was edited by:
    happy_life

    Comments inlined:
    does making objects equal null help the gc handle memory leakage problems?
    To an extent, yes. During the mark phase it will be easier for the GC to identify the nullified objects on the heap while doing reference analysis.
    does that help out the gc to collect unwanted objects?
    Same answer as earlier. Even though you nullify the object, you cannot eliminate the reference-analysis phase of the GC, which would definitely take some time.
    and how can I free memory avoid memory leakage problems on devices?
    There is nothing like the soft/weak reference stuff that you get in J2SE as far as J2ME is concerned. Also, the user is not allowed to control GC behavior; even if you call System.gc(), you are never sure when it will trigger the GC thread. As far as possible, do not create new object instances; try to reuse the objects you have already instantiated.
    ~Mohan

  • Vision OCR memory leakage

    Hi guys!
    I have a "problem" with simple OCR vi's made with Vision Assistant in NI Vision 2011.
    When I create a simple script that only uses the OCR, and then create a .vi from it, the VI leaves the OCR session open. This results in a huge leak of RAM.
    You cannot even get the OCR session out of the .vi automatically when creating the VI, so that it could be disposed outside of the .vi! So the only solution I could figure out was to actually modify the .vi and build it inside there. The .abc file for the OCR also has to be rebuilt, because it is not necessarily on the same drive or base path, and Vision Assistant uses the whole path, e.g. "D:\Labview Projects\OCR\... ...fonts.abc".
    It wouldn't be a problem if I, and especially others, weren't creating those OCR VIs all the time. Now every VI has to be manually changed to dispose the session AND to take the .abc file path relative to the executable path.
    If somebody knows any solutions for this, please don't hesitate to tell me. Thanks!

    Hi,
    System.gc() only suggests that objects be removed; there is no obligation for the JVM to do so.
    Different JVM implementations will approach this in different ways. For example, an embedded JVM might be designed to nearly always remove objects that have no reference left.
    Anyway, this is not strictly speaking memory leakage, because if the JVM decides to, it CAN remove the object.
    Real memory leakage is where a series of object references are created and not destroyed, so they persist over time because they cannot be GC'ed.
    A typical example is to create an object, stick it in an array of objects for some processing, and then set the original object reference to null, thinking the object can now be GC'ed. But it can't, unless the object reference in the array (which is a copy of the original object reference) is also set to null.
    Hope that helps,
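The array example above fits in a few lines; the collection, not the local variable, is what keeps the object alive (a sketch, using a List in place of the array):

```java
import java.util.ArrayList;
import java.util.List;

public class LingeringReference {
    public static void main(String[] args) {
        List<byte[]> holder = new ArrayList<>();
        byte[] big = new byte[1024 * 1024];
        holder.add(big);   // a second reference now lives in the list
        big = null;        // clears only the local reference...
        System.out.println("still reachable: " + (holder.get(0) != null)); // prints true
        holder.clear();    // ...this is what actually makes the object collectable
    }
}
```

Session maps, static caches, and listener lists behave exactly like `holder` here, which is why nulling locals alone rarely fixes a leak.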

  • EXCEPTION_STACK_OVERFLOW with Tomcat - service shuts down

    Hello,
    I get the following error periodically which causes my Tomcat service to stop. There do not seem to be any exceptions listed in my log files, only the "An unrecoverable stack overflow has occurred." error listed in the jakarta_service_yyyymmdd.log file.
    I am running Tomcat 5.5.23 on a Windows 2000 machine as a service. I used the service.bat file included with the Tomcat downloads to create this service.
    I have tried a few things that I dug up while researching this error.
    The first time it happened, I increased the values from JvmMs 128 / JvmMx 256 to JvmMs 256 / JvmMx 512. It didn't take right away, but the next day, after one crash, the error stopped.
    The next time it happened, I found that there may be an issue with Tomcat 5.5.15+ where JSP files are cached - the solution, which worked immediately, was to add this to the options: "-Djava.io.tmpdir=%CATALINA_BASE%\temp;-Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true"
    The third time, I found that there were a lot of bug fixes between the version I was using and the current one (5.5.20 -> 5.5.23). After installing 5.5.23, it was fine.
    This time, I was asked to increase the session timeout, so I updated the web.xml files and restarted the service - it started failing immediately afterward.
    I haven't the slightest idea what could be going on. I have tried using JProfiler and modifying code in an attempt to reduce memory usage, but I don't think that had any effect.
    I don't have any problems when running/testing the application locally through Eclipse. This only occurs on the 'prod' server.
    Please help, my users are not pleased.
    Listed below is the log entry from the system32 dir :
    # An unexpected error has been detected by HotSpot Virtual Machine:
    # EXCEPTION_STACK_OVERFLOW (0xc00000fd) at pc=0x080ad956, pid=1800, tid=1876
    # Java VM: Java HotSpot(TM) Server VM (1.4.2_13-b06 mixed mode)
    # Problematic frame:
    # V [jvm.dll+0xad956]
    --------------- T H R E A D ---------------
    Current thread (0x00655068): JavaThread "CompilerThread0" daemon [_thread_in_native, id=1876]
    siginfo: ExceptionCode=0xc00000fd, ExceptionInformation=0x00000001 0x545c0ffc
    Registers:
    EAX=0x55f14810, EBX=0x55f14810, ECX=0x545ff534, EDX=0x00000001
    ESP=0x545c1000, EBP=0x00000002, ESI=0x00000000, EDI=0x545ff494
    EIP=0x080ad956, EFLAGS=0x00010202
    Top of Stack: (sp=0x545c1000)
    0x545c1000: 55f14810 545ff534 080ada7a 545ff494
    0x545c1010: 55f14810 545ff494 00000000 00000037
    0x545c1020: 55ff98cc 545ff534 080ada7a 545ff494
    0x545c1030: 55f14810 545ff494 00000000 00000001
    0x545c1040: 55ff988c 545ff534 080ada7a 545ff494
    0x545c1050: 55ff98cc 545ff494 00000000 00000002
    0x545c1060: 55ff985c 545ff534 080ada7a 545ff494
    0x545c1070: 55ff988c 545ff494 00000000 00000001
    Instructions: (pc=0x080ad956)
    0x080ad946: 5e 83 c4 0c c2 04 00 90 90 90 51 53 8b 5c 24 10
    0x080ad956: 55 8b e9 8b 4b 1c 56 57 8b 7c 24 18 8b d1 89 6c
    Stack: [0x545c0000,0x54600000), sp=0x545c1000, free space=4k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V [jvm.dll+0xad956]

    Hi, I found some documentation that gave the meaning of some of the text in that error log. It suggested a workaround that seems to have worked for the time being: switch to the client JVM rather than the server JVM. As far as I can tell, there is no performance hit or drawback, so I am going to go with this until something else happens.
    Tim.

  • Find memory leakage when passing Object Reference from Teststand to vi

    I am using TestStand to call a LabVIEW VI and pass ThisContext of the sequence to the VI as an object reference. But if I just loop this step, I can see the memory usage keep increasing. How can I avoid the memory leak inside the VI?
    See my VI; it posts a message to the UI.
    Solved!
    Go to Solution.

    You should be using a Close Reference node to close the references you get as a result of an Invoke node. In the code below you should be closing the references you get from the following:
    AsPropertyObject
    Thread
    Close those two references once you are done with them.
    Also make sure you have turned off result collection in your sequence, or you will be using up memory continually for the step results.
    Hope this helps,
    -Doug

  • Memory leakage issue in Solaris

    Hi Team,
    Hope you doing good!!
    I am facing a memory leakage issue on the Solaris server we have configured.
    Details:
    1. Frequent increase in memory utilization
    2. Major faults in system events: 189084 and increasing
    Server config.:
    Solaris 9 version 5.9, Sun Java Web Server 6.1, JDK 1.5, Oracle 10g.
    I would really appreciate it if you could give a conclusion for the above behavior of the server ASAP.
    Thanks,
    Vivek
    +919990550305


  • Memory leakage issue in Oracle-DOTNET environment

    Hi,
    One of my customers is facing a memory leak issue with their ASP.NET application. The environment details follow:
    1. ASP.NET 3.5 application (uses some Infragistics controls for grids)
    2. SSRS for reporting
    3. Oracle Server as the database (details given below)
    The memory leak occurs while running load testing in the test environment.
    Memory is consumed (to the tune of 1.2 GB) and then performance suffers.
    We analyzed the memory dump and found the Oracle client eating up a lot of memory which is not released.
    We tried the 11g driver as well, but without any improvement.
    Environment Details :
    1. Oracle server version –
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE 10.2.0.1.0 Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 – Production
    2. Client version
    We are using the following Oracle client version: SQL*Plus: Release 10.2.0.1.0 – Production
    (For performance purposes we have used the following client version: SQL*Plus: Release 11.1.0.7.0 – Production)
    3. ODP.NET version
    Version of Oracle.DataAccess.dll is 2.111.7.10
    It would be great if anybody could help on this issue and guide us onto the right track.
    Thanks in advance.
    Thanks & Regards,
    Anoop

    Please test it with release version as there may be a lot of things which are fixed after beta.
    Also please note down the statement cache size your application was using earlier. Starting from 2.111.7.10/2.111.7.20, the statement cache size is automatically tuned by default. This feature is called self tuning. You may disable self tuning and specify your own statement cache size as usual. To know more about self tuning, please consult ODP.NET Developer's Guide.
    I would suggest that you first upgrade to the release version 2.111.7.20. If that does not solve the problem, you may either
    - specify MaxStatementCacheSize
    or
    - disable self tuning and provide your own statement cache size

  • Memory Leakage while parsing and schema validation

    It seems there is some kind of memory leak. I was using XDK 9.2.0.2.0. Later I found a topic on this forum in which they (Oracle) acknowledged a memory leak, fixed in the 9.2.0.6.0 XDK. Using the truss command, I saw that each call to the parser and to schema validation opened a file descriptor for the lpxus and lsxus files. These descriptors were not closed, and they kept being opened with each call to the parser. I was able to diagnose this using truss on Solaris. After making many calls, I got the error message "could not open file Result.xsd (0202)". I am using one instance of Parser and Schema, and I do a cleanup of the parser after each parse.
    Later I downloaded 9.2.0.6.0.
    The parser problem was solved, but the problem continued for schema validation, even when I tried the latest beta release 10.
    This has caused great trouble for us. Could you please look into whether there is some sort of leak? Please advise if you have any solution.
    Code:
    This code below is called multiple times:
    char* APIParser::execute(const char* xmlInput) {
         char* parseResult = parseDocument(xmlInput);
         //if(strcmp(parseResult,(const char*)"")==0) {
         if(parseResult == NULL) {
              parseResult = getResult();
              parser.xmlclean();
              return parseResult;
         } else {
              return parseResult;
         }
    }
    Parser and schema are initialised in the constructor and terminated in the destructor.

    Hi, here is the complete test case
    #include <iostream>
    #ifndef ORAXML_CPP_ORACLE
    # include <oraxml.hpp>
    #endif
    using namespace std;

    #define FAIL { cout << "Failed!\n"; return; }

    void mytest(int count)
    {
         uword ecode;
         XMLParser parser;
         Document *doc;
         Element root, elem;
         if (ecode = parser.xmlinit())
         {
              cout << "Failed to initialize XML parser, error " << ecode << "\n";
              return;
         }
         cout << "\nCreating new document...\n";
         if (!(doc = parser.createDocument((oratext *) 0, (oratext *) 0, (DocumentType *) 0)))
              FAIL
         if (!(elem = doc->createElement((oratext *) "ROOT")))
              FAIL
         string test = "Elem";
         for (int i = 0; i < count; i++)
         {
              //test = "Elem" + string(ltoa(i));
              if (!(elem = doc->createElement((oratext *) "element")))
                   FAIL
              if (!doc->appendChild(elem))
                   FAIL
         }
         //doc->print();
         //parser.xmlclean();
         parser.xmlterm();
    }

    int main(int argc, char* argv[])
    {
         int count = atol(argv[1]);
         mytest(count);
         char c;
         cout << "check memory usage and press any key" << endl;
         cin >> c;
         return 0;
    }
    -------------------------------------------cut here-----
    Now, I can't use the XDK 10g because I work on an HP-UX machine. I have tried the above program with a count of 1000000; the memory usage at the end was around 2 gigabytes.
    Could someone please help me? :(
    Thank you.

  • Memory Leakage Detection

    Hi,
    I want to know how I can detect a memory leak using a profiler. There are many links on the net, but there is no simple documentation that gives simple steps to detect a memory leak.
    Also, most profilers do not show how many garbage collections an object has survived. This information could be critical in deciding whether there is a memory leak. What they show is how many objects are on the heap. If there are 10 String objects on the heap after 1 min. of program start and 50 objects after 2 min. (after invoking GC), this could be due to the normal activity of the program and not a memory leak.
    Please remember that any response to this thread would benefit many Java developers; I know people with even 6-9 yrs. of experience who would want to know this. So instead of giving links to a site, I would appreciate it if anyone could explain this in simple language.
    Thanks,
    AA
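One cheap, profiler-free check you can script yourself: hold a WeakReference to a suspect object; if it never clears after the strong references should be gone (plus a gc hint), something is still pinning it. A minimal sketch (the loop and sleep are just a pragmatic way to give the collector a chance; System.gc() remains only a hint):

```java
import java.lang.ref.WeakReference;

public class LeakProbe {
    public static void main(String[] args) throws InterruptedException {
        Object suspect = new byte[1024 * 1024];
        WeakReference<Object> probe = new WeakReference<>(suspect);
        suspect = null;              // drop the only strong reference
        for (int i = 0; i < 10 && probe.get() != null; i++) {
            System.gc();             // a hint, but usually honored for this test
            Thread.sleep(50);
        }
        // If probe.get() is still non-null here, some code path still references
        // the object -- that path is your leak candidate.
        System.out.println("collected: " + (probe.get() == null));
    }
}
```

This answers the "has it survived GC?" question for one chosen object without any tooling, which complements the heap-count view most profilers give.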

    You don't have to launch Instruments from Xcode. To get started detecting memory leaks with Instruments, launch Instruments. A sheet will open asking you to choose a template. Select Leaks and click the Choose button.
    In the lower left corner of the trace window are three buttons. Click the right one. Doing so will open the detail view, which will let you configure how Instruments detects the leaks. Instruments is initially set to auto-detect leaks, and it checks every 10 seconds. This setup could be the cause of your problem where Instruments doesn't find any leaks. Your small test program may be finishing before Instruments detects the leak.
    After you get the trace configured, go to the Default Target pop-up menu in the trace window toolbar. Choose Launch Executable > Choose Executable. Select your app and click the Record button to start tracing.
    If Instruments doesn't work for you, there are alternatives for Mac applications. MallocDebug can detect memory leaks, and it's installed with the Xcode Tools. leaks is a command-line application that detects memory leaks. Valgrind is available for Mac OS X, and it detects memory leaks.
