Max heap reached: Route Servlet: Out of memory

Hi All,
I have used the maximum possible -Xmx value on a 32-bit Windows machine
while starting OC4J from the command line, and the route server
initialization failed.
Has anyone faced this issue, or does anyone have an idea of how to initialize the route servlet without errors
once the maximum -Xmx value has been reached? Should we further partition the data?
Thank you,
-J

Hi,
On Windows we have a limitation that the heap cannot be more than 2 GB.
You can run two JVMs and check if that helps.
First and foremost, you need to verify that there are no object leaks in the code.
Cheers
Priya

Similar Messages


  • Out of memory: Java heap space

    Hello,
I am working on a project that simulates large populations, with each individual being a separate Object (a couple of kilobytes in size each).
Once I reach a big enough number of those Objects (stored in a Vector), I get an Out of memory: Java heap space exception.
Am I having a memory leak in Java? Or am I really reaching the maximal JVM limit, so that I need to reconfigure it to support a larger heap size?
    :)

Am i having a memory leak in Java?
Because of garbage collection there, by and large, aren't memory leaks in Java. However, there certainly is memory waste.
If each of these objects you make room for is unnecessarily laden with references to other large objects, creating huge networks of objects that never free their memory, and you then allocate 1 million of them, then, while not a leak, you are wasting your memory space.
Look through your class and inspect each field to see if any of them could be holding more data than you intend. If so, multiply the amount of that data by 1,000,000, or however many objects you are going to instantiate, to determine whether this produces an absurd amount of memory. You should be able to predict roughly how much storage your data will need: each char is 2 bytes, each int is 4 bytes, doubles are 8, etc.
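As a rough cross-check, you can also measure the footprint empirically. Below is a minimal sketch (my own illustration, not from this thread) that samples heap usage before and after allocating a batch of objects; Individual is a hypothetical stand-in for whatever class your population uses.
import java.util.Vector;
public class FootprintEstimate {
    // Hypothetical stand-in for the simulation's per-individual object.
    static class Individual {
        double[] genome = new double[256]; // ~2 KB of payload
    }
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }
    public static void main(String[] args) {
        int n = 100000;
        System.gc(); // best-effort request for a clean baseline
        long before = usedHeap();
        Vector<Individual> population = new Vector<Individual>(n);
        for (int i = 0; i < n; i++) {
            population.add(new Individual());
        }
        long after = usedHeap();
        System.out.println("~" + (after - before) / n + " bytes per individual");
    }
}
Multiplying that per-object estimate by the target population size tells you whether -Xmx simply needs to be raised or whether the objects are heavier than intended.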

  • Uploading large files from applet to servlet throws out of memory error

I have a Java applet that needs to upload files from a client machine
to a web server using a servlet. The problem I am having is that in
the current scheme, files larger than 17-20 MB throw an out of memory
error. Is there any way we can get around this problem? I will post
the client and server side code for reference.
Client Side Code:
import java.io.*;
import java.net.*;
// This class is a client that enables transfer of files from client
// to server. This client connects to a servlet running on the server
// and transmits the file.
public class fileTransferClient {

    private static final String FILENAME_HEADER = "fileName";
    private static final String FILELASTMOD_HEADER = "fileLastMod";

    // This method transfers the prescribed file to the server.
    // If the destination directory is "", it transfers the file to "d:\\".
    // 11-21-02 changes: this method now has a new parameter that
    // references the item being transferred in the import list.
    public static String transferFile(String srcFileName, String destFileName,
                                      String destDir, int itemID) {
        if (destDir.equals(""))
            destDir = "E:\\FTP\\incoming\\";
        // Get the fully qualified filename and the mere filename.
        String fqfn = srcFileName;
        String fname = fqfn.substring(fqfn.lastIndexOf(File.separator) + 1);
        try {
            //importTable importer = jbInit.getImportTable();
            // Create the file to be uploaded and a connection to the servlet.
            File fileToUpload = new File(fqfn);
            long fileSize = fileToUpload.length();
            // The last-modified time of this file is sent to the servlet as a header.
            long lastMod = fileToUpload.lastModified();
            String strLastMod = String.valueOf(lastMod);
            URL serverURL = new URL(webadminApplet.strServletURL);
            URLConnection serverCon = serverURL.openConnection();
            // A bunch of connection setup related things.
            serverCon.setDoInput(true);
            serverCon.setDoOutput(true);
            // Don't use a cached version of the URL connection.
            serverCon.setUseCaches(false);
            serverCon.setDefaultUseCaches(false);
            // Set headers and their values.
            serverCon.setRequestProperty("Content-Type", "application/octet-stream");
            serverCon.setRequestProperty("Content-Length",
                    Long.toString(fileToUpload.length()));
            serverCon.setRequestProperty(FILENAME_HEADER, destDir + destFileName);
            serverCon.setRequestProperty(FILELASTMOD_HEADER, strLastMod);
            if (webadminApplet.DEBUG)
                System.out.println("Connection with FTP server established");
            // Create a file stream to read the file and a write stream to send its data.
            FileInputStream fis = new FileInputStream(fileToUpload);
            OutputStream os = serverCon.getOutputStream();
            try {
                // Transfer the file in 4K chunks.
                byte[] buffer = new byte[4096];
                long byteCnt = 0;
                int newPercent = 0;
                int oldPercent = 0;
                while (true) {
                    int bytes = fis.read(buffer);
                    if (bytes < 0) break;
                    byteCnt += bytes;
                    // 11-21-02: if itemID is greater than -1 this is an import
                    // file transfer, otherwise this is a header graphic file transfer.
                    if (itemID > -1) {
                        newPercent = (int) ((double) byteCnt / (double) fileSize * 100.0);
                        int diff = newPercent - oldPercent;
                        if (newPercent == 0 || diff >= 20) {
                            oldPercent = newPercent;
                            jbInit.getImportTable().displayFileTransferStatus(itemID, newPercent);
                        }
                    }
                    os.write(buffer, 0, bytes);
                    os.flush();
                    if (webadminApplet.DEBUG)
                        System.out.println("No of bytes sent: " + byteCnt);
                }
            } finally {
                // Close the related streams.
                os.close();
                fis.close();
            }
            if (webadminApplet.DEBUG)
                System.out.println("File Transmission complete");
            // Find out what the servlet has got to say in response.
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(serverCon.getInputStream()));
            try {
                String line;
                while ((line = reader.readLine()) != null)
                    if (webadminApplet.DEBUG) System.out.println(line);
            } finally {
                // Close the reader stream from the servlet.
                reader.close();
            }
        } // end of the big try block.
        catch (Exception e) {
            System.out.println("Exception during file transfer:\n" + e);
            e.printStackTrace();
            return ("FTP failed. See Java Console for Errors.");
        } // end of catch block.
        return ("File: " + fname + " successfully transferred.");
    } // end of method transferFile().
} // end of class fileTransferClient
Server side code:
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.util.*;
import java.net.*;
// This servlet class acts as an FTP server to enable transfer of
// files from the client side.
public class FtpServerServlet extends HttpServlet {

    String ftpDir = "D:\\pub\\FTP\\";
    private static final String FILENAME_HEADER = "fileName";
    private static final String FILELASTMOD_HEADER = "fileLastMod";

    public void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        doPost(req, resp);
    }

    public void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // ### For now, enable overwrite by default.
        boolean overwrite = true;
        // Get the fileName for this transmission.
        String fileName = req.getHeader(FILENAME_HEADER);
        // Also get the last-modified time of this file.
        String strLastMod = req.getHeader(FILELASTMOD_HEADER);
        String message = "Filename: " + fileName + " saved successfully.";
        int status = HttpServletResponse.SC_OK;
        System.out.println("fileName from client: " + fileName);
        // If the filename is not specified, complain.
        if (fileName == null) {
            message = "Filename not specified";
            status = HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
        } else {
            // Open the file about to be transferred.
            File uploadedFile = new File(fileName);
            // Check if the file already exists - and overwrite if necessary.
            if (uploadedFile.exists() && overwrite) {
                // Delete the file.
                uploadedFile.delete();
            }
            // Ensure the directory is writable - and a new file may be created.
            if (!uploadedFile.createNewFile()) {
                message = "Unable to create file on server. FTP failed.";
                status = HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
            } else {
                // Get the necessary streams for file creation.
                FileOutputStream fos = new FileOutputStream(uploadedFile);
                InputStream is = req.getInputStream();
                try {
                    // Create a buffer. 4K!
                    byte[] buffer = new byte[4096];
                    // Read from the input stream and write to the file stream.
                    int byteCnt = 0;
                    while (true) {
                        int bytes = is.read(buffer);
                        if (bytes < 0) break;
                        byteCnt += bytes;
                        fos.write(buffer, 0, bytes);
                        // Flush the stream.
                        fos.flush();
                    }
                } // end of try block.
                finally {
                    is.close();
                    fos.close();
                    // Set the last-modified date for this file.
                    uploadedFile.setLastModified(Long.parseLong(strLastMod));
                } // end of finally block.
            } // end - the new file may be created on server.
        } // end - we have a valid filename.
        // Set response headers.
        resp.setContentType("text/plain");
        resp.setStatus(status);
        if (status != HttpServletResponse.SC_OK)
            getServletContext().log("ERROR: " + message);
        // Get the output stream and report the result.
        PrintWriter out = resp.getWriter();
        out.println(message);
    } // end of doPost().
} // end of class FtpServerServlet

OK - the problem you describe is definitely what's giving you grief.
The workaround is to use a socket connection and send your own request headers, with the content length filled in. You may have to multipart MIME encode the stream on its way out as well (I'm not sure about that...).
You can use the following:
http://porsche.cis.udel.edu:8080/cis479/lectures/slides-04/slide-02.html
on your server to get a feel for the format that the request headers need to take.
    - Kevin
I get the out of memory error on the client side. I was told that this might be a bug in the URLConnection class implementation: basically it won't know the content length until all the data has been written to the output stream, so it uses an in-memory buffer to store the data, which causes the memory issues. Do you think there might be a workaround of any kind, or maybe a way that the buffer might be flushed after a certain amount of the file has been uploaded? Do you have any ideas?
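For what it's worth, JDK 1.5 and later address exactly this buffering behaviour: HttpURLConnection can be told the body length up front so it streams the request instead of holding it all in memory. A minimal sketch of that approach (under my own assumptions: fileToUpload and servletURL are placeholders, and the applet's security context must allow the connection):
import java.io.*;
import java.net.*;
public class StreamingUpload {
    public static void upload(File fileToUpload, URL servletURL) throws IOException {
        HttpURLConnection con = (HttpURLConnection) servletURL.openConnection();
        con.setDoOutput(true);
        con.setRequestMethod("POST");
        // Declare the exact body length so the connection streams the
        // request body rather than buffering it all before sending.
        con.setFixedLengthStreamingMode((int) fileToUpload.length());
        con.setRequestProperty("Content-Type", "application/octet-stream");
        InputStream in = new FileInputStream(fileToUpload);
        OutputStream out = con.getOutputStream();
        try {
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) >= 0) {
                out.write(buffer, 0, n);
            }
        } finally {
            out.close();
            in.close();
        }
        // Reading the response code completes the exchange.
        System.out.println("Server responded: " + con.getResponseCode());
    }
}
setChunkedStreamingMode is the alternative when the length is not known in advance.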

  • Out Of Memory Error every time my playhead reaches a slug I placed in the timeline

The past two days, in one particular project, I get an Error: Out Of Memory every time I play from the timeline and the playhead reaches the slugs I placed for the commercial breaks. I thought maybe the problem was with the commercial sequence I placed over the slug, but I deleted it and still get the problem. When I get the message and then advance the playhead a little further to the right, but still over the slug, and press play, I get a loud static cracking sound and the error message reappears. When I advance the playhead and hit play again, FCP crashes. This happens in this order every time I reach a slug in the timeline. I am using FCP 6.0.5 (reinstalled last night), the project is HDV 1080i60, and I have a Mac Pro Quad Core with 6 GB of memory; I am saving to an eSATA RAID drive. I have ordered new memory, although I do not think that is the problem. Why does stuff like this happen every time a deadline approaches?

Meh... spoke too soon! I find that edit-to-tape (assembly edit on a Sony HVR-M15, DVCAM mode) is unreliable whenever there is a slug in the timeline. Typically playback crashes a few seconds after we hit the slug.
I'm working around this by either using "black" clips rather than slug, or by dumping out the entire sequence to a separate QT movie and then sending that out to tape.
It's puzzling that my current-gen quad-core Mac Pro (4 GB memory total, NVIDIA 8800 card, Seagate 250 GB internal video drives) seems to be having such a hard time handling this...

  • Download Servlet throwing Out Of Memory Exception

I am trying to download a file of more than 500 MB through a servlet, but I am getting an out of memory exception.
Before downloading, I am zipping that huge file.
try {
     String zipFileName = doZip(file);
     file = null;
     System.gc();
     File inputFile = new File(zipFileName);
     InputStream fileToDownload = new FileInputStream(inputFile);
     response.setContentType("application/zip");
     response.setHeader("Content-Disposition", "attachment; filename=\""
                        + fileName.replaceAll("tmx", "zip").concat("\""));
     // Use the zipped file's real length; available() only reports what
     // can be read without blocking, not the size of the file.
     response.setContentLength((int) inputFile.length());
     byte buf[] = new byte[BUF_SIZE];
     int read;
     while ((read = fileToDownload.read(buf)) != -1) {
          outs.write(buf, 0, read);
     }
     // Close and flush once, after the copy loop finishes.
     fileToDownload.close();
     outs.flush();
     outs.close();
} catch (Exception e) {
     // Getting out of memory.
}
Please suggest a solution for this.

cotton.m wrote:
My zip suggestion was as follows.
Take the file. Do not set the Content-Length header. Do set the Content-Encoding header to gzip. Create a GZIP output stream using the servlet output stream. Read the unzipped file in and output it through the gzip output stream.
This cuts out one full cycle of file reading and writing from what you are doing currently.
Thanks for your reply.
InputStream fileToDownload = new FileInputStream(file);
response.setContentType("application/gzip");
// As suggested: advertise the encoding and do not set a content
// length, since the compressed size is not known in advance.
response.setHeader("Content-Encoding", "gzip");
GZIPOutputStream gzipoutputstream = new GZIPOutputStream(outs);
byte buf[] = new byte[BUF_SIZE];
int read;
while ((read = fileToDownload.read(buf)) != -1) {
     gzipoutputstream.write(buf, 0, read);
}
fileToDownload.close();
// finish() writes the trailing GZIP data before the stream closes.
gzipoutputstream.finish();
outs.flush();
outs.close();
I made the changes accordingly. Please give your view on this.

  • Max heap memory values

    Dear all,
In the config tool of a Java 6.40 system, for a server process I see the Max Heap size under the "General" tab and the Max Heap size under the "Bootstrap" tab. What is the difference between these two values? If you can send me a link to documentation on this, it would be very much appreciated.
    Many thanks
    Andreas

Check Note 876722 - the general setting for the server node will take precedence over the bootstrap value.
If you can see the process details, look for the -Xmx value of the Java process to find the actual value of the heap.
Hope it helps
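If you can run code on the server node, another way to confirm which value won out is to ask the JVM itself. A minimal sketch (my own, not from the note):
public class MaxHeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the ceiling the JVM will try to use,
        // i.e. the effective -Xmx after all configuration is applied.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Effective max heap: " + maxBytes / (1024 * 1024) + " MB");
    }
}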

  • Out of Memory Error in iplanet 6.1

While starting iPlanet 6.1 SP2 on HP-UX 11.00, an out of memory error occurs. The JVM heap size is set to a min of 128 MB and a max of 512 MB. Tried changing both values to 512 MB; the error still occurs.
Please revert with any solution.

Please find below the values of the JVM heap size and the HP-UX process and address space limits.
    JVM Heap size has been changed to
    Min 128MB
    Max 2GB
    max_thread_proc 2048
    maxdsiz 1073741824
    maxssiz 401604608
    maxtsiz 1073741824
After making the above modifications, out of memory errors still occur. Find the log file below.
    [21/Feb/2005:09:37:31] failure ( 8528): for host 163.38.174.17 trying to POST /wect/servlets/com.citicorp.treasury.westerneurope.maintenance.PageLinksServlet, service-j2ee reports: StandardWrapperValve[PageLinksServlet]: WEB2792: Servlet.service() for servlet PageLinksServlet threw exception
    javax.servlet.ServletException: WEB2664: Servlet execution threw an exception
    at org.apache.catalina.core.StandardWrapperValve.invokeServletService(StandardWrapperValve.java:793)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:322)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:509)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:212)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:509)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:209)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:509)
    at com.iplanet.ias.web.connector.nsapi.NSAPIProcessor.process(NSAPIProcessor.java:161)
    at com.iplanet.ias.web.WebContainer.service(WebContainer.java:578)
    ----- Root Cause -----
    java.lang.OutOfMemoryError
[21/Feb/2005:09:45:04] warning ( 8528): HTTP3039: terminating with 6 sessions still in progress
    [21/Feb/2005:09:45:04] failure ( 8528): CORE3189: Restart functions not called since 6 sessions still active
    Please suggest.

  • Allocated Heap vs. Max Heap and OutOfMemoryError

    In my application, I specify heap size like following:
    -Xms384m -Xmx384m
I got a few OOME errors recently. From our monitoring tool, I can see the 'allocated heap' was only in the 280-340 MB range when the OOME happened. That means the used heap size was close to or reached the 'allocated' heap size, but not the max heap defined in '-Xmx'. My question is:
Why does the JVM act like this? What problem prevents the JVM from obtaining the promised memory from the OS?
    We are using JDK 1.5
    Thanks,
    J

It's easy to check this: configure a small heap size and a big stack size, create as many threads as possible, and count the number of created threads; then increase the heap and run the program again. You will see the number of threads change too.
    I have created a [java program to do exactly this, check it out|http://weblogs.java.net/blog/claudio/archive/2007/05/how_many_thread_1.html] .
See below how thread stack sizes relate to the heap.
    [http://java.sun.com/docs/hotspot/threads/threads.html|http://java.sun.com/docs/hotspot/threads/threads.html]
    excerpt "TLEs (in 1.3) or TLABs (in 1.4) are thread local portions of the heap used in the young generation"
    [Java Memory White Paper|http://java.sun.com/javase/technologies/hotspot/gc/memorymanagement_whitepaper.pdf]
For multithreaded applications, allocation operations need to be multithread-safe. If global locks were used to ensure this, then allocation into a generation would become a bottleneck and degrade performance. Instead, the HotSpot JVM has adopted a technique called Thread-Local Allocation Buffers (TLABs). This improves multithreaded allocation throughput by giving each thread its own buffer (i.e., a small portion of the generation) from which to allocate. Since only one thread can be allocating into each TLAB, allocation can take place quickly by utilizing the bump-the-pointer technique, without requiring any locking. Only infrequently, when a thread fills up its TLAB and needs to get a new one, must synchronization be utilized. Several techniques to minimize space wastage due to the use of TLABs are employed. For example, TLABs are sized by the allocator to waste less than 1% of Eden, on average. The combination of the use of TLABs and linear allocations using the bump-the-pointer technique enables each allocation to be efficient, only requiring around 10 native instructions.
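A minimal sketch of the experiment described above (my own illustration, not the linked program): start parked daemon threads until allocation fails, then compare the count across different -Xmx / -Xss settings.
public class ThreadCounter {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // park forever
                        } catch (InterruptedException e) {
                            // exit quietly
                        }
                    }
                });
                t.setDaemon(true); // let the JVM exit once the count prints
                t.start();
                count++;
            }
        } catch (OutOfMemoryError oom) {
            System.out.println("Threads created before OOM: " + count);
        }
    }
}
Running it with, say, -Xmx32m -Xss2m and then -Xmx256m -Xss2m shows how the heap and stack settings interact with the number of thread stacks that fit.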

  • Out of Memory Error running BEA WLCS 5.1

Hi everybody!
My team and I have to cope with the following bug:
java.lang.OutOfMemoryError :
at weblogic.db.oci.OciCursor
at weblogic.jdbcbase.oci.Statement.executeQuery....
It runs under a cluster composed of 2 x BEA WLS 5.1 SP6.
heap min is 500 MB
heap max is 750 MB
total RAM is 1 GB per machine
When memory use reaches 750 MB (the max allowed), the server should put
current instances in a queuing system while the garbage
collector is running.
At this point it appears that client requests are not queued but
canceled, returning an error to the client.
Why do we have to specify a heap limit if the JVM goes over it anyway?
Please point us in the right direction.

Me neither.
Cameron Purdy <[email protected]> wrote:
I don't know. I don't speak French.
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com
Tangosol Server: Enabling enterprise application customization
"Dimitri Rakitine" <[email protected]> wrote in message
news:[email protected]..
Why is that a shame?
Cameron Purdy <[email protected]> wrote:
That's a shame.
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com
Tangosol Server: Enabling enterprise application customization
"Mike Reiche" <[email protected]> wrote in message
news:3b0a99c5$[email protected]..
Hi Laurent -
You should write 'MB' instead of 'Mo' so that the Americans understand you.
The JVM does not exceed what you specify for max heap - the Java heap is only part of the memory that the JVM uses.
An Out-of-Memory exception occurs when garbage collection cannot recover sufficient memory for your application. A heap of 750 MB is very large - you may wish to verify with JProbe that your application is not hanging on to memory that it doesn't need. Also, the JVM argument -verbose:gc will give you some additional information on GC. Specifying it twice will give you more information.
Mike
    "bea" <[email protected]> wrote:
    Hi everybody !
    My team and I have to cope with the following bug :
    Java.lang.OutOfMemoryError :
    at weblogic.db.oci.OciCursor
    at weblogic.jdbcbase.oci.Statement.executeQuery....
    It runs under a cluster composed by 2 x BEA WLS 5.1 sp6.
    heap min is 500Mo
    heap max is 750Mo
    total RAM is 1Go per machine
    When memory use reaches 750Mo (max allowed), server should put
    current instances in a queuing system while the garbage
    collector is running.
    At this point it appears that client requests are not queued but
    canceled, returning an error to the client.
    Why do we have to specify a heap limit if JVM goes over it anyway ?
    Please give us a rope about it.
    Dimitri
    Dimitri

  • Out of memory error

    Hello all,
I am a newbie to this forum, so kindly excuse me if I am posting an incorrect question.
My application goes down every week, and the error logs point to java.lang.OutOfMemoryError.
I analysed the sysout logs and heap logs:
Sysout logs:
    Could not invoke the service() method on servlet action. Exception thrown : java.lang.OutOfMemoryError
         at oracle.jdbc.driver.OracleStatement.prepareAccessors(OracleStatement.java(Compiled Code))
         at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java(Compiled Code))
         at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java(Compiled Code))
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java(Compiled Code))
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java(Compiled Code))
         at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java(Compiled Code))
         at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecuteQuery(WSJdbcPreparedStatement.java(Compiled Code))
         at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeQuery(WSJdbcPreparedStatement.java(Compiled Code))
         at xxx.cis.tuf.server.connector.jdbc.v1.JDBCv1DBStatement.executeQuery(JDBCv1DBStatement.java(Compiled Code))
         at xxx.myapplication.dao.Person.loadRecord(Person.java(Compiled Code))
         at xxx.myapplication.SiteUtil.getUserJavaLocale(SiteUtil.java(Compiled Code))
         at xxx.myapplication.updateprofile.actions.UpdateProfileAction.execute(UpdateProfileAction.java:56)
         at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java(Inlined Compiled Code))
         at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java(Compiled Code))
         at org.apache.struts.action.ActionServlet.process(ActionServlet.java(Inlined Compiled Code))
         at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java(Compiled Code))
         at javax.servlet.http.HttpServlet.service(HttpServlet.java(Compiled Code))
         at javax.servlet.http.HttpServlet.service(HttpServlet.java(Compiled Code))
         at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java(Compiled Code))
         at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java(Compiled Code))
         at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java(Compiled Code))
         at xxx.myapplication.UTF8Filter.doFilter(UTF8Filter.java(Compiled Code))
         at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java(Compiled Code))
         at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java(Compiled Code))
         at xxx.cis.tuf.server.filter.TUFExtendedFilterChainImpl.doFilter(TUFExtendedFilterChainImpl.java(Compiled Code))
         at xxx.cis.tuf.server.directory.GroupModifixxxionEventFilter.doFilter(GroupModifixxxionEventFilter.java(Compiled Code))
         at xxx.cis.tuf.sys.server.filter.TUFFilterChainImpl.doFilter(TUFFilterChainImpl.java(Inlined Compiled Code))
         at xxx.cis.tuf.server.filter.TUFExtendedFilterChainImpl.doFilter(TUFExtendedFilterChainImpl.java(Compiled Code))
         at xxx.cis.tuf.server.security.SecurityFilter.doFilter(SecurityFilter.java(Compiled Code))
         at xxx.cis.tuf.sys.server.filter.TUFFilterChainImpl.doFilter(TUFFilterChainImpl.java(Inlined Compiled Code))
         at xxx.cis.tuf.server.filter.TUFExtendedFilterChainImpl.doFilter(TUFExtendedFilterChainImpl.java(Compiled Code))
         at xxx.cis.tuf.server.security.cws.CWSSecurityTokenContextFilter.doFilter(CWSSecurityTokenContextFilter.java(Compiled Code))
         at xxx.cis.tuf.sys.server.filter.TUFFilterChainImpl.doFilter(TUFFilterChainImpl.java(Inlined Compiled Code))
         at xxx.cis.tuf.server.filter.TUFExtendedFilterChainImpl.doFilter(TUFExtendedFilterChainImpl.java(Compiled Code))
         at xxx.cis.tuf.server.logging.LoggingRequestIDFilter.doFilter(LoggingRequestIDFilter.java(Compiled Code))
         at xxx.cis.tuf.sys.server.filter.TUFFilterChainImpl.doFilter(TUFFilterChainImpl.java(Compiled Code))
         at xxx.cis.tuf.server.filter.TUFExtendedFilterChainImpl.doFilter(TUFExtendedFilterChainImpl.java(Inlined Compiled Code))
         at xxx.cis.tuf.server.filter.TUFMasterFilter.doFilter(TUFMasterFilter.java(Compiled Code))
         at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java(Compiled Code))
         at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java(Compiled Code))
         at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java(Compiled Code))
         at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java(Compiled Code))
         at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java(Compiled Code))
         at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java(Compiled Code))
         at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java(Compiled Code))
         at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java(Compiled Code))
         at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java(Compiled Code))
         at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java(Compiled Code))
         at com.ibm.ws.tcp.channel.impl.WorkQueueManager.requestComplete(WorkQueueManager.java(Compiled Code))
         at com.ibm.ws.tcp.channel.impl.WorkQueueManager.attemptIO(WorkQueueManager.java(Compiled Code))
         at com.ibm.ws.tcp.channel.impl.WorkQueueManager.workerRun(WorkQueueManager.java(Compiled Code))
         at com.ibm.ws.tcp.channel.impl.WorkQueueManager$Worker.run(WorkQueueManager.java(Compiled Code))
         at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java(Compiled Code))
The analysis of the heap dump shows:
    35,548,944 (29%) [32] 2 java/util/Hashtable$Entry 0x17e998f8
    35,548,888 (29%) [232] 15 com/ibm/ws/rsadapter/spi/WSRdbManagedConnectionImpl 0x128a8c40
    35,084,008 (28%) [24] 1 array of javax/resource/spi/ConnectionEventListener 0x128a8128
    35,083,984 (28%) [16] 1 com/ibm/ejs/j2c/ConnectionEventListener 0x128a7af8
         35,083,968 (28%) [224] 13 com/ibm/ejs/j2c/MCWrapper 0x161ccf50
         34,547,384 (28%) [24] 1 com/ibm/ejs/j2c/poolmanager/MCWrapperList 0x120ea638
         34,547,360 (28%) [216] 42 array of java/lang/Object 0x120ea558
              1,105,160 (0%) [224] 9 com/ibm/ejs/j2c/MCWrapper 0x1251fb88
              1,065,552 (0%) [224] 9 com/ibm/ejs/j2c/MCWrapper 0x1022c780
              1,061,056 (0%) [224] 9 com/ibm/ejs/j2c/MCWrapper 0x107157a0
              1,042,776 (0%) [224] 9 com/ibm/ejs/j2c/MCWrapper 0x148ab790
                   1,038,656 (0%) [224] 9 com/ibm/ejs/j2c/MCWrapper 0x168fbb30
                   1,000,784 (0%) [224] 9 com/ibm/ejs/j2c/MCWrapper 0x108cec00
                   There are 22 more children
My analysis is that the Hashtable is causing this issue.
In my application I am using a HashMap to store ResultSet objects, and I use request.setAttribute("databasevaluesforuser", databasevalues) to show them in a JSP.
Could it be that after the values are displayed, the HashMap still contains those objects, so that when multiple users
access the application the heap size increases, causing the out of memory issues? I have also paginated some JSPs so that users view the results from the database part by part, to prevent a memory outage.
Could anybody help me with this issue? Please inform me if I need to provide any further information.

Apologies for answering late.
I am using the Sun 1.4 JVM.
The analysis of the GC logs showed that a lot of memory is consumed by a Java HashMap. In my application I am using the hash map only to store the results from the database and then present them in the JSP using the request attribute.
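One common fix for that pattern is to never cache the ResultSet itself: copy the rows the JSP needs into a small detached structure, close the JDBC resources immediately, and keep the copy only in request scope so it becomes garbage after the page renders. A sketch under my own assumptions (conn, userId, and request are hypothetical names standing in for your DAO's objects):
List<Map<String, String>> rows = new ArrayList<Map<String, String>>();
PreparedStatement ps = conn.prepareStatement(
        "SELECT name, locale FROM person WHERE id = ?");
try {
    ps.setLong(1, userId);
    ResultSet rs = ps.executeQuery();
    try {
        while (rs.next()) {
            // Copy only the columns the JSP needs; the ResultSet and
            // its connection can then be released right away.
            Map<String, String> row = new HashMap<String, String>();
            row.put("name", rs.getString("name"));
            row.put("locale", rs.getString("locale"));
            rows.add(row);
        }
    } finally {
        rs.close();
    }
} finally {
    ps.close();
}
// Request scope only: the copy is collectable once the response is sent.
request.setAttribute("databasevaluesforuser", rows);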

  • Out of memory error while deploying Oracle Service Bus Resources

    Hi,
When I am trying to deploy resources in sbconsole, it shows the error below on activation.
<Oct 15, 2011 3:11:38 PM CDT> <Error> <Deployer> <BEA-149265> <Failure occurred
in the execution of deployment request with ID '1318709496539' for task '14'. Error is:
'java.lang.OutOfMemoryError'>
java.lang.OutOfMemoryError
at java.util.zip.Inflater.init(Native Method)
at java.util.zip.Inflater.<init>(Inflater.java:83)
    at java.util.zip.ZipFile.getInflater(ZipFile.java:278)
    at java.util.zip.ZipFile.getInputStream(ZipFile.java:224)
    at java.util.zip.ZipFile.getInputStream(ZipFile.java:192)
    # There is insufficient memory for the Java Runtime Environment to continue.
    # Native memory allocation (malloc) failed to allocate 124968 bytes for Chunk::new
    # An error report file with more information is saved as:
I have min and max heap sizes of 3072M and min and max perm sizes set to 512M. I have tried a number of ways to solve this issue:
1. I restarted WebLogic and tried to deploy again; every time I see the same error.
2. This step turned into a whole mess: I deleted the plan files from $domain_dir/osb/config/plan and restarted WebLogic again. Even after deleting the plan files, I can still see all my proxies in sbconsole. Can somebody explain what the plan files in the plan directory and the EAR files in sbgen are? How does OSB use them?
    Thanks for your help
    -B

plan.xml and the EARs are used to deploy MDBs and customize them. When you deploy a proxy service that consumes JMS messages, OSB deploys a generic ejb.jar module and customizes it through a plan.xml (honestly, I don't know all the nitty gritty).
In your case, for the OOM problems, I use the JVM option -XX:+HeapDumpOnOutOfMemoryError; it produces a heap dump on which you can run a complete analysis with yourkit.com or Eclipse MAT. PRICELESS! If I were you, I would run such a test and send Oracle Support the heap dump and your findings.
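If restarting with new flags is awkward, the same option can usually be flipped at runtime through the HotSpot diagnostic MBean (a sketch, assuming a HotSpot JVM recent enough to expose it; note that a heap dump only helps for Java-heap OOMs, not native malloc failures like the one in the log above):
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;
public class EnableHeapDump {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Equivalent to starting the JVM with -XX:+HeapDumpOnOutOfMemoryError.
        hotspot.setVMOption("HeapDumpOnOutOfMemoryError", "true");
        hotspot.setVMOption("HeapDumpPath", "/tmp/dumps");
    }
}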

  • ORA-27102: out of memory (problem while creating a DR site)

    Hi all,
I am trying to create a DR site at a remote location, but while using the pfile of the primary server I am getting the error ORA-27102: out of memory. We are using Oracle 9.2 and RHEL. Another point: on the primary server we have Oracle 9.2.0.8, while at the DR site we are using Oracle 9.2.0.6 (our patch got corrupted, which is why we are on 9.2.0.6). Could the difference in patch levels be creating the problem? I don't think so, but please correct me if I am wrong.
    SQL> conn sys/pwd as sysdba
    Connected to an idle instance.
    SQL> startup nomount pfile='/u01/initicai.ora';
    ORA-27102: out of memory
SQL>
We have a total of 8 GB of memory, of which we are using 6 GB for Oracle, i.e.
    [oracle@icdb u01]$ cat /proc/meminfo
    MemTotal:      8175080 kB
    MemFree:         39912 kB
    Buffers:         33116 kB
    Cached:        7780188 kB
    SwapCached:         32 kB
    Active:          78716 kB
    Inactive:      7761396 kB
    HighTotal:           0 kB
    HighFree:            0 kB
    LowTotal:      8175080 kB
    LowFree:         39912 kB
    SwapTotal:    16779884 kB
    SwapFree:     16779660 kB
    Dirty:              28 kB
    Writeback:           0 kB
    Mapped:          48356 kB
    Slab:           265028 kB
    CommitLimit:  20867424 kB
    Committed_AS:    61372 kB
    PageTables:       2300 kB
    VmallocTotal: 536870911 kB
    VmallocUsed:    271252 kB
    VmallocChunk: 536599163 kB
    HugePages_Total:     0
    HugePages_Free:      0
    Hugepagesize:     2048 kB
    and
    [oracle@icdb u01]$ cat /etc/sysctl.conf
    # Kernel sysctl configuration file for Red Hat Linux
    # For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename.
    # Useful for debugging multi-threaded applications.
    kernel.core_uses_pid = 1
    kernel.shmall=2097152
    kernel.shmmax=6187593113
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max =65536
    net.ipv4.ip_local_port_range = 1024 65000
[oracle@icdb u01]$
and the bash profile is
    PATH=$PATH:$HOME/bin
    ORACLE_BASE=/u01/app/oracle
    ORACLE_HOME=$ORACLE_BASE/product/9.2.0
    ORACLE_SID=ic
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
    PATH=$PATH:$ORACLE_HOME/bin
    export  ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH
    export PATH
    unset USERNAME
Please suggest...

    init file
    [oracle@icdb u01]$ cat initicai.ora
    *.aq_tm_processes=1
    *.background_dump_dest='/u01/app/oracle/admin/ic/bdump'
    *.compatible='9.2.0.0.0'
    *.control_files='/bkp/data/ctl/control03.ctl'
    *.core_dump_dest='/u01/app/oracle/admin/ic/cdump'
    *.db_block_size=8192
    *.db_cache_size=4294967296
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='icai'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=icaiXDB)'
    *.fast_start_mttr_target=300
    *.hash_join_enabled=TRUE
    *.instance_name='icai'
    *.java_pool_size=157286400
    *.job_queue_processes=20
    *.large_pool_size=104857600
    *.open_cursors=300
    *.pga_aggregate_target=938860800
    *.processes=1000
    *.query_rewrite_enabled='FALSE'
    *.remote_login_passwordfile='EXCLUSIVE'
    *.shared_pool_size=818103808
    *.sort_area_size=524288
    *.star_transformation_enabled='FALSE'
    *.timed_statistics=TRUE
    *.undo_management='AUTO'
    *.undo_retention=10800
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='/u01/app/oracle/admin/ic/udump'
    #log_archive_dest='/bkp/arch/ic_'
    Log_archive_start=True
    sga_max_size=6444442450944
    log_archive_dest_1='location=/bkp/arch/ mandatory'
    log_archive_dest_2='service=prim optional reopen=15'
    log_archive_dest_state_1=enable
    remote_archive_enable=true
    standby_archive_dest='/arch/ic/ic_'
    standby_file_management=auto
[oracle@icdb u01]$
Edited by: user00726 on Nov 11, 2009 10:27 PM

  • Server goes out of memory when annotating TIFF File. Help with Tiled Images

I am new to JAI and have a problem with the system going out of memory.
Objective:
1) Load up a TIFF file (each approx 5-8 MB when compressed with CCITT.6 compression)
2) Annotate the image (consider it a simple drawString with the Graphics2D object of the RenderedImage)
3) Send it to the servlet outputStream
Problem:
The server goes out of memory when 5 threads try to access it concurrently.
Runtime conditions:
VM param set to -Xmx1024m
Observation:
Writing the files takes a lot of time compared to reading the files.
Some more information:
1) I need to do the annotating at pre-defined positions on the images (e.g. in the first quadrant, or maybe in the second quadrant).
2) I know that using the TiledImage class it is possible to load up a portion of the image and process it.
Things I need help with:
I do not know how to send the whole file back to the servlet output stream after annotating a tile of the image.
If I write the tiled image back to a file, or to the output stream, it gives me only the portion of the tile I read in and watermarked, not the whole image file.
I have attached the code I use when I load up the whole image.
Could somebody please help with the TiledImage solution?
Thx
public void annotateFile(File file, String wText, OutputStream out,
                         AnnotationParameter param) throws Throwable {
    ImageReader imgReader = null;
    ImageWriter imgWriter = null;
    TiledImage in_image = null, out_image = null;
    IIOMetadata metadata = null;
    ImageOutputStream ios = null;
    try {
        // Read the source TIFF and its metadata.
        Iterator readIter = ImageIO.getImageReadersBySuffix("tif");
        imgReader = (ImageReader) readIter.next();
        imgReader.setInput(ImageIO.createImageInputStream(file));
        metadata = imgReader.getImageMetadata(0);
        in_image = new TiledImage(JAI.create("fileload", file.getPath()), true);
        System.out.println("Image Read!");
        // Annotate the image.
        Annotater annotater = new Annotater(in_image);
        out_image = annotater.annotate(wText, param);
        // Write the annotated image to the output stream, preserving
        // the original compression where possible.
        Iterator writeIter = ImageIO.getImageWritersBySuffix("tif");
        if (writeIter.hasNext()) {
            imgWriter = (ImageWriter) writeIter.next();
            ios = ImageIO.createImageOutputStream(out);
            imgWriter.setOutput(ios);
            ImageWriteParam iwparam = imgWriter.getDefaultWriteParam();
            if (iwparam instanceof TIFFImageWriteParam) {
                iwparam.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
                TIFFDirectory dir = (TIFFDirectory) out_image.getProperty("tiff_directory");
                double compressionParam = dir.getFieldAsDouble(BaselineTIFFTagSet.TAG_COMPRESSION);
                setTIFFCompression(iwparam, (int) compressionParam);
            } else {
                iwparam.setCompressionMode(ImageWriteParam.MODE_COPY_FROM_METADATA);
            }
            System.out.println("Trying to write Image ....");
            imgWriter.write(null, new IIOImage(out_image, null, metadata), iwparam);
            System.out.println("Image written....");
        }
    } finally {
        if (imgWriter != null)
            imgWriter.dispose();
        if (imgReader != null)
            imgReader.dispose();
        if (ios != null) {
            ios.flush();
            ios.close();
        }
    }
}

user8684061 wrote:
You are right, the SGA is too large for my server.
I guess Oracle set the SGA automatically when I chose the default installation, but why would the SGA be so big? Is Oracle not smart enough?
The default database configuration reserves 40% of physical memory for the SGA of an instance, which you as a user can always change. I don't see anything wrong with that, or a reason to say Oracle is not smart.
If I don't decrease the SGA but increase max-shm-memory, would it work?
This needs support from the CPU architecture (32-bit or 64-bit) and the kernel as well. Read more about huge pages.

  • Setting max heap in Oracle JVM

    Hello -
I'm having a problem with a Java stored procedure running out of heap memory in Oracle 10g running Java 1.4.
Normally (running Java in a standard context) I would just set -Xmx to a higher value, but for the life of me I can't figure out how to do that in a stored procedure context.
I have browsed Google and the Oracle JVM installation material, all to no avail.
Can anyone help me with how to set my max heap size, or verify that it's impossible? I have taken all of the standard Oracle memory parameters (JAVA_POOL, UGA/PGA/SGA limits) out of the picture by jacking them up and keeping an eye on memory values up to the point where the procedure fails (at ~700 MB), so I'm pretty sure this is my problem.
    So far I have looked at:
    Config files
    Config tables
    DB parameters
    I haven't been able to find anything remotely related to JVM option configuration in any of the above.
It is worth noting that I ran across another forum where someone wanted to set their minimum heap size and was told that it was not possible. I'm just having trouble believing that it's the same story with something as critical as max heap size.
    Much obliged for any help.
    Thanks,
    Annaka

    From Metalink Note 466112.1:
    Applies to:
    Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 10.2.0.3
    This problem can occur on any platform.
    Symptoms
When attempting to execute a Java class that works fine in a standalone JVM, it fails in the Oracle JVM with the following error:
    ERROR
    ORA-29532: Java call terminated by uncaught Java exception:
    java.lang.OutOfMemoryError
    ORA-06512: at "IDS_SYS.POD", line 3
    Cause
The MaxMemorySize was set to 256 MB (the default value), while the Java stored procedure needs more than 256 MB to run.
This can be checked as follows:
    SQL> create or replace function getMaxMemorySize return number
    2 is language java name
    3 'oracle.aurora.vm.OracleRuntime.getMaxMemorySize() returns long';
    4 /
    Function created.
    SQL> select getMaxMemorySize from dual;
    GETMAXMEMORYSIZE
    268435456
After increasing the MaxMemorySize to a larger value (1 GB), the problem was fixed.
Solution
Please increase the MaxMemorySize to a larger value (i.e. 1 GB); this can be done as follows:
    SQL> create or replace function setMaxMemorySize(num number) return number
    2 is language java name
    3 'oracle.aurora.vm.OracleRuntime.setMaxMemorySize(long) returns long';
    4 /
    Function created.
    SQL> select setMaxMemorySize(1024*1024*1024) from dual;
    SETMAXMEMORYSIZE(1024*1024*1024)
    Then you can check if the value is set correctly using the following:
    SQL> select getMaxMemorySize from dual;
    GETMAXMEMORYSIZE
    1073741824
In my case I had to set the parameter within the job's session.
    bye
    TPD
    Edited by: TPD on Sep 23, 2008 4:27 PM - tags added
