OutOfMemoryError in Adapter

Saw the following in our WL server log twice - 5 minutes before and 20
seconds before it crashed. Could this be the cause of the crash? Scanning
the newsgroups, I haven't seen this exact error posted before (with the
"Adapter" part). Any help is appreciated...
Wed Mar 14 11:15:20 EST 2001:<E> <Adapter> OutOfMemoryError in Adapter
Wed Mar 14 11:15:20 EST 2001:<E> <Adapter> java.lang.OutOfMemoryError
at weblogic.utils.io.UnsyncByteArrayOutputStream.write(Compiled Code)
at java.io.DataOutputStream.write(Compiled Code)
at java.io.FilterOutputStream.write(Compiled Code)
at java.io.ObjectOutputStream.drain(Compiled Code)
at java.io.ObjectOutputStream.setBlockData(Compiled Code)
at java.io.ObjectOutputStream.writeObject(Compiled Code)
at weblogic.common.internal.WLObjectOutputStreamBase.writeObject(Compiled Code)
at weblogic.common.internal.WLObjectOutputStreamBase.writeSpecial(Compiled Code)
at weblogic.common.internal.WLObjectOutputStreamBase.writeObject(Compiled Code)
at weblogic.common.internal.WLObjectOutputStreamBase.writeObjectWL(Compiled Code)
at weblogic.common.internal.WLObjectOutputStreamBase.writeVector(Compiled Code)
at weblogic.common.internal.WLObjectOutputStreamBase.writeObjectBody(Compiled Code)
at weblogic.common.internal.WLObjectOutputStreamBase.writeObject(Compiled Code)
at weblogic.common.internal.WLObjectOutputStreamBase.writeObjectWL(Compiled Code)
at weblogic.rmi.extensions.AbstractOutputStream2.writeObject(Compiled Code)
at sync.server.contact.CaseServiceBeanEOImpl_WLSkel.invoke(Compiled Code)
at weblogic.rmi.extensions.BasicServerObjectAdapter.invoke(Compiled Code)
at weblogic.rmi.extensions.BasicRequestHandler.handleRequest(Compiled Code)
at weblogic.rmi.extensions.BasicRequestDispatcher$BasicExecuteRequest.execute(Compiled Code)
at weblogic.t3.srvr.ExecuteThread.run(Compiled Code)

We've got the heap size already set to 800M and our app normally runs fine
with this. I believe the couple of times we saw this before a crash it was in
the same place. That seems a little much for a coincidence, but maybe not.
Why does it need memory when it's writing out to the RMI socket? It's
possible this op returns a lot of data, though I can't easily tell from just
the stack trace. We'll obviously look into our app's memory usage, but I
thought perhaps there might be a WL reason for it happening this way. Thanks.
[I should also mention that we are currently running WL 4.5 with 1.1.7b.
Soon we'll be on 1.3, and then on 5.1, but not for some time, so the
crashes are an issue.]
"Adam Messinger" <[email protected]> wrote in message
news:[email protected]...
This looks like the server ran out of memory while trying to write a
response to an RMI request. The Adapter in question is a portion of the RMI
runtime.
OutOfMemoryErrors are a notorious source of problems in VMs because they can
happen anywhere and are difficult to recover from gracefully in all cases.
You may be able to alleviate the problem by changing the maximum heap size
with the -Xmx option on many modern VMs. If it persists, I would recommend
running the product under OptimizeIt or another memory profiler to see what
is using up the memory.
Cheers!
Adam
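For reference, the heap-size flag mentioned above goes on the server's java command line; a minimal sketch (the class name and sizes here are illustrative placeholders, not taken from this thread):

```shell
# Illustrative only: raising the maximum heap for the server JVM.
# Classic (JDK 1.1.x) VMs spell the flags -ms/-mx; HotSpot (JDK 1.2+)
# VMs use -Xms/-Xmx. "weblogic.Server" stands in for the real main class.
java -ms256m -mx800m weblogic.Server     # JDK 1.1.x style
java -Xms256m -Xmx800m weblogic.Server   # JDK 1.2+/HotSpot style
```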
"Joe Herbers" <[email protected]> wrote in message
news:[email protected]...

Similar Messages

  • OutOfMemoryError in Adapter module

    Hi all,
    I have a scenario where I have to do some complex things in an adapter module. With small messages it works fine, but with large ones (about 15 MB) I get an OutOfMemoryError after 2 messages. The system itself has enough memory (about 5 GB free), but it is not allocated by the J2EE engine.
    I tried to increase the MaxHeapSize via the configtool and restarted, but with no effect.
    Can anyone give me a hint?
    I'm on a HP-UX machine.
    Thanx
    Olli

    Hi Oliver,
    Go to the exchange profile via the URL
    http://<>:port/exchangeProfile, go to
    IntegrationBuilder in the left tree and increase the parameter com.sap.aii.ib.client.jnlp.j2se.maxheapsize from 512m to 768m (or higher if necessary). Press the save button.
    Go to http://<>:port/rep/support/admin/index.html,
    press AII Properties on the left side and then press the REFRESH button until you see the new value on the right side.
    If the problem is not resolved even after doing the above, take a look at these notes:
    580351 (Java WebStart troubleshooting in XI 2.0)
    855471 (this note is also valid for XI 3.0)
    Also have a look at this thread and see if it helps:
    Re: java.lang.OutofmemoryError while loading message in mapping
    Regards,
    Abhy

  • Memory leak in weblogic 6.0 sp2 oracle 8.1.7 thin driver

    Hi,
         I have a simple client that opens a database connection, selects from
    a table containing five rows of data (with four columns in each row)
    and then closes all connections. On running this in a loop, I get the
    following error after some time:
    <Nov 28, 2001 5:57:40 PM GMT+06:00> <Error> <Adapter>
    <OutOfMemoryError in
    Adapter
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    >
    <Nov 28, 2001 5:57:40 PM GMT+06:00> <Error> <Kernel> <ExecuteRequest
    failed
    java.lang.OutOfMemoryError
    I am running with a heap size of 64 Mb. The java command that runs
    the client is:
    java -ms64m -mx64m -cp .:/opt/bea/wlserver6.0/lib/weblogic.jar -Djava.naming.factory.initial=weblogic.jndi.WLInitialContextFactory -Djava.naming.provider.url=t3://garlic:7001 -verbose:gc Test
    The following is the client code that opens the db connection and does
    the select:
    import java.util.*;
    import java.sql.*;
    import javax.naming.*;
    import javax.sql.*;

    public class Test {
        private static final String strQuery = "SELECT * from tblPromotion";

        public static void main(String argv[]) throws Exception {
            String ctxFactory = System.getProperty("java.naming.factory.initial");
            String providerUrl = System.getProperty("java.naming.provider.url");
            Properties jndiEnv = System.getProperties();
            System.out.println("ctxFactory : " + ctxFactory);
            System.out.println("ProviderURL : " + providerUrl);
            Context ctx = new InitialContext(jndiEnv);
            for (int i = 0; i < 1000000; i++) {
                System.out.println("Running query for the " + i + " time");
                Connection con = null;
                Statement stmnt = null;
                ResultSet rs = null;
                try {
                    DataSource ds = (DataSource) ctx.lookup(
                        System.getProperty("eaMDataStore", "jdbc/eaMarket"));
                    con = ds.getConnection();
                    stmnt = con.createStatement();
                    rs = stmnt.executeQuery(strQuery);
                    while (rs.next()) {
                        //System.out.print(".");
                    }
                    ds = null;
                } catch (java.sql.SQLException sqle) {
                    System.out.println("SQL Exception : " + sqle.getMessage());
                } finally {
                    try {
                        rs.close();
                        rs = null;
                    } catch (Exception e) {
                        System.out.println("Exception closing result set");
                    }
                    try {
                        stmnt.close();
                        stmnt = null;
                    } catch (Exception e) {
                        System.out.println("Exception closing statement");
                    }
                    try {
                        con.close();
                        con = null;
                    } catch (Exception e) {
                        System.out.println("Exception closing connection");
                    }
                }
            }
        }
    }
    I am using the Oracle 8.1.7 thin driver. Please let me know if this
    memory leak is a known issue or if it's something I am doing.
    thanks,
    rudy
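    One thing worth noting about the cleanup code above: each close() sits in its own try/catch so that a failure closing one resource cannot skip the later ones. A self-contained sketch of that idiom (the Resource interface and the demo names here are illustrative stand-ins for the java.sql types, not WebLogic API):

```java
public class CleanupSketch {
    // Stand-in for ResultSet/Statement/Connection; illustrative only.
    interface Resource {
        void close() throws Exception;
    }

    // Close one resource, reporting (but swallowing) any failure so the
    // caller can go on to close the remaining resources.
    static void closeQuietly(Resource r, String name, StringBuilder log) {
        if (r == null) return;
        try {
            r.close();
            log.append("closed ").append(name).append(';');
        } catch (Exception e) {
            log.append("failed ").append(name).append(';');
        }
    }

    // Even though the statement's close() throws here, the connection
    // is still closed afterwards.
    static String demo() {
        StringBuilder log = new StringBuilder();
        Resource rs = () -> {};
        Resource stmnt = () -> { throw new Exception("boom"); };
        Resource con = () -> {};
        closeQuietly(rs, "rs", log);
        closeQuietly(stmnt, "stmnt", log);
        closeQuietly(con, "con", log);
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // closed rs;failed stmnt;closed con;
    }
}
```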

    Repost in the JDBC section ... very serious issue, but it may be due to Oracle or to WL ... does it happen if you test inside WL itself?
    How many iterations does it take to blow? How long? Does changing to a different driver (maybe Cloudscape) have the same result?
    Peace,
    Cameron Purdy
    Tangosol Inc.
    << Tangosol Server: How Weblogic applications are customized >>
    << Download now from http://www.tangosol.com/download.jsp >>
    "R.C." <[email protected]> wrote in message
    news:[email protected]...

  • Out of Memory for 6.0 sp2

    We received the following stack trace in the weblogic logs
    weblogic.log
    ####<Nov 16, 2001 7:33:30 PM GMT> <Notice> <WebLogicServer> <danvers>
    <danvers> <main> <system> <> <000216> <WebLogic Server started>
    ####<Nov 16, 2001 7:33:30 PM GMT> <Info> <Posix Performance Pack>
    <danvers> <danvers> <ListenThread> <system> <> <000000> <System has file
    descriptor limits of - soft: '1024', hard: '1024'>
    ####<Nov 16, 2001 7:33:30 PM GMT> <Info> <Posix Performance Pack>
    <danvers> <danvers> <ListenThread> <system> <> <000000> <Using effective
    file descriptor limit of: '1024' open sockets/files.>
    ####<Nov 16, 2001 7:33:30 PM GMT> <Info> <Posix Performance Pack>
    <danvers> <danvers> <ListenThread> <system> <> <000000> <Allocating: '3'
    POSIX reader threads>
    ####<Nov 16, 2001 7:33:57 PM GMT> <Info> <HTTP> <danvers> <danvers>
    <ExecuteThread: '11' for queue: 'default'> <system> <> <101047>
    <[WebAppServletContext(536060,console)] *.html: init>
    ####<Nov 16, 2001 7:33:57 PM GMT> <Info> <HTTP> <danvers> <danvers>
    <ExecuteThread: '11' for queue: 'default'> <system> <> <101047>
    <[WebAppServletContext(536060,console)] *.html: Using standard I/O>
    ####<Nov 16, 2001 7:36:11 PM GMT> <Info> <HTTP> <danvers> <danvers>
    <ExecuteThread: '11' for queue: 'default'> <> <> <101047>
    <[WebAppServletContext(2021663,DefaultWebApp_danvers)] /classes: init>
    ####<Nov 18, 2001 7:19:17 PM GMT> <Error> <Adapter> <danvers> <danvers>
    <Thread-69> <system> <> <000000> <OutOfMemoryError in Adapter>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:19:17 PM GMT> <Error> <AdapterDispatcher> <danvers>
    <danvers> <Thread-69> <system> <> <000000> <Error in Dispatcher>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:35:55 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '1' for queue: '__weblogic_admin_rmi_queue'> <system> <>
    <000000> <ExecuteRequest failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:38:51 PM GMT> <Error> <AdapterDispatcher> <danvers>
    <danvers> <Thread-27> <system> <> <000000> <Error in Dispatcher>
    ####<Nov 18, 2001 7:46:40 PM GMT> <Error> <Performance Pack> <danvers>
    <danvers> <ExecuteThread: '13' for queue: 'default'> <> <> <000000>
    <Muxer got error: null>
    ####<Nov 18, 2001 7:47:05 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '13' for queue: 'default'> <> <> <000000>
    <ExecuteRequest failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 9:08:40 PM GMT> <Error> <Posix Performance Pack>
    <danvers> <danvers> <ExecuteThread: '12' for queue: 'default'> <> <>
    <000000> <Uncaught Throwable in processSockets>
    ####<Nov 18, 2001 9:09:01 PM GMT> <Error> <Performance Pack> <danvers>
    <danvers> <ExecuteThread: '12' for queue: 'default'> <> <> <000000>
    <Muxer got error: null>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 1:45:30 AM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '5' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 1:14:01 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '3' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    ####<Nov 19, 2001 1:54:36 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '5' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 16, 2001 7:19:11 PM GMT> <Notice> <WebLogicServer> <danvers>
    <danvers> <ListenThread> <system> <> <000202> <ListenThread listening on
    port 9009>
    ####<Nov 16, 2001 7:33:30 PM GMT> <Notice> <WebLogicServer> <danvers>
    <danvers> <ListenThread> <system> <> <000202> <ListenThread listening on
    port 9009>
    ####<Nov 16, 2001 7:33:30 PM GMT> <Notice> <WebLogicServer> <danvers>
    <danvers> <main> <system> <> <000216> <WebLogic Server started>
    ####<Nov 18, 2001 7:19:17 PM GMT> <Error> <Adapter> <danvers> <danvers>
    <Thread-69> <system> <> <000000> <OutOfMemoryError in Adapter>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:19:17 PM GMT> <Error> <AdapterDispatcher> <danvers>
    <danvers> <Thread-69> <system> <> <000000> <Error in Dispatcher>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:35:50 PM GMT> <Error> <Adapter> <danvers> <danvers>
    <ExecuteThread: '1' for queue: '__weblogic_admin_rmi_queue'> <system> <>
    <000000> <OutOfMemoryError in Adapter>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:35:55 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '1' for queue: '__weblogic_admin_rmi_queue'> <system> <>
    <000000> <ExecuteRequest failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:36:44 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '11' for queue: 'default'> <> <> <000000>
    <ExecuteRequest failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:38:04 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '10' for queue: 'default'> <> <> <000000>
    <ExecuteRequest failed>
    ####<Nov 18, 2001 7:38:51 PM GMT> <Error> <AdapterDispatcher> <danvers>
    <danvers> <Thread-27> <system> <> <000000> <Error in Dispatcher>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:40:19 PM GMT> <Error> <RJVM> <danvers> <danvers>
    <ExecuteThread: '7' for queue: 'default'> <> <> <000000> <Failure in
    heartbeat trigger for RJVM: '-4587908185898992697C:164.179.193.100'>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:41:50 PM GMT> <Error> <AdapterDispatcher> <danvers>
    <danvers> <Thread-47> <system> <> <000000> <Error in Dispatcher>
    ####<Nov 18, 2001 7:41:59 PM GMT> <Error> <Adapter> <danvers> <danvers>
    <ExecuteThread: '4' for queue: 'default'> <system> <> <000000>
    <OutOfMemoryError in Adapter>
    ####<Nov 18, 2001 7:42:11 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '4' for queue: 'default'> <system> <> <000000>
    <ExecuteRequest failed>
    ####<Nov 18, 2001 7:42:21 PM GMT> <Error> <Adapter> <danvers> <danvers>
    <Thread-17> <system> <> <000000> <OutOfMemoryError in Adapter>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:42:30 PM GMT> <Error> <AdapterDispatcher> <danvers>
    <danvers> <Thread-17> <system> <> <000000> <Error in Dispatcher>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:44:46 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '6' for queue: 'default'> <system> <> <000000>
    <ExecuteRequest failed>
    ####<Nov 18, 2001 7:45:18 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '7' for queue: 'default'> <system> <> <000000>
    <ExecuteRequest failed>
    ####<Nov 18, 2001 7:45:27 PM GMT> <Error> <Posix Performance Pack>
    <danvers> <danvers> <ExecuteThread: '14' for queue: 'default'> <> <>
    <000000> <Uncaught Throwable in processSockets>
    ####<Nov 18, 2001 7:45:48 PM GMT> <Error> <Performance Pack> <danvers>
    <danvers> <ExecuteThread: '14' for queue: 'default'> <> <> <000000>
    <Muxer got error: null>
    ####<Nov 18, 2001 7:45:56 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '14' for queue: 'default'> <> <> <000000>
    <ExecuteRequest failed>
    ####<Nov 18, 2001 7:46:22 PM GMT> <Error> <AdapterDispatcher> <danvers>
    <danvers> <Thread-77> <system> <> <000000> <Error in Dispatcher>
    ####<Nov 18, 2001 7:46:27 PM GMT> <Error> <Posix Performance Pack>
    <danvers> <danvers> <ExecuteThread: '13' for queue: 'default'> <> <>
    <000000> <Uncaught Throwable in processSockets>
    ####<Nov 18, 2001 7:46:40 PM GMT> <Error> <Performance Pack> <danvers>
    <danvers> <ExecuteThread: '13' for queue: 'default'> <> <> <000000>
    <Muxer got error: null>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:47:05 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '13' for queue: 'default'> <> <> <000000>
    <ExecuteRequest failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 7:47:22 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '9' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    ####<Nov 18, 2001 7:51:55 PM GMT> <Error> <AdapterDispatcher> <danvers>
    <danvers> <Thread-79> <system> <> <000000> <Error in Dispatcher>
    ####<Nov 18, 2001 8:14:57 PM GMT> <Error> <RJVM> <danvers> <danvers>
    <ExecuteThread: '13' for queue: 'default'> <> <> <000000> <Failure in
    heartbeat trigger for RJVM: '7929968200845233656C:164.179.193.100'>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 9:08:40 PM GMT> <Error> <Posix Performance Pack>
    <danvers> <danvers> <ExecuteThread: '12' for queue: 'default'> <> <>
    <000000> <Uncaught Throwable in processSockets>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 9:09:01 PM GMT> <Error> <Performance Pack> <danvers>
    <danvers> <ExecuteThread: '12' for queue: 'default'> <> <> <000000>
    <Muxer got error: null>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 1:45:30 AM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '5' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 1:14:01 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '3' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 1:54:36 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '5' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 2:26:37 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '5' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    We are not using the server as a web server; we are running a 3rd party
    application in the server, configured with 1/2 GB of memory. My guess is
    that they are not releasing some resources, but if you BEA folks can
    point me in some direction that would be helpful. The vendor says it's a
    WebLogic problem, which I doubt.

    Please give the details of Java parameters like heap size, ratios, etc.
    Look at the following link for various HotSpot options:
    http://java.sun.com/docs/hotspot/VMOptions.html.
    You can try the -verbosegc option, which prints GC statistics so you
    can tell whether the heap is running out or it is some other problem.
    A thread dump will be helpful to analyze the problem.
    Ctrl+\ (Ctrl + backslash) on Solaris, or Ctrl+Break on Windows, will
    give the thread dump.
    --sreeni.
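    The suggestions above, sketched as commands (class name, sizes, and the pid are illustrative placeholders):

```shell
# Illustrative: start the server JVM with GC logging turned on.
java -ms512m -mx512m -verbosegc weblogic.Server
# From another shell, SIGQUIT makes a running JVM print a thread dump
# to its stdout (the Unix equivalent of Ctrl+\ in the server console):
kill -QUIT <server-pid>
```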
    Larry Presswood <[email protected]> wrote in message news:<[email protected]>...
    Yes, this is true, but I was wondering if the stack trace could point to
    anything in particular, like the JDK bug in 1.3.x which will give out of
    memory after 400 or so EJB creations. Also, we would have to run under
    OptimizeIt for days, which is not entirely desirable in QA, but I will
    do it if no other help can be found.
    Dimitri Rakitine wrote:
    You can always use OptimizeIt to see what leaks memory.
    <000000> <Uncaught Throwable in processSockets>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 18, 2001 9:09:01 PM GMT> <Error> <Performance Pack> <danvers>
    <danvers> <ExecuteThread: '12' for queue: 'default'> <> <> <000000>
    <Muxer got error: null>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 1:45:30 AM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '5' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 1:14:01 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '3' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 1:54:36 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '5' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    ####<Nov 19, 2001 2:26:37 PM GMT> <Error> <Kernel> <danvers> <danvers>
    <ExecuteThread: '5' for queue: 'default'> <> <> <000000> <ExecuteRequest
    failed>
    We are not using the server as a web server; we are running a 3rd-party
    application in the server, configured with 1/2 GB of memory. My guess is
    that they are not releasing some resources, but if you BEA folks can
    point me in some direction, that would be helpful. The vendor says it's
    a WebLogic problem, which I doubt.
    --
    Dimitri
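Before assuming either side is at fault, it can help to watch whether the heap climbs steadily toward the 512 MB ceiling (suggesting a leak) or spikes suddenly (one oversized request). Below is a minimal sketch of a background thread that logs heap usage once a minute; the class name and interval are made up, and this is not a WebLogic facility:

```java
// HeapLogger -- illustrative sketch: periodically log JVM heap usage so a
// steady climb (a possible leak in the 3rd-party app) can be spotted in the
// server log before the OutOfMemoryError appears.
public class HeapLogger {

    // Format one heap snapshot; kept as a pure function so it is easy to test.
    static String snapshot(long usedBytes, long totalBytes) {
        long usedMb = usedBytes / (1024 * 1024);
        long totalMb = totalBytes / (1024 * 1024);
        return "heap used=" + usedMb + "MB total=" + totalMb + "MB";
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        Thread t = new Thread(() -> {
            while (true) {
                long used = rt.totalMemory() - rt.freeMemory();
                System.out.println(snapshot(used, rt.totalMemory()));
                try {
                    Thread.sleep(60_000); // one snapshot per minute
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        t.setDaemon(true); // don't keep the server alive just for logging
        t.start();
    }
}
```

Correlating these timestamps with the `<Adapter>` errors shows whether memory was exhausted gradually or all at once.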

  • JDBC Adapter: J2EE server crashes while sending large messages

    We want to use the following scenario to transfer data from a MS SQL Server to SAP BW via XI:
    JDBC Sender Adapter – XI – SAP ABAP Proxy.
    All works fine with a small amount of data. But if the select statement delivers too many records and the size of the transformed XML payload is greater than 50 MB, the J2EE server crashes. A complete restart is necessary. It seems to be a memory problem.
    Here are the entries from our log files:
    dev_server0
    [Thr 6151] Mon Jul 24 12:46:57 2006
    [Thr 6151] JLaunchIExitJava: exit hook is called (rc=666)
    [Thr 6151] **********************************************************************
    ERROR => The Java VM terminated with a non-zero exit code.
    Please see SAP Note 940893 , section 'J2EE Engine exit codes'
    for additional information and trouble shooting.
    [Thr 6151] SigISetIgnoreAction : SIG_IGN for signal 17
    [Thr 6151] JLaunchCloseProgram: good bye (exitcode=666)
    std_server0.out
    FATAL: Caught OutOfMemoryError! Node will exit with exit code 666
    java.lang.OutOfMemoryError
    Is this a general problem with XI or one specific to our configuration? Is it possible to transfer such large messages via XI? If not, is there a workaround for such scenarios?
    (Memory heap size of the J2EE server is 1024 MB.)

    > Hi Gil,
    >
    > i had nearly the same problem a few times in practice,
    > and the mapping was the reason. Just change your
    > interface determination temporarily, delete the mapping
    > and test again to find out if the mapping is the
    > reason.
    >
    > Regards,
    > Udo
    I have changed my interface determination so that no message mapping is used. The J2EE server still crashes.
    > Hi Gil,
    > This does sound like a memory problem, especially
    > when it comes to a 50 MB message with minimum XI system
    > requirements...
    > To be sure you can check on the RWB for the
    > componnent monitoring at the JDBC adapters and look
    > for your adapter
    > look at the status of the adapter and the trace
    > there...
    Hi Nimrod
    In case of such an error I have no entries in the channel monitor, so I can't see anything there. I also have no entries in the message monitor of the RWB in this case, so I don't get any information from the standard XI tools.
    > My recommendation to you is to set the poll interval
    > to a shorter period; this way you'll make sure you get
    > fewer records... I hope you have remembered to add a
    > status/flag column on the table to be set after
    > selection so no duplicate records will be taken on
    > the second polls.
    >
    The problem is that the source of my data is not a simple SQL statement but a stored procedure, so I don't know exactly how many records will be delivered. An update command is not possible.
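Where the records can be enumerated at all (for example, by staging the stored procedure's output into a table first), the usual workaround is to cut the result into fixed-size batches and send each batch as its own smaller message instead of one 50 MB payload. A minimal sketch of that batching step (the class name is made up, and this is not an XI API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a large record list into fixed-size batches so
// each batch can be mapped and sent as a separate, smaller message.
public class Batcher {

    static <T> List<List<T>> split(List<T> records, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += batchSize) {
            int end = Math.min(i + batchSize, records.size());
            // copy each slice so batches stay valid independently of the source list
            batches.add(new ArrayList<>(records.subList(i, end)));
        }
        return batches;
    }
}
```

With 500 records and a batch size of 100, this yields five messages of roughly a tenth the size each.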

  • OutOfMemory error on large file processing in sender file adapter

    Hi Experts,
    I have a file-to-IDoc scenario. On the sender side we are using a remote Adapter Engine. Files process fine when the size is below 5 MB; if the file size is more than 5 MB we get java.lang.OutOfMemoryError: Java heap space. Can anyone suggest what parameter I need to change in order to process more than 5 MB?

    Hi Praveen,
    The suggestion from SAP is not to process huge files in one shot. Instead, you can split the file into chunks and process them.
    To increase the heap memory:
    For Java heap size and other memory requirements, refer to the following OSS note for details:
    Note 723909 - Java VM settings for J2EE 6.40/7.0
    Also check out OSS Note 862405 - XI Configuration Options for Lowering Memory Requirements
    There are OSS notes available for the Java heap size settings specific to the OS environment too.
    Thanks,
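The chunking advice above can be sketched in plain Java: split the oversized file into fixed-size pieces before the sender channel picks them up. The class and the ".part-N" naming are made up for illustration:

```java
import java.io.*;

// Illustrative sketch: split a large input file into fixed-size chunk files
// (name.part-0, name.part-1, ...) so each piece stays under the adapter's
// practical size limit.
public class FileSplitter {

    // Returns the number of chunk files written into outDir.
    static int split(File input, File outDir, int chunkBytes) throws IOException {
        byte[] buf = new byte[chunkBytes];
        int part = 0;
        try (InputStream in = new BufferedInputStream(new FileInputStream(input))) {
            while (true) {
                int n = 0, r;
                // fill the buffer completely, unless the stream ends first
                while (n < chunkBytes && (r = in.read(buf, n, chunkBytes - n)) > 0) {
                    n += r;
                }
                if (n == 0) break; // end of input
                File chunk = new File(outDir, input.getName() + ".part-" + part++);
                try (OutputStream out = new FileOutputStream(chunk)) {
                    out.write(buf, 0, n);
                }
                if (n < chunkBytes) break; // last (short) chunk written
            }
        }
        return part;
    }
}
```

Note that a raw byte split will break XML well-formedness; for XML payloads the split has to happen on record boundaries, which is what the Notes above address on the adapter side.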

  • "Performance" problems with the File adapter on Plain J2SE Adapter Engine

    Hi,
    At the moment I'm at a customer site for a few days to solve some XI issues. One of the issues is the performance of the Plain J2SE Adapter Engine, using the file adapter to transfer XML messages (already in XI message format) from the legacy system to the Integration Engine. The file adapter has to deal with "large" XML messages (the maximum at the moment is 65 MB) and the engine fails with the following error when transferring the big XML file: "ERROR: Finished sending to Integration Engine with error "java.lang.OutOfMemoryError". Skip confirmation and quit this loop".
    As far as I can tell from the customer, the memory of the Plain Adapter Engine is set to 512 MB. This is maybe too low, but I don't know where to look for this; I only have the adapter web interface in front of me, no access to the OS itself via, for example, a remote connection.
    On the Integration Engine I know there is the ability to split large messages with the file adapter (File Content Conversion), but I don't know about this for the Plain Adapter Engine. Is there a possibility to do this also on the Plain Adapter Engine?
    Thanks in advance for any input.
    Greetings,
    Patrick

    Hi Sameer,
    Thanks for your answers.
    On the first solution, yes that is possible, we first decided to see if the legacy system can do the splitting, before starting developing a Java program.
    On the second solution: as far as I know, that is possible on the Integration Engine, but we are facing the problems on the Plain J2SE Adapter Engine. I went through that documentation (link:
    http://help.sap.com/saphelp_nw04/helpdata/en/6f/246b3de666930fe10000000a114084/frameset.htm ) to look for a similar solution for the Plain Adapter Engine. So my question is: is this possible with the Plain Adapter? And if so, what kind of parameters do I need to use to achieve this?
    Regards,
    Patrick

  • Can we split and fetch the records in Database Adapter

    Hi,
    I designed a Database Adapter to fetch records from an Oracle database. Sometimes the Database Adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process is choking and failing with the error
    java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    Could someone help me to resolve this?
    In the Database Adapter, can we split and fetch the records if the number of records is more than 1000?
    For example: the first 100 records as one set, the next 100 as a second set, and so on.
    Thank you.

    You can send the records as batches using the debatching feature of the DB adapter. Refer to the documentation for implementation details.
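Debatching is usually switched on in the adapter's .jca activation spec. The property names below (MaxRaiseSize, MaxTransactionSize) follow the Oracle DB Adapter documentation, but the portType/operation names are placeholders and the exact shape varies by version, so treat this as a hedged sketch rather than a drop-in file:

```xml
<!-- Sketch of a DB Adapter polling activation spec with debatching:
     MaxRaiseSize caps how many rows are raised to BPEL per message,
     MaxTransactionSize caps how many rows are processed per polling
     transaction. With MaxRaiseSize=100, a 10,000-row poll arrives as
     100 messages of 100 rows instead of one huge payload. -->
<endpoint-activation portType="poll_ptt" operation="receive">
  <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
    <property name="MaxRaiseSize" value="100"/>
    <property name="MaxTransactionSize" value="1000"/>
  </activation-spec>
</endpoint-activation>
```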

  • JDBC sender adapter outofmemory error

    Dear friends,
    We are getting the following error on the JDBC sender adapter: Error during conversion of query result to XML: java.lang.OutOfMemoryError: Java heap space (failed to allocate 1073741848 bytes)
    I have read blogs where there have been suggestions to limit the amount of data picked by using WHERE condition and not using SELECT *
    In our scenario we are creating GL IDocs. When PI picks the data up from SQL, it needs to get all the corresponding header and line items together, so we cannot randomly pick 1000 records at a time. The SQL table has 500,000 rows in the following format:
    H H H  L1 L1 L1
    H H H L2 L2 L2
    H H H  L3 L3 L3
    H1 H1 H1 L1 L1 L1
    H1 H1 H1 L2 L2 L2
    H1 H1 H1 L3 L3 L3
    Please let me know how we can solve this issue.
    Thank you,
    Teresa
    Edited by: Teresa lytle on Sep 27, 2011 3:13 PM

    If you are using an Oracle database, use the ROWNUM field to fetch the first set of records, and then update those records with the flag set to true.
    Similarly, if you are using an MS SQL database, use the SELECT TOP command to fetch the first set of records, and then update those records with the flag set to true.
    As with the File/FTP adapter, the next poll would run at the scheduled interval. The administrator can then alter the table contents to ensure fewer records are picked up.
    You need to limit the number of rows if you face the problem again. Please check the SAP Note:
    https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1296819
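The two suggestions above can be sketched as simple query builders. The table and flag column names (GL_STAGE, PROCESSED) are hypothetical; and for Teresa's header/line-item constraint, the limit would have to be applied to distinct header keys rather than raw rows so a header's lines are never split across polls:

```java
// Illustrative sketch of the "first N rows" polling queries suggested above.
// GL_STAGE and PROCESSED are made-up names, not part of any SAP scenario.
public class FirstNQuery {

    // Oracle: ROWNUM filters rows as they are produced.
    static String oracle(int n) {
        return "SELECT * FROM GL_STAGE WHERE PROCESSED = 'N' AND ROWNUM <= " + n;
    }

    // MS SQL Server: TOP limits the result set.
    static String sqlServer(int n) {
        return "SELECT TOP " + n + " * FROM GL_STAGE WHERE PROCESSED = 'N'";
    }
}
```

For the header/line-item case, one option is a subquery that selects the first N distinct header keys and then fetches all rows for those keys, so each poll carries complete documents.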

  • Java.lang.OutOfMemoryError during message monitoring in RWB

    Java Engine Problem during message monitoring in runtime workbench!!  
    Posted: Nov 12, 2007 2:13 AM
    Hi,
    Our integration processes are completing successfully, but there is a problem with message monitoring.
    When I use message monitoring in the RWB, Integration Engine monitoring has no problem, but there is a problem when displaying Adapter Engine monitoring:
    the message is "java.lang.OutOfMemoryError: Java heap space",
    so I cannot see any Adapter Engine messages.
    Actually, the same error occurred 3 weeks ago. At that time, we solved the problem by restarting the Java service.
    Has anyone seen this behavior?
    And is there any other solution to this problem?
    Thank you.

    Hi,
    As per the error analysis, it seems that the Java heap space is low, so you need to increase the heap size.
    1. Go to the C:\SAP\JP1\JC00\j2ee\configtool folder
    2. Start configtool.bat
    3. Go to cluster-data -> template - template_javaee -> instance - IDxxxxx
    4. Go to the VM Parameters tab and edit the maxHeapSize property
    5. Save and restart the system.
    I hope this will solve the problem.
    Regards,
    Sarvesh
    ***Reward points, if it helped you.
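After the restart in step 5, a quick way to confirm the new maxHeapSize actually reached the server VM is to ask the JVM for its heap ceiling. This is an illustrative sketch, not an SAP tool:

```java
// Illustrative sketch: report the heap ceiling the running JVM actually got,
// to confirm a configtool / -Xmx change took effect after the restart.
public class MaxHeap {

    static long maxHeapMb() {
        // maxMemory() is the -Xmx ceiling as seen by this JVM
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("max heap = " + maxHeapMb() + " MB");
    }
}
```

If the printed value still shows the old limit, the edited template was not the one the server instance actually uses.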

  • Java.lang.OutOfMemoryError: allocLargeObjectOrArray error for large payload

    Ours is an outbound flow where an FTP adapter picks up the files and calls a requester service; the requester service calls the EBS, the EBS calls the provider service, and finally the file is written using B2B.
    For the last 4-5 days we have been getting java.lang.OutOfMemoryError: allocLargeObjectOrArray.
    We get this error when testing with large payloads.
    As per our understanding, when you have a tree of composite invocations (so A invokes B invokes C invokes D via flowN 100 times), none of the memory is released until they all complete.
    1. Could you please let us know exactly when memory is released?
    2. How to tune/optimize this?
    Our flow is like:
    SyncDisbursePaymentGetFtpAdapter --> CreateDisbursedPaymentEbizReqABCSImp l--> SyncDisbursePaymentRBTTEBS --> SyncDisbursedPaymentJPMC_CHKProvABCSImpl--> AIAB2BInterface --> Oracle B2B
    <Dec 12, 2012 8:17:06 PM EST> <Warning> <RMI> <BEA-080003> <RuntimeException thrown by rmi server: javax.management.remote.rmi.RMIConnecti\
    onImpl.invoke(Ljavax.management.ObjectName;Ljava.lang.String;Ljava.rmi.MarshalledObject;[Ljava.lang.String;Ljavax.security.auth.Subject;)
    javax.management.RuntimeErrorException: java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 667664.
    javax.management.RuntimeErrorException: java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 667664
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:858)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:869)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:838)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
    at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449)
    Truncated. see log file for complete stacktrace
    Caused By: java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 667664
    at java.util.Arrays.copyOf(Arrays.java:2786)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1847)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1756)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1169)
    Truncated. see log file for complete stacktrace
    Does anyone have any idea how to rectify this error? The whole UAT environment is down because of this issue.

    Please find the required info:
    1. Operating system --> Linux
    2. JVM (Sun or JRockit) --> JRockit
    3. Domain info (production mode enabled?, log levels, number of servers in cluster, number of servers in a machine):
    a) Production mode enabled --> Production mode is not enabled; we are going to enable it.
    b) Log levels --> There are many logs (b2b, soa, bpel, integration); which log level do I need to set to finest (32)?
    c) Number of servers in cluster --> 2
    d) Number of servers in a machine --> 1
    4. Payload info (size, xml/non-xml?):
    a) Size --> more than 1 MB and up to 25 MB
    b) xml/non-xml --> xml
    We are making the changes as suggested by you and will update accordingly.

  • Passing Streaming Content to a JCA FTP adapter in OSB

    Hi,
    Is there any way to pass streaming content, represented by e.g. <con:binary-content ref="cid:1b6ff6d0:1416f7a74ab:-1d8a" xmlns:con="http://www.bea.com/wli/sb/context"/>, to a JCA FTP (business service) adapter?
    My binary content is already in base64Binary format. The content is around 250 MB, so there is no possibility of holding it in memory (java.lang.OutOfMemoryError).
    FTP Adapter wsdl
    <wsdl:definitions
         name="FTPAdapter"
         targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/ftp/Adapter/FTPAdapter/FTPAdapter"
         xmlns:jca="http://xmlns.oracle.com/pcbpel/wsdl/jca/"
         xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/ftp/Adapter/FTPAdapter/FTPAdapter"
         xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
         xmlns:opaque="http://xmlns.oracle.com/pcbpel/adapter/opaque/"
         xmlns:FTPAPP="http://xmlns.oracle.com/pcbpel/adapter/ftp/"
         xmlns:plt="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
        >
      <plt:partnerLinkType name="SaveFile_plt" >
        <plt:role name="SaveFile_role" >
          <plt:portType name="tns:SaveFile_ptt" />
        </plt:role>
      </plt:partnerLinkType>
        <wsdl:types>
        <schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/"
                xmlns="http://www.w3.org/2001/XMLSchema" >
          <element name="opaqueElement" type="base64Binary" />
        </schema>
        <schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/ftp/" xmlns="http://www.w3.org/2001/XMLSchema"
                attributeFormDefault="qualified"
                elementFormDefault="qualified" >
           <element name="OutboundFTPHeaderType" >
             <complexType>
               <sequence>
                 <element name="fileName" type="string" />
                 <element name="directory" type="string" />
                </sequence>
               </complexType>
           </element>
        </schema>
        </wsdl:types>
        <wsdl:message name="SaveFile_msg">
            <wsdl:part name="opaque" element="opaque:opaqueElement"/>
        </wsdl:message>
        <wsdl:message name="Output_msg">
            <wsdl:part name="body" element="FTPAPP:OutboundFTPHeaderType"/>
        </wsdl:message>
        <wsdl:portType name="SaveFile_ptt">
            <wsdl:operation name="SaveFile">
                <wsdl:input message="tns:SaveFile_msg"/>
                <wsdl:output message="tns:Output_msg"/>
            </wsdl:operation>
        </wsdl:portType>
    </wsdl:definitions>

    Aren't the default cluster-aware settings enough for this?
    http://docs.oracle.com/cd/E15523_01/integration.1111/e10231/adptr_db.htm
    See the Scalability section; you could enable a few of these options in the adapter wizards.
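Whether OSB can stream this particular binary-content reference to the JCA FTP adapter is version-dependent, but the reason streaming avoids the OutOfMemoryError is that only a small buffer is live at any time. A minimal sketch of that constant-memory copy pattern (illustrative only, not the OSB transport code):

```java
import java.io.*;

// Illustrative sketch of the constant-memory pattern behind "streaming
// content": copy an InputStream to an OutputStream in small chunks instead
// of materializing the whole 250 MB payload as one byte[] (which is what
// triggers the OutOfMemoryError).
public class StreamCopy {

    // Returns the number of bytes copied.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192]; // only 8 KB live at any time
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
            total += n;
        }
        out.flush();
        return total;
    }
}
```

The memory footprint stays flat regardless of payload size; the design question for OSB is simply whether the transport hands you streams or forces the payload into a single in-memory value.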

  • Java.lang.OutOfMemoryError when editing a routing note for a Proxy Service?

    Hi,
    Does anyone know why I am getting a java.lang.OutOfMemoryError in the AquaLogic Service Bus logs, which freezes the browser (IE), when I try to edit a route node of a proxy service? I can change the name of the node, but when I try to edit the node itself the interface freezes and the out-of-memory error is thrown by the Service Bus. I have tried restarting the server, etc. The Java memory allocated should already be adequate, as it is set to 512 MB. All I can think of doing next is to uninstall the Service Bus and reinstall.
    The logs I am getting can be seen below.....
    Welcome, weblogic Connected to : servicebus Home WLS Console Logout Help AskBEA
    weblogic session Created 9/01/08 13:46 No Conflicts 3 Change(s) 1 Active Session(s)
    The source of this error is com.bea.portlet.adapter.scopedcontent.ActionLookupFailedException: java.lang.OutOfMemoryError at com.bea.portlet.adapter.scopedcontent.ScopedContentCommonSupport.executeAction(ScopedContentCommonSupport.java:699) at com.bea.portlet.adapter.scopedcontent.ScopedContentCommonSupport.renderInternal(ScopedContentCommonSupport.java:268) at com.bea.portlet.adapter.scopedcontent.StrutsStubImpl.render(StrutsStubImpl.java:107) at com.bea.netuix.servlets.controls.content.NetuiContent.preRender(NetuiContent.java:288) at com.bea.netuix.nf.ControlLifecycle$6.visit(ControlLifecycle.java:427) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:708) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at 
com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walkRecursivePreRender(ControlTreeWalker.java:720) at com.bea.netuix.nf.ControlTreeWalker.walk(ControlTreeWalker.java:183) at com.bea.netuix.nf.Lifecycle.processLifecycles(Lifecycle.java:361) at com.bea.netuix.nf.Lifecycle.processLifecycles(Lifecycle.java:339) at com.bea.netuix.nf.Lifecycle.runOutbound(Lifecycle.java:186) Caused by: java.lang.OutOfMemoryError
    Thanking you in advance for any help/insight.

    Okay, I fixed the "Incompatible initial and maximum heap sizes specified" issue now. The problem was that in the lh.bat script the parameter -Xmx128M was hardcoded and overwrote my own -Xmx setting.
    I now set JAVA_OPTS to "-Xms1400m -Xmx1400m" which is the maximum that Java accepts without complaining about too little memory. However, I still get the OutOfMemoryError and in addition to that I now also get this error:
    com.waveset.util.Shutdown: Sorry!  This repository has been shut down by an administrator.  Please try to get another instance.
    I've googled already but found only two sites and those weren't really helpful. So, any help is very much appreciated!
    Thanks,
    David

  • Iphoto crashing after using mini-dvi to video adapter

    Hi, iPhoto on my MacBook is crashing. I can open it, then as soon as I scroll down it locks up and I have to force quit.
    This started happening right after I used a Mini-DVI to Video adapter cable to hook my MacBook up to my TV. The adapter/S-Video connection worked and I was able to see the video on the TV. But iPhoto immediately locked up the computer when I went to the slideshow, and now it locks up every time I open it.
    Any ideas?
    Thank you:)
    Dorothy

    It means that the issue resides in your existing Library.
    Option 1
    Back up and try rebuilding the library: hold down the Command and Option (or Alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose Rebuild iPhoto Library Database from automatic backup.
    If that fails:
    Option 2
    Download iPhoto Library Manager and use its rebuild function. This will create a new library based on data in the albumdata.xml file. Not everything will be brought over - no slideshows, books or calendars, for instance - but it should get all your albums and keywords, faces and places back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one.
    Regards
    TD

  • Wrong 'mini dvi to video' adapter for my 12" Powerbook 1.33

    I recently bought the M9319 Mini-DVI to Video adapter for my 12" PowerBook 1.33, only to find that the mini-DVI connector on the adapter is a different size from the port on the computer and from the adapters supplied with the computer.
    I've been doing a bit of research into this and it seems a bit of a grey area. Apple have not made it clear in advertising this product that it is only suitable for the newest 12" PowerBooks, and even state that it will work with a 1.33 machine in the following article:
    http://docs.info.apple.com/article.html?artnum=86507
    Has anyone had a similar experience, or suggest an alternative product that will work? Did Apple make an older version that works with the 1.33?
    Any help would be much appreciated. Thanks, Graeme.

    I just took the adapter to my local Apple store and they identified it as a Mini-VGA to Video adapter - not what it said on the packaging!
    I've noticed a few posts with this problem, so just double check before you leave the store that the packaging matches the product.
