Out of memory detected in log.xml

Hi,
Our db version is 11.1.0.7.0, and in log.xml we see an out-of-memory error like this:
Out of memory detected in /u01/app/oracle/admin/diag/rdbms/ABCD/alert/log.xml at time/line number: Tue Feb 15 03:09:11 2011/54933.
The alert log file looks OK.
How can we prevent this?

You need to provide more information than that; I presume you got this from the OEM alert.
Try looking in /u01/app/oracle/admin/diag/rdbms/ABCD/ABCD/trace/alert_ABCD.log and check the entries for February 15th. That might point you in the right direction.

Similar Messages

  • Getting an Out of memory exception while validating XML against XSD

    Hello friends,
    I am getting an Out Of Memory exception while validating my XML against a given XSD, which is huge.
    SAXParserFactory saxParserFactory = SAXParserFactory.newInstance();
    saxParserFactory.setValidating(true);
    SAXParser saxParser = saxParserFactory.newSAXParser();
    saxParser.setProperty("http://java.sun.com/xml/jaxp/properties/schemaLanguage", "http://www.w3.org/2001/XMLSchema");
    saxParser.setProperty("http://java.sun.com/xml/jaxp/properties/schemaSource", new File("C:/todelxsd.xsd"));
    The highlighted code basically loads the XSD into memory, and the JVM throws an Out of Memory exception. Is there any other way of validating an XML against an XSD where I don't have to load my XSD? If not, kindly let me know the solution for the above problem.
    Thanks.

    Yes, but increasing the heap size is a temporary solution. Isn't there a way the XML can be validated against an XSD without having to load the XSD into memory?
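    A compiled schema still has to live in memory once, but the JAXP 1.3 javax.xml.validation API keeps that footprint well below a full DOM of the XSD, lets you compile the schema once and reuse it across documents and threads, and validates the instance document as a stream. A minimal sketch (the XSD path is the poster's; input.xml is a hypothetical instance file):
    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    public class StreamingValidation {
        public static void main(String[] args) throws Exception {
            // Compile the XSD once; the compiled Schema is reusable and
            // much cheaper to keep around than a DOM of the schema file.
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File("C:/todelxsd.xsd"));

            // Validate the instance document as a stream; no DOM of the
            // XML is built, so memory use stays roughly flat.
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new File("C:/input.xml")));
        }
    }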

  • Out of Memory Error because of XML file size

    Hi,
    Help me to solve this out-of-memory error: if the XML file size is increased, nothing is displayed and this out-of-memory error appears.
    Thanking you,
    Regards,
    Nirmalatha.N

    You should avoid loading large XML files in your Flash Lite application. There is a limit on incoming data, and anything beyond that will give an error. My experience has been around 1000 characters in a single stream of incoming text.
    A possible solution to your memory problem is to use a middle-tier language like PHP or ASP to stream a single XML data file in parts to your Flash Lite application, as in the sketch below. This means you avoid loading the XML directly in Flash.
    Mariam
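    The middle tier can be any server-side language; as a rough illustration of the idea in Java (the servlet name, file path, and chunk size are hypothetical), the client requests one slice of the file per call:
    import java.io.FileReader;
    import java.io.IOException;
    import java.io.Reader;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical middle-tier chunker: the Flash Lite client requests
    // /xmlchunk?part=N and gets back the Nth slice of a large XML file,
    // keeping each response below the client's incoming-data limit.
    // A real client would have to reassemble slices that split tags.
    public class XmlChunkServlet extends HttpServlet {
        private static final int CHUNK_SIZE = 900; // chars, below the ~1000 limit

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            int part = Integer.parseInt(req.getParameter("part"));
            resp.setContentType("text/xml");
            try (Reader in = new FileReader("/data/big.xml")) {
                in.skip((long) part * CHUNK_SIZE);
                char[] buf = new char[CHUNK_SIZE];
                int n = in.read(buf);
                if (n > 0) {
                    resp.getWriter().write(buf, 0, n);
                }
            }
        }
    }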

  • Large Pdf using XML XSL - Out of Memory Error

    Hi Friends.
    I am trying to generate a PDF from XML, XSL and FO in Java. It works fine if the PDF to be generated is small.
    But if the PDF to be generated is big, it throws an "Out of Memory" error. Can someone please give me some pointers about the possible reasons for this error? Thanks for your help.
    RM
    Code:
    import java.io.*;
    import javax.servlet.*;
    import javax.servlet.http.*;
    import org.xml.sax.InputSource;
    import org.xml.sax.XMLReader;
    import org.apache.fop.apps.Driver;
    import org.apache.fop.apps.Version;
    import org.apache.fop.apps.XSLTInputHandler;
    import org.apache.fop.messaging.MessageHandler;
    import org.apache.avalon.framework.logger.ConsoleLogger;
    import org.apache.avalon.framework.logger.Logger;

    public class PdfServlet extends HttpServlet {
        public static final String FO_REQUEST_PARAM = "fo";
        public static final String XML_REQUEST_PARAM = "xml";
        public static final String XSL_REQUEST_PARAM = "xsl";

        Logger log = null;
        Com_BUtil myBu = new Com_BUtil(); // site-local utility class

        public void doGet(HttpServletRequest request,
                          HttpServletResponse response) throws ServletException {
            if (log == null) {
                log = new ConsoleLogger(ConsoleLogger.LEVEL_WARN);
                MessageHandler.setScreenLogger(log);
            }
            try {
                String xmlParam = myBu.getConfigVal("filePath") + "/"
                        + request.getParameter(XML_REQUEST_PARAM);
                String xslParam = myBu.SERVERROOT + "/jsp/servlet/"
                        + request.getParameter(XSL_REQUEST_PARAM) + ".xsl";
                if ((xmlParam != null) && (xslParam != null)) {
                    XSLTInputHandler input =
                            new XSLTInputHandler(new File(xmlParam), new File(xslParam));
                    renderXML(input, response);
                } else {
                    PrintWriter out = response.getWriter();
                    out.println("<html><head><title>Error</title></head>\n"
                            + "<body><h1>PdfServlet Error</h1><h3>No 'xml' or 'xsl' "
                            + "request param given.</h3></body></html>");
                }
            } catch (ServletException ex) {
                throw ex;
            } catch (Exception ex) {
                throw new ServletException(ex);
            }
        }

        public void renderXML(XSLTInputHandler input,
                              HttpServletResponse response) throws ServletException {
            try {
                // The whole PDF is built in memory before being written out.
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                response.setContentType("application/pdf");
                Driver driver = new Driver();
                driver.setLogger(log);
                driver.setRenderer(Driver.RENDER_PDF);
                driver.setOutputStream(out);
                driver.render(input.getParser(), input.getInputSource());
                byte[] content = out.toByteArray();
                response.setContentLength(content.length);
                response.getOutputStream().write(content);
                response.getOutputStream().flush();
            } catch (Exception ex) {
                throw new ServletException(ex);
            }
        }

        /**
         * Creates a SAX parser, using the value of org.xml.sax.parser,
         * defaulting to org.apache.xerces.parsers.SAXParser.
         * @return the created SAX parser
         */
        static XMLReader createParser() throws ServletException {
            String parserClassName = System.getProperty("org.xml.sax.parser");
            if (parserClassName == null) {
                parserClassName = "org.apache.xerces.parsers.SAXParser";
            }
            try {
                return (XMLReader) Class.forName(parserClassName).newInstance();
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }
    }
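    One thing worth noting: renderXML() above buffers the entire PDF in a ByteArrayOutputStream, so the finished document sits on the heap alongside FOP's own formatting-object tree. A possible variation (a sketch against the same FOP 0.20-era Driver API used above) streams straight to the servlet response instead, at the cost of not setting Content-Length; FOP still builds its area tree in memory, so this only removes the extra full copy:
    public void renderXMLStreaming(XSLTInputHandler input,
                                   HttpServletResponse response) throws ServletException {
        try {
            // Write the PDF directly to the client instead of buffering
            // the finished document in memory first.
            response.setContentType("application/pdf");
            Driver driver = new Driver();
            driver.setLogger(log);
            driver.setRenderer(Driver.RENDER_PDF);
            driver.setOutputStream(response.getOutputStream());
            driver.render(input.getParser(), input.getInputSource());
            response.getOutputStream().flush();
        } catch (Exception ex) {
            throw new ServletException(ex);
        }
    }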

    Hi,
    I did try that initially. After executing the command I get this message.
    C:\>java -Xms128M -Xmx256M
    Usage: java [-options] class [args...]
    (to execute a class)
    or java -jar [-options] jarfile [args...]
    (to execute a jar file)
    where options include:
    -cp -classpath <directories and zip/jar files separated by ;>
    set search path for application classes and resources
    -D<name>=<value>
    set a system property
    -verbose[:class|gc|jni]
    enable verbose output
    -version print product version and exit
    -showversion print product version and continue
    -? -help print this help message
    -X print help on non-standard options
    Thanks for your help.
    RM

  • XML PL/SQL Parser Out of memory

    I'm parsing a 20 MB file and receiving the following result.
    I have tried to cut the file size down and have found a smaller version that will parse successfully.
    We have tried adjusting
    ulimit -d 2097152
    ulimit -s 32768
    and also set java_pool_size = 41943040 in init.ora;
    none of these seemed to be enough to enable parsing of the file.
    The following are two executions of the parser using two different file sizes.
    Thanks,
    Steve
    BEGIN
    domsample('/u01/app/oracle/xmlparser/samp','dan55.xml','errors.txt'); END;
    ERROR at line 1:
    ORA-29554: unhandled Java out of memory condition
    BEGIN
    domsample('/u01/app/oracle/xmlparser/samp','dan60.xml','errors.txt'); END;
    ERROR at line 1:
    ORA-29532: Java call terminated by uncaught Java exception:
    java.lang.OutOfMemoryError
    ORA-06512: at "PHASE2.XMLPARSERCOVER", line 0
    ORA-06512: at "PHASE2.XMLPARSER", line 118
    ORA-06512: at "PHASE2.DOMSAMPLE", line 84
    ORA-06512: at line 1

    Oracle XML Team wrote:
    : On what OS and with how much installed memory are you running?
    : Oracle XML Team
    : http://technet.oracle.com
    : Oracle Technology Network
    Steve (guest) wrote:
    : The server is a Digital UNIX V4.0E (Rev. 1091) with 2 GB RAM.
    Oracle version 8.1.5

  • Getting an out of memory exception while validating my XML against an XSD

    Hello friends,
    I have asked this question in the following thread too; pasting it here just to save your time:
    http://forum.java.sun.com/thread.jspa?threadID=690812&tstart=0
    The question, code, and reply are the same as in the "Getting an Out of memory exception while validating XML against XSD" thread above.

  • Oracle XML CLOB out of memory error

    I am running several of Oracle's XML tools to store large XML documents in Oracle. I am successful with a 3 MB file, but not with a 20 MB file. I get an "out of memory" exception in LobPlsqlUtil.class.
    The CLOB is accessed through the method CLOB.getCharacterStream(), which is then handed off to an XMLParser class. The exact exception is below:
    Any ideas? Thanks, Rich
    Unhandled exception breakpoint occurred at line 135 in file [D:\programs\jdev\jdbc\lib\oracle8.1.7\classes12.zip]\oracle\sql\LobPlsqlUtil.class: java.lang.OutOfMemoryError.

    I have some more information to add to the original post. I found that if I access the CLOB very fast then I can get all 20 MB of data from the CLOB. If I access it slowly (meaning that I pass CLOB.getCharacterStream to a parser) then it fails.
    I wrote my own InputStream as a wrapper around the CLOB, using the CLOB.getChars() function. If I pass this input stream to a parser (one of Oracle's, or Xerces) and get the data chunks on demand, it still fails. If, instead, I read all the CLOB data as fast as possible and buffer it locally before passing it on to the parser, then I get all 20 MB. Go figure!
    Also, when the CLOB reading fails, it doesn't help to get a new ResultSet, or to close and open a new Connection and try to start where it left off.
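    In code form, the buffering workaround the poster describes might look like this (a sketch, assuming a JDBC java.sql.Clob and enough heap to hold the buffered characters):
    import java.io.BufferedReader;
    import java.io.Reader;
    import java.io.StringReader;
    import java.sql.Clob;

    // Drain the CLOB as fast as possible into a local buffer, then hand
    // the buffered copy to the parser instead of letting the parser pull
    // from the CLOB on demand.
    public final class ClobBuffer {
        public static Reader bufferClob(Clob clob) throws Exception {
            StringBuilder sb = new StringBuilder((int) clob.length());
            try (BufferedReader in = new BufferedReader(clob.getCharacterStream())) {
                char[] buf = new char[64 * 1024];
                int n;
                while ((n = in.read(buf)) != -1) {
                    sb.append(buf, 0, n);
                }
            }
            return new StringReader(sb.toString());
        }
    }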

  • Open Container: Logging region out of memory

    When opening a container I am getting the following error... according to the docs it appears as though a new log file should be created, but this is not the case...
    Utils: Logging region out of memory; you may need to increase its size
    Utils: DB->get: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Error: com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
    com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
    at com.sleepycat.dbxml.dbxml_javaJNI.XmlManager_openContainer__SWIG_2(Native Method)
    at com.sleepycat.dbxml.XmlManager.openContainer(XmlManager.java:525)
    at com.sleepycat.dbxml.XmlManager.openContainer(XmlManager.java:190)
    at com.sleepycat.dbxml.XmlManager.openContainer(XmlManager.java:132)
    at com.iconnect.data.pool.BerkeleyXMLDBPool.getContainer(BerkeleyXMLDBPool.java:195)
    at com.iconnect.data.adapters.BerkleyXMLDBImpl.query(BerkleyXMLDBImpl.java:109)
    at com.iconnect.data.DataManagerFactory.query(DataManagerFactory.java:112)
    at controls.iConnectControl.IConnectImpl.login(IConnectImpl.java:100)
    at controls.iConnectControl.IConnectBean.login(IConnectBean.java:110)
    at app.a12.en.auth.LoginController.Login(LoginController.java:124)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:811)
    at org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:752)
    at org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:446)
    at org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:237)
    at org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:299)
    at org.apache.beehive.netui.pageflow.internal.FlowControllerAction.execute(FlowControllerAction.java:55)
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
    at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processActionPerform(PageFlowRequestProcessor.java:1403)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
    at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:647)
    at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:702)
    at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:565)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:525)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
    at com.iconnect.security.SecurityFilter.doFilter(SecurityFilter.java:99)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:186)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
    at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
    at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:300)
    at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:374)
    at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:743)
    at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:675)
    at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:866)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
    at java.lang.Thread.run(Thread.java:595)

    Here is the output that we have... one thing to note: the exception implies that the container does not exist (although it does)... it is falling into a sub-routine for:
    myManager.existsContainer(containerName) == 0
    The exception is thrown (it seems) because of the "Logging region..." error...
    We had normal operation until the logging system ran out of memory... (I am using a shared Environment and am managing shared open Containers)...
    btw - Is there not a good end-to-end Java example available?
    // Initialize the Environment Configuration
    envConfig.setAllowCreate(true);            // Create the environment if it does not exist.
    envConfig.setInitializeCache(true);        // Turn on the shared memory cache.
    envConfig.setInitializeLocking(true);      // Turn on the locking subsystem.
    envConfig.setInitializeLogging(false);     // Note: the logging subsystem is turned OFF here.
    envConfig.setTransactional(true);          // Turn on the transactional subsystem.
    envConfig.setErrorStream(System.err);
    envConfig.setErrorPrefix("Utils");
    envConfig.setNoLocking(true);
    // Played with these settings as well
    //envConfig.setLogInMemory(true);
    //envConfig.setLogBufferSize(10 * 1024 * 1024);
    envConfig.setThreaded(true);
    Getting container: USER.dbxml
    Utils: Logging region out of memory; you may need to increase its size
    Container does not exist.... creating...
    Utils: Logging region out of memory; you may need to increase its size
    Utils: Logging region out of memory; you may need to increase its size
    com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
    com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
    at com.sleepycat.dbxml.dbxml_javaJNI.XmlManager_createContainer__SWIG_0(Native Method)
    at com.sleepycat.dbxml.XmlManager.createContainer(XmlManager.java:485)
    at com.sleepycat.dbxml.XmlManager.createContainer(XmlManager.java:152)
    at com.sleepycat.dbxml.XmlManager.createContainer(XmlManager.java:122)
    at com.iconnect.data.pool.BerkeleyXMLDBPool.getContainer(BerkeleyXMLDBPool.java:171)
    at com.iconnect.data.adapters.BerkleyXMLDBImpl.query(BerkleyXMLDBImpl.java:109)
    at com.iconnect.data.DataManagerFactory.query(DataManagerFactory.java:112)
    at controls.iConnectControl.IConnectImpl.login(IConnectImpl.java:100)
    at controls.iConnectControl.IConnectBean.login(IConnectBean.java:110)
    at auth.LoginController.Login(LoginController.java:123)
    ... (the remaining stack frames are identical to the first trace above)
    Setting namespace: http://iconnect.com/schemas/user
    com.sleepycat.dbxml.XmlException: Error: Cannot resolve container: USER.dbxml. Container not open and auto-open is not enabled. Container may not exist., errcode = CONTAINER_CLOSED

  • Generating large amounts of XML without running out of memory

    Hi there,
    I need some advice from the experienced XDB users around here. I'm trying to map large amounts of data inside the DB (Oracle 11.2.0.1.0), and by large I mean files up to several GB. I compared the "low level" mapping via PL/SQL in combination with ExtractValue/XMLQuery with the elegant XML View Mapping, and the best performance came from the View Mapping using the XMLTABLE XQuery PATH constructs. So now I have a View that lies on several BINARY XMLTYPE columns (where the XML files are stored) for the mapping, and another view above this Mapping View that constructs the nested XML result document via XMLELEMENT(), XMLAGG() etc. Example code for better understanding:
    CREATE OR REPLACE VIEW MAPPING AS
    SELECT type, (...) FROM XMLTYPE_BINARY, XMLTABLE ('/ROOT/ITEM' passing xml
      COLUMNS
        type VARCHAR2(50) PATH 'for $x in .
            let $one := substring($x/b012,1,1)
            let $two := substring($x/b012,1,2)
            return
                if ($one eq "A") then "A"
                else if ($one eq "B" and not($two eq "BJ")) then "AA"
                else if (...)
    CREATE OR REPLACE VIEW RESULT AS
    SELECT XMLELEMENT("RESULTDOC",
             (SELECT XMLAGG(
                       XMLELEMENT("ITEM",
                         XMLFOREST(
                           type "ITEMTYPE",
                           (...)
    ) AS RESULTDOC FROM MAPPING;
    Now all I want to do is materialize this document by inserting it into an XMLTYPE table/column:
    insert into bla select * from RESULT;
    Sounds pretty easy, but I can't get it to work: the DB seems to load a full DOM representation into RAM every time I perform a select, insert into, or use the xmlgen tool. This representation takes more than 1 GB for a 200 MB XML file, and eventually I'm running out of memory with:
    ORA-19202: Error occurred in XML PROCESSING
    ORA-04030: out of process memory
    My question is how I can get the result document into the table without memory exhaustion. I thought the DB would be smart enough to generate some kind of serialization/data stream to perform this task without loading everything into RAM.
    Best regards

    The file import is performed via JDBC; CLOB and binary storage are possible up to several GB, while the OR storage gives me ORA-22813 when loading files with more than 100 MB. I use a plain prepared statement:
    File f = new File( path );
    PreparedStatement pstmt = CON.prepareStatement( "insert into " + table + " values ('" + id + "', XMLTYPE(?) )" );
    pstmt.setClob( 1, new FileReader(f), (int)f.length() );
    pstmt.executeUpdate();
    pstmt.close();
    DB version is 11.2.0.1.0, as mentioned in the initial post.
    But this isn't my main problem, the above one is; I prefer using binary xmltype anyway, much easier to index. Anyone got an idea how to get the large document from the view into an xmltype table?

  • Big Log Files resulting in Out Of Memory of server partition

    Hi Forte users,
    Using the Component/View Log from EConsole on a server partition triggers an Out Of Memory of the server partition when the log file is too big (a few MB).
    Does anyone know how to change the log file name or clean the log file of a server partition running interpreted with Forte 2.0H16?
    Any help welcome,
    Thanks,
    Vincent Figari

  • RE: Big Log Files resulting in Out Of Memory of server partition

    To clean a log on NT, you can open it with Notepad, select all and delete, add a space and Save As... with the same file name.
    On Unix, you can simply truncate the log by redirecting empty output to the file name, e.g.:
    # > forte_ex_2390.log
    (This should work on NT too, but I never tried.)
    Hope that will help.
    From: Vincent R Figari
    Date: Monday, March 30, 1998, 21:42
    To: [email protected]
    Subject: Big Log Files resulting in Out Of Memory of server partition
    (quoted message: same as the original post above)

  • Repeated opening of a database in a Txn causes Logging region out of memory

    Hi
    BDB 4.6.21
    When I open and close a single database file repeatedly, it causes the error message "Logging region out of memory; you may need to increase its size". I have set the 65 KB default size for set_lg_regionmax. Is there any workaround for this issue, other than increasing the value of set_lg_regionmax? Even if we set it to a higher value, we cannot predict how the clients of BDB will open and close a database file. Following is a stand-alone program with which one can reproduce the scenario.
    #include <windows.h>
    #include <db_cxx.h>

    int main()
    {
        const int SUCCESS = 0;
        ULONG uEnvFlags = DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOG | DB_INIT_TXN |
                          DB_INIT_LOCK | DB_THREAD; // | DB_RECOVER;
        LPCSTR lpctszHome = "D:\\Nisam\\Temp";
        int nReturn = 0;
        DbEnv* pEnv = new DbEnv( DB_CXX_NO_EXCEPTIONS );
        nReturn = pEnv->set_thread_count( 20 );
        nReturn = pEnv->open( lpctszHome, uEnvFlags, 0 );
        if( SUCCESS != nReturn )
            return 0;
        DbTxn* pTxn = 0;
        char szBuff[MAX_PATH];
        UINT uDbFlags = DB_CREATE | DB_THREAD;
        lstrcpy( szBuff, "DBbbbbbbbbbbbbbbbbbbbbbbbbbb________0" ); // some long name
        {
            // First create the database
            Db Database( pEnv, 0 );
            nReturn = Database.open( pTxn, szBuff, 0, DB_BTREE, uDbFlags, 0 );
            nReturn = Database.close( 0 );
        }
        for( int nCounter = 0; 10000 > nCounter; ++nCounter )
        {
            // Now repeatedly open and close the above created database
            pEnv->txn_begin( 0, &pTxn, 0 );
            Db Database( pEnv, 0 );
            nReturn = Database.open( pTxn, szBuff, 0, DB_BTREE, uDbFlags, 0 );
            if( SUCCESS != nReturn )
            {
                // when the count reaches 435, the error occurs
                pTxn->abort();
                pEnv->close( 0 );
                return 0;
            }
            pTxn->abort();
            pTxn = 0;
            Database.close( 0 );
        }
        pEnv->close( 0 );
        return 0;
    }
    By the way, following is the content of my DB_CONFIG file
    set_tx_max 1000
    set_lk_max_lockers 10000
    set_lk_max_locks 100000
    set_lk_max_objects 100000
    set_lock_timeout 20000
    set_lg_bsize 1048576
    set_lg_max 10485760
    #log region: 66KB
    set_lg_regionmax 67584
    set_cachesize 0 8388608 1
    Thanks and Regards
    Nisam

    Hi Nisam,
    I was able to reproduce the problem using Berkeley DB 4.6.21. The problem is with releasing the FNAME structure in certain cases involving aborted transactions. In a situation where you continuously (in a loop) transactionally open, abort, and close databases, you will notice (as you did) that the log region size needs to be increased (set_lg_regionmax).
    This problem was identified and reproduced yesterday (thanks for letting us know about this) and is reported as SR #15953. It will be fixed in the next release of Berkeley DB and is currently in code review/regression testing. I have a patch that you can apply to Berkeley DB 4.6 and have confirmed that your test program runs with the patch applied. If you send me email at (Ron dot Cohen at Oracle) I'll send the patch to you.
    As you noticed, committing the transaction will run cleanly without error. You could do that (with the DB_TXN_NOSYNC suggestion below), but you may not even need transactions for this.
    I want to expand a bit on my recommendation that you not abort transactions in the manner that you are doing (though with the patch you can certainly do that). First, the open/close database is a heavyweight operation. Typically you create/open your databases and keep them open for the life of the application (or at least a long time).
    You also mentioned that you noticed commits may have taken a longer time. We can talk about that (if you email me), but you could consider using the DB_TXN_NOSYNC flag, at the cost of losing durability. Make sure that this suggestion will work with your application requirements.
    Even if you have (create/open/get/commit/abort), a single get operation should not need transactions. In that case there would be no logging for the open and close, so this sequence would be faster. This was a code snippet, so what you have in your application may be a lot more complicated and justify what you have done. But the simple test case above should not require a transaction, since you are doing a single atomic get.
    I hope this helps!
    Ron Cohen
    Oracle Corporation

  • Out Of Memory issue while downloading information from DB to XML

    Hi,
    I am converting database tables into XML file format using Java IO and SAXP, and I am keeping all XML files in a download folder. But during the download process I am getting an Out Of Memory error.
    When I tried to download the real data, the download folder size went up to 50 MB and I got the Out Of Memory error.
    Is that related to JVM memory or system memory? What would be the solution for this?
    Awaiting your answers

    By default the JVM appropriates 96 MB of memory for its heap. You can increase the allocation by putting, say, -Xmx128m on the Java command line, though perhaps a better solution is to avoid loading all your data at one time and instead write the XML out while reading from the database, as sketched below.
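    A minimal sketch of that streaming approach using JDBC plus the StAX XMLStreamWriter; the connection string, table, and column names here are hypothetical. Each row is written out as soon as it is read, so neither the result set nor the XML document is ever held in memory in full:
    import java.io.FileWriter;
    import java.sql.*;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    public class TableToXml {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@host:1521:SID", "user", "pass");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, name FROM customers");
                 FileWriter out = new FileWriter("download/customers.xml")) {

                XMLStreamWriter xml =
                    XMLOutputFactory.newInstance().createXMLStreamWriter(out);
                xml.writeStartDocument();
                xml.writeStartElement("customers");
                while (rs.next()) {
                    // One element per row, emitted immediately.
                    xml.writeStartElement("customer");
                    xml.writeAttribute("id", rs.getString("id"));
                    xml.writeCharacters(rs.getString("name"));
                    xml.writeEndElement();
                }
                xml.writeEndElement();
                xml.writeEndDocument();
                xml.close();
            }
        }
    }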

  • Csstored: Logging region out of memory

    Hi.
    Newest calendar server (Communications suite 7).
    When we start the calendar server, we get this in the store.log:
    [08/Oct/2009:12:26:39 +0200] bane csstored[2896]: General Notice: csstored is refreshing
    [08/Oct/2009:12:26:39 +0200] bane csstored[2896]: General Notice: csstored is refreshed
    [08/Oct/2009:12:26:39 +0200] bane csstored[2896]: General Notice: csstored is ready
    [08/Oct/2009:12:26:39 +0200] bane csstored[2896]: General Notice: Copying all the transaction log files from live db to hotbackup/archive dirs.
    [08/Oct/2009:12:26:40 +0200] bane csstored[2896]: General Error: hotbackup: Logging region out of memory; you may need to increase its size
    [08/Oct/2009:12:26:40 +0200] bane csstored[2896]: General Error: hotbackup: Recovery function for LSN 10 3275844 failed on backward pass
    [08/Oct/2009:12:26:40 +0200] bane csstored[2896]: General Error: hotbackup: PANIC: Not enough space
    [08/Oct/2009:12:26:40 +0200] bane csstored[2896]: General Critical: Failed to recover databases in /var/opt/sun/comms/calendar/SUNWics5/csdb/hotbackup/20091008
    [08/Oct/2009:12:26:40 +0200] bane csstored[2896]: General Error: hotbackup: DB_ENV->open: /var/opt/sun/comms/calendar/SUNWics5/csdb/hotbackup/20091008: DB_RUNRECOVERY: Fatal error, run database recovery
    [08/Oct/2009:12:26:40 +0200] bane csstored[2896]: General Notice: The hotbackup task has been temporarily disabled.
    A Google search on this error suggests putting these settings in the Berkeley DB_CONFIG file:
    set_cachesize 0 8388608 8
    set_lg_regionmax 524288
    set_lg_bsize 2097152
    Does calendar server even have a Berkeley DB_CONFIG file?
    This problem cropped up today, after a series of restarts due to another problem that is now fixed.
    What does this mean, and how do I fix it? :o)

    whaterverfdsa wrote:
    What platform are you running on (Solaris x86/SPARC/Red Hat Linux)?
    OpenSolaris (I know it's not supported, but we needed HA clustering without shared storage, and to my knowledge the only solution then was OpenSolaris...).
    OpenSolaris isn't supported or tested at all with Communications Suite 7, so if something does break you will not get any assistance from Sun to fix it. Make sure you keep plenty of backups.
    Try stopping calendar server, move all of the hotbackup files elsewhere (/calendar/SUNWics5/csdb/hotbackup/*) and restart calendar server. Does this fix the problem?
    Actually, I had put off this problem while I fixed some others, and when I was now going to do as you suggested, the problem had vanished... :o) Possibly the bad hotbackup(s) had cycled out? (We use the default settings for hotbackup and archive.) A fresh hotbackup/archive directory is created each day.
    The hotbackup mechanism is designed to warn the calendar server administrator of a possible corruption/problem with the production database. If the production database was fine but there was some issue with replaying the production database transaction log to the hotbackup db, then you can hit this scenario.
    So the problem is gone, but I don't think I understand what the error message really meant... The calendar server stored process attempts to replay the live production database transaction log to the hotbackup db copy. If there is a problem during this process (as was the case here), it could indicate a problem with the production database.
    I couldn't find any other reports of the error "Logging region out of memory; you may need to increase its size" with Calendar Server, so this may indicate a problem with Calendar Server (Berkeley DB) and OpenSolaris, a problem with the CS6.3 patch release provided with Comm-Suite-7, or some issue specific to your environment.
    Does it try to restore from a hotbackup after doing the hotbackup, to verify?
    No. It is up to the administrator to verify whether the production database is corrupt (db_verify / csdb check) and if so follow the recovery procedures as per:
    http://docs.sun.com/app/docs/doc/819-4654/acamh?a=view
    Regards,
    Shane.

  • Logging region out of memory

    Dear Berkeley dbxml users,
    I'm using an Oracle database accessed by two different clients via shared Web services. The amount of information stored in the database is fairly small since the two clients are still in development.
    After 50 or so transactions, I get an exception that states "Logging region out of memory, you may need to increase its size".
    I read in this document (http://www.oracle.com/technology/documentation/berkeley-db/db/gsg_txn/JAVA/logconfig.html#logregionsize) that I might have to change the default settings to suit my needs. But I find it a bit strange to have to change the default configuration for such a small application.
    Does it seem reasonable for me to change the logging configuration or am I missing something here?
    I thank you all in advance for reading this.
    Guillaume CHAPUIS

    Thank you very much for your answer.
    I have a maximum of three different containers open at the same time. Is that too many?
    Anyway, now that I know where the exception comes from, I'm going to try to reduce the number of open containers and the length of the path.
    Thanks again and have a nice day.
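    For reference, the log region size the linked document discusses can also be raised programmatically when the environment is opened. A rough, untested sketch, assuming the core Berkeley DB Java API where setLogRegionSize() corresponds to set_lg_regionmax (the path and value shown are illustrative, not recommendations):
    import java.io.File;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlManagerConfig;

    public class OpenEnv {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setInitializeCache(true);
            envConfig.setInitializeLocking(true);
            envConfig.setInitializeLogging(true);
            envConfig.setTransactional(true);
            // Raise the log region size (set_lg_regionmax equivalent);
            // each open container/database consumes space in this region.
            envConfig.setLogRegionSize(512 * 1024);

            Environment env = new Environment(new File("/path/to/env"), envConfig);
            XmlManager mgr = new XmlManager(env, new XmlManagerConfig());
            // ... open containers, run transactions ...
            mgr.delete();
            env.close();
        }
    }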
