Logging Subsystem
Hi,
I would like to create a new Logging subsystem entry in the select
list that is available for the Domain log filter (the one that
includes EJB, Cluster, RMI, IIOP, etc.).
In my application code I do:
new weblogic.logging.NonCatalogLogger("MySubsystem");
and I would like to see "MySubsystem" listed there!
Thanks
Simone
We are looking to do something similar: we filter on the <application> name
and would like to make it available in the Logging subsystem list. The MBean provides
getters for all the other Message Attributes, which leads me to think the Subsystem attribute
is configurable. Any insight on this would be appreciated.
Thanks.
Similar Messages
-
Logging subsystem failure in WS7u8
Hello, I am seeing an issue related to the logging system built into WS7u8.
I have a fairly simple web service that I built in NetBeans 6.8 using JAX-WS 2.2. I have deployed the service on both Solaris 10/SPARC and RHEL 5.4/x86_64 installations of WS7u8, and in both cases I see the following error logged when I attempt to call the web service (with log level set to finest in the General->Log Preferences panel of the admin interface):
[13/May/2010:08:59:24] warning (16948): for host 172.20.72.34 trying to POST /svc/MyService, service-j2ee reports: unable to publish logs successfully.
java.lang.NullPointerException
at java.util.PropertyResourceBundle.handleGetObject(PropertyResourceBundle.java:136)
at java.util.ResourceBundle.getObject(ResourceBundle.java:368)
at java.util.ResourceBundle.getString(ResourceBundle.java:334)
at java.util.logging.Formatter.formatMessage(Formatter.java:108)
at com.sun.webserver.logging.ServerFormatter.format(ServerFormatter.java:54)
at com.sun.webserver.logging.NSAPIServerHandler.publish(NSAPIServerHandler.java:94)
at java.util.logging.Logger.log(Logger.java:458)
at java.util.logging.Logger.doLog(Logger.java:480)
at java.util.logging.Logger.log(Logger.java:569)
at com.sun.xml.ws.server.PeptTie.setRuntimeException(PeptTie.java:88)
at com.sun.xml.ws.server.PeptTie._invoke(PeptTie.java:79)
at com.sun.xml.ws.protocol.soap.server.SOAPMessageDispatcher.invokeEndpoint(SOAPMessageDispatcher.java:280)
at com.sun.xml.ws.protocol.soap.server.SOAPMessageDispatcher$SoapInvoker.invoke(SOAPMessageDispatcher.java:588)
at com.sun.xml.ws.protocol.soap.server.SOAPMessageDispatcher.receive(SOAPMessageDispatcher.java:147)
at com.sun.xml.ws.server.Tie.handle(Tie.java:90)
at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.handle(WSServletDelegate.java:335)
at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.doPost(WSServletDelegate.java:290)
at com.sun.xml.ws.transport.http.servlet.WSServlet.doPost(WSServlet.java:79)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:816)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:917)
at org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:398)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277)
at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:255)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:586)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:556)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:187)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:586)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:556)
at com.sun.webserver.connector.nsapi.NSAPIProcessor.service(NSAPIProcessor.java:160)
I have a custom resource that I have configured for the service, so I have set the class-path-prefix to provide the libraries that I need:
<class-path-prefix>/lib/apache-log4j-1.2.15/log4j-1.2.15.jar:/lib/myResourceLib.jar:/lib</class-path-prefix>
(I have just /lib in the classpath because that's where my log4j.properties file is.)
Also, I have the following libraries in WEB-INF/lib for this webapp:
22$ ls -1 WEB-INF/lib/
activation.jar
FastInfoset.jar
gmbal-api-only.jar
http.jar
jaxb-api.jar
jaxb-impl.jar
jaxb-xjc.jar
jaxws-api.jar
jaxws-rt.jar
jaxws-tools.jar
jdom.jar
jsr173_api.jar
jsr181-api.jar
jsr250-api.jar
management-api.jar
masonLib.jar
mimepull.jar
policy.jar
saaj-api.jar
saaj-impl.jar
stax-ex.jar
streambuffer.jar
woodstox.jar
Why is this failing? Is the log4j JAR somehow conflicting with the built-in logging in WS7? Where should I look next?
Thanks,
Bill
So after reading your reply, I moved my JARs from the class-path prefix to the suffix, and while it still throws the same exception, the stack trace is different. Actually, there are two stack traces now, and the first one points out where in my code things are going off the rails.
[18/May/2010:07:50:31] warning (19400): for host 172.20.16.17 trying to POST /credit/CreditService, service-j2ee reports: PWC4215: Unexpected exception resolving reference
java.lang.NullPointerException
at com.mason.jndi.PropertiesFactory.getObjectInstance(PropertiesFactory.java:102)
at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304)
at org.apache.naming.NamingContext.lookup(NamingContext.java:801)
at org.apache.naming.NamingContext.lookup(NamingContext.java:148)
at org.apache.naming.NamingContext.lookup(NamingContext.java:789)
at org.apache.naming.NamingContext.lookup(NamingContext.java:161)
at com.sun.web.naming.ResourceFactory.getObjectInstance(ResourceFactory.java:55)
at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304)
at org.apache.naming.NamingContext.lookup(NamingContext.java:801)
at org.apache.naming.NamingContext.lookup(NamingContext.java:148)
at org.apache.naming.NamingContext.lookup(NamingContext.java:789)
at org.apache.naming.NamingContext.lookup(NamingContext.java:161)
at com.domain.MyService.initializeAdvantage(MyService.java:296)
at com.domain.MyService.login(MyService.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.sun.xml.ws.server.PeptTie._invoke(PeptTie.java:61)
at com.sun.xml.ws.protocol.soap.server.SOAPMessageDispatcher.invokeEndpoint(SOAPMessageDispatcher.java:280)
at com.sun.xml.ws.protocol.soap.server.SOAPMessageDispatcher$SoapInvoker.invoke(SOAPMessageDispatcher.java:588)
at com.sun.xml.ws.protocol.soap.server.SOAPMessageDispatcher.receive(SOAPMessageDispatcher.java:147)
at com.sun.xml.ws.server.Tie.handle(Tie.java:90)
at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.handle(WSServletDelegate.java:335)
at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.doPost(WSServletDelegate.java:290)
at com.sun.xml.ws.transport.http.servlet.WSServlet.doPost(WSServlet.java:79)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:816)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:917)
at org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:398)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277)
at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:255)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:586)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:556)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:187)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:586)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:556)
at com.sun.webserver.connector.nsapi.NSAPIProcessor.service(NSAPIProcessor.java:160)
[18/May/2010:07:50:31] warning (19400): for host 172.20.16.17 trying to POST /credit/CreditService, service-j2ee reports: unable to publish logs successfully.
java.lang.NullPointerException
at java.util.PropertyResourceBundle.handleGetObject(PropertyResourceBundle.java:136)
at java.util.ResourceBundle.getObject(ResourceBundle.java:378)
at java.util.ResourceBundle.getString(ResourceBundle.java:344)
at java.util.logging.Formatter.formatMessage(Formatter.java:108)
at com.sun.webserver.logging.ServerFormatter.format(ServerFormatter.java:54)
at com.sun.webserver.logging.NSAPIServerHandler.publish(NSAPIServerHandler.java:94)
at java.util.logging.Logger.log(Logger.java:458)
at java.util.logging.Logger.doLog(Logger.java:480)
at java.util.logging.Logger.log(Logger.java:569)
at com.sun.xml.ws.server.PeptTie.setRuntimeException(PeptTie.java:88)
at com.sun.xml.ws.server.PeptTie._invoke(PeptTie.java:79)
at com.sun.xml.ws.protocol.soap.server.SOAPMessageDispatcher.invokeEndpoint(SOAPMessageDispatcher.java:280)
at com.sun.xml.ws.protocol.soap.server.SOAPMessageDispatcher$SoapInvoker.invoke(SOAPMessageDispatcher.java:588)
at com.sun.xml.ws.protocol.soap.server.SOAPMessageDispatcher.receive(SOAPMessageDispatcher.java:147)
at com.sun.xml.ws.server.Tie.handle(Tie.java:90)
at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.handle(WSServletDelegate.java:335)
at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.doPost(WSServletDelegate.java:290)
at com.sun.xml.ws.transport.http.servlet.WSServlet.doPost(WSServlet.java:79)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:816)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:917)
at org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:398)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277)
at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:255)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:586)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:556)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:187)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:586)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:556)
at com.sun.webserver.connector.nsapi.NSAPIProcessor.service(NSAPIProcessor.java:160)
Looking at my code in that context, I discovered that I was causing the NPE. That allowed me to find and fix the problem in my code.
It seems odd, though, that I would get that strange message from the web server originally, and just changing the classpath config would make such a difference.
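For what it's worth, the secondary "unable to publish logs successfully" NPE in the traces (thrown inside Formatter.formatMessage via PropertyResourceBundle.handleGetObject) is consistent with a LogRecord that carries a resource bundle but a null message, e.g. when an exception whose getMessage() returns null is logged. This is an assumption about the root cause, not something confirmed in the thread; a minimal JDK-only reproduction of the pattern:

```java
import java.util.ListResourceBundle;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

class NullMessageNpe {
    // Returns true if formatting a null-message record against a resource
    // bundle throws NullPointerException, matching the server's stack trace:
    // formatMessage calls bundle.getString(record.getMessage()), and the
    // bundle's handleGetObject rejects a null key with NPE.
    static boolean triggersNpe() {
        LogRecord record = new LogRecord(Level.WARNING, null); // null message
        record.setResourceBundle(new ListResourceBundle() {
            protected Object[][] getContents() {
                return new Object[][] {{"some.key", "some value"}};
            }
        });
        try {
            new SimpleFormatter().formatMessage(record);
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }
}
```

If that is what happened here, the web server's handler was merely the messenger: the real failure was the application exception it tried to format.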
Thanks,
Bill -
Why are multiple log files created while using transactions in Berkeley DB?
We are using the Berkeley DB Java base API. We have already read/written a CDR file of 9 lakh (900,000) rows, both with and without transactions, using the secondary database concept. The issues we are seeing are as follows:
With transactions: the database environment is 1.63 GB, which is due to the number of log files created, each 10 MB.
Without transactions: the database environment is 588 MB, and only one log file is created, which is 10 MB. We would like to understand the concrete reason for this difference.
How are log files created? What does it mean to use or not use transactions in a DB environment? And what are these files: __db.001, __db.002, __db.003, __db.004, __db.005, and log files like log.0000000001? Please reply soon.
we are using berkeleydb java edition db base api ...
If you are seeing __db.NNN files in your environment root directory, these are the environment's shared region files. And since you see these, you are using Berkeley DB Core (with the Java/JNI Base API), not Berkeley DB Java Edition.
with transaction ...
without transaction ...
First of all, do you need transactions or not? Review the documentation section called "Why transactions?" in the Berkeley DB Programmer's Reference Guide.
without transaction-------size of database environment 588mb and here only one log file is created which is of 10mb.
There should be no logs created when transactions are not used. That single log file has likely remained there from a previous transactional run.
how log files are created and what is meant of using transaction and not using transaction in db environment and what are this db files db.001,db.002,_db.003,_db.004,__db.005 and log files like log.0000000001
Have you reviewed the basic documentation references for Berkeley DB Core?
- Berkeley DB Programmer's Reference Guide
in particular sections: The Berkeley DB products, Shared memory regions, Chapter 11. Berkeley DB Transactional Data Store Applications, Chapter 17. The Logging Subsystem.
- Getting Started with Berkeley DB (Java API Guide) and Getting Started with Berkeley DB Transaction Processing (Java API Guide).
If so, you would have had the answers to these questions; the __db.NNN files are the environment shared region files needed by the environment's subsystems (transaction, locking, logging, memory pool buffer, mutexes), and the log.MMMMMMMMMM are the log files needed for recoverability and created when running with transactions.
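To make the distinction concrete, a minimal configuration sketch (assuming the Berkeley DB Core Java/JNI API, com.sleepycat.db; the environment home path is a placeholder):

```java
import java.io.File;
import java.io.FileNotFoundException;
import com.sleepycat.db.DatabaseException;
import com.sleepycat.db.Environment;
import com.sleepycat.db.EnvironmentConfig;

class TxnEnvSketch {
    // Opens a transactional Berkeley DB Core environment. The logging
    // subsystem is what writes the log.NNNNNNNNNN files; each enabled
    // subsystem is backed by a shared region (the __db.NNN files).
    static Environment open(File home)
            throws DatabaseException, FileNotFoundException {
        EnvironmentConfig cfg = new EnvironmentConfig();
        cfg.setAllowCreate(true);
        cfg.setInitializeCache(true);    // memory pool region
        cfg.setInitializeLocking(true);  // locking region
        cfg.setInitializeLogging(true);  // logging region; writes log.* files
        cfg.setTransactional(true);      // transaction region
        return new Environment(home, cfg);
    }
}
```

Running with all four subsystems enabled is what produces both the __db.NNN region files and the growing set of log.* files; without setTransactional/setInitializeLogging, no transaction log is written.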
--Andrei -
Open Container: Logging region out of memory
When opening a container I am getting the following error. According to the docs it appears a new log file should be created, but this is not the case:
Utils: Logging region out of memory; you may need to increase its size
Utils: DB->get: method not permitted before handle's open method
Utils: DB->put: method not permitted before handle's open method
Utils: DB->put: method not permitted before handle's open method
Utils: DB->put: method not permitted before handle's open method
Utils: DB->put: method not permitted before handle's open method
Utils: DB->put: method not permitted before handle's open method
Error: com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
at com.sleepycat.dbxml.dbxml_javaJNI.XmlManager_openContainer__SWIG_2(Native Method)
at com.sleepycat.dbxml.XmlManager.openContainer(XmlManager.java:525)
at com.sleepycat.dbxml.XmlManager.openContainer(XmlManager.java:190)
at com.sleepycat.dbxml.XmlManager.openContainer(XmlManager.java:132)
at com.iconnect.data.pool.BerkeleyXMLDBPool.getContainer(BerkeleyXMLDBPool.java:195)
at com.iconnect.data.adapters.BerkleyXMLDBImpl.query(BerkleyXMLDBImpl.java:109)
at com.iconnect.data.DataManagerFactory.query(DataManagerFactory.java:112)
at controls.iConnectControl.IConnectImpl.login(IConnectImpl.java:100)
at controls.iConnectControl.IConnectBean.login(IConnectBean.java:110)
at app.a12.en.auth.LoginController.Login(LoginController.java:124)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:811)
at org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:752)
at org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:446)
at org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:237)
at org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:299)
at org.apache.beehive.netui.pageflow.internal.FlowControllerAction.execute(FlowControllerAction.java:55)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processActionPerform(PageFlowRequestProcessor.java:1403)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:647)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:702)
at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:565)
at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:525)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
at com.iconnect.security.SecurityFilter.doFilter(SecurityFilter.java:99)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:186)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:300)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:374)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:743)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:675)
at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:866)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
at java.lang.Thread.run(Thread.java:595)
Here is the output that we have. One thing to note: the exception implies that the container does not exist (although it does); it is falling into a sub-routine for:
myManager.existsContainer(containerName) == 0
The exception is thrown (it seems) because of the "Logging region..." error.
We had normal operation until the logging system ran out of memory. (I am using a shared Environment and am managing shared open Containers.)
By the way, is there a good end-to-end Java example available?
// Initialize the Environment Configuration
envConfig.setAllowCreate(true);        // Create the environment if it does not exist
envConfig.setInitializeCache(true);    // Turn on the shared memory pool subsystem
envConfig.setInitializeLocking(true);  // Turn on the locking subsystem
envConfig.setInitializeLogging(false); // NOTE: the logging subsystem is turned OFF here
envConfig.setTransactional(true);      // Turn on the transactional subsystem
envConfig.setErrorStream(System.err);
envConfig.setErrorPrefix("Utils");
envConfig.setNoLocking(true);
// Played with these settings as well:
//envConfig.setLogInMemory(true);
//envConfig.setLogBufferSize(10 * 1024 * 1024);
envConfig.setThreaded(true);
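For reference, a hedged sketch against the configuration above (assuming the Berkeley DB Core Java API, com.sleepycat.db): the "Logging region out of memory" message corresponds to the log region size limit, which the C API calls set_lg_regionmax and which EnvironmentConfig exposes as setLogRegionSize. Note also that a transactional environment needs the logging subsystem; the specific values below are guesses to tune, not recommendations.

```java
import com.sleepycat.db.EnvironmentConfig;

class LogRegionSketch {
    static EnvironmentConfig configure() {
        EnvironmentConfig cfg = new EnvironmentConfig();
        cfg.setAllowCreate(true);
        cfg.setInitializeCache(true);
        cfg.setInitializeLocking(true);
        cfg.setTransactional(true);
        // Transactions require the logging subsystem; disabling it while
        // transactional is a plausible source of the errors seen above.
        cfg.setInitializeLogging(true);
        // Grow the log region if "Logging region out of memory" persists.
        cfg.setLogRegionSize(1024 * 1024); // 1 MB; tune as needed
        return cfg;
    }
}
```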
Getting container: USER.dbxml
Utils: Logging region out of memory; you may need to increase its size
Container does not exist.... creating...
Utils: Logging region out of memory; you may need to increase its size
Utils: Logging region out of memory; you may need to increase its size
com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
at com.sleepycat.dbxml.dbxml_javaJNI.XmlManager_createContainer__SWIG_0(Native Method)
at com.sleepycat.dbxml.XmlManager.createContainer(XmlManager.java:485)
at com.sleepycat.dbxml.XmlManager.createContainer(XmlManager.java:152)
at com.sleepycat.dbxml.XmlManager.createContainer(XmlManager.java:122)
at com.iconnect.data.pool.BerkeleyXMLDBPool.getContainer(BerkeleyXMLDBPool.java:171)
at com.iconnect.data.adapters.BerkleyXMLDBImpl.query(BerkleyXMLDBImpl.java:109)
at com.iconnect.data.DataManagerFactory.query(DataManagerFactory.java:112)
at controls.iConnectControl.IConnectImpl.login(IConnectImpl.java:100)
at controls.iConnectControl.IConnectBean.login(IConnectBean.java:110)
at auth.LoginController.Login(LoginController.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:811)
at org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:752)
at org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:446)
at org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:237)
at org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:299)
at org.apache.beehive.netui.pageflow.internal.FlowControllerAction.execute(FlowControllerAction.java:55)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processActionPerform(PageFlowRequestProcessor.java:1403)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:647)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:702)
at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:565)
at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:525)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
at com.iconnect.security.SecurityFilter.doFilter(SecurityFilter.java:99)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:186)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:300)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:374)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:743)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:675)
at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:866)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
at java.lang.Thread.run(Thread.java:595)
Setting namespace: http://iconnect.com/schemas/user
com.sleepycat.dbxml.XmlException: Error: Cannot resolve container: USER.dbxml. Container not open and auto-open is not enabled. Container may not exist., errcode = CONTAINER_CLOSED -
Can I configure the log file extension?
Hi,
How can I configure the log file to use a format like: log.4c01c0f3?
And how can I configure the environment to have none of these files: __db.001 ... __db.005?
I have an env not created by me; that env does not have those __db.001 ... __db.005 files.
How can I configure an env like that?
Thanks a lot.
Hello,
What platform/version are you on?
The logging subsystem and related methods are documented at:
http://www.oracle.com/technology/documentation/berkeley-db/db/api_reference/C/lsn.html
I do not know of a method to change the log file name. Perhaps someone else might.
As for the __db.00X environment region files, these represent shared memory
regions which by default are created as files in the environment's home
directory. The region files can be configured to reside in-memory which is perhaps
why you are not seeing them. The following documentation provides additional
details:
http://www.oracle.com/technology/documentation/berkeley-db/db/gsg_txn/C/enabletxn.html#environments
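One way the other environment may have been created without __db.00X files (an assumption; there are other possibilities, such as system shared memory): opening it with private, in-process regions. A sketch, assuming the com.sleepycat.db API:

```java
import com.sleepycat.db.EnvironmentConfig;

class PrivateRegionSketch {
    static EnvironmentConfig configure() {
        EnvironmentConfig cfg = new EnvironmentConfig();
        cfg.setAllowCreate(true);
        cfg.setInitializeCache(true);
        // Regions live in process heap memory: no __db.00X backing files,
        // but the environment is then usable by a single process only.
        cfg.setPrivate(true);
        return cfg;
    }
}
```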
Thanks,
Sandra
Edited by: Oracle, Sandra Whitman on Jun 1, 2010 8:35 AM -
1.4 Logging (shutdown hook question really)
If I have a handler configured and log a record inside a thread running as a shutdown hook, more often than not I get nothing in the log file (or console, or wherever) and have to resort to System.err.println().
Now, from studying the code for LogManager, this turns out to be the result of the inner Cleaner thread (which is the log manager's shutdown hook) calling reset() on the manager.
So, this question morphs itself into how to get my shutdown hook to run before the one that resets the log manager.
Any ideas on adjusting the priority (order of execution) of shutdown hooks?
Guys (silkm, trejkaz and the good doctor),
Thanks for the input. Appreciate it.
On shutdown hooks being bad design - interesting. In the case I'm thinking of, the entry point class for a distributed task manager (see http://forum.java.sun.com/thread.jsp?forum=4&thread=335843&tstart=0&trange=30) is "embedded" in consumer code and does indeed have a clean shutdown method that we hope the consumer code will invoke before shutting itself down. But in the same way as the logging subsystem cannot rely on its consumer code to "do the right thing", we can't rely on our consumer code to behave itself. In addition, we do want to catch the case where an operator re-starts the consumer system (internal procedure demands unix processes are sent an interrupt rather than a kill). The code invoked by the clean shutdown and the caught shutdown persists (as XML) the state of the task manager, which is read on re-start to perform recovery operations. Now, before you all dive in and say "save the state every time it changes", let me say that the state changes very rapidly and the I/O overhead of saving it is considerable. Also, having a background thread that persists state on a regular basis was considered, but it introduces the problem that it becomes difficult to guarantee the validity of the persisted state.
On setting the thread priority - that had occurred to me and I shall be using this as one of my test cases to see what happens. As mentioned, the effect this has will vary by platform and VM. However, as the component in question will be running in a controlled environment, we should be able to pick a configuration that works.
On Linux threads and processes - Good point. But do unstarted threads have a process? This is significant to us as the target platforms for the task manager are Solaris and Linux.
On java.lang.Runtime's implementation of shutdown hooks - I tend to agree that it's a bit simplistic. Maybe we should raise a change request to allow greater control over how hooks are executed.
Thanks again and please chip in if you have any other ideas. -
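One workaround for the reset() problem discussed above (a sketch, not an official API guarantee): keep a direct reference to your own Handler and publish to it explicitly. Even after the LogManager's cleaner hook has run reset() and stripped the handlers from every Logger, a retained Handler can still get a record out. The class and names below are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogManager;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

class ShutdownSafeLog {
    static final List<String> captured = new ArrayList<>();
    static final Handler handler = new Handler() {
        @Override public void publish(LogRecord r) { captured.add(r.getMessage()); }
        @Override public void flush() {}
        @Override public void close() {}
    };

    // Simulates logging after LogManager's cleaner hook has already run:
    // reset() removes handlers from every Logger, so Logger-based calls are
    // silently dropped, but publishing straight to a retained Handler works.
    static void demo() {
        Logger logger = Logger.getLogger("shutdown.demo");
        logger.setUseParentHandlers(false);
        logger.addHandler(handler);
        LogManager.getLogManager().reset();   // what the cleaner hook does
        logger.info("lost");                  // no handlers left: dropped
        handler.publish(new LogRecord(Level.INFO, "kept")); // survives
    }
}
```

In a real shutdown hook the same idea applies: publish (and flush) through your own retained handler rather than through the global logger tree.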
WL5.1 Log format config
Is there a way to control the format of weblogic's logging?
I'm logging to Weblogic's log using Java code that looks something like:
T3Services.getT3Services().log().log( "Foo..." );
This results in log messages that look something like:
Wed Jan 17 15:57:03 CST 2001:<I> <T3Services> Foo...
I'd like to be able to modify, replace or even suppress the date, time, <I> and/or <T3Services>.
For example, my log subsystem is capable of generating the following message which I'd prefer to send to WL's log:
01-17-2001 15:57:03 <TRACE> <GUI> Foo...
Does WL5.1 support the level of log format customization I'm looking for, and if so where can I find information about configuring the logging format?
Owen Horne <[email protected]> wrote in message
news:3a663105$[email protected]..
>
Is there a way to control the format of weblogic's logging?
No. The format of the log files is fixed in the 5.1 and 6.0 releases.
>
I'm logging to Weblogic's log using Java code that looks something like:
T3Services.getT3Services().log().log( "Foo..." );
This results in log messages that look something like:
Wed Jan 17 15:57:03 CST 2001:<I> <T3Services> Foo...
I'd like to be able to modify, replace or even suppress the date, time, <I> and/or <T3Services>.
>
For example, my log subsystem is capable of generating the following message which I'd prefer to send to WL's log:
01-17-2001 15:57:03 <TRACE> <GUI> Foo...
Does WL5.1 support the level of log format customization I'm looking for, and if so where can I find information about configuring the logging format?
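Since the prefix is fixed in those releases, one workaround (a sketch; the class and method names are illustrative): pre-format the message body in application code before handing it to the WebLogic logger, accepting that WebLogic still prepends its own fixed prefix.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

class LogLine {
    // Builds the custom layout shown above. WebLogic still prepends its
    // fixed "date:<severity> <subsystem>" prefix, but the message body
    // carries the desired format.
    static String format(String level, String subsystem, String msg, Date when) {
        return new SimpleDateFormat("MM-dd-yyyy HH:mm:ss").format(when)
                + " <" + level + "> <" + subsystem + "> " + msg;
    }
}
```

For example, T3Services.getT3Services().log().log(LogLine.format("TRACE", "GUI", "Foo...", new Date())) would emit the body in the desired shape inside WebLogic's fixed envelope.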
Synchronous/Asynchronous Message Logging for WebLogic 6.1
Hi,
I have read through the documentation for logging messages using
JMX on WebLogic 6.1, but I have not been able to find how log messages
are sent from managed servers to the Administration Server. I
understand that notifications are sent via JMX, but it never
explicitly states if this is using JMS, or the mechanism by which they
are broadcast (point-to-point versus publish/subscribe). If anyone
could direct me to the documentation showing this, or get me started
in the right direction, I'd really appreciate it. Thank you for the
help, in advance.
Richard
As far as I can tell, after trying to squeeze some info out of BEA, the logging subsystem is all synchronous, i.e. no JMS is involved.
This, btw, caused major bottleneck is a design of mine, since I attached a listener to the logging subsystem, assuming it was asynchronous. Boy, was I wrong! :)
If you find out how to make listening to the subsystems asynchronous, I'd love to hear it!
Cheers,
/JMK -
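Since the notification pipeline is synchronous, the usual workaround is to make the listener itself cheap: enqueue the event and return, and let a background thread do the slow work. The sketch below shows the generic queue hand-off pattern in plain Java; it is not the actual WebLogic 6.1 listener API, and `onLogEvent`/`handle` are hypothetical stand-ins for whatever your listener receives and does.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hand-off pattern: the synchronous callback only enqueues the event;
// a background worker thread does the slow processing, so the logging
// subsystem's thread is never blocked by the listener.
public class AsyncLogListener {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final AtomicInteger processed = new AtomicInteger();

    public AsyncLogListener() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    handle(queue.take()); // blocks until an event arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // exit cleanly on shutdown
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the logging subsystem's thread: returns immediately.
    public void onLogEvent(String event) {
        queue.offer(event);
    }

    // Runs on the worker thread: may be slow (I/O, network, ...).
    private void handle(String event) {
        processed.incrementAndGet();
    }

    public int processedCount() {
        return processed.get();
    }
}
```

The trade-off is that events can be lost on a crash before the worker drains the queue, which may or may not matter for a log consumer.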
How can I log the Dying status?
I tried:
logging subsystem keepalive level debug-7
logging host <hostip> facility 7 log-level debug-7
without success.
using WebNS: 7.20 Build 305
thanks in advance
Here is the log; nothing shows up for the Dying state change:
muecss01-24# show service nolta
Name: nolta Index: 26
Type: Local State: Dying
Rule ( 172.29.xx.xx ANY ANY )
Session Redundancy: Disabled
Redirect Domain:
Redirect String:
Keepalive: (TCP-80 30 5 60 )
Last Clearing of Stats Counters: 07/05/2004 11:55:17
Mtu: 1500 State Transitions: 4
Total Local Connections: 0 Total Backup Connections: 0
Current Local Connections: 0 Current Backup Connections: 0
Total Connections: 0 Max Connections: 65534
Total Reused Conns: 0
Weight: 1 Load: 2
DFP: Disable
muecss01-24#
muecss01-24#
muecss01-24# show log sys.log tail 50
JUL 23 13:29:13 1/1 2142 KAL-7: kal_ActiveIcp: kalIndex = 26
JUL 23 13:29:13 1/1 2143 KAL-7: ICP KAL START keepalive 26 !
JUL 23 13:29:13 1/1 2144 NETMAN-2: Enterprise:Service Transition:nolta -> down
JUL 23 13:31:50 1/1 2145 NETMAN-5: Enterprise:Service Transition:nolta -> alive
JUL 23 13:33:39 1/1 2146 KAL-7: kal_RemoveServiceToKal: keepalive = 26 serviceIndex = 26
JUL 23 13:33:39 1/1 2147 KAL-7: kal_SuspendIcp: kalIndex = 26
JUL 23 13:33:39 1/1 2148 KAL-7: ICP KAL STOP keepalive 26 !
JUL 23 13:33:39 1/1 2149 KAL-7: CREATING keepalive for service nolta !
JUL 23 13:33:39 1/1 2150 KAL-7: kal_AddServiceToKal: keepalive = 26 serviceIndex = 26
JUL 23 13:33:39 1/1 2151 KAL-7: kal_ActiveIcp: kalIndex = 26
JUL 23 13:33:39 1/1 2152 KAL-7: ICP KAL START keepalive 26 !
JUL 23 13:33:39 1/1 2153 KAL-7: kal_SingleServiceNotify: kalIndex = 26 kalSvcEvent=4
here also the configured log level:
logging subsystem syssoft level error-3
logging subsystem keepalive level debug-7
logging subsystem netman level debug-7
Any ideas? -
Confusion about In-Memory logging and Permanent message
Dear sir:
In the Replication C-GSG.pdf document, it describes as follow:
For the master (again, using the replication framework), things are a little more
complicated than simple message acknowledgment. Usually in a replicated application,
the master commits transactions asynchronously; that is, the commit operation does not
block waiting for log data to be flushed to disk before returning. So when a master is
managing permanent messages, it typically blocks the committing thread immediately
before commit() returns. The thread then waits for acknowledgments from its replicas.
If it receives enough acknowledgments, it continues to operate as normal.
If the master does not receive message acknowledgments — or, more likely, it does not
receive enough acknowledgments — the committing thread flushes its log data to disk
and then continues operations as normal. The master application can do this because
replicas that fail to handle a message, for whatever reason, will eventually catch up to
the master. So by flushing the transaction logs to disk, the master is ensuring that the
data modifications have made it to stable storage in one location (its own hard drive).
My question:
If I have configured in-memory logging in the logging subsystem, does that mean that even if the master does not receive enough acknowledgments, the committing thread will not flush the log data to disk and will just let it stay in the memory region?
Yes, that's correct.
You might find this additional information helpful:
db-4.5.20/docs/ref/program/ram.html -
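For reference, the in-memory logging configuration discussed above looks roughly like the sketch below using the Berkeley DB Java API (the same `setLogInMemory`/`setLogBufferSize` calls that appear commented out in other posts in this digest). The 10 MB buffer size and the "./dbenv" home directory are arbitrary example values, not anything from the original posts. With `setLogInMemory(true)` the log records only ever live in the in-memory buffer, so nothing can be flushed to disk even when the master fails to collect enough acknowledgments; durability then rests entirely on the replicas.

```java
import java.io.File;
import java.io.FileNotFoundException;

import com.sleepycat.db.DatabaseException;
import com.sleepycat.db.Environment;
import com.sleepycat.db.EnvironmentConfig;

public class InMemoryLogEnv {
    // Sketch: open an environment whose transaction log never touches disk.
    // The 10 MB buffer and "./dbenv" home are arbitrary example values.
    public static Environment open() throws DatabaseException, FileNotFoundException {
        EnvironmentConfig conf = new EnvironmentConfig();
        conf.setAllowCreate(true);
        conf.setInitializeCache(true);
        conf.setInitializeLocking(true);
        conf.setInitializeLogging(true);         // logging subsystem on...
        conf.setTransactional(true);
        conf.setLogInMemory(true);               // ...but records stay in memory only
        conf.setLogBufferSize(10 * 1024 * 1024); // must hold all logs of the largest active txn
        return new Environment(new File("./dbenv"), conf);
    }
}
```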
Spontaneous trouble receiving mail from Mail Application and Web Mail
Hello,
Unfortunately I need some assistance. For unknown reasons, today at about 12 PM our in house email server (10.3.9) started failing to allow any user to connect to download mail. I have users running POP and other users running IMAP accounts that had the same lack of function.
When I connect using web mail I get the following error:
Error connecting to IMAP server: localhost. 61 : Connection refused
When I connect from the Mail application I get the following error:
The server "mail.mycompany.com" refused to allow a connection on port 110
When I connect using the Server Admin program the IMAP and POP logs are empty despite being set to the “All events” setting, and the SMTP log contains the following message frequently despite still being able to send email from the affected accounts:
(temporary failure. Command output: couldn't connect to lmtpd: Unknown Error Code: 0_ 421 4.3.0 deliver: couldn't connect to lmtpd_ )
I have restarted the server, I have run disk utility to repair permissions and made no intentional adjustments to the server in the last week. The last adjustment to the mail server I made was about 1 month ago to turn on IMAP authentication to allow me to switch from a POP to an IMAP email account.
Please help me figure this out!
Thanks in advance.
Mike
(PS I am a veterinarian, not an IT guru. I run my own mail and web servers because with Apple products, I can.)
Thanks for your assistance.
I found the logs and while there is a mail.log there is no mailaccess.log file.
There is a mailaccess.log.0.gz which I decompressed, and I also decompressed the mailaccess.log.1.gz.
The mailaccess.log.1 file contains entries from July 10th through July 11th.
The mailaccess.log.0 file contains entries from August 14th through August 15th (today). Seems like there was a long period that did not get archived correctly.
Here is a subset of what was in the mail.log from this morning:
Aug 15 07:04:03 www postfix/pipe[24378]: 5B2201493AC: to=<[email protected]>, relay=cyrus, delay=1001, status=bounced (Command time limit exceeded: "/usr/bin/cyrus/bin/deliver")
Aug 15 07:04:03 www postfix/pipe[24376]: 5B2201493AC: to=<[email protected]>, relay=cyrus, delay=1001, status=bounced (Command time limit exceeded: "/usr/bin/cyrus/bin/deliver")
Aug 15 07:04:03 www postfix/cleanup[10663]: 86BD71493BA: message-id=<[email protected]>
Aug 15 07:04:03 www postfix/qmgr[25517]: 86BD71493BA: from=, size=19952, nrcpt=1 (queue active)
Aug 15 07:04:04 www postfix/smtp[10668]: 86BD71493BA: to=<sentto-12433195-1540-1187185326-mrbroome=avmi.net@returns.groups.yahoo.com> , relay=rtn7.grp.scd.yahoo.com[66.218.66.214], delay=1, status=sent (250 ok 1187186644 qp 73498)
Aug 15 07:06:16 www postfix/pipe[26600]: 8D50C1493AF: to=<[email protected]>, relay=cyrus, delay=1002, status=bounced (Command time limit exceeded: "/usr/bin/cyrus/bin/deliver")
Aug 15 07:06:16 www postfix/pipe[26602]: 8D50C1493AF: to=<[email protected]>, relay=cyrus, delay=1002, status=bounced (Command time limit exceeded: "/usr/bin/cyrus/bin/deliver")
Aug 15 07:06:16 www postfix/cleanup[12865]: 56B5E1493BC: message-id=<[email protected]>
Aug 15 07:06:16 www postfix/qmgr[25517]: 56B5E1493BC: from=, size=51949, nrcpt=1 (queue active)
Aug 15 07:06:22 www postfix/smtp[12866]: 56B5E1493BC: to=<[email protected]>, relay=mailer.versiontracker.com[66.179.48.93], delay=6, status=sent (250 2.0.0 l7FE6L807756 Message accepted for delivery)
Aug 15 07:30:50 www postfix/smtpd[7082]: unable to get certificate from '/etc/postfix/server.pem'
Aug 15 07:30:50 www postfix/smtpd[7082]: 7082:error:02001002:system library:fopen:No such file or directory:bss_file.c:278:fopen('/etc/postfix/server.pem','r'):
Aug 15 07:30:50 www postfix/smtpd[7082]: 7082:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:280:
Aug 15 07:30:50 www postfix/smtpd[7082]: 7082:error:140DC002:SSL routines:SSLCTX_use_certificate_chainfile:system lib:ssl_rsa.c:760:
Aug 15 07:30:50 www postfix/smtpd[7082]: TLS engine: cannot load RSA cert/key data
Aug 15 07:30:50 www postfix/smtpd[7082]: connect from valhalla.mailpure.com[66.109.52.210]
Aug 15 07:30:51 www postfix/smtpd[7082]: 183AA1493D0: client=valhalla.mailpure.com[66.109.52.210]
Aug 15 07:30:51 www postfix/cleanup[7088]: 183AA1493D0: message-id=<[email protected]>
Aug 15 07:30:52 www postfix/qmgr[25517]: 183AA1493D0: from=<[email protected]>, size=26283, nrcpt=2 (queue active)
Aug 15 07:30:52 www postfix/smtpd[7082]: disconnect from valhalla.mailpure.com[66.109.52.210]
Aug 15 07:47:32 www postfix/pipe[7105]: 183AA1493D0: to=<[email protected]>, relay=cyrus, delay=1001, status=bounced (Command time limit exceeded: "/usr/bin/cyrus/bin/deliver")
Aug 15 07:47:32 www postfix/pipe[7109]: 183AA1493D0: to=<[email protected]>, relay=cyrus, delay=1001, status=bounced (Command time limit exceeded: "/usr/bin/cyrus/bin/deliver")
Aug 15 07:47:32 www postfix/cleanup[22267]: CD81D1493F0: message-id=<[email protected]>
Aug 15 07:47:32 www postfix/qmgr[25517]: CD81D1493F0: from=, size=28115, nrcpt=1 (queue active)
Aug 15 07:47:37 www postfix/smtp[22271]: CD81D1493F0: to=<[email protected]>, relay=cvm39.vetmed.wsu.edu[134.121.130.6], delay=5, status=sent (250 2.6.0 <[email protected]> Queued mail for delivery)
The following is copied from the mailaccess.log.0 file from yesterday when the trouble started:
Aug 14 21:43:56 www master[269]: exiting on SIGTERM/SIGINT
Aug 14 21:43:56 www deliver[24981]: backend_connect(): couldn't read initial greeting: (null)
Aug 14 21:43:56 www master[25420]: process started
Aug 14 21:48:55 www deliver[25478]: connect(/var/imap/socket/lmtp) failed: Connection refused
Aug 14 21:50:22 www master[25519]: process started
Aug 14 22:07:03 www deliver[25632]: connect(/var/imap/socket/lmtp) failed: Connection refused
Aug 14 22:40:22 www deliver[25829]: connect(/var/imap/socket/lmtp) failed: Connection refused
Aug 15 06:23:50 www ctl_cyrusdb[25520]: DBERROR db4: PANIC: Too many open files
Aug 15 06:23:50 www ctl_cyrusdb[25520]: DBERROR: critical database situation
Aug 14 23:23:50 www master[25519]: process 25520 exited, status 75
Aug 14 23:23:50 www master[25519]: ready for work
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR db4: fatal region error detected; run recovery
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: dbenv->open '/var/imap/db' failed: DB_RUNRECOVERY: Fatal error, run database recovery
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: init() on berkeley
Aug 14 23:23:50 www ctl_cyrusdb[26081]: checkpointing cyrus databases
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR db4: txn_checkpoint interface requires an environment configured for the transaction subsystem
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: couldn't checkpoint: Invalid argument
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: sync /var/imap/db: cyrusdb error
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR db4: DBENV->logarchive interface requires an environment configured for the logging subsystem
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: error listing log files: Invalid argument
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: archive /var/imap/db: cyrusdb error
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR db4: txn_checkpoint interface requires an environment configured for the transaction subsystem
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: couldn't checkpoint: Invalid argument
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: sync /var/imap/db: cyrusdb error
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR db4: DBENV->logarchive interface requires an environment configured for the logging subsystem
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: error listing log files: Invalid argument
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: archive /var/imap/db: cyrusdb error
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR db4: txn_checkpoint interface requires an environment configured for the transaction subsystem
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: couldn't checkpoint: Invalid argument
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: sync /var/imap/db: cyrusdb error
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR db4: DBENV->logarchive interface requires an environment configured for the logging subsystem
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: error listing log files: Invalid argument
Aug 14 23:23:50 www ctl_cyrusdb[26081]: DBERROR: archive /var/imap/db: cyrusdb error
Aug 14 23:23:50 www ctl_cyrusdb[26081]: done checkpointing cyrus databases
Looks like my mail database is trashed. Let me know if this is correct and if so, how I should go about trying to restore/rebuild, etc.
Thanks again for your assistance.
Mike -
Very subtle DB corruption ?
Hello all,
I'm experiencing what I call a very subtle bug. Subtle because there are no exceptions, no memory leak, not even an error message.
My application uses BDB XML and it inserts, updates, removes and searches documents - pretty basic functionality. But after some time of use, my users are calling me saying the searches never return any results. I mean, they send a request, it's sent to BDB, but it doesn't return.
First I thought it was another memory leak, but memory consumption is as high as always and stabilizes at some point. Just to check, I restarted the server and tried searching again - dog slow, no results, even with 600MB of free RAM.
The only solution was to run recovery, which I chose to do manually this time (I don't do it automatically yet). I started the server again, and it returned to its normal state.
So, I can only wonder what could possibly cause this? Are the containers getting corrupted (although no corruption messages appear)? My Environment is managed by this class:
package br.gov.al.delegaciainterativa.controles;
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import br.gov.al.delegaciainterativa.utils.Dir;
import com.sleepycat.db.DatabaseException;
import com.sleepycat.db.Environment;
import com.sleepycat.db.EnvironmentConfig;
import com.sleepycat.dbxml.XmlContainer;
import com.sleepycat.dbxml.XmlContainerConfig;
import com.sleepycat.dbxml.XmlException;
import com.sleepycat.dbxml.XmlManager;
import com.sleepycat.dbxml.XmlManagerConfig;
public class EnvironmentInit {
    private Environment myEnv; // environment object
    private XmlManager myManager;
    private XmlManagerConfig managerConfig;
    private File envPath;
    private boolean ismyEnvOpen = false;
    private String nomeAmbiente;
    private String separador;
    private String envHome;
    private ArrayList containersAbertos;

    public EnvironmentInit(String envHome, String nomeAmbiente)
            throws Throwable {
        this.nomeAmbiente = nomeAmbiente;
        containersAbertos = new ArrayList();
        separador = System.getProperty("file.separator"); // platform path separator
        this.envHome = envHome + separador + nomeAmbiente;
        System.out.println(".EnvironmentInit:iniciar");
        iniciar();
    }

    public EnvironmentInit(String envHome, String nomeAmbiente, XmlManager mgr)
            throws Throwable {
        this.nomeAmbiente = nomeAmbiente;
        this.myManager = mgr;
        containersAbertos = new ArrayList();
        separador = System.getProperty("file.separator"); // platform path separator
        this.envHome = envHome + separador + nomeAmbiente;
        iniciar();
    }

    private void iniciar() throws Exception {
        Dir.criarDiretorio(this.envHome); // create the directory
        envPath = new File(this.envHome); // the Environment constructor requires a File
        if (!envPath.isDirectory()) {
            throw new Exception(envPath.getPath()
                    + " does not exist or is not a directory.");
        }
        try {
            EnvironmentConfig envConf = new EnvironmentConfig();
            //envConf.setCacheSize(50 * 1024 * 1024); // keep the default cache size
            envConf.setAllowCreate(true);       // create the environment if it does not exist
            envConf.setInitializeCache(true);   // turn on the shared memory region
            envConf.setInitializeLocking(true); // turn on the locking subsystem
            envConf.setInitializeLogging(true); // turn on the logging subsystem
            envConf.setTransactional(true);     // turn on the transaction subsystem - step 1 for transactions
            //envConf.setLogInMemory(true);
            //envConf.setLogBufferSize(10 * 1024 * 1024); // logs are used to recover the database after corruption
            envConf.setErrorStream(System.err);
            //envConf.setRunRecovery(true); // run recovery automatically
            myEnv = new Environment(envPath, envConf);
            managerConfig = new XmlManagerConfig();
            managerConfig.setAdoptEnvironment(true); // lets the XmlManager, when closed, also close the environment
            // managerConfig.setAllowAutoOpen(true); // allows opening a container automatically
            //managerConfig.setAllowExternalAccess(true); // external access
            myManager = new XmlManager(myEnv, managerConfig);
            myManager.setDefaultContainerType(XmlContainer.WholedocContainer);
            ismyEnvOpen = true;
        } catch (DatabaseException de) {
            System.err.println("[erro] EnvironmentInit:iniciar - database error");
        } catch (FileNotFoundException fnfe) {
            System.err.println("[erro] EnvironmentInit:iniciar - missing configuration");
        } catch (Exception e) {
            System.err.println("[erro] EnvironmentInit:iniciar \n" + e.toString());
        }
    }

    // Returns the path to the database environment.
    public File getDbEnvPath() {
        return envPath;
    }

    // Returns the database environment encapsulated by this class.
    public Environment getEnvironment() {
        return myEnv;
    }

    // Returns the XmlManager encapsulated by this class.
    public XmlManager getManager() {
        return myManager;
    }

    /*
     * Reopens the environment with the prepared configuration. The
     * containers must be reopened afterwards.
     */
    public void reabrir() {
        if (ismyEnvOpen == false) {
            try {
                iniciar();
            } catch (Exception e) {
                System.err.println("[erro] EnvironmentInit:reabrir - failed to reopen the environment!");
            }
        }
    }

    public String getName() {
        return nomeAmbiente;
    }

    public void cleanup() throws DatabaseException {
        ismyEnvOpen = false;
        fechaContainersAbertos();
        try {
            if (myManager != null) {
                // myEnv is closed automatically by myManager (adopted environment)
                myManager.delete(); // used instead of close() because it is safer
                ismyEnvOpen = false;
            }
        } catch (Exception de) {
            System.err.println("[erro] EnvironmentInit:cleanup - could not close the database");
        }
        System.out.println(".EnvironmentInit:cleanup");
    }

    private void registraContainerAberto(XmlContainer container) {
        containersAbertos.add(container);
    }

    private void fechaContainersAbertos() {
        for (int i = 0; i < containersAbertos.size(); i++) {
            try {
                System.out.println(".EnvironmentInit:fechaContainer:" + ((XmlContainer) containersAbertos.get(i)).getName());
                ((XmlContainer) containersAbertos.get(i)).closeContainer();
                ((XmlContainer) containersAbertos.get(i)).delete(); // really guarantees the close, destroying the associated object
            } catch (XmlException e) {
                System.err.println("[erro] EnvironmentInit:fechaContainersAbertos - failed to close container");
                e.printStackTrace();
            }
        }
        containersAbertos.clear();
    }

    public XmlManagerConfig getXmlManagerConfig() {
        return managerConfig;
    }

    public XmlContainer abrirContainer(String nome) {
        int ind = indiceContainerRegistrado(nome);
        XmlContainer container = null;
        if (ind >= 0) {
            return (XmlContainer) containersAbertos.get(ind); // already open
        }
        boolean existe = Dir.existeArquivo(envPath.toString() + separador, nome);
        try {
            if (!existe) {
                container = myManager.createContainer(nome);
                System.out.println(".EnvironmentInit:abrirContainer (create) : " + nome + " - creating container");
                return null;
            } else {
                System.out.println(".EnvironmentInit:abrirContainer - nome = '" + nome + "'");
                XmlContainerConfig conf = new XmlContainerConfig();
                conf.setAllowCreate(true);
                conf.setTransactional(true);
                container = myManager.openContainer(nome, conf);
                registraContainerAberto(container); // record that the container is open
                return container;
            }
        } catch (XmlException e) {
            e.printStackTrace();
            return null;
        }
    }

    public boolean removeContainer(XmlContainer c) {
        String path = null;
        try {
            path = envPath.toString() + separador + c.getName();
            c.delete(); // used instead of closeContainer() because it is safer
            myManager.removeContainer(path);
            System.out.println(".EnvironmentInit:removeContainer : " + path);
            return true;
        } catch (XmlException e) {
            System.err.println("[erro] EnvironmentInit:removeContainer - failed to remove container");
            return false;
        }
    }

    public int indiceContainerRegistrado(String nome) {
        try {
            for (int i = 0; i < containersAbertos.size(); i++) {
                XmlContainer cont = (XmlContainer) containersAbertos.get(i);
                if (cont.getName().compareTo(nome) == 0)
                    return i;
            }
            return -1;
        } catch (Exception e) {
            return -1;
        }
    }
}
Is there anything wrong with it, that could be causing this? Any help would be much appreciated.
thanks,
-- Breno Costa -
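One thing worth ruling out in the class above: recovery is never run on startup (the setRunRecovery call is commented out), yet running recovery manually is exactly what fixes the hangs. If any process ever exits without calling cleanup(), stale locks remain in the environment's shared region, and later searches can block on them forever with no error reported. A hedged sketch of a startup configuration that runs recovery on the first open, assuming only one process uses the environment at that moment (the class and method names here are illustrative):

```java
import com.sleepycat.db.EnvironmentConfig;

public class RecoveringEnvConfig {
    // Sketch: configuration for the *first* environment open at startup.
    // Recovery discards stale locks and half-finished transactions left
    // behind by a crashed or killed process; it must only run while no
    // other process has the environment open.
    public static EnvironmentConfig startupConfig() {
        EnvironmentConfig envConf = new EnvironmentConfig();
        envConf.setAllowCreate(true);
        envConf.setInitializeCache(true);
        envConf.setInitializeLocking(true);
        envConf.setInitializeLogging(true);
        envConf.setTransactional(true);
        envConf.setRunRecovery(true); // the line commented out in the original class
        return envConf;
    }
}
```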
BDB vxworks 6.6 kernel port error
Hello,
I have Berkeley DB 4.7.25 compiled in kernel. When I try the example in txn_guide.c, I have got the following error:
Error opening environment: S_dosFsLib_FILE_NOT_FOUND
I'm all done.
value = 10 = 0xa
I traced the source code; it fails in __rep_reset_init() when it tries to open the file __db.rep.init. How can I fix this? Thanks.
Allan
#ifdef HAVE_REPLICATION
if ((ret = __rep_reset_init(env)) != 0 ||
(ret = __env_remove_env(env)) != 0 ||
#else
Have I missed anything? Thanks.
/* File: txn_guide.c */
/* We assume an ANSI-compatible compiler */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <db.h>
#ifdef _WIN32
#include <windows.h>
#define PATHD '\\'
extern int getopt(int, char * const *, const char *);
extern char *optarg;
typedef HANDLE thread_t;
#define thread_create(thrp, attr, func, arg) \
(((*(thrp) = CreateThread(NULL, 0, \
(LPTHREAD_START_ROUTINE)(func), (arg), 0, NULL)) == NULL) ? -1 : 0)
#define thread_join(thr, statusp) \
((WaitForSingleObject((thr), INFINITE) == WAIT_OBJECT_0) && \
GetExitCodeThread((thr), (LPDWORD)(statusp)) ? 0 : -1)
typedef HANDLE mutex_t;
#define mutex_init(m, attr) \
(((*(m) = CreateMutex(NULL, FALSE, NULL)) != NULL) ? 0 : -1)
#define mutex_lock(m) \
((WaitForSingleObject(*(m), INFINITE) == WAIT_OBJECT_0) ? 0 : -1)
#define mutex_unlock(m) (ReleaseMutex(*(m)) ? 0 : -1)
#else
#include <pthread.h>
#include <unistd.h>
#define PATHD '/'
typedef pthread_t thread_t;
#define thread_create(thrp, attr, func, arg) \
pthread_create((thrp), (attr), (func), (arg))
#define thread_join(thr, statusp) pthread_join((thr), (statusp))
typedef pthread_mutex_t mutex_t;
#define mutex_init(m, attr) pthread_mutex_init((m), (attr))
#define mutex_lock(m) pthread_mutex_lock(m)
#define mutex_unlock(m) pthread_mutex_unlock(m)
#endif
/* Run 5 writers threads at a time. */
#define NUMWRITERS 5
/*
 * Printing of a thread_t is implementation-specific, so we
 * create our own thread IDs for reporting purposes.
 */
int global_thread_num;
mutex_t thread_num_lock;
/* Forward declarations */
int count_records(DB *, DB_TXN *);
int open_db(DB **, const char *, const char *, DB_ENV *, u_int32_t);
int usage(void);
void *writer_thread(void *);
/* Usage function */
int
usage()
{
	fprintf(stderr, " [-h <database_home_directory>]\n");
	return (EXIT_FAILURE);
}
#if 0
int
main(int argc, char *argv[])
/* Initialize our handles */
DB *dbp = NULL;
DB_ENV *envp = NULL;
thread_t writer_threads[NUMWRITERS];
int ch, i, ret, ret_t;
u_int32_t env_flags;
char *db_home_dir;
/* Application name */
const char *prog_name = "txn_guide";
/* Database file name */
const char *file_name = "mydb.db";
/* Parse the command line arguments */
#ifdef _WIN32
db_home_dir = ".\\";
#else
db_home_dir = "./";
#endif
while ((ch = getopt(argc, argv, "h:")) != EOF)
switch (ch) {
case 'h':
db_home_dir = optarg;
break;
case '?':
default:
return (usage());
}
}
#endif
int
myDbTest(char *home)
{
/* Initialize our handles */
DB *dbp = NULL;
DB_ENV *envp = NULL;
thread_t writer_threads[NUMWRITERS];
int ch, i, ret, ret_t;
u_int32_t env_flags;
char *db_home_dir;
/* Application name */
const char *prog_name = "txn_guide";
/* Database file name */
const char *file_name = "mydb.db";
/* Parse the command line arguments */
#ifdef _WIN32
db_home_dir = ".\\";
#else
db_home_dir = "./";
#endif
db_home_dir = home;
/* Create the environment */
ret = db_env_create(&envp, 0);
if (ret != 0) {
fprintf(stderr, "Error creating environment handle: %s\n",
db_strerror(ret));
goto err;
}
/*
 * Indicate that we want db to perform lock detection internally.
 * Also indicate that the transaction with the fewest number of
 * write locks will receive the deadlock notification in
 * the event of a deadlock.
 */
ret = envp->set_lk_detect(envp, DB_LOCK_MINWRITE);
if (ret != 0) {
fprintf(stderr, "Error setting lock detect: %s\n",
db_strerror(ret));
goto err;
}
envp->set_shm_key(envp, 10);
env_flags =
DB_CREATE | /* Create the environment if it does not exist */
DB_RECOVER | /* Run normal recovery. */
DB_INIT_LOCK | /* Initialize the locking subsystem */
DB_INIT_LOG | /* Initialize the logging subsystem */
DB_INIT_TXN | /* Initialize the transactional subsystem. This
* also turns on logging. */
DB_INIT_MPOOL | /* Initialize the memory pool (in-memory cache) */
DB_THREAD; /* Cause the environment to be free-threaded */
/* Now actually open the environment */
ret = envp->open(envp, db_home_dir, env_flags, 0);
if (ret != 0) {
fprintf(stderr, "Error opening environment: %s\n",
db_strerror(ret));
goto err;
}
/*
 * If we had utility threads (for running checkpoints or
 * deadlock detection, for example) we would spawn those
 * here. However, for a simple example such as this,
 * that is not required.
 */
/* Open the database */
ret = open_db(&dbp, prog_name, file_name,
envp, DB_DUPSORT);
if (ret != 0)
goto err;
/* Initialize a mutex. Used to help provide thread ids. */
(void)mutex_init(&thread_num_lock, NULL);
/* Start the writer threads. */
for (i = 0; i < NUMWRITERS; i++)
(void)thread_create(
&writer_threads[i], NULL, writer_thread, (void *)dbp);
/* Join the writers */
for (i = 0; i < NUMWRITERS; i++)
(void)thread_join(writer_threads[i], NULL);
err:
/* Close our database handle, if it was opened. */
if (dbp != NULL) {
ret_t = dbp->close(dbp, 0);
if (ret_t != 0) {
fprintf(stderr, "%s database close failed: %s\n",
file_name, db_strerror(ret_t));
ret = ret_t;
}
}
/* Close our environment, if it was opened. */
if (envp != NULL) {
ret_t = envp->close(envp, 0);
if (ret_t != 0) {
fprintf(stderr, "environment close failed: %s\n",
db_strerror(ret_t));
ret = ret_t;
}
}
/* Final status message and return. */
printf("I'm all done.\n");
return (ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE);
}
/*
 * A function that performs a series of writes to a
 * Berkeley DB database. The information written
 * to the database is largely nonsensical, but the
 * mechanism of transactional commit/abort and
 * deadlock detection is illustrated here.
 */
void *
writer_thread(void *args)
{
static char *key_strings[] = {
"key 1", "key 2", "key 3", "key 4", "key 5",
"key 6", "key 7", "key 8", "key 9", "key 10"
};
DB *dbp;
DB_ENV *envp;
DBT key, value;
DB_TXN *txn;
int i, j, payload, ret, thread_num;
int retry_count, max_retries = 20; /* Max retry on a deadlock */
dbp = (DB *)args;
envp = dbp->get_env(dbp);
/* Get the thread number */
(void)mutex_lock(&thread_num_lock);
global_thread_num++;
thread_num = global_thread_num;
(void)mutex_unlock(&thread_num_lock);
/* Initialize the random number generator */
srand(thread_num);
/* Write 50 times and then quit */
for (i = 0; i < 50; i++) {
retry_count = 0; /* Used for deadlock retries */
/*
 * Some think it is bad form to loop with a goto statement, but
 * we do it anyway because it is the simplest and clearest way
 * to achieve our abort/retry operation.
 */
retry:
/*
 * Begin our transaction. We group multiple writes in
 * this thread under a single transaction so as to
 * (1) show that you can atomically perform multiple writes
 * at a time, and (2) to increase the chances of a
 * deadlock occurring so that we can observe our
 * deadlock detection at work.
 *
 * Normally we would want to avoid the potential for deadlocks,
 * so for this workload the correct thing would be to perform our
 * puts with autocommit. But that would excessively simplify our
 * example, so we do the "wrong" thing here instead.
 */
ret = envp->txn_begin(envp, NULL, &txn, 0);
if (ret != 0) {
envp->err(envp, ret, "txn_begin failed");
return ((void *)EXIT_FAILURE);
}
for (j = 0; j < 10; j++) {
/* Set up our key and values DBTs */
memset(&key, 0, sizeof(DBT));
key.data = key_strings[j];
key.size = (u_int32_t)strlen(key_strings[j]) + 1;
memset(&value, 0, sizeof(DBT));
payload = rand() + i;
value.data = &payload;
value.size = sizeof(int);
/* Perform the database put. */
switch (ret = dbp->put(dbp, txn, &key, &value, 0)) {
case 0:
break;
/*
 * Our database is configured for sorted duplicates,
 * so there is a potential for a KEYEXIST error return.
 * If we get one, simply ignore it and continue on.
 * Note that you will see KEYEXIST errors only after you
 * have run this program at least once.
 */
case DB_KEYEXIST:
printf("Got keyexists.\n");
break;
/*
 * Here's where we perform deadlock detection. If
 * DB_LOCK_DEADLOCK is returned by the put operation,
 * then this thread has been chosen to break a deadlock.
 * It must abort its operation, and optionally retry the
 * put.
 */
case DB_LOCK_DEADLOCK:
/*
 * First thing that we MUST do is abort the
 * transaction.
 */
(void)txn->abort(txn);
/*
 * Now we decide if we want to retry the operation.
 * If we have retried less than max_retries,
 * increment the retry count and goto retry.
 */
if (retry_count < max_retries) {
printf("Writer %i: Got DB_LOCK_DEADLOCK.\n",
thread_num);
printf("Writer %i: Retrying write operation.\n",
thread_num);
retry_count++;
goto retry;
}
/* Otherwise, just give up. */
printf("Writer %i: ", thread_num);
printf("Got DB_LOCK_DEADLOCK and out of retries.\n");
printf("Writer %i: Giving up.\n", thread_num);
return ((void *)EXIT_FAILURE);
/*
 * If a generic error occurs, we simply abort the
 * transaction and exit the thread completely.
 */
default:
envp->err(envp, ret, "db put failed");
ret = txn->abort(txn);
if (ret != 0)
envp->err(envp, ret,
"txn abort failed");
return ((void *)EXIT_FAILURE);
} /** End case statement **/
} /** End for loop **/
/*
 * Print the number of records found in the database.
 * See count_records() for usage information.
 */
printf("Thread %i. Record count: %i\n", thread_num,
count_records(dbp, NULL));
/*
 * If all goes well, we can commit the transaction and
 * exit the thread.
 */
ret = txn->commit(txn, 0);
if (ret != 0) {
envp->err(envp, ret, "txn commit failed");
return ((void *)EXIT_FAILURE);
}
}
return ((void *)EXIT_SUCCESS);
}
/*
 * This simply counts the number of records contained in the
 * database and returns the result. You can use this function
 * in three ways:
 *
 * First, call it with an active txn handle.
 * Second, configure the cursor for uncommitted reads (this
 * is what the example currently does).
 * Third, call count_records AFTER the writer has committed
 * its transaction.
 *
 * If you do none of these things, the writer thread will
 * self-deadlock.
 *
 * Note that this function exists only for illustrative purposes.
 * A more straightforward way to count the number of records in
 * a database is to use DB->stat() or DB->stat_print().
 */
int
count_records(DB *dbp, DB_TXN *txn)
{
DBT key, value;
DBC *cursorp;
int count, ret;
cursorp = NULL;
count = 0;
/* Get the cursor */
ret = dbp->cursor(dbp, txn, &cursorp,
DB_READ_UNCOMMITTED);
if (ret != 0) {
dbp->err(dbp, ret,
"count_records: cursor open failed.");
goto cursor_err;
}
/* Get the key DBT used for the database read */
memset(&key, 0, sizeof(DBT));
memset(&value, 0, sizeof(DBT));
do {
ret = cursorp->get(cursorp, &key, &value, DB_NEXT);
switch (ret) {
case 0:
count++;
break;
case DB_NOTFOUND:
break;
default:
dbp->err(dbp, ret,
"Count records unspecified error");
goto cursor_err;
}
} while (ret == 0);
cursor_err:
if (cursorp != NULL) {
ret = cursorp->close(cursorp);
if (ret != 0) {
dbp->err(dbp, ret,
"count_records: cursor close failed.");
}
}
return (count);
}
/* Open a Berkeley DB database */
int
open_db(DB **dbpp, const char *progname, const char *file_name,
DB_ENV *envp, u_int32_t extra_flags)
{
int ret;
u_int32_t open_flags;
DB *dbp;
/* Initialize the DB handle */
ret = db_create(&dbp, envp, 0);
if (ret != 0) {
fprintf(stderr, "%s: %s\n", progname,
db_strerror(ret));
return (EXIT_FAILURE);
}
/* Point to the memory malloc'd by db_create() */
*dbpp = dbp;
if (extra_flags != 0) {
ret = dbp->set_flags(dbp, extra_flags);
if (ret != 0) {
dbp->err(dbp, ret,
"open_db: Attempt to set extra flags failed.");
return (EXIT_FAILURE);
}
}
/* Now open the database */
open_flags = DB_CREATE | /* Allow database creation */
DB_READ_UNCOMMITTED | /* Allow dirty reads */
DB_AUTO_COMMIT; /* Allow autocommit */
ret = dbp->open(dbp, /* Pointer to the database */
NULL, /* Txn pointer */
file_name, /* File name */
NULL, /* Logical db name */
DB_BTREE, /* Database type (using btree) */
open_flags, /* Open flags */
0); /* File mode. Using defaults */
if (ret != 0) {
dbp->err(dbp, ret, "Database '%s' open failed",
file_name);
return (EXIT_FAILURE);
}
return (EXIT_SUCCESS);
}

More information: After I create the DB, DB put works fine. The db_stat output is OK.
After rebooting the processor, db_stat gives an error. After running db_recover, it gives errors.
Before reboot:
-> ls
CfgDbEr.log
__db.001
log.0000000001
OpvCfg.db
value = 0 = 0x0
-> db_stat "-h /bk1/db -d OpvCfg.db"
THU AUG 14 18:12:23 2008 Local time
53162 Btree magic number
9 Btree version number
Big-endian Byte order
Flags
2 Minimum keys per-page
4096 Underlying database page size
1007 Overflow key/data size
1 Number of levels in the tree
1 Number of unique keys in the tree
1 Number of data items in the tree
0 Number of tree internal pages
0 Number of bytes free in tree internal pages (0% ff)
1 Number of tree leaf pages
4058 Number of bytes free in tree leaf pages (0% ff)
0 Number of tree duplicate pages
0 Number of bytes free in tree duplicate pages (0% ff)
0 Number of tree overflow pages
0 Number of bytes free in tree overflow pages (0% ff)
0 Number of empty pages
0 Number of pages on the free list
value = 0 = 0x0
After reboot, it reports the error: segment /bk1/db/__db.001 does not exist
-> db_stat "-h /bk1/db -d OpvCfg.db"
db_stat: segment /bk1/db/__db.001 does not exist
THU JAN 01 00:01:32 1970 Local time
53162 Btree magic number
9 Btree version number
Big-endian Byte order
Flags
2 Minimum keys per-page
4096 Underlying database page size
1007 Overflow key/data size
1 Number of levels in the tree
1 Number of unique keys in the tree
1 Number of data items in the tree
0 Number of tree internal pages
0 Number of bytes free in tree internal pages (0% ff)
1 Number of tree leaf pages
4058 Number of bytes free in tree leaf pages (0% ff)
0 Number of tree duplicate pages
0 Number of bytes free in tree duplicate pages (0% ff)
0 Number of tree overflow pages
0 Number of bytes free in tree overflow pages (0% ff)
0 Number of empty pages
0 Number of pages on the free list
value = 0 = 0x0
The result of running db_recover:
-> db_recover "-c -h /bk1/db"
db_recover: segment /bk1/db/__db.001 does not exist
db_recover: /bk1/db/log.0000000002: log file unreadable: S_dosFsLib_FILE_NOT_FOUND
db_recover: PANIC: S_dosFsLib_FILE_NOT_FOUND
db_recover: PANIC: fatal region error detected; run recovery
db_recover: dbenv->close: DB_RUNRECOVERY: Fatal error, run database recovery
value = 1 = 0x1 -
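For reference, the db_recover utility is just an environment open with the recovery flags set, so the same thing can be attempted from application code at startup; DB_RECOVER_FATAL corresponds to db_recover -c. This is only a minimal sketch (the function name recover_env is illustrative, and it assumes the Berkeley DB headers and library are available); note that no recovery mode can help if a log file such as log.0000000002 has itself been lost by the filesystem, as appears to be the case here.

```c
#include <stdio.h>
#include <stdlib.h>
#include <db.h>

/* Run recovery on an existing transactional environment.
 * DB_RECOVER runs normal recovery; replace it with
 * DB_RECOVER_FATAL for catastrophic recovery (db_recover -c). */
int
recover_env(const char *home)
{
    DB_ENV *envp = NULL;
    u_int32_t flags;
    int ret;

    if ((ret = db_env_create(&envp, 0)) != 0) {
        fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
        return (EXIT_FAILURE);
    }
    /* Recovery requires all the subsystems the environment was
     * created with, plus DB_CREATE so regions can be rebuilt. */
    flags = DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
        DB_INIT_MPOOL | DB_INIT_TXN | DB_RECOVER;
    if ((ret = envp->open(envp, home, flags, 0)) != 0) {
        fprintf(stderr, "env open: %s\n", db_strerror(ret));
        (void)envp->close(envp, 0);
        return (EXIT_FAILURE);
    }
    (void)envp->close(envp, 0);
    return (EXIT_SUCCESS);
}
```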
Load an existing Berkeley DB file into memory
Dear Experts,
I have created some Berkeley DB (BDB) files onto disk.
I noticed that when I issue key-value retrievals, the page faults are substantial, and the CPU utilization is low.
One sample of the time command line output is as follow:
1.36user 1.45system 0:10.83elapsed 26%CPU (0avgtext+0avgdata 723504maxresident)k
108224inputs+528outputs (581major+76329minor)pagefaults 0swaps
I suspect that the bottleneck is the high frequency of file I/O.
This may be because of page faults of the BDB file, and the pages are loaded in/out of disk fairly frequently.
I wish to explore how to reduce this page fault, and hence expedite the retrieval time.
One way I have read is to load the entire BDB file into main memory.
There are some example programs on docs.oracle.com, under the heading "Writing In-Memory Berkeley DB Applications".
However, I could not get them to work.
I enclosed below my code:
--------------- start of code snippets ---------------
/* Initialize our handles */
DB *dbp = NULL;
DB_ENV *envp = NULL;
DB_MPOOLFILE *mpf = NULL;
const char *db_name = "db.id_url"; // A BDB file on disk, size 66,813,952
u_int32_t open_flags;
/* Create the environment */
db_env_create(&envp, 0);
open_flags =
DB_CREATE | /* Create the environment if it does not exist */
DB_INIT_LOCK | /* Initialize the locking subsystem */
DB_INIT_LOG | /* Initialize the logging subsystem */
DB_INIT_MPOOL | /* Initialize the memory pool (in-memory cache) */
DB_INIT_TXN |
DB_PRIVATE; /* Region files are not backed by the filesystem.
* Instead, they are backed by heap memory. */
/*
 * Specify the size of the in-memory cache.
 */
envp->set_cachesize(envp, 0, 70 * 1024 * 1024, 1); // 70 Mbytes, more than the BDB file size of 66,813,952
/*
 * Now actually open the environment. Notice that the environment home
 * directory is NULL. This is required for an in-memory only application.
 */
envp->open(envp, NULL, open_flags, 0);
/* Open the MPOOL file in the environment. */
envp->memp_fcreate(envp, &mpf, 0);
int pagesize = 4096;
if ((ret = mpf->open(mpf, "db.id_url", 0, 0, pagesize)) != 0) {
envp->err(envp, ret, "DB_MPOOLFILE->open: ");
goto err;
}
int cnt, hits = 66813952/pagesize;
void *p = 0;
for (cnt = 0; cnt < hits; ++cnt) {
db_pgno_t pageno = cnt;
mpf->get(mpf, &pageno, NULL, 0, &p);
}
fprintf(stderr,"\n\nretrieve %5d pages\n",cnt);
/* Initialize the DB handle */
db_create(&dbp, envp, 0);
/*
 * Set the database open flags. Autocommit is used because we are
 * transactional.
 */
open_flags = DB_CREATE | DB_AUTO_COMMIT;
dbp->open(dbp, // Pointer to the database
NULL, // Txn pointer
NULL, // File name -- NULL for inmemory
db_name, // Logical db name
DB_BTREE, // Database type (using btree)
open_flags, // Open flags
0); // File mode. Default is 0
DBT key,data; int test_key=103456;
memset(&key, 0, sizeof(key));
memset(&data, 0, sizeof(data));
key.data = (int*)&test_key;
key.size = sizeof(test_key);
dbp->get(dbp, NULL, &key, &data, 0);
printf("%d --> %s ", *((int*)key.data),(char*)data.data );
/* Close our database handle, if it was opened. */
if (dbp != NULL) {
dbp->close(dbp, 0);
}
if (mpf != NULL) (void)mpf->close(mpf, 0);
/* Close our environment, if it was opened. */
if (envp != NULL) {
envp->close(envp, 0);
}
/* Final status message and return. */
printf("I'm all done.\n");
--------------- end of code snippets ---------------
After compilation, the code output is:
retrieve 16312 pages
103456 --> (null) I'm all done.
However, the test_key input did not get the correct value retrieval.
I have been reading and trying this for the past 3 days.
I will appreciate any help/tips.
Thank you for your kind attention.
WAN
Singapore

Hi Mike,
Thank you for your 3 steps:
-- create the database
-- load the database
-- run you retrievals
Recall that my original intention is to load in an existing BDB file (70Mbytes) completely into memory.
So following your 3 steps above, this is what I did:
Step-1 (create the database)
I have followed the oracle article on http://docs.oracle.com/cd/E17076_02/html/articles/inmemory/C/index.html
In this step, I have created the environment, set the cachesize to be bigger than the BDB file.
However, I have some problem with the code that opens the DB handle.
The code on the oracle page is as follow:
/*
 * Open the database. Note that the file name is NULL.
 * This forces the database to be stored in the cache only.
 * Also note that the database has a name, even though its
 * file name is NULL.
 */
ret = dbp->open(dbp, /* Pointer to the database */
NULL, /* Txn pointer */
NULL, /* File name is not specified on purpose */
db_name, /* Logical db name. */
DB_BTREE, /* Database type (using btree) */
db_flags, /* Open flags */
0); /* File mode. Using defaults */
Note that the open(..) API does not include the BDB file name.
The documentation says that this is so that the API will know that it needs an in-memory database.
However, how do I tell the API the source of the existing BDB file from which I wish to load entirely into memory ?
Do I need to create another DB handle (non-in-memory, with a file name as argument) that reads from this BDB file, and then call DB->put() to insert the records into the in-memory DB?
Step-2 (load the database)
My question in this step-2 is the same as my last question in step-1, on how do I tell the API to load in my existing BDB file into memory?
That is, should I create another DB handle (non-in-memory) that reads from the existing BDB file, use a cursor to read in EVERY key-value pair, and then insert into the in-memory DB?
Am I correct to say that by using the cursor to read in EVERY key-value pair, I am effectively warming the file cache, so that the BDB retrieval performance can be maximized?
Step-3 (run your retrievals)
Are the retrieval APIs, e.g. c_get(..) and get(..), the same for the in-memory DB as for the file-based DB?
Thank you and always appreciative for your tips.
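The cursor-copy approach described in Step 2 can be sketched as follows. This is only an illustration, assuming both DB handles were already opened in the same cache-backed environment (one against the on-disk file, one in-memory with a NULL file name); load_into_memory and the parameter names are hypothetical, not from the posts above.

```c
#include <string.h>
#include <db.h>

/* Copy every key/value pair from an on-disk database into an
 * in-memory database opened in the same environment. */
int
load_into_memory(DB *disk_dbp, DB *mem_dbp)
{
    DBC *cursorp = NULL;
    DBT key, value;
    int ret;

    if ((ret = disk_dbp->cursor(disk_dbp, NULL, &cursorp, 0)) != 0)
        return (ret);

    memset(&key, 0, sizeof(DBT));
    memset(&value, 0, sizeof(DBT));

    /* Walk the source database and insert each pair. */
    while ((ret = cursorp->get(cursorp, &key, &value, DB_NEXT)) == 0)
        if ((ret = mem_dbp->put(mem_dbp, NULL, &key, &value, 0)) != 0)
            break;
    if (ret == DB_NOTFOUND)     /* end of database: success */
        ret = 0;

    (void)cursorp->close(cursorp);
    return (ret);
}
```

After the copy, retrievals go through the in-memory handle with the same get()/c_get() calls used for a file-based database.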
WAN
Singapore -
Need help with Berkeley XML DB Performance
We need help with maximizing performance of our use of Berkeley XML DB. I am filling most of the 29 part question as listed by Oracle's BDB team.
Berkeley DB XML Performance Questionnaire
1. Describe the Performance area that you are measuring? What is the
current performance? What are your performance goals you hope to
achieve?
We are measuring the performance while loading a document during
web application startup. It is currently taking 10-12 seconds when
only one user is on the system. We are trying to do some testing to
get the load time when several users are on the system.
We would like the load time to be 5 seconds or less.
2. What Berkeley DB XML Version? Any optional configuration flags
specified? Are you running with any special patches? Please specify?
dbxml 2.4.13. No special patches.
3. What Berkeley DB Version? Any optional configuration flags
specified? Are you running with any special patches? Please Specify.
bdb 4.6.21. No special patches.
4. Processor name, speed and chipset?
Intel Xeon CPU 5150 2.66GHz
5. Operating System and Version?
Red Hat Enterprise Linux Relase 4 Update 6
6. Disk Drive Type and speed?
Don't have that information
7. File System Type? (such as EXT2, NTFS, Reiser)
EXT3
8. Physical Memory Available?
4GB
9. Are you using Replication (HA) with Berkeley DB XML? If so, please
describe the network you are using, and the number of Replicas.
No
10. Are you using a Remote Filesystem (NFS) ? If so, for which
Berkeley DB XML/DB files?
No
11. What type of mutexes do you have configured? Did you specify
--with-mutex=? Specify what you find in your config.log; search
for db_cv_mutex.
None. Did not specify --with-mutex during bdb compilation.
12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
Which compiler and version?
Java 1.5
13. If you are using an Application Server or Web Server, please
provide the name and version?
Oracle Application Server 10.1.3.4.0
14. Please provide your exact Environment Configuration Flags (include
anything specified in you DB_CONFIG file)
Default.
15. Please provide your Container Configuration Flags?
final EnvironmentConfig envConf = new EnvironmentConfig();
envConf.setAllowCreate(true); // If the environment does not
// exist, create it.
envConf.setInitializeCache(true); // Turn on the shared memory
// region.
envConf.setInitializeLocking(true); // Turn on the locking subsystem.
envConf.setInitializeLogging(true); // Turn on the logging subsystem.
envConf.setTransactional(true); // Turn on the transactional
// subsystem.
envConf.setLockDetectMode(LockDetectMode.MINWRITE);
envConf.setThreaded(true);
envConf.setErrorStream(System.err);
envConf.setCacheSize(1024*1024*64);
envConf.setMaxLockers(2000);
envConf.setMaxLocks(2000);
envConf.setMaxLockObjects(2000);
envConf.setTxnMaxActive(200);
envConf.setTxnWriteNoSync(true);
envConf.setMaxMutexes(40000);
16. How many XML Containers do you have? For each one please specify:
One.
1. The Container Configuration Flags
XmlContainerConfig xmlContainerConfig = new XmlContainerConfig();
xmlContainerConfig.setTransactional(true);
xmlContainerConfig.setIndexNodes(true);
xmlContainerConfig.setReadUncommitted(true);
2. How many documents?
Everytime the user logs in, the current xml document is loaded from
a oracle database table and put it in the Berkeley XML DB.
The documents get deleted from XML DB when the Oracle application
server container is stopped.
The number of documents should start with zero initially and it
will grow with every login.
3. What type (node or wholedoc)?
Node
4. Please indicate the minimum, maximum and average size of
documents?
The minimum is about 2MB and the maximum could be 20MB. The average
is mostly about 5MB.
5. Are you using document data? If so please describe how?
We are using document data only to save changes made
to the application data in a web application. The final save goes
to the relational database. Berkeley XML DB is just used to store
temporary data since going to the relational database for each change
will cause severe performance issues.
17. Please describe the shape of one of your typical documents? Please
do this by sending us a skeleton XML document.
Due to the sensitive nature of the data, I can provide XML schema instead.
18. What is the rate of document insertion/update required or
expected? Are you doing partial node updates (via XmlModify) or
replacing the document?
The document is inserted during user login. Any change made to the application
data grid or other data components gets saved in Berkeley DB. We also have
an automatic save every two minutes. The final save from the application
gets saved in a relational database.
19. What is the query rate required/expected?
Users will not be entering data rapidly. There will be lot of think time
before the users enter/modify data in the web application. This is a pilot
project but when we go live with this application, we will expect 25 users
at the same time.
20. XQuery -- supply some sample queries
1. Please provide the Query Plan
2. Are you using DBXML_INDEX_NODES?
Yes.
3. Display the indices you have defined for the specific query.
XmlIndexSpecification spec = container.getIndexSpecification();
// ids
spec.addIndex("", "id", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
spec.addIndex("", "idref", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// index to cover AttributeValue/Description
spec.addIndex("", "Description", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_SUBSTRING, XmlValue.STRING);
// cover AttributeValue/@value
spec.addIndex("", "value", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// item attribute values
spec.addIndex("", "type", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// default index
spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// save the spec to the container
XmlUpdateContext uc = xmlManager.createUpdateContext();
container.setIndexSpecification(spec, uc);
4. If this is a large query, please consider sending a smaller
query (and query plan) that demonstrates the problem.
21. Are you running with Transactions? If so please provide any
transactions flags you specify with any API calls.
Yes. READ_UNCOMMITED in some and READ_COMMITTED in other transactions.
22. If your application is transactional, are your log files stored on
the same disk as your containers/databases?
Yes.
23. Do you use AUTO_COMMIT?
No.
24. Please list any non-transactional operations performed?
No.
25. How many threads of control are running? How many threads in read
only mode? How many threads are updating?
We use Berkeley XML DB within the context of a struts web application.
Each user logged into the web application will be running a bdb transaction
within the context of a struts action thread.
26. Please include a paragraph describing the performance measurements
you have made. Please specifically list any Berkeley DB operations
where the performance is currently insufficient.
We are clocking 10-12 seconds of loading a document from dbd when
five users are on the system.
getContainer().getDocument(documentName);
27. What performance level do you hope to achieve?
We would like to get less than 5 seconds when 25 users are on the system.
28. Please send us the output of the following db_stat utility commands
after your application has been running under "normal" load for some
period of time:
% db_stat -h database environment -c
% db_stat -h database environment -l
% db_stat -h database environment -m
% db_stat -h database environment -r
% db_stat -h database environment -t
(These commands require the db_stat utility access a shared database
environment. If your application has a private environment, please
remove the DB_PRIVATE flag used when the environment is created, so
you can obtain these measurements. If removing the DB_PRIVATE flag
is not possible, let us know and we can discuss alternatives with
you.)
If your application has periods of "good" and "bad" performance,
please run the above list of commands several times, during both
good and bad periods, and additionally specify the -Z flags (so
the output of each command isn't cumulative).
When possible, please run basic system performance reporting tools
during the time you are measuring the application's performance.
For example, on UNIX systems, the vmstat and iostat utilities are
good choices.
Will give this information soon.
29. Are there any other significant applications running on this
system? Are you using Berkeley DB outside of Berkeley DB XML?
Please describe the application?
No to the first two questions.
The web application is an online review of test questions. The users
login and then review the items one by one. The relational database
holds the data in xml. During application load, the application
retrieves the xml and then saves it to bdb. While the user
is making changes to the data in the application, it writes those
changes to bdb. Finally when the user hits the SAVE button, the data
gets saved to the relational database. We also have an automatic save
every two minutes, which takes the bdb xml data and saves it to the
relational database.
Thanks,
Madhav
[email protected]

Could it be that you simply have not set up indexes to support your query? If so, you could do some basic testing using the dbxml shell:
milu@colinux:~/xpg > dbxml -h ~/dbenv
Joined existing environment
dbxml> setverbose 7 2
dbxml> open tv.dbxml
dbxml> listIndexes
dbxml> query { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
dbxml> queryplan { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }

Verbosity will make the engine display some (rather cryptic) information on index usage. I can't remember where the output is explained; my feeling is that "V(...)" means the index is being used (which is good), but that observation may not be accurate. Note that some details in the setVerbose command could differ, as I'm using 2.4.16 while you're using 2.4.13.
Also, take a look at the query plan. You can post it here and some people will be able to diagnose it.
Michael Ludwig