Repeated opening of a database in a txn causes "Logging region out of memory"

Hi
BDB 4.6.21
When I open and close a single database file repeatedly, I get the error message "Logging region out of memory; you may need to increase its size". I have set_lg_regionmax at its 65KB default size. Is there any workaround for this issue other than increasing the value of set_lg_regionmax? Even if we set it to a higher value, we cannot predict how the clients of BDB will open and close database files. The following stand-alone program reproduces the scenario.
#include <windows.h>
#include <db_cxx.h>

int main()
{
    const int SUCCESS = 0;
    ULONG uEnvFlags = DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOG | DB_INIT_TXN |
                      DB_INIT_LOCK | DB_THREAD; // | DB_RECOVER;
    LPCSTR lpctszHome = "D:\\Nisam\\Temp";
    int nReturn = 0;
    DbEnv* pEnv = new DbEnv( DB_CXX_NO_EXCEPTIONS );
    nReturn = pEnv->set_thread_count( 20 );
    nReturn = pEnv->open( lpctszHome, uEnvFlags, 0 );
    if( SUCCESS != nReturn )
        return 0;

    DbTxn* pTxn = 0;
    char szBuff[MAX_PATH];
    UINT uDbFlags = DB_CREATE | DB_THREAD;
    lstrcpy( szBuff, "DBbbbbbbbbbbbbbbbbbbbbbbbbbb________0" ); // some long name

    // First create the database
    {
        Db Database( pEnv, 0 );
        nReturn = Database.open( pTxn, szBuff, 0, DB_BTREE, uDbFlags, 0 );
        nReturn = Database.close( 0 );
    }

    // Now repeatedly open and close the above created database
    for( int nCounter = 0; 10000 > nCounter; ++nCounter )
    {
        pEnv->txn_begin( pTxn, &pTxn, 0 );
        Db Database( pEnv, 0 );
        nReturn = Database.open( pTxn, szBuff, 0, DB_BTREE, uDbFlags, 0 );
        if( SUCCESS != nReturn )
        {
            // when the count reaches 435, the error occurs
            pTxn->abort();
            Database.close( 0 );
            pEnv->close( 0 );
            return 0;
        }
        pTxn->abort();
        pTxn = 0;
        Database.close( 0 );
    }
    pEnv->close( 0 );
    return 0;
}
By the way, the following is the content of my DB_CONFIG file:
set_tx_max 1000
set_lk_max_lockers 10000
set_lk_max_locks 100000
set_lk_max_objects 100000
set_lock_timeout 20000
set_lg_bsize 1048576
set_lg_max 10485760
#log region: 66KB
set_lg_regionmax 67584
set_cachesize 0 8388608 1
Thanks and Regards
Nisam

Hi Nisam,
I was able to reproduce the problem using Berkeley DB 4.6.21. The problem is that the FNAME structure is not released in certain cases involving aborted transactions. In a situation where you continuously (in a loop) transactionally open, abort, and close databases, you will notice (as you did) that the log region size needs to be increased (set_lg_regionmax).
This problem was identified and reproduced yesterday (thanks for letting us know about this) and is reported as SR #15953. It will be fixed in the next release of Berkeley DB and is currently in code review/regression testing. I have a patch that you can apply to Berkeley DB 4.6 and have confirmed that your test program runs with the patch applied. If you send me email at (Ron dot Cohen at Oracle) I’ll send the patch to you.
As you noticed, committing the transaction runs cleanly without error. You could do that (with the DB_TXN_NOSYNC suggestion below), but you may not even need transactions for this.
I want to expand a bit on my recommendation that you not abort transactions in the manner that you are doing (though with the patch you can certainly do that). First, database open/close is a heavyweight operation. Typically you create/open your databases once and keep them open for the life of the application (or at least a long time).
You also mentioned that you noticed commits may take a long time. We can talk about that (if you email me), but you could consider using the DB_TXN_NOSYNC flag, at the cost of losing durability. Make sure that this suggestion works with your application requirements.
Even if you have (create/open/get/commit/abort), a single get operation should not need a transaction. In that case there would be no logging for the open and close, so the sequence would be faster. Yours was a code snippet, so what you have in your application may be a lot more complicated and justify what you have done. But the simple test case above should not require a transaction, since you are doing a single atomic get.
I hope this helps!
Ron Cohen
Oracle Corporation

Similar Messages

  • Open Container: Logging region out of memory

    When opening a container I am getting the following error... according to the docs it appears as though a new log file should be created, but this is not the case...
    Utils: Logging region out of memory; you may need to increase its size
    Utils: DB->get: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Utils: DB->put: method not permitted before handle's open method
    Error: com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
    com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
    at com.sleepycat.dbxml.dbxml_javaJNI.XmlManager_openContainer__SWIG_2(Native Method)
    at com.sleepycat.dbxml.XmlManager.openContainer(XmlManager.java:525)
    at com.sleepycat.dbxml.XmlManager.openContainer(XmlManager.java:190)
    at com.sleepycat.dbxml.XmlManager.openContainer(XmlManager.java:132)
    at com.iconnect.data.pool.BerkeleyXMLDBPool.getContainer(BerkeleyXMLDBPool.java:195)
    at com.iconnect.data.adapters.BerkleyXMLDBImpl.query(BerkleyXMLDBImpl.java:109)
    at com.iconnect.data.DataManagerFactory.query(DataManagerFactory.java:112)
    at controls.iConnectControl.IConnectImpl.login(IConnectImpl.java:100)
    at controls.iConnectControl.IConnectBean.login(IConnectBean.java:110)
    at app.a12.en.auth.LoginController.Login(LoginController.java:124)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:811)
    at org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:752)
    at org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:446)
    at org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:237)
    at org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:299)
    at org.apache.beehive.netui.pageflow.internal.FlowControllerAction.execute(FlowControllerAction.java:55)
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
    at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processActionPerform(PageFlowRequestProcessor.java:1403)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
    at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:647)
    at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:702)
    at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:565)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:525)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
    at com.iconnect.security.SecurityFilter.doFilter(SecurityFilter.java:99)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:186)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
    at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
    at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:300)
    at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:374)
    at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:743)
    at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:675)
    at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:866)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
    at java.lang.Thread.run(Thread.java:595)

    Here is the output that we have... one thing to note: the exception implies that the container does not exist (although it does)... it is falling into a sub-routine for:
    myManager.existsContainer(containerName) == 0
    The exception is thrown (it seems) because of the "Logging region..." error...
    We had normal operation until the logging system ran out of memory... (I am using a shared Environment and am managing shared open Containers)...
    btw - Is there not a good end-to-end Java example that is available?
    // Initialize the Environment Configuration
    envConfig.setAllowCreate(true);          // Create the environment if it does not exist
    envConfig.setInitializeCache(true);      // Turn on the shared memory cache
    envConfig.setInitializeLocking(true);    // Turn on the locking subsystem
    envConfig.setInitializeLogging(false);   // Note: the logging subsystem is NOT initialized
    envConfig.setTransactional(true);        // Turn on transactions
    envConfig.setErrorStream(System.err);
    envConfig.setErrorPrefix("Utils");
    envConfig.setNoLocking(true);
    // Played with these settings as well
    //envConfig.setLogInMemory(true);
    //envConfig.setLogBufferSize(10 * 1024 * 1024);
    envConfig.setThreaded(true);
    Getting container: USER.dbxml
    Utils: Logging region out of memory; you may need to increase its size
    Container does not exist.... creating...
    Utils: Logging region out of memory; you may need to increase its size
    Utils: Logging region out of memory; you may need to increase its size
    com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
    com.sleepycat.dbxml.XmlException: Error: Cannot allocate memory, errcode = DATABASE_ERROR
    at com.sleepycat.dbxml.dbxml_javaJNI.XmlManager_createContainer__SWIG_0(Native Method)
    at com.sleepycat.dbxml.XmlManager.createContainer(XmlManager.java:485)
    at com.sleepycat.dbxml.XmlManager.createContainer(XmlManager.java:152)
    at com.sleepycat.dbxml.XmlManager.createContainer(XmlManager.java:122)
    at com.iconnect.data.pool.BerkeleyXMLDBPool.getContainer(BerkeleyXMLDBPool.java:171)
    at com.iconnect.data.adapters.BerkleyXMLDBImpl.query(BerkleyXMLDBImpl.java:109)
    at com.iconnect.data.DataManagerFactory.query(DataManagerFactory.java:112)
    at controls.iConnectControl.IConnectImpl.login(IConnectImpl.java:100)
    at controls.iConnectControl.IConnectBean.login(IConnectBean.java:110)
    at auth.LoginController.Login(LoginController.java:123)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:811)
    at org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:752)
    at org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:446)
    at org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:237)
    at org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:299)
    at org.apache.beehive.netui.pageflow.internal.FlowControllerAction.execute(FlowControllerAction.java:55)
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
    at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processActionPerform(PageFlowRequestProcessor.java:1403)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
    at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:647)
    at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:702)
    at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:565)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:525)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
    at com.iconnect.security.SecurityFilter.doFilter(SecurityFilter.java:99)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:186)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
    at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
    at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:300)
    at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:374)
    at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:743)
    at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:675)
    at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:866)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
    at java.lang.Thread.run(Thread.java:595)
    Setting namespace: http://iconnect.com/schemas/user
    com.sleepycat.dbxml.XmlException: Error: Cannot resolve container: USER.dbxml. Container not open and auto-open is not enabled. Container may not exist., errcode = CONTAINER_CLOSED

  • HT201317 If I want to keep a folder of pictures in my iCloud account, then delete this folder off my phone, would it be deleted from my iCloud account too? I'm running out of memory on my iPhone

    Dear all, I have a question, please. If I want to keep a folder of pictures in my iCloud account and then delete this folder off my phone, would it be deleted from my iCloud account too? I'm running out of memory on my iPhone.
    Thanks

    You can't use iCloud to store photos. There's a service called Photo Stream, which is used to sync photos from one device to another, but it can't be used to store photos permanently.
    Always sync photos from a device to your computer. Then you can delete them from the device.
    Read this: import photos to your computer...
    http://support.apple.com/kb/HT4083

  • I cannot open the timeline and the canvas cannot display because of an out-of-memory error.

    I cannot open the timeline and the canvas cannot display. I get a pop-up saying "error: out of memory". What does it mean and what can I do?

    Usually this is because you've got graphics that are CMYK or grayscale rather than RGB, or that are larger than 4000 by 4000 pixels. Move them on your hard disk so FCP can't find them, then make them RGB and resize them to an acceptable pixel dimension.

  • ResultSet from database results in out of memory

    The application is for report generation based on the huge amount of data present in the database. A JDBC query results in an out-of-memory error while fetching 72000 records. Is there any solution on the application side that could resolve this problem? Mail [email protected]

    Let's see...
    72000 rows, with each row on a line and 80 lines per page, gives a 900-page report.
    Is someone going to actually read this? Is it possible that they actually want something else, like a summary?

  • Database out of memory error getting Web Intelligence prompts

    The following code generates an exception for a particular web intelligence report object ID:
    m_engines = (ReportEngines)m_entSession.getService("ReportEngines");
    m_widocRepEngine = (ReportEngine)m_engines.getService(ReportEngines.ReportEngineType.WI_REPORT_ENGINE);
    DocumentInstance doc = m_widocRepEngine.openDocument(id);
    Prompts prompts = doc.getPrompts();
    The exception is as follows:
    A database error occurred. The database error text is: The system is out of memory. Use system side cursors for large result sets: Java heap space. Result set size: 31,207,651. JVM total memory size: 66,650,112. (WIS 10901).
    I can't understand how the result set could be over 31 million, or how to fix this. Any ideas?

    So what happens in InfoView?
    I ask since it doesn't appear to be a SDK coding issue.
    Sincerely,
    Ted Ueda

  • Can't open sequence - out of memory!

    Hi,
    I put a long HD clip onto a sequence and my computer couldn't handle it. I couldn't edit the sequence without getting "error: out of memory" messages. I closed the sequence and now I can't open it because of the lack of memory.
    I saved changes to my project (stupid I know, but there were previous changes I wanted to keep) and closed my project. Now I can't open that sequence at all.
    I closed all other apps, I rebooted, I made sure my scratch disk had plenty of space, but I still can't open it up to delete the clip! Error: Out of memory. i have 1.25gigs of ram installed, but I don't really want to upgrade.
    Does any one have any hot tips before I have to take my project to another mac with more memory?
    Cheers

    Let me try a couple of suggestions. I assume you can open FCE without problems, while you cannot open that specific project.
    - Have you checked the Memory Usage in System Preferences? If you have enough RAM, the Application value should be 100%; if not, increase this value.
    - The Still Cache in the same panel refers to additional memory used for stills: by temporarily reducing this you might be able to add more memory to the application (as above).
    (BTW: is it possible that you added too many stills, and the problem is the still cache itself? If so, do the opposite: increase the Still Cache till you can open the project again.)
    - The Thumbnail Cache uses application memory: again, by temporarily reducing it to the minimum (probably 0) you add some memory to the application.
    I hope one of these suggestions works for you.
    Piero

  • Database could not be opened. May be caused because database does not exist or lack of authentication to open the database.

    Hello,
    I've been running the DMV 'sys.event_log' and have noticed that I am getting a lot of errors about connection issues to some of my SQL Azure databases, saying "Database could not be opened. May be caused because database does not exist or lack of authentication to open the database."
    The event_type column says 'connection_failed' and the event_subtype_desc column says 'failed_to_open_db'; both are associated with the above error message.
    I know that these databases are online, as I have numerous people connected to them, none of whom are experiencing any issues. My question is: is there a query that you can run on SQL Azure to try and find out more information about the connection attempts?
    If this was a hosted SQL solution it would be much easier.
    Marcus

    Hello,
    As for Windows Azure SQL Database, we can't access the error log file as we can with on-premises SQL Server. Currently, troubleshooting connection errors is only supported via the following DMVs. SQL Database connection events are collected and aggregated in two catalog views that reside in the logical master database: sys.database_connection_stats and sys.event_log. We can use the sys.event_log view to display the details when an error occurs.
    Just as the connection-failed message describes, it may occur when the user does not have login permission when connecting to the SQL Database. If so, please verify that the user has logon permission.
    Regards,
    Fanny Liu
    Fanny Liu
    TechNet Community Support

  • [bdb bug] repeatedly opening and closing a db may cause a memory leak

    my test code is very simple:
    char *filename = "xxx.db";
    char *dbname = "xxx";
    for ( ; ; ) {
        DB *dbp;
        DB_TXN *txnp;
        db_create(&dbp, dbenvp, 0);
        dbenvp->txn_begin(dbenvp, NULL, &txnp, 0);
        ret = dbp->open(dbp, txnp, filename, dbname, DB_BTREE, DB_CREATE, 0);
        if (ret != 0) {
            printf("failed to open db: %s\n", db_strerror(ret));
            return 0;
        }
        txnp->commit(txnp, 0);
        dbp->close(dbp, DB_NOSYNC);
    }
    I ran my test program for a long time, opening and closing the db repeatedly, then used the ps command and found the RSS increasing slowly:
    ps -va
    PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
    1986 pts/0 S 0:00 466 588 4999 980 0.3 -bash
    2615 pts/0 R 0:01 588 2 5141 2500 0.9 ./test
    after a few minutes:
    ps -va
    PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
    1986 pts/0 S 0:00 473 588 4999 976 0.3 -bash
    2615 pts/0 R 30:02 689 2 156561 117892 46.2 ./test
    I had read bdb's source code before, so I tried to debug it for about a week and found something that looks like a bug:
    If you open a db with both a filename and a dbname, bdb will open one db handle for the master db and one db handle for the subdb.
    Both handles get a file id from an internal API called __dbreg_get_id; however, only the subdb's id is
    returned to bdb's log region by calling __dbreg_pop_id. This leads to an id leak when I open and close the db
    repeatedly; as a result, __dbreg_add_dbentry calls realloc repeatedly to enlarge the dbentry area, which seems to be
    the reason for the RSS increase.
    Is this not a bug?
    Sorry for my poor English :)
    Edited by: user9222236 on 2010-2-25 10:38 PM

    I have tested my program using Oracle Berkeley DB releases 4.8.26 and 4.7.25 on Red Hat 9.0 (kernel 2.4.20-8smp on an i686) and AIX Version 5.
    The problem is easy to reproduce by calling the open method of a db handle with both filename and dbname specified, and then calling the close method.
    My program is very simple:
    #include <stdlib.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include "db.h"
    int main(int argc, char *argv[])
    {
        int ret, count;
        DB_ENV *dbenvp;
        char *filename = "test.dbf";
        char *dbname = "test";
        db_env_create(&dbenvp, 0);
        dbenvp->open(dbenvp, "/home/bdb/code/test/env",
            DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_TXN|DB_INIT_MPOOL, 0);
        for (count = 0; count < 10000000; count++) {
            DB *dbp;
            DB_TXN *txnp;
            db_create(&dbp, dbenvp, 0);
            dbenvp->txn_begin(dbenvp, NULL, &txnp, 0);
            ret = dbp->open(dbp, txnp, filename, dbname, DB_BTREE, DB_CREATE, 0);
            if (ret != 0) {
                printf("failed to open db: %s\n", db_strerror(ret));
                return 0;
            }
            txnp->commit(txnp, 0);
            dbp->close(dbp, DB_NOSYNC);
        }
        dbenvp->close(dbenvp, 0);
        return 0;
    }
    My DB_CONFIG file is as follows:
    set_cachesize 0 20000 0
    set_flags db_auto_commit
    set_flags db_txn_nosync
    set_flags db_log_inmemory
    set_lk_detect db_lock_minlocks
    Edited by: user9222236 on 2010-2-28 5:42 PM
    Edited by: user9222236 on 2010-2-28 5:45 PM

  • Opening and closing Access database in a loop causes an Error.

    I am loading test conditions from an Access DB in multiple nested loops. The loops successively drill into the DB, i.e. Temperature, Humidity, Power. Consequently the DB is opened and closed numerous (2000) times. The errors returned are (-2147467259) Unspecified Error or (2147024882) System Resources Low. I have disabled result recording in the Edit Sequence Properties dialog. I do see constant memory consumption; of 128 MB, it never gets below 40 MB. I have enclosed the example sequence file I am using.
    Attachments:
    Open-Close.seq ‏35 KB

    Jacy,
    "jacy" wrote in message
    news:[email protected]..
    > I am loading test conditions from an Access DB in a multiple nested
    > loop. The loops successively drill into the DB. ei Temperature,
    > Humidity, Power. Consequently the DB is opened and closed numerous
    > (2000) times. The Errors returned are(-2147467259) Unspecified Error
    > or (2147024882) System Resources low. I have disabled result recording
    > in the edit sequence properties dialog. I do see a constant memory
    > consumption, but of 128MB, it never gets below 40MB. I have enclosed
    > the example sequence file I am using.
    I've seen problems with OLEDB (which I assume TestStand uses behind the
    scenes) with Access and SQL where rapid opening/closing of the same source
    (database) can generate errors. I don't know for sure, but I assume that
    the changes from the last close are not fully propagated before the next
    open is processed.
    Getting back to TestStand: if all the tables you're querying are in the same
    database, then you should just open the database once at the beginning and
    close it at the end. Then do separate table opens/closes between the
    database open/close.
    Bob.

  • Library recovery: how can I recover a library after I get this message: There was an error opening the database for the library "/Users/Jim/Pictures/Libraries/K2 Library.aplibrary"???

    Library recovery: how can I recover a library after I get this message: "There was an error opening the database for the library “/Users/Jim/Pictures/Libraries/K2 Library.aplibrary”???

    Thanks a lot, Frank. The lsregister did the trick! I am testing this on 10.8.2.
    http://support.apple.com/kb/TA24770 : I deleted the "com.apple.LaunchServices.plist", and restarted the Finder, even logged off and on again; did not change anything. The file has not been recreated, so it may not be used anymore.
    http://itpixie.com/2011/05/fix-duplicate-old-items-open-with-list/#.ULZqa6XAoqY
    The direct "copy and paste" from the post did not work: I had to retype it :
    /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/Support/lsregister -kill -r -domain local -domain system -domain user
    but then it worked like a charm!
    Cheers
    Léonie
    And btw: I turned on the "-v" option for lsregister to see what was going on, and saw plenty of error messages (error -10811), so I repeated the command with "sudo". After that I still saw five iPhotos. Repeating as a regular user finally got rid of the redundant iPhoto entries. It looks like registering as super user may be causing this trouble.

  • Getting `No such file or directory` error while trying to open bdb database

    I have four multi-threaded processes (2 writer and 2 reader processes), which make use of the Berkeley DB transactional data store. I have multiple environments, and the associated database files and log files are located in separate directories (please refer to the DB_CONFIG below). When all four processes start to open and close databases in the environments very quickly, one of the reader processes throws a `No such file or directory` error even though the file actually exists.
    I am making use of Berkeley DB 4.7.25 for testing out these applications.
    The four application names are as follows:
    Writer 1
    Writer 2
    Reader 1
    Reader 2
    The application description is as follows:
    'Writer 1' owns 8 environments, each environment having 123 Berkeley databases created using the HASH access method. At any point in time, 'Writer 1' will be acting on 24 database files across the 8 environments (3 database files per environment) for write operations, whereas the reader processes will be accessing all 123 database files per environment (123 * 8 environments = 984 database files) for read activities. Similar configuration for Writer 2 as well: 8 separate environments and so on.
    Writer 1, Reader 1 and Reader 2 processes share the environments created by Writer 1
    Writer 2 and Reader 2 processes share the environments created by Writer 2
    My DB_CONFIG file is configured as follows
    set_cachesize 0 104857600 1   # 100 MB
    set_lg_bsize 2097152                # 2 MB
    set_data_dir ../../vol1/data/
    set_lg_dir ../../vol31/logs/SUBID/
    set_lk_max_locks 1500
    set_lk_max_lockers 1500
    set_lk_max_objects 1500
    set_flags db_auto_commit
    set_tx_max 200
    mutex_set_increment 7500
    Has anyone come across this problem before or is it something to do with the configuration?

    Hi Michael,
    I should have mentioned how we use the DB_TRUNCATE flag in my previous reply; sorry about that.
    The writers truncate databases periodically. During the truncate, the DB handle is not associated with any environment handle (i.e. it is a stand-alone DB), and the following parameters are passed to the DB->open call:
    DB->open(DB *db,
             DB_TXN *txnid,        => NULL
             const char *file,     => file name (absolute DB file path)
             const char *database, => NULL
             DBTYPE type,          => DB_HASH
             u_int32_t flags,      => DB_READ_UNCOMMITTED | DB_TRUNCATE | DB_CREATE | DB_THREAD
             int mode);            => 0
    Also, the DB_DUP flag is set.
    As you rightly pointed out, the `No such file or directory` error is occurring during truncation.
    While a database is being truncated, it cannot be found by other processes trying to open it. We verified this by stopping the writer process (responsible for truncation) and repeatedly opening and closing the databases from the readers: the reader process did not fail. When readers and writers ran simultaneously, we got the `No such file or directory` error. Is there any way to tackle this problem? In our case the writers and readers run independently, so the readers have no way of learning about a truncation from the writers.
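    One application-level workaround is to treat ENOENT from the reader's open as transient and retry with a short delay, since the file reappears as soon as the writer's truncating open completes. Below is a minimal sketch of that retry loop; `try_open_db` is a hypothetical stand-in for the real DB->open call (a stub that fails twice with ENOENT is used here so the sketch is self-contained, not actual BDB API):

    ```c
    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-in for DB->open(): returns 0 on success, or
     * ENOENT while the writer is truncating the file. This stub fails
     * twice and then succeeds, to simulate a truncation window. */
    static int try_open_db(const char *file) {
        static int attempts = 0;
        (void)file;
        if (attempts++ < 2)
            return ENOENT;      /* file momentarily missing */
        return 0;               /* opened successfully */
    }

    /* Retry the open for up to max_tries, sleeping between attempts,
     * so a reader rides out the writer's truncation window. */
    int open_with_retry(const char *file, int max_tries, useconds_t delay_us) {
        int ret = ENOENT;
        for (int i = 0; i < max_tries; ++i) {
            ret = try_open_db(file);
            if (ret != ENOENT)
                break;          /* success, or a non-transient error */
            usleep(delay_us);
        }
        return ret;
    }

    int main(void) {
        int ret = open_with_retry("/vol1/data/example.db", 10, 1000);
        printf("open_with_retry returned %d\n", ret);
        return ret;
    }
    ```

    The retry count and delay would need tuning against how long your truncation actually takes; a bounded loop keeps a genuinely missing file from hanging the reader forever.
    
    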
    Also, we are facing one more issue related to DB_TRUNCATE. Consider the following scenario:
    * The reader process holds a reference to the handle of database X in environment Y at time t1.
    * The writer process opens database X in DB_TRUNCATE mode at time t2 (where t2 > t1).
    * After truncation, the writer process closes the database and joins environment Y.
    * After this, no writes to database X are visible to the reader process.
    * Once the reader process closes database X and re-joins the environment, all the records inserted by the writer process become visible.
    Is this the right behavior? If so, how can we make writes visible to the reader process without closing and re-opening the database in the above scenario?
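    On the visibility question, a handle opened before the truncation keeps referring to the old file, so the reader has to reopen to see the new one. One hedged, application-level workaround (all the names below are hypothetical, not BDB API) is for the writer to bump a shared generation counter on each truncation; the reader compares its cached generation before each access and reopens its handle when the counter has changed. A self-contained sketch under those assumptions:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Writer-maintained truncation generation. In a real deployment this
     * would live in shared memory or a small control database, since the
     * readers and writers are separate processes. */
    static int shared_generation = 0;

    /* Reader-side cached handle; 'generation' records which file version
     * the handle was opened against. */
    typedef struct {
        int generation;
        int is_open;
    } reader_handle;

    /* Hypothetical reopen: in the real application this would close the
     * stale DB handle and call DB->open again inside the environment. */
    static void reopen(reader_handle *h) {
        h->generation = shared_generation;
        h->is_open = 1;
    }

    /* Before each read, reopen if the writer has truncated since we opened. */
    static void ensure_fresh(reader_handle *h) {
        if (!h->is_open || h->generation != shared_generation)
            reopen(h);
    }

    int main(void) {
        reader_handle h = {0, 0};
        ensure_fresh(&h);            /* initial open */
        int g1 = h.generation;

        shared_generation++;         /* writer truncates database X */
        ensure_fresh(&h);            /* reader notices and reopens */
        assert(h.generation == shared_generation);
        assert(h.generation == g1 + 1);
        printf("reader reopened at generation %d\n", h.generation);
        return 0;
    }
    ```

    The cost is one integer comparison per access plus a reopen only when a truncation actually happened, which avoids readers polling the filesystem.
    
    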
    Also, when db_set_errfile (http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_set_errfile.html) was set, we did not get any additional information in the error file.
    Thanks,
    Magesh

  • Error when opening the database with resetlogs

    ORA-00603: ORACLE SERVER SESSION TERMINATED BY FATAL ERROR
    This happens when I open the database with RESETLOGS.

    Well, you definitely need to post more information here.
    Based on what you have posted, all I can reply is:
    ORA-00603: ORACLE server session terminated by fatal error
    Cause: An ORACLE server session is in an unrecoverable state.
    Action: Login to ORACLE again so a new server session will be created

  • Opening a folder with multiple bookmarks causes tabs to open continually.

    I have a folder in the toolbar that contains bookmarks to all the comics I read daily. In the previous version of Firefox, I could right click the folder and "Open all in tabs". Now it opens them but doesn't stop! It repeats opening the tabs from the top to the bottom of the list.

    Start Firefox in [[Safe Mode]] to check if one of the add-ons is causing the problem (switch to the DEFAULT theme: Tools > Add-ons > Appearance/Themes).
    * Don't make any changes on the Safe mode start window.
    See:
    * [[Troubleshooting extensions and themes]]

  • Program crashes without error message when opening a database

    One of our users is experiencing an issue with Discoverer Desktop whereby the program crashes with no error message when he attempts to open a database from a local or network drive. If he logs in to Windows as himself, and runs Discoverer as himself, the program crashes under any Discoverer login; if I log in to Windows using an admin account, and run the program under the same context, the same issue occurs; however, if I log in to Windows as the user and run the program under the context of admin, the error does not occur under any Discoverer login.
    I've tried removing and reinstalling the program but this has not alleviated the issue. The program version is 10.1.2.55 running on Windows XP SP3. If there are any other relevant details required to diagnose the issue I will gladly provide them. Could someone please help me resolve this?
    Edited by: 805312 on Oct 27, 2010 6:25 AM

    Welcome to the forums !
    Has this ever worked? Please see if MOS Doc 197716.1 (Running Discoverer Desktop on Windows 2000 Or Later Releases As a Restricted User Fails With "Failed to update the system registry. Please try using REGEDIT.") can help.
    One option might be to install Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) to identify the cause of the issue.
    HTH
    Srini
