Berkeley DB and DB_LOG_AUTOREMOVE

Hello,
When does DB_LOG_AUTOREMOVE remove log files? Is there some periodicity, or is it done when there are checkpoints, or something else?
In other words, can I have some control on when this happens ?
Thanks
José-Marcio

Bonjour José,
Yes, DB_LOG_AUTOREMOVE only removes log files that are no longer needed, and the removal is tied to checkpoint processing, so you effectively control it by running checkpoints periodically (for example with DB_ENV->txn_checkpoint or the db_checkpoint utility).
Regards,
Bogdan Coman
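
For reference, here is a minimal C sketch of tying log removal to explicit checkpoints. It assumes a release where the flag is still set through DB_ENV->set_flags (4.7 and later spell it DB_LOG_AUTO_REMOVE and set it through DB_ENV->log_set_config), and the environment path is a hypothetical example:

    #include <db.h>

    /* Minimal sketch: enable automatic log removal and trigger it with a checkpoint. */
    int checkpoint_with_autoremove(const char *env_home)
    {
        DB_ENV *dbenv;
        int ret;

        if ((ret = db_env_create(&dbenv, 0)) != 0)
            return ret;

        /* Ask Berkeley DB to delete log files that are no longer needed. */
        (void)dbenv->set_flags(dbenv, DB_LOG_AUTOREMOVE, 1);

        if ((ret = dbenv->open(dbenv, env_home,
            DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_INIT_TXN, 0)) == 0) {
            /* Unneeded log files are removed as part of checkpoint processing,
             * so periodic checkpoints are what give you control over the timing. */
            ret = dbenv->txn_checkpoint(dbenv, 0, 0, 0);
        }

        (void)dbenv->close(dbenv, 0);
        return ret;
    }

If you prefer manual control, leave the flag unset and remove old log files yourself with the db_archive utility (or DB_ENV->log_archive with the DB_ARCH_REMOVE flag) after your own checkpoints.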

Similar Messages

  • Berkeley DB and Tuxedo

    Dear all,
    I am trying to set up Berkeley DB from Sleepycat Software (an open
    source database implementation) as a backend database for Tuxedo with
    X/Open transaction support on a HP-UX 11 System. According to the
    documentation, this should work. I have successfully compiled and
    started the resource manager (called DBRM) and from the logs
    everything looks fine.
    The trouble starts, however, when I try to start services that use
    DBRM. The startup call for opening the database environment ("database
    environment" is a Berkeley DB specific term that refers to a grouping
    of files that are opened together with transaction support) fails with
    the error message
    error: 12 (Not enough space)
    Some digging in the documentation for Berkeley DB reveals the
    following OS specific snippet (DBENV->open is the function call that
    causes the error message above):
    <quote>
    An ENOMEM error is returned from DBENV->open or DBENV->remove.
    Due to the constraints of the PA-RISC memory architecture, HP-UX
    does not allow a process to map a file into its address space
    multiple times. For this reason, each Berkeley DB environment may
    be opened only once by a process on HP-UX, i.e., calls to
    DBENV->open will fail if the specified Berkeley DB environment
    has been opened and not subsequently closed.
    </quote>
    OK. So it appears that a call to DBENV->open does a mmap and that
    cannot happen twice on the same file in the same process. Looking at
    the source for the resource manager DBRM it appears, that there is
    indeed a Berkeley DB environment that is opened (once), otherwise
    transactions would not work. A ps -l on the machine in question looks
    like this (I have snipped a couple of columns to fit into a newsreader):
    UID PID PPID C PRI NI ADDR SZ TIME COMD
    101 29791 1 0 155 20 1017d2c00 84 0:00 DBRM
    101 29787 1 0 155 20 10155bb00 81 0:00 TMS_QM
    101 29786 1 0 155 20 106d54400 81 0:00 TMS_QM
    101 29790 1 0 155 20 100ed2200 84 0:00 DBRM
    0 6742 775 0 154 20 1016e3f00 34 0:00 telnetd
    101 29858 6743 2 178 20 100ef3900 29 0:00 ps
    101 29788 1 0 155 20 100dfc500 81 0:00 TMS_QM
    101 29789 1 0 155 20 1024c8c00 84 0:00 DBRM
    101 29785 1 0 155 20 1010d7e00 253 0:00 BBL
    101 6743 6742 0 158 20 1017d2e00 222 0:00 bash
    So every DBRM is started as its own process and the service process
    (which does not appear above) would be its own process as well. So how
    can it happen that mmap on the same file is called twice in the same
    process? What exactly does tmboot do in terms of startup code? Is it
    just a couple of fork/execs or is there more involved?
    Thanks for any suggestions,
    Joerg Lenneis
    email: [email protected]

    Peter Holditch:
    Joerg,
    Comments in-line.
    Joerg Lenneis wrote:[snip]
    I have no experience of Berkeley DB. Normally the xa_open routine provided by
    your database, and called by tx_open, will connect the server process itself to
    the database. What that means is database specific. I expect in the case of
    Berkeley DB, it has done the mmap for you. I guess the open parameters in your
    code above are also in your OPENINFO string in the Tuxedo ubbconfig file?
    It does not sound to me like you have a problem.
    Fortunately, I do not any more. Your comments and looking at the
    source for the xa interface have put me on the right track. What I did
    not realise is that (as you point out in the paragraph above) a
    Tuxedo service process that uses a resource manager gets the
    following structure linked in:
    const struct xa_switch_t db_xa_switch = {
    "Berkeley DB", /* name[RMNAMESZ] */
    TMNOMIGRATE, /* flags */
    0, /* version */
    __db_xa_open, /* xa_open_entry */
    __db_xa_close, /* xa_close_entry */
    __db_xa_start, /* xa_start_entry */
    __db_xa_end, /* xa_end_entry */
    __db_xa_rollback, /* xa_rollback_entry */
    __db_xa_prepare, /* xa_prepare_entry */
    __db_xa_commit, /* xa_commit_entry */
    __db_xa_recover, /* xa_recover_entry */
    __db_xa_forget, /* xa_forget_entry */
    __db_xa_complete /* xa_complete_entry */
    };
    This is database specific, of course, so it would look different for,
    say, Oracle. The entries in that structure are pointers to various
    functions which are called by Tuxedo on behalf of the server process
    on startup and whenever transaction management is necessary. xa_open
    does indeed open the database, which means opening an environment
    with a mmap somewhere in the case of Berkeley DB. In my code I then
    tried to open the environment again (you are right, the OPENINFO string
    is the same in ubbconfig as in my code) which led to the error message
    posted in my initial message.
    I had previously thought that the service process would contact the
    resource manager via some IPC mechanism for opening the database.
    If I am mistaken, then things look a bit dire. Provided that this is
    even the correct thing to do I could move the tx_open() after the call
    to env->open, but this would still mean there are two mmaps in the
    same process. I also need both calls to i) initiate the transaction
    subsystem and ii) get hold of the pointer DB_ENV *env which is the
    handle for all subsequent DB access.
    In the case of servers using OCI to access Oracle, there is an OCI API that
    allows a connection established through xa to be associated with an OCI
    connection endpoint. I suspect there is an equivalent function provided by
    Berkeley DB?
    There is not, but see my comments below about how to get to the
    Berkeley DB environment.
    [snip]
    I doubt it. xa works because xa routines are called in the same thread as the
    data access routines. Typically, a server thread will run like this...
    xa_start(Tuxedo Transaction ID) /* this is done by the Tux. service dispatcher
    before your code is executed */
    manipulate_data(whatever parameters necessary) /* this is the code you wrote in
    your service routine */
    xa_end() /* Tuxedo calls this after your service calls tpreturn or tpforward */
    The association between the Tuxedo Transaction ID and the data manipulation is
    made by the database because of this calling sequence.
    OK, this makes sense. Good to know this as well ...
    [snip]
    For somebody else trying this, here is the correct way:
    ==================================================
    int
    tpsvrinit(int argc, char *argv[])
    {
         int ret;

         if (tpopen() < 0)
              userlog("error tpopen");
         userlog("startup, opening database\n");
         if ((ret = db_create(&dbp, NULL, DB_XA_CREATE)) != 0) {
              userlog("error %i db_create: %s", ret, db_strerror(ret));
              return -1;
         }
         if ((ret = dbp->open(dbp, "sometablename", NULL, DB_BTREE, DB_CREATE, 0644)) != 0) {
              userlog("error %i db->open", ret);
              return -1;
         }
         return(0);
    }
    ==================================================
    What happens is that the call to the xa_open() function implicitly
    opens the Berkeley DB environment for the database in question, which is
    given in the OPENINFO string in the configuration file. It is an error
    to specify the environment in the call to db_create() in such a
    context. Calls that change the database do not need an environment
    specified, and the calls to begin/commit/abort transactions that are
    normally used by Berkeley DB (which use the environment) are superseded
    by tpopen(), tpclose() and friends. It would be an error to use those
    calls anyway.
    Thank you very much Peter for your comments which have helped a lot.
    Joerg Lenneis
    email: [email protected]
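    A minimal sketch of what a service routine might then look like, following the description above that data calls need no explicit transaction because Tuxedo's XA handling supplies it (the service name, key, and reply handling below are hypothetical):

    #include <string.h>
    #include <atmi.h>
    #include <userlog.h>
    #include <db.h>

    extern DB *dbp;   /* opened in tpsvrinit() with DB_XA_CREATE, as above */

    /* Hypothetical service: stores the request buffer under a fixed key.
     * The transaction was begun by the service dispatcher via xa_start(),
     * so no DB_TXN handle is passed to the put. */
    void STOREDATA(TPSVCINFO *rqst)
    {
        DBT key, data;
        int ret;

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = "example-key";
        key.size = sizeof("example-key");
        data.data = rqst->data;
        data.size = rqst->len;

        if ((ret = dbp->put(dbp, NULL, &key, &data, 0)) != 0)
            userlog("error %i db->put: %s", ret, db_strerror(ret));

        tpreturn(ret == 0 ? TPSUCCESS : TPFAIL, 0, rqst->data, 0L, 0);
    }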

  • Oracle Mobile Server with SQLLite/Berkeley Db and dbsql

    Hi all,
    I am not sure if I am correct here, but hopefully I am.
    In the past we have had Oracle Mobile Server with Oracle Lite.
    We decided to switch to the new Mobile Server because Oracle Web-to-Go is no longer supported and is incompatible with Windows 7. My administrator did the migration of the Mobile Server, but the migration utility reported that the available applications are incompatible.
    So I decided to create a completely new publication with a Java application. The new publication contains only one publication item. For the first tests I simply wanted to spool out the data contained in my local database.
    In the bin directory of the sqlite folder I can find a utility named "dbsql". I understood it this way: I can attach to an existing database file and take a look into that database.
    If I call dbsql.exe BerkeleyTest, all seems to be OK. But if I try to select some data from that file, I only get the error message that the database is in the wrong format or encrypted. What am I doing wrong there?
    Am I right that the SQL interface (I need that interface because I don't want to rewrite the data access layer of my app) is only available in SQLite but not in "BerkeleyDb"?
    Is anyone here to help me a little bit with my problem here?
    Regards!
    Martin

    I do not know much about Oracle Mobile Server with Oracle Lite; does it use SQLite or BDB?  I do know that databases created by SQLite cannot be read by Berkeley DB SQL (of which dbsql.exe is a part), and databases created by Berkeley DB SQL cannot be read by SQLite.  Also, databases created by Berkeley DB outside of the SQL API cannot be read by the BDB SQL API.  You can open BDB SQL databases with BDB outside of the SQL API, but I would not recommend that outside of a few BDB utilities described in the documentation.  So if your BerkeleyTest database was created by SQLite or by BDB outside of the SQL API, then it makes sense that dbsql.exe is returning an error when trying to read it.
    Calling dbsql.exe BerkeleyTest does not open the database; that happens when the first operation is performed on it, which is why you did not get an error until you tried to select something.
    Lauren Foutz

  • Berkeley DB and DB optimization

    Hi,
    I have been testing BerkeleyDB-4.7.25 with 16 *.bdb files using BTREE on a 64-bit Linux server. Each *.bdb file reaches approximately a size of 3.2 GB.
    I have run a set of operations that include puts/gets/updates/deletes
    I would like to ask a couple of questions, please:
    1)
    Is there any Berkeley DB tool/function to optimize the *.bdb files for/after deletion?
    2)
    I have been running db_stat -e (please find the output of db_stat below), trying to improve some of the DB_CONFIG parameters.
    set_flags DB_TXN_WRITE_NOSYNC
    set_cachesize 0 2147483648 1
    mutex_set_max 1000000
    set_tx_max 500000
    set_lg_regionmax 524288
    set_lg_bsize 4194304
    set_lg_max 20971520
    set_lk_max_locks 10000
    set_lk_max_lockers 10000
    set_lk_max_objects 10000
    I have increased the cache size, but it does not seem to be helping to improve the operation response times.
    I would really appreciate any help.
    Would the use of DB_SYSTEM_MEM (create the shared regions in system shared memory) help ?
    Would the preallocation of the db files help ?
    Would the increase of the log buffer help ?
    Would the size of the log help (based on the values related to data written since last checkpoint in db_stat) ?
    Could you please help ?
    Thanks,
    Mariella
    This is the output of db_stat -e:
    0x40988 Log magic number
    14 Log version number
    4MB Log record cache size
    0 Log file mode
    20Mb Current log file size
    72M Records entered into the log (72944260)
    92GB 761MB 385KB 636B Log bytes written
    1GB 805MB 40KB 747B Log bytes written since last checkpoint
    6596982 Total log file I/O writes
    0 Total log file I/O writes due to overflow
    7295 Total log file flushes
    39228 Total log file I/O reads
    4749 Current log file number
    18526992 Current log file offset
    4748 On-disk log file number
    20970984 On-disk log file offset
    1 Maximum commits in a log flush
    1 Minimum commits in a log flush
    4MB 512KB Log region size
    303613 The number of region locks that required waiting (0%)
    100 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    10000 Maximum number of locks possible
    10000 Maximum number of lockers possible
    10000 Maximum number of lock objects possible
    40 Number of lock object partitions
    16 Number of current locks
    274 Maximum number of locks at any one time
    7 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    100 Number of current lockers
    108 Maximum number of lockers at any one time
    16 Number of current lock objects
    176 Maximum number of lock objects at any one time
    4 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    118M Total number of locks requested (118356655)
    118M Total number of locks released (118356639)
    119802 Total number of locks upgraded
    16 Total number of locks downgraded
    20673 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    500000 Transaction timeout value
    0 Number of transactions that have timed out
    7MB 768KB The size of the lock region
    5019 The number of partition locks that required waiting (0%)
    328 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    280 The number of locker allocations that required waiting (0%)
    958 The number of region locks that required waiting (0%)
    4 Maximum hash bucket length
    2GB Total cache size
    1 Number of caches
    1 Maximum number of caches
    2GB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    150M Requested pages found in the cache (92%)
    12M Requested pages not found in the cache (12855704)
    8449044 Pages created in the cache
    12M Pages read into the cache (12855704)
    20M Pages written from the cache to the backing file (20044721)
    32M Clean pages forced from the cache (32698230)
    1171137 Dirty pages forced from the cache
    9227380 Dirty pages written by trickle-sync thread
    505880 Current total page count
    356352 Current clean page count
    149528 Current dirty page count
    262147 Number of hash buckets used for page location
    184M Total number of times hash chains searched for a page (184542797)
    34 The longest hash chain searched for a page
    945M Total number of hash chain entries checked for page (945465289)
    430 The number of hash bucket locks that required waiting (0%)
    34 The maximum number of times any hash bucket lock was waited for (0%)
    5595 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    34M The number of page allocations (34375350)
    76M The number of hash buckets examined during allocations (76979039)
    18 The maximum number of hash buckets examined for an allocation
    33M The number of pages examined during allocations (33869157)
    4 The max number of pages examined for an allocation
    2 Threads waited on page I/O
    Pool File: file_p10.bdb
    4096 Page size
    0 Requested pages mapped into the process' address space
    9376233 Requested pages found in the cache (92%)
    800764 Requested pages not found in the cache
    526833 Pages created in the cache
    800764 Pages read into the cache
    1179504 Pages written from the cache to the backing file
    Pool File: file_p3.bdb
    4096 Page size
    4658/8873223 File/offset for last checkpoint LSN
    Thu Apr 30 22:00:23 2009 Checkpoint timestamp
    0x806584b8 Last transaction ID allocated
    500000 Maximum number of active transactions configured
    0 Active transactions
    8 Maximum active transactions
    6653112 Number of transactions begun
    60327 Number of transactions aborted
    6592785 Number of transactions committed
    144048 Snapshot transactions
    257302 Maximum snapshot transactions
    0 Number of transactions restored
    185MB 24KB Transaction region size
    90116 The number of region locks that required waiting (0%)
    Active transactions:
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    129MB 720KB Mutex region size
    108 The number of region locks that required waiting (0%)
    4 Mutex alignment
    200 Mutex test-and-set spins
    1000000 Mutex total count
    331261 Mutex free count
    668739 Mutex in-use count
    781915 Mutex maximum in-use count
    Mutex counts
    331259 Unallocated
    16 db handle
    1 env dblist
    2 env handle
    1 env region
    43 lock region
    274 logical lock
    1 log filename
    1 log flush
    2 log region
    16 mpoolfile handle
    16 mpool filehandle
    17 mpool file bucket
    1 mpool handle
    262147 mpool hash bucket
    262147 mpool buffer I/O
    1 mpool region
    1 mutex region
    1 twister
    1 txn active list
    1 transaction checkpoint
    144050 txn mvcc
    1 txn region

    user11096811 wrote:
    i have same question
    What is the question exactly? What DB release are you using?
    user11096811 wrote:
    the app throws com.sleepycat.db.LockNotGrantedException. what should i do?
    The LockNotGrantedException being thrown is a subclass of DeadlockException.
    A LockNotGrantedException is thrown when a lock requested using the Environment.getLock or Environment.lockVector methods, where the noWait flag or lock timers were configured, could not be granted before the wait-time expired.
    Additionally, LockNotGrantedException is thrown when a Concurrent Data Store database environment configured for lock timeouts was unable to grant a lock in the allowed time.
    Additionally, LockNotGrantedException is thrown when lock or transaction timeouts have been configured and a database operation has timed out. Applications can handle all deadlocks by
    catching the DeadlockException. You can read more on how to configure the locking subsystem and resolve deadlocks in "The Locking Subsystem": http://www.oracle.com/technology/documentation/berkeley-db/db/gsg_txn/JAVA/lockingsubsystem.html
    Thanks,
    Bogdan Coman
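    The discussion above is about the Java API. As a C-level illustration of the same retry pattern (a sketch with hypothetical key/data handles, not code from this thread), the DB_LOCK_DEADLOCK and DB_LOCK_NOTGRANTED return codes play the role of DeadlockException and LockNotGrantedException:

    #include <db.h>

    /* Minimal sketch: retry a transactional put when the locking subsystem
     * reports a deadlock or a lock that could not be granted in time. */
    int put_with_retry(DB_ENV *dbenv, DB *dbp, DBT *key, DBT *data, int max_retries)
    {
        DB_TXN *txn;
        int ret = 0, attempt;

        for (attempt = 0; attempt < max_retries; attempt++) {
            if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, 0)) != 0)
                return ret;

            if ((ret = dbp->put(dbp, txn, key, data, 0)) == 0)
                return txn->commit(txn, 0);

            /* On deadlock or timed-out lock, abort and try again. */
            (void)txn->abort(txn);
            if (ret != DB_LOCK_DEADLOCK && ret != DB_LOCK_NOTGRANTED)
                break;
        }
        return ret;
    }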

  • Berkeley DB and DRBD

    Hello,
    Do you have any advice about the use of Berkeley DB with DRBD (on Linux clusters)?
    Things I must have in mind, etc.
    Note : Currently, my Berkeley DB databases are stored on a replicated storage managed by DRBD on a two-node Linux cluster. These DBs are handled inside an environment using transactions.
    Thanks

    Hello,
    The default is memory-mapped files. For the BDB SQL API, we do not
    yet support DB_SYSTEM_MEM for allocating memory from system
    shared memory. For more details see:
    BDB SQL Performance
    The "Shared memory regions" documentation at:
    http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/env_region.html
    provides a further discussion of memory mapped files vs DB_SYSTEM_MEM.
    Thanks,
    Sandra
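    For the base (non-SQL) API, requesting system shared memory instead of memory-mapped files looks roughly like the following sketch; the environment path and the base segment ID passed to set_shm_key are hypothetical assumptions:

    #include <db.h>

    /* Minimal sketch: place the environment's shared regions in system
     * shared memory (DB_SYSTEM_MEM) rather than in memory-mapped files. */
    int open_env_sysmem(DB_ENV **dbenvp, const char *env_home)
    {
        DB_ENV *dbenv;
        int ret;

        if ((ret = db_env_create(&dbenv, 0)) != 0)
            return ret;

        /* On systems using X/Open-style shared memory, a base segment ID
         * must be set before the environment is opened. */
        if ((ret = dbenv->set_shm_key(dbenv, 664)) != 0 ||
            (ret = dbenv->open(dbenv, env_home,
                DB_CREATE | DB_SYSTEM_MEM | DB_INIT_LOCK | DB_INIT_LOG |
                DB_INIT_MPOOL | DB_INIT_TXN, 0)) != 0) {
            (void)dbenv->close(dbenv, 0);
            return ret;
        }

        *dbenvp = dbenv;
        return 0;
    }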

  • A tale of a Berkeley DB project: Success and future developements

    The site http://www.electre.com is an on-line catalog of all published French books (close to 1 million of them), from the 1970’s to today, with pricing and availability information for professional users.
    The entire book database and search engine was developed with Berkeley DB. The site runs relatively problem free, and owes much to the stability and quality of the Berkeley DB source code. Moreover, the software is sold as an intranet version and runs on remote configurations that are out of our direct control. And yet, we don’t get many problem reports from these remote sites.
    Development started in 2002. Using Berkeley DB was my decision. It was, with hindsight, a very good decision, but it would be hypocritical to affirm I knew all along it would work out so well. It was partly a gamble, and partly a calculated risk. I am not ashamed to confess that when I finally made the decision to go with Berkeley DB, there were some key areas in the technical design I had no clue how to tackle. Needless to say, any flaws that remain in this area are due to lack of knowledge and lack of foresight on my part, not because of some flaw in Berkeley DB.
    What convinced me 4 years ago to “go for it” was Sleepycat’s no-nonsense approach to support from guys like Michael Cahill, Keith Bostic, John Merrells, Dave Seglau, Michael Ubell, Liz Pennel and others. Support was sending a message to [email protected]. Replies were never more than a couple of days away (including time zone differences). There was always something comforting when you read a reply from someone who says he “owns your problem” (especially when you also saw that someone’s name in the source code <g>). The majority of support questions were asked during evaluation and product development, before even a single dime was paid to Sleepycat. (Now that we sell the product as a closed source Intranet solution, we gladly pay the required license fees, of course). The fact that the project development went so well was in no small part thanks to the competence and reliability of the people I communicated with. I never expressed my thanks, so now is as good a time as any to do so. Thanks, guys (and gals).
    When Sleepycat was bought by Oracle, it was rumored that the goal was ultimately “to kill the product”. I thought that was ridiculous and an exaggeration by some paranoid people (especially from the MySQL camp). I understand that Oracle is a big company, and that things are done differently now. Besides, how do you “kill” a product? I didn’t know until I hit my first support question “post-Sleepycat”. So now I do: you kill a product not directly, but by a process of what I call “incremental discouragement”. It’s a subtle, 3-pronged approach:
    1)     Erect bureaucratic barriers between competent people and your customers. No more [email protected], but a slow and unwieldy web site and a support procedure hiding behind numbers and requiring a 2-hour training session. I’m not sure how I’m going to justify to management a 2-hour training session for a product we’ve been using for the past 4 years, which I now master sufficiently to reduce my support requests to about 3 a year, and for which the only difference is a change of company name. There’s a “free” forum, but the primary motivation for it seems to be to allow users to support themselves. Yes, questions are answered on the forum… but compared to the level of support I’ve been accustomed to from the same people in the past, I can’t help noticing something has changed… and not for the better. Some questions aren’t even acknowledged, though they are read over 300 times.
    2)     Add useless levels of indirection: when we sold an intranet, we asked Sleepycat to mail us an invoice, which we got within the week. Now we ask Oracle the same thing: the last time we did this was on November 27 of last year, and we are still waiting for someone from “the Belgian office” to contact us. The only reply we got was from someone telling us our request would be passed on: a level of indirection not exactly adding value.
    3)     Increase useless Information Noise: www.sleepycat.com was all about Berkeley DB and related products. Look at any page on the www.oracle.com site supposedly about Berkeley DB and count the number of items and links pointing to products that have nothing to do with Berkeley DB. It’s like that famous analogy: you’re only interested in a banana, and you must take the whole gorilla.
    Berkeley DB is the fire behind the electre.com site. Our customers are satisfied, so this is not going to change for this version. But this developer is now convinced to view Berkeley DB as a medium to long-term liability. The fire will be kept alive for the remainder of this electre.com version, but for future versions and other similar projects, other solutions will have to be found. These other solutions will involve a similar gamble as the one I made a few years ago with Berkeley DB. My only hope is to meet similar competence, friendliness and professionalism, but without the overhead and bureaucracy of an organization for which the product is not even part of the core business.

    Vincent,
    I hear the frustration and concern in your post. I'd like to assure you that we're still here doing what we've done in the past and that your business is valuable to us, don't give up yet! :)
    Moving from a company of 30 people to a company of >50,000 people has been an interesting transition for us as well. I have to say that honestly, things are pretty good here inside the realm of Oracle. Of course, some things were bound to change. Oracle has processes in place to acquire and grow companies and that is their intended goal with Sleepycat's Berkeley DB products. As such, some customer facing processes have changed and our home on the internet has been incorporated into the oracle.com site. The interesting thing to me is that even within Oracle's infrastructure you, the customer, are not really much further away from our engineers and support staff than before. Your suggestions, concerns, questions and bugs go directly to us via the same people pre-acquisition for the most part, although that staff is growing. With these OTN forums you have a way to speak directly to most of the Sleepycat staff, 99% of whom are still here at Oracle nearly one year post acquisition. So, I'd argue that we've done a great job of keeping that small company feeling in one of the largest software providers around.
    Our web content is 80% identical to that which was on the sleepycat.com sites. Bookmarking one or two locations within Oracle and OTN will get you straight to that information. Sure, there are other product references floating around on the same page and we hope that over time Berkeley DB products and other Oracle products complement each other when used in combination. In general, I believe that most of the web experience is identical with different style sheets and a few extra links.
    Support for eval customers continues to be something we provide free of charge via these forums and in private email conversations with our staff. This is unchanged. Ownership of customer issues is much the same as well, and we do still use '[email protected]' for tracking issues once opened. We agree that the '[email protected]' method of communication was simple and highly effective for us and our customers. We are working with Oracle's support infrastructure to consider adopting similar methods. This is another reason Oracle purchased Sleepycat: to learn from our efficient, effective operational model. This is ongoing; in the meantime we still track issues behind the scenes the same way as before, with the added information provided by TARs, so that we fit into the overall Oracle support infrastructure.
    The rumors of our early demise are highly exaggerated. Berkeley DB products are alive and well in Oracle. Just look at the releases we've made in the past 12 months.
    As for our three pronged attack. ;-)
    1. The distance between first contact with support and helpful information has, in some ways, increased. This is due to the Oracle infrastructure for support that manages all Oracle products. The forums are the replacement for the discussion email lists we managed at sleepycat.com. We're managing a much larger amount of traffic than in the past and most of it is done with the same Berkeley DB engineering team you've come to know and love. Maybe we need to make it more obvious when we answer a question by having a signature indicating who we are, sometimes that's not obvious. As to the particular question you posed, I don't know what happened but the same thing can happen on an email list. My apologies for the lack of a response.
    2. Oracle has a huge sales force worldwide. We're only beginning to fully function within this new infrastructure. Sorry we dropped the ball on your sales inquiry, rest assured that we're interested in all commercial deals. Once connected into the Belgium office you'll have a direct relationship, as before, with a sales rep for your use of Berkeley DB.
    3. I've already talked about this. Oracle has many products, we're just one of those. We have to fit within the overall site structure. For the most part I'd give us an A- or B+ for transitioning our sleepycat.com information into oracle.com and OTN.com. If you have suggestions as to how we might improve that, drop me a line I'm all ears.
    In general I'd like to believe that you and other developers like you will find that our place in Oracle doesn't prevent you from choosing Berkeley DB. Certainly it is a bigger company with some additional process overhead, but hopefully not so much that it prevents you from remaining a loyal customer and someone who would recommend us to others.
    regards,
    -greg
    Gregory Burd [email protected]
    Product Manager, Berkeley DB/JE/XML Oracle Corporation

  • The Apple Store in Berkeley told me that my iPhone 5 hasn't got a Retina display, but I bought it in an Apple Store, and now what?

    Hey guys, I bought my iPhone 5 in Germany in an Apple Store; I'm an exchange student living in Berkeley, CA now. Last week my iPhone fell and there is a problem under the display, so I went to the Apple Store in Berkeley, and they couldn't fix it. In addition, they told me the display of my iPhone isn't the Retina display from Apple. They gave me the address of a repair store in downtown Berkeley to fix it. I didn't understand why "the Apple Store" can't fix the iPhone?

    The warranty on the iPhone is NOT international.  It is only valid in the country of purchase (the EU is considered one country for warranty purposes).  You will need to send your phone back home to someone in Germany or another EU country to bring the phone into Apple.  When it's done being serviced they can send it back to you.

  • Berkeley DB Sessions at Oracle OpenWorld Sept 19 - 23

    All,
    Just posting some of the Berkeley DB related sessions at Oracle OpenWorld this year. Hope to see you there.
    Session ID:      S317033
    Title:      Oracle Berkeley DB: Enabling Your Mobile Data Strategy
    Abstract:      Mobile data is everywhere. Deploying applications and updates, as well as collecting data from the field and synchronizing it with the Oracle Database server infrastructure, is everyone's concern today in IT. Mobile devices, by their very nature, are easily damaged, lost, or stolen. Therefore, enabling secure, rapid mobile deployment and synchronization is critically important. By combining Oracle Berkeley DB 11g and Oracle Database Lite Mobile Server, you can easily link your mobile devices, users, applications, and data with the corporate infrastructure in a safe and reliable manner. This session will discuss several real-world use cases.
    Speaker(s):
    Eric Jensen, Oracle, Principal Product Manager
    Greg Rekounas, Rekounas.org,
    Event:      JavaOne and Oracle Develop
    Stream(s):      ORACLE DEVELOP, DEVELOP
    Track(s):      Database Development
    Tags:      Add Berkeley DB
    Session Type:      Conference Session
    Session Category:      Case Study
    Duration:      60 min.
    Schedule:      Wednesday, September 22, 11:30 | Hotel Nikko, Golden Gate
    Session ID:      S318539
    Title:      Effortlessly Enhance Your Mobile Applications with Oracle Berkeley DB and SQLite
    Abstract:      In this session, you'll learn the new SQL capabilities of Oracle Berkeley DB 11g. You'll discover how Oracle Berkeley DB is a drop-in replacement for SQLite; applications get improved performance and concurrency without sacrificing simplicity and ease of use. This hands-on lab explores seamless data synchronization for mobile applications using the Oracle Mobile Sync Server to synchronize data with the Oracle Database. Oracle Berkeley DB is an OSS embedded database that has the features, options, reliability, and flexibility that are ideal for developing lightweight commercial mobile applications. Oracle Berkeley DB supports a wide range of mobile platforms, including Android.
    Speaker(s):
    Dave Segleau, Oracle, Product Manager
    Ashok Joshi, Oracle, Senior Director, Development
    Ron Cohen, Oracle, Member of Technical Staff
    Eric Jensen, Oracle, Principal Product Manager
    Event:      JavaOne and Oracle Develop
    Stream(s):      ORACLE DEVELOP, DEVELOP
    Track(s):      Database Development
    Tags:      Add 11g, Berkeley DB, Embedded Development, Embedded Technology
    Session Type:      Hands-on Lab
    Session Category:      Features
    Duration:      60 min.
    Schedule:      Wednesday, September 22, 16:45 | Hilton San Francisco, Imperial Ballroom A
    Session ID:      S317032
    Title:      Oracle Berkeley DB: Adding Scalability, Concurrency, and Reliability to SQLite
    Abstract:      Oracle Berkeley DB and SQLite: two industry-leading libraries in a single package. This session will look at use cases where the Oracle Berkeley DB library's advantages bring strong enhancements to common SQLite scenarios. You'll learn how Oracle Berkeley DB's scalability, concurrency, and reliability significantly benefit SQLite applications. The session will focus on Web services, multithreaded applications, and metadata management. It will also explore how to leverage the powerful features in SQLite to maximize the functionality of your application while reducing development costs.
    Speaker(s):
    Jack Kreindler, Genie DB,
    Scott Post, Thomson Reuters, Architect
    Dave Segleau, Oracle, Product Manager
    Event:      JavaOne and Oracle Develop
    Stream(s):      ORACLE DEVELOP, DEVELOP
    Track(s):      Database Development
    Tags:      Add Berkeley DB
    Session Type:      Conference Session
    Session Category:      Features
    Duration:      60 min.
    Schedule:      Monday, September 20, 11:30 | Hotel Nikko, Nikko Ballroom I
    Session ID:      S317038
    Title:      Oracle Berkeley DB Java Edition: High Availability for Your Java Data
    Abstract:      Oracle Berkeley DB Java Edition is the most scalable, highest performance Java application data store available today. This session will focus on the latest features, including triggers and sync with Oracle Database as well as new performance and scalability enhancements for high availability, with an emphasis on real-world use cases. We'll discuss deployment, configuration, and maximized throughput scenarios. You'll learn how you can use Oracle Berkeley DB Java Edition High Availability to increase the reliability and performance of your Java application data storage.
    Speaker(s):
    Steve Shoaff, UnboundID Corp, CEO
    Alex Feinberg, Linkedin,
    Ashok Joshi, Oracle, Senior Director, Development
    Event:      JavaOne and Oracle Develop
    Stream(s):      ORACLE DEVELOP, DEVELOP
    Track(s):      Database Development
    Tags:      Add Berkeley DB
    Session Type:      Conference Session
    Session Category:      Features
    Duration:      60 min.
    Schedule:      Thursday, September 23, 12:30 | Hotel Nikko, Mendocino I / II
    Session ID:      S314396
    Title:      Java SE for Embedded Meets Oracle Berkeley DB at the Edge
    Abstract:      This session covers a special case of edge-to-enterprise computing, where the edge consists of embedded devices running Java SE for Embedded in combination with Oracle Berkeley DB Java Edition, a widely used embedded database. The approach fits a larger emerging trend in which edge embedded devices are "smart"--that is, they come equipped with an embedded (in-process) database for structured persistent storage of data as needed. In addition, these devices may optionally come with a thin middleware layer that can perform certain basic data processing operations locally. The session highlights the synergies between both technologies and how they can be utilized. Topics covered include implementation and performance optimization.
    Speaker(s):      Carlos Lucasius, Oracle , Java Embedded Engineering
    Carlos Lucasius works in the Java Embedded and Real-Time Engineering product team at Oracle Corporation, where he is involved in development, testing, and technical support. Prior to joining Sun (now Oracle), he worked as a consultant to IT departments at various companies in both North America and Europe; specific application domains he was involved in include artificial intelligence, pattern recognition, advanced data processing, simulation, and optimization as applied to complex systems and processes such as intelligent instruments and industrial manufacturing. Carlos has presented frequently at scientific conferences, universities/colleges, and corporations across North America and Europe. He has also published a number of papers in refereed international journals covering applied scientific research in the abovementioned areas.
    Event:      JavaOne and Oracle Develop
    Stream(s):      JAVAONE
    Track(s):      Java for Devices, Card, and TV
    Session Type:      Conference Session
    Session Category:      Case Study
    Duration:      60 min.
    Schedule:      Tuesday, September 21, 13:00 | Hilton San Francisco, Golden Gate 1
    Session ID:      S313952
    Title:      Developing Applications with Oracle Berkeley DB for Java and Java ME Smartphones
    Abstract:      Oracle Berkeley DB is a high-performance, embeddable database engine for developers of mission-critical systems. It runs directly in the application that uses it, so no separate server is required and no human administration is needed, and it provides developers with fast, reliable, local persistence with zero administration. The Java ME platform provides a new, rich user experience for cell phones comparable to the graphical user interfaces found on the iPhone, Google Android, and other next-generation cell phones. This session demonstrates how to use Oracle Berkeley DB and the Java ME platform to deliver rich database applications for today's cell phones.
    Speaker(s):      Hinkmond Wong, Oracle, Principal Member of Technical Staff
    Hinkmond Wong is a principal engineer with the Java Micro Edition (Java ME) group at Oracle. He was the specification lead for the Java Community Process (JCP) Java Specification Requests (JSRs) 36, 46, 218, and 219, Java ME Connected Device Configuration (CDC) and Foundation Profile. He holds a B.S.E. degree in Electrical Engineering from the University of Michigan (Ann Arbor) and an M.S.E. degree in Computer Engineering from Santa Clara University. Hinkmond's interests include performance tuning in Java ME and porting the Java ME platform to many types of embedded devices. His recent projects include investigating ports of Java ME to mobile devices such as Linux/ARM-based smartphones; he is also the tech lead of the CDC and Foundation Profile libraries. He is the author of the book "Developing Jini Applications Using J2ME Technology".
    Event:      JavaOne and Oracle Develop
    Stream(s):      JAVAONE
    Track(s):      Java ME and Mobile, JavaFX and Rich User Experience
    Tags:      Add Application Development, Java ME, Java Mobile, JavaFX Mobile, Mobile Applications
    Session Type:      Conference Session
    Session Category:      Tips and Tricks
    Duration:      60 min.
    Schedule:      Monday, September 20, 11:30 | Hilton San Francisco, Golden Gate 3
    I think I have them all. If I have missed any, please reply and I can update the list, or just post the info in the reply.
    Thanks,
    Greg Rekounas

    Are there any links to access these seminars?

  • How to read the BDB log ... and other questions

    I am using a bdb database interface in the application OpenLDAP. When the bdb database is established there it creates, in addition to the data storage files, a log file (log.0000000001). "file log.0000000001" reports it to be a binary file. How does one read that log?
    I have asked this question on the OpenLDAP forum and was advised it can be read using tools provided by Oracle to support Berkeley DB, and was further advised to go to the Oracle Berkeley DB site. Well, I have done that ... looked around for evidence of any such "tools", but have found nothing.
    Also, I was advised there (in the OpenLDAP forum) that having that log file in the same directory with the data files is not a good idea, and that it should be on a different spindle for performance purposes. I have looked at the BDB reference manual online here but find no configuration options to move that log file to a different location.
    Help? Thanks.

    Hi Robert,
    The information about setting log directories can be found here:
    http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/env_set_lg_dir.html
    General information about log files that you may want to read about:
    http://www.oracle.com/technology/documentation/berkeley-db/db/gsg_txn/C/index.html
    You can use db_printlog to display the log files:
    http://www.oracle.com/technology/documentation/berkeley-db/db/utility/db_printlog.html
    The above link will also point you to a place to review the output. The db_printlog utility should be installed as part of your distribution.
    Ron
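    As a small sketch of the two pieces above, assuming a hypothetical environment home of /var/openldap-data: a DB_CONFIG line (placed in the environment home) moves the transaction logs onto a different spindle, and db_printlog dumps the binary log in readable form.

    # DB_CONFIG
    set_lg_dir /logs/openldap-bdb-logs

    $ db_printlog -h /var/openldap-data > log-dump.txt

    Note that the environment must be re-opened, and any existing log files moved into the new directory, for a changed log location to take effect.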

  • About Berkeley I/O.

    I am trying to understand the I/O characteristics of Berkeley DB. Can anyone confirm my observation that Berkeley DB does all of its external I/O (user data, metadata, and anything else) through routines defined in ./src/os/os_rw.cpp?
    Besides that, I measured the total amount of data written by Berkeley DB, and what surprises me is that the result is not constant under the same circumstances (empty db, same input, same cache size, single thread). How could this happen? Is there any random algorithm in the memory pool? I could not figure out whether it is because of the memory pool, since I have not found the exact code of the replacement algorithm.

    Hello,
    In order to answer your questions please let me know the BDB version
    and platform you are running and what your program is trying to
    accomplish. Which API and access method is used? Please detail
    the specific tests you are making, results and how you measure them.
    It sounds like you might be measuring caching statistics and when
    data is flushed to disk. In that case the db_stat -m utility
    would be good information to capture, but I'm not sure that is
    the goal. You can start with the "Selecting a cache size" documentation
    in this case:
    http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/general_am_conf.html#am_conf_cachesize
    Thanks,
    Sandra
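    If the goal is indeed to look at cache behaviour, a minimal sketch of sizing the cache and then capturing the memory-pool statistics (the programmatic counterpart of db_stat -m) might look like this; the 512 MB figure and the environment flags are assumptions, not a recommendation:

    #include <db.h>

    /* Minimal sketch: size the cache before DB_ENV->open, then dump the
     * memory-pool counters (cache hits/misses, pages read and written). */
    int open_env_with_cache(DB_ENV *dbenv, const char *env_home)
    {
        int ret;

        /* 0 GB + 512 MB, in a single cache region. */
        if ((ret = dbenv->set_cachesize(dbenv, 0, 512 * 1024 * 1024, 1)) != 0)
            return ret;

        if ((ret = dbenv->open(dbenv, env_home,
            DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN, 0)) != 0)
            return ret;

        /* Print the same kind of information "db_stat -m" reports. */
        return dbenv->memp_stat_print(dbenv, DB_STAT_ALL);
    }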

  • Re-licensing Berkeley DB back to

    Hi guys,
    I'm not sure if this has already been discussed here or even if this is right place to request such thing. If there is a better place, just point me there, please.
    My question is simple -- can upstream even think about re-licensing Berkeley DB 6.0 back to LGPLv2? It's not hard to imagine how many licensing incompatibility problems were introduced by re-licensing Berkeley DB under AGPLv3. There are so many projects that use Berkeley DB and cannot be re-licensed themselves, which basically means Berkeley DB started to die with 6.0, because such projects will need to switch to some alternative. If this was desired by upstream, to remove BDB from real life, then I can understand it. But anyway, I'd like to hear whether there is any way to convince upstream to re-license it back.
    Regards,
    Honza

    I am Jesús Cea, maintainer of the Python Berkeley DB bindings.
    The bindings are licensed as BSD 3-clause, so my code is incompatible with BDB 6.0. No Python program, under any license, can use BDB 6.0, then. I can't change the license because it is inherited code and it is impossible to contact every past author.
    I plan to do a release that ONLY allows linking with BDB 6.0 IF you define an environment variable saying "yes, I have an Oracle commercial license for this". Ugly and hacky, but hopefully safe for everybody.
    Personally I use Berkeley DB HA and two-phase commit A LOT in my internal projects. It is my main mode, in fact. I use it for everything, from mail storage to application deployment.

  • Some php + berkeley db questions

    I have some questions about using php with bdb.
    First, I compiled Berkeley db and then linked php to it using the configure directive.
    Then I accessed bdb through the php standard dba_* APIs. This works, but it seems like locking is broken. The php documentation (and common sense) says that calls to dba_open() with a write lock will block when another such call has succeeded in another process. But my tests show many concurrent processes all getting write locks with no problem.
    So then I compiled the native php_db4 extension that ships with bdb. I tried to use the API documented here:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/ext/php.html
    Can anybody direct me to a more complete (and more correct), more fully documented version of this API? For instance, the put method is not shown in the Db4 class, but it does exist.
    I'm trying to infer how the php API works from the C API docs, but it's not very easy, particularly when it comes to the error codes returned. Is there a db_strerror in the php API?
    I can get the simple demos that come in the db4 php dir to work, but what I need is a locking environment, much like the one documented here:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/cam/intro.html
    However, when I try to open the DBENV with the DB_INIT_CDB and DB_INIT_MPOOL flags, as directed, the call fails in the php. I cannot figure out why, or how to get an error code or message I can debug.
    Any help will be much appreciated. If you could just point me at any real-world examples of php and berkeley db that would be a great start.

    Hi,
    As far as I'm aware, there is no extra documentation on php & BDB (maybe just php.net :) ). Also, I don't know whether there is anyone who has published his source code.
    What kind of application do you want to build? I think a good option for the moment is to try the BDB XML version ( Berkeley DB XML and http://www.oracle.com/technology/products/berkeley-db/xml/index.html ), since there are many cases in which the BDB XML product is used via PHP, and this is why you can get better support for it. I think you can try to achieve the same approach using XML; please let me know if you agree or not.
    BDB XML's PHP APIs are mapped over the C++ API, and you'll have the ability to use XML and XQuery rather than tables and SQL.
    If you can point me to a specific issue in the PHP APIs for BDB, and provide me with test cases, I can try to work them out. Also, in the next weeks, I'll try to have a look at the PHP APIs in my spare time, and maybe I'll be able to work on supporting the latest BDB APIs. If there is somebody working on a PHP app who is willing to help with testing and maintaining the PHP APIs, please post here.
    Regards,
    Bogdan Coman, Oracle
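    For reference, the Concurrent Data Store environment the original question describes (DB_INIT_CDB plus DB_INIT_MPOOL) looks like this at the C level, which the php_db4 extension wraps; the environment path and database name are hypothetical examples:

    #include <db.h>

    /* Minimal sketch: open a Concurrent Data Store (CDS) environment and a
     * database in it; CDS serializes the single writer against readers
     * without full transactional logging. */
    int open_cds(DB_ENV **dbenvp, DB **dbpp, const char *env_home)
    {
        DB_ENV *dbenv;
        DB *dbp;
        int ret;

        if ((ret = db_env_create(&dbenv, 0)) != 0)
            return ret;
        if ((ret = dbenv->open(dbenv, env_home,
            DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL, 0)) != 0)
            goto err;

        if ((ret = db_create(&dbp, dbenv, 0)) != 0)
            goto err;
        if ((ret = dbp->open(dbp, NULL, "data.db", NULL, DB_BTREE, DB_CREATE, 0644)) != 0) {
            (void)dbp->close(dbp, 0);
            goto err;
        }

        *dbenvp = dbenv;
        *dbpp = dbp;
        return 0;

    err:
        (void)dbenv->close(dbenv, 0);
        return ret;
    }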

  • Is there is any other way which works same as berkeley db...

    hi every one,
    What I am trying to do: a web page of our web site has a news section, and all of that news changes every day, so we are trying to build a database with the help of Berkeley DB so that searching our web site becomes fast.
    We will flush the updated data into the database every day, for the news only.
    As we know, Berkeley DB is one of the fastest methods to do these kinds of things,
    but my boss is telling me to find out another method to do this,
    so please guide me.
    thanks & Regards
    AT

    Hi,
    This is still a little vague to me but here is a suggestion:
    I suggest you read the getting started guides, starting with Berkeley DB Java Edition and possibly Berkeley DB, and investigate whether you think this is a good fit based on your requirements.
    Are you writing your application in Java?
    http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/applicationoverview.html
    The above is the Getting Started Guide for Berkeley DB Java Edition. Below is a pointer to the documentation set:
    http://www.oracle.com/technology/documentation/berkeley-db/je/index.html
    Ron

  • Berkeley DB with C++

    I am a newcomer to Berkeley DB. Could anyone give some examples of how to start with Berkeley DB? Thanks. My e-mail is: [email protected]

    Hi Richard,
    You will find starting guides on Berkeley DB on the documentation page:
    http://www.oracle.com/technology/documentation/berkeley-db/db/index.html
    Also, in the directory where you built/installed/unzipped Berkeley DB, there are a couple of directories in which you can find samples (examples_c, examples_cxx, examples_java).
    Information on how to build Berkeley DB and run the examples is found at the following link:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/build_win/intro.html
    Regards,
    Andrei Costache
    Oracle Support Services
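    Until you get to the bundled examples, a minimal standalone sketch using the C API (which the C++ API mirrors closely through the Db/DbEnv classes) looks like this; the file name and record contents are just illustrations:

    #include <stdio.h>
    #include <string.h>
    #include <db.h>

    /* Minimal sketch: create a B-tree database, store one record, read it back. */
    int main(void)
    {
        DB *dbp;
        DBT key, data;
        int ret;

        if ((ret = db_create(&dbp, NULL, 0)) != 0)
            return 1;
        if ((ret = dbp->open(dbp, NULL, "hello.db", NULL, DB_BTREE, DB_CREATE, 0664)) != 0)
            goto done;

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = "greeting";
        key.size = sizeof("greeting");
        data.data = "hello, Berkeley DB";
        data.size = sizeof("hello, Berkeley DB");

        if ((ret = dbp->put(dbp, NULL, &key, &data, 0)) != 0)
            goto done;

        memset(&data, 0, sizeof(data));
        if ((ret = dbp->get(dbp, NULL, &key, &data, 0)) == 0)
            printf("retrieved: %s\n", (char *)data.data);

    done:
        (void)dbp->close(dbp, 0);
        return ret == 0 ? 0 : 1;
    }

    The DB->open call above uses the 4.1-and-later signature that takes a transaction argument (NULL here); compile and link against libdb, for example: cc hello.c -ldb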

  • Problem sqlite+berkeley PANIC: fatal region error detected.

    Excuse my English, and excuse me if this is the wrong place to explain the problem.
    I'm evaluating replacing SQLite with Berkeley DB; while testing stability against involuntary program interruptions I have encountered the following error.
    I trust Berkeley DB, knowing its performance and stability with Subversion, but I have doubts about the SQLite API bridge.
    I'm testing the Berkeley DB database using the Berkeley DB SQLite API, and I wrote a small test program:
    /* Open database. */
    sqlite3 *db;
    sqlite3_open("data/basedatos.db", &db);
    sqlite3_exec(db, "CREATE TABLE [test] ([key] INTEGER, [dat] varchar(64), PRIMARY KEY ([key]))", NULL, 0, NULL);
    err_code = SQLITE_BUSY;
    while (err_code != SQLITE_OK) {
         sqlite3_exec( db, "delete from test", NULL, 0, NULL );
         err_code = sqlite3_errcode( db );
    }
    sqlite3_exec( db, "BEGIN", NULL, 0, NULL );
    for( int i = 0; i < _numCartones; i++ ) {
         char buf[1024];
         sprintf_s( buf, sizeof(buf), "insert into test( key, dat) values ( %d, 'test%d' )", i, i );
         sqlite3_exec( db, buf, NULL, 0, NULL );
    }
    sqlite3_exec( db, "COMMIT", NULL, 0, NULL );
    sqlite3_close(db);
    I launched the program and it inserted about 150000 records in 17 seconds. Perfect!
    This created the file basedatos.db and a basedatos.db-journal subdirectory with the files: log.0000000016, __db.001, __db.002, __db.003, __db.004, __db.005, __db.006 and __db.register.
    I opened it with the dbsql utility to check:
    c: dbsql basedatos.db
    select count(*) from test;
    150000          ← Ok.
    Without closing dbsql, I run the test program again and it gets stuck in the call:
    sqlite3_exec( db, "delete from test", NULL, 0, NULL );
    When I close dbsql, the "delete from" is automatically released and the test program again inserts 150,000 records.
    While it is inserting the 150,000 records, I run dbsql again:
    c: dbsql basedatos.db
    select count(*) from test; [WAIT]
    and the select count(*) remains blocked until the test program finishes, which is normal locking. Once it finishes, the select responds with 150,000:
    150000          ← Ok.
    Without closing dbsql, I run the test program again and it gets stuck in the call:
    sqlite3_exec( db, "delete from test", NULL, 0, NULL );
    When I close dbsql, the "delete from" is automatically released and the test program again inserts 150,000 records.
    While the test is inserting, I rerun:
    c: dbsql basedatos.db
    select count(*) from test;
    Error: database disk image is malformed
    and my test program reports: PANIC: fatal region error detected; run recovery.
    Reviewing the directory, the only files are: badatos.db, log.0000000031, log.0000000032, log.0000000033, log.0000000034, log.0000000035, log.0000000036, __db.register.
    Where are the __db*.* files?

    I had accidentally opened the dbsql.exe program while doing data-insertion speed tests.
    In a shell I ran a select count(*) and realized it was blocked waiting for the COMMIT to release it, which is normal during a BEGIN/COMMIT.
    In one test the database became corrupt, so I reduced the test software and simplified the test to reproduce the problem.
    Today I repeated the test and the situation has changed:
    1) Run the test (all OK, 150000 entries inserted in 18 seconds).
    2) Run dbsql and do a select count(*) (all OK).
    3) With dbsql still open, run the test and it stays locked on the DELETE FROM.
    4) I close dbsql and the DELETE FROM is not released as it was yesterday. I repeat this several times and get the same behaviour.
    I moved the BEGIN/COMMIT in the test code so that the "delete from ..." runs inside the transaction:
    sqlite3_exec( db, "BEGIN", NULL, 0, NULL );
    err_code = SQLITE_BUSY;
    while (err_code != SQLITE_OK) {
         sqlite3_exec( db, "delete from test", NULL, 0, NULL );
         err_code = sqlite3_errcode( db );
    }
    for( int i = 0; i < _numCartones; i++ ) {
         ...
    }
    I repeat the tests:
    1) Run the test, everything OK in 25 seconds. While the test is inserting, I run dbsql and do a select count(*), which remains locked until the test ends. Everything OK, 150000 records.
    2) With dbsql still open, run the test again and it stays locked on the delete until I close the dbsql program.
    3) I close dbsql and, a few seconds after the lock on the delete is released, the test gets the "PANIC ..." error like yesterday. I repeat this several times and the behaviour is the same, except that the db files do not disappear.
    If I do not run dbsql, the test runs multiple times without problems.
    Could the problem be dbsql and simultaneous access to the database?
    I'm going to migrate a production application from SQLite to Berkeley DB, which is why I am testing.
    I have confidence in the performance of Berkeley DB and I know how well it works with Subversion, but that Subversion server runs on a machine protected with an uninterruptible power supply.
    If I avoid using dbsql while my test software is running, do I have the theoretical assurance that operation will be correct when using the Berkeley DB SQLite layer, especially with unexpected power-offs of the machine?
    Thanks again for your help
